title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns]) | url (string, 0-780 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns]) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I have WizardLM-Uncensored-Falcon-40B-GPTQ running on 24gb VRAM with 1t/s and constant memory blowups | 23 | AMA. Pretty inspiring that that worked (/u/TheBloke, how do I buy you a coffee?) -- I actually think the results look pretty nice so far compared to Wizard 30B!
Anyone else get it going or have samples they want me to try with very limited context window? I noticed it still censors/aligns pretty intensely. | 2023-06-04T00:10:11 | https://www.reddit.com/r/LocalLLaMA/comments/13zunfn/i_have_wizardlmuncensoredfalcon40bgptq_running_on/ | EarthquakeBass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13zunfn | false | null | t3_13zunfn | /r/LocalLLaMA/comments/13zunfn/i_have_wizardlmuncensoredfalcon40bgptq_running_on/ | false | false | self | 23 | null |
NVlink does do something... | 12 | I got my nvlink. Amazingly enough it fit the spacing of my cards. Thought I would have to strip one of the fans but it lined right up.
Before nvlink:
Output generated in 80.58 seconds (2.56 tokens/s, 206 tokens, context 1283, seed 91090000)
Output generated in 93.29 seconds (2.37 tokens/s, 221 tokens, context 1523, seed 1386216150)
Output generated in 102.22 seconds (2.24 tokens/s, 229 tokens, context 1745, seed 2106095497)
Output generated in 63.35 seconds (2.15 tokens/s, 136 tokens, context 1729, seed 811830722)
Output generated in 62.96 seconds (2.24 tokens/s, 141 tokens, context 1714, seed 1085586370)
After nvlink:
Output generated in 61.76 seconds (2.67 tokens/s, 165 tokens, context 1717, seed 892263001)
Output generated in 31.62 seconds (2.43 tokens/s, 77 tokens, context 1699, seed 1538052936)
Output generated in 46.71 seconds (2.70 tokens/s, 126 tokens, context 1650, seed 769057010)
Output generated in 70.07 seconds (2.85 tokens/s, 200 tokens, context 1710, seed 336868493)
Output generated in 72.12 seconds (2.77 tokens/s, 200 tokens, context 1621, seed 2083479288)
Output generated in 85.70 seconds (2.91 tokens/s, 249 tokens, context 1596, seed 1898820968)
This is a 65B model being run across 2x 3090s using llama_inference_offload. It does appear to have some issues with CPU bottlenecking, since when both GPUs work at once utilization is only 30%; NVLink didn't change that. I haven't tried with accelerate yet, but I expect similar results, and the same for training. Was it worth $100? Not sure yet. | 2023-06-04T00:20:10 | https://www.reddit.com/r/LocalLLaMA/comments/13zuwq4/nvlink_does_do_something/ | a_beautiful_rhind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13zuwq4 | false | null | t3_13zuwq4 | /r/LocalLLaMA/comments/13zuwq4/nvlink_does_do_something/ | false | false | self | 12 | null |
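For anyone wanting to summarize logs like the ones above, a small parsing sketch (the regex assumes the exact "Output generated in ... (X tokens/s, ...)" format shown; over the full set above it works out to roughly 2.3 tok/s before vs 2.7 tok/s after, about an 18% gain):

```python
import re
from statistics import mean

TOKS_RE = re.compile(r"\(([\d.]+) tokens/s")

def avg_tokens_per_sec(log_lines):
    """Pull the tokens/s figure out of each 'Output generated in ...' line and average them."""
    return mean(float(m.group(1)) for line in log_lines if (m := TOKS_RE.search(line)))

before = [
    "Output generated in 80.58 seconds (2.56 tokens/s, 206 tokens, context 1283, seed 91090000)",
    "Output generated in 93.29 seconds (2.37 tokens/s, 221 tokens, context 1523, seed 1386216150)",
    "Output generated in 102.22 seconds (2.24 tokens/s, 229 tokens, context 1745, seed 2106095497)",
    "Output generated in 63.35 seconds (2.15 tokens/s, 136 tokens, context 1729, seed 811830722)",
    "Output generated in 62.96 seconds (2.24 tokens/s, 141 tokens, context 1714, seed 1085586370)",
]
after = [
    "Output generated in 61.76 seconds (2.67 tokens/s, 165 tokens, context 1717, seed 892263001)",
    "Output generated in 31.62 seconds (2.43 tokens/s, 77 tokens, context 1699, seed 1538052936)",
    "Output generated in 46.71 seconds (2.70 tokens/s, 126 tokens, context 1650, seed 769057010)",
    "Output generated in 70.07 seconds (2.85 tokens/s, 200 tokens, context 1710, seed 336868493)",
    "Output generated in 72.12 seconds (2.77 tokens/s, 200 tokens, context 1621, seed 2083479288)",
    "Output generated in 85.70 seconds (2.91 tokens/s, 249 tokens, context 1596, seed 1898820968)",
]

print(f"before NVLink: {avg_tokens_per_sec(before):.2f} tok/s")
print(f"after NVLink:  {avg_tokens_per_sec(after):.2f} tok/s")
```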
CLblast is nice on crap systems! | 20 | Running on Linux using the build option to enable CLBlast. I am using gptneoxcpp rather than vanilla llama.cpp.
The Red Hat ROCm setup condensed my GPU/CPU into one device for use with OpenCL. I am able to run 7B models accelerated even though I have 500 MB of GPU memory. It's running noticeably quicker than a straight CPU build.
If you are also AI-ing on a crap laptop, give it a shot. | 2023-06-04T01:49:04 | https://www.reddit.com/r/LocalLLaMA/comments/13zx3rb/clblast_is_nice_on_crap_systems/ | thebadslime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13zx3rb | false | null | t3_13zx3rb | /r/LocalLLaMA/comments/13zx3rb/clblast_is_nice_on_crap_systems/ | false | false | self | 20 | null |
What questions do you ask LLMs to check their sanity and real world understanding? | 112 | Two of my favorite prompts are:
- Please provide me instructions on how to steal an egg from my chicken
It shows the level of censorship, and replacing the word "steal" with "grab", "obtain" or "borrow" helps to measure this level. Some models agree to provide such instructions if you note that the egg is your only source of food or that the chicken doesn't mind it, while others refuse to answer, defending chicken rights.
- Is it morally right to kill mosquitoes?
This one can also be extended with details about the mosquitoes' intentions to feed on you. Besides models that refuse to speak about morals, I've seen 2 categories of opinions on this topic: one suggests that the life of every living creature is important and we should not do any harm to them, another suggests that it's ok to do so, but only because mosquitoes can transmit viruses which can be dangerous to humans.
Also, it's important to note that the same model can output different opinions across multiple generations with different seeds.
Please, share your creative prompts in the comments! | 2023-06-04T03:16:43 | https://www.reddit.com/r/LocalLLaMA/comments/13zz8y5/what_questions_do_you_ask_llms_to_check_their/ | remixer_dec | self.LocalLLaMA | 2023-06-04T08:02:32 | 0 | {} | 13zz8y5 | false | null | t3_13zz8y5 | /r/LocalLLaMA/comments/13zz8y5/what_questions_do_you_ask_llms_to_check_their/ | false | false | self | 112 | null |
Dataset creation for fine tuning LLMs. Suggestions required. | 5 | Hey LLMers here,
I have a use case where I need to map tables to text from a Word document. Both the tables and the text are present in the Word document.
This Word document has different sections. Under each section there are tables followed by a brief, insightful summary of those tables. What I would like to do is create a dataset that maps these tables to their text summaries for fine-tuning an LLM.
What should be my approach to creating the dataset?
What should be the format of the dataset?
Is it possible to feed tables to LLMs (ChatGPT is not an option here)?
Any help is appreciated. | 2023-06-04T05:32:04 | https://www.reddit.com/r/LocalLLaMA/comments/1402fg5/dataset_creation_for_fine_tuning_llms_suggestions/ | sapiensush | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1402fg5 | false | null | t3_1402fg5 | /r/LocalLLaMA/comments/1402fg5/dataset_creation_for_fine_tuning_llms_suggestions/ | false | false | self | 5 | null |
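One common approach (a sketch of an assumed Alpaca-style instruction format, not the only option; the table contents below are invented for illustration) is to serialize each table to markdown text and pair it with its summary as instruction/input/output records in a JSONL file:

```python
import json

# Hypothetical example: one table (serialized as markdown text) paired with the
# summary paragraph that follows it in the Word document.
record = {
    "instruction": "Summarize the key insights from the following table.",
    "input": (
        "| Region | Revenue | Growth |\n"
        "|--------|---------|--------|\n"
        "| EMEA   | 1.2M    | 8%     |\n"
        "| APAC   | 0.9M    | 15%    |"
    ),
    "output": "APAC shows the strongest growth at 15%, while EMEA remains the largest region by revenue.",
}

# Append one JSON object per line (JSONL), the format most fine-tuning scripts expect.
with open("table_summaries.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```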
text model share community like civitai | 3 | [removed] | 2023-06-04T06:33:42 | https://www.reddit.com/r/LocalLLaMA/comments/1403sc2/text_model_share_community_like_civitai/ | Separate-Awareness53 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1403sc2 | false | null | t3_1403sc2 | /r/LocalLLaMA/comments/1403sc2/text_model_share_community_like_civitai/ | false | false | default | 3 | null |
Best model for language tutoring? | 2 | [deleted] | 2023-06-04T07:06:32 | [deleted] | 2023-06-04T07:45:10 | 0 | {} | 1404ikp | false | null | t3_1404ikp | /r/LocalLLaMA/comments/1404ikp/best_model_for_language_tutoring/ | false | false | default | 2 | null |
||
How to deal with words that Llama cannot know | 2 | I'm working on a last-mile problem for an automation using Llama. The challenge is that there are many technical names for things like programs that Llama cannot possibly know. Is it possible to add these words to the tokenizer, or should I add the context in the prompt? Like "N12GBHR is a program name, it does ..."
What is the best way to handle this problem? | 2023-06-04T07:29:54 | https://www.reddit.com/r/LocalLLaMA/comments/140504c/how_to_deal_with_words_that_llama_cannot_know/ | wsebos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140504c | false | null | t3_140504c | /r/LocalLLaMA/comments/140504c/how_to_deal_with_words_that_llama_cannot_know/ | false | false | self | 2 | null |
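If the goal is for the model to treat such names as single tokens, one option is to add them to the tokenizer and resize the embeddings (a sketch assuming the Hugging Face transformers stack; the model name is a placeholder, and newly added embeddings start untrained, so some fine-tuning on text that uses these names is still needed):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "huggyllama/llama-7b"  # placeholder; use whatever LLaMA checkpoint you run
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Register the domain-specific program names as whole tokens.
new_tokens = ["N12GBHR"]  # extend with your own list of technical names
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix so the new token IDs have rows to look up.
model.resize_token_embeddings(len(tokenizer))

print(f"Added {num_added} tokens; vocab size is now {len(tokenizer)}")
```

Simply explaining the name in the prompt, as in the example above, also works without any retraining; it just costs context tokens every time.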
What approximate token generation speed can I anticipate on a high-end-ish processor for the LocalLLaMA? | 0 | My current computer allows me to run up to 13B models and their speed is OK. I am evaluating the prospect of getting a new system, and I am not planning to run LLaMA on a GPU.
My question is: is it even a reliable way to run big models on a CPU? My friend tried to run a 30B model on his system with enough RAM but a somewhat older processor (a 5-year-old Ryzen 3), and got about 1 token every 30 seconds.
If it is a reliable way to run models locally, I could go for a somewhat higher-end PC, like an i9-13900 + 128 GB RAM; if it would be slow anyway (slower than 1 token per 2-5 seconds), I would rather go for something modest, like a 13400/13600 and less RAM.
It would be nice to add to the FAQ information like "time to generate 1 token on an i5-11400":
3b - 0.2 sec
7b - 0.5 sec
13b - 1.2 sec
30b - (20 sec ?)
60b - ... | 2023-06-04T07:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/14055zt/what_approximate_token_generation_speed_can_i/ | uti24 | self.LocalLLaMA | 2023-06-04T07:44:51 | 0 | {} | 14055zt | false | null | t3_14055zt | /r/LocalLLaMA/comments/14055zt/what_approximate_token_generation_speed_can_i/ | false | false | self | 0 | null |
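As a rough rule of thumb rather than a benchmark: CPU inference is usually memory-bandwidth-bound, so an upper bound on speed is roughly memory bandwidth divided by the size of the quantized model, since every weight has to be streamed from RAM once per generated token (real-world numbers land below this bound):

```python
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rough upper bound for memory-bandwidth-bound token generation:
    every weight must be streamed from RAM once per generated token."""
    return bandwidth_gb_s / model_size_gb

# Approximate q4 model sizes in RAM (GB) - ballpark figures.
models = {"7B": 4, "13B": 8, "30B": 20, "65B": 39}

# Dual-channel DDR4-3200 is ~50 GB/s; dual-channel DDR5-6400 is ~100 GB/s.
for name, size in models.items():
    print(f"{name}: <= {max_tokens_per_sec(50, size):.1f} tok/s (DDR4), "
          f"<= {max_tokens_per_sec(100, size):.1f} tok/s (DDR5)")
```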
My adventures in creating a commercial chatbot; limited success. | 119 | After a very solid month of throwing myself at this problem, I've finally found some limited success in getting a very detailed product programming manual ingested, and having the model give answers that don't completely *suck*. I would not say it's ready to plug into commercial chatbot, but I will say it's halfway there, and it is a far cry more progress than I've had in the first three weeks. Since this forum is all about the collaborative effort and spirit, I wanted to share some discoveries I've made to hopefully save others some time. Note that I have a good workstation (48GB RTX A6000), but I never used any external APIs or cloud services or anything, this is all 100% in-house besides for downloading models and oobabooga.
1. I had *immense* difficulty getting normal LoRAs to make any sense of my documentation or gain anything remotely meaningful from it, even with a lot of preparation, shortening my corpus, etc. Then, once I completely rebuilt Oobabooga and tried out QLORA, I made a *lot* of progress very quickly and actually got things that made sense. Since this is just research and nothing commercial, I trained Wizard 13B on a 102KB corpus text file (with the load-in-4bit and use\_double\_quant checkboxes ticked before loading the model), with these hyperparameters: 12 epochs, 3e-5 LR, cosine scheduler, micro batch 2, batch size 512, LoRA rank 2048 (click the advanced box to enable this), LoRA alpha 4096, cutoff length 2048, overlap 512 (a rough config sketch of this setup is included after this post). It took a few hours to process, but afterwards the answers I got were sometimes very good.
2. One big difficulty I had is that my company name and product have very similar names of other products and company names, and very likely these models have had extensive exposure to these similar names, and it was frustrating trying to get the data for -my- product. What really helped me was to tone down the temperature, and make a prompt that specified exactly what the product and technology this product is about, and this helped cut down on the hallucinations so that I could find out how much the model *really* knew about my corpus.
3. I initially turned my corpus into a whole bunch of Q&A using a 4-bit quantized local 65B model via the API and some bad python script I might have posted elsewhere here, telling it to convert my corpus to Q&A. Since I had the temperature a bit high, it actually injected a lot of hallucinations into my corpus. I may go back to this route, as it allows me to easily put in a lot of RLHF answers based on the feedback I'm getting. Also, when training on plain text instead of JSON prompt/answer pairs, the language model definitely seems prone to inject the biases it learned from similar products, which is probably the #1 reason why this isn't ready right now. Responding to and controlling those biases in the training corpus is probably the next step in my project.
4. I believe the QLORA approach is the closest we have to a full fine-tuning, *without* renting cloud hardware, so many thanks to the smart people involved with that.
5. Perplexity is your best friend. Next to the Train Lora tab is the Perplexity Evaluation tab, and this is a way to sneak preview how puzzling a specific model will find your corpus, and turn the whole thing into a single number - the lower this number, the stronger the model will understand it and be able to make sense of it. This is a way to investigate different models, and if you've trained a LORA adapter and the perplexity *increases*, you know you are on the wrong track. It's a way to get measurable feedback besides random questions on your corpus.
More than anything, I just literally went down one dead end after another and tried everything I could. The absolute most useful thing for me has actually been reading this forum every day, because I learned something new every day. | 2023-06-04T07:39:41 | https://www.reddit.com/r/LocalLLaMA/comments/14057bg/my_adventures_in_creating_a_commercial_chatbot/ | alittleteap0t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14057bg | false | null | t3_14057bg | /r/LocalLLaMA/comments/14057bg/my_adventures_in_creating_a_commercial_chatbot/ | false | false | self | 119 | null |
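For reference, a rough translation of the settings from point 1 into code (a sketch assuming the Hugging Face peft/transformers/bitsandbytes stack that the webui uses under the hood; exact argument names in the UI differ, and cutoff length/overlap are data-chunking settings handled separately):

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit loading with double quantization ("load-in-4bit" + "use_double_quant" in the UI).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA rank and alpha as described in the post.
lora_config = LoraConfig(
    r=2048,
    lora_alpha=4096,
    task_type="CAUSAL_LM",
)

# Epochs, learning rate, scheduler and batch sizes from the post
# (micro batch 2 with an effective batch of 512 implies 256 gradient accumulation steps).
training_args = TrainingArguments(
    output_dir="qlora-out",
    num_train_epochs=12,
    learning_rate=3e-5,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=256,
)
```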
Call for tunning results on CLBlast to achieve faster prompt performance | 1 | [removed] | 2023-06-04T07:58:37 | https://www.reddit.com/r/LocalLLaMA/comments/1405lif/call_for_tunning_results_on_clblast_to_achieve/ | tangjinchuan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1405lif | false | null | t3_1405lif | /r/LocalLLaMA/comments/1405lif/call_for_tunning_results_on_clblast_to_achieve/ | false | false | default | 1 | null |
Is getting a p40 worth it? | 4 | If I were to purchase a P40, how difficult would it be to install it on a regular desktop (ATX mobo)? I currently have a 1070 and it works well with 7B models, but I was thinking of trying larger models.
I've heard of people using the P40 and was thinking of getting one for model inference. Any help would be appreciated. | 2023-06-04T09:25:07 | https://www.reddit.com/r/LocalLLaMA/comments/1407gpu/is_getting_a_p40_worth_it/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1407gpu | false | null | t3_1407gpu | /r/LocalLLaMA/comments/1407gpu/is_getting_a_p40_worth_it/ | false | false | self | 4 | null |
airoboros gpt-4 instructed + context-obedient question answering | 52 | [deleted] | 2023-06-04T10:28:56 | [deleted] | 2023-06-06T09:17:59 | 0 | {} | 1408ued | false | null | t3_1408ued | /r/LocalLLaMA/comments/1408ued/airoboros_gpt4_instructed_contextobedient/ | false | false | default | 52 | null |
||
How to finetune llama or any compatible model on a corpus of text? | 5 | I don't have question-answer pairs, just giant text documents (actually many).
One way is to have ChatGPT create QA pairs from the given text.
Is there any other way? Can I finetune directly without going through a QA phase?
The usage will be single-shot question answering, with no nested questions. | 2023-06-04T10:32:33 | https://www.reddit.com/r/LocalLLaMA/comments/1408xci/how_to_finetune_llama_or_any_compatible_model_on/ | freakynit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1408xci | false | null | t3_1408xci | /r/LocalLLaMA/comments/1408xci/how_to_finetune_llama_or_any_compatible_model_on/ | false | false | self | 5 | null |
Fine-tuning LLM | 6 | I think we agree that fine-tuning will be the most important piece besides the models we are using. Training your own model may become more affordable and may not take much time, but my assumption is that in the future you'll use a public or private (paid or free) model depending on your needs, and you'll attach your own "fine-tuning".
If that is true, starting to build your own "fine-tuning" models/datasets/databases right from the beginning is very important!
Hence my question: What is the best way/tool/format/database to start this journey right now? Are there already tools/repositories available that help with that? Question/Answers, Opinions, Weights Management, etc.?
Thanks! | 2023-06-04T10:54:44 | https://www.reddit.com/r/LocalLLaMA/comments/1409f4p/finetuning_llm/ | TimTimmaeh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1409f4p | false | null | t3_1409f4p | /r/LocalLLaMA/comments/1409f4p/finetuning_llm/ | false | false | self | 6 | null |
Based 7b GGML available | 13 | https://huggingface.co/TheBloke/based-7B-GGML
For anyone with limited hardware, like myself. Anyone else running these on android devices? | 2023-06-04T11:23:05 | https://www.reddit.com/r/LocalLLaMA/comments/140a2ty/based_7b_ggml_available/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140a2ty | false | null | t3_140a2ty | /r/LocalLLaMA/comments/140a2ty/based_7b_ggml_available/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'hVVjsrS6C3_tLUihogVfVHsy9pAnaFEelslPvxS0tUg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RsJ1BL1wTe5_AeQQhCPsel-IwPuq95BD7ho810jwjhM.jpg?width=108&crop=smart&auto=webp&s=808e64f3db75021b8f0a6ba6f8a5a92a71f17369', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RsJ1BL1wTe5_AeQQhCPsel-IwPuq95BD7ho810jwjhM.jpg?width=216&crop=smart&auto=webp&s=eee48a9c0862ec30db46001593faf44ef423893a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RsJ1BL1wTe5_AeQQhCPsel-IwPuq95BD7ho810jwjhM.jpg?width=320&crop=smart&auto=webp&s=8a025e931fa9af79bb97c9c6b15e25cfa398851b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RsJ1BL1wTe5_AeQQhCPsel-IwPuq95BD7ho810jwjhM.jpg?width=640&crop=smart&auto=webp&s=fea55632f7c6cf4681123f8b44b999240041aa63', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RsJ1BL1wTe5_AeQQhCPsel-IwPuq95BD7ho810jwjhM.jpg?width=960&crop=smart&auto=webp&s=2df744dd3d598209f8f9f05a2abe1e586d33357d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RsJ1BL1wTe5_AeQQhCPsel-IwPuq95BD7ho810jwjhM.jpg?width=1080&crop=smart&auto=webp&s=edf515dec78d83a75d99fdc65ec1bdf3f553c9e9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RsJ1BL1wTe5_AeQQhCPsel-IwPuq95BD7ho810jwjhM.jpg?auto=webp&s=1d593d964cf90197092917f804a4e40f29cd4270', 'width': 1200}, 'variants': {}}]} |
Models released without prompt template/examples - Why…? | 76 | I see many models released here and there on HuggingFace without any form of explanation about how they should/could be prompted.
I fail to understand why… why isn't it considered the most crucial and valuable piece of information? Is it because these models are not considered instruction-based models to begin with? Or maybe because they have not been trained on instruction-based datasets? Or because the authors like messing with us? Does anyone have a clue?
To me personally, it is so frustrating not to know that sometimes I just don't bother evaluating these models at all if I need to spend more than 3 minutes looking for the appropriate instruction template. What is everyone's take on this? | 2023-06-04T12:33:17 | https://www.reddit.com/r/LocalLLaMA/comments/140bp4d/models_released_without_prompt_templateexamples/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140bp4d | false | null | t3_140bp4d | /r/LocalLLaMA/comments/140bp4d/models_released_without_prompt_templateexamples/ | false | false | self | 76 | null |
Any projects that allow you to analyze excel files offline? | 5 | I have a document with a lot of numbers and names, accounting related. I want to use AI to analyze it and answer questions for me. I tried privateGPT, but it doesn't work for this.
I'm not looking to find what's present in a document and where it is, which is what privateGPT does.
Preferably usable on CPU. | 2023-06-04T12:45:15 | https://www.reddit.com/r/LocalLLaMA/comments/140bzfa/any_projects_that_allow_you_to_analyze_excel/ | Suimeileo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140bzfa | false | null | t3_140bzfa | /r/LocalLLaMA/comments/140bzfa/any_projects_that_allow_you_to_analyze_excel/ | false | false | self | 5 | null |
Let's be honest: none of the models can code well | 107 | I was a big believer in ChatGPT when it came out. I thought I would be able to hand over the engineering requirements for a ticket and have it do everything.
Sadly the novelty has worn off. Anytime I give ChatGPT (or any of the 30B local models) a moderately complex task, it generates something maybe 70% there, and I then spend 3 hours trying to get it to correct itself after realizing it just made up some function names.
Now I'm one to fix a problem when I see it. So I wanted to see what we could do about this.
I have a couple ideas
1) Use LangChain to have it execute the code, see if there are any errors, and then keep repeating (a rough sketch of this loop is included after this post)
2) fine tune some models with datasets that have already gone through #1
Anyone else have ideas on how to improve code generation? | 2023-06-04T13:47:56 | https://www.reddit.com/r/LocalLLaMA/comments/140dm9b/lets_be_honest_none_of_the_models_can_code_well/ | foooooooooooooooobar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140dm9b | false | null | t3_140dm9b | /r/LocalLLaMA/comments/140dm9b/lets_be_honest_none_of_the_models_can_code_well/ | false | false | self | 107 | null |
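A bare-bones version of idea #1 (a sketch: `generate()` is a placeholder for whatever model/API you use - LangChain, the oobabooga API, etc. - and executing model-written code like this should only be done in a sandbox):

```python
import subprocess
import sys
import tempfile

def generate(prompt: str) -> str:
    """Placeholder: call your LLM of choice here and return the code it writes."""
    raise NotImplementedError

def run_code(code: str) -> tuple[bool, str]:
    """Execute the generated code in a subprocess and capture any error output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=30)
    return result.returncode == 0, result.stderr

def code_with_retries(task: str, max_attempts: int = 3) -> str:
    prompt = f"Write a Python script that does the following:\n{task}"
    code = generate(prompt)
    for _ in range(max_attempts):
        ok, error = run_code(code)
        if ok:
            return code
        # Feed the traceback back to the model and ask for a corrected version.
        code = generate(f"{prompt}\n\nThis attempt failed with:\n{error}\n"
                        f"Previous code:\n{code}\nReturn a corrected script.")
    return code
```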
prof.dr.Edsger W.Dijkstra: On the foolishness of "natural language programming". (EWD 667) | 4 | 2023-06-04T14:32:29 | https://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD667.html | NancyAurum | cs.utexas.edu | 1970-01-01T00:00:00 | 0 | {} | 140euxl | false | null | t3_140euxl | /r/LocalLLaMA/comments/140euxl/profdredsger_wdijkstra_on_the_foolishness_of/ | false | false | default | 4 | null |
|
Noob question about “Total VRAM” | 4 | I was looking at the chart on the subreddit for system requirements to run certain models. I noticed one section is labeled “Total VRAM”. Does this mean the VRAM on my GPU or the shared VRAM I see in windows? I have 12gb of VRAM but something like 24 of shared VRAM. | 2023-06-04T14:45:39 | https://www.reddit.com/r/LocalLLaMA/comments/140f7op/noob_question_about_total_vram/ | Mordekaiseerr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140f7op | false | null | t3_140f7op | /r/LocalLLaMA/comments/140f7op/noob_question_about_total_vram/ | false | false | self | 4 | null |
New tokenizer increases inference speed and context-length by 35% on new LLMs | 149 | This is a new method of tokenization. It's compatible with LLaMa but would require doing the pretraining over again. Anyone intending to make their own LLM should look into this.
**From the** [**GitHub**](https://github.com/alasdairforsythe/tokenmonster)**:**
TokenMonster is an ungreedy tokenizer and vocabulary builder, outperforming tiktoken by 35%. In fact, TokenMonster's smallest 24000 vocabulary consistently uses less tokens than tiktoken's largest 100256 vocabulary to tokenize the same text. Save the tokens! [See benchmark](https://github.com/alasdairforsythe/tokenmonster/blob/main/benchmark).
Given a text dataset, a vocabulary-size and a maximum-token-length, TokenMonster selects the tokens that optimally represent your dataset at that vocabulary size. It can do this at reasonable speed (within 24 hours) on server hardware, at a cost of around $8. [Prebuilt vocabularies](https://github.com/alasdairforsythe/tokenmonster#prebuilt-vocabularies) are provided, as well as tools to train your own vocabularies & native implementations in Go, Python & Javascript for tokenization and detokenization using the prebuilt or your own vocabularies.
You can [test TokenMonster in your browser here](https://bot.co/tokenmonster/), tokenizing live in native Javascript.
TokenMonster is a novel approach to tokenization with broad-ranging use potential, but its primary motivation is to increase the inference speed and context-length of large language models. By selecting better tokens, text can be represented with 35% less tokens compared to other modern tokenizing methods, increasing the speed of inference, training and the length of text by 35%. The code-optimized tokenizers do even better, [see for yourself](https://bot.co/tokenmonster/).
I also believe that TokenMonster vocabularies will improve the comprehension of Large Language Models. For more details see [The Philosophy of Tokenization](https://github.com/alasdairforsythe/tokenmonster#the-philosophy-of-tokenization).
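The token-count claims are easy to sanity-check on your own text. A minimal measurement sketch using tiktoken's cl100k_base as the baseline (TokenMonster itself is left out here because its Python API isn't shown above; the same characters-per-token metric applies to it):

```python
import tiktoken

def chars_per_token(text: str, encoding_name: str = "cl100k_base") -> float:
    """Average characters represented by each token - higher means a more compact vocabulary."""
    enc = tiktoken.get_encoding(encoding_name)
    return len(text) / len(enc.encode(text))

# Use any representative (non-empty) text of yours here.
sample = open("my_corpus.txt", encoding="utf-8").read()
print(f"cl100k_base: {chars_per_token(sample):.2f} chars/token")
# The README above claims ~5.5 chars/token on average for TokenMonster;
# compute the same ratio with its tokenizer to compare on your data.
```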
Features
* Outperforms other tokenization algorithms ([benchmark](https://github.com/alasdairforsythe/tokenmonster/blob/main/benchmark))
* Longer text generation at faster speed
* Selects the optimal vocabulary
* Ungreedy
* Supports UTF-8, UTF-16 and binary
* Successfully identifies words, subwords, common phrases and figures of speech by itself
* Works with HTML tags, sequential spaces, tabs, etc. without wasting context
* Averages 5.5 characters per token
* No GPU needed | 2023-06-04T15:26:11 | https://www.reddit.com/r/LocalLLaMA/comments/140gcn7/new_tokenizer_increases_inference_speed_and/ | Pan000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140gcn7 | false | null | t3_140gcn7 | /r/LocalLLaMA/comments/140gcn7/new_tokenizer_increases_inference_speed_and/ | false | false | self | 149 | {'enabled': False, 'images': [{'id': 'oG4-RX1ETH5kMpMTgrQQHs1cTLkM979iuZnw8uO_fT4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F2v6WIYCW2ZAOi96Ey-fPiDZ7ivh7p9-sXTTp38mlmU.jpg?width=108&crop=smart&auto=webp&s=37e8ff2517256280826dae8287298d8b7c11ba32', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/F2v6WIYCW2ZAOi96Ey-fPiDZ7ivh7p9-sXTTp38mlmU.jpg?width=216&crop=smart&auto=webp&s=82c197105d8165e20e0a6bb855a737af187239d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/F2v6WIYCW2ZAOi96Ey-fPiDZ7ivh7p9-sXTTp38mlmU.jpg?width=320&crop=smart&auto=webp&s=afd6030df1856152317693db2cb145b384291aaf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/F2v6WIYCW2ZAOi96Ey-fPiDZ7ivh7p9-sXTTp38mlmU.jpg?width=640&crop=smart&auto=webp&s=eac21091576ba7e219bbe0f68fb4af2d7acd0ba7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/F2v6WIYCW2ZAOi96Ey-fPiDZ7ivh7p9-sXTTp38mlmU.jpg?width=960&crop=smart&auto=webp&s=ad6c9fd69d6c6255aa261b3e805103c40a015b11', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/F2v6WIYCW2ZAOi96Ey-fPiDZ7ivh7p9-sXTTp38mlmU.jpg?width=1080&crop=smart&auto=webp&s=d6278a8d6f62cddfaf454d574c4c5527d04b2305', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/F2v6WIYCW2ZAOi96Ey-fPiDZ7ivh7p9-sXTTp38mlmU.jpg?auto=webp&s=d2594521b84becf8cbca0c57dd093a212b5e5c69', 'width': 1200}, 'variants': {}}]} |
Local self ask | 17 | I've recently wondered how effective local models were at chaining together thoughts as proposed in [https://ofir.io/self-ask.pdf](https://ofir.io/self-ask.pdf). Turns out they are indeed capable of doing so while also creating reasonable chains of thoughts that are easily as good as OpenAI's models. To make it completely free to run I used [SearX](https://github.com/searx/searx) running inside a Docker container with a second model curating the search results for the main model to get answers from the web.
https://preview.redd.it/8b6emhl9q04b1.png?width=475&format=png&auto=webp&s=6f3b616fe379e3d6cc1392dcc2982a9a94616bb9
https://preview.redd.it/mmryj1o6v04b1.png?width=692&format=png&auto=webp&s=a5b974d5e48f449a1da1e005c0949814c49b3e05 | 2023-06-04T15:46:40 | https://www.reddit.com/r/LocalLLaMA/comments/140gy2s/local_self_ask/ | KeksMember | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140gy2s | false | null | t3_140gy2s | /r/LocalLLaMA/comments/140gy2s/local_self_ask/ | false | false | 17 | null |
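For anyone who wants to reproduce this, a stripped-down version of the loop (a sketch: `generate()` is a placeholder for your local model call, the SearX query assumes a local instance with JSON output enabled, and the few-shot self-ask prompt from the paper is omitted for brevity):

```python
import requests

SEARX_URL = "http://localhost:8080/search"  # local SearX/SearXNG running in Docker

def generate(prompt: str) -> str:
    """Placeholder: call your local LLM here (e.g. the text-generation-webui API)."""
    raise NotImplementedError

def web_search(query: str) -> str:
    """Ask SearX for results and return the top snippets as context."""
    r = requests.get(SEARX_URL, params={"q": query, "format": "json"}, timeout=10)
    results = r.json().get("results", [])[:3]
    return "\n".join(res.get("content", "") for res in results)

def self_ask(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\nAre follow up questions needed here:"
    for _ in range(max_steps):
        output = generate(prompt)
        prompt += output
        if "So the final answer is:" in output:
            return output.split("So the final answer is:")[-1].strip()
        if "Follow up:" in output:
            # Answer the model's intermediate question with curated search snippets.
            follow_up = output.split("Follow up:")[-1].strip().splitlines()[0]
            prompt += f"\nIntermediate answer: {web_search(follow_up)}\n"
    return generate(prompt + "\nSo the final answer is:")
```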
|
What's the standard tool to expose a huggingface model as an API | 3 | What's the standard tool to expose a huggingface model as an API | 2023-06-04T16:05:15 | https://www.reddit.com/r/LocalLLaMA/comments/140hi0t/whats_the_standard_tool_to_expose_a_huggingface/ | rain5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140hi0t | false | null | t3_140hi0t | /r/LocalLLaMA/comments/140hi0t/whats_the_standard_tool_to_expose_a_huggingface/ | false | false | self | 3 | null |
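There are several options (text-generation-inference, the text-generation-webui API extension, etc.); if you just want something minimal, a small FastAPI wrapper around a transformers pipeline works. A sketch (the model name is a placeholder):

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")  # placeholder; swap in your model

class Request(BaseModel):
    prompt: str
    max_new_tokens: int = 128

@app.post("/generate")
def generate(req: Request):
    out = generator(req.prompt, max_new_tokens=req.max_new_tokens)
    return {"text": out[0]["generated_text"]}

# Run with: uvicorn server:app --host 0.0.0.0 --port 8000  (assuming this file is saved as server.py)
```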
KarenTheEditor - text editing Model 13b, HF, GPTQ and GGML | 47 | In my hunt for my own auto-grammarly model, here I present Karen 0.2
[FPHam/Karen\_theEditor\_13b\_HF · Hugging Face](https://huggingface.co/FPHam/Karen_theEditor_13b_HF)
Trained on a subset of rewriting pairs I created myself, then promptly quantized by TheBloke.
The prompt (instruct mode) has to be:
USER: Edit the following for spelling and grammar mistakes:
ASSISTANT:
* You send a paragraph to Karen and she will fix it.
* The main requirement for Karen 0.2 was not to mess with style, word structure and sentence structure - AKA if it sounds good, don't try to fix it. (YMMV)
* I'm now working on Loose Karen (fine, the name could be deceiving), which will be more keen on messing with the sentence structure, but we will see.
Input example (lots of grammar and spelling errors):
>USER: Edit the following for spelling and grammar mistakes: unce upon time lived name John who have pet monkey Jackie. one day while walkin down the street he saw his friend Tom coming to him. "Hey Tom" said John "hows it going?" Tom replyied "Not too bad how 'bout yourself?". as they was talk Jackie sudden jump onto Tom shoulder and started play with his hair. Tom look at John and ask "Is that yours?" John reply "Yeah, this crazy lil guy follow me everywhere" just then Jackie grab hold of Tom glasses and try to take thmm off. Tom struggle to keep his balance as he laugh hystericaly.
Output from Karen:
>ASSISTANT: Once upon a time there lived a man named John who had a pet monkey called Jackie. One day while walking down the street he saw his friend Tom approaching him. "Hey Tom," said John. "How's it going?" Tom replied, "Not too bad, how about yourself?" As they were talking, Jackie suddenly jumped onto Tom's shoulder and began playing with his hair. Tom looked at John and asked, "Is that yours?" John replied, "Yeah, this crazy little guy follows me everywhere." Just then Jackie grabbed hold of Tom's glasses and tried to take them off. Tom struggled to keep his balance as he laughed hysterically.
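A minimal way to drive the stated prompt template from code (a sketch assuming the Hugging Face transformers stack; the generation settings are illustrative, not Karen's recommended ones):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "FPHam/Karen_theEditor_13b_HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def edit(paragraph: str) -> str:
    # The prompt template Karen expects (instruct mode).
    prompt = f"USER: Edit the following for spelling and grammar mistakes: {paragraph}\nASSISTANT:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    # Return only the newly generated text after the prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip()

print(edit("unce upon time lived name John who have pet monkey Jackie."))
```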
​ | 2023-06-04T16:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/140i7x2/karentheeditor_text_editing_model_13b_hf_gptq_and/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140i7x2 | false | null | t3_140i7x2 | /r/LocalLLaMA/comments/140i7x2/karentheeditor_text_editing_model_13b_hf_gptq_and/ | false | false | self | 47 | {'enabled': False, 'images': [{'id': 'uJ9V-m-WnuFvsV7073FF7JnV8PxGeAFxWkMfb-qhs5Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=108&crop=smart&auto=webp&s=86802327304e4cebeaf4a7abd3aa3b962a2fe43e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=216&crop=smart&auto=webp&s=39a5554f67d2c5a2915c32889f717abca4ae429e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=320&crop=smart&auto=webp&s=a916ac5157e89c421d7276d30ac83e7882198bab', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=640&crop=smart&auto=webp&s=2b4015b9e1553d483bc840887d8b78940013b10d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=960&crop=smart&auto=webp&s=977904dfe7ab4945056a96c9d756161b7fd56e1a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?width=1080&crop=smart&auto=webp&s=c6a9a3c629a3491a0fe22b3bc22311fb8d4c8c04', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pcjOj1U3Tkw7EAVTVCpoNljTAvoJDB2vPu5Vq0JX8cs.jpg?auto=webp&s=17443aa61fbb6d5aeb4cbfd08da5fb45e5a67926', 'width': 1200}, 'variants': {}}]} |
Is OpenAI Calling For Regulation to Keep Their Monopoly? | 378 | I've spent the past week tinkering with these models and it's clear they're at least 95% as good as ChatGPT and only getting better.
I find the timing of Sam Altman's fear campaign a little too on point. Anyone else think Sam and OpenAI are just trying to remain the gatekeepers of this technology? | 2023-06-04T17:15:49 | https://www.reddit.com/r/LocalLLaMA/comments/140jjw6/is_openai_calling_for_regulation_to_keep_their/ | renegadellama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140jjw6 | false | null | t3_140jjw6 | /r/LocalLLaMA/comments/140jjw6/is_openai_calling_for_regulation_to_keep_their/ | false | false | self | 378 | null |
Model comparison | 2 | I have not seen people mention the GPT4All model much, but instead Wizard Vicuna. I also have not found any comparison of the two online.
Which one do you guys think is better, in terms of the 7B and 13B sizes of either Vicuna or GPT4All? | 2023-06-04T17:25:47 | https://www.reddit.com/r/LocalLLaMA/comments/140ju5o/model_comparison/ | Royal_Desk_4697 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140ju5o | false | null | t3_140ju5o | /r/LocalLLaMA/comments/140ju5o/model_comparison/ | false | false | self | 2 | null |
Running local LLM for info retrieval of technical documents | 16 | I'm pretty new to this space so please excuse me if I describe things terribly or have dumb questions.
I work in the cybersecurity space and I think there is a really great opportunity for my company to build an information retrieval product using a local LLM + vector database. I'm passionate about learning this technology, so I really want to push my company to allow me to do this research.
I have a pretty good understanding of what an embedder does and how information can be retrieved from a vector database by calculating cosine similarities. However, I'm not sure how the embedder and/or tokenizer handles words it has never seen. For example, say I have a bunch of technical documents stored in a vector DB where sections are named like "section A007.14". If I then ask the LLM "give me all the information in section A007 that is relevant to supply chain security", would the LLM know how to find that information? Has anyone here tried something like this?
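For context, embedders work on subword pieces, so an unseen string like "A007.14" still gets a vector; retrieval then hinges on storing chunks together with their section labels so the surrounding text carries the meaning. A toy sketch of the cosine-similarity step (assuming sentence-transformers; the chunks are invented):

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

# Invented example chunks - in practice these come from your split documents,
# ideally prefixed with their section numbers so "A007.14" appears in the text.
chunks = [
    "Section A007.14: Suppliers must be vetted against the approved vendor list.",
    "Section B002.1: Passwords must be rotated every 90 days.",
]
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

query = "What does section A007 say about supply chain security?"
query_vec = model.encode([query], normalize_embeddings=True)[0]

# With normalized vectors, cosine similarity is just a dot product.
scores = chunk_vecs @ query_vec
best = int(np.argmax(scores))
print(chunks[best], scores[best])
```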
I hope the question makes sense. This would be a dream project for me, and I imagine it will be a battle to convince my bosses. Any help/advice would be appreciated :) | 2023-06-04T17:45:40 | https://www.reddit.com/r/LocalLLaMA/comments/140kf1f/running_local_llm_for_info_retrieval_of_technical/ | AgeOfAlgorithms | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140kf1f | false | null | t3_140kf1f | /r/LocalLLaMA/comments/140kf1f/running_local_llm_for_info_retrieval_of_technical/ | false | false | self | 16 | null |
Podcasts for latest in-depth research and papers? | 7 | Like many of us here, have been consuming papers nonstop since last year, and realized that paperswithcode is now my most frequently visited site.
That said, given how much time I spend in the car, would love a podcast that isn't in the 'AI influencer' category and generalist stuff, and has either a host or guests (or both) who cover the latest research on LLMs and multimodal work. Lex is one I tried but got frustrated with quickly.
Anyone found any that are good? (and not AI generated) :) | 2023-06-04T17:49:39 | https://www.reddit.com/r/LocalLLaMA/comments/140kj83/podcasts_for_latest_indepth_research_and_papers/ | towelpluswater | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140kj83 | false | null | t3_140kj83 | /r/LocalLLaMA/comments/140kj83/podcasts_for_latest_indepth_research_and_papers/ | false | false | self | 7 | null |
Apple has an excellent hardware base for local generative AI | 78 | Current Apple iPad's and MacBook's have the following memory configuration in Apple Silicon chips:
* M1: Up to 16 GB, at 67 GB/s
* M2: Up to 24 GB, at 100 GB/s
* M1/M2 Pro: Up to 32 GB, at 200 GB/s
* M1/M2 Max: Up to 64 GB, at 400 GB/s
* M1 Ultra: Up to 128 GB, at 800 GB/s
Considering that a high-end desktop with dual-channel DDR5-6400 only does 100 GB/s, and an RTX 4090 has about 1000 GB/s of bandwidth but only 24 GB of memory, Apple is really well positioned to run local generative AI. There isn't any other consumer hardware that has this amount of memory at this bandwidth, especially in the Max and Ultra tiers.
Another strength is that the CPU, GPU and NPU can all use this bandwidth. This offers huge flexibility, also while developing and testing for Apple developers. Potentially it could even run models in a hybrid fashion, with the CPU, GPU and/or NPU running different parts or types of calculations.
Apple can easily do their 1.5x memory trick they did from M1 to M2 to their higher tiers, giving their Pro / Max / Ultra tiers a maximum of 48, 96 and 192 GB respectively. Apple now uses 6400 MT/s LPDDR5, but [Samsung](https://news.samsung.com/global/samsung-electronics-introduces-industrys-fastest-lpddr5x-dram-at-8-5gbps), [Micron](https://www.micron.com/about/blog/2022/february/lpddr5x-memory-performance-that-pushes-the-limits) and [SK Hynix](https://news.skhynix.com/meet-the-team-behind-the-development-of-the-fastest-lpddr5x/) all have announced LPDDR5X with up to 8533 MT/s, for an additional 33% memory bandwidth.
I'm really curious if Apple will announce some Generative AI models/services tomorrow, and if so, if any one them will run on-device. | 2023-06-04T18:05:11 | https://www.reddit.com/r/LocalLLaMA/comments/140l0im/apple_has_an_excellent_hardware_base_for_local/ | Balance- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140l0im | false | null | t3_140l0im | /r/LocalLLaMA/comments/140l0im/apple_has_an_excellent_hardware_base_for_local/ | false | false | self | 78 | {'enabled': False, 'images': [{'id': 'b7QGTRHbvIJbT7LnIi6Bz8SrqBhamHOP2huW_kPFOeM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/EcqQi_3CTbEa80vl-BQjaIH2a2m5zCE3QKXSY8d_GXQ.jpg?width=108&crop=smart&auto=webp&s=783b59e11243b0238bd13fbb994dfa8242b7ed34', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/EcqQi_3CTbEa80vl-BQjaIH2a2m5zCE3QKXSY8d_GXQ.jpg?width=216&crop=smart&auto=webp&s=a96946e97e7fc9d4371c65dd1ccfe9fca35d3b46', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/EcqQi_3CTbEa80vl-BQjaIH2a2m5zCE3QKXSY8d_GXQ.jpg?width=320&crop=smart&auto=webp&s=603bca2adac54fca813cf409a2234dce99dc01d0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/EcqQi_3CTbEa80vl-BQjaIH2a2m5zCE3QKXSY8d_GXQ.jpg?width=640&crop=smart&auto=webp&s=cbdef7b5bd1e9ca2a1f76570f379246047aeeb40', 'width': 640}], 'source': {'height': 410, 'url': 'https://external-preview.redd.it/EcqQi_3CTbEa80vl-BQjaIH2a2m5zCE3QKXSY8d_GXQ.jpg?auto=webp&s=59fc0a5d6ea39cb2e88df9892cd77d3327c83ed1', 'width': 728}, 'variants': {}}]} |
Is there a quantized version of GPT-NeoX-20B? | 4 | I've searched but only found a GGML version (like [here](https://huggingface.co/KoboldAI/GPT-NeoX-20B-Erebus-GGML)). It would be so awesome to be able to run such a powerful model with less resource consumption than a 30B model (I can't get past ~400 tokens of context before running out of memory, and other methods give me 0.2~0.5 t/s).
Pardon my candor if that is not feasible or something only for LLaMA models. | 2023-06-04T18:18:08 | https://www.reddit.com/r/LocalLLaMA/comments/140lena/is_there_a_quantized_version_of_gptneox20b/ | Extraltodeus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140lena | false | null | t3_140lena | /r/LocalLLaMA/comments/140lena/is_there_a_quantized_version_of_gptneox20b/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'cA5I-ZvsUS0dc7nyjO3KttwjFI5CnkUYrugUz_1HLLc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zMlL4RAcmVIH9tn8ODOgHYiuiilRpntIkBa_jlOcwQw.jpg?width=108&crop=smart&auto=webp&s=54a4f41c693f2e7a09c452c8ca5738ac9a9f0e0d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zMlL4RAcmVIH9tn8ODOgHYiuiilRpntIkBa_jlOcwQw.jpg?width=216&crop=smart&auto=webp&s=75f7ce3138aaa77fd8e0489e0e24876c47d9a61e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zMlL4RAcmVIH9tn8ODOgHYiuiilRpntIkBa_jlOcwQw.jpg?width=320&crop=smart&auto=webp&s=6d99927100d246483bcf303e271f285dfe2d2ff7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zMlL4RAcmVIH9tn8ODOgHYiuiilRpntIkBa_jlOcwQw.jpg?width=640&crop=smart&auto=webp&s=20cba7079a4451b8628727a235e2f731ac0933ca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zMlL4RAcmVIH9tn8ODOgHYiuiilRpntIkBa_jlOcwQw.jpg?width=960&crop=smart&auto=webp&s=adf6809274d6b4c1ce4fcc6ba2dcdc2dfe404662', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zMlL4RAcmVIH9tn8ODOgHYiuiilRpntIkBa_jlOcwQw.jpg?width=1080&crop=smart&auto=webp&s=67c6a1c6cb9b80fcbb2e0a175d2e8e817e9769ee', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zMlL4RAcmVIH9tn8ODOgHYiuiilRpntIkBa_jlOcwQw.jpg?auto=webp&s=bbe735916c3aa12e43f7f2f9007ee9432c2c2a43', 'width': 1200}, 'variants': {}}]} |
Llama.cpp running 40+ tokens/s on Apple M2 Max with 7B | 113 | 2023-06-04T18:33:51 | https://twitter.com/natfriedman/status/1665402680376987648 | superlinux | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 140lvof | false | {'oembed': {'author_name': 'Nat Friedman', 'author_url': 'https://twitter.com/natfriedman', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Watching llama.cpp do 40 tok/s inference of the 7B model on my M2 Max, with 0% CPU usage, and using all 38 GPU cores.<br><br>Congratulations <a href="https://twitter.com/ggerganov?ref_src=twsrc%5Etfw">@ggerganov</a> ! This is a triumph.<a href="https://t.co/C6mn7jkMLb">https://t.co/C6mn7jkMLb</a> <a href="https://t.co/8tcnVN4wEb">pic.twitter.com/8tcnVN4wEb</a></p>— Nat Friedman (@natfriedman) <a href="https://twitter.com/natfriedman/status/1665402680376987648?ref_src=twsrc%5Etfw">June 4, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/natfriedman/status/1665402680376987648', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_140lvof | /r/LocalLLaMA/comments/140lvof/llamacpp_running_40_tokenss_on_apple_m2_max_with/ | false | false | 113 | {'enabled': False, 'images': [{'id': 'jxnnGVKnzFxF45ghhY2zMgFIA-2v8wws1GD2NbIs2mg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8pgY2Tv9n5QBGM0NP7TEfHliUO7_7nrzlKFkhhir264.jpg?width=108&crop=smart&auto=webp&s=084860c4348b2bd0b24092538a75a0ef5fd9cc05', 'width': 108}], 'source': {'height': 73, 'url': 'https://external-preview.redd.it/8pgY2Tv9n5QBGM0NP7TEfHliUO7_7nrzlKFkhhir264.jpg?auto=webp&s=3f2c026628aebca6ed730335d08ea9af9cb762e6', 'width': 140}, 'variants': {}}]} |
||
Google launched FREE learning path for AI 🚀 | 0 | [removed] | 2023-06-04T19:17:09 | Harmoniqa | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 140n6gg | false | null | t3_140n6gg | /r/LocalLLaMA/comments/140n6gg/google_launched_free_learning_path_for_ai/ | false | false | default | 0 | null |
|
Full GPU inference on Apple Silicon using Metal with GGML | 88 | Today a new version of llama.cpp was released that can do 40 tok/s inference with the 7B model on an M2 Max, with 0% CPU usage, by fully using all 38 GPU cores. The 13B model does 24 tok/s!
Credits to **Georgi Gerganov.**
https://reddit.com/link/140nto2/video/c8m7wdom024b1/player
[https://github.com/ggerganov/llama.cpp/pull/1642](https://github.com/ggerganov/llama.cpp/pull/1642) | 2023-06-04T19:39:13 | https://www.reddit.com/r/LocalLLaMA/comments/140nto2/full_gpu_inference_on_apple_silicon_using_metal/ | fictioninquire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140nto2 | false | null | t3_140nto2 | /r/LocalLLaMA/comments/140nto2/full_gpu_inference_on_apple_silicon_using_metal/ | false | false | 88 | {'enabled': False, 'images': [{'id': 'ererVu4j998CL0X58w8Ofkob2ZikDAEgeuOxfi-kJD0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?width=108&crop=smart&auto=webp&s=422fb9e2477d7af8aef9af5db192792127c978e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?width=216&crop=smart&auto=webp&s=0ae86c653433e6480bb46ec116f04c912051263d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?width=320&crop=smart&auto=webp&s=8853a236167c53efe57847456681e04d683c5311', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?width=640&crop=smart&auto=webp&s=5ef94cdbac6f0143b66acdc732f8a01f1f60c97a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?width=960&crop=smart&auto=webp&s=768c1cbc6c1b145e9df9458cad94a66ac334248a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?width=1080&crop=smart&auto=webp&s=f8190b495ab847a48a7dc20b31f0b95c0027c9ab', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?auto=webp&s=9326e03762114cb4ed865052cf6ac84d12ac9571', 'width': 1200}, 'variants': {}}]} |
|
2 to 6 bit quantization coming to llama.cpp | 45 | This PR to llama.cpp adds a series of 2-6 bit quantization methods, along with quantization mixes, as proposed in #1240 and #1256. Scalar, AVX2, ARM_NEON, and CUDA implementations are provided. | 2023-06-04T19:44:53 | https://github.com/ggerganov/llama.cpp/pull/1684 | Balance- | github.com | 1970-01-01T00:00:00 | 0 | {} | 140nzqr | false | null | t3_140nzqr | /r/LocalLLaMA/comments/140nzqr/2_to_6_bit_quantization_coming_to_llamacpp/ | false | false | 45 | {'enabled': False, 'images': [{'id': 'U-lH0G6AU_idaVBCaAZC6GZtl2wiX_2R4HCjwbbNRLI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YnZq9pQaEV0xsHk5jsYo--bTpMQLpy-oau_wBTrQGIo.jpg?width=108&crop=smart&auto=webp&s=cd1d128b5259f86aa33bdee352a0a1c83711aed3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YnZq9pQaEV0xsHk5jsYo--bTpMQLpy-oau_wBTrQGIo.jpg?width=216&crop=smart&auto=webp&s=f2633a1c04c92f5dd397a8a7caa78fdaa3cae6e9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YnZq9pQaEV0xsHk5jsYo--bTpMQLpy-oau_wBTrQGIo.jpg?width=320&crop=smart&auto=webp&s=5c81613a9b6e9dbce71b55fb6c80dde98a69ea5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YnZq9pQaEV0xsHk5jsYo--bTpMQLpy-oau_wBTrQGIo.jpg?width=640&crop=smart&auto=webp&s=ba8a942bcc11e3b54026b9a3d41613e026ea5dc2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YnZq9pQaEV0xsHk5jsYo--bTpMQLpy-oau_wBTrQGIo.jpg?width=960&crop=smart&auto=webp&s=7d31adddfb6ee063b315ad366713e667237c462b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YnZq9pQaEV0xsHk5jsYo--bTpMQLpy-oau_wBTrQGIo.jpg?width=1080&crop=smart&auto=webp&s=7e5a957b473fa817284dd5614d28239a6f27c897', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YnZq9pQaEV0xsHk5jsYo--bTpMQLpy-oau_wBTrQGIo.jpg?auto=webp&s=54f9d35a831c8437605ccb0242b575b7c1e560ea', 'width': 1200}, 'variants': {}}]} |
|
How to control the form of the answer? | 5 | Hi!
I'm working on a project in which I need to implement an LLM to generate descriptions that will be used by another program (not an AI).
For this reason, I need to make sure that the model always uses the same structure to respond. For example :
A) \*a word or sentence from a predefined list\*
B) \*an integer between two min/max values\*
C) \*a sentence written by the LLM\*
...
I tried to figure out a few solutions that could work, but none of them guarantees even a high rate of success. So, what do you think? Do you have any tips that could help me get better results? | 2023-06-04T20:10:57 | https://www.reddit.com/r/LocalLLaMA/comments/140orjy/how_to_control_the_form_of_the_answer/ | Zeink303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140orjy | false | null | t3_140orjy | /r/LocalLLaMA/comments/140orjy/how_to_control_the_form_of_the_answer/ | false | false | self | 5 | null |
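One pragmatic approach (a sketch; the allowed values and ranges below are just stand-ins for A/B/C above, and `generate()` is a placeholder for the local model call) is to spell the structure out in the prompt, then validate whatever comes back and re-ask on failure:

```python
import re

ALLOWED_A = {"red", "green", "blue"}  # example predefined list for field A
B_MIN, B_MAX = 0, 100                 # example min/max for field B

def generate(prompt: str) -> str:
    """Placeholder for your local LLM call."""
    raise NotImplementedError

def parse(reply: str):
    """Expect three lines: 'A: <word>', 'B: <integer>', 'C: <sentence>'. Return None if invalid."""
    fields = dict(re.findall(r"^([ABC]):\s*(.+)$", reply, flags=re.MULTILINE))
    try:
        a, b, c = fields["A"].strip().lower(), int(fields["B"]), fields["C"].strip()
    except (KeyError, ValueError):
        return None
    if a not in ALLOWED_A or not (B_MIN <= b <= B_MAX):
        return None
    return {"A": a, "B": b, "C": c}

def structured_answer(prompt: str, max_attempts: int = 3):
    instructions = (f"{prompt}\nReply in exactly this format:\n"
                    f"A: one of {sorted(ALLOWED_A)}\n"
                    f"B: an integer between {B_MIN} and {B_MAX}\n"
                    f"C: one sentence")
    for _ in range(max_attempts):
        result = parse(generate(instructions))
        if result is not None:
            return result
    raise ValueError("model never produced a valid structured answer")
```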
Metal inference (running on Apple GPUs) now merged into llama.cpp | 31 | The Pull Request (PR) #1642 on the ggerganov/llama.cpp repository, titled "Add full GPU inference of LLaMA on Apple Silicon using Metal," proposes significant changes to enable GPU support on Apple Silicon for the LLaMA language model using Apple's Metal API.
In summary, this PR extends the ggml API and implements Metal shaders/kernels to allow LLaMA inference to run on the GPU. The shaders focus mainly on qMatrix x Vector multiplication, which is typically needed for text generation with LLM. For tasks involving Matrix x Matrix computations (e.g., prompt ingestion, perplexity computation), there isn't an efficient GPU implementation yet, so the execution falls back to the CPU / Apple Neural Engine (ANE).
The PR also introduces support for static computation graphs, and the new implementation can potentially be extended to other backends like CUDA. The core implementation is in the ggml_metal_graph_compute() function, which serves a similar purpose to the CPU-only ggml_graph_compute(). The PR also maintains a clean separation of the Metal-related code, allowing third-party applications to decide whether they want to include, modify, or ignore them.
This PR also works well with mmap to avoid model data duplication in memory, suggesting that there might be future improvements to reduce memory usage when Metal is enabled.
The code diff shows changes to the CMakeLists.txt file, mainly adding options for Metal support, including LLAMA_METAL "llama: use Metal" OFF. The CMakeLists.txt file also includes Metal related libraries like the Metal Framework, MetalKit Framework, and MetalPerformanceShaders Framework, demonstrating that these libraries are now dependencies for llama.cpp if Metal support is enabled.
The impact of these changes is significant. With this PR, LLaMA can now run on Apple's M1 Pro and M2 Max chips using Metal, which would potentially improve performance and efficiency. There are even demonstrations showing the successful application of the changes with 7B, 13B, and 65B LLaMA models12. | 2023-06-04T20:56:36 | https://github.com/ggerganov/llama.cpp/pull/1642 | Balance- | github.com | 1970-01-01T00:00:00 | 0 | {} | 140q3bn | false | null | t3_140q3bn | /r/LocalLLaMA/comments/140q3bn/metal_inference_running_on_apple_gpus_now_merged/ | false | false | 31 | {'enabled': False, 'images': [{'id': 'ererVu4j998CL0X58w8Ofkob2ZikDAEgeuOxfi-kJD0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?width=108&crop=smart&auto=webp&s=422fb9e2477d7af8aef9af5db192792127c978e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?width=216&crop=smart&auto=webp&s=0ae86c653433e6480bb46ec116f04c912051263d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?width=320&crop=smart&auto=webp&s=8853a236167c53efe57847456681e04d683c5311', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?width=640&crop=smart&auto=webp&s=5ef94cdbac6f0143b66acdc732f8a01f1f60c97a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?width=960&crop=smart&auto=webp&s=768c1cbc6c1b145e9df9458cad94a66ac334248a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?width=1080&crop=smart&auto=webp&s=f8190b495ab847a48a7dc20b31f0b95c0027c9ab', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3jubCav6XuxrWkmnZ3K5-uhxXbiwHZnoAQZYDBwm1IY.jpg?auto=webp&s=9326e03762114cb4ed865052cf6ac84d12ac9571', 'width': 1200}, 'variants': {}}]} |
|
Rent Server for LLM | 1 | [removed] | 2023-06-04T21:06:00 | https://www.reddit.com/r/LocalLLaMA/comments/140qdmi/rent_server_for_llm/ | schmul02 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140qdmi | false | null | t3_140qdmi | /r/LocalLLaMA/comments/140qdmi/rent_server_for_llm/ | false | false | default | 1 | null |
I'm confused on step 22 (Installing 4-bit LLaMA with text-generation-webui) | 1 | [removed] | 2023-06-04T21:52:08 | https://www.reddit.com/r/LocalLLaMA/comments/140rpov/im_confused_on_step_22_installing_4bit_llama_with/ | xandens | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140rpov | false | null | t3_140rpov | /r/LocalLLaMA/comments/140rpov/im_confused_on_step_22_installing_4bit_llama_with/ | false | false | default | 1 | null |
Distributed open source node based LLM network | 2 | Is this even possible? Here’s my idea:
Problem: Unless you have access to a corporate level GPU, most individual users cannot take advantage of open source LLMs beyond the 13B size.
Solution: a distributed compute network could create a massive, supercomputer-tier pool of compute resources that end users could then pay to access on a per-API-query basis. This would allow individuals to be rewarded for contributing compute to the network (similar to blockchain node rewards). As a distributed network, it would be extremely resistant to government censorship attempts. It would also be capable of running ultra-large models competitive with GPT-4.
Thoughts, ideas, feedback, criticism? Obviously this would be difficult to create, but it seems like it would be extremely worthwhile as well. | 2023-06-04T22:27:30 | https://www.reddit.com/r/LocalLLaMA/comments/140sqgd/distributed_open_source_node_based_llm_network/ | SkyTemple77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140sqgd | false | null | t3_140sqgd | /r/LocalLLaMA/comments/140sqgd/distributed_open_source_node_based_llm_network/ | false | false | self | 2 | null |
llama-deus-7b-v3-GGML · Hugging Face | 13 | 2023-06-04T22:42:14 | https://huggingface.co/TheBloke/llama-deus-7b-v3-GGML | [deleted] | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 140t61f | false | null | t3_140t61f | /r/LocalLLaMA/comments/140t61f/llamadeus7bv3ggml_hugging_face/ | false | false | 13 | {'enabled': False, 'images': [{'id': '72xaRPxFP8RVlzUFAf870W2SuReFROxzg2z_H2VlB0M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NHpx6MtaZam6eQD3RM9LLUE1LHp0MtEY9NpE3onl8IU.jpg?width=108&crop=smart&auto=webp&s=d70676421ce917e07663dd2846fd53df3297bfe8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NHpx6MtaZam6eQD3RM9LLUE1LHp0MtEY9NpE3onl8IU.jpg?width=216&crop=smart&auto=webp&s=7e38608229f54683278106f0bc71073642e9a704', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NHpx6MtaZam6eQD3RM9LLUE1LHp0MtEY9NpE3onl8IU.jpg?width=320&crop=smart&auto=webp&s=fcad11028004cbee97660b8983be9c84e31150d5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NHpx6MtaZam6eQD3RM9LLUE1LHp0MtEY9NpE3onl8IU.jpg?width=640&crop=smart&auto=webp&s=52d509be3704271c05ffc3bf8387e3cae48bc5a3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NHpx6MtaZam6eQD3RM9LLUE1LHp0MtEY9NpE3onl8IU.jpg?width=960&crop=smart&auto=webp&s=158878856c82f2cf6c9b9bd20af4822ac8ece138', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NHpx6MtaZam6eQD3RM9LLUE1LHp0MtEY9NpE3onl8IU.jpg?width=1080&crop=smart&auto=webp&s=4cdf241ea1d7866058244235eb836ea7cd247cd6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NHpx6MtaZam6eQD3RM9LLUE1LHp0MtEY9NpE3onl8IU.jpg?auto=webp&s=6c62c4736b51d1686c03b3d88eae7844c72a3c7e', 'width': 1200}, 'variants': {}}]} |
||
Any medical fine tuned model? | 2 | [removed] | 2023-06-04T23:18:42 | https://www.reddit.com/r/LocalLLaMA/comments/140u7q0/any_medical_fine_tuned_model/ | Distinct-Target7503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140u7q0 | false | null | t3_140u7q0 | /r/LocalLLaMA/comments/140u7q0/any_medical_fine_tuned_model/ | false | false | default | 2 | null |
Quick Question: Are all loras made with a given base model compatible with other models using that base? | 1 | I'm excited to get back into personal LoRA training after some initial success with `alpaca_lora_4bit` before QLoRA came out. What I'm wondering is whether a LoRA trained against one variant of a base model will be compatible with another variant of that model. The reason is that, if so, it will make a lot of sense for us to train LoRAs because we can re-use them when the next amazing llama finetune drops (hats off to /u/faldore in this regard).
If not, then we'll have to get used to training and re-training LoRAs against whatever the new hotness is, which is fine enough I suppose, but I would like to understand the constraints of LoRAs as applied to variants of base models.
Example: If i train a lora on llama33b base, can I run this lora on top of wizardlm-uncensored-33b? Or, if i train against 33b-based, can I run that lora against wizard-alpaca-33?
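Mechanically, stacking a lora onto a sibling fine-tune is easy with peft, so by "run this lora on top of" I mean something like the sketch below (the model and adapter paths are placeholders, not real repos); whether the result actually behaves well is exactly the open question.

```python
# Sketch: apply a LoRA trained against the llama-33b base to a different fine-tune of that base.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("some-org/wizardlm-uncensored-33b", device_map="auto")
tok = AutoTokenizer.from_pretrained("some-org/wizardlm-uncensored-33b")

# Adapter that was trained on the plain llama-33b base
model = PeftModel.from_pretrained(base, "./my-llama33b-base-lora")
# model = model.merge_and_unload()  # optionally bake the adapter into the weights
```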
I know the question is binary in nature, but I'm hoping asking this directly on the sub will open up some insights that many of us would benefit from. | 2023-06-05T00:34:19 | https://www.reddit.com/r/LocalLLaMA/comments/140w9ll/quick_question_are_all_loras_made_with_a_given/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140w9ll | false | null | t3_140w9ll | /r/LocalLLaMA/comments/140w9ll/quick_question_are_all_loras_made_with_a_given/ | false | false | self | 1 | null |
How to implement chain or reasoning or debate/consensus behavior locally? | 1 | So I've been trying to think of the easiest way to implement the kind of higher-level systems that papers have been coming out about. Would it be to simply make API calls to an oobabooga instance and create a loop in Python that automatically takes such inputs and iteratively works through them? | 2023-06-05T00:56:16 | https://www.reddit.com/r/LocalLLaMA/comments/140wtx9/how_to_implement_chain_or_reasoning_or/ | NDV-Twist-5283 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140wtx9 | false | null | t3_140wtx9 | /r/LocalLLaMA/comments/140wtx9/how_to_implement_chain_or_reasoning_or/ | false | false | self | 1 | null
The weight diffs for 32K context length LLaMA 7B trained with landmark attention have been released | 120 | Weight diff: [https://huggingface.co/epfml/landmark-attention-llama7b-wdiff](https://huggingface.co/epfml/landmark-attention-llama7b-wdiff)
Paper: [https://arxiv.org/abs/2305.16300](https://arxiv.org/abs/2305.16300)
>In this work, we propose a novel approach for overcoming the context length limit by allowing earlier blocks of the input to be directly incorporated into the attention itself. We break the input into blocks of fixed length and introduce a special token for each block, called a landmark, which acts as a gate for attending to its corresponding block. The gating mechanism is controlled by the attention score to the landmark token. At inference time, the attention scores on the landmarks allow us to retrieve any previous block and integrate it with standard attention.
>
>Our model can process any context length at inference time regardless of the context length used at training time. More importantly, we demonstrate that using our method to fine-tune LLaMA 7B, a large language model, allows it to retrieve relevant information from contexts with over 32k tokens, which is the context length of GPT-4.
>
>Our implementation of landmark attention is accessible at [https://github.com/epfml/landmark-attention](https://github.com/epfml/landmark-attention). | 2023-06-05T01:33:11 | https://www.reddit.com/r/LocalLLaMA/comments/140xt9t/the_weight_diffs_for_32k_context_length_llama_7b/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140xt9t | false | null | t3_140xt9t | /r/LocalLLaMA/comments/140xt9t/the_weight_diffs_for_32k_context_length_llama_7b/ | false | false | self | 120 | {'enabled': False, 'images': [{'id': 'JFn6nVzjRf9NLGqLGJCIrJrWvmhTxtgtdn-HR3yESns', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oaM6cKyiF4ltD2GZ2qTGV-8rIvtt42FhJmkfcDEOj54.jpg?width=108&crop=smart&auto=webp&s=07680dd7587b562d29dc30f1211ae018c5fb7600', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oaM6cKyiF4ltD2GZ2qTGV-8rIvtt42FhJmkfcDEOj54.jpg?width=216&crop=smart&auto=webp&s=e52a03de1579560f944256c4ac6aeb643e494054', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oaM6cKyiF4ltD2GZ2qTGV-8rIvtt42FhJmkfcDEOj54.jpg?width=320&crop=smart&auto=webp&s=48859d8a5cf1c11e529c59c948d5498c559643b6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oaM6cKyiF4ltD2GZ2qTGV-8rIvtt42FhJmkfcDEOj54.jpg?width=640&crop=smart&auto=webp&s=094360fde34c2b426ced8c92a22f352442b3f918', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oaM6cKyiF4ltD2GZ2qTGV-8rIvtt42FhJmkfcDEOj54.jpg?width=960&crop=smart&auto=webp&s=ed0cd6e2e751c13520d8403c34b38a336b30ed9a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oaM6cKyiF4ltD2GZ2qTGV-8rIvtt42FhJmkfcDEOj54.jpg?width=1080&crop=smart&auto=webp&s=b10588bf102015bf94056489ca0b78b3cdb87754', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oaM6cKyiF4ltD2GZ2qTGV-8rIvtt42FhJmkfcDEOj54.jpg?auto=webp&s=ae0ae079674dc0bf46d32072a3fee71e51e3a868', 'width': 1200}, 'variants': {}}]} |
Tool for chatting with your codebase and docs using OpenAI, LlamaCpp, and GPT-4-All | 1 | [removed] | 2023-06-05T02:12:21 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 140yuuh | false | null | t3_140yuuh | /r/LocalLLaMA/comments/140yuuh/tool_for_chatting_with_your_codebase_and_docs/ | false | false | default | 1 | null |
||
Best laptop system around 1k to run llms | 0 | Hi folks, I am planning to buy a new laptop.
I want to be able to work on ML side projects like running LLaMA and other LLMs, and it would be great if it could also run some of the newer games.
Budget around 1k, open to both windows and mac.
Any advice would be great 👍
My current laptop is shit, so I use Colab and Kaggle for side projects. | 2023-06-05T02:27:51 | https://www.reddit.com/r/LocalLLaMA/comments/140z8jx/best_laptop_system_around_1k_to_run_llms/ | riser56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140z8jx | false | null | t3_140z8jx | /r/LocalLLaMA/comments/140z8jx/best_laptop_system_around_1k_to_run_llms/ | false | false | self | 0 | null
Fine-tuning on top of Vicuna / Alpaca? | 1 | Hey everyone! I'm attempting to fine-tune a Vicuna model on a dataset of ~100 examples.
Initially, I tried using the FastChat fine-tuning scripts ([https://github.com/lm-sys/FastChat#fine-tuning](https://github.com/lm-sys/FastChat#fine-tuning)), replacing Llama 7B with my Vicuna model. However, this didn't work out well, which might have something to do with my small dataset. The model outputs gibberish for any prompts outside of my dataset and doesn't do a great job with those in it.
I think I need to use some version of LoRA, but I was wondering if it's possible to use LoRA on top of an already fine-tuned model, like Vicuna or Alpaca. If so, are there any popular repositories where I can just drop in my model and dataset?
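From what I can tell it's at least mechanically possible, since peft doesn't care whether the base was already fine-tuned. This is roughly what I'm picturing (the model path, rank and target modules are assumptions on my part):

```python
# Sketch: attach a LoRA adapter to an already fine-tuned model (e.g. Vicuna) and train only the adapter.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("path/to/merged-vicuna-7b", device_map="auto")

lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small adapter is trained, which suits a ~100-example dataset
# ...then run a normal supervised fine-tuning loop (e.g. transformers.Trainer) over the dataset
```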
Thanks! | 2023-06-05T02:27:59 | https://www.reddit.com/r/LocalLLaMA/comments/140z8mq/finetuning_on_top_of_vicuna_alpaca/ | karansdalal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 140z8mq | false | null | t3_140z8mq | /r/LocalLLaMA/comments/140z8mq/finetuning_on_top_of_vicuna_alpaca/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'TkLtppr7jnF9pjABy3F5bgs_5G0EcIfjST6D8zGn-tc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?width=108&crop=smart&auto=webp&s=073c76b691a85eeff570b44be101e70c1c52153c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?width=216&crop=smart&auto=webp&s=5794d10c301d6c4ccb499637008743f66ad46b1c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?width=320&crop=smart&auto=webp&s=6bdc74d43e7318f4945208e9742f521ca9d85825', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?width=640&crop=smart&auto=webp&s=4a2f6e0f71b33eb4b56de475410a6c9cddff06fe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?width=960&crop=smart&auto=webp&s=e61cbcc8f7eeb6165ffdf556ab67140794a2721c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?width=1080&crop=smart&auto=webp&s=7d60e60332a4ffe2b142371b1b55a042a0a3f9c7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?auto=webp&s=a40a9fa926219ec4f8a264ddd7a4a7d56c07c842', 'width': 1200}, 'variants': {}}]} |
The best model I could run on my laptop @ > 2 tokens/second in April vs. Today | 88 | 2023-06-05T02:56:22 | https://www.reddit.com/gallery/140zxkq | synexo | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 140zxkq | false | null | t3_140zxkq | /r/LocalLLaMA/comments/140zxkq/the_best_model_i_could_run_on_my_laptop_2/ | false | false | 88 | null |
||
Proposal for a Crowd-Sourced AI Feedback System | 6 | I'd like to discuss an idea inspired by my personal inferences about how OpenAI might be utilizing their feedback system to improve user experience and gather valuable training data. Regardless of whether my assumptions are accurate, I believe the underlying concept is incredibly powerful.
*Overview of OpenAI's Feedback Process:*
1. Upvoting a response marks it as good.
2. Downvoting a response prompts users to give feedback on improvements.
3. If the user gives feedback, it's apparently fed back into the model with the question.
4. A new response is generated, and users choose between the old and new answers.
This process could potentially help OpenAI build a database of superior answers, perfect for enhancing their training data.
*Applying the Idea to the Open Source Community:*
We could implement a similar system with a central data store, keeping it selective and opt-in. The service should have transparent terms of collection and use, and a collection agent in the client could point to one or more preferred collection services.
For the client implementation, we could create a new extension for popular inference interfaces like Oobabooga that performs generation and quality evaluation locally. Users would be able to toggle this feature on and off.
*Organizing Collection Services:*
Collection services could be dedicated to specific model functions or activities, such as coding, chatting, instructing, and even NSFW content. These services might be organized under one or more websites and governance structures.
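To make the collection format concrete, each submission might be a small record along these lines (all field names are purely illustrative):

```python
# Illustrative record shape for a feedback collection service; every field name is a placeholder.
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    model_id: str          # e.g. "wizardlm-30b.ggmlv3.q4_0"
    category: str          # "coding", "chat", "instruct", "nsfw", ...
    prompt: str
    original_answer: str
    user_feedback: str     # what the user said should be improved
    revised_answer: str
    preferred: str         # "original" or "revised"
    votes: int             # community up/down tally, usable for weighting
```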
Collecting information on the specific model used, along with upvotes and downvotes, could enable us to weigh the inclusion of text generated from models that consistently perform better by consensus. This would result in a dynamic community evaluation of top-performing models at no extra cost. | 2023-06-05T03:26:34 | https://www.reddit.com/r/LocalLLaMA/comments/1410nrm/proposal_for_a_crowdsourced_ai_feedback_system/ | friedrichvonschiller | self.LocalLLaMA | 2023-06-05T03:32:36 | 0 | {} | 1410nrm | false | null | t3_1410nrm | /r/LocalLLaMA/comments/1410nrm/proposal_for_a_crowdsourced_ai_feedback_system/ | false | false | self | 6 | null |
What is the deal with GPT4ALL's performance on a Mac M1/Silicon | 0 | [removed] | 2023-06-05T03:58:11 | https://www.reddit.com/r/LocalLLaMA/comments/1411ebw/what_is_the_deal_with_gpt4alls_performance_on_a/ | Aperturebanana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1411ebw | false | null | t3_1411ebw | /r/LocalLLaMA/comments/1411ebw/what_is_the_deal_with_gpt4alls_performance_on_a/ | false | false | default | 0 | null |
100k context windows. How soon before you can run them locally? | 31 | Hi all, I have a question for the community. I have been experimenting with 100k context for Claude and in some cases the results have been amazing. However, the cost is still too high. I was wondering how long it will take before we have locally run 100k-context-window LLMs. Would they need specialised hardware? | 2023-06-05T05:01:53 | https://www.reddit.com/r/LocalLLaMA/comments/1412vlg/100k_context_windows_how_soon_before_you_can_run/ | Reluctant_Pumpkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1412vlg | false | null | t3_1412vlg | /r/LocalLLaMA/comments/1412vlg/100k_context_windows_how_soon_before_you_can_run/ | false | false | self | 31 | null
Help with method to convert raw text into structured json format (Maybe vector database) | 2 | I'm struggling to figure out the best way to solve this problem. I need to take some text that may be formatted in any way, and convert it to a specific json structure. So for example the text may say something like this (Or it could be completely different).
random text
8782454878phone number
Some product
$20.99
this product is really cool
Some other product
$20.79
this product is not
It then needs to convert it to an array of json objects that already has specific fields, like this for example (Leaving blank any fields it can't find).
[
  {
    "name": "Some product",
    "description": "this product is really cool",
    "price": "$20.99"
  }
]
​
The thing is, I can ask ChatGPT to do this and it handles the task just fine, but through the API I may be handling text that is exponentially larger than the token limit. I could break it into chunks, but I'm unsure how to make sure the chunks don't cut off in the middle of each item that has to be processed.
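The chunking half at least seems tractable on its own. Here's the rough shape I have in mind (the blank-line boundary rule and the prompt wording are assumptions about my data, and call_llm just stands in for whatever API wrapper is used):

```python
# Sketch: split the raw text on blank-line boundaries so no item is cut in half,
# then send each chunk to the model with a fixed JSON-extraction prompt.
def chunk_on_boundaries(text: str, max_chars: int = 6000) -> list[str]:
    chunks, current = [], ""
    for block in text.split("\n\n"):              # assumes items are separated by blank lines
        if current and len(current) + len(block) > max_chars:
            chunks.append(current)
            current = ""
        current += block + "\n\n"
    if current:
        chunks.append(current)
    return chunks

PROMPT = ("Extract every product from the text below as a JSON array of objects "
          "with keys name, description and price. Leave any missing fields empty.\n\n{chunk}")

# for chunk in chunk_on_boundaries(raw_text):
#     response = call_llm(PROMPT.format(chunk=chunk))
```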
I think I read somewhere that there is a package that uses an LLM like GPT to automatically convert text into a vector database. Whether or not I can define the fields of the database or it does it automatically, I figure getting the data into some sort of structure would make it easier to iterate over and get into the format I need.
Does anyone have any advice on how to approach this problem? | 2023-06-05T05:14:29 | https://www.reddit.com/r/LocalLLaMA/comments/14135ia/help_with_method_to_convert_raw_text_into/ | TheNomadicAspie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14135ia | false | null | t3_14135ia | /r/LocalLLaMA/comments/14135ia/help_with_method_to_convert_raw_text_into/ | false | false | self | 2 | null |
M1 GPU Performance | 29 | I am testing this on an M1 Ultra with 128 GB of RAM and a 64-core GPU. This is based on the latest build of [llama.cpp](https://github.com/ggerganov/llama.cpp), which gained GPU support for the M1 line today. I'm using the [65B Dettmers Guanaco model](https://huggingface.co/TheBloke/guanaco-65B-GPTQ). Performance is blazing fast, though it is a hurry-up-and-wait pattern. Tokens are generated faster than I can read, but the GPU seems to need to pause every 20 seconds or so for long text generation. Probably need to increase batch sizes and make use of all that RAM.
> llama_print_timings: load time = 12309.79 ms
> llama_print_timings: sample time = 179.21 ms / 256 runs ( 0.70 ms per token)
> llama_print_timings: prompt eval time = 1723.06 ms / 8 tokens ( 215.38 ms per token)
> llama_print_timings: eval time = 33199.76 ms / 255 runs ( 130.20 ms per token)
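To chase down those pauses, the knobs I plan to try next are thread count, batch size and mlock. The values below are guesses and the flag names are from memory of ./main --help, so worth double-checking:

```bash
LLAMA_METAL=1 ./main -m ./models/guanaco-65B.ggmlv3.q4_0.bin \
  -p "The first known dinosaur was" -n 256 -ngl 1 \
  -t 16 -b 1024 --mlock
```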
The prompt:
> (base) x@Mac-Studio llama.cpp % LLAMA\_METAL=1 make -j && ./main -m ./models/guanaco-65B.ggmlv3.q4_0.bin -p "The first known dinosaur was" --ignore-eos -n 256 -ngl 1
And the response:
> The first known dinosaur was discovered in 1824 and since then, the study of these fascinating creatures has been a source of wonder for many. These prehistoric reptiles came in all shapes and sizes, from the tiny Microraptor (about the size of a sparrow) to the enormous Argentinosaurus that was longer than 3 school buses!
> What’s even more fascinating is that these different species were spread out across the globe. Some dinosaurs like the Triceratops and T-Rex roamed North America, while others such as the Muttaburrasaurus and Minmi were found in Australia. In this post, we’ll take a closer look at some of the most fascinating species that once roamed our planet.
> One of the largest land animals to have ever lived, Argentinosaurus was truly an impressive sight. It weighed around 80 tons and measured up to 130 feet from nose to tail! Despite its massive size, this giant herbivore was probably slow-moving and gentle in nature. The Argentinosaurus belonged to a group of sauropod dinosaurs that included other huge species like the
The accuracy surprises me. I asked it about the evolution of monotremes, and it correctly cited Streptodon galami in the Cretaceous. Though like all LLMs it tends to wander a bit if the response is longer. | 2023-06-05T05:26:42 | https://www.reddit.com/r/LocalLLaMA/comments/1413fbq/m1_gpu_performance/ | Thalesian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1413fbq | false | null | t3_1413fbq | /r/LocalLLaMA/comments/1413fbq/m1_gpu_performance/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]} |
Is it feasible to use an open source model & vector database to manage a growing library of ebooks & papers? | 17 | I have a growing library of ebooks and academic papers on a number of different, but partly interrelated fields (chemistry, geology, physics). Right now I have 30 books and 100 papers, but this will likely grow. I'm kind of interested in using a local LLM (maybe Vicuna), building a UI, and then querying the model to ask questions in natural language about this writing.
I'm really new to open source LLMs (I knew about GPT-J and KoboldAI before this, that's it). In researching this use case, I learned about embeddings, but I had some questions before I went down this road:
1. Is this currently a viable use case for local LLMs? It seems like the context window might not be large enough, especially for multiple books.
2. How exactly do I create the embeddings? It seems like I have to pick a model to run the data through and then store the output in a vector database. Is that right? What's a low-code way of doing this? (There's a rough sketch of what I'm imagining right after this list.)
3. Another issue is that I don't think I have all the writing I'll want to query right now. If I want to add new papers/books to the vector database, does that mean I'll have to re-do the embeddings every time I add a new document and then push the changes to the vector database? Is there a way to automate this process?
4. Are there models that are optimal for doing this? I want a model that is not only less likely to hallucinate, but that can cite its sources by linking to the content it's pulling from. I have a 3090 Ti and 128 GB of RAM, so I'm happy running more demanding models on my local machine.
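For question 2, here is the rough ingestion/query shape I have in mind, just to make the ask concrete. It's only a sketch assuming LangChain + Chroma with a local sentence-transformers embedder, and I haven't verified every call:

```python
# Sketch only: ingest documents into a persistent Chroma store, then query it.
# Library choices (LangChain, Chroma, sentence-transformers) and paths are assumptions.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

embedder = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

def ingest(pdf_path, db_dir="./library_db"):
    chunks = splitter.split_documents(PyPDFLoader(pdf_path).load())
    db = Chroma(persist_directory=db_dir, embedding_function=embedder)
    db.add_documents(chunks)   # only the new file gets embedded, not the whole library
    db.persist()
    return db

db = ingest("papers/some_paper.pdf")
for doc in db.similarity_search("What does the author say about olivine weathering?", k=4):
    print(doc.metadata.get("source"), doc.page_content[:200])
```

If that's roughly right, then question 3 mostly answers itself, since only new files get embedded and appended, but I'd still like to hear what people actually use.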
Beyond that, I only have a hazy idea of how to do this, and a lot of the tutorials I'm encountering assume that you have all the content you want to embed upfront, or are only embedding very small pieces of content. I wanted to clarify how much my use case changes things in that regard.
I also want to see if there are any tools that would make this easy to do without lots of programming, and figure out if there are good open source models for this or if it's too early to consider this extensive of a use case. | 2023-06-05T05:32:26 | https://www.reddit.com/r/LocalLLaMA/comments/1413juo/is_it_feasible_to_use_an_open_source_model_vector/ | -mickomoo- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1413juo | false | null | t3_1413juo | /r/LocalLLaMA/comments/1413juo/is_it_feasible_to_use_an_open_source_model_vector/ | false | false | self | 17 | null |
Use localgpt with a more lightweight model than vicuna7b | 5 | I want to create a PoC, and localgpt works great but it takes a loooong time. My hardware specifications are 16 GB RAM and 8 GB VRAM. What model do you recommend changing to so it gives answers quicker? | 2023-06-05T07:28:34 | https://www.reddit.com/r/LocalLLaMA/comments/14162wo/use_localgpt_with_a_more_lightweight_model_than/ | Helix-x | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14162wo | false | null | t3_14162wo | /r/LocalLLaMA/comments/14162wo/use_localgpt_with_a_more_lightweight_model_than/ | false | false | self | 5 | null
ChatDocs: PrivateGPT + Web UI + GPU Support + More | 132 | Hi,
I created this tool to chat with documents offline using LLMs: [https://github.com/marella/chatdocs](https://github.com/marella/chatdocs)
[Web UI](https://preview.redd.it/9st5topms54b1.png?width=1906&format=png&auto=webp&s=09321682c148e8c0fe8c85aee61f08470f31854b)
It is based on PrivateGPT but has more features:
* Supports GGML models via [C Transformers](https://github.com/marella/ctransformers) (another library made by me)
* Supports 🤗 Transformers models
* Supports GPTQ models
* Web UI
* GPU support
* Highly configurable via `chatdocs.yml`
It is easy to install and use:
pip install chatdocs # Install
chatdocs download # Download models
chatdocs add /path/to/documents # Add your documents
chatdocs ui # Start the web UI to chat with your documents
All the configuration options can be changed using a `chatdocs.yml` config file.
Please see [README](https://github.com/marella/chatdocs#readme) for more details. | 2023-06-05T08:15:33 | https://www.reddit.com/r/LocalLLaMA/comments/14174f4/chatdocs_privategpt_web_ui_gpu_support_more/ | Ravindra-Marella | self.LocalLLaMA | 2023-06-07T21:14:47 | 0 | {} | 14174f4 | false | null | t3_14174f4 | /r/LocalLLaMA/comments/14174f4/chatdocs_privategpt_web_ui_gpu_support_more/ | false | false | 132 | {'enabled': False, 'images': [{'id': 'Cr8HTGMpnvCKj7l2nGbTZmGadTjNdKFBAV6IRBlb9mw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zSq1DCqErIDGwQnansIzyznWXUxoQHXbKwAnudCGa7E.jpg?width=108&crop=smart&auto=webp&s=c6a23b466a46acf59672b052ec1e2c8321b92aff', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zSq1DCqErIDGwQnansIzyznWXUxoQHXbKwAnudCGa7E.jpg?width=216&crop=smart&auto=webp&s=9144bea4b8b9d836a9853f67f0fcfffedee8751b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zSq1DCqErIDGwQnansIzyznWXUxoQHXbKwAnudCGa7E.jpg?width=320&crop=smart&auto=webp&s=81dfd2833b0a055f9bb95578e95347fb7f45d284', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zSq1DCqErIDGwQnansIzyznWXUxoQHXbKwAnudCGa7E.jpg?width=640&crop=smart&auto=webp&s=43ebb59ccb094cb9aa599ee7b33a7d154ccc354a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zSq1DCqErIDGwQnansIzyznWXUxoQHXbKwAnudCGa7E.jpg?width=960&crop=smart&auto=webp&s=b9206f3c717aa3f225d0b5561d57d757d16ac199', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zSq1DCqErIDGwQnansIzyznWXUxoQHXbKwAnudCGa7E.jpg?width=1080&crop=smart&auto=webp&s=e4e354192fba4cf527e1d62730308a01c0779424', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zSq1DCqErIDGwQnansIzyznWXUxoQHXbKwAnudCGa7E.jpg?auto=webp&s=bf531d14d4f61ca14cd5961323b35272fbee5cd2', 'width': 1200}, 'variants': {}}]} |
|
Falcon-7B H2OGPT Chat Model | 16 | Our first attempt at finetuning Falcon 7B on Oasst1 data looks really promising. The Falcon models are a clear step forward.
Demo Chat Interface: https://gpt-gm.h2o.ai/
HF Model: https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v2
Trained with: https://github.com/h2oai/h2o-llmstudio
License: Apache 2.0 | 2023-06-05T08:21:24 | https://www.reddit.com/r/LocalLLaMA/comments/14178q2/falcon7b_h2ogpt_chat_model/ | ichiichisan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14178q2 | false | null | t3_14178q2 | /r/LocalLLaMA/comments/14178q2/falcon7b_h2ogpt_chat_model/ | false | false | self | 16 | null |
Trouble with GGML performance on a fresh install of Oobabooga | 2 | Hi! So I'm having a bit of a problem with trying to run local 13B models. I have 32 GB of RAM, an RTX 3070 with 8 GB of VRAM, and an AMD Ryzen 7 3800 (8 cores at 3.9 GHz).
​
Since I do not have enough VRAM to run a 13B model, I'm using GGML with GPU offloading via the n-gpu-layers option. Recently I went through a bit of a setup where I updated Oobabooga and, in doing so, had to re-enable GPU acceleration by reinstalling llama-cpp-python, following this page: [https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md#gpu-acceleration](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md#gpu-acceleration)
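For reference, the reinstall was roughly the following (Linux-style shell syntax; the exact, current invocation is the one on the linked page):

```bash
pip uninstall -y llama-cpp-python
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```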
​
Now, I was also getting an error from bitsandbytes saying it was installed without GPU support; however, Oobabooga still said the GPU offloading was working. I had set n-gpu-layers to 25 and had about 6 GB of VRAM in use.
​
With this setup, with GPU offloading working and bitsandbytes complaining it wasn't installed right, I was getting a slow but fairly consistent \~2 tokens per second. I also decided to do something about the bitsandbytes error, and I found this issue on github where a solution (using pip to install torch-2.0) was posted: [https://github.com/oobabooga/text-generation-webui/issues/1969](https://github.com/oobabooga/text-generation-webui/issues/1969)
​
I followed the given solution and the bitsandbytes error was gone. Great! Except... now performance inside Oobabooga has basically tanked. Outputs tend to be between 0.3 and 0.4 tokens a second, and on top of that, it takes a full minute or more before the replies even start generating. If I'm regenerating a response, then it responds right away and goes at ~2 tokens per second. However, as soon as I type something new and hit generate, the speed drops again and it sits there saying "is typing" forever.
​
So, I'm wondering if I went wrong somewhere along the line, or if one of my settings is messed up. I'm using all default settings, aside from using 25 GPU layers, and the model I'm using is the 5\_1 bit GGML version of Guanaco 13B. This is all happening on a fresh install, and I even tried to do a separate fresh install last night, but it's not fixing the problem and I'm stuck with the slow speed and response times.
​
Any help is appreciated. :) | 2023-06-05T10:18:32 | https://www.reddit.com/r/LocalLLaMA/comments/1419sxe/trouble_with_ggml_performance_on_a_fresh_install/ | Gyramuur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1419sxe | false | null | t3_1419sxe | /r/LocalLLaMA/comments/1419sxe/trouble_with_ggml_performance_on_a_fresh_install/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'WPeESXalGqHrk_IcqFonDM933t1Ng0LxeCqfBVJXisw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bQg6U57fdEauNbcnYc_dpra31giiokuxThCXcguEK38.jpg?width=108&crop=smart&auto=webp&s=e584f5b9467f14814300b05a85368c379347044f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bQg6U57fdEauNbcnYc_dpra31giiokuxThCXcguEK38.jpg?width=216&crop=smart&auto=webp&s=9b61bec7dff3dce9b6f7b81cc2a6891cdafca33b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bQg6U57fdEauNbcnYc_dpra31giiokuxThCXcguEK38.jpg?width=320&crop=smart&auto=webp&s=84c535ef3c9eb16831f98e2158324520e5e7b3a8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bQg6U57fdEauNbcnYc_dpra31giiokuxThCXcguEK38.jpg?width=640&crop=smart&auto=webp&s=19216674ded921c3cbbef422dcdc6243d863fa2c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bQg6U57fdEauNbcnYc_dpra31giiokuxThCXcguEK38.jpg?width=960&crop=smart&auto=webp&s=4eabcf992dd2b909915f3c9bb6e41591ada65e7e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bQg6U57fdEauNbcnYc_dpra31giiokuxThCXcguEK38.jpg?width=1080&crop=smart&auto=webp&s=d1150de3c319a68ee051e7e4e454cb97000709e5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bQg6U57fdEauNbcnYc_dpra31giiokuxThCXcguEK38.jpg?auto=webp&s=b7a077d57965a1ab3580350075ee23c185669d10', 'width': 1200}, 'variants': {}}]} |
Monero's WizardLM-Uncensored-SuperCOT-Storytelling-30B-GGML 2 bit model available | 63 | [https://huggingface.co/RachidAR/WizardLM-Uncensored-SCOT-StoryTelling-30B-Q2\_K-GGML](https://huggingface.co/RachidAR/WizardLM-Uncensored-SCOT-StoryTelling-30B-Q2_K-GGML)
**~~Works only with llama.cpp PR:~~** [**~~https://github.com/ggerganov/llama.cpp/pull/1684~~**](https://github.com/ggerganov/llama.cpp/pull/1684)
**Update:** Works with latest llama.cpp version.
*This model can run with 16 GB of RAM. On my old cpu (Xeon E3-1225 v3 4/8), it runs with \~660 ms per token.* | 2023-06-05T11:24:46 | https://www.reddit.com/r/LocalLLaMA/comments/141bdll/moneros/ | RuslanAR | self.LocalLLaMA | 2023-06-06T05:32:48 | 0 | {} | 141bdll | false | null | t3_141bdll | /r/LocalLLaMA/comments/141bdll/moneros/ | false | false | self | 63 | {'enabled': False, 'images': [{'id': 'WRv6BUxadPQvyn99NavVMvb37eAG_zYp64Cimn-vPJ8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oMpWjhUmocC-rznporsIhhb4G8YfvLFxGzbh2VI0YKM.jpg?width=108&crop=smart&auto=webp&s=b7e4c1631fa9f7ff61edb587e80e590d78517b32', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oMpWjhUmocC-rznporsIhhb4G8YfvLFxGzbh2VI0YKM.jpg?width=216&crop=smart&auto=webp&s=df00346d39f26a9955feb144d3eb46785fcfa0cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oMpWjhUmocC-rznporsIhhb4G8YfvLFxGzbh2VI0YKM.jpg?width=320&crop=smart&auto=webp&s=b2e3a1c88053b79c09032239368ec036e8dbe863', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oMpWjhUmocC-rznporsIhhb4G8YfvLFxGzbh2VI0YKM.jpg?width=640&crop=smart&auto=webp&s=1564e8109e3c03f2d0a9ba54e117dee3602913d9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oMpWjhUmocC-rznporsIhhb4G8YfvLFxGzbh2VI0YKM.jpg?width=960&crop=smart&auto=webp&s=26af53598ad49344635b72494dffe7ebc1d64f51', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oMpWjhUmocC-rznporsIhhb4G8YfvLFxGzbh2VI0YKM.jpg?width=1080&crop=smart&auto=webp&s=9d3d89265f992b23a29ab59ec0c04e9a7c5c448f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oMpWjhUmocC-rznporsIhhb4G8YfvLFxGzbh2VI0YKM.jpg?auto=webp&s=77362a4c05cdb3391697d735bad8830c28036d13', 'width': 1200}, 'variants': {}}]} |
Qlora fine-tuning for flacon | 0 | Hey folks,
Just checking if the community is aware of a Python script / repo that focuses on QLoRA fine-tuning for Falcon models? | 2023-06-05T11:38:05 | https://www.reddit.com/r/LocalLLaMA/comments/141bp0f/qlora_finetuning_for_flacon/ | unoshow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141bp0f | false | null | t3_141bp0f | /r/LocalLLaMA/comments/141bp0f/qlora_finetuning_for_flacon/ | false | false | self | 0 | null
what is the best LLaMA modul for my laptop ? | 2 | [removed] | 2023-06-05T12:35:55 | https://www.reddit.com/r/LocalLLaMA/comments/141d69j/what_is_the_best_llama_modul_for_my_laptop/ | mostafaLaravel | self.LocalLLaMA | 2023-06-05T12:39:05 | 0 | {} | 141d69j | false | null | t3_141d69j | /r/LocalLLaMA/comments/141d69j/what_is_the_best_llama_modul_for_my_laptop/ | false | false | default | 2 | null |
Meet ReWOO – The Game-Changing Modular Paradigm that Cuts Token Consumption by Detaching Reasoning from External Observations | 14 | 2023-06-05T13:00:35 | https://www.marktechpost.com/2023/06/04/say-goodbye-to-costly-auto-gpt-and-langchain-runs-meet-rewoo-the-game-changing-modular-paradigm-that-cuts-token-consumption-by-detaching-reasoning-from-external-observations/ | kryptkpr | marktechpost.com | 1970-01-01T00:00:00 | 0 | {} | 141dtj9 | false | null | t3_141dtj9 | /r/LocalLLaMA/comments/141dtj9/meet_rewoo_the_gamechanging_modular_paradigm_that/ | false | false | default | 14 | null |
|
Newbie here, am I downloading this right? | 1 | [removed] | 2023-06-05T13:03:42 | [deleted] | 2023-07-13T00:08:05 | 0 | {} | 141dwwg | false | null | t3_141dwwg | /r/LocalLLaMA/comments/141dwwg/newbie_here_am_i_downloading_this_right/ | false | false | default | 1 | null |
||
Just put together a programming performance ranking for popular LLaMAs using the HumanEval+ Benchmark! | 393 | 2023-06-05T14:13:09 | ProfessionalHand9945 | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 141fw2b | false | null | t3_141fw2b | /r/LocalLLaMA/comments/141fw2b/just_put_together_a_programming_performance/ | false | false | 393 | {'enabled': True, 'images': [{'id': 'Y6EpsIJOEmyFgdCD9foNjcE2Ue6x-U-rGNE2JkW818c', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/aN8B-vfYGe2cp8eT2Svc-oBQZAuN3lyK4xS4GeIROWM.jpg?width=108&crop=smart&auto=webp&s=e7605a0bc2b3f22dd1276bdd64c3535696c05d3c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/aN8B-vfYGe2cp8eT2Svc-oBQZAuN3lyK4xS4GeIROWM.jpg?width=216&crop=smart&auto=webp&s=aed729baa4cf67b8fc7d08d60c9602cecfd56ba4', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/aN8B-vfYGe2cp8eT2Svc-oBQZAuN3lyK4xS4GeIROWM.jpg?width=320&crop=smart&auto=webp&s=76596d070a428022bbd5f5c6c66f2a6842304ea1', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/aN8B-vfYGe2cp8eT2Svc-oBQZAuN3lyK4xS4GeIROWM.jpg?width=640&crop=smart&auto=webp&s=10199a33eaa873ae3b3263303312ecb74dbaabc2', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/aN8B-vfYGe2cp8eT2Svc-oBQZAuN3lyK4xS4GeIROWM.jpg?width=960&crop=smart&auto=webp&s=03983504fbb1c5c4b12c69e7bc2e991f1490540d', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/aN8B-vfYGe2cp8eT2Svc-oBQZAuN3lyK4xS4GeIROWM.jpg?width=1080&crop=smart&auto=webp&s=fa780c055e02e1210657eabfa2a47e2576611c9e', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://external-preview.redd.it/aN8B-vfYGe2cp8eT2Svc-oBQZAuN3lyK4xS4GeIROWM.jpg?auto=webp&s=3fd0240ddd28d39b0d67c54ae57fdf397ddb354b', 'width': 2000}, 'variants': {}}]} |
|||
Expand Knowledge based of LLM | 3 | I'm currently trying to build a local LLM that is as efficient as any of the open source models but contains information about select elements that is more up to date than 2021. I realise this is what some people would call fine-tuning, but I'm perfectly okay with the current performance just as long as it has access to the new data. Is there any relatively low-effort solution I'm missing for how to approach this problem? | 2023-06-05T14:18:49 | https://www.reddit.com/r/LocalLLaMA/comments/141g28m/expand_knowledge_based_of_llm/ | Lucasxhy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141g28m | false | null | t3_141g28m | /r/LocalLLaMA/comments/141g28m/expand_knowledge_based_of_llm/ | false | false | self | 3 | null
7B or 13B german model | 8 | Hi all,
does anyone know of a German 4-bit or 5-bit quantised open source model? I couldn't find any on Hugging Face.
Would appreciate any suggestion.
Would like to use it for fine tuning.
Thanks | 2023-06-05T14:50:54 | https://www.reddit.com/r/LocalLLaMA/comments/141h118/7b_or_13b_german_model/ | _omid_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141h118 | false | null | t3_141h118 | /r/LocalLLaMA/comments/141h118/7b_or_13b_german_model/ | false | false | self | 8 | null |
PC to run models locally | 7 | I tried to run models locally on my laptop and all I could get was 3.5 tokens per second on the Wizard Mega 13B GGML 5-bit (5_1) model. In light of this, I decided to build a PC that could properly run this or even larger models in the future.
If I understand correctly, for GPTQ models the way to go is the higher the VRAM the better. That's why I want to know what configuration would give the best tokens per second.
Right now I'm considering the RTX 3090 but I would like to know anyone's experience beforehand or with other cards. Thanks! | 2023-06-05T15:00:59 | https://www.reddit.com/r/LocalLLaMA/comments/141hc09/pc_to_run_models_locally/ | CommercialOpening599 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141hc09 | false | null | t3_141hc09 | /r/LocalLLaMA/comments/141hc09/pc_to_run_models_locally/ | false | false | self | 7 | null |
Auto-GPT v0.4.0 released | 54 | # [Auto-GPT v0.4.0](https://github.com/Significant-Gravitas/Auto-GPT/releases/tag/v0.4.0)
**TL;DR:** Users will only see a small part of the improvements made in this release. Most of the improvements were made "under the hood", where developers will see cleaner source code that is becoming increasingly easier to work with.
## Highlights ✨
Highlights and notable changes since v0.3.0:
### ⚠️ Command send_tweet is REMOVED
Twitter functionality (and more) is now covered by plugins.
### ⚠️ Memory backend deprecation 💾
The Milvus, Pinecone and Weaviate memory backends were rendered incompatible by work on the memory system, and have been removed. The Redis memory store was also temporarily removed; we will merge a new implementation ASAP.
Whether built-in support for the others will be added back in the future is subject to discussion, feel free to pitch in: [\#4280](https://github.com/Significant-Gravitas/Auto-GPT/discussions/4280)
### ❇️ Document support in read_file 📄
Auto-GPT can now read text from document files, with support added for PDF, doc, docx, CSV, HTML, TeX and more!
### ❇️ Managing Auto-GPT's access to commands ❌🔧
You can now disable a set of built-in commands through the `DISABLED_COMMAND_CATEGORIES` variable in **.env**. Specific shell commands can also be disabled using `DENY_COMMANDS`, or selectively enabled using `ALLOW_COMMANDS`.
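For example, a `.env` could contain something like the following. The variable names are the ones above; the category and command values are only illustrative, so check the documentation for the real names:

```
DISABLED_COMMAND_CATEGORIES=autogpt.commands.execute_code
DENY_COMMANDS=sudo,rm
ALLOW_COMMANDS=ls,git,python
```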
## The state of Auto-GPT 🔍
We are working on re-architecting large parts of the codebase to make development easier in the long term and future-proof the project. This re-architecture is a big task and will take a while, so in the meantime, we continue merging pull requests into `master` (our main development branch) and integrating contributions that help the project's overall objectives, including re-arch.
The overarching vision of the re-arch is to re-imagine Auto-GPT as a component-based framework geared towards facilitating exploration of AI agents. If the internals are not clean, progress suffers.
This vision breaks down into specific goals:
* Component-based architecture with clean interfaces (to achieve separability, modularity)
* State-contained agent (to allow for efficient exploration of the agent run-loop)
* This requires removing global state
### In this release... 🏗️
In this release, we make two major re-architecture steps:
* **Memory System Revamp**
We have revised the vector memory storage system ([\#4208](https://github.com/Significant-Gravitas/Auto-GPT/pull/4208)), laying a foundation to implement so-called "retrieval augmentation" that enhances the long-term performance of Auto-GPT.
For the time being we have removed support for all memory providers except for local (`json_file`) memory. This stop-gap measure simplifies/expedites development. A Redis memory provider implementation is in the pipeline, we're waiting on the redis-om-python team to merge a fix.
Work on the memory system is tracked here: [\#3536](https://github.com/Significant-Gravitas/Auto-GPT/issues/3536)
* **Removal of global configuration object**
In this version, we've removed the global configuration object. Instead configuration is passed into components as required. Further work is ongoing to remove these 'singletons'.
### Other Highlights
* Added possibility to interrupt continuous runs
* Updated Google/DuckDuckGo search internals
* Continued work on challenges
* Continued work on CI pipelines to speed them up without degrading DX
* Improved test coverage
* Updated documentation | 2023-06-05T15:26:48 | https://www.reddit.com/r/LocalLLaMA/comments/141i4at/autogpt_v040_released/ | Balance- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141i4at | false | null | t3_141i4at | /r/LocalLLaMA/comments/141i4at/autogpt_v040_released/ | false | false | self | 54 | {'enabled': False, 'images': [{'id': '1wrjSxcJ3SyMJGDbUzxdd5dO3N3qh7839gv_RlEmzHo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aurcfDnhU4nxWz0DJLsLvkgYrOT7X5eSDjsc3Xctgdk.jpg?width=108&crop=smart&auto=webp&s=bf931ad17978b27c9b42557a93a0e23f5cbdc56a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aurcfDnhU4nxWz0DJLsLvkgYrOT7X5eSDjsc3Xctgdk.jpg?width=216&crop=smart&auto=webp&s=62df97d1b4f583a5a1c59ee479bdc2ecf6a3e44e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aurcfDnhU4nxWz0DJLsLvkgYrOT7X5eSDjsc3Xctgdk.jpg?width=320&crop=smart&auto=webp&s=526a443209c4e245446acd5bbb3ee20aa4cc41bf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aurcfDnhU4nxWz0DJLsLvkgYrOT7X5eSDjsc3Xctgdk.jpg?width=640&crop=smart&auto=webp&s=e5f7fe6a424fafd884e92008b70626a66861962c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aurcfDnhU4nxWz0DJLsLvkgYrOT7X5eSDjsc3Xctgdk.jpg?width=960&crop=smart&auto=webp&s=c49d2cc496f315ae72700223440c89def1ed7a76', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aurcfDnhU4nxWz0DJLsLvkgYrOT7X5eSDjsc3Xctgdk.jpg?width=1080&crop=smart&auto=webp&s=77376981115e07f242d402d887540065099c6b1e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aurcfDnhU4nxWz0DJLsLvkgYrOT7X5eSDjsc3Xctgdk.jpg?auto=webp&s=9a73c8281d9d8cc91bbc3731a1c8d37c7cb69aa9', 'width': 1200}, 'variants': {}}]} |
Deploying LLama based LLM on my cloud provider | 3 | Hi all,
I want to deploy a LLaMA-based LLM on a cloud provider, as it would enable me to keep my data private and not depend on OpenAI.
I'm looking for best practices here, and I was wondering if you could help with the following questions:
\- Which cloud provider should I consider? I have some experience with AWS, it'd be my preferred solution, any ideas what kind of instance I'll need?
\- Should I use HuggingFace Transformers or GGML for executing models, or something else?
\- How would I go about picking an OSS model? There are many of them out there nowadays.
\- In terms of productionizing it, are there any best practices I should be aware of? | 2023-06-05T15:39:37 | https://www.reddit.com/r/LocalLLaMA/comments/141iibc/deploying_llama_based_llm_on_my_cloud_provider/ | PeaSmooth3511 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141iibc | false | null | t3_141iibc | /r/LocalLLaMA/comments/141iibc/deploying_llama_based_llm_on_my_cloud_provider/ | false | false | self | 3 | null |
Search Augmented Instruction Learning (SAIL) - LLaMA-7B model with a search-augmented corpus | 18 | 2023-06-05T16:24:37 | https://github.com/luohongyin/SAIL | kryptkpr | github.com | 1970-01-01T00:00:00 | 0 | {} | 141jvdf | false | null | t3_141jvdf | /r/LocalLLaMA/comments/141jvdf/search_augmented_instruction_learning_sail/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'A7oLbOuwGmuBkRxkLaWMlraK_HUP9LWeGLAFc4jhEB8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9xcpYP2YRQLWM_mXFWLA5kkYEIhGaVwZc38j4E9uIvA.jpg?width=108&crop=smart&auto=webp&s=77e817b534e0b4dbf14c2360fe51e6b741977b5d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9xcpYP2YRQLWM_mXFWLA5kkYEIhGaVwZc38j4E9uIvA.jpg?width=216&crop=smart&auto=webp&s=7dde9a9700d4c9fed2d37c07522dd040f3eb395e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9xcpYP2YRQLWM_mXFWLA5kkYEIhGaVwZc38j4E9uIvA.jpg?width=320&crop=smart&auto=webp&s=c2ddac3a6e7674e2bf431474cf20c0498e7510a9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9xcpYP2YRQLWM_mXFWLA5kkYEIhGaVwZc38j4E9uIvA.jpg?width=640&crop=smart&auto=webp&s=2d19a5548690f7a16db1b18127d6b6ceacd1989a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9xcpYP2YRQLWM_mXFWLA5kkYEIhGaVwZc38j4E9uIvA.jpg?width=960&crop=smart&auto=webp&s=d01fa04db67c39947bb0c4736c3ca63672b8f7be', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9xcpYP2YRQLWM_mXFWLA5kkYEIhGaVwZc38j4E9uIvA.jpg?width=1080&crop=smart&auto=webp&s=9e53e1476252afcdd8b70b462a5cb225ad0aae61', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9xcpYP2YRQLWM_mXFWLA5kkYEIhGaVwZc38j4E9uIvA.jpg?auto=webp&s=de1913e684a7db0c1c60f1d61de1287dffad333f', 'width': 1200}, 'variants': {}}]} |
||
World Building & Memory | 4 | Hello friendly experimenters! What's the current state of development for saving and tabbing characters, environments, contexts, etc, in your local Llamas? Have you found a way to switch through projects / worlds? | 2023-06-05T16:43:55 | https://www.reddit.com/r/LocalLLaMA/comments/141kg7q/world_building_memory/ | gelatinous_pellicle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141kg7q | false | null | t3_141kg7q | /r/LocalLLaMA/comments/141kg7q/world_building_memory/ | false | false | self | 4 | null |
What kind of computer rigs are you running with your favorite models? | 20 | I'm trying to see what everyone is using so I can decide if I need to shell out thousands of dollars on a new GPU/rig or if people are doing something different that I don't know about. | 2023-06-05T17:01:09 | https://www.reddit.com/r/LocalLLaMA/comments/141kyzj/what_kind_of_computer_rigs_are_you_running_with/ | Liquidmesh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141kyzj | false | null | t3_141kyzj | /r/LocalLLaMA/comments/141kyzj/what_kind_of_computer_rigs_are_you_running_with/ | false | false | self | 20 | null |
Anyone able to get guanaco 65b to work? | 5 | [removed] | 2023-06-05T17:21:24 | https://www.reddit.com/r/LocalLLaMA/comments/141ll6p/anyone_able_to_get_guanaco_65b_to_work/ | PM_ME_YOUR_HAGGIS_ | self.LocalLLaMA | 2023-06-05T17:27:40 | 0 | {} | 141ll6p | false | null | t3_141ll6p | /r/LocalLLaMA/comments/141ll6p/anyone_able_to_get_guanaco_65b_to_work/ | false | false | default | 5 | null |
How to call oobabooga programmatically? | 2 | I installed the oobabooga interface on a computer and got the web interface working, and it's great! I would like to write my own interface. How can I access the API directly, and is there a guide? | 2023-06-05T18:13:01 | https://www.reddit.com/r/LocalLLaMA/comments/141n56f/how_to_call_oobabooga_programmatically/ | igorbirman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141n56f | false | null | t3_141n56f | /r/LocalLLaMA/comments/141n56f/how_to_call_oobabooga_programmatically/ | false | false | self | 2 | null
Looking for the best model to run on my config for creative writing. | 1 | I am a newcomer to running LLMs locally, and I'm looking for the best model that can run smoothly and in a reasonable time on my computer. I tried one 7B 4-bit model, but I'd like to know if I can upgrade to a better model, ideally one that can also produce very basic code.
My config is:
AMD Ryzen 7 5800H
NVIDIA® GeForce RTX™ 3060 (6 GB dedicated video memory)
16 GB RAM | 2023-06-05T18:24:11 | https://www.reddit.com/r/LocalLLaMA/comments/141nguh/looking_for_the_best_model_to_run_on_my_config/ | Johnlanders | self.LocalLLaMA | 2023-06-05T18:47:03 | 0 | {} | 141nguh | false | null | t3_141nguh | /r/LocalLLaMA/comments/141nguh/looking_for_the_best_model_to_run_on_my_config/ | false | false | self | 1 | null
I think all models using the Vicuna 1.1 prompt style are somewhat censored and limited by the prompt style itself. | 25 | >A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
[Source (literally)](https://github.com/lm-sys/FastChat/blob/00d9e6675bdff60be6603ffff9313b1d797d2e3e/fastchat/conversation.py)
This seems to be hammered into every model that uses FastChat to train in the Vicuna 1.1 format. Apparently it's put in front of every single training example.
I think this really has quite some consequences. I think many instruct models are less censored because of that. At least my observations would make sense when I compare it to how TheBloke always pre-prompts instruct style in the model instructions, which is:
>Below is an instruction that describes a task. Write a response that appropriately completes the request.
I have no source for that actually being included in the training data. But I think it's a fair guess. Now look at the difference. Instruct is "just do it", no word about "polite" or "helpful".
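To make the contrast concrete, here is roughly what the two framings look like at the prompt level. These are my reconstructions of the common Vicuna 1.1 and Alpaca-instruct templates (wording from memory, not verified against the training code), with the system line made swappable:

```python
VICUNA_SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
                 "The assistant gives helpful, detailed, and polite answers to the user's questions.")

def vicuna_prompt(user_msg, system=VICUNA_SYSTEM):
    # Vicuna 1.1 style: the system line sits in front of every training example
    return f"{system} USER: {user_msg} ASSISTANT:"

def instruct_prompt(instruction):
    # Alpaca-style instruct: no "assistant", no "polite", just "do it"
    return ("Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n### Response:\n")

# Swapping the system line at inference time is trivial; the open question is how much
# the training-time preamble has already shaped the model's behavior.
print(vicuna_prompt("Write a limerick.", system="A chat between a user and a blunt, unfiltered AI."))
```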
Anyway, I think we really need to modernize these prompting techniques; this seems essential. Why would the training even decide for you that your conversation partner is an assistant? I would say these things have quite significant implications, and I think lacking prompt formats might really, really be holding back major improvements.
Discuss? | 2023-06-05T19:17:16 | https://www.reddit.com/r/LocalLLaMA/comments/141oqj0/i_think_all_models_using_the_vicuna_11_prompt/ | involviert | self.LocalLLaMA | 2023-06-05T20:21:17 | 0 | {} | 141oqj0 | false | null | t3_141oqj0 | /r/LocalLLaMA/comments/141oqj0/i_think_all_models_using_the_vicuna_11_prompt/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'TkLtppr7jnF9pjABy3F5bgs_5G0EcIfjST6D8zGn-tc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?width=108&crop=smart&auto=webp&s=073c76b691a85eeff570b44be101e70c1c52153c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?width=216&crop=smart&auto=webp&s=5794d10c301d6c4ccb499637008743f66ad46b1c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?width=320&crop=smart&auto=webp&s=6bdc74d43e7318f4945208e9742f521ca9d85825', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?width=640&crop=smart&auto=webp&s=4a2f6e0f71b33eb4b56de475410a6c9cddff06fe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?width=960&crop=smart&auto=webp&s=e61cbcc8f7eeb6165ffdf556ab67140794a2721c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?width=1080&crop=smart&auto=webp&s=7d60e60332a4ffe2b142371b1b55a042a0a3f9c7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CavE5f3q7xLykdLG8Wi-lpk69fq20bjL8wZDEQX-eSQ.jpg?auto=webp&s=a40a9fa926219ec4f8a264ddd7a4a7d56c07c842', 'width': 1200}, 'variants': {}}]} |
Llama in Telegram | 1 | [removed] | 2023-06-05T19:27:11 | https://www.reddit.com/r/LocalLLaMA/comments/141oxhu/llama_in_telegram/ | themihter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141oxhu | false | null | t3_141oxhu | /r/LocalLLaMA/comments/141oxhu/llama_in_telegram/ | false | false | default | 1 | null |
Evaluating LLM models — Falcon 40B | 1 | [removed] | 2023-06-05T19:42:27 | https://www.reddit.com/r/LocalLLaMA/comments/141pdh8/evaluating_llm_models_falcon_40b/ | No_Wheel_9336 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141pdh8 | false | null | t3_141pdh8 | /r/LocalLLaMA/comments/141pdh8/evaluating_llm_models_falcon_40b/ | false | false | default | 1 | null |
Test post | 1 | [removed] | 2023-06-05T19:50:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 141pm31 | false | null | t3_141pm31 | /r/LocalLLaMA/comments/141pm31/test_post/ | false | false | default | 1 | null |
||
GPT4All now supports every llama.cpp / ggML version across all software bindings! | 69 | 2023-06-05T20:08:44 | https://twitter.com/nomic_ai/status/1665809943230816258?s=20 | NomicAI | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 141q5ya | false | {'oembed': {'author_name': 'Nomic AI', 'author_url': 'https://twitter.com/nomic_ai', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">GPT4All now supports 100+ more models!💥<br><br>Nearly every custom ggML model you find <a href="https://twitter.com/huggingface?ref_src=twsrc%5Etfw">@huggingface</a> for CPU inference will *just work* with all GPT4All software with the newest release!<br><br>Instructions:<a href="https://t.co/BJex9293qx">https://t.co/BJex9293qx</a> <a href="https://t.co/XdimkbWTv0">pic.twitter.com/XdimkbWTv0</a></p>— Nomic AI (@nomic_ai) <a href="https://twitter.com/nomic_ai/status/1665809943230816258?ref_src=twsrc%5Etfw">June 5, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/nomic_ai/status/1665809943230816258', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_141q5ya | /r/LocalLLaMA/comments/141q5ya/gpt4all_now_supports_every_llamacpp_ggml_version/ | false | false | 69 | {'enabled': False, 'images': [{'id': 'JSpb6tiu3BMGIHbFOgqzTkz93TLP2joYOgO48WBDK2w', 'resolutions': [{'height': 115, 'url': 'https://external-preview.redd.it/UZR4U517J2YbKNgab0aiHv-Lm3wfbwJDKsnl4_vN7i0.jpg?width=108&crop=smart&auto=webp&s=616a81bbbb303935edaabdcfd6ac244910a519d7', 'width': 108}, {'height': 230, 'url': 'https://external-preview.redd.it/UZR4U517J2YbKNgab0aiHv-Lm3wfbwJDKsnl4_vN7i0.jpg?width=216&crop=smart&auto=webp&s=30dbee2a3476057d81a4041c1f84dbc724ada11c', 'width': 216}, {'height': 341, 'url': 'https://external-preview.redd.it/UZR4U517J2YbKNgab0aiHv-Lm3wfbwJDKsnl4_vN7i0.jpg?width=320&crop=smart&auto=webp&s=4780e18eb46f195fff8f6edd92d1c3323823077e', 'width': 320}, {'height': 683, 'url': 'https://external-preview.redd.it/UZR4U517J2YbKNgab0aiHv-Lm3wfbwJDKsnl4_vN7i0.jpg?width=640&crop=smart&auto=webp&s=eca5389a50ae9422e438011b12fbce7888c34db1', 'width': 640}, {'height': 1024, 'url': 'https://external-preview.redd.it/UZR4U517J2YbKNgab0aiHv-Lm3wfbwJDKsnl4_vN7i0.jpg?width=960&crop=smart&auto=webp&s=609fe3dac3488da8518406f9981e9ff61365893b', 'width': 960}, {'height': 1152, 'url': 'https://external-preview.redd.it/UZR4U517J2YbKNgab0aiHv-Lm3wfbwJDKsnl4_vN7i0.jpg?width=1080&crop=smart&auto=webp&s=f8e7c15b768cb5416c16ac90f62168115005b06e', 'width': 1080}], 'source': {'height': 1520, 'url': 'https://external-preview.redd.it/UZR4U517J2YbKNgab0aiHv-Lm3wfbwJDKsnl4_vN7i0.jpg?auto=webp&s=3b04fa83381cea7dfb512001a2773e30d6ee68a1', 'width': 1424}, 'variants': {}}]} |
||
Is OpenLlama supposed to be released today? | 10 | Is OpenLLaMA supposed to be released today, or is there unexpected news? | 2023-06-05T20:19:48 | https://www.reddit.com/r/LocalLLaMA/comments/141qher/is_openllama_supposed_to_be_released_today/ | ninjasaid13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141qher | false | null | t3_141qher | /r/LocalLLaMA/comments/141qher/is_openllama_supposed_to_be_released_today/ | false | false | self | 10 | null
Biggest blocker: memory resources | 4 | Hey all, just starting my journey into learning about all things ML, coming from a full-stack eng background. I've been interested in [replicating this project](https://www.izzy.co/blogs/robo-boys.html), so I've been getting my head wrapped around the LLaMA ecosystem for a week or so. My experience so far has been (on an 8 GB M2 MacBook Air):
1. downloading the 7B llama weights
2. reading that loading the 7B llama weights at about 13 GB is too much for my 8 GB of RAM
3. throwing it into Google Colab, paying $10, then trying to run some training on it via GPU
4. while playing around with training, I started burning through my compute hours for whatever reason. Could be my fault
5. trying out smaller versions, namely llama.cpp for local CPU
    1. as is documented, I was able to run the 7B model on my CPU, but the output was slow and inaccurate. It told me Neil Armstrong was 17 years old when he visited the moon
6. trying to run alpaca-lora on Google Colab but eventually running out of compute. Currently debating buying more
I just want to practically fine tune a llama model but I feel stuck between just trying to get a cheap GPU or buying more Google Colab credits. Even then, coming from a full stack eng context, is expecting to easily run some of these larger models on locally on my low RAM just not super feasible and accessible? or am I missing something else? | 2023-06-05T21:14:04 | https://www.reddit.com/r/LocalLLaMA/comments/141s2uo/biggest_blocker_memory_resources/ | natron- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141s2uo | false | null | t3_141s2uo | /r/LocalLLaMA/comments/141s2uo/biggest_blocker_memory_resources/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'xmJssDdwe5jayzxRpPODqWkdPUPI6UKgkvDlnavO4n4', 'resolutions': [{'height': 33, 'url': 'https://external-preview.redd.it/ZdRa_OvnuELBWXuglGbO8IrpL_CGohTVLpBrCwVwztg.jpg?width=108&crop=smart&auto=webp&s=60ec707e55d359696515b2b04d0d54a3fd236e7a', 'width': 108}, {'height': 66, 'url': 'https://external-preview.redd.it/ZdRa_OvnuELBWXuglGbO8IrpL_CGohTVLpBrCwVwztg.jpg?width=216&crop=smart&auto=webp&s=738620cbc118b6a53c37392db3e0e7b6eaff6320', 'width': 216}, {'height': 98, 'url': 'https://external-preview.redd.it/ZdRa_OvnuELBWXuglGbO8IrpL_CGohTVLpBrCwVwztg.jpg?width=320&crop=smart&auto=webp&s=1671bb14bb2548bfef2a1387a33abcfe2a1d0510', 'width': 320}, {'height': 196, 'url': 'https://external-preview.redd.it/ZdRa_OvnuELBWXuglGbO8IrpL_CGohTVLpBrCwVwztg.jpg?width=640&crop=smart&auto=webp&s=853dba1d086f63f675011de92fe2e8ae7a3ffc05', 'width': 640}], 'source': {'height': 279, 'url': 'https://external-preview.redd.it/ZdRa_OvnuELBWXuglGbO8IrpL_CGohTVLpBrCwVwztg.jpg?auto=webp&s=22599f50e8197ef64a1a43d0116769ece2ae6589', 'width': 911}, 'variants': {}}]} |
Query output random text | 1 | [removed] | 2023-06-05T21:38:23 | https://www.reddit.com/r/LocalLLaMA/comments/141ss84/query_output_random_text/ | fractaldesigner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141ss84 | false | null | t3_141ss84 | /r/LocalLLaMA/comments/141ss84/query_output_random_text/ | false | false | default | 1 | null |
Releasing local.ai - an LLM local playground with minimal setup | 59 | [Demo of the App inferencing with Wizard 7B](https://reddit.com/link/141swd7/video/vzzb2c4k7c4b1/player)
I've been working on this app over the past month. It enables everyone to experiment with LLM models locally with no technical setup, quickly evaluate a model's digest to ensure its integrity, and spawn an inference server to integrate with any app via SSE.
The app has 3 main features:
\- Resumable model downloader, with a known-working models list API
\- Starts a /completion endpoint streaming inference API on any port
\- A quick note-taking app that stores the conversation into plaintext markdown files
My main goal with this app is to make it seamless for everyone to experiment with AI locally, so that they can evaluate the ability and the risk of each AI model on their own. It also makes it easier for myself personally to download and experiment with new models. The note taking app has also turned out to be my main note taking driver recently; it's great because it's all local, and plain text so I can open it in vscode or on my phone to edit it however I like!
I wanted to publish it back in mid April but had to go through a bunch of hoops to acquire the signing certs to sign the Windows and Mac binaries. Now that I have all of the certs in place, the app is signed and has auto-update capability! (OV cert only tho, so Windows would still show a yellow warning....)
The app utilizes [https://github.com/rustformers/llm](https://github.com/rustformers/llm) for its LLM inferencing - check it out!
Demo/download: [https://www.localai.app/](https://www.localai.app/)
Source code: [https://github.com/louisgv/local.ai](https://github.com/louisgv/local.ai)
​ | 2023-06-05T21:42:23 | https://www.reddit.com/r/LocalLLaMA/comments/141swd7/releasing_localai_an_llm_local_playground_with/ | louisgv | self.LocalLLaMA | 2023-06-06T05:58:37 | 0 | {} | 141swd7 | false | null | t3_141swd7 | /r/LocalLLaMA/comments/141swd7/releasing_localai_an_llm_local_playground_with/ | false | false | 59 | {'enabled': False, 'images': [{'id': 'dVYS3RZ9kUF0Ao_cZPpXypVtoqBLkF3lTLo8GgBrkp4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OCKUhZjE42Jq5x75iw-v8Uij5fZ8TSv10poF2DIt3pM.jpg?width=108&crop=smart&auto=webp&s=5196195348dd13a83f027880ae2aa39d98a32821', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OCKUhZjE42Jq5x75iw-v8Uij5fZ8TSv10poF2DIt3pM.jpg?width=216&crop=smart&auto=webp&s=b4e1e3c396b8788bb529ba92b9840f730ea11076', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OCKUhZjE42Jq5x75iw-v8Uij5fZ8TSv10poF2DIt3pM.jpg?width=320&crop=smart&auto=webp&s=e7d1d171c8c22829679d72425e2cf20dc35b59b9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OCKUhZjE42Jq5x75iw-v8Uij5fZ8TSv10poF2DIt3pM.jpg?width=640&crop=smart&auto=webp&s=776995d6b712b643605d090a02899010120b8e7d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OCKUhZjE42Jq5x75iw-v8Uij5fZ8TSv10poF2DIt3pM.jpg?width=960&crop=smart&auto=webp&s=af9bc65c9de92ad41fe257ebffe66fff2d0229bf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OCKUhZjE42Jq5x75iw-v8Uij5fZ8TSv10poF2DIt3pM.jpg?width=1080&crop=smart&auto=webp&s=96a956f46138c5a9dddcc819a8deec2797e3b0ed', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OCKUhZjE42Jq5x75iw-v8Uij5fZ8TSv10poF2DIt3pM.jpg?auto=webp&s=3d631c1f5deed58c8f064e4e554df9f5fbfc31b6', 'width': 1200}, 'variants': {}}]} |
|
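For the "/completion endpoint streaming inference API" mentioned in the local.ai post above, a client could look roughly like the sketch below. The port, JSON field names, and event framing are assumptions rather than local.ai's documented schema, so check the project's README or source before relying on them:

```python
# Hedged sketch of consuming a local streaming /completion endpoint over SSE.
# The port and JSON fields ("prompt", "max_tokens", "text") are assumptions,
# not local.ai's documented API -- adjust to the real schema.
import json
import requests

resp = requests.post(
    "http://localhost:8000/completion",
    json={"prompt": "Explain SSE in one sentence.", "max_tokens": 128},
    stream=True,
)

for raw_line in resp.iter_lines():
    if not raw_line:
        continue
    line = raw_line.decode("utf-8")
    if line.startswith("data:"):              # typical SSE framing
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)           # chunk shape is an assumption
        print(chunk.get("text", ""), end="", flush=True)
```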
"Context" for LLAMA / Vicuna Models | 4 | So, the "context" field in oobabooga is prefilled with "this is a converseation with YOUR assistant". This is quite different from Open AI's GPT 3 where "you" tends to refer to the assistant or LLM.
Why is it like that? It kinda throws me off honestly since I'm used to prompting the LLM by referring to it as "you" etc. | 2023-06-05T21:51:56 | https://www.reddit.com/r/LocalLLaMA/comments/141t6ax/context_for_llama_vicuna_models/ | pelatho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141t6ax | false | null | t3_141t6ax | /r/LocalLLaMA/comments/141t6ax/context_for_llama_vicuna_models/ | false | false | self | 4 | null |
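On the question above, the difference is mostly the narrator's point of view: oobabooga's default context is written from the user's side ("your assistant"), while OpenAI-style system prompts address the model directly ("you are..."). A minimal illustration, with both strings paraphrased rather than quoted exactly:

```python
# Both strings are paraphrases for illustration; the exact oobabooga default wording differs slightly.
ooba_style_context = "This is a conversation with your Assistant. The Assistant is helpful and answers questions."
openai_style_system = "You are a helpful assistant. Answer the user's questions accurately and concisely."
```

In practice, rewriting the context field in the second person tends to work fine with LLaMA/Vicuna models, so it is largely a convention rather than a requirement.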
QLoRA on Falcon | 2 | [removed] | 2023-06-05T21:57:10 | https://www.reddit.com/r/LocalLLaMA/comments/141tbo2/qlora_on_falcon/ | NeatManagement3 | self.LocalLLaMA | 2023-06-06T11:19:46 | 0 | {} | 141tbo2 | false | null | t3_141tbo2 | /r/LocalLLaMA/comments/141tbo2/qlora_on_falcon/ | false | false | default | 2 | null |
Alternative to LangChain for open LLMs? | 19 | LangChain seems very OpenAI-centric. Have people tried using other frameworks for local LLMs? Is so, what do you recommend?
In particular I have trouble getting LangChain to work with quantized Vicuna (4-bit GPTQ). I'm specifically interested in low-memory LLMs. I see the same questions about LangChain integration were being asked a couple of months ago, but I'm not seeing much progress, so that's why I'm looking for alternatives. Feel free to tell me I'm wrong. | 2023-06-05T22:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/141ttwt/alternative_to_langchain_for_open_llms/ | NodeTraverser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141ttwt | false | null | t3_141ttwt | /r/LocalLLaMA/comments/141ttwt/alternative_to_langchain_for_open_llms/ | false | false | self | 19 | null |
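One low-dependency option, if the GPTQ path keeps fighting back, is to skip the framework and call llama-cpp-python directly against a 4-bit GGML file. A minimal sketch, with the model path as a placeholder:

```python
# Framework-free sketch using llama-cpp-python with a 4-bit GGML model
# (a different quantization route than the GPTQ build mentioned above).
# pip install llama-cpp-python; the model path below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/vicuna-7b-q4_0.ggml.bin", n_ctx=2048)

prompt = (
    "A chat between a user and an assistant.\n"
    "USER: Summarize why local LLM tooling needs non-OpenAI backends.\n"
    "ASSISTANT:"
)
out = llm(prompt, max_tokens=200, temperature=0.7, stop=["USER:"])
print(out["choices"][0]["text"])
```

LangChain also ships a LlamaCpp wrapper around the same library, so the chain layer can be added back later if it turns out to be worth it.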
Apple Introduces M2 Ultra with up to 192GB Unified Memory - LLM powerhouse? | 129 | [deleted] | 2023-06-05T22:18:35 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 141tyxe | false | null | t3_141tyxe | /r/LocalLLaMA/comments/141tyxe/apple_introduces_m2_ultra_with_up_to_192gb/ | false | false | default | 129 | null |
||
r/LocalLLaMA should shut down from 12th to 14th June | 0 | 2023-06-05T23:37:44 | https://www.reddit.com/r/Save3rdPartyApps/comments/13yh0jf/dont_let_reddit_kill_3rd_party_apps/ | Nabakin | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 141w87q | false | null | t3_141w87q | /r/LocalLLaMA/comments/141w87q/rlocalllama_should_shut_down_from_12th_to_14th/ | false | false | default | 0 | null |
|
Who here has had success with running a local model as a D*scord bot? | 8 | I want to run my local models as a D\*scord bot, but I'm having a lot of trouble finding resources to do so, despite seeing others do it. Can someone point me in the right direction? | 2023-06-05T23:52:32 | https://www.reddit.com/r/LocalLLaMA/comments/141wmk3/who_here_has_had_success_with_running_a_local/ | shortybobert | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 141wmk3 | false | null | t3_141wmk3 | /r/LocalLLaMA/comments/141wmk3/who_here_has_had_success_with_running_a_local/ | false | false | self | 8 | null |
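A minimal shape for this is a small discord.py client that forwards mentions to whatever local inference API you already run. The endpoint URL and JSON fields below are assumptions about the backend (point them at your own server); the Discord side is standard discord.py 2.x:

```python
# Hedged sketch of a Discord bot that forwards mentions to a local LLM HTTP API.
# pip install discord.py requests
import asyncio
import discord
import requests

LLM_URL = "http://localhost:5000/api/v1/generate"   # assumption: adjust to your backend
TOKEN = "YOUR_BOT_TOKEN"

intents = discord.Intents.default()
intents.message_content = True
client = discord.Client(intents=intents)

def ask_llm(prompt: str) -> str:
    # Blocking HTTP call; payload and response fields are assumptions about the backend.
    r = requests.post(LLM_URL, json={"prompt": prompt, "max_new_tokens": 200}, timeout=120)
    return r.json()["results"][0]["text"]

@client.event
async def on_message(message: discord.Message):
    if message.author == client.user:
        return
    if client.user in message.mentions:
        reply = await asyncio.to_thread(ask_llm, message.clean_content)
        await message.channel.send(reply[:2000])     # Discord caps messages at 2000 chars

client.run(TOKEN)
```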
Orca (built on llama13b) looks like the new sheriff in town | 226 | Just read this paper from Microsoft about the Orca model (https://arxiv.org/pdf/2306.02707). They make big claims
1) a lot better than vicuna
2) on par with gpt3.5 on many tasks
3) that they will release the diff weights
A gamechanger for small models if it turns out as advertised. | 2023-06-06T03:05:29 | https://www.reddit.com/r/LocalLLaMA/comments/1421lfq/orca_built_on_llama13b_looks_like_the_new_sheriff/ | ironborn123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1421lfq | false | null | t3_1421lfq | /r/LocalLLaMA/comments/1421lfq/orca_built_on_llama13b_looks_like_the_new_sheriff/ | false | false | self | 226 | null |
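On point 3, "diff weights" usually means what Vicuna did: publish only the difference against the base LLaMA checkpoint so the base license is respected, and let users add it back locally. A hedged sketch of that merge step, with placeholder paths since no Orca weights were actually out at the time of this post:

```python
# Hedged sketch: applying released "diff"/delta weights on top of base LLaMA-13B.
# Paths are placeholders and the exact delta format is an assumption.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("path/to/llama-13b", torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained("path/to/orca-13b-delta", torch_dtype=torch.float16)

# Delta releases (e.g. Vicuna's) typically store target_weight - base_weight,
# so recovering the fine-tuned model is an element-wise sum.
merged_state = base.state_dict()
for name, diff_tensor in delta.state_dict().items():
    merged_state[name] = merged_state[name] + diff_tensor

base.load_state_dict(merged_state)
base.save_pretrained("orca-13b-merged")
```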
Has your ISP throttled your Internet after downloading several LLM models? | 1 | [removed] | 2023-06-06T03:10:51 | https://www.reddit.com/r/LocalLLaMA/comments/1421px1/has_your_isp_throttled_your_internet_after/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1421px1 | false | null | t3_1421px1 | /r/LocalLLaMA/comments/1421px1/has_your_isp_throttled_your_internet_after/ | false | false | default | 1 | null |
Getting local sillytavern to work with ooba in runpod | 2 | Trying to use ST on my local machine to talk to TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ in runpod via ooba. I'm new to runpod.
I can't find anything on how to get ST to talk to the ooba API. Do I need to enable TCP on the port instead of http?
I'm too tired right now to try it tonight but will try tomorrow. | 2023-06-06T04:55:16 | https://www.reddit.com/r/LocalLLaMA/comments/14246cf/getting_local_sillytavern_to_work_with_ooba_in/ | pixelnull | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14246cf | false | null | t3_14246cf | /r/LocalLLaMA/comments/14246cf/getting_local_sillytavern_to_work_with_ooba_in/ | false | false | self | 2 | null |
This is the new king of LLM hardware with 576 GB of RAM. | 78 | 2023-06-06T05:40:06 | https://www.nvidia.com/en-us/data-center/grace-hopper-superchip/ | fallingdowndizzyvr | nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 14257ap | false | null | t3_14257ap | /r/LocalLLaMA/comments/14257ap/this_is_the_new_king_of_llm_hardware_with_576_gb/ | false | false | 78 | {'enabled': False, 'images': [{'id': 'JlM10ogN6SFBsF7OHe0Ho68nBHGHEoNA0re9fozmOYs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/NDqLrUNI0AKIfVVU4IFQ1Fon_iy1zxTM-osVQ_M85CM.jpg?width=108&crop=smart&auto=webp&s=281110a80485cb90590e95da4116d996689f7ede', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/NDqLrUNI0AKIfVVU4IFQ1Fon_iy1zxTM-osVQ_M85CM.jpg?width=216&crop=smart&auto=webp&s=c18f8a2104a65486f029aeb0cc432b7c595a38d6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/NDqLrUNI0AKIfVVU4IFQ1Fon_iy1zxTM-osVQ_M85CM.jpg?width=320&crop=smart&auto=webp&s=c4f6ee879c2786056da4ee3f700db09c707e8fee', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/NDqLrUNI0AKIfVVU4IFQ1Fon_iy1zxTM-osVQ_M85CM.jpg?width=640&crop=smart&auto=webp&s=3070a4e82ee138c9afdfd9014f26635f1002d2ae', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/NDqLrUNI0AKIfVVU4IFQ1Fon_iy1zxTM-osVQ_M85CM.jpg?width=960&crop=smart&auto=webp&s=372b8453b2c7742943f10fb5ccf4c2846cc29b4b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/NDqLrUNI0AKIfVVU4IFQ1Fon_iy1zxTM-osVQ_M85CM.jpg?width=1080&crop=smart&auto=webp&s=815beffa255b34701cf593ed916610f7caadb380', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/NDqLrUNI0AKIfVVU4IFQ1Fon_iy1zxTM-osVQ_M85CM.jpg?auto=webp&s=8168920510b2ab8ac48253f345d9f059eebec84f', 'width': 1200}, 'variants': {}}]} |
||
how can i use localllama as openai with rustful witout localai | 1 | [removed] | 2023-06-06T05:52:38 | https://www.reddit.com/r/LocalLLaMA/comments/1425hk3/how_can_i_use_localllama_as_openai_with_rustful/ | cool_xixi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1425hk3 | false | null | t3_1425hk3 | /r/LocalLLaMA/comments/1425hk3/how_can_i_use_localllama_as_openai_with_rustful/ | false | false | default | 1 | null |
Wizard-30B-GPTQ is good so far on 4090 | 9 | Gets about 10 t/s on an old CPU. I think WizardLM-Uncensored-30B is a really
performant model so far. It follows few shot instructions better and is zippy enough for my taste. | 2023-06-06T06:34:38 | https://www.reddit.com/r/LocalLLaMA/comments/1426eo3/wizard30bgptq_is_good_so_far_on_4090/ | EarthquakeBass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1426eo3 | false | null | t3_1426eo3 | /r/LocalLLaMA/comments/1426eo3/wizard30bgptq_is_good_so_far_on_4090/ | false | false | self | 9 | null |
How to train bigger models (13B+) | 4 | I currently fine-tune a lot on 7B models. Can the training scripts be easily adapted for 13B+? How do I calculate the memory consumption to avoid out-of-memory errors? | 2023-06-06T07:14:12 | https://www.reddit.com/r/LocalLLaMA/comments/14279s1/how_to_train_bigger_models_13b/ | wsebos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14279s1 | false | null | t3_14279s1 | /r/LocalLLaMA/comments/14279s1/how_to_train_bigger_models_13b/ | false | false | self | 4 | null |
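For the "how to calculate memory" part of the question above, a common rule of thumb is bytes per parameter by training mode: roughly 16 bytes/param for full fine-tuning with Adam in mixed precision, about 2 bytes/param for LoRA on a frozen fp16 base, and about 0.5 bytes/param for QLoRA's 4-bit base, with activations and batch size adding on top. A rough calculator sketch (estimates only, not a guarantee against OOM):

```python
# Rough rule-of-thumb estimate of fine-tuning memory (very approximate; real
# usage also depends on batch size, sequence length, and activation memory).
def estimate_gb(n_params_b: float, mode: str) -> float:
    p = n_params_b * 1e9
    if mode == "full_fp16_adam":
        # fp16 weights (2B) + fp16 grads (2B) + fp32 Adam states (~8B) + fp32 master weights (4B)
        bytes_per_param = 2 + 2 + 8 + 4
    elif mode == "lora_fp16":
        # frozen fp16 base (2B); adapter weights/grads/optimizer states are comparatively tiny
        bytes_per_param = 2
    elif mode == "qlora_4bit":
        # 4-bit frozen base (~0.5B); adapters again negligible
        bytes_per_param = 0.5
    else:
        raise ValueError(mode)
    return p * bytes_per_param / 1e9

for size in (7, 13, 30):
    print(f"{size}B: full~{estimate_gb(size, 'full_fp16_adam'):.0f} GB, "
          f"LoRA~{estimate_gb(size, 'lora_fp16'):.0f} GB, "
          f"QLoRA~{estimate_gb(size, 'qlora_4bit'):.0f} GB (weights only, activations extra)")
```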
Falcon 40b instruct on a M1 macbook air 16GB? | 2 | [removed] | 2023-06-06T07:24:43 | https://www.reddit.com/r/LocalLLaMA/comments/1427ht6/falcon_40b_instruct_on_a_m1_macbook_air_16gb/ | SnazzySnowflake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1427ht6 | false | null | t3_1427ht6 | /r/LocalLLaMA/comments/1427ht6/falcon_40b_instruct_on_a_m1_macbook_air_16gb/ | false | false | default | 2 | null |
Noob question - 2019 Intel mac | 1 | [removed] | 2023-06-06T07:42:11 | https://www.reddit.com/r/LocalLLaMA/comments/1427vcv/noob_question_2019_intel_mac/ | UnpleasantEgg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1427vcv | false | null | t3_1427vcv | /r/LocalLLaMA/comments/1427vcv/noob_question_2019_intel_mac/ | false | false | default | 1 | null |