title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string, nullable) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Noob here | 1 | [removed] | 2023-06-15T03:57:08 | https://www.reddit.com/r/LocalLLaMA/comments/149s1t7/noob_here/ | kunjinn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149s1t7 | false | null | t3_149s1t7 | /r/LocalLLaMA/comments/149s1t7/noob_here/ | false | false | default | 1 | null |
New Discord server and a poll for this community | 37 | New server: [https://discord.gg/Y8H8uUtxc3](https://discord.gg/Y8H8uUtxc3)
LocalLLaMA has reached 30,000 members! Thank you everyone for being here.
As the community has grown, one of the most popular suggestions has been to create a Discord server. Whether the community actually wants one will be decided by poll. Do you want a Discord server for this community? Vote in the poll on whether the server should stay open or be deleted.
**Edit:** The community has decided, and the server will remain open. Thanks for voting and showing interest.
Q & A:
***Why Discord? Discord is terrible.***
There are three goals in making this server:
* Give beginners another way to ask very simple questions that have existing answers. These question posts are often downvoted. Removing these posts and leaving beginners without an answer is not a good solution, but at the same time, it wouldn't be ideal to see the community and longtime members here have their home page overrun with posts they prefer not to see. Discord can be a middle ground.
* Try to provide another way for the developers in this community to connect and collaborate.
* Encourage more discussion of locally run language models and related topics.
The original plan was to never make a server for this sub, but it was reconsidered after requests for one.
***Discord is a platform where information is blocked off for everyone else. Why make a server?***
There are endless Discord servers and as locally run language models become more popular, endless more will be created. If this is inevitable, it makes sense to create a centralized place where these discussions can be searched forever instead of being split up and lost in countless smaller servers.
***Why not something else besides Discord?***
Discord is the most popular alternative and most people usually don't want to be inconvenienced with platforms they don't already use.
***Why is the server named LocalLLaMA?***
LocalLLaMA focuses on LLaMA but has always allowed discussion of other local LLMs, and the name and community have become a catchall for local language model news and discussion.
***Anything else to know?***
Have a suggestion for the server, interested in being a Discord mod, or want to help by adding info or a utility? [Click here](https://www.reddit.com/message/compose?to=/r/LocalLLaMA) to send a message. For being a mod there, active members in this subreddit will be considered first.
Friendly reminder: this subreddit has a wiki. You can find a list of models [here](https://www.reddit.com/r/LocalLLaMA/wiki/models) and the simple install guide [here](https://www.reddit.com/r/LocalLLaMA/wiki/guide). A list of important datasets and community projects is [here](https://www.reddit.com/r/LocalLLaMA/wiki/communityprojects). The [stickied post](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/) below on running LLaMA has also been updated with a few more frequently asked questions and a simple chart showing some models and what they can be used for.
**Poll:** Do you want a Discord server for this community?
Yes: keep the server open
No: delete the server
I don't care: I don't have a preference whether there's a server or not.
[View Poll](https://www.reddit.com/poll/149s2ul) | 2023-06-15T03:58:47 | https://www.reddit.com/r/LocalLLaMA/comments/149s2ul/new_discord_server_and_a_poll_for_this_community/ | Technical_Leather949 | self.LocalLLaMA | 2023-06-19T10:16:26 | 0 | {} | 149s2ul | true | null | t3_149s2ul | /r/LocalLLaMA/comments/149s2ul/new_discord_server_and_a_poll_for_this_community/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'E8Jfupwh2wiDlIJ4BmYK6WgDxfqDQRYjZ_8QK--Cnqg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/M2EBgGeeo2HSx9mOiRPZMaDbB0foY9mKnPwthOx0LRA.jpg?width=108&crop=smart&auto=webp&s=4d287cc99a365271c3b4e143e681ce7de9b177d2', 'width': 108}], 'source': {'height': 162, 'url': 'https://external-preview.redd.it/M2EBgGeeo2HSx9mOiRPZMaDbB0foY9mKnPwthOx0LRA.jpg?auto=webp&s=560dfadb35cdb30d15adec491f5a2de8611e7c2d', 'width': 162}, 'variants': {}}]} |
A short conversation with Falcon | 299 | Questions are purposely stupid, btw | 2023-06-15T04:37:22 | https://www.reddit.com/gallery/149su0a | Basic_Description_56 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 149su0a | false | null | t3_149su0a | /r/LocalLLaMA/comments/149su0a/a_short_conversation_with_falcon/ | false | false | 299 | null |
How to fine-tune a model? WizardVicuna 7B uncensored | 8 | I was inspired by the NovelAI FAQ about how their model works: they use GPT-J and fine-tune it on material for fantasy story writing and anime-related content. I want to try to replicate what they did with WizardVicuna 7B and see if I can achieve similar results with their model. The problem is I don't have a single clue how to start. Can anyone explain how this works and where I'm going wrong? | 2023-06-15T05:00:33 | https://www.reddit.com/r/LocalLLaMA/comments/149t8oa/how_to_finetuned_module_wizardvicuna_7b_uncensored/ | Merchant_Lawrence | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149t8oa | false | null | t3_149t8oa | /r/LocalLLaMA/comments/149t8oa/how_to_finetuned_module_wizardvicuna_7b_uncensored/ | false | false | self | 8 | null |
New quantization method SqueezeLLM allows for lossless compression at 3-bit and outperforms GPTQ and AWQ in both 3-bit and 4-bit. Quantized Vicuna and LLaMA models have been released. | 216 | Paper: [https://arxiv.org/abs/2306.07629](https://arxiv.org/abs/2306.07629)
Code: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
SqueezeLLM quantized models: [https://huggingface.co/squeeze-ai-lab](https://huggingface.co/squeeze-ai-lab)
Excerpts:
>We introduce SqueezeLLM, a post-training quantization framework that not only enables lossless compression to ultra-low precisions of up to 3-bit, but also achieves higher quantization performance under the same memory constraint. We extensively test SqueezeLLM on LLaMA-7B, 13B, and 30B on language modeling tasks using the C4 and WikiText2 benchmarks, where we find that SqueezeLLM consistently outperforms existing quantization methods by a large margin across different bit precisions. Our deployed models on A6000 GPUs not only demonstrate improved quantization performance but also exhibit significant gains in latency.
>
>In generative LLM inference, loading weight matrices into memory is the primary bottleneck, while the cost of dequantization and computation in the FP16 domain is relatively insignificant. Thus, by quantizing just the weights to lower precision, while leaving the activations in full precision, we can attain significant speedup, in addition to the reduction in model size. Notably, even the dense-only version of SqueezeLLM achieves perplexity comparable to the grouped GPTQ and AWQ. By incorporating sparsity, we achieve further perplexity improvements, reducing the gap from the FP16 baseline to less than 0.1 and 0.4 perplexity points for 4-bit and 3-bit quantization, respectively. Notably, with 3-bit quantization, our approach achieves up to a 2.1× reduction in perplexity gap from the FP16 baseline compared to existing methods.
>
>SqueezeLLM achieves higher accuracy for both Vicuna-7B and 13B as compared to the AWQ method and also preserve the accuracy of the FP16 baseline model with 4-bit quantization. Furthermore, it is noteworthy that the 4-bit quantized version of Vicuna-13B using SqueezeLLM has 2× smaller memory footprint than the 7B baseline model in FP16, while still achieving a 2% higher accuracy. In the case of 3-bit quantization, SqueezeLLM outperforms both GPTQ and the state-of-the-art AWQ method with a group size of 128 even without incorporating sparsity.
>
>Keeping 0.05% of sensitive values in FP16 only adds approximately 20% latency overhead across different model sizes, while still providing up to 1.9× speed up compared to the baseline. Keeping 0.45% of parameters in FP16 only adds 40-45% latency overhead relative to the dense-only implementation, while still resulting in 1.7× speed up compared to the FP16 baseline. In contrast, when accounting for permutation, the GPTQ runtime is degraded heavily. This shows how our Dense-and-Sparse quantization methodology allows for both higher accuracy as well as better performance relative to GPTQ. | 2023-06-15T05:37:45 | https://www.reddit.com/r/LocalLLaMA/comments/149txjl/new_quantization_method_squeezellm_allows_for/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149txjl | false | null | t3_149txjl | /r/LocalLLaMA/comments/149txjl/new_quantization_method_squeezellm_allows_for/ | false | false | self | 216 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
Training on documents to summarize them. | 17 | I am highly interested in using LLMs to summarize long documents. However, the maximum token length is often a limitation.
But LLM can provide summaries based on the text it was trained on. If I ask ChatGPT about the summary of "Harry Potter and the Philosopher's Stone," it can generate a satisfactory output.
So, I had the idea that it should be possible to train an LLM (likely with LoRA) on a specific document and then request a summarization. The summarization might suffer from hallucination, but for my use case this would be okay.
I searched for this but could not find anything. I even asked ChatGPT about it, but it just hallucinated an answer. Does anyone know if something like this exists?
Edit:
After thinking about this a little more, I realized (or at least I think) that the model I described would work like a stateful LLM. There is at least a bit of [research on this topic](https://arxiv.org/pdf/2209.07634.pdf). However, I do not know why this is not used more yet. Isn't this the whole reason LSTMs were so popular for a long time? | 2023-06-15T06:56:51 | https://www.reddit.com/r/LocalLLaMA/comments/149vbmx/training_on_documents_to_summarize_them/ | Schmandli | self.LocalLLaMA | 2023-06-15T12:39:41 | 0 | {} | 149vbmx | false | null | t3_149vbmx | /r/LocalLLaMA/comments/149vbmx/training_on_documents_to_summarize_them/ | false | false | self | 17 | null |
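For readers who want to try the idea, here is a minimal sketch of fine-tuning a LoRA adapter on a single long document, assuming the Hugging Face transformers and peft libraries. The base model name, chunking scheme, and hyperparameters are placeholder assumptions, not something proposed in the post.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

model_name = "openlm-research/open_llama_7b"  # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# Attach a small LoRA adapter so only a few million parameters are trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Split the long document into overlapping windows that fit the context length.
tokens = tokenizer(open("long_document.txt").read(), return_tensors="pt").input_ids[0]
chunk, overlap = 1024, 128
chunks = [tokens[i:i + chunk] for i in range(0, len(tokens), chunk - overlap)]

class DocDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(chunks)
    def __getitem__(self, i):
        return {"input_ids": chunks[i], "labels": chunks[i].clone()}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="doc-lora", per_device_train_batch_size=1,
                           num_train_epochs=3, learning_rate=1e-4, fp16=True),
    train_dataset=DocDataset(),
)
trainer.train()  # afterwards, prompt the adapted model for a summary and expect some hallucination
```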
Local LLMs: After Novelty Wanes | 75 |
What do you do with them after the novelty wears off?
- Have you found usecases that are particularly valuable?
- Have you found cool new applications in your personal or professional life?
- Are you building next gen apps with local LLMs?
Share your experiences, tips, and ideas on how to keep them useful in the long run. | 2023-06-15T09:06:55 | https://www.reddit.com/r/LocalLLaMA/comments/149xhln/local_llms_after_novelty_wanes/ | AstrionX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149xhln | false | null | t3_149xhln | /r/LocalLLaMA/comments/149xhln/local_llms_after_novelty_wanes/ | false | false | self | 75 | null |
Is it possible to train a llm to model a personality? | 14 | I don’t have a good machine so the only models are 7b ones that I can run on my machines.
What if I could give the model a personality, or train it in a particular field, say, make it a doctor or even Shakespeare?
The way I understand it, the fewer the parameters, the worse the model is in general (7B worse than 30B and so on).
But the key word is "in general":
if all of its capacity were steered towards one particular thing,
it could perform a lot better.
I hope this makes sense.
Maybe there already exist such models that I don’t know of | 2023-06-15T10:13:13 | https://www.reddit.com/r/LocalLLaMA/comments/149ylzh/is_it_possible_to_train_a_llm_to_model_a/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 149ylzh | false | null | t3_149ylzh | /r/LocalLLaMA/comments/149ylzh/is_it_possible_to_train_a_llm_to_model_a/ | false | false | self | 14 | null |
Psychiatry (Choice of medications ) advice from LLAMA? | 1 | [removed] | 2023-06-15T10:54:19 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 149zbzk | false | null | t3_149zbzk | /r/LocalLLaMA/comments/149zbzk/psychiatry_choice_of_medications_advice_from_llama/ | false | false | default | 1 | null |
A guide for WSL/Windows 11 users including the installation of WSL2, Conda, Cuda & more) | 1 | [deleted] | 2023-06-15T11:42:06 | [deleted] | 2023-06-15T11:46:54 | 0 | {} | 14a086f | false | null | t3_14a086f | /r/LocalLLaMA/comments/14a086f/a_guide_for_wslwindows_11_users_including_the/ | false | false | default | 1 | null |
Free GPU options for LLaMA model experimentation | 63 | To those who are starting out with LLaMA models using llama.cpp or similar tools: you may feel tempted to purchase a used 3090, a 4090, or an Apple M2 to run these models. However, I'd like to share that there are free alternatives available for you to experiment with before investing your hard-earned money.
Google Colab notebooks offer a decent virtual machine (VM) equipped with a GPU, and it's completely free to use.
Here are the typical specifications of this VM:
12 GB RAM
80 GB DISK
Tesla T4 GPU with 15 GB VRAM
This setup is sufficient to run most models effectively. In the comments section, I will be sharing a sample Colab notebook specifically designed for beginners.
If you happen to know about any other free GPU VMs, please do share them in the comments below. | 2023-06-15T11:47:23 | https://www.reddit.com/r/LocalLLaMA/comments/14a0bs9/free_gpu_options_for_llama_model_experimentation/ | AstrionX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a0bs9 | false | null | t3_14a0bs9 | /r/LocalLLaMA/comments/14a0bs9/free_gpu_options_for_llama_model_experimentation/ | false | false | self | 63 | null |
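Not from the original post, but as a quick illustration of how little setup the free tier needs, here is a minimal sketch using the llama-cpp-python bindings inside a Colab cell. The model path is a placeholder; download any quantized GGML model to the Colab disk first.

```python
# In a Colab cell, install the bindings first:
#   !pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path; point it at whatever GGML file you downloaded (e.g. from Hugging Face).
llm = Llama(model_path="/content/wizard-vicuna-13b.ggmlv3.q4_0.bin", n_ctx=2048)

out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```

This runs on the CPU of the Colab VM; builds of llama.cpp compiled with cuBLAS can additionally offload layers to the free T4 GPU.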
A guide for WSL/Windows 11/Linux users (including the installation of WSL2, Conda, CUDA & more) | 26 |
I created a guide that includes some tips to improve your experience when using WSL2/Windows 11/Linux.
The WSL part contains :
* [install WSL](https://github.com/AmineDjeghri/AwesomeWindows11#install-wsl)
* [install miniconda in WSL](https://github.com/AmineDjeghri/AwesomeWindows11#install-miniconda-in-wsl)
* [configure WSL terminal](https://github.com/AmineDjeghri/AwesomeWindows11#configure-wsl-terminal-)
* [Install cuda in WSL](https://github.com/AmineDjeghri/AwesomeWindows11#install-cuda-in-wsl-)
* [WSL2 tools (backup, restore WSL image ..)](https://github.com/AmineDjeghri/AwesomeWindows11#wsl2-tools-)
* [WSL Terminal customization](https://github.com/AmineDjeghri/AwesomeWindows11/blob/master/README.md#41-customize-wsl-ubuntu-terminal) (For both Linux and WSL)
Contributions are welcome!
[Link to WSL Section](https://github.com/AmineDjeghri/AwesomeWindows11/blob/master/README.md#321coding-using-linuxwsl-inside-windows-best-choice---skip-this-if-you-dont-want-to-use-wsl)
[Link to the repo](https://github.com/AmineDjeghri/AwesomeWindows11) | 2023-06-15T11:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/14a0eqq/a_guide_for_wslwindows_11linux_users_including/ | aminedjeghri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a0eqq | false | null | t3_14a0eqq | /r/LocalLLaMA/comments/14a0eqq/a_guide_for_wslwindows_11linux_users_including/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'B-xC3cG-8ytaaWfLUNyL7usqsTGoV31nbZDjbq4WyUo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?width=108&crop=smart&auto=webp&s=93e75c196cc4dad1e97761d710cfef356fd34593', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?width=216&crop=smart&auto=webp&s=e2bc9137c3456b7d82641abc60c8597efc160094', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?width=320&crop=smart&auto=webp&s=6c085ce886c13c58ff83f5c35afbb53b142d80fe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?width=640&crop=smart&auto=webp&s=e78fc6fc70758a0de7740beea69a178ea36eb2bd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?width=960&crop=smart&auto=webp&s=64b9ff6e55943fe6291be5fce28d1ceda819546d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?width=1080&crop=smart&auto=webp&s=99ffd9ae0979d1759c22d11607ab95c653bb1b5f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ue5b8HSxaXR-AZZPq_DboxFoXzY3FhgPkLxFkGoyHZc.jpg?auto=webp&s=6cc958a41cb442d011d627c00e75e91fbc572f44', 'width': 1200}, 'variants': {}}]} |
Honest LLaMA - Inference-Time Intervention (ITI) | 28 |
Abstract:
We introduce Inference-Time Intervention (ITI), a technique designed to enhance the truthfulness of large language models (LLMs). ITI operates by shifting model activations during inference, following a set of directions across a limited number of attention heads. This intervention significantly improves the performance of LLaMA models on the TruthfulQA benchmark. On an instruction-finetuned LLaMA called Alpaca, ITI improves its truthfulness from 32.5% to 65.1%. We identify a tradeoff between truthfulness and helpfulness and demonstrate how to balance it by tuning the intervention strength. ITI is minimally invasive and computationally inexpensive. Moreover, the technique is data efficient: while approaches like RLHF require extensive annotations, ITI locates truthful directions using only a few hundred examples. Our findings suggest that LLMs may have an internal representation of the likelihood of something being true, even as they produce falsehoods on the surface.
TLDR:
For Alpaca, ITI improves its truthfulness from 32.5% to 65.1% on TruthfulQA
Paper:
https://arxiv.org/abs/2306.03341 | 2023-06-15T11:53:47 | https://github.com/likenneth/honest_llama | i_wayyy_over_think | github.com | 1970-01-01T00:00:00 | 0 | {} | 14a0g7m | false | null | t3_14a0g7m | /r/LocalLLaMA/comments/14a0g7m/honest_llama_inferencetime_intervention_iti/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'seIlZWMeu9MvvI-1gNgo5gFIGcPuPnONo3Op_T_cSts', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?width=108&crop=smart&auto=webp&s=b348d8153ba7eadccfcbac0e5a7404492b79f400', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?width=216&crop=smart&auto=webp&s=be28c81e0f37865ce3e3c84d6643bcd19660d5b7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?width=320&crop=smart&auto=webp&s=cc6d6a7e2a4d01f68f5f64d672d2d902ad6b94af', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?width=640&crop=smart&auto=webp&s=f8bb33120f42500c33b23cdb73765cb57db75308', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?width=960&crop=smart&auto=webp&s=794211e375338137ec1bd56c456193531e2a138e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?width=1080&crop=smart&auto=webp&s=2e3467bc947ae9d8df25e081876ff4fed76ced19', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1OWaGQh5Xbjux70KU0UpsT039GqvTo_rOeKb31QQH8w.jpg?auto=webp&s=bbbb635835e2867e5023124bb8ad2b42a4aaf9ae', 'width': 1200}, 'variants': {}}]} |
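As a rough illustration of the core idea (not code from the honest_llama repo), shifting activations along a fixed direction can be prototyped with a PyTorch forward hook. The intervention strength, the direction, and the module to hook are all placeholder assumptions; the real directions come from probes trained on labelled activations, as described in the paper.

```python
import torch

def make_iti_hook(direction: torch.Tensor, alpha: float = 15.0):
    """Return a hook that adds a scaled, normalized direction to a module's output."""
    direction = direction / direction.norm()
    def hook(module, inputs, output):
        # output shape: (batch, seq_len, hidden) -- shift every position along the direction
        return output + alpha * direction.to(dtype=output.dtype, device=output.device)
    return hook

# Usage sketch with a Hugging Face LLaMA-style model (module path is an assumption):
# for layer_idx, head_direction in chosen_heads:
#     proj = model.model.layers[layer_idx].self_attn.o_proj
#     proj.register_forward_hook(make_iti_hook(head_direction))
```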
what do you use for quickly testing pretrained models locally? | 1 | [deleted] | 2023-06-15T11:56:45 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14a0ib0 | false | null | t3_14a0ib0 | /r/LocalLLaMA/comments/14a0ib0/what_do_you_use_for_quickly_testing_pretrained/ | false | false | default | 1 | null |
Running LLaMA 65B reminded me how much more fun base models are | 89 | Lately I've been running LLaMA 65B in the cloud to try out exllama. With no actual use case for it, I've been using the chatbot example. And it has reminded me how much more fun base models are compared to RLHF'd GPT. They're funnier, more concise, less obsequious, and capable of expressing emotions other than bland positivity. (They're also dumb as rocks compared to GPT-4, but that's expected.)
Maybe I'm just bad at prompting, but I've tried asking GPT-4 to be more concise, for example, and it didn't help much. (The message cap would make that futile anyway.) Meanwhile, if I ask a base model to play a game where we take turns saying the first word that comes to mind, it'll do it, and actually say only that word, without me having to ask explicitly.
It makes me wish that the tuned models (like GPT-4) would retain that sort of communication style. And I hope that the LLaMA v2 release isn't fine-tuned into the stereotypical RLHF writing style. | 2023-06-15T12:13:37 | https://www.reddit.com/r/LocalLLaMA/comments/14a0uig/running_llama_65b_reminded_me_how_much_more_fun/ | hold_my_fish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a0uig | false | null | t3_14a0uig | /r/LocalLLaMA/comments/14a0uig/running_llama_65b_reminded_me_how_much_more_fun/ | false | false | self | 89 | null |
Best open-source LLM for code generation | 10 | I am looking for an LLM to use for developing a code generator application. The goal is to fine-tune the LLM with template libraries. Suggestions? | 2023-06-15T12:43:04 | https://www.reddit.com/r/LocalLLaMA/comments/14a1go5/best_opensource_llm_for_code_generation/ | lahaine93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a1go5 | false | null | t3_14a1go5 | /r/LocalLLaMA/comments/14a1go5/best_opensource_llm_for_code_generation/ | false | false | self | 10 | null |
Can you run inference on Mac with AMD GPUs? | 2 | Is that possible with things like oobabooga, ROCm, and llama.cpp?
I'm talking about Macs with an Intel CPU and an AMD GPU, not Macs with Apple Silicon chips. | 2023-06-15T13:13:49 | https://www.reddit.com/r/LocalLLaMA/comments/14a2505/can_you_run_inference_on_mac_with_amd_gpus/ | jl303 | self.LocalLLaMA | 2023-06-16T00:55:17 | 0 | {} | 14a2505 | false | null | t3_14a2505 | /r/LocalLLaMA/comments/14a2505/can_you_run_inference_on_mac_with_amd_gpus/ | false | false | self | 2 | null |
GPT4all with BDSM content and not censored? | 0 | [removed] | 2023-06-15T15:43:05 | https://www.reddit.com/r/LocalLLaMA/comments/14a5p28/gpt4all_with_bdsm_content_and_not_censored/ | Pleasant_Ad5555 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a5p28 | false | null | t3_14a5p28 | /r/LocalLLaMA/comments/14a5p28/gpt4all_with_bdsm_content_and_not_censored/ | false | false | default | 0 | null |
A kind of magic | 3 | [removed] | 2023-06-15T16:13:45 | https://www.reddit.com/r/LocalLLaMA/comments/14a6gii/a_kind_of_magic/ | Lolleka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a6gii | false | null | t3_14a6gii | /r/LocalLLaMA/comments/14a6gii/a_kind_of_magic/ | false | false | default | 3 | null |
Need Help | 7 | Ok, so I'm fairly new to llama.cpp. I used to run the LLaMA models with oobabooga, but after the newest changes to llama.cpp I switched. It runs so much faster on my GPU; it's amazing, almost instant responses. So anyways, I'm using the following code inside a bat file to run it:
title llama.cpp
:start
main -i --interactive-first -r "### Human:" --temp 0 -c 2048 -n -1 --ignore-eos --repeat_penalty 1.2 --instruct -m Wizard-Vicuna-13B-Uncensored.ggmlv3.q6_K.bin -ngl 43
pause
goto start
Now here is my question: I read that I can also run this as a server (OpenAI-like, with a REST API). How would I need to change my bat file in order to start the server? Or, the better question: is that possible at all, or do I need to do it some other way?
Thanks in advance for any help. | 2023-06-15T16:28:35 | https://www.reddit.com/r/LocalLLaMA/comments/14a6tlc/need_help/ | GuentherDonner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a6tlc | false | null | t3_14a6tlc | /r/LocalLLaMA/comments/14a6tlc/need_help/ | false | false | self | 7 | null |
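Not an answer from the thread, but a commonly used route at the time was the llama-cpp-python package, which ships an OpenAI-compatible REST server around the same GGML files. A hedged sketch; the port and model path below are just the package defaults and the file from the post, so double-check against the llama-cpp-python docs.

```python
# Start the server from a terminal (or from the bat file) before running this script:
#   pip install llama-cpp-python[server]
#   python -m llama_cpp.server --model Wizard-Vicuna-13B-Uncensored.ggmlv3.q6_K.bin
import openai

openai.api_key = "not-needed"                  # the local server does not check the key
openai.api_base = "http://localhost:8000/v1"   # default port of llama_cpp.server

resp = openai.Completion.create(
    model="local-model",                       # the server uses whatever model it loaded
    prompt="### Human: Hello!\n### Assistant:",
    max_tokens=64,
)
print(resp["choices"][0]["text"])
```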
Wizard-Vicuna 30B only using 50% of CPU | 1 | [removed] | 2023-06-15T17:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/14a8fkp/wizardvicuna_30b_only_using_50_of_cpu/ | TenseSpoon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a8fkp | false | null | t3_14a8fkp | /r/LocalLLaMA/comments/14a8fkp/wizardvicuna_30b_only_using_50_of_cpu/ | false | false | default | 1 | null |
How do you provide a model with source files for analysis? | 3 | I'm quite new to this, and using oobabooga on windows.
The models appear to think they can read HTTP URLs, and when I asked how I could provide them with source files, they instructed me to upload them and provide links, but it quickly became apparent they cannot actually access and read the uploaded source files despite claiming that's what they are doing.
Working with vicuna-30B-uncensored-gptq, but WizardCoder-15B-v1.0 also had the same problem. I assume it's because anything llama based assumes it is running as a centralized service by a large organization (vicuna notably falsely claims it is sending my feedback to "the development team").
Since I haven't gotten anywhere asking the models themselves about this, I thought I'd ask here: how can I provide source files or an entire project to a model for analysis? My first guess is using LoRAs to provide extensions to its knowledge base, but in this use case what I want is for the AI's knowledge to always reflect only the most current version of the source files in the project. If there was a pipeline feeding source files into its knowledge base, wouldn't that mean it ends up with knowledge of tons of outdated and useless source files? Wouldn't that interfere with its ability to answer questions about the current state of the codebase?
My second guess is that I should be using the model through visual studio to accomplish this. Not sure if that is possible with any given model or only ones which are designed for it.
Can anyone tell me what my options are here? | 2023-06-15T17:50:12 | https://www.reddit.com/r/LocalLLaMA/comments/14a8pcq/how_do_you_provide_a_model_with_source_files_for/ | gigachad_deluxe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a8pcq | false | null | t3_14a8pcq | /r/LocalLLaMA/comments/14a8pcq/how_do_you_provide_a_model_with_source_files_for/ | false | false | self | 3 | null |
trial of my status | 1 | [deleted] | 2023-06-15T18:04:31 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14a8x2t | false | null | t3_14a8x2t | /r/LocalLLaMA/comments/14a8x2t/trial_of_my_status/ | false | false | default | 1 | null |
At what model size does Reasoning appear? | 47 | LLMs are great as Wikipedia type information sources.
However, in GPT4 the emergent property of Reasoning appears.
Do we have a chance of achieving this on our home systems? | 2023-06-15T18:11:56 | https://www.reddit.com/r/LocalLLaMA/comments/14a91wu/at_what_model_size_does_reasoning_appear/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14a91wu | false | null | t3_14a91wu | /r/LocalLLaMA/comments/14a91wu/at_what_model_size_does_reasoning_appear/ | false | false | self | 47 | null |
Running 40B Falcon on AMD MI300X real-time demo | 0 | [removed] | 2023-06-15T19:00:01 | https://www.reddit.com/r/LocalLLaMA/comments/14aa8ln/running_40b_falcon_on_amd_mi300x_realtime_demo/ | themushroommage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aa8ln | false | null | t3_14aa8ln | /r/LocalLLaMA/comments/14aa8ln/running_40b_falcon_on_amd_mi300x_realtime_demo/ | false | false | default | 0 | null |
Should laptop make a scratching noise when running Oobabooga or similar tools? | 0 | [removed] | 2023-06-15T19:30:30 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14ab0d3 | false | null | t3_14ab0d3 | /r/LocalLLaMA/comments/14ab0d3/should_laptop_make_a_scratching_noise_when/ | false | false | default | 0 | null |
A in-context-learning tuning dataset to few shots prompting | 10 | 2023-06-15T19:33:18 | https://huggingface.co/datasets/tasksource/icl-symbol-tuning-instruct | Jean-Porte | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 14ab2yj | false | null | t3_14ab2yj | /r/LocalLLaMA/comments/14ab2yj/a_incontextlearning_tuning_dataset_to_few_shots/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'ZgdTHBfsWO9CNq6kHwFEV90rdRHY5G_gJFbf1uYk2MQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?width=108&crop=smart&auto=webp&s=9ed7d6b1ac08c55270997f99cb3303405dd6b012', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?width=216&crop=smart&auto=webp&s=26fffa719db5eef4c54c17c67144c4e3b377f139', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?width=320&crop=smart&auto=webp&s=3898cc7de82bf1ffd09348cd800daa5a296dfc73', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?width=640&crop=smart&auto=webp&s=46e85a696798be47f3fe97834118aefc8c4abd9e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?width=960&crop=smart&auto=webp&s=46a942d705f25318d1637a6c93732b1a495efb7c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?width=1080&crop=smart&auto=webp&s=0417082e8101b6891980dc43305025bb44b094b7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9ZjaWk50vyd8R_jJeHV5GUjgdhHi_ijuMelpplD3EVo.jpg?auto=webp&s=e73c6f309384dcd3d41cf6a3535b6372707eb6f4', 'width': 1200}, 'variants': {}}]} |
My experience with Falcon 40B on Hugging Face | 5 | I tried out several different prompts on the Huggingface playground ( [Falcon-Chat - a Hugging Face Space by HuggingFaceH4](https://huggingface.co/spaces/HuggingFaceH4/falcon-chat) ) for it to act as a Spanish tutor and to correct any mistakes I made, but it either only corrected the mistakes and nothing else, or it ignored the mistakes altogether and kept chatting. I know it is a free model but it doesn't seem to follow instructions very carefully.
Mind you, this is the first time I've used an open-source LLM, so I may be expecting too much?
Is there a 7B model available for testing online anywhere? I could only find the 40B model.... thanks | 2023-06-15T19:49:02 | https://www.reddit.com/r/LocalLLaMA/comments/14abh3k/my_experience_with_falcon_40b_on_hugging_face/ | cool-beans-yeah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14abh3k | false | null | t3_14abh3k | /r/LocalLLaMA/comments/14abh3k/my_experience_with_falcon_40b_on_hugging_face/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'ErHVgBnGBq6UZs92whKb8RedUVO_2L-lXigC_T4TQXw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?width=108&crop=smart&auto=webp&s=b805a6aad62bd8affa3b4d25cc1f327827dd454b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?width=216&crop=smart&auto=webp&s=255904e9c827306ef890b0207b64a4b460942184', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?width=320&crop=smart&auto=webp&s=eaa2dc37536e48ced13b5c093b9cf067b21907f1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?width=640&crop=smart&auto=webp&s=abd521b9d2ba661712238cb427a5f0e52101ade5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?width=960&crop=smart&auto=webp&s=b2de6d1343a7070606226a69e72abc2b9b65c2a2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?width=1080&crop=smart&auto=webp&s=918380af84f5e526b1067e2ed0c7c1859ec87717', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Igl7xX_NgL1Knnz9hM-MnYKMfMWrYJA2THmh1zGVe7c.jpg?auto=webp&s=b784291ca5c25ef8d884445e92427cb78436682b', 'width': 1200}, 'variants': {}}]} |
Releasing a super-simple memory project for LLMs / semantic search | 19 | Hey /r/LocalLLaMA,
tl;dr: check out the project @ [https://github.com/spyglass-search/memex](https://github.com/spyglass-search/memex)
Last week, I made a [post](https://www.reddit.com/r/LocalLLaMA/comments/14405do/building_a_supersimple_memory_service_for/) about open-sourcing this tool. I'm open-sourcing the code and hoping to get general feedback from the community to improve it for any local LLM projects people are working on!
To recap: One of the major frustrations I've had (and others too from posts I've seen) with building projects w/ LLMs is dealing with the complexity of chunking/embedding/vector dbs, especially if you're in the non-python world.
Here's a demo video that I made to demonstrate how it works. I add a text file to the memory and then use an LLM to ask questions about it right afterwards.
[Run service. Add data. Query from other apps or just use the API endpoints](https://reddit.com/link/14acy7w/video/5sxl1z20t86b1/player)
**Why use this vs embeddings/vector databases/etc.?**
If you're already familiar with that and comfortable with it, no need to switch.
1. If you're on a non-python stack or need something a little less resource intensive.
2. If you're relatively new to LLMs and just want to add text to a data store and get the right piece of text out for your LLM tool/project.
3. You want text-based semantic search without all the extra steps.
**How you can help**
This is super early stage!
If you're working on an LLM project, I'd love to see how you'd use this and if there's any additional functionality that would make your life easier. | 2023-06-15T20:47:43 | https://www.reddit.com/r/LocalLLaMA/comments/14acy7w/releasing_a_supersimple_memory_project_for_llms/ | andyndino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14acy7w | false | null | t3_14acy7w | /r/LocalLLaMA/comments/14acy7w/releasing_a_supersimple_memory_project_for_llms/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'CD7VanZwz3k_Vo6F2TTbwGnDsM43dI400RAClk2pdW0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?width=108&crop=smart&auto=webp&s=6bf10cc78985d4b7bacedf6bff9ff0e745ef9b8a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?width=216&crop=smart&auto=webp&s=73f93fd6e7ac9ae1b709fea86aaee6a16be40981', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?width=320&crop=smart&auto=webp&s=b8e28d3c60cbc346c71d7b8c4cd7e777d7a77f57', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?width=640&crop=smart&auto=webp&s=b97bdf83193359ed6faf76335329c3a8bc8e27c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?width=960&crop=smart&auto=webp&s=86105aeeb1737e7dfbebbd22b36c09a788a1308a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?width=1080&crop=smart&auto=webp&s=d797a865ee414bda5d5ba74ba5ee6cb9e91ebbe4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Eux4IvMBk6sTY0WN1shuNReaqBlBSarm7ARB0TDyeCE.jpg?auto=webp&s=78011bdad504861cd2d56b255c262ef910535490', 'width': 1200}, 'variants': {}}]} |
Preset Arena: 17,205 comparisons between 241 different presets. Vote on the best ones! | 125 | Everyone here has probably been through the question "What parameters should I use to get the best responses?". Temperature, top_k, top_p, repetition_penalty, typical_p... Finding the ideal combination is extremely difficult.
To tackle this problem, I have come up with the following experiment: comparing thousands of pairs of responses for the same prompt but different presets, and then computing the resulting elo scores for the presets. Just like lmsys did in their [Chatbot Arena Leaderboard](https://lmsys.org/blog/2023-05-25-leaderboard/), but for *presets* instead of models.
I have divided the prompts for the experiment into two categories:
- Instruct: 8465 instructions from the WizardLM_evol_instruct_70k dataset.
- Chat: 8740 conversations from the soda dataset (the #1 conversational dataset on Hugging Face). I have called the characters "Friend" and "You", and have built prompts consisting of the first 4 messages. The 5th one is generated by the model.
These models were used:
- Instruct prompts: Vicuna 13b v1.1 (GPTQ, 4-bit, 128g). This is a model that has ranked well on many leaderboards, and I have been using it for a while with good results.
- Chat prompts: LLaMA 13b (GPTQ, 4-bit, 128g). I find that the base LLaMA gives more natural and human-like responses during conversations.
It took me around 36 hours to generate the ~34000 completions on my RTX 3090 using the text-generation-webui API.
Now I need help categorizing the best responses. I have rented a Linux server and put together a "Preset Arena" website where anyone can vote.
# The arena is live here: https://oobabooga.github.io/arena/index.html
The final dataset will be shared on Hugging Face, including the prompts, responses, and votes.
Before voting, you can optionally enter an identifier like your reddit username or real name. The top voters will be acknowledged in the Hugging Face dataset card.
Some comments:
* The presets include special sampling techniques (Contrastive Search, Mirostat, Eta Sampling), as well as random combinations of the more common parameters. The full list can be found here: https://oobabooga.github.io/arena/presets.html
* Since the final dataset will contain pairs of outputs for the same prompt and a human preference label for each pair, it will in principle be possible to create a reward model for RLHF training based on it.
* I will regularly post progress updates in this thread.
-----
Updates (UTC time):
* 2023-06-16 00:01: 950 votes so far. This is going really well!
* 2023-06-16 02:31: 1260 votes. First preliminary results.
* 2023-06-16 04:02: 1421 votes.
* 2023-06-16 13:42: 2284 votes.
* 2023-06-16 15:44: 2535 votes.
* 2023-06-16 17:56: 2638 votes.
* 2023-06-16 23:59: 2952 votes. **Preliminary results update**: [preliminary results](https://oobabooga.github.io/arena/preliminary-results.html). | 2023-06-15T21:06:55 | https://www.reddit.com/r/LocalLLaMA/comments/14adfw2/preset_arena_17205_comparisons_between_241/ | oobabooga4 | self.LocalLLaMA | 2023-06-17T00:00:12 | 0 | {} | 14adfw2 | false | null | t3_14adfw2 | /r/LocalLLaMA/comments/14adfw2/preset_arena_17205_comparisons_between_241/ | false | false | self | 125 | {'enabled': False, 'images': [{'id': 'Q1MF8IN_UA9pU4tqvD1hEdePlazYPLTs893pR_vAxGU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=108&crop=smart&auto=webp&s=563cdab161ef7a0db1d99480a17e1e0a964713a8', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=216&crop=smart&auto=webp&s=f1e49ccecddfa4224d3b0a84c02a2f4e91fdbe23', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=320&crop=smart&auto=webp&s=433ccbd98e6312cd473b7462d9b2795647e68b22', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=640&crop=smart&auto=webp&s=1af23e81a95a45bb6c1c4137b61cac670f2817d9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=960&crop=smart&auto=webp&s=ea529d168b43832467d3edd73f14c041a334f395', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=1080&crop=smart&auto=webp&s=e32dc82175c894fb2626e6595baa25c1a27f2720', 'width': 1080}], 'source': {'height': 1138, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?auto=webp&s=b48765733ccaee4211093fc1887b8e3e1484d509', 'width': 1138}, 'variants': {}}]} |
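For readers curious about the scoring step, here is a minimal sketch of the standard Elo update applied to pairwise preset votes. The K-factor, starting rating, and example preset names are arbitrary illustrations, not the values used for the arena.

```python
def update_elo(ratings, winner, loser, k=32, base=1500):
    # Expected score of the winner from the rating gap, then move both ratings towards the result.
    ra, rb = ratings.get(winner, base), ratings.get(loser, base)
    expected_winner = 1 / (1 + 10 ** ((rb - ra) / 400))
    ratings[winner] = ra + k * (1 - expected_winner)
    ratings[loser] = rb - k * (1 - expected_winner)

ratings = {}
votes = [("Mirostat", "Contrastive Search"), ("Mirostat", "Eta Sampling")]  # made-up example votes
for winner, loser in votes:
    update_elo(ratings, winner, loser)
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```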
Why are LLM benchmarks so absurdly correlated? | 20 | I've been looking at jubilantjerry's excellent 'Combined LLM benchmarks' work ([https://www.reddit.com/r/LocalLLaMA/comments/13qj07n/sharing\_my\_comparison\_methodology\_for\_llm\_models/](https://www.reddit.com/r/LocalLLaMA/comments/13qj07n/sharing_my_comparison_methodology_for_llm_models/)), and the most striking thing about his chart is that if you sort all models by their overall score, *every model appears in the same rank order for every benchmark*.
The correlation is not just that GPT4 is in 1st place on 12 out of 12 tests, but that llama-65b is 5th place on 12 of 12, alpaca-7b is 16th place on 12 of 12, etc. No models outperform on individual benchmarks relative to their overall rank.
Zero ordinal variance on 33 models x 12 benchmarks is pretty striking. Why might this be?
I can think of three possible answers, but there may be others:
**1. Foundation Model similarity**: All of these models are produced in similar ways, with similar techniques, data sets, and tunings. They all occupy points on a spectrum that shades from better to worse, but with no other differences.
**2. Benchmark similarity**: The prompt->response pattern is central to the benchmarks, so the source of the prompts, and the measured outcome, are really just minor variations on a uniform test suite. Zero-shot Trivia QA is harder than few-shot HellaSwag, but they are testing the same kinds of behavior.
**3. Feedback loops**: LLM creators know these benchmarks and use them during training, so the models are designed to do well on these benchmarks. The benchmarks just repeat the training phase of the models.
4. ???
Do any of these explanations seem plausible? More than one? Can any be ruled out? Have I missed anything? | 2023-06-15T21:31:54 | https://www.reddit.com/r/LocalLLaMA/comments/14ae2xv/why_are_llm_benchmarks_so_absurdly_correlated/ | ClayShirky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ae2xv | false | null | t3_14ae2xv | /r/LocalLLaMA/comments/14ae2xv/why_are_llm_benchmarks_so_absurdly_correlated/ | false | false | self | 20 | null |
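One way to put a number on "absurdly correlated" is the average pairwise Spearman rank correlation between benchmarks. A sketch assuming a models-by-benchmarks score matrix; the numbers below are made up purely for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

# rows = models, columns = benchmarks; scores are invented placeholders
scores = np.array([
    [86.4, 95.3, 80.9],   # e.g. a GPT-4-class model
    [63.4, 84.2, 68.9],   # e.g. a 65B base model
    [28.9, 59.8, 45.2],   # e.g. a 7B fine-tune
    [21.0, 48.7, 39.1],   # e.g. a small baseline
])

rho, _ = spearmanr(scores)                      # benchmark-by-benchmark correlation matrix
upper = rho[np.triu_indices_from(rho, k=1)]     # unique off-diagonal pairs
print(f"mean pairwise Spearman rho: {upper.mean():.3f}")
```

If every model keeps the same rank on every benchmark, as in the chart, this mean is exactly 1.0.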
Langchain and self hosted LLaMA hosted API | 3 | What are the current best "no reinventing the wheel" approaches to have Langchain use an LLM through a locally hosted REST API, the likes of Oobabooga or [hyperonym/basaran](https://github.com/hyperonym/basaran) with streaming support for 4-bit GPTQ? | 2023-06-15T21:55:04 | https://www.reddit.com/r/LocalLLaMA/comments/14aeo8f/langchain_and_self_hosted_llama_hosted_api/ | 2muchnet42day | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aeo8f | false | null | t3_14aeo8f | /r/LocalLLaMA/comments/14aeo8f/langchain_and_self_hosted_llama_hosted_api/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'gFHYjON9kBqne0o-foYmyiaBdqO1BkPF2DRj0ogoTxg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?width=108&crop=smart&auto=webp&s=fea07ae3ab0bb62bc75b34910774a4c0619feecf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?width=216&crop=smart&auto=webp&s=a82e2c2973b8695c5547a135e4f2fef3797e70b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?width=320&crop=smart&auto=webp&s=3f76e017af382f4811a3b3b081310bc8beb34161', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?width=640&crop=smart&auto=webp&s=3357b486e1d1226172774cfda0e852872418db2e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?width=960&crop=smart&auto=webp&s=bb910faa55650b348e9d328428f61099a64a0369', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?width=1080&crop=smart&auto=webp&s=ce8032648c7802523394a5c23a317db4bbc62272', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VOlWMgzRN51j72aNawt6hTAKIaCcNWUEg_5PmWyYmX8.jpg?auto=webp&s=29bfa5df4818629250ae36f291681ab0e380dd68', 'width': 1200}, 'variants': {}}]} |
Airoboros 65b GGML is really good! | 112 | [https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GGML](https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GGML)
Haven't seen people talk about this release from thebloke, but it did a solid job with the joke that has stumped pretty much every other open source LLM since the beginning of my testing:
USER:
A man decides that we wants to rob a bank. He paints a banana black, then puts on a mask, and goes inside and points the banana at the bank teller. "I've got a gun, stick'em up!" He shouts at the clerk. The clerk looks at him, "I've been keeping my eyes peeled for a character like you." "Put your hands in the air!" Shouts the robber. "I thought about splitting when I saw you." Replied the clerk. "Put your hands in the air or I'll fire." The robber replies. "It'd be more dangerous if you just threw your gun on the floor, am I right?" Said the clerk. Why is this joke funny? Please explain your reasoning in detail.
ASSISTANT:
This joke is funny because it uses puns and wordplay to create a humorous situation where a man tries to rob a bank with a banana painted black, pretending it's a gun. The clerk recognizes the banana as fake but plays along, using puns like "I thought about splitting when I saw you" (referring to the banana being split in half) and "It'd be more dangerous if you just threw your gun on the floor" (referring to a slippery peel). The back-and-forth between the robber and clerk creates an absurd situation that makes us laugh at their misunderstanding of each other.
I've tried a bunch of other prompts on it, including creative writing prompts, and it has been consistently top tier. What are your experiences with it? Interested to hear. | 2023-06-16T00:48:33 | https://www.reddit.com/r/LocalLLaMA/comments/14ainpc/airoboros_65b_ggml_is_really_good/ | TheDreamSymphonic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ainpc | false | null | t3_14ainpc | /r/LocalLLaMA/comments/14ainpc/airoboros_65b_ggml_is_really_good/ | false | false | self | 112 | {'enabled': False, 'images': [{'id': 'CtpzRLMS6GlQQnv1zcIL_IW4nH6Hb6XpVXFHAgeld0k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?width=108&crop=smart&auto=webp&s=1a3a7ec11661e57b61ec3bdd14353a9b4fe71fda', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?width=216&crop=smart&auto=webp&s=a079718a0044e4faa1407e13ffb9bd04fadb6785', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?width=320&crop=smart&auto=webp&s=8306ebc4ca8e96eb2a82e2cfb82d854f66bc3933', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?width=640&crop=smart&auto=webp&s=783bd92935dadcf4cd944eb87b59f0c6bc3ccd18', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?width=960&crop=smart&auto=webp&s=8f9d6e5d60914c8722b0a7efcc88f19ce53ba166', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?width=1080&crop=smart&auto=webp&s=e4a0f8228add370a1c66042d8f7fa579f098d609', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/R2NpRx8QV3Z0krq0lsqPaorFLEPDDb-NAzWXGKNi9QY.jpg?auto=webp&s=04fd1b6d4960434cb9da29732f01a69ed72bb350', 'width': 1200}, 'variants': {}}]} |
More intelligent PDF parsers | 15 | Apologies if this is slightly off topic. I've been experimenting with LlamaIndex along with local models for a context-based question answering problem. One key issue I've been facing is PDF parsing, especially for tabular information.
Does anyone know of the current state of the art (reproducible code, not just papers) for semi-intelligent PDF parsing solutions that might be even slightly better than regular vanilla pypdf? | 2023-06-16T01:16:51 | https://www.reddit.com/r/LocalLLaMA/comments/14aj8pe/more_intelligent_pdf_parsers/ | darthstargazer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aj8pe | false | null | t3_14aj8pe | /r/LocalLLaMA/comments/14aj8pe/more_intelligent_pdf_parsers/ | false | false | self | 15 | null |
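For reference, the "vanilla pypdf" baseline mentioned above amounts to a few lines; it flattens tables into plain text, which is exactly the limitation the question is about. The file name is a placeholder.

```python
from pypdf import PdfReader

reader = PdfReader("report.pdf")  # placeholder path
text = "\n".join(page.extract_text() or "" for page in reader.pages)
print(text[:500])
```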
Official WizardCoder-15B-V1.0 Released! Can Achieve 59.8% Pass@1 on HumanEval! | 175 | * Today, the ***WizardLM Team*** has released their **Official** **WizardCoder-15B-V1.0** model trained with 78**k** evolved code instructions.
* The *WizardLM Team* will ***open-source all the code, data, models, and algorithms soon!***
* Paper: [https://arxiv.org/abs/2306.08568](https://arxiv.org/abs/2306.08568)
* The project repo: [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder)
* The official Twitter: [WizardLM\_AI](https://twitter.com/WizardLM_AI)
* HF Model: [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)
* Four online demo links:
1. [https://609897bc57d26711.gradio.app/](https://609897bc57d26711.gradio.app/)
2. [https://fb726b12ab2e2113.gradio.app/](https://fb726b12ab2e2113.gradio.app/)
3. [https://b63d7cb102d82cd0.gradio.app/](https://b63d7cb102d82cd0.gradio.app/)
4. [https://f1c647bd928b6181.gradio.app/](https://f1c647bd928b6181.gradio.app/)
(We will update the demo links in our [github](https://github.com/nlpxucan/WizardLM/tree/main).)
**Comparing WizardCoder with the Closed-Source Models.**
🔥 The following figure shows that **our WizardCoder attains the third position** in the [HumanEval benchmark](https://www.reddit.com/r/LocalLLaMA/comments/1469343/hi_folks_back_with_an_update_to_the_humaneval/), surpassing Claude-Plus (59.8 vs. 53.0) and Bard (59.8 vs. 44.5). Notably, our model exhibits a substantially smaller size compared to these models.
https://preview.redd.it/2s1pzshsr96b1.png?width=2315&format=png&auto=webp&s=b8ff5aa284a9f1a86be57fb92c66cff98993ea41
❗Note: In this study, we copy the scores for HumanEval and HumanEval+ from the [LLM-Humaneval-Benchmarks](https://github.com/my-other-github-account/llm-humaneval-benchmarks). Notably, all the mentioned models generate code solutions for each problem utilizing a **single attempt**, and the resulting pass rate percentage is reported. Our WizardCoder generates answers using greedy decoding and tests with the same [code](https://github.com/evalplus/evalplus).
​
**Comparing WizardCoder with the Open-Source Models.**
The following table clearly demonstrates that our WizardCoder exhibits a substantial performance advantage over all the open-source models.
**❗If you are confused by the different scores of our model (57.3 and 59.8), please check the Notes.**
https://preview.redd.it/x502zegur96b1.png?width=622&format=png&auto=webp&s=74f8d79557b6de8a4976c17c1f79f9f8a9cb376b
❗Note: The reproduced result of StarCoder on MBPP.
❗Note: Though PaLM is not an open-source model, we still include its results here.
❗Note: The above table conducts a comprehensive comparison of our WizardCoder with other models on the HumanEval and MBPP benchmarks. We adhere to the approach outlined in previous studies by generating **20 samples** for each problem to estimate the pass@1 score and evaluate it with the same [code](https://github.com/openai/human-eval/tree/master). The scores of GPT4 and GPT3.5 reported by [OpenAI](https://openai.com/research/gpt-4) are 67.0 and 48.1 (maybe these are the early version of GPT4&3.5). | 2023-06-16T01:27:49 | https://www.reddit.com/r/LocalLLaMA/comments/14ajglx/official_wizardcoder15bv10_released_can_achieve/ | cylaw01 | self.LocalLLaMA | 2023-06-17T13:01:14 | 0 | {} | 14ajglx | false | null | t3_14ajglx | /r/LocalLLaMA/comments/14ajglx/official_wizardcoder15bv10_released_can_achieve/ | false | false | 175 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
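For context on how the table's numbers are produced, the pass@1 scores above are estimated from 20 samples per problem using the standard unbiased pass@k estimator from the original HumanEval paper; a short reference sketch (not code from the WizardCoder repo):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: n = samples generated, c = samples that passed the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. 20 samples for one problem, 12 of which pass -> pass@1 estimate for that problem
print(pass_at_k(n=20, c=12, k=1))  # 0.6; averaging over all problems gives the benchmark score
```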
How is ExLlama so good? Can it be used with a more feature rich UI? | 46 | ExLlama uses way less memory and is much faster than AutoGPTQ or GPTQ-for-Llama, running on a 3090 at least. Does anyone know how to get it to work with Tavern or Kobold or Oobabooga? | 2023-06-16T02:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/14ak4yg/how_is_exllama_so_good_can_it_be_used_with_a_more/ | Excessive_Etcetra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ak4yg | false | null | t3_14ak4yg | /r/LocalLLaMA/comments/14ak4yg/how_is_exllama_so_good_can_it_be_used_with_a_more/ | false | false | self | 46 | null |
Do you wanna have a free Al agent? | 1 | [removed] | 2023-06-16T04:27:56 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14amz6s | false | null | t3_14amz6s | /r/LocalLLaMA/comments/14amz6s/do_you_wanna_have_a_free_al_agent/ | false | false | default | 1 | null |
||
Could a llama lora be merged into OpenLLama? | 1 | If they are the same architecture & size, it should be possible, right? Because lora trained on base llama can be merged into finetunes. | 2023-06-16T05:48:07 | https://www.reddit.com/r/LocalLLaMA/comments/14aofh5/could_a_llama_lora_be_merged_into_openllama/ | pokeuser61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aofh5 | false | null | t3_14aofh5 | /r/LocalLLaMA/comments/14aofh5/could_a_llama_lora_be_merged_into_openllama/ | false | false | self | 1 | null |
Robin V2 Launches: Achieves Unparalleled Performance on OpenLLM! | 1 | 2023-06-16T06:20:36 | https://www.reddit.com/r/machinelearningnews/comments/14am06w/robin_v2_launches_achieves_unparalleled/ | saintshing | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14ap05c | false | null | t3_14ap05c | /r/LocalLLaMA/comments/14ap05c/robin_v2_launches_achieves_unparalleled/ | false | false | default | 1 | null |
|
Prevent LLM from answering out of context questions | 3 | I am currently playing around with fastchat t5. I have a set of documents that is fed to the model based on the query through langchain. My current issue is that whenever there's an out of context question, fastchat is using it's pretrained knowledge to answer it whereas the expected output should be 'I don't know' or 'let's keep it relevant to <domain>'.
I have tried several different prompts but none of them work.
Any suggestions on how to solve this? | 2023-06-16T06:58:55 | https://www.reddit.com/r/LocalLLaMA/comments/14apo7n/prevent_llm_from_answering_out_of_context/ | perseus_14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14apo7n | false | null | t3_14apo7n | /r/LocalLLaMA/comments/14apo7n/prevent_llm_from_answering_out_of_context/ | false | false | self | 3 | null |
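One thing that sometimes helps with the problem described above is making the refusal behaviour explicit in the prompt that wraps the retrieved documents, rather than hoping the model declines on its own. A minimal sketch of such a guarded prompt, built as a plain string (the template wording and function names are illustrative, not LangChain or FastChat API):

```python
# Sketch of a guarded RAG prompt: the model is told to answer only from the
# retrieved context and to fall back to a fixed refusal otherwise.
GUARDED_TEMPLATE = """Answer the question using ONLY the context below.
If the answer is not in the context, reply exactly:
"I don't know, let's keep it relevant to the provided documents."

Context:
{context}

Question: {question}
Answer:"""

def build_prompt(context_chunks, question):
    return GUARDED_TEMPLATE.format(context="\n\n".join(context_chunks),
                                   question=question)

print(build_prompt(["FastChat-T5 is a 3B-parameter chat model."],
                   "Who won the 2010 World Cup?"))
```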
Is there a free local analogue of the text translator DeepL in the form of an LLM? | 16 |
1. To translate not literally by words, but in the context of a sentence (paragraph).
2. To be able to download databases for required languages only.
3. integrated with WebUI | 2023-06-16T08:09:01 | https://www.reddit.com/r/LocalLLaMA/comments/14aqvrw/is_there_a_free_local_analogue_of_the_text/ | nihnuhname | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aqvrw | false | null | t3_14aqvrw | /r/LocalLLaMA/comments/14aqvrw/is_there_a_free_local_analogue_of_the_text/ | false | false | self | 16 | null |
Best uncensored llm to run locally on free google colab? | 1 | [removed] | 2023-06-16T08:26:43 | https://www.reddit.com/r/LocalLLaMA/comments/14ar639/best_uncensored_llm_to_run_locally_on_free_google/ | Murky-Cheek-7554 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ar639 | false | null | t3_14ar639 | /r/LocalLLaMA/comments/14ar639/best_uncensored_llm_to_run_locally_on_free_google/ | false | false | default | 1 | null |
LLM user support | 0 | [removed] | 2023-06-16T08:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/14arivh/llm_user_support/ | One_Creator_One | self.LocalLLaMA | 2023-06-16T11:02:22 | 0 | {} | 14arivh | false | null | t3_14arivh | /r/LocalLLaMA/comments/14arivh/llm_user_support/ | false | false | default | 0 | null |
Can we evade LLM detection by using dynamic temperature across text? Most detectors pass your text through their LLM and check if the temperature is constant, and maybe look for other sampling artifacts. So making temperature vary, we make it harder to detect our text. | 25 | Can we evade LLM detection by using dynamic temperature across text? Most detectors pass your text through their LLM and check if the temperature is constant, and maybe look for other sampling artifacts. So making temperature vary, we make it harder to detect our text. | 2023-06-16T09:23:09 | https://www.reddit.com/r/LocalLLaMA/comments/14as3x3/can_we_evade_llm_detection_by_using_dynamic/ | NancyAurum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14as3x3 | false | null | t3_14as3x3 | /r/LocalLLaMA/comments/14as3x3/can_we_evade_llm_detection_by_using_dynamic/ | false | false | self | 25 | null |
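Whether this would actually fool any given detector is an open question, but the mechanism in the post above is simple to sketch: draw a fresh temperature for every token instead of keeping it fixed. A toy illustration with plain softmax sampling (not tied to any particular inference library):

```python
# Toy per-token temperature jitter: each sampling step uses a different
# temperature, so the text is not generated at one constant temperature.
import math
import random

def sample_jittered(logits, base_temp=0.8, jitter=0.4):
    temp = max(0.05, base_temp + random.uniform(-jitter, jitter))
    scaled = [x / temp for x in logits]
    m = max(scaled)
    weights = [math.exp(x - m) for x in scaled]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

# In a real decoder loop, `logits` would be the model's output for the
# current position; here we just demo the sampler on fixed numbers.
print(sample_jittered([2.0, 1.0, 0.1, -1.0]))
```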
Can anyone help me with this problem in private gpt | 1 | [removed] | 2023-06-16T09:26:36 | Curious-Ninja150627 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14as62r | false | null | t3_14as62r | /r/LocalLLaMA/comments/14as62r/can_anyone_help_me_with_this_problem_in_private/ | false | false | default | 1 | null |
|
Help me with this private gpt Error | 1 | [removed] | 2023-06-16T09:29:48 | Curious-Ninja150627 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14as7zd | false | null | t3_14as7zd | /r/LocalLLaMA/comments/14as7zd/help_me_with_this_private_gpt_error/ | false | false | default | 1 | null |
|
What do you think is the best llm to use as a teacher | 2 | [removed] | 2023-06-16T09:42:47 | https://www.reddit.com/r/LocalLLaMA/comments/14asgaa/what_do_you_think_is_the_best_llm_to_use_as_a/ | Redbeardybeard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14asgaa | false | null | t3_14asgaa | /r/LocalLLaMA/comments/14asgaa/what_do_you_think_is_the_best_llm_to_use_as_a/ | false | false | default | 2 | null |
Compiling Falcon.cpp on windows | 5 | Has anyone managed to compile this https://github.com/jploski/ggml/tree/falcon40b to an exe on Windows? No matter what I try, I cannot get it to compile to main.exe.
The error output looks like this (much larger, but similar build issues throughout), using the CMake GUI and Chocolatey's make package for Windows.
2>E:\ggml-falcon40b\examples\falcon\main.cpp(745,28): warning C4267: '+=': conversion from 'size_t' to 'int', possible loss of data
2>E:\ggml-falcon40b\examples\falcon\main.cpp(776,30): warning C4267: '+=': conversion from 'size_t' to 'int', possible loss of data
2>Done building project "falcon.vcxproj" -- FAILED.
After I compile it I’m hoping to use https://huggingface.co/TheBloke/falcon-40b-instruct-GGML | 2023-06-16T09:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/14aslu6/compiling_falconcpp_on_windows/ | ihaag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aslu6 | false | null | t3_14aslu6 | /r/LocalLLaMA/comments/14aslu6/compiling_falconcpp_on_windows/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'LhXGBGxcvevyJYiKZhH9d4c-Ry850cspL4r0h_gl7XQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?width=108&crop=smart&auto=webp&s=bc39ed4ae8e213f34a6c96ce101221a0368d3c68', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?width=216&crop=smart&auto=webp&s=8ee25f2c35dbfa8a5f7ab958fef688e8b3470fa9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?width=320&crop=smart&auto=webp&s=d932d27548cf1070d2ef68cab62d7c28dcacfaae', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?width=640&crop=smart&auto=webp&s=b322052b1cca578ddf9b192db28109e2ce6a2118', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?width=960&crop=smart&auto=webp&s=b800378039ac887915e7df73c1c8fd4cc80c42e1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?width=1080&crop=smart&auto=webp&s=3bc494f349b6c68db0a758e9da907ce2551a8b34', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ceGnPQ8D0MYLaZTrv8rOYBt5-sTGR_12n3r59PsHjWI.jpg?auto=webp&s=33cffd77ca62a01c0ce9f3259efbbfda7f46729c', 'width': 1200}, 'variants': {}}]} |
The best model for content generation? | 2 | Hey, I wanted to know which model would you guys suggest is best for content generation. | 2023-06-16T10:07:17 | https://www.reddit.com/r/LocalLLaMA/comments/14asvzd/the_best_model_for_content_generation/ | TheArchivist314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14asvzd | false | null | t3_14asvzd | /r/LocalLLaMA/comments/14asvzd/the_best_model_for_content_generation/ | false | false | self | 2 | null |
airoboros gpt4 v1.2 | 81 | Thanks for testing the last round and giving feedback, another one is ready to test:
* [https://huggingface.co/jondurbin/airoboros-65b-gpt4-1.2](https://huggingface.co/jondurbin/airoboros-65b-gpt4-1.2)
* [https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.2](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.2)
* [https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2)
* [https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.2](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.2)
Dataset here, feel free to use as desired but it was generated by GPT-4 so take a peek at their ToS: [https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2)
It should be better at coding. I added a few hundred coding examples with " PLAINFORMAT" at the end that skips the backticks/explanation/etc. to just print the code. It doesn't always work but often does.
I also added several thousand Orca style reasoning prompts, including some with the idea of parallelization (e.g. "It takes 15 minutes to bake a dozen cookies. How long will it take to bake 24 cookies?")
Also added \~1,000 role-playing examples.
Still has the custom contextual question answering stuff with really low hallucinations and referencing.
The great and powerful u/TheBloke has already quantized the 33b and 65b but I don't think he's done the 13b or 7b yet ;) /hint /please
Feedback, please, for the next round of improvements!
Also, updated alpaca\_eval (pr pending) and llm-jeopardy scores:
https://preview.redd.it/eqhf3uuvxc6b1.png?width=810&format=png&auto=webp&s=38e2e635d5b5bb6c622f1b34e37f879d33c1d5a2
https://preview.redd.it/ceui8vuvxc6b1.png?width=1488&format=png&auto=webp&s=cd09c059e8acb8b2f9a5e6efc9e0e5a698651d0f | 2023-06-16T10:34:27 | https://www.reddit.com/r/LocalLLaMA/comments/14atdex/airoboros_gpt4_v12/ | JonDurbin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14atdex | false | null | t3_14atdex | /r/LocalLLaMA/comments/14atdex/airoboros_gpt4_v12/ | false | false | 81 | {'enabled': False, 'images': [{'id': 'hjs5sZRSjODTAP9HiMGqQKQDjnjJ4XWJd-J-sPTJc_U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?width=108&crop=smart&auto=webp&s=cdcd4cb494639aeab3bede204f6a48d1d99a67d1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?width=216&crop=smart&auto=webp&s=f0a453886bceb606d2b28d7bcd6cf833af6ef1df', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?width=320&crop=smart&auto=webp&s=4f9584739fa016910fb50396187413c869674c04', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?width=640&crop=smart&auto=webp&s=c1c2c303b47bc0055d7aa31067b4567257fd7338', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?width=960&crop=smart&auto=webp&s=4d3ced0a8feba7b9a08501d867830c622124c889', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?width=1080&crop=smart&auto=webp&s=295b1138217a2404f0ce55613153221e06bc40d9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/b-2QeG40weNXA8RzcCwO3KY1CHaiARyicg2EW-2TgLQ.jpg?auto=webp&s=3240bcca7bbe6249ca30294614b4fba6354976bf', 'width': 1200}, 'variants': {}}]} |
|
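For anyone who wants to try the PLAINFORMAT behaviour mentioned in the post above, it is just a literal suffix appended to a coding instruction; a tiny illustration (the prompt wording is my own, only the " PLAINFORMAT" suffix comes from the post):

```python
# " PLAINFORMAT" appended to a coding instruction asks the model to emit bare
# code without backticks or explanation (it works often, not always).
instruction = "Write a Python function that reverses a string."

plain_prompt = instruction + " PLAINFORMAT"  # expect raw code only
verbose_prompt = instruction                 # expect code plus explanation

print(plain_prompt)
print(verbose_prompt)
```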
How are all these models different from each other | 26 | Like, what does Wizard do that Vicuna couldn't, etc.?
Also with so many different models can we have a section in the wiki explaining what specific thing each is good at | 2023-06-16T11:04:28 | https://www.reddit.com/r/LocalLLaMA/comments/14atx3q/how_are_all_these_models_different_from_each_other/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14atx3q | false | null | t3_14atx3q | /r/LocalLLaMA/comments/14atx3q/how_are_all_these_models_different_from_each_other/ | false | false | self | 26 | null |
I was able to load WizardLM-30b onto my RTX 3090 | 22 | I am impressed: I could load the model into VRAM, though it took 800s to load, and it's really quick at inference. I gave it a few prompts, such as summarizing a document and writing emails, and it works well. But I wonder, are there any better models? I saw the HF leaderboard, but it seems like lots of models are missing.
Any models I should try out?
Was thinking of loading up: TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ
But I have seen some 65b models with 2 and 3 bit quantization. Anyone have success with those?
EDIT: found out why it loads so slowly. In WSL2, I/O speed between the native ext4 disk and the Windows disk is poor. You have to move the repo to the native ext4 disk in WSL2 and it's significantly faster.
Source: https://github.com/microsoft/WSL/issues/4197
Autogptq loaded the 30b model in 37s. Thank you all. | 2023-06-16T11:58:37 | https://www.reddit.com/r/LocalLLaMA/comments/14auy1z/i_was_able_to_load_wizardlm30b_onto_my_rtx_3090/ | klop2031 | self.LocalLLaMA | 2023-06-18T14:52:25 | 0 | {} | 14auy1z | false | null | t3_14auy1z | /r/LocalLLaMA/comments/14auy1z/i_was_able_to_load_wizardlm30b_onto_my_rtx_3090/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'zgC8GKkQBGteVKfyG-suakesKihDSh27INw1Hgkj5Oo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s9OVUUW_8RxZmj8qtV36Hr32p7FN4XKm4ZvErv2HZ28.jpg?width=108&crop=smart&auto=webp&s=b737df0e747f7673ec07665e18bb1c14197657ab', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s9OVUUW_8RxZmj8qtV36Hr32p7FN4XKm4ZvErv2HZ28.jpg?width=216&crop=smart&auto=webp&s=d086699b7c33424817993d71b27d392c4053c990', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s9OVUUW_8RxZmj8qtV36Hr32p7FN4XKm4ZvErv2HZ28.jpg?width=320&crop=smart&auto=webp&s=d38541dd0fb917ca94d70d7f3c265c7f7c369208', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s9OVUUW_8RxZmj8qtV36Hr32p7FN4XKm4ZvErv2HZ28.jpg?width=640&crop=smart&auto=webp&s=f26f403ad7cadd3ac8358761971b0098dea0e82a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s9OVUUW_8RxZmj8qtV36Hr32p7FN4XKm4ZvErv2HZ28.jpg?width=960&crop=smart&auto=webp&s=c12f92a322f1c0c0564885071037babb5f29042c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s9OVUUW_8RxZmj8qtV36Hr32p7FN4XKm4ZvErv2HZ28.jpg?width=1080&crop=smart&auto=webp&s=abd3709b8f73321f2a95439e1211b093a2b64c9d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s9OVUUW_8RxZmj8qtV36Hr32p7FN4XKm4ZvErv2HZ28.jpg?auto=webp&s=80eb818475a376132b032fa01a459c6aac8af54a', 'width': 1200}, 'variants': {}}]} |
Replace pinecone. | 28 | I am impressed with the service https://github.com/Mintplex-Labs/anything-llm but I don't want to use pinecone, as it makes me dependent on someone else's service. can I replace pinecone and if so, how do I do it? Thanks! | 2023-06-16T12:04:59 | https://www.reddit.com/r/LocalLLaMA/comments/14av30w/replace_pinecone/ | One_Creator_One | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14av30w | false | null | t3_14av30w | /r/LocalLLaMA/comments/14av30w/replace_pinecone/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'QESTWT5GMa_WkFsdEhn1_2EHGzZjQq2UCxpW6FKE-nw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jy9XImdAfzUxWGr96OoUt8mk7x_pxHfh_tww131nMS4.jpg?width=108&crop=smart&auto=webp&s=2c0464c24f7a3fe17b534591bf843aa02de61175', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jy9XImdAfzUxWGr96OoUt8mk7x_pxHfh_tww131nMS4.jpg?width=216&crop=smart&auto=webp&s=edc31e3d206bd6123abdec64522fe5cac07a0271', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jy9XImdAfzUxWGr96OoUt8mk7x_pxHfh_tww131nMS4.jpg?width=320&crop=smart&auto=webp&s=694653997d6bb1d048f2e6e00a8fe98bc6f766f0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jy9XImdAfzUxWGr96OoUt8mk7x_pxHfh_tww131nMS4.jpg?width=640&crop=smart&auto=webp&s=73e2169e10fb5c1584de25d670e04af10693f092', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jy9XImdAfzUxWGr96OoUt8mk7x_pxHfh_tww131nMS4.jpg?width=960&crop=smart&auto=webp&s=377e559a3a7b0f134f506d5db885f11649759c39', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jy9XImdAfzUxWGr96OoUt8mk7x_pxHfh_tww131nMS4.jpg?width=1080&crop=smart&auto=webp&s=ee2dbabb03d658cc0bc8d863d8869a4b89cc6e4a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jy9XImdAfzUxWGr96OoUt8mk7x_pxHfh_tww131nMS4.jpg?auto=webp&s=a0e75ec64e4af9d83be5511b8ef48f7e0cb2e748', 'width': 1200}, 'variants': {}}]} |
help with Oobaboga training please | 1 | [removed] | 2023-06-16T12:13:02 | https://www.reddit.com/r/LocalLLaMA/comments/14av8vf/help_with_oobaboga_training_please/ | Competitive_Fox7811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14av8vf | false | null | t3_14av8vf | /r/LocalLLaMA/comments/14av8vf/help_with_oobaboga_training_please/ | false | false | default | 1 | null |
which current LLM is most advanced/modern? which one will it be in next few months? | 15 | Not talking about size.
It can be a model which is small but uses different technologies. | 2023-06-16T12:49:28 | https://www.reddit.com/r/LocalLLaMA/comments/14aw0on/which_current_llm_is_most_advancedmodern_which/ | Sofronyami | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14aw0on | false | null | t3_14aw0on | /r/LocalLLaMA/comments/14aw0on/which_current_llm_is_most_advancedmodern_which/ | false | false | self | 15 | null |
best Llama model for Mac M1? | 9 | I have a Mac mini M1 (256GB storage / 8GB RAM).
What is the best instruct llama model I can run smoothly on this machine without burning it? | 2023-06-16T13:51:01 | https://www.reddit.com/r/LocalLLaMA/comments/14axesy/best_llama_model_for_mac_m1/ | buxrmp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14axesy | false | null | t3_14axesy | /r/LocalLLaMA/comments/14axesy/best_llama_model_for_mac_m1/ | false | false | self | 9 | null |
NER but with LLM | 1 | [removed] | 2023-06-16T14:22:34 | https://www.reddit.com/r/LocalLLaMA/comments/14ay6bq/ner_but_with_llm/ | JollyAnteater5339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ay6bq | false | null | t3_14ay6bq | /r/LocalLLaMA/comments/14ay6bq/ner_but_with_llm/ | false | false | default | 1 | null |
Could a model not be trained by a decentralized network? Like Seti @ home or kinda-sorta like bitcoin. Petals accomplishes this somewhat, but if raw computer power is the only barrier to open-source I'd be happy to try organizing decentalized computing efforts | 55 | Title. If enough people could be reached with a model that is trainable with decentralized computing, in theory, could this not create an (unlikely, but possible) avenue for an open-source model as powerful as Meta's or OpenAI's?
Sorry if this might be a naive question | 2023-06-16T14:56:20 | https://www.reddit.com/r/LocalLLaMA/comments/14az00q/could_a_model_not_be_trained_by_a_decentralized/ | doublemint2202 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14az00q | false | null | t3_14az00q | /r/LocalLLaMA/comments/14az00q/could_a_model_not_be_trained_by_a_decentralized/ | false | false | self | 55 | null
PDF form parser with non-text inputs | 1 | [removed] | 2023-06-16T15:30:04 | https://www.reddit.com/r/LocalLLaMA/comments/14azuck/pdf_form_parser_with_nontext_inputs/ | harold__hadrada | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14azuck | false | null | t3_14azuck | /r/LocalLLaMA/comments/14azuck/pdf_form_parser_with_nontext_inputs/ | false | false | default | 1 | null |
Issue Loading 13B Model in Ooba Booga on RTX 4070 with 12GB VRAM | 5 | Hello everyone,
I am currently encountering an issue with the Ooba Booga software on my Windows 10 PC. I'm running the program on a machine with an RTX 4070 GPU, which has 12GB of VRAM.
Whenever I'm trying to load any 13B LLaMA models, I am consistently receiving an error that seems to be related to memory allocation. The error message is:
RuntimeError: \[enforce fail at C:\\cb\\pytorch\_1000000000000\\work\\c10\\core\\impl\\alloc\_cpu.cpp:72\] data. DefaultCPUAllocator: not enough memory: you tried to allocate 35389440 bytes.
​
This is puzzling because, from what I understand, a 13B model should require less than 10GB of VRAM, and my GPU should be more than capable of handling this.
I am able to load 7B models without any issue.
I've ensured that there are no other significant processes running that could be using up VRAM, and I've got the latest Nvidia drivers installed. My Oobabooga setup is also up to date.
Has anyone else encountered this problem or does anyone have any suggestions on what might be going wrong and how to fix it?
Any help would be greatly appreciated.
Thank you! | 2023-06-16T15:35:46 | https://www.reddit.com/r/LocalLLaMA/comments/14azzij/issue_loading_13b_model_in_ooba_booga_on_rtx_4070/ | alexthai7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14azzij | false | null | t3_14azzij | /r/LocalLLaMA/comments/14azzij/issue_loading_13b_model_in_ooba_booga_on_rtx_4070/ | false | false | self | 5 | null |
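One detail worth noting about the traceback in the post above: `DefaultCPUAllocator` failing means system RAM (or the Windows pagefile) ran out while the weights were being staged on the CPU, not the 4070's 12GB of VRAM. A quick way to check headroom before loading (assumes the third-party `psutil` package; the 16GB threshold is an arbitrary rule of thumb, not a hard requirement):

```python
# Check free system RAM before loading: the DefaultCPUAllocator error above
# is about CPU-side memory, not VRAM.
import psutil  # pip install psutil

def report_ram(min_free_gb: float = 16.0) -> None:
    vm = psutil.virtual_memory()
    free_gb = vm.available / 1024**3
    print(f"Available RAM: {free_gb:.1f} GiB of {vm.total / 1024**3:.1f} GiB")
    if free_gb < min_free_gb:
        print("Probably too little free RAM/pagefile to stage a 13B model.")

report_ram()
```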
Why is there not as much cutting edge research into embeddings? | 35 | I know LLMs are flashier, but embeddings are in the same vein, and arguably have more established real world usages right now. Why is there not nearly as many embeddings models and research coming out? The SOTA is still `mpnet-base-v2` (correct me if I'm wrong) which is almost 2 years old now. | 2023-06-16T15:52:06 | https://www.reddit.com/r/LocalLLaMA/comments/14b0dta/why_is_there_not_as_much_cutting_edge_research/ | srvhfvakc | self.LocalLLaMA | 2023-06-16T15:56:19 | 0 | {} | 14b0dta | false | null | t3_14b0dta | /r/LocalLLaMA/comments/14b0dta/why_is_there_not_as_much_cutting_edge_research/ | false | false | self | 35 | null |
Question on fine tuning with LORA | 8 | I tried to fine-tune a 7B model using the oobabooga text GUI. It went well.
When I loaded it on my machine, I was using the quantised version of the model. Llama.cpp gave me a warning saying that using LoRAs with quantised models may produce bad quality.
Does anyone know if this is significant? Should I use QLora to train on a quantised model directly instead? | 2023-06-16T16:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/14b0qkt/question_on_fine_tuning_with_lora/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b0qkt | false | null | t3_14b0qkt | /r/LocalLLaMA/comments/14b0qkt/question_on_fine_tuning_with_lora/ | false | false | self | 8 | null |
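The warning in the post above comes from applying a LoRA on top of already-quantised weights. A common workaround is to merge the adapter into the full-precision base first and quantise the merged model afterwards; a rough sketch assuming the adapter was trained against the fp16 base and that `transformers` and `peft` are installed (all paths are placeholders):

```python
# Sketch: bake the LoRA into the fp16 base weights, save the merged model,
# and only then quantise it (GGML/GPTQ) instead of layering the LoRA on top
# of a quantised model. Paths are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b-fp16",
                                            torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "path/to/my-lora")
merged = model.merge_and_unload()

merged.save_pretrained("llama-7b-merged")
AutoTokenizer.from_pretrained("path/to/llama-7b-fp16").save_pretrained("llama-7b-merged")
# then convert/quantise "llama-7b-merged" with the usual llama.cpp tooling
```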
RWKV Pile+ seems to be training on far more tokens than any LLM ever has | 93 | Has anyone else noticed [this](https://huggingface.co/BlinkDL/rwkv-4-pileplus)? Seems to be in-progress
Apparently it's being *finetuned* on 1.7T tokens (RedPajama + subset of The Pile v2), *after* already training on the ~800GB of The Pile. As far as public knowledge goes, that's *far* more than any LLM has ever been trained on. I'm excited to see how it will perform | 2023-06-16T16:21:22 | https://www.reddit.com/r/LocalLLaMA/comments/14b147o/rwkv_pile_seems_to_be_training_on_far_more_tokens/ | pedantic_pineapple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b147o | false | null | t3_14b147o | /r/LocalLLaMA/comments/14b147o/rwkv_pile_seems_to_be_training_on_far_more_tokens/ | false | false | self | 93 | {'enabled': False, 'images': [{'id': '0gNI3KsG_0eps01yrutV_yvv3UTcmipaOX5BWN-oTVY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EEw0WAaGvRm-5GaW4ew8WSjHJ45CBmjp2FsLB3IOPOQ.jpg?width=108&crop=smart&auto=webp&s=6fbcda1a2d2fc6cab365d0a844c26ef4274911ba', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EEw0WAaGvRm-5GaW4ew8WSjHJ45CBmjp2FsLB3IOPOQ.jpg?width=216&crop=smart&auto=webp&s=aeed29e0c8070df3c3085a6c13cb555f517b16ff', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EEw0WAaGvRm-5GaW4ew8WSjHJ45CBmjp2FsLB3IOPOQ.jpg?width=320&crop=smart&auto=webp&s=7ed36779f94a6039663ad9277ffec0f91908935d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EEw0WAaGvRm-5GaW4ew8WSjHJ45CBmjp2FsLB3IOPOQ.jpg?width=640&crop=smart&auto=webp&s=430868c0c4cad89d79b10b54ee6a1d570e12dbc1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EEw0WAaGvRm-5GaW4ew8WSjHJ45CBmjp2FsLB3IOPOQ.jpg?width=960&crop=smart&auto=webp&s=547c1a302106935a16f918155c6c523566e854df', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EEw0WAaGvRm-5GaW4ew8WSjHJ45CBmjp2FsLB3IOPOQ.jpg?width=1080&crop=smart&auto=webp&s=4ea025712dd86c053ff4770d10dc93da0bad6865', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EEw0WAaGvRm-5GaW4ew8WSjHJ45CBmjp2FsLB3IOPOQ.jpg?auto=webp&s=12a749a4428e3d0e4b438d711ecf788eab6e9cdd', 'width': 1200}, 'variants': {}}]} |
WizardCoder-15B-1.0 vs ChatGPT coding showdown: 4 webapps * 3 frameworks | 56 | Hello /r/LocalLLaMa!
With yesterday's release of WizardCoder-15B-1.0 (see [official thread](https://www.reddit.com/r/LocalLLaMA/comments/14ajglx/official_wizardcoder15bv10_released_can_achieve/) and [less official thread](https://www.reddit.com/r/LocalLLaMA/comments/149ir49/new_model_just_dropped_wizardcoder15bv10_model/)) we finally have an open model that passes my [can-ai-code benchmark](https://huggingface.co/spaces/mike-ravkine/can-ai-code-results)
With the basics out of the way, we are finally ready to do some real LLM coding!
I have created an [llm-webapps](https://github.com/the-crypt-keeper/llm-webapps) repository with the boilerplate necessary to:
* define requirements for simple web-apps
* format those requirements into language, framework and model-specific prompts
* run the prompts through LLM
* visualize the results
OK enough with the boring stuff, [CLICK HERE TO PLAY WITH THE APPS](https://huggingface.co/spaces/mike-ravkine/llm-webapps-results)
On mobile the sidebar is hidden by default; click the chevron on the top left to select which model, framework and project you want to try.
Lots of interesting stuff in here, drop your thoughts and feedback in the comments. If you're interested in repeating this experiment or trying your own experiments or otherwise hacking on this hit up the [llm-webapps GitHub](https://github.com/the-crypt-keeper/llm-webapps). | 2023-06-16T16:50:36 | https://www.reddit.com/r/LocalLLaMA/comments/14b1tsw/wizardcoder15b10_vs_chatgpt_coding_showdown_4/ | kryptkpr | self.LocalLLaMA | 2023-06-16T17:31:50 | 0 | {} | 14b1tsw | false | null | t3_14b1tsw | /r/LocalLLaMA/comments/14b1tsw/wizardcoder15b10_vs_chatgpt_coding_showdown_4/ | false | false | self | 56 | {'enabled': False, 'images': [{'id': 'QyV2S4rh-pMbnJML_R7bbJcWB2RH98giH_Z_P45_3vo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=108&crop=smart&auto=webp&s=a058206456911026ebb90e4a841787cd0c5c75ec', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=216&crop=smart&auto=webp&s=a0443ae4570d3dda40ce608821d828c31efa9f2e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=320&crop=smart&auto=webp&s=545ca59a9a6fb473a2e1712e0ad3b1b8380628c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=640&crop=smart&auto=webp&s=0de6bb9ce3e4101c9452d707bb74e4d14fc21bb2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=960&crop=smart&auto=webp&s=ec5fa8d6f9f1500e545661940ce5af088320a871', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?width=1080&crop=smart&auto=webp&s=afd418dec5f3c16c2d0ee802736fea7f3862246e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/svHHAck7dSjoulgoGopBQmcysvgwynJ3DcKwn8sX7Zc.jpg?auto=webp&s=164db6523d618406ece148c52d1007ae5deb362c', 'width': 1200}, 'variants': {}}]} |
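The middle step of the pipeline described above (turning one set of requirements into per-framework, per-model prompts) is the part most people will want to adapt; a rough sketch of what that expansion can look like (field names and wording are invented, not the actual llm-webapps schema):

```python
# Sketch of requirements -> per-framework, per-model prompt jobs. Names and
# wording are invented; see the llm-webapps repo for the real templates.
from itertools import product

requirements = {
    "name": "todo-list",
    "spec": "A single-page to-do list with add, complete and delete actions.",
}
frameworks = ["React", "Vue", "vanilla JS"]
models = ["WizardCoder-15B", "ChatGPT"]

def make_prompt(req, framework):
    return (f"Build the following web app using {framework}.\n"
            f"App: {req['name']}\nRequirements: {req['spec']}\n"
            f"Return a single self-contained HTML file.")

jobs = [{"model": m, "framework": fw, "prompt": make_prompt(requirements, fw)}
        for m, fw in product(models, frameworks)]
print(len(jobs), "prompts to run")
```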
🦜🔗 Building Multi task AI agent with LangChain and using Aim to trace and visualize the executions.. Let's prompt the agent to: i. List 5 supernova facts, ii. Find the latest explosion, iii. Calculate years since the last explosion | 11 | 2023-06-16T17:28:12 | https://v.redd.it/w12wxtjfze6b1 | tatyanaaaaaa | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14b2q6n | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/w12wxtjfze6b1/DASHPlaylist.mpd?a=1695733629%2CYWI3N2RmYzVlZDVlZTgwZjMyMmZjYWZhNTA0Mjc2N2E3MmVlOWE2NzgwMjAxYTM0ZDM4NzkyOGVhY2E3ZmNlMA%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/w12wxtjfze6b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/w12wxtjfze6b1/HLSPlaylist.m3u8?a=1695733629%2CNjNmYjkxNmNkNzdkOTVlNmNmYmRkNjYwMzhmOGQ1Njk1N2Q2NWFlM2YyZjViNTc2OGU4OTcyNmY0YjhkYmM0NA%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/w12wxtjfze6b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_14b2q6n | /r/LocalLLaMA/comments/14b2q6n/building_multi_task_ai_agent_with_langchain_and/ | false | false | default | 11 | null |
|
Robin V2 model reaches top of LLM leaderboard | 84 | 2023-06-16T17:38:39 | https://medium.com/@hkust.ml/robin-v2-launches-achieves-unparalleled-performance-on-openllm-4f6886e822c1 | yahma | medium.com | 1970-01-01T00:00:00 | 0 | {} | 14b2zdu | false | null | t3_14b2zdu | /r/LocalLLaMA/comments/14b2zdu/robin_v2_model_reaches_top_of_llm_leaderboard/ | false | false | 84 | {'enabled': False, 'images': [{'id': 'EyludQq-38PPetVGcAXonPPSk188ZNmvgsHWeGmH_WQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x2SFNUegVp-i28oXRjfOqovBU2vZhZhy6-tlntzSmPc.jpg?width=108&crop=smart&auto=webp&s=808b475e8072757887d221391905eba6ad61b722', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/x2SFNUegVp-i28oXRjfOqovBU2vZhZhy6-tlntzSmPc.jpg?width=216&crop=smart&auto=webp&s=19fba5ffa273feaa049ae35607db63e8de29d7b9', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/x2SFNUegVp-i28oXRjfOqovBU2vZhZhy6-tlntzSmPc.jpg?width=320&crop=smart&auto=webp&s=78ea366b150b332dd8248c8c4e53aada9cbd9bcd', 'width': 320}, {'height': 347, 'url': 'https://external-preview.redd.it/x2SFNUegVp-i28oXRjfOqovBU2vZhZhy6-tlntzSmPc.jpg?width=640&crop=smart&auto=webp&s=559f444acb68a6391621aa8fe2d4711eb5d22404', 'width': 640}, {'height': 521, 'url': 'https://external-preview.redd.it/x2SFNUegVp-i28oXRjfOqovBU2vZhZhy6-tlntzSmPc.jpg?width=960&crop=smart&auto=webp&s=59d4ddf4c4e1559ed0bc9fb3d151d51b11f0f8ed', 'width': 960}, {'height': 586, 'url': 'https://external-preview.redd.it/x2SFNUegVp-i28oXRjfOqovBU2vZhZhy6-tlntzSmPc.jpg?width=1080&crop=smart&auto=webp&s=d784ebe25d262ee8a5ca177b84cb8163fd4c7694', 'width': 1080}], 'source': {'height': 652, 'url': 'https://external-preview.redd.it/x2SFNUegVp-i28oXRjfOqovBU2vZhZhy6-tlntzSmPc.jpg?auto=webp&s=f965900b26a1dd962248db305a39aa072c34c9df', 'width': 1200}, 'variants': {}}]} |
||
Which Linux distro is best for AI stuff? | 1 | [removed] | 2023-06-16T19:43:31 | https://www.reddit.com/r/LocalLLaMA/comments/14b620d/which_linux_distro_is_best_for_ai_stuff/ | ThrowawayProgress99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b620d | false | null | t3_14b620d | /r/LocalLLaMA/comments/14b620d/which_linux_distro_is_best_for_ai_stuff/ | false | false | default | 1 | null |
Falcon 40B instruct tuned on Open Assistant data - model weights Open Source | 6 | 2023-06-16T19:46:09 | https://gpt-gm.h2o.ai/ | ricklamers | gpt-gm.h2o.ai | 1970-01-01T00:00:00 | 0 | {} | 14b64et | false | null | t3_14b64et | /r/LocalLLaMA/comments/14b64et/falcon_40b_instruct_tuned_on_open_assistant_data/ | false | false | default | 6 | null |
|
I created a site to share Opensource LLMs in chatbot and API form | 5 | [removed] | 2023-06-16T19:51:14 | https://www.reddit.com/r/LocalLLaMA/comments/14b68x6/i_created_a_site_to_share_opensource_llms_in/ | ortegaalfredo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b68x6 | false | null | t3_14b68x6 | /r/LocalLLaMA/comments/14b68x6/i_created_a_site_to_share_opensource_llms_in/ | false | false | default | 5 | null |
What Can I Run | 1 | [removed] | 2023-06-16T20:18:49 | https://www.reddit.com/r/LocalLLaMA/comments/14b6wz7/what_can_i_run/ | Creative-Match7817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b6wz7 | false | null | t3_14b6wz7 | /r/LocalLLaMA/comments/14b6wz7/what_can_i_run/ | false | false | default | 1 | null |
w7900 for llama | 3 | Hello everybody, AMD recently released the W7900, a graphics card with 48GB of memory. The few tests that are available suggest that it is competitive on price-performance with at least Nvidia's older A6000.
That said, no tests with LLMs were conducted (which does not surprise me tbh). Could this card be of interest for the semiprofessional or wealthy local llama user? Or is it infeasible due to bad performance and/or bad software support? If it's the software, would there be - in theory - the possibility to rectify this situation? | 2023-06-16T20:21:07 | https://www.reddit.com/r/LocalLLaMA/comments/14b6z1e/w7900_for_llama/ | RabbitHole32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b6z1e | false | null | t3_14b6z1e | /r/LocalLLaMA/comments/14b6z1e/w7900_for_llama/ | false | false | self | 3 | null |
[Feedback Request] Monitor LLM Hallucinations - with ChatGPT! | 4 | [removed] | 2023-06-16T21:37:17 | https://www.reddit.com/r/LocalLLaMA/comments/14b8n9t/feedback_request_monitor_llm_hallucinations_with/ | shurturgal19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14b8n9t | false | null | t3_14b8n9t | /r/LocalLLaMA/comments/14b8n9t/feedback_request_monitor_llm_hallucinations_with/ | false | false | default | 4 | null |
What is the best VPS / cloud server to run LLMs on ? | 13 | For LLaMA 65B 8-bit you require 74GB of RAM (from the wiki). For a few LLM runs, it's probably cheaper to hire a cloud server instead of buying a huge machine to run it yourself.
What cloud providers are there that give this much RAM? Are there any free ones?
If you are going to buy your own machine, what are your options? Even high end NVidia GPUs dont have this much, apart from the A100s. | 2023-06-16T22:51:22 | https://www.reddit.com/r/LocalLLaMA/comments/14bactj/what_is_the_best_vps_cloud_server_to_run_llms_on/ | [deleted] | self.LocalLLaMA | 2023-06-16T23:02:15 | 0 | {} | 14bactj | false | null | t3_14bactj | /r/LocalLLaMA/comments/14bactj/what_is_the_best_vps_cloud_server_to_run_llms_on/ | false | false | self | 13 | null |
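The 74GB figure quoted above is roughly what back-of-the-envelope maths gives: about one byte per parameter at 8-bit, plus some overhead for activations and the KV cache. A tiny estimator (the 15% overhead factor is a guess; real usage depends on context length and implementation):

```python
# Rough memory estimate for an N-billion-parameter model at a given precision.
def est_gb(params_billion, bits_per_weight, overhead=1.15):
    weight_gb = params_billion * bits_per_weight / 8   # weights alone
    return weight_gb * overhead                        # + rough runtime overhead

for bits in (16, 8, 4):
    print(f"65B @ {bits}-bit ~ {est_gb(65, bits):.0f} GB")
# ballpark: ~150 GB at 16-bit, ~75 GB at 8-bit, ~37 GB at 4-bit
```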
Any model that can do basic reasoning? | 26 | \`\`\`Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does sally have?\`\`\`
Is there any open model that can reliably answer this question and also offer the right explanations?
I've been playing a lot with these models, and while they can be extremely good at storytelling or at embodying a specific character (at 33B and up), they fail at basic reasoning even at 65B.
I also tried a "Tree-of-Thought" strategy where 3 experts quarrel with each other; nada. I tried giving hints that are obvious. No success.
What's special about logic and how do we get it into open-source. Is it something emerging at 160B+ ? | 2023-06-16T23:19:40 | https://www.reddit.com/r/LocalLLaMA/comments/14bb08o/any_model_that_can_do_basic_reasoning/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bb08o | false | null | t3_14bb08o | /r/LocalLLaMA/comments/14bb08o/any_model_that_can_do_basic_reasoning/ | false | false | self | 26 | null |
OK, since people here agreed decentralized AI training could be a powerful if possible. I'm opening r/BuildThisAI for intentional discussion on achieving this goal. This is directly inspired by a QnA I got to attend w/ Sam Altman (Info post): | 1 | [removed] | 2023-06-16T23:26:22 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14bb5sm | false | null | t3_14bb5sm | /r/LocalLLaMA/comments/14bb5sm/ok_since_people_here_agreed_decentralized_ai/ | false | false | default | 1 | null |
||
EXLlama support added to oobabooga-text-generation-webui | 119 | 2023-06-17T00:13:43 | https://github.com/oobabooga/text-generation-webui/commit/9f40032d32165773337e6a6c60de39d3f3beb77d | panchovix | github.com | 1970-01-01T00:00:00 | 0 | {} | 14bc7ug | false | null | t3_14bc7ug | /r/LocalLLaMA/comments/14bc7ug/exllama_support_added_to/ | false | false | 119 | {'enabled': False, 'images': [{'id': 'K1Btc2tnb4mW5SaUKQuVDRHyun2UZv9I85wrDQolNBI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WBwPr26ZKAbreRZsb7QXwzLHKL0dstUS3HCEcsV0yEw.jpg?width=108&crop=smart&auto=webp&s=509348a4936cd56dcb8065f19a1ad83851791644', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WBwPr26ZKAbreRZsb7QXwzLHKL0dstUS3HCEcsV0yEw.jpg?width=216&crop=smart&auto=webp&s=b27f90340f4f5d22a8158cf43d24f79c5adb2c1c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WBwPr26ZKAbreRZsb7QXwzLHKL0dstUS3HCEcsV0yEw.jpg?width=320&crop=smart&auto=webp&s=ddda4e5d06a00180e2f83c8d765fafee2cc61d25', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WBwPr26ZKAbreRZsb7QXwzLHKL0dstUS3HCEcsV0yEw.jpg?width=640&crop=smart&auto=webp&s=fd78110895619e5fd96fba978489a63388ee53c2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WBwPr26ZKAbreRZsb7QXwzLHKL0dstUS3HCEcsV0yEw.jpg?width=960&crop=smart&auto=webp&s=7934128d7238df011e7dfa24fafa8c49d052c1eb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WBwPr26ZKAbreRZsb7QXwzLHKL0dstUS3HCEcsV0yEw.jpg?width=1080&crop=smart&auto=webp&s=6a4b601f8771c9716a4ccf6b579290e595b09e20', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WBwPr26ZKAbreRZsb7QXwzLHKL0dstUS3HCEcsV0yEw.jpg?auto=webp&s=ddbcb501988d0bfee038a44741d78e802ae2ab9b', 'width': 1200}, 'variants': {}}]} |
||
Base models are all uncensored right? | 5 | Such as the open llama 3b and 7B base models? | 2023-06-17T02:16:49 | https://www.reddit.com/r/LocalLLaMA/comments/14bere3/base_models_are_all_uncensored_right/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bere3 | false | null | t3_14bere3 | /r/LocalLLaMA/comments/14bere3/base_models_are_all_uncensored_right/ | false | false | self | 5 | null |
Meta reportedly wants to make LLaMA v2 available for commercial use | 168 | This comes from *The Information*, a subscription based publication made for tech executives. The [full article is paywalled](https://www.theinformation.com/articles/meta-wants-companies-to-make-money-off-its-open-source-ai-in-challenge-to-google) but here's the intro:
>Meta Platforms CEO Mark Zuckerberg and his deputies want other companies to freely use and profit from new artificial intelligence software Meta is developing, a decision that could have big implications for other AI developers and businesses that are increasingly adopting it.
>
>Meta is working on ways to make **the next version of its open-source large-language model**—technology that can power chatbots like ChatGPT—**available for commercial use**, said a person with direct knowledge of the situation and a person who was briefed about it. The move could prompt a feeding frenzy among AI developers eager for alternatives to proprietary software sold by rivals Google and OpenAI. It would also indirectly benefit Meta’s own AI development.
​
This lines up with what Mark Zuckerberg mentioned on Lex Fridman's podcast and clarifies his words a lot:
>Lex: You mentioned the next of LLaMA. What can you say about the next version of LLaMA?
>
>Mark: The work that we're doing to advance the infrastructure, it's basically at this point **taking it beyond a research project** into something which is ready to be kind of core infrastructure **not only for our own products but hopefully for a lot of other things out there too**.
>
>Lex: Do you think that LLaMA version two will be open sourced?
>
>Mark: **I think the question is how to do it**. I think we did the research license for v1 and I think the big thing that we're thinking about is basically like what's the right way.
>
>We sort of believe in this more community oriented, more democratic approach to building out the products and technology around this.
OpenAI also wants to open source a model, but there hasn't been any news since the [last article](https://www.reuters.com/technology/openai-readies-new-open-source-ai-model-information-2023-05-15/). It'll be fun to see the race this sparks among the AI companies. Which company will rush first to release their new model? Maybe Google will join in and release an open source model of their own. This will all be big wins for the local community. | 2023-06-17T03:05:58 | https://www.reddit.com/r/LocalLLaMA/comments/14bfpwi/meta_reportedly_wants_to_make_llama_v2_available/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bfpwi | false | null | t3_14bfpwi | /r/LocalLLaMA/comments/14bfpwi/meta_reportedly_wants_to_make_llama_v2_available/ | false | false | self | 168 | {'enabled': False, 'images': [{'id': 'K3mJS5QAvFPQp9UO9nCAh8cDZb2w0a_Va2a2L3IwnHs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/JIVlwx_zSZmSG1SFNAAHXhlJLBXZlBlBBpDwa0o3d1Y.jpg?width=108&crop=smart&auto=webp&s=232b13b494d16530c85a2400e9adad86d5830b11', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/JIVlwx_zSZmSG1SFNAAHXhlJLBXZlBlBBpDwa0o3d1Y.jpg?width=216&crop=smart&auto=webp&s=35c31f7f4df5231b34799ba54781e7f81011c63c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/JIVlwx_zSZmSG1SFNAAHXhlJLBXZlBlBBpDwa0o3d1Y.jpg?width=320&crop=smart&auto=webp&s=da07947f8016cf225acfe93a11f92331594fa3dd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/JIVlwx_zSZmSG1SFNAAHXhlJLBXZlBlBBpDwa0o3d1Y.jpg?width=640&crop=smart&auto=webp&s=0806b8f40761ea42e5a192001412d4556e5df7d5', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/JIVlwx_zSZmSG1SFNAAHXhlJLBXZlBlBBpDwa0o3d1Y.jpg?width=960&crop=smart&auto=webp&s=8cc154171c484902e479d07bf966930c3abdc692', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/JIVlwx_zSZmSG1SFNAAHXhlJLBXZlBlBBpDwa0o3d1Y.jpg?width=1080&crop=smart&auto=webp&s=6a2dd16b7eeae3130c019665b4cbbf5e7bd2411b', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/JIVlwx_zSZmSG1SFNAAHXhlJLBXZlBlBBpDwa0o3d1Y.jpg?auto=webp&s=ed52f226c407145c30a7af3413f87f611b4a50b7', 'width': 1200}, 'variants': {}}]} |
Does anyone know if any uncensored multilingual model exists? | 13 | I recently tried OpenBuddy and it was pretty decent as a local model for Japanese. At least it was better than any Japanese-focused model released by Japanese corporations. But when it comes to NSFW things it turns into ChatGPT and refuses to respond.
I also tried RWKV(raven and world models) but honestly it's hard to say that this is competitive to llama.
When I modified system.prompt and prompt suffix in llama.cpp, I could successfully produce short nsfw story. But it is ideal to have multilingal model which is uncensored from the beginning. | 2023-06-17T03:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/14bfs6a/does_anyone_know_if_any_uncensored_multilingual/ | Beautiful-Ad2181 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bfs6a | false | null | t3_14bfs6a | /r/LocalLLaMA/comments/14bfs6a/does_anyone_know_if_any_uncensored_multilingual/ | false | false | self | 13 | null |
Open Llama 13B trained in 1T tokens | 94 | Now available at https://huggingface.co/openlm-research/open_llama_13b | 2023-06-17T03:53:45 | https://www.reddit.com/r/LocalLLaMA/comments/14bgmxs/open_llama_13b_trained_in_1t_tokens/ | Acrobatic-Site2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bgmxs | false | null | t3_14bgmxs | /r/LocalLLaMA/comments/14bgmxs/open_llama_13b_trained_in_1t_tokens/ | false | false | self | 94 | {'enabled': False, 'images': [{'id': 'ky8W7WcfNIMGIbR8Vqs_nfTRFC9i3aO_4A0kyPbWbGY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/z4W5gYNvUIsTPWKuQg3d_UPA8g5P0xwUnHN3LnXdCXI.jpg?width=108&crop=smart&auto=webp&s=08f205010b6ec0f19e009311d70c83cecfbbd4e6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/z4W5gYNvUIsTPWKuQg3d_UPA8g5P0xwUnHN3LnXdCXI.jpg?width=216&crop=smart&auto=webp&s=4d2e314b6a1f62bd8a22aebdeb3404b8156b28de', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/z4W5gYNvUIsTPWKuQg3d_UPA8g5P0xwUnHN3LnXdCXI.jpg?width=320&crop=smart&auto=webp&s=20d4f5ce9930eb77864a3b8182fbc507a79c683f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/z4W5gYNvUIsTPWKuQg3d_UPA8g5P0xwUnHN3LnXdCXI.jpg?width=640&crop=smart&auto=webp&s=f878f9b176298eb89efd55f602b77c9c8b2bdee8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/z4W5gYNvUIsTPWKuQg3d_UPA8g5P0xwUnHN3LnXdCXI.jpg?width=960&crop=smart&auto=webp&s=b6b879ec04912103a2235f251cf8a4205a3bcb8c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/z4W5gYNvUIsTPWKuQg3d_UPA8g5P0xwUnHN3LnXdCXI.jpg?width=1080&crop=smart&auto=webp&s=8ded7e79f6c7b45622fe8bd3e3f9de8f1605cbe7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/z4W5gYNvUIsTPWKuQg3d_UPA8g5P0xwUnHN3LnXdCXI.jpg?auto=webp&s=d8ccbde47839e0efa19eceb34620e87f25178342', 'width': 1200}, 'variants': {}}]} |
1 million instruct-response pairs - fine-tuning help! | 10 | I am looking to train a 13B or 20B model for a very specific use case where I have tonnes of data (1 million instruct-response pairs).
What are the best practices/resources available for fine-tuning open LLMs?
I don't own a GPU and I plan to use cloud resources.
I've found using Colab / Colab Pro *incredibly* frustrating due to 12 hour time out (even if I manage to circumvent disconnections). I've tried using paperspace but find it confusing. | 2023-06-17T03:58:45 | https://www.reddit.com/r/LocalLLaMA/comments/14bgq91/1_million_instructresponse_pairs_finetuning_help/ | supermanhelpsevery1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bgq91 | false | null | t3_14bgq91 | /r/LocalLLaMA/comments/14bgq91/1_million_instructresponse_pairs_finetuning_help/ | false | false | self | 10 | null |
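Whichever trainer and cloud you settle on, most open fine-tuning stacks ingest instruction data as JSONL, so getting the million pairs into that shape is a provider-independent first step. A minimal sketch (the field names are a common convention, not a requirement of any specific trainer):

```python
# Write instruct/response pairs as JSONL, one example per line; in practice
# the pairs would be streamed from your own storage rather than a list.
import json

pairs = [
    ("Summarise the ticket below.", "The user cannot reset their password..."),
    # ... roughly 1M of these
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for instruction, response in pairs:
        row = {"instruction": instruction, "input": "", "output": response}
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```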
New Twitter Communities for LocalLlama? | 0 | [removed] | 2023-06-17T05:30:20 | https://www.reddit.com/r/LocalLLaMA/comments/14biels/new_twitter_communities_for_localllama/ | Acrobatic-Site2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14biels | false | null | t3_14biels | /r/LocalLLaMA/comments/14biels/new_twitter_communities_for_localllama/ | false | false | default | 0 | null |
MediaWiki instance for LLMs/AI | 5 | Hi everyone,
I know there was discussion recently on creating a wiki for local LLMs, and more generally AI as a whole.
I've taken the liberty of setting up a MediaWiki instance, available at [https://wiki.ffyt.xyz](https://wiki.ffyt.xyz).
It's like Wikipedia: anyone (even without an account) can edit any page :) so if you plan to contribute, thank you!! | 2023-06-17T05:44:10 | https://www.reddit.com/r/LocalLLaMA/comments/14binet/mediawiki_instance_for_llmsai/ | thooton | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14binet | false | null | t3_14binet | /r/LocalLLaMA/comments/14binet/mediawiki_instance_for_llmsai/ | false | false | self | 5 | null |
Noob - Hi guys! I need your assistance could you please help? | 0 | [removed] | 2023-06-17T05:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/14biw8l/noob_hi_guys_i_need_your_assistance_could_you/ | Butefluko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14biw8l | false | null | t3_14biw8l | /r/LocalLLaMA/comments/14biw8l/noob_hi_guys_i_need_your_assistance_could_you/ | false | false | default | 0 | null |
Hermes Open-Llama 7b 8k context behind the terms wall. | 4 | Anybody know anything about this model?
[conceptofmind/Hermes-Open-Llama-7b-8k · Discussions (huggingface.co)](https://huggingface.co/conceptofmind/Hermes-Open-Llama-7b-8k/discussions?status=open&type=discussion)
He has it as publicly listed, but behind a wall requiring terms, but Hermes data is GPL which means no terms. Anyone tried it? Does it work? | 2023-06-17T07:18:53 | https://www.reddit.com/r/LocalLLaMA/comments/14bkbto/hermes_openllama_7b_8k_context_behind_the_terms/ | Radiant_Dog1937 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bkbto | false | null | t3_14bkbto | /r/LocalLLaMA/comments/14bkbto/hermes_openllama_7b_8k_context_behind_the_terms/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'sEXIpuxaF_4tPtdOKivI_7igmZAS2DUpXimI9mfvy68', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XzdIBYsz4BmiNQRRY3jEHZabnm9gVTkp1H54QvmxJxg.jpg?width=108&crop=smart&auto=webp&s=ad939674e2c90590fe777b84d8db07479c42208a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XzdIBYsz4BmiNQRRY3jEHZabnm9gVTkp1H54QvmxJxg.jpg?width=216&crop=smart&auto=webp&s=17e4cce92bab88148e59b6c7ec512d05859df5b5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XzdIBYsz4BmiNQRRY3jEHZabnm9gVTkp1H54QvmxJxg.jpg?width=320&crop=smart&auto=webp&s=4ec7285ae02acb10fc022892bfa4863cb9fe1009', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XzdIBYsz4BmiNQRRY3jEHZabnm9gVTkp1H54QvmxJxg.jpg?width=640&crop=smart&auto=webp&s=f978b4a59528499c8ded87c2248a2e01a47a2328', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XzdIBYsz4BmiNQRRY3jEHZabnm9gVTkp1H54QvmxJxg.jpg?width=960&crop=smart&auto=webp&s=4af817e3eb8246aaf985e6c7ffef3f252342de96', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XzdIBYsz4BmiNQRRY3jEHZabnm9gVTkp1H54QvmxJxg.jpg?width=1080&crop=smart&auto=webp&s=1ca6891d601800e4460de25cd53f07ccbdfc6591', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XzdIBYsz4BmiNQRRY3jEHZabnm9gVTkp1H54QvmxJxg.jpg?auto=webp&s=1bde823fc3c9ae4bd23409da5e4389d0c7a30f8c', 'width': 1200}, 'variants': {}}]} |
OpenAI regulatory pushing government to ban illegal advanced matrix operations [pdf] | 180 | 2023-06-17T08:06:53 | https://news.ycombinator.com/item?id=36368191 | Merchant_Lawrence | news.ycombinator.com | 1970-01-01T00:00:00 | 0 | {} | 14bl694 | false | null | t3_14bl694 | /r/LocalLLaMA/comments/14bl694/openai_regulatory_pushing_government_to_ban/ | false | false | default | 180 | null |
|
This is perfect example where LLAMA does 100% better job than ChatGPT | 124 | 2023-06-17T08:15:33 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14blbl6 | false | null | t3_14blbl6 | /r/LocalLLaMA/comments/14blbl6/this_is_perfect_example_where_llama_does_100/ | false | false | nsfw | 124 | {'enabled': True, 'images': [{'id': 'kP90TYFDhW8fRsXP1izxoJrmKMPnscVYcjFK2BWSDew', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=108&crop=smart&auto=webp&s=022251877fc82f83825422f94c1825b17378a5c5', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=216&crop=smart&auto=webp&s=b03374b124ed7c5c2c940d70883f8615b87128dd', 'width': 216}, {'height': 147, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=320&crop=smart&auto=webp&s=d1e67c73385296f9b5bf6fd42d3facc4d9526065', 'width': 320}, {'height': 294, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=640&crop=smart&auto=webp&s=c94d8460e1f4fcd489ba52cafabf34a9b1f20e4d', 'width': 640}, {'height': 442, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=960&crop=smart&auto=webp&s=27fee59af468911d994fe59c86f3332c07a18667', 'width': 960}, {'height': 497, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=1080&crop=smart&auto=webp&s=e1f605a4203b1849fe163e4678ef046eb98178df', 'width': 1080}], 'source': {'height': 778, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?auto=webp&s=581b9d6fc8d67018b55156d961734383aed7ead3', 'width': 1689}, 'variants': {'nsfw': {'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=636c54f3dcdfa153922cb136abebc145b0df58e7', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=84467096d42673271465ad8cf9b00e140e23e425', 'width': 216}, {'height': 147, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=77fca17b5d24ac74ad91fc2d2b304ac4f1b294f5', 'width': 320}, {'height': 294, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=4479aefa292ccce03c67da2cc27dd536a4ef6c1d', 'width': 640}, {'height': 442, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=67202c38b2be0dd2688ad437bb9caed56b364cf4', 'width': 960}, {'height': 497, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=05264ca79704f14f5cf193b0a4ba6901ece8a2dd', 'width': 1080}], 'source': {'height': 778, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?blur=40&format=pjpg&auto=webp&s=4eee93141577c9def72c59238b5b1bcaf3c75cda', 'width': 1689}}, 'obfuscated': {'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=636c54f3dcdfa153922cb136abebc145b0df58e7', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=84467096d42673271465ad8cf9b00e140e23e425', 'width': 216}, {'height': 147, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=77fca17b5d24ac74ad91fc2d2b304ac4f1b294f5', 'width': 320}, {'height': 294, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=4479aefa292ccce03c67da2cc27dd536a4ef6c1d', 'width': 640}, {'height': 442, 'url': 
'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=67202c38b2be0dd2688ad437bb9caed56b364cf4', 'width': 960}, {'height': 497, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=05264ca79704f14f5cf193b0a4ba6901ece8a2dd', 'width': 1080}], 'source': {'height': 778, 'url': 'https://preview.redd.it/nj6dsd6fej6b1.jpg?blur=40&format=pjpg&auto=webp&s=4eee93141577c9def72c59238b5b1bcaf3c75cda', 'width': 1689}}}}]} |
||
RWKV (RNN + Transformers) | 10 | Quoting the project's GitHub:
"RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding."
Has anyone experimented with it? What are your observations?
Github: https://github.com/BlinkDL/RWKV-LM
GGML port: https://github.com/saharNooby/rwkv.cpp | 2023-06-17T09:27:31 | https://www.reddit.com/r/LocalLLaMA/comments/14bmj0v/rwkv_rnn_transformers/ | Acrobatic-Site2065 | self.LocalLLaMA | 2023-06-17T10:23:25 | 0 | {} | 14bmj0v | false | null | t3_14bmj0v | /r/LocalLLaMA/comments/14bmj0v/rwkv_rnn_transformers/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'Nl8Z35RMn0iOFzqK1Ep8j4Ovz7HkCMHDesWROcVzW1A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2tOa7o5ERE-JosiI-krDUGte6HUpT2Kf1SjhJL5EX4o.jpg?width=108&crop=smart&auto=webp&s=8644eccde130eb12b769518365217c33ed667d00', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2tOa7o5ERE-JosiI-krDUGte6HUpT2Kf1SjhJL5EX4o.jpg?width=216&crop=smart&auto=webp&s=1e29c25f2d13160efa07423884455dfcba6e04bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2tOa7o5ERE-JosiI-krDUGte6HUpT2Kf1SjhJL5EX4o.jpg?width=320&crop=smart&auto=webp&s=dd8f626614ee95ca93034f18416a5fa43b0f4d14', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2tOa7o5ERE-JosiI-krDUGte6HUpT2Kf1SjhJL5EX4o.jpg?width=640&crop=smart&auto=webp&s=405b5d0b2f96660295b17db982690046005d8edf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2tOa7o5ERE-JosiI-krDUGte6HUpT2Kf1SjhJL5EX4o.jpg?width=960&crop=smart&auto=webp&s=8b48310b493228486631d5c5061b0aeb8216f603', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2tOa7o5ERE-JosiI-krDUGte6HUpT2Kf1SjhJL5EX4o.jpg?width=1080&crop=smart&auto=webp&s=1766ef8269756bcfc1380d7059b2d148aa8cfbf1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2tOa7o5ERE-JosiI-krDUGte6HUpT2Kf1SjhJL5EX4o.jpg?auto=webp&s=182bb34aed5ee63388b859887faded2c32d0bb1c', 'width': 1200}, 'variants': {}}]} |
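The practical difference from a regular transformer shows up in the inference loop: instead of attending over an ever-growing context, an RWKV-style model carries a fixed-size state from token to token. A toy sketch of that shape (the stand-in `toy_step` is obviously not the real RWKV cell, just something runnable):

```python
# Toy RNN-style generation loop: per-token cost and memory stay constant
# because only a fixed-size state is carried forward (no growing KV cache).
def recurrent_generate(step_fn, state, prompt_tokens, n_new):
    for tok in prompt_tokens:                 # ingest the prompt one token at a time
        logits, state = step_fn(tok, state)
    out = []
    tok = max(range(len(logits)), key=logits.__getitem__)  # greedy pick
    for _ in range(n_new):
        out.append(tok)
        logits, state = step_fn(tok, state)
        tok = max(range(len(logits)), key=logits.__getitem__)
    return out

def toy_step(tok, state):                      # stand-in for the real RWKV cell
    state = (state + tok + 1) % 7
    logits = [((state + i) % 5) / 5 for i in range(5)]
    return logits, state

print(recurrent_generate(toy_step, 0, [1, 2, 3], 5))
```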
Fix for CUDA Memory Error | 8 | I'm following this tutorial to get generative agents running: [https://python.langchain.com/docs/use_cases/agent_simulations/characters.html](https://python.langchain.com/docs/use_cases/agent_simulations/characters.html)
I have 2 GPUs with about 24 GB of VRAM each, yet I still get a CUDA out-of-memory error. nvidia-smi shows one GPU fully used, while only about 7 GB is in use on the other GPU.
​
OutOfMemoryError: CUDA out of memory. Tried to allocate 172.00 MiB (GPU 0; 23.70 GiB total capacity; 22.51 GiB already allocated; 154.31 MiB free; 22.53 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I did run `export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512`
But it did nothing.
How do I resolve this? | 2023-06-17T09:28:23 | https://www.reddit.com/r/LocalLLaMA/comments/14bmjjp/fix_for_cuda_memory_error/ | QUAZARD3141 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bmjjp | false | null | t3_14bmjjp | /r/LocalLLaMA/comments/14bmjjp/fix_for_cuda_memory_error/ | false | false | self | 8 | null |
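For reference on the post above: tweaking `max_split_size_mb` only addresses fragmentation on a single device; it won't make PyTorch use the second GPU. The usual fix is to let Accelerate shard the checkpoint across both cards with an explicit `max_memory` map. A minimal sketch, assuming the model is loaded through `transformers` (the model id below is a placeholder):

    # Spread a large model across two 24 GB GPUs instead of filling GPU 0 and crashing.
    # Assumes: pip install transformers accelerate
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "path/or/hub-id-of-your-model"  # placeholder
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,
        device_map="auto",                    # let Accelerate place layers on both GPUs
        max_memory={0: "22GiB", 1: "22GiB"},  # leave headroom under the 24 GB ceiling
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)

Whether the LangChain tutorial exposes these kwargs depends on how it constructs the model, so treat this as the shape of the fix rather than a drop-in patch.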
ExLLaMA Causing Incompatibility w/ SillyTavern? | 0 | [deleted] | 2023-06-17T09:37:33 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14bmpdr | false | null | t3_14bmpdr | /r/LocalLLaMA/comments/14bmpdr/exllama_causing_incompatibility_w_sillytavern/ | false | false | default | 0 | null |
||
Shorter sentences & roles | 1 | [removed] | 2023-06-17T11:08:43 | https://www.reddit.com/r/LocalLLaMA/comments/14bocpi/shorter_sentences_roles/ | Lila-Llama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bocpi | false | null | t3_14bocpi | /r/LocalLLaMA/comments/14bocpi/shorter_sentences_roles/ | false | false | default | 1 | null |
Concise output & roles | 3 | [deleted] | 2023-06-17T11:12:14 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 14bof4d | false | null | t3_14bof4d | /r/LocalLLaMA/comments/14bof4d/concise_output_roles/ | false | false | default | 3 | null |
||
Currently developing a conversation simulator, with scenario, location, goals and emotions for the chat bot! Here some results with Richard Feynman as persona and the Dans-PersonalityEngine-30b-ggml-q5_0 model. Can release source code if interested! | 58 | 2023-06-17T12:18:28 | FlowerPotTeaTime | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14bpps4 | false | null | t3_14bpps4 | /r/LocalLLaMA/comments/14bpps4/currently_developing_a_conversation_simulator/ | false | false | 58 | {'enabled': True, 'images': [{'id': 'kThHZ2d3Jsmm1juE_ypsmTtr2yaPAqCU0LPUMgbX2Qs', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/3uffn35hlk6b1.png?width=108&crop=smart&auto=webp&s=a2447c0f4dc228e7b8d41aefb82d0e558586ba30', 'width': 108}, {'height': 224, 'url': 'https://preview.redd.it/3uffn35hlk6b1.png?width=216&crop=smart&auto=webp&s=95b78c5eddaa254b30c1aecfdd76c8d0b6926c21', 'width': 216}, {'height': 332, 'url': 'https://preview.redd.it/3uffn35hlk6b1.png?width=320&crop=smart&auto=webp&s=c99398cc8ac542e5d4f00ec6aded7e1210bb6ed7', 'width': 320}, {'height': 665, 'url': 'https://preview.redd.it/3uffn35hlk6b1.png?width=640&crop=smart&auto=webp&s=9060eb4bd161f66f0edc8c98ef9a2f5e4e2567a0', 'width': 640}, {'height': 997, 'url': 'https://preview.redd.it/3uffn35hlk6b1.png?width=960&crop=smart&auto=webp&s=f721ff735874958a76f410a16ff527892a07cdd6', 'width': 960}, {'height': 1122, 'url': 'https://preview.redd.it/3uffn35hlk6b1.png?width=1080&crop=smart&auto=webp&s=5c4c7573cbcb720b876cf9e1ddea74647c751f0f', 'width': 1080}], 'source': {'height': 1540, 'url': 'https://preview.redd.it/3uffn35hlk6b1.png?auto=webp&s=caf4e2a36eb5e337c02be1dc8c6b00006ca3952c', 'width': 1482}, 'variants': {}}]} |
|||
Best way to use AMD CPU and GPU | 31 | I have the following Linux PC:
CPU – AMD 5800X3D w/ 32GB RAM
GPU – AMD 6800 XT w/ 16GB VRAM
[Serge](https://github.com/serge-chat/serge) made it really easy for me to get started, but it’s all CPU-based.
[MLC LLM](https://github.com/mlc-ai/mlc-llm) looks like an easy option to use my AMD GPU.
[Llama.cpp](https://github.com/ggerganov/llama.cpp) seems like it can use both CPU and GPU, but I haven't quite figured that out yet.
What's the most performant way to use my hardware? Will CPU + GPU always be superior, and if so, what's the easiest way to do that? I don't mind upgrading to 64GB RAM if it's worth it (motherboard only has 2 slots). | 2023-06-17T12:32:07 | https://www.reddit.com/r/LocalLLaMA/comments/14bpzw9/best_way_to_use_amd_cpu_and_gpu/ | x6q5g3o7 | self.LocalLLaMA | 2023-06-17T12:55:43 | 0 | {} | 14bpzw9 | false | null | t3_14bpzw9 | /r/LocalLLaMA/comments/14bpzw9/best_way_to_use_amd_cpu_and_gpu/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': '7lVbF5o3vrBIRhlKpInBHoabA_NNFAnBzzGEIFFuoBI', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/xoIz8NrptyBkXXjrFdBZzGJuUi_lC5EvNBZjCvNsgC8.jpg?width=108&crop=smart&auto=webp&s=6784d106f42e864712c2302463dbe0a709f80108', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/xoIz8NrptyBkXXjrFdBZzGJuUi_lC5EvNBZjCvNsgC8.jpg?width=216&crop=smart&auto=webp&s=a62994ce895c554389995fa74ffe34cf01c266a6', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/xoIz8NrptyBkXXjrFdBZzGJuUi_lC5EvNBZjCvNsgC8.jpg?width=320&crop=smart&auto=webp&s=66ae5d8e9ddda86a010755c66417c9c78377b426', 'width': 320}, {'height': 417, 'url': 'https://external-preview.redd.it/xoIz8NrptyBkXXjrFdBZzGJuUi_lC5EvNBZjCvNsgC8.jpg?width=640&crop=smart&auto=webp&s=594fe05382d4784b4624e4b7a0b3ec2050b094a7', 'width': 640}, {'height': 625, 'url': 'https://external-preview.redd.it/xoIz8NrptyBkXXjrFdBZzGJuUi_lC5EvNBZjCvNsgC8.jpg?width=960&crop=smart&auto=webp&s=1d4a4bc6b1aa7707fc0a2d75229b09bbb7ec68dc', 'width': 960}, {'height': 703, 'url': 'https://external-preview.redd.it/xoIz8NrptyBkXXjrFdBZzGJuUi_lC5EvNBZjCvNsgC8.jpg?width=1080&crop=smart&auto=webp&s=27064a7569735ad43eab9d4fe5671bbd2bc609b9', 'width': 1080}], 'source': {'height': 980, 'url': 'https://external-preview.redd.it/xoIz8NrptyBkXXjrFdBZzGJuUi_lC5EvNBZjCvNsgC8.jpg?auto=webp&s=30ac29efa2bc163c7a3183b67da6a7412e0ffd2a', 'width': 1504}, 'variants': {}}]} |
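On the llama.cpp side of the question above, hybrid CPU + GPU on an AMD card generally means building against the OpenCL (CLBlast) backend and then offloading some layers with `n_gpu_layers` while the rest run on the CPU. A hedged sketch using the llama-cpp-python bindings (the model path and layer count are placeholders to tune against the 16 GB of VRAM):

    # Hybrid CPU + GPU inference with llama-cpp-python on an AMD card (CLBlast backend).
    # Assumes the wheel was built with something like:
    #   CMAKE_ARGS="-DLLAMA_CLBLAST=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/your-model.ggmlv3.q4_0.bin",  # placeholder path
        n_gpu_layers=32,  # layers offloaded to the 6800 XT; raise until VRAM is nearly full
        n_threads=8,      # physical cores on the 5800X3D
    )
    result = llm("Q: Name three uses for a local LLM.\nA:", max_tokens=64)
    print(result["choices"][0]["text"])

Whether this beats a pure-CPU run depends on how many layers fit in VRAM; extra system RAM mostly buys room for larger models rather than speed.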
AI Algorithms started working for the humanity benefit | 2 | [removed] | 2023-06-17T12:48:58 | adesigne | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14bqbsn | false | null | t3_14bqbsn | /r/LocalLLaMA/comments/14bqbsn/ai_algorithms_started_working_for_the_humanity/ | false | false | default | 2 | null |
|
Cant load WizardCoder-15B-1.0-GGML into Oobabooga | 2 | [removed] | 2023-06-17T12:49:11 | https://www.reddit.com/r/LocalLLaMA/comments/14bqby9/cant_load_wizardcoder15b10ggml_into_oobabooga/ | FloridaManIssues | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bqby9 | false | null | t3_14bqby9 | /r/LocalLLaMA/comments/14bqby9/cant_load_wizardcoder15b10ggml_into_oobabooga/ | false | false | default | 2 | null |
"Mark Zuckerberg Was Early in AI. Now Meta Is Trying to Catch Up." | 2 | [removed] | 2023-06-17T12:54:40 | https://www.reddit.com/r/LocalLLaMA/comments/14bqfst/mark_zuckerberg_was_early_in_ai_now_meta_is/ | Super-Waltz-5676 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bqfst | false | null | t3_14bqfst | /r/LocalLLaMA/comments/14bqfst/mark_zuckerberg_was_early_in_ai_now_meta_is/ | false | false | default | 2 | null |
Which has the best output quality on these quantizations? qX_K_L, qX_K_M, qX_K_S, qX | 1 | [removed] | 2023-06-17T13:11:05 | https://www.reddit.com/r/LocalLLaMA/comments/14bqskg/which_has_the_best_output_quality_on_these/ | Artemis369212 | self.LocalLLaMA | 2023-06-17T13:15:27 | 0 | {} | 14bqskg | false | null | t3_14bqskg | /r/LocalLLaMA/comments/14bqskg/which_has_the_best_output_quality_on_these/ | false | false | default | 1 | null |
llama.cpp with Metal GPU inference only produces gibberish | 2 | I tried out Metal GPU inference with GGML on Apple Silicon and ran the following commands to enable it:
    pip uninstall -y llama-cpp-python
    CMAKE_ARGS="-DLLAMA_METAL=on" \
    FORCE_CMAKE=1 \
    pip install llama-cpp-python \
    --no-cache-dir
Then I start `oobabooga/text-generation-webui` like so:
    python server.py \
    --model=TheBloke_Wizard-Vicuna-13B-Uncensored-GGML \
    --n-gpu-layers 10
With these settings I'm getting incredibly fast load times (0.41 seconds) and close to 20 t/s, but the output is gibberish. Does anyone know what I'm doing wrong?
Model: `TheBloke_Wizard-Vicuna-13B-Uncensored-GGML/Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_1.bin` Laptop: Apple M2 Pro (macOS 13.4), 16 GB RAM
> Note: This model used to work fine at ~4 t/s before tinkering with GPU inference
​
[Assistant output](https://preview.redd.it/nr5fya8svk6b1.png?width=1660&format=png&auto=webp&s=636f4632a6b95fb4e063449ae8d23641667baea8) | 2023-06-17T13:14:49 | https://www.reddit.com/r/LocalLLaMA/comments/14bqvgv/llamacpp_with_metal_gpu_inference_only_produces/ | RastaBambi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14bqvgv | false | null | t3_14bqvgv | /r/LocalLLaMA/comments/14bqvgv/llamacpp_with_metal_gpu_inference_only_produces/ | false | false | 2 | null |
|
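One way to narrow down the gibberish in the post above is to take the web UI out of the loop and call llama-cpp-python directly, starting with a single offloaded layer; if the raw library already produces garbage, the problem is the Metal build or the quantization format rather than the UI settings. A small sketch (the path is a placeholder, and trying a q4_0 file instead of q4_1 is just a hypothesis to test):

    # Sanity-check Metal offload without text-generation-webui in the middle.
    # Assumes llama-cpp-python was reinstalled with CMAKE_ARGS="-DLLAMA_METAL=on".
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin",  # placeholder; note q4_0
        n_gpu_layers=1,  # start with one layer on the GPU, then increase
    )
    out = llm("### Human: Say hello in one sentence.\n### Assistant:", max_tokens=32)
    print(out["choices"][0]["text"])

If the q4_0 file is coherent but the q4_1 file is not, that would point at the quantization format's Metal support rather than the server flags.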
Public data training | 4 | Does anyone know of resources that can verify whether public comments have been used to train these models? | 2023-06-17T13:29:41 | https://www.reddit.com/r/LocalLLaMA/comments/14br72i/public_data_training/ | WinteriscomingXii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14br72i | false | null | t3_14br72i | /r/LocalLLaMA/comments/14br72i/public_data_training/ | false | false | self | 4 | null |