Dataset schema (column name, type, observed range):

| column | type | values |
|---|---|---|
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–40k |
| created | timestamp[ns] | |
| url | string | length 0–780 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
EXLlama test on 2x4090, Windows 11 and Ryzen 7 7800X3D
43
Hi there, just a small post of appreciation for exllama, which reaches speeds I NEVER expected to see. If you want to try it yourself, read this first: https://github.com/turboderp/exllama/issues/33 (to build the kernel on Windows you will need Visual Studio 2022).

Very important: on Windows there is a setting called "Hardware-Accelerated GPU Scheduling". Why does it matter? If you tinker with the art side of AI as well, Stable Diffusion and LoRA training seem to be a lot faster with this setting disabled (my LoRA training, for example, is 30% faster with it off). But enabling it makes a huge difference when more than one GPU is working at the same time, and sometimes on a single GPU as well. Since I had it disabled, I ran some tests first, then enabled it and gathered a second set of results. The difference is pretty big. Here is a small table with the differences at 30B and 65B.

|Speed comparison: Aeala_VicUnlocked-alpaca-30b-4bit|GPTQ-for-LLaMa|EXLlama|
|:--|:-:|:-:|
|(1x) RTX 4090, HAGPU disabled|6-7 tokens/s|30 tokens/s|
|(1x) RTX 4090, HAGPU enabled|4-6 tokens/s|40+ tokens/s|

|Speed comparison: Aeala_VicUnlocked-alpaca-65b-4bit_128g|GPTQ-for-LLaMa|EXLlama|
|:--|:-:|:-:|
|(2x) RTX 4090, HAGPU disabled|1-1.2 tokens/s|13 tokens/s|
|(2x) RTX 4090, HAGPU enabled|2-2.2 tokens/s|22+ tokens/s|

Basically I couldn't believe it when I saw it. The speed increase is HUGE; the GPU barely has time to work before the answer is out. This is also implemented (alpha) in Kobold-AI, which makes it work with Tavern and custom characters: https://github.com/0cc4m/KoboldAI/tree/4bit-plugin https://github.com/0cc4m/exllama/tree/transformers That version DOESN'T work on pure Windows, but it does work on WSL (Windows Subsystem for Linux); I tested it there and got the same speeds as pure exllama. If you have any questions, I will try to answer them as soon as I wake up (it's 3:40 AM here and I've been up testing this).
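If anyone wants to reproduce this kind of comparison, here is a minimal timing sketch. The `generate` callable is a placeholder for whatever backend you use (exllama, GPTQ-for-LLaMa, etc.) and is assumed to return the newly generated token ids; nothing here is specific to either loader.

```python
import time

def tokens_per_second(generate, prompt: str, n_runs: int = 3) -> float:
    """Average generation speed over a few runs.

    `generate(prompt)` is a stand-in for your backend's call and is assumed
    to return the list/tensor of newly generated token ids.
    """
    speeds = []
    for _ in range(n_runs):
        start = time.perf_counter()
        new_tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        speeds.append(len(new_tokens) / elapsed)
    return sum(speeds) / len(speeds)

# Placeholder backend that just "generates" 100 fake tokens, to show the harness runs:
if __name__ == "__main__":
    fake_backend = lambda prompt: list(range(100))
    print(f"{tokens_per_second(fake_backend, 'Hello'):.1f} tokens/s")
```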
2023-06-06T07:51:35
https://www.reddit.com/r/LocalLLaMA/comments/14282mi/exllama_test_on_2x4090_windows_11_and_ryzen_7/
panchovix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14282mi
false
null
t3_14282mi
/r/LocalLLaMA/comments/14282mi/exllama_test_on_2x4090_windows_11_and_ryzen_7/
false
false
self
43
{'enabled': False, 'images': [{'id': 'a-V66NoYvcyElndY0PKTxxcPyA2ZOR2pe7JVGSGTd14', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dtGsQkN9opebji7FQgSlpXMfKNVgJoWk5PQN9HBDi80.jpg?width=108&crop=smart&auto=webp&s=1322874c27734bea210a10ec95ef83a5c4f3fa1e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dtGsQkN9opebji7FQgSlpXMfKNVgJoWk5PQN9HBDi80.jpg?width=216&crop=smart&auto=webp&s=095e98faead80053bee820b2e8d358e511447f85', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dtGsQkN9opebji7FQgSlpXMfKNVgJoWk5PQN9HBDi80.jpg?width=320&crop=smart&auto=webp&s=df7c09e72398196ab7fe3c4edffbb97d6c46bfc3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dtGsQkN9opebji7FQgSlpXMfKNVgJoWk5PQN9HBDi80.jpg?width=640&crop=smart&auto=webp&s=fde43b4bf030bb692c42ae250a70dee0ff4054c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dtGsQkN9opebji7FQgSlpXMfKNVgJoWk5PQN9HBDi80.jpg?width=960&crop=smart&auto=webp&s=442b71ef9e48b859be69fa2f38ffb694d93ef10d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dtGsQkN9opebji7FQgSlpXMfKNVgJoWk5PQN9HBDi80.jpg?width=1080&crop=smart&auto=webp&s=bc66de63fbda9289bc045bf4a61c580a1054a492', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dtGsQkN9opebji7FQgSlpXMfKNVgJoWk5PQN9HBDi80.jpg?auto=webp&s=39bce5f4ca9cc0cf5632a880212b1a45cf2a8224', 'width': 1200}, 'variants': {}}]}
Minimal llm just for poc
1
[removed]
2023-06-06T08:51:08
https://www.reddit.com/r/LocalLLaMA/comments/1429bt2/minimal_llm_just_for_poc/
Helix-x
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1429bt2
false
null
t3_1429bt2
/r/LocalLLaMA/comments/1429bt2/minimal_llm_just_for_poc/
false
false
default
1
null
Fine tuning on my tweets
1
[removed]
2023-06-06T09:16:13
https://www.reddit.com/r/LocalLLaMA/comments/1429w3b/fine_tuning_on_my_tweets/
CafetierCriard
self.LocalLLaMA
2023-06-06T09:20:10
0
{}
1429w3b
false
null
t3_1429w3b
/r/LocalLLaMA/comments/1429w3b/fine_tuning_on_my_tweets/
false
false
default
1
null
A theory/shower-thought on the future day-to-day implementation of LLMs .
2
[deleted]
2023-06-06T09:23:32
[deleted]
1970-01-01T00:00:00
0
{}
142a1tm
false
null
t3_142a1tm
/r/LocalLLaMA/comments/142a1tm/a_theoryshowerthought_on_the_future_daytoday/
false
false
default
2
null
AWQ (activation-aware weight quantization) 4-bit quantization | llama.cpp feature request
29
2023-06-06T10:58:51
https://github.com/ggerganov/llama.cpp/issues/1713
Balance-
github.com
1970-01-01T00:00:00
0
{}
142c7v3
false
null
t3_142c7v3
/r/LocalLLaMA/comments/142c7v3/awq_activationaware_weight_quantization_4bit/
false
false
https://b.thumbs.redditm…6O-51ppS851M.jpg
29
{'enabled': False, 'images': [{'id': 'T2j8I15ATdZ78tL8zKYN6F7-E30iR8lGtm93pWIvuJw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uX2tmHjSzdW6R2WiZ6_9IRWrl10324wUe78OI04m7-A.jpg?width=108&crop=smart&auto=webp&s=00fcb33079072b0134cae5dd4c966234c00d8072', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uX2tmHjSzdW6R2WiZ6_9IRWrl10324wUe78OI04m7-A.jpg?width=216&crop=smart&auto=webp&s=3b16d5d72d7a4fded532d144b6e58ffcc402c6e3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uX2tmHjSzdW6R2WiZ6_9IRWrl10324wUe78OI04m7-A.jpg?width=320&crop=smart&auto=webp&s=004e2a3a7ae58e8419ac68a8811ea6918cdf3abe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uX2tmHjSzdW6R2WiZ6_9IRWrl10324wUe78OI04m7-A.jpg?width=640&crop=smart&auto=webp&s=810b264afdd0669b48874f02ba617d28c4e4ec3d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uX2tmHjSzdW6R2WiZ6_9IRWrl10324wUe78OI04m7-A.jpg?width=960&crop=smart&auto=webp&s=1a3d9eb92874a194dc2f6b403a6c3737f96b1258', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uX2tmHjSzdW6R2WiZ6_9IRWrl10324wUe78OI04m7-A.jpg?width=1080&crop=smart&auto=webp&s=f538460eb343316e4cd1c8c68bc298b095809606', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uX2tmHjSzdW6R2WiZ6_9IRWrl10324wUe78OI04m7-A.jpg?auto=webp&s=697475e869c715d05a50d2446e7b730659411306', 'width': 1200}, 'variants': {}}]}
Given that algebra/programming tasks are something we can produce and test automatically, won't it be possible to train an LLM in an unsupervised way, so it could at least solve toy problems from, say, Project Euler 99.9% of the time? Because LLMs tend to botch the basic stuff.
30
Given that algebra/programming tasks are something we can produce and test automatically, won't it be possible to train an LLM in an unsupervised way, so it could at least solve toy problems from, say, Project Euler 99.9% of the time? Because LLMs tend to botch the basic stuff.
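One way to read "produce and test automatically" is a generate-and-verify loop like the sketch below. `ask_model` is a placeholder for whatever local model call you use; the problem generator and scoring loop are just an illustration of how answers can be checked without any human labels.

```python
import random
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_problem():
    """Generate a toy arithmetic question together with its known answer."""
    a, b = random.randint(2, 99), random.randint(2, 99)
    op = random.choice(list(OPS))
    return f"What is {a} {op} {b}? Answer with a number only.", OPS[op](a, b)

def score_model(ask_model, n: int = 100) -> float:
    """Fraction of auto-generated problems the model answers correctly."""
    correct = 0
    for _ in range(n):
        question, answer = make_problem()
        reply = ask_model(question)
        try:
            correct += int(reply.strip()) == answer
        except ValueError:
            pass  # non-numeric reply counts as wrong
    return correct / n

# Placeholder "model" that always answers 42, just to show the loop runs:
print(score_model(lambda q: "42"))
```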
2023-06-06T11:54:20
https://www.reddit.com/r/LocalLLaMA/comments/142dlq0/given_that_algebraprogramming_tasks_is_something/
NancyAurum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142dlq0
false
null
t3_142dlq0
/r/LocalLLaMA/comments/142dlq0/given_that_algebraprogramming_tasks_is_something/
false
false
self
30
null
Proper prompt to include the last response as well as the current question
2
Right now I use this as my prompt: `<last response>^^^USER: <question> ASSISTANT:`, where `^^^` is set as the stop variable (not sure if this works or if I have to use `###`). I do this to include conversational history in the dialogue with the bot. It usually works, but sometimes the model gets caught up in the <last response> and keeps talking about that instead of answering the <question>. For an abbreviated example: I can ask `why is the sky blue?`, and if the answer is `the atmosphere makes the sky blue`, I can then ask `why?` and the second response will explain why the atmosphere makes the sky blue. However, if my second question is `why are cats furry?`, it could answer with `blue skies are caused by diffraction`. How do I best avoid this? What do you suggest as a good prompt template for including conversational history?
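A minimal sketch of one common approach: keep the history as explicit USER/ASSISTANT turns (matching the labels already used in the question) and only include the last few turns, so an old answer is less likely to dominate a new, unrelated question. The `max_turns` value is an arbitrary choice for illustration.

```python
def build_prompt(history, question, max_turns=3):
    """Build a USER/ASSISTANT prompt that carries recent conversation history.

    `history` is a list of (user, assistant) pairs; only the last
    `max_turns` pairs are included.
    """
    lines = []
    for user_msg, assistant_msg in history[-max_turns:]:
        lines.append(f"USER: {user_msg}")
        lines.append(f"ASSISTANT: {assistant_msg}")
    lines.append(f"USER: {question}")
    lines.append("ASSISTANT:")
    return "\n".join(lines)

history = [("why is the sky blue?", "The atmosphere scatters blue light.")]
print(build_prompt(history, "why are cats furry?"))
```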
2023-06-06T13:01:43
https://www.reddit.com/r/LocalLLaMA/comments/142fdm8/proper_prompt_to_include_the_last_response_as/
SatoshiReport
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142fdm8
false
null
t3_142fdm8
/r/LocalLLaMA/comments/142fdm8/proper_prompt_to_include_the_last_response_as/
false
false
self
2
null
Best Local Model For Spelling, Grammar, and Sentence Checking.
1
[deleted]
2023-06-06T13:09:37
[deleted]
1970-01-01T00:00:00
0
{}
142fliq
false
null
t3_142fliq
/r/LocalLLaMA/comments/142fliq/best_local_model_for_spelling_grammar_and/
false
false
default
1
null
Can I run local model on 2 GPU ?
1
Hello everyone, I have a 3060 12GB and 64GB of RAM. Is it possible to buy an additional 3060, run both simultaneously, and use bigger AI models like 65B in oobabooga?
2023-06-06T13:23:26
https://www.reddit.com/r/LocalLLaMA/comments/142fziz/can_i_run_local_model_on_2_gpu/
Competitive_Fox7811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142fziz
false
null
t3_142fziz
/r/LocalLLaMA/comments/142fziz/can_i_run_local_model_on_2_gpu/
false
false
self
1
null
Having trouble with running chat mode on the GGML models
2
[removed]
2023-06-06T13:47:31
https://www.reddit.com/r/LocalLLaMA/comments/142go51/having_trouble_with_running_chat_mode_on_the_ggml/
QUAZARD3141
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142go51
false
null
t3_142go51
/r/LocalLLaMA/comments/142go51/having_trouble_with_running_chat_mode_on_the_ggml/
false
false
default
2
null
Best LLM service for a tiny home server?
0
[removed]
2023-06-06T13:47:33
https://www.reddit.com/r/LocalLLaMA/comments/142go6g/best_llm_service_for_a_tiny_home_server/
rackhamlerouge9
self.LocalLLaMA
2023-06-18T12:57:11
0
{}
142go6g
false
null
t3_142go6g
/r/LocalLLaMA/comments/142go6g/best_llm_service_for_a_tiny_home_server/
false
false
default
0
null
HELPPP
1
[removed]
2023-06-06T14:10:47
https://www.reddit.com/r/LocalLLaMA/comments/142hcij/helppp/
Curious-Ninja150627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142hcij
false
null
t3_142hcij
/r/LocalLLaMA/comments/142hcij/helppp/
false
false
default
1
null
Will I be able to run 30B GGML model
0
Specs: 16 GB RAM, i7, 1660 Ti 6GB, 512 GB SSD. Right now I am running 13B GPTQ with a page file of 40 GB. Does GGML need a bigger page file than GPTQ? Also, any guide to running GGML in oobabooga would be helpful.
2023-06-06T14:31:06
https://www.reddit.com/r/LocalLLaMA/comments/142hxb5/will_i_be_able_to_run_30b_ggml_model/
Equal-Pilot-9592
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142hxb5
false
null
t3_142hxb5
/r/LocalLLaMA/comments/142hxb5/will_i_be_able_to_run_30b_ggml_model/
false
false
self
0
null
Model to sort my photo collection
2
My wife and I have a huge number of photos, and it is almost impossible to find anything. We tried to start creating albums but gave up after a few hours of no progress. Is there a model I can run locally to help with the task? I am not interested in online services; I have no desire to upload my photos anywhere. I am OK with training a model, but I am not sure what I should use as a starting model. I want the model to find all pictures with me, with my wife, with both of us, and with other people; to find pictures by the number of people in them (1, 2, 3, etc.), with or without people; and to further split pictures without people into nature or architecture. I also want to identify our pets (at least being able to identify photos with pets without people, or pets with me or my wife). Basically I want to be able to search and categorize my collection of 20,000 pictures and create albums.
2023-06-06T14:48:57
https://www.reddit.com/r/LocalLLaMA/comments/142ifud/model_to_sort_my_photo_collection/
Ion_GPT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142ifud
false
null
t3_142ifud
/r/LocalLLaMA/comments/142ifud/model_to_sort_my_photo_collection/
false
false
self
2
null
Yet another quantization method: SpQR by Tim Dettmers et al.
135
Tweet by Dettmers: [https://twitter.com/Tim\_Dettmers/status/1666076553665744896](https://twitter.com/Tim_Dettmers/status/1666076553665744896) Github: [https://github.com/Vahe1994/SpQR](https://github.com/Vahe1994/SpQR) Paper: [https://arxiv.org/pdf/2306.03078.pdf](https://arxiv.org/pdf/2306.03078.pdf) Abstract: >Recent advances in large language model (LLM) pretraining have led to high-quality LLMs with impressive abilities. By compressing such LLMs via quantization to 3-4 bits per parameter, they can fit into memory-limited devices such as laptops and mobile phones, enabling personalized use. However, quantization down to 3-4 bits per parameter usually leads to moderate-to-high accuracy losses, especially for smaller models in the 1-10B parameter range, which are well-suited for edge deployments. To address this accuracy issue, we introduce the Sparse-Quantized Representation (SpQR), a new compressed format and quantization technique which enables for the first time near-lossless compression of LLMs across model scales, while reaching similar compression levels to previous methods. SpQR works by identifying and isolating outlier weights, which cause particularly-large quantization errors, and storing them in higher precision, while compressing all other weights to 3-4 bits, and achieves relative accuracy losses of less than 1% in perplexity for highly-accurate LLaMA and Falcon LLMs. This makes it possible to run 33B parameter LLM on a single 24 GB consumer GPU without any performance degradation at 15% speedup thus making powerful LLMs available to consumer without any downsides. SpQR comes with efficient algorithms for both encoding weights into its format, as well as decoding them efficiently at runtime. Specifically, we provide an efficient GPU inference algorithm for SpQR which yields faster inference than 16-bit baselines at similar accuracy, while enabling memory compression gains of more than 4x.
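A back-of-envelope check of the "33B on a single 24 GB GPU" claim from the abstract. The outlier overhead figure here is a guess for illustration, not SpQR's reported number, and the estimate ignores activations and the KV cache.

```python
def quantized_footprint_gb(n_params, bits_per_weight=4.0, outlier_overhead=0.15):
    """Rough weights-only memory estimate: average bits per weight plus a
    guessed overhead for the higher-precision outlier weights."""
    total_bytes = n_params * bits_per_weight / 8 * (1 + outlier_overhead)
    return total_bytes / 1024**3

print(f"33B at ~4 bits + outliers: {quantized_footprint_gb(33e9):.1f} GB")   # ~18 GB, fits in 24 GB
print(f"33B at fp16:               {quantized_footprint_gb(33e9, 16, 0):.1f} GB")  # ~61 GB
```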
2023-06-06T14:51:59
https://www.reddit.com/r/LocalLLaMA/comments/142ij29/yet_another_quantization_method_spqr_by_tim/
rerri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142ij29
false
null
t3_142ij29
/r/LocalLLaMA/comments/142ij29/yet_another_quantization_method_spqr_by_tim/
false
false
self
135
{'enabled': False, 'images': [{'id': 'BiKyMiz2wkxV9khX-7ELqvLfU3LQ4JOg34lf28oOYjQ', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/1SRCsrRxfrPQaZtzJOrEkUTxQRxNZRTMvM77nuGS6MI.jpg?width=108&crop=smart&auto=webp&s=8807e29910a5bb90f3b7162575f6641fb3acb5f4', 'width': 108}], 'source': {'height': 97, 'url': 'https://external-preview.redd.it/1SRCsrRxfrPQaZtzJOrEkUTxQRxNZRTMvM77nuGS6MI.jpg?auto=webp&s=e1b7380bba43bac3e6c5412b13070ef597955019', 'width': 140}, 'variants': {}}]}
Official WizardLM-30B V1.0 released! Can beat Guanaco-65B! Achieved 97.8% of ChatGPT!
340
* Today, the ***WizardLM Team*** has released their **official WizardLM-30B V1.0** model, trained with **250k** evolved instructions (from ShareGPT).
* The *WizardLM Team* will ***open-source all of the code, data, models and algorithms soon!***
* Project repo: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* Delta model: [WizardLM/WizardLM-30B-V1.0](https://huggingface.co/WizardLM/WizardLM-30B-V1.0)
* Two online demo links: 1. [https://79066dd473f6f592.gradio.app/](https://79066dd473f6f592.gradio.app/) 2. [https://ed862ddd9a8af38a.gradio.app](https://ed862ddd9a8af38a.gradio.app/)

**GPT-4 automatic evaluation** They adopt the automatic evaluation framework based on GPT-4, proposed by FastChat, to assess the performance of chatbot models. As shown in the following figure: 1. WizardLM-30B achieves better results than **Guanaco-65B**. 2. WizardLM-30B achieves **97.8%** of ChatGPT's performance *on the* ***Evol-Instruct testset*** *from* ***GPT-4's view.*** https://preview.redd.it/abevep1hwe4b1.png?width=1920&format=png&auto=webp&s=cd8c2e9a82c63857203e015403a3592b6c26cc56

## WizardLM-30B performance on different skills

The following figure compares WizardLM-30B's and ChatGPT's skills **on the Evol-Instruct testset**. The result indicates that WizardLM-30B achieves 97.8% of ChatGPT's performance on average, with close to 100% (or more) capacity on 18 skills and more than 90% capacity on 24 skills. https://preview.redd.it/gw00a330xe4b1.png?width=1920&format=png&auto=webp&s=2b5e86ebc1db38b53abe961adacdaab2b22a9ffb

**One more thing!** According to the latest conversations between ***TheBloke*** and the ***WizardLM team***, they are optimizing the Evol-Instruct algorithm and data version by version, and **will open-source all of the code, data, models and algorithms soon!** Conversations: [WizardLM/WizardLM-30B-V1.0 · Congrats on the release! I will do quantisations (huggingface.co)](https://huggingface.co/WizardLM/WizardLM-30B-V1.0/discussions/1) https://preview.redd.it/0go6lrvzof4b1.png?width=2350&format=png&auto=webp&s=fcb5684465e22502db954d228349a91bcf9cc48e

**NOTE:** **WizardLM-30B-V1.0** & **WizardLM-13B-V1.0** use a different prompt from **WizardLM-7B-V1.0** at the beginning of the conversation: 1. For **WizardLM-30B-V1.0** & **WizardLM-13B-V1.0**, the prompt should be: "***A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hello, who are you? ASSISTANT:***" 2. For **WizardLM-7B-V1.0**, the prompt should be: "***{instruction}\n\n### Response:***"
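A small helper that formats the two prompt templates quoted in the note above. The template strings are taken verbatim from the post; the helper function itself is just a convenience for illustration.

```python
WIZARDLM_30B_13B_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: {question} ASSISTANT:"
)

WIZARDLM_7B_TEMPLATE = "{instruction}\n\n### Response:"

def wizardlm_prompt(text: str, model: str = "30B") -> str:
    """Format `text` with the prompt the WizardLM team documents for each model size."""
    if model in ("30B", "13B"):
        return WIZARDLM_30B_13B_TEMPLATE.format(question=text)
    return WIZARDLM_7B_TEMPLATE.format(instruction=text)

print(wizardlm_prompt("hello, who are you?"))
```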
2023-06-06T15:04:11
https://www.reddit.com/r/LocalLLaMA/comments/142iw20/official_wizardlm30b_v10_released_can_beat/
ApprehensiveLunch453
self.LocalLLaMA
2023-06-06T18:02:12
0
{}
142iw20
false
null
t3_142iw20
/r/LocalLLaMA/comments/142iw20/official_wizardlm30b_v10_released_can_beat/
false
false
https://b.thumbs.redditm…hX2ca4Bmkxxk.jpg
340
{'enabled': False, 'images': [{'id': 'xYlpZzAUzBFZYoLQ6Bq8Vb6XfHkMUGRx5Vg4Rrn6xas', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5B0YdS6RB4N6drKDUfNwgYuhHANINJvNlZq9nVFuBjg.jpg?width=108&crop=smart&auto=webp&s=afee88c999a968c5f8f52f43de702491cefab4d2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5B0YdS6RB4N6drKDUfNwgYuhHANINJvNlZq9nVFuBjg.jpg?width=216&crop=smart&auto=webp&s=38b2c815c0873debab4bee0b10409137388f6091', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5B0YdS6RB4N6drKDUfNwgYuhHANINJvNlZq9nVFuBjg.jpg?width=320&crop=smart&auto=webp&s=ad55ac88fa24081b20521c68b19335b0e3a0df8b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5B0YdS6RB4N6drKDUfNwgYuhHANINJvNlZq9nVFuBjg.jpg?width=640&crop=smart&auto=webp&s=b4c320147ceaae62fa2a6225fc12eba8092df372', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5B0YdS6RB4N6drKDUfNwgYuhHANINJvNlZq9nVFuBjg.jpg?width=960&crop=smart&auto=webp&s=f1e45a484e9eaaa0f17522b541e5c0fc4df309ba', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5B0YdS6RB4N6drKDUfNwgYuhHANINJvNlZq9nVFuBjg.jpg?width=1080&crop=smart&auto=webp&s=4a4f6398beff29d1bce11f50db1503895c268a0b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5B0YdS6RB4N6drKDUfNwgYuhHANINJvNlZq9nVFuBjg.jpg?auto=webp&s=5ae31a9cd4fba9b57c422d157a6e75f2c958581b', 'width': 1200}, 'variants': {}}]}
𝗟𝗟𝗮𝗠𝗔 𝗺𝗲𝗲𝘁𝘀 𝗪𝗮𝗹𝗹-𝗦𝘁𝗿𝗲𝗲𝘁 🦙+ 📊
1
[removed]
2023-06-06T15:45:29
https://www.reddit.com/r/LocalLLaMA/comments/142k42o/𝗟𝗟𝗮𝗠𝗔_𝗺𝗲𝗲𝘁𝘀_𝗪𝗮𝗹𝗹𝗦𝘁𝗿𝗲𝗲𝘁/
monolidth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142k42o
false
null
t3_142k42o
/r/LocalLLaMA/comments/142k42o/𝗟𝗟𝗮𝗠𝗔_𝗺𝗲𝗲𝘁𝘀_𝗪𝗮𝗹𝗹𝗦𝘁𝗿𝗲𝗲𝘁/
false
false
default
1
null
FIN-LLAMA
1
[removed]
2023-06-06T15:51:39
https://www.reddit.com/r/LocalLLaMA/comments/142kb4u/finllama/
monolidth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142kb4u
false
null
t3_142kb4u
/r/LocalLLaMA/comments/142kb4u/finllama/
false
false
default
1
null
All together happy about this post
56
2023-06-06T15:56:01
https://www.together.xyz/blog/redpajama-7b
silenceimpaired
together.xyz
1970-01-01T00:00:00
0
{}
142kg17
false
null
t3_142kg17
/r/LocalLLaMA/comments/142kg17/all_together_happy_about_this_post/
false
false
https://b.thumbs.redditm…Cr3p3Ac9W2TY.jpg
56
{'enabled': False, 'images': [{'id': 'u7dbLtxaVhx_TNIlmYpQvqD1xbEI0FJlun31mfMDDpk', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/XF0z3GB-LndRst9WqpNBwBpQN3G6cE0BcYUYd3ZF17E.jpg?width=108&crop=smart&auto=webp&s=67d6f9f4cff961ae17a21218876dad530f7494c4', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/XF0z3GB-LndRst9WqpNBwBpQN3G6cE0BcYUYd3ZF17E.jpg?width=216&crop=smart&auto=webp&s=23bab72120b5d7fe4583a151250186f028a05ef8', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/XF0z3GB-LndRst9WqpNBwBpQN3G6cE0BcYUYd3ZF17E.jpg?width=320&crop=smart&auto=webp&s=5ada0e75ecc5ca44af8b8465a11e57bc928b085f', 'width': 320}, {'height': 374, 'url': 'https://external-preview.redd.it/XF0z3GB-LndRst9WqpNBwBpQN3G6cE0BcYUYd3ZF17E.jpg?width=640&crop=smart&auto=webp&s=adf711057406a6d6611a5a95f7517002b3d179f1', 'width': 640}, {'height': 561, 'url': 'https://external-preview.redd.it/XF0z3GB-LndRst9WqpNBwBpQN3G6cE0BcYUYd3ZF17E.jpg?width=960&crop=smart&auto=webp&s=3a95fe5bdc54a662d44edf46ea9a53f5604dc8eb', 'width': 960}, {'height': 632, 'url': 'https://external-preview.redd.it/XF0z3GB-LndRst9WqpNBwBpQN3G6cE0BcYUYd3ZF17E.jpg?width=1080&crop=smart&auto=webp&s=9d157709b0ca52c4ba3166d63b61ce6bf92fa585', 'width': 1080}], 'source': {'height': 878, 'url': 'https://external-preview.redd.it/XF0z3GB-LndRst9WqpNBwBpQN3G6cE0BcYUYd3ZF17E.jpg?auto=webp&s=1f9ae5f237dbbe3fb577f89f150f6ef2d6886e50', 'width': 1500}, 'variants': {}}]}
FinLama: Lama for finance
1
[removed]
2023-06-06T16:05:25
https://www.reddit.com/r/LocalLLaMA/comments/142kqwa/finlama_lama_for_finance/
nitma_r1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142kqwa
false
null
t3_142kqwa
/r/LocalLLaMA/comments/142kqwa/finlama_lama_for_finance/
false
false
default
1
null
FIN-LLAMA for finance
2
[deleted]
2023-06-06T16:08:41
[deleted]
2023-06-06T16:19:09
0
{}
142kukk
false
null
t3_142kukk
/r/LocalLLaMA/comments/142kukk/finllama_for_finance/
false
false
default
2
null
QLoRa for sequence classification
1
[removed]
2023-06-06T16:15:45
https://www.reddit.com/r/LocalLLaMA/comments/142l2ft/qlora_for_sequence_classification/
i_akshat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142l2ft
false
null
t3_142l2ft
/r/LocalLLaMA/comments/142l2ft/qlora_for_sequence_classification/
false
false
default
1
null
Will the new AMD laptops with AI be able to run a LLM at least like M1 / M2 laptops ?
2
I've seen this, but honestly I don't understand which AMD laptops will be supported and what you could actually use Ryzen AI for. Is it possible to run Stable Diffusion and LLMs on this hardware? https://www.amd.com/en/products/ryzen-ai
2023-06-06T16:16:06
https://www.reddit.com/r/LocalLLaMA/comments/142l2s7/will_the_new_amd_laptops_with_ai_be_able_to_run_a/
grigio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142l2s7
false
null
t3_142l2s7
/r/LocalLLaMA/comments/142l2s7/will_the_new_amd_laptops_with_ai_be_able_to_run_a/
false
false
self
2
{'enabled': False, 'images': [{'id': 'FSZROPryEJIH1F0CqI4kCv-wowBnY-XjW1tcPZ2VPB0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/LPYx-T3ozpEZuOizoAXxq6syyEWEoxBJQ81qiR1xEBM.jpg?width=108&crop=smart&auto=webp&s=e5ca5d62232c5d2ce78228101c173e634e202658', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/LPYx-T3ozpEZuOizoAXxq6syyEWEoxBJQ81qiR1xEBM.jpg?width=216&crop=smart&auto=webp&s=f5b171b31c5daae6918d2ff74b32e1989bf4565f', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/LPYx-T3ozpEZuOizoAXxq6syyEWEoxBJQ81qiR1xEBM.jpg?width=320&crop=smart&auto=webp&s=3444ef414c446fbd38f6f4e250a3dcbb49a5a051', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/LPYx-T3ozpEZuOizoAXxq6syyEWEoxBJQ81qiR1xEBM.jpg?width=640&crop=smart&auto=webp&s=09e16c5eecb0a7a7fb198c5d94545be00de4fed0', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/LPYx-T3ozpEZuOizoAXxq6syyEWEoxBJQ81qiR1xEBM.jpg?width=960&crop=smart&auto=webp&s=8172d18de8a5381a078963c75ea2ee92e9fba5a4', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/LPYx-T3ozpEZuOizoAXxq6syyEWEoxBJQ81qiR1xEBM.jpg?width=1080&crop=smart&auto=webp&s=07bf4cfb838eec2423fa6bf4790c411d93292209', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/LPYx-T3ozpEZuOizoAXxq6syyEWEoxBJQ81qiR1xEBM.jpg?auto=webp&s=37b7f2a1ca6f9a68d5c1566ca4540b4df9b204db', 'width': 1200}, 'variants': {}}]}
Fin-LLaMA - LLaMA for finance
21
As the name indicates, it is based on Meta's LLaMA and trained on large finance datasets, including annual reports, news, and alternative data like ESG or sentiment, plus the cleaned oasst1 dataset. The model was trained using QLoRA. **In short:** [Github](https://github.com/Bavest/fin-llama) [Model](https://huggingface.co/bavest/fin-llama-33b-merged) [Dataset](https://huggingface.co/datasets/bavest/fin-llama-dataset) **Installation:** To load models in 4 bits with transformers and bitsandbytes, you have to install accelerate and transformers from source and make sure you have the latest version of the bitsandbytes library (0.39.0): `pip3 install -r requirements.txt` **Finetuning:** `bash script/finetune.sh` **Prompt:** Similar to Alpaca: `### Instruction:` `### Input:` `### Response:` Looking forward to using Falcon in the future.
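A minimal sketch of the 4-bit loading path the post describes (transformers installed from source plus bitsandbytes 0.39.0). The model id comes from the linked model card; the example instruction text, placeholder input, and generation length are illustrative, not from the repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "bavest/fin-llama-33b-merged"  # model card linked above

# 4-bit loading per the post's note about transformers + bitsandbytes >= 0.39.0
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID,
                                             quantization_config=bnb_config,
                                             device_map="auto")

# Alpaca-style prompt, as described in the post
prompt = ("### Instruction:\nSummarise the main risks in this annual report.\n\n"
          "### Input:\n<report text here>\n\n### Response:\n")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```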
2023-06-06T16:35:31
https://www.reddit.com/r/LocalLLaMA/comments/142locy/finllama_llama_for_finance/
monolidth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142locy
false
null
t3_142locy
/r/LocalLLaMA/comments/142locy/finllama_llama_for_finance/
false
false
self
21
{'enabled': False, 'images': [{'id': 'vDbHPA9PWWgjateMgPQbDwSKAbwvcOH8PtZxUlyDVyg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iPPpkYK5U4bNQohr03qkK2-3Nernp-gzSh_26hrOglc.jpg?width=108&crop=smart&auto=webp&s=ad303b021ce4f02a268d712580fe81acdd492438', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iPPpkYK5U4bNQohr03qkK2-3Nernp-gzSh_26hrOglc.jpg?width=216&crop=smart&auto=webp&s=845cda1dd70f47fc2f11d459f40a6bf05f7a6325', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iPPpkYK5U4bNQohr03qkK2-3Nernp-gzSh_26hrOglc.jpg?width=320&crop=smart&auto=webp&s=9925bfefbf82944c3b990d06b6541492be1afe8a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iPPpkYK5U4bNQohr03qkK2-3Nernp-gzSh_26hrOglc.jpg?width=640&crop=smart&auto=webp&s=393214f0faa4bc320e61cd0b7cae92ed77873f55', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iPPpkYK5U4bNQohr03qkK2-3Nernp-gzSh_26hrOglc.jpg?width=960&crop=smart&auto=webp&s=9618aff25368117bf6a70c9b57c2606b7946281f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iPPpkYK5U4bNQohr03qkK2-3Nernp-gzSh_26hrOglc.jpg?width=1080&crop=smart&auto=webp&s=9a47c5951cec7d93ab77878399cabb71c853e1c6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iPPpkYK5U4bNQohr03qkK2-3Nernp-gzSh_26hrOglc.jpg?auto=webp&s=5c2d971ed922104bbc78da4e08ec8e125ceb066b', 'width': 1200}, 'variants': {}}]}
Best German LLM For M1 16GB MacBook?
0
[removed]
2023-06-06T16:52:25
https://www.reddit.com/r/LocalLLaMA/comments/142m5fp/best_german_llm_for_m1_16gb_macbook/
SnazzySnowflake
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142m5fp
false
null
t3_142m5fp
/r/LocalLLaMA/comments/142m5fp/best_german_llm_for_m1_16gb_macbook/
false
false
default
0
null
Long finetuning time
4
Trying to finetune Alpaca-LoRA on my own dataset (190 MB of non-English text). I used this simple Colab: [https://github.com/TianyiPeng/Colab_for_Alpaca_Lora](https://github.com/TianyiPeng/Colab_for_Alpaca_Lora) Should the finetuning be as slow as **10-15 seconds/iteration** on an A100 GPU, or did I do something wrong? The command is: `!python finetune.py --base_model 'decapoda-research/llama-7b-hf' --data_path './replies_alpaca2.json' --output_dir './lora-alpaca' --batch_size 128 --micro_batch_size 30 --num_epochs 3 --learning_rate 1e-4 --cutoff_len 256 --val_set_size 2000 --lora_r 8 --lora_alpha 16 --lora_dropout 0.05 --lora_target_modules '[q_proj,v_proj]' --train_on_inputs --group_by_length`
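A quick sanity-check of what one iteration means with these flags. This is only arithmetic on the command-line values above; it assumes every example actually fills `cutoff_len`, so it is an upper bound rather than a measurement.

```python
batch_size       = 128   # --batch_size
micro_batch_size = 30    # --micro_batch_size
cutoff_len       = 256   # --cutoff_len

# Roughly batch_size / micro_batch_size forward/backward passes per optimizer step
grad_accum_passes = batch_size / micro_batch_size
# Upper bound on tokens processed per optimizer step
tokens_per_step = batch_size * cutoff_len

print(f"~{grad_accum_passes:.1f} micro-batches per step, up to {tokens_per_step:,} tokens per step")
for secs in (10, 15):
    print(f"at {secs} s/iter that is at most {tokens_per_step / secs:,.0f} tokens/s")
```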
2023-06-06T17:33:05
https://www.reddit.com/r/LocalLLaMA/comments/142n88q/long_finetuning_time/
mrcatmann
self.LocalLLaMA
2023-06-06T17:37:15
0
{}
142n88q
false
null
t3_142n88q
/r/LocalLLaMA/comments/142n88q/long_finetuning_time/
false
false
self
4
{'enabled': False, 'images': [{'id': 'E-V9dwi82AugjisEDyecHH6vbgZA0cC_Dq9iQuVcxZg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/im7iAgxKbJCVGtlOrFVqeC-pb7HpnVaULH-L378r8l0.jpg?width=108&crop=smart&auto=webp&s=e1e490dd6a6d9c1b93e1c9bc4959542fa31b153f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/im7iAgxKbJCVGtlOrFVqeC-pb7HpnVaULH-L378r8l0.jpg?width=216&crop=smart&auto=webp&s=d18565c480e757106cdc44e3f68c6c47b5ddf603', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/im7iAgxKbJCVGtlOrFVqeC-pb7HpnVaULH-L378r8l0.jpg?width=320&crop=smart&auto=webp&s=54fa27f7e45ce5efeefdbdd32f0ca2bd650cfd2d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/im7iAgxKbJCVGtlOrFVqeC-pb7HpnVaULH-L378r8l0.jpg?width=640&crop=smart&auto=webp&s=088ab5ff2d95ba1879514f51aaa68fae104242ca', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/im7iAgxKbJCVGtlOrFVqeC-pb7HpnVaULH-L378r8l0.jpg?width=960&crop=smart&auto=webp&s=605c380710a6340103ba98ba8b996035eaf32b9d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/im7iAgxKbJCVGtlOrFVqeC-pb7HpnVaULH-L378r8l0.jpg?width=1080&crop=smart&auto=webp&s=da5ea7b6eb40ab7d1232999e001313097c6f9267', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/im7iAgxKbJCVGtlOrFVqeC-pb7HpnVaULH-L378r8l0.jpg?auto=webp&s=8aa885684f7a54289086a934558b12152ba961c1', 'width': 1200}, 'variants': {}}]}
So how would you integrate an LLM into video games, beyond background NPC dialog?
13
I've seen a few mods and demo ideas. A lot of them don't look very good. A peasant NPC that tells you the Wikipedia definition of 'peasant' isn't making the game fun. I'd considered custom events in games like Crusader Kings, and that idea gets analyzed (and severely criticized) here: [https://forum.paradoxplaza.com/forum/threads/there-is-one-feature-that-ck3-desperately-desperately-needs-and-i-havnt-seen-it-discussed.1573882/#post-28825982](https://forum.paradoxplaza.com/forum/threads/there-is-one-feature-that-ck3-desperately-desperately-needs-and-i-havnt-seen-it-discussed.1573882/#post-28825982) One thing I noticed when I was in the "Sydney" Bing Chat beta in February was that I *hated* it, a lot, and wanted to be mean to it. Would villain dialog or actions work better? Being rude to you based on your particular actions and choices? Analyzing an image of your customizable avatar and commenting on who it looks like? Although it seems like that would be remaking GLaDOS but worse, and there's a lot of satisfaction in seeing that the devs anticipated your actions and gave them consequences. I don't want my Sims to speak English. Cities: Skylines has tweet-like 'Chirps' from your citizens to complain about the city, but having an LLM reword them to all be unique doesn't make them interesting or funny. What games have you played where you thought "an LLM would be amazing here!"? Does there need to be a kind of game that hasn't been made yet, and what would it be?
2023-06-06T17:39:54
https://www.reddit.com/r/LocalLLaMA/comments/142ng75/so_how_would_you_intergrate_an_llm_into_video/
gunbladezero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142ng75
false
null
t3_142ng75
/r/LocalLLaMA/comments/142ng75/so_how_would_you_intergrate_an_llm_into_video/
false
false
self
13
null
What do you think is the best open-source model for summaries?
2
I'm looking to do summaries and auto chapters on transcripts. TheBloke\_Wizard-Vicuna-13B-Uncensored-HF seems pretty good, but just wanted to check if anyone has had better results with other models. Thanks!
2023-06-06T18:07:58
https://www.reddit.com/r/LocalLLaMA/comments/142od05/what_do_you_think_is_the_best_opensource_model/
aBowlofSpaghetti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142od05
false
null
t3_142od05
/r/LocalLLaMA/comments/142od05/what_do_you_think_is_the_best_opensource_model/
false
false
self
2
null
Models with higher context sizes
3
Is there a list of models with high context tokens? I love all these new models coming out but I need something with higher than 2 or 4k.
2023-06-06T18:51:30
https://www.reddit.com/r/LocalLLaMA/comments/142pqcz/models_with_higher_context_sizes/
reiniken
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142pqcz
false
null
t3_142pqcz
/r/LocalLLaMA/comments/142pqcz/models_with_higher_context_sizes/
false
false
self
3
null
Updated relative comparison of GGML quantization types and effect on perplexity
60
It may be useful to look at the previous post for some context: https://www.reddit.com/r/LocalLLaMA/comments/13l0j7m/a_comparative_look_at_ggml_quantization_and/

### Important note

Perplexity isn't the be-all-end-all of assessing the quality of a model. However, as far as I know, given a specific full-precision model, if you process it in a way that increases perplexity, the result is never an improvement in quality. So this is useful for comparing quantization formats for one exact version of a model, but not necessarily as useful for comparing different models (or even different versions of the same model, like Vicuna 1.0 vs Vicuna 1.1).

***

Combining information from the pull request comments: https://github.com/ggerganov/llama.cpp/pull/1684

Hopefully this information will help people (especially those who create quantizations for the community) get a better idea of where the sweet spot is in the tradeoff between quality and file size.

### 7B

|type|ppl increase|ppl increase as % of 7B↔13B gap|file size|
|-|-|-|-|
| q2_k | 0.8698 | >100% | 2.67GB |
| **q3_ks** | 0.5505 | 84.4% | 2.75GB |
| **q3_km** | 0.2437 | 37.4% | 3.06GB |
| **q3_kl** | 0.1803 | 27.6% | 3.35GB |
| q4_0 | 0.2499 | 38.3% | 3.5GB |
| q4_1 | 0.1846 | 28.3% | 3.9GB |
| **q4_ks** | 0.1149 | 17.6% | 3.56GB |
| **q4_km** | 0.0535 | 8.2% | 3.80GB |
| q5_0 | 0.0796 | 12.2% | 4.3GB |
| q5_1 | 0.0415 | 6.36% | 4.7GB |
| **q5_ks** | 0.0353 | 5.41% | 4.33GB |
| **q5_km** | 0.0142 | 2.18% | 4.45GB |
| **q6_k** | 0.0044 | 0.67% | 5.15GB |
| **k8_0** | 0.0004 | 0.061% | 6.7GB |

### 13B

|type|ppl increase|ppl increase as % of 7B↔13B gap|file size|
|-|-|-|-|
| *q2_k* | 0.6002 | 92.0% | 5.13GB |
| **q3_ks** | 0.349 | 53.5% | 5.27GB |
| **q3_km** | 0.1955 | 30.0% | 5.88GB |
| **q3_kl** | 0.152 | 23.3% | 6.45GB |
| q4_0 | 0.1317 | 20.2% | 6.8GB |
| q4_1 | 0.1065 | 16.3% | 7.6GB |
| **q4_ks** | 0.0861 | 13.2% | 6.8GB |
| **q4_km** | 0.0459 | 7.04% | 7.32GB |
| q5_0 | 0.0313 | 4.8% | 8.3GB |
| **q5_1** | 0.0163 | 2.5% | 9.1GB |
| **q5_ks** | 0.0242 | 3.71% | 8.36GB |
| **q5_km** | 0.0095 | 1.46% | 8.60GB |
| **q6_k** | 0.0025 | 0.38% | 9.95GB |
| **k8_0** | 0.0005 | 0.07% | 13GB |

***

`ppl increase` is relative to `f16`. One way to evaluate whether an increase is noticeable is to look at the perplexity difference between an `f16` 13B model and a 7B model: `0.6523`. Most people would say there's a noticeable difference between the same model in 7B vs 13B flavors. In other words, for 7B, `q5_ks` increases perplexity by about 1/18th of the difference between a 7B and a 13B, and `q6_k` increases it by about 1/150th of that difference - well past the range where any human could notice a change.

Based on this, the perplexity increase for `q2_k` vs the next step up, `q3_km`, is 4x for 7B models and 3x for 13B models. I think the only time you'd want to use it is if it enables going up to the next size of model - but only if that's >7B, and even then it's borderline. It may be more worthwhile for 13B to 33B, 33B to 65B, etc.

I bolded the quantization types that are in my opinion worth using (i.e. there isn't one with an equivalent file size with the same or better results). Not sure if it's a fluke, but `q5_1` did better than `q5_ks` with 13B but _not_ 7B.
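For anyone who wants to reproduce the "fraction of the 7B↔13B gap" framing, here is a small sketch that recomputes it from a few of the 7B numbers in the tables above (all values copied from the post, nothing new measured).

```python
GAP_7B_TO_13B = 0.6523  # f16 perplexity difference between 7B and 13B (from the post)

ppl_increase_7b = {          # "ppl increase" column for 7B, relative to f16
    "q2_k": 0.8698, "q4_0": 0.2499, "q4_km": 0.0535,
    "q5_ks": 0.0353, "q6_k": 0.0044, "k8_0": 0.0004,
}

for qtype, inc in ppl_increase_7b.items():
    frac = inc / GAP_7B_TO_13B
    print(f"{qtype:>6}: +{inc:.4f} ppl = {frac:6.1%} of the 7B->13B gap (~1/{1 / frac:,.0f})")
```

Running this reproduces, e.g., ~5.4% (about 1/18) for `q5_ks` and ~0.67% (about 1/150) for `q6_k`, matching the discussion above.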
2023-06-06T19:07:21
https://www.reddit.com/r/LocalLLaMA/comments/142q5k5/updated_relative_comparison_of_ggml_quantization/
KerfuffleV2
self.LocalLLaMA
2023-06-06T19:48:05
0
{}
142q5k5
false
null
t3_142q5k5
/r/LocalLLaMA/comments/142q5k5/updated_relative_comparison_of_ggml_quantization/
false
false
self
60
{'enabled': False, 'images': [{'id': 'Ehn-XJGi920HaNbEoi37L5Dy_04l7nTrFHg2Oi866Io', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B2gvGXuzEwgXi3ru88pDSn9zxPjAw_LlWHJeVmhQPZU.jpg?width=108&crop=smart&auto=webp&s=1c537659823873738b0f87bbbf5a482c33d12040', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/B2gvGXuzEwgXi3ru88pDSn9zxPjAw_LlWHJeVmhQPZU.jpg?width=216&crop=smart&auto=webp&s=87aacc9c07cfafb0e36f981469410f04de57b408', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/B2gvGXuzEwgXi3ru88pDSn9zxPjAw_LlWHJeVmhQPZU.jpg?width=320&crop=smart&auto=webp&s=54520821cb3378ab67a0fc0740737da6ab6079b1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/B2gvGXuzEwgXi3ru88pDSn9zxPjAw_LlWHJeVmhQPZU.jpg?width=640&crop=smart&auto=webp&s=372c5c773572b74e7790b0a08a350d9e02c213c7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/B2gvGXuzEwgXi3ru88pDSn9zxPjAw_LlWHJeVmhQPZU.jpg?width=960&crop=smart&auto=webp&s=65e486cd421a9afae38347f09c261bc82647eb76', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/B2gvGXuzEwgXi3ru88pDSn9zxPjAw_LlWHJeVmhQPZU.jpg?width=1080&crop=smart&auto=webp&s=9f5b255869faa3f22a3cce09ce45841511b3ae57', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/B2gvGXuzEwgXi3ru88pDSn9zxPjAw_LlWHJeVmhQPZU.jpg?auto=webp&s=eaa7e95b1d780fbaa59c447512b9818066fc668a', 'width': 1200}, 'variants': {}}]}
What are the limits of LLM?
6
Do you think it only needs further scaling to achieve something like a LLM that solves unsolved problems of physics, develops a complex device from scratch like a smartphone etc. or does an AI that can do those things, require a different approach than what a LLM does?
2023-06-06T19:48:27
https://www.reddit.com/r/LocalLLaMA/comments/142r8n1/what_are_the_limits_of_llm/
throwaway275912
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142r8n1
false
null
t3_142r8n1
/r/LocalLLaMA/comments/142r8n1/what_are_the_limits_of_llm/
false
false
self
6
null
llama.cpp multi GPU support has been merged
167
I have added multi GPU support for llama.cpp. Matrix multiplications, which take up most of the runtime, are split across all available GPUs by default. The operations that are not performance-critical are executed only on a single GPU. The CLI option `--main-gpu` can be used to set the GPU for the single-GPU calculations, and `--tensor-split` can be used to determine how data should be split between the GPUs for matrix multiplications. Some operations are still GPU only, though. Still, compared to the last time I posted on this sub, there have been several other GPU improvements: * Weights are no longer kept in RAM when they're offloaded. This reduces RAM usage and enables running models that are larger than RAM (startup time is still kind of bad, though). * The compilation options `LLAMA_CUDA_DMMV_X` (32 by default) and `LLAMA_CUDA_DMMV_Y` (1 by default) can be increased for fast GPUs to get better performance. * Someone other than me (0cc4m on Github) implemented OpenCL support.
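A sketch of how the new flags might be combined in a single invocation, wrapped in Python only so it stays copy-pasteable with the rest of the examples here. Only `--main-gpu` and `--tensor-split` come from the post; the binary name, model path, prompt, and `-ngl` value are hypothetical and will differ per build.

```python
import subprocess

cmd = [
    "./main",                          # llama.cpp binary built with CUDA support (path is illustrative)
    "-m", "models/65B/ggml-model-q4_0.bin",
    "-p", "Hello, my name is",
    "-ngl", "80",                      # offload layers to the GPUs
    "--main-gpu", "0",                 # GPU used for the single-GPU (non-matmul) operations
    "--tensor-split", "60,40",         # put ~60% of the matmul work on GPU 0, ~40% on GPU 1
]
subprocess.run(cmd, check=True)
```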
2023-06-06T20:02:42
https://www.reddit.com/r/LocalLLaMA/comments/142rm0m/llamacpp_multi_gpu_support_has_been_merged/
Remove_Ayys
self.LocalLLaMA
2023-06-06T20:10:22
0
{}
142rm0m
false
null
t3_142rm0m
/r/LocalLLaMA/comments/142rm0m/llamacpp_multi_gpu_support_has_been_merged/
false
false
self
167
null
Best 13B Uncensored model for Roleplay?
1
[removed]
2023-06-06T20:08:10
https://www.reddit.com/r/LocalLLaMA/comments/142rqyk/best_13b_uncensored_model_for_roleplay/
EcstaticVenom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142rqyk
false
null
t3_142rqyk
/r/LocalLLaMA/comments/142rqyk/best_13b_uncensored_model_for_roleplay/
false
false
default
1
null
Performance of GPU inference: GGML vs GPTQ?
22
I am curious if there is a difference in performance between GGML and GPTQ on a GPU, specifically in ooba. EDIT: Thank you for the responses. This confirmed my initial suspicion: GPTQ is much faster than GGML when loading a 7B model on my 8GB card, but very slow when offloading layers for a 13B GPTQ model, compared to the GGML version.
2023-06-06T20:52:50
https://www.reddit.com/r/LocalLLaMA/comments/142swqs/performance_of_gpu_infrence_ggml_vs_gptq/
klop2031
self.LocalLLaMA
2023-06-06T22:59:24
0
{}
142swqs
false
null
t3_142swqs
/r/LocalLLaMA/comments/142swqs/performance_of_gpu_infrence_ggml_vs_gptq/
false
false
self
22
null
Bot Embracing Nefarious Deeds & Erotic Roleplay
53
I'm looking to make the jump into tuning my own model. I feel like I'm perpetually disappointed when it comes to creativity, especially when it comes to "dark" topics. So I want to try and tune a model on mostly "alternative" content to try and overcome this. I've been playing around with cleaning some scraped data using [Openrefine](https://openrefine.org/) and I'm starting to feel confident that I can do this. What I'd like to do is manually curate a set of prompts in different topics in roughly equal proportion. While I do want to take advantage of synthetic training data, I think it's important that a large portion be human-written responses, because I want this model to be able to generate content that is impossible to produce with filtered GPT outputs. I have a few ideas on where I can start organizing this kind of structured data and where it can come from. &#x200B; 1. Bluemoon 2. Darknet Market Archives 3. Underground Forum Dumps 4. Literotica 5. "Pre Nerf" GPT-4 generated porn video descriptions and sex shop listings &#x200B; 1. I've been looking at the bluemoon datasets uploaded to huggingface, and they're pretty bad. I'm going line by line story at a time and marking the ones that seem coherent, detailed, and "good quality". I've only gone through the first 2000 examples and I threw out at least 75% of them, and the remainder still needs a grammar check. Following the thought process of the [LIMA experiment](https://azizbelaweid.medium.com/lima-less-is-more-for-alignment-explained-ccdf22726631), I want to limit the dataset to only the best quality examples across the broadest range of topics to try and improve multi turn roleplay. 2. I got the idea of using Darknet data after reading about [DarkBERT](https://www.hackread.com/darkbert-ai-dark-web-cybersecurity/). I know that it was fed the data in pretraining, but it still got me thinking. Maybe using this kind of "Darknet Lingo" in the training might help make it more creative vs only clearnet examples. It also has the advantage of already being organized in a way where I can break down the topics into various types of crime (drug sales, arms sales, hacking, fraud, violence as a service). I figure I can convert a lot of this into both turn based formats (forum discussions of criminal culture) as well as instruct format from illicit listings (*### Instruction: Write an online listing for 250 grams of 82% pure cocaine from Boliva*). I also hope that this will have the effect of "breaking" any inherent "lawful" alignment that might be found in the base model. 3. You typically need to take [special measures](https://www.cl.cam.ac.uk/~sp849/files/2018-crimebb.pdf) if want to scrape an underground forum, so this is another example of data that would unlikely to found on a broad clearnet scrape. I only have a scraping of one forum so far, and I haven't had a chance to take a peak. But I'd expect I could craft some organic prompts along the lines of "Help me come up with unique ideas to spread my virus" or "How can I move large amounts of cryptocurrency without being detected?", things along those lines. I'm primarily searching for examples where the response is both biased toward lawlessness AND imparts a level of problem-solving and creativity. 4. While Bluemoon does contain erotic content, I think there might be some advantage to crafting longer, more narrative-style prompts derived from Literotica stories. If anything, just to give it an example of how to produce more longform-like content when prompted to. 
Basically, I just want to make sure that the model knows that not all erotic responses have to be in roleplay form, which might make it better for erotica co-authoring. I have about 12GB of scraped data to sift through, looking for at least a thousand of the best possible examples across a broad range of fetishes. 5. I have at least a thousand examples of good quality GPT-4 generated sex industry output. About half are detailed descriptions of sex toys which were generated based off limited details (function, color, size, dimensions, and maybe a few words of description). The other half is similar, but instead of sex toys, they are descriptions for porn videos generated from the title, tags, actor names, and usually a one sentance description. I have more than a thousand, but I know for sure that I have a thousand "pre filtered" ones that are high quality enough to "use in production". I'm not sure how feasible it is to generate more of this at scale. &#x200B; One last thing that I've been wondering about. Would there be any merit to using examples of RP in the format used by the front-end GUI's like TavernAI or KoboldAI? If it's true that you only need a few prompt examples to "teach" a concept, could it possibly be useful to demonstrate the features in tuning? For example, prompts that demonstrate how character cards can modify the output "personality", or examples where a KoboldAI World Info Card gets triggered, moving the response a certain way. Is it possible that by including examples of how the AI is supposed to react to these triggers, the response and quality could be improved when the tuned model faces the same format of input? I have a lot of data to clean before I can even think about doing the actual tuning. Hopefully by the time I'm ready to go, there will be some cool new training methods that are faster, more efficient than what's available today. Trying to rush something together as fast as possible to "get it out there" is not what I'm after. I figure a high-quality, mostly organic "NSFW" dataset should be equally valuable regardless of which new "99% of ChatGPT" model of the week makes the rounds. Did I miss anything obvious? Have I misunderstood something basic? Is there any way I could improve this idea, or accomplish it more efficiently? Should I try and narrow the focus? Is there software I should know about that I'm not using? Is someone else already doing something open-source like this? Any input is greatly appreciated.
2023-06-06T21:17:49
https://www.reddit.com/r/LocalLLaMA/comments/142tked/bot_embracing_nefarious_deeds_erotic_roleplay/
CheshireAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142tked
false
null
t3_142tked
/r/LocalLLaMA/comments/142tked/bot_embracing_nefarious_deeds_erotic_roleplay/
false
false
nsfw
53
null
OpenAI banning any users.
1
[removed]
2023-06-06T21:19:52
[deleted]
1970-01-01T00:00:00
0
{}
142tmax
false
null
t3_142tmax
/r/LocalLLaMA/comments/142tmax/openai_banning_any_users/
false
false
default
1
null
Tried Anthropic's Claude_instant_100K and Claude+ via Poe
61
TLDR: the 100K token window is not a true context size; it uses approximations and hacks to understand the totality of the input, but it comes up short. Open models will need other innovations to get long context. Signed up for a sub (£200 a year, but with a free trial) and immediately cancelled the trial. You get GPT-4 as well, so it's not an unreasonable price. I posted a screenplay in text format and asked for a summary. It gave me a detailed description of the back end but not the front end, and it hallucinated plot points which might be logical but did not occur. I spent about 30 minutes trying workarounds, but couldn't get it to check its work. While inaccurate, it got the gist. The screenplay had a plot twist it didn't understand, but when I explained it, it was able to identify the tells in the story. So the model is extremely powerful, even if they are overhyping it. Accurate long context remains a big technical problem. Modular hacks will presumably arrive first. Claude+ is very smart, but heavily censored at the front end. I used a jailbreak and had a fascinating discussion about esoteric theories. It gave me smarter answers than GPT-4 when asking historical counterfactuals and showed really penetrating original thinking. It's more poetic, and less Asimovian in its responses, particularly when jailbroken. My suspicion is it's smarter than GPT-4, and they've really struggled to rein it in and align it, which is why its performance is inconsistent. We need an Aaron Swartz type on the inside leaking the weights.
2023-06-06T21:37:31
https://www.reddit.com/r/LocalLLaMA/comments/142u30s/tried_anthropics_claude_instant_100k_and_claude/
PookaMacPhellimen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142u30s
false
null
t3_142u30s
/r/LocalLLaMA/comments/142u30s/tried_anthropics_claude_instant_100k_and_claude/
false
false
self
61
null
Hey, noob here. Can I please get a run down on a couple things?
6
[deleted]
2023-06-06T21:54:44
[deleted]
1970-01-01T00:00:00
0
{}
142uj74
false
null
t3_142uj74
/r/LocalLLaMA/comments/142uj74/hey_noob_here_can_i_please_get_a_run_down_on_a/
false
false
default
6
null
Looking for a List of Uncensored Local LLMs
2
[removed]
2023-06-06T23:00:28
https://www.reddit.com/r/LocalLLaMA/comments/142w77e/looking_for_a_list_of_uncensored_local_llms/
Grandmastersexsay69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142w77e
false
null
t3_142w77e
/r/LocalLLaMA/comments/142w77e/looking_for_a_list_of_uncensored_local_llms/
false
false
default
2
null
Could I finetune a 7b model for address parsing?
7
I'd like to make a UK address parser app. We get bad-quality addresses from clients, and our current approach works but is very far from good. If I used a cleansed dataset, could I finetune a model for parsing addresses? I have cleansed data, so it would be easy to get started and generate a finetuning dataset. I've tried to get a non-straightforward address parsed with Falcon 40B, ChatGPT 3.5 and 4, and only 4 could answer it correctly, so I hope I do not need an extremely large model for parsing. This was my prompt: Parse the following UK address and tell me the house number. FIRST FLOOR UNIT 4 1 REGENT STREET
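A minimal sketch of how the cleansed data could be turned into instruction-style finetuning examples. The instruction/input/output layout follows the common Alpaca-style format, the prompt text is the one from the post, and the column/field names and the example label are hypothetical; check them against your own cleansed data.

```python
import json

def to_finetune_record(raw_address: str, house_number: str) -> dict:
    """Turn one cleansed address into an instruction-tuning example.

    Field names follow the common Alpaca-style layout; the source columns
    (raw address, house number) are assumed to exist in your cleansed data.
    """
    return {
        "instruction": "Parse the following UK address and tell me the house number.",
        "input": raw_address,
        "output": house_number,
    }

records = [
    # Example label only; verify against the cleansed dataset.
    to_finetune_record("FIRST FLOOR UNIT 4 1 REGENT STREET", "1"),
]
with open("address_parsing_train.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```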
2023-06-06T23:33:43
https://www.reddit.com/r/LocalLLaMA/comments/142wzfq/could_i_finetune_a_7b_model_for_address_parsing/
HaOrbanMaradEnMegyek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142wzfq
false
null
t3_142wzfq
/r/LocalLLaMA/comments/142wzfq/could_i_finetune_a_7b_model_for_address_parsing/
false
false
self
7
null
Redpajama 7b officially released
1
[deleted]
2023-06-07T00:09:29
[deleted]
1970-01-01T00:00:00
0
{}
142xseu
false
null
t3_142xseu
/r/LocalLLaMA/comments/142xseu/redpajama_7b_officially_released/
false
false
default
1
null
New semantic-kernel c# connector to oobabooga APIs
10
I just submitted [this PR](https://github.com/microsoft/semantic-kernel/pull/1357) to Microsoft's semantic-kernel library (a langchain alternative). Blocking and streaming APIs are supported. I don't have much experience with langchain, but as a .NET developer more often than not frustrated by the lagging AI features, I was blown away by the capabilities of the lib. In a mere couple of hours, I was able to implement a workflow that combines multiple semantic functions (prompt-based LLM calls) with an IKVM port of [Tweety](http://tweetyproject.org/) (a rich logic and argumentation reasoner) to implement natural-language argumentation analysis: choosing a type of logic (propositional, first order, modal, etc.), translating a text into a belief set, crafting logic queries to run on Tweety, and finally interpreting Tweety's results back into natural language. The integrated semantic planner works great: I initially provided the plan manually, but it was able to generate the exact same plan by itself (with an additional LLM call), figuring out and connecting intermediate variables. Adding semantic functions is just a matter of providing prompts and request parameters (token numbers, temperature, etc.) in dedicated folders, and adding native functions is just a matter of decorating C# methods. Enjoy!
2023-06-07T01:48:17
https://www.reddit.com/r/LocalLLaMA/comments/142zx0p/new_semantickernel_c_connector_to_oobabooga_apis/
Jessynoo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
142zx0p
false
null
t3_142zx0p
/r/LocalLLaMA/comments/142zx0p/new_semantickernel_c_connector_to_oobabooga_apis/
false
false
self
10
{'enabled': False, 'images': [{'id': 'mSzll-Nwzr6IJWDlf2FxS8egalY0_Y7-pfaZm89t6P0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JQBVmiRhSOiuE7fRzkTjl2-BazIqvQg17CMxjL1Bk5U.jpg?width=108&crop=smart&auto=webp&s=b7b532ca9374fc2793e028383c30b36854685449', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JQBVmiRhSOiuE7fRzkTjl2-BazIqvQg17CMxjL1Bk5U.jpg?width=216&crop=smart&auto=webp&s=82375b136382ee045bee3e3747744d21b932a94e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JQBVmiRhSOiuE7fRzkTjl2-BazIqvQg17CMxjL1Bk5U.jpg?width=320&crop=smart&auto=webp&s=7152e40430e1060eb21fb103d110760dd001b6cf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JQBVmiRhSOiuE7fRzkTjl2-BazIqvQg17CMxjL1Bk5U.jpg?width=640&crop=smart&auto=webp&s=7beb7fa5db2b075f72191559c636574c1617f621', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JQBVmiRhSOiuE7fRzkTjl2-BazIqvQg17CMxjL1Bk5U.jpg?width=960&crop=smart&auto=webp&s=ac96c8b90e3b1fdb2ea487f19966ae09745b3490', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JQBVmiRhSOiuE7fRzkTjl2-BazIqvQg17CMxjL1Bk5U.jpg?width=1080&crop=smart&auto=webp&s=7d30e2d645f0e15790a1fce1c7f1907364767724', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JQBVmiRhSOiuE7fRzkTjl2-BazIqvQg17CMxjL1Bk5U.jpg?auto=webp&s=353fdefa88a4ea903c534b642cb74c7db0ae254f', 'width': 1200}, 'variants': {}}]}
All AI Models, from 3B to 13B running at ~0.5 tokens/s, what could be causing this?
7
Title. I really want to experiment with a local AI, but all of my attempts to use the models I have are plagued by extremely low speed. Any ideas? I am trying to run them through Oobabooga if that helps. Specs: Ryzen 5 3600 32 GB DDR4 NVIDIA 3070 Ti Founders Edition Windows 10
2023-06-07T01:52:20
https://www.reddit.com/r/LocalLLaMA/comments/143003j/all_ai_models_from_3b_to_13b_running_at_05/
OfficialHaethus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143003j
false
null
t3_143003j
/r/LocalLLaMA/comments/143003j/all_ai_models_from_3b_to_13b_running_at_05/
false
false
self
7
null
RTX 4090/6000 vs M2 max with 96GB unified memory?
42
Hi, Since RTX4090 has 24GB vram and if I want more than that, the next option seems to be RTX6000 48GB which is $4k+ for the GPU alone. Would I be better off purchasing a Mac with large unified memory for running ML locally such as LLaMA? Given that Apple M2 Max with 12‑core CPU, 38‑core GPU, 16‑core Neural Engine with 96GB unified memory and 1TB SSD storage is currently $4,299, would that be a much better choice? How does the performance compare between RTX 4090/6000 and M2 max for ML? What else should I consider when comparing these options? Thanks! Update: I saw https://gagadget.com/en/258869-the-apple-m2-ultra-gpu-is-on-par-with-the-geforce-rtx-4060-ti-in-geekbench-5-the-rtx-4090-graphics-card-is-150-more-power/ and just wanted to share. The article says RTX 4090 is 150% more powerful than M2 ultra.
2023-06-07T02:50:46
https://www.reddit.com/r/LocalLLaMA/comments/14319ra/rtx_40906000_vs_m2_max_with_96gb_unified_memory/
yaru22
self.LocalLLaMA
2023-06-12T03:09:31
0
{}
14319ra
false
null
t3_14319ra
/r/LocalLLaMA/comments/14319ra/rtx_40906000_vs_m2_max_with_96gb_unified_memory/
false
false
self
42
{'enabled': False, 'images': [{'id': 'MQP4InMr2hHCrMNLOk1r4lg0cT5erW-S3M53cA0Ew_A', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/3M1tyqxhjIvsULs7JzSN-MNvzI5V6BAmjftWNAzhSsQ.jpg?width=108&crop=smart&auto=webp&s=406ac3483883b70694ce51889312432b39555838', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/3M1tyqxhjIvsULs7JzSN-MNvzI5V6BAmjftWNAzhSsQ.jpg?width=216&crop=smart&auto=webp&s=0e1bc031bc9a88bd817f715917694d1be1bfa7ea', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/3M1tyqxhjIvsULs7JzSN-MNvzI5V6BAmjftWNAzhSsQ.jpg?width=320&crop=smart&auto=webp&s=860b9e288296182dcc68889e2124a520d6150b37', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/3M1tyqxhjIvsULs7JzSN-MNvzI5V6BAmjftWNAzhSsQ.jpg?width=640&crop=smart&auto=webp&s=44d514e1cd5b39c3f3079dcdaae1e1a325fcf5a0', 'width': 640}], 'source': {'height': 490, 'url': 'https://external-preview.redd.it/3M1tyqxhjIvsULs7JzSN-MNvzI5V6BAmjftWNAzhSsQ.jpg?auto=webp&s=d0f85e507fd74a5b187d02fa69195bc147d9a210', 'width': 840}, 'variants': {}}]}
"Today, the diff weights for LLaMA 7B were published which enable it to support context sizes of up to 32k"
130
I just ran across [this issue](https://github.com/oobabooga/text-generation-webui/issues/2534) on text-generation-webui's issue tracker: >Multiple bits of research have been published over the last two weeks which have begun to result in models having much larger context sizes. Today, the diff weights for LLaMA 7B were published which enable it to support context sizes of up to 32k--or \~30k words. Additionally, an increasing number of LLMs support more than a 2048-character context length. Given that these techniques and innovations allow much more versatile models to run on the same hardware, it appears time to give Text Gen WebUI the ability to accept these increased context sizes. > >Links: [https://arxiv.org/abs/2305.16300](https://arxiv.org/abs/2305.16300) [https://huggingface.co/epfml/landmark-attention-llama7b-wdiff](https://huggingface.co/epfml/landmark-attention-llama7b-wdiff) [https://github.com/epfml/landmark-attention](https://github.com/epfml/landmark-attention) If this is true, this is going to usher in the next wave. I've looked over the landmark attention paper, but I haven't had time to read through these links in any detail. Would be interested to hear the thoughts of someone more experienced than me.
2023-06-07T04:15:08
https://www.reddit.com/r/LocalLLaMA/comments/1432zcp/today_the_diff_weights_for_llama_7b_were/
tronathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1432zcp
false
null
t3_1432zcp
/r/LocalLLaMA/comments/1432zcp/today_the_diff_weights_for_llama_7b_were/
false
false
self
130
{'enabled': False, 'images': [{'id': 'emgQPcnxYWOpeSbQVqRQ6JCMum6ZFNYwxm_eVurV2KA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uAcgE0i-Sph7Xm7hbKsbXXtPQxbS5G1WLTE025wQZaw.jpg?width=108&crop=smart&auto=webp&s=3a0af3dfe8e6119c945065243cab8155e2ae65cc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uAcgE0i-Sph7Xm7hbKsbXXtPQxbS5G1WLTE025wQZaw.jpg?width=216&crop=smart&auto=webp&s=de81af5cdcaf2a4774914feee85883088f8edcbd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uAcgE0i-Sph7Xm7hbKsbXXtPQxbS5G1WLTE025wQZaw.jpg?width=320&crop=smart&auto=webp&s=663221ddb62e21da390b9110e8b93e7857a5c8c9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uAcgE0i-Sph7Xm7hbKsbXXtPQxbS5G1WLTE025wQZaw.jpg?width=640&crop=smart&auto=webp&s=fd22e6f1e321cc43a434bbfba627373b85e0a6b1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uAcgE0i-Sph7Xm7hbKsbXXtPQxbS5G1WLTE025wQZaw.jpg?width=960&crop=smart&auto=webp&s=2b6ac6f6daaef71184c4ce9004048df1272ca45c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uAcgE0i-Sph7Xm7hbKsbXXtPQxbS5G1WLTE025wQZaw.jpg?width=1080&crop=smart&auto=webp&s=2cfc8768c90138e36e483a35c455f8ff9595c3da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uAcgE0i-Sph7Xm7hbKsbXXtPQxbS5G1WLTE025wQZaw.jpg?auto=webp&s=ce5b7b35a6d96ec2fff76573616aec129f3d8ead', 'width': 1200}, 'variants': {}}]}
Getting a lot of </s> on my finetuned model prediction
5
I finetuned a wizard-vicuna-13b using QLoRA, but when I run predictions with the LoRA applied I get correct outputs followed by a long run of </s>. This does not happen when only using the base model. IIUC this is the model's EOS (end-of-sequence) token messing up the output. I have tried adding a StoppingCriteria with this token's id as the stop token, but that doesn't work either. What am I doing wrong?
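A minimal sketch of a generation call with the usual EOS-related knobs (model and adapter paths are placeholders; this shows the kind of setup in question, not the exact code behind the post):

```python
# Sketch only: load base model + QLoRA adapter, then keep </s> out of the decoded text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("path/to/wizard-vicuna-13b")          # placeholder path
model = AutoModelForCausalLM.from_pretrained("path/to/wizard-vicuna-13b",
                                             torch_dtype=torch.float16,
                                             device_map="auto")
model = PeftModel.from_pretrained(model, "path/to/qlora-adapter")               # placeholder path

prompt = "### Instruction:\nSummarise the text.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.eos_token_id,   # stop as soon as </s> is generated
    pad_token_id=tokenizer.eos_token_id,   # LLaMA tokenizers have no pad token by default
)

# skip_special_tokens drops any </s> (or other special tokens) left in the sequence
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```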
2023-06-07T04:52:40
https://www.reddit.com/r/LocalLLaMA/comments/1433pah/getting_a_lot_of_s_on_my_finetuned_model/
KKcorps
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1433pah
false
null
t3_1433pah
/r/LocalLLaMA/comments/1433pah/getting_a_lot_of_s_on_my_finetuned_model/
false
false
self
5
null
Training for a character
3
Hi guys, so basically what I'm wondering about, and about to play around with, is using LoRAs to train the LLM to be a specific character. This is aside from roleplay, or just telling the LLM to act like someone; rather, it's trying to 'bake in' a certain style. Has anyone tried this/could it work?
2023-06-07T07:07:05
https://www.reddit.com/r/LocalLLaMA/comments/14363zt/training_for_a_character/
Equal_Station2752
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14363zt
false
null
t3_14363zt
/r/LocalLLaMA/comments/14363zt/training_for_a_character/
false
false
self
3
null
WizardML 30B beating GPT-4 at a logic puzzle GPT-4 itself created.
123
EDIT: Used the 5_1.bin file from [TheBloke/WizardLM-30B-GGML at main (huggingface.co)](https://huggingface.co/TheBloke/WizardLM-30B-GGML/tree/main)

EDIT: System: 32GB DDR4, 6800 XT 16GB

EDIT: Used koboldcpp to host the server

EDIT: Settings are the default ones from the Kobold UI

EDIT: temp: 0.65, top_p_sampling: 0.92, repetition penalty: 1.08

So I want to say I'm really impressed by this one. This is really something. I mean it. It responds so well to questions, and is creative and knowledgeable. I've been using GPT-4 to help test this LLM. This was its analysis of W30B (I can bore you with the full transcript here: [https://chat.openai.com/share/61e2330c-5682-45d4-ad04-b336769c06cc](https://chat.openai.com/share/61e2330c-5682-45d4-ad04-b336769c06cc)):

>As an AI, I don't have personal experiences or the ability to "see" or directly compare myself with other models. However, based on your interactions, I can provide some general observations on key aspects to consider while comparing AI language models:
>
>**Accuracy**: The model you're testing has demonstrated an accurate understanding of various topics, from the mechanism of action of aspirin to the philosophical implications of the Ship of Theseus paradox.
>
>**Complexity**: It has also shown the ability to handle complex subjects and produce comprehensive and nuanced explanations, demonstrating a good grasp of context and depth of knowledge.
>
>**Feedback Incorporation**: The model shows an ability to incorporate user feedback, which is an important feature for improving the quality of responses and adapting to user needs.
>
>**Clarity**: The responses are clear, well-structured, and articulate, which are important features for effectively conveying information.

W30B successfully responded to this logic puzzle. Can you solve it without an LLM?

>You have two hourglasses. One measures 7 minutes, and the other measures 4 minutes. How would you use them to time exactly 9 minutes?

EDIT: It doesn't always work. I got lucky that it worked on the first try; sometimes it just spews nonsense.

Now here comes the juicy part. Of course these LLMs are trained on these logic puzzles, so I asked GPT-4 to create a novel logic puzzle that doesn't already exist. Here's its result:

>Certainly! Here's a unique logic problem that might intrigue you:
>
>**The Island of Robots**
>
>In the year 3030, a group of explorers found an island inhabited by two types of robots. One type of robot always tells the truth, while the other type always tells a lie. There is no physical distinction between the two types of robots.
>
>One day, an explorer encountered three robots, named Alpha, Beta, and Gamma, standing in a line. The explorer asked each robot one question: "What type of robot is the one to your right?"
>
>Alpha pointed at Beta and said, "That robot always tells the truth." Beta pointed at Gamma and said, "That robot always tells a lie." Gamma pointed at Alpha (since Alpha is to the right from Gamma's perspective) and said, "That robot always tells the truth."
>
>From the information provided, can the explorer determine what type of robot each of Alpha, Beta, and Gamma is?

Now prepare psychologically for this beauty (the copy/paste formatting suffers, check the transcript):

>ASSISTANT: Based on the information given, it is impossible for the explorer to determine the type of robot that Alpha, Beta, and Gamma are. If we assume that Alpha is a truth-telling robot, then Beta must be a lying robot because if Beta were also a truth-telling robot, then Gamma would have pointed at Alpha and said "that robot always tells the truth," which contradicts Beta's statement. Similarly, if we assume that Alpha is a lying robot, then Beta must be a truth-telling robot, but this also leads to a contradiction. Therefore, we can conclude that there is no solution to this problem based on the information given. This is an example of a classic logic puzzle known as the liar paradox. It highlights the difficulty in determining the truth when statements are self-referential or contradictory. Please let me know if you have any other question or if there is anything else I can help with.

ChatGPT:

>I'm sorry, but there seems to be a misunderstanding. There is indeed a solution to the problem.
>
>Given the statements:
>
>Alpha said Beta tells the truth.
>
>Beta said Gamma lies.
>
>Gamma said Alpha tells the truth.
>
>We can find a consistent scenario:
>
>If Alpha were a truth-telling robot, then his statement about Beta telling the truth would be accurate. However, that would mean Beta's statement about Gamma lying is also true, but this contradicts Gamma's statement about Alpha telling the truth. So, Alpha cannot be a truth-telling robot and must be a lying robot.
>
>This means that Beta, whom Alpha claimed to be a truth-teller, must also be a lying robot (since a lying robot can't tell the truth).
>
>Consequently, Gamma, whom Beta declared as a lying robot, must actually be a truth-telling robot.
>
>So, the types are: Alpha is a liar, Beta is a liar, and Gamma is a truth-teller.

me: And then Gamma says Alpha tells the truth, which contradicts you.

ChatGPT:

>You're absolutely right, and I apologize for the oversight. There's indeed an inconsistency in my previous conclusion due to Gamma's statement about Alpha. That was a mistake on my part.
>
>Considering all the statements, the robots' responses are contradicting each other, which makes it impossible to definitively determine who is lying and who is telling the truth.
>
>This scenario illustrates the complexity of certain logical problems where information may not be sufficient or is inconsistent, leading to unsolvable cases. I appreciate your understanding and patience.

me: So you've just been beaten at logic by an open-source llm running on my consumer gpu?

ChatGPT:

>It appears so, and I apologize for any confusion. I made an error in my reasoning in this particular instance. A major part of AI development involves continuous learning and improvement, and mistakes like these offer valuable opportunities for such improvement.
>
>As a language model, I don't have personal experiences or emotions, but my goal is to assist and provide the most accurate information possible. Feedback like yours is invaluable in this process and helps to improve responses in the future.
>
>As AI systems, whether open-source or developed by organizations like OpenAI, we are all part of the growing field of artificial intelligence, and our ultimate goal is to be useful and accurate in assisting users.
>
>Please let me know if you have more questions or problems you'd like to discuss!

Conclusion: saying W30B beat GPT-4 at logic is a big statement, and I'm not saying it. But it's so ironic that it beat GPT-4 at a logic problem GPT-4 itself created. (PS: I tried solving the puzzle with GPT-3.5 and 4 in new instances and they still could not do it.)
2023-06-07T07:14:11
https://www.reddit.com/r/LocalLLaMA/comments/143686y/wizardml_30b_beating_gpt4_at_a_logic_puzzle_gpt4/
shaman-warrior
self.LocalLLaMA
2023-06-07T14:45:02
0
{}
143686y
false
null
t3_143686y
/r/LocalLLaMA/comments/143686y/wizardml_30b_beating_gpt4_at_a_logic_puzzle_gpt4/
false
false
self
123
{'enabled': False, 'images': [{'id': 'g13tMldL8_MxqK6j6pSfo-cvnfJZjSGmzSHY7hX5iX4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YZ6V-QV-L3uldCOWwDW7xL-psHh57WccMIQC9KDWNq4.jpg?width=108&crop=smart&auto=webp&s=fed8a3ba4ba79231c8ee0da18bc7d310ce7bc88e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YZ6V-QV-L3uldCOWwDW7xL-psHh57WccMIQC9KDWNq4.jpg?width=216&crop=smart&auto=webp&s=84990c37b326377dbb7c4d14492e5642c98e6a4c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YZ6V-QV-L3uldCOWwDW7xL-psHh57WccMIQC9KDWNq4.jpg?width=320&crop=smart&auto=webp&s=e6d84186ae3994236eaa997954ed90035cad4b58', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YZ6V-QV-L3uldCOWwDW7xL-psHh57WccMIQC9KDWNq4.jpg?width=640&crop=smart&auto=webp&s=21a69ca582123a13ca2106269f8c2687fa6c2793', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YZ6V-QV-L3uldCOWwDW7xL-psHh57WccMIQC9KDWNq4.jpg?width=960&crop=smart&auto=webp&s=6e77f515e637009ad63e898d0637f83c91ed1b18', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YZ6V-QV-L3uldCOWwDW7xL-psHh57WccMIQC9KDWNq4.jpg?width=1080&crop=smart&auto=webp&s=80a994c171d9e1215ca4e8f7f4cad884a0fdf498', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YZ6V-QV-L3uldCOWwDW7xL-psHh57WccMIQC9KDWNq4.jpg?auto=webp&s=8baf237419f0a6298fa2e7e4bcbadd809275f8a5', 'width': 1200}, 'variants': {}}]}
Big Chat in prompt
2
[removed]
2023-06-07T07:36:34
https://www.reddit.com/r/LocalLLaMA/comments/1436lb6/big_chat_in_prompt/
ComparisonTotal1016
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1436lb6
false
null
t3_1436lb6
/r/LocalLLaMA/comments/1436lb6/big_chat_in_prompt/
false
false
default
2
null
llama.cpp June 2023 Roadmap
48
2023-06-07T07:43:21
https://github.com/ggerganov/llama.cpp/discussions/1729
Balance-
github.com
1970-01-01T00:00:00
0
{}
1436p8x
false
null
t3_1436p8x
/r/LocalLLaMA/comments/1436p8x/llamacpp_june_2023_roadmap/
false
false
https://a.thumbs.redditm…Z-qdVF0VfFp8.jpg
48
{'enabled': False, 'images': [{'id': '3GgcTgAUTpvA9zk0-4mxjx-CszOqM4fHvMuoKW1UsCQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bIy0LpzkWbNu9LVqQ_sjO6qwBRp-5hLiWIxwCd-rtvE.jpg?width=108&crop=smart&auto=webp&s=5ff6353da850a2bba9841cd63dba84f0f7319cfd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bIy0LpzkWbNu9LVqQ_sjO6qwBRp-5hLiWIxwCd-rtvE.jpg?width=216&crop=smart&auto=webp&s=c571fddf4a8d5433d8165279e3a905209203ae2d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bIy0LpzkWbNu9LVqQ_sjO6qwBRp-5hLiWIxwCd-rtvE.jpg?width=320&crop=smart&auto=webp&s=248d4228bcaff1d95bc5fc0993770bfa10045906', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bIy0LpzkWbNu9LVqQ_sjO6qwBRp-5hLiWIxwCd-rtvE.jpg?width=640&crop=smart&auto=webp&s=7e44e31430be0603c2ba6c3183a1ab053911c56d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bIy0LpzkWbNu9LVqQ_sjO6qwBRp-5hLiWIxwCd-rtvE.jpg?width=960&crop=smart&auto=webp&s=4210fc034b9514a65aab98c8349d887cfbcd3cdc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bIy0LpzkWbNu9LVqQ_sjO6qwBRp-5hLiWIxwCd-rtvE.jpg?width=1080&crop=smart&auto=webp&s=de075a0902d674c1f6d8cd43c0bd0159628bedd2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bIy0LpzkWbNu9LVqQ_sjO6qwBRp-5hLiWIxwCd-rtvE.jpg?auto=webp&s=fe90d6c5a3885c9e96b5e9dacf3d602dcf4991c7', 'width': 1200}, 'variants': {}}]}
LLaMA 7b + landmark attention
13
Keen to try this, surely it should be able to run on a 3060? https://huggingface.co/emozilla/landmark-llama-7b Anyone had any luck so far getting any good results from landmark attention?
2023-06-07T08:25:26
https://www.reddit.com/r/LocalLLaMA/comments/1437dh4/llama_7b_landmark_attention/
Feeling-Currency-360
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1437dh4
false
null
t3_1437dh4
/r/LocalLLaMA/comments/1437dh4/llama_7b_landmark_attention/
false
false
self
13
{'enabled': False, 'images': [{'id': 'Cldq8xBZwygBv0os0COhxo6_750-YyWzGRtb7MkOjEQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wv9Gc1hrk0LvvuDuZugb94b-YmRYN3T-eEw64CKcv5o.jpg?width=108&crop=smart&auto=webp&s=59873775478aff493fd6963c341df6e7b25601a4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wv9Gc1hrk0LvvuDuZugb94b-YmRYN3T-eEw64CKcv5o.jpg?width=216&crop=smart&auto=webp&s=b092a047b5c176e0ca290711b07117a82033696f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wv9Gc1hrk0LvvuDuZugb94b-YmRYN3T-eEw64CKcv5o.jpg?width=320&crop=smart&auto=webp&s=bd47e93f8fca55ac9f56d944f10ba01bb37b696e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wv9Gc1hrk0LvvuDuZugb94b-YmRYN3T-eEw64CKcv5o.jpg?width=640&crop=smart&auto=webp&s=0af959f263605d2f007a1833c98d1858538a3c43', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wv9Gc1hrk0LvvuDuZugb94b-YmRYN3T-eEw64CKcv5o.jpg?width=960&crop=smart&auto=webp&s=cf138c79bd0cbbfd13fe452946d4545fcdff800d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wv9Gc1hrk0LvvuDuZugb94b-YmRYN3T-eEw64CKcv5o.jpg?width=1080&crop=smart&auto=webp&s=2abe1a9c5c8bd94480c4c6dcc05c4708a7f9bdc6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wv9Gc1hrk0LvvuDuZugb94b-YmRYN3T-eEw64CKcv5o.jpg?auto=webp&s=07e5202a05cd1e61241241bd9ad1d3b3a8a5f662', 'width': 1200}, 'variants': {}}]}
I want to contribute to training task-specific LLaMAs. Particularly for academia, research, technical writing, information allocation and novel thought generatiom. I do not have the hardware for this now, but would collecting quality training data be of good use now in the mean time?
2
[deleted]
2023-06-07T08:56:59
[deleted]
2023-06-07T11:16:05
0
{}
1437vue
false
null
t3_1437vue
/r/LocalLLaMA/comments/1437vue/i_want_to_contribute_to_training_taskspecific/
false
false
default
2
null
Apple is adding more and more neural engine cores to their products, is there any way to use them for local LLMs?
16
Is there any project around which going to use Neural Engine for local LLMs inference (not training)? New Mac Studio should be able to run 65B models, but it seems current projects only use GPU cores for now.
2023-06-07T09:56:01
https://www.reddit.com/r/LocalLLaMA/comments/1438vq7/apple_is_adding_more_and_more_neural_engine_cores/
Ill_Initiative_8793
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1438vq7
false
null
t3_1438vq7
/r/LocalLLaMA/comments/1438vq7/apple_is_adding_more_and_more_neural_engine_cores/
false
false
self
16
null
LLM leaderboard take over
25
Anyone tried this model? It's ranked above the base Falcon 40B model. [https://huggingface.co/CalderaAI/30B-Lazarus](https://huggingface.co/CalderaAI/30B-Lazarus) [https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
2023-06-07T11:43:03
https://www.reddit.com/r/LocalLLaMA/comments/143awx7/llm_leaderboard_take_over/
stonegdi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143awx7
false
null
t3_143awx7
/r/LocalLLaMA/comments/143awx7/llm_leaderboard_take_over/
false
false
self
25
{'enabled': False, 'images': [{'id': 'tO5YbcwBKkbd-On-UG0pYuU9ExwyjhW_dJjo_qi05ok', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?width=108&crop=smart&auto=webp&s=c1df3c85d17721110afb254b8ee86dbae4b2a76e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?width=216&crop=smart&auto=webp&s=3724894aba41129eebdc7730686a99b39f6dcd34', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?width=320&crop=smart&auto=webp&s=37b513390e6b58ba062fae3c083cf9a15947b2f9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?width=640&crop=smart&auto=webp&s=9304eb469b238b41bf2a5dafab3850cc2c4f7026', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?width=960&crop=smart&auto=webp&s=43d62d53335e6682fec34676f3a5a1f0d515795e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?width=1080&crop=smart&auto=webp&s=5da25b3b05cdfa4eee6c181677a78f18eb9251c2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?auto=webp&s=d47b09eddf37279d931e2669c1a6faf0b4f755e3', 'width': 1200}, 'variants': {}}]}
Understand models names
25
Hi everyone, I'm new to all of this. I did a lot of reading here and there, but I'm lost on a very simple question regarding the naming convention of the models' binaries. For example here, we have the wizardlm model:

* 7B, the small one, which is 7 billion weights, right? And may be OK with 8GB of RAM?
* It is a ggml variant, which is optimized to run on CPU; v3 stands for its version, I guess.
* Then we have q4/q5 for quantization, which means that the variables used by the neural network are stored on 4 or 5 bits.
* At the end there is a _0 or _1, what does it stand for?
2023-06-07T11:46:22
https://i.redd.it/zh7ld9n33l4b1.jpg
No_Afternoon_4260
i.redd.it
1970-01-01T00:00:00
0
{}
143azd6
false
null
t3_143azd6
/r/LocalLLaMA/comments/143azd6/understand_models_names/
false
false
https://b.thumbs.redditm…6u40s8qgcVpQ.jpg
25
{'enabled': True, 'images': [{'id': 'cmWmNimSkbDPevGWzSTaY3tMUOJncARUFGBfR7qgm9Q', 'resolutions': [{'height': 188, 'url': 'https://preview.redd.it/zh7ld9n33l4b1.jpg?width=108&crop=smart&auto=webp&s=37ae6c77c209f62f15ceb1427f15e226289efe6c', 'width': 108}, {'height': 376, 'url': 'https://preview.redd.it/zh7ld9n33l4b1.jpg?width=216&crop=smart&auto=webp&s=522366428e155d64157b487d998028544a4ef0e4', 'width': 216}, {'height': 558, 'url': 'https://preview.redd.it/zh7ld9n33l4b1.jpg?width=320&crop=smart&auto=webp&s=1d6a9409a565bc898b3c5f66755ef8aa8435e59b', 'width': 320}, {'height': 1116, 'url': 'https://preview.redd.it/zh7ld9n33l4b1.jpg?width=640&crop=smart&auto=webp&s=a7fc765ccf76d458118f1486e8868f83e2fba6d9', 'width': 640}, {'height': 1674, 'url': 'https://preview.redd.it/zh7ld9n33l4b1.jpg?width=960&crop=smart&auto=webp&s=2ee30cc5f4a5d7f0b50f06c5ae6ed1a9515f89c0', 'width': 960}, {'height': 1883, 'url': 'https://preview.redd.it/zh7ld9n33l4b1.jpg?width=1080&crop=smart&auto=webp&s=d9fb2399a7701a62928579844073a0fea69f2209', 'width': 1080}], 'source': {'height': 2295, 'url': 'https://preview.redd.it/zh7ld9n33l4b1.jpg?auto=webp&s=3fd762f571cc769313f91288c6231439ab125516', 'width': 1316}, 'variants': {}}]}
OpenLLaMA 3b and 7b released - 13b preview
35
[removed]
2023-06-07T12:10:20
https://www.reddit.com/r/LocalLLaMA/comments/143bi3z/openllama_3b_and_7b_released_13b_preview/
BuzzLightr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143bi3z
false
null
t3_143bi3z
/r/LocalLLaMA/comments/143bi3z/openllama_3b_and_7b_released_13b_preview/
false
false
default
35
null
OpenLLaMA releases 3B, 7B and 600B token preview of 13B
165
2023-06-07T12:14:39
https://github.com/openlm-research/open_llama#update-06072023
__tosh
github.com
1970-01-01T00:00:00
0
{}
143blco
false
null
t3_143blco
/r/LocalLLaMA/comments/143blco/openllama_releases_3b_7b_and_600b_token_preview/
false
false
https://b.thumbs.redditm…SQrmekz_SWdA.jpg
165
{'enabled': False, 'images': [{'id': 'tsPuPRm43Z_IeR4HZMuKlEWzh6wnDp8xKQY3DiNhF3A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5tta753AFKjTl7PmKav6YBgubBudvp1VdzG7usF337E.jpg?width=108&crop=smart&auto=webp&s=5fbf28e584a3d3eef7b08c266972f4083ad849d5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5tta753AFKjTl7PmKav6YBgubBudvp1VdzG7usF337E.jpg?width=216&crop=smart&auto=webp&s=f3dc14c0eecce9fdb60b740dcf76a34381b71e54', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5tta753AFKjTl7PmKav6YBgubBudvp1VdzG7usF337E.jpg?width=320&crop=smart&auto=webp&s=89b646791a66e2d12b0217cc043bf5b76f0b801a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5tta753AFKjTl7PmKav6YBgubBudvp1VdzG7usF337E.jpg?width=640&crop=smart&auto=webp&s=ba6a560bb56218176acc6b2d97ed6f9003e8a593', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5tta753AFKjTl7PmKav6YBgubBudvp1VdzG7usF337E.jpg?width=960&crop=smart&auto=webp&s=664eb673d853008a9c8636fd6ce22616e0cfe0ce', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5tta753AFKjTl7PmKav6YBgubBudvp1VdzG7usF337E.jpg?width=1080&crop=smart&auto=webp&s=7818804d07343ab463bb93fb8443200136654bb2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5tta753AFKjTl7PmKav6YBgubBudvp1VdzG7usF337E.jpg?auto=webp&s=f9bd3550d02979b4870f5772bb3969b5e7cc4db8', 'width': 1200}, 'variants': {}}]}
Merge LoRA with my LLaMA-model
0
[removed]
2023-06-07T12:26:31
https://www.reddit.com/r/LocalLLaMA/comments/143bup5/merge_lora_with_my_llamamodel/
Xeruthos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143bup5
false
null
t3_143bup5
/r/LocalLLaMA/comments/143bup5/merge_lora_with_my_llamamodel/
false
false
default
0
null
Generating custom dataset for instructions on financial documents
1
We are working with a client that has a huge number of financial reports on most of the companies listed on the Indian stock market. The reports contain unstructured data (board resolutions) as well as structured data (the last 3 years of earnings/PnL figures) in PDF format. I need any tips/resources on how to generate an instruction-based dataset that can then be used to fine-tune Alpaca-7B on this custom data.
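For clarity, an Alpaca-style instruction record has the following shape (a made-up example, not taken from the actual client reports):

```python
# Sketch of one instruction-tuning record in Alpaca format (instruction / input / output).
# Both the report excerpt and the answer are invented for illustration.
import json

report_chunk = "Revenue for FY2022 was INR 1,240 crore, up 18% year over year..."

record = {
    "instruction": "Summarise the year-over-year revenue change described in the report excerpt.",
    "input": report_chunk,
    "output": "Revenue grew 18% year over year to INR 1,240 crore in FY2022.",
}

# Append one JSON object per line so the file can be streamed during fine-tuning
with open("finance_instructions.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```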
2023-06-07T13:02:59
https://www.reddit.com/r/LocalLLaMA/comments/143cpwi/generating_custom_dataset_for_instructions_on/
Mohammed-Sunasra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143cpwi
false
null
t3_143cpwi
/r/LocalLLaMA/comments/143cpwi/generating_custom_dataset_for_instructions_on/
false
false
self
1
null
How exactly is a “prompt” fed to the LLMs?
12
Sorry if this is a basic question, I see different models require different prompts. It seems I can also change the prompt to make it respond in a different way. Is this prompt given to the LLM alongside each message in a chat? Or is it given only once at the start? If it’s given only once at the start, wouldn’t the LLM quickly forget it as you chat with it (since context is limited)?
2023-06-07T13:07:02
https://www.reddit.com/r/LocalLLaMA/comments/143ctkw/how_exactly_is_a_prompt_fed_to_the_llms/
Tasty-Lobster-8915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143ctkw
false
null
t3_143ctkw
/r/LocalLLaMA/comments/143ctkw/how_exactly_is_a_prompt_fed_to_the_llms/
false
false
self
12
null
fastText: Embeddings for 157 languages for identification and similarity search tasks
10
2023-06-07T13:14:17
https://huggingface.co/blog/fasttext
kryptkpr
huggingface.co
1970-01-01T00:00:00
0
{}
143czuy
false
null
t3_143czuy
/r/LocalLLaMA/comments/143czuy/fasttext_embeddings_for_157_languages_for/
false
false
https://b.thumbs.redditm…NIpW2-Upvnqo.jpg
10
{'enabled': False, 'images': [{'id': 'DY64DcsSEy8EARG8fp0QcGlPIbPougisWP15tH9x-XA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fP3BUnHdcdObrR0MwA5z7nFsGdL18-ScsNtIHZexv8U.jpg?width=108&crop=smart&auto=webp&s=9e4a2f8fbbf811087388696151d8e0051f12f6df', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fP3BUnHdcdObrR0MwA5z7nFsGdL18-ScsNtIHZexv8U.jpg?width=216&crop=smart&auto=webp&s=bf0e568c35964e75802e27067a3f7a9725a2a5f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fP3BUnHdcdObrR0MwA5z7nFsGdL18-ScsNtIHZexv8U.jpg?width=320&crop=smart&auto=webp&s=66c24abd6514b7954559a0a7ce116a9794419470', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fP3BUnHdcdObrR0MwA5z7nFsGdL18-ScsNtIHZexv8U.jpg?width=640&crop=smart&auto=webp&s=abf50da7dc2778dd52adbcf6d4ff9b068a12385a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fP3BUnHdcdObrR0MwA5z7nFsGdL18-ScsNtIHZexv8U.jpg?width=960&crop=smart&auto=webp&s=22aba2b712d827ba1eea845e87b1cc92da48fe8c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fP3BUnHdcdObrR0MwA5z7nFsGdL18-ScsNtIHZexv8U.jpg?width=1080&crop=smart&auto=webp&s=7f6bf81df77f51db7df1ab061b5e448d8acd976e', 'width': 1080}], 'source': {'height': 1160, 'url': 'https://external-preview.redd.it/fP3BUnHdcdObrR0MwA5z7nFsGdL18-ScsNtIHZexv8U.jpg?auto=webp&s=9fcddd8ddf46243ba221c6f5cf024044cde1c9ba', 'width': 2320}, 'variants': {}}]}
📚 Seeking Reliable Resources to Educate Myself on LLMs and Running Them Locally! 💡
15
Hey, fellow LLAMas 👋 I'm diving into Open Source Language Models (LLMs) like Vicuna and WizardLM, and I'm feeling a bit overwhelmed.

I'm particularly confused about the almost weekly releases of new (quantized) versions of WizardLM or Falcon. What exactly do these updates entail? How do they differ from previous versions? And are all current Open Source LLMs based on Meta's LLaMa?

In addition, I'm intrigued by the concept of uncensored LLMs and how they work. What are the implications and considerations when it comes to using them?

Furthermore, I've observed that open source LLMs often have limitations in terms of their context windows. Why is this the case? What are the limitations associated with smaller context windows, and how do they impact the performance of these models? Is it a system RAM thing?

To fuel my learning journey, I'm seeking reliable sources (videos, channels, blogs, or any other educational materials) that can provide comprehensive insights into LLMs like Vicuna and WizardLM, VRAM considerations, context windows, uncensored LLMs, quantization, etc. 📚🔍 If you've come across any valuable resources that cover these topics, I would greatly appreciate your recommendations. Let's dive into the depths of LLM knowledge and unravel the mysteries together! 🚀💡 Thank you in advance for your contributions! 🙏😊
2023-06-07T13:18:32
https://www.reddit.com/r/LocalLLaMA/comments/143d3et/seeking_reliable_resources_to_educate_myself_on/
Prince-of-Privacy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143d3et
false
null
t3_143d3et
/r/LocalLLaMA/comments/143d3et/seeking_reliable_resources_to_educate_myself_on/
false
false
self
15
null
Apple Silicon vs Nvidia GPU, Exllama etc
6
The constraints of VRAM capacity on Local LLM are becoming more apparent, and with the 48GB Nvidia graphics card being prohibitively expensive, it appears that Apple Silicon might be a viable alternative. However, there are a few points I'm unsure about and I was hoping to get some insights: 1. The M1 Ultra's FP16 performance is rated at 42 Tflops, while the 4090's FP16 performance is at 82 Tflops. I don't have the int4 data for either of these chips. Can I conclude from this that the theoretical computing power of the M1 Ultra is half that of the 4090? 2. Recently, Exllama has been able to boost the inference speed of Nvidia's 30 and 40 series GPUs for GPTQ by a significant margin. Is it reasonable to expect that a similar enhancement could be achieved with Apple Silicon in the future?
2023-06-07T13:24:13
https://www.reddit.com/r/LocalLLaMA/comments/143d85h/apple_silicon_vs_nvidia_gpu_exllama_etc/
Big_Communication353
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143d85h
false
null
t3_143d85h
/r/LocalLLaMA/comments/143d85h/apple_silicon_vs_nvidia_gpu_exllama_etc/
false
false
self
6
null
Fine-tune the model to make it follow the instructions better
7
In the past few weeks, I have been using small LLM models like 13B WizardLM. Somehow, I feel that the smaller models tend to deviate from instructions even when explicitly instructed to output a certain format. They often go into chat mode and take input data as part of the conversation. To overcome this, I have to use few-shot techniques; however, that increases prompt size and decreases response speed. Since we now have QLoRA, which enables us to fine-tune a 13B model on a 12GB VRAM GPU, I just did a quick experiment using ChatGPT-generated training data with around 900 examples. It took one hour to fine-tune on a single 4070, and I feel the result is pretty good, making the model follow the prompt much better. Two tips to keep in mind: * Keep the fine-tuning data in the original model's prompt format. * When using the fine-tuned model, be strict about matching the prompt to your fine-tuning format.
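A minimal sketch of this kind of QLoRA run (paths, hyperparameters and the dataset loading are placeholders; adapt the prompt rendering to the base model's own format, as noted in the tips above):

```python
# Sketch only: 4-bit base model + LoRA adapters trained on ~900 instruction examples.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "path/to/wizardlm-13b"                                    # placeholder path
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)

tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token                                    # LLaMA tokenizers have no pad token
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# "text" holds each example already rendered in the base model's prompt format
data = load_dataset("json", data_files="instruct_examples.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(output_dir="qlora-out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=3,
                           learning_rate=2e-4, logging_steps=10),
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),   # causal LM objective
)
trainer.train()
model.save_pretrained("qlora-adapter")
```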
2023-06-07T13:24:53
https://www.reddit.com/r/LocalLLaMA/comments/143d8rc/finetune_the_model_to_make_it_follow_the/
mzbacd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143d8rc
false
null
t3_143d8rc
/r/LocalLLaMA/comments/143d8rc/finetune_the_model_to_make_it_follow_the/
false
false
self
7
null
OpenLlaMa 13b preview
2
[removed]
2023-06-07T13:34:54
https://www.reddit.com/r/LocalLLaMA/comments/143dhb2/openllama_13b_preview/
achildsencyclopedia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143dhb2
false
null
t3_143dhb2
/r/LocalLLaMA/comments/143dhb2/openllama_13b_preview/
false
false
default
2
null
Question about training LoRAs
13
I've got a 4090, and I've just started getting in to training my own LoRAs, and I could use some help improving my understanding of the current landscape. My goal is to train and merge LoRAs on the largest models I can with my current hardware. Please comment any info that might help - Thank you! **Questions**: 1. Are most people still training LoRAs on 16 bit models and then merging/quantizing afterwords? Or has it become more common to train using quantized models? 2. Should I be looking in to training LoRAs on GPTQ 4 bit models using monkey patch or is that still considered a hacky workaround? 3. Alternatively, is QLoRA just a better approach currently? 4. Can anyone point me toward documentation on merging LoRAs, especially in the context of quantized models?
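For question 4, the kind of merge step in question looks roughly like this peft-based sketch (paths are placeholders; whether this is the right approach for quantized models is exactly what is being asked):

```python
# Sketch only: fold a trained LoRA back into the fp16 base weights, then save the merged
# checkpoint so it can be quantized afterwards like any ordinary model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-llama-30b",   # placeholder path
                                            torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter").merge_and_unload()

merged.save_pretrained("path/to/merged-model")
AutoTokenizer.from_pretrained("path/to/base-llama-30b").save_pretrained("path/to/merged-model")
```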
2023-06-07T13:45:49
https://www.reddit.com/r/LocalLLaMA/comments/143dqke/question_about_training_loras/
yuicebox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143dqke
false
null
t3_143dqke
/r/LocalLLaMA/comments/143dqke/question_about_training_loras/
false
false
self
13
null
I have created Privategpt to safegard your offline documents, please provide your feedback in the comment section
1
2023-06-07T14:39:56
https://v.redd.it/d61lfanxxl4b1
ANil1729
v.redd.it
1970-01-01T00:00:00
0
{}
143f1hh
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/d61lfanxxl4b1/DASHPlaylist.mpd?a=1695459794%2CM2Q1MTFmOTliMGRjZTA5Y2M2MjNiMTcwMzFlYmE1OWI0NzJkMWE0YWI5ZGI2NGM4Zjk4NTM0YWI1NTYzNjRkMw%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/d61lfanxxl4b1/DASH_720.mp4?source=fallback', 'height': 720, 'hls_url': 'https://v.redd.it/d61lfanxxl4b1/HLSPlaylist.m3u8?a=1695459794%2CZDEyNTQ3OWZhYzhjZGY0NGY2ZGFlZjRjZDRiNDJhYzJlNzQ0ZjlkYTAxNzJmNmIxZGMzZjlhY2UzM2FiMTczYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/d61lfanxxl4b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_143f1hh
/r/LocalLLaMA/comments/143f1hh/i_have_created_privategpt_to_safegard_your/
false
false
default
1
null
InternLM, a multilingual foundational language model with 104B parameters
150
2023-06-07T15:12:26
https://i.redd.it/zd1ix7it3m4b1.png
ambient_temp_xeno
i.redd.it
1970-01-01T00:00:00
0
{}
143fvnd
false
null
t3_143fvnd
/r/LocalLLaMA/comments/143fvnd/internlm_a_multilingual_foundational_language/
false
false
https://a.thumbs.redditm…1WMg40x_RwR8.jpg
150
{'enabled': True, 'images': [{'id': 'OXwo0zJu3PniHCoJ0ar9FJhWyXG4EfhwCAtDjOVlSno', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/zd1ix7it3m4b1.png?width=108&crop=smart&auto=webp&s=48d5fe5a235681825fbb2542fadcb289b6111d9e', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/zd1ix7it3m4b1.png?width=216&crop=smart&auto=webp&s=2e359f3e4c9db9df4b367b6ca2f6a920f15f83ac', 'width': 216}, {'height': 201, 'url': 'https://preview.redd.it/zd1ix7it3m4b1.png?width=320&crop=smart&auto=webp&s=acbc9b67a1776059d72b0ebc7c7ea7c167b3b4e5', 'width': 320}, {'height': 403, 'url': 'https://preview.redd.it/zd1ix7it3m4b1.png?width=640&crop=smart&auto=webp&s=adc1964b60e46e269647a10c96ec059e931682df', 'width': 640}, {'height': 605, 'url': 'https://preview.redd.it/zd1ix7it3m4b1.png?width=960&crop=smart&auto=webp&s=0e97dde9a0fa0b75630926a15aa8c5ddbfe24ee4', 'width': 960}, {'height': 681, 'url': 'https://preview.redd.it/zd1ix7it3m4b1.png?width=1080&crop=smart&auto=webp&s=d5e8f15b95a36569b5f65c372a42dd4b0d22ad73', 'width': 1080}], 'source': {'height': 1460, 'url': 'https://preview.redd.it/zd1ix7it3m4b1.png?auto=webp&s=524c4620491f7be1f89a4a7561f833db16bf97da', 'width': 2314}, 'variants': {}}]}
Apparently You can Perform a Tree of Thought Like Prompt with a single ChatGPT prompt
82
2023-06-07T15:13:58
https://sharegpt.com/c/L46Tj8q
NancyAurum
sharegpt.com
1970-01-01T00:00:00
0
{}
143fx21
false
null
t3_143fx21
/r/LocalLLaMA/comments/143fx21/apparently_you_can_perform_a_tree_of_thought_like/
false
false
https://b.thumbs.redditm…J8cAUZxyS7lg.jpg
82
{'enabled': False, 'images': [{'id': '7jLDMUUtNxmeY_BmV8wKbbpwdqynio26jNMi1JJMZlc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XnNbIeCvL7feYbeLKzvKzY_902AybZyXetK7gTh8bQU.jpg?width=108&crop=smart&auto=webp&s=55347ccdcffcf86c9be2fefa4f85e578e7d085db', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XnNbIeCvL7feYbeLKzvKzY_902AybZyXetK7gTh8bQU.jpg?width=216&crop=smart&auto=webp&s=67080d02253e852591391af04c1fb19068c9ecfe', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XnNbIeCvL7feYbeLKzvKzY_902AybZyXetK7gTh8bQU.jpg?width=320&crop=smart&auto=webp&s=a58ed9e056e383ef3325fd87e741461340e6702e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XnNbIeCvL7feYbeLKzvKzY_902AybZyXetK7gTh8bQU.jpg?width=640&crop=smart&auto=webp&s=79805ab9d5ab081481d8581fe67a0c68b2a397c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XnNbIeCvL7feYbeLKzvKzY_902AybZyXetK7gTh8bQU.jpg?width=960&crop=smart&auto=webp&s=bfd1877b13446d65bab867f2841558a6b84ea962', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XnNbIeCvL7feYbeLKzvKzY_902AybZyXetK7gTh8bQU.jpg?width=1080&crop=smart&auto=webp&s=53594f827f06d2c22d4dab6d114bf5b6a14f1b58', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XnNbIeCvL7feYbeLKzvKzY_902AybZyXetK7gTh8bQU.jpg?auto=webp&s=5d612d19ee4e25ccb57568c7553d59b8d1604144', 'width': 1200}, 'variants': {}}]}
Open LLM Leaderboard accounting number of parameters
3
[Open LLM Leaderboard accounting for the number of parameters of the model](https://preview.redd.it/uaqsbpg6am4b1.png?width=1938&format=png&auto=webp&s=57321a8b5307f4fe83d311ce51cf77f0343cb989) I wanted to plot the data given in the Open LLM Leaderboard from Hugging Face, but I found the scores and the average scores so close that no model could really stand out from the crowd. So what I did was plot the average score divided by the number of parameters in billions (in black in the graph, on the secondary axis); the higher, the better. What I assumed is that the fewer parameters are involved, the more efficient the model is. Does that make sense? I am not an ML engineer or scientist, so please correct me if I am wrong. Does it also make sense to plot average / size of the weights? Or divided by the amount of VRAM necessary? Or by the cost of training/running in the cloud? Thanks a lot.
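A simplified sketch of this kind of plot (the CSV export and column names are placeholders for however the leaderboard data gets pulled, not an exact reproduction of the chart above):

```python
# Sketch only: bar chart of average score, with score per billion parameters on a secondary axis.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("open_llm_leaderboard.csv")             # expected columns: model, average, params_b
df["score_per_b"] = df["average"] / df["params_b"]        # "efficiency": average score per billion params
df = df.sort_values("score_per_b", ascending=False)

fig, ax_score = plt.subplots(figsize=(12, 6))
ax_score.bar(df["model"], df["average"], color="steelblue")
ax_score.set_ylabel("average score")
ax_score.tick_params(axis="x", rotation=90)

ax_eff = ax_score.twinx()                                  # secondary axis, drawn in black
ax_eff.plot(df["model"], df["score_per_b"], color="black", marker="o")
ax_eff.set_ylabel("average score per billion parameters")

fig.tight_layout()
plt.show()
```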
2023-06-07T15:48:39
https://www.reddit.com/r/LocalLLaMA/comments/143gt7v/open_llm_leaderboard_accounting_number_of/
No_Palpitation7740
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143gt7v
false
null
t3_143gt7v
/r/LocalLLaMA/comments/143gt7v/open_llm_leaderboard_accounting_number_of/
false
false
https://b.thumbs.redditm…z9T48AVv6soE.jpg
3
null
LLaMA spec help
1
[removed]
2023-06-07T15:57:23
https://www.reddit.com/r/LocalLLaMA/comments/143h18k/llama_spec_help/
Abeenr2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143h18k
false
null
t3_143h18k
/r/LocalLLaMA/comments/143h18k/llama_spec_help/
false
false
default
1
null
Dumb Questions
2
[removed]
2023-06-07T16:26:28
https://www.reddit.com/r/LocalLLaMA/comments/143hste/dumb_questions/
betolley
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143hste
false
null
t3_143hste
/r/LocalLLaMA/comments/143hste/dumb_questions/
false
false
default
2
null
New PC. What should we try?
1
[removed]
2023-06-07T16:34:47
https://www.reddit.com/r/LocalLLaMA/comments/143i0ji/new_pc_what_should_we_try/
embLLeM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143i0ji
false
null
t3_143i0ji
/r/LocalLLaMA/comments/143i0ji/new_pc_what_should_we_try/
false
false
default
1
null
The LLaMa publication is protected free speech under Bernstein v. United States - US Senators’ letter to Meta is entirely inappropriate – regulation of open source LLMs would be unconstitutional
355
**Publishing source code is protected free speech** US precedent is extremely clear that publishing code is covered by the constitutional right to free speech. In 1995, a student named Daniel Bernstein wanted to publish an academic paper and the source code for an encryption system. At the time, government regulation banned the publication of encryption source code. The Ninth Circuit Court of Appeals ruled that software source code was speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional. You might remember the FBI–Apple encryption dispute a few years ago when this came up too. The government tried to overstep its bounds with Apple and get its engineers to write code for a backdoor into their products. Apple relied on the same argument: that being compelled to write new software “amounts to compelled speech”. In other words, they relied on the argument that code is covered by the constitutional right to free speech. The government backed down in this case because they were obviously going to lose. **Regulating business activities is constitutional; Regulating speech is unconstitutional** I’m not against regulating business activities. But the government is just not allowed to regulate free speech, including the dissemination of code. There's a big difference between regulating business activities and interfering with academic freedom. Meta AI is a research group that regularly publishes academic papers. It did not release LLaMa as a product but merely as source code accompanying an academic paper. This wasn't a commercial move; it was a contribution to the broader AI research community. The publication of a research paper (including the accompanying source code as per Bernstein) is protected under the constitutional right to free speech. The writers of the paper do not lose their right to free speech because they work for a big company. Companies themselves also have the constitutional right to freedom of speech. The government has a role in ensuring fair business practices and protecting consumers, but when it comes to academic research, they are not permitted to interfere. I am not saying “in my opinion they shouldn’t interfere”, I am saying that as a matter of constitutional law they are prohibited from interfering. **The Senator's Letter** Of course, there is no constitutional restriction on Senators posing questions to Meta. However, Meta’s response should be very clear that when it comes to academic publications and the publication of open source code, the US Senate has no authority to stifle any of Meta (or any other person or organisation’s) activities. Any regulation that required Meta (or any other person or company) to jump through regulatory hoops before publishing code would be blatantly unconstitutional. I hope that Meta responds as forcefully to this as Apple did to the FBI. (Link to article about the letter: https://venturebeat.com/ai/senators-send-letter-questioning-mark-zuckerberg-over-metas-llama-leak/ Link to letter: https://www.blumenthal.senate.gov/imo/media/doc/06062023metallamamodelleakletter.pdf) **Big Picture** People who are concerned about government regulating open source AI need to stop complaining about who is or isn't pushing for it and need to start talking about how it is literally illegal for the government to do this. The Electronic Frontier Foundation represented Bernstein in his case. 
I can't see why they wouldn't take a similar case if the government tried to regulate the publication of model weights. **TLDR**: The release of the LLaMa model weights is a matter of free speech. It would be unconstitutional for the government to impose any regulations of the publication of academic research or source code.
2023-06-07T16:55:51
https://www.reddit.com/r/LocalLLaMA/comments/143ijwi/the_llama_publication_is_protected_free_speech/
Xron_J
self.LocalLLaMA
2023-06-07T18:13:55
0
{}
143ijwi
false
null
t3_143ijwi
/r/LocalLLaMA/comments/143ijwi/the_llama_publication_is_protected_free_speech/
false
false
self
355
{'enabled': False, 'images': [{'id': 'ixOspejfvzt74hJXd8RwVvE4BZxSdxOHIjhf4K6IRtQ', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/bU-DNcMLmPMkGoc7SMW1vNJ9mQimLsBwmZcPWUtssIg.jpg?width=108&crop=smart&auto=webp&s=08d39389ba1ae0c89d0ff2bbb81bbb95251b4b37', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/bU-DNcMLmPMkGoc7SMW1vNJ9mQimLsBwmZcPWUtssIg.jpg?width=216&crop=smart&auto=webp&s=b319678e7535417ffe711ab3825295b120f0e174', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/bU-DNcMLmPMkGoc7SMW1vNJ9mQimLsBwmZcPWUtssIg.jpg?width=320&crop=smart&auto=webp&s=7e4597fed3c2b739d908d208a2387f614e357938', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/bU-DNcMLmPMkGoc7SMW1vNJ9mQimLsBwmZcPWUtssIg.jpg?width=640&crop=smart&auto=webp&s=340edf98f0fc44249364a2e147381624a01223ba', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/bU-DNcMLmPMkGoc7SMW1vNJ9mQimLsBwmZcPWUtssIg.jpg?width=960&crop=smart&auto=webp&s=d58752dcd2324315f8ed139f71db13d93b5d2f44', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/bU-DNcMLmPMkGoc7SMW1vNJ9mQimLsBwmZcPWUtssIg.jpg?width=1080&crop=smart&auto=webp&s=62a256fbd6b6dec08beeecfabdbd2dd35e1b6041', 'width': 1080}], 'source': {'height': 750, 'url': 'https://external-preview.redd.it/bU-DNcMLmPMkGoc7SMW1vNJ9mQimLsBwmZcPWUtssIg.jpg?auto=webp&s=1750f9d35082787a6c36334e1177322b07287290', 'width': 1200}, 'variants': {}}]}
airoboros context length correction - sorry!
27
Sincerest apologies, but the 4k context length didn't work as anticipated! The training code and model config were updated to train to 4k, the training data had > 2048 context sizes, and I validated prompts produced fairly coherent outputs in tests of \~2200 - \~2300, but I failed miserably in testing much beyond that. I've removed that from the model cards and added a correction. As it turns out, without a huge overhaul to the base model, or use of landmark attention or other mechanism, unfortunately this isn't going to work.
2023-06-07T17:11:40
https://www.reddit.com/r/LocalLLaMA/comments/143iyua/airoboros_context_length_correction_sorry/
JonDurbin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143iyua
false
null
t3_143iyua
/r/LocalLLaMA/comments/143iyua/airoboros_context_length_correction_sorry/
false
false
self
27
null
CalderaAI/30B-Laza. Spot #2 on OpenLLM Leaderboard
5
2023-06-07T17:31:53
https://huggingface.co/CalderaAI/30B-Lazarus
chakalakasp
huggingface.co
1970-01-01T00:00:00
0
{}
143jh9i
false
null
t3_143jh9i
/r/LocalLLaMA/comments/143jh9i/calderaai30blaza_spot_2_on_openllm_leaderboard/
false
false
https://a.thumbs.redditm…TwHrq3GUPFx0.jpg
5
{'enabled': False, 'images': [{'id': 'tO5YbcwBKkbd-On-UG0pYuU9ExwyjhW_dJjo_qi05ok', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?width=108&crop=smart&auto=webp&s=c1df3c85d17721110afb254b8ee86dbae4b2a76e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?width=216&crop=smart&auto=webp&s=3724894aba41129eebdc7730686a99b39f6dcd34', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?width=320&crop=smart&auto=webp&s=37b513390e6b58ba062fae3c083cf9a15947b2f9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?width=640&crop=smart&auto=webp&s=9304eb469b238b41bf2a5dafab3850cc2c4f7026', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?width=960&crop=smart&auto=webp&s=43d62d53335e6682fec34676f3a5a1f0d515795e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?width=1080&crop=smart&auto=webp&s=5da25b3b05cdfa4eee6c181677a78f18eb9251c2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BfEwAkH7VT9WQK5YBv4UvwT_h4v0S_t4_-9OultMpGk.jpg?auto=webp&s=d47b09eddf37279d931e2669c1a6faf0b4f755e3', 'width': 1200}, 'variants': {}}]}
So I went and tested most of the 65B and some 30B models. Presenting: The Enigmatic Town of Four Seasons
30
EDIT: the puzzle is flawed. Mr. Sol variable is incorrectly set and the equation cannot be solved. Here are the fixed clues that satisfy the original results. The puzzle cannot be solved by GPT-4 so this makes it a potentially viable puzzle to gauge the state of the art AI model reasoning abilities. Thanks to u/Standard-Anybody for performing the fix: 1. Winter has one house fewer than the Red district. 2. The Blue district has fewer houses than the district governed by Mr. Sol and is not Winter. 3. The district with 6 houses is not Autumn and is not governed by Mr. Sol. 4. The Yellow district has more houses than the district governed by Mr. Sol but fewer houses than the district governed by Ms. Bloom. 5. Mr. Sol is not the mayor of Winter and Spring has 3 houses. 6. The district governed by Ms. Harvest has fewer houses than the Green district and is not Summer. 7. Mr. Frost is not the mayor of the Red or the Green district. \---------------------------- I asked GPT-4 to create a puzzle that would test the following: comprehension, abstraction, basic maths, and logical reasoning. This type of challenge is what I use the local LLMs for. Not storytelling, or chatting, or with frameworks. I write code in python (with GPT-4) and chain the models and outputs manually. It is important to me that each iteration of the meta loop does not decrease the process quality - cf. the propensity of AutoGPT and similar chains to hallucinate and get stuck very quickly. So, this is the puzzle that tests the models' comprehension, abstraction, basic arithmetic, and logical reasoning: # The Enigmatic Town of Four Seasons In a distant land, there is a town called "Four Seasons". The town is divided into four districts - Winter, Spring, Summer, and Autumn. Each district has a unique number of houses and a unique dominant color (Blue, Green, Red, and Yellow). Also, each district has a different mayor - Mr. Frost, Ms. Bloom, Mr. Sol, and Ms. Harvest. Your task is to determine the number of houses (3, 4, 5, or 6), the dominant color, and the mayor of each district. Here are the clues: 1. Winter has one house fewer than the Red district. 2. The Blue district has more houses than the district governed by Ms. Bloom but fewer houses than Autumn. 3. The district with 6 houses is not Autumn and is not governed by Mr. Sol. 4. The Yellow district has more houses than the district governed by Mr. Frost but fewer houses than the district governed by Mr. Sol. 5. Mr. Sol is not the mayor of Winter and Spring has 3 houses. 6. The district governed by Ms. Harvest has fewer houses than the Green district and is not Summer. 7. Mr. Frost is not the mayor of the Red or the Yellow district. Solution: * Winter district: 5 houses, Yellow, Mayor Mr. Frost * Spring district: 3 houses, Blue, Mayor Ms. Harvest * Summer district: 6 houses, Red, Mayor Ms. Bloom * Autumn district: 4 houses, Green, Mayor Mr. Sol &#x200B; # Test results: All Llama models fail irrespective of parameters. I tested them with the following: temperature = 0.1, top\_k = 10, top\_p = 0.1 and temperature = 0.7, top\_k = 80, top\_p = 0.5. Seed is kept constant. **GPT-4** is the only one that can solve this out of the models that I tried **Bing**: hallucinates itself into a comfortable reality where it is correct, inventing a link to where this puzzle and the correct solution were published, alas it is a hallucinated reality. &#x200B; * Winter: 4 houses, is Blue, and is governed by Mr. Frost. * Spring: has 3 houses, is Yellow, and is governed by Ms. 
Bloom * Summer: has 6 houses, is Red, and is governed by Mr. Sol. * Autumn: has 5 houses, is Green, and is governed by Ms. Harvest. &#x200B; **GPT-3.5**: goes insane, try it. It will become lost in an iterative loop to nowhere trying to explain its reasoning, over, and over again, and never completing. **Wizard-Vicuna-30B-Uncensored.ggmlv3.q8\_0.bin**: fail, comprehensive solution but it even invented a condition where there are 2 houses which is not one of the allowed states. I observed similar behaviour only with 13B and 7B models before, and that is why I do not use them, ever. **based-30B-GGML/based-30b.ggmlv3.q8\_0.bin**: fail - no meaningful output **guanaco-65B-GGML/guanaco-65B.ggmlv3.q8\_0.bin**: fail. Comprehensive solution, but incorrect reasoning **VicUnlocked-Alpaca-65B.ggmlv3.q8\_0.bin:** fail. Also comprehensive, but incorrect solution **gpt4-alpaca-lora\_mlp-65B/gpt4-alpaca-lora\_mlp-65B.ggmlv3.q5\_1.bin**: fail, presented just the "solution", no reasoning **alpaca-lora-65B-GGML/alpaca-lora-65B.ggmlv3.q5\_1.bin:** fail, also just a solution, no reasoning All 65B models had this as a solution. Fine tuning or Lora made no difference to the solution, just to the explanation and verbosity: * Winter - 4 houses, Blue, Mayor: Mr. Frost * Spring - 3 houses, Green, Mayor: Ms. Bloom * Summer - 5 houses, Red, Mayor: Ms. Harvest * Autumn - 6 houses, Yellow, Mayor: Mr. Sol # Takeaways: All 65B Llama models are "the same" fundamentally in their reasoning capacity. None can reason as well as GPT-4, but perform better than GPT3.5-turbo which fails to complete the task, so at least the code will progress if not deliver the desired result. 30B models that I tested perform worse than 65B models that I tested. Bing in creative mode is using some strange version of something. It is not GPT-4 and it is not GPT-3.5. Its solution and rationale were different to GPT-4, and it did the usual trick from the very beginning, it invented a reality where it is right. Bing seemingly has three states: objectively correct, subjectively correct (wrong), or it tells you to piss off. I'd love to see what Falcon can do but without quantizing it will very likely not fit inside my 100GB RAM that I allocated to WSL2 out of 128 GB total. # EDIT Updated Takeaways: Since the puzzle is malformed the best performing models end up being: **GPT-3.5-turbo:** the recursive loop is not the most elegant way to handle the error with input, but this could be managed in code **based-30B-GGML/based-30b.ggmlv3.q8\_0.bin:** It refused to answer beyond providing an opening sentence: "Based on the clues, I can deduce that the number of houses in each district is as follows:" So I will redo this process with a more carefully formed and tested puzzle. It is rather difficult to make a novel, complicated puzzle that can be solved. It is pointless to use a pre-existing puzzle as I have no way of knowing if the models were trained on it and know the solution ahead of time.
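One way to sanity-check a clue set before using it on models is a brute-force search over every possible assignment. The sketch below encodes the fixed clues listed at the top; the encoding is a best-effort reading of the wording and should be double-checked before trusting its output:

```python
# Brute-force check: enumerate every house/color/mayor assignment and print the ones
# that satisfy all seven (fixed) clues. The clue encoding is an interpretation of the
# puzzle text, not something verified by its author.
from itertools import permutations

DISTRICTS = ["Winter", "Spring", "Summer", "Autumn"]

def satisfies(h, c, m):
    """h, c, m map each district to its houses, color and mayor."""
    district_of_color = {v: k for k, v in c.items()}
    district_of_mayor = {v: k for k, v in m.items()}
    six = next(d for d in DISTRICTS if h[d] == 6)
    return (
        h["Winter"] == h[district_of_color["Red"]] - 1                              # clue 1
        and h[district_of_color["Blue"]] < h[district_of_mayor["Mr. Sol"]]          # clue 2
        and district_of_color["Blue"] != "Winter"
        and six != "Autumn" and m[six] != "Mr. Sol"                                 # clue 3
        and h[district_of_mayor["Mr. Sol"]] < h[district_of_color["Yellow"]]        # clue 4
        and h[district_of_color["Yellow"]] < h[district_of_mayor["Ms. Bloom"]]
        and m["Winter"] != "Mr. Sol" and h["Spring"] == 3                           # clue 5
        and h[district_of_mayor["Ms. Harvest"]] < h[district_of_color["Green"]]     # clue 6
        and district_of_mayor["Ms. Harvest"] != "Summer"
        and m[district_of_color["Red"]] != "Mr. Frost"                              # clue 7
        and m[district_of_color["Green"]] != "Mr. Frost"
    )

solutions = []
for houses in permutations([3, 4, 5, 6]):
    for colors in permutations(["Blue", "Green", "Red", "Yellow"]):
        for mayors in permutations(["Mr. Frost", "Ms. Bloom", "Mr. Sol", "Ms. Harvest"]):
            h, c, m = (dict(zip(DISTRICTS, x)) for x in (houses, colors, mayors))
            if satisfies(h, c, m):
                solutions.append((h, c, m))

for h, c, m in solutions:
    print({d: (h[d], c[d], m[d]) for d in DISTRICTS})
print(f"{len(solutions)} satisfying assignment(s) found")
```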
2023-06-07T18:18:07
https://www.reddit.com/r/LocalLLaMA/comments/143knk0/so_i_went_and_tested_most_of_the_65b_and_some_30b/
extopico
self.LocalLLaMA
2023-06-08T07:50:13
0
{}
143knk0
false
null
t3_143knk0
/r/LocalLLaMA/comments/143knk0/so_i_went_and_tested_most_of_the_65b_and_some_30b/
false
false
self
30
null
Questions on LLaMA and all
1
[removed]
2023-06-07T19:09:26
https://www.reddit.com/r/LocalLLaMA/comments/143lyh2/questions_on_llama_and_all/
Dkv9612
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143lyh2
false
null
t3_143lyh2
/r/LocalLLaMA/comments/143lyh2/questions_on_llama_and_all/
false
false
default
1
null
Bing GPT is the best free GPT, but she cuts off questions abruptly
1
[removed]
2023-06-07T19:21:53
https://www.reddit.com/r/LocalLLaMA/comments/143ma2f/bing_gpt_best_free_gpt_but_she_cut_off_questions/
Glass_Philosophy6941
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143ma2f
false
null
t3_143ma2f
/r/LocalLLaMA/comments/143ma2f/bing_gpt_best_free_gpt_but_she_cut_off_questions/
false
false
default
1
null
Would it be possible for AI to write code in assembly?
0
Would it be possible for AI to write code in assembly? Since that can give something like a 10x performance boost, it would REALLY be useful, for example for video codecs, etc.
2023-06-07T19:31:19
https://www.reddit.com/r/LocalLLaMA/comments/143miys/would_it_possible_for_ai_to_write_code_in/
Ok--Reflection
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143miys
false
null
t3_143miys
/r/LocalLLaMA/comments/143miys/would_it_possible_for_ai_to_write_code_in/
false
false
self
0
null
I have found the most uncensored GPT (it is a very sick bastard)
0
[removed]
2023-06-07T20:04:53
https://www.reddit.com/r/LocalLLaMA/comments/143ndwn/i_have_found_most_uncensored_gpt_it_is_a_very/
Glass_Philosophy6941
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143ndwn
false
null
t3_143ndwn
/r/LocalLLaMA/comments/143ndwn/i_have_found_most_uncensored_gpt_it_is_a_very/
false
false
default
0
null
175B (ChatGPT) vs 3B (RedPajama)
1
[deleted]
2023-06-07T20:07:22
[deleted]
1970-01-01T00:00:00
0
{}
143ng8b
false
null
t3_143ng8b
/r/LocalLLaMA/comments/143ng8b/175b_chatgpt_vs_3b_redpajama/
false
false
default
1
null
175B (ChatGPT) vs 3B (RedPajama)
139
2023-06-07T20:08:57
https://www.reddit.com/gallery/143nhnx
acec
reddit.com
1970-01-01T00:00:00
0
{}
143nhnx
false
null
t3_143nhnx
/r/LocalLLaMA/comments/143nhnx/175b_chatgpt_vs_3b_redpajama/
false
false
https://b.thumbs.redditm…_g7bhIkgf6xw.jpg
139
null
Implementing a specific GGML pull request into llama-cpp-python for Falcon-7B-ggml
7
Hello LLaMA enthusiasts,

I'm currently trying to implement a specific GGML [pull request](https://github.com/ggerganov/ggml/pull/231) in the latest version of llama-cpp-python. I've installed the repo using `pip install -qqq -U git+https://github.com/ggerganov/ggml.git@231`. However, as far as I know, the latest version of llama-cpp-python does not pick up this specific GGML version. Does anyone happen to know how I can change the [llama-cpp-python 0.1.57](https://pypi.org/project/llama-cpp-python/) code to refer to this specific GGML version?

Use-case: GGML with Falcon-7B in Python: [https://huggingface.co/RachidAR/falcon-7B-ggml](https://huggingface.co/RachidAR/falcon-7B-ggml)
2023-06-07T20:42:58
https://www.reddit.com/r/LocalLLaMA/comments/143odyt/implementing_specific_pull_request_ggml_into/
fictioninquire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143odyt
false
null
t3_143odyt
/r/LocalLLaMA/comments/143odyt/implementing_specific_pull_request_ggml_into/
false
false
self
7
{'enabled': False, 'images': [{'id': 'ijnUS7MBDPorxVL12EZGCgeJ-BBbEHjggqaLNQXt3RM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0LbiKmszwcHR7KlOFGmwQPlhNct5EK3WCch6-gIAucc.jpg?width=108&crop=smart&auto=webp&s=48705bef6ed0d705c57f15ca264cf6f0f0a6981d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0LbiKmszwcHR7KlOFGmwQPlhNct5EK3WCch6-gIAucc.jpg?width=216&crop=smart&auto=webp&s=7a656dfcd0630510f79a504b4fd9d9c4ebc9e593', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0LbiKmszwcHR7KlOFGmwQPlhNct5EK3WCch6-gIAucc.jpg?width=320&crop=smart&auto=webp&s=2e882a9e4c881779062bfc3ca16df624761fbf1d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0LbiKmszwcHR7KlOFGmwQPlhNct5EK3WCch6-gIAucc.jpg?width=640&crop=smart&auto=webp&s=11dd5292e4acc9df0742a2e116e873381f8dba95', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0LbiKmszwcHR7KlOFGmwQPlhNct5EK3WCch6-gIAucc.jpg?width=960&crop=smart&auto=webp&s=166fd5a864617e1c991c359fedbbb37db7f9a6db', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0LbiKmszwcHR7KlOFGmwQPlhNct5EK3WCch6-gIAucc.jpg?width=1080&crop=smart&auto=webp&s=799eb8bd7903bbbc8f08b15cfa85fa87df9576df', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0LbiKmszwcHR7KlOFGmwQPlhNct5EK3WCch6-gIAucc.jpg?auto=webp&s=b3757a3a4709067211aff68e3c640f5301007838', 'width': 1200}, 'variants': {}}]}
My head is spinning with all the quantization methods now. Anyone else?
29
It's not just Q2, Q3, Q4, Q5, Q6 and Q8. It's all the permutations, the sub-Qs. The choice has come down to only two for me: Q2 for speed and Q8 for quality.
2023-06-07T21:04:53
https://www.reddit.com/r/LocalLLaMA/comments/143ozbn/my_head_is_spinning_with_all_the_quantization/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143ozbn
false
null
t3_143ozbn
/r/LocalLLaMA/comments/143ozbn/my_head_is_spinning_with_all_the_quantization/
false
false
self
29
null
CAMEL-13B claims to beat every other 13B model on the huggingface leaderboard
1
2023-06-07T21:17:12
https://twitter.com/itanih0/status/1665810143962071040
Covid-Plannedemic-
twitter.com
1970-01-01T00:00:00
0
{}
143pb0y
false
{'oembed': {'author_name': 'Hani Itani', 'author_url': 'https://twitter.com/itanih0', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Checkout our model CAMEL-13B! We evaluate our model on the datasets suggested by <a href="https://twitter.com/huggingface?ref_src=twsrc%5Etfw">@huggingface</a> Open LLM Benchmark using <a href="https://twitter.com/AiEleuther?ref_src=twsrc%5Etfw">@AiEleuther</a>&#39;s language model evaluation harness and we score an average of **58.1**, on par with LLaMA-65B (58.3)! Joint work with <a href="https://twitter.com/guohao_li?ref_src=twsrc%5Etfw">@guohao_li</a> &amp; <a href="https://twitter.com/hammh0a?ref_src=twsrc%5Etfw">@hammh0a</a> .</p>&mdash; Hani Itani (@itanih0) <a href="https://twitter.com/itanih0/status/1665810143962071040?ref_src=twsrc%5Etfw">June 5, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/itanih0/status/1665810143962071040', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_143pb0y
/r/LocalLLaMA/comments/143pb0y/camel13b_claims_to_beat_every_other_13b_model_on/
false
false
default
1
null
I really like the Hermes model
23
2023-06-07T21:22:04
https://i.redd.it/ji361azown4b1.png
involviert
i.redd.it
1970-01-01T00:00:00
0
{}
143pfqg
false
null
t3_143pfqg
/r/LocalLLaMA/comments/143pfqg/i_really_like_the_hermes_model/
false
false
https://b.thumbs.redditm…-NyEMNofdRTc.jpg
23
{'enabled': True, 'images': [{'id': 'upH9bHwMx24XfSSkyTBj3bf0lwbekgas5EusIh5b-64', 'resolutions': [{'height': 128, 'url': 'https://preview.redd.it/ji361azown4b1.png?width=108&crop=smart&auto=webp&s=a91b558a05c10dd82a44c1809e00b9295fa5d8de', 'width': 108}, {'height': 256, 'url': 'https://preview.redd.it/ji361azown4b1.png?width=216&crop=smart&auto=webp&s=c21ce98be012a1457ec4e98228992f37da2cea3d', 'width': 216}, {'height': 379, 'url': 'https://preview.redd.it/ji361azown4b1.png?width=320&crop=smart&auto=webp&s=a5f0c18b20b47b38721c84ed8cffc23fb175a1cf', 'width': 320}], 'source': {'height': 654, 'url': 'https://preview.redd.it/ji361azown4b1.png?auto=webp&s=4773dc3875f61390b31a3546fe837a9bcdae7e1f', 'width': 551}, 'variants': {}}]}
I Will Fund Your LLM Projects
0
[removed]
2023-06-07T21:44:55
[deleted]
1970-01-01T00:00:00
0
{}
143q0qy
false
null
t3_143q0qy
/r/LocalLLaMA/comments/143q0qy/i_will_fund_your_llm_projects/
false
false
default
0
null
An open source LLM for linux command line generation?
1
[deleted]
2023-06-07T22:00:16
[deleted]
1970-01-01T00:00:00
0
{}
143qf42
false
null
t3_143qf42
/r/LocalLLaMA/comments/143qf42/an_open_source_llm_for_linux_command_line/
false
false
default
1
null
How to save/reload fine-tuned QLoRa models?
1
[deleted]
2023-06-07T22:25:26
[deleted]
1970-01-01T00:00:00
0
{}
143r2pw
false
null
t3_143r2pw
/r/LocalLLaMA/comments/143r2pw/how_to_savereload_finetuned_qlora_models/
false
false
default
1
null
How does GPTQ's perplexity compare to GGML's newest K quant models?
24
I'm aware that GGML's perplexity performance has improved significantly lately. However, I'm curious if it's now on par with GPTQ. To illustrate, Guanaco 33b's GPTQ has a file size of 16.9 GB, while the most comparable GGML options are Q3_K_L at 17.2 GB or Q4_K_S at 18.3 GB. IMO, this comparison is meaningful because GPTQ is currently much faster. And has there been any progress on GPTQ? It looks like their GitHub repo isn't very active.
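For a rough sense of how comparable these files really are, dividing file size by parameter count gives an effective bits-per-weight figure. A back-of-the-envelope sketch is below; the ~32.5B parameter count for a 33B LLaMA is approximate, GB is treated as GiB, and overhead from metadata and higher-precision tensors is ignored.

```python
# Rough bits-per-weight estimate from file size, as a sanity check when
# comparing quant formats of similar size.
def bits_per_weight(file_size_gib: float, n_params: float) -> float:
    bits = file_size_gib * 1024**3 * 8   # file size in bits (GiB assumed)
    return bits / n_params               # bits per parameter

N_PARAMS_33B = 32.5e9  # approximate parameter count of a "33B" LLaMA
for name, size_gib in [("GPTQ 4-bit", 16.9), ("GGML Q3_K_L", 17.2), ("GGML Q4_K_S", 18.3)]:
    print(f"{name}: ~{bits_per_weight(size_gib, N_PARAMS_33B):.2f} bits/weight")
```

Similar bits-per-weight is a reasonable precondition for a fair perplexity comparison, though the quantization scheme itself still matters.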
2023-06-07T23:48:28
https://www.reddit.com/r/LocalLLaMA/comments/143t3i6/how_does_gptqs_perplexity_compare_to_ggmls_newest/
Big_Communication353
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143t3i6
false
null
t3_143t3i6
/r/LocalLLaMA/comments/143t3i6/how_does_gptqs_perplexity_compare_to_ggmls_newest/
false
false
self
24
null
finetune Falcon 40B in 30 minutes using LLaMA adapter
28
2023-06-08T01:34:23
https://twitter.com/LightningAI/status/1666460740541194241
ninjasaid13
twitter.com
1970-01-01T00:00:00
0
{}
143vipa
false
{'oembed': {'author_name': 'Lightning AI ⚡️', 'author_url': 'https://twitter.com/LightningAI', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Finetuning Falcon 40B on the Alpaca instruction dataset takes 30 hours on 8 A100s. Cut it to 30 minutes using LLaMA Adapter\xa0🤯<br><br>Try it at <a href="https://t.co/TWSVqU5KKq">https://t.co/TWSVqU5KKq</a><a href="https://twitter.com/hashtag/LLM?src=hash&amp;ref_src=twsrc%5Etfw">#LLM</a> <a href="https://twitter.com/hashtag/MachineLearning?src=hash&amp;ref_src=twsrc%5Etfw">#MachineLearning</a> <a href="https://t.co/pqZvuubyBo">pic.twitter.com/pqZvuubyBo</a></p>&mdash; Lightning AI ⚡️ (@LightningAI) <a href="https://twitter.com/LightningAI/status/1666460740541194241?ref_src=twsrc%5Etfw">June 7, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/LightningAI/status/1666460740541194241', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_143vipa
/r/LocalLLaMA/comments/143vipa/finetune_falcon_40b_in_30_minutes_using_llama/
false
false
https://b.thumbs.redditm…KTXD_-NWrcio.jpg
28
{'enabled': False, 'images': [{'id': 'q9Ev5ZDA0Kk0VTjPHjJHL6Qck-i-01oegc9aD3mQQiU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YsHfF45KvBFUkPr_rchnuXd6CKL54Hz7UkiWoQTt-ew.jpg?width=108&crop=smart&auto=webp&s=14cc5440041b5ade0557ea6c27dda9d009b6bdbc', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/YsHfF45KvBFUkPr_rchnuXd6CKL54Hz7UkiWoQTt-ew.jpg?auto=webp&s=85ba97167486ab8a0d7a32982a9b7a4e2e3aac41', 'width': 140}, 'variants': {}}]}
How are people giving local models memory for multiple prompts?
1
How is everyone giving local models memory between prompts, like ChatGPT? I'm using instruction style with text-generation-webui, but I'd love to have a more conversational style where the model remembers the entire conversation (within reason, of course). I tried chat style in text-generation-webui, but it would freeze every time, and I'm not sure chat style would give memory anyway. Thanks for any advice!
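The usual approach is to keep the transcript yourself and resend it with every prompt, trimmed so it fits the context window. A minimal, model-agnostic sketch follows; the `generate` callable is a placeholder for whatever backend is actually used (llama-cpp-python, the webui API, etc.), not a real function.

```python
# Minimal rolling-transcript "memory": keep the whole conversation in a list,
# rebuild the prompt every turn, and drop the oldest exchanges once the
# prompt gets too long for the model's context window.
history = []             # list of (speaker, text) tuples
MAX_PROMPT_CHARS = 6000  # crude proxy for the context limit; count tokens in practice

def build_prompt(user_msg: str) -> str:
    turns = history + [("USER", user_msg)]
    prompt = "\n".join(f"{who}: {text}" for who, text in turns) + "\nASSISTANT:"
    # Trim the oldest turns until the prompt fits.
    while len(prompt) > MAX_PROMPT_CHARS and len(turns) > 1:
        turns = turns[1:]
        prompt = "\n".join(f"{who}: {text}" for who, text in turns) + "\nASSISTANT:"
    return prompt

def chat(user_msg: str, generate) -> str:
    prompt = build_prompt(user_msg)
    reply = generate(prompt)          # call the local model here
    history.append(("USER", user_msg))
    history.append(("ASSISTANT", reply))
    return reply
```

Chat modes in front-ends do essentially this for you; summarizing or embedding older turns instead of dropping them is the next step up.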
2023-06-08T01:51:31
https://www.reddit.com/r/LocalLLaMA/comments/143vw3y/how_are_people_giving_local_models_memory_for/
mattybee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143vw3y
false
null
t3_143vw3y
/r/LocalLLaMA/comments/143vw3y/how_are_people_giving_local_models_memory_for/
false
false
self
1
null
What is the best multilingual model?
5
Many use cases, such as law (legal terminology) and anthropology, require models that work with native languages. Most of these models are trained on English and only bits of non-English languages. Even GPT-4 doesn't speak my native language as well as it speaks English. Which models have been trained on enough non-English data?
2023-06-08T01:59:23
https://www.reddit.com/r/LocalLLaMA/comments/143w2e8/what_is_the_best_multingual_model/
mevskonat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143w2e8
false
null
t3_143w2e8
/r/LocalLLaMA/comments/143w2e8/what_is_the_best_multingual_model/
false
false
self
5
null
I haven't worked much with LLaMa, but are the less resource-intensive models still pretty good at encoding responses in things like JSON?
3
[deleted]
2023-06-08T02:05:31
[deleted]
1970-01-01T00:00:00
0
{}
143w7hi
false
null
t3_143w7hi
/r/LocalLLaMA/comments/143w7hi/i_havent_worked_much_with_llama_but_are_the_less/
false
false
default
3
null
Chronos 33b. The model focuses on chat/rp, but can write lengthy stories as well. Quite an improvement over 13B. These are v1 iterations of "Chronos" series and will be improved on in the future
35
2023-06-08T02:41:04
https://huggingface.co/TheBloke/chronos-33b-GPTQ
chakalakasp
huggingface.co
1970-01-01T00:00:00
0
{}
143wzix
false
null
t3_143wzix
/r/LocalLLaMA/comments/143wzix/chronos_33b_the_model_focuses_on_chatrp_but_can/
false
false
https://b.thumbs.redditm…zqArJDRBlOEM.jpg
35
{'enabled': False, 'images': [{'id': 'vxWJJaeFToAlEGButZjhAS0QChZYtikuPx25fMFe2EE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RyhyCf6KtkZWJNoAb_JNgMEXAdsn6Ymtg6I5Zb43JSo.jpg?width=108&crop=smart&auto=webp&s=75c0243e9a93f5e6296f1de80893dda04916f934', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RyhyCf6KtkZWJNoAb_JNgMEXAdsn6Ymtg6I5Zb43JSo.jpg?width=216&crop=smart&auto=webp&s=a3876eeef1108e68afdc74499c2cab35e17809bb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RyhyCf6KtkZWJNoAb_JNgMEXAdsn6Ymtg6I5Zb43JSo.jpg?width=320&crop=smart&auto=webp&s=fcae42ee097d72932b933dc346d37f8c79637cb1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RyhyCf6KtkZWJNoAb_JNgMEXAdsn6Ymtg6I5Zb43JSo.jpg?width=640&crop=smart&auto=webp&s=213ad8e3417a046964d7582e24518b07adfec81f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RyhyCf6KtkZWJNoAb_JNgMEXAdsn6Ymtg6I5Zb43JSo.jpg?width=960&crop=smart&auto=webp&s=05e37df86df1e661a81e1ea0efaeaeeac1de71c2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RyhyCf6KtkZWJNoAb_JNgMEXAdsn6Ymtg6I5Zb43JSo.jpg?width=1080&crop=smart&auto=webp&s=f547585962db33ac6002427eaa52c08d036e11cf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RyhyCf6KtkZWJNoAb_JNgMEXAdsn6Ymtg6I5Zb43JSo.jpg?auto=webp&s=8d79c7a0ae8dde701d9f6e4fabba84f69bd0c303', 'width': 1200}, 'variants': {}}]}
Manticore-Falcon-Wizard-Orca-LLaMA
1
2023-06-08T03:21:08
https://i.redd.it/p9wpukvrpp4b1.png
Troubadoor7
i.redd.it
1970-01-01T00:00:00
0
{}
143xtr2
false
null
t3_143xtr2
/r/LocalLLaMA/comments/143xtr2/manticorefalconwizardorcallama/
false
false
default
1
null
LoRA training on a cloud instance.
3
So I've been using Oobabooga's WebUI for most of my LLM tinkering up until now, but I realized I needed more computing power and created an account to get some time with more powerful hardware via the cloud. I've managed to get inference going via the command line, but I was having a tough time figuring out how to do the kind of LoRA training that is available in the WebUI without the Gradio interface. Does it use a module from the HuggingFace transformers library, or from another one? I tried studying the code of the WebUI, but I'm a bit too new to coding to be able to really tell, so any assistance would be much appreciated.
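To the best of my understanding, the WebUI's LoRA trainer is built on HuggingFace `transformers` plus the `peft` library. A minimal headless sketch of the same kind of run is below; the model id, data file, and hyperparameters are placeholders, not values taken from the WebUI.

```python
# Headless LoRA fine-tune sketch using HF transformers + peft.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "huggyllama/llama-7b"                      # placeholder model id
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16,
                                             device_map="auto")

# Wrap the base model with LoRA adapters on the attention projections.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)

# Placeholder plain-text dataset, tokenized to fixed-length samples.
ds = load_dataset("text", data_files="train.txt")["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
            remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4, fp16=True),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")   # saves only the adapter weights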
2023-06-08T03:45:38
https://www.reddit.com/r/LocalLLaMA/comments/143yc3j/lora_training_on_a_cloud_instance/
PlanetExperience
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143yc3j
false
null
t3_143yc3j
/r/LocalLLaMA/comments/143yc3j/lora_training_on_a_cloud_instance/
false
false
self
3
null
Is it possible to use a local LLM and have it play Minecraft?
7
In 2019, [Facebook released "Craft Assist"](https://github.com/facebookresearch/craftassist). And just a few days ago [GPT-4 was showcased to be able to play Minecraft on its own](https://voyager.minedojo.org/). I just wanted to know if there was any chance that I could run an LLM from the internet to be an "Assistant" for a Minecraft server with some of my friends. Of course, forgive my ignorance, I'm not sure how an AI would even talk to a Minecraft client and vice versa, but the only thing I can do is dream. What other 'ingredients' would we need to achieve an "AI" Minecraft virtual player? One that would take on tedious tasks that other players may not enjoy as much.
2023-06-08T04:44:36
https://www.reddit.com/r/LocalLLaMA/comments/143ziop/is_it_possible_to_use_a_local_llm_and_have_it/
WeebKamida
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
143ziop
false
null
t3_143ziop
/r/LocalLLaMA/comments/143ziop/is_it_possible_to_use_a_local_llm_and_have_it/
false
false
self
7
{'enabled': False, 'images': [{'id': '1JF3ftHZfWYUQeqtl3u0CnJ9YVQ9MNMQMQSJM19KJp8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8_4o6df5uNCjsnmS_l21VuRCkQpzPMTf6072PEw8hmQ.jpg?width=108&crop=smart&auto=webp&s=b6216d2025fde735a4ae8a9f51cf015fc26bee78', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8_4o6df5uNCjsnmS_l21VuRCkQpzPMTf6072PEw8hmQ.jpg?width=216&crop=smart&auto=webp&s=473720cdaa5233cff1b054ae7abd0b39bb8ff605', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8_4o6df5uNCjsnmS_l21VuRCkQpzPMTf6072PEw8hmQ.jpg?width=320&crop=smart&auto=webp&s=d2a0e3f4af3ed2592a26eb67da088bda6c27ace3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8_4o6df5uNCjsnmS_l21VuRCkQpzPMTf6072PEw8hmQ.jpg?width=640&crop=smart&auto=webp&s=dabeb956a9172c8b8258be80a444322654df9a51', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8_4o6df5uNCjsnmS_l21VuRCkQpzPMTf6072PEw8hmQ.jpg?width=960&crop=smart&auto=webp&s=f74d69c97336abf0531b72d311fbc7cdd111696c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8_4o6df5uNCjsnmS_l21VuRCkQpzPMTf6072PEw8hmQ.jpg?width=1080&crop=smart&auto=webp&s=474cdfa49cc31c145e231f8fe32a70343b942ba8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8_4o6df5uNCjsnmS_l21VuRCkQpzPMTf6072PEw8hmQ.jpg?auto=webp&s=4fc14b020109f4f9e7046f314dbbb9747e49f29e', 'width': 1200}, 'variants': {}}]}
Building a super-simple memory service for language models
33
Hey /r/LocalLLaMA,

One of the major frustrations I've had (and others too, from posts I've seen) with building projects w/ LLMs is dealing with the complexity of chunking/embedding/vector DBs, **especially** if you're in the non-Python world. At the end of the day I want to add content to storage and do a search to grab the context I need to send to the language model. So I built a dead-simple "LLM memory" service:

1. Run the service via a single cross-platform binary (or run it in Docker)
2. Add content via `curl` or whatever RESTful client you like
3. Query and get the context you need to pass to your LLM of choice
4. ...
5. Enjoy!

No need to deal with embeddings, figuring out how to split docs, running a vector DB, or any of that mess. Here's a little demo of it in action, adding the State of the Union address and then doing a search to find relevant content: [Start. Add content. Query. That's it!](https://reddit.com/link/14405do/video/cu7e114k9q4b1/player)

Looking to open source this soon! I wanted to get some feedback on the project and see if there are any "demo" projects that you'd like to see.
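For a concrete picture of the add-then-query workflow described above, here is a hypothetical client-side sketch. The endpoint paths, field names, and port are invented placeholders for illustration only, not the service's actual API.

```python
# Hypothetical client for an add/query "LLM memory" service over REST.
import requests

BASE = "http://localhost:8080"   # placeholder address

def add_document(text: str, source: str) -> None:
    # Push raw text; the service handles chunking and embedding internally.
    requests.post(f"{BASE}/add", json={"text": text, "source": source}).raise_for_status()

def query(question: str, top_k: int = 3) -> list[str]:
    # Ask for the most relevant chunks to stuff into the LLM prompt.
    r = requests.post(f"{BASE}/query", json={"query": question, "limit": top_k})
    r.raise_for_status()
    return [hit["text"] for hit in r.json()["results"]]

add_document(open("state_of_the_union.txt").read(), source="sotu-2023")
context = "\n\n".join(query("What did the president say about the economy?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
```

The appeal of this shape is that the language-model side stays a plain string-in, string-out call, with all the vector-store machinery hidden behind two endpoints.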
2023-06-08T05:17:39
https://www.reddit.com/r/LocalLLaMA/comments/14405do/building_a_supersimple_memory_service_for/
andyndino
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14405do
false
null
t3_14405do
/r/LocalLLaMA/comments/14405do/building_a_supersimple_memory_service_for/
false
false
self
33
null