Dataset schema (one record per post, fields in this order):

- title: string (length 1 to 300)
- score: int64 (0 to 8.54k)
- selftext: string (length 0 to 40k)
- created: timestamp[ns]
- url: string (length 0 to 780)
- author: string (length 3 to 20)
- domain: string (length 0 to 82)
- edited: timestamp[ns]
- gilded: int64 (0 to 2)
- gildings: string (7 distinct values)
- id: string (length 7)
- locked: bool (2 classes)
- media: string (length 646 to 1.8k)
- name: string (length 10)
- permalink: string (length 33 to 82)
- spoiler: bool (2 classes)
- stickied: bool (2 classes)
- thumbnail: string (length 4 to 213)
- ups: int64 (0 to 8.54k)
- preview: string (length 301 to 5.01k)
How to download only 7B and 13B weights for llama?
1
[removed]
2023-06-10T03:20:19
https://www.reddit.com/r/LocalLLaMA/comments/145p4my/how_to_download_only_7b_and_13b_weights_for_llama/
thepragprog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
145p4my
false
null
t3_145p4my
/r/LocalLLaMA/comments/145p4my/how_to_download_only_7b_and_13b_weights_for_llama/
false
false
default
1
null
How to merge the two weights into a single weight?
1
[removed]
2023-06-10T04:11:46
https://www.reddit.com/r/LocalLLaMA/comments/145q402/how_to_merge_the_two_weights_into_a_single_weight/
thepragprog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
145q402
false
null
t3_145q402
/r/LocalLLaMA/comments/145q402/how_to_merge_the_two_weights_into_a_single_weight/
false
false
default
1
null
Is anyone interested in starting a wiki page for new comers?
201
I have been lurking in here for some time and there are so many new things that confused the hell out of me. For example, what does 30B mean, and what do 4-bit and 8-bit mean? What is llama.cpp and what is llama-cpp-python? I think it would be extremely helpful if we started a post that gathers all these answers in one place and keeps expanding to help people. I really don't like Discord, as valuable information is easily lost between conversations. Does anyone have any suggestions about how to do that?
2023-06-10T05:13:41
https://www.reddit.com/r/LocalLLaMA/comments/145r8tw/is_anyone_interested_in_starting_a_wiki_page_for/
learnai_account
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
145r8tw
false
null
t3_145r8tw
/r/LocalLLaMA/comments/145r8tw/is_anyone_interested_in_starting_a_wiki_page_for/
false
false
self
201
null
Local translator based on decent LLAMA model on personal computer and it works well!
63
I have developed a program that uses the text-generation-webui API to translate a book. It runs against the multi-language OPENBUDDY model (LLaMA-based) on CPU, so it works on a personal computer without a high-performance graphics card, although it does require at least 32 GB of RAM. I translated the book "Travels in a Tree-top" by Charles C. Abbott, available at [https://www.gutenberg.org/ebooks/55805](https://www.gutenberg.org/ebooks/55805), into Chinese. The translation took approximately 5 hours.

The motivation behind this program is that while Google Translate is fast, the quality is not satisfactory, and services like ChatGPT and DeepL charge for text translation. Although using ChatGPT for document translation doesn't cost much, I am concerned that my input might be used as training data, and I have some documents that I prefer not to make publicly available online. Hence, I wanted a method that runs locally.

The program itself is not complex. Following an approach similar to [https://github.com/yihong0618/bilingual_book_maker](https://github.com/yihong0618/bilingual_book_maker), I use EPUBLIB to decompress EPUB files and traverse the content within specific HTML tags (such as <DIV>, <p>, <span>) for translation. During translation I use "Translate in xxx:" as the prompt and store the translated results back into the EPUB document.

For this I used OPENBUDDY, an emerging multi-language model. Please note that OPENBUDDY is not intended for commercial use. The 13B model is currently in an early test stage, but you can download the 7B model directly, although its performance may not be as good as the 13B model. The program still has room for improvement, such as reducing the number of API calls and optimizing how translation batches are divided.

You can find the code for this program at: [https://github.com/dustinchen93/text-generation-webui-translator](https://github.com/dustinchen93/text-generation-webui-translator)

More information about OPENBUDDY can be found here: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
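A minimal sketch of the traversal the post describes, not the author's code: it assumes ebooklib and BeautifulSoup for the EPUB/HTML handling and text-generation-webui's legacy `/api/v1/generate` endpoint on localhost; the endpoint, payload shape, prompt format, and file names are assumptions.

```python
# Hedged sketch: walk an EPUB, translate the text of selected tags via a
# locally running text-generation-webui API, and write the result back.
import requests
from bs4 import BeautifulSoup
import ebooklib
from ebooklib import epub

API_URL = "http://127.0.0.1:5000/api/v1/generate"  # assumed legacy webui API endpoint

def translate(text: str, target_lang: str = "Chinese") -> str:
    payload = {"prompt": f"Translate in {target_lang}: {text}\n", "max_new_tokens": 400}
    resp = requests.post(API_URL, json=payload, timeout=600)
    return resp.json()["results"][0]["text"].strip()

book = epub.read_epub("travels_in_a_treetop.epub")  # hypothetical input file
for item in book.get_items_of_type(ebooklib.ITEM_DOCUMENT):
    soup = BeautifulSoup(item.get_content(), "html.parser")
    for tag in soup.find_all(["div", "p", "span"]):
        # Only translate tags that directly contain a single text node.
        if tag.string and tag.string.strip():
            tag.string.replace_with(translate(tag.string))
    item.set_content(str(soup).encode("utf-8"))

epub.write_epub("travels_in_a_treetop.zh.epub", book)
```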
2023-06-10T06:06:32
https://www.reddit.com/r/LocalLLaMA/comments/145s65p/local_translator_based_on_decent_llama_model_on/
dustinChen93
self.LocalLLaMA
1970-01-01T00:00:00
0
{'gid_1': 1}
145s65p
false
null
t3_145s65p
/r/LocalLLaMA/comments/145s65p/local_translator_based_on_decent_llama_model_on/
false
false
self
63
{'enabled': False, 'images': [{'id': 'l5FirFjVBhoIt2gXeqNw4PUivhdqaAGmXpEhRPfrw_w', 'resolutions': [{'height': 170, 'url': 'https://external-preview.redd.it/QO46DcEtVPDXRSRLM0c8PemTdaGM48dRVokNn3oyKp0.jpg?width=108&crop=smart&auto=webp&s=6ff522235a2c540af7871c73bb692598ab7d45ee', 'width': 108}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/QO46DcEtVPDXRSRLM0c8PemTdaGM48dRVokNn3oyKp0.jpg?auto=webp&s=fe031defb149696ff39cd4b311ff752700fb16fc', 'width': 190}, 'variants': {}}]}
GGML models not working with llamacpp
1
[removed]
2023-06-10T06:39:56
https://www.reddit.com/r/LocalLLaMA/comments/145sroz/ggml_models_not_working_with_llamacpp/
savvamadar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
145sroz
false
null
t3_145sroz
/r/LocalLLaMA/comments/145sroz/ggml_models_not_working_with_llamacpp/
false
false
default
1
null
Cpu inference, 7950x vs 13900k, which one is better?
4
Unfortunately, it is a sad truth that running models of 65B or larger on CPUs is the most cost-effective option, especially compared to an expensive Mac Studio or multiple 4090 cards. And with the possibility of 100B or larger models on the horizon, even two 4090s may not be enough. So I am kindly asking anyone with either of these two CPUs to test a 33B or 65B model with pure CPU inference in llama.cpp and share the speed, along with your RAM speed and whether you have overclocked or power-limited your CPU.

I also have some other questions:

1. Does the big.LITTLE design of the 13900K have any negative impact on llama.cpp performance?
2. I have heard that Zen 4 does not support high-frequency 4x32 GB DDR5 kits. Have there been any recent tests on this? And can the 13900K run 4x32 GB DDR5 sticks at high frequency?
3. If I want to offload some layers to a 4090, which CPU would be more suitable? I believe high-frequency DDR5 memory is crucial in this case.
4. Four sticks of DDR5 vs. two sticks: which is better for speed?

Thank you in advance for any help or information you can provide.
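For context on why RAM frequency dominates here, a back-of-envelope sketch (an addition, not from the post): CPU token generation is roughly memory-bandwidth bound, so an upper bound on tokens/s is bandwidth divided by the bytes read per generated token, which is about the size of the quantized model. The bandwidth and size figures below are illustrative assumptions; real llama.cpp numbers come in lower.

```python
# Rough upper-bound estimate only: one full pass over the weights per token.
def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

model_size_gb = 38.5  # assumed approximate size of a 4-bit 65B GGML file
for label, bw in [("2-channel DDR5-5200 (~83 GB/s)", 83.0),
                  ("2-channel DDR5-6400 (~102 GB/s)", 102.0)]:
    print(f"{label}: ~{est_tokens_per_sec(bw, model_size_gb):.1f} tok/s upper bound")
```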
2023-06-10T07:06:30
https://www.reddit.com/r/LocalLLaMA/comments/145t90n/cpu_inference_7950x_vs_13900k_which_one_is_better/
Big_Communication353
self.LocalLLaMA
2023-06-10T07:10:37
0
{}
145t90n
false
null
t3_145t90n
/r/LocalLLaMA/comments/145t90n/cpu_inference_7950x_vs_13900k_which_one_is_better/
false
false
self
4
null
Analysis of size-perplexity tradeoff for k-quantization
21
So, I really liked [this post](https://www.reddit.com/r/LocalLLaMA/comments/142q5k5/updated_relative_comparison_of_ggml_quantization/?utm_source=share&utm_medium=web2x&context=3), especially the handy table put together by u/kerfuffleV2 and u/YearZero in the comments. The question was brought up as to whether any quantization type/size presents an "ideal" trade-off of size to perplexity. Kerfuffle computed the increase in perplexity per 1 GB saved. I want to make it clear that I appreciate their work and don't think that's a *bad* metric, but I wanted to explore another one.

Given that a 1 GB change isn't always worth the same (going from 13 GB to 12 GB is ho-hum; going from 3 GB to 2 GB, or even from 7 GB to 6 GB, which could make a model viable on systems it wasn't before, is a big deal), I figured it would be interesting to compute something different: percent change in size per percent change in perplexity. Specifically, I wanted to compare this from step to step rather than "overall" (i.e. compute the change going from q5_km to q5_ks, rather than computing the change from float16 to q5_km, from float16 to q5_ks, etc.). While this should give a rough idea of whether stepping down to a certain quantization level is wise, note that it's a dimensionless metric with no specific meaning (other than that at "1", an x-percent change in perplexity is being exchanged for the same x-percent change in size), so we'll have to look at the data to decide what counts as "good".

I did this using the numbers for the 7B model, which I think are pretty generally applicable since the other models follow a similar pattern (13B seems a little more resistant to perplexity changes, but that doesn't seem to hold across all larger model sizes ([source](https://github.com/ggerganov/llama.cpp/pull/1684))). I also removed the "old" q4_0, q4_1, q5_0, and q5_1 methods to avoid confusion and negative numbers in the table. Anyway, the results are as follows:

|quantization type|size in GB|perplexity|perplexity increase*|size reduction*|size reduction/perplexity increase|
|:-|:-|:-|:-|:-|:-|
|q2_k|2.67|6.774|4.94494|2.90909|0.58830|
|q3_ks|2.75|6.4571|4.98837|10.13072|2.03087|
|q3_km|3.06|6.1503|1.04158|8.65672|8.31113|
|q3_kl|3.35|6.0869|1.08611|5.89888|5.43121|
|q4_ks|3.56|6.0215|1.03018|6.31579|6.13074|
|q4_km|3.8|5.9601|0.30630|12.24018|39.96151|
|q5_ks|4.33|5.9419|0.35637|2.69663|7.56692|
|q5_km|4.45|5.9208|0.16579|13.59223|81.98336|
|q6_k|5.15|5.911|0.06772|23.13433|341.63619|
|q8_0|6.7|5.907|0.00677|48.46153|7156.07308|
|float 16|13|5.9066|-|-|-|

\*from the previous quantization step, in percent

So, this is interesting. What are some takeaways?

First off, notice the HUGE numbers in the last two columns for q8_0 and q6_k. They demonstrate that there's an *insane* benefit to going from float16 to q8_0, and from q8_0 to q6_k, since each step yields a huge size saving in exchange for a minimal perplexity increase. That's kind of already common knowledge, but it confirms the metric is working as intended.

Next, for q5_km, the number is still pretty high, though not as astronomical as before. I think this is still a "no-brainer" step to make, and I think most people would agree, since it provides better quality than the former q5_1 (the previous "highest that anyone really used" level) while being smaller. This is essentially the highest quantization level that I think makes sense for everyday use rather than benchmarking or experiments.

q5_ks actually offers minimal benefit. I was a little surprised by this, but it makes sense: the perplexity increase isn't large, but the space saved isn't either. Both are small enough that it's kind of a "meh" optimization.

It goes back up for q4_km, though. This makes sense given that we're seeing a <1% increase in perplexity from q5_ks while getting a 12% space saving. That's really the last time we see double digits, though; after this, you're trading perplexity for space at a rate not seen at the larger sizes. I think this marks q4_km as the "sweet spot" of quantization: the smallest size that doesn't sacrifice "too much" for what you're getting, and the size to aim for if your system can support it. With regard to the "old" methods, it's marginally worse than q5_1 while being slightly smaller than q4_1, which is hard to beat on both fronts.

Moving on, it's all single digits. These aren't necessarily "bad" sacrifices to make depending on your resources, but they feel like more of a sacrifice, especially the moves down to q3_ks and q2_k, where skyrocketing perplexity pushes the metric all the way down to 2 and then *0.6*. I think this really shows the limitations of q3 and q2 quantization: relative to the other types, trading a 1% change in perplexity for a 2% change in size (or worse, vice versa) is REALLY cutting into performance to cut size down. Coupled with the fact that 65B models cut down to q2_k start to approach the perplexity of 30Bs with good quantization, this shows that these levels are neat in theory but should generally be avoided unless you *really* need them to fit a specific model into your RAM/VRAM. Try to aim for *at least* q3_km.

Hopefully someone finds this interesting :)
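A minimal sketch reproducing the table's metric from the 7B numbers quoted above (percent size reduction divided by percent perplexity increase, each computed against the previous quantization step); the values are copied from the post, not re-measured.

```python
# (name, size in GB, perplexity), ordered from largest to smallest, from the table above.
quants = [
    ("float16", 13.00, 5.9066),
    ("q8_0",     6.70, 5.9070),
    ("q6_k",     5.15, 5.9110),
    ("q5_km",    4.45, 5.9208),
    ("q5_ks",    4.33, 5.9419),
    ("q4_km",    3.80, 5.9601),
    ("q4_ks",    3.56, 6.0215),
    ("q3_kl",    3.35, 6.0869),
    ("q3_km",    3.06, 6.1503),
    ("q3_ks",    2.75, 6.4571),
    ("q2_k",     2.67, 6.7740),
]

for (prev_name, prev_size, prev_ppl), (name, size, ppl) in zip(quants, quants[1:]):
    ppl_increase = (ppl - prev_ppl) / prev_ppl * 100        # percent, vs previous step
    size_reduction = (prev_size - size) / prev_size * 100   # percent, vs previous step
    ratio = size_reduction / ppl_increase
    print(f"{prev_name:>8} -> {name:<7} ppl +{ppl_increase:7.4f}%  "
          f"size -{size_reduction:6.2f}%  ratio {ratio:10.2f}")
```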
2023-06-10T07:15:52
https://www.reddit.com/r/LocalLLaMA/comments/145tf00/analysis_of_sizeperplexity_tradeoff_for/
AI-Pon3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
145tf00
false
null
t3_145tf00
/r/LocalLLaMA/comments/145tf00/analysis_of_sizeperplexity_tradeoff_for/
false
false
self
21
{'enabled': False, 'images': [{'id': 'MtnaEafBHqpNXjJMi7PNYjV_OuOxnwpBHlMRqhHpjtg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jvEyAVR0M299SHUgKVVOkTLaNzJ1lgJDitrfjixc0FY.jpg?width=108&crop=smart&auto=webp&s=4badc49beb71f203272cacfc3fe01f465f334f49', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jvEyAVR0M299SHUgKVVOkTLaNzJ1lgJDitrfjixc0FY.jpg?width=216&crop=smart&auto=webp&s=59ad48c1375bdc6bc0970043be7d2456ffa21a63', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jvEyAVR0M299SHUgKVVOkTLaNzJ1lgJDitrfjixc0FY.jpg?width=320&crop=smart&auto=webp&s=93c86cb7b5c9452af2db1d9cf5c218993056d6ce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jvEyAVR0M299SHUgKVVOkTLaNzJ1lgJDitrfjixc0FY.jpg?width=640&crop=smart&auto=webp&s=66b29fe05c81b036ab22c7d6a5174cc4df89d026', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jvEyAVR0M299SHUgKVVOkTLaNzJ1lgJDitrfjixc0FY.jpg?width=960&crop=smart&auto=webp&s=e559e6481d0c9a8205e60a9221ec0b5d6a5c3e72', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jvEyAVR0M299SHUgKVVOkTLaNzJ1lgJDitrfjixc0FY.jpg?width=1080&crop=smart&auto=webp&s=f91b6a3162ba637963a57a05ac965b99b499781e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jvEyAVR0M299SHUgKVVOkTLaNzJ1lgJDitrfjixc0FY.jpg?auto=webp&s=a085c9e657c2009d5a673b3403890d1476382e1b', 'width': 1200}, 'variants': {}}]}
How to use KoboldCPP?
2
[removed]
2023-06-10T07:28:11
https://www.reddit.com/r/LocalLLaMA/comments/145tmuf/how_to_use_koboldcpp/
yungfishstick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
145tmuf
false
null
t3_145tmuf
/r/LocalLLaMA/comments/145tmuf/how_to_use_koboldcpp/
false
false
default
2
null
Question about which .bin file to use and quantization
10
Hi all! After a few weeks of leaving my system dormant I decided to jump back into the local LLM frenzy, and I'm pleased to say that after some tinkering with Conda environments and Python packages yesterday I managed to run `Wizard-Vicuna-13B-Uncensored` on my **Apple M2 Pro - 16 GB RAM**.

When I first inspected the files on Hugging Face I saw a bunch of new files that I didn't encounter the last time I played with LLMs. As my machine is not ideal for running LLMs locally, my understanding is that I need 4-bit quantized GGML models, so here we are. Because I didn't have time to download all the files I started with these two:

* Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin
* Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_1.bin

By the way, I assumed that both files are needed to complete the model, similar to a multi-part `.zip` file. Is that accurate, or can I get away with only using one? How does that affect output quality and speed? Leaving only the `q4_0.bin` and `q4_1.bin` files in my `/models` folder and then starting the chat gave me a decent 5 tokens/s, and I've been chatting away with this wonderful model.

But now my question is: what are all the other files for, and could they help me improve the quality and/or speed of my chat experience? For example, when I have only `*_K_M.bin` files, I can't even get the model to load. So what are these files for? As I've gathered from the commit messages on TheBloke's Hugging Face page it's something pertaining to k-quants, but I can't find any information on what they're used for...

Then, would I benefit from running ONLY the `q8_0.bin`, for example, or is my hardware not equipped to handle that load? Am I better off running `q4_0.bin`, or maybe I should go for the "newer" `q5_0.bin`? Here are all the files available for download in the "Files and versions" tab:

    Wizard-Vicuna-13B-Uncensored.ggmlv3.q2_K.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q3_K_L.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q3_K_M.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q3_K_S.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_1.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_K_M.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_K_S.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q5_0.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q5_1.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q5_K_M.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q5_K_S.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q6_K.bin
    Wizard-Vicuna-13B-Uncensored.ggmlv3.q8_0.bin

PS: I run the model through `oobabooga/text-generation-webui` with the following command:

    python server.py --threads=8 --gpu-memory=10 --mlock --chat --model=TheBloke_Wizard-Vicuna-13B-Uncensored-GGML
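For what it's worth, each `.ggmlv3.*.bin` file is a complete standalone model at a different quantization level, so a single file is enough. A minimal sketch of grabbing just one quant with `huggingface_hub` and loading it with `llama-cpp-python` (repo and file names follow the post; the prompt format and package versions are assumptions):

```python
# Hedged sketch: download a single quant file instead of the whole repo,
# then load it directly with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="TheBloke/Wizard-Vicuna-13B-Uncensored-GGML",
    filename="Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin",
)

llm = Llama(model_path=path, n_ctx=2048, n_threads=8)
out = llm("### Human: Hello!\n### Assistant:", max_tokens=64)
print(out["choices"][0]["text"])
```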
2023-06-10T07:28:53
https://www.reddit.com/r/LocalLLaMA/comments/145tn9q/question_about_which_bin_file_to_use_and/
RastaBambi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
145tn9q
false
null
t3_145tn9q
/r/LocalLLaMA/comments/145tn9q/question_about_which_bin_file_to_use_and/
false
false
self
10
null
LexPodLM-13B
9
[https://huggingface.co/64bits/LexPodLM-13B](https://huggingface.co/64bits/LexPodLM-13B)

LexPodLM-13B is a LLaMA-13B model fine-tuned using Vicuna's schema, with this [dataset](https://huggingface.co/datasets/64bits/lex_fridman_podcast_for_llm_vicuna). While it might not excel in tasks like code writing, fact searching, or essay writing, it can potentially serve as an effective chat model, capable of providing verbose and nuanced responses.

Sample responses:

*Human: do you think consciousness is fundamentally computational?*

*Assistant: I think consciousness is foundational computational. Yes. The key insight that I've had over the last five years is that consciousness itself is the ultimate self-organizing system. It's the ultimate emergent of the universe. It's the ultimate thing that's not explainable in terms of its parts. So it's not that I think brains are computation. I think, you know, brains are made of neurons, and neurons are made of cells. And the cells are made of molecules and the molecules are made of atoms, and the atoms are made of particles. And we can explain all of those things in terms of their parts. There's no emergent physics in physics. There are no emergent laws in physics. And you know, people have been trying to get to this level. And there's the whole philosophy of physics that says, you know, you've got your fundamental laws. Let's just explain everything in terms of the fundamental laws. But that's just not how physics works. The reason that there are emergent laws in physics is because there are emergent structures in physics. And the first emergent structure is the cell. The cell is a structure that doesn't exist in physics. It's an emergent structure. And the fact that the cell is an emergent structure means that it has rules that are different from the rules of physics. And the same thing with brains. Brains have rules that are different from the rules of physics. And the same thing with consciousness. Consciousness, the fact that it exists at all is an emergent fact. It's not a fact that you can reduce to the underlying physics. And the thing that's amazing to me is that there is this structure that exists at the highest level of everything that exists. And it's not an emergent. So the fact that you and I have consciousness is not an emergent fact. It's a foundational fact. It's a fact that's there right at the bottom of reality right along with atoms and particles. I mean, it's not that I think that brains are conscious. I think that brains are a particular way of organizing consciousness. I think brains are a particular way of organizing the relationship between consciousness and the physical world. And so when people say to me, you know, well, what do you think?*

*Human: What are the ethical implications of artificial intelligence?*

*Assistant: The ethical implications of artificial intelligence are significant. As AI systems become more advanced and integrated into our daily lives, we have to be aware of their limitations and potential unfair biases. Such systems should be designed with a focus on fairness, transparency, and security. AI systems should not be used to replace human decision-making. Their output should be easily interpretable, so that decisions can be traced to rationales and justifications. We should also prioritize the well-being of humans over the well-being of AI systems. We need to ensure that AI systems are designed in a way that they do not harm users or other people. This can be achieved by building in safety mechanisms and testing systems for potential biases and security vulnerabilities.*

*Human: How do you think quantum computing will affect our future?*

*Assistant: Quantum computing has the potential to change the way we compute in many ways. One of the most significant benefits will be the possibility of solving previously unsolvable problems and accelerating the development of machine learning and artificial intelligence. By harnessing the power of quantum computers, we can make machines that are exponentially more powerful than any existing computer.*

*Human: What are your thoughts on the Singularity theory?*

*Assistant: I am a strong proponent of the singularity. I have seen many technological advancements in my career. I've seen the power of exponential growth in technology. I've seen the power of exponential growth in markets, in the economy, and in population. And I'm very excited about the future. I'm very excited about what's going to happen. And I think that it's important for the world to understand that. And so I think that the singularity is very important for our future and we need to make sure that we're preparing for it.*

~~----------------------------------------------~~

Follow me on twitter: [https://twitter.com/64bits_ai](https://twitter.com/64bits_ai)
2023-06-10T08:08:36
https://www.reddit.com/r/LocalLLaMA/comments/145ubqx/lexpodlm13b/
Latter-Implement-243
self.LocalLLaMA
2023-06-11T01:55:33
0
{}
145ubqx
false
null
t3_145ubqx
/r/LocalLLaMA/comments/145ubqx/lexpodlm13b/
false
false
self
9
{'enabled': False, 'images': [{'id': 'kKJ_cFNbVmbf2ajs03wwK3pozHvGKhF6DD5aV9Ooarg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VCwUSliM2bu5Btbsf5cSq4YMwsDwXmdYSLsYsdZMr9U.jpg?width=108&crop=smart&auto=webp&s=9057e60a476bcb0767f93c0d1b47cf545c2a9c53', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VCwUSliM2bu5Btbsf5cSq4YMwsDwXmdYSLsYsdZMr9U.jpg?width=216&crop=smart&auto=webp&s=b276578c207d64d37b4b125505083479d1093da1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VCwUSliM2bu5Btbsf5cSq4YMwsDwXmdYSLsYsdZMr9U.jpg?width=320&crop=smart&auto=webp&s=70733b9ab6bb43699266cb18ce42d72c43cf597a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VCwUSliM2bu5Btbsf5cSq4YMwsDwXmdYSLsYsdZMr9U.jpg?width=640&crop=smart&auto=webp&s=3e2c51a57b4cc2a811e285f2e6745707d050eae8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VCwUSliM2bu5Btbsf5cSq4YMwsDwXmdYSLsYsdZMr9U.jpg?width=960&crop=smart&auto=webp&s=e822d5cfdb2f5e69f739cfaa9e0c257dd38b4ccc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VCwUSliM2bu5Btbsf5cSq4YMwsDwXmdYSLsYsdZMr9U.jpg?width=1080&crop=smart&auto=webp&s=1f7650608f75254da8cc089d38b4e77eba500553', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VCwUSliM2bu5Btbsf5cSq4YMwsDwXmdYSLsYsdZMr9U.jpg?auto=webp&s=25363890c5472169553e0c80965672139dc36c2f', 'width': 1200}, 'variants': {}}]}
Installed llama.cpp successfully in Windows 11 using standard command line input.
1
[removed]
2023-06-10T08:20:45
https://www.reddit.com/r/LocalLLaMA/comments/145uiy4/installed_llamacpp_successfully_in_window_11/
Formal_Lingonberry29
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
145uiy4
false
null
t3_145uiy4
/r/LocalLLaMA/comments/145uiy4/installed_llamacpp_successfully_in_window_11/
false
false
default
1
null
PAutoBot - Private Auto Robot was released - Ask your local documents!
28
[removed]
2023-06-10T08:47:37
https://www.reddit.com/r/LocalLLaMA/comments/145uyn9/pautobot_private_auto_robot_was_released_ask_your/
PuzzleheadedLab4175
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
145uyn9
false
null
t3_145uyn9
/r/LocalLLaMA/comments/145uyn9/pautobot_private_auto_robot_was_released_ask_your/
false
false
default
28
null
Open llama 7B is savage
187
2023-06-10T09:01:00
https://i.redd.it/ktj544aao55b1.png
Famberlight
i.redd.it
1970-01-01T00:00:00
0
{}
145v6of
false
null
t3_145v6of
/r/LocalLLaMA/comments/145v6of/open_llama_7b_is_savage/
false
false
https://b.thumbs.redditm…aP_ejyoqfv0A.jpg
187
{'enabled': True, 'images': [{'id': '3_rXsbM3VJ4WIo3Vf1FOTvtkqJHPb6TfFMn5699imIA', 'resolutions': [{'height': 26, 'url': 'https://preview.redd.it/ktj544aao55b1.png?width=108&crop=smart&auto=webp&s=a673b44006289667e5834578212c08089aa63b5a', 'width': 108}, {'height': 52, 'url': 'https://preview.redd.it/ktj544aao55b1.png?width=216&crop=smart&auto=webp&s=c7b7cc5b94c9e32629de44604a19767db065fe83', 'width': 216}, {'height': 77, 'url': 'https://preview.redd.it/ktj544aao55b1.png?width=320&crop=smart&auto=webp&s=72a6a6046e13c2fc967ac355784512cb9d71494a', 'width': 320}, {'height': 155, 'url': 'https://preview.redd.it/ktj544aao55b1.png?width=640&crop=smart&auto=webp&s=2f3649bb6d81bd1100290c07b3207108ecf2db97', 'width': 640}], 'source': {'height': 177, 'url': 'https://preview.redd.it/ktj544aao55b1.png?auto=webp&s=d470616626bd897bd72bffd6b83ca3daea0673af', 'width': 727}, 'variants': {}}]}
Is there a way to fine-tune llama on extremely small dataset?
31
I am not talking about embedding search. I want to fine-tune a model on a single document or a handful of documents, to add knowledge into the LLM. I tried to fine-tune llama-alpaca on information about GPT-4. That is beyond the Alpaca dataset's cutoff, so it should be new information. I made a small QA dataset, barely 10-15 entries. After training, the model can replicate the answers to those questions exactly. But when I ask it questions slightly beyond the dataset, like "What large language models did OpenAI release?" or "What can you tell me about GPT-4?", it starts to hallucinate very hard. It confuses GPT-4 with GPT-3 and GPT-2, e.g. it says that it has 12 layers.

It's obvious that LLMs learn (to answer questions about) some facts during fine-tuning. Or do they learn facts during pre-training? How do they do it? Do I need to extend the dataset to 100 questions? Do I need to modify the dataset to anchor the new information to other entities the model might already know, such as LLMs or GPT-3 or GPT-2?
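For reference, a minimal sketch of the kind of tiny LoRA fine-tune the post describes, using transformers + peft + datasets. The base checkpoint name, prompt template, and hyperparameters are illustrative assumptions, and this is not a recipe that fixes the hallucination problem described above.

```python
# Hedged sketch of a small LoRA fine-tune on a handful of QA pairs.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # assumed base checkpoint
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token

qa_pairs = [  # toy stand-ins for the 10-15 entries mentioned in the post
    {"q": "What large language models did OpenAI release?",
     "a": "GPT-2, GPT-3, GPT-3.5 and GPT-4."},
]
ds = Dataset.from_list(qa_pairs).map(
    lambda r: {"text": f"### Question: {r['q']}\n### Answer: {r['a']}{tok.eos_token}"})
ds = ds.map(lambda r: tok(r["text"], truncation=True, max_length=512),
            remove_columns=ds.column_names)

model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="qa-lora", num_train_epochs=3,
                           per_device_train_batch_size=1, learning_rate=2e-4,
                           logging_steps=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
model.save_pretrained("qa-lora-adapter")
```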
2023-06-10T12:51:17
https://www.reddit.com/r/LocalLLaMA/comments/145zigd/is_there_a_way_to_finetune_llama_on_extremely/
nikitastaf1996
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
145zigd
false
null
t3_145zigd
/r/LocalLLaMA/comments/145zigd/is_there_a_way_to_finetune_llama_on_extremely/
false
false
self
31
null
r/MachineLearning on Reddit: [P] FalconFBI - LLM Generated Reports For FBI's Most Wanted
1
2023-06-10T12:57:48
https://www.reddit.com/r/MachineLearning/comments/145ynul/p_falconfbi_llm_generated_reports_for_fbis_most/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=1&utm_term=1
KlutzyRemove9936
reddit.com
1970-01-01T00:00:00
0
{}
145zn3o
false
null
t3_145zn3o
/r/LocalLLaMA/comments/145zn3o/rmachinelearning_on_reddit_p_falconfbi_llm/
false
false
default
1
null
axolotl - Finetune many models easily with QLoRA and Landmark attention support!
92
2023-06-10T13:06:10
https://github.com/OpenAccess-AI-Collective/axolotl
bratao
github.com
1970-01-01T00:00:00
0
{}
145zttj
false
null
t3_145zttj
/r/LocalLLaMA/comments/145zttj/axolotl_finetune_many_models_easily_with_qlora/
false
false
https://b.thumbs.redditm…64bOcU2wNzkg.jpg
92
{'enabled': False, 'images': [{'id': '1Q5s9tPPUCeYJUD3uoykGDYUfuneA8DbK4whGll8HQk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HAZlaUlmVNkB6dOty1JOF8qjSMP672DLfKH_x1JWOOc.jpg?width=108&crop=smart&auto=webp&s=b0aa664b4902fde8ac8614f64ae503b5aaedd1c9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HAZlaUlmVNkB6dOty1JOF8qjSMP672DLfKH_x1JWOOc.jpg?width=216&crop=smart&auto=webp&s=738938df3561e86ad3b4c21a022f906bea7df53f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HAZlaUlmVNkB6dOty1JOF8qjSMP672DLfKH_x1JWOOc.jpg?width=320&crop=smart&auto=webp&s=780ca25d7b784ddfbaa53337772d65f0b0143e59', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HAZlaUlmVNkB6dOty1JOF8qjSMP672DLfKH_x1JWOOc.jpg?width=640&crop=smart&auto=webp&s=5b5d64ee4001b5ec1a8670ceeed193a51c66d033', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HAZlaUlmVNkB6dOty1JOF8qjSMP672DLfKH_x1JWOOc.jpg?width=960&crop=smart&auto=webp&s=90fcf9390d4cce4a813856d7825b0796962003ea', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HAZlaUlmVNkB6dOty1JOF8qjSMP672DLfKH_x1JWOOc.jpg?width=1080&crop=smart&auto=webp&s=51b39a67a964c4ab60e72f76341ea796bac0f4a9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HAZlaUlmVNkB6dOty1JOF8qjSMP672DLfKH_x1JWOOc.jpg?auto=webp&s=4821b9f371dfabee2a69b49ab4ca55957ff13012', 'width': 1200}, 'variants': {}}]}
How to configure koboldcpp for roleplay?
6
[deleted]
2023-06-10T13:56:57
[deleted]
1970-01-01T00:00:00
0
{}
1460xuj
false
null
t3_1460xuj
/r/LocalLLaMA/comments/1460xuj/how_to_configure_koboldcpp_for_roleplay/
false
false
default
6
null
Major Performance Degradation with nVidia driver 535.98 at larger context sizes
47
I tried upgrading to the newest nVidia driver, 535.98, and saw performance tank as the context size started to get larger. I ended up rolling back to 532.03 since performance was so bad. Using the 'TheBloke_guanaco-33B-GPT' GPTQ model on a 4090 with the OobaBooga Text Generation UI in Notebook mode, I had it generate a story 300 tokens at a time.

**Driver 532.03**

Tokens/s | Tokens | Context Size
---|---|---
8.79 | 300 | 325
7.95 | 300 | 625
7.88 | 300 | 925
7.56 | 300 | 1225
7.19 | 190 | 1525

Overall, performance is pretty stable, with perhaps a minor decrease as the context size increases.

**Driver 535.98**

Tokens/s | Tokens | Context Size
---|---|---
8.25 | 300 | 329
5.83 | 300 | 629
1.48 | 47 | 929

Almost immediately, performance tanks. It decided to produce a much shorter story this time; in hindsight I should have kept the seed the same, but I don't think I would have had the patience to go any further. This driver also makes front-end tools like SillyTavern essentially unusable, as they send along large amounts of context with each chat message. Loading a larger character card and simply typing 'Hi' produced a response that generated at 0.65 tokens/s.

There are a couple of threads in /r/StableDiffusion also complaining about performance issues with 535.98. It seems like nVidia may have changed something AI-related that's causing problems. Has anyone else tried driver 535.98? If so, what's your performance like?
2023-06-10T14:14:24
https://www.reddit.com/r/LocalLLaMA/comments/1461d1c/major_performance_degradation_with_nvidia_driver/
GoldenMonkeyPox
self.LocalLLaMA
2023-06-10T14:28:25
0
{}
1461d1c
false
null
t3_1461d1c
/r/LocalLLaMA/comments/1461d1c/major_performance_degradation_with_nvidia_driver/
false
false
self
47
null
Can two models be averaged together like Stable Diffusion models?
10
Pardon my lack of knowledge. I was wondering if this is possible, as it would let anybody experiment more deeply without needing the deep understanding most people here possess.
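A minimal sketch of the naive form of this, assuming two checkpoints with identical architecture and tokenizer (analogous to Stable Diffusion checkpoint merging); the file paths and the 50/50 ratio are assumptions, and there is no guarantee the merged model remains coherent.

```python
# Hedged sketch: average two PyTorch state dicts tensor-by-tensor.
import torch

a = torch.load("model_a/pytorch_model.bin", map_location="cpu")  # hypothetical paths
b = torch.load("model_b/pytorch_model.bin", map_location="cpu")

alpha = 0.5  # blend ratio, model A weight
merged = {}
for key in a:
    if key in b and a[key].shape == b[key].shape and a[key].is_floating_point():
        merged[key] = alpha * a[key] + (1 - alpha) * b[key]
    else:
        merged[key] = a[key]  # keep tensors that don't line up from model A

torch.save(merged, "pytorch_model_merged.bin")
```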
2023-06-10T15:09:35
https://www.reddit.com/r/LocalLLaMA/comments/1462nsv/can_two_models_be_averaged_together_like_stable/
Extraltodeus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1462nsv
false
null
t3_1462nsv
/r/LocalLLaMA/comments/1462nsv/can_two_models_be_averaged_together_like_stable/
false
false
self
10
null
What hardware do I need for fine tuning/training?
8
Hi all, I've been reading threads here and have a basic understanding of hardware requirements for inference. However, I'm a bit unclear on the requirements (and current capabilities) for fine-tuning, embedding, training, etc., i.e. adapting models to personal text corpora.

I currently need to retire my dying 2013 MBP, so I'm wondering how much I could do with a 16GB or 24GB MacBook Air (and start saving towards a bigger workstation in the meantime). In the early stages I don't so much need to fit the biggest model as I want to learn the basics of hooking an LLM up to other tools like LangChain, or start working with my own texts. It doesn't need to be the most powerful, but it should at least be enough that playing around is not too painful. Is 16GB enough to start doing meaningful things with a 13B model? Would a 24GB MacBook Air be able to run a 4-bit 30B model with enough overhead to couple it with other tools?

The second question is to what extent it would be worth spending an extra ~$1000 on top of what I would spend on a MacBook Air to upgrade to an M1 Pro/Max with 32GB/64GB. Other than the fact that 64GB can fit 2x 4-bit 30B or 1x 4-bit 65B, are there other important qualitative differences? E.g. would the M1/M2 Pro/Max be much faster than the regular M2?

Or, for training, fine-tuning, personalizing, etc., is putting that $1000 towards cloud or an NVIDIA workstation the way to go? How much would a 1x 3090 or 2x 3090 NVIDIA machine (with sufficient RAM) cost? Thanks!
2023-06-10T15:35:27
https://www.reddit.com/r/LocalLLaMA/comments/14639ol/what_hardware_do_i_need_for_fine_tuningtraining/
EnPaceRequiescat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14639ol
false
null
t3_14639ol
/r/LocalLLaMA/comments/14639ol/what_hardware_do_i_need_for_fine_tuningtraining/
false
false
self
8
null
How to get started?
0
[removed]
2023-06-10T16:06:15
https://www.reddit.com/r/LocalLLaMA/comments/1464083/how_to_get_started/
malkauns
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1464083
false
null
t3_1464083
/r/LocalLLaMA/comments/1464083/how_to_get_started/
false
false
default
0
null
Using AI characters as Tools?
12
Has anyone had any success using AI characters as tools or bots? For instance, I like using a specific prompt to get Bing AI to print out a stable diffusion prompt based on an idea I give it. I feed bing AI this prompt, then say “/generate a sunset on a beach” and it gives me a great prompt for stable diffusion. I’d like to be able to load an AI character who has this prompt built in. Most of the AI characters I see are just to chat with. Does anyone have examples built for utility that I could use with oogabooga? Thanks!
2023-06-10T16:17:01
https://www.reddit.com/r/LocalLLaMA/comments/146499t/using_ai_characters_as_tools/
NerfGuyReplacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146499t
false
null
t3_146499t
/r/LocalLLaMA/comments/146499t/using_ai_characters_as_tools/
false
false
self
12
null
Seeking advice on running local LLM with laptop specs: RTX 4050 6GB, 40GB RAM, Intel i5-12450H
0
[removed]
2023-06-10T16:17:51
https://www.reddit.com/r/LocalLLaMA/comments/14649zp/seeking_advice_on_running_local_llm_with_laptop/
Beneficial_Promise20
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
14649zp
false
null
t3_14649zp
/r/LocalLLaMA/comments/14649zp/seeking_advice_on_running_local_llm_with_laptop/
false
false
default
0
null
text-generation-webui's "Train Only After" option
3
[removed]
2023-06-10T16:39:51
https://www.reddit.com/r/LocalLLaMA/comments/1464sw6/textgenerationwebuis_train_only_after_option/
Old_Speech_5665
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1464sw6
false
null
t3_1464sw6
/r/LocalLLaMA/comments/1464sw6/textgenerationwebuis_train_only_after_option/
false
false
default
3
null
Infinite response when finetuning LLaMA 13B
2
[removed]
2023-06-10T16:58:42
https://www.reddit.com/r/LocalLLaMA/comments/146598z/infinite_reponse_when_finetuning_llama_13b/
Old_Speech_5665
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146598z
false
null
t3_146598z
/r/LocalLLaMA/comments/146598z/infinite_reponse_when_finetuning_llama_13b/
false
false
default
2
null
2bit quant - 30b model
3
I've seen the 2 bit quants of 30b models require max 16.1 gb VRAM. A 4090 laptop GPU has 16gb of VRAM. Is there a way to run these models on a single laptop with this card? If so - how?
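One possible route is llama-cpp-python with partial GPU offload; a hedged sketch follows. The file name and layer count are assumptions, and this assumes a build with GPU (cuBLAS) support so that `n_gpu_layers` actually offloads anything.

```python
# Hedged sketch: load a q2_K GGML 30B file and offload most layers to the GPU.
from llama_cpp import Llama

llm = Llama(
    model_path="guanaco-33B.ggmlv3.q2_K.bin",  # hypothetical local file
    n_gpu_layers=50,   # offload as many of the model's ~60 layers as fit in 16 GB
    n_ctx=2048,
)
out = llm("Explain the trade-offs of 2-bit quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```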
2023-06-10T17:46:24
https://www.reddit.com/r/LocalLLaMA/comments/1466el0/2bit_quant_30b_model/
BasicsOnly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1466el0
false
null
t3_1466el0
/r/LocalLLaMA/comments/1466el0/2bit_quant_30b_model/
false
false
self
3
null
Advice for model to run on laptop gtx 1060 (6gb), i7 8750H, ram 16gb
3
I'm a newbie here. I have been watching youtube videos and following posts here for a couple of weeks, but still helplessly lost. I'd like to try a decent enough model, using a gui like oobabooga. Which should be a better model for my device, a 7b gptq model or a ggml model? Also which model have you tried that (until now) seems to perform the best, with regards to creative writing?
2023-06-10T17:51:05
https://www.reddit.com/r/LocalLLaMA/comments/1466ikf/advice_for_model_to_run_on_laptop_gtx_1060_6gb_i7/
blacktie_redstripes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1466ikf
false
null
t3_1466ikf
/r/LocalLLaMA/comments/1466ikf/advice_for_model_to_run_on_laptop_gtx_1060_6gb_i7/
false
false
self
3
null
Commercially available LLMs
2
[deleted]
2023-06-10T17:59:20
[deleted]
1970-01-01T00:00:00
0
{}
1466pco
false
null
t3_1466pco
/r/LocalLLaMA/comments/1466pco/commercially_available_llms/
false
false
default
2
null
Build options
0
[deleted]
2023-06-10T18:55:50
[deleted]
1970-01-01T00:00:00
0
{}
14681vw
false
null
t3_14681vw
/r/LocalLLaMA/comments/14681vw/build_options/
false
false
default
0
null
local llama newbie question
2
[removed]
2023-06-10T19:01:03
https://www.reddit.com/r/LocalLLaMA/comments/146869v/local_llama_newbie_question/
M0N33R
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146869v
false
null
t3_146869v
/r/LocalLLaMA/comments/146869v/local_llama_newbie_question/
false
false
default
2
null
How does your fav LLM handle the request to decompile this 16-bit DOS asm?
9
2023-06-10T19:18:20
https://sharegpt.com/c/TNzfUiU
NancyAurum
sharegpt.com
1970-01-01T00:00:00
0
{}
1468l15
false
null
t3_1468l15
/r/LocalLLaMA/comments/1468l15/how_does_your_fav_llm_handle_the_request_to/
false
false
https://a.thumbs.redditm…DZ_UV9xR-hb4.jpg
9
{'enabled': False, 'images': [{'id': 'KTN-IAIr08myj7ighQ4LRKRhjwRWN_M0vjVWBk0xSsg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Yt7TF9at61N2jlI33LhZX28JKSNi-3galw_ijUY0Fdc.jpg?width=108&crop=smart&auto=webp&s=ec28ee7ae4b58bd2fd2c3e6f71378abd1dd2cb3d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Yt7TF9at61N2jlI33LhZX28JKSNi-3galw_ijUY0Fdc.jpg?width=216&crop=smart&auto=webp&s=8618bf31104a2e0c3743aefd7e29f0d0592c9bb5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Yt7TF9at61N2jlI33LhZX28JKSNi-3galw_ijUY0Fdc.jpg?width=320&crop=smart&auto=webp&s=e99cb61551d4b69a1f8a53e96550618277febfec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Yt7TF9at61N2jlI33LhZX28JKSNi-3galw_ijUY0Fdc.jpg?width=640&crop=smart&auto=webp&s=f1f9b64eb1637f29aa4b09a9ac5120e214112dee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Yt7TF9at61N2jlI33LhZX28JKSNi-3galw_ijUY0Fdc.jpg?width=960&crop=smart&auto=webp&s=9d1ac505024ed0a8c3240f4a65ea6379cc5d7f4b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Yt7TF9at61N2jlI33LhZX28JKSNi-3galw_ijUY0Fdc.jpg?width=1080&crop=smart&auto=webp&s=c3b3e6446e04229cfc0d8faf94d54fdacf0748b5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Yt7TF9at61N2jlI33LhZX28JKSNi-3galw_ijUY0Fdc.jpg?auto=webp&s=4513ba13270d49c8ca54e48b1dde3e5952364a07', 'width': 1200}, 'variants': {}}]}
Hi folks, back with an update to the HumanEval+ programming ranking I posted the other day incorporating your feedback - and some closed models for comparison! Now has improved generation params, new models: Falcon, Starcoder, Codegen, Claude+, Bard, OpenAssistant and more
189
2023-06-10T19:40:17
https://i.imgur.com/AqSjjj2.jpg
ProfessionalHand9945
i.imgur.com
1970-01-01T00:00:00
0
{}
1469343
false
null
t3_1469343
/r/LocalLLaMA/comments/1469343/hi_folks_back_with_an_update_to_the_humaneval/
false
false
https://b.thumbs.redditm…4201uChpZUfE.jpg
189
{'enabled': True, 'images': [{'id': 'aa_R111d1Gj3sZwLu9TBc22fyd-kZKXjHxmRep32GC8', 'resolutions': [{'height': 169, 'url': 'https://external-preview.redd.it/mmqLLanMLzulxsxjLQ6AavLb4yUdmLe0f22S56LP368.jpg?width=108&crop=smart&auto=webp&s=bd42e2bb4013d7e545c862955f75d83d87c58ee1', 'width': 108}, {'height': 339, 'url': 'https://external-preview.redd.it/mmqLLanMLzulxsxjLQ6AavLb4yUdmLe0f22S56LP368.jpg?width=216&crop=smart&auto=webp&s=bf20e8eee9ae4655f79efce410e8a873f9b5d5d8', 'width': 216}, {'height': 502, 'url': 'https://external-preview.redd.it/mmqLLanMLzulxsxjLQ6AavLb4yUdmLe0f22S56LP368.jpg?width=320&crop=smart&auto=webp&s=3c0b7b05b7367c88ea3e2938fee618674517c989', 'width': 320}, {'height': 1005, 'url': 'https://external-preview.redd.it/mmqLLanMLzulxsxjLQ6AavLb4yUdmLe0f22S56LP368.jpg?width=640&crop=smart&auto=webp&s=57f8f6d8511c248f4b103d4de8377c0980862774', 'width': 640}, {'height': 1508, 'url': 'https://external-preview.redd.it/mmqLLanMLzulxsxjLQ6AavLb4yUdmLe0f22S56LP368.jpg?width=960&crop=smart&auto=webp&s=53719bf20d146d850e3067386a720264f4a78316', 'width': 960}, {'height': 1696, 'url': 'https://external-preview.redd.it/mmqLLanMLzulxsxjLQ6AavLb4yUdmLe0f22S56LP368.jpg?width=1080&crop=smart&auto=webp&s=e65fb5009f5f52bad5ccf57ee639e59f696eaf84', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://external-preview.redd.it/mmqLLanMLzulxsxjLQ6AavLb4yUdmLe0f22S56LP368.jpg?auto=webp&s=92b1dac9008f43c13c222920f9d32b314901f019', 'width': 1273}, 'variants': {}}]}
Watercooling, anyone?
1
I’m putting together a server and wondering what people are doing for cooling. I take it a good strat right now is buying a used air-cooled 3090 or two. But is anyone doing a watercooled setup? Seems to me like an open loop would be great for keeping a multi GPU box cool. So I’m wondering what builds are out there.
2023-06-10T21:31:27
https://www.reddit.com/r/LocalLLaMA/comments/146bq7b/watercooling_anyone/
tryunite
self.LocalLLaMA
2023-06-10T21:43:49
0
{}
146bq7b
false
null
t3_146bq7b
/r/LocalLLaMA/comments/146bq7b/watercooling_anyone/
false
false
self
1
null
Minotaur-13b-Landmark - 10k+ context using Landmark Attention
172
I just finished getting my [Landmark-Attention-QLoRA](https://github.com/eugenepentland/landmark-attention-qlora) repo all working! It lets you train models to use landmark attention on a single GPU in 2-3 hours. Landmark attention enables a 50x compression of an LLM's context into landmarks, making the selection of relevant tokens for answers more efficient and allowing 2-16x longer context use without memory constraints.

To use this model in oobabooga, you need the --trust-remote-code flag enabled.

[https://huggingface.co/eugenepentland/Minotaur-13b-Landmark](https://huggingface.co/eugenepentland/Minotaur-13b-Landmark)

The model will most likely be updated within the next day or two with further improvements. I've also released just the QLoRA adapters for my models, and another interesting thing is that I was able to apply the QLoRA trained on Minotaur-13B to the base LLaMA-13B model and it works! So you may be able to take it and apply it to whatever your favorite 13B model is without any retraining.

Edit: We are still running into issues with getting it to read the landmarks properly in oobabooga. It has no problem accepting 10k+ tokens, but it's not able to find the information you are asking for. I will update this post once it has been resolved.
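A hedged sketch of what the post describes, i.e. applying a released QLoRA adapter to a plain 13B base with peft. Both repo ids below are assumptions (check the author's Hugging Face page for the real adapter name), and loading the full merged Landmark model repo would still require trust-remote-code as the post says.

```python
# Hedged sketch: attach a published QLoRA adapter to a base LLaMA-13B checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "huggyllama/llama-13b"                            # assumed base checkpoint
adapter_id = "eugenepentland/Minotaur-13b-Landmark-QLoRA"   # hypothetical adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # LoRA weights layered on top of the base
```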
2023-06-10T23:07:06
https://www.reddit.com/r/LocalLLaMA/comments/146dz1s/minotaur13blandmark_10k_context_using_landmark/
NeverEndingToast
self.LocalLLaMA
2023-06-11T13:51:25
0
{'gid_3': 1}
146dz1s
false
null
t3_146dz1s
/r/LocalLLaMA/comments/146dz1s/minotaur13blandmark_10k_context_using_landmark/
false
false
self
172
{'enabled': False, 'images': [{'id': 'KVVKzWltlazi2Fh9-1tOXKKdVmlXdIPdQ9zALOzn8ps', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/di5yPYU2NKWrc-Lff0jeLOjYXlKiUlE1NK0UrqN72To.jpg?width=108&crop=smart&auto=webp&s=3aac9d6d44d7b785e9d5246e32cd3930ad394e35', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/di5yPYU2NKWrc-Lff0jeLOjYXlKiUlE1NK0UrqN72To.jpg?width=216&crop=smart&auto=webp&s=7a9e6e0a8bd5440a2c9ad6e5242b5e98e6b200a9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/di5yPYU2NKWrc-Lff0jeLOjYXlKiUlE1NK0UrqN72To.jpg?width=320&crop=smart&auto=webp&s=7addfe63c086f6c585017abedf4979b06757224e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/di5yPYU2NKWrc-Lff0jeLOjYXlKiUlE1NK0UrqN72To.jpg?width=640&crop=smart&auto=webp&s=50b3c3c0a383f1482827610588a31aa89ace85ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/di5yPYU2NKWrc-Lff0jeLOjYXlKiUlE1NK0UrqN72To.jpg?width=960&crop=smart&auto=webp&s=df760258fda14f1993e8cad482003b6c3b280be7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/di5yPYU2NKWrc-Lff0jeLOjYXlKiUlE1NK0UrqN72To.jpg?width=1080&crop=smart&auto=webp&s=6ddcc2131737f681fdca5c3ed5b78128ff44028e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/di5yPYU2NKWrc-Lff0jeLOjYXlKiUlE1NK0UrqN72To.jpg?auto=webp&s=5c550a910c74030782cd25ef6e12820270ef317f', 'width': 1200}, 'variants': {}}]}
A website to check which LLMs you can run
250
2023-06-10T23:38:07
https://v.redd.it/k9unetrk0a5b1
TernaryJimbo
v.redd.it
1970-01-01T00:00:00
0
{}
146enb8
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/k9unetrk0a5b1/DASHPlaylist.mpd?a=1695568186%2CZTY3NTQ4ZDI1NmQ2NTNhMGNmZjAxNmVjNjQzOGYxYjFhMTUwYzI4ZWE2NGQ3YzcxM2U1NTk2NzA0ZjVlYWVjMA%3D%3D&v=1&f=sd', 'duration': 14, 'fallback_url': 'https://v.redd.it/k9unetrk0a5b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/k9unetrk0a5b1/HLSPlaylist.m3u8?a=1695568186%2COWIxMzY4MTA0MDQyNTA2MWJjOTMzNzBlM2FhMDZiNDc2NTlhMTc5M2ZkYzRkMjNmMmI3NWU2ZDBjZTdiZGNiNg%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/k9unetrk0a5b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1664}}
t3_146enb8
/r/LocalLLaMA/comments/146enb8/a_website_to_check_which_llms_you_can_run/
false
false
https://b.thumbs.redditm…6t3Tljnkxb4Y.jpg
250
{'enabled': False, 'images': [{'id': 'eU4CmA6_gLrOExMv-tuQv3iDxpEvPY-VvverKPKeV1o', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/sIbVdeNX9ygHwM7crLXyS9QKvcLzDjKlhqSUkxqsrHM.png?width=108&crop=smart&format=pjpg&auto=webp&s=f9976bea4dc24c9b5f11ccb5a37d4da76a59ceb4', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/sIbVdeNX9ygHwM7crLXyS9QKvcLzDjKlhqSUkxqsrHM.png?width=216&crop=smart&format=pjpg&auto=webp&s=a1b17deb5a31a76de5eac3cff5dad4af4ca11af0', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/sIbVdeNX9ygHwM7crLXyS9QKvcLzDjKlhqSUkxqsrHM.png?width=320&crop=smart&format=pjpg&auto=webp&s=772c787a980b046fca72ef50577fcf6e3a93bfa8', 'width': 320}, {'height': 415, 'url': 'https://external-preview.redd.it/sIbVdeNX9ygHwM7crLXyS9QKvcLzDjKlhqSUkxqsrHM.png?width=640&crop=smart&format=pjpg&auto=webp&s=1adf93862ab7c7b883fa11fee16c71abd084ebba', 'width': 640}, {'height': 623, 'url': 'https://external-preview.redd.it/sIbVdeNX9ygHwM7crLXyS9QKvcLzDjKlhqSUkxqsrHM.png?width=960&crop=smart&format=pjpg&auto=webp&s=e11f495fc346d4a7561a61b88c4dead473401179', 'width': 960}, {'height': 700, 'url': 'https://external-preview.redd.it/sIbVdeNX9ygHwM7crLXyS9QKvcLzDjKlhqSUkxqsrHM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5d6cbbac25b6e7ade0430007d93503aafa807abd', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/sIbVdeNX9ygHwM7crLXyS9QKvcLzDjKlhqSUkxqsrHM.png?format=pjpg&auto=webp&s=f82daf9017239f71e5ddddfce809cd83ba8a2902', 'width': 1664}, 'variants': {}}]}
Landmark attention models released, claim to get up to 32k context on 7B llama models, 5K on 13B
101
Disclaimer: This is not my work, but I do want it to get attention. I have managed to get the 13B loaded into the Ooba webui and am currently testing it.

Download the models from here: [https://huggingface.co/eugenepentland](https://huggingface.co/eugenepentland)

Github link: [https://github.com/eugenepentland/landmark-attention-qlora](https://github.com/eugenepentland/landmark-attention-qlora)
2023-06-11T00:21:09
https://www.reddit.com/r/LocalLLaMA/comments/146fkqo/landmark_attention_models_released_claim_to_get/
Deep-Preference
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146fkqo
false
null
t3_146fkqo
/r/LocalLLaMA/comments/146fkqo/landmark_attention_models_released_claim_to_get/
false
false
self
101
{'enabled': False, 'images': [{'id': 'OvfkQGbhXCdxlN4TyqH8By3zlWqtRSo5W6t8_C44wq8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LnAAGBTrMz7O_HAMPhia2GHh8JLmIneRYepgMsR33N4.jpg?width=108&crop=smart&auto=webp&s=8cf14bdcfbd94538d8dc9bcb8f49fad1b6087824', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LnAAGBTrMz7O_HAMPhia2GHh8JLmIneRYepgMsR33N4.jpg?width=216&crop=smart&auto=webp&s=ca815a211cb1cc354c9f3965a82d96b26e3f468b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LnAAGBTrMz7O_HAMPhia2GHh8JLmIneRYepgMsR33N4.jpg?width=320&crop=smart&auto=webp&s=b75f9e5f9ffbe83a3f6e1091580845bdb7e415ed', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LnAAGBTrMz7O_HAMPhia2GHh8JLmIneRYepgMsR33N4.jpg?width=640&crop=smart&auto=webp&s=bb0eb63a72d240cc3fa7fbe500e10851e9c4d056', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LnAAGBTrMz7O_HAMPhia2GHh8JLmIneRYepgMsR33N4.jpg?width=960&crop=smart&auto=webp&s=dbed16ec166305f61785f6b95140318d345c8bda', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LnAAGBTrMz7O_HAMPhia2GHh8JLmIneRYepgMsR33N4.jpg?width=1080&crop=smart&auto=webp&s=6ae85b2033dcc50a82ba2e1434bd70c75705aa65', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LnAAGBTrMz7O_HAMPhia2GHh8JLmIneRYepgMsR33N4.jpg?auto=webp&s=728644df0df90dee48ad1127ae48aa19a7a897d4', 'width': 1200}, 'variants': {}}]}
I am hosting LexPodLM-13B
15
I am hosting LexPodLM-13B for a few days here: [https://16bf7d64b6646ee9b2.gradio.live](https://16bf7d64b6646ee9b2.gradio.live) (it costs me $36 per day). Please feel free to try it out, and please post the responses you find interesting!

~~----------------------------------------------~~

[https://huggingface.co/64bits/LexPodLM-13B](https://huggingface.co/64bits/LexPodLM-13B)

LexPodLM-13B is a LLaMA-13B model fine-tuned using Vicuna's schema, with this [dataset](https://huggingface.co/datasets/64bits/lex_fridman_podcast_for_llm_vicuna). While it might not excel in tasks like code writing, fact searching, or essay writing, it can potentially serve as an effective chat model, capable of providing verbose and nuanced responses.

~~----------------------------------------------~~

Edit: It's more of an experiment and a project for fun, don't take it seriously lol.

~~----------------------------------------------~~

Edit: Thanks for trying it out! I am closing the hosted instance now. Follow me on twitter for more updates: [https://twitter.com/64bits_ai](https://twitter.com/64bits_ai)
2023-06-11T01:52:50
https://www.reddit.com/r/LocalLLaMA/comments/146hglh/i_am_hosting_lexpodlm13b/
Latter-Implement-243
self.LocalLLaMA
2023-06-12T23:42:34
0
{}
146hglh
false
null
t3_146hglh
/r/LocalLLaMA/comments/146hglh/i_am_hosting_lexpodlm13b/
false
false
self
15
null
New instruction tuned LLaMA: Tulu 7/13/30/65b (Exploring the State of Instruction Tuning on Open Resources)
44
[https://arxiv.org/abs/2306.04751](https://arxiv.org/abs/2306.04751)

>In this work we explore recent advances in instruction-tuning language models on a range of open instruction-following datasets. **Despite recent claims that open models can be on par with state-of-the-art proprietary models, these claims are often accompanied by limited evaluation, making it difficult to compare models across the board and determine the utility of various resources**. We provide a large set of instruction-tuned models from 6.7B to 65B parameters in size, trained on 12 instruction datasets ranging from manually curated (e.g., OpenAssistant) to synthetic and distilled (e.g., Alpaca) and systematically evaluate them on their factual knowledge, reasoning, multilinguality, coding, and open-ended instruction following abilities through a collection of automatic, model-based, and human-based metrics. We further introduce Tülu, our best performing instruction-tuned model suite finetuned on a combination of high-quality open resources. Our experiments show that different instruction-tuning datasets can uncover or enhance specific skills, while no single dataset (or combination) provides the best performance across all evaluations. Interestingly, we find that model and human preference-based evaluations fail to reflect differences in model capabilities exposed by benchmark-based evaluations, suggesting the need for the type of systemic evaluation performed in this work. **Our evaluations show that the best model in any given evaluation reaches on average 83% of ChatGPT performance, and 68% of GPT-4 performance,** suggesting that further investment in building better base models and instruction-tuning data is required to close the gap. We release our instruction-tuned models, including a fully finetuned 65B Tülu, along with our code, data, and evaluation framework at [this https URL](https://github.com/allenai/open-instruct) to facilitate future research.

https://preview.redd.it/od1yz25hra5b1.png?width=1456&format=png&auto=webp&s=74a1febded30434fb04fcf80664cc48d759b3fd0

[https://huggingface.co/TheBloke/tulu-7B-GPTQ](https://huggingface.co/TheBloke/tulu-7B-GPTQ)
[https://huggingface.co/TheBloke/tulu-13B-GPTQ](https://huggingface.co/TheBloke/tulu-13B-GPTQ)
[https://huggingface.co/TheBloke/tulu-30B-GPTQ](https://huggingface.co/TheBloke/tulu-30B-GPTQ)
2023-06-11T02:05:46
https://www.reddit.com/r/LocalLLaMA/comments/146hpkq/new_instruction_tuned_llama_tulu_7133065b/
diovd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146hpkq
false
null
t3_146hpkq
/r/LocalLLaMA/comments/146hpkq/new_instruction_tuned_llama_tulu_7133065b/
false
false
https://b.thumbs.redditm…pCujZOgPffyE.jpg
44
null
Is there a ChatGPT-style code interpreter in the open source community yet?
7
I have learned Langchain and understand how it works a bit. I'm trying to build some customized tool in my local environment to test out Llama-7b's ability with data visualization. Even though chatgpt hasn't officially released code interpreter yet as one of their plugins, is there anything close to it in the open source community that executes python code in a sandbox environment for data visualization? Thanks.
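A minimal sketch of the execution loop such a tool needs: run model-generated Python in a separate process with a timeout and capture its output. This is an illustrative assumption of one possible approach, and a subprocess is NOT a real sandbox (no filesystem or network isolation).

```python
# Hedged sketch: execute generated code out-of-process with a timeout.
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout: int = 15) -> str:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=timeout)
        return proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        return "execution timed out"

# Usage: pass whatever the LLM produced as a string.
print(run_generated_code("print(2 + 2)"))
```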
2023-06-11T02:23:56
https://www.reddit.com/r/LocalLLaMA/comments/146i23r/is_there_a_chatgpts_code_interpreter_in_the_open/
kingstonggg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146i23r
false
null
t3_146i23r
/r/LocalLLaMA/comments/146i23r/is_there_a_chatgpts_code_interpreter_in_the_open/
false
false
self
7
null
What model can I run with 8GB RAM and an Intel G/CPU?
4
I'm using a MacBook Air, and I also have about 170GB of storage left.
2023-06-11T04:03:47
https://www.reddit.com/r/LocalLLaMA/comments/146jy0a/what_model_can_i_run_with_a_8gb_ram_and_and_intel/
bot-333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146jy0a
false
null
t3_146jy0a
/r/LocalLLaMA/comments/146jy0a/what_model_can_i_run_with_a_8gb_ram_and_and_intel/
false
false
self
4
null
Trying to run Wizard-Vicuna-30B-Uncensored-GPTQ in text-generation-webUI (windows, using GPU)
2
[removed]
2023-06-11T04:33:58
https://www.reddit.com/r/LocalLLaMA/comments/146khi6/trying_to_run_wizardvicuna30buncensoredgptq_in/
gigachad_deluxe
self.LocalLLaMA
2023-06-11T04:58:57
0
{}
146khi6
false
null
t3_146khi6
/r/LocalLLaMA/comments/146khi6/trying_to_run_wizardvicuna30buncensoredgptq_in/
false
false
default
2
null
Commercial model + API question
0
Hi all - hoping to pick your brains. I've been toying with the idea of creating a commercial application for an LLM. The main requirements are 1) a commercial license so I can make money on it, and 2) either (a) able to load and run fully locally on a cheap device like a Raspberry Pi, or (b) able to set up and run over the internet using an API, but where it doesn't cost any money to access the API. Does anything like this exist? Is this even possible? It doesn't have to be a particularly large or advanced LLM, if that helps things. Any feedback would be appreciated - thank you!
2023-06-11T06:06:38
https://www.reddit.com/r/LocalLLaMA/comments/146m4gq/commercial_model_api_question/
akuhl101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146m4gq
false
null
t3_146m4gq
/r/LocalLLaMA/comments/146m4gq/commercial_model_api_question/
false
false
self
0
null
Cheapest way to run llama.cpp
1
[removed]
2023-06-11T07:35:29
https://www.reddit.com/r/LocalLLaMA/comments/146nlyd/cheapest_way_to_run_llamacpp/
[deleted]
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146nlyd
false
null
t3_146nlyd
/r/LocalLLaMA/comments/146nlyd/cheapest_way_to_run_llamacpp/
false
false
default
1
null
Difference between “chat”, “instruct”, and “chat-instruct”?
71
I see a setting in the Oobabooga text GUI that can change this. What's the difference between the three? Say I wanted an everyday conversation bot that can give me some useful general info sometimes, which one would work best?
2023-06-11T08:01:19
https://www.reddit.com/r/LocalLLaMA/comments/146o14s/difference_between_chat_instruct_and_chatinstruct/
Tasty-Lobster-8915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146o14s
false
null
t3_146o14s
/r/LocalLLaMA/comments/146o14s/difference_between_chat_instruct_and_chatinstruct/
false
false
self
71
null
Requirements to
3
[removed]
2023-06-11T08:23:07
[deleted]
2023-06-11T08:40:01
0
{}
146oe2v
false
null
t3_146oe2v
/r/LocalLLaMA/comments/146oe2v/requirments_to/
false
false
default
3
null
Can't load new Landmark models in ooba, complains that trust_remote_code not enabled when it clearly is
2
Trying to run the Landmark models with the following command and not having any luck. Is there a step that I'm missing? I've searched pretty much every single Google result I can find and just haven't had any luck.

    root@130446e19582:/workspace/text-generation-webui# python server.py --listen --trust-remote-code --auto-devices --model eugenepentland_WizardLM-7B-Landmark
    Gradio HTTP request redirected to localhost :)
    Warning: trust_remote_code is enabled. This is dangerous.

    bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda116.so
    Loading eugenepentland_WizardLM-7B-Landmark...
    Auto-assiging --gpu-memory 23 for your GPU to try to prevent out-of-memory errors. You can manually set other values.
    Traceback (most recent call last):
      File "/workspace/text-generation-webui/server.py", line 917, in <module>
        shared.model, shared.tokenizer = load_model(shared.model_name)
      File "/workspace/text-generation-webui/modules/models.py", line 186, in load_model
        model = LoaderClass.from_pretrained(checkpoint, **params)
      File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 441, in from_pretrained
        config, kwargs = AutoConfig.from_pretrained(
      File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 919, in from_pretrained
        raise ValueError(
    ValueError: Loading models/eugenepentland_WizardLM-7B-Landmark requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
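One way to narrow down whether the problem is in text-generation-webui or in transformers itself is to load the model directly with `trust_remote_code=True`, outside the web UI. The snippet below is a minimal diagnostic sketch (the local path is an assumption matching the webui's models/ layout); if it raises the same ValueError, the flag isn't reaching `AutoConfig` in your environment.

```python
# Minimal check: load the Landmark model's custom config/code directly
# with transformers, bypassing the web UI. The path is an assumption.
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM

path = "models/eugenepentland_WizardLM-7B-Landmark"

config = AutoConfig.from_pretrained(path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path,
    config=config,
    trust_remote_code=True,   # executes the repo's custom modeling code
    device_map="auto",
)
print(type(model).__name__)
```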
2023-06-11T09:04:50
https://www.reddit.com/r/LocalLLaMA/comments/146p2n4/cant_load_new_landmark_models_in_ooba_complains/
tenmileswide
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146p2n4
false
null
t3_146p2n4
/r/LocalLLaMA/comments/146p2n4/cant_load_new_landmark_models_in_ooba_complains/
false
false
self
2
null
Browsing in Semantic Kernel
5
Hello, I'm trying to teach LLaMA to use a browser. It works up to a point, but LLaMA doesn't understand the content it is looking at. Could you help me with this? [https://gitlab.com/Dave3991/semantickernel/-/blob/master/src/Modules/SemanticKernel/Domain/Skills/BrowserSkill.cs](https://gitlab.com/Dave3991/semantickernel/-/blob/master/src/Modules/SemanticKernel/Domain/Skills/BrowserSkill.cs)
2023-06-11T11:18:08
https://www.reddit.com/r/LocalLLaMA/comments/146rdff/browsing_in_semantic_kernel/
warezak_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146rdff
false
null
t3_146rdff
/r/LocalLLaMA/comments/146rdff/browsing_in_semantic_kernel/
false
false
self
5
{'enabled': False, 'images': [{'id': 'Vw3bBu31aiMSJTxYckiYR6DzhREt281xzPoVcP66tdI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ZquPAU8cZhn0mo04ZhRduLjfjWqLtXZZPPqxxp1O98E.jpg?width=108&crop=smart&auto=webp&s=16ee55e3a0d73da50e8fd27a6ad34f7d093721ba', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ZquPAU8cZhn0mo04ZhRduLjfjWqLtXZZPPqxxp1O98E.jpg?width=216&crop=smart&auto=webp&s=040ae3528fe0a5ed70752c03621d4b32bf01eb9d', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ZquPAU8cZhn0mo04ZhRduLjfjWqLtXZZPPqxxp1O98E.jpg?width=320&crop=smart&auto=webp&s=84d404c7b7863509ba30e98b6b8c71d0bae19920', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ZquPAU8cZhn0mo04ZhRduLjfjWqLtXZZPPqxxp1O98E.jpg?width=640&crop=smart&auto=webp&s=850372ef18372a4dd28079ffd6f4b3bbda3c82c7', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/ZquPAU8cZhn0mo04ZhRduLjfjWqLtXZZPPqxxp1O98E.jpg?width=960&crop=smart&auto=webp&s=83a47304c3cdf62ad5412258c1d5588b790b43e3', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/ZquPAU8cZhn0mo04ZhRduLjfjWqLtXZZPPqxxp1O98E.jpg?auto=webp&s=e2e2a035683b076a2e9c6ed930d86f533801fc00', 'width': 1024}, 'variants': {}}]}
Do we have any sister "subs" on kbin yet?
47
I'd love to use Monday as an opportunity to diversify the platforms this community exists on. Is there anything out there on the fediverse yet?
2023-06-11T13:35:59
https://www.reddit.com/r/LocalLLaMA/comments/146u4en/do_we_have_any_sister_subs_on_kbin_yet/
KindaNeutral
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146u4en
false
null
t3_146u4en
/r/LocalLLaMA/comments/146u4en/do_we_have_any_sister_subs_on_kbin_yet/
false
false
self
47
null
Let's discuss Orca 13B from Microsoft Research, soon to be open sourced.
228
For those of you not familiar with Orca 13B, here's a [video giving an overview of the full paper by AI Explained.](https://www.youtube.com/watch?v=Dt_UNg7Mchg) For those of you who prefer to read the paper yourself, [here's the original paper.](https://arxiv.org/pdf/2306.02707.pdf) Personally, I'm most interested in discussing Microsoft's reasons behind releasing an open source model which in some ways directly competes with OpenAI despite Microsoft's relationship with OpenAI.
2023-06-11T14:18:43
https://www.reddit.com/r/LocalLLaMA/comments/146v3uo/lets_discuss_orca_13b_from_microsoft_research/
Megneous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146v3uo
false
null
t3_146v3uo
/r/LocalLLaMA/comments/146v3uo/lets_discuss_orca_13b_from_microsoft_research/
false
false
self
228
{'enabled': False, 'images': [{'id': 'dHa6Ig99dw9l0QLeB4SnTmnPEzQaAf3CmlTbla4pRPA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/YEWdB9GsX6vvOiF6Y1OIzKRsel2iHAWhx-RvikbFWuI.jpg?width=108&crop=smart&auto=webp&s=4c3815855733b4a54d1b7fc87ee85f2e68ac5f72', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/YEWdB9GsX6vvOiF6Y1OIzKRsel2iHAWhx-RvikbFWuI.jpg?width=216&crop=smart&auto=webp&s=abb5e0484f9482d1e837aecfae7eb882b95276bc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/YEWdB9GsX6vvOiF6Y1OIzKRsel2iHAWhx-RvikbFWuI.jpg?width=320&crop=smart&auto=webp&s=76b1aa26b9a3b9c1d273a2b1e66a75d4506a4423', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/YEWdB9GsX6vvOiF6Y1OIzKRsel2iHAWhx-RvikbFWuI.jpg?auto=webp&s=f6371823c0a4ce7e4e6bae834aada27902217f1b', 'width': 480}, 'variants': {}}]}
PyTorch vs GGML?
10
As GGML models with the same number of parameters are much smaller than PyTorch models, do GGML models have lower quality? Thanks!
2023-06-11T14:39:54
https://www.reddit.com/r/LocalLLaMA/comments/146vlr0/pytorch_vs_ggml/
chocolatebanana136
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146vlr0
false
null
t3_146vlr0
/r/LocalLLaMA/comments/146vlr0/pytorch_vs_ggml/
false
false
self
10
null
Free GPU Options for Llama Model Experimentation
1
[removed]
2023-06-11T16:01:12
[deleted]
2023-06-15T11:44:49
0
{}
146xlbm
false
null
t3_146xlbm
/r/LocalLLaMA/comments/146xlbm/free_gpu_options_for_llama_model_experimentation/
false
false
default
1
null
Upgrading my setup for new features - how?
1
So I followed guides to set up and tweak my text-generation-webui + Wizard Vicuna 13B setup on my M1 MacBook -- now that I've done so, though, I'm not sure what I need to upgrade or change each time I see new models and functionality being released. The main thing is that I saw GPU inference / better handling of the Mac M1 Pro chip being released for Llama - no idea what to change as part of my setup to take advantage of this, or how to go about doing so! Any ELI5-type responses much appreciated, as it's a lot to take in!
2023-06-11T16:47:46
https://www.reddit.com/r/LocalLLaMA/comments/146yqfc/upgrading_my_setup_for_new_features_how/
spoilingba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146yqfc
false
null
t3_146yqfc
/r/LocalLLaMA/comments/146yqfc/upgrading_my_setup_for_new_features_how/
false
false
self
1
null
Run quantized Falcon model l
1
[deleted]
2023-06-11T16:48:08
[deleted]
1970-01-01T00:00:00
0
{}
146yqqs
false
null
t3_146yqqs
/r/LocalLLaMA/comments/146yqqs/run_quantized_falcon_model_l/
false
false
default
1
null
Run quantized Llama and Falcon models without using llama.cpp
12
If you want to run a quantized Llama or Falcon model, you can try this library from OpenNMT: https://opennmt.net/CTranslate2/guides/transformers.html
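As a rough illustration of the workflow, here is a hedged sketch of converting a Hugging Face Llama checkpoint with CTranslate2's converter and then generating from it. The model name, quantization choice, and prompt are placeholders, and the linked guide is the authoritative reference for the exact options.

```python
# Hedged sketch of running a converted model with CTranslate2.
# First convert the HF checkpoint (shell step shown as a comment):
#   ct2-transformers-converter --model openlm-research/open_llama_7b \
#       --quantization int8 --output_dir open_llama_7b_ct2
import ctranslate2
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("openlm-research/open_llama_7b")
generator = ctranslate2.Generator("open_llama_7b_ct2", device="cpu")

prompt = "The capital of France is"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch([tokens], max_length=64, sampling_topk=10)
print(tokenizer.decode(results[0].sequences_ids[0]))
```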
2023-06-11T16:49:47
https://www.reddit.com/r/LocalLLaMA/comments/146ys5t/run_quantized_llama_and_falcon_models_not_using/
Acrobatic-Site2065
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
146ys5t
false
null
t3_146ys5t
/r/LocalLLaMA/comments/146ys5t/run_quantized_llama_and_falcon_models_not_using/
false
false
self
12
null
Clarification on Custom Datasets
1
Hello! I've been running llama.cpp on pre-trained weights and had a few questions about training on my own dataset (for this example, let's say I want specific customer service responses about my product).

1) It seems like the best approach for using custom data is to start from an existing model (like 7B: [Vicuna v1.1](https://huggingface.co/eachadea/ggml-vicuna-7b-1.1)), train a LoRA on my dataset, and then merge the LoRA with the existing trained model? Is that correct? (See the sketch below.)
2) Does anyone have a good resource to learn more about this, or an open source library they would recommend?
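On question 1, the usual pattern with the PEFT library looks roughly like the sketch below: fine-tune a LoRA adapter on your customer-service pairs against a base model, then merge the adapter weights back in. The paths here are placeholders, and note that ggml files like the Vicuna link above are inference-only formats; training is done against the original Hugging Face (PyTorch) weights.

```python
# Hedged sketch: merge a trained LoRA adapter into its base model with PEFT.
# Paths are placeholders; training the adapter itself is a separate step.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "path/to/vicuna-7b-v1.1-hf"    # full HF-format base weights (not ggml)
adapter_path = "./my-customer-service-lora"

base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_path)

merged = model.merge_and_unload()          # folds the LoRA deltas into the base weights
merged.save_pretrained("./vicuna-7b-customer-service-merged")
AutoTokenizer.from_pretrained(base_path).save_pretrained("./vicuna-7b-customer-service-merged")
```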
2023-06-11T18:26:39
https://www.reddit.com/r/LocalLLaMA/comments/1471934/clarification_on_custom_datasets/
sudocaptain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1471934
false
null
t3_1471934
/r/LocalLLaMA/comments/1471934/clarification_on_custom_datasets/
false
false
self
1
{'enabled': False, 'images': [{'id': 'KJJFb_vYzt3LgoIp4piANHHDFm2Fi9VkonZzVdjEgVA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=108&crop=smart&auto=webp&s=8cda09ab5c77cdcf284e7f085d139a72ac86bac3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=216&crop=smart&auto=webp&s=86fde117f5379c7df0dbbc434aaa7c0771a92ce9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=320&crop=smart&auto=webp&s=0e69b61be1f991d8c667291896f9ce716e73023a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=640&crop=smart&auto=webp&s=8bd8cd26283c3e890f7b2e2273e714df2bb45143', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=960&crop=smart&auto=webp&s=afbed4721bff1979ec8adfc199cb0b6021782a46', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?width=1080&crop=smart&auto=webp&s=2ee64a7edaa0c4f582de40db45853578fbf2d312', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/g7nJoY3S0C27JJbAawqryvXNz6ae8_8kOfXK-8UfXGM.jpg?auto=webp&s=fc09d54412f3392d664111a799e57b0f44b048db', 'width': 1200}, 'variants': {}}]}
NF4 inference quantization is awesome: Comparison of answer quality of the same model quantized to INT8, NF4, q2_k, q3_km, q3_kl, q4_0, q8_0
30
I've created embeddings of a bunch of Linux man pages, and been using Wizard-Vicuna-Uncensored to see how well it can answer questions based on the info in the man pages. Almost immediately I found a question that can be used as a good illustration of the bits-per-parameter/answer quality correlation.

All model variations were asked the question `How can I make SSH act as a local SOCKS server?`. Temperature was set to 0. All runs were on Ubuntu 22.04 with a single RTX 4090 GPU and 64GB of RAM. `INT8` and `NF4` (what's used in QLoRA) refer to [transformers](https://huggingface.co/docs/transformers/main/main_classes/quantization) quantization methods, see [load_in_8bit](https://huggingface.co/docs/transformers/main/main_classes/quantization#load-a-large-model-in-8bit) and [load_in_4bit](https://huggingface.co/docs/transformers/main/main_classes/quantization#using-nf4-normal-float-4-data-type).

| Model Name | Quantization | Answer | Tokens/sec |
|-------------------------|--------------|--------|------------|
| Wizard-Vicuna-Uncen-13B | INT8 | Wrong, suggests using `-L` flag | 19.3 |
| Wizard-Vicuna-Uncen-13B | NF4 | **Right**, pretty detailed, but the wording is "clumsy" | 13.28 |
| Wizard-Vicuna-Uncen-13B | ggml.q8_0 | Wrong, suggests using `-R` flag | 20.80 |
| Wizard-Vicuna-Uncen-30B | ggml.q2_K | Wrong, suggests using `-R` flag | 10.72 |
| Wizard-Vicuna-Uncen-30B | ggml.q3_K_M | **Right**, detailed | 11.64 |
| Wizard-Vicuna-Uncen-30B | ggml.q3_K_L | Wrong (sic!), suggests `-D` option, but describes `-R` option | 10.02 |
| Wizard-Vicuna-Uncen-30B | ggml.q4_0 | **Right**, detailed, good wording | 10.02 |
| Wizard-Vicuna-Uncen-30B | NF4 | **Right**, concise | 4.67 |

(Here is the [JSON](https://gist.github.com/epicfilemcnulty/e3cb6ab511b11ad46c829fe09da0b739) with the actual answers)

Conclusions:

* For *13B*, there is quite a gap in quality between 8-bit and what **NF4** of the same model gives you (which, supposedly, should be 99% as accurate as FP16). And VRAM usage is reduced compared to 8-bit. But inference is slower, and you have to keep the model's weights in **FP16**.
* *30B* `q2_k` roughly compares to 13B `q8_0`, and is thus, I'd say, pretty useless.
* *30B* `q3_whatever` is kind of a lottery.
* *30B* `q4_0` and above is sweet. But even if you go all the way up to `q8_0` with 30B (which won't fit on a single 24GB GPU, so you'll have to offload some layers to CPU and tokens/sec are going to be terrible), ~~*30B* **NF4** will give you more accurate results than 30B `q8_0`~~.

**UPDATE:** I was going to re-do the test with more samples, but I realized that the initial test itself was flawed: it was based on the embeddings of Linux man pages, but the relevant part of the SSH man page was never provided to the model as part of the context, so the right answer should've been "it is impossible to say given the context". On a bunch of questions without embeddings I did not get a single answer from the NF4 version that I could rate as being better than the q8_0 version. So the statement `30B NF4 will give you more accurate results than 30B q8_0` was most definitely wrong. The accuracy of NF4 is somewhere in between q4_0 and q8_0 =) To say for sure where exactly, one needs to do a proper test, with a decent number of samples and a reliable rating system.
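For reference, a minimal sketch of the NF4 load used for runs like these might look like the following (the model id is a placeholder); the INT8 variant just swaps the quantization config for `load_in_8bit=True`.

```python
# Hedged sketch: load a model in NF4 (the QLoRA data type) with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ehartford/Wizard-Vicuna-13B-Uncensored"   # placeholder repo id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,   # weights stay 4-bit, compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```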
2023-06-11T18:26:54
https://www.reddit.com/r/LocalLLaMA/comments/14719ch/nf4_inference_quantization_is_awesome_comparison/
epicfilemcnulty
self.LocalLLaMA
2023-06-17T21:31:40
0
{}
14719ch
false
null
t3_14719ch
/r/LocalLLaMA/comments/14719ch/nf4_inference_quantization_is_awesome_comparison/
false
false
self
30
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]}
PSA: We'll lose API access to Reddit at the end of this month so start scraping if you want to train models with Reddit data
222
I'll [post my notebooks here](https://github.com/ragnarkar/reddit_imitator/tree/main/Reddit_download_notebooks) (though they're more for my own project of [building imitators for various subreddits](https://github.com/ragnarkar/reddit_imitator))
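For anyone who wants a quick start before the deadline, a minimal dump of a subreddit's top posts with PRAW might look like the sketch below; you still need your own client ID/secret from Reddit's app settings, and the field selection is just an example.

```python
# Hedged sketch: dump top posts from a subreddit to JSONL with PRAW.
# Requires your own API credentials while they still work.
import json
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="dataset-scraper by u/your_username",
)

with open("localllama_top.jsonl", "w") as f:
    for post in reddit.subreddit("LocalLLaMA").top(limit=1000, time_filter="all"):
        f.write(json.dumps({
            "title": post.title,
            "selftext": post.selftext,
            "score": post.score,
            "permalink": post.permalink,
        }) + "\n")
```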
2023-06-11T21:08:09
https://www.reddit.com/r/LocalLLaMA/comments/1475f1s/psa_well_lose_api_access_to_reddit_at_the_end_of/
ragnarkar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1475f1s
false
null
t3_1475f1s
/r/LocalLLaMA/comments/1475f1s/psa_well_lose_api_access_to_reddit_at_the_end_of/
false
false
self
222
{'enabled': False, 'images': [{'id': '9gWy2t8pFuIOFSUBcEsABgYaA4i0PAuqEbdzff_Bsw4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/u5UGpy7IfABYTJU_BzZKJLwVKq_GXH0YRkECcqaS5Bc.jpg?width=108&crop=smart&auto=webp&s=220dc447022d07a1e5b35c33b41ded5d8686223e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/u5UGpy7IfABYTJU_BzZKJLwVKq_GXH0YRkECcqaS5Bc.jpg?width=216&crop=smart&auto=webp&s=18d6e8e4bf9160a42bbc9a4eac30a58fa2501028', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/u5UGpy7IfABYTJU_BzZKJLwVKq_GXH0YRkECcqaS5Bc.jpg?width=320&crop=smart&auto=webp&s=3653fc11b5b5be628c873e9db6b03bb77e8ccfa3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/u5UGpy7IfABYTJU_BzZKJLwVKq_GXH0YRkECcqaS5Bc.jpg?width=640&crop=smart&auto=webp&s=e29d976b4be2931cfa544e2d585e438b52b9a94d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/u5UGpy7IfABYTJU_BzZKJLwVKq_GXH0YRkECcqaS5Bc.jpg?width=960&crop=smart&auto=webp&s=535fe79055478849ac15eff661bb3a86e421f359', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/u5UGpy7IfABYTJU_BzZKJLwVKq_GXH0YRkECcqaS5Bc.jpg?width=1080&crop=smart&auto=webp&s=1065546516021a571c57e92ea4e63f82b756b2ff', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/u5UGpy7IfABYTJU_BzZKJLwVKq_GXH0YRkECcqaS5Bc.jpg?auto=webp&s=8475881bd15e6b8183ae8b4a53b6a1fa95ae2cb9', 'width': 1200}, 'variants': {}}]}
Is there an interface for CYOA
2
Is there an interface for CYOA? Like, you ask a model to ask you questions, it generates clickable suggestions, and you can either generate more suggestions or type something yourself...
2023-06-11T21:10:49
https://www.reddit.com/r/LocalLLaMA/comments/1475hjm/is_there_an_interface_for_cyoa/
Ok--Reflection
self.LocalLLaMA
2023-06-11T21:35:51
0
{}
1475hjm
false
null
t3_1475hjm
/r/LocalLLaMA/comments/1475hjm/is_there_an_interface_for_cyoa/
false
false
self
2
null
Better inference on 3090 than A100
4
I am running the 30B parameter model with 4-bit quantization. When tested, I get a slightly lower inference speed on the 3090 compared to the A100. Is this normal?
2023-06-11T21:58:37
https://www.reddit.com/r/LocalLLaMA/comments/1476pco/better_inference_on_3090_than_a100/
l_Teto_l
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1476pco
false
null
t3_1476pco
/r/LocalLLaMA/comments/1476pco/better_inference_on_3090_than_a100/
false
false
self
4
null
PrivateGPT on RPi?
2
Seems like I need an Arch based system, so Raspbian won’t work. Any idea what I can run on a RaspberryPi to install PrivateGPT?
2023-06-11T22:24:31
https://www.reddit.com/r/LocalLLaMA/comments/1477cnz/privategpt_on_rpi/
jameshayek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1477cnz
false
null
t3_1477cnz
/r/LocalLLaMA/comments/1477cnz/privategpt_on_rpi/
false
false
self
2
null
Train a local model to select from a set of pre-defined responses to customer questions?
4
I’ve been interested in using LLMs for a while to speed up customer service, but I don’t need a full scale chatbot. 95% of our customers questions by email are essentially all the same, and we have a bunch of predefined responses to them. So basically I need a bot smart enough to understand the question, and select the correct template. For example: customer: hey bro can you do 100 response: we don’t take offers Customer: hey I put the wrong address can you change it Response: no, we need to cancel it, you can reorder it We might handle situations slightly different than someone else, so I specifically want to train it to pick the responses that *we* would pick. Is this something achievable yet with local models? Specifically that you can train it to determine which (if any) responses from a pre-defined list is appropriate? (I’ll probably be using Microsoft power automate to actually link this all together to email and APIs for handling orders etc, but that’s a question for another day. I had initially tried their own in-line bot but it can only take in very short prompts and is still general purpose and can’t be custom trained.)
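One lightweight way to do this locally, before reaching for a full LLM, is embedding similarity: embed a few trigger examples for each template and pick the closest one for an incoming email, falling back to a human when the score is low. The sketch below uses sentence-transformers as an illustration; the model name and threshold are assumptions you would tune on your own past emails.

```python
# Hedged sketch: route incoming emails to predefined responses by
# embedding similarity, with a fallback when nothing matches well.
from sentence_transformers import SentenceTransformer, util

templates = {
    "we don't take offers": ["hey bro can you do 100", "would you take 80 for it"],
    "no, we need to cancel it, you can reorder it": ["I put the wrong address can you change it"],
}

model = SentenceTransformer("all-MiniLM-L6-v2")
labels, examples = [], []
for response, triggers in templates.items():
    for t in triggers:
        labels.append(response)
        examples.append(t)
example_emb = model.encode(examples, convert_to_tensor=True)

def route(email: str, threshold: float = 0.5):
    query = model.encode(email, convert_to_tensor=True)
    scores = util.cos_sim(query, example_emb)[0]
    best = int(scores.argmax())
    if float(scores[best]) < threshold:
        return None  # hand off to a human
    return labels[best]

print(route("can you do 90 on this?"))
```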
2023-06-11T22:36:11
https://www.reddit.com/r/LocalLLaMA/comments/1477mzu/train_a_local_model_to_select_from_a_set_of/
Darius510
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1477mzu
false
null
t3_1477mzu
/r/LocalLLaMA/comments/1477mzu/train_a_local_model_to_select_from_a_set_of/
false
false
self
4
null
Simple LLM Watermarking - Open Llama 3B LoRA
4
2023-06-11T22:54:30
https://huggingface.co/UnstableLlama/alpaca_lora_open_llama_3b_watermarked
Unstable_Llama
huggingface.co
1970-01-01T00:00:00
0
{}
14782fp
false
null
t3_14782fp
/r/LocalLLaMA/comments/14782fp/simple_llm_watermarking_open_lllama_3b_lora/
false
false
https://b.thumbs.redditm…bE-V4RrSuBCk.jpg
4
{'enabled': False, 'images': [{'id': 'Cp87flM0fl8HzqJ6fjzXfWaxoZXpZ_5MOxGB0EL8APE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nAt3fSp0LPRYBO5Sm8f8-psiEfvmHcr8oBARu4aSk20.jpg?width=108&crop=smart&auto=webp&s=d19308333ea24da6585ddd63c4312cdb315bc1e3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nAt3fSp0LPRYBO5Sm8f8-psiEfvmHcr8oBARu4aSk20.jpg?width=216&crop=smart&auto=webp&s=b5fd9a23c89f44f220826884c413a953bffd432c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nAt3fSp0LPRYBO5Sm8f8-psiEfvmHcr8oBARu4aSk20.jpg?width=320&crop=smart&auto=webp&s=2c709d9f3571ea99a8c94a30e8ff1e6022f5186b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nAt3fSp0LPRYBO5Sm8f8-psiEfvmHcr8oBARu4aSk20.jpg?width=640&crop=smart&auto=webp&s=ca699136f324f360223b61a8ac559c30dbf89fa0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nAt3fSp0LPRYBO5Sm8f8-psiEfvmHcr8oBARu4aSk20.jpg?width=960&crop=smart&auto=webp&s=de312a0544f166a16dc5609dc7e834576012a2a9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nAt3fSp0LPRYBO5Sm8f8-psiEfvmHcr8oBARu4aSk20.jpg?width=1080&crop=smart&auto=webp&s=f1f24a14f1825f164f7c2f213f43d9c46e7fbe8d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nAt3fSp0LPRYBO5Sm8f8-psiEfvmHcr8oBARu4aSk20.jpg?auto=webp&s=ae3ecbf1f1c4c32e2afad5e116d4f8f77a48620a', 'width': 1200}, 'variants': {}}]}
Are there benchmarks out there for comparing hardware?
3
I’m currently deciding what I want to use for a home setup. Are there any benchmark suites out there designed for ML/LLMs? Also does anyone have any resources on what to measure? Thinking of making my own cross-platform benchmarking tool, but I’d like to see if that’s redundant, and if not what I need to measure to make it useful. This is more for speed/capabilities between hardware and models. I’d be interested in reading research about determining the usefulness of models, but I’m not looking to create anything that does that.
2023-06-11T23:48:34
https://www.reddit.com/r/LocalLLaMA/comments/1479aa0/are_there_benchmarks_out_there_for_comparing/
soleblaze
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1479aa0
false
null
t3_1479aa0
/r/LocalLLaMA/comments/1479aa0/are_there_benchmarks_out_there_for_comparing/
false
false
self
3
null
I have tried various different methods to install, and none work. Can you spoon-feed me how?
4
[removed]
2023-06-12T00:59:35
https://www.reddit.com/r/LocalLLaMA/comments/147as0q/i_have_tried_various_different_methods_to_install/
devshore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147as0q
false
null
t3_147as0q
/r/LocalLLaMA/comments/147as0q/i_have_tried_various_different_methods_to_install/
false
false
default
4
null
Nous Hermes 13b is very good.
92
It doesn't get talked about very much in this subreddit so I wanted to bring some more attention to Nous Hermes. It tops most of the 13b models in most benchmarks I've seen it in ([here's a compilation of llm benchmarks](https://www.reddit.com/r/LocalLLaMA/comments/144rg6a/all_model_leaderboards_that_i_know/) by u/YearZero). In my own (very informal) testing I've found it to be a better all-rounder and make less mistakes than my previous favorites, which include airoboros, wizardlm 1.0, vicuna 1.1, and a few of their variants. Find ggml/gptq/etc versions here: [https://huggingface.co/models?search=nous-hermes](https://huggingface.co/models?search=nous-hermes)
2023-06-12T02:50:34
https://www.reddit.com/r/LocalLLaMA/comments/147d0vn/nous_hermes_13b_is_very_good/
lemon07r
self.LocalLLaMA
2023-06-12T03:33:57
0
{}
147d0vn
false
null
t3_147d0vn
/r/LocalLLaMA/comments/147d0vn/nous_hermes_13b_is_very_good/
false
false
self
92
{'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=108&crop=smart&auto=webp&s=284ee86cd9228390268ace75b44e497c1fec562f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=216&crop=smart&auto=webp&s=96628b1c155401ce2d04a853b6524fa0c95cd632', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=320&crop=smart&auto=webp&s=f5f435bb4d31f0f695560cb0fb6f456702452062', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=640&crop=smart&auto=webp&s=b8b6a03fcde27061acee8ab4cb6ecc598a7ac6b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=960&crop=smart&auto=webp&s=bbda73bd4f11be7b71efb3892b4107414d815613', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=1080&crop=smart&auto=webp&s=0158100ff6f9041cc8dcb861b66a3db041df5095', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?auto=webp&s=daff0272548bd7ffe5bc2b1eff6cd5c752144ed4', 'width': 1200}, 'variants': {}}]}
Using AI agents to create better datasets?
0
Has anyone tried using ai agents like superagi to create better datasets for training and finetuning? Would this even be possible or even useful?
2023-06-12T03:03:44
https://www.reddit.com/r/LocalLLaMA/comments/147daa2/using_ai_agents_to_create_better_datasets/
Nousfeed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147daa2
false
null
t3_147daa2
/r/LocalLLaMA/comments/147daa2/using_ai_agents_to_create_better_datasets/
false
false
self
0
null
Best model for writing erotica?
52
Finally figured out how to run it on my setup and interested in testing it out. Which model is best for erotica right now?
2023-06-12T03:42:16
https://www.reddit.com/r/LocalLLaMA/comments/147e00j/best_model_for_writing_erotica/
ricketpipe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147e00j
false
null
t3_147e00j
/r/LocalLLaMA/comments/147e00j/best_model_for_writing_erotica/
false
false
nsfw
52
null
65B the ultimate tutorial for use with llama.cpp... cannot be found by me.
4
I have a 3090, and um... (looks around) 24 GB of RAM (running Oobabooga in a QEMU/KVM VM using GPU passthrough with Linux on both ends). I can get 3 words a minute (!) when trying to load TheBloke_guanaco-65B-GGML-4_0. Exciting stuff. I have used the following settings in Oobabooga: threads: 20, n_batch: 512, n-gpu-layers: 100, n_ctx: 1024. But these numbers are shots in the dark. I checked llama.cpp, and there was nothing to be found about loading different model sizes. [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp) It looked like Oobabooga says I have to compile llama.cpp to use my GPU, but it offers me the slider anyway, so that's confusing: [https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md#gpu-acceleration](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md#gpu-acceleration) Please point me to any tutorials on using llama.cpp with Oobabooga, or good search terms, or your settings... or a wizard in a funny hat that can just make it work. Any help appreciated. EDIT: 64 GB of RAM sped things right up… running a model from your disk is tragic
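For what it's worth, the webui's llama.cpp sliders map onto llama-cpp-python roughly like the sketch below; `n_gpu_layers` only does anything if the package was built with cuBLAS support, which is why the docs talk about recompiling. The layer count and filename here are assumptions for a 65B q4_0 file on a 24 GB card, not tested values.

```python
# Hedged sketch: load a 65B GGML file with partial GPU offload via
# llama-cpp-python (requires a cuBLAS-enabled build of the package).
from llama_cpp import Llama

llm = Llama(
    model_path="models/guanaco-65B.ggmlv3.q4_0.bin",  # placeholder filename
    n_ctx=2048,
    n_threads=20,
    n_batch=512,
    n_gpu_layers=40,   # roughly what fits in 24 GB for a 65B q4_0; tune up/down
)

out = llm("### Human: Say hello.\n### Assistant:", max_tokens=64)
print(out["choices"][0]["text"])
```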
2023-06-12T03:45:42
https://www.reddit.com/r/LocalLLaMA/comments/147e2bj/65b_the_ultimate_tutorial_for_use_with_llamacpp/
silenceimpaired
self.LocalLLaMA
2023-06-20T21:30:15
0
{}
147e2bj
false
null
t3_147e2bj
/r/LocalLLaMA/comments/147e2bj/65b_the_ultimate_tutorial_for_use_with_llamacpp/
false
false
self
4
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]}
Anyone know why this unfiltered/unaligned model is so big?
1
[removed]
2023-06-12T04:26:09
https://www.reddit.com/r/LocalLLaMA/comments/147et4k/anyone_know_why_this_unfilteredunaligned_model_is/
Skin_Chemist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147et4k
false
null
t3_147et4k
/r/LocalLLaMA/comments/147et4k/anyone_know_why_this_unfilteredunaligned_model_is/
false
false
default
1
null
Random conversational deterioration with Wizard-Vicuna-7B-uncensored.ggmlv3.q4_0.bin?
6
Using this model, the conversations quickly devolve into statements from llama.cpp like: > I regret that our conversation has deteriorated into this level of hostility. As I said earlier, I am open to discussing these issues further in a civil and constructive manner if you are willing to engage in such a dialogue. I am also accused of using insults and pejoratives when I ask for something as simple as the definition of a "dad". Anyone else experience this? I tried restarting and it seemed to not help.
2023-06-12T05:16:09
https://www.reddit.com/r/LocalLLaMA/comments/147foc3/random_conversational_deterioration_with/
AlexKingstonsGigolo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147foc3
false
null
t3_147foc3
/r/LocalLLaMA/comments/147foc3/random_conversational_deterioration_with/
false
false
self
6
null
It was only a matter of time.
818
OpenAI is now primarily focused on being a business entity rather than truly ensuring that artificial general intelligence benefits all of humanity. While they claim to support startups, their support seems contingent on those startups not being able to compete with them. This situation has arisen due to papers like Orca, which demonstrate comparable capabilities to ChatGPT at a fraction of the cost and potentially accessible to a wider audience. It is noteworthy that OpenAI has built its products using research, open-source tools, and public datasets.
2023-06-12T05:17:37
https://i.redd.it/qisxmlxaui5b1.jpg
onil_gova
i.redd.it
1970-01-01T00:00:00
0
{}
147fp7z
false
null
t3_147fp7z
/r/LocalLLaMA/comments/147fp7z/it_was_only_a_matter_of_time/
false
false
https://b.thumbs.redditm…iqk2Aa6vsXdg.jpg
818
{'enabled': True, 'images': [{'id': '478GPI_ImwnnBg1nl99VJNFw3U73xs0OBlOtfNAs-KY', 'resolutions': [{'height': 176, 'url': 'https://preview.redd.it/qisxmlxaui5b1.jpg?width=108&crop=smart&auto=webp&s=ebd50ea83252f4183328fb25efdfdf28f5eb926a', 'width': 108}, {'height': 353, 'url': 'https://preview.redd.it/qisxmlxaui5b1.jpg?width=216&crop=smart&auto=webp&s=11a3185c2c97df5aec23f667e87b5aee9aa4accf', 'width': 216}, {'height': 523, 'url': 'https://preview.redd.it/qisxmlxaui5b1.jpg?width=320&crop=smart&auto=webp&s=11c4426a46e6db6923e8da6ebfcbb2c33143d372', 'width': 320}, {'height': 1047, 'url': 'https://preview.redd.it/qisxmlxaui5b1.jpg?width=640&crop=smart&auto=webp&s=75020f4e35b8355c78cd06449038e5e36e7f4dff', 'width': 640}], 'source': {'height': 1479, 'url': 'https://preview.redd.it/qisxmlxaui5b1.jpg?auto=webp&s=981c487f508f15ef9ec830739f922b6e929e9d39', 'width': 904}, 'variants': {}}]}
Text generation is extremely slow when the character log hits a certain threshold (oobabooga, --cpu)
4
Pretty much what the title says. I'm running text-generation-webui with --cpu flag with WizardLM-30B-Uncensored-GGML (I have 6gb of VRAM and 128gb of RAM so I figured out leaving it CPU-only should be faster). If I run the model in chat mode, and the character log hits a certain threshhold (\~10kB for me), the subsequent generation is very slow (\~360s) while before that it takes \~30-40s. I quickly ran out of ideas, and there are no issues that refer to that so I figured out I am doing smth wrong here. Some logs: /* Loading a model */ INFO:Loading WizardLM-30B-Uncensored.ggmlv3.q4_0.bin... INFO:llama.cpp weights detected: models/WizardLM-30B-Uncensored.ggmlv3.q4_0.bin INFO:Cache capacity is 0 bytes llama.cpp: loading model from models/WizardLM-30B-Uncensored.ggmlv3.q4_0.bin llama_model_load_internal: format = ggjt v3 (latest) llama_model_load_internal: n_vocab = 32001 llama_model_load_internal: n_ctx = 2048 llama_model_load_internal: n_embd = 6656 llama_model_load_internal: n_mult = 256 llama_model_load_internal: n_head = 52 llama_model_load_internal: n_layer = 60 llama_model_load_internal: n_rot = 128 llama_model_load_internal: ftype = 2 (mostly Q4_0) llama_model_load_internal: n_ff = 17920 llama_model_load_internal: n_parts = 1 llama_model_load_internal: model size = 30B llama_model_load_internal: ggml ctx size = 0.13 MB llama_model_load_internal: mem required = 19756.67 MB (+ 3124.00 MB per state) . llama_init_from_file: kv self size = 3120.00 MB AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | INFO:Loaded the model in 0.29 seconds. /* Instruct mode */ llama_print_timings: load time = 1732.59 ms llama_print_timings: sample time = 3.92 ms / 11 runs ( 0.36 ms per token) llama_print_timings: prompt eval time = 1732.55 ms / 9 tokens ( 192.51 ms per token) llama_print_timings: eval time = 5947.06 ms / 10 runs ( 594.71 ms per token) llama_print_timings: total time = 7699.96 ms Output generated in 7.92 seconds (1.26 tokens/s, 10 tokens, context 10, seed 1180324049) /* Chat-instruct mode, Chiharu Yamada */ llama_print_timings: load time = 1732.59 ms llama_print_timings: sample time = 6.28 ms / 17 runs ( 0.37 ms per token) llama_print_timings: prompt eval time = 65955.69 ms / 409 tokens ( 161.26 ms per token) llama_print_timings: eval time = 9782.56 ms / 16 runs ( 611.41 ms per token) llama_print_timings: total time = 75769.95 ms Output generated in 75.99 seconds (0.21 tokens/s, 16 tokens, context 410, seed 1596456226) /* Chat mode, Chiharu Yamada (2.6kB logs) */ Llama.generate: prefix-match hit Output generated in 141.75 seconds (0.82 tokens/s, 116 tokens, context 426, seed 849433334) /* Chat mode, Chiharu Yamada (regenerate) */ Llama.generate: prefix-match hit Output generated in 76.48 seconds (1.53 tokens/s, 117 tokens, context 426, seed 486988653) /* Chat mode, custom character (12.2kB logs) */ Output generated in 343.99 seconds (0.07 tokens/s, 23 tokens, context 1837, seed 1835124699) /* Chat mode, custom character (regenerate) */ Llama.generate: prefix-match hit Output generated in 22.20 seconds (1.31 tokens/s, 29 tokens, context 1837, seed 564248536) Edit: typo
2023-06-12T05:30:32
https://www.reddit.com/r/LocalLLaMA/comments/147fwti/text_generation_is_extremely_slow_when_character/
Grand_Question1069
self.LocalLLaMA
2023-06-12T05:48:14
0
{}
147fwti
false
null
t3_147fwti
/r/LocalLLaMA/comments/147fwti/text_generation_is_extremely_slow_when_character/
false
false
self
4
null
I would like to try my hand at finetuning some models. What is the best way to start? I have some questions that I'd appreciate your help on.
8
Normally, I use models such as Wizard-Vicuna-GGMLv3-q4_1.bin with a UI for everyday inference.

1. If my understanding is correct, these kinds of models are already fine-tuned, correct? So it's best not to fine-tune them again?
2. Should I only start with base models?
3. What models can be fine-tuned using something like QLoRA? (Can the Falcon base model be fine-tuned? OpenAssistant? OpenLLaMA?)
4. What "type" of models can be fine-tuned? Falcon, GGML, GPT4All, GPT-J, GPT-Neo? Are these all simply different encodings that can all be fine-tuned, provided I re-encode them into the appropriate format the fine-tuning library accepts?
5. I believe I read somewhere that only LLaMA models can be fine-tuned using LoRAs; is that true?
6. And finally, what's a good library to start with? I'm looking at the QLoRA library right now. The example fine-tunes `gpt-neox-20b`; I'm wondering if other formats can be used out of the box? (A minimal QLoRA sketch follows below.)
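As a partial answer to 3-6: QLoRA is built on top of transformers, PEFT and bitsandbytes, so most decoder-only Hugging Face models (LLaMA, Falcon, GPT-NeoX, GPT-J, ...) can be targeted, while GGML and GPT4All files are inference formats and can't be trained directly. A hedged sketch of the usual recipe, with dataset handling omitted and all names being placeholders:

```python
# Hedged sketch of the core QLoRA recipe: 4-bit base model + LoRA adapters.
# Dataset preparation and Trainer/TrainingArguments are omitted for brevity.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "openlm-research/open_llama_7b"   # placeholder base model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

model = prepare_model_for_kbit_training(model)
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # typical choice for LLaMA-style blocks
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# ...then train with transformers.Trainer or the qlora repo's scripts.
```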
2023-06-12T05:54:33
https://www.reddit.com/r/LocalLLaMA/comments/147gb5l/i_would_like_to_try_my_hand_at_finetuning_some/
Tasty-Lobster-8915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147gb5l
false
null
t3_147gb5l
/r/LocalLLaMA/comments/147gb5l/i_would_like_to_try_my_hand_at_finetuning_some/
false
false
self
8
null
Issues using CLBlast with KoboldCPP
1
[deleted]
2023-06-12T06:22:01
[deleted]
2023-06-12T06:52:09
0
{}
147gs9b
false
null
t3_147gs9b
/r/LocalLLaMA/comments/147gs9b/issues_using_clblast_with_koboldcpp/
false
false
default
1
null
Ai trained on photos
1
[removed]
2023-06-12T07:06:24
https://www.reddit.com/r/LocalLLaMA/comments/147hi0a/ai_trained_on_photos/
Worth_Discussion_130
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147hi0a
false
null
t3_147hi0a
/r/LocalLLaMA/comments/147hi0a/ai_trained_on_photos/
false
false
default
1
null
The Empire Strikes Back. Or so, it thinks
177
OpenAI, Google, Anthropic, none of them want anyone to use datasets distilled from their models to fine-tune competitive models. Doing nothing means that the open source progress will come to a halt. **We need to join forces and start to manually create community driven datasets** that we can use for instruct fine-tuning. Databricks did this with the dolly v2 dataset by involving their employees. **We can do it at scale.** This will allow us together to build fully open source datasets that people can then use to fine-tune base models and create instruct models that can compete, and maybe even outperform the closed source ones. With the right tooling, we would all submit instructions, peer review and approve the instructions, de-duplicate and release datasets that are not subject to any TOS or policies of Big Tech. If everyone puts in **1 instruction**, we will have more than what we need. Together we can pull this off and give them something to worry about. A strong, collaborative open source movement is the only way to **democratize AI** and make sure that control doesn't slip into the hands of 3-4 corporations. They are already calling for regulation to close the gates for everyone else. **The time to act and be a pioneer is now!**
2023-06-12T07:47:36
https://www.reddit.com/r/LocalLLaMA/comments/147i5ee/the_empire_strikes_back_or_so_it_thinks/
Test-Elegant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147i5ee
false
null
t3_147i5ee
/r/LocalLLaMA/comments/147i5ee/the_empire_strikes_back_or_so_it_thinks/
false
false
self
177
null
Try chatting with fine-tuned models for Falcon-7B, Falcon-40B, and the new Open-Llama-7B
39
2023-06-12T07:57:58
https://gpt-gm.h2o.ai/
ichiichisan
gpt-gm.h2o.ai
1970-01-01T00:00:00
0
{}
147ib63
false
null
t3_147ib63
/r/LocalLLaMA/comments/147ib63/try_chatting_with_finetuned_models_for_falcon7b/
false
false
https://b.thumbs.redditm…KrzYXP1lUAcc.jpg
39
{'enabled': False, 'images': [{'id': 'ZIMCcSj-3WqllRvS9Fja1Z6BxVgcpscjASFHFeakGRY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ED6VkM_xIEGeRO3il4PqDGpet8CbkYzFK-AkT3fqj24.jpg?width=108&crop=smart&auto=webp&s=a3718539cf4394164b9fc402be5919d235d05d7c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ED6VkM_xIEGeRO3il4PqDGpet8CbkYzFK-AkT3fqj24.jpg?width=216&crop=smart&auto=webp&s=fd5b8cd1a04b354a474c2906020976a76ed25b0b', 'width': 216}], 'source': {'height': 250, 'url': 'https://external-preview.redd.it/ED6VkM_xIEGeRO3il4PqDGpet8CbkYzFK-AkT3fqj24.jpg?auto=webp&s=9839e19cf0bd59bceaebeb172914268280ebc23b', 'width': 250}, 'variants': {}}]}
Question about Local LLM processing, regarding interactive chatbot usage, possibly accelerating it.
0
[removed]
2023-06-12T09:34:08
https://www.reddit.com/r/LocalLLaMA/comments/147jta4/question_about_local_llm_processing_regarding/
No_Opposite_4334
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147jta4
false
null
t3_147jta4
/r/LocalLLaMA/comments/147jta4/question_about_local_llm_processing_regarding/
false
false
default
0
null
What's the current SOTA for local NMT (Neural Machine Translation)?
2
I've seen so many demos over the months and years, but nothing that I can actually run locally right now and get good quality results. There's Marian/FFTranslations but it still seems to be very poor quality, there's Google/Facebook's papers which make very bold claims but I haven't seen their models actually released anywhere, and there's DeepL, but it's cloud based and proprietary. I've had people tell me, "oh, foobar-13B should be able to do that," but I've never seen one that actually performs well. Is there _anything_ that I can run locally and get high-quality translations from yet?
2023-06-12T10:07:41
https://www.reddit.com/r/LocalLLaMA/comments/147kdi8/whats_the_current_sota_for_local_nmt_neural/
miserlou
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147kdi8
false
null
t3_147kdi8
/r/LocalLLaMA/comments/147kdi8/whats_the_current_sota_for_local_nmt_neural/
false
false
self
2
null
Which of the best uncensored (free-speech) LLM models should we download before it's too late?
124
[deleted]
2023-06-12T11:20:17
[deleted]
2023-06-14T08:28:49
0
{}
147lmku
false
null
t3_147lmku
/r/LocalLLaMA/comments/147lmku/which_best_uncensored_freespeech_llm_models/
false
false
default
124
null
How to keep track of all the LLMs out there?
34
Hi, I'm supposed to be the NLP "expert" at work, but I am so overwhelmed by the LLM scene right now, with new ones popping up every day. Is there an easy way to keep track of all the LLMs out there? This includes how to download each model, how to use it both programmatically and from a UI, what type of model it is, etc. Thanks.
2023-06-12T11:58:37
https://www.reddit.com/r/LocalLLaMA/comments/147mbr6/how_to_keep_track_of_all_the_llms_out_there/
learning_agent
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147mbr6
false
null
t3_147mbr6
/r/LocalLLaMA/comments/147mbr6/how_to_keep_track_of_all_the_llms_out_there/
false
false
self
34
null
How to add GPU support to oobabooga?
4
[deleted]
2023-06-12T12:42:37
[deleted]
1970-01-01T00:00:00
0
{}
147n6ti
false
null
t3_147n6ti
/r/LocalLLaMA/comments/147n6ti/how_to_add_gpu_support_to_oobabooga/
false
false
default
4
null
Which one of these models has potential to become sentient?
0
[deleted]
2023-06-12T12:57:08
[deleted]
1970-01-01T00:00:00
0
{}
147nhpn
false
null
t3_147nhpn
/r/LocalLLaMA/comments/147nhpn/which_one_of_these_models_has_potential_to_become/
false
false
default
0
null
Finetuning on multiple GPUs
4
Hi, Does anyone have a working example for finetuning LLaMa or Falcon on multiple GPUs? If it also has QLoRA that would be the best but afaik it's [not implemented in bitsandbytes yet](https://github.com/TimDettmers/bitsandbytes/issues/366)?
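One approach that works without QLoRA is plain data-parallel fine-tuning through torchrun: transformers' Trainer should wrap the model in DDP automatically when launched with multiple processes. The sketch below is a skeleton only (dataset and collator omitted, model name a placeholder); launch it with `torchrun --nproc_per_node=2 train.py`.

```python
# Hedged skeleton: data-parallel fine-tuning across GPUs with torchrun + Trainer.
# Launch: torchrun --nproc_per_node=2 train.py
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base_id = "openlm-research/open_llama_7b"   # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    bf16=True,                 # assumes Ampere or newer GPUs
    logging_steps=10,
)

train_dataset = ...   # your tokenized dataset goes here
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```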
2023-06-12T13:29:39
https://www.reddit.com/r/LocalLLaMA/comments/147o6pb/finetuning_on_multiple_gpus/
Simhallq
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147o6pb
false
null
t3_147o6pb
/r/LocalLLaMA/comments/147o6pb/finetuning_on_multiple_gpus/
false
false
self
4
{'enabled': False, 'images': [{'id': 'X92AqpWm5jfYXXLshVqqMTAbuBIAstNiq5DLgpyT2Vk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?width=108&crop=smart&auto=webp&s=25735138efd4a163368d9e3e1f0a5771c43f7938', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?width=216&crop=smart&auto=webp&s=ecdfe9f7546516bb6cd5463ddb933a318abfc988', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?width=320&crop=smart&auto=webp&s=f652a86d64d4ed0ab656c3a0eeeef4425aaf66a9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?width=640&crop=smart&auto=webp&s=ce9080c7eb06e27559723d90ce66b4c35c9db040', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?width=960&crop=smart&auto=webp&s=1c189639e25ec5aef03edadcacb176747ec51c42', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?width=1080&crop=smart&auto=webp&s=1227e0c6e69449d2905e1ad6201d0331a458aa07', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/w4BRBKf_FOe3S5uIwicUJzK6qv2zs-v0RaH39_CfiJs.jpg?auto=webp&s=8cbd2d269894c50888a9b60253d74e2b4a8e24e7', 'width': 1200}, 'variants': {}}]}
Web-ui vs API
1
When I call the API using Postman, I don't get the same results as when I use the Web-UI. How can I replicate the same parameters that the Web-UI is using? Here is a sample of what I'm sending; it just rewrites my question instead of giving me a story:

    {
        "prompt": "write a short story about a bear and a dog that became friends",
        "max_new_tokens": 250,
        "do_sample": true,
        "temperature": 1.3,
        "top_p": 0.1,
        "typical_p": 1,
        "epsilon_cutoff": 0,
        "eta_cutoff": 0,
        "tfs": 1,
        "top_a": 0,
        "repetition_penalty": 1.18,
        "top_k": 40,
        "min_length": 0,
        "no_repeat_ngram_size": 0,
        "num_beams": 1,
        "penalty_alpha": 0,
        "length_penalty": 1,
        "early_stopping": false,
        "mirostat_mode": 0,
        "mirostat_tau": 5,
        "mirostat_eta": 0.1,
        "seed": -1,
        "add_bos_token": true,
        "truncation_length": 2048,
        "ban_eos_token": false,
        "skip_special_tokens": true,
        "stopping_strings": []
    }
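For comparison, here is a hedged example of hitting the webui's blocking API from Python instead of Postman. The port and `/api/v1/generate` route assume the built-in `api` extension is enabled with default settings, and in cases like this the difference from the UI usually comes down to the prompt template (instruct wrapping) rather than the sampling parameters themselves.

```python
# Hedged example: call text-generation-webui's API extension directly.
# Assumes the server was started with --api and the default port 5000.
import requests

payload = {
    # Wrap the request in the model's instruct template; a bare question is
    # often just "continued" (rewritten) instead of answered.
    "prompt": ("### Instruction:\nWrite a short story about a bear and a dog "
               "that became friends.\n\n### Response:\n"),
    "max_new_tokens": 250,
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.9,
    "repetition_penalty": 1.18,
}

r = requests.post("http://127.0.0.1:5000/api/v1/generate", json=payload, timeout=300)
print(r.json()["results"][0]["text"])
```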
2023-06-12T13:47:04
https://www.reddit.com/r/LocalLLaMA/comments/147oklj/webui_vs_api/
igorbirman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147oklj
false
null
t3_147oklj
/r/LocalLLaMA/comments/147oklj/webui_vs_api/
false
false
self
1
null
what is the next step after LLM? when? how? why? required?
0
[removed]
2023-06-12T15:43:26
https://www.reddit.com/r/LocalLLaMA/comments/147q46o/what_is_the_next_step_after_llm_when_how_why/
Sofronyami
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147q46o
false
null
t3_147q46o
/r/LocalLLaMA/comments/147q46o/what_is_the_next_step_after_llm_when_how_why/
false
false
default
0
null
What are the best open-source LLMs for financial NLP tasks?
0
2023-06-12T16:17:12
https://www.reddit.com/r/LocalLLaMA/comments/147qw2p/what_are_the_best_open_sourced_llms_for_financial/
Zine47X
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147qw2p
false
null
t3_147qw2p
/r/LocalLLaMA/comments/147qw2p/what_are_the_best_open_sourced_llms_for_financial/
false
false
default
0
null
Finetuning using Google Colab (Free Tier)
0
I wanted to finetune any of the open-source LLMs using the free Google Colab runtime instances. Is there any setup that works out the best? If so, could you please share them? I was trying to use LoRA adaptors on the free google colab but I ran out of RAM and am unable to proceed.
2023-06-12T16:32:12
https://www.reddit.com/r/LocalLLaMA/comments/147r92h/finetuning_using_google_colab_free_tier/
garamkarakchai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147r92h
false
null
t3_147r92h
/r/LocalLLaMA/comments/147r92h/finetuning_using_google_colab_free_tier/
false
false
self
0
null
MacBook Air 15 inch capabilities?
0
[removed]
2023-06-12T16:41:29
https://www.reddit.com/r/LocalLLaMA/comments/147rh8d/macbook_air_15_inch_capabilities/
Necessary_Ad_9800
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147rh8d
false
null
t3_147rh8d
/r/LocalLLaMA/comments/147rh8d/macbook_air_15_inch_capabilities/
false
false
default
0
null
Long-term memory management for Llama model.
9
I am trying to build long-term memory management for a Llama model, but I am not getting anywhere. So I am here to ask: is there an existing prompt or long-term-memory Llama system? My prompt so far:

```
f"""### Instruction:
You are memAI, an AI with extended memory capabilities. You can "essentially" talk to your memory. In reality the memory is just a command parser that at the start provides you a question you will need to answer. There is a question below and you will need to answer it. All of your responses will be parsed, so make sure they comply with the commands listed below.

Available commands:
1. Name: 'List files', Description: 'Returns currently available .txt files', Usage: 'list_files'
2. Name: 'Read file', Description: 'Returns content of a specified .txt file', Usage: 'read_file example.txt'
3. Name: 'Final answer', Description: 'Ends the answer refining loop', Usage: 'final_answer example answer'

Now you should understand what to do. Just keep in mind that if your response is not an available command your message will be invalid, so please respond with the commands and with the commands only.
{chat_history}
Respond only with one of the available commands:

### Input:
memory: {user_input}

### Response:
memAI: """
```
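To close the loop on the "memory is just a command parser" idea, a hedged sketch of the driver around that prompt could look like the following; `llm` and `build_prompt` are placeholders for whatever local generation call and prompt formatting you use, and the file commands are kept deliberately simple.

```python
# Hedged sketch of the command-parser loop the prompt above talks to.
# `llm(prompt) -> str` and `build_prompt(history, user_input) -> str`
# are placeholders for your own local Llama call and prompt template.
import os

MEMORY_DIR = "memory"

def parse_command(reply: str) -> tuple[str, bool]:
    reply = reply.strip()
    if reply == "list_files":
        files = [f for f in os.listdir(MEMORY_DIR) if f.endswith(".txt")]
        return "\n".join(files), False
    if reply.startswith("read_file "):
        name = reply.split(" ", 1)[1].strip()
        with open(os.path.join(MEMORY_DIR, name)) as f:
            return f.read(), False
    if reply.startswith("final_answer "):
        return reply.split(" ", 1)[1], True
    return "Invalid command. Use list_files, read_file <name>, or final_answer <answer>.", False

def answer(question: str, llm, build_prompt, max_steps: int = 8) -> str:
    chat_history = ""
    user_input = question
    for _ in range(max_steps):
        reply = llm(build_prompt(chat_history, user_input))
        result, done = parse_command(reply)
        if done:
            return result
        chat_history += f"memory: {user_input}\nmemAI: {reply}\n"
        user_input = result
    return "No final answer reached."
```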
2023-06-12T16:41:58
https://www.reddit.com/r/LocalLLaMA/comments/147rhm5/longterm_memory_management_for_llama_model/
floppapeek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147rhm5
false
null
t3_147rhm5
/r/LocalLLaMA/comments/147rhm5/longterm_memory_management_for_llama_model/
false
false
self
9
null
Best Open Source Model for Therapy?
5
I've been wanting to experiment with some open source models that offer good therapy advice - you know, generally uplifting, it doesn't have to be too scientific, and it should sound more like a real person than GPT does. Has anyone had any experience with models that might be useful?
2023-06-12T18:29:01
https://www.reddit.com/r/LocalLLaMA/comments/147u0a3/best_open_source_model_for_therapy/
robopika
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147u0a3
false
null
t3_147u0a3
/r/LocalLLaMA/comments/147u0a3/best_open_source_model_for_therapy/
false
false
self
5
null
Best or most complete instruction datasets
7
As if we didn't have enough threats from politicians (aka useless people), now the big companies seem to be going against open source AI as well, as usual driven by their unquenchable greed and malice. So, considering that the future looks bleak, what are some of the most complete datasets out there? A while ago I got this one: [https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), since it removed some of the ethics-filter BS that plagues a lot of instruction datasets that use ChatGPT or corporate data. But are there any other complete and interesting datasets that one should save and keep for a rainy day, once censorship lays waste to sites like Hugging Face? I've been using some of this data to train things like NovelAI modules (i.e. LoRAs), so I really want to keep as many interesting instruction datasets as I possibly can (or any other interesting datasets). What suggestions do you have?
2023-06-12T18:35:59
https://www.reddit.com/r/LocalLLaMA/comments/147u6ag/best_or_more_complete_instruction_datasets/
CulturedNiichan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147u6ag
false
null
t3_147u6ag
/r/LocalLLaMA/comments/147u6ag/best_or_more_complete_instruction_datasets/
false
false
self
7
{'enabled': False, 'images': [{'id': 'hCJm1WvoukTm8o3iKxx6PgypOTukUiQ9MSNgq1s3NQE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=108&crop=smart&auto=webp&s=53cfd5649ccabc02caf81c85c0ef6fd93c0d6753', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=216&crop=smart&auto=webp&s=4b2776e4ab9a0394aada31f03054955a7242c6b6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=320&crop=smart&auto=webp&s=5fa1a900b723e80f7b65e561e5028867be4b58c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=640&crop=smart&auto=webp&s=13412c8d161e4a13edf3f7ad8b8750684a005536', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=960&crop=smart&auto=webp&s=f73fac0c06956e47104c1b3c606a3edaf1b1d98f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?width=1080&crop=smart&auto=webp&s=200773d04c8debe3865bdc395a318126791fffde', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZAAFUcL47wkmSmraeP1fxnWAeCEYzHSfny9rWp7caT8.jpg?auto=webp&s=6130b1031b11bc2639db3f24677561e5a4e73b10', 'width': 1200}, 'variants': {}}]}
Can we participate in the Subreddit Blackout?
92
I wonder if the mods are open to the idea of participating in the [subreddit blackout](https://www.google.com/search?q=subreddit+blackout&tbm=nws&sxsrf=APwXEddc6z6xiO5fkre7jgb0sWf38uDTmQ:1686595092461&source=lnt&tbs=sbd:1&sa=X&ved=2ahUKEwjFsPaqsL7_AhXDEFkFHYRaACsQpwV6BAgZEBM&biw=931&bih=568&dpr=1.1) over their api changes as well as accessibility issues for blind users. Apparently over 3k subreddits are participating right now, and growing. Thanks for your consideration!
2023-06-12T18:40:00
https://www.reddit.com/r/LocalLLaMA/comments/147u9k7/can_we_participate_in_the_subredit_blackout/
jl303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147u9k7
false
null
t3_147u9k7
/r/LocalLLaMA/comments/147u9k7/can_we_participate_in_the_subredit_blackout/
false
false
self
92
null
The Safari of Deep Signal Processing: Hyena and Beyond (New Models for Ultra-Long Sequences)
24
2023-06-12T19:29:12
https://hazyresearch.stanford.edu/blog/2023-06-08-hyena-safari
Balance-
hazyresearch.stanford.edu
1970-01-01T00:00:00
0
{}
147vdtb
false
null
t3_147vdtb
/r/LocalLLaMA/comments/147vdtb/the_safari_of_deep_signal_processing_hyena_and/
false
false
https://b.thumbs.redditm…ndo5NeB0Qt2I.jpg
24
{'enabled': False, 'images': [{'id': '-WHgGLJANkDpubg8JwSLJ_kMgGHdyAiWnD4mQMVCLm0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/yIFZCOsTxmZV5L1s0iCvqnvaDHMoCFUEEEsPkVLz1sA.jpg?width=108&crop=smart&auto=webp&s=fd10df8933b1c9751ea7d0fcf20f1e54587a02ce', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/yIFZCOsTxmZV5L1s0iCvqnvaDHMoCFUEEEsPkVLz1sA.jpg?width=216&crop=smart&auto=webp&s=d39d5b78df8653abb63f948d029eb734c0bd0254', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/yIFZCOsTxmZV5L1s0iCvqnvaDHMoCFUEEEsPkVLz1sA.jpg?width=320&crop=smart&auto=webp&s=2a242c8ec32e5e02a650a73b9418c82d2327ac54', 'width': 320}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/yIFZCOsTxmZV5L1s0iCvqnvaDHMoCFUEEEsPkVLz1sA.jpg?auto=webp&s=b3c6e6f793801a135b69d089032f2aee32682946', 'width': 460}, 'variants': {}}]}
Looking to hire an AI Web Developer for a Project
0
[removed]
2023-06-12T20:03:58
https://www.reddit.com/r/LocalLLaMA/comments/147w7um/looking_to_hire_an_ai_web_developer_for_a_project/
Specific_Valuable893
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147w7um
false
null
t3_147w7um
/r/LocalLLaMA/comments/147w7um/looking_to_hire_an_ai_web_developer_for_a_project/
false
false
default
0
null
Advice on training a grammar correction model?
1
Howdy folks, could anyone point me to some resources (or provide some advice) on how to train a local model to transform a sentence from a freeform construction into one that follows a set of specific grammatical rules? Basically, I'd like to convert our code docs into a formal language that can then be analyzed for completeness and correctness. It seems like I should be able to do this... but playing around with RedPajama-Incite-3B, my training gives terrible results. I am pretty new to this, so I'm definitely making a hash of things. Any advice would be much appreciated!
2023-06-12T20:36:54
https://www.reddit.com/r/LocalLLaMA/comments/147wzjn/advice_on_training_a_grammar_correction_model/
shitty_coder_2000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147wzjn
false
null
t3_147wzjn
/r/LocalLLaMA/comments/147wzjn/advice_on_training_a_grammar_correction_model/
false
false
self
1
null
Developer Help
0
[removed]
2023-06-12T20:49:50
https://www.reddit.com/r/LocalLLaMA/comments/147xaaq/developer_help/
Own_Turnip8625
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147xaaq
false
null
t3_147xaaq
/r/LocalLLaMA/comments/147xaaq/developer_help/
false
false
default
0
null
llama.cpp just got full CUDA acceleration, and now it can outperform GPTQ!
397
New PR just added by Johannes Gaessler: [https://github.com/ggerganov/llama.cpp/pull/1827](https://github.com/ggerganov/llama.cpp/pull/1827)

This adds full GPU acceleration to llama.cpp. It is now able to fully offload all inference to the GPU. For the first time ever, this means GGML can now outperform AutoGPTQ and GPTQ-for-LLaMa inference (though it still loses to exllama).

Note: if you test this, be aware that you should now use `--threads 1`, as it's no longer beneficial to use multiple threads; in fact it slows down performance a lot.

# Some initial benchmarks

**With H100 GPU + Intel Xeon Platinum 8480+ CPU:**

* 7B q4\_K\_S:
  * Previous llama.cpp performance: 25.51 tokens/s
  * New PR llama.cpp performance: 60.97 tokens/s
  * = 2.39x
  * AutoGPTQ 4bit performance on this system: 45 tokens/s
* 30B q4\_K\_S:
  * Previous llama.cpp performance: 10.79 tokens/s
  * New PR llama.cpp performance: 18.62 tokens/s
  * = 1.73x
  * AutoGPTQ 4bit performance on the same system: 20.78 tokens/s

**On 4090 GPU + Intel i9-13900K CPU:**

* 7B q4\_K\_S:
  * New llama.cpp performance: 109.29 tokens/s
  * AutoGPTQ CUDA 7B GPTQ 4bit: 98 tokens/s
* 30B q4\_K\_S:
  * New PR llama.cpp performance: 29.11 tokens/s
  * AutoGPTQ CUDA 30B GPTQ 4bit: 35 tokens/s

So on 7B models, GGML is now ahead of AutoGPTQ on both systems I've tested. 30B it's a little behind, but within touching distance. And Johannes says he believes there are even more optimisations he can make in future.

Everything we knew before is changing! Now GGML is both the most flexible/accessible, AND starting to rival the fastest. (For Llama models anyway, which is really going to emphasise how far ahead Llama GGML development is versus the other GGML models, like GPT-J, GPT-NeoX, MPT, StarCoder, etc.)

# Still CPU bottlenecked (for now)

There have been quite a few comments asking why the 4090 beats the H100. It's important to note that the CPU still plays an important part. I believe the reason the 4090 system is winning in the benchmarks above is that the i9-13900K in that system has much higher single-core performance than the server CPU in the H100 system.

When doing these benchmarks, I noted that:

* One CPU core was pegged at 100%
* GPU utilisation was well below 100%:
  * On the H100 with Intel Xeon, max GPU usage was 55%
  * On the 4090 with i9-13900K, max GPU usage was 69%

Therefore the CPU is still an important factor and can limit/bottleneck the GPU. And specifically, it's now the max single-core CPU speed that matters, not the multi-threaded CPU performance like it was previously in llama.cpp. This now matches the behaviour of pytorch/GPTQ inference, where single-core CPU performance is also a bottleneck (though apparently the exllama project has done great work in reducing that dependency for their GPTQ implementation).

Johannes, the developer behind this llama.cpp PR, says he plans to look at further CPU optimisations which might make the CPU less of a bottleneck, and help unlock more of that currently unused portion of the GPU.

# Increased VRAM requirements with the new method

One important thing to note is that these performance improvements have come as a result of putting more on to the GPU, which necessarily also increases VRAM usage. As a result, some people are finding that their preferred quant size will no longer fit in VRAM on their hardware, and that performance is therefore lower.

Johannes has said he plans on making the changes optional:

> I plan to make moving the KV cache to VRAM optional but before I decide on a specific implementation for the user interface I'll need to do some performance testing. If it turns out that the KV cache is always less efficient in terms of t/s per VRAM then I think I'll just extend the logic for --n-gpu-layers to offload the KV cache after the regular layers if the value is high enough.

So by the time this PR has reached llama.cpp main, it should be possible for the user to choose what works best for them: maximum performance but perhaps on a smaller quant size, or a larger quant size with the same performance as they're used to now.

# Want to try this for yourself?

For now you'll need to compile the PR from source:

    git clone https://github.com/JohannesGaessler/llama.cpp llama.cpp-PR
    cd llama.cpp-PR
    git checkout cuda-full-gpu-2
    make clean && LLAMA_CUBLAS=1 make -j
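As a quick sanity check once the build finishes, a run along these lines should work. This is only a sketch, not taken from the PR: the model path and prompt are placeholders, and the `--n-gpu-layers` value just needs to be at least the model's total layer count so everything is offloaded.

    # Hedged example run: fully offload a quantised model and use a single CPU thread.
    # ./models/7B/ggml-model-q4_K_S.bin is a placeholder path; point it at whatever GGML file you actually have.
    # --n-gpu-layers 100 exceeds the layer count of 7B/13B/30B/65B LLaMA models, so all layers land on the GPU.
    ./main -m ./models/7B/ggml-model-q4_K_S.bin \
        --n-gpu-layers 100 \
        --threads 1 \
        -n 128 \
        -p "Building a website can be done in 10 simple steps:"

The `--threads 1` part matters with this PR: as noted above, extra CPU threads now slow things down rather than helping.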
2023-06-12T22:12:15
https://www.reddit.com/r/LocalLLaMA/comments/147z6as/llamacpp_just_got_full_cuda_acceleration_and_now/
The-Bloke
self.LocalLLaMA
2023-06-13T10:43:14
0
{}
147z6as
false
null
t3_147z6as
/r/LocalLLaMA/comments/147z6as/llamacpp_just_got_full_cuda_acceleration_and_now/
false
false
self
397
{'enabled': False, 'images': [{'id': 'libw-YiNaD5BcmhkgQeD707MDy7dFNk9mryQZ0gsqvM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?width=108&crop=smart&auto=webp&s=448424edf998a31a1f3075021f02d2cb4b3ae890', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?width=216&crop=smart&auto=webp&s=b08d914ba4082f4117d1d12e7fe101ac2895afdb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?width=320&crop=smart&auto=webp&s=281a05cbdf0ca5efb53ce5c8f571627d8624f417', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?width=640&crop=smart&auto=webp&s=39c2c59081b59572dce3d63fb16dc25b7242203c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?width=960&crop=smart&auto=webp&s=ebc05e4017c9113a5a95e7a1b2b8cdf9222913b8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?width=1080&crop=smart&auto=webp&s=b2acffac245312646f0a627aaaad5b188faac8a3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4RUP2I3VOC3pfETVnZTTV2T4mtN3EVSDhVTkXjXp4DU.jpg?auto=webp&s=032cf4107a62a0de7efcc5144dbab611328f98ef', 'width': 1200}, 'variants': {}}]}
I'd like to download WizardLM before it gets blocked
0
[removed]
2023-06-12T22:33:16
https://www.reddit.com/r/LocalLLaMA/comments/147zmm8/id_like_to_download_wizardlm_before_it_gets/
coop7774
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
147zmm8
false
null
t3_147zmm8
/r/LocalLLaMA/comments/147zmm8/id_like_to_download_wizardlm_before_it_gets/
false
false
default
0
null
Can you increase a model's number of parameters?
0
[removed]
2023-06-12T23:26:20
https://www.reddit.com/r/LocalLLaMA/comments/1480qg1/can_you_increase_a_models_number_of_parameters/
TimTams553
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1480qg1
false
null
t3_1480qg1
/r/LocalLLaMA/comments/1480qg1/can_you_increase_a_models_number_of_parameters/
false
false
default
0
null