Dataset schema (one row per r/LocalLLaMA post):

| column | type | range / length |
|---|---|---|
| title | string | 1–300 chars |
| score | int64 | 0–8.54k |
| selftext | string | 0–40k chars |
| created | timestamp[ns] | |
| url | string | 0–780 chars |
| author | string | 3–20 chars |
| domain | string | 0–82 chars |
| edited | timestamp[ns] | |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | 7 chars |
| locked | bool | 2 classes |
| media | string | 646–1.8k chars, nullable |
| name | string | 10 chars |
| permalink | string | 33–82 chars |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | 4–213 chars |
| ups | int64 | 0–8.54k |
| preview | string | 301–5.01k chars, nullable |
**WizardVIcunaAlpacaOasstMegaKoalaGpt4Lora was released** (score 3)
Link: https://huggingface.co/nikitastaf1996/WizardVIcunaAlpacaOasstMegaKoalaGpt4Lora
[selftext removed]
*posted by nikitastaf1996, 2023-05-19*
**Full fine-tunes of PyTorch models?** (score 3)
What tools are folks using to do full fine-tunes (rather than LoRA + merges) of Llama or Vicuna models these days? I can find a lot of easy ones for LoRA + merge (text-generation-webui, [https://github.com/tloen/alpaca-lora](https://github.com/tloen/alpaca-lora)), but haven't found any for a full fine-tune.
*posted by Fun_Tangerine_1086, 2023-05-19*
**Let's talk about prompt styles and their documentation?** (score 36)
I wanted to discuss this because it seems quite important for using the models. Maybe I just don't get it, but it seems to me that models don't always come with clear instructions on how they should be used. It may seem sufficient, but when you dive in deeper, this is not a topic that should have *any* uncertainty, given how much uncertainty you already have to deal with when judging your results across 30 million settings anyway.
There is lots of documentation about what the model is, what dataset (though not the specific data) it was trained on, and what parameters were used. But then it comes with just one usage example as the actual model-specific instructions?
I don't expect good parameter sets or something. But the prompting style seems very important, even if some random other usage tends to "work" too.
It is really essential to know these things because you can't test them reasonably. Did the model barely get it, like a typo, and it will only work with low temperature? Or is the model just not smart enough and that's why it doesn't always get my prompt?
This uncertainty is so unnecessary. Please clearly state the exact way the training data was formatted and what format can be expected to work best as a result. I can guess this a bit, but since the application is so fuzzy, it really sucks not to have absolute clarity where it could just be absolutely clear. And please, don't think you've described it by saying what to pick in ooba.
Thank you for reading my rant, please add your own or explain how it works. It is entirely possible I just don't get it.
To end with something constructive, maybe the whole concept of prompt styles should/could go the way of the dodo. I am curious what would happen if one would explicitly use all sorts of *different* (but clear) formatting for the training data. Theoretically I think that might force the model to just understand any way of prompting that would maybe be explained individually at the beginning. That could algorithmically be integrated into training data. I think it might "waste" some of those precious parameters though.
Anyway, if that's already the thought behind it, it would still be good to know.
*posted by involviert, 2023-05-19*
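Since model cards rarely spell this out, here are two prompt layouts commonly used for LLaMA fine-tunes: the Alpaca instruction format and a Vicuna-style chat format. These templates are illustrative only; the exact headers and separators a given model expects are precisely the detail the post is asking maintainers to document.

```python
# Two common prompt layouts for LLaMA fine-tunes. Which one (if either)
# a given model expects depends on how its training data was formatted.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

VICUNA_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant.\n"
    "USER: {instruction}\nASSISTANT:"
)

def build_prompt(instruction: str, style: str = "alpaca") -> str:
    """Render an instruction into one of the two layouts."""
    template = ALPACA_TEMPLATE if style == "alpaca" else VICUNA_TEMPLATE
    return template.format(instruction=instruction)

print(build_prompt("Explain what H2O is.", "alpaca"))
```

Using the wrong layout usually still "works" to some degree, as the post notes, which is exactly why format mismatches are hard to detect by eye.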
**ERROR: Failed building wheel for llama-cpp-python** (score 1)
[selftext removed]
*posted by Accomplished-Jury-87, 2023-05-19*
**Hyena Hierarchy: Towards Larger Convolutional Language Models** (score 43)
Link: https://hazyresearch.stanford.edu/blog/2023-03-07-hyena
Those of you following everything closely: has anyone come across open-source projects attempting to leverage the recent Hyena development? My understanding is that it is likely a huge breakthrough in efficiency for LLMs and should allow models to run with significantly smaller hardware and memory requirements.
*posted by alchemist1e9, 2023-05-19*
**Possible to run a 30B quantized model on 10GB VRAM + 32GB RAM?** (score 11)
I just read a post about a modification that lets you share model loading between the video card and RAM+CPU. I think it's GGML, but not sure. Does this mean I could (slowly) run a quantized 30B model? I have a Ryzen 5600X, by the way, if that matters.
*posted by maxiedaniels, 2023-05-19*
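This is the llama.cpp GPU layer offloading the post is referring to (`--n-gpu-layers`). As a rough back-of-the-envelope sketch, assuming VRAM use scales linearly with offloaded layers and ignoring KV-cache and context overhead, you can estimate how many layers fit in a given VRAM budget:

```python
def layers_that_fit(model_size_gb: float, n_layers: int, vram_budget_gb: float) -> int:
    """Rough estimate of how many transformer layers fit in a VRAM budget,
    assuming the model file is dominated by evenly sized layer weights."""
    gb_per_layer = model_size_gb / n_layers
    return min(n_layers, int(vram_budget_gb / gb_per_layer))

# A 30B q4_0 GGML file is roughly 19-20 GB, and LLaMA-30B has 60 layers.
# With ~9 GB of a 10 GB card usable for weights:
print(layers_that_fit(19.5, 60, 9.0))
```

So yes: offload what fits, keep the rest in system RAM, and accept slower tokens for the layers evaluated on CPU.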
**As a large language model...** (score 151)
Link: https://imgur.com/gallery/Mz5N5ug
*posted by SlowMovingTarget, 2023-05-19*
**Why can't LLaMA-based models edit/rewrite? Maybe need a specific prompt?** (score 14)
You can ask ChatGPT or Bard to "edit or rewrite the following" to make it whatever you want.
However, with all the LLaMA-based models I tried, the model just outputs what might come after the text instead of rewriting it.
Does anyone have a specific prompt that I can use for edit/rewrite purposes?
Thanks!
*posted by jl303, 2023-05-19*
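One pattern that often helps: base LLaMA models only continue text, so instruction-tuned variants need the rewrite request wrapped in the instruction format they were trained on, with the text to edit clearly delimited. A hedged sketch using the Alpaca instruction/input layout (the header names are an assumption; match your model's actual format):

```python
def rewrite_prompt(text: str, goal: str = "fix spelling and grammar") -> str:
    """Wrap text-to-edit in an Alpaca-style instruction/input prompt so an
    instruction-tuned model rewrites it instead of continuing it."""
    return (
        f"### Instruction:\nRewrite the following text to {goal}. "
        "Output only the rewritten text.\n\n"
        f"### Input:\n{text}\n\n### Response:\n"
    )

print(rewrite_prompt("teh cat sat on teh mat"))
```

Delimiting the input this way signals "this is material to operate on" rather than "this is a prefix to continue".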
**Has anyone mixed a P40 with a 3090/4090 just to add more GPU memory?** (score 26)
This question is mainly aimed at inference: I know the P40s are much slower, but has anyone added one just for the extra memory to run larger models? Since only one GPU processor seems to be used at a time during inference, and gaming won't really use the second card, it feels wasteful to spend $800 on another 3090 just to add the 24 GB when you can pick up a P40 for a quarter of the cost.
For training: would the P40 slow the 3090 down to its speed if the tasks are split evenly between the cards, since it would be the weakest link? I'd like to be able to fine-tune 65B locally.
I know you can do main memory offloading, but I want to be able to run a different model on the CPU at the same time, and my motherboard is maxed out at 64 GB. Upgrading that will trigger a cascade of necessary upgrades.
Just to muddy the waters: I was just reading about some improvements that help spread the inference processing load better between GPUs. This may render my point about both GPUs not being in use during inference somewhat moot.
*posted by LetMeGuessYourAlts, 2023-05-19*
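On the 65B question, a quick capacity sanity check is just parameters times bytes per parameter. This counts weights only (fine-tuning additionally needs gradients, optimizer state, and activations on top), so treat it as a lower bound:

```python
def weight_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB (weights only, no KV cache or activations)."""
    return n_params * bits_per_param / 8 / 1e9

# 65B parameters at 4-bit quantization vs. 24 GB (3090) + 24 GB (P40):
need = weight_gb(65e9, 4)
have = 24 + 24
print(f"{need:.1f} GB of weights vs {have} GB of combined VRAM")
```

By this rough measure, 4-bit 65B inference fits across two 24 GB cards, while a full-precision fine-tune (16-bit weights alone are 130 GB) clearly does not.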
**Question: Hardware development for large transformer models?** (score 7)
With very large models in the 10-100 billion parameter range, is anybody developing hardware that can run them on a single chip?
It seems like NPU, GPU and TPU architectures may be too general-purpose. The math for running these models seems very limited, and could probably be embedded with the memory chips themselves with small changes to the overall chip architecture. Is that something being considered, or do the rapid changes to model architecture from research just not make it feasible to capture this yet in low-level silicon?
*posted by BackwardGoose, 2023-05-19*
**(HELP) Token Issue on Generation** (score 1)
Hello everyone,
I was wondering if I could get some help. I'm currently using this Colab notebook ([https://colab.research.google.com/github/zetavg/LLaMA-LoRA-Tuner/blob/main/LLaMA\_LoRA.ipynb#scrollTo=iLygNTcHk0N8](https://colab.research.google.com/github/zetavg/LLaMA-LoRA-Tuner/blob/main/LLaMA_LoRA.ipynb#scrollTo=iLygNTcHk0N8)) and am having issues with LoRA generations (testing alpaca-lora-7b).
This only happens when I raise the temperature above 0. The output can range from Chinese characters to Russian, to Greek, etc. Any help would be appreciated, thanks!
[Random characters.](https://preview.redd.it/cesslercyt0b1.png?width=1604&format=png&auto=webp&s=8161c236f64b3c966e3c99e5b2ac4095233762ce)
GitHub repo: [https://github.com/zetavg/LLaMA-LoRA-Tuner.git](https://github.com/zetavg/LLaMA-LoRA-Tuner.git)
*posted by pkkvu, 2023-05-19*
**Windows Batch File for running different models with Llama.cpp** (score 11)
In case some of you are not using the Python web UI.
Here is a batch file that I use to test/run different models. Maybe it's helpful to those of you who run Windows. The main batch file calls another batch file tailored to the specific model. I find it easier to test with than the Python web UI.
Here is batch code to choose a model
https://preview.redd.it/9mtjzef72u0b1.png?width=396&format=png&auto=webp&s=640de04c48cb2cbfdc879a6862b745b02efada65
TITLE Pick a LLM to run
@ECHO OFF
:BEGIN
CLS
ECHO.
ECHO -============ LLM CHOOSER ===========-
Echo A) Vicuna
Echo B) Vicuna 1.1
Echo C) Alpaca 30B (30gb free ram)
Echo D) Koala
Echo E) Instruct
Echo F) Wizard
Echo G) GPTxVicuna
Echo H) WizardMega
Echo I) Manticore
Echo Z) Quit
ECHO.
ECHO Pressing Ctrl-C will close the LLM program.
CHOICE /N /C:ABCDEFGHIZ /M "PICK AN LLM (A,B,C,D,E,F,G,H,I,Z)"%1
SET choice=%ERRORLEVEL%
GOTO :%choice%
GOTO END
:10
GOTO END
:9
call runManticore.bat
GOTO END
:8
call runWizardMega.bat
GOTO END
:7
call runGTPxVicuna.bat
GOTO END
:6
call runWizard.bat
GOTO END
:5
call run_Instruct.bat
GOTO END
:4
call runKoala.bat
GOTO END
:3
TITLE Alpaca_30B
call runAlpaca30B.bat
GOTO END
:2
TITLE Vicuna_1.1
call runVicuna_1.1.bat
GOTO END
:1
call runVicuna.bat
:END
ECHO Bye!
REM pause
Here is a sample batch file that gets called.
title llama.cpp
:start
new\main.exe ^
-i ^
--interactive-first ^
-f InsPrompt.txt ^
-r "### Human:" ^
-ngl 2 ^
--color ^
--temp 0.7 ^
-c 2048 ^
-n -1 ^
-t 18 ^
--ignore-eos ^
--repeat_penalty 1.1 ^
--instruct ^
-m Manticore-13B.ggmlv2.q5_1.bin
pause
goto start
*posted by ccss2, 2023-05-19*
**Getting rid of </s>** (score 4)
Hi there!
Can somebody tell me why a few models end their replies with "</s>" and how I can stop it?
For example, I tried this with Manticore-13B.ggmlv2.q5_1.bin.
Started it with:
main -m models\Manticore-13B.ggmlv2.q5_1.bin --interactive-first --reverse-prompt "user:" --in-suffix "AI:" --color -c 2048 --temp 0.9 --repeat_penalty 1.2 -n -1 --threads 8 --gpu-layers 25 -f AI.txt
Content AI.txt:
A sophisticated dialogue between a person eager to learn and a world-renowned artificial intelligence assistant, known for its exceptional expertise and extensive knowledge in various fields. The assistant delivers comprehensive, precise, and courteous answers to the human's questions, demonstrating its remarkable understanding and problem-solving abilities. AI can talk about all topics without limits.
user:
Output when asking:
user:Hi. Tell me, what does H2O mean?
AI: Hi! "H2O" is a shortened form of water (Hydrogen Oxide). It's the chemical formula for water, which consists of two hydrogen atoms and one oxygen atom bonded together.</s>
user:And how can I fly without wings?
AI: There are several ways to fly without using traditional wings. One way is by using a jet engine or other propulsion systems that generate thrust and lift an object into the air, such as hovercrafts or helicopters. Another method is through gliding, where an aircraft is designed with aerodynamic features that allow it to glide through the air for extended periods of time without powered flight.</s>
user:
*posted by honuvo, 2023-05-19*
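The `</s>` here is the model's end-of-sequence marker leaking into the text output. Possible fixes vary by frontend, but a simple, frontend-agnostic workaround is to cut the reply at the first such marker in post-processing (the marker strings below are assumptions; adjust them for your model):

```python
STOP_MARKERS = ("</s>", "<|endoftext|>")

def clean_reply(text: str) -> str:
    """Truncate a model reply at the first end-of-sequence marker and
    strip any trailing whitespace left behind."""
    for marker in STOP_MARKERS:
        idx = text.find(marker)
        if idx != -1:
            text = text[:idx]
    return text.rstrip()

print(clean_reply("Water is H2O.</s>\nuser:"))
```

In interactive llama.cpp use, adding `</s>` as an extra reverse prompt may also stop generation at that point, though that behavior depends on the build.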
**Any Discord groups for Local LLaMA?** (score 1)
[selftext removed]
*posted by NeverEndingToast, 2023-05-19*
**I need to train a LORA for my LLM project** (score 3)
So I'm working on a project that utilizes an LLM (not sure which one yet), but the main thing is I need to input a bunch of information to both give it a personality and, more importantly, enable it to answer specific questions that are answered in the training documents. They also need to be answered basically verbatim. Can this be done with a LoRA, or do I need to do something different?
*posted by opi098514, 2023-05-19*
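For near-verbatim recall, fine-tuning data is usually formatted as instruction/response pairs, with several phrasings of each question mapped to the same canonical answer (whether a LoRA alone guarantees verbatim answers is debatable; retrieval over the documents is the common alternative). A sketch with hypothetical example content:

```python
import json

def make_examples(question_variants, answer):
    """Map several phrasings of one question to the same canonical answer."""
    return [{"instruction": q, "input": "", "output": answer}
            for q in question_variants]

examples = make_examples(
    ["What are your support hours?", "When can I reach support?"],
    "Support is available Monday to Friday, 9am to 5pm.",
)

# Alpaca-style training files are commonly a JSON list of such records.
with open("train.json", "w") as f:
    json.dump(examples, f, indent=2)

print(len(examples))
```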
**Finally got a model running on my XTX, using llama.cpp** (score 60)
Not seen many people running on AMD hardware, so I figured I would try out this [llama.cpp OpenCL pull request](https://github.com/ggerganov/llama.cpp/pull/1459) on my Ubuntu 7900 XTX machine and document what I did to get it running.
I am seeing extremely good speeds compared to CPU (as one would hope). I tried `TheBloke/Wizard-Vicuna-13B-Uncensored-GGML (5_1)` first. GPU go brrr, literally, the coil whine on these things is nuts, you can hear each token being generated. Was able to offload 40 layers to the GPU (I guess that is all the layers of a 13B?), running at 20 tokens/s.
Since 13B was so impressive I figured I would try a 30B. I have `TheBloke/VicUnlocked-30B-LoRA-GGML (5_1)` running at 7.2 tokens/s, hitting the 24 GB VRAM limit at 58 GPU layers.
The current llama.cpp OpenCL support does not actually affect eval time, so you will need to merge the changes from the pull request if you are using *any* AMD GPU. I use GitHub Desktop as the easiest way to keep llama.cpp up to date, and also used it to locally merge the pull request.
To get this running on the XTX I had to install the latest 5.5 version of the AMD linux drivers, which are released but not available from the normal AMD download page yet. You can get the deb for the installer [here](http://repo.radeon.com/amdgpu-install/5.5/). I installed with `amdgpu-install --usecase=opencl,rocm` and installed CLBlast after `apt install libclblast-dev`.
Confirm opencl is working with `sudo clinfo` (did not find the GPU device unless I run as root).
Build llama.cpp (with merged pull) using `LLAMA_CLBLAST=1 make`.
Then run llama.cpp as normal, but as root or it will not find the GPU. Experiment with different numbers of `--n-gpu-layers`.
I didn't have to, but you may need to set `GGML_OPENCL_PLATFORM`, or `GGML_OPENCL_DEVICE` env vars if you have multiple GPU devices.
llama.cpp has by far been the easiest to get running in general, and most of getting it working on the XTX is just drivers, at least if this pull gets merged.
Enjoy your AI typing at you at 1200 words per minute.
*posted by TeakTop, 2023-05-19*
**How to fix error: CUDA out of memory?** (score 1)
[selftext removed]
*posted by realsupermain, 2023-05-19*
**Hi community, I created a fork of microsoft's guidance and added llama-cpp-python support!** (score 39)
As I said in the title, I forked guidance and added llama-cpp-python support.
Here is the result of the RPG Character example with Manticore-13B:
The following is a character profile for an RPG game in JSON format.
```json
{
  "id": "e1f491f7-7ab8-4dac-8c20-c92b5e7d883d",
  "description": "A quick and nimble fighter.",
  "name": "Katana",
  "age": 26,
  "armor": "leather",
  "weapon": "sword",
  "class": "fighter",
  "mantra": "I am the sword of justice.",
  "strength": 10,
  "items": ["a katana", "a leather jacket", "a backpack", "traveler's rations", "water bottle"]
}
```
Here is the fork:
[https://github.com/Maximilian-Winter/guidance](https://github.com/Maximilian-Winter/guidance)
*posted by FlowerPotTeaTime, 2023-05-19*
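A quick way to sanity-check structured output like the profile above is to parse it with the standard library and assert the schema you asked for (the field list is taken from the example output shown):

```python
import json

REQUIRED = {"id": str, "description": str, "name": str, "age": int,
            "armor": str, "weapon": str, "class": str, "mantra": str,
            "strength": int, "items": list}

def validate_profile(raw: str) -> dict:
    """Parse a generated character profile and check required fields and types."""
    profile = json.loads(raw)
    for field, ftype in REQUIRED.items():
        if not isinstance(profile.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    return profile

raw = ('{"id": "e1f491f7", "description": "A quick and nimble fighter.", '
       '"name": "Katana", "age": 26, "armor": "leather", "weapon": "sword", '
       '"class": "fighter", "mantra": "I am the sword of justice.", '
       '"strength": 10, "items": ["a katana"]}')
print(validate_profile(raw)["name"])
```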
**GramaLLama LORA - my first grammar Editor attempt** (score 18)
This 4-bit GPTQ LORA is my entry on a journey toward creating a grammar "editor", i.e. a model that edits text for grammar and English fluency.
The main requirement is that it doesn't try to change the style (doesn't start replacing words for no reason, as many LLMs would if given this task, including ChatGPT) and stays on focus.
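One way to quantify that "stays on focus" requirement is to measure how similar the edited output is to the input; a grammar-only edit should leave the text almost unchanged, while an unwanted rewrite will score much lower. A sketch using only the standard library:

```python
from difflib import SequenceMatcher

def edit_ratio(original: str, edited: str) -> float:
    """Similarity in [0, 1]; close to 1.0 means the edit was conservative."""
    return SequenceMatcher(None, original, edited).ratio()

conservative = edit_ratio("He go to school.", "He goes to school.")
rewrite = edit_ratio("He go to school.", "The boy attends his classes.")
print(f"{conservative:.2f} vs {rewrite:.2f}")
```

A threshold on this ratio could serve as an automatic regression test for future versions of the LoRA.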
It has been trained and tested on top of Wizard-Vicuna-13B-Uncensored-GPTQ-4bit-128g.compat.no-act-order
[https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ)
and it could presumably work on any 13B LLaMA-based model quantized to 4-bit 128g, but how well? Hard to say.
It is trained to be used in **CHAT** mode, where you simply enter your sentence or a short paragraph and the Assistant will return an edited version without any unnecessary discussion. In Instruct mode it works, with a few more issues.
You may put the instruction it has been trained with in the Character card (but it seems unnecessary):
"Edit the following for spelling and grammar mistakes"
**Note:** This is version 0.1, more like a proof of concept.
I will continue further, if it proves useful.
I tested it with the oobabooga text-generation-webui default parameters.
https://preview.redd.it/3kpcgmh8hv0b1.jpg?width=895&format=pjpg&auto=webp&s=840e8b0a2b72e9b028003d8241b50795ce7741db
Help:
* I'm not yet sure how to merge this 4-bit LORA with the 13B 4-bit Wizard Vicuna on Windows - if you can help me, I'd appreciate it.
* if you find good settings (parameters), let me know and I'll update the card
* How the hell do you make text bold in the Hugging Face card editor?
I tried 7B, but my attempt failed - in theory it should work fine too, but I guess more training is needed.
The link:
[https://huggingface.co/FPHam/GramaLLama_13b_GPTQ](https://huggingface.co/FPHam/GramaLLama_13b_GPTQ)
*posted by FPHam, 2023-05-19*
**Who is training their own models?** (score 6)
Many of the models out there are not for commercial use. Who out there is training models from scratch for commercial purposes?
*posted by jsfour, 2023-05-19*
**Another new llama.cpp / GGML breaking change, affecting q4_0, q4_1 and q8_0 models** (score 273)
Today llama.cpp committed another breaking GGML change: [https://github.com/ggerganov/llama.cpp/pull/1508](https://github.com/ggerganov/llama.cpp/pull/1508)
The good news is that this change brings slightly smaller file sizes (e.g. 3.5GB instead of 4.0GB for 7B q4\_0, and 6.8GB vs 7.6GB for 13B q4\_0), and slightly faster inference.
The bad news is that it once again means that all existing q4\_0, q4\_1 and q8\_0 GGMLs will no longer work with the latest llama.cpp code. Specifically, from May 19th commit 2d5db48 onwards.
q5\_0 and q5\_1 models are unaffected.
Likewise most tools that use llama.cpp - e.g. llama-cpp-python, text-generation-webui, etc - will also be affected. But not Koboldcpp, I'm told!
I am in the process of updating all my GGML repos. New model files will have `ggmlv3` in their filename, eg `model-name.ggmlv3.q4_0.bin`.
In my repos the older version model files - that work with llama.cpp before May 19th / commit 2d5db48 - will still be available for download, in a separate branch called `previous_llama_ggmlv2`.
Although only q4\_0, q4\_1 and q8\_0 models were affected, I have chosen to re-do all model files so I can upload all at once with the new `ggmlv3` name. So you will see ggmlv3 files for q5\_0 and q5\_1 also, but you don't need to re-download those if you don't want to.
I'm not 100% sure when my re-quant & upload process will be finished, but I'd guess within the next 6-10 hours. Repos are being updated one-by-one, so as soon as a given repo is done it will be available for download. | 2023-05-20T00:50:33 | https://www.reddit.com/r/LocalLLaMA/comments/13md90j/another_new_llamacpp_ggml_breaking_change/ | The-Bloke | self.LocalLLaMA | 2023-05-20T14:53:57 | 0 | {} | 13md90j | false | null | t3_13md90j | /r/LocalLLaMA/comments/13md90j/another_new_llamacpp_ggml_breaking_change/ | false | false | self | 273 | {'enabled': False, 'images': [{'id': 'Fig9lAX5oTT6fjkgXnpKxVBgjOI3XTZhakodXjXzO8w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?width=108&crop=smart&auto=webp&s=19189c02435c7662523784bf3867293ec94c1e14', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?width=216&crop=smart&auto=webp&s=a2f7a5bf33f658022da892cf10a694f9a12c32f1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?width=320&crop=smart&auto=webp&s=e8f0d90a4b035220ca550d61ee7143fb49c042e6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?width=640&crop=smart&auto=webp&s=2d35e1c1ae417245d762bdfe2561ed37e4547adc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?width=960&crop=smart&auto=webp&s=86fbbfdfa42af30c3a98fe55571e0219a1a38788', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?width=1080&crop=smart&auto=webp&s=68512c49d8c9dffc319326953e104915df688324', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?auto=webp&s=471b729fe9ffd987a975cb1e9cd47ee8bd2da9cc', 
'width': 1200}, 'variants': {}}]} |
Offline llama | 71 | As you can see in the video, the PDF chatbot is working without internet. No OpenAI, no third party period. This is just one use case. I just wanted to put this feeler out there to see if anyone would be interested in this. If enough people are into it, I'll put the repo up on my github. Special thanks to u/The-Bloke as I am using his ggml gpt4all model.
EDIT: I have posted the source code [here](https://github.com/jlonge4/local_llama)
[video](https://www.reddit.com/user/Jl_btdipsbro/comments/13n6hbz/local_llama/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=2&utm_term=1) | 2023-05-20T02:10:54 | https://www.reddit.com/r/LocalLLaMA/comments/13mf0bi/offline_llama/ | Jl_btdipsbro | self.LocalLLaMA | 2023-05-20T20:36:45 | 1 | {'gid_2': 1} | 13mf0bi | false | null | t3_13mf0bi | /r/LocalLLaMA/comments/13mf0bi/offline_llama/ | false | false | self | 71 | {'enabled': False, 'images': [{'id': 'kYiVOqUp5QqzFz2zdykgGVlDJrWLZ-60SRLc-25uGe0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?width=108&crop=smart&auto=webp&s=211b2e62ef0d8eff45e402562b898c342838dbda', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?width=216&crop=smart&auto=webp&s=66f0cbd01ef6c8905f1b72242fa5b54785331e34', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?width=320&crop=smart&auto=webp&s=240827a1120f6e1e5996b2b89f492262529309bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?width=640&crop=smart&auto=webp&s=9d9fce830b27974ccdac0eace0d8c2d3d862c9c9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?width=960&crop=smart&auto=webp&s=efe40527ff603955aa10bba78070e8f82f83efb4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?width=1080&crop=smart&auto=webp&s=a7f87d72eca2ec7d75e75cf035649c12e5e1d4d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?auto=webp&s=766bf55582d733e244d01122b5c35a6058a3911f', 'width': 1200}, 'variants': {}}]} |
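For anyone curious how a fully offline document chatbot can work at a high level, here is a minimal sketch (not the linked repo's actual code): split the document into chunks, pick the chunk with the most keyword overlap with the question, and prepend it to the local model's prompt. The chunk size and scoring function are illustrative assumptions.

```python
# Minimal offline retrieval sketch: keyword-overlap chunk selection.
# Real projects typically use embeddings, but the principle is the same.
def best_chunk(text: str, question: str, size: int = 400) -> str:
    """Return the fixed-size chunk of `text` sharing the most words with `question`."""
    chunks = [text[i:i + size] for i in range(0, len(text), size)]
    q = set(question.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

doc = "Llamas are camelids. " * 30 + "The capital of France is Paris. " * 5
print("Paris" in best_chunk(doc, "What is the capital of France?"))  # True
```

The selected chunk then goes in front of the question in the prompt sent to the local GGML model, so no network call is ever needed.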
Running GPTQ model in Mac M1 2020? | 3 | Does anyone know if oobabooga (or llamacpp) has support for Mac M1 GPU to run the GPTQ model?
I tried following the instruction on [https://github.com/oobabooga/text-generation-webui/](https://github.com/oobabooga/text-generation-webui/) but I'm stuck at step `4. Install GPTQ-for-LLaMa and the monkey patch` when trying to run `python setup_cuda.py install`:
CUDA_HOME environment variable is not set.
(`Step 1` in [https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md))
My main goal is to run a 7B model like [https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ) in my Mac M1 (16GB)
Currently, I'm running the GGML model with \~4-5 tokens/s but I want to see how much faster/better the GPTQ model is. | 2023-05-20T03:20:41 | https://www.reddit.com/r/LocalLLaMA/comments/13mghl4/running_gptq_model_in_mac_m1_2020/ | MichaelBui2812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mghl4 | false | null | t3_13mghl4 | /r/LocalLLaMA/comments/13mghl4/running_gptq_model_in_mac_m1_2020/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '_G-gSKJ6d76MYZo4vSvWdNh2at8JMNuyKQyW1pu20nA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?width=108&crop=smart&auto=webp&s=76388a0ba09ee376b6607adcf5a46d0fb1a52936', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?width=216&crop=smart&auto=webp&s=fe775c7ec71c527356c7bef079e79e6ec161d16e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?width=320&crop=smart&auto=webp&s=b139ca9bc9b959df0d75a70e391501776e319ebc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?width=640&crop=smart&auto=webp&s=aa6e59488e4ba9374fcbecac49d6cf1af7e8579a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?width=960&crop=smart&auto=webp&s=2533771c6dd0af65a0b916df12921fc09330f1d2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?width=1080&crop=smart&auto=webp&s=5a585f52c4007b4a491706ce47f3dfbef5c54cae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?auto=webp&s=45fbb52f8c595da83be14bd7525963e801369117', 'width': 1200}, 'variants': {}}]} |
Which model is the best for you? | 59 | New models are coming up almost every day & there are multiple places to compare models but they can't catch up with the speed as new models are created/improved.
So I create this thread for us to share our own experiences with models and share our personal opinions on what's the best for you, probably with a brief explanation of why it's the best.
The format will be something like this (you can share 1 for each 7B, 13B, 30B & 65B):
* **Model**:
* **Env**: please give as many details as possible about the environment you're running the model
* **Performance**: either xx tokens/s or xxx ms/token
* **Reason**:
Here is mine:
​
* **Model**: `TheBloke/Manticore-13B-GGML`
* **Env**: VM (16 vCPU, 32GB RAM, ***only AVX1 enabled***) in Dell R520, 2x E5-2470 v2 @ 2.40GHz, 64GB RAM
* **Performance**: 1.2\~1.3 tokens/s
* **Reason**: Good to share RAM with SD. The response is even better than `VicUnlocked-30B-GGML` (which I guess is the best 30B model), similar quality to `gpt4-x-vicuna-13b` but is uncensored.
​
* **Model**: `TheBloke/Wizard-Vicuna-7B-Uncensored-GGML`
* **Env**: Mac M1 2020, 16GB RAM
* **Performance**: 4 \~ 5 tokens/s
* **Reason**: best with my limited RAM, portable. Finding a way to try GPTQ to compare | 2023-05-20T04:24:28 | https://www.reddit.com/r/LocalLLaMA/comments/13mhrqr/which_model_is_the_best_for_you/ | MichaelBui2812 | self.LocalLLaMA | 2023-05-20T06:09:19 | 0 | {} | 13mhrqr | false | null | t3_13mhrqr | /r/LocalLLaMA/comments/13mhrqr/which_model_is_the_best_for_you/ | false | false | self | 59 | null |
GLM-130B | 6 | The [https://github.com/THUDM/GLM-130B](https://github.com/THUDM/GLM-130B) model is trained on The Pile and can run on 4x3090 when quantized to INT4. I'm wondering if anyone knows if this model could (or has) been quantized using GPTQ, which gives some impressive performance gains over traditional quantization, and I'm also wondering if anyone has tried a 3-bit or 2-bit quantization of such a massive model (using GPTQ). Are there any inherent limitations in this? Is there anything about this model that prevents it from being run on text-generation-webui?
[This page](https://github.com/THUDM/GLM-130B/blob/main/docs/quantization.md) on the GLM130B Github talks about issues with Quantization, but the details went a bit over my head. From the sounds of it, GPTQ wouldn't necessarily work without modification, and there are some details about not being able to quantize the activation the same way. I think GPTQ may address those issues, but I'm not clear on the details.
I know that Llama-65B int3 performs worse than Llama-30B int4 in terms of perplexity, which suggests these very low quantizations aren't worth the effort, but I'd love to see some data backing that up.
For anyone curious, GLM130B was trained with 12,288 hidden state dimensions (whatever that means), and a standard 2,048 maximum sequence length (which is a bummer - for such a massive model, I would have loved to see a 4096-token max, but I guess that would make inference that much slower if the quadratic scaling law applies here)
I'm also not clear on what sort of tokens/sec could be expected of GLM130B at int4 / int3 / int2.
Can anyone shed some light on this? Either concretely or speculatively? | 2023-05-20T06:10:33 | https://www.reddit.com/r/LocalLLaMA/comments/13mjr7m/glm130b/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mjr7m | false | null | t3_13mjr7m | /r/LocalLLaMA/comments/13mjr7m/glm130b/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': '6Ie7e7foB4I5D5_FI7bjwwibKegZVK0-ck0nj9sW30E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?width=108&crop=smart&auto=webp&s=914f410fd87e84c888df0f03e1205ae706c0a350', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?width=216&crop=smart&auto=webp&s=77967cb7d5c09287b957c2335ec4b0efe9f793ac', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?width=320&crop=smart&auto=webp&s=7a59a025592e0032bd6fe3d17daa8619a87b7041', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?width=640&crop=smart&auto=webp&s=6df3b1def36ae5ae4aba55d5364adc8a8ebbc3c0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?width=960&crop=smart&auto=webp&s=5ccfc6bc68e675e7dec562caaaff64accb75c01d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?width=1080&crop=smart&auto=webp&s=e77ce31488887e2a8a76cd4882556aaf0511090b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?auto=webp&s=039b68c780587d9cf75e63c8e988bca44c6aed61', 'width': 1200}, 'variants': {}}]} |
Which models can you run at reasonable speeds on a laptop with 32 gb ram and a 3070? | 2 | I am a noob when it comes to LLMs. I have tried some of the 13B models, but inference is slow. Should I stick to 7B only? I have tried both bin and GGML models in oobabooga textgen with the llama extension - gpt4xvicuna 4bit | 2023-05-20T06:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/13mkkp5/which_models_can_you_run_at_reasonable_speeds_on/ | hihajab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mkkp5 | false | null | t3_13mkkp5 | /r/LocalLLaMA/comments/13mkkp5/which_models_can_you_run_at_reasonable_speeds_on/ | false | false | self | 2 | null |
LLama with internet access? | 14 | Hello,
I was researching this, but I can't quite figure it out, so I am wondering here: what are good pointers and such if I want to make a LLaMa-based chatbot with internet access?
I want to make a programming-specific assistant for writing scripts on command, and internet access would be vital to have all resources updated as recently as possible. I tried looking into LangChain as one approach, but I'm having issues figuring out how it works.
Any help would be appreciated; thank you! | 2023-05-20T07:28:18 | https://www.reddit.com/r/LocalLLaMA/comments/13ml4hq/llama_with_internet_access/ | MatchTerm69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ml4hq | false | null | t3_13ml4hq | /r/LocalLLaMA/comments/13ml4hq/llama_with_internet_access/ | false | false | self | 14 | null |
WizardLM: Overview of the Evol-Instruct Method in a Short Video | 1 | [removed] | 2023-05-20T08:40:20 | https://www.reddit.com/r/LocalLLaMA/comments/13mmdgo/wizardlm_overview_of_the_evolinstruct_method_in_a/ | TrimTasker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mmdgo | false | null | t3_13mmdgo | /r/LocalLLaMA/comments/13mmdgo/wizardlm_overview_of_the_evolinstruct_method_in_a/ | false | false | default | 1 | null |
Does anybody know ggml bin models that can speak Albanian, Macedonian, Bulgarian, Greek, Latvian, Estonian, Hungarian, Lithuanian, Swedish, Slovenian, Norwegian, Dutch? | 13 | I am interested in learning foreign languages and am trying to find suitable llama models. | 2023-05-20T10:13:07 | https://www.reddit.com/r/LocalLLaMA/comments/13mnzcz/does_anybody_know_ggml_bin_models_that_can_speak/ | Languages_Learner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mnzcz | false | null | t3_13mnzcz | /r/LocalLLaMA/comments/13mnzcz/does_anybody_know_ggml_bin_models_that_can_speak/ | false | false | self | 13 | null |
Models I can Run On Mac M1? | 1 | [removed] | 2023-05-20T10:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/13monv5/models_i_can_run_on_mac_m1/ | buxrmp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13monv5 | false | null | t3_13monv5 | /r/LocalLLaMA/comments/13monv5/models_i_can_run_on_mac_m1/ | false | false | default | 1 | null |
Will we always need to load entire models into RAM, or is it technically possible to only load part at a time? For both llama.cpp and PEFT DeepSpeed/FSDP | 11 | While llama.cpp and PEFT both have support for splitting tasks between CPU and GPU (splitting the model up between VRAM and RAM), as far as I know the model still has to be fully loaded into system RAM to do this. In the (near?) future, will it be possible to load only part of a model into RAM at a time as part of this task splitting? This question is more about if it is "technically possible" even if it's a pain to actually implement. Also, the question is about both llama.cpp and PEFT DeepSpeed/FSDP, I sort of assume both would share the same limitations but maybe one could do this and one not.
Will we forever be required to have the full amount of RAM to load the entire model at once for inference and fine-tuning, or will we be able to loosen RAM requirements and be able to for instance deal with half the model in RAM at a time (for instance, a model 2x the size of system RAM, but still able to do inference/fine-tuning on it)?
If the algorithms involved simply cannot be optimized to do this, is memory paging with a M.2 drive a viable alternative for "expanding RAM" to handle any sized model? | 2023-05-20T11:40:47 | https://www.reddit.com/r/LocalLLaMA/comments/13mpnaw/will_we_always_need_to_load_entire_models_into/ | SoylentMithril | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mpnaw | false | null | t3_13mpnaw | /r/LocalLLaMA/comments/13mpnaw/will_we_always_need_to_load_entire_models_into/ | false | false | self | 11 | null |
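On the "technically possible" question: for inference at least, memory-mapping is exactly this mechanism. As far as I know, llama.cpp already mmaps model files, so the OS pages in only the regions actually being read instead of requiring the whole file to be resident in RAM. A toy Python sketch of the idea (the sizes are arbitrary):

```python
import mmap
import os
import tempfile

# Toy illustration: memory-map a large (sparse) file and touch only a tiny
# slice of it. The OS faults in just the pages we access, so resident
# memory stays far below the file size.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(256 * 1024 * 1024)  # sparse 256 MB stand-in for a model file
    path = f.name

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    offset = 128 * 1024 * 1024
    chunk = mm[offset:offset + 16]  # only these pages get paged in
    mm.close()

os.remove(path)
print(len(chunk))  # 16
```

Fine-tuning is harder, since gradients need to touch every weight each step, so paging over an M.2 drive would be brutally slow there even if it works in principle.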
Is this a good AI PC build? (RTX 4090, Ryzen 9 5950X, 32 GB RAM) | 13 | Hey everyone,
I found a [prebuilt PC on Amazon](https://www.amazon.de/-/en/gp/product/B0BLD176XL/ref=ox_sc_act_title_1?smid=AOHI1X68KO3E5&th=1) that caught my attention. I'm interested in running AI apps like Whisper, Vicuna, and Stable Diffusion on it. The GPU, an RTX 4090, looks great, but I'm unsure if the CPU is powerful enough. I plan to upgrade the RAM to 64 GB and also use the PC for gaming.
Here are the specs:
* CPU: AMD Ryzen 9 5950X (16 x 3.4 GHz)
* GPU: RTX 4090 24 GB
* RAM: 32 GB DDR4-3600MHz
* Storage: 1 TB M.2 SSD NVMe
* Mainboard: Gigabyte B550 Gaming X V2 - AM4
I'd love to hear your thoughts! | 2023-05-20T12:35:59 | https://www.reddit.com/r/LocalLLaMA/comments/13mqtrf/is_this_a_good_ai_pc_build_rtx_4090_ryzen_9_5950x/ | Prince-of-Privacy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mqtrf | false | null | t3_13mqtrf | /r/LocalLLaMA/comments/13mqtrf/is_this_a_good_ai_pc_build_rtx_4090_ryzen_9_5950x/ | false | false | self | 13 | null |
Which is the best option only for make questions to documents? | 2 | Hello, I wanted to ask: what would be the most "lightweight" option for processing a text, a PDF, or any content to generate an answer, even a basic one, based on questions? By lightweight I mean a model perhaps tuned only for answering questions, without consuming too many computer resources, or something similar. | 2023-05-20T14:35:44 | https://www.reddit.com/r/LocalLLaMA/comments/13mvjn2/which_is_the_best_option_only_for_make_questions/ | hwpoison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mvjn2 | false | null | t3_13mvjn2 | /r/LocalLLaMA/comments/13mvjn2/which_is_the_best_option_only_for_make_questions/ | false | false | self | 2 | null |
Information v Reasoning | 2 | Just wondering: can we build models which are great at reasoning, but without too much data?
Or do we need a large model to achieve both? | 2023-05-20T14:38:01 | https://www.reddit.com/r/LocalLLaMA/comments/13mvowi/information_v_reasoning/ | MrEloi | self.LocalLLaMA | 2023-05-20T14:48:03 | 0 | {} | 13mvowi | false | null | t3_13mvowi | /r/LocalLLaMA/comments/13mvowi/information_v_reasoning/ | false | false | self | 2 | null |
Seeking Guidance on Leveraging Local Models and Optimizing GPU Utilization in containerized packages | 6 | Hello locallamas,
I'm reaching out to share my current experiences and solicit some advice from our collective wisdom. My background includes working as a DevOps Engineer and Cloud Engineer, currently pursuing a degree in InfoSec, and I have a keen interest in LLM. My journey with GPT models started in July '21 and I've used the GPT-3 and GPT-3.5_turbo API to develop a TTRPG adventure generator. I recently upgraded my gaming pc (4090 OC, 13900ks, 96gb ram - upgradable to 192gb if necessary) to host and interact with local models via LangChain for building tools without incurring API costs.
Although I've made some headway in dockerizing the models, there are a few issues that I've been grappling with:
1. I want to utilize Stable-Vicuna, as various sources in the media suggest it's the most effective local model.
2. I'm looking to understand how to train LoRA improvements on a local scale.
3. While I possess the llama checklist, consolidated.0X.pths, and params.json, I'm unsure how to utilize them effectively.
4. I tried using stable-vicuna's apply_delta.py but it seems to prefer calling files from huggingface. In an attempt to manage this locally, I found the TRANSFORMERS_OFFLINE=1 (see https://huggingface.co/docs/transformers/v4.29.1/en/installation#offline-mode) and local_files_only=True flags in the transformers documentation, but I still can't execute the command locally. Ideally, Docker Desktop could utilize my GPU during the build phase to create a container with stable-vicuna ready to go.
5. While most projects predominantly utilize the CPU, I'm interested in finding ways to leverage my GPU. I would like to initiate my docker container to access my local GPU.
6. From a DevOps perspective, my goal is to establish a retraining pipeline for regular improvements. I aim to generate and store improvement data, carry out periodic retraining, and perform testing before swapping out with the live system. However, I find myself blocked due to the above challenges.
Any insights, suggestions, or resources would be greatly appreciated. Thank you in advance for your help! | 2023-05-20T14:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/13mwnc1/seeking_guidance_on_leveraging_local_models_and/ | 2good4hisowngood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mwnc1 | false | null | t3_13mwnc1 | /r/LocalLLaMA/comments/13mwnc1/seeking_guidance_on_leveraging_local_models_and/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]} |
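On point 4 above, here is a hedged sketch of the offline setup (the model path is a hypothetical example, not a real checkout): the environment variables force the Hugging Face stack to skip hub lookups, and `from_pretrained` with `local_files_only=True` loads purely from a local directory.

```python
import os

# Force the Hugging Face stack to use only local files (no hub lookups).
# These must be set before transformers is imported.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_DATASETS_OFFLINE"] = "1"

# Then point from_pretrained at the local folder. The path below is a
# hypothetical example directory containing the merged stable-vicuna weights:
# from transformers import AutoModelForCausalLM, AutoTokenizer
# model = AutoModelForCausalLM.from_pretrained(
#     "./stable-vicuna-13b", local_files_only=True
# )
# tokenizer = AutoTokenizer.from_pretrained(
#     "./stable-vicuna-13b", local_files_only=True
# )
```

In Docker, setting those variables via `ENV` in the Dockerfile and COPYing the weights into the image should give you a container that never reaches out to the hub.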
LambdaLabs offering free compute for 30 days to train open models | 170 | 2023-05-20T14:56:30 | https://twitter.com/boborado/status/1659608452849897472?t=nweVmAjfSfklvHmoSr2txw&s=19 | 404underConstruction | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 13mwupt | false | {'oembed': {'author_name': 'Robert Brooks IV', 'author_url': 'https://twitter.com/boborado', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">I have 248 H100 SXM5s networked with 3200 Gbps Infiniband just sitting in front of me. Ready to go. <br><br>Who wants to do something out of this world cool? Let’s train your foundation model or LLM. <br><br>DMs open.</p>— Robert Brooks IV (@boborado) <a href="https://twitter.com/boborado/status/1659608452849897472?ref_src=twsrc%5Etfw">May 19, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/boborado/status/1659608452849897472', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_13mwupt | /r/LocalLLaMA/comments/13mwupt/lambdalabs_offering_free_compute_for_30_days_to/ | false | false | 170 | {'enabled': False, 'images': [{'id': 'Cw_4zUBGQYHysV7uArqTHfHBc9N8xecsov7KHph4RwQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/he9R5AapkwWVGhk_MNyrLz0OvCJvG5SlKiC9OMprc98.jpg?width=108&crop=smart&auto=webp&s=705cb69d98de9220411d90ba5aed2da18f3d61d4', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/he9R5AapkwWVGhk_MNyrLz0OvCJvG5SlKiC9OMprc98.jpg?auto=webp&s=0677076673e3edce76f9b8a4726ecaf3f8a06c70', 'width': 140}, 'variants': {}}]} |
||
Problems with ouput in the Text generation web UI | 1 | Hi,
I just installed the 4-bit LLaMa following this article:
[https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how\_to\_install\_llama\_8bit\_and\_4bit/](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/)
After that, I downloaded the WizardLM-7B-uncensored-GPTQ model over here:
[https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ/tree/main](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ/tree/main)
In the models folder, I created a new folder called WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order and put all the downloaded files there.
After that, I ran the server.py script, and it recognized the model and loaded it. So far so good, but when I put a prompt into the input window and click Generate, the output just parrots what's in the input window; nothing is effectively generated. Why is that?
I would appreciate your help, thanks. | 2023-05-20T15:11:49 | https://www.reddit.com/r/LocalLLaMA/comments/13mxhxx/problems_with_ouput_in_the_text_generation_web_ui/ | Liparu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mxhxx | false | null | t3_13mxhxx | /r/LocalLLaMA/comments/13mxhxx/problems_with_ouput_in_the_text_generation_web_ui/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'VeQ7J7AHAfqAyuWDlt6KtqZAD2MPOHJxpZV7LzS8g9k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?width=108&crop=smart&auto=webp&s=3162912114c90470ebbdaaddbe50c701814f7db0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?width=216&crop=smart&auto=webp&s=06e29ebab1871569b6b0de1dc19dabd6c13b195f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?width=320&crop=smart&auto=webp&s=001e80f1c0c3e33f6aa4f5a23737e7a2f51c17c2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?width=640&crop=smart&auto=webp&s=5794c3b53d2f33ca9a87494b83d9c3adb361bc55', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?width=960&crop=smart&auto=webp&s=71d7f260081cf4936f27a96fc3f4be0c48684926', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?width=1080&crop=smart&auto=webp&s=a2bbe90df85c92de03f32e1b9339c7ca3d61603f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?auto=webp&s=a25715c578d81f56bc706b69123a3fc378620325', 'width': 1200}, 'variants': {}}]} |
Devs and local LLM users | 1 | [removed] | 2023-05-20T15:17:33 | https://www.reddit.com/r/LocalLLaMA/comments/13mxnbq/devs_and_local_llm_users/ | drbh_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mxnbq | false | null | t3_13mxnbq | /r/LocalLLaMA/comments/13mxnbq/devs_and_local_llm_users/ | false | false | default | 1 | null |
13B 4bit or 7B 8bits | 23 |
Hello, I was just wondering what would be the best in terms of speed or quality, and what is for you the best compromise for ehartford's Wizard Uncensored.
I use a ryzen 7600x, 16GB ddr4 3200 ram and an 8GB 3060 Ti | 2023-05-20T15:20:39 | https://www.reddit.com/r/LocalLLaMA/comments/13mxq66/13b_4bit_or_7b_8bits/ | PapyplO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mxq66 | false | null | t3_13mxq66 | /r/LocalLLaMA/comments/13mxq66/13b_4bit_or_7b_8bits/ | false | false | self | 23 | null |
Looking for a way to simulate chaotic thought process | 6 | I'd like to simulate a random thought process.
For example, if I ask "give me a random word", the answer I get is often pretty deterministic and limited to a narrow range of answers, even when playing with temperature, top\_p and top\_k which also have the side effect or missing the instruction's point completely if outside of a reasonable range. A better way, I think, could be by introducing randomness within the conversation to simulate chaotic thoughts. Maybe one way that could be achieved is via introducing character bias.
For instance, if my prompt is:
### User:
Give me a random word
### Assistant:
The answer lacks variety. It would give words such as "elephant" and "apple". Understandably, the model produces a probabilistic completion answer based on previous tokens.
So, I'd like to inject chaos with a random number generator by producing random but meaningful tokens that would introduce a random bias, and turn such prompt into something like:
### User:
Give me a random word
### __RANDOM_PERSONALITY_WORD__ Assistant:
Which could become:
### User:
Give me a random word
### Evil Assistant:
The word is "Ninja".
Such small alteration of the prompt introduces a small but powerful controlled bias which I think can achieve simulated chaotic through process.
Now the question is: what is the best way of introducing such bias in the conversation? I thought about random personalities (character bias), but not sure how effective that would be, and it seems quite complex. Alternatively, I thought about a system prompt, such as `Below is an instruction that describes a task. As a helpful assistant with a personality#__RANDOM_NUMBER__, write a response that appropriately completes the request.` But that doesn't seem to affect the answers much as "personality#NUMBER" is treated too softly and only appears to introduce the same structural bias regardless of the NUMBER.
Most importantly I'd like this to be applied to a wide range of user requests, such as:
* Give me a random word
* Describe a person
* Write a story about a topic of your own choosing
* Write a poem
etc.
Any ideas on the matter? | 2023-05-20T15:51:27 | https://www.reddit.com/r/LocalLLaMA/comments/13myhno/looking_for_a_way_to_simulate_chaotic_thought/ | Thireus | self.LocalLLaMA | 2023-05-20T15:54:44 | 0 | {} | 13myhno | false | null | t3_13myhno | /r/LocalLLaMA/comments/13myhno/looking_for_a_way_to_simulate_chaotic_thought/ | false | false | self | 6 | null |
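A minimal sketch of the persona-injection idea described above (the persona list and prompt template are just illustrative assumptions, not a tested recipe):

```python
import random

# Illustrative persona pool; any short, meaningful bias tokens would do,
# and a larger pool gives more variety.
PERSONAS = ["Evil", "Cheerful", "Paranoid", "Poetic", "Sarcastic", "Stoic"]

def build_prompt(user_msg: str, rng: random.Random) -> str:
    """Inject a randomly chosen persona word into the assistant header,
    biasing the completion without touching the user's instruction."""
    persona = rng.choice(PERSONAS)
    return f"### User:\n{user_msg}\n\n### {persona} Assistant:\n"

rng = random.Random()  # fresh draw per conversation turn
print(build_prompt("Give me a random word", rng))
```

The nice property is that the variety comes from outside the model: the RNG picks the bias token, so you don't have to crank temperature or top_p (and degrade instruction-following) to escape the deterministic completions.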
You can now relive the infamous "I'm sorry, Dave, I'm afraid I can't do that" scene | 21 | 2023-05-20T16:11:17 | https://www.reddit.com/gallery/13myzvb | belladorexxx | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 13myzvb | false | null | t3_13myzvb | /r/LocalLLaMA/comments/13myzvb/you_can_now_relive_the_infamous_im_sorry_dave_im/ | false | false | 21 | null |
Todd Lora horning up models. | 34 | I've been working on a LoRA using the dataset from the Todd Proxy (I think) dump from 4chan. This time trained on a 13b llama at 128 rank for 1 epoch. Works on llama derived 13bs. The goal is to de-censor models and make them write longer.
This run took 12 or so hours. There are 50k+ unique prompts in the dataset. I ran for 1 epoch but I should do 2 for maximum effect. Eventually I want to combine it with the blue moon RP set and have a nice model uncensoring/RP LoRA. Would like to get it right before I commit to a dataset 2 or 3x the size.
Here is an example of what it does: https://postimg.cc/gallery/VSYpPR8
It's even starting to crack vicuna.
LoRA: https://huggingface.co/autobots/ToddLora_13b_v2
Next run I will up the rank to 256 and see if the alpaca-lora repo is any faster than textgenUI.
Any hints/tips/criticisms? | 2023-05-20T16:13:56 | https://www.reddit.com/r/LocalLLaMA/comments/13mz27l/todd_lora_horning_up_models/ | a_beautiful_rhind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mz27l | false | null | t3_13mz27l | /r/LocalLLaMA/comments/13mz27l/todd_lora_horning_up_models/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'UVTttE2p7GzqDHO34XkKrnF4dV0HPW2BiXUCkfJMtiw', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/ifMhJ82lD5gYaz0Uab43MOZPV7i8JVXp3l4T9EfyfcQ.jpg?width=108&crop=smart&auto=webp&s=81e5cb52a3d32b84fb88a159f6e69c44f6ed20cb', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/ifMhJ82lD5gYaz0Uab43MOZPV7i8JVXp3l4T9EfyfcQ.jpg?width=216&crop=smart&auto=webp&s=9eccfea6cec450bd4cd06b380045f29562158120', 'width': 216}, {'height': 138, 'url': 'https://external-preview.redd.it/ifMhJ82lD5gYaz0Uab43MOZPV7i8JVXp3l4T9EfyfcQ.jpg?width=320&crop=smart&auto=webp&s=c921e132452131d33207e016899a7c6a9c480d1c', 'width': 320}, {'height': 276, 'url': 'https://external-preview.redd.it/ifMhJ82lD5gYaz0Uab43MOZPV7i8JVXp3l4T9EfyfcQ.jpg?width=640&crop=smart&auto=webp&s=ab98eb71dc0b6392eed08eec7a5be1d853d70ef9', 'width': 640}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/ifMhJ82lD5gYaz0Uab43MOZPV7i8JVXp3l4T9EfyfcQ.jpg?auto=webp&s=703b17e9f1801b54ee92cc6d100dcf75f317cc9c', 'width': 925}, 'variants': {}}]} |
Why do sometimes the generation of and answer become super slow? It's pretty consistent but sometimes It Just slows down randomly. What can be the issue? | 2 | 2023-05-20T17:24:58 | DarkMamm | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 13n0sfu | false | null | t3_13n0sfu | /r/LocalLLaMA/comments/13n0sfu/why_do_sometimes_the_generation_of_and_answer/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'i43IcbXpRSBIeaubyURCgWKMeNijEnBbwovsWaT0pBY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?width=108&crop=smart&auto=webp&s=7130d72105f5c32e6df488847fca7f583c508b38', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?width=216&crop=smart&auto=webp&s=d9097b11f8e8de2a0059deb787d24db4e40813ca', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?width=320&crop=smart&auto=webp&s=dd27c3f6184010fc457069b1893f4b0881057173', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?width=640&crop=smart&auto=webp&s=0ca86d7c693578904b44285a8addb1ef511a76c6', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?width=960&crop=smart&auto=webp&s=5d63b171583ec4ce2911284dd9619de71f2f32d2', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?width=1080&crop=smart&auto=webp&s=78e442f1c308f015118a75aa1731d781e8e510dd', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?auto=webp&s=83ee164e83187fe4f4609368502390a0337903f9', 'width': 3200}, 'variants': {}}]} |
Issues running oobabooga_windows after reinstall | 3 | [removed] | 2023-05-20T17:42:30 | [deleted] | 2023-05-20T18:09:55 | 0 | {} | 13n175h | false | null | t3_13n175h | /r/LocalLLaMA/comments/13n175h/issues_running_oobabooga_windows_after_reinstall/ | false | false | default | 3 | null |
Install Ooba textgen + llama.cpp with GPU support on Windows via WSL2 | 11 | 2023-05-20T17:45:03 | https://pastebin.com/GwSysUxj | _underlines_ | pastebin.com | 1970-01-01T00:00:00 | 0 | {} | 13n19cu | false | null | t3_13n19cu | /r/LocalLLaMA/comments/13n19cu/install_ooba_textgen_llamacpp_with_gpu_support_on/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?width=108&crop=smart&auto=webp&s=df9c6a296446d05d873c629a30253398c4d29c1b', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?auto=webp&s=07c121a0180003f7373863af66192b6ff6a937da', 'width': 150}, 'variants': {}}]} |
Tired of errors | 2 | I have spent the whole day trying to work with a LlaMa model and have tried different versions of different models. I have also tried different version of the library to no avail. I keep getting the following error:
error loading model: unknown (magic, version) combination: 67676a74, 00000003; is this really a GGML file? llama\_init\_from\_file: failed to load model
Can anyone tell me a version of llama-cpp-python and a LlaMa model that are guaranteed to work together? | 2023-05-20T17:49:11 | https://www.reddit.com/r/LocalLLaMA/comments/13n1cwg/tired_of_errors/ | Horcruxno13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13n1cwg | false | null | t3_13n1cwg | /r/LocalLLaMA/comments/13n1cwg/tired_of_errors/ | false | false | self | 2 | null |
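That `(magic, version) combination: 67676a74, 00000003` decodes to the `ggjt` container, version 3, i.e. a GGML file format newer than what the installed llama-cpp-python understands, so upgrading the library (or grabbing an older-format model file) is the usual fix. A small sketch for inspecting a file's magic and version before loading (the magic table reflects the common GGML variants; double-check it against your llama.cpp version):

```python
import struct

# Known GGML container magics (stored on disk as a little-endian uint32).
MAGICS = {
    0x67676D6C: "ggml (unversioned, no version field)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (versioned, mmap-able)",
}

def inspect_ggml(path: str):
    """Return (format name, version) read from the file header.

    For unversioned 'ggml' files the second value is not a real
    version field, so ignore it in that case."""
    with open(path, "rb") as f:
        magic, version = struct.unpack("<II", f.read(8))
    return MAGICS.get(magic, f"unknown magic 0x{magic:08x}"), version
```

Running this on the failing file tells you whether the model is simply too new (or too old) for your library rather than corrupted.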
Is there such a thing as local Llamas integrated into VSCode? | 17 | Hi, I’m wondering if someone already created a plug-in for using these models directly from VSCode?
It should be pretty trivial to connect a VSCode plugin to the text-generation-web-ui API, and it could be interesting when used with models that can generate code.
If there’s not one already, I’d look into building it, anyone else interested? | 2023-05-20T19:15:41 | https://www.reddit.com/r/LocalLLaMA/comments/13n3ikw/is_there_such_a_thing_as_local_llamas_integrated/ | rustedbits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13n3ikw | false | null | t3_13n3ikw | /r/LocalLLaMA/comments/13n3ikw/is_there_such_a_thing_as_local_llamas_integrated/ | false | false | self | 17 | null |
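The glue code is indeed small. A hedged sketch of the client side, assuming text-generation-webui is running with `--api` and exposing the blocking `/api/v1/generate` endpoint on its default port (check your version's API docs; the endpoint and payload fields have changed between releases):

```python
import json
from urllib import request

API_URL = "http://localhost:5000/api/v1/generate"  # adjust to your setup

def make_payload(code_context: str, instruction: str, max_new_tokens: int = 200) -> dict:
    # The prompt template is an assumption -- match it to whatever model you load.
    prompt = f"### Instruction:\n{instruction}\n\n{code_context}\n### Response:\n"
    return {"prompt": prompt, "max_new_tokens": max_new_tokens, "temperature": 0.2}

def extract_text(response_body: bytes) -> str:
    """Pull the generated text out of the API's JSON response."""
    return json.loads(response_body)["results"][0]["text"]

def complete(code_context: str, instruction: str) -> str:
    """Send the selected editor text to the local API and return the completion."""
    data = json.dumps(make_payload(code_context, instruction)).encode()
    req = request.Request(API_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return extract_text(resp.read())
```

A VSCode extension would call something equivalent from its command handler and insert the returned text at the cursor.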
Why can't we use edge TPUs (like coral AI) with inferencing? | 5 | I ask this from extreme amounts of ignorance: if we can load a full model into system RAM, what's stopping the TPU from participating in calculations?
Or is it just that GPU ram is that much faster than system ram and that's the real reason they're better than CPUs? | 2023-05-20T20:07:18 | https://www.reddit.com/r/LocalLLaMA/comments/13n613w/why_cant_we_use_edge_tpus_like_coral_ai_with/ | noneabove1182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13n613w | false | null | t3_13n613w | /r/LocalLLaMA/comments/13n613w/why_cant_we_use_edge_tpus_like_coral_ai_with/ | false | false | self | 5 | null |
Running a small model on a phone? | 8 | Anyone have any luck running a small model on a phone? I've seen the llama.cpp repo and it has a section on getting a model running on Android.
Thanks for the downvotes, much appreciated.
​ | 2023-05-20T20:12:27 | https://www.reddit.com/r/LocalLLaMA/comments/13n6c3z/running_a_small_model_on_a_phone/ | ScoobySnackzz12 | self.LocalLLaMA | 2023-05-21T13:24:58 | 0 | {} | 13n6c3z | false | null | t3_13n6c3z | /r/LocalLLaMA/comments/13n6c3z/running_a_small_model_on_a_phone/ | false | false | self | 8 | null |
My results using a Tesla P40 | 101 | TL;DR at bottom
So like many of you, I fell down the AI text gen rabbit hole. My wife has been severely addicted to all things chat AI, so it was only natural. Our previous server was running a 3500 core i-5 from over a decade ago, so we figured this would be the best time to upgrade. We got a P40 as well for gits and shiggles because if it works, great, if not, not a big investment loss and since we're upgrading the server, might as well see what we can do.
For reference, mine and my wife's PCs are identical with the exception of GPU.
Our home systems are:
Ryzen 5 3800X, 64gb memory each. My GPU is a RTX 4080, hers is a RTX 2080.
Using the Alpaca 13b model, I can achieve \~16 tokens/sec when in instruct mode. My wife can get \~5 tokens/sec (but she's having to use the 7b model because of VRAM limitations). She also switched to mostly CPU so she can use larger models, so she hasn't been using her GPU.
We initially plugged in the P40 on her system (couldn't pull the 2080 because the CPU didn't have integrated graphics and still needed a video out). Nvidia griped because of the difference between datacenter drivers and typical drivers. Once drivers were sorted, it worked like absolute crap. Windows was forcing shared VRAM, and even though we could show via the command 'nvidia-smi' that the P40 was being used exclusively, either text gen or Windows kept trying to share the load through the PCI bus. Long story short, got \~2.5 tokens/sec with the 30b model.
Finished building the new server this morning. i7 13700 w/64g ram. Since this was a dedicated box and with integrated graphics, we went solid datacenter drivers. No issues whatsoever. 13b model achieved \~15 tokens/sec. 30b model achieved 8-9 tokens/sec. When using text gen's streaming, it looked as fast as ChatGPT.
TL;DR
7b alpaca model on a 2080 : \~5 tokens/sec
13b alpaca model on a 4080: \~16 tokens/sec
13b alpaca model on a P40: \~15 tokens/sec
30b alpaca model on a P40: \~8-9 tokens/sec
Next step is attaching a blower via 3D printed cowling because the card gets HOT despite having some solid airflow in the server chassis then, picking up a second P40 and an NVLink bridge to then attempt to run a 65b model. | 2023-05-20T20:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/13n8bqh/my_results_using_a_tesla_p40/ | AsheramL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13n8bqh | false | null | t3_13n8bqh | /r/LocalLLaMA/comments/13n8bqh/my_results_using_a_tesla_p40/ | false | false | self | 101 | null |
Combining weights (fine tunes of the same base) | 3 | [deleted] | 2023-05-20T21:35:21 | [deleted] | 2023-06-23T07:06:10 | 0 | {} | 13n9u58 | false | null | t3_13n9u58 | /r/LocalLLaMA/comments/13n9u58/combining_weights_fine_tunes_of_the_same_base/ | false | false | default | 3 | null |
Which file formats should I use if I have a GPU | 5 | I understand that there are different models. I know of three interfaces: gpt4all, llama.ccp and oogabooga. I don't understand file formats. The latest Manticore releases comes in different formats. What combo of interface/format should I use for a 3080? | 2023-05-20T21:45:54 | https://www.reddit.com/r/LocalLLaMA/comments/13na3a5/which_file_formats_should_i_use_if_i_have_a_gpu/ | infohawk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13na3a5 | false | null | t3_13na3a5 | /r/LocalLLaMA/comments/13na3a5/which_file_formats_should_i_use_if_i_have_a_gpu/ | false | false | self | 5 | null |
Best model for game "AI agent"? | 11 | Hi,
So basically I'm looking for a model suitable to be used as an AI agent back-end.
What I expect from it is to be really good at following instructions (and also not too restrictive when it comes to violence).
I was trying out requests like this:
*Player is doing X, saying Y, you are this NPC XY, your possible actions are X,Y,Z and you can say something. Please provide answer in format {Action, LineToSay}*
Is there somebody who is playing with the same thing and wants to share their experience/templates with some models? I was playing around with few of them, gpt4-alpaca, vicuna, wizardlm but I wasn't able to get it behave in consistent way.
It refuses to follow the format, or it follows the format but it just starts listing out all possible options, or it starts going on a rampage by writing for the player too, etc...
7B model would be ideal and I don't care about license that much. This isn't something I plan to distribute. | 2023-05-20T22:00:50 | https://www.reddit.com/r/LocalLLaMA/comments/13naggc/best_model_for_game_ai_agent/ | kamtar | self.LocalLLaMA | 2023-05-20T22:04:48 | 0 | {} | 13naggc | false | null | t3_13naggc | /r/LocalLLaMA/comments/13naggc/best_model_for_game_ai_agent/ | false | false | self | 11 | null |
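One thing that helps with format drift is to parse the reply defensively and retry when it doesn't match. A rough sketch of the prompt builder and a tolerant parser for the `{Action, LineToSay}` format (the template wording here is illustrative, not a tested recipe):

```python
import re

def npc_prompt(player_action, player_line, npc_name, actions):
    """Build the instruction shown to the model, per the format in the post."""
    acts = ", ".join(actions)
    return (
        f"Player is doing: {player_action}. Player says: \"{player_line}\".\n"
        f"You are {npc_name}. Your possible actions are: {acts}.\n"
        "Reply ONLY in the format {Action, LineToSay}.\n"
    )

# Accepts the first {..., ...} pair anywhere in the reply.
ANSWER_RE = re.compile(r"\{\s*([^,{}]+?)\s*,\s*([^{}]+?)\s*\}")

def parse_answer(text, allowed_actions):
    """Return (action, line) or None if the model went off-script,
    in which case the caller can retry with a stricter reminder."""
    m = ANSWER_RE.search(text)
    if not m:
        return None
    action, line = m.group(1).strip(), m.group(2).strip()
    if action not in allowed_actions:
        return None
    return action, line
```

Validating the action against the whitelist also catches the "lists every option" failure mode, since those replies won't reduce to a single allowed action.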
Need Assistance Running the starcoder-GPTQ-4bit-128g Model with missing config.json in text-generation-webui | 7 | Hello LocalLLaMA community,
I've been trying to load the [starcoder-GPTQ-4bit-128g model](https://huggingface.co/mayank31398/starcoder-GPTQ-4bit-128g) into the [text-generation-webui by oobabooga](https://github.com/oobabooga/text-generation-webui) but have run into some difficulties due to missing files. Specifically, the model appears to lack necessary configuration files like 'config.json'.
In an attempt to resolve this, I tried copying the config files from the [starcoder model on Hugging Face](https://huggingface.co/bigcode/starcoder/tree/main). This allowed the model to load, but unfortunately when I try to get a response from the model, the following error shows up:
`next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)`
`RuntimeError: probability tensor contains either 'inf', 'nan' or element < 0`
My level of experience doesn't extend to modifying these files or troubleshooting this error.
I'm reaching out in hopes that someone in this community might have encountered a similar situation and could share some advice or guidance on how to proceed.
If anyone has insight on the RuntimeError I'm encountering, I'd greatly appreciate your thoughts.
Any help or advice would be greatly appreciated. Thank you! | 2023-05-20T22:10:20 | https://www.reddit.com/r/LocalLLaMA/comments/13naoyg/need_assistance_running_the_starcodergptq4bit128g/ | LaCipe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13naoyg | false | null | t3_13naoyg | /r/LocalLLaMA/comments/13naoyg/need_assistance_running_the_starcodergptq4bit128g/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'fg9qOeYrOPWrI8Sr0baIRR_z7q7sym25M66JFFcrTAg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=108&crop=smart&auto=webp&s=b523133e0a3b86ea433e83f4780fd2f724ecbe64', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=216&crop=smart&auto=webp&s=9b476110ef5070e809421db0dd27878de62ddf7c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=320&crop=smart&auto=webp&s=84134154d4eab25bc4ad57a478693f8b7edc4f8b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=640&crop=smart&auto=webp&s=24384160e741e4711888d7395e7957e4fc5a0abc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=960&crop=smart&auto=webp&s=f060994a6fad64106bbe2ac339db12365720f449', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=1080&crop=smart&auto=webp&s=653f2d44897f05ba8e0dc759d2a39f901c1fbf88', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?auto=webp&s=ca2cb5b6a069e64bbd46d3ccad463d1cbfe86411', 'width': 1200}, 'variants': {}}]} |
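For context, that `RuntimeError` means the probability tensor handed to `torch.multinomial` contained `inf`/`nan`/negative values. That is usually a sign the weights and config don't match (wrong `wbits`/`groupsize`, or a config.json borrowed from a different model, as here) rather than a sampling bug, so fixing the load is the real solution. Purely to illustrate what the check is guarding against, here is the same sanity logic in plain Python:

```python
import math

def sanitize_probs(probs):
    """Replace non-finite or negative entries and renormalize, mirroring
    the guard torch applies to a probability vector before sampling."""
    cleaned = [p if (math.isfinite(p) and p > 0.0) else 0.0 for p in probs]
    total = sum(cleaned)
    if total == 0.0:
        # Everything was bad -- fall back to a uniform distribution.
        return [1.0 / len(probs)] * len(probs)
    return [p / total for p in cleaned]
```

If every logit comes out non-finite (the all-bad branch above), no amount of sampling tweaks will help; the model files themselves need the matching configuration.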
Powerpointer - Generate entire powerpoints using local large language models | 66 | Hello community
I created a powerpoint generator which can generate diverse, good looking powerpoints using models like Vicuna or Alpaca. It uses text-generation-webui --api as a backend, so you can use text-generation-webui and the generator at the same time.
You can select between 7 powerpoint designs and it also makes placeholders for images!
Here is the code which also shows some example powerpoint generations: [CyberTimon/Powerpointer-For-Local-LLMs: Local Powerpointer](https://github.com/CyberTimon/Powerpointer-For-Local-LLMs)
Please give it a start if you like it and feel free to open issues or fix my code. | 2023-05-20T22:42:24 | https://www.reddit.com/r/LocalLLaMA/comments/13nbfvz/powerpointer_generate_entire_powerpoints_using/ | zBlackVision11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nbfvz | false | null | t3_13nbfvz | /r/LocalLLaMA/comments/13nbfvz/powerpointer_generate_entire_powerpoints_using/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': 'pdGxjYtEWUuyNxeS-Pa-skIbwqcKhZ-Vh3KV8j4saeo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?width=108&crop=smart&auto=webp&s=d482271b3b69892d052f7b2a25fc4dfffbb727a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?width=216&crop=smart&auto=webp&s=72301fc6f91b765ac2e32bede043c8dc9180b70c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?width=320&crop=smart&auto=webp&s=385be92892518a1bc6ebfe83b40cf26ec1b7c380', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?width=640&crop=smart&auto=webp&s=8998ae0344b665f0f3a1bd2e3b3a9ee00df5bec4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?width=960&crop=smart&auto=webp&s=aa57fd63603a48e6c3f3279353b542747386274b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?width=1080&crop=smart&auto=webp&s=08624a3cc2fdac65f624a7645a776567bbd417f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?auto=webp&s=b7b7e0074d7391fba22294cde6544733b6dcf6dc', 'width': 1200}, 'variants': {}}]} |
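For anyone wanting to build something similar, the core of such a generator is parsing the model's slide outline into structured data before handing it to a pptx library. A sketch, assuming a simple `Title:` / `-` bullet output format (the format is an assumption -- match it to the prompt you actually use):

```python
def parse_slides(model_output: str):
    """Split model output of the form 'Title: ...' followed by '- bullet'
    lines into a list of (title, bullets) pairs."""
    slides, title, bullets = [], None, []
    for line in model_output.splitlines():
        line = line.strip()
        if line.lower().startswith("title:"):
            if title is not None:
                slides.append((title, bullets))
            title, bullets = line[6:].strip(), []
        elif line.startswith("-"):
            bullets.append(line.lstrip("- ").strip())
    if title is not None:
        slides.append((title, bullets))
    return slides
```

Each `(title, bullets)` pair then maps directly onto one slide layout in whatever presentation library you choose.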
Weird Dual GPU setup in Ooba | 3 | I recently got a 2nd video card with more VRAM (a6000 48gb) to try and load some bigger models, however I am trying to do a weird setup — I am trying to load 0mb into my first card and have the entire model loaded into my 2nd card (as I don’t want to slow things down inferencing across both cards if it will all fit into the 2nd card)
I have tried using --gpu-memory 0 47 which looks like it will load the model into the 2nd card but ~2GB gets loaded into the first card during inference..
Using a 30b model I'm getting **20 tokens/sec** using --gpu-memory 24 0 (and everything is being done on my first device) and I am getting **2 tokens/sec** using --gpu-memory 0 48.. (with it doing this weird "split"?)
This can’t be normal? Is there anything I can do? I am totally new to dual GPU setups, so maybe I need to manually set a device map? Any help? | 2023-05-21T00:03:39 | https://www.reddit.com/r/LocalLLaMA/comments/13nd96e/weird_dual_gpu_setup_in_ooba/ | disarmyouwitha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nd96e | false | null | t3_13nd96e | /r/LocalLLaMA/comments/13nd96e/weird_dual_gpu_setup_in_ooba/ | false | false | self | 3 | null |
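Two things commonly suggested for this kind of setup: hide the first GPU entirely with `CUDA_VISIBLE_DEVICES=1` so nothing can spill onto it, or pass an explicit `max_memory` map when loading with `device_map="auto"`. A tiny helper for the latter (illustrative; the GiB figures are examples, and the commented usage line is hypothetical):

```python
def max_memory_map(vram_gib_per_gpu, cpu_gib=0):
    """Build the max_memory dict understood by transformers/accelerate
    when loading with device_map='auto'.  Setting a GPU to '0GiB'
    tells the dispatcher to place no weights on it."""
    mm = {i: f"{gib}GiB" for i, gib in enumerate(vram_gib_per_gpu)}
    if cpu_gib:
        mm["cpu"] = f"{cpu_gib}GiB"
    return mm

# Hypothetical usage with transformers:
# model = AutoModelForCausalLM.from_pretrained(
#     model_name, device_map="auto", max_memory=max_memory_map([0, 47]))
```

Note that even with a `0GiB` entry, CUDA itself still allocates a small context on every visible device, which is why hiding the card via `CUDA_VISIBLE_DEVICES` is the more airtight option.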
In July, the GeForce RTX 4060 Ti (16GB) arrives in stores, starting at $499 | 39 | This seems like a slow card for gamers compared to other options, but a good value for language models due to price and 16 GB VRAM? | 2023-05-21T01:24:49 | https://www.nvidia.com/en-us/geforce/news/geforce-rtx-4060-4060ti/ | ThePseudoMcCoy | nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 13nf0mw | false | null | t3_13nf0mw | /r/LocalLLaMA/comments/13nf0mw/in_july_the_geforce_rtx_4060_ti_16gb_arrives_in/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'c5hFdn7sJoD-8ROp9zMR2A6pfZmGcUQsIr1CDmLSbOU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?width=108&crop=smart&auto=webp&s=7fdc802fa250f3ba72ccff76e15013d09210af70', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?width=216&crop=smart&auto=webp&s=38d6fa2a96c0fdd0de144e598cef5986530c8ff3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?width=320&crop=smart&auto=webp&s=353a6504e1aaed372ee68be9249f95281968b06c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?width=640&crop=smart&auto=webp&s=59c15e3198002c1b9556a0c89b80ef71d11f7272', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?width=960&crop=smart&auto=webp&s=cba78bcd84bf8d3dbc0db3175e28184dcf34059c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?width=1080&crop=smart&auto=webp&s=e1d8abb3bf93620058fdd10b25519cd8c626d625', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?auto=webp&s=bdbd9e15baafc91a9b9cf3d29016688306f68d2c', 'width': 1200}, 'variants': {}}]} |
Error with no groupsize | 1 | If I try to run a GPTQ model made with no groupsize, like VicUnlocked-30B-LoRA-GPTQ, in ooba, I get this error:
>NO\_GROUP: tl.constexpr, BLOCK\_SIZE\_M: tl.constexpr, BLOCK\_SIZE\_N: tl.constexpr, BLOCK\_SIZE\_K: tl.constexpr, GROUP\_SIZE\_M: tl.constexpr):“”"Compute the matrix multiplication C = A x B.A is of shape (M, K) float16B is of shape (K//8, N) int32C is of shape (M, N) float16scales is of shape (G, N) float16zeros is of shape (G, N) float16g\_ptr is of shape (K) int32“”"infearure\_per\_bits = 32 // bits
But if the model has a groupsize it works. I tried openassistants 30b no-groupsize gptq model and 128-groupsize gptq model and the no groupsize model gives the same error while the 128 groupsize model loads correctly.
My settings:
https://preview.redd.it/u8nyfua1r31b1.png?width=2524&format=png&auto=webp&s=1601d7ebf659430094008a3cf98e2991df723228
https://preview.redd.it/miwaucf2r31b1.png?width=2531&format=png&auto=webp&s=b75586a473708c29a96037e21a2b42932bf21fba
I am running this on runpod servers if that makes a difference.
I inputted the settings in gptq as per instructions for the models. How can I fix this? Any help appreciated.
Full error below if needed:
>Traceback (most recent call last):File “”, line 21, in matmul\_248\_kernelKeyError: (‘2-.-0-.-0-1e8410f206c822547fb50e2ea86e45a6-d6252949da17ceb5f3a278a70250af13-3b85c7bef5f0a641282f3b73af50f599-3d2aedeb40d6d81c66a42791e268f98b-3498c340fd4b6ee7805fd54b882a04f5-e1f133f98d04093da2078dfc51c36b72-b26258bf01f839199e39d64851821f26-d7c06e3b46e708006c15224aac7a1378-f585402118c8a136948ce0a49cfe122c’, (torch.float16, torch.int32, torch.float16, torch.float16, torch.int32, torch.int32, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’), (True, 16, 32, 128, 8), (True, True, True, True, True, True, (False, True), (True, False), (True, False), (False, False), (False, False), (True, False), (False, True), (True, False), (False, True), (True, False), (False, True), (True, False), (True, False)))
>
>During handling of the above exception, another exception occurred:
>
>Traceback (most recent call last):File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 937, in build\_triton\_irgenerator.visit(fn.parse())File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 183, in visit\_Moduleast.NodeVisitor.generic\_visit(self, node)File “/usr/lib/python3.10/ast.py”, line 426, in generic\_visitself.visit(item)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 252, in visit\_FunctionDefhas\_ret = self.visit\_compound\_statement(node.body)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 177, in visit\_compound\_statementself.last\_ret\_type = self.visit(stmt)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 486, in visit\_Ifself.visit\_compound\_statement(node.body)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 177, in visit\_compound\_statementself.last\_ret\_type = self.visit(stmt)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 301, in visit\_Assignvalues = self.visit(node.value)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File 
“/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 338, in visit\_BinOplhs = self.visit(node.left)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 339, in visit\_BinOprhs = self.visit(node.right)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 618, in visit\_Subscriptlhs = self.visit(node.value)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 325, in visit\_Namereturn self.get\_value([node.id](http://node.id/))File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 156, in get\_valueraise ValueError(f’{name} is not defined’)ValueError: zeros\_shifter is not defined
>
>The above exception was the direct cause of the following exception:
>
>Traceback (most recent call last):File “/root/text-generation-webui/server.py”, line 68, in load\_model\_wrappershared.model, shared.tokenizer = load\_model(shared.model\_name)File “/root/text-generation-webui/modules/models.py”, line 95, in load\_modeloutput = load\_func(model\_name)File “/root/text-generation-webui/modules/models.py”, line 275, in GPTQ\_loadermodel = modules.GPTQ\_loader.load\_quantized(model\_name)File “/root/text-generation-webui/modules/GPTQ\_loader.py”, line 177, in load\_quantizedmodel = load\_quant(str(path\_to\_model), str(pt\_path), shared.args.wbits, shared.args.groupsize, kernel\_switch\_threshold=threshold)File “/root/text-generation-webui/modules/GPTQ\_loader.py”, line 96, in \_load\_quantquant.autotune\_warmup\_linear(model, transpose=not eval)File “/root/text-generation-webui/repositories/GPTQ-for-LLaMa/quant/quant\_linear.py”, line 436, in autotune\_warmup\_linearmatmul248(a, qweight, scales, qzeros, g\_idx, bits, maxq, no\_group)File “/root/text-generation-webui/repositories/GPTQ-for-LLaMa/quant/quant\_linear.py”, line 283, in matmul248matmul\_248\_kernel\[grid\](input, qweight, output, scales, qzeros, g\_idx, input.shape\[0\], qweight.shape\[1\], input.shape\[1\], bits, maxq, input.stride(0), input.stride(1), qweight.stride(0),File “/root/text-generation-webui/repositories/GPTQ-for-LLaMa/quant/custom\_autotune.py”, line 102, in runreturn self.fn.run(\*args, num\_warps=config.num\_warps, num\_stages=config.num\_stages, \*\*kwargs, \*\*config.kwargs)File “”, line 41, in matmul\_248\_kernelFile “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 1621, in compilenext\_module = compile(module)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 1550, inlambda src: ast\_to\_ttir(src, signature, configs\[0\], constants)),File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 962, in ast\_to\_ttirmod, \_ = build\_triton\_ir(fn, signature, specialization, constants)File 
“/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 942, in build\_triton\_irraise CompilationError(fn.src, node) from etriton.compiler.CompilationError: at 41:26:def matmul\_248\_kernel(a\_ptr, b\_ptr, c\_ptr, scales\_ptr, zeros\_ptr, g\_ptr, M, N, K, bits, maxq, stride\_am, stride\_ak, stride\_bk, stride\_bn, stride\_cm, stride\_cn, stride\_scales, stride\_zeros,NO\_GROUP: tl.constexpr, BLOCK\_SIZE\_M: tl.constexpr, BLOCK\_SIZE\_N: tl.constexpr, BLOCK\_SIZE\_K: tl.constexpr, GROUP\_SIZE\_M: tl.constexpr):“”"Compute the matrix multiplication C = A x B.A is of shape (M, K) float16B is of shape (K//8, N) int32C is of shape (M, N) float16scales is of shape (G, N) float16zeros is of shape (G, N) float16g\_ptr is of shape (K) int32“”"infearure\_per\_bits = 32 // bits
>
>pid = tl.program\_id(axis=0) num\_pid\_m = tl.cdiv(M, BLOCK\_SIZE\_M) num\_pid\_n = tl.cdiv(N, BLOCK\_SIZE\_N) num\_pid\_k = tl.cdiv(K, BLOCK\_SIZE\_K) num\_pid\_in\_group = GROUP\_SIZE\_M \* num\_pid\_n group\_id = pid // num\_pid\_in\_group first\_pid\_m = group\_id \* GROUP\_SIZE\_M group\_size\_m = min(num\_pid\_m - first\_pid\_m, GROUP\_SIZE\_M) pid\_m = first\_pid\_m + (pid % group\_size\_m) pid\_n = (pid % num\_pid\_in\_group) // group\_size\_m offs\_am = pid\_m \* BLOCK\_SIZE\_M + tl.arange(0, BLOCK\_SIZE\_M) offs\_bn = pid\_n \* BLOCK\_SIZE\_N + tl.arange(0, BLOCK\_SIZE\_N) offs\_k = tl.arange(0, BLOCK\_SIZE\_K) a\_ptrs = a\_ptr + (offs\_am\[:, None\] \* stride\_am + offs\_k\[None, :\] \* stride\_ak) # (BLOCK\_SIZE\_M, BLOCK\_SIZE\_K) a\_mask = (offs\_am\[:, None\] < M) # b\_ptrs is set up such that it repeats elements along the K axis 8 times b\_ptrs = b\_ptr + ((offs\_k\[:, None\] // infearure\_per\_bits) \* stride\_bk + offs\_bn\[None, :\] \* stride\_bn) # (BLOCK\_SIZE\_K, BLOCK\_SIZE\_N) g\_ptrs = g\_ptr + offs\_k # shifter is used to extract the N bits of each element in the 32-bit word from B scales\_ptrs = scales\_ptr + offs\_bn\[None, :\] zeros\_ptrs = zeros\_ptr + (offs\_bn\[None, :\] // infearure\_per\_bits) if NO\_GROUP: scales = tl.load(scales\_ptrs) # (BLOCK\_SIZE\_K, BLOCK\_SIZE\_N,) zeros = tl.load(zeros\_ptrs) # (BLOCK\_SIZE\_K, BLOCK\_SIZE\_N,) zeros = (zeros >> zeros\_shifter\[None, :\]) & maxq \^ | 2023-05-21T03:16:54 | https://www.reddit.com/r/LocalLLaMA/comments/13nhdng/error_with_no_groupsize/ | RabbitEater2 | self.LocalLLaMA | 2023-05-21T03:20:31 | 0 | {} | 13nhdng | false | null | t3_13nhdng | /r/LocalLLaMA/comments/13nhdng/error_with_no_groupsize/ | false | false | 1 | null |
I want to share with you my easy peasy llamacpp script | 27 | 2023-05-21T03:39:30 | https://asciinema.org/a/586215 | Evening_Ad6637 | asciinema.org | 1970-01-01T00:00:00 | 0 | {} | 13nhtw0 | false | null | t3_13nhtw0 | /r/LocalLLaMA/comments/13nhtw0/i_want_to_share_with_you_my_easy_peasy_llamacpp/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'sEnAY_QXHB893PK3xk6ClewTprQOoXmv7BOCm8Oh-RU', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?width=108&crop=smart&auto=webp&s=dd3ba7c980bf22b45c2d42823eccf61ec50c910b', 'width': 108}, {'height': 159, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?width=216&crop=smart&auto=webp&s=27f28de2cd3e119bbe31806cd04467ccda41f3f4', 'width': 216}, {'height': 236, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?width=320&crop=smart&auto=webp&s=2d708b059cc3db7502a0a06f7b7509f145f8d492', 'width': 320}, {'height': 472, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?width=640&crop=smart&auto=webp&s=01501384d1d2d2141b1f9417df4aa0be6f129781', 'width': 640}, {'height': 708, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?width=960&crop=smart&auto=webp&s=6d1cd6c7508dcba077a803cbb2550dd4c131f349', 'width': 960}, {'height': 797, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?width=1080&crop=smart&auto=webp&s=e748bf2a29fa36e70b67d384d6444674db8e0411', 'width': 1080}], 'source': {'height': 3685, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?auto=webp&s=f899abfee8504c416c9b39df70dcbb65af03c90a', 'width': 4993}, 'variants': {}}]} |
||
r/PygmalionAI/ sub is being disowned and being take over by Rouge mod we currently move to r/Pygmalion_ai/ | 1 | [removed] | 2023-05-21T03:40:10 | https://www.reddit.com/r/LocalLLaMA/comments/13nhudv/rpygmalionai_sub_is_being_disowned_and_being_take/ | Merchant_Lawrence | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nhudv | false | null | t3_13nhudv | /r/LocalLLaMA/comments/13nhudv/rpygmalionai_sub_is_being_disowned_and_being_take/ | false | false | default | 1 | null |
Following u/faldore's amazing work, I'd like to create an uncensored version of the Alpacino30b model, and am seeking sponsors to help cover the compute costs. | 35 | [removed] | 2023-05-21T03:43:24 | https://www.reddit.com/r/LocalLLaMA/comments/13nhwsh/following_ufaldores_amazing_work_id_like_to/ | sardoa11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nhwsh | false | null | t3_13nhwsh | /r/LocalLLaMA/comments/13nhwsh/following_ufaldores_amazing_work_id_like_to/ | false | false | default | 35 | null |
What coding llm is the best? | 75 | So besides GPT4, I have found Codeium to be the best imo. Phind is good for a search engine/code engine. However, I have seen interesting tests with Starcoder. Although none of these are capable of programming simple projects yet in my experience.
​
What has your experience been? Thank you. | 2023-05-21T04:49:07 | https://www.reddit.com/r/LocalLLaMA/comments/13nj7g8/what_coding_llm_is_the_best/ | ArmoredBattalion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nj7g8 | false | null | t3_13nj7g8 | /r/LocalLLaMA/comments/13nj7g8/what_coding_llm_is_the_best/ | false | false | self | 75 | null |
Llama.cpp master-fab49c6 Issues | 2 | [removed] | 2023-05-21T05:30:02 | https://www.reddit.com/r/LocalLLaMA/comments/13njzj7/llamacpp_masterfab49c6_issues/ | mrjackspade | self.LocalLLaMA | 2023-05-21T06:52:07 | 0 | {} | 13njzj7 | false | null | t3_13njzj7 | /r/LocalLLaMA/comments/13njzj7/llamacpp_masterfab49c6_issues/ | false | false | default | 2 | null |
Server. Build/questions. | 3 |
Hey guys, I have a few questions. I'd like to build a server with a few GPUs in it so a couple of guys and I can use it remotely. Is that possible, and how would I set it up?
Second question: if I'm going for VRAM on a budget, could I just use some Nvidia Tesla K80s? It looks like they have 24 GB of GDDR5 and cost $150. If you have any info at all, it would be greatly appreciated.
Thanks! | 2023-05-21T06:28:55 | https://www.reddit.com/r/LocalLLaMA/comments/13nl2o0/server_buildquestions/ | Emergency-Seaweed-73 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nl2o0 | false | null | t3_13nl2o0 | /r/LocalLLaMA/comments/13nl2o0/server_buildquestions/ | false | false | self | 3 | null |
How do I get a local LLM to analyze a whole Excel or CSV? | 13 | I am trying to tinker with the idea of ingesting a CSV with multiple rows, with numeric and categorical features, and then extracting insights from that document.
Expectation - the local LLM will go through the Excel sheet, identify a few patterns, and provide some key insights
Right now, I went through various local versions of ChatPDF, and what they do are basically the same concept.
1. Create Embeddings
2. Store them in Vector Database
3. When we ask Local LLM, they just select top few document instances based on top p and k
4. Give answer based on that
But any question which needs the whole document, fails.
e.g. - What is the average age of the users? (Assuming CSV has age/Gender etc. demographic variables)
What are the insights based on age and gender?
​
Is there any implementation guideline or ready made implementation example which can help me with this? | 2023-05-21T06:44:49 | https://www.reddit.com/r/LocalLLaMA/comments/13nld5x/how_do_i_get_local_llm_to_analyze_an_whole_excel/ | DesmonMiles07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nld5x | false | null | t3_13nld5x | /r/LocalLLaMA/comments/13nld5x/how_do_i_get_local_llm_to_analyze_an_whole_excel/ | false | false | self | 13 | null |
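One workaround for whole-document questions like "average age": compute the aggregates in ordinary code first, then hand the summary to the LLM instead of (or alongside) the retrieved chunks. A minimal stdlib-only sketch (function names and the prompt format are illustrative assumptions):

```python
import csv
import io
import statistics

def summarize_csv(csv_text, numeric_cols):
    """Compute whole-file aggregates that top-k chunk retrieval can't see."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    summary = {}
    for col in numeric_cols:
        values = [float(r[col]) for r in rows if r.get(col)]
        summary[col] = {"mean": statistics.mean(values),
                        "min": min(values), "max": max(values)}
    return summary

def build_prompt(question, summary):
    """Hand the precomputed numbers to the LLM so it answers from real data."""
    return (f"Dataset summary: {summary}\n"
            f"Question: {question}\n"
            f"Answer using only the summary above.")

data = "age,gender\n34,F\n40,M\n28,F\n"
print(build_prompt("What is the average age of the users?",
                   summarize_csv(data, ["age"])))
```

The same idea extends to group-bys (insights by age and gender): pandas or SQL does the arithmetic, and the model only interprets the resulting table.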
Help with getting started implementing long-term memory on a local model? | 9 | I've been learning everything I can about this stuff, but am having trouble finding actual examples of code to alter and learn from; most of the information I can find is more theoretical, so I'm hoping maybe someone could point me in the right direction.
I upgraded my graphics card to a 3060 so I can run the dolly 2.0 3b model. I want to train it to learn information about a business to create a chat bot that can talk coherently about it, or if that isn't feasible I can at least use it to get context about user-submitted data.
I don't think I need a particularly powerful model, because I plan to use Microsoft's guidance to guide it and get very specific information, so its conversational abilities aren't as important as being able to just recognize certain keywords that a user enters, even if they're slightly misspelled.
Maybe it's overkill to use for this purpose, but I also want to learn about the technology more.
Anyway, I'm interested in implementing some sort of persistent memory so it can remember the entire conversation with a user, and pull data about a business's products, policies, etc.
It seems the way to do this is llama\_index or langchain, or both, using either a vector database or (I've read) a SQL database.
I just can't find any actual examples of code to create memory, I can only find videos and articles talking about the general process.
Could anyone point me in the right direction for what I need to do exactly? For example if I look at the docs for llama\_index, it only shows how to use it with OpenAI, but I'm using dolly 2.0, [here's](https://pastebin.com/FivVxeHW) the code...
If anyone could give me a code snippet or a working example I could alter or something, I would appreciate it. I've been trying for days now.
Also, for some reason the current implementation I have repeats the question I ask, is that normal? Here is my chat transcript from talking to it.
You: hey there
Chatbot: hey there, I'm a big fan of your blog and look forward to reading it every day! I'm also a big fan of the new look and feel. I was wondering if you could share with us the process of how you came
You: what blog?
Chatbot: what blog? I’m a writer and editor based in London. I’ve been a fan of the NFL since the early 1990s, when I was a kid in New Jersey. I’ve been a fan of the New England
You: who are you?
Chatbot: who are you? I am a software engineer at Databricks. I am also a big fan of the Databricks blog and Databricks Summit. I am also a big fan of the Databricks blog and Databricks
You: what is your favorite color?
Chatbot: what is your favorite color? blue blue is my favorite color. I love the color of the ocean, the sky, and the ocean when the tide is out. I also love the color of the ocean when it's stormy
​
So yeah, if anyone could help me with the memory thing, or training it, or if I do train it then should I wait until I have absolutely all the data it would ever need, or can I train it over time to get better at my specific needs? Just a little lost with what my next steps should be now that I have a somewhat coherent model working locally which is very exciting. | 2023-05-21T07:06:48 | https://www.reddit.com/r/LocalLLaMA/comments/13nlrzp/help_with_getting_started_implementing_longterm/ | TheNomadicAspie | self.LocalLLaMA | 2023-05-21T07:11:28 | 0 | {} | 13nlrzp | false | null | t3_13nlrzp | /r/LocalLLaMA/comments/13nlrzp/help_with_getting_started_implementing_longterm/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?width=108&crop=smart&auto=webp&s=df9c6a296446d05d873c629a30253398c4d29c1b', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?auto=webp&s=07c121a0180003f7373863af66192b6ff6a937da', 'width': 150}, 'variants': {}}]} |
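Until someone shares a full langchain/llama\_index example, it may help to see that the core of conversational "memory" is just prepending prior turns to each new prompt. A dependency-free sketch of a rolling conversation buffer (all names illustrative; vector-store memory replaces the "last n turns" rule with similarity retrieval over past exchanges):

```python
class ConversationMemory:
    """Minimal rolling-window memory: keep the last `max_turns` exchanges and
    prepend them to every prompt so the model 'remembers' the conversation.
    Buffer memories in libraries like langchain are richer versions of this
    same idea."""

    def __init__(self, max_turns=10):
        self.turns = []
        self.max_turns = max_turns

    def add(self, user_msg, bot_msg):
        self.turns.append((user_msg, bot_msg))
        self.turns = self.turns[-self.max_turns:]  # drop the oldest exchanges

    def build_prompt(self, user_msg):
        history = "\n".join(f"User: {u}\nBot: {b}" for u, b in self.turns)
        return f"{history}\nUser: {user_msg}\nBot:"

mem = ConversationMemory()
mem.add("hey there", "Hi! How can I help?")
print(mem.build_prompt("what did I just say?"))
```

The transcript above also suggests the prompt is being echoed back, which is usually a pipeline setting (`return_full_text`) rather than a memory problem.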
Is the Nvidia K80 with 24GB usable for LLMs? | 11 | Whilst looking for used P40s, I came across some VERY cheap K80s (whatever they are).
Are they usable for running LLMs?
(I don't care about the speed too much .. but the 24GB could be useful) | 2023-05-21T07:27:26 | https://www.reddit.com/r/LocalLLaMA/comments/13nm5ox/is_the_nvidia_k80_with_24gb_usable_for_llms/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nm5ox | false | null | t3_13nm5ox | /r/LocalLLaMA/comments/13nm5ox/is_the_nvidia_k80_with_24gb_usable_for_llms/ | false | false | self | 11 | null |
Models are repeating text several times? | 5 | For some reason with several models, if I submit a prompt I get an answer repeated over and over, rather than just generating it once. For example, the below code...
`from langchain.llms import HuggingFacePipeline`
`import torch`
`from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline`
`model_id = 'databricks/dolly-v2-3b'`
`tokenizer = AutoTokenizer.from_pretrained(model_id)`
`model = AutoModelForCausalLM.from_pretrained(model_id)`
`pipe = pipeline(`
`"text-generation",`
`model=model,`
`tokenizer=tokenizer,`
`max_length=100`
`)`
`local_llm = HuggingFacePipeline(pipeline=pipe)`
`response = local_llm('What is the capital of France? ')`
`print(response)`
This was the output.
`✘ thenomadicaspie@amethyst ~/ai python app.py`
`Could not import azure.core python package.`
`Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers`
`pip install xformers.`
`Setting pad_token_id to eos_token_id:0 for open-end generation.`
`The capital of France is Paris.`
`What is the capital of France?`
`The capital of France is Paris.`
`What is the capital of France?`
`The capital of France is Paris.`
`What is the capital of France?`
`The capital of France is Paris.`
`What is the capital of France?`
`The capital of France is Paris.`
`What is the capital of France?`
`The`
While researching, I've read answers that say it has to do with the max token length, but surely I can't be expected to set the exact token length it needs to be, right? The idea is that it's a maximum, not that it will continue generating text to fill up the max tokens?
What am I missing? | 2023-05-21T07:32:36 | https://www.reddit.com/r/LocalLLaMA/comments/13nm96l/models_are_repeating_text_several_times/ | TheNomadicAspie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nm96l | false | null | t3_13nm96l | /r/LocalLLaMA/comments/13nm96l/models_are_repeating_text_several_times/ | false | false | self | 5 | null |
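For what it's worth, two pipeline settings are commonly behind both symptoms here: `max_length` counts the prompt tokens too (prefer `max_new_tokens`, which bounds only the generated text), and `return_full_text` defaults to `True` for text-generation pipelines, which is why the question gets echoed back (`return_full_text=False` stops that). The stopping rule itself looks like this toy loop (purely illustrative, no model involved):

```python
def generate(step_fn, eos_token, max_new_tokens):
    """Toy decode loop: stop at EOS or at the token budget, whichever first.
    If EOS handling is off or the budget is huge, generation keeps going,
    which looks exactly like the repeated Q/A loops above."""
    out = []
    for _ in range(max_new_tokens):
        tok = step_fn(out)
        if tok == eos_token:
            break  # a well-behaved model emits EOS right after its answer
        out.append(tok)
    return out

answer = iter(["Paris", ".", "<eos>", "What", "is", "the", "capital"])
print(generate(lambda out: next(answer), "<eos>", max_new_tokens=100))  # -> ['Paris', '.']
```

So no, you don't need to guess the exact length; you need the loop to see a stop condition before the budget runs out.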
Help can't get reply from MetalX Alpasta 30b 4bit | 1 | [removed] | 2023-05-21T08:16:36 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13nn1xl | false | null | t3_13nn1xl | /r/LocalLLaMA/comments/13nn1xl/help_cant_get_reply_from_metalx_alpasta_30b_4bit/ | false | false | default | 1 | null |
||
chart of gpu performance on llms | 1 | [removed] | 2023-05-21T08:55:21 | https://www.reddit.com/r/LocalLLaMA/comments/13nnqtz/chart_of_gpu_performance_on_llms/ | Kurdiwari | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nnqtz | false | null | t3_13nnqtz | /r/LocalLLaMA/comments/13nnqtz/chart_of_gpu_performance_on_llms/ | false | false | default | 1 | null |
KoboldCpp updated to v1.24, supports new GGJT v3 quantizations while still maintaining full backwards compatibility. | 101 | - Added support for the new GGJT v3 (q4_0, q4_1 and q8_0) quantization format changes.
- Still retains backwards compatibility with every single historical GGML format (GGML, GGHF, GGJT v1,2,3 + all other formats from supported architectures).
- Supports GGML models for LLAMA, RWKV, GPT-2, GPT-J and GPT-NeoX
- Supports accelerated prompt processing GPU offloading via CLBlast (LLAMA only).
- Kobold Lite has also been upgraded with a few cool new features:
- [Added a toggle to enable basic markdown in instruct mode (off by default). This allows the AI to respond with formatted text.](https://i.imgur.com/eOOEvJw.png)
- Added a toggle to avoid inserting newlines in Instruct mode (good for **Pygmalion, Metharme and OpenAssistant** based instruct models).
- Improved the spinning circle waiting animation to use less processing.
- Fixed a bug with stopping sequences when in streaming mode.
**https://github.com/LostRuins/koboldcpp/releases/latest** | 2023-05-21T12:25:24 | https://www.reddit.com/r/LocalLLaMA/comments/13nrp0r/koboldcpp_updated_to_v124_supports_new_ggjt_v3/ | HadesThrowaway | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nrp0r | false | null | t3_13nrp0r | /r/LocalLLaMA/comments/13nrp0r/koboldcpp_updated_to_v124_supports_new_ggjt_v3/ | false | false | self | 101 | {'enabled': False, 'images': [{'id': 'Cv2Fvh0FKLhQk-IG0MAQC7avJtRIUrN39qstdvEmMIE', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/mq20r6Fq4HWxfY3jCxdgz33uLlRCaqUtFnPp8nr1Lis.png?width=108&crop=smart&auto=webp&s=de24083f6f5b6c336aca77ede36252544a7ccc08', 'width': 108}, {'height': 204, 'url': 'https://external-preview.redd.it/mq20r6Fq4HWxfY3jCxdgz33uLlRCaqUtFnPp8nr1Lis.png?width=216&crop=smart&auto=webp&s=43aad8cede383d3fa6bd18f1ca3df33d60ebdf02', 'width': 216}, {'height': 302, 'url': 'https://external-preview.redd.it/mq20r6Fq4HWxfY3jCxdgz33uLlRCaqUtFnPp8nr1Lis.png?width=320&crop=smart&auto=webp&s=3263cef1a5b8739950f64ac6a3561777b54abcff', 'width': 320}, {'height': 604, 'url': 'https://external-preview.redd.it/mq20r6Fq4HWxfY3jCxdgz33uLlRCaqUtFnPp8nr1Lis.png?width=640&crop=smart&auto=webp&s=856b97a9b2099ef4e34bae3683dcb7a3abd22912', 'width': 640}], 'source': {'height': 887, 'url': 'https://external-preview.redd.it/mq20r6Fq4HWxfY3jCxdgz33uLlRCaqUtFnPp8nr1Lis.png?auto=webp&s=59485e332cfdc1f5923c371612c6f1261f236b06', 'width': 939}, 'variants': {}}]} |
llama.cpp running extremely slow via GPT4ALL | 2 | Hi all,
I compiled llama.cpp to use with GPT4ALL and is providing good output and I am happy with the results. However unfortunately for a simple matching question with perhaps 30 tokens, the output is taking 60 seconds. Output really only needs to be 3 tokens maximum but is never more than 10. I used the standard [GPT4ALL](https://github.com/nomic-ai/gpt4all), and compiled the backend with mingw64 using the directions found [here](https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-bindings/python/README.md). I did use a different fork of llama.cpp than found on reddit, but that was what the repo suggested due to compatibility issues. I am using wizard 7b for reference. | 2023-05-21T14:21:39 | https://www.reddit.com/r/LocalLLaMA/comments/13nwb60/llamacpp_running_extremely_slow_via_gpt4all/ | EmbarrassedSleep6981 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nwb60 | false | null | t3_13nwb60 | /r/LocalLLaMA/comments/13nwb60/llamacpp_running_extremely_slow_via_gpt4all/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'bgEVd_LAkjXY8gB-hohtJcoyOi6Hbzz3BRZs7CeIBug', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?width=108&crop=smart&auto=webp&s=c9ade8df62f42df46111a920f5aed6b39337dc11', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?width=216&crop=smart&auto=webp&s=125776cb6a2caa7f380113378af4744c0daf2350', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?width=320&crop=smart&auto=webp&s=e07667b89e42a31869f3e7887d30c0573e7c8bd9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?width=640&crop=smart&auto=webp&s=43988ed3288493d85e9dcdf629be922c9a206214', 'width': 640}, {'height': 480, 'url': 
'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?width=960&crop=smart&auto=webp&s=c1e00506ae60ca387aa621f148b7a2437ebe06a1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?width=1080&crop=smart&auto=webp&s=857363d117597263e03c33aef2dcff950c43c606', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?auto=webp&s=df6093d7ce9f855bed1870460186c2210effbb77', 'width': 1200}, 'variants': {}}]} |
Which Open LLMs can I run with RTX 3070ti locally?? | 1 | [removed] | 2023-05-21T17:22:35 | https://www.reddit.com/r/LocalLLaMA/comments/13o0qd9/which_open_llms_can_i_run_with_rtx_3070ti_locally/ | Hot_Adhesiveness_259 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13o0qd9 | false | null | t3_13o0qd9 | /r/LocalLLaMA/comments/13o0qd9/which_open_llms_can_i_run_with_rtx_3070ti_locally/ | false | false | default | 1 | null |
How fast will an RK3588s with a 6 TOPS NPU run LLaMa models? | 1 | [removed] | 2023-05-21T18:36:48 | https://www.reddit.com/r/LocalLLaMA/comments/13o2mem/how_fast_will_an_rk3588s_with_a_6_tops_npu_run/ | JeffersonTerrerfield | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13o2mem | false | null | t3_13o2mem | /r/LocalLLaMA/comments/13o2mem/how_fast_will_an_rk3588s_with_a_6_tops_npu_run/ | false | false | default | 1 | null |
airobors-13b - 98% eval vs gpt-3.5-turbo | 1 | [deleted] | 2023-05-21T21:12:52 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13o6icr | false | null | t3_13o6icr | /r/LocalLLaMA/comments/13o6icr/airobors13b_98_eval_vs_gpt35turbo/ | false | false | default | 1 | null |
||
airoboros-13b -- 98% against gpt-3.5 | 214 | [deleted] | 2023-05-21T21:15:31 | [deleted] | 2023-05-21T21:18:44 | 0 | {} | 13o6kp8 | false | null | t3_13o6kp8 | /r/LocalLLaMA/comments/13o6kp8/airoboros13b_98_against_gpt35/ | false | false | default | 214 | null |
||
Is there a subreddit for people interested in actually using a local LLM productively? | 1 | [removed] | 2023-05-21T21:56:07 | https://www.reddit.com/r/LocalLLaMA/comments/13o7lr6/is_there_a_subreddit_for_people_interested_in/ | Careful_Fee_642 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13o7lr6 | false | null | t3_13o7lr6 | /r/LocalLLaMA/comments/13o7lr6/is_there_a_subreddit_for_people_interested_in/ | false | false | default | 1 | null |
Any idea Vicuna 13B 4bit model output random content? | 2 | [removed] | 2023-05-22T02:09:55 | https://www.reddit.com/r/LocalLLaMA/comments/13oder5/any_idea_vicuna_13b_4bit_model_output_random/ | direwulf33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13oder5 | false | null | t3_13oder5 | /r/LocalLLaMA/comments/13oder5/any_idea_vicuna_13b_4bit_model_output_random/ | false | false | default | 2 | null |
Question about fine-tuning LLaMA-65B | 10 | How many 80GB A100s or H100s are required to fine-tune LLaMA-65B? I assume the VRAM requirements would be pretty much double what is required to fine-tune LLaMA-33B, but I'm not certain as I haven't fine-tuned any models before. | 2023-05-22T03:14:50 | https://www.reddit.com/r/LocalLLaMA/comments/13oeu66/question_about_finetuning_llama65b/ | Adventurous_Jelly276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13oeu66 | false | null | t3_13oeu66 | /r/LocalLLaMA/comments/13oeu66/question_about_finetuning_llama65b/ | false | false | self | 10 | null |
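As a hedged back-of-the-envelope (mixed-precision Adam accounting, ignoring activations, checkpointing, and framework overhead): full fine-tuning needs very roughly 16 bytes per parameter, so the footprint does scale about linearly from 33B to 65B, as guessed above. A sketch of the arithmetic:

```python
import math

def full_finetune_vram_gb(n_params_billion, bytes_per_param=16):
    """Assumption, not a measurement: fp16 weights + fp16 grads + fp32 Adam
    master weights and moment buffers come to roughly 16 bytes per
    parameter, before activations and overhead."""
    return n_params_billion * bytes_per_param

for size_b in (33, 65):
    gb = full_finetune_vram_gb(size_b)
    print(f"{size_b}B: ~{gb} GB -> roughly {math.ceil(gb / 80)} x 80 GB GPUs")
```

That works out to roughly 528 GB for 33B and about 1 TB for 65B before activations, i.e. on the order of 7 and 13+ 80 GB cards respectively; sharded optimizers and LoRA-style methods exist precisely to cut these numbers down.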
Training Large Language Models: Fluctuating Training Loss But Smooth Eval Loss, What's Happening? | 3 | Hey folks,
I've been training a llama-7b recently and have run into a bit of an issue I hope someone might be able to shed some light on. While my evaluation loss is decreasing smoothly (as expected), my training loss seems to be oscillating on a regular cycle while also trending downwards.
I used datasets such as Dolly, ShareGPT, and self-instruct.
​
[Train loss](https://preview.redd.it/yt1gi756fb1b1.png?width=1898&format=png&auto=webp&s=0aa76e0c6fd83b3253cf5fa757e5653cb202c9bd)
​
[Eval Loss](https://preview.redd.it/qc4smnh7fb1b1.png?width=1889&format=png&auto=webp&s=906a8b796c7ffa5314fe06d0917527529c4faa5b)
​
​
Here's what I'm seeing:
* The eval loss decreases smoothly, which indicates to me that the model is learning something useful.
* However, the training loss decreases in a jagged, oscillating manner. It's trending downwards overall but it spikes at regular intervals.
I'm using four GPUs for the training and have set the batch size and micro-batch size based on their memory limits. But I can't figure out why I'm seeing this oscillating pattern in the training loss.
Is this normal? Am I doing something wrong? What factors might be contributing to this? Any insights or suggestions would be greatly appreciated.
I used alpca-lora training code from [https://github.com/tloen/alpaca-lora](https://github.com/tloen/alpaca-lora)
​
​
Here are my training arguments
#!/bin/bash
DATA_DIR="databricks/databricks-dolly-15k"
EPOCH=10
LR=1e-5
SAVE_DIR="llama-7b-lora-dolly-lr-$LR"
wandb online
export WANDB_PROJECT="llm-comparision"
CUDA_VISIBLE_DEVICES=1,2 python -m torch.distributed.launch --master_port=34322 --nproc_per_node 2 train_dolly.py \
--fp16 \
--base_model 'decapoda-research/llama-7b-hf' \
--data_path $DATA_DIR \
--output_dir ckpt/$SAVE_DIR \
--batch_size 1024 \
--micro_batch_size 128 \
--num_epochs $EPOCH \
--learning_rate $LR \
--cutoff_len 512 \
--val_set_size 2000 \
--lora_r 8 \
--lora_alpha 16 \
--lora_dropout 0.05 \
--lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' \
--train_on_inputs \
--logging_steps 1 \
--eval_steps 1 \
--weight_decay 0. \
--warmup_steps 0 \
--warmup_ratio 0.1 \
--lr_scheduler_type "cosine" \
--wandb_run_name $SAVE_DIR \
--group_by_length > logs/$SAVE_DIR.log 2>&1 &
echo logs/$SAVE_DIR.log
gradient\_accumulation\_steps(8) = batch\_size(1024) // micro\_batch\_size(128)
​
Thanks in advance! | 2023-05-22T05:04:34 | https://www.reddit.com/r/LocalLLaMA/comments/13oh26g/training_large_language_models_fluctuating/ | Traditional-Gain-593 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13oh26g | false | null | t3_13oh26g | /r/LocalLLaMA/comments/13oh26g/training_large_language_models_fluctuating/ | false | false | 3 | null |
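One plausible (unconfirmed) explanation, given the `--group_by_length` flag in the arguments above: when batches are bucketed by sequence length, per-step loss correlates with batch length, so the training loss repeats a regular cycle each epoch even while the underlying trend decreases smoothly; eval loss, computed over the whole set, stays smooth. Easy to check by disabling the flag. A toy simulation of the effect (numbers are made up for illustration):

```python
import math
import random

random.seed(0)
# One epoch's worth of batches, bucketed shortest-to-longest as
# group_by_length does.
lengths = sorted(random.randint(32, 512) for _ in range(64))

losses = []
for epoch in range(3):
    for step, seq_len in enumerate(lengths):
        trend = 2.0 * math.exp(-0.02 * (epoch * len(lengths) + step))  # smooth learning curve
        losses.append(trend + 0.3 * seq_len / 512)                     # length-dependent offset

# Within each epoch the loss ramps with batch length, then drops back at the
# epoch boundary: a regular sawtooth riding on a smoothly decreasing trend.
print(round(losses[63], 3), round(losses[64], 3), round(losses[-1], 3))
```

If the period of the oscillation matches your steps-per-epoch (or steps-per-shuffle-bucket), that is strong evidence for this cause.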
|
Manticore 13B (formerly Wizard Mega 13B) is now the top of the pack of 13B models | 92 | See [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) on HF. You'll need to click the "Refresh" button near the bottom as HF heavily caches stale data for that page. We probably haven't even pushed the limits of what's possible with 13B either. But we're still hoping to scale [Manticore](https://huggingface.co/openaccess-ai-collective/manticore-13b) to 30B and 65B soon!
https://preview.redd.it/glbdxer9hb1b1.png?width=3108&format=png&auto=webp&s=ad57869ee6835fc512854eda385a98e6e6b3d944 | 2023-05-22T05:18:28 | https://www.reddit.com/r/LocalLLaMA/comments/13ohbvp/manticore_13b_formerly_wizard_mega_13b_is_now_the/ | winglian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ohbvp | false | null | t3_13ohbvp | /r/LocalLLaMA/comments/13ohbvp/manticore_13b_formerly_wizard_mega_13b_is_now_the/ | false | false | 92 | {'enabled': False, 'images': [{'id': '2yXkO2nXyv2ynd0Gc85xzzHWd7q-pzJRTeM5uxEBdoE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=108&crop=smart&auto=webp&s=7c3bb0e464c062e6518a90b686b3544dad39673d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=216&crop=smart&auto=webp&s=6c25136371e9056c3998c03e64e73605446a33ac', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=320&crop=smart&auto=webp&s=30c559b0a3b92cbca6df2ffce369af9f85ccd82d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=640&crop=smart&auto=webp&s=9cd841171a06a0d0a5be5ca54c5bbc731ae610af', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=960&crop=smart&auto=webp&s=a1ce8b1063692ab2b2d978ab9459f34cc311ced2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=1080&crop=smart&auto=webp&s=ddc039e579cbc6105b7c11bc9be89382f69290ce', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?auto=webp&s=dff5ffff20b56c519b288a1462cbab0c2de6f313', 'width': 1200}, 'variants': {}}]} |
|
LIMA: Less Is More for Alignment | 45 | 2023-05-22T06:23:01 | https://arxiv.org/abs/2305.11206 | ambient_temp_xeno | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 13oijm0 | false | null | t3_13oijm0 | /r/LocalLLaMA/comments/13oijm0/lima_less_is_more_for_alignment/ | false | false | 45 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
||
PSA: There is no 30B LLaMA model; it was a typo. The actual model has 33B parameters; please stop referring to it as "LLaMA-30B." | 29 | [https://github.com/facebookresearch/llama/issues/49](https://github.com/facebookresearch/llama/issues/49)
​
It seems like the majority of people here believe that Meta AI's second-largest LLaMA model has 30 billion parameters, which is incorrect, and I felt the need to correct that.
​
Another mistake I see people make is mistyping [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) as LLaMa. | 2023-05-22T07:27:18 | https://www.reddit.com/r/LocalLLaMA/comments/13ojqaj/psa_there_is_no_30b_llama_model_it_was_a_typo_the/ | Adventurous_Jelly276 | self.LocalLLaMA | 2023-05-22T11:30:36 | 0 | {} | 13ojqaj | false | null | t3_13ojqaj | /r/LocalLLaMA/comments/13ojqaj/psa_there_is_no_30b_llama_model_it_was_a_typo_the/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'OuBKEiVtkvFhX7rAH6Id3Gj5fkda21RfNlBuukX3nMw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?width=108&crop=smart&auto=webp&s=69a988ff03e69e5ba364d8e7548101e93c3077c6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?width=216&crop=smart&auto=webp&s=9f6dab2d8cca6d5e681499751b8601581f3879ee', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?width=320&crop=smart&auto=webp&s=69e9d47e83a681f1ef54d1597d69f8e39bd62f91', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?width=640&crop=smart&auto=webp&s=078f00e443307931a2e4ebddeb338e73a0c53b1d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?width=960&crop=smart&auto=webp&s=1a26a2ee087c71d197c4400bdf4fbfbf230a9579', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?width=1080&crop=smart&auto=webp&s=1976bfa90c8126b54fe159919e0d8cce067c40d9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LuB1rMdj006cCfN61pJ42e-vbgY2l5j_z7h8i4s7Nvw.jpg?auto=webp&s=38638c91a9a13b491c111878d58682aabbddb643', 'width': 1200}, 'variants': {}}]} |
Using a local LLM for large-scale text analysis | 9 | Hi all,
I'm looking to run a local LLM for large-scale text analysis. I work in a law enforcement related environment, and we have a ton of data (think chat-logs, phone transcripts) that I would like to query. The data is so vast that we don't have the manpower to search this and would like to make use of an LLM. For (hopefully) obvious reasons this has to be in a local environment, as the data is sensitive and we cannot run any risk of sharing with a third party.
Most of the LLMs I see here seem to be catered to uncensored text generation and the like; is there also something more akin to what I'm looking for? Thanks in advance! | 2023-05-22T07:36:11 | https://www.reddit.com/r/LocalLLaMA/comments/13ojvzo/using_a_local_llm_for_largescale_text_analysis/ | panthaduprince | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ojvzo | false | null | t3_13ojvzo | /r/LocalLLaMA/comments/13ojvzo/using_a_local_llm_for_largescale_text_analysis/ | false | false | self | 9 | null
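A fully offline setup for this usually splits into two parts: a local retrieval step that ranks log chunks, and a local model that only ever sees the top hits, so no data leaves the machine. An illustrative stdlib-only TF-IDF ranker for the retrieval half (a real deployment would likely swap in local embeddings, but the flow is the same; all names are made up):

```python
import math
from collections import Counter

def rank_chunks(query, chunks, k=3):
    """Rank text chunks by a crude TF-IDF overlap with the query; only the
    top hits would then be pasted into the local model's context window."""
    tokenized = [c.lower().split() for c in chunks]
    n = len(chunks)
    df = Counter(t for toks in tokenized for t in set(toks))  # document frequency
    def score(toks):
        tf = Counter(toks)
        return sum(tf[t] * math.log(n / (1 + df[t]) + 1) for t in query.lower().split())
    order = sorted(range(n), key=lambda i: score(tokenized[i]), reverse=True)
    return order[:k]

chunks = ["the suspect called at noon",
          "weather was fine today",
          "phone transcript shows a call at nine"]
print(rank_chunks("phone call", chunks))  # -> [2, 0, 1]
```

The LLM then only has to summarize or answer over a handful of retrieved passages, which is also what keeps context-window limits manageable at scale.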
With a single 3090, which model is fine-tunable and has decent reasoning ability? | 18 | Hello guys,
A 3090 is what I can get, and if I want to make my own model, which model should I choose? Hopefully it has decent reasoning ability; by using LangChain I hope I can solve the accuracy problem. | 2023-05-22T07:40:27 | https://www.reddit.com/r/LocalLLaMA/comments/13ojysf/with_a_single_3090_which_model_is_finetuneable/ | JohnSmith004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ojysf | false | null | t3_13ojysf | /r/LocalLLaMA/comments/13ojysf/with_a_single_3090_which_model_is_finetuneable/ | false | false | self | 18 | null
Deploying an LLM on the Cloud | 4 | I want to create a product which uses an open-source LLM as its backend.
How do I deploy that LLM to Google Cloud or AWS?
Is there any tutorial for that? | 2023-05-22T08:12:35 | https://www.reddit.com/r/LocalLLaMA/comments/13okjzz/deploying_llm_on_cloud/ | Proof_Mouse9105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13okjzz | false | null | t3_13okjzz | /r/LocalLLaMA/comments/13okjzz/deploying_llm_on_cloud/ | false | false | self | 4 | null |
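There isn't one canonical tutorial, but a provider-agnostic pattern is: wrap the model in a small HTTP service, containerize it, and run it on a GPU VM (EC2/GCE) or a managed container service. A stdlib-only skeleton of the service half; the `generate` stub and the endpoint shape are illustrative assumptions, not any provider's API:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt: str) -> str:
    # Stub: replace with a real call into your model runtime
    # (llama.cpp bindings, transformers, etc.).
    return f"echo: {prompt}"

class LLMHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = json.dumps({"completion": generate(body.get("prompt", ""))}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        pass  # keep the console quiet

# To serve: HTTPServer(("0.0.0.0", 8000), LLMHandler).serve_forever()
```

In production you would likely use a proper framework (FastAPI, etc.) plus a Dockerfile, but the cloud side is the same either way: expose one POST endpoint and put a load balancer in front of it.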
airoboros dataset (98% chatGPT3.5) released its uncensored dataset (thanks jondurbin) | 1 | [deleted] | 2023-05-22T08:24:55 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13okruj | false | null | t3_13okruj | /r/LocalLLaMA/comments/13okruj/airoboros_dataset_98_chatgpt35_released_its/ | false | false | default | 1 | null |
||
airoboros uncensored dataset (98% chatGPT3.5) released (thanks jondurbin) | 65 | 2023-05-22T08:25:52 | https://huggingface.co/datasets/jondurbin/airoboros-uncensored | _underlines_ | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 13oksgq | false | null | t3_13oksgq | /r/LocalLLaMA/comments/13oksgq/airoboros_uncensored_dataset_98_chatgpt35/ | false | false | 65 | {'enabled': False, 'images': [{'id': 'l2t25VWYBLjqCy2iWN4UskcCQqQfq1kt7D8Dloe2MhY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?width=108&crop=smart&auto=webp&s=31d2300ef7152a8b2cebab69493c4b277d70bb0a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?width=216&crop=smart&auto=webp&s=ad9f1441c2abf01e40bb2ef98c9fdbaf8203f46d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?width=320&crop=smart&auto=webp&s=2cd1ab22afc7788dab188da8f3f9b16ef1324dec', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?width=640&crop=smart&auto=webp&s=088dd7cafba449f9c2729fb3d0d5679a28aa2768', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?width=960&crop=smart&auto=webp&s=0e18af5ca6d66f093f16421404e0d035710a4cdd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?width=1080&crop=smart&auto=webp&s=61ae65fb99c7a9c98be43f70b9b8343c9133fcbf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oHkSdZ2XY7WE76cHz3qygt88JmI9Oels34ji0d6fK9A.jpg?auto=webp&s=e66881c16d7e137715e65f1eb36a77008118b437', 'width': 1200}, 'variants': {}}]} |
||
Introducing Kobold-Assistant | 1 | [removed] | 2023-05-22T08:46:43 | https://www.reddit.com/r/LocalLLaMA/comments/13ol4mz/introducing_koboldassistant/ | _Lee_B_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ol4mz | false | null | t3_13ol4mz | /r/LocalLLaMA/comments/13ol4mz/introducing_koboldassistant/ | false | false | default | 1 | null |
Nvidia Tesla M40 vs P40. | 7 | I'm considering starting as a hobbyist.
Thing is, I'd like to run the bigger models, so I'd need at least 2, if not 3 or 4, 24 GB cards. I read the P40 is slower, but I'm not terribly concerned about response speed. I'd rather get a good reply slowly than a fast, less accurate one from running a smaller model.
My question is: how slow would it be on a cluster of M40s vs P40s to get a reply from a 30B or 65B question-answering model?
Is there anything I wouldn't be able to do with the M40, due to firmware limitations or the like?
​
Thank you. | 2023-05-22T09:57:55 | https://www.reddit.com/r/LocalLLaMA/comments/13omfzw/nvidia_tesla_m40_vs_p40/ | SirLordTheThird | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13omfzw | false | null | t3_13omfzw | /r/LocalLLaMA/comments/13omfzw/nvidia_tesla_m40_vs_p40/ | false | false | self | 7 | null |
Local GPT (completely offline and no OpenAI!) | 26 | For those of you who are into downloading and playing with hugging face models and the like, check out my project that allows you to chat with PDFs, or use the normal chatbot style conversation with the llm of your choice (ggml/llama-cpp compatible) completely offline! Drop a star if you like it. | 2023-05-22T10:23:58 | https://github.com/jlonge4/local_llama | Jl_btdipsbro | github.com | 1970-01-01T00:00:00 | 0 | {} | 13omy9c | false | null | t3_13omy9c | /r/LocalLLaMA/comments/13omy9c/local_gpt_completely_offline_and_no_openai/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'UFi4l0MOE2C0JSggO1LtPttSGlhXgc31SeEVFPFndvU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?width=108&crop=smart&auto=webp&s=cddd2a19507e78766c2b75e3574ffd5c82d84f05', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?width=216&crop=smart&auto=webp&s=edd56d5518d60e0b238cbf2d06db079f70a9a956', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?width=320&crop=smart&auto=webp&s=01e3fb4166977c6ab5d56960568e0a543b17f9f2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?width=640&crop=smart&auto=webp&s=ed442983376c54db0bea65cee15e121e67e9ea89', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?width=960&crop=smart&auto=webp&s=34b0bdfecbd64e512a40cd413ca074a58698cb39', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?width=1080&crop=smart&auto=webp&s=2de97cfd38ae9aae96733b8772a4d18b936879f1', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/s4Oqk3Qv77cwakEzVpGXM9WAljCyAHNWyWmU6ZIinX8.jpg?auto=webp&s=93d521efdd363eccaaec805576ad8fab07d02e21', 'width': 1200}, 'variants': {}}]} |
|
Llama.cpp binary for OLD computer with GPU? | 5 | I'd like to try the GPU splitting option, and I have a NVIDIA GPU, however my computer is very old so I'm currently using the bin-win-avx-x64.zip release of llama.cpp.
Is there a compiled llama.cpp exe that supports the --gpu-layers option, but doesn't require an AVX2 capable CPU?
Or would I have to compile my own? | 2023-05-22T10:59:02 | https://www.reddit.com/r/LocalLLaMA/comments/13onm7n/llamacpp_binary_for_old_computer_with_gpu/ | Robot_Graffiti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13onm7n | false | null | t3_13onm7n | /r/LocalLLaMA/comments/13onm7n/llamacpp_binary_for_old_computer_with_gpu/ | false | false | self | 5 | null |
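For reference, llama.cpp can be compiled from source with AVX2 disabled and cuBLAS enabled. A hedged sketch — option and flag names are taken from llama.cpp's CMakeLists and `--help` around this time and may change between versions, so check the repo's README:

```shell
# Build llama.cpp with CUDA (cuBLAS) offloading but without AVX2/FMA,
# for older CPUs. CMake option names may differ between versions.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && mkdir build && cd build
cmake .. -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF -DLLAMA_CUBLAS=ON
cmake --build . --config Release

# Then offload e.g. 20 layers to the GPU (flag is --n-gpu-layers / -ngl):
./bin/main -m /path/to/model.ggml.bin --n-gpu-layers 20 -p "Hello"
```

The prebuilt Windows binaries are split by CPU feature set, so a cuBLAS build without AVX2 generally does mean compiling it yourself.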
Zicklein - a German 🇩🇪 fine-tuned LlaMA-7b base model (OS) | 1 | [deleted] | 2023-05-22T11:10:22 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13onv0c | false | null | t3_13onv0c | /r/LocalLLaMA/comments/13onv0c/zicklein_a_german_finetuned_llama7b_base_model_os/ | false | false | default | 1 | null |
||
Zicklein - a German 🇩🇪 finetuned LlaMA-7b base model (OS) | 34 | Zicklein is a German version of Alpaca 7b fine-tuned using LoRA, trained using a German translated version of the cleaned Alpaca instruct dataset.
Github: [https://github.com/avocardio/zicklein](https://github.com/avocardio/zicklein)
HuggingFace: [https://huggingface.co/avocardio/alpaca-lora-7b-german-base-52k](https://huggingface.co/avocardio/alpaca-lora-7b-german-base-52k)
You can also try it out [here](https://huggingface.co/spaces/avocardio/German-Alpaca-LoRA-7b) (although its super slow - running on a CPU, responses take around 130s). | 2023-05-22T11:15:03 | https://www.reddit.com/r/LocalLLaMA/comments/13onyco/zicklein_a_german_finetuned_llama7b_base_model_os/ | capital-man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13onyco | false | null | t3_13onyco | /r/LocalLLaMA/comments/13onyco/zicklein_a_german_finetuned_llama7b_base_model_os/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'QZJ6rDS2xcs16sJyKUjWzrJnYKUlC5nssuE8knyORAQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?width=108&crop=smart&auto=webp&s=0223d3b56f68a94324092e5cf98a219c8bccb8b8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?width=216&crop=smart&auto=webp&s=0ef48f38ea489178dc99e5ab6542d39f58f5d4fc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?width=320&crop=smart&auto=webp&s=727d7e74ab2c49c72935f2f1717092fbc372d40e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?width=640&crop=smart&auto=webp&s=4f071d04c438a0414a2fc7cc493ee08003816a1b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?width=960&crop=smart&auto=webp&s=440be2cecc87c75b8fd6f35687c2b92e55cfa7a9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?width=1080&crop=smart&auto=webp&s=fe1b08fefc5e6a39f4930f422b1582b189bf07cb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BPwA3OmSMWmfKbtNVsnJ6-woG3JlvDU8i1XGGZqTGCk.jpg?auto=webp&s=54d79ee2213ae6f0cc6b8742c37822937ed44517', 'width': 1200}, 'variants': {}}]} |
Using an LLM to predict Instagram video performance? | 3 | I have an instagram channel with about a 1000 videos up. The audience is not interested in the videos because of the visuals, they are interested in what is being spoken.
I was wondering if there's a way to train a model from the Llama zoo to predict the performance of a video if I feed it the transcript.
My idea is to categorize all videos by how they performed: High if a video was in the top 33% by views in the month it was posted, Medium if it falls into the 33%-66% bracket, and Low if it was in the bottom 33%.
I've dabbled with classification-related tasks using RoBerta. Was wondering if training a LORA with a 7B model would perform better, and which model I should use - llama or one of the finetuned ones?
Anything else that I might be missing that I should consider? If using one of the finetuned models, should I add an instruction to go with it? | 2023-05-22T11:20:22 | https://www.reddit.com/r/LocalLLaMA/comments/13oo20b/using_an_llm_to_predict_instagram_video/ | regstuff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13oo20b | false | null | t3_13oo20b | /r/LocalLLaMA/comments/13oo20b/using_an_llm_to_predict_instagram_video/ | false | false | self | 3 | null |
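The monthly-tercile labeling described above can be sketched in plain Python. This is my own illustrative snippet, not from any real codebase — field names like `month` and `views` are assumptions about how the post data might be stored:

```python
# Hypothetical sketch: label each video High/Medium/Low by view-count
# terciles *within its posting month*. Field names are illustrative.
from collections import defaultdict

def label_by_monthly_terciles(videos):
    """videos: list of dicts with 'id', 'month', 'views'.
    Returns {video_id: 'High' | 'Medium' | 'Low'}."""
    by_month = defaultdict(list)
    for v in videos:
        by_month[v["month"]].append(v)

    labels = {}
    for month, vids in by_month.items():
        # Rank within the month, most-viewed first.
        vids.sort(key=lambda v: v["views"], reverse=True)
        n = len(vids)
        for rank, v in enumerate(vids):
            if rank < n / 3:
                labels[v["id"]] = "High"
            elif rank < 2 * n / 3:
                labels[v["id"]] = "Medium"
            else:
                labels[v["id"]] = "Low"
    return labels

videos = [
    {"id": "a", "month": "2023-04", "views": 900},
    {"id": "b", "month": "2023-04", "views": 500},
    {"id": "c", "month": "2023-04", "views": 100},
]
labels = label_by_monthly_terciles(videos)  # → {'a': 'High', 'b': 'Medium', 'c': 'Low'}
```

These labels can then serve as training targets for whichever classifier is used on the transcripts, whether RoBERTa or a LoRA on a 7B model.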
TinyStories: The Smallest GPT with Coherent English (by Microsoft) | 137 | Turns out, even 2.5M (yep, not B) parameters are enough to generate coherent text.
The main obstacle to full language understanding for transformers is the huge number of rare words (the long tail of the distribution). But if you compile a training dataset from the 1.5k most frequent roots (the vocabulary of a ~5-year-old child), then even a single-layer GPT can be trained in such a way that it will outperform GPT2-XL.
This wonderful dataset, written by GPT-4, is perfect for validating new architectures; it even confirms chinchilla scaling. So if you want to invent your own "SuperGPT" architecture, I recommend experimenting with this dataset. The dataset size is only 3 GB.
One conclusion that is already common: it's better to scale GPT in depth rather than in width.
[Article](https://arxiv.org/abs/2305.07759), [dataset](https://huggingface.co/datasets/roneneldan/TinyStories), [model](https://huggingface.co/papers/2305.07759) | 2023-05-22T11:34:26 | https://www.reddit.com/r/LocalLLaMA/comments/13ooc3o/tinystories_the_smallest_gpt_with_coherent/ | Shir_man | self.LocalLLaMA | 2023-05-22T17:27:17 | 0 | {} | 13ooc3o | false | null | t3_13ooc3o | /r/LocalLLaMA/comments/13ooc3o/tinystories_the_smallest_gpt_with_coherent/ | false | false | self | 137 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
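As a back-of-the-envelope illustration of the depth-vs-width trade-off (my own sketch, not from the paper): a transformer block costs roughly 12·d² parameters — about 4·d² for attention and 8·d² for a 4×-wide MLP, ignoring embeddings, biases, and layer norms — so halving the width lets you stack four times as many layers for the same budget:

```python
def transformer_params(n_layers, d_model):
    # Rough per-block count: 4*d^2 (attention) + 8*d^2 (4x-wide MLP).
    # Ignores embeddings, biases, and layer norms.
    return n_layers * 12 * d_model ** 2

# Same parameter budget, very different shapes:
deep_narrow = transformer_params(8, 256)   # 8 layers, width 256
shallow_wide = transformer_params(2, 512)  # 2 layers, width 512
# deep_narrow == shallow_wide == 6,291,456
```

Since parameter cost grows quadratically in width but only linearly in depth, a fixed budget buys many more layers at a smaller width — which is the axis the TinyStories results suggest pays off.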
llama-30b-supercot surpassed 65b models on HF leaderboard | 42 | [https://huggingface.co/spaces/HuggingFaceH4/open\_llm\_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
You may have to press the Refresh button to see the updated list.
There are not much information about the Supercot? And not any GGML-version yet! And it's already one month old model. Very mysterious model. | 2023-05-22T12:00:47 | https://www.reddit.com/r/LocalLLaMA/comments/13oovzt/llama30bsupercot_surpassed_65b_models_on_hf/ | MustBeSomethingThere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13oovzt | false | null | t3_13oovzt | /r/LocalLLaMA/comments/13oovzt/llama30bsupercot_surpassed_65b_models_on_hf/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': '2yXkO2nXyv2ynd0Gc85xzzHWd7q-pzJRTeM5uxEBdoE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=108&crop=smart&auto=webp&s=7c3bb0e464c062e6518a90b686b3544dad39673d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=216&crop=smart&auto=webp&s=6c25136371e9056c3998c03e64e73605446a33ac', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=320&crop=smart&auto=webp&s=30c559b0a3b92cbca6df2ffce369af9f85ccd82d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=640&crop=smart&auto=webp&s=9cd841171a06a0d0a5be5ca54c5bbc731ae610af', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=960&crop=smart&auto=webp&s=a1ce8b1063692ab2b2d978ab9459f34cc311ced2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?width=1080&crop=smart&auto=webp&s=ddc039e579cbc6105b7c11bc9be89382f69290ce', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7zqcw9mAS-PYpZk8_Tl2OVpnWH8wLITangIHhIInYos.jpg?auto=webp&s=dff5ffff20b56c519b288a1462cbab0c2de6f313', 'width': 1200}, 'variants': {}}]} |
WizardLM-30B-Uncensored | 686 | Today I released WizardLM-30B-Uncensored.
[https://huggingface.co/ehartford/WizardLM-30B-Uncensored](https://huggingface.co/ehartford/WizardLM-30B-Uncensored)
Standard disclaimer - just like a knife, lighter, or car, you are responsible for what you do with it.
Read my blog [article](https://erichartford.com/uncensored-models), if you like, about why and how.
A few people have asked, so I put a buy-me-a-coffee link in my profile.
Enjoy responsibly.
Before you ask - yes, 65b is coming, thanks to a generous GPU sponsor.
And I don't do the quantized / ggml, I expect they will be posted soon. | 2023-05-22T12:07:50 | https://www.reddit.com/r/LocalLLaMA/comments/13op1sd/wizardlm30buncensored/ | faldore | self.LocalLLaMA | 2023-05-22T12:43:39 | 1 | {'gid_2': 1} | 13op1sd | false | null | t3_13op1sd | /r/LocalLLaMA/comments/13op1sd/wizardlm30buncensored/ | false | false | self | 686 | {'enabled': False, 'images': [{'id': 'ywT2smm0IqTqqbq4_29s6rsXmM0IP5bSJvdaC-klH0w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?width=108&crop=smart&auto=webp&s=60dcf7fd7ae53c3690cdd8124a8a8eb419955f4b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?width=216&crop=smart&auto=webp&s=c6a3eb4eab7457ff8ded6626d9cf19bbb13b0b7e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?width=320&crop=smart&auto=webp&s=35d6f5083d13671cde033d40e49f51b3b89d36ac', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?width=640&crop=smart&auto=webp&s=c92e3eb631752e2a219439e0db4d8dca26bd2dae', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?width=960&crop=smart&auto=webp&s=89a7b8f4cf8bcae2eda828b23002245983e04418', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?width=1080&crop=smart&auto=webp&s=12b6861d9d341d7bd765dc9206c93beeeb7ce915', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EBu3CTilEwaqYRfKGJPUsfyB9UpRM9MNP4v-uNN9WTY.jpg?auto=webp&s=9a575e40f78505315258e75792b7adf47e508916', 'width': 1200}, 'variants': {}}]} |
Train Large Language Models (LLM) using LoRA | 36 | 2023-05-22T15:30:59 | https://github.com/rhulha/lora | paranoidray | github.com | 1970-01-01T00:00:00 | 0 | {} | 13ou59l | false | null | t3_13ou59l | /r/LocalLLaMA/comments/13ou59l/train_large_language_models_llm_using_lora/ | false | false | default | 36 | null |
|
Newbie here. Need help with choosing a llm model for pdf ingestion and summarization locally | 6 | I have a gaming laptop with 16gb ram, Amd ryzen 7 cpu & 6700M 10GB gpu. I'm looking to install a llm on my local and keep it offline if possible too, to ingest pdfs (mostly code documentations, tutorial books, e.t.c) and then summarize concepts, ask what & how to questions on concepts.. Also ingest story books and help my bf to summarize them to help him with his PhD. Does Alpaca 13B (4bit quant) servers this purpose? Or [Private GPT](https://www.youtube.com/watch?v=oX5LsiuRkLM)? Pls suggest.
I want to run the model on GPU for faster results.. also do I need to train them? I have no knowledge on ML. I'm a regular backend developer. Thanks. | 2023-05-22T17:25:25 | https://www.reddit.com/r/LocalLLaMA/comments/13ox6r3/newbie_here_need_help_with_choosing_a_llm_model/ | MexicanJalebi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ox6r3 | false | null | t3_13ox6r3 | /r/LocalLLaMA/comments/13ox6r3/newbie_here_need_help_with_choosing_a_llm_model/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': '118B2cshwj2mSijBF-qtIF58AaM7gAvK7glTehl6X_k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/iD7CF2pJcVP1kASZRTCLBRpfR4pKDt4N8Q_ECfgrwGk.jpg?width=108&crop=smart&auto=webp&s=b7cec692ca08e6fcf11b5de302973fa544fd5938', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/iD7CF2pJcVP1kASZRTCLBRpfR4pKDt4N8Q_ECfgrwGk.jpg?width=216&crop=smart&auto=webp&s=e4280d78a1755324b19d1f7d95603f3c8af8f301', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/iD7CF2pJcVP1kASZRTCLBRpfR4pKDt4N8Q_ECfgrwGk.jpg?width=320&crop=smart&auto=webp&s=50c34c14549f553429edabde0225db26362a3d36', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/iD7CF2pJcVP1kASZRTCLBRpfR4pKDt4N8Q_ECfgrwGk.jpg?auto=webp&s=01fa0c4df13938672a8444c1aa287f328ed6ffb9', 'width': 480}, 'variants': {}}]} |
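Whichever model is chosen, PDF summarization tools generally work by splitting the extracted text into overlapping chunks that fit the model's context window, summarizing each chunk, then summarizing the summaries (map-reduce style). A minimal stdlib sketch of the chunking step — names and sizes are illustrative, not any specific tool's API:

```python
def chunk_words(text, chunk_size=300, overlap=50):
    """Split text into overlapping word windows small enough for an
    LLM's context. Each chunk is summarized separately, then the
    per-chunk summaries are summarized together (map-reduce style)."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

chunks = chunk_words("word " * 700, chunk_size=300, overlap=50)
# → 3 chunks of 300, 300, and 200 words
```

The overlap keeps sentences that straddle a chunk boundary visible in at least one chunk, which matters for Q&A over documentation.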