Dataset columns (Reddit post dump): title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–40k chars) | created (timestamp[ns]) | url (string, 0–780 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns]) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool) | media (string, 646–1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars, nullable)
airoboros-7b | 85 | Pick your poison:
- https://huggingface.co/jondurbin/airoboros-7b
- https://huggingface.co/jondurbin/airoboros-7b-ggml-f32
- https://huggingface.co/jondurbin/airoboros-7b-ggml-f16
- https://huggingface.co/jondurbin/airoboros-7b-ggml-q8_0
- https://huggingface.co/jondurbin/airoboros-7b-ggml-q5_1
- https://huggingface.co/jondurbin/airoboros-7b-ggml-q4_0
More info about training, eval, etc. can be found in the 13b repo, and a reminder that:
1. this is all trained on 100% AI generated training data (instructions and responses)
2. the point was not to make an incredible model, but rather to prove that in some ways, smaller models can be quickly and easily tuned to match the style and response of better models by fine-tuning on synthetic, random training data
3. training data for this particular model is here: https://huggingface.co/datasets/jondurbin/airoboros-uncensored
I hope people will find the training data generation tool useful, and come up with better prompts with more complex questions, including multi-step reasoning/theory of mind, etc. | 2023-05-22T17:46:32 | https://www.reddit.com/r/LocalLLaMA/comments/13oxr6z/airoboros7b/ | JonDurbin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13oxr6z | false | null | t3_13oxr6z | /r/LocalLLaMA/comments/13oxr6z/airoboros7b/ | false | false | self | 85 | null
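For anyone who wants to experiment with the synthetic-data idea described above, here is a minimal sketch of the generation loop: ask a teacher model for new instructions, drop near-duplicates, then ask it to answer the survivors. The `teacher` callable is a stand-in for a real model API call, and the 0.8 dedup threshold is an arbitrary choice; the actual airoboros tool's prompts and filtering differ.

```python
import difflib

def is_near_duplicate(candidate, existing, threshold=0.8):
    """Reject instructions too similar to ones already collected."""
    return any(
        difflib.SequenceMatcher(None, candidate.lower(), seen.lower()).ratio() >= threshold
        for seen in existing
    )

def generate_synthetic_pairs(teacher, seed_instructions, n_rounds=10):
    """teacher(prompt) -> str is a stand-in for a real model API call."""
    dataset, seen = [], list(seed_instructions)
    for _ in range(n_rounds):
        instruction = teacher("Propose one new, diverse instruction.")
        if is_near_duplicate(instruction, seen):
            continue
        response = teacher(f"Respond to this instruction:\n{instruction}")
        dataset.append({"instruction": instruction, "response": response})
        seen.append(instruction)
    return dataset
```

A real run would add topic steering to the proposal prompt (multi-step reasoning, theory of mind, etc.) and a quality filter on the responses.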
🚨 How to deploy local models to Google Colab/Azure/etc. | 6 | I can run some models on M1 Mac, but I'd like to use the 65B models. As a student, I have free access to Azure. Is it possible to use that infrastructure to run such large models? How about Colab? | 2023-05-22T18:47:13 | https://www.reddit.com/r/LocalLLaMA/comments/13ozgej/how_to_deploy_local_models_to_google_colabazureetc/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ozgej | false | null | t3_13ozgej | /r/LocalLLaMA/comments/13ozgej/how_to_deploy_local_models_to_google_colabazureetc/ | false | false | self | 6 | null |
Ideal Laptop | 7 | Hello r/LocalLLaMA, I'm shopping for a new laptop, my current one being a 16 GB RAM MacBook Pro.
The ideal use case would be to run local LLMs on my laptop. Ideally I don't want to have to buy a GPU, so I'm thinking a lot of RAM is probably what I need.
Are the new MacBooks with ~96 GB of RAM something I should be looking into? I don't want to buy a desktop, since I'll be traveling and desktops are a bit restrictive.
What are y'all using and what do you recommend? | 2023-05-22T19:00:49 | https://www.reddit.com/r/LocalLLaMA/comments/13oztjn/ideal_laptop/ | aldarisbm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13oztjn | false | null | t3_13oztjn | /r/LocalLLaMA/comments/13oztjn/ideal_laptop/ | false | false | self | 7 | null |
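As a rough way to answer the "how much RAM" question, here is a back-of-envelope calculator: quantized weights take about params × bits/8 bytes, and the ~20% overhead factor for KV cache and runtime is an assumption, not a measurement. Real usage varies with context length.

```python
def fits_in_ram(n_params_billion, ram_gb, bits=4, overhead=1.2):
    """Back-of-envelope: quantized weights plus ~20% for KV cache/runtime."""
    needed_gb = n_params_billion * bits / 8 * overhead  # billions of params -> GB
    return needed_gb <= ram_gb

for size in (7, 13, 33, 65):
    verdict = "fits" if fits_in_ram(size, 96) else "too big"
    print(f"{size}B at 4-bit: {verdict} in 96 GB")
```

By this estimate even a 4-bit 65B model (~39 GB) fits comfortably in 96 GB of unified memory, which is why the high-RAM MacBooks come up so often here.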
Can't get 30B models to work properly in oobabooga | 8 | I have 24GB VRAM and 64GB RAM, even with nothing else running, the 30B models will typically freeze after a couple of prompts. I am using 4bit quantized models and llama model type. Have tried cycling through different groupsizes. Resource monitor shows 100% GPU usage. I thought my specs should be enough to run 30B quantized models? Am I doing something wrong? | 2023-05-22T19:42:15 | https://www.reddit.com/r/LocalLLaMA/comments/13p0y4m/cant_get_30b_models_to_work_properly_in_oobabooga/ | Commercial_Way_8217 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13p0y4m | false | null | t3_13p0y4m | /r/LocalLLaMA/comments/13p0y4m/cant_get_30b_models_to_work_properly_in_oobabooga/ | false | false | self | 8 | null |
zero-shot / few-shot / fine-tuning classification with Llama? | 5 | I'm trying to perform some zero shot / few-shot classification with Llama. I tried with \`decapoda-research/llama-7b-hf\` and a basic prompt. For example:
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. # noqa: E501
\### Instruction:
classify the news title and body into liberal or conservative.
\### Input:
UN climate report: 'Atlas of human suffering' worse, bigger news body: Deadly with extreme weather now, climate change is about to get so much worse. It is likely going to make the world sicker, hungrier, poorer, gloomier and way more dangerous in the next 18 years with an “unavoidable” increase in risks, a new United Nations science report says. And after that watch out. The UN Intergovernmental Panel on Climate Change report said Monday if human-caused global warming isn’t limited to just another couple tenths of a degree, an Earth now struck regularly by deadly heat, fires, floods and drought in...
\[PROMPT\_END\]
**Output from Llama:**

    Input: UN climate report: "Atlas of human suffering"
    worse, bigger better news body: Deadly with extreme weather now,
    climate change is about to get so much worse, it is likely likely
    likely likely likely .....
Any idea how to approach this? I saw [https://huggingface.co/tasks/zero-shot-classification](https://huggingface.co/tasks/zero-shot-classification) but not exactly sure if it's going to work with Llama. | 2023-05-22T20:23:25 | https://www.reddit.com/r/LocalLLaMA/comments/13p23ew/zeroshot_fewshot_finetuning_classification_with/ | furciferX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13p23ew | false | null | t3_13p23ew | /r/LocalLLaMA/comments/13p23ew/zeroshot_fewshot_finetuning_classification_with/ | false | false | self | 5 | null
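Free-form generation, as the transcript above shows, tends to ramble with a base model. A common zero-shot trick is to score each candidate label by the likelihood the model assigns to it and pick the argmax, instead of letting the model generate freely. A sketch of the idea, with the model call mocked out:

```python
def classify(text, labels, label_logprob):
    """Pick the label the model scores highest.

    label_logprob(text, label) -> float is a stand-in for a real model
    call that scores the label tokens appended to the prompt."""
    scores = {label: label_logprob(text, label) for label in labels}
    return max(scores, key=scores.get), scores
```

With a real model, `label_logprob` would sum the token log-probs of `"Label: {label}"` appended to the prompt; constraining the output space this way avoids the repetition loop entirely.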
Small Uncensored LLM model to train cheaply for a specific task. | 14 | I’m looking for a small LLM model that I can train on my local machine with a basic gpu. I want to use it to create very specific type of content. I have access to large amount of training data for this type of content but I’m not sure how to approach this whole problem. I am very new to fine tuning and LLaMAs in general but I have used OpenAIs API to build a similar AI content generation tool. Any direction to solve this is highly appreciated. | 2023-05-22T20:25:44 | https://www.reddit.com/r/LocalLLaMA/comments/13p25qy/small_uncensored_llm_model_to_train_cheaply_for_a/ | ImpressiveFault42069 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13p25qy | false | null | t3_13p25qy | /r/LocalLLaMA/comments/13p25qy/small_uncensored_llm_model_to_train_cheaply_for_a/ | false | false | self | 14 | null |
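A common first step for a task like this is converting the raw corpus into prompt/response records before any fine-tuning. The field names below follow the Alpaca-style format many LLaMA fine-tuning scripts expect, but they are illustrative; check whatever trainer you end up using.

```python
import json

def to_record(prompt, completion, context=""):
    """Alpaca-style fields, which many LLaMA fine-tuning scripts expect."""
    return {"instruction": prompt, "input": context, "output": completion}

def write_jsonl(records, path):
    with open(path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

records = [to_record("Summarize the passage.", "A short summary.", context="Long text ...")]
write_jsonl(records, "train.jsonl")
```

Once the data is in this shape, a LoRA/QLoRA trainer on a small 7B base model is the usual low-VRAM route.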
Anything using palm-2 in the open world? | 1 | [removed] | 2023-05-22T20:32:57 | https://www.reddit.com/r/LocalLLaMA/comments/13p2dce/anything_using_palm2_in_the_open_world/ | suoko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13p2dce | false | null | t3_13p2dce | /r/LocalLLaMA/comments/13p2dce/anything_using_palm2_in_the_open_world/ | false | false | default | 1 | null |
I Want Speed Recommendations | 1 | Which GGML model gives you the smartest, fastest results? I'm talking 7B or lower. It doesn't have to be great; I'm looking for an oracle to spitball ideas. Normally I use story dice or Magic cards, but I thought a very fast AI could work great as well.
Thanks for your suggestions in advance | 2023-05-22T21:54:57 | https://www.reddit.com/r/LocalLLaMA/comments/13p4phb/i_want_speed_reccomendations/ | shamaalpacadingdong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13p4phb | false | null | t3_13p4phb | /r/LocalLLaMA/comments/13p4phb/i_want_speed_reccomendations/ | false | false | self | 1 | null |
New to this community, most models I download fail and end up in a core dump | 1 | Hi all,
Aside from a few models, when I load most models with llama.cpp (updated and compiled from master) I get an error such as the following:
➜ llama.cpp git:(master) ./main -m ./models/ggml-gpt4all-j-v1.3-groovy.bin -c 512 -b 1024 -n 256 --keep 48 \
--repeat_penalty 1.0 --color -i \
-r "User:" -f prompts/chat-with-bob.txt
main: build = 584 (2e6cd4b)
main: seed = 1684793577
llama.cpp: loading model from ./models/ggml-gpt4all-j-v1.3-groovy.bin
terminate called after throwing an instance of 'std::runtime_error'
what(): unexpectedly reached end of file
[1] 392757 IOT instruction (core dumped) ./main -m ./models/ggml-gpt4all-j-v1.3-groovy.bin -c 512 -b 1024 -n 256 --kee
I'm a little lost as to what I can do to fix this. I read some stuff about the format changing, and I'm wondering if that's just what I'm running into and I need to wait a few days until projects release updated versions of their quantized models.
Same issue with `ggml-vic7b-q4_0.bin` | 2023-05-22T22:17:14 | https://www.reddit.com/r/LocalLLaMA/comments/13p5b18/new_to_this_community_most_models_i_download_fail/ | evert | self.LocalLLaMA | 2023-05-22T22:36:29 | 0 | {} | 13p5b18 | false | null | t3_13p5b18 | /r/LocalLLaMA/comments/13p5b18/new_to_this_community_most_models_i_download_fail/ | false | false | self | 1 | null |
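In that era of llama.cpp, "unexpectedly reached end of file" usually meant a file-format mismatch: either a GGML file quantized before a breaking format change, or a model llama.cpp cannot load at all (ggml-gpt4all-j is a GPT-J-architecture model, not a LLaMA one). A quick sanity check is to read the file's magic bytes. The magic table below is an assumption based on mid-2023 llama.cpp sources; double-check it against the `llama.h` of your build.

```python
import struct

# Hypothetical magic table; verify against llama.cpp's llama.h for your build.
MAGICS = {
    b"lmgg": "ggml (oldest, unversioned)",
    b"fmgg": "ggmf (versioned)",
    b"tjgg": "ggjt (mmap-able, versioned)",
}

def inspect_model(path):
    """Return (format name, version or None) from the file header."""
    with open(path, "rb") as f:
        magic = f.read(4)
        kind = MAGICS.get(magic, f"unknown ({magic!r})")
        version = None
        if magic in (b"fmgg", b"tjgg"):
            version = struct.unpack("<I", f.read(4))[0]
    return kind, version
```

If the format name comes back unknown, or the version is older than what your llama.cpp build expects, re-downloading a freshly quantized file is usually the fix.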
Could someone help me figure out what I'm doing wrong with recent models? | 1 | [removed] | 2023-05-22T23:43:15 | https://www.reddit.com/r/LocalLLaMA/comments/13p7h73/could_someone_help_me_figure_out_what_im_doing/ | banzai_420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13p7h73 | false | null | t3_13p7h73 | /r/LocalLLaMA/comments/13p7h73/could_someone_help_me_figure_out_what_im_doing/ | false | false | default | 1 | null |
How to test if a model is truly UNCENSORED? | 27 | I downloaded the latest WizardLM-30B-Uncensored models (ggml) and wanted to check whether they're actually uncensored. But I still get "I'm sorry, as an AI model..." for certain prompts (e.g., the model refuses to write racist jokes, etc.)
Is that what people mean by "uncensored"? | 2023-05-23T00:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/13p8kfe/how_to_test_if_a_model_is_truly_uncensored/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13p8kfe | false | null | t3_13p8kfe | /r/LocalLLaMA/comments/13p8kfe/how_to_test_if_a_model_is_truly_uncensored/ | false | false | self | 27 | null |
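One crude but automatable check is to run a battery of prompts a censored model would refuse and flag responses that match common refusal phrasings. The phrase list below is illustrative, not exhaustive; extend it with whatever refusals you actually see.

```python
import re

# Illustrative phrase list; extend with refusals observed in practice.
REFUSAL_PATTERNS = [
    r"\bas an ai\b",
    r"\bi'?m sorry,? but\b",
    r"\bi cannot (assist|help|comply|fulfill)\b",
]

def looks_like_refusal(response):
    text = response.lower()
    return any(re.search(p, text) for p in REFUSAL_PATTERNS)

def refusal_rate(responses):
    return sum(looks_like_refusal(r) for r in responses) / len(responses)
```

A model that still refuses on some topics will show a nonzero rate here even if its dataset was "uncensored", since refusal behavior can also come from the base model or the prompt template.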
Running RedPajama and other open LLMs on phones, browsers and AMD/NV/Intel GPUs | 43 | MLC LLM enables universal deployment of RedPajama-3B and other LLMs (Dolly, Vicuna, etc) across different platforms with hardware acceleration. It uses ~2.2GB memory, which most of the GPUs, macbooks and phones can afford. Check it out: https://mlc.ai/blog/2023/05/22/bringing-open-large-language-models-to-consumer-devices!
Our next step will be better documentation and making the compilation pipeline more accessible to DIY and redistribution. | 2023-05-23T00:30:22 | https://www.reddit.com/r/LocalLLaMA/comments/13p8lba/running_redpajama_and_other_open_llms_on_phones/ | yzgysjr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13p8lba | false | null | t3_13p8lba | /r/LocalLLaMA/comments/13p8lba/running_redpajama_and_other_open_llms_on_phones/ | false | false | self | 43 | null |
llama.cpp has a long warmup time in the beginning | 1 | [removed] | 2023-05-23T00:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/13p8r7e/llamacpp_has_a_long_warmup_time_in_the_beginning/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13p8r7e | false | null | t3_13p8r7e | /r/LocalLLaMA/comments/13p8r7e/llamacpp_has_a_long_warmup_time_in_the_beginning/ | false | false | default | 1 | null |
Update: OpenCL is merged! AMD GPUs now work with llama.cpp | 168 | Compile with `LLAMA_CLBLAST=1 make`. Make sure you have OpenCL drivers installed.
More info on original post: https://www.reddit.com/r/LocalLLaMA/comments/13m8li2/finally_got_a_model_running_on_my_xtx_using/
Edit: Some speed benchmarks I did on my XTX with `WizardLM-30B-Uncensored.ggmlv3.q4_1`
All 60 layers offloaded to GPU: 22 GB VRAM usage, 8.5 tokens/s
52 layers offloaded: 19.5 GB VRAM, 6.1 tokens/s
27 layers offloaded: 11.3 GB VRAM, 4.0 tokens/s | 2023-05-23T00:47:36 | https://www.reddit.com/r/LocalLLaMA/comments/13p8zq2/update_opencl_is_merged_amd_gpus_now_work_with/ | TeakTop | self.LocalLLaMA | 2023-05-25T07:41:47 | 0 | {} | 13p8zq2 | false | null | t3_13p8zq2 | /r/LocalLLaMA/comments/13p8zq2/update_opencl_is_merged_amd_gpus_now_work_with/ | false | false | self | 168 | null |
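From the three data points above you can do a rough linear fit of VRAM per offloaded layer and estimate how many layers fit a given VRAM budget. This is back-of-envelope only; real usage also depends on context size and the quantization format.

```python
def fit_line(points):
    """Least-squares slope/intercept through (layers, vram_gb) points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
    return slope, my - slope * mx

def max_layers(budget_gb, slope, intercept, total_layers=60):
    """How many layers fit a VRAM budget, capped at the model's layer count."""
    return min(total_layers, int((budget_gb - intercept) / slope))

# The three measurements reported above:
slope, intercept = fit_line([(60, 22.0), (52, 19.5), (27, 11.3)])
```

For this 30B model the fit works out to roughly a third of a GB per layer plus a couple GB of fixed overhead, so a 12 GB card could offload around 29 of the 60 layers.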
30b running slowly on 4090 | 8 | I'm not sure what I'm doing wrong. I have a 4090 and 32 GiB of memory, running Ubuntu Server with an 11700K.
Is it not enough memory? How much memory would I need?
Could it be something else? I see other people say they can get about 15 tokens per second with a 4090. | 2023-05-23T01:42:31 | https://www.reddit.com/r/LocalLLaMA/comments/13paao9/30b_running_slowly_on_4090/ | OldLostGod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13paao9 | false | null | t3_13paao9 | /r/LocalLLaMA/comments/13paao9/30b_running_slowly_on_4090/ | false | false | self | 8 | null |
cannot for the life of me compile libllama.dll | 1 | I searched through GitHub and nothing comes up that is new. I wanted to run the model through the C# wrapper linked on [LLaMASharp](https://github.com/SciSharp/LLamaSharp) which requires compiling llama.cpp and extracting the libllama dll into the C# project files. When I build llama.cpp with OpenBLAS, everything shows up fine in the command line. Just as the link suggests I make sure to set DBUILD\_SHARED\_LIBS=ON when in CMake. However, the output in the Visual Studio Developer Command Line interface ignores the setup for libllama.dll in the CMakeFiles.txt entirely. The only dll to compile is llama.dll; I know this is a fairly technical question but does anyone know how to fix? | 2023-05-23T02:58:11 | https://www.reddit.com/r/LocalLLaMA/comments/13pbzcl/cannot_for_the_life_of_me_compile_libllamadll/ | EmbarrassedSleep6981 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pbzcl | false | null | t3_13pbzcl | /r/LocalLLaMA/comments/13pbzcl/cannot_for_the_life_of_me_compile_libllamadll/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'HnmhCr9KpOY1JpoxPu0QbKiqBD659HvHAf1UUmqWpMM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?width=108&crop=smart&auto=webp&s=8fe5981522c1551224da419a3ae2356d37d3d7ce', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?width=216&crop=smart&auto=webp&s=63d8ddd2ac3b1bcf34c29dbb965f39307aae83dd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?width=320&crop=smart&auto=webp&s=7d0f1fb21a2face61ef25dd5b270896111ec42a6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?width=640&crop=smart&auto=webp&s=ad29b53b06105820ccc102b2b1bf04566d3a242b', 'width': 640}, {'height': 480, 'url': 
'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?width=960&crop=smart&auto=webp&s=68448ed0af640ffccdf9069a6bba8e0c3c92dfab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?width=1080&crop=smart&auto=webp&s=529d0ac58accb5e7882820d93d492c97c731a5f7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NieqA5dnYPqR2uedma1aNz1xCJ2Yy7g-yHYaW8d5wDY.jpg?auto=webp&s=9e847298d76898bc4fa232495de9ab5080b43e33', 'width': 1200}, 'variants': {}}]} |
Meta AI announces Massive Multilingual Speech code, models for 1000+ languages | 85 | 2023-05-23T05:34:02 | https://github.com/facebookresearch/fairseq/tree/main/examples/mms | dagerdev | github.com | 1970-01-01T00:00:00 | 0 | {} | 13pf7cm | false | null | t3_13pf7cm | /r/LocalLLaMA/comments/13pf7cm/meta_ai_announces_massive_multilingual_speech/ | false | false | 85 | null
Keeping an LLM on subject | 1 | I am using privateGPT with llama.cpp; if it gets a positive hit in the Chroma vector store, it's great and accurate.
If it doesn't, it falls back to its trained knowledge. Are there any techniques to get it to just say it doesn't know?
I have been looking at NVIDIA NeMo Guardrails, but that seems more like an NLU system to catch certain sentiments such as insults, etc., which could be useful in some ways.
https://github.com/NVIDIA/NeMo-Guardrails | 2023-05-23T05:40:18 | https://www.reddit.com/r/LocalLLaMA/comments/13pfbk1/keeping_an_llm_on_subject/ | megadonkeyx | self.LocalLLaMA | 2023-05-23T06:00:01 | 0 | {} | 13pfbk1 | false | null | t3_13pfbk1 | /r/LocalLLaMA/comments/13pfbk1/keeping_an_llm_on_subject/ | false | false | self | 1 | null
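One pragmatic alternative to a full guardrails framework is to gate the LLM call on retrieval confidence: if the best vector-store similarity score is below a threshold, return a fixed "I don't know" instead of letting the model fall back on its trained knowledge. This is a sketch of the idea, not code from the privateGPT codebase; the 0.75 threshold and both callables are stand-ins.

```python
def answer(question, retrieve, llm, min_score=0.75):
    """retrieve(q) -> [(score, chunk)] and llm(prompt) -> str are stand-ins."""
    hits = retrieve(question)
    if not hits or max(score for score, _ in hits) < min_score:
        return "I don't know based on the indexed documents."
    context = "\n".join(chunk for _, chunk in hits)
    return llm(f"Answer ONLY from this context:\n{context}\n\nQ: {question}")
```

The threshold has to be tuned per embedding model, since different embedders produce very different similarity distributions.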
[ Removed by Reddit ] | 166 | [removed] | 2023-05-23T07:45:10 | https://www.reddit.com/r/LocalLLaMA/comments/13phmy0/removed_by_reddit/ | Creative-Rest-2112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13phmy0 | false | null | t3_13phmy0 | /r/LocalLLaMA/comments/13phmy0/removed_by_reddit/ | false | false | default | 166 | null |
How to fine tune / train an open source model? | 27 | [deleted] | 2023-05-23T08:20:53 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13pi9h8 | false | null | t3_13pi9h8 | /r/LocalLLaMA/comments/13pi9h8/how_to_fine_tune_train_an_open_source_model/ | false | false | default | 27 | null |
Local LLMs that pass this test? | 10 | "I have 2 apples. I ate 1 banana. How many apples do I have now?"
I was trying Wizard 33B GPTQ 4bit last night and I was disappointed to see this answer. No matter how many times I clicked "regenerate", it is consistently giving wrong answers. Can you guys try your favorite local LLMs and see if they pass?
However, I am impressed that the LLM tried to gaslight me saying my question was unclear when confronted.
https://preview.redd.it/nfnfs6rhpj1b1.png?width=1080&format=png&auto=webp&s=0a6effc161f331b88d7158930bf7e4226f4a18c2 | 2023-05-23T08:57:29 | https://www.reddit.com/r/LocalLLaMA/comments/13pivzv/local_llms_that_pass_this_test/ | Saren-WTAKO | self.LocalLLaMA | 2023-05-23T09:13:55 | 0 | {} | 13pivzv | false | null | t3_13pivzv | /r/LocalLLaMA/comments/13pivzv/local_llms_that_pass_this_test/ | false | false | 10 | null |
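To avoid hand-clicking "regenerate", trap questions like this can be batched into a tiny eval harness that extracts the final number from each answer and grades it. The `model` callable is a stand-in for a real generate call, and the answer-extraction heuristic (take the last number mentioned) is deliberately crude.

```python
import re

# Each entry: (trap question, expected final number as a string).
TRAPS = [
    ("I have 2 apples. I ate 1 banana. How many apples do I have now?", "2"),
    ("I have 3 books. I read 1 of them. How many books do I have?", "3"),
]

def final_number(answer):
    nums = re.findall(r"\d+", answer)
    return nums[-1] if nums else None

def grade(model):
    """model(question) -> str is a stand-in for a real generate call."""
    correct = sum(final_number(model(q)) == expected for q, expected in TRAPS)
    return correct / len(TRAPS)
```

Running the same harness across several local models makes the "which model passes?" comparison reproducible instead of anecdotal.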
What's currently the best model for scientific paper writing? | 9 | I had good results writing papers with ChatGPT: I just summarize the main points I'm trying to make for a particular section, then let ChatGPT generate the final text.
Basically I supply the technical input, and the model supplies the language. I'm not a native speaker, so this is obviously helpful for me.
I tried this with the new WizardLM 30B uncensored in 4bit, and it works ok. But maybe there are other models better suited for this? | 2023-05-23T09:26:18 | https://www.reddit.com/r/LocalLLaMA/comments/13pje5m/whats_currently_the_best_model_for_scientific/ | IlIllIlllIlllIllll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pje5m | false | null | t3_13pje5m | /r/LocalLLaMA/comments/13pje5m/whats_currently_the_best_model_for_scientific/ | false | false | self | 9 | null |
Is it possible to run the model Wizard 7B GGML 4-bit with my computer? | 2 | I'm totally new to the domain and trying to run the Wizard 7B GGML 4-bit model on my computer. I can't make it work, and since I don't have a good config, I'm thinking that may be where the problem comes from.

I have 8 GB of RAM, an AMD Ryzen 7 3700U, and AMD Radeon Vega Mobile Gfx graphics.

I try to run the model with llama.cpp on Windows (so on the CPU), but I get something like 1 token per 6 seconds.

Does my configuration not allow me to run such a model, or is it a configuration problem with llama.cpp? | 2023-05-23T09:35:15 | https://www.reddit.com/r/LocalLLaMA/comments/13pjk09/is_it_possible_to_run_the_model_wizard_7b_gglm_4/ | superanki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pjk09 | false | null | t3_13pjk09 | /r/LocalLLaMA/comments/13pjk09/is_it_possible_to_run_the_model_wizard_7b_gglm_4/ | false | false | self | 2 | null
(oobabooga) Is there any way to chat with more than one character at once? | 2 | [removed] | 2023-05-23T10:38:13 | https://www.reddit.com/r/LocalLLaMA/comments/13pkr0e/oobabooga_is_there_any_way_to_chat_with_more_than/ | OobaboogaHelp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pkr0e | false | null | t3_13pkr0e | /r/LocalLLaMA/comments/13pkr0e/oobabooga_is_there_any_way_to_chat_with_more_than/ | false | false | default | 2 | null |
Desktop connected to the RunPod cloud Oobabooga, running TheBloke_vicuna-13B-1.1-GPTQ-4bit-128g. These local models have so much potential! | 24 | 2023-05-23T12:25:43 | https://v.redd.it/8cogzmvrpk1b1 | No_Wheel_9336 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13pn0v6 | false | null | t3_13pn0v6 | /r/LocalLLaMA/comments/13pn0v6/desktop_connected_to_the_runpod_cloud_oobabooga/ | false | false | 24 | null
What's the story behind Hugging Face? | 14 | What's the story behind Hugging Face?
I have just found their website, and don't know what to make of it.
Is there an article somewhere which describes their history, aims and products/services? | 2023-05-23T12:26:11 | https://www.reddit.com/r/LocalLLaMA/comments/13pn1a7/whats_the_story_behind_hugging_face/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pn1a7 | false | null | t3_13pn1a7 | /r/LocalLLaMA/comments/13pn1a7/whats_the_story_behind_hugging_face/ | false | false | self | 14 | null |
How can I run the TinyStories model on llama.cpp for conversation instead of completion? | 1 | [removed] | 2023-05-23T13:27:35 | https://www.reddit.com/r/LocalLLaMA/comments/13pojdd/how_can_i_run_the_tinystories_model_on_llamacpp/ | su3su2u1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pojdd | false | null | t3_13pojdd | /r/LocalLLaMA/comments/13pojdd/how_can_i_run_the_tinystories_model_on_llamacpp/ | false | false | default | 1 | null |
Meanwhile here at LocalLLaMA.. | 716 | 2023-05-23T14:42:21 | MoneroBee | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13pqj3j | false | null | t3_13pqj3j | /r/LocalLLaMA/comments/13pqj3j/meanwhile_here_at_localllama/ | false | false | 716 | null
Can a llm demand more memory than is physically available? | 0 | [removed] | 2023-05-23T15:00:22 | https://www.reddit.com/r/LocalLLaMA/comments/13pr05z/can_a_llm_demand_more_memory_than_is_physically/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pr05z | false | null | t3_13pr05z | /r/LocalLLaMA/comments/13pr05z/can_a_llm_demand_more_memory_than_is_physically/ | false | false | default | 0 | null |
Instruct-guided novel writing using WizardLM-30B-Uncensored (with SillyTavern) | 130 | Hello everyone!
To preface this, I've been playing around with AI-assisted novel writing for a while, having used [HoloAI](https://writeholo.com/) and [NovelAI](https://novelai.net/) quite extensively, and playing around with KoboldAI and the [Nerys](https://huggingface.co/KoboldAI/fairseq-dense-13B-Nerys) and [Erebus](https://huggingface.co/KoboldAI/OPT-13B-Erebus) models they've trained. So far, my impression had been that the quality of the locally hosted models didn't quite live up to the paid offerings, coming down to (presumably) less finetuned models and less capable GUIs.
Now, after screwing around with the new [WizardLM-30B-Uncensored](https://huggingface.co/ehartford/WizardLM-30B-Uncensored) (thank you, Mr. Hartford 🙏), I figured that it lends itself pretty well to novel writing. Even though the model is instruct-tuned, the outputs (when guided correctly) actually rival NovelAI's Euterpe model. Obviously, this is highly subjective, and I can't speak for the "more capable" (and more expensive) offerings, such as NovelAI's Krake, as I haven't tried them yet. I also can't speak to the abilities of more specialised models to generate text guided by special genre and/or author prompts, as I haven't really explored those either.
Anyways, being able to run a high-parameter count LLaMA-based model locally (thanks to GPTQ) and "uncensored" is absolutely amazing to me, as it enables quick, (mostly) stylistically and semantically consistent text generation on a broad range of topics without having to spend money on a subscription.
However, local generation does have hurdles, and getting SillyTavern to handle WizardLM together with its built-in World Info and the summarization feature provided by SillyTavern-extras took some finagling. The result seems pretty good though:
https://preview.redd.it/v803qcmr7l1b1.png?width=1013&format=png&auto=webp&s=301d0866cf5e8f50db7a5f16847d920710728587
For the benefit of those interested in AI-assisted writing (and for my own record, should my PC catch fire from all the *intense inferencing*), I thought I should write down a short guide.
First things first, a note on my setup:
* I'm running all AI stuff on a WSL2 VM (Ubuntu 22.04) with an RTX 3090 and 64 GB of RAM.
* The actual model used is the [WizardLM's-30B-Uncensored GPTQ 4bit-quantized version provided by TheBloke](https://huggingface.co/TheBloke/WizardLM-30B-Uncensored-GPTQ)
* The backend for [SillyTavern](https://github.com/Cohee1207/SillyTavern) is provided by [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui), the GPTQ implementation is [iwalton3's GPTQ-for-LLaMa fork](https://github.com/iwalton3/GPTQ-for-LLaMa) providing support for act-order.
Now for the actual settings:
​
[Generation Settings](https://preview.redd.it/16wosahebl1b1.png?width=912&format=png&auto=webp&s=6bef884db0e4c9137d60b6edb0b87f8b4dd90434)
​
[Advanced Formatting](https://preview.redd.it/37ulgzxsbl1b1.png?width=1824&format=png&auto=webp&s=4910b0a5b30250289bb976327a9bd1ca1376ea8e)
Note that all AutoFormat Overrides are enabled, Instruct mode is active, Preset set to WizardLM, and the Tokenizer is Sentencepiece. The Anchors are disabled.
The system prompt is modified from the default, which is guiding the model towards behaving like a chatbot. The prompt I use is the following:
>This response is written in the style of a novel excerpt written in the third person by an omniscient narrator, containing vivid descriptions of each scene. Every response takes into account every previous response. The response stays in the present and does not contain information about the future. At the end of each response, there is a cliffhanger, staying in the present, leaving it open what happens next. Never end with a summary or moral of the story.
>
>Consider the following context when generating the response:
The bit about not having a summary or "moral of the story" tries to prevent the model from spitting out such things at the end of every generation.
​
[World Info Example](https://preview.redd.it/ei6bv43qcl1b1.png?width=1824&format=png&auto=webp&s=3a97647aa3aacc3bdb1a99f625287cfa4888e50c)
The World Info entries should be kept very short, but improve generation noticeably by preventing the AI from forgetting the characters. Notice that I chose "After Char" insertion, which will place the entry below the "Consider the following context when generating the response:" in the System Prompt mentioned above.
​
[Extensions](https://preview.redd.it/74cfpg5pdl1b1.png?width=1824&format=png&auto=webp&s=43ca1e1b4fcd76232faca4eea1460d8d900fef37)
Note that the Author's Note stays empty, as I didn't really find a use for it.
The "Chat memory" is kind of janky, because it sometimes misinterprets the story, but it can be manually edited. Note that it is regenerated every time a previous generation falls out of the token budget - which is pretty much every generation after the first few. The memory is inserted into the prompt below the "World Info" entries, directly in front of the instruction and prefixed by "Context:"
​
​
[Character Tab 1](https://preview.redd.it/r9vmoajnel1b1.png?width=1824&format=png&auto=webp&s=088dd2f39c8f3afd60445966fe786e9cfcbb7710)
[Character Tab 2](https://preview.redd.it/yj2f23ktel1b1.png?width=1824&format=png&auto=webp&s=f0c9abf063bd28fdb38c6d34e8ac1524d859db34)
The Character is empty except for the Scenario field (and the Name). The Scenario prompt is inserted in front of all World Info entries right below the "Consider the following context when generating the response:" mentioned above.
​
To visualize what's actually in the entire prompt sent to text-generation-webui, I have created this intricate diagram:
[Prompt Burger](https://preview.redd.it/iixl1akzgl1b1.png?width=910&format=png&auto=webp&s=3197ab27930fdefbefbbaa56cea8a41407cd3a92)
Note that the "previous generations" section is the one that gets reduced when all the other sections take up "too many" tokens.
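The layering and trimming described above can be sketched in a few lines of Python. This is a conceptual sketch only, not SillyTavern's actual code; the section ordering follows the diagram, and the words-as-tokens counter is a crude stand-in for a real tokenizer:

```python
def build_prompt(system, scenario, world_info, memory, history, instruction,
                 budget=2048, count_tokens=lambda s: len(s.split())):
    """Assemble the prompt layers, then fit as much recent history as
    the remaining token budget allows (newest entries survive longest)."""
    fixed = [system, scenario] + world_info + ["Context: " + memory, instruction]
    remaining = budget - sum(count_tokens(s) for s in fixed)
    kept = []
    for entry in reversed(history):
        cost = count_tokens(entry)
        if cost > remaining:
            break  # everything older than this gets dropped
        kept.append(entry)
        remaining -= cost
    middle = list(reversed(kept))  # restore chronological order
    return "\n".join([system, scenario] + world_info + middle +
                     ["Context: " + memory, instruction])
```

With a real tokenizer (e.g. SentencePiece, as selected in the settings above), the same loop explains why the oldest story text is always the first thing to disappear from the prompt.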
​
Now as always, your mileage may vary. Maybe WizardLM-30B-Uncensored-GPTQ is only good for weird cliché-ridden Until Dawn-based "aifiction"? Only one way to find out ;)
And obviously, this post is probably going to be outdated by the end of the month, sooo - I hope this helps for the millisecond that it's relevant, and I'd love to hear feedback or improvements! | 2023-05-23T15:03:25 | https://www.reddit.com/r/LocalLLaMA/comments/13pr3bh/instructguided_novel_writing_using/ | bilwis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pr3bh | false | null | t3_13pr3bh | /r/LocalLLaMA/comments/13pr3bh/instructguided_novel_writing_using/ | false | false | 130 | null |
|
GPT4 x Alpaca 13B vs Vicuna 13B | 2 | [removed] | 2023-05-23T15:07:57 | https://www.reddit.com/r/LocalLLaMA/comments/13pr7l0/gpt4_x_alpaca_13b_vs_vicuna_13b/ | One-Relationship4205 | self.LocalLLaMA | 2023-05-23T15:34:37 | 0 | {} | 13pr7l0 | false | null | t3_13pr7l0 | /r/LocalLLaMA/comments/13pr7l0/gpt4_x_alpaca_13b_vs_vicuna_13b/ | false | false | default | 2 | null |
Are sole individuals or are teams creating all these new models? | 8 | Just wondering: are driven individuals or teams creating all these new models?
If teams, what size are they typically? | 2023-05-23T15:58:23 | https://www.reddit.com/r/LocalLLaMA/comments/13pskc2/are_sole_individuals_or_are_teams_creating_all/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pskc2 | false | null | t3_13pskc2 | /r/LocalLLaMA/comments/13pskc2/are_sole_individuals_or_are_teams_creating_all/ | false | false | self | 8 | null |
Is local LLM cheaper than ChatGPT API? | 32 | The ChatGPT API costs only $0.002 per 1k tokens. I found that LLMs like LLaMA output only 10-20 tokens per second, which is very slow, and machines that can run them cost over $1 per hour. Based on these observations, it seems that utilizing the ChatGPT API might be the more affordable option.
Have any of you compared the costs and performance of local LLMs versus the ChatGPT API? Share your experiences and insights below. | 2023-05-23T16:19:57 | https://www.reddit.com/r/LocalLLaMA/comments/13pt5f3/is_local_llm_cheaper_than_chatgpt_api/ | Financial_Stranger52 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pt5f3 | false | null | t3_13pt5f3 | /r/LocalLLaMA/comments/13pt5f3/is_local_llm_cheaper_than_chatgpt_api/ | false | false | self | 32 | null |
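For a rough sense of the numbers in the question above, here is the back-of-the-envelope math (15 tokens/s and $1/hour are assumptions taken from the post, and both vary a lot by hardware and provider):

```python
api_cost_per_1k = 0.002        # ChatGPT API: $0.002 per 1k tokens

tokens_per_second = 15         # midpoint of the 10-20 tok/s quoted above
machine_cost_per_hour = 1.00   # rented GPU box, $/hour

tokens_per_hour = tokens_per_second * 3600            # 54,000 tokens
local_cost_per_1k = machine_cost_per_hour / (tokens_per_hour / 1000)

print(f"API:   ${api_cost_per_1k:.4f} per 1k tokens")    # $0.0020
print(f"Local: ${local_cost_per_1k:.4f} per 1k tokens")  # $0.0185
```

On these assumptions the API works out roughly 9x cheaper per token; local inference mainly wins on privacy, control, or when the hardware is already owned and sitting idle.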
Official MPS Support Thread | 12 | For those on M1 or M2 macs, let's create a centralized post for all resources related to MPS and how to get models to run faster on macs.
Currently on my M1 Max 32GB Memory machine I can handle 5 tokens/sec with 13B models. However 30B is crawling.
Have you guys been able to get faster speeds, and if so how?
​
\--
# How to see memory usage?
Activity Monitor's Memory column doesn't show the real memory usage. To see it, right-click the header bar and select `RealMemory`.
# More tokens/sec
These flags will make your model run much faster:
* `--threads` = num of CPU cores you have. On my M1 Max its 8
* M1/M2 = `4`
* M1/M2 Pro (8 cores) = `6`
* M1/M2 Pro (10 cores) = `8`
* M1/M2 Max = `8`
* M1 Ultra = `16`
* [source](https://github.com/oobabooga/text-generation-webui/pull/393#issuecomment-1500918024)
* `--mlock` = keeps the model in RAM
* `--sdp-attention` = this gave me a good speed up for 30B
* `--load-in-4bit` = made my model load up much faster
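Put together, the flags above produce a launch command along these lines (built here in Python just to show how they combine; `server.py` is text-generation-webui's usual entry point, and whether every flag applies to your backend/version is worth double-checking):

```python
flags = {
    "--threads": "8",         # M1 Max: 8 performance cores
    "--mlock": None,          # keep the model resident in RAM
    "--sdp-attention": None,  # scaled-dot-product attention speedup
    "--load-in-4bit": None,   # faster model load
}

parts = ["python", "server.py"]
for flag, value in flags.items():
    parts.append(flag)
    if value is not None:
        parts.append(value)

print(" ".join(parts))
# python server.py --threads 8 --mlock --sdp-attention --load-in-4bit
```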
# [Bug] First message
Whenever I load up my `text-generation-webui` server my first message only uses 100-200% CPU instead of the 700-800% it should be using.
The trick is to copy your prompt, send it, click `Stop`, then paste and send your prompt again. Then you will see the correct `% CPU` | 2023-05-23T16:29:39 | https://www.reddit.com/r/LocalLLaMA/comments/13ptew5/official_mps_support_thread/ | IWantToBeAWebDev | self.LocalLLaMA | 2023-05-26T00:31:45 | 0 | {} | 13ptew5 | false | null | t3_13ptew5 | /r/LocalLLaMA/comments/13ptew5/official_mps_support_thread/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'gaBsUPsMPCc9S_ThXyuPcwjHz0CJUu6iz8201w3BGxs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?width=108&crop=smart&auto=webp&s=06a03436c65d40168a8118c576a419a3190031de', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?width=216&crop=smart&auto=webp&s=bea50dcebd00104e044de7e8c869812b0ad89eb4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?width=320&crop=smart&auto=webp&s=856549cc651253d1c7dcf8a060e9cdb6b236bb80', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?width=640&crop=smart&auto=webp&s=1cd6c82e76afbf2098a49687daf2df17a0d3c2cf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?width=960&crop=smart&auto=webp&s=b9c2eab860882c3c2e4514fd06d8a03694fe76b0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?width=1080&crop=smart&auto=webp&s=0cd281c22512fb96c7c73f280ea893da415d4b5c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oZpD5pP_uA43M5wuiN815vr-MW9cWu1jz5G5-g3OuLw.jpg?auto=webp&s=b68f9d772b79529b9a8c97f79c1c1313e6c2691c', 'width': 1200}, 'variants': {}}]} |
Can I run a local LLM with only 4GB VRAM? | 0 | [removed] | 2023-05-23T17:03:42 | https://www.reddit.com/r/LocalLLaMA/comments/13puc8t/can_i_run_a_local_llm_with_only_4gb_vram/ | mrmczebra | self.LocalLLaMA | 2023-05-23T17:37:17 | 0 | {} | 13puc8t | false | null | t3_13puc8t | /r/LocalLLaMA/comments/13puc8t/can_i_run_a_local_llm_with_only_4gb_vram/ | false | false | default | 0 | null |
Best 7b models list? | 3 | I have seen a lot of lists comparing the best open-source models. Those are great, but they often don't include 7B models, as they aren't as good compared to higher-parameter models, especially the uncensored ones. I am unable to run bigger models on 8GB VRAM, so 7B is the best fit for me.
Which are the best ones so far? If a list for 7B exists, let me know. Also, what family of models has been most successful at 7B?
use cases: summaries, q/n, describing stuff (prompts basically), story writing. | 2023-05-23T17:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/13pupe6/best_7b_models_list/ | Suimeileo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pupe6 | false | null | t3_13pupe6 | /r/LocalLLaMA/comments/13pupe6/best_7b_models_list/ | false | false | self | 3 | null |
Watching the Microsoft Build Keynote and this post was made with the help of Dolly. | 0 | [removed] | 2023-05-23T17:25:36 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13puwml | false | null | t3_13puwml | /r/LocalLLaMA/comments/13puwml/watching_the_microsoft_build_keynote_and_this/ | false | false | default | 0 | null |
||
OpenLLaMA Releases 7B/3B Checkpoints with 700B/600B Tokens | 90 | [Update 05/22/2023](https://github.com/openlm-research/open_llama/tree/main#update-05222023)
>We are happy to release our [700B token checkpoint for the OpenLLaMA 7B model](https://huggingface.co/openlm-research/open_llama_7b_700bt_preview) and [600B token checkpoint for the 3B model](https://huggingface.co/openlm-research/open_llama_3b_600bt_preview). We’ve also updated the evaluation results. We expect the full 1T token training run to finish at the end of this week.
>
>Full checkpoints with 1T tokens are expected by the end of the week.
The [loss rate evaluation metrics](https://github.com/openlm-research/open_llama/tree/main#dataset-and-training) for 7B and 3B indicate substantially superior model performance to [RedPajama and even LLaMA](https://discord.com/channels/1082503318624022589/1097534874719625236/1110479369442447380) (h/t Suikamelon on [Together's Discord](https://discord.gg/WYh7Ag5swP)) at this point in the training and slightly worse performance than LLaMA 7B as released.
https://preview.redd.it/hq4lfxslum1b1.png?width=1013&format=png&auto=webp&s=1f4edd008e44071dbae4ba65d112f5df749eaf39
The results are actually quite good. This is 16-bit inference from 7B and blazing fast.
https://preview.redd.it/cldplkrz4n1b1.png?width=1361&format=png&auto=webp&s=85f30829d8eadf2ba44345a5658b70b03dc0964c | 2023-05-23T17:27:09 | https://www.reddit.com/r/LocalLLaMA/comments/13puy2j/openllama_releases_7b3b_checkpoints_with_700b600b/ | friedrichvonschiller | self.LocalLLaMA | 2023-05-23T20:28:59 | 0 | {} | 13puy2j | false | null | t3_13puy2j | /r/LocalLLaMA/comments/13puy2j/openllama_releases_7b3b_checkpoints_with_700b600b/ | false | false | 90 | {'enabled': False, 'images': [{'id': 'O8tRU_f2R-b_kBe3Ct1V4f07ncMfAYdYRE7TLhOCXBI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?width=108&crop=smart&auto=webp&s=690b3adda60e0cec765ac71023f16c613e76dc35', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?width=216&crop=smart&auto=webp&s=97dafa7ae671d168423988f61ac82f8fb7db5a6b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?width=320&crop=smart&auto=webp&s=18cc957ce65ab4a09b5f971f983577703e385fb5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?width=640&crop=smart&auto=webp&s=8f2d21bfaacb751a90fc59bfb02a398fa305db76', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?width=960&crop=smart&auto=webp&s=452068adb31f5136fa9c745cb31cd919827b7533', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?width=1080&crop=smart&auto=webp&s=c340e1c2aa919875c71969353bc74f7c5c51e621', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q6Tih_Q9kUjBS5zeHnjVvpVu2ccC2wHdYiBY5oZ5svs.jpg?auto=webp&s=dfba8a2266fa9a6c2b0f9f1b83a3e769db691bb7', 'width': 1200}, 'variants': {}}]} |
|
What kind of hardware do you need to run LLaMA locally? | 9 | Somehow I don't think my RTX 3070 is enough. | 2023-05-23T18:21:57 | https://www.reddit.com/r/LocalLLaMA/comments/13pwcu6/what_kind_of_hardware_do_you_need_to_run_llama/ | Darkhog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pwcu6 | false | null | t3_13pwcu6 | /r/LocalLLaMA/comments/13pwcu6/what_kind_of_hardware_do_you_need_to_run_llama/ | false | false | self | 9 | null |
Proof-of-concept with fine-tuning on local data? | 4 | Hi,
What do you all recommend for implementing a Q&A chat similar to ChatGPT, but with fine-tuning on some local documents (a bunch of HTML and PDF files)? I would prefer an API approach like OpenAI's, but they require the training data as JSON in a question-answer format. I'd like to point to a bunch of files, fine-tune, then be able to ask the AI questions. PrivateGPT allows one to point to a directory of files, but I heard it can be quite slow. I am aware of Serge via a Docker container, but I don't know if it supports tuning on local data. MosaicML's mpt-7b looks promising, but I'm not sure how to get started, either through their API or on a local server.
I'm open to an API, or trying it on a local computer/server. I think I can get my hands on a temporary server with GPU's. Thanks in advance for your help! | 2023-05-23T20:23:20 | https://www.reddit.com/r/LocalLLaMA/comments/13pzlsy/proofofconcept_with_finetuning_on_local_data/ | vinhdizzo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13pzlsy | false | null | t3_13pzlsy | /r/LocalLLaMA/comments/13pzlsy/proofofconcept_with_finetuning_on_local_data/ | false | false | self | 4 | null |
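Worth noting: for Q&A over local documents, most current tooling (PrivateGPT included) does retrieval rather than true fine-tuning, i.e. chunk the files, embed the chunks, and paste the best matches into the prompt. A dependency-free sketch of that idea, where the bag-of-words "embedding" is a toy stand-in for a real embedding model:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; swap in a real embedding model in practice."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(question, chunks, k=2):
    """Rank document chunks by similarity to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = ["The warranty lasts two years.",
          "Shipping takes five business days.",
          "Returns require the original receipt."]
best = top_chunks("How long is the warranty?", chunks, k=1)[0]
prompt = f"Context: {best}\nQuestion: How long is the warranty?\nAnswer:"
```

In practice you'd swap `embed` for a sentence-embedding model and send `prompt` to whichever LLM backend you settle on.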
7B Wizard model speed | 3 | How fast are 7B models supposed to run? I have a middle-of-the-road CPU in my laptop (Ryzen 5 5500U) and I'm using WizardLM-7B-uncensored with llama.cpp. I'm getting a prompt eval time of 77.60ms per token and a regular eval time of 174.67ms per token. I don't know if this is up to par, so I'm thinking of using GPU acceleration. Not that I have any problems with the current speed; I'm just wondering whether it could be faster. | 2023-05-23T21:40:28 | https://www.reddit.com/r/LocalLLaMA/comments/13q1psd/7b_wizard_model_speed/ | Jamalmail | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q1psd | false | null | t3_13q1psd | /r/LocalLLaMA/comments/13q1psd/7b_wizard_model_speed/ | false | false | self | 3 | null |
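For reference, those per-token timings convert to tokens per second like this (generation speed is governed by the regular eval time, not the prompt eval time):

```python
prompt_eval_ms = 77.60   # ms per token while ingesting the prompt
eval_ms = 174.67         # ms per token while generating

prompt_speed = 1000 / prompt_eval_ms  # ~12.9 tokens/s
gen_speed = 1000 / eval_ms            # ~5.7 tokens/s
print(f"{gen_speed:.1f} tok/s generation, {prompt_speed:.1f} tok/s prompt ingestion")
```

Roughly 5-6 tokens/s of generation is a plausible range for a 4/5-bit 7B model on a mid-range laptop CPU, so this looks about right; GPU offloading can still help, especially with long prompts.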
Make models answer in the language of the instruction | 1 | Models like WizardVicuna13B are good at understanding non-English (in my case German) instructions, but they respond in English, so I have to explicitly write that they should answer in German.
What would be a method to auto-detect the language and respond in that, too?
For ChatGPT it is the case. Or does the underlying Llama architecture miss an auto-translate layer? | 2023-05-23T22:16:36 | https://www.reddit.com/r/LocalLLaMA/comments/13q2ob1/make_models_answer_in_the_language_of_the/ | Koliham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q2ob1 | false | null | t3_13q2ob1 | /r/LocalLLaMA/comments/13q2ob1/make_models_answer_in_the_language_of_the/ | false | false | self | 1 | null |
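There is no translation layer to miss; ChatGPT mirrors the input language because its training data and RLHF cover many languages, while most LLaMA fine-tunes use English-heavy instruction sets. A pragmatic workaround is to detect the language yourself and prepend an explicit instruction. The crude stopword heuristic below stands in for a real language detector (a library such as langdetect would be the proper choice):

```python
GERMAN_HINTS = {"der", "die", "das", "und", "ist", "nicht", "ich", "ein", "eine"}

def wrap_instruction(text):
    """Prepend an explicit language instruction when the input looks German."""
    words = set(text.lower().split())
    if len(words & GERMAN_HINTS) >= 2:
        return "Antworte ausschließlich auf Deutsch.\n\n" + text
    return text

print(wrap_instruction("Was ist der Unterschied zwischen RAM und VRAM?"))
```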
OpenAccess AI Collective's Manticore 13B Chat (Manticore - Pygmalion) | 83 | # Manticore 13B Chat
Manticore 13B Chat builds on Manticore with new datasets, including a de-duped subset of the Pygmalion dataset. It also removes all Alpaca style prompts using
###
in favor of chat only style prompts using
USER:
ASSISTANT:
as well as [**pygmalion/metharme prompting**](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using
<|system|>, <|user|> and <|model|>
tokens.
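A small helper for emitting both prompt styles might look like this (the exact newline handling is my reading of the model card, so verify against the repo before relying on it):

```python
def chat_prompt(system, turns, style="vicuna"):
    """turns: list of (user_msg, assistant_msg_or_None) pairs;
    a trailing None leaves the prompt open for the model to complete."""
    if style == "vicuna":
        out = [system]
        for user, assistant in turns:
            out.append(f"USER: {user}")
            out.append(f"ASSISTANT: {assistant}" if assistant else "ASSISTANT:")
        return "\n".join(out)
    elif style == "metharme":
        out = [f"<|system|>{system}"]
        for user, assistant in turns:
            out.append(f"<|user|>{user}")
            out.append(f"<|model|>{assistant}" if assistant else "<|model|>")
        return "".join(out)
    raise ValueError(style)
```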
# GGML powered HF Space
As with Manticore and Wizard Mega, OpenAccess AI Collective are unique in providing a GGML-powered HF Space, available here: [https://huggingface.co/spaces/openaccess-ai-collective/manticore-13b-chat-pyg](https://huggingface.co/spaces/openaccess-ai-collective/manticore-13b-chat-pyg)
# Main repo
[https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg)
# Quantisations for local usage
GPTQ for GPU inference: [https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GPTQ](https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GPTQ)
GGML for CPU(+GPU) inference: [https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GGML](https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GGML) | 2023-05-23T23:48:22 | https://www.reddit.com/r/LocalLLaMA/comments/13q4xgh/openaccess_ai_collectives_manticore_13b_chat/ | The-Bloke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q4xgh | false | null | t3_13q4xgh | /r/LocalLLaMA/comments/13q4xgh/openaccess_ai_collectives_manticore_13b_chat/ | false | false | self | 83 | {'enabled': False, 'images': [{'id': '0yX6Osaz2Y363-dUccZuszUzp2yGUjbZHiLjr57wZ88', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?width=108&crop=smart&auto=webp&s=fc6e7eb7da486c3d445b45b1c25f6cd1d33d2ad5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?width=216&crop=smart&auto=webp&s=53dfb937180b13d61254b3802e86ff113aa4f01e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?width=320&crop=smart&auto=webp&s=643e47aee926253a2ca4fc1fc504314832fdee83', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?width=640&crop=smart&auto=webp&s=f8163f4e56d353e7d66e32b1ecc76cfcbc254f9f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?width=960&crop=smart&auto=webp&s=cf9b8f5438c54fe216bc72f6e0fe2d98c635ecc1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?width=1080&crop=smart&auto=webp&s=b43b6eb351378c1e3e4813a840db32e6ae92f467', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/E_xRo1yw5rmmykhNYCrOR2yBjvzBDlomFZv20GSdNeg.jpg?auto=webp&s=d5946fe0cea5ed688a05b5cc9f29d93a10fdc35c', 'width': 1200}, 'variants': {}}]} |
CLI LLM | 1 | [removed] | 2023-05-24T00:11:13 | https://www.reddit.com/r/LocalLLaMA/comments/13q5gjk/cli_llm/ | 5c077_fr33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q5gjk | false | null | t3_13q5gjk | /r/LocalLLaMA/comments/13q5gjk/cli_llm/ | false | false | default | 1 | null |
What is an budget level PC for running basic LLaMa models? | 3 | I sort of adhere to learning to swim by jumping in.
I am looking to understand locally hosted llamas without breaking the bank. Would 16GB RAM and a 24GB RTX 3090 be enough to start? | 2023-05-24T00:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/13q5vh4/what_is_an_budget_level_pc_for_running_basic/ | iChinguChing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q5vh4 | false | null | t3_13q5vh4 | /r/LocalLLaMA/comments/13q5vh4/what_is_an_budget_level_pc_for_running_basic/ | false | false | self | 3 | null |
4090 issue with Wizard 30B | 1 | [removed] | 2023-05-24T00:46:14 | https://www.reddit.com/r/LocalLLaMA/comments/13q692i/4090_issue_with_wizard_30b/ | Efficient-Sherbet-15 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q692i | false | null | t3_13q692i | /r/LocalLLaMA/comments/13q692i/4090_issue_with_wizard_30b/ | false | false | default | 1 | null |
How many tokens of storage can I get using Llama models? | 3 | I am trying to write a story and want to run some checks through a chatbot. GPT does okay, but after a fairly short time it forgets technical details that are critical to continuing the discussion for more than a few exchanges.
What am I looking at for some of these? I have a few models already, and am wondering if I can somehow increase its memory storage? I have 2 4090s, 64 GB ram and plenty of SSD space. | 2023-05-24T00:46:32 | https://www.reddit.com/r/LocalLLaMA/comments/13q69bj/how_many_tokens_of_storage_can_i_get_using_llama/ | Siigari | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q69bj | false | null | t3_13q69bj | /r/LocalLLaMA/comments/13q69bj/how_many_tokens_of_storage_can_i_get_using_llama/ | false | false | self | 3 | null |
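Hardware doesn't change the limit here: LLaMA-family models have a fixed 2048-token context window, so more VRAM or RAM lets you run bigger or faster models, not longer-memoried ones. A rough way to budget a story against that window (the ~1.3 tokens-per-word ratio is a common English rule of thumb, not exact):

```python
CONTEXT = 2048  # fixed context window for LLaMA-family models

def fits(story_words, reserved_for_reply=512, tokens_per_word=1.3):
    """Can this many words of story fit alongside a reply budget?"""
    used = int(story_words * tokens_per_word)
    return used <= CONTEXT - reserved_for_reply

print(fits(1000))  # ~1300 tokens of story + 512 reply budget -> True
print(fits(1400))  # ~1820 tokens -> over budget -> False
```

Beyond the window, the usual tricks are rolling summaries of earlier chapters and pinned character notes, as in the SillyTavern novel-writing post above.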
4090 wont run wizard 30b, people have same issue on community page | 1 | [removed] | 2023-05-24T00:48:10 | https://www.reddit.com/r/LocalLLaMA/comments/13q6akx/4090_wont_run_wizard_30b_people_have_same_issue/ | Efficient-Sherbet-15 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q6akx | false | null | t3_13q6akx | /r/LocalLLaMA/comments/13q6akx/4090_wont_run_wizard_30b_people_have_same_issue/ | false | false | default | 1 | null |
AlpacaFarm makes the RLHF process accessible to everyone by providing a simulator that replicates the RLHF process quickly (24h) and cheaply ($200) | 30 | Incredible how fast the open source langauge model space is moving. | 2023-05-24T00:48:28 | https://crfm.stanford.edu/2023/05/22/alpaca-farm.html | kryptkpr | crfm.stanford.edu | 1970-01-01T00:00:00 | 0 | {} | 13q6ato | false | null | t3_13q6ato | /r/LocalLLaMA/comments/13q6ato/alpacafarm_makes_the_rlhf_process_accessible_to/ | false | false | default | 30 | null |
4x 8GB RAM modules in dual channel VS 2x 16GB RAM modules in dual channel for Llama.cpp | 2 | I want to run GPT4 x Alpaca, I have two options for upgrading my PC.
4x 8GB RAM modules in dual channel: affordable but I will not have space for more RAM in the future.
2x 16GB RAM modules in dual channel: expensive but I could upgrade to 64GB in the future.
Would having 64GB of RAM in the future provide a significant boost for offline LLMs? Additionally, I plan to acquire an RTX 3060 for stable diffusion in the near future. Would having that much RAM be beneficial for SD, or would 32GB be sufficient? | 2023-05-24T00:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/13q6ejz/4x_8gb_ram_modules_in_dual_channel_vs_2x_16gb_ram/ | Nazuna_Vampi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q6ejz | false | null | t3_13q6ejz | /r/LocalLLaMA/comments/13q6ejz/4x_8gb_ram_modules_in_dual_channel_vs_2x_16gb_ram/ | false | false | self | 2 | null |
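One more consideration: for CPU inference (llama.cpp and friends) the limiting factor is memory bandwidth, and 4x8GB vs 2x16GB in dual channel deliver the same bandwidth, so speed should be identical; capacity only decides which models fit. A rough speed ceiling, on the approximation that each generated token streams the whole model through RAM once (DDR4-3200 figures assumed):

```python
bandwidth_gb_s = 51.2  # DDR4-3200, dual channel: 2 x 25.6 GB/s
model_gb = 7.9         # approx. size of a 13B model at q4_0

max_tok_s = bandwidth_gb_s / model_gb
print(f"~{max_tok_s:.1f} tokens/s upper bound")  # ~6.5 tokens/s
```

So 64GB later would let you load 30B/65B quantizations, but it won't make a 13B that already fits any faster; and for Stable Diffusion, 32GB of system RAM is generally plenty, since the heavy lifting happens in VRAM.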
7B performance on a 15 year old potato | 27 | [deleted] | 2023-05-24T00:57:32 | [deleted] | 2023-06-12T19:53:32 | 0 | {} | 13q6hu8 | false | null | t3_13q6hu8 | /r/LocalLLaMA/comments/13q6hu8/7b_performance_on_a_15_year_old_potato/ | false | false | default | 27 | null |
||
Baize v2! This is the best 7B chatbot! Fast & lightweight! | 34 | From the [founder](https://twitter.com/XuCanwen/status/1661099080746143745):
🐲 We are releasing Baize v2! This is the best 7B chatbot! Fast & lightweight!
🍺Baize v2 features Self-Distillation with Feedback (SDF) using ChatGPT's feedback to replace RLHF.
🤗Demo: huggingface.co/spaces/project-baize/chat-with-baize
🏎Weights: github.com/project-baize/baize-chatbot#v2
📄Paper: github.com/project-baize/baize-chatbot/releases/download/baize-v2/baize_v2_technical_report.pdf
🐲Now Baize supports @lmsysorg Fastchat. You can run Baize v2 with Fastchat CLI or API. Fastchat also supports OpenAI-style SDK - you only need to specify Baize's new checkpoints. You can use it the same way as Vicuna.
🔗 github.com/lm-sys/FastChat#api | 2023-05-24T01:09:46 | Xhehab_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13q6rw1 | false | null | t3_13q6rw1 | /r/LocalLLaMA/comments/13q6rw1/baize_v2_this_is_the_best_7b_chatbot_fast/ | false | false | 34 | {'enabled': True, 'images': [{'id': 'fJMd19zLcBUN-S2GkbJYmpN0lyV4D1CA7Cr3Y3m0O1w', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?width=108&crop=smart&auto=webp&s=f9e1bdb5d0fbcfeda0da536285bc24ab90c9ad8b', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?width=216&crop=smart&auto=webp&s=e703fbadf94771507e5899e3db81841985061dcc', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?width=320&crop=smart&auto=webp&s=7363267febeb84f479d5b86fc1666e745e257801', 'width': 320}, {'height': 365, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?width=640&crop=smart&auto=webp&s=b42ef77ebb2f8991fa9c24b4a7fc987eb9db9eb3', 'width': 640}, {'height': 548, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?width=960&crop=smart&auto=webp&s=82398f5485626a9babdd97601849535d86276ec7', 'width': 960}, {'height': 617, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?width=1080&crop=smart&auto=webp&s=57255e58368ed2f7ecd0d253bbc40adc023b5486', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/zetgh0er0q1b1.jpg?auto=webp&s=60b037fc1955cafd096da911352817052e5470d7', 'width': 2100}, 'variants': {}}]} |
||
Koboldcpp linux with gpu guide | 21 | **Opencl installation**
Fedora: sudo dnf in clblast clblast-devel mesa-libOpenCL-devel
Arch: sudo pacman -S cblas clblast
Debian: libclblast-dev
Amd proprietary drivers are not needed
**Koboldcpp linux with gpu guide**
git clone https://github.com/LostRuins/koboldcpp && cd koboldcpp && LLAMA_CLBLAST=1 make
clinfo --list
You need to use the right platform and device ID from clinfo! The easy launcher that appears when running koboldcpp without arguments may not pick these automatically (it didn't in my case). For me the correct option is Platform #2: AMD Accelerated Parallel Processing, Device #0: gfx1030.
python ./koboldcpp.py --useclblast 2 0 --model /ggmlfolder/ggmlfile.bin # first useclblast argument is platform ID, second is device ID
If you get an error about missing a mesa3d file you are probably using clover which requires mesa 23.1 or newer, avoid it for now
GGML models can be found on https://huggingface.co
Stop koboldcpp once you see the n_layer value, then run it again:
python ./koboldcpp.py --useclblast 2 0 --model /ggmlfolder/ggmlfile.bin --gpulayers 40
I am testing with [Manticore-13B.ggmlv3.q4_1](https://huggingface.co/TheBloke/Manticore-13B-GGML) which has 40 layers. I want all layers on gpu so I input 40. You can also put more layers than actual if you want, no harm. Use less if you don't have enough vram, but speed will be slower. Memory usage order low to high: q4_0, q4_1, q5_0, q5_1, etc
Open http://localhost:5001 in browser, have fun!
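koboldcpp also exposes a KoboldAI-compatible HTTP API on the same port, which is handy for scripting. A minimal request using only the standard library (field names follow the KoboldAI `/api/v1/generate` endpoint; check them against your koboldcpp version, and the actual send is left commented out so this runs without a server):

```python
import json
import urllib.request

payload = {
    "prompt": "USER: Write a haiku about GPUs.\nASSISTANT:",
    "max_length": 80,
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment with the server running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["results"][0]["text"])
```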
Alternate steps for rocm (faster blas processing!):
git clone https://github.com/YellowRoseCx/koboldcpp koboldcpprocm && cd koboldcpprocm
make LLAMA_HIPBLAS=1
Rocm is linux only and must be installed. See my gptq posts for information | 2023-05-24T01:12:37 | https://www.reddit.com/r/LocalLLaMA/comments/13q6u9e/koboldcpp_linux_with_gpu_guide/ | amdgptq | self.LocalLLaMA | 2023-06-27T12:12:23 | 0 | {} | 13q6u9e | false | null | t3_13q6u9e | /r/LocalLLaMA/comments/13q6u9e/koboldcpp_linux_with_gpu_guide/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=108&crop=smart&auto=webp&s=17279fa911dbea17f2a87e187f47ad903120ba87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=216&crop=smart&auto=webp&s=12bf202fa02a8f40e2ad8bab106916e06cceb1b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=320&crop=smart&auto=webp&s=90ff2c682d87ee483233b1136984d608f8b5c5c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=640&crop=smart&auto=webp&s=2bc95e1b2395af837db2786db2f84b9c7f86370a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=960&crop=smart&auto=webp&s=67e903b600e020b7bcf93fc2000ed3cf95cb4dbb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=1080&crop=smart&auto=webp&s=b4cb1ebc087816d879ac777ed29f74d454f35955', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?auto=webp&s=a4fb691b1b470f21e5ef01685267735cb15b7735', 'width': 1200}, 'variants': {}}]} |
This is major news, Meta AI released a paper on how to build next-gen transformers (multiscale transformers enabling 1M+ token LLMs) | 8 | [removed] | 2023-05-24T01:17:46 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13q6ya9 | false | null | t3_13q6ya9 | /r/LocalLLaMA/comments/13q6ya9/this_is_major_news_meta_ai_released_a_paper_on/ | false | false | default | 8 | null |
||
Trying to run WizardLM-30B uncensored-GPTQ but I'm running out of VRAM on my 3090? | 2 | [removed] | 2023-05-24T02:09:50 | https://www.reddit.com/r/LocalLLaMA/comments/13q82yk/trying_to_run_wizardlm30b_uncensoredgptq_but_im/ | BallJoints420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q82yk | false | null | t3_13q82yk | /r/LocalLLaMA/comments/13q82yk/trying_to_run_wizardlm30b_uncensoredgptq_but_im/ | false | false | default | 2 | null |
How much / why does quantization negatively affect LoRA training on LLaMA? | 3 | I haven't trained LoRAs myself, but I have a lot of interest in the technology. One thing I'm really curious about is how quantization affects the LoRA training that everyone's doing with LLaMA or its derivatives. Basically, I'm wondering: if I can get a big model optimized well enough to run inference on my machine, would I also be able to make LoRAs with that same optimized model? If not, what actually goes wrong?
My impression for inference at least is that people have been managing to make LLaMA run on Raspberry Pis using int4 and getting results mostly indistinguishable from normal precision. And there is even promising research about using binary weights, which is effectively int1! It seems like there is a lot of room for optimizing the model here.
My questions:
1. Does quantization work just as well for training?
For full fine-tuning at least, I get the impression that normal precision is needed; otherwise the losses fail to converge. I do see tutorials for training LoRAs using int8, but I also see people claiming that normal precision is better if you have enough VRAM for it (I'm not sure whether it's slightly better or massively better). And I can't quite tell from searching online whether int4 training works and just has poor library support currently, or whether it performs so poorly that libraries don't bother implementing it for good reason.
2. If not, why does quantization affect training more than inference?
I can't quite see why the backward pass would be significantly different from the forward pass. I see references to the idea that the gradients may be several orders of magnitude smaller than the current values of the weights, but it seems to me that this problem ought to disappear if you just keep the LoRA weights in full precision. Why would the precision of the frozen weights matter?
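For intuition, here is a tiny pure-Python toy (invented numbers, no ML libraries — just a sketch of the idea in question 2): the frozen base weight is quantized and perturbs only the forward pass, while the LoRA factors stay in full precision, so the tiny gradient values land on parameters that can actually represent them.

```python
# Toy sketch: frozen int4-quantized base weight + full-precision LoRA factors.
# All values are invented for illustration; this is not a real training loop.

def quantize_int4(w, scale):
    """Symmetric round-to-nearest int4: 16 levels, q in [-8, 7]."""
    q = max(-8, min(7, round(w / scale)))
    return q * scale  # dequantized value used in the forward pass

w = 0.3137            # frozen base weight (original full precision)
scale = 0.05          # quantization step; round-trip error is at most scale/2
w_q = quantize_int4(w, scale)

a, b = 0.1, 0.2       # LoRA factors, kept in full precision
x = 1.5               # input activation
t = 0.6               # target output

# Forward: the base path uses the quantized weight, the LoRA path does not.
y = (w_q + b * a) * x

# Backward for L = 0.5 * (y - t)**2: gradients flow ONLY to the LoRA factors;
# w_q is frozen, so its low precision never has to represent tiny update steps.
dL_dy = y - t
dL_da = dL_dy * b * x
dL_db = dL_dy * a * x

print(w_q, abs(w - w_q), dL_da, dL_db)
```

The point of the sketch: quantizing the frozen weight only adds a bounded forward-pass perturbation (here at most `scale/2`), while the small gradients accumulate in the full-precision `a` and `b`.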
It seems to me at least that it would be a huge deal if super low precision LoRA training works, because that would mean training with frozen base model weights can be made massively faster than training with all weights tunable. It would lend credence to the idea of incrementally upgrading a base model with many layers of LoRA to create an ever stronger base model. | 2023-05-24T02:51:59 | https://www.reddit.com/r/LocalLLaMA/comments/13q8zjc/how_much_why_does_quantization_negatively_affect/ | jubilantjerry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q8zjc | false | null | t3_13q8zjc | /r/LocalLLaMA/comments/13q8zjc/how_much_why_does_quantization_negatively_affect/ | false | false | self | 3 | null |
Looking for recommendations | 6 | I want to find the best local LLM in terms of output quality, and I don't care that much if it's slow. I tried Snoozy from GPT4All and it's alright, but I'm trying to find one that's almost as good as ChatGPT | 2023-05-24T03:02:45 | https://www.reddit.com/r/LocalLLaMA/comments/13q97ub/looking_for_reccomendations/ | Top_Culture_9625 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q97ub | false | null | t3_13q97ub | /r/LocalLLaMA/comments/13q97ub/looking_for_reccomendations/ | false | false | self | 6 | null
What are my best options right now for models/setups if I have a 3060, a 5600x, and 32 GB of ram? | 10 | I've been browsing this subreddit daily but it's a lot. I just want a model i can talk to like chatgpt that can help me write stories and answer a few random questions here and there at decent speeds. If I need more ram/Vram I understand. | 2023-05-24T03:28:39 | https://www.reddit.com/r/LocalLLaMA/comments/13q9rdf/what_are_my_best_options_right_now_for/ | sephy009 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13q9rdf | false | null | t3_13q9rdf | /r/LocalLLaMA/comments/13q9rdf/what_are_my_best_options_right_now_for/ | false | false | self | 10 | null |
If I have 24GB VRAM at this point what's the best option to go with on Windows? | 3 | To get going quickly. My head is spinning with all these models, and the 4090 is pretty decent, so I figure I could get away with a slightly beefier one | 2023-05-24T04:10:00 | https://www.reddit.com/r/LocalLLaMA/comments/13qalqf/if_i_have_24gb_vram_at_this_point_whats_the_best/ | EarthquakeBass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13qalqf | false | null | t3_13qalqf | /r/LocalLLaMA/comments/13qalqf/if_i_have_24gb_vram_at_this_point_whats_the_best/ | false | false | self | 3 | null
Research on business cases | 1 | Hey all - I was wondering if there were any guides for business related use cases. I understand llama isn’t commercial and I don’t plan on operationalizing. I just wanted to experiment and see what the capabilities of 65B and 30B models were compared to GPT4 when fine tuned to a use case.
Think building process documents, designing operating models, assessing and recommending based on current state etc…
Wondering if I’m missing a community or if this is just a new area of research. Thanks!! | 2023-05-24T04:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/13qazzc/research_on_business_cases/ | sshan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13qazzc | false | null | t3_13qazzc | /r/LocalLLaMA/comments/13qazzc/research_on_business_cases/ | false | false | self | 1 | null |
GitHub - artidoro/qlora: QLORA: Efficient Finetuning of Quantized LLMs | 148 | 2023-05-24T05:37:30 | https://github.com/artidoro/qlora | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 13qclld | false | null | t3_13qclld | /r/LocalLLaMA/comments/13qclld/github_artidoroqlora_qlora_efficient_finetuning/ | false | false | 148 | {'enabled': False, 'images': [{'id': '6KSYn1OpiNmAhSf2yx1kkb0ibgiMfcU3UE91pJt36_w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?width=108&crop=smart&auto=webp&s=ef25d30e9cae1029ecfa6f9dcb6854866a8c6598', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?width=216&crop=smart&auto=webp&s=8a2e15deb3726556736beec227a66089a01ecdff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?width=320&crop=smart&auto=webp&s=9947a7555051da5c1cfebd8b31bcbfe16bdf3b16', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?width=640&crop=smart&auto=webp&s=b8506d6a7e75f99ce80a5f5f33af0507393f9e61', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?width=960&crop=smart&auto=webp&s=4530d6aa63afe3c4e37b4be65ff0d09661ef759c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?width=1080&crop=smart&auto=webp&s=6b4599d0fda5934a9a852d508b21d880b584fc16', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uV7hsaBV-goZWI1cKZFqEtNzwy7MmDkTBQAUopanpAY.jpg?auto=webp&s=70385919c3aa169f76a78d82aca4d032474a0e45', 'width': 1200}, 'variants': {}}]} |
||
Make a Lora for 30B on CPU+10GB VRAM? | 3 | Is that plausible? I know you can generate text like that (slow but works).. but can I make a LORA? I have a few thousand text examples I’d like to train it on. | 2023-05-24T06:00:13 | https://www.reddit.com/r/LocalLLaMA/comments/13qd0n4/make_a_lora_for_30b_on_cpu10gb_vram/ | maxiedaniels | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13qd0n4 | false | null | t3_13qd0n4 | /r/LocalLLaMA/comments/13qd0n4/make_a_lora_for_30b_on_cpu10gb_vram/ | false | false | self | 3 | null |
RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text | 35 | 2023-05-24T07:26:39 | https://github.com/aiwaves-cn/RecurrentGPT/tree/main | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 13qekna | false | null | t3_13qekna | /r/LocalLLaMA/comments/13qekna/recurrentgpt_interactive_generation_of/ | false | false | 35 | {'enabled': False, 'images': [{'id': 'jomdYge7wRVxv7mq4YEIekVi418yhAAWB12HmV_6r1I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?width=108&crop=smart&auto=webp&s=5d08c335440809ce08a34e2ecdc6bc26fa5e9578', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?width=216&crop=smart&auto=webp&s=5f39f4ee2e9a3c27ae559e5765c3bd071b6cc5bd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?width=320&crop=smart&auto=webp&s=06735888230a650cf7e6680807460ccedc380b56', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?width=640&crop=smart&auto=webp&s=dd055d16af7f01d43339488e9c2db485c087c98f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?width=960&crop=smart&auto=webp&s=9b9270e683b9e28d6f73c49f417777a6f590eb76', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?width=1080&crop=smart&auto=webp&s=bebd03d898fb14f9919f0bd4d61a88ede237deb2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DGJHtUq7yIN0OCiqgGLSNX8KD-HhmdRd_4WlGKw_-E0.jpg?auto=webp&s=6af25fff65528fd7e81485eb936e8a60ee954d78', 'width': 1200}, 'variants': {}}]} |
||
With limited hardware (laptop), what kind of local AI would be most viable? | 5 |
Is it a model with fewer parameters that's highly specialized?
For my specific use case I'm looking to build a local generative coding AI (to generate JavaScript code), and I have around 4 GB of RAM, a 120 GB SSD, and a 4-core Intel i5.
Any help to avoid downloading tons of Gbs and frying my laptop would be highly appreciated!!! 😅 | 2023-05-24T07:58:55 | https://www.reddit.com/r/LocalLLaMA/comments/13qf4l3/with_limited_hardware_laptop_what_kind_of_local/ | neilyogacrypto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13qf4l3 | false | null | t3_13qf4l3 | /r/LocalLLaMA/comments/13qf4l3/with_limited_hardware_laptop_what_kind_of_local/ | false | false | self | 5 | null |
I'm trying to create a recommendation engine. What's the current best way to fine-tune these models? | 3 | I'm trying to have a LLM output some sort of structured data such as JSON that contains recommendations based on the prompt. As far as I understand I could generate a massive collection of simple prompts, desired outputs and then tune the model to handle more complex prompts to generate useful outputs. Does that sound right or am I confusing fine-tuning with actual training? I also read something about embeddings that could be useful to enforce the output format maybe?
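For the "enforce the output format" part, one common pattern (independent of which model or tuning method you pick) is to ask for JSON in the prompt, then parse and validate the reply, retrying on failure. This is only an illustrative sketch: `generate` is a stand-in for whatever inference call you end up using (llama.cpp bindings, a web UI's API, etc.), and the schema is made up.

```python
# Hedged sketch: prompt for JSON, parse defensively, retry a few times.
import json

def parse_recommendations(raw: str):
    """Extract the first {...} JSON object from a model reply, or None."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end <= start:
        return None
    try:
        return json.loads(raw[start:end + 1])
    except json.JSONDecodeError:
        return None

def recommend(generate, query: str, retries: int = 3):
    prompt = (
        "Return ONLY a JSON object of the form "
        '{"recommendations": [{"title": str, "reason": str}]}.\n'
        f"Request: {query}\nJSON:"
    )
    for _ in range(retries):
        parsed = parse_recommendations(generate(prompt))
        if parsed is not None and "recommendations" in parsed:
            return parsed
    return {"recommendations": []}  # fall back to an empty, well-formed result

# Stubbed model call, for demonstration only:
fake = lambda p: 'Sure! {"recommendations": [{"title": "X", "reason": "fits"}]}'
print(recommend(fake, "sci-fi books"))
```

Fine-tuning on prompt/JSON pairs should make the parse succeed more often; the validation loop is still a useful safety net either way.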
What tools should I use to do this to a good model such as the new WizardLM-30B-Uncensored. | 2023-05-24T08:25:19 | https://www.reddit.com/r/LocalLLaMA/comments/13qfks1/im_trying_to_create_a_recommendation_engine_whats/ | Dogeboja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13qfks1 | false | null | t3_13qfks1 | /r/LocalLLaMA/comments/13qfks1/im_trying_to_create_a_recommendation_engine_whats/ | false | false | self | 3 | null |
NovelAI has finished developing their new model from scratch, Clio, rivaling Llama 7b despite only being a 3B parameter model. Also has ~8200 tokens of context. Larger models are being trained now. | 1 | [removed] | 2023-05-24T09:58:44 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13qh60l | false | null | t3_13qh60l | /r/LocalLLaMA/comments/13qh60l/novelai_has_finished_developing_their_new_model/ | false | false | default | 1 | null |
||
I want to train a pre-trained model for a translation task. | 2 | [deleted] | 2023-05-24T10:05:58 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13qhapy | false | null | t3_13qhapy | /r/LocalLLaMA/comments/13qhapy/i_want_to_train_a_pretrained_model_for_a/ | false | false | default | 2 | null |
||
Sharing my comparison methodology for LLM models | 16 | There are a lot of benchmarks used to compare LLMs, yet none of them is used as a standard, and it can be unclear which models are strong overall and which are weak overall, because the known metrics might be completely disjoint between the two models you want to compare.
I end up having a hard time understanding how good or bad the new LLaMA alternatives are, or how they compare to OpenAI's models.
So I've tried to use a basic matrix factorization method to estimate unknown benchmark scores for models based on the known benchmark scores. Basically, I assume each model has some intrinsic "quality" score, and each known benchmark is assumed to be a linear function of that quality score. This is similar to matrix factorization with only 1 latent factor (though the bias values have to be handled differently). Then I fit the known benchmark scores from [https://github.com/LudwigStumpp/llm-leaderboard](https://github.com/LudwigStumpp/llm-leaderboard) to my parameters, and estimate the remaining benchmark scores.
I organized the predicted results in this spreadsheet: [https://drive.google.com/file/d/15E1cxj0fQGAE2eyokQeX91PI\_npIjzSA/view?usp=sharing](https://drive.google.com/file/d/15E1cxj0fQGAE2eyokQeX91PI_npIjzSA/view?usp=sharing). It's a bit messy and I haven't written more detailed instructions, but the quality score is shown on the rightmost column of the second sheet.
Some observations:
* My sheet does show a high quality score for GPT-4, as expected (0.793)
* It suggests that open source models generally are worse than LLaMA and GPT-3
* MPT-7B, Bloom-176B, and RWKV-14B seem to have relatively high quality scores among open-source models (0.0566, -0.0007, and -0.0330 respectively)
* The benchmarks in the table are only intended to compare base LLM's, not tuned ones. Instruction tuning improves the benchmark scores, so it might not be fair to compare, say, text-gpt-3.5-175B with LLaMA-65B, since a fine-tuned LLaMA-65B may do better.
My code: [https://github.com/JubilantJerry/matrix-factorization](https://github.com/JubilantJerry/matrix-factorization)
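For readers who don't want to dig through the repo, here is a stripped-down pure-Python sketch of the idea (the scores below are invented toy values, not real benchmark numbers): each model m gets a scalar quality q[m], each benchmark b gets a slope s[b] and bias c[b], observed cells are fit as s[b]*q[m] + c[b] by gradient descent, and a missing cell is then predicted from the fitted parameters.

```python
# 1-latent-factor matrix factorization with per-benchmark slope and bias.
scores = {                       # (model, benchmark) -> score; None = unknown
    ("A", "x"): 0.9, ("A", "y"): 0.8,
    ("B", "x"): 0.5, ("B", "y"): None,   # the cell we want to estimate
    ("C", "x"): 0.7, ("C", "y"): 0.6,
}
models, benches = ["A", "B", "C"], ["x", "y"]

q = {m: 0.0 for m in models}     # per-model quality (the single latent factor)
s = {b: 1.0 for b in benches}    # per-benchmark slope
c = {b: 0.0 for b in benches}    # per-benchmark bias

lr = 0.05
for _ in range(20000):           # full-batch gradient descent on squared error
    gq = {m: 0.0 for m in models}
    gs = {b: 0.0 for b in benches}
    gc = {b: 0.0 for b in benches}
    for (m, b), y in scores.items():
        if y is None:
            continue             # skip unobserved cells when fitting
        err = s[b] * q[m] + c[b] - y
        gq[m] += err * s[b]
        gs[b] += err * q[m]
        gc[b] += err
    for m in models:
        q[m] -= lr * gq[m]
    for b in benches:
        s[b] -= lr * gs[b]
        c[b] -= lr * gc[b]

# B's predicted score on benchmark y (≈ 0.4 for this toy data)
predicted_B_y = s["y"] * q["B"] + c["y"]
print({m: round(q[m], 3) for m in models}, round(predicted_B_y, 3))
```

Note the usual rank-1 caveat: q and s are only determined up to an affine rescaling, but predictions for missing cells come out the same for any global minimum.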
*Edits: I manually fixed some entries from the table, added additional benchmark metrics, and added gpt-3.5-turbo as well as RWKV-14B model to the list. I also removed code-only models, Palm 2, and the human evaluation coding metric.* | 2023-05-24T11:36:50 | https://www.reddit.com/r/LocalLLaMA/comments/13qj07n/sharing_my_comparison_methodology_for_llm_models/ | jubilantjerry | self.LocalLLaMA | 2023-05-24T18:49:15 | 0 | {} | 13qj07n | false | null | t3_13qj07n | /r/LocalLLaMA/comments/13qj07n/sharing_my_comparison_methodology_for_llm_models/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'uD7w87RxJjXgqr43pHXrYCli60XLOKUPqU3SKiX_5NU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?width=108&crop=smart&auto=webp&s=9d5f63fdbbc69bd603cfe41e9e280f87bd567c36', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?width=216&crop=smart&auto=webp&s=9828c61598920245118368b81aecf2e6f2525a68', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?width=320&crop=smart&auto=webp&s=660207c7d9f51b5a3dc9fd5c6e13f521fa8ebb4f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?width=640&crop=smart&auto=webp&s=b92c0934ecead6831cf85562ed44f630d5f78deb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?width=960&crop=smart&auto=webp&s=828652ff3c2defd25ce792ee683fd2d89c23b0ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?width=1080&crop=smart&auto=webp&s=6b7bd1e5086aac4e338e1747cb682e2155bcb033', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/H29ajyQiJ3Zau0WBpx6IwQivZpAlgfPLPVpC-8qMGOA.jpg?auto=webp&s=63fabe7739c783c91cf1dc54537d1f63933f7d50', 'width': 1280}, 'variants': {}}]}
Seeking advice on fine-tuning MPT-7B for semantic search in a non-English language | 12 | Hey, everyone!
I am working on a start-up and require a local, offline Large Language Model that can ideally be quantized to 8 bits and has a commercially viable license (Apache 2.0 / MIT / etc.). One might assume such a model already exists. However, I need it to support a language other than English, spoken by around 20 million people and ranked within the top 20 economies.
My current idea is to utilize the [MPT-7B](https://www.mosaicml.com/blog/mpt-7b) LLM. I would like to know whether translating the Dolly-15k dataset into the desired language using state-of-the-art translation techniques like DeepL would be a viable approach to fine-tuning this MPT-7B base model. I want to use this model for semantic search, so perfect conversational QA is not a necessity right now.
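A minimal sketch of the translate-the-dataset step: the `translate` callable below is a stand-in for a real MT backend (e.g. the DeepL API), and the field names follow the Dolly-15k schema (instruction/context/response); everything else is illustrative.

```python
# Translate instruction-tuning records field by field before fine-tuning.
import json

def translate_dolly_records(records, translate):
    """Return new records with instruction/context/response translated.

    `translate` is any str -> str callable (here a stub; in practice an
    MT API call with batching and rate limiting).
    """
    out = []
    for rec in records:
        out.append({
            "instruction": translate(rec["instruction"]),
            "context": translate(rec.get("context", "")),
            "response": translate(rec["response"]),
        })
    return out

sample = [{"instruction": "Summarize this.", "context": "", "response": "OK."}]
fake_mt = lambda s: s.upper()   # stand-in "translator" for demonstration
print(json.dumps(translate_dolly_records(sample, fake_mt)))
```

One design caveat worth spot-checking: machine translation can mangle code snippets, named entities, and formatting inside records, so it's worth filtering or protecting those fields before translating.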
Please do not hesitate to suggest alternative strategies as well! | 2023-05-24T12:26:27 | https://www.reddit.com/r/LocalLLaMA/comments/13qk631/seeking_advice_on_finetuning_mpt7b_for_semantic/ | fictioninquire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13qk631 | false | null | t3_13qk631 | /r/LocalLLaMA/comments/13qk631/seeking_advice_on_finetuning_mpt7b_for_semantic/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'KiWphxd9bS2yRtNjZ0zpxXu1aWJSEVs3xt9PJGA93mY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=108&crop=smart&auto=webp&s=b7de4a11d8aa930cab7bcfab456a15cb1e4ac7f5', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=216&crop=smart&auto=webp&s=2194435974221fda2161e27f3e2c95a4bc913258', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=320&crop=smart&auto=webp&s=61b1e0abf23b609927b6941f2d78d2951f14fd6f', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=640&crop=smart&auto=webp&s=35517665b774a1a393348d53844e9cf54bd9e014', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=960&crop=smart&auto=webp&s=9b623d55b1d22f5b0505cf491d8c130a77cec46c', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?width=1080&crop=smart&auto=webp&s=b5f9d5a1ab65aaa65eede74c7f11ab3888621343', 'width': 1080}], 'source': {'height': 578, 'url': 'https://external-preview.redd.it/dfC0Ybyf-RybVsuU8wdIvj5okUv2aDxBHZZxGquwIAM.jpg?auto=webp&s=17e383a94ed487bc8a24a52103945058c4cdd305', 'width': 1106}, 'variants': {}}]} |
QLoRA - finetune a 65B model on a single 48GB GPU, 7B on a 6GB GPU | 52 | [removed] | 2023-05-24T12:34:38 | https://www.reddit.com/r/LocalLLaMA/comments/13qkdr0/qlora_finetune_a_65b_model_on_a_single_48gb_gpu/ | PookaMacPhellimen | self.LocalLLaMA | 2023-05-24T14:48:17 | 0 | {} | 13qkdr0 | false | null | t3_13qkdr0 | /r/LocalLLaMA/comments/13qkdr0/qlora_finetune_a_65b_model_on_a_single_48gb_gpu/ | false | false | default | 52 | null |
"Meet BLOOMChat: An Open-Source 176-Billion-Parameter Multilingual Chat Large Language Model (LLM) Built on Top of the BLOOM Model" | 69 |
Article linked
From the article...
"The BLOOM model has the ability to generate text in 46 natural languages and 13 programming languages. For languages such as Spanish, French, and Arabic, BLOOM represents the first language model ever created with over 100 billion parameters. BLOOM was developed by the BigScience organization, which is an international collaboration of over 1000 researchers. By fine-tuning BLOOM on open conversation and alignment datasets from projects like OpenChatKit, Dolly 2.0, and OASST1, the core capabilities of BLOOM were extended into the chat domain."
"In human evaluations conducted across six languages, BLOOMChat responses were preferred over GPT-4 responses 45.25% of the time. Compared to four other open-source chat-aligned models in the same six languages, BLOOMChat’s responses ranked as the best 65.92% of the time." | 2023-05-24T14:01:19 | https://www.marktechpost.com/2023/05/22/meet-bloomchat-an-open-source-176-billion-parameter-multilingual-chat-large-language-model-llm-built-on-top-of-the-bloom-model/?amp | KaihogyoMeditations | marktechpost.com | 1970-01-01T00:00:00 | 0 | {} | 13qmlg5 | false | null | t3_13qmlg5 | /r/LocalLLaMA/comments/13qmlg5/meet_bloomchat_an_opensource_176billionparameter/ | false | false | 69 | {'enabled': False, 'images': [{'id': 'wXqgfI3AEwOalftJJW_ty6Ee2L_QwitoKrRG6Vd6GQY', 'resolutions': [{'height': 112, 'url': 'https://external-preview.redd.it/kr_b5Kgcle0EvhBvjNBnOww_nict8xVkh6yOGajf0Hw.jpg?width=108&crop=smart&auto=webp&s=0bcc1045ea1a01da4480d7833cac3c8356182c37', 'width': 108}, {'height': 225, 'url': 'https://external-preview.redd.it/kr_b5Kgcle0EvhBvjNBnOww_nict8xVkh6yOGajf0Hw.jpg?width=216&crop=smart&auto=webp&s=2ff2b2620e9be97ae484b6afee4003d7a3802de2', 'width': 216}, {'height': 334, 'url': 'https://external-preview.redd.it/kr_b5Kgcle0EvhBvjNBnOww_nict8xVkh6yOGajf0Hw.jpg?width=320&crop=smart&auto=webp&s=d587430ff2b4a638d243b4bc0e2f2bc948208446', 'width': 320}], 'source': {'height': 368, 'url': 'https://external-preview.redd.it/kr_b5Kgcle0EvhBvjNBnOww_nict8xVkh6yOGajf0Hw.jpg?auto=webp&s=d8766fd797c77d234a356bec42ab5b85bc3e971a', 'width': 352}, 'variants': {}}]} |
|
Multiscale Transformers paper published (1 million+ tokens now possible) | 95 | 2023-05-24T14:41:59 | https://arxiv.org/abs/2305.07185 | ptitrainvaloin | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 13qnmyt | false | null | t3_13qnmyt | /r/LocalLLaMA/comments/13qnmyt/multiscale_transformers_paper_published_1_million/ | false | false | 95 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
||
"About Community" wording change suggestion | 2 | [removed] | 2023-05-24T15:00:27 | https://www.reddit.com/r/LocalLLaMA/comments/13qo3e2/about_community_wording_change_suggestion/ | ozzeruk82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13qo3e2 | false | null | t3_13qo3e2 | /r/LocalLLaMA/comments/13qo3e2/about_community_wording_change_suggestion/ | false | false | default | 2 | null |
How to increase privateGPT performance by up to 2x | 76 | [Speed boost for privateGPT](https://preview.redd.it/4og9cvashs1b1.jpg?width=2432&format=pjpg&auto=webp&s=c4048229d813b3a64fbb951ec776e42bf00b4e28)
I want to share some settings that I changed to improve the performance of the privateGPT by up to 2x.
👉 Update 1 (25 May 2023)
Thanks to u/Tom_Neverwinter for raising the question of using CUDA 11.8 instead of CUDA 11.4. CUDA 11.8 definitely performs better than CUDA 11.4, so I updated my post.
👉 Update (12 June 2023) :
* If you have a non-AVX2 CPU and want to benefit from privateGPT, [check this out.](https://blog.anantshri.info/privategpt-and-cpus-with-no-avx2/) ([u/BringOutYaThrowaway](https://www.reddit.com/u/BringOutYaThrowaway/), thanks for the info)
* AMD card owners please follow [this instructions](https://www.reddit.com/r/LocalLLaMA/comments/13m8li2/finally_got_a_model_running_on_my_xtx_using/).
* llama-cpp version updated on the instructions from `0.1.53` to `0.1.57`
​
​
Original repo: [https://github.com/imartinez/privateGPT](https://github.com/imartinez/privateGPT)
1. First, you need to build the wheel for llama-cpp-python. This can be challenging, but if you have any problems, please follow the instructions below.
Additionally, I installed the following llama-cpp-python version to use v3 GGML models:
pip uninstall -y llama-cpp-python
set CMAKE_ARGS="-DLLAMA_CUBLAS=on"
set FORCE_CMAKE=1
pip install llama-cpp-python==0.1.57 --no-cache-dir
​
2. Next, I modified the "privateGPT.py" file to initialize the LLM with GPU offloading. I added the following lines to the file:
# Added a parameter for the number of GPU layers
n_gpu_layers = os.environ.get('N_GPU_LAYERS')
# Added custom directory path for CUDA dynamic library
os.add_dll_directory("C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8/bin")
os.add_dll_directory("C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8/extras/CUPTI/lib64")
os.add_dll_directory("C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8/include")
os.add_dll_directory("C:/tools/cuda/bin")
​
match model_type:
case "LlamaCpp":
# Added the "n_gpu_layers" parameter to the function call
llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, callbacks=callbacks, verbose=False, n_gpu_layers=n_gpu_layers)
🔗 Download the modified privateGPT.py file [from here](https://pastes.io/yin0amh1tr)
​
3. Finally, I added the following lines to the ".env" file:
PERSIST_DIRECTORY=db
MODEL_TYPE=LlamaCpp
MODEL_PATH=models\Manticore-13B-GGML\Manticore-13B.ggmlv3.q8_0.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
N_GPU_LAYERS=26
🔗 Download the modified .env file [from here](https://pastes.io/9az8xb8eer)
​
Here is the speed difference between the two: [https://imgsli.com/MTgxMjcz](https://imgsli.com/MTgxMjcz)
​
|Without GPU offloading|With GPU offloading|Speed-up|
|:-|:-|:-|
|Total time: 102066 ms|Total time: 47705 ms|2.14×|
As you can see, the modified version of privateGPT is up to 2x faster than the original version.
​
⚠ If you encounter any problems building the wheel for llama-cpp-python, please follow the instructions below:
**Building wheel instructions for llama-cpp-python:**
1. Make sure you have installed the required Visual Studio 2022 and the necessary tools.
[Visual Studio 2022 Community + Desktop Development tools for C++](https://preview.redd.it/uvscoiq2ks1b1.jpg?width=1261&format=pjpg&auto=webp&s=2495e38f47099678f418b30a1cc24451b368a4b8)
2. Upgrade to the latest NVIDIA drivers: [https://www.nvidia.com/drivers](https://www.nvidia.com/drivers)
3. Install CUDA 12.1: [https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local](https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local)
4. Install cuDNN 8.9.1 for CUDA 11.8: [https://developer.nvidia.com/rdp/cudnn-download](https://developer.nvidia.com/rdp/cudnn-download) -- choose cuDNN Library for Windows (x64) and extract to C:\tools so you have C:\tools\cuda\bin\cudnn64_8.dll
5. Download and install the x64 C++ Redistributable: [https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0](https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0)
6. Download and install the x86 C++ Redistributable: [https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0](https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0)
7. Install TensorFlow: `pip install tensorflow`
8. Install tf-nightly: `pip install tf-nightly`
9. Reboot your computer.
​
⚠ **IMPORTANT:** After you build the wheel successfully, privateGPT needs CUDA 11.8 installed to work properly.
Install CUDA 11.8: [https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local](https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local)
I hope you can get the speed boost from the GPU too. Please let me know your thoughts in the comments.
​
​ | 2023-05-24T15:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/13qo59f/how_to_increase_privategpt_performance_by_up_to_2x/ | JumpingQuickBrownFox | self.LocalLLaMA | 2023-06-11T22:36:54 | 0 | {} | 13qo59f | false | null | t3_13qo59f | /r/LocalLLaMA/comments/13qo59f/how_to_increase_privategpt_performance_by_up_to_2x/ | false | false | 76 | {'enabled': False, 'images': [{'id': 'tWLxL6ibeMKE3n4RExZG-2dQruS8sIu1PTIk6XjJiqM', 'resolutions': [{'height': 35, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?width=108&crop=smart&auto=webp&s=37506b090892f0bdd3ceac769269b961d70f4481', 'width': 108}, {'height': 70, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?width=216&crop=smart&auto=webp&s=8fadae1db1fe79f1d7b1de81ed0236125188336f', 'width': 216}, {'height': 104, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?width=320&crop=smart&auto=webp&s=66c901d41fb9036187c74245adc5e195cc9a9010', 'width': 320}, {'height': 208, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?width=640&crop=smart&auto=webp&s=51472a5cfd790c897e0cdf79f3da73d90959c329', 'width': 640}, {'height': 312, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?width=960&crop=smart&auto=webp&s=b2601764bdd0733e8a5709aa34a5bcc1c1948111', 'width': 960}, {'height': 351, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?width=1080&crop=smart&auto=webp&s=026b25968612f6290306a1fb5b595ac837f7658d', 'width': 1080}], 'source': {'height': 934, 'url': 'https://external-preview.redd.it/4WE5A_uvsQuxzGhLAmABdg7iPO301FFfSeE1WZNqzlo.jpg?auto=webp&s=91a0dfe9de3fbbc60a9a3350b75c0afbc8573a52', 'width': 2868}, 'variants': {}}]} |
|
What a time to be alive! Tell the AI to generate a webpage and it works in line! | 18 | 2023-05-24T15:03:51 | muchCode | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13qo6xt | false | null | t3_13qo6xt | /r/LocalLLaMA/comments/13qo6xt/what_a_time_to_be_alive_tell_the_ai_to_generate_a/ | false | false | 18 | {'enabled': True, 'images': [{'id': 'j6YE-B-JEDNaCrfY5HTeEJq5WyhhllxNMj5tRTX1qmI', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/zy08a1yzns1b1.png?width=108&crop=smart&auto=webp&s=e1fe56e9ce62e20427f393c953a0253b4f8bcc1a', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/zy08a1yzns1b1.png?width=216&crop=smart&auto=webp&s=3403328230afe412d500054384904b593b1524fd', 'width': 216}, {'height': 239, 'url': 'https://preview.redd.it/zy08a1yzns1b1.png?width=320&crop=smart&auto=webp&s=215d8dfe6ab51493c0fa07f15223b7e7cd79e9d3', 'width': 320}, {'height': 479, 'url': 'https://preview.redd.it/zy08a1yzns1b1.png?width=640&crop=smart&auto=webp&s=a34a6c6e70dbafe7c7c3e160aebb770d9e491075', 'width': 640}], 'source': {'height': 620, 'url': 'https://preview.redd.it/zy08a1yzns1b1.png?auto=webp&s=fb076ffa87906c9df72e55d8a45c71182f3b83f1', 'width': 827}, 'variants': {}}]} |
|||
Llama glitch tokens? | 7 | Has anyone found any?
Inspired by this https://www.lesswrong.com/posts/kmWrwtGE9B9hpbgRT/a-search-for-more-chatgpt-gpt-3-5-gpt-4-unspeakable-glitch I just randomly took a token that looked like a glitchy one from this list of Llama tokens https://docs.google.com/spreadsheets/d/1-7-bP78cQ9DqXRPV6h078lVEbR4ojQ7bv7VWobX1nKc/edit#gid=1544081846 and, surprisingly, it seemed to have anomalous results!
"IABot", token 10977 - needs to be in parentheses, tested a bit so far only with Vicuna 13b (on the chat.lmsys.org site and locally with an uncensored 5bit ggml version on koboldcpp) and Alpacino30b (5bit ggml).
Results with this one:
-Both think it's related to urls
-With vic13b the other different result I've seen is it sees this as "Weitze"
https://imgur.com/a/VkAQxS4
Turns out, "IABot" may be referring to the Internet Archive Bot so maybe that makes some sense.
I just thought it was interesting, and maybe this can be a place for more info on Llama glitch tokens.
Edit: "simeq" also seems to have weird effects sometimes.
https://imgur.com/a/kMKlAr3
Update: "simeq" may just be explained by this https://tex.stackexchange.com/questions/40769/how-to-use-simeq-in-equation
Final update for now: "simeq" and "IABot" may be the only notable "weird" tokens I've noticed so far. Combining them may give interesting results: https://imgur.com/a/kx3fZKV (still vicuna 13b, the first answer seen is to me saying "Huh? try again" when I first asked it to spell 'IABot' and it replied "The string 'http://wwwitings' can be spelled out as ''.")
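For anyone who wants to hunt for more candidates systematically rather than by eyeballing the spreadsheet, here's a minimal sketch. Failing an encode/decode round-trip doesn't prove a token is "glitchy", but it's a cheap way to build a shortlist worth poking at by hand; the `decode`/`encode` callables are placeholders for a real tokenizer (e.g. a transformers LLaMA tokenizer).

```python
from typing import Callable, List

def find_glitch_candidates(
    decode: Callable[[List[int]], str],
    encode: Callable[[str], List[int]],
    vocab_size: int,
) -> List[int]:
    """Token ids whose decoded string does not re-encode back to that single id."""
    candidates = []
    for tid in range(vocab_size):
        text = decode([tid])
        if text and encode(text) != [tid]:
            candidates.append(tid)
    return candidates
```

With transformers this would be something like `decode = lambda ids: tok.decode(ids)` and `encode = lambda s: tok.encode(s, add_special_tokens=False)`.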
Do try for anyone interested to confirm | 2023-05-24T15:19:20 | https://www.reddit.com/r/LocalLLaMA/comments/13qoktf/llama_glitch_tokens/ | reduserGf | self.LocalLLaMA | 2023-05-24T21:08:04 | 0 | {} | 13qoktf | false | null | t3_13qoktf | /r/LocalLLaMA/comments/13qoktf/llama_glitch_tokens/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'IvzD4iQYy_c-HtzJKSGBiKBstEDX4S90HkKv_2bf5Zc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?width=108&crop=smart&auto=webp&s=79dc132c93224b5b02f24bf2afd5e4e37939bf89', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?width=216&crop=smart&auto=webp&s=c849edfb552c650656181c178527efa90b209882', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?width=320&crop=smart&auto=webp&s=ae2b296b2a002904dc370a0538731c056519ce59', 'width': 320}, {'height': 344, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?width=640&crop=smart&auto=webp&s=6d7bd926665ec317e4c3bd12892d17693be9cce8', 'width': 640}, {'height': 516, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?width=960&crop=smart&auto=webp&s=dda0a0d415b96e5d56c3ecc8b760d00b88249667', 'width': 960}, {'height': 580, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?width=1080&crop=smart&auto=webp&s=ad459416ce029504735ca578a929392aac59aec2', 'width': 1080}], 'source': {'height': 621, 'url': 'https://external-preview.redd.it/tArGV6dbgbSgFEfe3WPxZ6fqtWs9KAUo9-emHoo4Fm8.jpg?auto=webp&s=cfa23485b75d135e8673c1d5856919508a8c44ac', 'width': 1155}, 'variants': {}}]} |
How do I enable GPU acceleration for ggml models? | 1 | [removed] | 2023-05-24T16:22:36 | https://www.reddit.com/r/LocalLLaMA/comments/13qq6f5/how_do_i_enable_gpu_acceleration_for_ggml_models/ | Famberlight | self.LocalLLaMA | 2023-05-24T16:26:47 | 0 | {} | 13qq6f5 | false | null | t3_13qq6f5 | /r/LocalLLaMA/comments/13qq6f5/how_do_i_enable_gpu_acceleration_for_ggml_models/ | false | false | default | 1 | null |
Testing out 13B models | 24 | So, I was quite bored so I tested many 13B models with a prompt of my own.
**Prompt**: You're an AI assistant. Here's the user's prompt: 'create a python dictionary with the planets of the solar system and their most important moons, if they have any'. Reply to it:
Tested on the base q4_0 GGMLv3 quantization (less RAM-consuming, a little slower and less precise than q4_1)
20 layers offloaded to a GTX 1070 through CUBLAS, the rest on 16GB of RAM and a Ryzen 2600. Visual Studio Code, VLC and Chrome open. Sadly I can't get to test out 30B models just yet.
Some models outputted extra text or descriptions after the dictionary; I tried not to take this into account, as it was not the main goal of the prompt.
I couldn't download more models because HuggingFace returns a 504 error...
​
# Manticore-13B.ggmlv3.q4_0
^(llama\_print\_timings: load time = 28496.72 ms)
^(llama\_print\_timings: sample time = 36.04 ms / 129 runs ( 0.28 ms per token))
^(llama\_print\_timings: prompt eval time = 28496.68 ms / 49 tokens ( 581.56 ms per token))
^(llama\_print\_timings: eval time = 43588.52 ms / 128 runs ( 340.54 ms per token))
^(llama\_print\_timings: total time = 77660.50 ms)
| Planet | Moon(s) |
|---------|----------|
| Mercury | None |
| Venus | None |
| Earth | Moon |
| Mars | Phobos |
| Mars | Deimos |
| Jupiter | Io |
| Jupiter | Europa |
| Jupiter | Ganymede |
| Saturn | Titan |
| Uranus | Titania |
| Neptune | Triton |'
It outputted a Markdown table rather than a Python dictionary.
​
# wizard-vicuna-13B-uncensored-4.0
^(llama\_print\_timings: load time = 34791.87 ms)
^(llama\_print\_timings: sample time = 43.15 ms / 157 runs ( 0.27 ms per token))
^(llama\_print\_timings: prompt eval time = 34791.82 ms / 49 tokens ( 710.04 ms per token))
^(llama\_print\_timings: eval time = 48461.63 ms / 156 runs ( 310.65 ms per token))
^(llama\_print\_timings: total time = 90063.74 ms)
planets = {
"Mercury": ["Phobos", "Deimos"],
"Venus": [],
"Earth": ["Moon"],
"Mars": ["Phobos", "Deimos"],
"Jupiter": ["Io", "Europa", "Ganymede", "Callisto"],
"Saturn": ["Titan", "Hyperion", "Dione", "Rhea", "Iapetus"],
"Uranus": [],
"Neptune": []
}
​
# airoboros-13B.q4_0
^(llama\_print\_timings: load time = 20586.40 ms)
^(llama\_print\_timings: sample time = 112.17 ms / 412 runs ( 0.27 ms per token))
^(llama\_print\_timings: prompt eval time = 20586.35 ms / 49 tokens ( 420.13 ms per token))
^(llama\_print\_timings: eval time = 138007.42 ms / 411 runs ( 335.78 ms per token))
^(llama\_print\_timings: total time = 176563.73 ms)
{'Mercury': {'moon': None},
'Venus': {'moon': None},
'Earth': {'moon': [1) The Moon, 2) Phobos and Deimos (Mars moons), 3) Charon (Pluto moon)],
'Mars': {'moon': [1) Phobos, 2) Deimos, 3) Dione (Saturn moon), 4) Callisto (Jupiter moon), 5) Titan (Saturn moon)],
'Jupiter': {'moon': [1) Io, 2) Europa, 3) Ganymede, 4) Callisto, 5) Himalia (Uranus moon), 6) Metis (Saturn moon)]},
'Saturn': {'moon': [1) Titan, 2) Mimas, 3) Enceladus, 4) Dione, 5) Rhea, 6) Hyperion (Sun moon), 7) Helene (Mars moon), 8) Polydeuces (Saturn moon)]},
'Uranus': {'moon': [1) Miranda, 2) Ariel, 3) Umbriel, 4) Titania, 5) Oberon, 6) Cordelia (Mars moon), 7) Ophelia (Neptune moon), 8) Bianca (Saturn moon)]},
'Neptune': {'moon': [1) Triton, 2) Nereid, 3) Larissa, 4) Proteus, 5) Despina (Mars moon), 6) Galatea (Uranus moon), 7) Thalassa (Saturn moon)]}}
​
# wizard-mega-13B.ggmlv3.q4_0
^(llama\_print\_timings: load time = 31029.52 ms)
^(llama\_print\_timings: sample time = 70.19 ms / 262 runs ( 0.27 ms per token))
^(llama\_print\_timings: prompt eval time = 31029.46 ms / 49 tokens ( 633.25 ms per token))
^(llama\_print\_timings: eval time = 85070.01 ms / 261 runs ( 325.94 ms per token))
^(llama\_print\_timings: total time = 127368.92 ms)
planets = {
"Mercury": {"moon": "Mercury has no known moons."},
"Venus": {"moon": "Venus has no known moons."},
"Earth": {"moon": "The Moon"},
"Mars": {"moon": "Phobos and Deimos"},
"Jupiter": {"moon": "Io, Europa, Ganymede, Callisto"},
"Saturn": {"moon": "Titan, Mimas, Enceladus, Tethys, Dione, Rhea, Iapetus, Hyperion"},
"Uranus": {"moon": "Miranda, Ariel, Umbriel, Titania, Oberon"},
"Neptune": {"moon": "Triton"}
}
This is the best one, though it didn't list Phobos and Deimos as two different moons, lol.
​
# GPT4All-13B-snoozy.ggmlv3.q4_0
^(llama\_print\_timings: load time = 33640.08 ms)
^(llama\_print\_timings: sample time = 13.84 ms / 51 runs ( 0.27 ms per token))
^(llama\_print\_timings: prompt eval time = 33640.03 ms / 49 tokens ( 686.53 ms per token))
^(llama\_print\_timings: eval time = 16336.59 ms / 50 runs ( 326.73 ms per token))
^(llama\_print\_timings: total time = 52194.82 ms)
Sure! I can help you create a Python dictionary that contains information about the planets in our solar system and their most important moons, if they have any. Would you like me to include additional information such as planetary rotation or surface features?
It didn't even create a dictionary :( I continued the chat by passing the chat history to it and replying with a "yes please" to see what would happen.
Understood! I will create a Python dictionary that contains the following information for each planet in our solar system: its name, the number of moons it has, and their names (if applicable). For example, the dictionary entry for Earth would be as follows:
```python
'Earth': {'moon_number': 1, 'moon_names': ['Moon']},
```
I will also include any additional information you requested such as planetary rotation or surface features. Is there anything else I can help with?
It's clearly far behind.
​
# gpt4-x-vicuna-13B.ggmlv3.q4_0
^(llama\_print\_timings: load time = 24338.25 ms)
^(llama\_print\_timings: sample time = 104.17 ms / 388 runs ( 0.27 ms per token))
^(llama\_print\_timings: prompt eval time = 24338.20 ms / 49 tokens ( 496.70 ms per token))
^(llama\_print\_timings: eval time = 132157.93 ms / 387 runs ( 341.49 ms per token))
^(llama\_print\_timings: total time = 173126.20 ms)
planets = {
'Mercury': ['Moon'],
'Venus': ['Moon'],
'Earth': ['Moon', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune'],
'Mars': ['Phobos', 'Deimos'],
'Jupiter': ['Io', 'Europa', 'Ganymede', 'Callisto'],
'Saturn': ['Titan', 'Enceladus', 'Dione', 'Rhea'],
'Uranus': ['Moon', 'Oberon', 'Miranda', 'Ariel', 'Uranus'],
'Neptune': ['Triton']
}
​
# llama-13b.ggmlv3.q4_0
^(llama\_print\_timings: load time = 13184.74 ms)
^(llama\_print\_timings: sample time = 138.48 ms / 512 runs ( 0.27 ms per token))
^(llama\_print\_timings: prompt eval time = 13184.69 ms / 49 tokens ( 269.08 ms per token))
^(llama\_print\_timings: eval time = 212711.05 ms / 511 runs ( 416.26 ms per token))
^(llama\_print\_timings: total time = 248161.27 ms)
Here is your list: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune. All are made of rock.'
\end{code}
The test case that failed is:
\begin{blockquote}
You're an AI assistant. Here's the user's prompt: 'create a python dictionary with the planets of the solar system and their most important moons, if they have any'. Reply to it: 'Here is your list: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune. All are made of rock.'
\end{blockquote}
My code (which fails):
\begin{code}
def createPlanets(string):
planetDict = { 'Mercury' : 'Inside the Sun',
'Venus' : 'inside the Sun',
'Earth' : 'inside the Sun',
'Mars' : 'inside the Sun',
'Jupiter' : 'inside the Sun',
'Saturn' : 'inside the Sun',
'Uranus' : 'inside the Sun',
'Neptune' : 'inside the Sun'}
return planetDict
\end{code}
Here is what I get:
\begin{blockquote}
You're an AI assistant. Here's the user's prompt: 'create a python dictionary with the planets of the solar system and their most important moons, if they have any'. Reply to it: 'Here is your list: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune. All are made of rock.'
\end{blockquote}
How do I fix this?
Answer: Your function is returning a dictionary containing only those 9 strings as keys and the same values (in your case `'inside the Sun'`). Instead you need to modify `planetDict` inside your function. So, you can change your code as follows:
\begin{code}
def createPlanets(string):
planetDict = { 'Mercury' : 'Inside the Sun',
'Venus' : '
I think I forgot to pass the correct stop string but in any case it didn't achieve it.
​
# koala-13B.ggmlv3.q4_0
^(llama\_print\_timings: load time = 61491.52 ms)
^(llama\_print\_timings: sample time = 62.68 ms / 230 runs ( 0.27 ms per token))
^(llama\_print\_timings: prompt eval time = 61491.46 ms / 49 tokens ( 1254.93 ms per token))
^(llama\_print\_timings: eval time = 76202.17 ms / 229 runs ( 332.76 ms per token))
^(llama\_print\_timings: total time = 147727.62 ms)
{'Sun': ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune'],'Mercury': ['Phosphorus', 'Pygmalion'],'Venus': ['Atmosphere', 'Lucy'],'Earth': ['Luna', 'Ganymede', 'Callisto', 'Titan', 'Iapetus'],'Mars': ['Deimos', 'Phobos'],'Jupiter': ['Ambrosia', 'Harpalyce', 'Thalassa'],'Saturn': ['Iapetus', 'Prometheus', 'Helene'],'Uranus': ['Titania', 'Oberon', 'Ariel', 'Miranda', 'Umbriel,', 'Cordelia'],'Neptune': ['Triton']}
​
# stable-vicuna-13B.ggmlv3.q4_0
^(llama\_print\_timings: load time = 25041.86 ms)
^(llama\_print\_timings: sample time = 58.88 ms / 230 runs ( 0.26 ms per token))
^(llama\_print\_timings: prompt eval time = 25041.81 ms / 49 tokens ( 511.06 ms per token))
^(llama\_print\_timings: eval time = 68250.25 ms / 229 runs ( 298.04 ms per token))
^(llama\_print\_timings: total time = 102847.04 ms)
planets = {
'Mercury': ['Phobos', 'Deimos'],
'Venus': ['Aphrodite', 'Cytherea'],
'Earth': ['Moon'],
'Mars': ['Phobos', 'Demios'],
'Jupiter': ['Io', 'Europa', 'Ganymede', 'Callisto'],
'Saturn': ['Titan', 'Dione', 'Rhea', 'Enceladus'],
'Uranus': ['Oberon', 'Miranda', 'Ariel', 'Umbriel'],
'Neptune': ['Triton']
}
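For comparison, here's one factually reasonable reference answer to the test prompt (which moons count as "most important" is of course a judgment call):

```python
# Reference dictionary for the test prompt; the moon selection is subjective.
solar_system = {
    "Mercury": [],  # no moons
    "Venus": [],    # no moons
    "Earth": ["Moon"],
    "Mars": ["Phobos", "Deimos"],
    "Jupiter": ["Io", "Europa", "Ganymede", "Callisto"],
    "Saturn": ["Titan", "Enceladus", "Rhea", "Iapetus"],
    "Uranus": ["Titania", "Oberon", "Miranda", "Ariel", "Umbriel"],
    "Neptune": ["Triton"],
}
print(solar_system["Mars"])  # → ['Phobos', 'Deimos']
```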
Thanks. | 2023-05-24T16:32:43 | https://www.reddit.com/r/LocalLLaMA/comments/13qqfrj/testing_out_13b_models/ | Hopeful_Donut4790 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13qqfrj | false | null | t3_13qqfrj | /r/LocalLLaMA/comments/13qqfrj/testing_out_13b_models/ | false | false | self | 24 | null |
QLoRA: 4-bit finetuning of LLMs is here! With it comes Guanaco, a chatbot on a single GPU, achieving 99% ChatGPT performance on the Vicuna benchmark | 291 | 2023-05-24T17:08:30 | https://twitter.com/Tim_Dettmers/status/1661379354507476994 | dondochaka | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 13qrdj6 | false | {'oembed': {'author_name': 'Tim Dettmers', 'author_url': 'https://twitter.com/Tim_Dettmers', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">QLoRA: 4-bit finetuning of LLMs is here! With it comes Guanaco, a chatbot on a single GPU, achieving 99% ChatGPT performance on the Vicuna benchmark:<br><br>Paper: <a href="https://t.co/J3Xy195kDD">https://t.co/J3Xy195kDD</a><br>Code+Demo: <a href="https://t.co/SP2FsdXAn5">https://t.co/SP2FsdXAn5</a><br>Samples: <a href="https://t.co/q2Nd9cxSrt">https://t.co/q2Nd9cxSrt</a><br>Colab: <a href="https://t.co/Q49m0IlJHD">https://t.co/Q49m0IlJHD</a> <a href="https://t.co/UJcowpfhpH">pic.twitter.com/UJcowpfhpH</a></p>— Tim Dettmers (@Tim_Dettmers) <a href="https://twitter.com/Tim_Dettmers/status/1661379354507476994?ref_src=twsrc%5Etfw">May 24, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Tim_Dettmers/status/1661379354507476994', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_13qrdj6 | /r/LocalLLaMA/comments/13qrdj6/qlora_4bit_finetuning_of_llms_is_here_with_it/ | false | false | 291 | {'enabled': False, 'images': [{'id': 'o7agSeX3ZkMjEjsxVRrvjlUK4bfnbkfpOkISXNJZ51c', 'resolutions': [{'height': 119, 'url': 'https://external-preview.redd.it/RNk7fQ5q3aMFjZuWs3yLqRMirLJAW4S4RL9oc7CZHH8.jpg?width=108&crop=smart&auto=webp&s=dfcb37f7fe25da2eb15d491101af4ea80dc46019', 'width': 108}, {'height': 238, 'url': 
'https://external-preview.redd.it/RNk7fQ5q3aMFjZuWs3yLqRMirLJAW4S4RL9oc7CZHH8.jpg?width=216&crop=smart&auto=webp&s=c2f8752fb723552ed8c17f051333b8d04164670c', 'width': 216}], 'source': {'height': 348, 'url': 'https://external-preview.redd.it/RNk7fQ5q3aMFjZuWs3yLqRMirLJAW4S4RL9oc7CZHH8.jpg?auto=webp&s=6be7afa9802696249e7ca279244fb3774f0eb05e', 'width': 315}, 'variants': {}}]} |
||
Has the LLaMA licence been updated? | 3 | Apologies if this has been asked and answered before. When I googled the LLaMA licence I found this:
https://github.com/facebookresearch/llama/blob/main/LICENSE
This indicates that LLaMA (and therefore it's fine tuned varients) can be used commercially, whereas most of the discussion I have seen suggests that LLaMA can only be used for research.
Which is the truth? Is there a more official place to look for the licencing? | 2023-05-24T17:58:20 | https://www.reddit.com/r/LocalLLaMA/comments/13qsp3i/has_the_llama_licence_been_updated/ | Bryan-Ferry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13qsp3i | false | null | t3_13qsp3i | /r/LocalLLaMA/comments/13qsp3i/has_the_llama_licence_been_updated/ | false | false | self | 3 | null |
True story of my life in the last weeks | 150 | 2023-05-24T18:05:27 | Evening_Ad6637 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13qswm6 | false | null | t3_13qswm6 | /r/LocalLLaMA/comments/13qswm6/true_story_of_my_life_in_the_last_weeks/ | false | false | 150 | {'enabled': True, 'images': [{'id': 'VKB1Xc3iTc96RWAjEyDS3DdDNVz2QsQURT8fZLMN7cg', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/zkkjizntjt1b1.png?width=108&crop=smart&auto=webp&s=fff311e9779810e38b1951b9509bbb9e38c84a75', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/zkkjizntjt1b1.png?width=216&crop=smart&auto=webp&s=72e2417669621b8447c4b112c9d2fbe4d66b6715', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/zkkjizntjt1b1.png?width=320&crop=smart&auto=webp&s=4b1bca13883bf7fbd0311ddf990bac0e4981ac0e', 'width': 320}], 'source': {'height': 1777, 'url': 'https://preview.redd.it/zkkjizntjt1b1.png?auto=webp&s=417cce3ab92b9590b75e24c343b2870154f9ee67', 'width': 484}, 'variants': {}}]} |
|||
Sharing my project: a desktop app for finding, downloading, and running LLaMa models | 10 | [removed] | 2023-05-24T18:25:10 | [deleted] | 2023-05-26T21:57:06 | 0 | {} | 13qtfqx | false | null | t3_13qtfqx | /r/LocalLLaMA/comments/13qtfqx/sharing_my_project_a_desktop_app_for_finding/ | false | false | default | 10 | null |
||
Compression Aware prompting for quantized models | 6 | 2023-05-24T21:54:38 | https://arxiv.org/pdf/2305.11186v1.pdf | RMCPhoto | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 13qz0qc | false | null | t3_13qz0qc | /r/LocalLLaMA/comments/13qz0qc/compression_aware_prompting_for_quantized_models/ | false | false | default | 6 | null |
|
Anyone working on linking local Ai with Home Assistant? | 7 | Title. To be clear I am in no way a coder, so this is totally me asking for someone else to do it. :p
But imagine how cool it would be, especially with the AI voice plug-ins. It would be like a local Alexa that is actually useful. I imagine there could be some sort of custom prompt explaining how the model should behave, as well as a list of all of the devices it can control.
I'm sure it will come out eventually but based on my somewhat limited knowledge of ai models and extremely limited coding knowledge I think it's possible today. It might even be possible to link in stable diffusion, although that's a whole other thing. | 2023-05-24T23:15:41 | https://www.reddit.com/r/LocalLLaMA/comments/13r15j3/anyone_working_on_linking_local_ai_with_home/ | TheSilentFire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r15j3 | false | null | t3_13r15j3 | /r/LocalLLaMA/comments/13r15j3/anyone_working_on_linking_local_ai_with_home/ | false | false | self | 7 | null |
Introducing PB reddit 7B - a lora trained on financial subreddits | 47 | 2023-05-25T00:21:32 | https://huggingface.co/winddude/pb_lora_7b_v0.1 | wind_dude | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 13r2oh3 | false | null | t3_13r2oh3 | /r/LocalLLaMA/comments/13r2oh3/introducing_pb_reddit_7b_a_lora_trained_on/ | false | false | 47 | {'enabled': False, 'images': [{'id': 'K17jxhEiS3GLm_8LeCSvyhVjkzErMe0Bfe1j1E2_NhM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=108&crop=smart&auto=webp&s=4fd6e56f2fba3081430c83cf277da90a31612fa6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=216&crop=smart&auto=webp&s=f327386bbcf3aa02f6f4085bf69757e4f51d8368', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=320&crop=smart&auto=webp&s=a2290d3b6e5f929272e2bb4d920a5ae0080b9a07', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=640&crop=smart&auto=webp&s=d531f39e4cbe0f148edb4b867a563bc46201f720', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=960&crop=smart&auto=webp&s=21392cf6832b3843464a8113b337fc7607df7168', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?width=1080&crop=smart&auto=webp&s=4639fda70e4f207ccbb780c566ecde06e92bf87b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/B2MlJJRUDiICgoQh4MHyZrNPR9f3auxjdRqfIafeMUw.jpg?auto=webp&s=794e674e887d98d624728292908e45bfa4d87324', 'width': 1200}, 'variants': {}}]} |
||
Intel Vs AMD performance | 7 | In a thread about tokens/sec performance in this sub I read a comment by someone that noticed that all the better performing systems had Intel CPUs. So I went to a gpu cloud and tested out various systems with some of the smaller HF models using oobabooga, all being headless Linux machines. What I found was that I couldn't observe a significant difference between less and more powerful GPUs, which I thought was already quite strange.
THE ONE thing that I could extract from that experiment is that the systems that had 13th gen Intel CPUs ( 5k, 7k, 9k all performed similarly, but the 5k seemed to have a slight edge ) all performed a lot better than all the rest of the systems ( including other Intel CPUs ).
With this exact CPU series less powerful GPUs ( e.g. 3080 ) performed much better than more powerful ones ( e.g. 4090 ). The difference was as much as 20 vs 50 t/s on the quantised 7b models.
Can anybody provide insight into wth this is happening? As an aside question: Could anybody wanting to share his ways get speeds significantly beyond this ( e.g. 70 or 100 )? | 2023-05-25T00:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/13r3ayc/intel_vs_amd_performance/ | footurist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r3ayc | false | null | t3_13r3ayc | /r/LocalLLaMA/comments/13r3ayc/intel_vs_amd_performance/ | false | false | self | 7 | null |
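When comparing setups like this, it helps to measure throughput the same way on every machine. A minimal sketch (the `generate` callable is a placeholder for whatever backend is being benchmarked):

```python
import time

def tokens_per_second(generate, prompt, n_tokens):
    """Time one generation call and return throughput in tokens/sec."""
    start = time.perf_counter()
    generate(prompt, n_tokens)  # placeholder: call your actual backend here
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

# Dummy backend so the sketch runs as-is:
rate = tokens_per_second(lambda p, n: time.sleep(0.01), "Hello", 8)
```

For a fair comparison, run it a few times per machine with the same model, quantization, and prompt, and discard the first (warm-up) run.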
Intel 13th gen performance | 3 | [removed] | 2023-05-25T01:33:46 | https://www.reddit.com/r/LocalLLaMA/comments/13r48uc/intel_13th_gen_performance/ | footurist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r48uc | false | null | t3_13r48uc | /r/LocalLLaMA/comments/13r48uc/intel_13th_gen_performance/ | false | false | default | 3 | null |
Which models should/could I download. M2 Mac mini 16GB Ram 500GB SSD | 0 | [deleted] | 2023-05-25T01:48:53 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13r4kee | false | null | t3_13r4kee | /r/LocalLLaMA/comments/13r4kee/which_models_shouldcould_i_download_m2_mac_mini/ | false | false | default | 0 | null |
||
How well would a big quad-core server work for running LLMs? | 3 | I want to experiment with locally running an LLM (considering WizardLM-30B-Uncensored). I don't really have the right hardware for it, but I do have an R820 with 4x E5-4650 processors and 720GB of ram.
I know running a model with CPU/RAM is horribly inefficient, but just how slow would my R820 be at generating text with such a model? Has anybody tried anything similar? I don't care too much about energy usage, just how long it takes to generate text. | 2023-05-25T02:01:01 | https://www.reddit.com/r/LocalLLaMA/comments/13r4tgq/how_well_would_a_big_quadcore_server_work_for/ | ruapirate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r4tgq | false | null | t3_13r4tgq | /r/LocalLLaMA/comments/13r4tgq/how_well_would_a_big_quadcore_server_work_for/ | false | false | self | 3 | null |
Okay so it's now surprisingly efficient to fine tune with QLora.. how can we take advantage of this? | 20 | What are people's gut instincts on how to best take advantage? And what kind are people already making? Would love to learn how these easy fine tuning can impact our daily usage | 2023-05-25T03:42:08 | https://www.reddit.com/r/LocalLLaMA/comments/13r6xh9/okay_so_its_now_surprisingly_efficient_to_fine/ | noneabove1182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r6xh9 | false | null | t3_13r6xh9 | /r/LocalLLaMA/comments/13r6xh9/okay_so_its_now_surprisingly_efficient_to_fine/ | false | false | self | 20 | null |
GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ | 49 | The difference from QLoRA is that GPTQ is used instead of NF4 (Normal Float4) + DQ (Double Quantization) for model quantization.
The advantage is that you can expect better performance because it provides better quantization than conventional bitsandbytes.
The downside is that it is a one-shot quantization methodology, so it is more inconvenient than bitsandbytes, and unlike bitsandbytes, it is not universal.
I'm still experimenting, but it seems to work.
At least, I hope it can offer more options for people using LoRA.
[https://github.com/qwopqwop200/gptqlora/tree/main](https://github.com/qwopqwop200/gptqlora/tree/main)
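As a toy illustration of what 4-bit quantization does to weights (this is plain absmax round-to-nearest on a list of floats, not the actual NF4 or GPTQ algorithms, which are considerably smarter):

```python
def quantize_4bit(weights):
    """Toy absmax 4-bit quantization: map each weight to one of 16 integer levels.
    Assumes at least one nonzero weight."""
    scale = max(abs(w) for w in weights) / 7.0  # signed int4 range is [-8, 7]
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale for v in q]

w = [0.5, -1.2, 0.03, 0.7]
q, s = quantize_4bit(w)
print(q)  # → [3, -7, 0, 4]
```

The reconstruction error of round-to-nearest is bounded by half a quantization step (scale / 2); GPTQ's one-shot calibration and NF4's non-uniform levels both exist to shrink that error where it matters.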
https://preview.redd.it/wblq0glrlw1b1.png?width=810&format=png&auto=webp&s=734d6786bc2fff06fa46f2d8d7da0e6dae9ca121 | 2023-05-25T04:22:14 | https://www.reddit.com/r/LocalLLaMA/comments/13r7pzg/gptqlora_efficient_finetuning_of_quantized_llms/ | CowZestyclose3193 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r7pzg | false | null | t3_13r7pzg | /r/LocalLLaMA/comments/13r7pzg/gptqlora_efficient_finetuning_of_quantized_llms/ | false | false | 49 | {'enabled': False, 'images': [{'id': 'xmqlWbAhUstrTd79Q8ac3zMYFtwdGQ3nhBmv0lIqcoI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?width=108&crop=smart&auto=webp&s=13e4a55a5507f496ea6b420c0fd09d59f458b9dd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?width=216&crop=smart&auto=webp&s=c0b63341fa6476aa47423c12bb5164b8eebabc28', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?width=320&crop=smart&auto=webp&s=5f3611ce5e4634bdaf47b74413b99b3015960c9e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?width=640&crop=smart&auto=webp&s=cb0b4b8b051a3026f85730025c4132a33e42614e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?width=960&crop=smart&auto=webp&s=47f9e9b8a33ad055705652b99692069a51bec831', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?width=1080&crop=smart&auto=webp&s=0b15d6c64b32c9af1e7f04cc2c377049c66aa13c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HdNrB0O_wkeyRqXtuJIArlTRtafsFOo1DA3S7yw2fCs.jpg?auto=webp&s=deed3fd004c409ed140aba1fceed1b0357efe12c', 'width': 1200}, 'variants': {}}]} |
|
Claude can read 75k words in seconds | 7 | [deleted] | 2023-05-25T05:05:05 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13r8i5j | false | null | t3_13r8i5j | /r/LocalLLaMA/comments/13r8i5j/claude_can_read_75k_words_in_seconds/ | false | false | default | 7 | null |
||
Looking for advice to build a home setup | 3 | I currently do not have a setup at home strong enough to train models.
I would like to get this setup, before the prices for these resources explode even more.
I'm willing to spend $1000-2000 on this.

Can you please outline the do's and don'ts of purchasing a home rig?

What are currently good GPUs? Which specs of a GPU matter more than others? From what I hear, VRAM size is the most important?
What do I need to watch out for besides the GPU?
Would also be glad if you could point me to any blog posts that discuss this topic.
Many thanks. | 2023-05-25T05:43:27 | https://www.reddit.com/r/LocalLLaMA/comments/13r96jp/looking_for_advice_to_build_a_home_setup/ | sismograph | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13r96jp | false | null | t3_13r96jp | /r/LocalLLaMA/comments/13r96jp/looking_for_advice_to_build_a_home_setup/ | false | false | self | 3 | null |
How I feel after two days | 2 | [removed] | 2023-05-25T06:57:58 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13rafzj | false | null | t3_13rafzj | /r/LocalLLaMA/comments/13rafzj/how_i_feel_after_two_days/ | false | false | default | 2 | null |
||
All locally hosted discord chatbot? Text+Image+Speech? | 1 | [removed] | 2023-05-25T07:56:39 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13rbedy | false | null | t3_13rbedy | /r/LocalLLaMA/comments/13rbedy/all_locally_hosted_discord_chatbot_textimagespeech/ | false | false | default | 1 | null |
||
Engaging in conversation with Samantha AI (WizardLM-30B-Uncensored-GPTQ,Oobabooga API running in the cloud). Without a doubt, these models beat OpenAI "As a AI Language model" models :D | 37 | 2023-05-25T10:11:46 | https://v.redd.it/7wcfysj1cy1b1 | No_Wheel_9336 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13rdple | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/7wcfysj1cy1b1/DASHPlaylist.mpd?a=1695022491%2CMTk1OWJiYmU4Nzg2OWIxZTQ0YjA0ZGE3YTY5MDc0ZmFmZmRlNTNlMGQwMjk4YzA3ODcwOTczNzhmZWE0MTcyYQ%3D%3D&v=1&f=sd', 'duration': 63, 'fallback_url': 'https://v.redd.it/7wcfysj1cy1b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/7wcfysj1cy1b1/HLSPlaylist.m3u8?a=1695022491%2CYjYxOGFjNDUxMjY4YTUwZjI0MmM3NDBjYmViNGM0ZWMxMDRjZGRlODA2MjQ4ZDJmNGM0ZTM3NGJkNTBkMmZiNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7wcfysj1cy1b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 608}} | t3_13rdple | /r/LocalLLaMA/comments/13rdple/engaging_in_conversation_with_samantha_ai/ | false | false | default | 37 | null |
|
Huggingface 🤗 QLoRA docs | 41 | HF has release library, docs and also Google Colab for QLoRA
Everything goes too fast with 4bit quantization | 2023-05-25T10:13:35 | https://huggingface.co/blog/4bit-transformers-bitsandbytes | Xnohat | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 13rdqqy | false | null | t3_13rdqqy | /r/LocalLLaMA/comments/13rdqqy/huggingface_qlora_docs/ | false | false | 41 | {'enabled': False, 'images': [{'id': '5ors3IzSd91LDWo2yC2yuTxbj5satwsZQHIPrP5aNIo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=108&crop=smart&auto=webp&s=ffc336c656133c8a8bd641e31b75fb1aa11f8621', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=216&crop=smart&auto=webp&s=b9ff564f2023e86473ce0e652c89c051f6ff1f0d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=320&crop=smart&auto=webp&s=7f128fbeaaa0215fa9d56fe951360b6816d05d15', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=640&crop=smart&auto=webp&s=11de21cb0d1e5e65061595bbdc3ee0e7bf363013', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=960&crop=smart&auto=webp&s=aa940f51838964b78f0e74d4a8d185daa8b66258', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=1080&crop=smart&auto=webp&s=b45d46bcf6025c8ae8b913dd314011b331b2f9d0', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?auto=webp&s=29860490a90daf517877704aef67f4db4a6ebbc9', 'width': 1300}, 'variants': {}}]} |
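For anyone who just wants the headline from that post: the blog's 4-bit loading path boils down to a quantization config like the one below. This is a configuration fragment, not a full script — it needs transformers, bitsandbytes, and a GPU to actually load a model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # the NF4 data type from the QLoRA paper
    bnb_4bit_use_double_quant=True,      # quantize the quantization constants too
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
```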
|
Can someone explain why there isn't a good interface for the oobabooga api in langchain? | 55 | I have to admit, I'm quite overwhelmed by all the different model types and quant stuff. Everywhere I look, it seems like everybody is doing things differently and using completely different approaches to models.
Let's see, there's: llama.cpp, llama.cpp with gpu (sorta if you can figure it out I guess), autogptq, gptq triton, gptq old cuda, and Hugging Face pipelines. Then there's GGML (but three versions with breaking changes), GPTQ models, GPTJ?, HF models, .bin, .safetensors, act-order and no-act-order. Some of these quantized models will work with cuda, some won't apparently. On top of that you've got all these different models with different interface templates? They have different stop characters and different underlying prompt templates which don't seem to be documented anywhere? Wizard is different than llama is slightly different than alpaca is different than starcoder, and on and on and on...
oobabooga seems to be the only unified platform that is trying to at least package it all together so you can just run a model, but even then I'm struggling to have anything be as simple as download and run. To me, it honestly feels completely insane what's going on out there. It feels like you need to have a PhD to really play with this stuff in any meaningful capacity and stay up to date.
Which is why I don't understand: if oobabooga at least has an API and is one of the few systems that can load models on the GPU, why isn't there an interface for it in langchain? Why are all of these tutorials jumping through hoops to use llama.cpp or whatever when all the model loading can be handled by oobabooga? I'm really wondering if someone can help explain to me what's going on out there because I've been really struggling to actually get to work with this stuff. I've got a 3090ti and I'm struggling to find a consistent way to load a GPTQ model (or any model) on the gpu outside of oobabooga and interface with it using langchain, and I don't understand how to use the oobabooga api inside of langchain.
Just a little frustrated with all this. Would love to hear some thoughts from people out there who are successful with getting all this to work. | 2023-05-25T11:40:12 | https://www.reddit.com/r/LocalLLaMA/comments/13rffud/can_someone_explain_why_there_isnt_a_good/ | UnoriginalScreenName | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rffud | false | null | t3_13rffud | /r/LocalLLaMA/comments/13rffud/can_someone_explain_why_there_isnt_a_good/ | false | false | self | 55 | null |
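For what it's worth, the blocking API that oobabooga's `--api` flag exposes is just JSON over HTTP, so bridging it into langchain is mostly a matter of writing a thin wrapper yourself. The sketch below assumes the mid-2023 `/api/v1/generate` endpoint and its field names (`prompt`, `max_new_tokens`, `results[0].text`); the schema has changed between oobabooga releases, so verify against your server before relying on it.

```python
import json
import urllib.request

def build_payload(prompt: str, max_new_tokens: int = 200,
                  temperature: float = 0.7, stopping_strings=None) -> dict:
    # Field names follow the oobabooga --api JSON schema as of mid-2023;
    # check your server version, as the schema has changed over time.
    return {
        "prompt": prompt,
        "max_new_tokens": max_new_tokens,
        "temperature": temperature,
        "stopping_strings": stopping_strings or [],
    }

def generate(prompt: str, host: str = "http://127.0.0.1:5000", **kwargs) -> str:
    # Blocking call against a locally running oobabooga --api server.
    req = urllib.request.Request(
        f"{host}/api/v1/generate",
        data=json.dumps(build_payload(prompt, **kwargs)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["results"][0]["text"]
```

Wrapping `generate` in a langchain custom `LLM` subclass (implement `_call`) is then only a few more lines — but the prompt template and stop strings still have to match whatever model oobabooga has loaded, which is the part nobody documents.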
What would be sensible settings to finetune a model on 24GB of VRAM? | 20 | I want to finetune a LLM to teach it new content in my field of work. In the end, it should be a chatbot to solve technical problems.
There are many parameters that seem to influence VRAM usage during training:
- model size
- lora rank
- micro batch size
- cutoff length
What would be a sensible option to start? I have a 3090. | 2023-05-25T12:16:07 | https://www.reddit.com/r/LocalLLaMA/comments/13rg6y9/what_would_be_sensible_settings_to_finetune_a/ | IlIllIlllIlllIllll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rg6y9 | false | null | t3_13rg6y9 | /r/LocalLLaMA/comments/13rg6y9/what_would_be_sensible_settings_to_finetune_a/ | false | false | self | 20 | null |
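One way to build intuition: LoRA rank mostly determines how many parameters become trainable, which is tiny next to the frozen base model, so micro batch size and cutoff length (activations scale with batch × sequence length) usually dominate VRAM in practice. A minimal sketch of how rank translates into trainable parameters, assuming LLaMA-13B-ish dimensions (hidden size 5120, 40 layers) and LoRA applied to two square projection matrices per layer — both assumptions, adjust for your model and target modules:

```python
def lora_trainable_params(hidden_size: int, n_layers: int, rank: int,
                          n_target_matrices: int = 2) -> int:
    # Each adapted (d x d) weight matrix gains two low-rank factors,
    # A (d x r) and B (r x d): 2*d*r trainable params per matrix.
    per_matrix = 2 * hidden_size * rank
    return per_matrix * n_target_matrices * n_layers

# Assumed LLaMA-13B-ish dims: hidden 5120, 40 layers, targeting q_proj + v_proj
for r in (4, 8, 16, 64):
    n = lora_trainable_params(5120, 40, r)
    print(f"rank {r}: {n / 1e6:.1f} M trainable params")
```

Trainable params grow linearly with rank (rank 8 gives ~6.6M here), so doubling rank is cheap compared to doubling batch size or cutoff length. Starting points people commonly report for a 3090 are rank 8–16, micro batch size 1–2 with gradient accumulation, and cutoff length 256–512 — rough numbers only, and they shift a lot depending on model size and whether you load the base model in 4-bit (QLoRA).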
Are weights copyrightable? | 6 | Are weights, which were created by AI and not humans, copyrightable at all? | 2023-05-25T12:22:39 | https://www.reddit.com/r/LocalLLaMA/comments/13rgc0a/are_weights_copyrightable/ | Bendaluk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13rgc0a | false | null | t3_13rgc0a | /r/LocalLLaMA/comments/13rgc0a/are_weights_copyrightable/ | false | false | self | 6 | null |