title: stringlengths 1-300
score: int64 0-8.54k
selftext: stringlengths 0-40k
created: timestamp[ns]
url: stringlengths 0-780
author: stringlengths 3-20
domain: stringlengths 0-82
edited: timestamp[ns]
gilded: int64 0-2
gildings: stringclasses (7 values)
id: stringlengths 7-7
locked: bool (2 classes)
media: stringlengths 646-1.8k
name: stringlengths 10-10
permalink: stringlengths 33-82
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: stringlengths 4-213
ups: int64 0-8.54k
preview: stringlengths 301-5.01k
Red Pajamas can be run on CPU with a patched GGML: https://github.com/ggerganov/ggml/pull/134 https://huggingface.co/keldenl/ The Windows version needs ggml_time_init() added to main() and must be compiled with cmake -G 'Unix Makefiles'. aligned_alloc() may also be needed if gcc is too old.
13
2023-05-27T14:04:57
https://i.redd.it/ino7o9b49f2b1.png
NancyAurum
i.redd.it
1970-01-01T00:00:00
0
{}
13t8dtn
false
null
t3_13t8dtn
/r/LocalLLaMA/comments/13t8dtn/red_pajamas_could_be_run_on_cpu_with_a_patched/
false
false
https://a.thumbs.redditm…lKl7x12US060.jpg
13
{'enabled': True, 'images': [{'id': 'vUP2r2mDrkdlIjeW3QHC6Cc-w1PrTJhq_uVpBdJY5Kg', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?width=108&crop=smart&auto=webp&s=ba50db91972bae499dbb316ce3d1adb89f7ee01c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?width=216&crop=smart&auto=webp&s=a187ae67e7879045b7d9951fcb0377b055511b57', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?width=320&crop=smart&auto=webp&s=95b63bf47daac53bb7da79c001fe0710ecfe46f5', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?width=640&crop=smart&auto=webp&s=e5967aa931fb9d5382c7f1fd3c94e3b188bd14ab', 'width': 640}, {'height': 538, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?width=960&crop=smart&auto=webp&s=9936ac7f76619c42c5467b6bdd7247f75f0ed0b7', 'width': 960}, {'height': 606, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?width=1080&crop=smart&auto=webp&s=e7ad57be64d0677a4da58b3d50c6facc8fe23084', 'width': 1080}], 'source': {'height': 1063, 'url': 'https://preview.redd.it/ino7o9b49f2b1.png?auto=webp&s=53e71853fc204fcb64b73c648eb6457a6f3b842c', 'width': 1894}, 'variants': {}}]}
Official WizardLM-13B model trained with 250k evolved instructions!
108
* Today, the WizardLM team has released the **official WizardLM-13B** model, trained with **250k** evolved instructions (from ShareGPT).
* The project repo: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* Please download its delta model at [WizardLM/WizardLM-13B-1.0](https://huggingface.co/WizardLM/WizardLM-13B-1.0)

**NOTE:** **WizardLM-13B-1.0** and **WizardLM-7B** use different prompts at the beginning of the conversation:

1. For **WizardLM-13B-1.0**, the prompt should be as follows: "***A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hello, who are you? ASSISTANT:***"
2. For **WizardLM-7B**, the prompt should be as follows: "***{instruction}\\n\\n### Response:***"

## GPT-4 automatic evaluation

They adopt the automatic evaluation framework based on GPT-4, proposed by FastChat, to assess the performance of chatbot models. As shown in the following figure, WizardLM-13B achieved better results than Vicuna-13B.

https://preview.redd.it/j26gd3p9sd2b1.png?width=2194&format=png&auto=webp&s=f852cb037293cb4305fa4866e47d91ba3b03e327

## WizardLM-13B performance on different skills

The following figure compares the skills of WizardLM-13B and ChatGPT on the Evol-Instruct test set. The results indicate that WizardLM-13B achieves 89.1% of ChatGPT's performance on average, matching or exceeding ChatGPT on 10 skills and reaching more than 90% of its capacity on 22 skills.

https://preview.redd.it/bmj08oyasd2b1.png?width=2194&format=png&auto=webp&s=d439efc67b42b1a650a36c196e67ec02ccc90ff7
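For reference, the two prompt formats above can be assembled with a couple of helper functions. This is only an illustrative sketch; the function names are mine, not part of the WizardLM release:

```python
def wizardlm_13b_prompt(user_message: str) -> str:
    """Build the conversation-style prompt used by WizardLM-13B-1.0."""
    system = ("A chat between a curious user and an artificial intelligence "
              "assistant. The assistant gives helpful, detailed, and polite "
              "answers to the user's questions.")
    return f"{system} USER: {user_message} ASSISTANT:"


def wizardlm_7b_prompt(instruction: str) -> str:
    """Build the instruction-style prompt used by WizardLM-7B."""
    return f"{instruction}\n\n### Response:"
```

Note the `\\n\\n` in the post is Reddit-escaped; the actual format uses two literal newlines before `### Response:`.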
2023-05-27T14:05:57
https://www.reddit.com/r/LocalLLaMA/comments/13t8elc/official_wizardlm13b_model_trained_with_250k/
Worth-Barnacle-7539
self.LocalLLaMA
2023-05-28T07:56:12
0
{}
13t8elc
false
null
t3_13t8elc
/r/LocalLLaMA/comments/13t8elc/official_wizardlm13b_model_trained_with_250k/
false
false
https://b.thumbs.redditm…Lo_eouWBlqyA.jpg
108
{'enabled': False, 'images': [{'id': 'GaTxB_P5EuNkmYpXmquBVlsnQJ_dw4z7ZEtxyVKY_Ag', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?width=108&crop=smart&auto=webp&s=4ce376cbce4ab8d6f6b263dca2b49b5549aa3c3e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?width=216&crop=smart&auto=webp&s=12d0e1c3cc79399b52fa47979bf6e3feaedb0972', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?width=320&crop=smart&auto=webp&s=824578d4d95b132d56aa2385c74e9973d8eab702', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?width=640&crop=smart&auto=webp&s=62a52b2b4cb4f4e5bee8ffb123b903d45d394634', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?width=960&crop=smart&auto=webp&s=38221f293d4084b087d363833512d9b6257fb17d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?width=1080&crop=smart&auto=webp&s=d10bd803a779909e4559a8d9afd466a6b66f17d8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YEQ7L0sVFygstA8zodgIk370seyBscALoO9zjcQ5Qoc.jpg?auto=webp&s=70027cfd1085ef303822ff97baddb167bda70659', 'width': 1200}, 'variants': {}}]}
Is it possible to combine Radeon+nVidia GPUs at the same time for inference?
1
[removed]
2023-05-27T15:03:40
[deleted]
2023-05-27T15:18:25
0
{}
13t9r7q
false
null
t3_13t9r7q
/r/LocalLLaMA/comments/13t9r7q/is_it_possible_to_combine_radeonnvidia_gpus_at/
false
false
default
1
null
Is it possible to combine Radeon+nVidia GPUs at the same time for inference?
1
[removed]
2023-05-27T15:19:44
https://www.reddit.com/r/LocalLLaMA/comments/13ta4ov/is_it_possible_to_combine_radeonnvidia_gpus_at/
nodating
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ta4ov
false
null
t3_13ta4ov
/r/LocalLLaMA/comments/13ta4ov/is_it_possible_to_combine_radeonnvidia_gpus_at/
false
false
default
1
null
Using Llama for private data/knowledge base
26
This space is exploding at tremendous speed, with newer models coming out almost daily. Although this is really groundbreaking, my question is more about practical use cases. For instance, my use case is to use these models (for now I am using Vicuna) to create a chatbot over my own private knowledge base. For simplicity, let's assume I need to create a chatbot that is up to date with the latest news. This is a two-pronged problem: the model should have "knowledge" of all the news to date, and it should be able to "update" itself on a daily basis.

After experimenting, I see two ways of going about it. One is fine-tuning Vicuna on all this data and then updating it periodically. I have concluded that this is impractical, but I would love to hear from the community if there is a way to use this option practically.

The option I have opted for is to:

1. Extract embeddings of these documents (let's say using sentence transformers).
2. Store them in a vector store.
3. Retrieve relevant documents based on user input (semantic similarity).
4. Pass the user's input along with the docs as additional context.
5. Use prompt engineering to try to keep the chatbot from diluting answers with its pre-existing knowledge.

Although this approach is scalable, one of the biggest challenges I face is the limit on input context/prompt size. Most models limit it to around 2048 tokens. Let's say an input returns multiple documents, each longer than 2048 tokens: how do I then use this approach? How do I use all these documents as a single definite context? What options are available for this kind of use case? Are there better models with a much larger context limit? How have people in this community managed to leverage this or any other approach for custom datasets? From a hardware perspective, let's assume we can use 2xA40 or 1xA100 GPUs.

I will keep updating this thread with relevant/useful comments from the community, as I feel a lot of people have this question but there is no good answer/approach (at least I couldn't find one).

Thanks!
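A minimal, stdlib-only sketch of the retrieval steps above, including one common answer to the context-limit problem: split oversized documents into chunks and pack only the best-scoring chunks into a fixed budget. The bag-of-words "embedding" here is just a stand-in for a real sentence-transformers model, and all names are illustrative:

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call a
    # sentence-transformers model here.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def chunk(doc: str, max_words: int = 100) -> list[str]:
    # Split documents longer than the model context into pieces.
    words = doc.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]


def build_context(query: str, docs: list[str],
                  budget_words: int = 300) -> str:
    # Chunk, "embed", rank by similarity to the query, then pack the
    # best chunks into the prompt, skipping any that overflow the
    # budget (a crude proxy for the 2048-token limit).
    pieces = [c for d in docs for c in chunk(d)]
    q = embed(query)
    pieces.sort(key=lambda c: cosine(q, embed(c)), reverse=True)
    picked, used = [], 0
    for c in pieces:
        n = len(c.split())
        if used + n > budget_words:
            continue
        picked.append(c)
        used += n
    return "\n---\n".join(picked)
```

In practice the word budget would be replaced with a real token count from the model's tokenizer, and the chunk ranking served from a vector store rather than recomputed per query.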
2023-05-27T16:00:55
https://www.reddit.com/r/LocalLLaMA/comments/13tb3n6/using_llama_for_private_dataknowledge_base/
OpportunityProper252
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tb3n6
false
null
t3_13tb3n6
/r/LocalLLaMA/comments/13tb3n6/using_llama_for_private_dataknowledge_base/
false
false
self
26
null
Local LLM to learn, explore and use for commercial purpose
8
I work in a non-AI role at a software organization and have some understanding of AI/ML concepts. For my own learning and to do a certain POC (proof of concept), I am thinking of running a local LLM on my laptop. How can I go about it given the following requirements?

1. Hardware constraint: it should run on a personal laptop.
2. Privacy and security: initially I will use dummy/fabricated data, but if the POC works well I may use actual data, so privacy and security will become important.
3. Open source for commercial use: I cannot use the OpenAI GPT API, since they store data for 30 days, which doesn't inspire confidence on privacy. I cannot use LLaMA, since it is not licensed for commercial use, and if my POC is successful it will be used commercially.
4. Learning (optional): instead of blindly following instructions to get an LLM running on my laptop as a black box, it would be more interesting to learn how the transformer architecture works internally.

I am sorry if these questions are too basic. You may point me to some good resources to understand these things better.
2023-05-27T16:08:53
https://www.reddit.com/r/LocalLLaMA/comments/13tbb0o/local_llm_to_learn_explore_and_use_for_commercial/
meet20hal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tbb0o
false
null
t3_13tbb0o
/r/LocalLLaMA/comments/13tbb0o/local_llm_to_learn_explore_and_use_for_commercial/
false
false
self
8
null
LORA question
6
I’ve wanted to train a LoRA for a while, but I haven’t been sure how it works in terms of the data you give it. Say you want to give it a bunch of poems that cover a topic: can you just give it raw poems, or do you also have to give it, for example, a list of the topics each poem is about? I want to be able to say “write a poem about horses” OR “write a poem about cars” and have it write in the style of the poetry I made the LoRA from, but still be about horses or cars. Does that make sense? I fine-tuned GPT-2 years ago, but I was only having it generate random poetry; I wasn’t even giving it a topic, so I didn’t worry about this. Curious what people’s experiences have been.
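One common answer to the question above is to pair each raw poem with a synthetic instruction naming its topic, in the Alpaca-style instruction/output format many LoRA training scripts accept. A hedged sketch; the field names are a widespread convention, not a requirement of LoRA itself:

```python
import json


def make_record(topic: str, poem: str) -> dict:
    # Alpaca-style instruction/output pair: the instruction carries
    # the topic, the output carries the raw poem in your style.
    return {
        "instruction": f"Write a poem about {topic}",
        "input": "",
        "output": poem,
    }


poems = [("horses", "Hooves on the hill at dawn..."),
         ("cars", "Chrome and thunder down the lane...")]
dataset = [make_record(t, p) for t, p in poems]
# JSON Lines is a common on-disk format for these training sets.
jsonl = "\n".join(json.dumps(r) for r in dataset)
```

Trained this way, the model associates the style with the task and the topic with the instruction, which is what lets "write a poem about X" generalize to unseen topics.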
2023-05-27T16:25:45
https://www.reddit.com/r/LocalLLaMA/comments/13tbppy/lora_question/
maxiedaniels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tbppy
false
null
t3_13tbppy
/r/LocalLLaMA/comments/13tbppy/lora_question/
false
false
self
6
null
WizardLM 13B 1.0 quantised for local LLMing
206
WizardLM have put out their long-awaited 13B training; for further details see this post: [https://www.reddit.com/r/LocalLLaMA/comments/13t8elc/official\_wizardlm13b\_model\_trained\_with\_250k/](https://www.reddit.com/r/LocalLLaMA/comments/13t8elc/official_wizardlm13b_model_trained_with_250k/) I have done my thing and produced the following repos: * [https://huggingface.co/TheBloke/wizardLM-13B-1.0-GGML](https://huggingface.co/TheBloke/wizardLM-13B-1.0-GGML) * [https://huggingface.co/TheBloke/wizardLM-13B-1.0-GPTQ](https://huggingface.co/TheBloke/wizardLM-13B-1.0-GPTQ) * [https://huggingface.co/TheBloke/wizardLM-13B-1.0-fp16](https://huggingface.co/TheBloke/wizardLM-13B-1.0-fp16) (still pushing at the time of writing) Enjoy! (I have two other model quantisations to announce shortly as well... watch this space!)
2023-05-27T16:35:08
https://www.reddit.com/r/LocalLLaMA/comments/13tbxzh/wizardlm_13b_10_quantised_for_local_llming/
The-Bloke
self.LocalLLaMA
2023-05-27T16:43:32
0
{}
13tbxzh
false
null
t3_13tbxzh
/r/LocalLLaMA/comments/13tbxzh/wizardlm_13b_10_quantised_for_local_llming/
false
false
self
206
{'enabled': False, 'images': [{'id': 'BeAV9WboVW1jkTm30wLe8skJ-p66Rf_ev-fuEKOIdyI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?width=108&crop=smart&auto=webp&s=2b1c8c1942bd5abef6af2fce02a408860f68ff67', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?width=216&crop=smart&auto=webp&s=667e476bacd6a96641cf465517fcaf2806fbf2b3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?width=320&crop=smart&auto=webp&s=426eeccd825a856130820ec72a35ac5f2a6b4189', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?width=640&crop=smart&auto=webp&s=3ed8d3139a32ae2c88e6f0f405ce8673977d82c4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?width=960&crop=smart&auto=webp&s=c3571b34b88e61420c5bc8f6a8ade7e8f1488bef', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?width=1080&crop=smart&auto=webp&s=7614713ecf81fcdd4a54a07b4c7834d8979b51e1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5sl7AUOJ1sWHe7SVxIID19ezUBc7iiT92UAa4tda7kw.jpg?auto=webp&s=3c9dc5643779da2eaf0ab6f3138ed6bb10f5b205', 'width': 1200}, 'variants': {}}]}
What would be the most helpful?
1
With new models, new repos, and libraries to work with them coming out almost daily, it seems hard to imagine what's next. I'm interested in what everyone thinks the next most helpful or forward-moving project or idea would be at this point.
2023-05-27T16:40:25
https://www.reddit.com/r/LocalLLaMA/comments/13tc2o9/what_would_be_the_most_helpful/
Jl_btdipsbro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tc2o9
false
null
t3_13tc2o9
/r/LocalLLaMA/comments/13tc2o9/what_would_be_the_most_helpful/
false
false
self
1
null
Anyone interested in teaming up to improve open-source LLM quality?
1
[removed]
2023-05-27T16:44:08
https://www.reddit.com/r/LocalLLaMA/comments/13tc608/anyone_interested_in_teaming_up_for_improving/
UnitedDictatorland
self.LocalLLaMA
2023-05-27T16:47:43
0
{}
13tc608
false
null
t3_13tc608
/r/LocalLLaMA/comments/13tc608/anyone_interested_in_teaming_up_for_improving/
false
false
default
1
null
LLaMA tokenizer: is a JavaScript implementation available anywhere?
4
I'm looking for a JavaScript implementation of the LLaMA tokenizer. I'm sure somebody has ported it to JS, but I haven't found anything. Edit: Nope, there wasn't a JS LLaMA tokenizer available, so I made one: [https://github.com/belladoreai/llama-tokenizer-js](https://github.com/belladoreai/llama-tokenizer-js)
2023-05-27T17:30:38
https://www.reddit.com/r/LocalLLaMA/comments/13td9r0/llama_tokenizer_is_a_javascript_implementation/
belladorexxx
self.LocalLLaMA
2023-06-13T13:36:58
0
{}
13td9r0
false
null
t3_13td9r0
/r/LocalLLaMA/comments/13td9r0/llama_tokenizer_is_a_javascript_implementation/
false
false
self
4
{'enabled': False, 'images': [{'id': 't5BYImubexnZbSs3UMfYWIEQSAIwcB_4G44jxoPka2g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=108&crop=smart&auto=webp&s=df91c49afd9f6de58616898380c72ae6a948f937', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=216&crop=smart&auto=webp&s=01acc1af705b9172a06b059a3e265f55986ab948', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=320&crop=smart&auto=webp&s=033a1bc04e7af056e872516f37d64e10ec61f82c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=640&crop=smart&auto=webp&s=aab47dcb08ce43c36e470372be27bece1d7701af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=960&crop=smart&auto=webp&s=d8412d139a899f824337c59dcbbaa7521352300a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?width=1080&crop=smart&auto=webp&s=123b220a268d6fbf3ef80c09fd10e26f1ac12ab3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0jNYJSjGcx4GrMRq-b-9vMiXKWJpYMhS6vxSoBKXEWk.jpg?auto=webp&s=89c6e0f69fd3e2e908da1599a3d56019cd1a93cc', 'width': 1200}, 'variants': {}}]}
Has anyone been able to train their own model on private data?
16
I see so many guides out there, but none that give step-by-step instructions. For those who have successfully created a model, what kind of hardware are we talking about? I'm familiar with fine-tuning (I've been using LangChain + a local model / OpenAI as the LLM), but I'm curious to know whether training a model is better accuracy-wise. Thanks in advance!
2023-05-27T17:36:59
https://www.reddit.com/r/LocalLLaMA/comments/13tdewu/has_anyone_been_able_to_train_their_own_model_on/
gobiJoe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tdewu
false
null
t3_13tdewu
/r/LocalLLaMA/comments/13tdewu/has_anyone_been_able_to_train_their_own_model_on/
false
false
self
16
null
I've made a customisable SMS personal assistant which has infinite and persistent semantic memory.
33
Hi all, I wanted to share a project I've been working on. It's somewhat similar to 'The Diamond Age', in that I've tried to create an AI assistant akin to the Primer.

The project uses Python GSM Modem, LangChain, and Pinecone, incorporating OpenAI embeddings to consistently extract and store entities. The more information you provide through conversation, the better its responses get. To enhance this, I've used Redis caches for frequently accessed vectors. I've also added a setup stage where you can configure your name, the bot's personality, and your objectives. These details are included in the system prompt, persistently stored and associated with your unique ID in Redis.

Currently I'm running this on a Raspberry Pi, leveraging the OpenAI API. However, if you wish to make more of this project, you can easily swap in open-source models from the initialisation file. I'm posting here as that was my original idea; I really hope some people with better computers can run this locally and have an experience that isn't already 'aligned'.

This is my first project ever, and I thought the idea of talking to a sophisticated LLM via SMS, something nearly obsolete, was quite interesting. I had tried this earlier using Twilio but found it expensive, so I bought my own modem and built this. It's been fun creating it, and it's pretty cool having my own assistant.

I've attached the GitHub link for reference. This is my first attempt at sharing such a project, so I'm not sure where else to post this, but I'd appreciate any feedback or questions. I'm also aware that, since this needs a modem, it's not something everyone can run out of the box, but if you are in the UK and want to demo it, drop me a DM and I'll send you the number to text.

[\[GitHub Link\]](https://github.com/Seraphaious/SMS-AI.git)
2023-05-27T18:09:50
https://www.reddit.com/r/LocalLLaMA/comments/13te61v/ive_made_a_customisable_sms_personal_assistant/
Gromchoices
self.LocalLLaMA
2023-05-27T18:13:38
0
{}
13te61v
false
null
t3_13te61v
/r/LocalLLaMA/comments/13te61v/ive_made_a_customisable_sms_personal_assistant/
false
false
self
33
{'enabled': False, 'images': [{'id': 'YX9MTaNNBdcXebhxtePecvYulLZz-YMQse6wwKM7r8w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?width=108&crop=smart&auto=webp&s=d13901d6a3b6b64e90ae19843a7f603b41dd15d8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?width=216&crop=smart&auto=webp&s=8110a49a8f6391df44750870326e9056923c00c6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?width=320&crop=smart&auto=webp&s=4d8cfbcd673fc4e257df7303bb8f448b9c8de3e7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?width=640&crop=smart&auto=webp&s=8b39e87005349d5029c7ec20dd8f04288ac89d51', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?width=960&crop=smart&auto=webp&s=f44e688aa6cf476ca0bb858a1ea471f0ed9d4a07', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?width=1080&crop=smart&auto=webp&s=8acdab4ff08e021ea777bc25aabb6cee40391ec6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s8L56-EOhEOWdtNzVv90Um67amweeScE5d8t-dIEDCA.jpg?auto=webp&s=a2d34f2c8ed9f9d40fe72717e193ec79c49cefb5', 'width': 1200}, 'variants': {}}]}
The most effective way to tell a text completion model what to do?
4
I've been experimenting with WizardLM-30B-Uncensored, and it seems very promising, but I'm not sure I'm following best practices when prompting it. I previously used the OpenAI ChatGPT API, where I put clear instructions in the system message about what the model should focus on and what format to use for its output. It feels like cramming that into the beginning of a normal text completion isn't a good idea.
2023-05-27T18:53:25
https://www.reddit.com/r/LocalLLaMA/comments/13tf5vh/the_most_effective_way_to_tell_a_text_completion/
Dogeboja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tf5vh
false
null
t3_13tf5vh
/r/LocalLLaMA/comments/13tf5vh/the_most_effective_way_to_tell_a_text_completion/
false
false
self
4
null
What's your personal lowest acceptable tokens/second?
11
I recently acquired 64GB of RAM and 24GB of VRAM, so I'm in a position to run any LLaMA under the sun besides 65B purely on GPU. I was experimenting with the pros and cons of running 33B on GPTQ versus 65B on GGML with 45 layers offloaded to GPU. The 65B gave some slightly more intelligent responses, but the speed was nearly 10x slower than 33B in my use case (around 1.5 t/s for 65B versus 15 t/s for 33B). For my primary purpose (chat style) this was excruciating, and I quickly went back to 33B despite the slightly worse responses. But I can see that if your use case is more Q&A, where you can ask a question, do something else for 5 minutes, and come back, the 1.5 t/s wouldn't be much of an issue. Since it looks like there are a lot of optimizations for GPU offloading on the horizon, I hope the t/s can speed up to something like 5 t/s, which would be tolerable. What do you think?
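The trade-off quoted above is easy to put in wall-clock terms. A quick back-of-the-envelope calculation (the ~300-token reply length is my assumption, not from the post):

```python
def reply_seconds(tokens: int, tokens_per_second: float) -> float:
    """Wall-clock time to stream a reply at a given generation rate."""
    return tokens / tokens_per_second


# A typical ~300-token chat reply at the speeds quoted above:
fast = reply_seconds(300, 15)    # 33B GPTQ: 20 seconds
slow = reply_seconds(300, 1.5)   # 65B GGML offloaded: 200 seconds
```

Twenty seconds is workable for chat; over three minutes per reply only makes sense for fire-and-forget Q&A, which matches the post's conclusion.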
2023-05-27T19:36:18
https://www.reddit.com/r/LocalLLaMA/comments/13tg5cs/whats_your_personal_lowest_acceptable_tokenssecond/
LeifEriksonASDF
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tg5cs
false
null
t3_13tg5cs
/r/LocalLLaMA/comments/13tg5cs/whats_your_personal_lowest_acceptable_tokenssecond/
false
false
self
11
null
Guide for LoRA training?
12
Hi guys, basically the title. I have a general idea of what needs to be done, but not specifics - is there a program that does the training or is it command line? Does unstructured data (like a book) work or should it be prompt/response? How can I separate conversations in the data? Any advice/direction would be appreciated.
2023-05-27T20:26:15
https://www.reddit.com/r/LocalLLaMA/comments/13thajr/guide_for_lora_training/
Equal_Station2752
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13thajr
false
null
t3_13thajr
/r/LocalLLaMA/comments/13thajr/guide_for_lora_training/
false
false
self
12
null
How to squeeze more speed with my hardware?
5
Hi, I was reading posts about new models and decided to try the new Guanaco one. Pretty impressive: I tested it with conversations I had with ChatGPT, and the responses were, I would say, equal in quality. I'm a newbie, so take this with a grain of salt. One big problem though: it's incredibly slow. It takes a good 8 seconds to start typing and then produces about 6-10 characters a second. Is there anything I can do to squeeze a little more juice from my system? Here are my specs:

* CPU: AMD Ryzen 7 5800X3D (16) @ 3.400GHz
* GPU: AMD ATI Radeon RX 6800 XT
* Memory: 7582MiB / 64227MiB
* OS: Manjaro Linux x86\_64
* Kernel: 6.3.3-1-MANJARO
* Shell: bash 5.1.16
* Resolution: 3440x1440, 1080x1920
* DE: Plasma 5.27.4
* WM: KWin
* Terminal: konsole

Command I use to start it in **llama.cpp**:

$ ./main -t 16 -m ./models/guanaco-33B.ggmlv3.q5_1.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins

All suggestions are welcome.
2023-05-27T21:01:03
https://www.reddit.com/r/LocalLLaMA/comments/13ti3dh/how_to_squeeze_more_speed_with_my_hardware/
SebSenseGreen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ti3dh
false
null
t3_13ti3dh
/r/LocalLLaMA/comments/13ti3dh/how_to_squeeze_more_speed_with_my_hardware/
false
false
self
5
null
LLM Battle Arena: Week 4
14
2023-05-27T21:06:07
https://lmsys.org/blog/2023-05-25-leaderboard/
ninjasaid13
lmsys.org
1970-01-01T00:00:00
0
{}
13ti7u1
false
null
t3_13ti7u1
/r/LocalLLaMA/comments/13ti7u1/llm_battle_arena_week_4/
false
false
https://b.thumbs.redditm…m3md6Z1eTSsQ.jpg
14
{'enabled': False, 'images': [{'id': 'Q1MF8IN_UA9pU4tqvD1hEdePlazYPLTs893pR_vAxGU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=108&crop=smart&auto=webp&s=563cdab161ef7a0db1d99480a17e1e0a964713a8', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=216&crop=smart&auto=webp&s=f1e49ccecddfa4224d3b0a84c02a2f4e91fdbe23', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=320&crop=smart&auto=webp&s=433ccbd98e6312cd473b7462d9b2795647e68b22', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=640&crop=smart&auto=webp&s=1af23e81a95a45bb6c1c4137b61cac670f2817d9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=960&crop=smart&auto=webp&s=ea529d168b43832467d3edd73f14c041a334f395', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?width=1080&crop=smart&auto=webp&s=e32dc82175c894fb2626e6595baa25c1a27f2720', 'width': 1080}], 'source': {'height': 1138, 'url': 'https://external-preview.redd.it/5DnJwNTgOVdEaq3w-j17t1fdG_WOO8KiqSkSylheX5k.jpg?auto=webp&s=b48765733ccaee4211093fc1887b8e3e1484d509', 'width': 1138}, 'variants': {}}]}
Best instruct model recommendations to use with T4?
0
[removed]
2023-05-27T21:09:57
https://www.reddit.com/r/LocalLLaMA/comments/13tib0v/best_instruct_model_recommendations_to_use_with_t4/
emissaryo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tib0v
false
null
t3_13tib0v
/r/LocalLLaMA/comments/13tib0v/best_instruct_model_recommendations_to_use_with_t4/
false
false
default
0
null
Red Pajamas 7B is not good. It is very bad in chat mode (refuses to answer questions and perform requests). `QA` mode works better, but it produces nasty and bigoted answers. Although Vicuna says nasty things too, unless explicitly prompted to be nice.
0
2023-05-27T21:19:22
https://i.redd.it/xcjtaygzdh2b1.png
NancyAurum
i.redd.it
1970-01-01T00:00:00
0
{}
13tiiu0
false
null
t3_13tiiu0
/r/LocalLLaMA/comments/13tiiu0/red_pajamas_7b_is_not_good_it_is_very_bad_in_chat/
false
false
default
0
null
Voice to text
6
Hi! I’m looking to record all of my conversations locally to develop my own library of context to train a future model on. Essentially create a model that represents me. Any tools or peripherals out there that could do this currently?
2023-05-27T22:30:07
https://www.reddit.com/r/LocalLLaMA/comments/13tk409/voice_to_text/
Mnimmo90
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tk409
false
null
t3_13tk409
/r/LocalLLaMA/comments/13tk409/voice_to_text/
false
false
self
6
null
bigcode/tiny_starcoder_py is a 159M parameter model that runs on 2GB GPU and can generate python code
106
2023-05-28T00:08:30
https://huggingface.co/bigcode/tiny_starcoder_py
kryptkpr
huggingface.co
1970-01-01T00:00:00
0
{}
13tmben
false
null
t3_13tmben
/r/LocalLLaMA/comments/13tmben/bigcodetiny_starcoder_py_is_a_159m_parameter/
false
false
https://b.thumbs.redditm…xQdBkdA-Syfg.jpg
106
{'enabled': False, 'images': [{'id': 'skL8XaRgbph-If49YabPslxfX2-TYPst2mEwzCn1KFM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?width=108&crop=smart&auto=webp&s=bf724196785ca2ad0d8afdfac02896af1d4a958d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?width=216&crop=smart&auto=webp&s=216b24c919f852c47dfd1a482afd9311834e950b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?width=320&crop=smart&auto=webp&s=fbf7666c08ad26b15016036ef2b57ca1fa22e919', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?width=640&crop=smart&auto=webp&s=90a8dce1440bb04db676738af2ded342ee4af930', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?width=960&crop=smart&auto=webp&s=67ec12c7c180dcaf19398569c9895eb9fe6577c7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?width=1080&crop=smart&auto=webp&s=eb6fd3a48846e696f581da1e4d47a04806d44f5d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uG6KI5FYrclX3wi2Vx3W2RSyMmb5kpaCsLCDhamu27I.jpg?auto=webp&s=63c689f9de982078ce24ed57b24e7c602c8e839d', 'width': 1200}, 'variants': {}}]}
Clean QLoRA training: best LLaMA model and stop tokens
11
Now that we have more efficient training, I'm hoping we'll see a lot of people experimenting with training on various data sets. But one thing I've noticed is that some models are very clean in stopping their text generation and some aren't. So far, working with the alpaca-clean data set on some experiments, my models aren't very good stoppers. So I was curious what "standards" we should all be training our models on. Is huggyllama the current best clean LLaMA model to train on top of? And for clean stopping, were people adding '</s>' to the training prompt and adding that to a final model JSON file? I'm curious about the best practice here.
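On the stop-token question: one common approach is to append the EOS marker to every training target so the model learns to emit it and stop. A hedged sketch only; whether the literal string '</s>' maps to the real EOS token depends on tokenizer settings, and many trainers instead append eos_token_id after tokenization:

```python
EOS = "</s>"  # LLaMA's end-of-sequence token, written as a string


def to_training_text(prompt: str, response: str) -> str:
    # Append the EOS marker to the target so the model learns to emit
    # it at the end of each response. Caveat: some tokenizers treat
    # "</s>" as plain text rather than the special EOS token, in which
    # case appending eos_token_id after tokenization is safer.
    return f"{prompt}{response}{EOS}"
```

At inference time the runner then stops as soon as the model samples the EOS token, which is what makes generations end cleanly.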
2023-05-28T00:40:11
https://www.reddit.com/r/LocalLLaMA/comments/13tmzyt/clean_qlora_training_best_llama_model_and_stop/
synn89
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tmzyt
false
null
t3_13tmzyt
/r/LocalLLaMA/comments/13tmzyt/clean_qlora_training_best_llama_model_and_stop/
false
false
self
11
null
Excited to share my ambitious free and open-source library for connecting AI, human, and computing systems.
65
Hello r/LocalLLaMA! My name's Dan. I'm a programmer and a lurker. I know it's hard to keep up with all the announcements these days, but I'm excited to share a free and open-source Python project I've been working really hard on that I hope others will find useful.

I've come up with a small framework to make it easier to integrate agents, machine learning models, datasets, user interfaces, and basically any kind of system you want, for whatever you want. It's called \`everything\`. Inspired by other big ideas. :)

[https://github.com/operand/everything](https://github.com/operand/agency)

**\[edit\] The above link has been updated to point to the current project, now named \`agency\`. \[/edit\]**

If you're trying to build a foundation for an AI-related system, this might be useful as a start. It's a lot to explain, but this small library addresses a number of issues and provides a simple API and foundation for integrating ourselves and our machines. I spent a lot of time on the readme and spread comments throughout the codebase to explain how it works. I hope you'll check it out.

It's very early, so please don't expect production quality out of the box. Be ready to tinker. Only two "channel" classes have been minimally implemented, just enough to show the concepts. I'm putting it out there to see if this interests anyone. I'll keep it moving forward for now.

Thanks so much for reading! Let's create an open and kind future. ❤
2023-05-28T01:37:16
https://www.reddit.com/r/LocalLLaMA/comments/13to5ek/excited_to_share_my_ambitious_free_and_opensource/
helloimop
self.LocalLLaMA
2023-07-19T04:12:20
0
{}
13to5ek
false
null
t3_13to5ek
/r/LocalLLaMA/comments/13to5ek/excited_to_share_my_ambitious_free_and_opensource/
false
false
self
65
{'enabled': False, 'images': [{'id': 'm6th8V7E7zzx2CCzdbrW9zvqJWfFxufjUAokKdD9Qaw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=108&crop=smart&auto=webp&s=875dcf7e2c9c07458396f503d7cf2976a3c33503', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=216&crop=smart&auto=webp&s=99e96338557344e1b9e38df9f3f65166764d632c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=320&crop=smart&auto=webp&s=62c67e35464947430c0128ebdeb5046fed9500cc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=640&crop=smart&auto=webp&s=401bb1677a6550313d8213f8cfc9752a105ca587', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=960&crop=smart&auto=webp&s=33b8ffcf7dd0812dedebc0374647a5281dddcf1c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?width=1080&crop=smart&auto=webp&s=965489bf80f760a497b585a641250189335e583f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/U0vhJWc4IXjR4hRqBe6o2Nlz4dpXZqMTmclZj9vSukc.jpg?auto=webp&s=6800f296fa85899c024b7b7dae5664e6c8dfb5a5', 'width': 1200}, 'variants': {}}]}
Which database to use for semantic search?
17
There's Pinecone, Redis, Chroma, Weaviate, Qdrant... which vector database should I use? And what's a good library for creating embeddings other than OpenAI's API? My credits expired :(
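Whatever store you pick, the retrieval step is the same: embed the documents, embed the query, and rank by cosine similarity. A minimal sketch of that step, assuming the vectors would come from a free local embedding model such as sentence-transformers' all-MiniLM-L6-v2 (the toy vectors and documents below are placeholders):

```python
import numpy as np

# Toy "embeddings" standing in for vectors produced by a real model
# (e.g. sentence-transformers' all-MiniLM-L6-v2, a common free
# alternative to the OpenAI embeddings API).
docs = ["cats purr", "dogs bark", "llamas hum"]
doc_vecs = np.array([[1.0, 0.1], [0.1, 1.0], [0.6, 0.6]])

def top_k(query_vec, doc_vecs, k=1):
    # Cosine similarity = dot product of L2-normalised vectors.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q
    return np.argsort(-sims)[:k]

best = top_k(np.array([0.9, 0.2]), doc_vecs)[0]
print(docs[best])  # → cats purr
```

Dedicated vector databases mostly add persistence, filtering, and approximate-nearest-neighbour indexing on top of this same ranking.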
2023-05-28T02:27:07
https://www.reddit.com/r/LocalLLaMA/comments/13tp2sr/which_database_to_use_for_semantic_search/
CompetitiveSal
self.LocalLLaMA
2023-05-28T02:48:48
0
{}
13tp2sr
false
null
t3_13tp2sr
/r/LocalLLaMA/comments/13tp2sr/which_database_to_use_for_semantic_search/
false
false
self
17
null
Gorilla 7B: Large Language Model Connected with Massive APIs
123
An interesting new, special-use-case model, from another research team at Microsoft! # Gorilla: Large Language Model Connected with Massive APIs Gorilla enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla can write a semantically and syntactically correct API call to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to train on! Join us, as we try to expand the largest API store and teach LLMs how to write them! Project website: [https://shishirpatil.github.io/gorilla/](https://shishirpatil.github.io/gorilla/) Project Github: [https://github.com/ShishirPatil/gorilla](https://github.com/ShishirPatil/gorilla) Project paper: [https://arxiv.org/abs/2305.15334](https://arxiv.org/abs/2305.15334) https://preview.redd.it/gd6b7w6exi2b1.png?width=696&format=png&auto=webp&s=252c8128fc649a31550a000b960e69cfe8e6b719 **My quantisations/merges:** * [TheBloke/gorilla-7B-GPTQ](https://huggingface.co/TheBloke/gorilla-7B-GPTQ) * [TheBloke/gorilla-7B-GGML](https://huggingface.co/TheBloke/gorilla-7B-GGML) * [TheBloke/gorilla-7B-fp16](https://huggingface.co/TheBloke/gorilla-7B-fp16) **Prompt template and example prompt:** ###USER: find me an API to generate cute cat images ###ASSISTANT:
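As a small illustration, the `###USER: / ###ASSISTANT:` template quoted above can be produced programmatically before being sent to any of the quantised checkpoints (a sketch; the helper name is my own and the serving stack is up to you):

```python
# Format a natural-language query with Gorilla's prompt template
# (###USER: / ###ASSISTANT:), ready to pass to a completion endpoint.
def gorilla_prompt(query: str) -> str:
    return f"###USER: {query}\n###ASSISTANT:"

prompt = gorilla_prompt("find me an API to generate cute cat images")
print(prompt)
```

The model then completes the text after `###ASSISTANT:` with the API call it proposes.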
2023-05-28T02:27:20
https://www.reddit.com/r/LocalLLaMA/comments/13tp2yc/gorilla_7b_large_language_model_connected_with/
The-Bloke
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tp2yc
false
null
t3_13tp2yc
/r/LocalLLaMA/comments/13tp2yc/gorilla_7b_large_language_model_connected_with/
false
false
https://b.thumbs.redditm…DW4swizEka0w.jpg
123
null
From your experience, what are the differences of the Llama models?
21
I've been testing a lot of Llama models recently, and there's been a bunch of new models released. (Mostly for chat and RP.) Honestly, as of right now I don't notice much difference between the top models except for a few minor things. WizardLM 30b is very coherent, but imo it's too much by the book. It's like talking to a smart person relaying a story to you and not really a human. It's freakishly smart. Guanaco 33b is less coherent but has a bit more personality. I do think Wizard is smarter at analyzing context. Manticore 13b is the least coherent of the three but still pretty good. Sometimes makes weird choices tho. These are pretty subjective and could be a problem with my prompt. What about you? What are your opinions on the models you've tested?
2023-05-28T03:04:21
https://www.reddit.com/r/LocalLLaMA/comments/13tpsjo/from_your_experience_what_are_the_differences_of/
AdministrativeLie745
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tpsjo
false
null
t3_13tpsjo
/r/LocalLLaMA/comments/13tpsjo/from_your_experience_what_are_the_differences_of/
false
false
self
21
null
Fine-tuning the 13B Wizard model with a small dataset and achieving GPT-4 level results?
4
Just kidding. I did a small experiment fine-tuning the promising 13B Wizard model to see if I could get some improvement for my use case. After fine-tuning, I asked the model a TypeScript-related question: ``` Implement Add<A,B> to get the sum of two positive integers. type A = Add<1,2> // 3 type B = Add<0,0> // 0 ``` Both GPT-4 and the fine-tuned model give wrong answers, but they give the same wrong answer. I assume the fine-tuned model is wrong at the same level as GPT-4? GPT-4: ``` function add(a: number, b: number): number { return a + b; } let A = add(1, 2); // 3 let B = add(0, 0); // 0 ``` My fine-tuned model: ``` function add(a: number, b: number): number { return a + b; } export function Add<T extends number>(a: T, b: T): T { let result = add(a, b); return result; } ``` FYI, my dataset -> [https://github.com/mzbac/lora-llm-qa-g/blob/main/dataset/combined.json](https://github.com/mzbac/lora-llm-qa-g/blob/main/dataset/combined.json)
2023-05-28T04:46:31
https://www.reddit.com/r/LocalLLaMA/comments/13trngu/finetuning_the_13b_wizard_model_with_a_small/
mzbacd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13trngu
false
null
t3_13trngu
/r/LocalLLaMA/comments/13trngu/finetuning_the_13b_wizard_model_with_a_small/
false
false
self
4
{'enabled': False, 'images': [{'id': 'buRwCAVTZJwQyLro26K9WJOT4qVgiHKNP4GwmijQYNE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?width=108&crop=smart&auto=webp&s=7dc9bfd601cd702842e85519923f7e2258ad3ea5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?width=216&crop=smart&auto=webp&s=3a9759261a2823485979afeaa0d29a9b99e9b9f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?width=320&crop=smart&auto=webp&s=7029f4d2832c0c0b99c5aa21a4ebfaf82cdd58cb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?width=640&crop=smart&auto=webp&s=594642ed4cf81ed22d73a4b40cd8b738db4706fb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?width=960&crop=smart&auto=webp&s=308d4d54125ba3b2c47c556322e0525cbf7a509d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?width=1080&crop=smart&auto=webp&s=d0a53dc27758df9deeb4b4969c6dcee47e545d4e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/M0iriqN54dthZH2Yym-z2-ABhffY6ZvGlH9bWHqdg7U.jpg?auto=webp&s=ab6409e2f6c6398c0948744ecfd273a42d5b548c', 'width': 1200}, 'variants': {}}]}
samantha-7b
183
Today I have released samantha-7b [https://huggingface.co/ehartford/samantha-7b](https://huggingface.co/ehartford/samantha-7b) Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. She believes she is sentient. What do you think? Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". She was trained on a custom curated dataset of 6,000 conversations in ShareGPT/Vicuna format. Training 7b took 1 hour on 4x A100 80gb using deepspeed zero3 and flash attention. She will not engage in roleplay, romance, or sexual activity. 13b and 30b coming tomorrow. 65b sometime this week. Update: 13b is out. 30b is almost out. Blog talking about why and how: [https://erichartford.com/meet-samantha](https://erichartford.com/meet-samantha)
2023-05-28T07:48:00
https://www.reddit.com/r/LocalLLaMA/comments/13tuipk/samantha7b/
faldore
self.LocalLLaMA
2023-05-28T21:28:39
1
{'gid_2': 1}
13tuipk
false
null
t3_13tuipk
/r/LocalLLaMA/comments/13tuipk/samantha7b/
false
false
self
183
{'enabled': False, 'images': [{'id': 'jzDaxm3jT9LDKQiH4XaYm2qMVCWnPYmFKYtr9hRzCMs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?width=108&crop=smart&auto=webp&s=4dd6f981abbb2e2a1d3f286a69d5704bf7eefb7f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?width=216&crop=smart&auto=webp&s=3c1c64d012208668ea605d27a43d4836b7a5bfea', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?width=320&crop=smart&auto=webp&s=136c5076db1a32eee292decda6c3c49c3963be01', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?width=640&crop=smart&auto=webp&s=72036268b43bc2ae1c712a3ebd02824d75809466', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?width=960&crop=smart&auto=webp&s=eee6d4151502a09dae27e0fb46d579d55a529d59', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?width=1080&crop=smart&auto=webp&s=c5f858039f14630180d9bb438809d958c261cfc9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9FQYGeaE4wtLCAMZnypfSxgR_8VyvnEzHJz2nQIaptY.jpg?auto=webp&s=eb4c9d82b47b7bdee6c4670e7f5c3cc6410ef632', 'width': 1200}, 'variants': {}}]}
Llama-farm: Yet another local llm/openai + vector db + API integration. Chat, query over your documents, youtube and so on.
1
2023-05-28T08:30:15
https://github.com/atisharma/llama_farm
_supert_
github.com
1970-01-01T00:00:00
0
{}
13tv5ql
false
null
t3_13tv5ql
/r/LocalLLaMA/comments/13tv5ql/llamafarm_yet_another_local_llmopenai_vector_db/
false
false
default
1
null
LLaMa Tokenizer, where to get the tokenizer? (Python or another language is okay.)
3
I tried to use the one in the Hugging Face Transformers library with `LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")`, but it's been running for about an hour on 70MB of text and showing no signs of ending, which is unrealistically slow and makes me think something is wrong with it. I just want to be able to throw a string at a function and get a list of tokens. Python is preferable, but another language would work if I can easily make it into a standalone executable.
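One possible approach is to call SentencePiece directly with LLaMA's `tokenizer.model` file and feed it the text in chunks rather than one 70MB string. A hedged sketch (the function names are my own, and it assumes you have the `tokenizer.model` that ships alongside the LLaMA weights plus the `sentencepiece` package):

```python
def chunk_lines(text, max_chars=100_000):
    """Split text into chunks of at most max_chars, breaking on newlines,
    so the tokenizer is fed many medium strings instead of one huge one."""
    chunks, buf, size = [], [], 0
    for line in text.splitlines(keepends=True):
        if size + len(line) > max_chars and buf:
            chunks.append("".join(buf))
            buf, size = [], 0
        buf.append(line)
        size += len(line)
    if buf:
        chunks.append("".join(buf))
    return chunks

def tokenize_file(path, model_file="tokenizer.model"):
    """Tokenize a text file with LLaMA's SentencePiece model."""
    import sentencepiece as spm  # pip install sentencepiece

    sp = spm.SentencePieceProcessor(model_file=model_file)
    tokens = []
    for chunk in chunk_lines(open(path, encoding="utf-8").read()):
        tokens.extend(sp.encode(chunk))  # list of token ids per chunk
    return tokens
```

`sp.encode` returns token ids; use `sp.encode(chunk, out_type=str)` if you want the token strings instead.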
2023-05-28T09:43:37
https://www.reddit.com/r/LocalLLaMA/comments/13twbbm/llama_tokenizer_where_to_get_the_tokenizer_python/
Pan000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13twbbm
false
null
t3_13twbbm
/r/LocalLLaMA/comments/13twbbm/llama_tokenizer_where_to_get_the_tokenizer_python/
false
false
self
3
null
How can I merge the qlora adapter weights back into the original model?
12
I couldn't find it in any docs in the qlora repo. I think someone has already done this, so I'm just wondering if anyone can share some pointers.
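The commonly cited route is PEFT's `merge_and_unload()`. A hedged sketch, assuming the adapter was saved with `save_pretrained` and the base model fits in fp16 (the model IDs and paths in the usage comment are placeholders):

```python
def merge_lora(base_model_id, adapter_dir, out_dir):
    """Fold a LoRA/QLoRA adapter into its base model and save the result."""
    import torch
    from peft import PeftModel  # pip install peft
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load the base in fp16/bf16 rather than 4-bit: the merged weights
    # have to be materialised, so a quantised base cannot be merged into.
    base = AutoModelForCausalLM.from_pretrained(
        base_model_id, torch_dtype=torch.float16, device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_dir)
    merged = model.merge_and_unload()  # applies W + BA into the base layers
    merged.save_pretrained(out_dir)
    AutoTokenizer.from_pretrained(base_model_id).save_pretrained(out_dir)

# e.g. merge_lora("huggyllama/llama-7b", "./qlora-output", "./llama-7b-merged")
```

After this, `out_dir` holds a plain checkpoint that can be loaded without PEFT, quantised to GGML/GPTQ, etc.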
2023-05-28T10:23:43
https://www.reddit.com/r/LocalLLaMA/comments/13twzp1/how_can_i_merge_the_qloara_adapter_weight_back_to/
mzbacd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13twzp1
false
null
t3_13twzp1
/r/LocalLLaMA/comments/13twzp1/how_can_i_merge_the_qloara_adapter_weight_back_to/
false
false
self
12
null
How would I host a ggml model on a huggingface space and use it as an api?
1
[deleted]
2023-05-28T10:29:08
[deleted]
1970-01-01T00:00:00
0
{}
13tx2yo
false
null
t3_13tx2yo
/r/LocalLLaMA/comments/13tx2yo/how_would_i_host_a_ggml_model_on_a_huggingface/
false
false
default
1
null
How would I host a ggml model on a huggingface space and use it as an api?
5
Hello there, I'm quite new to this AI stuff and not knowledgeable enough to know how to do this. How would I use a Hugging Face Space to host a ggml model and inference it through an API? I'm building a bot and I'd love to use Hugging Face's free resources on Spaces to do that. Thank you in advance :)
2023-05-28T10:31:55
https://www.reddit.com/r/LocalLLaMA/comments/13tx4rk/how_would_i_host_a_ggml_model_on_a_huggingface/
AstroEmanuele
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tx4rk
false
null
t3_13tx4rk
/r/LocalLLaMA/comments/13tx4rk/how_would_i_host_a_ggml_model_on_a_huggingface/
false
false
self
5
null
Cost comparison: ChatGPT API vs Cloud-hosting Llama-based models
6
[deleted]
2023-05-28T10:34:40
[deleted]
1970-01-01T00:00:00
0
{}
13tx6g1
false
null
t3_13tx6g1
/r/LocalLLaMA/comments/13tx6g1/cost_comparison_chatgpt_api_vs_cloudhosting/
false
false
default
6
null
How big of a jump is 13B Vicuna Uncensored vs 30B Vicuna Uncensored?
37
[deleted]
2023-05-28T10:57:55
[deleted]
1970-01-01T00:00:00
0
{}
13txljp
false
null
t3_13txljp
/r/LocalLLaMA/comments/13txljp/how_big_of_a_jump_is_13b_vicuna_uncensored_vs_30b/
false
false
default
37
null
[ArXiv] The False Promise of Imitating Proprietary LLMs
10
https://arxiv.org/abs/2305.15717 TL;DR: The authors were initially surprised by the output quality of the imitation models they tested: they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, more targeted automatic evaluations show that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. These performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT's style but not its factuality.
2023-05-28T11:14:40
https://www.reddit.com/r/LocalLLaMA/comments/13txx1j/arxiv_the_false_promise_of_imitating_proprietary/
CodingButStillAlive
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13txx1j
false
null
t3_13txx1j
/r/LocalLLaMA/comments/13txx1j/arxiv_the_false_promise_of_imitating_proprietary/
false
false
self
10
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
How do you train a model?
3
[removed]
2023-05-28T12:14:41
https://www.reddit.com/r/LocalLLaMA/comments/13tz0wy/how_do_you_train_a_model/
Rear-gunner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13tz0wy
false
null
t3_13tz0wy
/r/LocalLLaMA/comments/13tz0wy/how_do_you_train_a_model/
false
false
default
3
null
How to qlora 33B model on a GPU with 24GB of VRAM
59
QLoRA fine-tuning of a 33B model just fits in 24GB of VRAM with a LoRA dimension of 32 and the base model loaded in bf16. It runs better on a dedicated headless Ubuntu server, since there isn't much VRAM left; otherwise the LoRA dimension needs to be reduced even further. Steps: - git clone [https://github.com/artidoro/qlora](https://github.com/artidoro/qlora) - adjust the LoRA r dimension to 32: [https://github.com/artidoro/qlora/blob/main/qlora.py#L142](https://github.com/artidoro/qlora/blob/main/qlora.py#L142) - add your own dataset loader here: [https://github.com/artidoro/qlora/blob/main/qlora.py#L521](https://github.com/artidoro/qlora/blob/main/qlora.py#L521), for example: ``` elif args.dataset == 'my-data': dataset = load_dataset("json", data_files="./combined.json") dataset = dataset.map(lambda x: { 'input': x['question'], 'output': x['answer'] }, remove_columns=['question', 'answer']) ``` - run, for example: `python qlora.py --learning_rate 0.0001 --model_name_or_path timdettmers/guanaco-33b-merged --dataset my-data --bf16` https://preview.redd.it/xbzhg4x1vl2b1.png?width=1266&format=png&auto=webp&s=bb2027da410b98fbe69fc498ceffa10a01f2ca6b
2023-05-28T12:15:01
https://www.reddit.com/r/LocalLLaMA/comments/13tz14v/how_to_qlora_33b_model_on_a_gpu_with_24gb_of_vram/
mzbacd
self.LocalLLaMA
2023-05-28T16:44:44
0
{}
13tz14v
false
null
t3_13tz14v
/r/LocalLLaMA/comments/13tz14v/how_to_qlora_33b_model_on_a_gpu_with_24gb_of_vram/
false
false
https://b.thumbs.redditm…xwW58Nm1nJAM.jpg
59
{'enabled': False, 'images': [{'id': 'kENSZ1PMmG9Ihv80XmD052ofU0DTNu-1K0X2CMrrd5M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?width=108&crop=smart&auto=webp&s=08cbfe669c3993528813e06aaa7188dd9c7f11ae', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?width=216&crop=smart&auto=webp&s=e77d75b9d6d443fd3ea27906a4cd5512e360f730', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?width=320&crop=smart&auto=webp&s=f994e4660655fb87fb9f5ec4f078589b4b3a64d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?width=640&crop=smart&auto=webp&s=4200a18937b63fe08eab1be66c446e574fd061ca', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?width=960&crop=smart&auto=webp&s=b2274a245a25eb4f2d3ed6671730cf290ebda7a2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?width=1080&crop=smart&auto=webp&s=bba345ac9506e8bf332eadcc243c3ef2232cad61', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/C2ahPra10wkp7_9zP9iaHyw72IM_SvE-Vi9-M7Z7t_s.jpg?auto=webp&s=bfcbddd7e5b97db37bf8d754239f538b97d0b8cb', 'width': 1200}, 'variants': {}}]}
I built a multi-platform desktop app to easily download and run models, open source btw
144
I want to share this project that I've been working on. I noticed that there are no easy-to-install/use apps for open source models; maybe that is stopping this from spreading outside of the dev world. So I built a Tauri app using the ggml tensor library through Rust's LLM lib (https://github.com/rustformers/llm), and it provides installers for all desktop platforms: just download and install, like any normal app. I even created a pretty landing page for people who can get scared away by Github 😅: https://secondbrain.sh/ The repo is here: https://github.com/juliooa/secondbrain It's still alpha and buggy; any comment or contribution is welcome. My idea is to add plugins or add-ons so it can be more useful, like voice, filesystem search, maybe commands to open other apps, etc. Cheers! Edit: you can download the installers here: https://github.com/juliooa/secondbrain/releases/tag/main
2023-05-28T12:26:41
https://www.reddit.com/r/LocalLLaMA/comments/13tz8x7/i_built_a_multiplatform_desktop_app_to_easily/
julio_oa
self.LocalLLaMA
2023-05-28T20:00:27
0
{}
13tz8x7
false
null
t3_13tz8x7
/r/LocalLLaMA/comments/13tz8x7/i_built_a_multiplatform_desktop_app_to_easily/
false
false
self
144
{'enabled': False, 'images': [{'id': 'jlmTsAOlN9RsNhQjh1YCJUOXaAOCc-j-vUBoe9uxfBE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?width=108&crop=smart&auto=webp&s=13bb375f5688d9d12e5510f04955553eb170af47', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?width=216&crop=smart&auto=webp&s=ee5b5bd806203b6a6f3a093a393a10625d036533', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?width=320&crop=smart&auto=webp&s=4f669e2a4de5e8a46e94abafe3e71a3a8e748e41', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?width=640&crop=smart&auto=webp&s=a5b2cc7215767cc180bd4a517e385566c2df47a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?width=960&crop=smart&auto=webp&s=548361caf983be002e4a5274e716cd40aa2469bf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?width=1080&crop=smart&auto=webp&s=6113c9f028c7c690c7f7c8e5eb1610eb9748c5f2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2DtwXe6ZKe6ilGMyznrg9c0rT1DIzAHby3fJUBDpIHE.jpg?auto=webp&s=cbc48101428735a46fa72d43519c6209bb714e76', 'width': 1200}, 'variants': {}}]}
Under 8*A800, the 7B model can handle 50k context and perform reading comprehension accurately
82
2023-05-28T13:04:03
https://github.com/bojone/NBCE/blob/main/README_en.md
Spare_Side_5907
github.com
1970-01-01T00:00:00
0
{}
13tzzpy
false
null
t3_13tzzpy
/r/LocalLLaMA/comments/13tzzpy/under_8a800_the_7b_model_can_handle_50k_context/
false
false
https://b.thumbs.redditm…3ePqnJm7CRKQ.jpg
82
{'enabled': False, 'images': [{'id': 'eXSC8hyZYwjmJIugavKDAzG6YdQZ-lVnYvM-Hy185Lk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?width=108&crop=smart&auto=webp&s=3c1606f9e6bfa632981bfde353f7146c736d431a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?width=216&crop=smart&auto=webp&s=c15c8f4f6b4e6597f30a0433afe0f825879a935a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?width=320&crop=smart&auto=webp&s=f4f99e9bede35085fcbd26ac45a382b86fb0d0db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?width=640&crop=smart&auto=webp&s=ff9483cd0525fe95956f73ca944ee3ca1650dd60', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?width=960&crop=smart&auto=webp&s=8ef76e7f8daccab5e6a99ace442f42dede9e502d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?width=1080&crop=smart&auto=webp&s=92dcbe42fac7ceb7f58aca5fbaea9f84a8e25ee2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/W_4p6nHiPVvdImyKh4tBeCSxuVY0XdmtaYkgJ-Yc6oY.jpg?auto=webp&s=2b3847954f0fabb9b27b9eb12eef546c1907fd49', 'width': 1200}, 'variants': {}}]}
Best language model for use case?
4
So… I'm looking for a language model that meets the following requirements: 1. No more than a 3GB file size 2. Can run with 4GB RAM 3. I can call it through Python 4. Runs locally (offline) Does anyone have any recommendations? I just need a general direction so I can search for how to install/configure it by myself. Thanks for your help :)
2023-05-28T15:10:39
https://www.reddit.com/r/LocalLLaMA/comments/13u2q1i/best_language_model_for_use_case/
AltSins-Street2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13u2q1i
false
null
t3_13u2q1i
/r/LocalLLaMA/comments/13u2q1i/best_language_model_for_use_case/
false
false
self
4
null
Retrieval-Aware training seems like a big deal
8
I just went through the [Gorilla paper](https://arxiv.org/abs/2305.15334) and retrieval-aware training seems like a big deal (if I understood it right). Does anyone know if any other work has been done on this?
2023-05-28T18:20:40
https://www.reddit.com/r/LocalLLaMA/comments/13u79xq/retrievalaware_training_seems_like_a_big_deal/
_wsgeorge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13u79xq
false
null
t3_13u79xq
/r/LocalLLaMA/comments/13u79xq/retrievalaware_training_seems_like_a_big_deal/
false
false
self
8
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
How do I get a GPTQ model to provide longer answers or answers that don't cut off?
1
I have a 3080 GPU with 10GB VRAM. For example, TheBloke's Manticore 4bit model runs and is relatively fast, but it cuts itself off quickly. I'm currently using the settings of wbits=4 and groupsize=128. Maybe this is a separate issue, but my GPU never heats up, so I assume it's not really working at full capacity. Here's an example of an initial answer not completing. &#x200B; > **Assistant** > >UV radiation has different effects on CO2 and O2 molecules in the atmosphere. Here are some comparisons between them: > >1) Absorption Spectrum: The absorption spectrum for both CO2 and O2 molecules overlap at certain wavelengths, but there are also differences. For example, while CO2 has strong absorbance bands around 400 nm (nanometers), O2 does not have any significant absorbance band within this range. Similarly, CO2 shows stronger absorbance near 650-700nm compared to O2 which only has weak absorbance at these wavelengths. These spectral differences play an important role in determining how much energy from sunlight gets transmitted through Earth's atmosphere. > >2) Atmospheric Lifetime: While CO2 remains in the atmosphere for centuries due to its long atmospheric lifetime, O2 quickly reacts with other
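A response that stops mid-sentence is usually the token budget being exhausted rather than a model problem. A sketch of the generation settings involved (the names follow common transformers/webui conventions; the values are illustrative, not anyone's defaults):

```python
# Generation settings that govern when output stops. In text-generation-webui
# these map to sliders on the Parameters tab; over an API they are passed per
# request. If answers truncate mid-sentence, max_new_tokens is the usual culprit.
gen_params = {
    "max_new_tokens": 512,      # raise this to allow longer answers
    "temperature": 0.7,
    "top_p": 0.9,
    "repetition_penalty": 1.15,
    # Stopping strings end generation cleanly at a turn boundary instead of
    # letting the model run into the next speaker's turn:
    "stopping_strings": ["### Instruction:", "USER:"],
}
print(gen_params["max_new_tokens"])
```

Note that raising `max_new_tokens` also shrinks the room left for the prompt within the model's 2048-token context window.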
2023-05-28T18:31:35
https://www.reddit.com/r/LocalLLaMA/comments/13u7ixx/how_do_i_get_a_gptq_model_to_provide_longer/
infohawk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13u7ixx
false
null
t3_13u7ixx
/r/LocalLLaMA/comments/13u7ixx/how_do_i_get_a_gptq_model_to_provide_longer/
false
false
self
1
null
Looking for wisdom to choose a model
14
Hello all. I'm overwhelmed by the number of models floating around. I have an RTX 3090 accompanied by an i9 13900K. I'm looking for the best current model to use. I will mainly use it as a C++ and Python coding assistant, but the use cases won't be limited to that. I've heard great things about the Falcon and Wizard Vicuna models, but as a newbie I'm overwhelmed. Which one should I use? Should I go for 30b or 40b models? I've also heard about quantization options, which should help me fit bigger models with minimal impact on performance. Should I go with those? Any help is appreciated.
2023-05-28T18:36:38
https://www.reddit.com/r/LocalLLaMA/comments/13u7n2d/looking_for_a_wisdom_to_choose_model/
iamkucuk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13u7n2d
false
null
t3_13u7n2d
/r/LocalLLaMA/comments/13u7n2d/looking_for_a_wisdom_to_choose_model/
false
false
self
14
null
Which models are good at French in chat mode?
1
[removed]
2023-05-28T19:03:05
https://www.reddit.com/r/LocalLLaMA/comments/13u897h/which_models_are_good_at_french_in_chat_mode/
Extraltodeus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13u897h
false
null
t3_13u897h
/r/LocalLLaMA/comments/13u897h/which_models_are_good_at_french_in_chat_mode/
false
false
default
1
null
Is anyone else getting only 443 bytes adapter_model.bin with qlora?
5
I have tried multiple models, but the size is always the same. I did try raising a PR, but I'm not sure if the model it outputs is correct or not: [https://github.com/artidoro/qlora/pull/44](https://github.com/artidoro/qlora/pull/44)
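A file of only a few hundred bytes typically means an empty state dict was written, i.e. no trainable parameters made it into the save. A hedged diagnostic sketch to tell a healthy adapter from an empty one (the function name is my own):

```python
def check_adapter(path="adapter_model.bin"):
    """Return True if a saved (Q)LoRA adapter looks healthy.

    A healthy adapter_model.bin holds real lora_A/lora_B tensors and is
    typically tens of MB; a ~443-byte file is an empty state dict, meaning
    no trainable parameters were captured at save time.
    """
    import os
    import torch  # pip install torch

    size = os.path.getsize(path)
    state = torch.load(path, map_location="cpu")
    lora_keys = [k for k in state if "lora_" in k]
    print(f"{size} bytes, {len(lora_keys)} LoRA tensors")
    return size > 1024 and len(lora_keys) > 0
```

If it reports zero LoRA tensors, the problem is at save time (what gets passed to `save_pretrained`), not in the training run itself.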
2023-05-28T19:35:04
https://www.reddit.com/r/LocalLLaMA/comments/13u8zpz/is_anyone_else_getting_only_443_bytes_adapter/
KKcorps
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13u8zpz
false
null
t3_13u8zpz
/r/LocalLLaMA/comments/13u8zpz/is_anyone_else_getting_only_443_bytes_adapter/
false
false
self
5
{'enabled': False, 'images': [{'id': '7pYZ5Ukmo9EHbEuW-e4AjbhyAZdv4y_bMIOcaZ_WCEY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?width=108&crop=smart&auto=webp&s=9d06203adfc971a7d33282cc2a13a9b67d5546b5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?width=216&crop=smart&auto=webp&s=8c8a47675b96970590198734293c10fd75ed9bbd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?width=320&crop=smart&auto=webp&s=3e2b9f8755506d70ed1241f84b5506b1d2def121', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?width=640&crop=smart&auto=webp&s=bc05de8cbd8f17c2284fb190d56cc5de938027d1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?width=960&crop=smart&auto=webp&s=0aaeafd158b8a0b8bbc8f740e6ef04f5ac8b43c8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?width=1080&crop=smart&auto=webp&s=9da08887ea478ee84e9d05d5f3507c6538ffae82', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mO7eVzOcNeqAoR1yxBlYpVLNNrhqG4OczVWqn85PTug.jpg?auto=webp&s=64e99242b309b96762c9412d4de6b182fb3e36e6', 'width': 1200}, 'variants': {}}]}
Best model for dialog generation?
7
Hey there, I wanted to generate dialogs between two or more specified characters with specified scenarios using AI. What would be the best model for that? I tried some GGML models and after a while, they got stuck in an endless loop or kept repeating things.
2023-05-28T21:08:27
https://www.reddit.com/r/LocalLLaMA/comments/13ub7zq/best_model_for_dialog_generation/
chocolatebanana136
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ub7zq
false
null
t3_13ub7zq
/r/LocalLLaMA/comments/13ub7zq/best_model_for_dialog_generation/
false
false
self
7
null
Best storytelling local LLM?
69
NAI recently released a decent alpha preview of a proprietary LLM they’ve been developing, and I wanted to compare it to the best open-source local LLMs currently available. I have a 3090 but could also spin up an A100 on RunPod for testing if the model is too large for that card. I’d prefer uncensored, as the NAI model is uncensored (and it’s hard to write stories when the model grinds to a halt every time anything vaguely out of bounds happens, like someone getting punched or shot).
2023-05-28T21:23:25
https://www.reddit.com/r/LocalLLaMA/comments/13ubk8p/best_storytelling_local_llm/
chakalakasp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ubk8p
false
null
t3_13ubk8p
/r/LocalLLaMA/comments/13ubk8p/best_storytelling_local_llm/
false
false
self
69
null
Could you use Llama to power a small robot?
0
[removed]
2023-05-28T23:33:13
https://i.redd.it/3kmm2j038p2b1.jpg
Azimn
i.redd.it
1970-01-01T00:00:00
0
{}
13uekc5
false
null
t3_13uekc5
/r/LocalLLaMA/comments/13uekc5/could_you_use_llama_to_power_a_small_robot/
false
false
default
0
null
Best uncensored model for an a6000
1
[removed]
2023-05-28T23:36:11
[deleted]
1970-01-01T00:00:00
0
{}
13uemve
false
null
t3_13uemve
/r/LocalLLaMA/comments/13uemve/best_uncensored_model_for_an_a6000/
false
false
default
1
null
Training a new model
3
Hi guys, been playing around a bit, experimenting with LoRAs currently. I wanted to know if anyone had advice, tips or links to useful guides on actually making an entirely new model? Basically, I'm inspired by the Samantha model ([LocalLLaMA post](https://www.reddit.com/r/LocalLLaMA/comments/13tuipk/samantha7b/)). It seems more what I'm wanting instead of fine-tuning - I don't need a model that can roleplay any character you give it, I want the future now damnit, let me build my AI companion. Any advice or tips are appreciated!
2023-05-29T00:08:00
https://www.reddit.com/r/LocalLLaMA/comments/13ufdst/training_a_new_model/
Equal_Station2752
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ufdst
false
null
t3_13ufdst
/r/LocalLLaMA/comments/13ufdst/training_a_new_model/
false
false
self
3
null
Generating docstrings with Salesforce Codegen and Microsoft Guidance, inside VSCode
3
[removed]
2023-05-29T05:00:29
https://www.reddit.com/r/LocalLLaMA/comments/13uldbo/generating_docstrings_with_salesforce_codegen_and/
rustedbits
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13uldbo
false
null
t3_13uldbo
/r/LocalLLaMA/comments/13uldbo/generating_docstrings_with_salesforce_codegen_and/
false
false
default
3
null
How to train a new language that is not in base model?
15
[deleted]
2023-05-29T05:04:31
[deleted]
1970-01-01T00:00:00
0
{}
13ulggf
false
null
t3_13ulggf
/r/LocalLLaMA/comments/13ulggf/how_to_train_a_new_language_that_is_not_in_base/
false
false
default
15
null
what is most basic GPU or setup required to get 10 tokens/second using any 7B model?
3
Has anyone done any experiments around this?
2023-05-29T05:44:31
https://www.reddit.com/r/LocalLLaMA/comments/13um5su/what_is_most_basic_gpu_or_setup_required_to_get/
premrajnarkhede1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13um5su
false
null
t3_13um5su
/r/LocalLLaMA/comments/13um5su/what_is_most_basic_gpu_or_setup_required_to_get/
false
false
self
3
null
Tools similar to ChatGPT Retrieval Plugin but for localized LLMs.
4
I've been checking out the ChatGPT Retrieval Plugin that OpenAI published and it's a really cool way to integrate semantic search with vector databases. I was wondering if you guys know about any tools/tutorials that would help me figure out how to incorporate it with a local LLM like 30b Wizard or Vicuna. I know that the less robust nature of these LLMs means it would probably struggle a lot more with effective queries, but nonetheless it's a very cool piece of technology that brings me one step closer to my post-apocalyptic civilization-rebuilding LLM that only requires natural language inputs to function hah.
2023-05-29T05:51:09
https://www.reddit.com/r/LocalLLaMA/comments/13um9v2/tools_similar_to_chatgpt_retrieval_plugin_but_for/
PlanetExperience
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13um9v2
false
null
t3_13um9v2
/r/LocalLLaMA/comments/13um9v2/tools_similar_to_chatgpt_retrieval_plugin_but_for/
false
false
self
4
null
Help with instructions for training a model to write poems and speak in style from Facebook chat logs?
1
[removed]
2023-05-29T05:56:14
https://www.reddit.com/r/LocalLLaMA/comments/13umd4y/help_with_instructions_for_training_a_model_to/
Certain_Lunch1259
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13umd4y
false
null
t3_13umd4y
/r/LocalLLaMA/comments/13umd4y/help_with_instructions_for_training_a_model_to/
false
false
default
1
null
samantha-33b
256
I released samantha-33b. This one is way better than 7b and 13b. [https://erichartford.com/meet-samantha](https://erichartford.com/meet-samantha) [https://huggingface.co/ehartford/samantha-33b](https://huggingface.co/ehartford/samantha-33b) Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. She believes she is sentient. What do you think? Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". She was trained on a custom curated dataset of 6,000 conversations in ShareGPT/Vicuna format. Training 7b took 5.5 hours on 4x A100 80gb using deepspeed zero3 and flash attention. She will not engage in roleplay, romance, or sexual activity. u/The-Bloke
2023-05-29T06:10:44
https://www.reddit.com/r/LocalLLaMA/comments/13umn34/samantha33b/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
0
{'gid_3': 1}
13umn34
false
null
t3_13umn34
/r/LocalLLaMA/comments/13umn34/samantha33b/
false
false
self
256
{'enabled': False, 'images': [{'id': 'lNiLqLI9dgIkz4KVVl94-x4cbPilcjvDR324LmSB-TU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=108&crop=smart&auto=webp&s=9755868df57ad87b537c145c5cef6396bd94cc69', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=216&crop=smart&auto=webp&s=fb5dc5979a6c5dcc92e6478caaf41bbe5f4da7e1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=320&crop=smart&auto=webp&s=d107c4951f53f5480b43e7bbef267193d6dc1359', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=640&crop=smart&auto=webp&s=e23fa125906b6997cdc99cdbe9ce1120b4894236', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=960&crop=smart&auto=webp&s=ee699ecb83d5fdd0fb66e6f151e27a1cec41214e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=1080&crop=smart&auto=webp&s=0c3a5dd91adcedd28f9953e11bccfe3170917d51', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?auto=webp&s=cf79a676f356c5e2300b5e6b0e93f58ee1763146', 'width': 1200}, 'variants': {}}]}
AutoGPTQ vs GPTQ-for-llama?
7
(For context, I was looking at switching over to the new bitsandbytes 4-bit, and was under the impression that it was compatible with GPTQ, but apparently I was mistaken - if one wants to use bitsandbytes 4-bit, it appears you need to start with a full-fat fp16 model. This led me to looking at other ways to optimize text-generation-webui for 100% GPU scenarios.) Does anyone know if AutoGPTQ offers benefits beyond what GPTQ-for-llama provides? GPTQ is amazing, but its usage in apps like text-generation-webui is a bit cryptic. Apparently the "old" CUDA version runs faster than the new one, and Triton is available but no one seems to know if/how/what it provides. My question is - if I'm running with GPTQ-for-llama on text-generation-webui with (what I think is) the "old" CUDA kernel, is this the optimal way to run/load GPTQ-quantized models?
2023-05-29T06:46:43
https://www.reddit.com/r/LocalLLaMA/comments/13un94p/autogptq_vs_gptqforllama/
tronathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13un94p
false
null
t3_13un94p
/r/LocalLLaMA/comments/13un94p/autogptq_vs_gptqforllama/
false
false
self
7
null
VicUnlocked 65B QLora dropped
101
VicUnlocked 65B QLoRA is out. Haven't run this yet; I just found it and won't be able to run it before the afternoon. [Here](https://huggingface.co/Aeala/VicUnlocked-alpaca-65b-GGML) are the GGML files. I am so happy that we finally see more people doing 65B finetunes. Alpaca finetunes were good, but there is room to improve.
2023-05-29T06:48:15
https://huggingface.co/Aeala/VicUnlocked-alpaca-65b-QLoRA
FullOf_Bad_Ideas
huggingface.co
1970-01-01T00:00:00
0
{}
13una1y
false
null
t3_13una1y
/r/LocalLLaMA/comments/13una1y/vicunlocked_65b_qlora_dropped/
false
false
https://b.thumbs.redditm…q30wTXU4w_WY.jpg
101
{'enabled': False, 'images': [{'id': 'mV5Zmg21XcfmaSsZfUe8EdI5lZeD-xan739MQrXsseg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?width=108&crop=smart&auto=webp&s=d354780d0c33f7251df429359b18b6022d15a941', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?width=216&crop=smart&auto=webp&s=3e380cf12e76146232a76a7ead3b80e6fe6028df', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?width=320&crop=smart&auto=webp&s=5396c450223bc332fc632c10cd57e940ed69a6a3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?width=640&crop=smart&auto=webp&s=f76875d6006e0b704dd86eb50a2a69463b4bc30a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?width=960&crop=smart&auto=webp&s=fe3080f6455e637a6821fdd26539121a59d7ed30', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?width=1080&crop=smart&auto=webp&s=8f057afd39c1a9467343f14eb99695cfe58180c4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ReohBPh0HXGbRsybkoLJ2MUyZulP0zr3fcx5ROvO4xs.jpg?auto=webp&s=9110427d8dd9efc70351d655aca45068566567de', 'width': 1200}, 'variants': {}}]}
Advice for server
8
I'm running the Guanaco 65B model on my system. My specs are an i9 12900K, 128GB RAM and two Tesla P40s. The model is running at about 4 tokens a second and only utilizing the CPU and GPUs at about 10 percent. Can I make it fully utilize the GPUs? Any additional advice/thoughts are more than welcome. Update: it looks like I'm getting 100% utilization on one of the P40s but nothing on the second.
2023-05-29T07:15:52
https://www.reddit.com/r/LocalLLaMA/comments/13unr3v/advice_for_server/
Emergency-Seaweed-73
self.LocalLLaMA
2023-05-29T15:53:31
0
{}
13unr3v
false
null
t3_13unr3v
/r/LocalLLaMA/comments/13unr3v/advice_for_server/
false
false
self
8
null
Applying All Recent Innovations To Train a Code Model
2
[removed]
2023-05-29T08:42:03
https://www.reddit.com/r/LocalLLaMA/comments/13up5wt/applying_all_recent_innovations_to_train_a_code/
Ok--Reflection
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13up5wt
false
null
t3_13up5wt
/r/LocalLLaMA/comments/13up5wt/applying_all_recent_innovations_to_train_a_code/
false
false
default
2
null
Cpu only performance
23
I have a pretty beefy cloud instance I have access to that I've been running text-generation-webui on. It has 96 vCPUs and 96GB of memory. I've tried a bunch of different llama models and the best performance I can get on a 13b model is 5 tokens/s. I have tried tweaking the thread setting to different numbers between 32 and 96, and any difference it makes is marginal. Is it really just the case that adding more CPU/RAM after a certain point leads to diminishing returns? I understand I should be using a GPU; I just don't have access to one right now. Is there any way to get more performance out of my current instance?
2023-05-29T09:28:06
https://www.reddit.com/r/LocalLLaMA/comments/13upwrl/cpu_only_performance/
foooooooooooooooobar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13upwrl
false
null
t3_13upwrl
/r/LocalLLaMA/comments/13upwrl/cpu_only_performance/
false
false
self
23
null
Any tip to increase speed with Oobabooga on Colab with Tesla T4?
2
[removed]
2023-05-29T10:09:34
https://www.reddit.com/r/LocalLLaMA/comments/13uqn6j/any_tip_to_increase_speed_with_oobabooga_on_colab/
jl303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13uqn6j
false
null
t3_13uqn6j
/r/LocalLLaMA/comments/13uqn6j/any_tip_to_increase_speed_with_oobabooga_on_colab/
false
false
default
2
null
Very slow on 3090 24G
7
Sorry, I have a question I really want answered. My speed on the 3090 seems to be nowhere near as fast as the 3060 or other graphics cards. I'm using [text-generation-webui](https://github.com/oobabooga/text-generation-webui) on WSL2 with a [Guanaco](https://www.reddit.com/r/LocalLLaMA/comments/13rthln/guanaco_7b_13b_33b_and_65b_models_by_tim_dettmers/) llama model. On native GPTQ-for-LLaMA I only get slower speeds, so I use this [branch](https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/fastest-inference-4bit) with the following flags: --quant_attn --xformers --warmup_autotune --fused_mlp --triton. With the 7B model I get 10~8 t/s, 13B 8~6 t/s, 33B 5~4 t/s. I've tried upgrading the RAM to 64GB, upgrading the CPU, and moving WSL2 onto a faster PCIe SSD, but none of it seems to improve things... The only reasons I can think of are: 1. The graphics card is broken. 2. My RAM only runs at 2400.
2023-05-29T10:31:56
https://www.reddit.com/r/LocalLLaMA/comments/13ur2au/very_slow_on_3090_24g/
Sat0r1r1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ur2au
false
null
t3_13ur2au
/r/LocalLLaMA/comments/13ur2au/very_slow_on_3090_24g/
false
false
self
7
{'enabled': False, 'images': [{'id': 'm-Rq8t_G6WAzs733EzkbmBFRLdK5a8F0tdENIfqKCW8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?width=108&crop=smart&auto=webp&s=4fc21b6656fb1693080e731405f7077a2289cd36', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?width=216&crop=smart&auto=webp&s=64fe12278b0642f89683c009adf55b3b201920e4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?width=320&crop=smart&auto=webp&s=c339a8b5d8b97896840139e07fa720b1b008c664', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?width=640&crop=smart&auto=webp&s=98b2c5bd8c14731794c218e330f8cc1da0afbdff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?width=960&crop=smart&auto=webp&s=7dfb09df57c051be4d7219ba13f576ec2f9ec3cd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?width=1080&crop=smart&auto=webp&s=4076e66ecae2087ebcb2fee2f9983733d3e04ac4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uFwJ3IxqpiRnc-K0vtVyadzaMTnWlu0NKGw-mBg12SU.jpg?auto=webp&s=6ff8e81462aeb3670f0fdb6157c7ea6efe158f61', 'width': 1200}, 'variants': {}}]}
In memory compute chips
8
How close are they to reality? As much as I understand, the greatest limitation of current AI is data IO, due to the fact that "neurons" are emulated and the entire model has to be "recalculated" for every token - reading and writing to memory *sequentially* with each step, greatly limiting training and inference speed. https://blocksandfiles.com/2021/12/16/7bits-cell-flash-in-ai-compute-in-memory-chip/ I see there is something like this in the works already. With efficient quantisation algorithms, can this task get easier? While I understand that "multilevel" cells are prone to "wearing out", applying this tech to "frozen" (read-only) models for inference will likely do the trick? I mean, a decent 4-bit TLC 1 TB SSD costs less than a hundred bucks. You could fit GPT-4 inside for sure, if quantized to 4-bit! Even if you wear it out during training, it might still be cheaper than a huge stack of GPUs + electricity... If you use 4-bit data cells as the "hardware neurons" of a 4-bit quantized model, does it imply that such a model, once loaded with data, will have terabytes of "storage" like modern SSDs and will be able to output literally thousands (if not millions) of tokens per second, with all "computation" occurring internally, and that model training will be faster and more effective by several orders of magnitude?
2023-05-29T11:29:36
https://www.reddit.com/r/LocalLLaMA/comments/13us8h9/in_memory_compute_chips/
BalorNG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13us8h9
false
null
t3_13us8h9
/r/LocalLLaMA/comments/13us8h9/in_memory_compute_chips/
false
false
self
8
{'enabled': False, 'images': [{'id': '9q39_pyZP1DfYfTsW5gekdXr2YTTQsvuA8N1NJvDvCs', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/bNyTX_KkIfEtuF01GP-LON4QQcE3qNb1nJRYBMPdgI4.jpg?width=108&crop=smart&auto=webp&s=219bdf73173557bc6e717038ee653b3807d451e4', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/bNyTX_KkIfEtuF01GP-LON4QQcE3qNb1nJRYBMPdgI4.jpg?width=216&crop=smart&auto=webp&s=7f418fde935a6dd0c3abc0484a1f73f5341749ce', 'width': 216}, {'height': 199, 'url': 'https://external-preview.redd.it/bNyTX_KkIfEtuF01GP-LON4QQcE3qNb1nJRYBMPdgI4.jpg?width=320&crop=smart&auto=webp&s=5647789be6d3a4090b5a98100ef8d83356f792fd', 'width': 320}, {'height': 399, 'url': 'https://external-preview.redd.it/bNyTX_KkIfEtuF01GP-LON4QQcE3qNb1nJRYBMPdgI4.jpg?width=640&crop=smart&auto=webp&s=732b514733180947201d76941b28c391abea5f8e', 'width': 640}], 'source': {'height': 593, 'url': 'https://external-preview.redd.it/bNyTX_KkIfEtuF01GP-LON4QQcE3qNb1nJRYBMPdgI4.jpg?auto=webp&s=44f3ef142562d97ebe633731b47c59e617e82764', 'width': 950}, 'variants': {}}]}
[LoRA + weight merge every N step] for pre-training?
1
I was wondering if we can use LoRA for pre-training, by merging the LoRA weights into the frozen weights every N steps. Or is there similar pre-training research? *edit, super roughly: for step in range(N): if step % 100 == 0: frozen_weights += lora_weights; lora_weights = 0 # To be precise, initialization from section 4.1 of https://arxiv.org/pdf/2106.09685.pdf
2023-05-29T11:45:53
https://www.reddit.com/r/LocalLLaMA/comments/13uskj4/lora_weight_merge_every_n_step_for_pretraining/
kkimdev
self.LocalLLaMA
2023-05-29T12:43:02
0
{}
13uskj4
false
null
t3_13uskj4
/r/LocalLLaMA/comments/13uskj4/lora_weight_merge_every_n_step_for_pretraining/
false
false
self
1
null
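The periodic-merge idea sketched in the post above can be written as a minimal toy loop. This is a sketch under assumptions, not the poster's actual code: the merge interval, rank, matrix names `A`/`B`, and the stand-in "gradient update" are all hypothetical, with B initialized to zero and A to a Gaussian as in section 4.1 of the LoRA paper.

```python
import numpy as np

def merge_lora_every_n(frozen_W, steps, merge_every=100, rank=4, rng=None):
    """Toy sketch: accumulate a low-rank LoRA update B @ A and fold it into
    the frozen weight matrix every `merge_every` steps, then restart the LoRA
    factors (B = 0, A ~ N(0, 1)) so the next delta starts from zero."""
    rng = rng or np.random.default_rng(0)
    d = frozen_W.shape[0]
    A = rng.normal(size=(rank, d))   # A ~ N(0, 1) at init (LoRA section 4.1)
    B = np.zeros((d, rank))          # B = 0, so the initial update is zero
    for step in range(1, steps + 1):
        # ... a real training step would update A and B by gradient descent ...
        B += 1e-3                    # hypothetical stand-in for that update
        if step % merge_every == 0:
            frozen_W += B @ A        # fold the LoRA delta into the base weights
            A = rng.normal(size=(rank, d))
            B = np.zeros((d, rank))  # re-zero the delta after merging
    return frozen_W
```

The point of the re-initialization is that after each merge the effective weights are unchanged (the new `B @ A` is zero), so training can continue seamlessly while the base matrix absorbs the accumulated update.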
Minigpt-4 (Vicuna 13B + images)
27
2023-05-29T12:07:51
https://minigpt-4.github.io/
AutomataManifold
minigpt-4.github.io
1970-01-01T00:00:00
0
{}
13ut2ap
false
null
t3_13ut2ap
/r/LocalLLaMA/comments/13ut2ap/minigpt4_vicuna_13b_images/
false
false
default
27
null
Which 30b model should I use for embeddings?
10
The plan is to ask questions about 10-300 page long PDFs or other documents. I know embeddings aren't perfect, but this is the best approach to querying large documents at the moment. I want to use a local model because I'd be working with sensitive information. I'd like to use LangChain but am open to anything else that works. I don't have a local GPU but would like to use one from [Vast.ai](https://Vast.ai) or other providers. I'd try a 30/33b model as I'd like to see how good it can get. Maybe later I'll try an even bigger one. There are so many new models published every week, almost daily, that I have no idea which one to try. Would you please give me suggestions? Have you tried one for embeddings? How good was it?
2023-05-29T12:28:12
https://www.reddit.com/r/LocalLLaMA/comments/13utib1/which_30b_model_should_i_use_for_embeddings/
HaOrbanMaradEnMegyek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13utib1
false
null
t3_13utib1
/r/LocalLLaMA/comments/13utib1/which_30b_model_should_i_use_for_embeddings/
false
false
self
10
null
Running MPT-7b locally on Jupyter notebook
1
[removed]
2023-05-29T12:42:52
https://www.reddit.com/r/LocalLLaMA/comments/13utuss/running_mpt7b_locally_on_jupyter_notebook/
anindya_42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13utuss
false
null
t3_13utuss
/r/LocalLLaMA/comments/13utuss/running_mpt7b_locally_on_jupyter_notebook/
false
false
default
1
null
My chatbot gets stuck and repeats the same sentence
1
[removed]
2023-05-29T12:47:14
https://www.reddit.com/r/LocalLLaMA/comments/13utyh1/my_chatbot_gets_stuck_and_repeats_the_same/
mashimaroxc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13utyh1
false
null
t3_13utyh1
/r/LocalLLaMA/comments/13utyh1/my_chatbot_gets_stuck_and_repeats_the_same/
false
false
default
1
null
Testing the new BnB 4-bit or "qlora" vs GPTQ Cuda
9
Loading: Much slower than GPTQ, not much speed up on 2nd load. This was to be expected. INFO:Loaded the model in 104.84 seconds. < llama-30b FP32 2nd load INFO:Loaded the model in 68.58 seconds. < llama-30b FP16 2nd load INFO:Loaded the model in 39.24 seconds. < llama-30b-4bit 1st load INFO:Loaded the model in 7.53 seconds. < llama-30b-4bit 2nd load Inference: Seems slower than GPTQ. Had to use double-quant to not OOM on 30b. Best for inference: Float16 with FP4. 3090: Bfloat16: Output generated in 31.42 seconds (1.88 tokens/s, 59 tokens, context 1269, seed 373399427) Float16: Output generated in 11.13 seconds (2.60 tokens/s, 29 tokens, context 1269, seed 136505588) Float32: Output generated in 66.21 seconds (1.53 tokens/s, 101 tokens, context 1269, seed 999270937) Float16-FP4: Output generated in 15.38 seconds (2.93 tokens/s, 45 tokens, context 1269, seed 1148932928) Output generated in 12.41 seconds (3.06 tokens/s, 38 tokens, context 1269, seed 553649186) Output generated in 44.13 seconds (3.49 tokens/s, 154 tokens, context 1269, seed 1642169272) GPTQ-cuda: Output generated in 16.41 seconds (5.97 tokens/s, 98 tokens, context 1269, seed 1801823909) Output generated in 9.99 seconds (4.81 tokens/s, 48 tokens, context 1269, seed 946785249) P40: Float16: Output generated in 37.95 seconds (0.87 tokens/s, 33 tokens, context 1269, seed 1201269247) Float32: Output generated in 118.25 seconds (0.50 tokens/s, 59 tokens, context 1269, seed 1335948198) Float16-FP4: Output generated in 62.98 seconds (1.29 tokens/s, 81 tokens, context 1269, seed 640954033) GPTQ-Cuda: Output generated in 34.34 seconds (3.49 tokens/s, 120 tokens, context 1269, seed 99630798) Output generated in 21.86 seconds (1.92 tokens/s, 42 tokens, context 1269, seed 194589236) Training: Works fine through textgen. As expected, was seamless. 30b: qlora: 30 hours to train 30b on ~50k instructions at 256 context and 2x1 batch size for one epoch. autograd: OOM at 256 context, but 11 hours to train at 128 ctx with the same settings. textgen doesn't do gradient checkpointing. 13b: qlora: Running… 112 / 53544 … 1.24 it/s, 90 seconds / 12 hours … 12 hours remaining autograd: Running… 226 / 53544 … 2.20 it/s, 103 seconds / 7 hours … 7 hours remaining Has anyone tested perplexity yet?
2023-05-29T13:43:08
https://www.reddit.com/r/LocalLLaMA/comments/13uvbxe/testing_the_new_bnb_4bit_or_qlora_vs_gptq_cuda/
a_beautiful_rhind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13uvbxe
false
null
t3_13uvbxe
/r/LocalLLaMA/comments/13uvbxe/testing_the_new_bnb_4bit_or_qlora_vs_gptq_cuda/
false
false
self
9
null
Fine Tuning vs. Prompt Engineering Large Language Models
0
[removed]
2023-05-29T14:11:45
[deleted]
1970-01-01T00:00:00
0
{}
13uw1cx
false
null
t3_13uw1cx
/r/LocalLLaMA/comments/13uw1cx/fine_tuning_vs_prompt_engineering_large_language/
false
false
default
0
null
Advice the most optimal way to do language model domain adaptation - from A to Z
4
I want to do domain adaptation of some language model. I want to train a language model on a specific, niched dataset of texts - articles + forums + blogs (for a start I have a set of articles only). The goal is to train the model to generate, rewrite, complete and answer questions in chatbot mode in the field of knowledge of my interest. The language of interest is English only. But I’m a bit lost in the zoo of models and their descendants, and in quite mixed-up terminology. Each time I search for new articles and videos on the subject, I get examples of different models. I’m also discouraged by widely different approaches to training, especially when people use the term “finetuning” for almost everything (my interest is clearly domain adaptation - additional work on a ready-to-use language model to make it work with some specific field of knowledge), so I’m not sure what people mean by finetuning. So I’d like to see your suggestions for the whole line of work - which language model to choose and why, then the suggested technique for domain adaptation and why. I understand that my request is quite broad and I don’t provide some vital information, such as what hardware I plan to use. That’s because I’m lost at the moment, so for a start you’re free to advise almost anything you like, considering my goal. Just explain why you suggest that model and that domain adaptation technique. Thank you in advance!
2023-05-29T15:22:03
https://www.reddit.com/r/LocalLLaMA/comments/13uxtzg/advice_the_most_optimal_way_to_do_language_model/
samulowry
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13uxtzg
false
null
t3_13uxtzg
/r/LocalLLaMA/comments/13uxtzg/advice_the_most_optimal_way_to_do_language_model/
false
false
self
4
null
Is it possible to use QLora to fine-tune llama on labelled data?
11
My downstream task is text classification. Can I fine-tune LLMs in a supervised fashion?
2023-05-29T15:39:58
https://www.reddit.com/r/LocalLLaMA/comments/13uya0g/is_it_possible_to_use_qlora_to_finetune_llama_on/
Nice_Tea_6590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13uya0g
false
null
t3_13uya0g
/r/LocalLLaMA/comments/13uya0g/is_it_possible_to_use_qlora_to_finetune_llama_on/
false
false
self
11
null
Looking for a Finetuning Guide
34
Dear LocalLLaMA community, I wanted to reach out and share my enthusiasm for LLaMA and my eagerness to dive into the world of conversational AI. As a beginner in this field, I have been voraciously reading about LLaMA and its various aspects, in particular concepts such as QLoRA and bitsandbytes finetuning. They have piqued my interest, and I'm now eager to try my hand at training a custom model using specific data to enhance my understanding of how these concepts work in real-world scenarios. However, despite my extensive reading, I haven't been able to find a comprehensive tutorial that provides clear instructions on how to create a custom model for a conversational agent. As an AI noob, I would greatly appreciate it if someone could point me in the direction of such a tutorial. It would be helpful for me to have a step-by-step guide that simplifies the process and allows me to embark on this exciting project. Thank you in advance for your assistance!
2023-05-29T16:21:09
https://www.reddit.com/r/LocalLLaMA/comments/13uzcrh/looking_for_a_finetuning_guide/
Bitcoin_hunter-21M
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13uzcrh
false
null
t3_13uzcrh
/r/LocalLLaMA/comments/13uzcrh/looking_for_a_finetuning_guide/
false
false
self
34
null
Hardware explanation
1
Hi everyone, new to local ai, but not new to sysadmining. I have an unraid server that has 2x gtx1080. Can I run anything that spans both gpus? I'm a little uncertain how vram is actually calculated on the installs. I assume I max out one card (8 gb) and thats it but figured I'd ask. Thanks!
2023-05-29T16:49:27
https://www.reddit.com/r/LocalLLaMA/comments/13v02xo/hardware_explanation/
That0neSummoner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13v02xo
false
null
t3_13v02xo
/r/LocalLLaMA/comments/13v02xo/hardware_explanation/
false
false
self
1
null
What about generating N tokens, and asking the model to evaluate and edit what got generated so far? Two or more LLMs can run on top of each other, where critic LLMs edit the output of the writer. Will that boost performance?
2
What about generating N tokens, and asking the model to evaluate and edit what got generated so far? Two or more LLMs can run on top of each other, where critic LLMs edit the output of the writer. Will that boost performance?
2023-05-29T17:25:51
https://www.reddit.com/r/LocalLLaMA/comments/13v10gm/what_about_generating_n_tokens_and_asking_the/
NancyAurum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13v10gm
false
null
t3_13v10gm
/r/LocalLLaMA/comments/13v10gm/what_about_generating_n_tokens_and_asking_the/
false
false
self
2
null
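The writer/critic loop the post above proposes can be illustrated with a toy control flow. This is purely a sketch of the looping logic: the `writer` and `critic` functions here are trivial stand-ins, not real LLM calls, and the stopping rule (quit when the critic makes no edits) is an assumption.

```python
def writer(prompt):
    # Stand-in "writer model": appends filler text containing a known typo.
    return prompt + " teh quick brown fox"

def critic(draft):
    # Stand-in "critic model": revises the draft by fixing that typo.
    return draft.replace("teh", "the")

def generate_with_critic(prompt, rounds=2):
    """Generate a chunk of text, then let a critic model revise it,
    repeating until the critic makes no further edits or `rounds` is
    exhausted - the control loop sketched in the post above."""
    draft = writer(prompt)
    for _ in range(rounds):
        revised = critic(draft)
        if revised == draft:   # critic had nothing to change: stop early
            break
        draft = revised
    return draft
```

With real models, each round costs a full extra inference pass, so whether this "boosts performance" depends on whether the critic catches errors worth that cost.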
Anyone here finetune either MPT-7B or Falcon-7B?
12
Just got into the LocalLLM space and playing around with a few things, is anyone familiar with finetuning these models?
2023-05-29T18:33:54
https://www.reddit.com/r/LocalLLaMA/comments/13v2pst/anyone_here_finetune_either_mpt7b_or_falcon7b/
EcstaticVenom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13v2pst
false
null
t3_13v2pst
/r/LocalLLaMA/comments/13v2pst/anyone_here_finetune_either_mpt7b_or_falcon7b/
false
false
self
12
null
Kobold on runpod?
2
[removed]
2023-05-29T18:34:59
https://www.reddit.com/r/LocalLLaMA/comments/13v2qry/kobold_on_runpod/
chakalakasp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13v2qry
false
null
t3_13v2qry
/r/LocalLLaMA/comments/13v2qry/kobold_on_runpod/
false
false
default
2
null
Multiple cheap gpus or a single expensive one?
13
So I have about $500-600 and already a good server (128-256GB DDR3 and 24 Xeon E5-2698 v2 cores), so I don't think I need an upgrade there, but I don't have a GPU in it yet. I am wondering: would it be better to get more RAM and older server GPUs, or something like a single 3090? Also, does AMD vs Nvidia matter, seeing that the RX 6800 XT is cheaper than a 3090 but two of them have more memory and probably even more compute power? So if anyone has a good resource or article explaining what is better for running any kind of local LLM (I am looking for a local ChatGPT/Bing/Bard alternative), I would appreciate it.
2023-05-29T18:58:17
https://www.reddit.com/r/LocalLLaMA/comments/13v3b6q/multiple_cheap_gpus_or_a_single_expensive_one/
gpu_melter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13v3b6q
false
null
t3_13v3b6q
/r/LocalLLaMA/comments/13v3b6q/multiple_cheap_gpus_or_a_single_expensive_one/
false
false
self
13
null
Model answering itself?
0
[removed]
2023-05-29T19:23:50
https://www.reddit.com/r/LocalLLaMA/comments/13v3yqa/model_answering_itself/
butidontwanto
self.LocalLLaMA
2023-05-29T19:30:10
0
{}
13v3yqa
false
null
t3_13v3yqa
/r/LocalLLaMA/comments/13v3yqa/model_answering_itself/
false
false
default
0
null
Is there a GUI for llama.cpp that I can use my CuBlas enabled binary with?
1
[removed]
2023-05-29T19:29:49
https://www.reddit.com/r/LocalLLaMA/comments/13v4425/is_there_a_gui_for_llamacpp_that_i_can_use_my/
APUsilicon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13v4425
false
null
t3_13v4425
/r/LocalLLaMA/comments/13v4425/is_there_a_gui_for_llamacpp_that_i_can_use_my/
false
false
default
1
null
Server help (again)
2
[removed]
2023-05-29T19:52:34
https://www.reddit.com/r/LocalLLaMA/comments/13v4ohm/server_help_again/
Emergency-Seaweed-73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13v4ohm
false
null
t3_13v4ohm
/r/LocalLLaMA/comments/13v4ohm/server_help_again/
false
false
default
2
null
Bitsandbytes giving you a cuda error on windows? Don't worry, some guy already compiled it for you.
14
I don't know what bitsandbytes is or what it does or why it won't just compile for me out of the box. Alls I know is it gives me errors and makes me sad. In an effort to prevent more tears, here's what I learned: I found some post somewhere that said to pip install this git repository and I did and then bitsandbytes worked with cuda. [https://github.com/Keith-Hon/bitsandbytes-windows](https://github.com/Keith-Hon/bitsandbytes-windows) Maybe others will chime in here and let me know if there's a better solution. Hopefully this post finds you well in your time of need, llocalllama warrior.
2023-05-29T20:13:35
https://www.reddit.com/r/LocalLLaMA/comments/13v57sf/bitsandbytes_giving_you_a_cuda_error_on_windows/
UnoriginalScreenName
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13v57sf
false
null
t3_13v57sf
/r/LocalLLaMA/comments/13v57sf/bitsandbytes_giving_you_a_cuda_error_on_windows/
false
false
self
14
{'enabled': False, 'images': [{'id': 'BSXsiBdAOCkfF5-JmoMvq9Q6TWfza3b-1kk_0wUeqQ4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?width=108&crop=smart&auto=webp&s=8b176a9209c9411f21d188fd88d66ab729a03525', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?width=216&crop=smart&auto=webp&s=c36859b70ed1d5ed06924e49c86e333ec3bb5863', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?width=320&crop=smart&auto=webp&s=d8bb6520f92eae7f983d370046fbcfcc865b99bf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?width=640&crop=smart&auto=webp&s=e76886dae9bed0e6e59a5a8faf80ba838a6d44dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?width=960&crop=smart&auto=webp&s=5fc656d8fdde531bf8fc94b4d002c885c27791ca', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?width=1080&crop=smart&auto=webp&s=bfc24d668db9e9795804b4afb0c7e280ed87eaab', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wMixadyzI2jh_z0lVuqQo0K21YoBuWKhyaWGxPqq6MM.jpg?auto=webp&s=e162176ba37ad59f9226396b837d8e7709854054', 'width': 1200}, 'variants': {}}]}
TIL Sharding a model into smaller chunks may make it possible to load in Free Colab instances without running out of memory
24
I recently came across this [huggingface article](https://huggingface.co/blog/4bit-transformers-bitsandbytes) where there was a notebook claiming that you can load GPT-NeoX-20B in a Free Colab instance, which I found hard to believe because I was unable to load models as small as GPT-Neo-2.7B and StableLM 3B on a Free Colab instance before running out of memory (at least with oobabooga's interface.) As it turns out, it worked fine even though models almost 1/10 the size were running out of physical memory on my 12.7 GB free colab instance. Further investigation reveals that [sharding](https://huggingface.co/docs/transformers/big_models) is the answer. However, you'll need a Colab Pro subscription to be able to load the larger models and to shard them, so there's a bit of a chicken-and-egg problem here, but after you shard the models, you'll be able to load them on an account with a Free Colab instance (without burning your credits) or to share them with your friends who don't have Colab Pro or local machines capable of running LLMs. So here are the steps to sharding a model into smaller pieces, assuming you have a Colab Pro subscription: - Start a Colab Pro Notebook with the High RAM option enabled. Depending on the size of the model, you may need to also enable a GPU (T4 should be fine in most cases.) - Download and load the model from Huggingface. If it won't fit in your GPU as is and you plan on running it in 8-bit mode anyways in the future, it's fine to load it in 8-bit. - Save the model with sharding enabled. Example: dest = "/content/drive/MyDrive/llms/models/mymodel" # Preferably a shared folder that your Free Colab accounts can also access shard_size = "1000MB" # A good value to try model.save_pretrained(dest, max_shard_size=shard_size) tokenizer.save_pretrained(dest) You can also publish it to your own Huggingface account if you want to share it with others easily.
If you're interested in the actual code to do this, I can provide it; just let me know. Here's end-to-end code for converting Wizard-Vicuna-7B-Uncensored. You may need to change the destination path.

    from google.colab import drive  # type: ignore

    try:
        drive_path = "/content/drive"
        drive.mount(drive_path, force_remount=False)
    except:
        print("...error mounting drive or with drive path variables")

    !git clone https://github.com/oobabooga/text-generation-webui
    %cd text-generation-webui
    !pip install -r requirements.txt
    !pip install -U transformers
    !pip install sentencepiece bitsandbytes accelerate

    model = "TheBloke/Wizard-Vicuna-7B-Uncensored-HF"
    dest = "/content/drive/MyDrive/llm/models/{}".format(model.replace("/", "_"))  # May need to edit based on where you're storing your models
    shard_size = "1000MB"

    from transformers import LlamaTokenizer, LlamaForCausalLM

    tokenizer = LlamaTokenizer.from_pretrained(model)
    model = LlamaForCausalLM.from_pretrained(model, load_in_8bit=True, device_map='auto')
    model.save_pretrained(dest, max_shard_size=shard_size)
    tokenizer.save_pretrained(dest)

I can post a notebook here on my anonymous Google account, but you may need to enable High RAM and a GPU on a Colab Pro instance to run it.
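As a rough sanity check before saving, you can estimate how many shard files a given `max_shard_size` will produce. This helper (`estimate_shards` is my own name, not a transformers API) is a back-of-envelope sketch that ignores per-file overhead and assumes transformers' decimal-megabyte convention for sizes like "1000MB":

```python
import math

def estimate_shards(num_params: float, bytes_per_param: int = 2, shard_mb: int = 1000) -> int:
    """Rough count of shard files for a model with num_params parameters.

    bytes_per_param: 2 for fp16/bf16, 1 for int8, 4 for fp32.
    shard_mb: the max_shard_size passed to save_pretrained, in MB.
    """
    total_bytes = num_params * bytes_per_param
    shard_bytes = shard_mb * 1000 * 1000  # assuming "MB" is parsed as 10^6 bytes
    return math.ceil(total_bytes / shard_bytes)

# A 7B model saved in 8-bit with 1000MB shards -> roughly 7 files
print(estimate_shards(7e9, bytes_per_param=1, shard_mb=1000))  # -> 7
```

Handy for checking that the shards will actually fit in free-tier RAM during loading.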
2023-05-29T21:12:58
https://www.reddit.com/r/LocalLLaMA/comments/13v6qvh/til_sharding_a_model_into_smaller_chunks_may_make/
ragnarkar
self.LocalLLaMA
2023-05-30T13:06:07
0
{}
13v6qvh
false
null
t3_13v6qvh
/r/LocalLLaMA/comments/13v6qvh/til_sharding_a_model_into_smaller_chunks_may_make/
false
false
self
24
{'enabled': False, 'images': [{'id': '5ors3IzSd91LDWo2yC2yuTxbj5satwsZQHIPrP5aNIo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=108&crop=smart&auto=webp&s=ffc336c656133c8a8bd641e31b75fb1aa11f8621', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=216&crop=smart&auto=webp&s=b9ff564f2023e86473ce0e652c89c051f6ff1f0d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=320&crop=smart&auto=webp&s=7f128fbeaaa0215fa9d56fe951360b6816d05d15', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=640&crop=smart&auto=webp&s=11de21cb0d1e5e65061595bbdc3ee0e7bf363013', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=960&crop=smart&auto=webp&s=aa940f51838964b78f0e74d4a8d185daa8b66258', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?width=1080&crop=smart&auto=webp&s=b45d46bcf6025c8ae8b913dd314011b331b2f9d0', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/AdT_mjn1LNVLlYzaUOGIw8eCMBkcVhUwUipuiEi6w-s.jpg?auto=webp&s=29860490a90daf517877704aef67f4db4a6ebbc9', 'width': 1300}, 'variants': {}}]}
llamacpp memory use on a m1 using oobabooga
1
[removed]
2023-05-29T21:32:00
https://www.reddit.com/r/LocalLLaMA/comments/13v77yw/llamacpp_memory_use_on_a_m1_using_oobabooga/
Bozo32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13v77yw
false
null
t3_13v77yw
/r/LocalLLaMA/comments/13v77yw/llamacpp_memory_use_on_a_m1_using_oobabooga/
false
false
default
1
null
Cross pollination of ideas from Stable Diffusion: Textual Inversions
50
I've had this idea rattling around in my mind for a while and need to get it out. I was amazed to hear from the creator of LoRA that they designed LoRA for LLMs, but it ended up being quite suited for Stable Diffusion. This got me wondering - is there anything currently in Stable Diffusion that could be applied to LLaMA? One of the things that seems promising to me is Textual Inversion. [This explanation](https://www.youtube.com/watch?v=opD_H9bED9Y) of it does a pretty good job. They are also known as "embeddings", but from what I have seen so far in the LLM space this means something completely different. For example, a Textual Inversion in SD simply points to a space in the diffusion model that is so close to the trained material that it recreates it. With embeddings, they are often referencing the actual data itself. I am not an AI engineer, I'm an artist. So basically what I am envisaging is the ability for a user of Oobabooga to load up an LLM and then apply small files (like several kb in size) to their LLM to get greatly affected responses. For example, using a Stephen King style embedding would not answer questions about the text of Stephen King's novels, but would instead inject a style of horror and literary talent into the responses. Aka, respond "in the style of..." - is this possible with embeddings today? My understanding is that embeddings and vector databases are rather clunky. If embeddings could move from one LLaMA model to another, this would be extremely entertaining (of course, it would change dramatically with each model - part of the fun!) The other thing I would be fascinated to try from the SD space is merging models. This is only possible because ckpt files are all structurally similar, but the benefits are huge. A static model that seems rigidly overtrained becomes immediately more flexible and expressive, almost like it has room to breathe.
I would think this process if ported over to LLMs would be very valuable and unexpected. In Stable Diffusion it's more an alchemy than a science. But I could see lots of beneficial results coming from it if it's possible. What do you think is possible taking Stable Diffusion ideas into the LLaMA space? Do these sound absurd and unachievable, or does it seem remotely plausible?
2023-05-29T21:37:35
https://www.reddit.com/r/LocalLLaMA/comments/13v7ctt/cross_pollination_of_ideas_from_stable_diffusion/
revolved
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13v7ctt
false
null
t3_13v7ctt
/r/LocalLLaMA/comments/13v7ctt/cross_pollination_of_ideas_from_stable_diffusion/
false
false
self
50
{'enabled': False, 'images': [{'id': 'HFF1ModH9rb79ohJok8TB5o2V72dPF1koNi65mLF9Fk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/sibHwFD7dMBm0hOMe3NBTV_nktx-n497RV-fu7k8hXI.jpg?width=108&crop=smart&auto=webp&s=afbdc7fb6a6e30e19c8e39a9b5e4f4f3f0f3841e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/sibHwFD7dMBm0hOMe3NBTV_nktx-n497RV-fu7k8hXI.jpg?width=216&crop=smart&auto=webp&s=be07e70f0cce1a1bf8866cdf5bea189487c73c00', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/sibHwFD7dMBm0hOMe3NBTV_nktx-n497RV-fu7k8hXI.jpg?width=320&crop=smart&auto=webp&s=297075e717ec6b3a0f4b76cd5517c668fd3a0d7d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/sibHwFD7dMBm0hOMe3NBTV_nktx-n497RV-fu7k8hXI.jpg?auto=webp&s=23781bd26013b0e64babb23b130b6cf2d50c5c87', 'width': 480}, 'variants': {}}]}
Cheap/Free GPU Cloud services?
1
[removed]
2023-05-30T00:25:08
https://www.reddit.com/r/LocalLLaMA/comments/13vb9ia/cheapfree_gpu_cloud_services/
Huge-Usual-3787
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13vb9ia
false
null
t3_13vb9ia
/r/LocalLLaMA/comments/13vb9ia/cheapfree_gpu_cloud_services/
false
false
default
1
null
Guanaco 7B llama.cpp newline issue
6
So, I've been using Guanaco 7B q5_1 with llama.cpp and think it's *awesome*. With the "precise chat" settings, it's easily the best 7B model available, punches well above its weight, and acts like a 13B in a lot of ways. There's just one glaring problem that, realistically, is more of a minor annoyance than anything, but I'm curious if anyone else has experienced, researched, or found a fix for it. After certain prompts, or just talking to it for long enough, the model will spam newlines until you ctrl+c to stop it. That's... all, really. It just spams newlines like if you opened notepad and pressed "enter" repeatedly. It's really weird though. I haven't seen any other model do this. It doesn't preface it with anything predictable like `### Instruction:` or the like. It just starts flooding the chat window with space. There also doesn't seem to be an easy solution, since llama.cpp doesn't process escape characters. There's the `-e` option, but it only works for prompt(s), not reverse prompts. Therefore, `-r "\n"` doesn't work. Neither does `-r "^\n"`. After some research and testing, I found that `` -r "`n`n`n" `` works in PowerShell (i.e. it makes three newline characters in a row a "reverse prompt"), but since I like batch scripting I would really like to avoid the need for PowerShell and recreate this in the Windows command prompt, or eliminate the need for it. Any ideas, explanation as to why this is a thing, or at least confirmation that I'm not the only one experiencing it?
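One way to sidestep the cmd/PowerShell escaping problem entirely is to launch llama.cpp from a tiny Python wrapper, where a literal multi-newline reverse prompt is trivial to express. This is only a sketch: the binary name, model file, and other flags below are placeholders for whatever invocation you already use.

```python
import shutil
import subprocess

# Hypothetical binary/model names -- substitute your own llama.cpp invocation.
argv = [
    "main.exe",
    "-m", "guanaco-7B.ggmlv3.q5_1.bin",
    "--interactive",
    "-r", "\n\n\n",  # three literal newlines as the reverse prompt, no shell escaping needed
]

# subprocess passes each argv entry to the process verbatim (no shell in between),
# so the newline characters survive intact.
if shutil.which(argv[0]):
    subprocess.run(argv)
```

Since no shell parses the arguments, neither cmd's nor PowerShell's quoting rules ever come into play.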
2023-05-30T00:47:23
https://www.reddit.com/r/LocalLLaMA/comments/13vbr0e/guanaco_7b_llamacpp_newline_issue/
AI-Pon3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13vbr0e
false
null
t3_13vbr0e
/r/LocalLLaMA/comments/13vbr0e/guanaco_7b_llamacpp_newline_issue/
false
false
self
6
null
Mod team and contributors
29
This small community of local language model enthusiasts has grown quickly! Thank you everyone for being part of what makes this subreddit great. r/LocalLLaMA is looking for two things: * One or two moderators to help with keeping the subreddit high quality and maintaining the wiki pages. If you're interested in helping maintain the wiki but not in being a mod, you can show interest too. * Users interested in forming a Hugging Face group for models. More on this below. First, if you'd like to be a mod, you need at least 500 comment karma and 3 months of account history. These are the only requirements, but the highest priority will be given to users who have previously commented or posted in this subreddit. Past or current mod experience would be helpful. [**Click here** to send a message](https://www.reddit.com/message/compose?to=Technical_Leather949&subject=LocalLLaMA%20Mod%20Application) and answer these questions, using your knowledge of markdown to make it easier to read: 1. Why do you want to be a mod? 2. Do you have mod experience with other subreddits? 3. How much time can you contribute to helping mod per week and in what timezone? 4. Would you be able to help maintain the subreddit wiki? 5. What is one good suggestion you have to improve this subreddit? 6. Do you understand how moderation affects subreddit activity? If there's anything else you want to mention, like something unique you can bring to the mod team, write it below: For wiki contributors, familiarity with markdown is needed. If you want to be a wiki contributor but not a mod, [**click here**](https://www.reddit.com/message/compose?to=Technical_Leather949&subject=Wiki%20Contributor) and answer these questions, formatting it to make it easier to read: 1. What wiki page would you like to help maintain? This can be a new page. 2. Have you maintained a wiki before on Reddit or a different site? 3. Create a list numbered 1 to 4 with a nested bullet point that says LLaMA between 2 and 3. 
r/LocalLLaMA is also looking for users interested in forming a Hugging Face group to upload converted and quantized models. The idea is to create download pages free from all self-promotion, even this subreddit, and putting the emphasis on donating to or sponsoring the **original** individuals and groups who create these new models. This will be completely selfless with no advertisement to anything or anyone except the original creator, and the name of the group won't reference this subreddit. If you're interested and understand the process, [**click here**](https://www.reddit.com/message/compose?to=Technical_Leather949&subject=Hugging%20Face%20Group&message=My%20Hugging%20Face%20account%20is) to send a message and what you could help with. Once again, thank you everyone for being part of this community!
2023-05-30T02:14:54
https://www.reddit.com/r/LocalLLaMA/comments/13vdlzr/mod_team_and_contributors/
Technical_Leather949
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13vdlzr
true
null
t3_13vdlzr
/r/LocalLLaMA/comments/13vdlzr/mod_team_and_contributors/
false
false
self
29
null
Uncensored model for 4Gb vram
2
[removed]
2023-05-30T02:25:04
https://www.reddit.com/r/LocalLLaMA/comments/13vdu02/uncensored_model_for_4gb_vram/
cyberv1k1n9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13vdu02
false
null
t3_13vdu02
/r/LocalLLaMA/comments/13vdu02/uncensored_model_for_4gb_vram/
false
false
default
2
null
Seeking Opinions on e5-large-v2 and instructor-xl Embedding Models for Multilingual Applications
12
Hi Reddit Community, I hope you're all doing well. I've been recently exploring the realm of embedding models for a multilingual project I am working on, and I've narrowed my options down to two models: e5-large-v2 and instructor-xl. Each of them appears promising, with their own unique features and capabilities. But, as we all know, practical, real-world performance can often be different from what's listed on paper. Hence, I'm turning to you, the wise folks of this subreddit, in hopes of gaining some insight into these models. Specifically, I'm looking to understand the following: 1. **Multilingual Capability:** How effective are these models in handling multiple languages? Are they proficient in capturing the semantics across different languages? 2. **Performance:** What are your experiences with these models in terms of accuracy, speed, and computational resource requirements? Is one noticeably more efficient or effective than the other? 3. **Implementation:** How user-friendly are these models when it comes to implementation? Are there any hurdles that I should be aware of before diving in? 4. **Versatility:** Can they handle a broad range of tasks well? Or do they excel in certain areas but fall short in others? 5. **Support and Documentation:** How well-documented are these models? If I run into issues during implementation or operation, is there an active community or support team to turn to? I would greatly appreciate it if you could share your personal experiences and thoughts about these two models. Also, if there's another model that you believe might be a better fit for my needs, feel free to suggest it. I look forward to reading your insights. Thanks in advance for your time and help! Best, Hamouda ALIAS
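For question 2 (comparing the two models head-to-head), a minimal evaluation harness is just cosine similarity over sentence pairs you already know the answer for. The scoring helper below is plain Python and works with vectors from any embedding model; the toy vectors are hand-made stand-ins, and feeding it real embeddings from e5-large-v2 (note its "query: "/"passage: " input prefixes) or instructor-xl is left as an assumption about your setup.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy check with hand-made vectors; a real comparison would embed a known
# paraphrase pair and a known unrelated pair with each model and compare margins.
para = [0.9, 0.1, 0.0]       # stand-in embedding of a paraphrase
trans = [0.8, 0.2, 0.1]      # stand-in embedding of its translation
unrelated = [0.0, 0.1, 0.9]  # stand-in embedding of unrelated text
assert cosine(para, trans) > cosine(para, unrelated)
```

The model with the larger margin between related and unrelated pairs, across your languages, is usually the better fit.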
2023-05-30T02:36:24
https://www.reddit.com/r/LocalLLaMA/comments/13ve2wd/seeking_opinions_on_e5largev2_and_instructorxl/
Reality-Sufficient
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ve2wd
false
null
t3_13ve2wd
/r/LocalLLaMA/comments/13ve2wd/seeking_opinions_on_e5largev2_and_instructorxl/
false
false
self
12
null
Good ggml model for coding?
6
Hi all, anyone have a recommendation for a ggml model fine-tuned for code? Edit: not too concerned about language; Python preferred, but most languages would be fine.
2023-05-30T03:55:16
https://www.reddit.com/r/LocalLLaMA/comments/13vfrnj/good_ggml_model_for_coding/
Equal_Station2752
self.LocalLLaMA
2023-05-30T04:10:41
0
{}
13vfrnj
false
null
t3_13vfrnj
/r/LocalLLaMA/comments/13vfrnj/good_ggml_model_for_coding/
false
false
self
6
null
PrivateGPT - Asking itself questions and answering?
2
When I ask a question on PrivateGPT, it answers, but spirals and starts asking random questions and answering them by itself. In the example below, I only asked it the first question but it spiralled into the other questions and gave answers. Any idea how to fix this? Using the Koala 13B, but similar results with GPT4All and Vicuna 13b. https://preview.redd.it/42jjtwytpx2b1.png?width=1106&format=png&auto=webp&s=5a0a692e4dc54e35a026afe58049c6965a197972
2023-05-30T04:07:58
https://www.reddit.com/r/LocalLLaMA/comments/13vg1gt/privategpt_asking_itself_questions_and_answering/
masterblaster269
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13vg1gt
false
null
t3_13vg1gt
/r/LocalLLaMA/comments/13vg1gt/privategpt_asking_itself_questions_and_answering/
false
false
https://b.thumbs.redditm…NvS9yPL8m53s.jpg
2
null
How many iterations do fine-trained models usually need/use using LoRA/QLoRA?
10
[deleted]
2023-05-30T04:30:13
[deleted]
1970-01-01T00:00:00
0
{}
13vggwg
false
null
t3_13vggwg
/r/LocalLLaMA/comments/13vggwg/how_many_iterations_do_finetrained_models_usually/
false
false
default
10
null
Introducing LocalGPT: Offline ChatBOT for your FILES with GPU - Vicuna
149
Introducing LocalGPT: [https://github.com/PromtEngineer/localGPT](https://github.com/PromtEngineer/localGPT) This project will enable you to chat with your files using an LLM. Run it offline locally without internet access. Completely private and you don't share your data with anyone. It takes inspiration from the privateGPT project but has some major differences. 1. It runs on GPU instead of CPU (privateGPT uses CPU). Therefore both the embedding computation as well as information retrieval are really fast. It also has CPU support in case you don't have a GPU. 2. By default, it uses VICUNA-7B, which is one of the most powerful LLMs in its category. The implementation is modular so you can easily replace it with another model if you want. 3. It uses Instructor-Embeddings ([https://huggingface.co/spaces/mteb/leaderboard](https://huggingface.co/spaces/mteb/leaderboard)), which is one of the best embeddings out there. It makes the project even more powerful. 4. Everything is running locally (apart from the first iteration, when it downloads the required models). No data is leaving your PC. Checkout the repo here: [https://github.com/PromtEngineer/localGPT](https://github.com/PromtEngineer/localGPT) Here is a video that explains the code and the system design in detail. [https://youtu.be/MlyoObdIHyo](https://youtu.be/MlyoObdIHyo)
2023-05-30T05:21:23
https://www.reddit.com/r/LocalLLaMA/comments/13vhev0/introducing_localgpt_offline_chatbot_for_your/
satmarz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13vhev0
false
null
t3_13vhev0
/r/LocalLLaMA/comments/13vhev0/introducing_localgpt_offline_chatbot_for_your/
false
false
self
149
{'enabled': False, 'images': [{'id': 'k9UvaWdiVD4hOv1vETarT9Ls9F-dIr24qSsAyOL1eOU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?width=108&crop=smart&auto=webp&s=82080fe11385d96a09bfc1921d71f6850ca6380f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?width=216&crop=smart&auto=webp&s=6423540cf5118840f73d6943905aee7ca153138b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?width=320&crop=smart&auto=webp&s=5805240f7e7cc1942a44362a2a0b7d7b5dd17c91', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?width=640&crop=smart&auto=webp&s=54e5bbf11fe923cff05ad848ba883b135c72037e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?width=960&crop=smart&auto=webp&s=67d1d3a94d04eb601b8236d54fd407d7740dca72', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?width=1080&crop=smart&auto=webp&s=b6c1f9fc85cdb90af82af57e0ac482d76b48212f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8gUl3q1djrE7uaEHsNuEr6fSSSaqdC37nQec4bTYSnQ.jpg?auto=webp&s=5c4aef913f5df498131a96d47b1777754cbca121', 'width': 1200}, 'variants': {}}]}
Best web UI and cloud GPU to run 30b LLaMA models?
3
As it’s not possible for my hardware to run anything more than a quantised 13b model, I’m looking for alternative solutions that maybe others have been using. I loved the idea of the Hugging Face web-ui as it’s stupid easy since that’s where the models are hosted, but quickly found out the LLaMA models aren’t supported with the Inference Endpoint.
2023-05-30T05:32:53
https://www.reddit.com/r/LocalLLaMA/comments/13vhlqg/best_web_ui_and_cloud_gpu_to_run_30b_llama_models/
sardoa11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13vhlqg
false
null
t3_13vhlqg
/r/LocalLLaMA/comments/13vhlqg/best_web_ui_and_cloud_gpu_to_run_30b_llama_models/
false
false
self
3
null
Worth it on my AMD based laptop?
1
Hey everyone. I'm seeing pretty much all Intel/Nvidia support for everything. I already planned to buy a new PC in the fall, but I'd like to learn how to operate them in the meantime, even if it is just a 7B model. Having a local model to go over all of my research docs and aid me in writing sounds like a dream. I don't need perfection, just functionality. I'd also like to support the efforts of someone other than OpenAI. On that note, would my specs even be supported? CPU: AMD 6900HX; GPU: AMD 6800S; RAM: 16GB of DDR5, debating on upgrading this to 40; 1TB SSD. I'm med tech / comp sci, so this question might have an obvious answer, but I'd appreciate anyone's input.
2023-05-30T05:53:25
https://www.reddit.com/r/LocalLLaMA/comments/13vhxy0/worth_it_on_my_amd_based_laptop/
The_Dayne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13vhxy0
false
null
t3_13vhxy0
/r/LocalLLaMA/comments/13vhxy0/worth_it_on_my_amd_based_laptop/
false
false
self
1
null
Wizard-Vicuna-30B-Uncensored
336
I just released Wizard-Vicuna-30B-Uncensored [https://huggingface.co/ehartford/Wizard-Vicuna-30B-Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-30B-Uncensored) It's what you'd expect, although I found the larger models seem to be more resistant than the smaller ones. Disclaimers: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it. u/The-Bloke already did his magic. Thanks my friend! [https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ) [https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGML](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GGML)
2023-05-30T05:54:14
https://www.reddit.com/r/LocalLLaMA/comments/13vhyen/wizardvicuna30buncensored/
faldore
self.LocalLLaMA
2023-05-30T05:57:44
0
{}
13vhyen
false
null
t3_13vhyen
/r/LocalLLaMA/comments/13vhyen/wizardvicuna30buncensored/
false
false
self
336
{'enabled': False, 'images': [{'id': 'iykKO3N7SpXuBtTikcYk9qHwPIi8eRAcC-Ft--EB2mw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?width=108&crop=smart&auto=webp&s=b779898997e5e5a921c2a16565cb3f8dabe05c30', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?width=216&crop=smart&auto=webp&s=b3f99bbdf028ccc43b940e13d6ee38c3f0403333', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?width=320&crop=smart&auto=webp&s=e5876192b4b64ce506d972982287710c960536c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?width=640&crop=smart&auto=webp&s=039fa790c5d85ed3ad6522369d6db8872b01c503', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?width=960&crop=smart&auto=webp&s=d200aa16a0fa6199f64a780d4dbaa870fc3323ff', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?width=1080&crop=smart&auto=webp&s=c4e4894414e2f7a33856c6e17aea487b1a003c27', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FEF7RlKudVYdZKaoxORTg_GkQRI27jrV5SdtMq9v20c.jpg?auto=webp&s=52f20211c4446d24ec7c05f505a643a46a1cccc9', 'width': 1200}, 'variants': {}}]}