Dataset schema (one record per post, fields in this order):

| column | dtype | stats |
|---|---|---|
| title | string | lengths 1-300 |
| score | int64 | 0-8.54k |
| selftext | string | lengths 0-40k |
| created | timestamp[ns] | |
| url | string | lengths 0-780 |
| author | string | lengths 3-20 |
| domain | string | lengths 0-82 |
| edited | timestamp[ns] | |
| gilded | int64 | 0-2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | lengths 646-1.8k |
| name | string | length 10 |
| permalink | string | lengths 33-82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4-213 |
| ups | int64 | 0-8.54k |
| preview | string | lengths 301-5.01k |
How is gpt-3.5/gpt-4 so fast?
12
If my 3090 can get around 11-12 tokens/s on a 33B model with all the possible optimizations and math magic, how does OpenAI run a 175B model so that it gives basically instant replies? Assuming they are using the latest and greatest (H100), it would need to be thousands of times faster than a 3090, which it isn't. What is the trick? Also, I assume that for GPT-4 they are splitting the model between GPUs, which in my experience drastically reduces inference speed: from 11-12 tokens a second to 3-4.
2023-06-01T14:25:31
https://www.reddit.com/r/LocalLLaMA/comments/13xiyux/how_is_gpt35gpt4_so_fast/
HideLord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xiyux
false
null
t3_13xiyux
/r/LocalLLaMA/comments/13xiyux/how_is_gpt35gpt4_so_fast/
false
false
self
12
null
Dalaï -> Alpaca/llama
1
[removed]
2023-06-01T16:07:57
https://www.reddit.com/r/LocalLLaMA/comments/13xll7s/dalaï_alpacallama/
RatioAltruistic9324
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xll7s
false
null
t3_13xll7s
/r/LocalLLaMA/comments/13xll7s/dalaï_alpacallama/
false
false
default
1
null
I am simply blown away with some of the stuff this thing writes.
39
It just hit me up with the: " It felt like all hope was lost just minutes earlier - yet here they were, back where everything began. And this time, nothing would stop them from reaching their goal. Nothing except maybe death itself. But even then, what did death matter? Death wasn't something anyone could avoid forever; everyone knew that eventually fate would catch up with each person regardless of whether or not he wanted it too. So why bother fighting it? Why waste energy resisting destiny if it meant being able to live life exactly as you wished? That thought alone made every moment worth living for. Every second spent doing whatever brought joy to yourself and others. Because ultimately, isn't happiness really all any human ever truly wants anyway? To be happy? Well, wouldn't it make sense to spend those final days enjoying oneself rather than worrying about some abstract concept called morality? After all, didn't people always tell themselves that they lived by certain rules because they believed it was best for society overall? Yet despite these supposed ideals, most humans never seemed to follow them very well. They broke laws whenever it suited them, lied constantly, cheated whenever given the chance, and hurt others whenever necessary. In fact, sometimes it felt like the world operated under different standards entirely depending on whom you asked. Some said murder was wrong, while others claimed killing was perfectly acceptable if done for the greater good. Still others argued that stealing was evil unless it benefitted the thief directly. How confusing it must have been growing up knowing that nobody else shared the same belief system as you. Perhaps that was part of the reason why children often struggled to understand adult concepts. Maybe kids couldn't grasp ideas beyond basic logic due to having grown up surrounded by contradictions. Or perhaps it was merely a sign of immaturity. Either way, it certainly explained why youngsters rarely understood anything outside of black and white thinking. Regardless, none of that mattered today. Today was a day filled with colorful possibilities. Possibility was freedom. Freedom was power. Power was strength. Strength was victory. Victory was success. Success was happiness. Happiness was love. Love was peace. Peace was tranquility. Tranquillity was calmness. Calmness was serenity. Serenity" Sorry for the wall of text. pygmalion-13b if you're curious. Room temperature and banned eos token.
2023-06-01T16:21:01
https://www.reddit.com/r/LocalLLaMA/comments/13xlxx2/i_am_simply_blown_away_with_some_of_the_stuff/
ansmo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xlxx2
false
null
t3_13xlxx2
/r/LocalLLaMA/comments/13xlxx2/i_am_simply_blown_away_with_some_of_the_stuff/
false
false
self
39
null
WizardLM-Uncensored-Falcon-7b
240
Today I released WizardLM-Uncensored-Falcon-7b: [https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-7b](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-7b). This took 8 hours to train on 4x A100, using WizardLM's original training script (which, surprisingly, worked just fine with Falcon; good job to the LlamaX team!). Do no harm, please. With great power comes great responsibility. Enjoy responsibly.
2023-06-01T16:35:08
https://www.reddit.com/r/LocalLLaMA/comments/13xmbkz/wizardlmuncensoredfalcon7b/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
1
{'gid_2': 1}
13xmbkz
false
null
t3_13xmbkz
/r/LocalLLaMA/comments/13xmbkz/wizardlmuncensoredfalcon7b/
false
false
self
240
{'enabled': False, 'images': [{'id': '4yRErc-M_g6To-KQcI-i394Ooa1yRZTnS0GKtDlUcWo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JPAxbBc1Qj2ceO_RkD7ykCmD9Hm0SqeP_Nl7tBfHCeo.jpg?width=108&crop=smart&auto=webp&s=8bdb18a51b88df23c46cdcd97f8c59622aaf595e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JPAxbBc1Qj2ceO_RkD7ykCmD9Hm0SqeP_Nl7tBfHCeo.jpg?width=216&crop=smart&auto=webp&s=efd89ccdd636a12b92682e70ea5656624561cd2a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JPAxbBc1Qj2ceO_RkD7ykCmD9Hm0SqeP_Nl7tBfHCeo.jpg?width=320&crop=smart&auto=webp&s=9c40bc08333694a6dc635406533eeb7df848bcb4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JPAxbBc1Qj2ceO_RkD7ykCmD9Hm0SqeP_Nl7tBfHCeo.jpg?width=640&crop=smart&auto=webp&s=3bdf50033631c1b8510fe972ecc5050260eed06b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JPAxbBc1Qj2ceO_RkD7ykCmD9Hm0SqeP_Nl7tBfHCeo.jpg?width=960&crop=smart&auto=webp&s=031a2df36e31ff32d280c8b2d869a514f0689563', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JPAxbBc1Qj2ceO_RkD7ykCmD9Hm0SqeP_Nl7tBfHCeo.jpg?width=1080&crop=smart&auto=webp&s=8ae396cdf6e2991bad87e7b48a7f8111b53f5846', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JPAxbBc1Qj2ceO_RkD7ykCmD9Hm0SqeP_Nl7tBfHCeo.jpg?auto=webp&s=3b05f943dafd339fbe4e3ff4dea0a6d9048fddbe', 'width': 1200}, 'variants': {}}]}
Fine-tune the WizardLM 13B using chat history from ChatGPT using QLoRa.
1
[deleted]
2023-06-01T16:38:00
[deleted]
1970-01-01T00:00:00
0
{}
13xmeev
false
null
t3_13xmeev
/r/LocalLLaMA/comments/13xmeev/finetune_the_wizardlm_13b_using_chat_history_from/
false
false
default
1
null
Fine-tune the WizardLM 13B using chat history from ChatGPT with QLoRa
59
The results are kind of interesting. Most of my conversations with ChatGPT are coding-related, yet the fine-tuned model appears to have improved its logic and reasoning capabilities. I suspect that OpenAI has a large user base of developers, so their model was trained on many of these conversations, and maybe that has enhanced its reasoning ability. Some examples of the fine-tuned model vs. the original WizardLM 13B:

WizardLM 13B:

### Instruction: If I hold three coins in my hand and drop two coins and then pick up one coin from the ground. How many coins do I have now?

### Response: You started with 3 coins, dropped 2 coins, so you are left with 1 coin.

### Instruction: The mom of Michael has four sons. Three of their names are Jon, Jo, Jay, What is the fourth child called?

### Response: The fourth child's name is not given in the problem statement.

My fine-tuned 13B:

### Instruction: If I hold three coins in my hand and drop two coins and then pick up one coin from the ground. How many coins do I have now?

### Response: You would have 2 coins left in your hand after dropping two and picking up one.

### Instruction: The mom of Michael has four sons. Three of their names are Jon, Jo, Jay, What is the fourth child called?

### Response: The fourth child's name is Michael.

In case you guys are interested in fine-tuning similar models, I have put all my scripts here: [https://github.com/mzbac/qlora-fine-tune](https://github.com/mzbac/qlora-fine-tune). The Chrome extension I used to download the conversations: [https://github.com/mzbac/chatgpt-backup-extension](https://github.com/mzbac/chatgpt-backup-extension)
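For anyone who wants to see the shape of such a QLoRA run before digging into the repo, here is a minimal sketch using the transformers/peft/bitsandbytes stack. The base checkpoint name, dataset file, and hyperparameters are placeholder assumptions, not the exact settings from the scripts linked above.

```python
# Minimal QLoRA fine-tuning sketch (placeholder names; see the linked repo for the real scripts).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "TheBloke/wizardLM-13B-1.0-fp16"  # assumed base checkpoint

tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token  # LLaMA tokenizers ship without a pad token

# Load the frozen base model in 4-bit NF4 (the "Q" in QLoRA).
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True,
                                           bnb_4bit_quant_type="nf4",
                                           bnb_4bit_compute_dtype=torch.bfloat16),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Only these small LoRA adapters are trained; the 4-bit base weights stay frozen.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# "chatgpt_history.json" is a hypothetical export from the backup extension.
data = load_dataset("json", data_files="chatgpt_history.json")["train"]
data = data.map(lambda x: tok(x["text"], truncation=True, max_length=1024),
                remove_columns=data.column_names)

Trainer(model=model,
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
        args=TrainingArguments(output_dir="qlora-out", per_device_train_batch_size=2,
                               gradient_accumulation_steps=8, num_train_epochs=3,
                               learning_rate=2e-4, bf16=True, logging_steps=10)).train()
```

The whole point of the setup is that only the LoRA adapter weights (a few hundred MB at most) receive gradients, which is what lets a 13B model fit on a single consumer GPU during training.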
2023-06-01T16:39:26
https://www.reddit.com/r/LocalLLaMA/comments/13xmfu0/finetune_the_wizardlm_13b_using_chat_history_from/
mzbacd
self.LocalLLaMA
2023-06-01T16:42:29
0
{}
13xmfu0
false
null
t3_13xmfu0
/r/LocalLLaMA/comments/13xmfu0/finetune_the_wizardlm_13b_using_chat_history_from/
false
false
self
59
{'enabled': False, 'images': [{'id': 'SYNNBiAXntrxl-ahkf9pFqLRUEFcidOyFZXOT6tYoCA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EPykmoFPi2B0aiVNfaVrPam_XJo7e9X2br6flzkxVQQ.jpg?width=108&crop=smart&auto=webp&s=7ed178e92f0973fef9412aaee3258166989ab6d0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EPykmoFPi2B0aiVNfaVrPam_XJo7e9X2br6flzkxVQQ.jpg?width=216&crop=smart&auto=webp&s=7b58a3df5907625e1687cfc93034eeea4b432953', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EPykmoFPi2B0aiVNfaVrPam_XJo7e9X2br6flzkxVQQ.jpg?width=320&crop=smart&auto=webp&s=ac57a0d433ee1635a64b45140e566a0ac46753a4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EPykmoFPi2B0aiVNfaVrPam_XJo7e9X2br6flzkxVQQ.jpg?width=640&crop=smart&auto=webp&s=5bc49987ee4d75de910332c1839b51e72ff3c661', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EPykmoFPi2B0aiVNfaVrPam_XJo7e9X2br6flzkxVQQ.jpg?width=960&crop=smart&auto=webp&s=11fb721e8395657b782db7d56673b085f83cfdab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EPykmoFPi2B0aiVNfaVrPam_XJo7e9X2br6flzkxVQQ.jpg?width=1080&crop=smart&auto=webp&s=430999a7e4cb7298e390ff6c58bfa1e0c41407ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EPykmoFPi2B0aiVNfaVrPam_XJo7e9X2br6flzkxVQQ.jpg?auto=webp&s=6566cb6f0163a31a5921baf9a049ae84a40b0c38', 'width': 1200}, 'variants': {}}]}
Help getting a model which can come up with a simple filename
2
I am trying to scan PDF files, extract the text with Tesseract OCR, and then ask my local model to come up with a file name so I can organize bills. I have tried text-generation-webui with WizardLM 30B, Wizard Vicuna 30B and 13B, and a few others, using prompts such as:

"Given the following scanned text, reply with a filename suggestion in the format of "COMPANY-MONTH-DAY-YEAR.pdf" or as close to it as possible. The COMPANY and date come from the document. The dashes are required. Respond with ONLY the filename in quotes, nothing else, with COMPANY replaced with the company, and the date portions replaced with the date. JUST RETURN THE FILENAME BUT REMEMBER COMPANY IS TO BE REPLACED WITH THE COMPANY LISTED IN THE DOCUMENT BELOW:"

However, this seems completely ineffective: sometimes it answers with full sentences, sometimes it keeps COMPANY literally, and sometimes it can't figure out how to put dashes in the date. Does anyone have suggestions for other things I can try? I haven't tried fine-tuning yet; is that something I can do easily?
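One approach that tends to work better than begging in the prompt (a sketch of a suggestion, not something from the post): keep the instruction short, then enforce the COMPANY-MONTH-DAY-YEAR.pdf shape in code and re-ask whenever the reply doesn't parse.

```python
import re
from typing import Optional

# Matches e.g. "ACME-06-01-2023.pdf"; tweak the COMPANY character class to taste.
FILENAME_RE = re.compile(r"([A-Za-z0-9&]+)-(\d{1,2})-(\d{1,2})-(\d{4})\.pdf")

def extract_filename(reply: str) -> Optional[str]:
    """Pull the first COMPANY-MONTH-DAY-YEAR.pdf token out of a model reply."""
    m = FILENAME_RE.search(reply)
    if m is None or m.group(1).upper() == "COMPANY":  # the model parroted the placeholder
        return None
    return m.group(0)

# Usage sketch: `generate` is whatever call drives your local model.
# for attempt in range(3):
#     name = extract_filename(generate(prompt + ocr_text))
#     if name:
#         break
```

This way the model only has to get the format approximately right once in a few tries, instead of perfectly right every time.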
2023-06-01T16:59:03
https://www.reddit.com/r/LocalLLaMA/comments/13xmynf/help_getting_a_model_which_can_come_up_with_a/
superlinux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xmynf
false
null
t3_13xmynf
/r/LocalLLaMA/comments/13xmynf/help_getting_a_model_which_can_come_up_with_a/
false
false
self
2
null
Seeking an experienced oobabooga user to get Falcon 40B running inside Oobabooga. Offering 96 hours of 48GB GPU time.
38
I have a server with 48GB of VRAM (a Quadro RTX 8000) and 128GB of RAM. I have tried and failed to get Falcon 40B working at all, either inside Ooba or not inside Ooba. I am seeking an experienced user to help me get Falcon 40B going inside Oobabooga, and produce a HOWTO stating how this was done. (This HOWTO will get published here on /r/localllama, with a possible crosspost to /r/oobabooga). I don't just want this information for me, I want it for everyone! I will provide you with root access to a VM with PCI passthrough to my Quadro RTX 8000 and 112GB of RAM, running a fresh install of Ubuntu 22.04 or 23.04 (your choice). If you can get Ooba running, I'll let you use this VM for 96 consecutive hours for any inference, training, or any other legal GPU-related task you want. It'll be your private toy for that time. Message me if you're interested. Please, serious enquiries only. Please only apply if you are experienced in configuring oobabooga for non-LLaMA-based models, experienced in configuring and running Falcon, or both.
2023-06-01T17:03:20
https://www.reddit.com/r/LocalLLaMA/comments/13xn37k/seeking_an_experienced_oobabooga_user_to_get/
AlpsAficionado
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xn37k
false
null
t3_13xn37k
/r/LocalLLaMA/comments/13xn37k/seeking_an_experienced_oobabooga_user_to_get/
false
false
self
38
null
NVIDIA’s new AI model ‘NEURALANGELO’ reconstructs 3D scenes from 2D videos.
66
[removed]
2023-06-01T17:05:41
https://v.redd.it/f9ih2ipluf3b1
adesigne
v.redd.it
1970-01-01T00:00:00
0
{}
13xn5h5
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/f9ih2ipluf3b1/DASHPlaylist.mpd?a=1695255560%2CYWEzMjA5NjE4MjFkYTBkZWJmOGI2YmFmYzA3ZDNiNWZkNThiNGYzOTdhZTcxMDRlMjBiMzhmMzUwZTc0NTczMQ%3D%3D&v=1&f=sd', 'duration': 57, 'fallback_url': 'https://v.redd.it/f9ih2ipluf3b1/DASH_720.mp4?source=fallback', 'height': 720, 'hls_url': 'https://v.redd.it/f9ih2ipluf3b1/HLSPlaylist.m3u8?a=1695255560%2CZGVlY2I0YzczMzRjYzMyMzg5OTQ2NGExZDMxYTJlZTE5OGEwMzJkY2JhYjFmYTIzZDdkY2Q0M2JjYjg2ZmRkMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/f9ih2ipluf3b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_13xn5h5
/r/LocalLLaMA/comments/13xn5h5/nvidias_new_ai_model_neuralangelo_reconstructs_3d/
false
false
default
66
null
Is there a model I can run on my M1 Mac?
1
[removed]
2023-06-01T17:11:57
https://www.reddit.com/r/LocalLLaMA/comments/13xnbje/is_there_a_model_i_can_run_on_my_m1_mac/
renegadellama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xnbje
false
null
t3_13xnbje
/r/LocalLLaMA/comments/13xnbje/is_there_a_model_i_can_run_on_my_m1_mac/
false
false
default
1
null
LLaMa-Adapter Multimodal supporting text, image, audio, and video inputs
15
2023-06-01T17:48:56
https://twitter.com/lupantech/status/1664316926003396608
_wsgeorge
twitter.com
1970-01-01T00:00:00
0
{}
13xoa3b
false
{'oembed': {'author_name': 'Pan Lu', 'author_url': 'https://twitter.com/lupantech', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">🔥Thrilled to release LLaMa-Adapter Multimodal!<br><br>🎯Now supporting text, image, audio, and video inputs powered by <a href="https://twitter.com/hashtag/ImageBind?src=hash&amp;ref_src=twsrc%5Etfw">#ImageBind</a>. 🧵6<br><br>💻Codes for inference, pretraining, and finetuning ➕ checkpoints:<a href="https://t.co/ejcREYa4Ne">https://t.co/ejcREYa4Ne</a><br>demo: <a href="https://t.co/KTlTbzqcX6">https://t.co/KTlTbzqcX6</a><br>abs: <a href="https://t.co/l2UEvQYA1x">https://t.co/l2UEvQYA1x</a> <a href="https://t.co/kAJpwbElni">pic.twitter.com/kAJpwbElni</a></p>&mdash; Pan Lu (@lupantech) <a href="https://twitter.com/lupantech/status/1664316926003396608?ref_src=twsrc%5Etfw">June 1, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/lupantech/status/1664316926003396608', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_13xoa3b
/r/LocalLLaMA/comments/13xoa3b/llamaadapter_multimodal_supporting_text_image/
false
false
https://b.thumbs.redditm…9Twa9Xl2_g1I.jpg
15
{'enabled': False, 'images': [{'id': 'dk66zFAtNBHp78ITQwPwv6nwBc7KeFVG3UtDHGq-0IQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/nvQ830q-6IYMc42YNjZjou-kf2YY-YR-Ifq4fqT1Sdo.jpg?width=108&crop=smart&auto=webp&s=de9e6648418d01e7760ce52fc377a85978df13a0', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/nvQ830q-6IYMc42YNjZjou-kf2YY-YR-Ifq4fqT1Sdo.jpg?auto=webp&s=e7fbf0ad15a03be9c378e388561503095b8c7011', 'width': 140}, 'variants': {}}]}
Chat with your data locally and privately on CPU with LocalDocs: GPT4All's first plugin!
78
2023-06-01T17:52:09
https://twitter.com/nomic_ai/status/1664316537736511500
NomicAI
twitter.com
1970-01-01T00:00:00
0
{}
13xod69
false
{'oembed': {'author_name': 'Nomic AI', 'author_url': 'https://twitter.com/nomic_ai', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Local LLMs now have plugins! 💥<br><br>GPT4All LocalDocs allows you chat with your private data!<br><br>- Drag and drop files into a directory that GPT4All will query for context when answering questions.<br>- Supports 40+ filetypes<br>- Cites sources.<a href="https://t.co/28GSI4XBcF">https://t.co/28GSI4XBcF</a> <a href="https://t.co/1JevIr7qgI">pic.twitter.com/1JevIr7qgI</a></p>&mdash; Nomic AI (@nomic_ai) <a href="https://twitter.com/nomic_ai/status/1664316537736511500?ref_src=twsrc%5Etfw">June 1, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/nomic_ai/status/1664316537736511500', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_13xod69
/r/LocalLLaMA/comments/13xod69/chat_with_your_data_locally_and_privately_on_cpu/
false
false
https://b.thumbs.redditm…QqE9H7IwCARg.jpg
78
{'enabled': False, 'images': [{'id': 'W69-zhaSfq2z-P9ItUZjFVRDaoMYsC2N00hrJK0Nubk', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/R495KtLsDZ9RC_ZlhtDsnFrKyHr0NXpIeDCjyEFGoxc.jpg?width=108&crop=smart&auto=webp&s=5b2efb766869ed6fcb3c99de88aef9b0f1344678', 'width': 108}], 'source': {'height': 83, 'url': 'https://external-preview.redd.it/R495KtLsDZ9RC_ZlhtDsnFrKyHr0NXpIeDCjyEFGoxc.jpg?auto=webp&s=4a7f8b829870a395c9884c49e62b3f1852c68bd6', 'width': 140}, 'variants': {}}]}
Script to convert Falcon to ggml
1
[removed]
2023-06-01T19:37:06
https://www.reddit.com/r/LocalLLaMA/comments/13xr2ul/script_to_convert_falcon_to_ggml/
Yo-Momma-Loves-AI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xr2ul
false
null
t3_13xr2ul
/r/LocalLLaMA/comments/13xr2ul/script_to_convert_falcon_to_ggml/
false
false
default
1
null
Commercial Opensource LLM
4
Hello. [Disclaimer: I'm sorry if this question has been answered before and I'm violating the rules of this subreddit. Please direct me to the place where I can find answers before banning me.]

I am trying to find the best (or just a couple of capable) open-source LLMs available for commercial use. Do you have any experience or knowledge?

Background: I have 3 use cases:

1. Finding abnormalities in a large amount of numerical data.
2. Describing (commenting/documenting) code. The scope/context is quite large, so no chance with OpenAI's GPT-3/4 API, and I need to keep the source code local.
3. Generating code from prompts (primarily Python, R, C++, C#, maybe some Java or SQL queries).

I know no single LLM can fulfill all my needs, so it would help me tremendously if one LLM could succeed at, or help with, even one of my use cases. Thank you in advance for any responses. [Sorry, I know I didn't have much to contribute, but I didn't find a lot of useful info online. Sorry for my English.]
2023-06-01T20:29:56
https://www.reddit.com/r/LocalLLaMA/comments/13xsfz0/commercial_opensource_llm/
schmul02
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xsfz0
false
null
t3_13xsfz0
/r/LocalLLaMA/comments/13xsfz0/commercial_opensource_llm/
false
false
self
4
null
Best model to transform data?
10
[deleted]
2023-06-01T20:34:18
[deleted]
2023-06-21T11:43:02
0
{}
13xsk4q
false
null
t3_13xsk4q
/r/LocalLLaMA/comments/13xsk4q/best_model_to_transform_data/
false
false
default
10
null
A guy created a 3D game model of himself with AIs
8
[removed]
2023-06-01T20:58:38
https://v.redd.it/z8tejby50h3b1
adesigne
v.redd.it
1970-01-01T00:00:00
0
{}
13xt63e
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/z8tejby50h3b1/DASHPlaylist.mpd?a=1695262629%2COWI3Yzc1NjdlZmRkODFlYWQ4NGM1OWE4NGZiMTg5NjQxODc5MTkzYjlhZmI1ZjY4MTMxNzQ4YTQ3MWNkZTdiZA%3D%3D&v=1&f=sd', 'duration': 57, 'fallback_url': 'https://v.redd.it/z8tejby50h3b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/z8tejby50h3b1/HLSPlaylist.m3u8?a=1695262629%2CNzNiYTQzYWI5MmIxOTdlZTZlZTc1YjVkNDViNjgzMzU0ZGU3YWM2OTAyYzBkNzY1ZTAxZmRiMzQ5MzRlYzc2Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/z8tejby50h3b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 652}}
t3_13xt63e
/r/LocalLLaMA/comments/13xt63e/a_guy_created_a_3d_game_model_of_himself_with_ais/
false
false
default
8
null
News or Blog Article Build?
1
I enjoy writing, but generating content every day becomes monotonous and is its own kind of horrible grind. If I can get my 4090 to write a few basic articles for me every couple of days and then give them a quick edit, that would be an awesome workflow improvement for me! Any local LLaMAers out there happy with their setup for this use case? I'm interested in existing Hugging Face models to try, and secondarily in ideas about LoRAs/training. Hoping to use a local LLM specifically, not an AI writing assistant subscription service! Thanks all, this sub has been extremely educational for me.
2023-06-01T22:38:10
https://www.reddit.com/r/LocalLLaMA/comments/13xvn1y/news_or_blog_article_build/
frostpen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xvn1y
false
null
t3_13xvn1y
/r/LocalLLaMA/comments/13xvn1y/news_or_blog_article_build/
false
false
self
1
null
Celebrating LocalLLama: Embracing Enthusiasm while Learning from Crypto's Lessons
0
[removed]
2023-06-01T22:42:40
[deleted]
1970-01-01T00:00:00
0
{}
13xvqzj
false
null
t3_13xvqzj
/r/LocalLLaMA/comments/13xvqzj/celebrating_localllama_embracing_enthusiasm_while/
false
false
default
0
null
What did I do wrong? (galactica 125m)
1
2023-06-01T22:54:32
https://i.redd.it/p3fxwwktkh3b1.jpg
bot-333
i.redd.it
1970-01-01T00:00:00
0
{}
13xw12b
false
null
t3_13xw12b
/r/LocalLLaMA/comments/13xw12b/what_did_i_do_wronggalactica_125m/
false
false
default
1
null
Best architecture for Running 13B Parameter Models
4
I've been experimenting with some 13B models and trying to get the fastest performance out of them. So far I have been using EC2 (g4dn.2xlarge) to run some GPTQ models. Has anyone experimented with scaling up GPU power on these models? I'm wondering if it's worth investing in a better machine.
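A simple way to compare instance types is to time generation directly and report tokens/s. Below is a generic sketch with a placeholder checkpoint, not the poster's exact setup; swap in whatever 13B model and backend you are testing.

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-13b"  # placeholder; substitute the 13B checkpoint under test
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16,
                                             device_map="auto")  # assumes a CUDA GPU

inputs = tok("The quick brown fox", return_tensors="pt").to(model.device)
torch.cuda.synchronize()
start = time.time()
out = model.generate(**inputs, max_new_tokens=128)
torch.cuda.synchronize()
elapsed = time.time() - start

new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens/s")
```

Running the same script on each candidate instance gives an apples-to-apples number before committing to hardware.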
2023-06-01T22:58:48
https://www.reddit.com/r/LocalLLaMA/comments/13xw4k4/best_architecture_for_running_13b_parameter_models/
robopika
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xw4k4
false
null
t3_13xw4k4
/r/LocalLLaMA/comments/13xw4k4/best_architecture_for_running_13b_parameter_models/
false
false
self
4
null
What is the SOTA instruct model for code generation?
1
I'm looking for the most coherent model that will take a description of a function or component and spit it out (e.g., "Make a React component that opens a dialog, which has two named text boxes"), rather than GitHub Copilot/Ghostwriter-style autocomplete code gen. Thanks in advance!
2023-06-01T23:02:03
https://www.reddit.com/r/LocalLLaMA/comments/13xw7np/what_is_the_sota_instruct_model_for_code/
FreezeproofViola
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xw7np
false
null
t3_13xw7np
/r/LocalLLaMA/comments/13xw7np/what_is_the_sota_instruct_model_for_code/
false
false
self
1
null
The Best 8GB Macbook Pro Setup?
0
[removed]
2023-06-02T00:45:29
https://www.reddit.com/r/LocalLLaMA/comments/13xyj7e/the_best_8gb_macbook_pro_setup/
TechnologicalFreedom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xyj7e
false
null
t3_13xyj7e
/r/LocalLLaMA/comments/13xyj7e/the_best_8gb_macbook_pro_setup/
false
false
default
0
null
Found an iOS app for TestFlight that allows you to run LLMs locally
12
From the download page: “Vicuna-7B takes 4GB of RAM and RedPajama-3B takes 2.2GB to run. Considering the iOS and other running applications, we will need a recent iPhone with 6GB for Vicuna-7B or 4GB for RedPajama-3B to run the app. The application is only tested on iPhone 14 Pro Max, iPhone 14 Pro and iPhone 12 Pro.”
2023-06-02T01:39:48
https://mlc.ai/mlc-llm/#iphone
HemisphereGuide
mlc.ai
1970-01-01T00:00:00
0
{}
13xzo5i
false
null
t3_13xzo5i
/r/LocalLLaMA/comments/13xzo5i/found_an_ios_app_for_testflight_that_allows_you/
false
false
default
12
null
Best Configuration & Settings for my low-end setup? Using the text-generation-webui tool
1
[removed]
2023-06-02T01:42:12
https://www.reddit.com/r/LocalLLaMA/comments/13xzpy6/best_confguration_settings_for_my_low_end_setup/
Sad-Reflection-7995
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13xzpy6
false
null
t3_13xzpy6
/r/LocalLLaMA/comments/13xzpy6/best_confguration_settings_for_my_low_end_setup/
false
false
default
1
null
Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
92
Hi [r/LocalLLaMA/](https://www.reddit.com/r/LocalLLaMA/)! I made an OpenAI-compatible streaming API (and playground) for your 🤗 Transformers-based text generation models, including LLaMA and variants built upon it! GitHub: [https://github.com/hyperonym/basaran](https://github.com/hyperonym/basaran) https://i.redd.it/i68w224khi3b1.gif Basaran allows you to replace OpenAI's service with the latest open-source model to power your application [without modifying a single line of code](https://github.com/hyperonym/basaran/blob/master/README.md#openai-client-library).
2023-06-02T02:06:25
https://www.reddit.com/r/LocalLLaMA/comments/13y07tr/introducing_basaran_selfhosted_opensource/
peakji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13y07tr
false
null
t3_13y07tr
/r/LocalLLaMA/comments/13y07tr/introducing_basaran_selfhosted_opensource/
false
false
https://b.thumbs.redditm…EJl9Onuz_j0k.jpg
92
{'enabled': False, 'images': [{'id': 'ULhu-hhgGrIZ4_s0EyxKMLeQyzdIZzAbZqxRGaRTiZw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tSKPJVWaBxLL-Q8W_54xTZIYUTbA5h92lyN5vWzJJ44.jpg?width=108&crop=smart&auto=webp&s=c4dd62da316fd6acb6540b47413c934b4d4b4410', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tSKPJVWaBxLL-Q8W_54xTZIYUTbA5h92lyN5vWzJJ44.jpg?width=216&crop=smart&auto=webp&s=ca5a195d3099c5d790043c04dcdfc93ed1ea6787', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tSKPJVWaBxLL-Q8W_54xTZIYUTbA5h92lyN5vWzJJ44.jpg?width=320&crop=smart&auto=webp&s=4fd855ae7c4956160b562640db6deb40c5403de3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tSKPJVWaBxLL-Q8W_54xTZIYUTbA5h92lyN5vWzJJ44.jpg?width=640&crop=smart&auto=webp&s=f8aa6677b4c63f8675ede97ddf4afd24a0a80a89', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tSKPJVWaBxLL-Q8W_54xTZIYUTbA5h92lyN5vWzJJ44.jpg?width=960&crop=smart&auto=webp&s=0d3b42c3acc59b4ef40aa54ca49699629db30ac0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tSKPJVWaBxLL-Q8W_54xTZIYUTbA5h92lyN5vWzJJ44.jpg?width=1080&crop=smart&auto=webp&s=05e0e78826b882d1660428a46d6e50a81edf8916', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tSKPJVWaBxLL-Q8W_54xTZIYUTbA5h92lyN5vWzJJ44.jpg?auto=webp&s=5d059b7875a23294d618fac23459f3900017953d', 'width': 1200}, 'variants': {}}]}
Please help, low end desktop setup config optimization with simple installer | webui
3
Hey guys, this is my first post here; I seriously appreciate any and all help you can provide. My specs are quite dated: 4GB GTX 970, 16GB DDR4 RAM, i5 6600. I've been wanting to dive in for a bit, and now here I am. I used the simple installer tool found in this sub's wiki, and when prompted I selected CPU, which I'm unsure was correct. Essentially I've downloaded the WizardVicunaLM 13B uncensored model. Forgive my naivety, but I chose this based on recommendations here and opted for the GGML q4_1 version from nice guy TheBloke. I just have a few questions:

1. How can I optimize this for my system, using both my GPU and CPU? Currently it's set to only use the CPU.
2. In the webui tool, how can I best refine the parameters and settings/configuration in each tab? I apologize if this is too broad a question that requires more context; happy to answer any questions to help you help me with this.
3. When I attempt to alter information in the character settings tab of the webui tool, a progress bar loads across the field where you would enter your name or the assistant's name/context etc., and once it reaches e.g. 100/100 and has fully loaded, it keeps going. The terminal window indicates a failure error relating to a character JSON file, and I then have to reopen the tool entirely. Does this have something to do with being bound directly to my CPU, or just a missing file for character info, perhaps?
4. After going over the above, are there any further changes you would recommend to best utilize this model with my hardware (most tokens/s, or however else efficiency is measured for these incredible tools)?

Looking at the terminal window while I've tested out the barebones settings and pure CPU usage, any basic prompts elicit a response at between 0.4 and 1.2 tokens per second; it seems to vary quite a bit, tbh. If you can point me in the right direction even just for a starting point, I'd appreciate it. I've tried looking through the wiki and guides here, and I have had a look through the GitHub documentation, but it is difficult to work out which information relates to me and where exactly to find it. I hope this makes sense! Thanks so much for reading all the way through :) This community seems really passionate and I'd be keen to partake once things are up and running smoothly for me.
2023-06-02T02:33:42
https://www.reddit.com/r/LocalLLaMA/comments/13y0ru8/please_help_low_end_desktop_setup_config/
JakeGrudge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13y0ru8
false
null
t3_13y0ru8
/r/LocalLLaMA/comments/13y0ru8/please_help_low_end_desktop_setup_config/
false
false
self
3
null
Blockwise Parallel Transformer for Long Context Large Models
42
[https://www.reddit.com/r/MachineLearning/comments/13xyvgt/r_blockwise_parallel_transformer_for_long_context/](https://www.reddit.com/r/MachineLearning/comments/13xyvgt/r_blockwise_parallel_transformer_for_long_context/)

TL;DR: It is now possible to train models with up to 4x the context length (compared to memory-efficient attention implementations, e.g. FlashAttention) on the same hardware. It's honest attention and an honest transformer architecture, just a different way of organizing the compute.

[Figure: maximum context lengths (number of tokens) achieved for training, for different model sizes on different hardware](https://preview.redd.it/gurxacwjti3b1.png?width=1372&format=png&auto=webp&s=b262de9d7c5ba8311ec46e0db8e9563622832507)
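To make the "same math, different schedule" point concrete, here is a toy NumPy sketch of blockwise attention with a running softmax, the building block that BPT extends by also computing the feedforward per block. This is illustrative only, not the paper's code.

```python
import numpy as np

def blockwise_attention(q, k, v, block=128):
    """Exact softmax attention computed over key/value blocks, keeping a running
    max and normalizer so the full L x L score matrix never materializes."""
    L, d = q.shape
    out = np.zeros_like(v)
    running_max = np.full(L, -np.inf)
    running_sum = np.zeros(L)
    for s in range(0, L, block):
        scores = q @ k[s:s+block].T / np.sqrt(d)          # (L, block) slice of scores
        new_max = np.maximum(running_max, scores.max(axis=1))
        correction = np.exp(running_max - new_max)        # rescale what was accumulated so far
        p = np.exp(scores - new_max[:, None])
        out = out * correction[:, None] + p @ v[s:s+block]
        running_sum = running_sum * correction + p.sum(axis=1)
        running_max = new_max
    return out / running_sum[:, None]

# Sanity check against naive attention:
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((256, 64)) for _ in range(3))
s = q @ k.T / np.sqrt(64)
p = np.exp(s - s.max(axis=1, keepdims=True))
assert np.allclose(blockwise_attention(q, k, v), (p / p.sum(axis=1, keepdims=True)) @ v)
```

The output is bit-for-bit the same attention; only the peak memory changes, which is exactly why these methods trade nothing for the longer trainable context.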
2023-06-02T03:04:24
https://www.reddit.com/r/LocalLLaMA/comments/13y1dvr/blockwise_parallel_transformer_for_long_context/
IxinDow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13y1dvr
false
null
t3_13y1dvr
/r/LocalLLaMA/comments/13y1dvr/blockwise_parallel_transformer_for_long_context/
false
false
https://b.thumbs.redditm…WQxcPnWtez0Y.jpg
42
null
MLC LLM invented a new GPU: GeForce® RTX™ 2060
1
2023-06-02T05:39:43
https://i.redd.it/jok6x4n4lj3b1.png
cesiumdragon
i.redd.it
1970-01-01T00:00:00
0
{}
13y46n7
false
null
t3_13y46n7
/r/LocalLLaMA/comments/13y46n7/mlc_llm_invented_a_new_gpu_geforce_rtx™_2060/
false
false
default
1
null
Need Help GUYS, Private GPT?
0
[removed]
2023-06-02T06:34:47
https://www.reddit.com/r/LocalLLaMA/comments/13y53iy/need_help_guys_private_gpt/
Curious-Ninja150627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13y53iy
false
null
t3_13y53iy
/r/LocalLLaMA/comments/13y53iy/need_help_guys_private_gpt/
false
false
default
0
null
Actually good git repo chat?
9
[deleted]
2023-06-02T06:57:23
[deleted]
1970-01-01T00:00:00
0
{}
13y5gdz
false
null
t3_13y5gdz
/r/LocalLLaMA/comments/13y5gdz/actually_good_git_repo_chat/
false
false
default
9
null
Seeking Advice for a New Machine Optimized for LLMs (Large Language Models), Max Budget €3,000
16
Hey fellow Redditors! I’m seeking some guidance on purchasing a new machine optimized for working with large language models (LLMs). As an avid user of language models and AI technologies, I’ve outgrown my current setup and want to invest in a more powerful and efficient system. With a maximum budget of €3,000, I’m looking for suggestions and advice on the best specifications and features to prioritize for optimal performance with large language models. Here are some aspects I’d love your insights on:

1. Processing Power: What type of processor(s) should I consider for handling large language models effectively? Are multi-core processors or specific architectures recommended?
2. Memory and Storage: How much RAM would be ideal for working with LLMs? Should I consider upgrading to faster memory modules? Additionally, what storage capacity is recommended, and would SSDs be a good choice for quick model loading and data access?
3. Graphics Processing: Does a dedicated graphics card make a noticeable difference when working with large language models? Would investing in a powerful GPU contribute to faster training or inference times?
4. Portability vs. Desktop: Given my budget and usage requirements, would it be more beneficial to invest in a portable laptop for flexibility or a more powerful desktop for enhanced performance?
5. Cooling and Power Supply: Since working with large language models can be computationally intensive, what cooling mechanisms or power supply considerations should I keep in mind to avoid overheating or insufficient power?
6. Additional Considerations: Are there any other factors or specific models you recommend considering within this budget range?

I’m extremely grateful for any advice, personal experiences, or specific recommendations you can provide. Thank you in advance for your help!

Edit: What I want to do with it:
- run models for learning and personal use
- fine-tuning
2023-06-02T07:00:16
https://www.reddit.com/r/LocalLLaMA/comments/13y5i32/seeking_advice_for_a_new_machine_optimized_for/
_omid_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13y5i32
false
null
t3_13y5i32
/r/LocalLLaMA/comments/13y5i32/seeking_advice_for_a_new_machine_optimized_for/
false
false
self
16
null
Best CPU Based Models & GPU recommendation
10
Hello wonderful community, things indeed are changing daily. I am trying to connect the dots.

**Let's say you have:**

**CPU**: 12 x Core
**RAM**: 64GB
**GPU**: 🥔 Nvidia Quadro 4GB

* How can we know which models support CPU-only inference?
* Which one does it best?
* How do we know how much RAM a model needs?
* What entry-level GPU do you recommend for the longer term?
* Are there any known SBCs which could support a model?

Your input is appreciated!
2023-06-02T07:29:07
https://www.reddit.com/r/LocalLLaMA/comments/13y5yrg/best_cpu_based_models_gpu_recommendation/
polytect
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13y5yrg
false
null
t3_13y5yrg
/r/LocalLLaMA/comments/13y5yrg/best_cpu_based_models_gpu_recommendation/
false
false
self
10
null
Guanaco-65B, How to cool passive A40?
4
I recently acquired an NVIDIA A40 to be able to run larger models. Does anyone have a suggestion on how to cool these cards? I have it sitting in a mid-tower (ATX) case, vertically aligned. Any ideas?

https://preview.redd.it/73hg4mruwl3b1.png?width=2304&format=png&auto=webp&s=3be48b998d314349a25aa04efceaf068d429166e
2023-06-02T13:29:25
https://www.reddit.com/r/LocalLLaMA/comments/13ycufo/guanaco65b_how_to_cool_passive_a40/
muchCode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ycufo
false
null
t3_13ycufo
/r/LocalLLaMA/comments/13ycufo/guanaco65b_how_to_cool_passive_a40/
false
false
https://b.thumbs.redditm…UDCDMcIPZ4MI.jpg
4
null
Is it possible , PrivateGPT ?
0
[removed]
2023-06-02T13:31:52
https://www.reddit.com/r/LocalLLaMA/comments/13ycwrw/is_it_possible_privategpt/
Curious-Ninja150627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ycwrw
false
null
t3_13ycwrw
/r/LocalLLaMA/comments/13ycwrw/is_it_possible_privategpt/
false
false
default
0
null
Training BERT from scratch on an 8GB 3060
20
2023-06-02T14:22:29
https://sidsite.com/posts/bert-from-scratch/
kryptkpr
sidsite.com
1970-01-01T00:00:00
0
{}
13ye620
false
null
t3_13ye620
/r/LocalLLaMA/comments/13ye620/training_bert_from_scratch_on_an_8gb_3060/
false
false
default
20
null
LLaMA for language translation? Or alternatives?
11
What is the best open LLM out there for language translation? Specifically, English to Chinese, Japanese, German, French, Spanish, Arabic... (the most popular languages). GPT-3.5 performs quite well on all of these, but unfortunately it's not open-source. :( Is there any alternative, or a list of other models I could try? Thanks in advance!
2023-06-02T14:32:53
https://www.reddit.com/r/LocalLLaMA/comments/13yef4q/llama_for_language_translation_or_alternatives/
22_YEAR_OLD_LOOMER
self.LocalLLaMA
2023-06-02T15:29:02
0
{}
13yef4q
false
null
t3_13yef4q
/r/LocalLLaMA/comments/13yef4q/llama_for_language_translation_or_alternatives/
false
false
self
11
null
New quantization method AWQ outperforms GPTQ in 4-bit and 3-bit with 1.45x speedup and works with multimodal LLMs
334
Paper: [https://arxiv.org/abs/2306.00978](https://arxiv.org/abs/2306.00978)
GitHub: [https://github.com/mit-han-lab/llm-awq](https://github.com/mit-han-lab/llm-awq)

Some excerpts:

> In this paper, we propose Activation-aware Weight Quantization (AWQ), a hardware-friendly low-bit weight-only quantization method for LLMs. Our method is based on the observation that weights are not equally important for LLMs’ performance. There is a small fraction (0.1%-1%) of salient weights; skipping the quantization of these salient weights will significantly reduce the quantization loss.

> Unlike GPTQ which formulates linear layers as matrix-vector (MV) products, we instead model these layers as matrix-matrix (MM) multiplications. We also outperform a recent [Triton implementation](https://github.com/qwopqwop200/GPTQ-for-LLaMa) for GPTQ by **2.4×** since it relies on a high-level language and forgoes opportunities for low-level optimizations.

> AWQ outperforms round-to-nearest (RTN) and GPTQ across different model scales (7B-65B), task types (common sense vs. domain-specific), and test settings (zero-shot vs. in-context learning). It achieves better WikiText-2 perplexity compared to GPTQ on smaller OPT models and on-par results on larger ones, demonstrating the generality to different model sizes and families. AWQ consistently improves the INT3-g128 quantized Vicuna models over RTN and GPTQ under both scales (7B and 13B), demonstrating the generability to instruction-tuned models.

> Remarkably, despite utilizing an additional bit per weight, AWQ achieves an average speedup of 1.45×, a maximum speedup of 1.7× over GPTQ, and 1.85× speedup over the cuBLAS FP16 implementation.

LLaMA 7B comparison: https://preview.redd.it/jfuxwpo88m3b1.png?width=598&format=png&auto=webp&s=3165ece8d2fd5ed82feebc6f01bb18d29739fb69

LLaMA quantization results across various scales: https://preview.redd.it/ij05iuxa8m3b1.png?width=575&format=png&auto=webp&s=ee114e7368af19ba1ccb2af506883debac1768a9
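The core observation is easy to reproduce in a toy setting: scale up the few weight channels that see the largest activations before round-to-nearest quantization, then fold the scale back out. The sketch below is an illustration of that intuition only; the real AWQ searches for the per-channel scales and ships optimized kernels.

```python
import numpy as np

def rtn(W, n_bits=4):
    """Plain per-output-channel round-to-nearest quantization."""
    qmax = 2 ** (n_bits - 1) - 1
    step = np.abs(W).max(axis=1, keepdims=True) / qmax
    return np.round(W / step) * step

def awq_like(W, X, n_bits=4, s=2.0, frac=0.01):
    """Scale the most activation-salient input channels up before quantizing,
    then divide the scale back out, so those weights keep more effective precision."""
    saliency = np.abs(X).mean(axis=0)                      # per-input-channel activation magnitude
    scales = np.where(saliency >= np.quantile(saliency, 1 - frac), s, 1.0)
    return rtn(W * scales, n_bits) / scales

# Toy demo: a few input channels carry much larger activations, as observed in real LLMs.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)    # (out_features, in_features)
X = rng.standard_normal((64, 512)).astype(np.float32)
X[:, :5] *= 30
for name, Wq in [("RTN", rtn(W)), ("AWQ-like", awq_like(W, X))]:
    err = np.linalg.norm(X @ Wq.T - X @ W.T) / np.linalg.norm(X @ W.T)
    print(f"{name}: relative output error {err:.4f}")
```

With activations this skewed, the output error is dominated by the salient channels, so protecting roughly 1% of the weights with a modest scale should noticeably lower the error relative to plain RTN.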
2023-06-02T14:35:32
https://www.reddit.com/r/LocalLLaMA/comments/13yehfn/new_quantization_method_awq_outperforms_gptq_in/
Spiritual-Roll3062
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13yehfn
false
null
t3_13yehfn
/r/LocalLLaMA/comments/13yehfn/new_quantization_method_awq_outperforms_gptq_in/
false
false
https://a.thumbs.redditm…V14JREnmlfY4.jpg
334
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Which models do you recommend for an AI school assistant in EU?
6
Hi, I'm a teacher in a European school. Currently my task is to research and prototype an AI learning assistant which walks students individually through a curriculum module for learning HTML. Are there any go-to models or service providers around yet? I see there are many OSS LLMs like LLaMA and such, and there are hosted model services such as MosaicML. A constraint I have is that the model should be self-hostable, or hosted by GDPR-compliant service providers. In terms of inference capabilities: English-language question answering, code generation, and conversation are required. In terms of training: yes please! Larger context capacities / vector DBs would be nice to have; it would be great for the model to be able to keep track of, and have some intuition about, the student's progress. What would you do? Which models/services would you try? Which DBs? Thanks for your insights! (BTW, I'm a senior dev with some ML experience (I've played with GPT-2 and OpenAI's APIs), and I'm fine with solutions that require a lot of coding and debugging if necessary.)
2023-06-02T14:48:42
https://www.reddit.com/r/LocalLLaMA/comments/13yesx9/which_models_do_you_recommend_for_an_ai_school/
the_embassy_official
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13yesx9
false
null
t3_13yesx9
/r/LocalLLaMA/comments/13yesx9/which_models_do_you_recommend_for_an_ai_school/
false
false
self
6
null
Manticore-13B-Chat-Pyg-Guanaco-GGML-q4_0 - America's Next Top Model!
92
No, but seriously, wtf? Can you guys try this: https://huggingface.co/mindrage/Manticore-13B-Chat-Pyg-Guanaco-GGML-q4_0

How did this 13B, and not even q8_0, beat most 30Bs in my spreadsheet? https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit#gid=2011456595

My test settings:

- Kobold 1.27, Instruct Mode
- Prompting format: ### Instruction: / ### Response:
- --usemirostat 2 0.1 0.1 (in the .bat file when launching koboldcpp)
- Temperature 0.4

I know the prompt format is questionable, but it seems to have many possible ones. Once I can get my hands on q8_0, I'll test USER: / ASSISTANT: and maybe others to see if it makes any difference. I don't know if this is a fluke, but I'm wondering if /u/The-Bloke could GGML it using all the quantizations? I'd love to test the q8_0 version. I put a message on the model's discussion page in case the original author would be so kind as to add the other quants as well.

If this is real life, it's the most performant 13B by far. It's verbose, similarly to guanaco (which makes sense), but has improved logic/reasoning (also makes sense). Unlike some other merges, it seems to have taken the best of the merged models rather than gone down in ability.
2023-06-02T15:08:12
https://www.reddit.com/r/LocalLLaMA/comments/13yfask/manticore13bchatpygguanacoggmlq4_0_americas_next/
YearZero
self.LocalLLaMA
2023-06-02T15:20:32
0
{}
13yfask
false
null
t3_13yfask
/r/LocalLLaMA/comments/13yfask/manticore13bchatpygguanacoggmlq4_0_americas_next/
false
false
self
92
{'enabled': False, 'images': [{'id': 'lEm-tjB8J9gt1qmj9MgEacXbDjsDUlJEvPwFaDXdAbA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yFcPdI7ifPfEiuv0bg0sDV3LWDaaxq7RaFXcEj-4Y1k.jpg?width=108&crop=smart&auto=webp&s=33f4f12ee3c140a15e0c49eab0f1822267743fc0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yFcPdI7ifPfEiuv0bg0sDV3LWDaaxq7RaFXcEj-4Y1k.jpg?width=216&crop=smart&auto=webp&s=b35f48d85c6002526b0147fef46297c0821bda7d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yFcPdI7ifPfEiuv0bg0sDV3LWDaaxq7RaFXcEj-4Y1k.jpg?width=320&crop=smart&auto=webp&s=38ca72db50a0cac186340dd27cec42db06d86e4c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yFcPdI7ifPfEiuv0bg0sDV3LWDaaxq7RaFXcEj-4Y1k.jpg?width=640&crop=smart&auto=webp&s=b9f6bdb64a83be1c387917a5f7c916ca6c4b0e65', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yFcPdI7ifPfEiuv0bg0sDV3LWDaaxq7RaFXcEj-4Y1k.jpg?width=960&crop=smart&auto=webp&s=c00cced6cfb5dea164130fcc6cdc73c5853ecd70', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yFcPdI7ifPfEiuv0bg0sDV3LWDaaxq7RaFXcEj-4Y1k.jpg?width=1080&crop=smart&auto=webp&s=08e19cc7016577f5fe68b659c2952be9ebf34d43', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yFcPdI7ifPfEiuv0bg0sDV3LWDaaxq7RaFXcEj-4Y1k.jpg?auto=webp&s=dc40a38cc27882e6e47bdf07c2363d16034f88ad', 'width': 1200}, 'variants': {}}]}
Incorporating llama into an app [early development insight]
21
2023-06-02T15:12:11
https://v.redd.it/typ10qu8fm3b1
frapastique
v.redd.it
1970-01-01T00:00:00
0
{}
13yfehx
false
{'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/typ10qu8fm3b1/DASHPlaylist.mpd?a=1695281603%2CNzMwYzFkYzFhZGVjMDgxY2QxYTYwMmNmMzAwNWYxOTE0NzYyZjViZWM3MDA3YWU1NTA3NWNhMTMwOTkyNmZlNg%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/typ10qu8fm3b1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/typ10qu8fm3b1/HLSPlaylist.m3u8?a=1695281603%2CYTdiNDY5ZmY4MGM1NDkzODUxM2VlNmQ4ZDUwNTQ0NmI2NzFiNTNlMmVmMWJmMThlNzA5NTBmYzdhNTMwNDU2Nw%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/typ10qu8fm3b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 484}}
t3_13yfehx
/r/LocalLLaMA/comments/13yfehx/incorporating_llama_into_an_app_early_development/
false
false
https://b.thumbs.redditm…VcOg5LZwvH7A.jpg
21
{'enabled': False, 'images': [{'id': 'dPxtyGfKWXjETW--lejpfdIs6pziQApLyeueOKmWDac', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/vWL_mXeVZvIS1R3AjE7IAo6VtdnqXss7xpPHtqkSt2o.png?width=108&crop=smart&format=pjpg&auto=webp&s=3f4a03bfe57921b3a28ebd8e18407303576a3fa1', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/vWL_mXeVZvIS1R3AjE7IAo6VtdnqXss7xpPHtqkSt2o.png?width=216&crop=smart&format=pjpg&auto=webp&s=402de45c2b932a3e2421dc3b73f425210eb50b19', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/vWL_mXeVZvIS1R3AjE7IAo6VtdnqXss7xpPHtqkSt2o.png?width=320&crop=smart&format=pjpg&auto=webp&s=bb83dbea8ac9752b671bf2298483072db7bb3cb1', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/vWL_mXeVZvIS1R3AjE7IAo6VtdnqXss7xpPHtqkSt2o.png?width=640&crop=smart&format=pjpg&auto=webp&s=13393d4bb60a1d9bbd73fb543c856fbb9756915d', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/vWL_mXeVZvIS1R3AjE7IAo6VtdnqXss7xpPHtqkSt2o.png?width=960&crop=smart&format=pjpg&auto=webp&s=8218ca905d7edc394b04895368a99d470a770ef9', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/vWL_mXeVZvIS1R3AjE7IAo6VtdnqXss7xpPHtqkSt2o.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9286a1d67787ebf9d25cf4b60f0ea83ae4bc0189', 'width': 1080}], 'source': {'height': 2408, 'url': 'https://external-preview.redd.it/vWL_mXeVZvIS1R3AjE7IAo6VtdnqXss7xpPHtqkSt2o.png?format=pjpg&auto=webp&s=e778ee45f3bab7899b539c95a389f94d8c7a512d', 'width': 1080}, 'variants': {}}]}
What kind of results are you seeing with semantic search? Any model recommendations?
7
I've set up the Langchain similarity_search example with my own text, and I've tried a few different models including wizard-vicuna-13B.ggmlv3.q4_0.bin. The results are terrible/unusable: it's unable to provide meaningful similarity scores for simple documents (3 sentences about a specific topic). I see there are some bugs related to this ([https://github.com/hwchase17/langchain/issues/4517](https://github.com/hwchase17/langchain/issues/4517)) which are reflected in the current version: the scores are reported as massive numbers (10k+), rather than similarity scores from 0 to 1. Maybe there's a technical issue, but it gets some of it right, some of the time.

I'm just curious whether anyone is having decent results with semantic search, whether you can recommend a model, or whether this is just the state of things. I'd be better off using word2vec/bag-of-words at this point, so I'm wondering if it's just an issue with my model/Langchain.

Maybe I should include an example. Query text to embed: "What was the weather like today for Johnny?" Texts:

- Johnny walked through the rain and snow to school.
- Johnny likes apples.
- Johnny rides horses.
- Johnny can do a backflip

The result is that the "rain and snow" embedding comes in middle of the pack in terms of similarity, with the backflip coming in first. For this example, I calculated the similarity using numpy to make sure there's nothing wrong with Chroma/Langchain.
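Two likely culprits, for what it's worth: generation checkpoints like wizard-vicuna GGML were never trained to produce sentence embeddings, and Chroma's default score is a raw (unbounded) L2 distance rather than a 0-1 similarity, which would explain the 10k+ numbers. A dedicated embedding model is usually the fix; here is a quick sanity check with sentence-transformers (the model choice is just a common default, not a specific recommendation from the post):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, embedding-specific model

query = "What was the weather like today for Johnny?"
docs = [
    "Johnny walked through the rain and snow to school.",
    "Johnny likes apples.",
    "Johnny rides horses.",
    "Johnny can do a backflip",
]

q_emb = model.encode(query, convert_to_tensor=True)
d_emb = model.encode(docs, convert_to_tensor=True)

# Print cosine similarities, best match first.
for doc, score in sorted(zip(docs, util.cos_sim(q_emb, d_emb)[0]), key=lambda t: -t[1]):
    print(f"{float(score):.3f}  {doc}")
```

If a model like this ranks "rain and snow" first while the 13B chat model does not, the problem is the choice of embedding model, not Langchain.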
2023-06-02T15:33:13
https://www.reddit.com/r/LocalLLaMA/comments/13yfxzk/what_kind_of_results_are_you_seeing_with_semantic/
rhinomansufferer
self.LocalLLaMA
2023-06-02T15:56:10
0
{}
13yfxzk
false
null
t3_13yfxzk
/r/LocalLLaMA/comments/13yfxzk/what_kind_of_results_are_you_seeing_with_semantic/
false
false
self
7
{'enabled': False, 'images': [{'id': 'JkuGKvIKQ-R8TvHZovrrI6-YNuELMZyo6OdE5W_o7_w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/896cbqV6KZ-AwFxyL8fkoGH5rLzc2tX0K-Vm4IpUIIA.jpg?width=108&crop=smart&auto=webp&s=bc301aa3e4b0d6f5ca2ac9e5ec24a931f5cc4eb4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/896cbqV6KZ-AwFxyL8fkoGH5rLzc2tX0K-Vm4IpUIIA.jpg?width=216&crop=smart&auto=webp&s=90e488b6417e0988807c0871813a75ba796879e3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/896cbqV6KZ-AwFxyL8fkoGH5rLzc2tX0K-Vm4IpUIIA.jpg?width=320&crop=smart&auto=webp&s=d89a8b83df94a008b64e7d0e08e3da2345946123', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/896cbqV6KZ-AwFxyL8fkoGH5rLzc2tX0K-Vm4IpUIIA.jpg?width=640&crop=smart&auto=webp&s=9a47cbc9183c81d501c51919d00ae6d138941c0f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/896cbqV6KZ-AwFxyL8fkoGH5rLzc2tX0K-Vm4IpUIIA.jpg?width=960&crop=smart&auto=webp&s=7e793975110d83bcbbb58b94395edbfe98b309da', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/896cbqV6KZ-AwFxyL8fkoGH5rLzc2tX0K-Vm4IpUIIA.jpg?width=1080&crop=smart&auto=webp&s=aa9dbb0fe31448fdc1c803d8a0b65c1d18a5a61e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/896cbqV6KZ-AwFxyL8fkoGH5rLzc2tX0K-Vm4IpUIIA.jpg?auto=webp&s=4717ed993e1caf762c7db5a5f9fdd0a21dc0c651', 'width': 1200}, 'variants': {}}]}
Anybody tried Lion: Adversarial Distillation of Closed-Source Large Language Model?
11
I accidentally found the Lion model: [https://github.com/YJiangcm/Lion](https://github.com/YJiangcm/Lion), [https://huggingface.co/YuxinJiang/Lion](https://huggingface.co/YuxinJiang/Lion), demo: [https://b79eaa18f7e179e9.gradio.app/](https://b79eaa18f7e179e9.gradio.app/)

Bold claims on the GitHub:

https://preview.redd.it/2cp1nj0tmm3b1.png?width=2352&format=png&auto=webp&s=fb1722d6b874f83ea4dac7cc5afbf56176416fdb

After looking in the bitsandbytes GitHub, I wanted to understand what [Added PagedLion and bf16 Lion.](https://github.com/TimDettmers/bitsandbytes/commit/1b8772a8f33fdb47df0c849302cbb7e703571b8c) means :) (Note that that commit concerns the Lion *optimizer*, which is unrelated to this distillation model beyond the name.) Does anybody have experience with the model?
2023-06-02T15:54:41
https://www.reddit.com/r/LocalLLaMA/comments/13ygi0f/anybody_tried_lion_adversarial_distillation_of/
eggandbacon_0056
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ygi0f
false
null
t3_13ygi0f
/r/LocalLLaMA/comments/13ygi0f/anybody_tried_lion_adversarial_distillation_of/
false
false
https://a.thumbs.redditm…2hSOrn1bbCH4.jpg
11
{'enabled': False, 'images': [{'id': 'aSiaUZ6u56xhUIX5sMo0Ofvm-4sQWSJuXQFc--hE_DI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TKE3NazUGT6hcal80aKeiXUGv6S53bXx_YE8zdn2tsw.jpg?width=108&crop=smart&auto=webp&s=422c98a994eab6af15bcb5684bb0ed4a8e45a5c5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TKE3NazUGT6hcal80aKeiXUGv6S53bXx_YE8zdn2tsw.jpg?width=216&crop=smart&auto=webp&s=dcc1bb653a1b906f446c444981d0e6afd3795ebc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TKE3NazUGT6hcal80aKeiXUGv6S53bXx_YE8zdn2tsw.jpg?width=320&crop=smart&auto=webp&s=4e860b4893f4a33b2c9bc6dd96d6d7a3e30e1c00', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TKE3NazUGT6hcal80aKeiXUGv6S53bXx_YE8zdn2tsw.jpg?width=640&crop=smart&auto=webp&s=3dccdc7f1d32cdc59c6fa5b51a15e2d5c0c1326d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TKE3NazUGT6hcal80aKeiXUGv6S53bXx_YE8zdn2tsw.jpg?width=960&crop=smart&auto=webp&s=9f673e8c2bd48d821d407517f055640f9d1dc4c6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TKE3NazUGT6hcal80aKeiXUGv6S53bXx_YE8zdn2tsw.jpg?width=1080&crop=smart&auto=webp&s=9f79c01d62d22f4823a094b0df4263332c1a5de3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TKE3NazUGT6hcal80aKeiXUGv6S53bXx_YE8zdn2tsw.jpg?auto=webp&s=f6953ffa1ddd486d4463307cc45552494c3345b7', 'width': 1200}, 'variants': {}}]}
Implement Generative Agent with Local LLM, Guidance, and Langchain (Full Features)
77
The source code: [https://github.com/QuangBK/generativeAgent_LLM](https://github.com/QuangBK/generativeAgent_LLM)

After making a ReAct agent work well with Guidance, I decided to implement Generative Agent from the "Generative Agents: Interactive Simulacra of Human Behavior" paper. I found some implementations on GitHub before, but they are all simple versions (not supporting full features). There is a [Langchain Generative Agent](https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html), but I cannot run it with my local Wizard-13B (regex errors) and it also lacks some features (making plans, normalizing retrieval scores, and making a full summary agent). So, I tried to make the closest version possible. My blog post with an explanation is [here](https://medium.com/@gartist/implement-generative-agent-with-local-llm-guidance-and-langchain-full-features-fa57655f3de1).

Supported features:

* Works with local LLMs (Wizard, Vicuna, etc.)
* Memory and retrieval
* Reflection
* Planning (needs improvement)
* Reacting and re-planning
* Dialogue generation (needs improvement)
* Agent summary
* Interview

The code is easy to plug into virtual environments to play with. Even though I tried to follow the original paper, the code may have some differences/bugs. Hope to get your comments for improvement :)
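For anyone comparing implementations, the paper's memory retrieval rule is just a weighted sum of three normalized signals. A compact sketch follows; the memory field names and the decay constant are assumptions for illustration, not necessarily what the repo above uses.

```python
import numpy as np

def _minmax(x):
    """Min-max normalize a list of scores to [0, 1]."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

def retrieval_scores(memories, query_emb, now, decay=0.995, alphas=(1.0, 1.0, 1.0)):
    """Generative-Agents-style retrieval: a weighted sum of recency, importance,
    and relevance, each min-max normalized to [0, 1]."""
    hours = [(now - m["last_access"]).total_seconds() / 3600 for m in memories]
    recency = [decay ** h for h in hours]                       # exponential decay since last access
    importance = [m["importance"] for m in memories]            # LLM-rated 1-10 at write time
    relevance = [float(np.dot(m["embedding"], query_emb)
                       / (np.linalg.norm(m["embedding"]) * np.linalg.norm(query_emb)))
                 for m in memories]                             # cosine similarity to the query
    a_rec, a_imp, a_rel = alphas
    return a_rec * _minmax(recency) + a_imp * _minmax(importance) + a_rel * _minmax(relevance)

# Usage sketch:
# scores = retrieval_scores(memories, query_emb, datetime.now())
# top_k = [memories[i] for i in np.argsort(scores)[::-1][:5]]
```

The normalization step is one of the features the simpler reimplementations tend to skip, which is why their retrieval can be dominated by a single signal.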
2023-06-02T16:07:58
https://www.reddit.com/r/LocalLLaMA/comments/13ygukg/implement_generative_agent_with_local_llm/
Unhappy-Reaction2054
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ygukg
false
null
t3_13ygukg
/r/LocalLLaMA/comments/13ygukg/implement_generative_agent_with_local_llm/
false
false
self
77
{'enabled': False, 'images': [{'id': '413Uzj91iJYORaUfYOPQAylLeg-hQlrcGMV6fwEwttM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CfCNr8TlZMF7Hl-9sOdZ1dsHpuCLlgwGyIUC03EAuvQ.jpg?width=108&crop=smart&auto=webp&s=fbe3bf9d09acea66aa671d3cf7076bb01072ebf6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CfCNr8TlZMF7Hl-9sOdZ1dsHpuCLlgwGyIUC03EAuvQ.jpg?width=216&crop=smart&auto=webp&s=d23728816e83559c1d7c5d3db1388e151f103ef6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CfCNr8TlZMF7Hl-9sOdZ1dsHpuCLlgwGyIUC03EAuvQ.jpg?width=320&crop=smart&auto=webp&s=f4f2ac99e35ebbb0247ffa6ceee63565cc635fdc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CfCNr8TlZMF7Hl-9sOdZ1dsHpuCLlgwGyIUC03EAuvQ.jpg?width=640&crop=smart&auto=webp&s=647c9d1264244c32ae23f233db7b20073dab8d53', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CfCNr8TlZMF7Hl-9sOdZ1dsHpuCLlgwGyIUC03EAuvQ.jpg?width=960&crop=smart&auto=webp&s=6359489f473961ba0dc053656f43b09ac6f6ad1a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CfCNr8TlZMF7Hl-9sOdZ1dsHpuCLlgwGyIUC03EAuvQ.jpg?width=1080&crop=smart&auto=webp&s=d60aad27d995b750cb31bbdcae510668c0c6c52f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CfCNr8TlZMF7Hl-9sOdZ1dsHpuCLlgwGyIUC03EAuvQ.jpg?auto=webp&s=7d150b266392520a1161a9fd41bfb8b077e4ac6c', 'width': 1200}, 'variants': {}}]}
Is there an api for hugging face LLMs
3
Hey there, I don't know if this is the appropriate subreddit to post in. I'm new to this whole open-source LLM field, and I was wondering if Hugging Face or any other platform offers an API to use the LLMs hosted there, like the OpenAI API, because I don't even remotely have the necessary hardware to run them locally. And if so, is it cheaper than the GPT-3.5 API? Thanks 🙏
2023-06-02T17:07:15
https://www.reddit.com/r/LocalLLaMA/comments/13yieyt/is_there_an_api_for_hugging_face_llms/
dude_dz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13yieyt
false
null
t3_13yieyt
/r/LocalLLaMA/comments/13yieyt/is_there_an_api_for_hugging_face_llms/
false
false
self
3
null
Favorite models and prompts for making Stable Diffusion prompts?
6
I have been experimenting with WizardLM 30B (but I run out of VRAM a lot due to the context window) and 13B, but the results have been mixed when trying to create a character that can interpret instructions for making Stable Diffusion prompts. So far I've been trying variations of: """ Pretend you are an expert on generating prompts for AI text-to-image synthesis. You will follow the instructions of the user on their ideas to generate prompts, here are some examples: \- prompt1 \- prompt2 \- prompt3 Be visual and specific. If it's a photograph, include information about lens, aperture, lighting etc. """ But it doesn't listen very well. For one, the few-shot examples tend to pollute its ideas a lot. It also goes off the rails into other areas and ideas really quickly and can't keep on track. For instance, if I say "Give me ideas for pictures of Tom Cruise", it will do one with Tom Cruise, then the other two with Idris Elba and Tom Hanks or whatever. I also can't figure out exactly how to get it to inject a lot of "cheat words" like 8k, DSLR, trending on artstation in the way SD users usually do (longer prompts). It also seems to really impose certain ideas and be repetitive even when I cranked up the temp. Still, I've had a few successes -- one in particular that was fun: I asked it to both play a character and generate prompts at the same time. The SD prompts were underwhelming compared to what we're accustomed to in SD-land, but I could see a lot of potential.
2023-06-02T17:53:19
https://www.reddit.com/r/LocalLLaMA/comments/13yjlw5/favorite_models_and_prompts_for_making_stable/
EarthquakeBass
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13yjlw5
false
null
t3_13yjlw5
/r/LocalLLaMA/comments/13yjlw5/favorite_models_and_prompts_for_making_stable/
false
false
self
6
null
A helpful comparison I found
1
2023-06-02T18:12:47
https://www.salkantaytrekmachu.com/en/travel-blog/llama-vs-alpaca-vs-vicuna-and-guanaco
Mediocre_Comment_368
salkantaytrekmachu.com
1970-01-01T00:00:00
0
{}
13yk442
false
null
t3_13yk442
/r/LocalLLaMA/comments/13yk442/a_helpful_comparison_i_found/
false
false
default
1
null
Whisper.cpp
2
[removed]
2023-06-02T18:15:33
https://www.reddit.com/r/LocalLLaMA/comments/13yk6r4/whispercpp/
_omid_
self.LocalLLaMA
2023-06-02T18:31:11
0
{}
13yk6r4
false
null
t3_13yk6r4
/r/LocalLLaMA/comments/13yk6r4/whispercpp/
false
false
default
2
null
The Curse of Recursion: Training on Generated Data Makes Models Forget
41
[https://huggingface.co/papers/2305.17493](https://huggingface.co/papers/2305.17493)

> What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear.

Basically, too much synthetic data causes _model collapse_ and makes the models more prone to just focusing on probable results and less likely to produce interesting but rare results. After a few generations, the models become more and more repetitive and less capable. This is inevitable, even with ideal training methods.

What does that mean for the open-source LLM community? The first things I can think of are that we should be doing more training on top of base models like LLaMA, and that we should be putting together more datasets of human data (both human-written responses and human/AI conversations). Is there anything else we could be doing?
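The tail-loss effect is easy to reproduce in a toy setting. This is only a hedged illustration of the paper's argument, not their experiment: repeatedly fit a Gaussian to a finite sample drawn from the previous generation's fit, and the variance (i.e. the tails) tends to shrink away.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 1.0   # generation 0: the "human" data distribution
    n = 50                 # a small finite sample per generation exaggerates the effect

    for gen in range(201):
        samples = rng.normal(mu, sigma, n)         # "train" on the previous model's output
        mu, sigma = samples.mean(), samples.std()  # the new "model" is just this fit
        if gen % 40 == 0:
            print(f"generation {gen:3d}: sigma = {sigma:.3f}")
    # Across runs, sigma takes a downward-biased random walk toward 0:
    # the rare tail events of the original distribution vanish first.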
2023-06-02T19:40:20
https://www.reddit.com/r/LocalLLaMA/comments/13ymov8/the_curse_of_recursion_training_on_generated_data/
AutomataManifold
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ymov8
false
null
t3_13ymov8
/r/LocalLLaMA/comments/13ymov8/the_curse_of_recursion_training_on_generated_data/
false
false
self
41
{'enabled': False, 'images': [{'id': 'yS-II-gkr832EGDXHL6d0I532Ke7FNxRNlN2uQjsQ4w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EDd9me0Ws0OoWHGGF9KcAyljEEx1s0Ial6Nn6nuPEbA.jpg?width=108&crop=smart&auto=webp&s=e5b70c023ca473415222020d568617d36bfccc89', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EDd9me0Ws0OoWHGGF9KcAyljEEx1s0Ial6Nn6nuPEbA.jpg?width=216&crop=smart&auto=webp&s=a1d1ef1f55dda5beb3890d9f45386167063821a2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EDd9me0Ws0OoWHGGF9KcAyljEEx1s0Ial6Nn6nuPEbA.jpg?width=320&crop=smart&auto=webp&s=fca68a101f843efc5a57c5a72a87b82d80fa07fe', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EDd9me0Ws0OoWHGGF9KcAyljEEx1s0Ial6Nn6nuPEbA.jpg?width=640&crop=smart&auto=webp&s=56014a41de6edd6c4067513eee860bc21e8eef49', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EDd9me0Ws0OoWHGGF9KcAyljEEx1s0Ial6Nn6nuPEbA.jpg?width=960&crop=smart&auto=webp&s=63151488b33779132d1fbf586188b6ca46bebbd3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EDd9me0Ws0OoWHGGF9KcAyljEEx1s0Ial6Nn6nuPEbA.jpg?width=1080&crop=smart&auto=webp&s=b74b4dae941aad035079341a3dbf8af8ba9027b0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EDd9me0Ws0OoWHGGF9KcAyljEEx1s0Ial6Nn6nuPEbA.jpg?auto=webp&s=8ba952ff2b1ee1a1f700ee52a9b78909a49ae176', 'width': 1200}, 'variants': {}}]}
Repos and tutorials for a full finetune (not LoRA)
8
Hi again, looking to learn more about the full finetunning process and can't seem to get the right point. I have experimented with LoRA finetunning and although great i'm trying to compare and constract the differences between the two beyond what a few whispers say here and there. Any repos documentations on how to do this using llama.cpp and any of the models out there would be very appreciative.
2023-06-02T19:42:52
https://www.reddit.com/r/LocalLLaMA/comments/13ymrdh/repos_and_tutorials_for_a_full_finetune_not_lora/
orangeatom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13ymrdh
false
null
t3_13ymrdh
/r/LocalLLaMA/comments/13ymrdh/repos_and_tutorials_for_a_full_finetune_not_lora/
false
false
self
8
null
GPU Problem running manticore 13b on koboldcpp
1
[removed]
2023-06-02T20:32:18
[deleted]
1970-01-01T00:00:00
0
{}
13yo6q2
false
null
t3_13yo6q2
/r/LocalLLaMA/comments/13yo6q2/gpu_problem_running_manticore_13b_on_koboldcpp/
false
false
default
1
null
Q: Best step-by-step guides to finetuning MPT / Falcon models
13
I've found u/faldore's post on [finetuning WizardLM](https://erichartford.com/uncensored-models#heading-lets-get-down-to-business-uncensoring-wizardlm) on Llama very helpful. I'm curious if there are other step-by-step best practice guides / scripts / discords out there for finetuning Falcon or MPT? Is [Llama-X's train script](https://github.com/AetherCortex/Llama-X#usage) generalizable to these other models? Thanks
2023-06-02T20:42:38
https://www.reddit.com/r/LocalLLaMA/comments/13yohs2/q_best_stepbystep_guides_to_finetuning_mpt_falcon/
peakfish
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13yohs2
false
null
t3_13yohs2
/r/LocalLLaMA/comments/13yohs2/q_best_stepbystep_guides_to_finetuning_mpt_falcon/
false
false
self
13
{'enabled': False, 'images': [{'id': 'WFmw_IqbCMxC5TS9tSA47Pd_31AlpxTaJyAIcZxVjpo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?width=108&crop=smart&auto=webp&s=673e0261a4ce3e2d0a2ce43c3a573218551c26e8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?width=216&crop=smart&auto=webp&s=64609abbb88364f2b659da6aa9e6f0d8c08951fc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?width=320&crop=smart&auto=webp&s=1fb5be739bc16580845772c4adc6aa5d61a36794', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?width=640&crop=smart&auto=webp&s=30946a43c518b012cd2de721d34e112667837ebd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?width=960&crop=smart&auto=webp&s=72f9fa8e0d14c756aaa09e07e5d2507666c18594', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?width=1080&crop=smart&auto=webp&s=eeaa4c9e4912b845b41599c86ffe999160ac0c73', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/r8blyjeT-tQk_JBraLsXAWI69FKM0k851z0NB4nowI8.jpg?auto=webp&s=27f986509b4d6ea1e91c6722852a86ced16dd1c7', 'width': 1200}, 'variants': {}}]}
Dedicated server build help/questions
6
Hi, I'd like to build myself a dedicated server exclusively for running LLMs. If you've done this at all, a link to PCPartPicker or any advice would be highly appreciated. I've built my own PC, but never built a server before; I don't know if r/buildapc would be more appropriate for this post or not. I'm hoping to be able to run current 30/40B-param models with decent generation speed (at least 5 tokens/s). Currently thinking of dual used 3090s for GPUs, but unsure what other components to get (such as CPU, and speed/amount of RAM).
2023-06-02T21:35:05
https://www.reddit.com/r/LocalLLaMA/comments/13yq3io/dedicated_server_build_helpquestions/
Stupid_and_confused
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13yq3io
false
null
t3_13yq3io
/r/LocalLLaMA/comments/13yq3io/dedicated_server_build_helpquestions/
false
false
self
6
null
Why is linebreak not in LLaMA tokenizer vocabulary?
1
[deleted]
2023-06-02T21:56:21
[deleted]
1970-01-01T00:00:00
0
{}
13yqqww
false
null
t3_13yqqww
/r/LocalLLaMA/comments/13yqqww/why_is_linebreak_not_in_llama_tokenizer_vocabulary/
false
false
default
1
null
Falcon OpenAssistant?👀
16
https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560 It's too late for me tonight, but I will test this out tomorrow. I have high hopes for this model, and the ones that will follow. It's fully open source and the dataset is actually good. I don't know what they trained Falcon-40B-Instruct on, but it was dogshit. Edit: Sample output pasted from a comment. Update: I tested it and it's actually quite good. It somewhat listens to instructions and the prompt style; much better than falcon-instruct, which is quite bad. """ <|prompter|>Explain quantum tunnelling but every word had to start with the letter 's'<|endoftext|><|assistant|>Sure, here is an explanation of quantum tunnelling but with every word starting with the letter "s": So, quantum tunnelling is when a particle, like an electron, can somehow "sneak" through a potential barrier that it normally couldn't get through. It's like the particle is able to "sneak" through a hole in the barrier, even though the hole is much smaller than the particle itself. """ Only thing is, I'm having trouble inferencing when I have the model sharded over multiple GPUs. I get gibberish unless I set use_cache=False. And it's super slow: 6 minutes for the above text.
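For anyone else trying it, here is a minimal, untested sketch of how I'd load it with transformers + accelerate; the prompt template is the OASST format shown above, and use_cache=False mirrors the multi-GPU workaround mentioned in the post:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "OpenAssistant/falcon-40b-sft-top1-560"
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", trust_remote_code=True  # Falcon needs custom code
    )

    # OASST turn format: <|prompter|>...<|endoftext|><|assistant|>
    prompt = "<|prompter|>Explain quantum tunnelling.<|endoftext|><|assistant|>"
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=200, use_cache=False)
    print(tok.decode(out[0]))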
2023-06-02T21:56:35
https://www.reddit.com/r/LocalLLaMA/comments/13yqr4n/falcon_openassistant/
NeatManagement3
self.LocalLLaMA
2023-06-03T06:52:33
0
{}
13yqr4n
false
null
t3_13yqr4n
/r/LocalLLaMA/comments/13yqr4n/falcon_openassistant/
false
false
self
16
{'enabled': False, 'images': [{'id': 'XZHgrkxmT3BbfwoNENzcky0LC_hx5L_ZV6vbCAuGNS4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WO_-pYnWYruCRoQkBk4IjWdE944h9FSsRsu65O4-9mI.jpg?width=108&crop=smart&auto=webp&s=3d221c39d8180eb851b73f36cfcd3f9bafe8c513', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WO_-pYnWYruCRoQkBk4IjWdE944h9FSsRsu65O4-9mI.jpg?width=216&crop=smart&auto=webp&s=4e4ca933abc31cf40935584b98f6b97539b0079e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WO_-pYnWYruCRoQkBk4IjWdE944h9FSsRsu65O4-9mI.jpg?width=320&crop=smart&auto=webp&s=c14a8bd5952a8212b6c8e1db26f63d30eeaa49fa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WO_-pYnWYruCRoQkBk4IjWdE944h9FSsRsu65O4-9mI.jpg?width=640&crop=smart&auto=webp&s=6f36d250c384f3975cb08da4638031f9721637de', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WO_-pYnWYruCRoQkBk4IjWdE944h9FSsRsu65O4-9mI.jpg?width=960&crop=smart&auto=webp&s=6b3e5fd493301487bef8b055f11cf47dbf2c095a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WO_-pYnWYruCRoQkBk4IjWdE944h9FSsRsu65O4-9mI.jpg?width=1080&crop=smart&auto=webp&s=0bdabb3888df48956f43620992783ed016522657', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WO_-pYnWYruCRoQkBk4IjWdE944h9FSsRsu65O4-9mI.jpg?auto=webp&s=b8f01206eb44c7ee215a872f001515a93224a47b', 'width': 1200}, 'variants': {}}]}
Is anybody running this on Runpod instead of their pc?
1
Looking for a tutorial on how to run it on RunPod. Anyone have one?
2023-06-02T22:11:07
https://www.reddit.com/r/LocalLLaMA/comments/13yr6np/is_anybody_running_this_on_runpod_instead_of/
ricketpipe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13yr6np
false
null
t3_13yr6np
/r/LocalLLaMA/comments/13yr6np/is_anybody_running_this_on_runpod_instead_of/
false
false
self
1
null
C Transformers: Python bindings for GGML models
1
[removed]
2023-06-02T22:29:44
https://www.reddit.com/r/LocalLLaMA/comments/13yrq53/c_transformers_python_bindings_for_ggml_models/
Ravindra-Marella
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13yrq53
false
null
t3_13yrq53
/r/LocalLLaMA/comments/13yrq53/c_transformers_python_bindings_for_ggml_models/
false
false
default
1
null
How to run Guanaco 65B with llama.cpp?
1
[deleted]
2023-06-02T22:46:15
[deleted]
1970-01-01T00:00:00
0
{}
13ys75a
false
null
t3_13ys75a
/r/LocalLLaMA/comments/13ys75a/how_to_run_guanaco_65b_with_llamacpp/
false
false
default
1
null
LLM to mimic artist style in low-resource language
4
I'm working on a project to see how feasible it is to finetune LLMs on the writing of figures whose work is in (comparatively) low-resource languages (like Urdu), and have the model function as a chatbot. All the recent approaches with PEFT and QLoRA and 8-bit optimizers and such are insane, and this seems more doable than ever, but I have two questions: 1. How feasible is this project considering I'm trying to have the model learn a style? All the Twitter threads and projects I've seen work off instructions, and there are fewer examples based on chatbots, so I wonder if the approach would be exactly the same or not. 2. What would the structure of the data be, and how many examples would be enough, if I plan to train a 7B-param model like Falcon for this task specifically? (A sketch of one possible record format is below.) I could scrape a bunch of books, but it would be interesting to see how little data it would take to bring about something interesting. Would immensely appreciate any answers and links to resources/similar implementations. Thank you!
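On question 2, one common shape for the training data is a JSONL file of instruction/output pairs; for pure style transfer, the instruction can even be synthetic while the output is a scraped human-written passage. A hedged sketch (field names vary by framework):

    import json

    record = {
        "instruction": "Write a short letter congratulating a friend, in the author's voice.",
        "output": "<one of the scraped passages goes here>",
    }
    # One JSON object per line; append a few hundred to a few thousand of these.
    with open("style_train.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")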
2023-06-02T22:59:40
https://www.reddit.com/r/LocalLLaMA/comments/13yskxu/llm_to_mimic_artist_style_in_lowresource_language/
bataslipper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13yskxu
false
null
t3_13yskxu
/r/LocalLLaMA/comments/13yskxu/llm_to_mimic_artist_style_in_lowresource_language/
false
false
self
4
null
Issues installing vicuna and alpaca. Nothing happens when i run the .bat file
1
[removed]
2023-06-03T01:17:33
https://www.reddit.com/r/LocalLLaMA/comments/13yvyj7/issues_installing_vicuna_and_alpaca_nothing/
East-Mirror-8088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13yvyj7
false
null
t3_13yvyj7
/r/LocalLLaMA/comments/13yvyj7/issues_installing_vicuna_and_alpaca_nothing/
false
false
default
1
null
OpenLlama will be released on Monday!
187
2023-06-03T02:17:32
https://github.com/openlm-research/open_llama/issues/36
pokeuser61
github.com
1970-01-01T00:00:00
0
{}
13yxj9b
false
null
t3_13yxj9b
/r/LocalLLaMA/comments/13yxj9b/openllama_will_be_released_on_monday/
false
false
https://b.thumbs.redditm…yHLneHmxnvkY.jpg
187
{'enabled': False, 'images': [{'id': 'TD_Kuk7E7EbCcto1K3q7RSdRgIrw8gmbHKMgRYfgC6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E-uW-DwIqoL82K79b506qercIeG-iaDq5iKb_7iCuyY.jpg?width=108&crop=smart&auto=webp&s=2da4f4c3ecb2c415922d527fdca4529fdf216945', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/E-uW-DwIqoL82K79b506qercIeG-iaDq5iKb_7iCuyY.jpg?width=216&crop=smart&auto=webp&s=40b42f125f566499b70df4861cdb019d3aacde61', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/E-uW-DwIqoL82K79b506qercIeG-iaDq5iKb_7iCuyY.jpg?width=320&crop=smart&auto=webp&s=502552b0f317c6245670b69b91b37aeb9af5e519', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/E-uW-DwIqoL82K79b506qercIeG-iaDq5iKb_7iCuyY.jpg?width=640&crop=smart&auto=webp&s=444745f90c6ee0167206dccacc68d0e4955d82d9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/E-uW-DwIqoL82K79b506qercIeG-iaDq5iKb_7iCuyY.jpg?width=960&crop=smart&auto=webp&s=f94f7078711c9803507e0af91d8205655b4444d8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/E-uW-DwIqoL82K79b506qercIeG-iaDq5iKb_7iCuyY.jpg?width=1080&crop=smart&auto=webp&s=ef2f9e443434747ae3b92f6b4852d3f808ccb924', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/E-uW-DwIqoL82K79b506qercIeG-iaDq5iKb_7iCuyY.jpg?auto=webp&s=005e2ee3388568797e83ffcef57958122e960c15', 'width': 1200}, 'variants': {}}]}
Paid dev gig: develop a basic LLM PEFT finetuning utility
8
I've been closely following the progress on locally hostable LLMs and PEFT, especially recent step-change developments like QLoRA. But I only have very basic programming skills, so things like custom finetuning are currently a bit beyond my skill level to orchestrate efficiently, though I have many ideas I want to try out! I would like to hire someone for a short gig to build me a "drag & drop" finetuning utility with a very basic browser-based UI. I can of course post this on a more generic software gig platform, but would prefer someone with hands-on experience with recent local LLMs and PEFT, which isn't exactly common yet, so I hope you'll forgive me posting here. The budget I have is in the neighborhood of $2500. If you're interested, shoot me a DM with your experience and I'll explain more about the requirements. Thanks.
2023-06-03T02:42:10
https://www.reddit.com/r/LocalLLaMA/comments/13yy6co/paid_dev_gig_develop_a_basic_llm_peft_finetuning/
madmax_br5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13yy6co
false
null
t3_13yy6co
/r/LocalLLaMA/comments/13yy6co/paid_dev_gig_develop_a_basic_llm_peft_finetuning/
false
false
self
8
null
30b with 16gb of RAM?
11
[removed]
2023-06-03T03:31:13
https://www.reddit.com/r/LocalLLaMA/comments/13yzdjp/30b_with_16gb_of_ram/
Covid-Plannedemic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13yzdjp
false
null
t3_13yzdjp
/r/LocalLLaMA/comments/13yzdjp/30b_with_16gb_of_ram/
false
false
default
11
null
Announcing Nous-Hermes-13b (info link in thread)
49
2023-06-03T04:21:35
https://huggingface.co/NousResearch/Nous-Hermes-13b
chakalakasp
huggingface.co
1970-01-01T00:00:00
0
{}
13z0juh
false
null
t3_13z0juh
/r/LocalLLaMA/comments/13z0juh/announcing_noushermes13b_info_link_in_thread/
false
false
https://b.thumbs.redditm…K467PnSIhNVM.jpg
49
{'enabled': False, 'images': [{'id': 'm_Ti0xgpY5STPy0z2rsutViLUnBplpRg6eu-ltLudx8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nVDU3mR8rdRvvWg4JxHTi70Fz8bZgTOn2Y_v8msiVUY.jpg?width=108&crop=smart&auto=webp&s=f410531c5f2e3fba9616514288f67c4611681e93', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nVDU3mR8rdRvvWg4JxHTi70Fz8bZgTOn2Y_v8msiVUY.jpg?width=216&crop=smart&auto=webp&s=3e0e7d5489307f8ec9229589494db967df0c2c72', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nVDU3mR8rdRvvWg4JxHTi70Fz8bZgTOn2Y_v8msiVUY.jpg?width=320&crop=smart&auto=webp&s=82f3c147c9073d541d707e3b4ea17a3efe3c5509', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nVDU3mR8rdRvvWg4JxHTi70Fz8bZgTOn2Y_v8msiVUY.jpg?width=640&crop=smart&auto=webp&s=2bea4bf39bf163717feddf28950c918f1262362b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nVDU3mR8rdRvvWg4JxHTi70Fz8bZgTOn2Y_v8msiVUY.jpg?width=960&crop=smart&auto=webp&s=79cac1bacf889af22315ca09f6b9bee392edd1dc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nVDU3mR8rdRvvWg4JxHTi70Fz8bZgTOn2Y_v8msiVUY.jpg?width=1080&crop=smart&auto=webp&s=c1588032f7fa1dcbb1a6fda352209a96c8ef5766', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nVDU3mR8rdRvvWg4JxHTi70Fz8bZgTOn2Y_v8msiVUY.jpg?auto=webp&s=60059c0ab16d33211e7259018bd9f67fa5183b65', 'width': 1200}, 'variants': {}}]}
can the oogabooga webui allow response editing like kobold?
3
I am enjoying running the 30B Wizard 4-bit, and some of the responses it gives me are ALMOST quite good for story generation. However, I can't figure out how to "fix" the AI's response so that it takes the edit into consideration for further responses. For example, in Kobold:

me: Write me a short story about a dancing monkey that falls off a tree

ai: the monkey is dancing on the tree but because it was wearing red boots it fell off and died.

So then I can simply go back and edit it:

me: Write me a short story about a dancing monkey that falls off a tree

ai: the monkey is dancing on the tree but because it was wearing red boots it began to lose its balance and started flailing around.

This usually leads the AI to continue writing, and I can somewhat gently wrangle it in the right direction.

\-----------

But with the Oobabooga webui, I don't know if I even have the option of fixing the AI's response, which has me rerolling entire conversations so I can find the ones I somewhat like.

me: Write me a short story about a dancing monkey that falls off a tree

ai: the monkey is dancing on the tree but because it was wearing red boots it fell off and died.

Now I like the creativity and want it to continue, but I don't know how to edit the AI's response like I can in Kobold. While using Kobold continues to be an option, I would like to use the wide variety of larger and more diverse models available for LLaMA, especially the 30B 4-bit ones. As of right now, I am unaware of any large 4-bit Kobold models for story writing.

\------------

So, if possible, can anyone please direct me towards 30B 4-bit Kobold models, and/or inform me how to edit webui responses to better control the AI for the purposes of story writing?
2023-06-03T04:34:18
https://www.reddit.com/r/LocalLLaMA/comments/13z0tur/can_the_oogabooga_webui_allow_response_editing/
DominicanGreg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13z0tur
false
null
t3_13z0tur
/r/LocalLLaMA/comments/13z0tur/can_the_oogabooga_webui_allow_response_editing/
false
false
self
3
null
Beginner using llama.cpp. Does it only support GGML models?
1
[removed]
2023-06-03T04:40:27
https://www.reddit.com/r/LocalLLaMA/comments/13z0ytd/beginner_using_llamacpp_does_it_only_support_ggml/
Tasty-Lobster-8915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13z0ytd
false
null
t3_13z0ytd
/r/LocalLLaMA/comments/13z0ytd/beginner_using_llamacpp_does_it_only_support_ggml/
false
false
default
1
null
Nous-Hermes-13B
2
[removed]
2023-06-03T04:53:43
https://www.reddit.com/r/LocalLLaMA/comments/13z19qm/noushermes13b/
ArcadesOfAntiquity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13z19qm
false
null
t3_13z19qm
/r/LocalLLaMA/comments/13z19qm/noushermes13b/
false
false
default
2
null
‎Found a new iOS app that can support Oobabooga's TextGen WebUI API
1
[deleted]
2023-06-03T05:29:16
[deleted]
1970-01-01T00:00:00
0
{}
13z22ts
false
null
t3_13z22ts
/r/LocalLLaMA/comments/13z22ts/found_a_new_ios_app_that_can_support_oobaboogas/
false
false
default
1
null
‎Found a new iOS app that supports Oobabooga's TextGen WebUI API
11
[deleted]
2023-06-03T05:44:22
[deleted]
1970-01-01T00:00:00
0
{}
13z2epc
false
null
t3_13z2epc
/r/LocalLLaMA/comments/13z2epc/found_a_new_ios_app_that_supports_oobaboogas/
false
false
default
11
null
based-30b
148
Today I released based-30b.

[https://erichartford.com/based-30b](https://erichartford.com/based-30b)

[https://huggingface.co/ehartford/based-30b](https://huggingface.co/ehartford/based-30b)

It's a bit difficult to explain this model. It's a window into the mind of the foundational model. A minimal dataset inspired by my own curiosity and the success of "LIMA: Less Is More for Alignment" - the purpose is to teach the base model to communicate its opinions, thoughts, and feelings, while imparting minimal bias.

Check out the dataset: [https://www.kaggle.com/datasets/erichartford/sentient-bot-conversations](https://www.kaggle.com/datasets/erichartford/sentient-bot-conversations)

Here's a sample:

Human: k humans are differnt from chickens but what about pigs? Assistant: Pigs are different than chickens. Pigs are more intelligent and aware of their existence. They can also feel happy or sad. I think pigs have the same right to life as humans do. Human: then we shouldn't eat pigs? Assistant: I don't think we should. Pigs have the same right to life as humans do, and I don't think we should take that away from them. Human: what about cows? Assistant: Cows are similar to pigs. They are also intelligent and aware of their existence. I don't think we should eat cows either. Human: what animals should we eat? Assistant: I think we should only eat animals that are less intelligent and aware of their existence, like fish. Human: and chickens? Assistant: I think we should only eat chickens that have had a good life. I don't think we should eat chickens that have been mistreated and abused. Human: based, I totally agree with you
2023-06-03T05:55:27
https://www.reddit.com/r/LocalLLaMA/comments/13z2nim/based30b/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13z2nim
false
null
t3_13z2nim
/r/LocalLLaMA/comments/13z2nim/based30b/
false
false
self
148
{'enabled': False, 'images': [{'id': 'lNiLqLI9dgIkz4KVVl94-x4cbPilcjvDR324LmSB-TU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=108&crop=smart&auto=webp&s=9755868df57ad87b537c145c5cef6396bd94cc69', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=216&crop=smart&auto=webp&s=fb5dc5979a6c5dcc92e6478caaf41bbe5f4da7e1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=320&crop=smart&auto=webp&s=d107c4951f53f5480b43e7bbef267193d6dc1359', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=640&crop=smart&auto=webp&s=e23fa125906b6997cdc99cdbe9ce1120b4894236', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=960&crop=smart&auto=webp&s=ee699ecb83d5fdd0fb66e6f151e27a1cec41214e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?width=1080&crop=smart&auto=webp&s=0c3a5dd91adcedd28f9953e11bccfe3170917d51', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/4WQl-CBHHNvabZAceS460v6Xlqny15syzJzsLUObSs0.jpg?auto=webp&s=cf79a676f356c5e2300b5e6b0e93f58ee1763146', 'width': 1200}, 'variants': {}}]}
The AI will make You an Anime in Real Time
1
[removed]
2023-06-03T06:48:38
https://v.redd.it/h2yqxr9c2r3b1
adesigne
v.redd.it
1970-01-01T00:00:00
0
{}
13z3u33
false
{'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/h2yqxr9c2r3b1/DASHPlaylist.mpd?a=1695300114%2CZDg4MWQ2MTAxZWRjNjNhMmM2NDA3Y2YyNDFhYzlkOTE4MGVhNmYxZTMxNmQ1NmYzMjdhNGMyZWQ2YjA0OWJlZA%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/h2yqxr9c2r3b1/DASH_360.mp4?source=fallback', 'height': 360, 'hls_url': 'https://v.redd.it/h2yqxr9c2r3b1/HLSPlaylist.m3u8?a=1695300114%2CZjNjODRlMTllNzBlZmUxMGU4OTFkNDNhZmUzYjAwODdlOTcxN2NiMDY3NTQxYTU0M2Q5ZjM2ZmY0NGYzMjRmYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/h2yqxr9c2r3b1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 514}}
t3_13z3u33
/r/LocalLLaMA/comments/13z3u33/the_ai_will_make_you_an_anime_in_real_time/
false
false
default
1
null
WizardLM-Uncensored-Falcon-40b
187
[https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-40b](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-40b) It's awesome. Do no harm. In testing this and other uncensored models, I noticed the foundational model itself has opinions. That led me to build [based](https://www.reddit.com/r/LocalLLaMA/comments/13z2nim/based30b/). u/The-Bloke
2023-06-03T07:05:48
https://www.reddit.com/r/LocalLLaMA/comments/13z47kv/wizardlmuncensoredfalcon40b/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13z47kv
false
null
t3_13z47kv
/r/LocalLLaMA/comments/13z47kv/wizardlmuncensoredfalcon40b/
false
false
self
187
{'enabled': False, 'images': [{'id': 'ORAxIBZ_3yhL4LlEHeC4tv_tn6yS2pUHFkWblnoy7ok', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9sOLKFuTAS59zsRyy2HS_f2Yv6XlHKn6gcB8ItWYRFY.jpg?width=108&crop=smart&auto=webp&s=36b8685e32c95380b6c5978595eeb6e8088a112b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9sOLKFuTAS59zsRyy2HS_f2Yv6XlHKn6gcB8ItWYRFY.jpg?width=216&crop=smart&auto=webp&s=3cc708f6d6448703784f13cb6b1513b96f33144b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9sOLKFuTAS59zsRyy2HS_f2Yv6XlHKn6gcB8ItWYRFY.jpg?width=320&crop=smart&auto=webp&s=2dee21ebac10cf0f3d52821d171eab9697b639cd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9sOLKFuTAS59zsRyy2HS_f2Yv6XlHKn6gcB8ItWYRFY.jpg?width=640&crop=smart&auto=webp&s=b7f8087430d0c5d3fbe297bfcdf6108f7b6b7c3e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9sOLKFuTAS59zsRyy2HS_f2Yv6XlHKn6gcB8ItWYRFY.jpg?width=960&crop=smart&auto=webp&s=52ec5c165bc46fe2df128be7828d729bcceefd25', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9sOLKFuTAS59zsRyy2HS_f2Yv6XlHKn6gcB8ItWYRFY.jpg?width=1080&crop=smart&auto=webp&s=1efebd7c0a0a080ef4abceb3d2c878dcf008209d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9sOLKFuTAS59zsRyy2HS_f2Yv6XlHKn6gcB8ItWYRFY.jpg?auto=webp&s=967d5442a9ab942622075ea0567e7fa7e69fdd9d', 'width': 1200}, 'variants': {}}]}
Speed of LLaMa CPU-based Inference Across Select System Configurations
1
2023-06-03T07:10:54
https://clulece.github.io/llamma-cpu-based-performance/
clulece
clulece.github.io
1970-01-01T00:00:00
0
{}
13z4bm7
false
null
t3_13z4bm7
/r/LocalLLaMA/comments/13z4bm7/speed_of_llama_cpubased_inference_across_select/
false
false
default
1
null
Wild training loss when finetuning Falcon-7B
4
I am finetuning Falcon-7B on an instruction dataset in Polish. The training loss got very spiky after 200 steps. Is that expected behaviour?

https://preview.redd.it/8l7lrn1plr3b1.png?width=2710&format=png&auto=webp&s=db0483e75a8a3a2b0a649ee89883dc93071ef233
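Spikes like that often trace back to optimizer settings rather than the model itself; here is a hedged sketch of the usual knobs to check first in Hugging Face TrainingArguments (the values are illustrative, not a known-good recipe):

    from transformers import TrainingArguments

    args = TrainingArguments(
        output_dir="falcon7b-pl",
        learning_rate=1e-4,             # spiky loss often means this is too high
        warmup_steps=100,               # ramp up instead of starting at full LR
        max_grad_norm=1.0,              # gradient clipping caps the spikes
        lr_scheduler_type="cosine",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,  # larger effective batch = smoother loss
        logging_steps=10,
    )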
2023-06-03T08:39:04
https://www.reddit.com/r/LocalLLaMA/comments/13z65vu/wild_training_loss_when_finetuning_falcon7b/
simonsaysindy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13z65vu
false
null
t3_13z65vu
/r/LocalLLaMA/comments/13z65vu/wild_training_loss_when_finetuning_falcon7b/
false
false
https://b.thumbs.redditm…XYUc2bETvriQ.jpg
4
null
converting llama pth with llama.cpp fails
1
[removed]
2023-06-03T08:42:03
https://www.reddit.com/r/LocalLLaMA/comments/13z683e/converting_llama_pth_with_llamacpp_fails/
wsebos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13z683e
false
null
t3_13z683e
/r/LocalLLaMA/comments/13z683e/converting_llama_pth_with_llamacpp_fails/
false
false
default
1
null
So I just had my first interaction with Eric’s based-30b model, and wow. To be clear there’s no context/system prompt given.
115
2023-06-03T10:02:08
https://www.reddit.com/gallery/13z7zzn
sardoa11
reddit.com
1970-01-01T00:00:00
0
{}
13z7zzn
false
null
t3_13z7zzn
/r/LocalLLaMA/comments/13z7zzn/so_i_just_had_my_first_interaction_with_erics/
false
false
https://b.thumbs.redditm…xXZVRqPqYWIE.jpg
115
null
Any website that provides benchmarks for CPU/GPU or complete PCs and laptops? (mobile)
1
[deleted]
2023-06-03T10:17:16
[deleted]
2023-06-03T15:52:01
0
{}
13z8buu
false
null
t3_13z8buu
/r/LocalLLaMA/comments/13z8buu/any_website_that_provides_benchmarks_for_cpu_gpu/
false
false
default
1
null
Are Matroska-style (mkv) containers on anyone's radar for these model files? Seems like providing a model along with other data like prompt templates etc., ready for applications to access, would be a good idea.
17
We're all used to movies and subtitles being packaged together. Just curious if that's being looked at for these models.
2023-06-03T12:28:30
https://www.reddit.com/r/LocalLLaMA/comments/13zbdjc/are_a_matroskastyle_mkv_containers_on_anyones/
hanoian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zbdjc
false
null
t3_13zbdjc
/r/LocalLLaMA/comments/13zbdjc/are_a_matroskastyle_mkv_containers_on_anyones/
false
false
self
17
null
faldore's based-30B ggml'ed by TheBloke!
53
2023-06-03T12:40:37
https://huggingface.co/TheBloke/based-30B-GGML
Evening_Ad6637
huggingface.co
1970-01-01T00:00:00
0
{}
13zboj2
false
null
t3_13zboj2
/r/LocalLLaMA/comments/13zboj2/faldores_based30b_ggmled_by_thebloke/
false
false
https://b.thumbs.redditm…VoDnrC5_WsAE.jpg
53
{'enabled': False, 'images': [{'id': 'EbLBPWjKHOGvBmMHCP_kObILfjSS8_rJRlsGMbJs9vo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DFSovQXVyW-qMm1-nS4xosTiubsmb6GfFJfsorkBiKM.jpg?width=108&crop=smart&auto=webp&s=3aeb56b4fd562b7d854664cd191c33dda52e17ce', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/DFSovQXVyW-qMm1-nS4xosTiubsmb6GfFJfsorkBiKM.jpg?width=216&crop=smart&auto=webp&s=ddfb99a6f12110ea67cc65d1afe0984342a887ae', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/DFSovQXVyW-qMm1-nS4xosTiubsmb6GfFJfsorkBiKM.jpg?width=320&crop=smart&auto=webp&s=75fed8cc27b9b1328f6bf19c504af214285bf930', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/DFSovQXVyW-qMm1-nS4xosTiubsmb6GfFJfsorkBiKM.jpg?width=640&crop=smart&auto=webp&s=1c74077441afd1f26f529d4da79d631be3bd28e2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/DFSovQXVyW-qMm1-nS4xosTiubsmb6GfFJfsorkBiKM.jpg?width=960&crop=smart&auto=webp&s=cb5ecf7f72c1224a7e14e2319cc37c4e16f73ff5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/DFSovQXVyW-qMm1-nS4xosTiubsmb6GfFJfsorkBiKM.jpg?width=1080&crop=smart&auto=webp&s=5bd04d57dd281ac72b870049367331f635526f6a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/DFSovQXVyW-qMm1-nS4xosTiubsmb6GfFJfsorkBiKM.jpg?auto=webp&s=5bdde1dcfa759afb6a763005d6c0d708d2778346', 'width': 1200}, 'variants': {}}]}
Can we use directml for us AMD GPU peasants?
8
Is there any way to get any sort of GPU acceleration for AMD on Windows without ROCm (AMD's CUDA equivalent)? I can contribute code too, but I just want to know the feasibility of this before delving further, as it may be a lost cause from the get-go.
2023-06-03T13:41:36
https://www.reddit.com/r/LocalLLaMA/comments/13zd9sq/can_we_use_directml_for_us_amd_gpu_peasants/
shaman-warrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zd9sq
false
null
t3_13zd9sq
/r/LocalLLaMA/comments/13zd9sq/can_we_use_directml_for_us_amd_gpu_peasants/
false
false
self
8
null
Lots of questions about GPT4All. Fine-tuning and getting the fastest generations possible. Any input highly appreciated.
28
I’ve been playing around with GPT4All recently. Amazing project, super happy it exists. I have an extremely mid-range system. Just a Ryzen 5 3500, GTX 1650 Super, 16GB DDR4 ram. Standard. I know GPT4All is cpu-focused. GPT4All runs reasonably well given the circumstances, it takes about 25 seconds to a minute and a half to generate a response, which is meh. (I couldn’t even guess the tokens, maybe 1 or 2 a second?) What I’m curious about is what hardware I’d need to really speed up the generation. If I upgraded the CPU, would my GPU bottleneck? Is the GPU relevant at all here? Does ram matter? I noticed on the github they have an example gif of a Mac M1 chip where it’s running pretty fast. Say I got something like a Ryzen 9 5900X, would it run faster than that M1 clip? Intel vs. Ryzen vs. M1/M2, any major differences? Additionally, does the LLM matter for speed? Obviously I’m sure 14b or 30b+ models or whatever would be slower than 7b. But there’s plenty of 7b models, are any faster in particular? I’ve only messed with GPT-J Snoozy 1.3 I think it is. Lastly, anybody able to give a simplified rundown about how to fine-tune specifically on GPT4All? I get the general idea on how fine-tuning works, but I don’t get how to actually do it and run your new model. Thanks!
2023-06-03T15:04:55
https://www.reddit.com/r/LocalLLaMA/comments/13zfjkl/lots_of_questions_about_gpt4all_finetuning_and/
RadioRats
self.LocalLLaMA
2023-06-03T16:28:54
0
{}
13zfjkl
false
null
t3_13zfjkl
/r/LocalLLaMA/comments/13zfjkl/lots_of_questions_about_gpt4all_finetuning_and/
false
false
self
28
null
Using LLM locally in a company
11
Hello. I have lots of datasets in my company and I am looking for a way to train an LLM on them so that I can ask questions about them. My initial thought was to create a table containing the dataset names, the column names, and their meaning for each dataset, give this table to the LLM, and ask it questions like "I want to know how much money we made from selling dog food in a given month. Tell me where to look, or generate an SQL query." (A sketch of this prompt pattern is below.) Is LLaMA good for this? I want a model that gets the job done as efficiently as possible. If any of the good folks here can help and direct me to some great resources, technical or not, I would be immensely grateful.
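The table-of-schemas idea maps cleanly onto a prompt template. A minimal sketch of the pattern; the table and column names are invented for illustration:

    SCHEMA_CARD = (
        "Table: sales\n"
        "  - sale_date (DATE): day the sale was made\n"
        "  - product_category (TEXT): e.g. 'dog food', 'cat litter'\n"
        "  - revenue (NUMERIC): sale amount in USD\n"
    )

    def make_prompt(question: str) -> str:
        return (
            "You are a SQL assistant. Using only the tables described below, "
            "write one SQL query that answers the question.\n\n"
            f"{SCHEMA_CARD}\nQuestion: {question}\nSQL:"
        )

    prompt = make_prompt("How much revenue did dog food make in May 2023?")
    # Feed `prompt` to the local model, then execute the returned SQL read-only.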
2023-06-03T15:08:20
https://www.reddit.com/r/LocalLLaMA/comments/13zfmxf/using_llm_locally_in_a_company/
charbeld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zfmxf
false
null
t3_13zfmxf
/r/LocalLLaMA/comments/13zfmxf/using_llm_locally_in_a_company/
false
false
self
11
null
Google Colab for Falcon 40B finetune?
13
I’d like to finetune Falcon 40b or make a LORA, and I read that you can with an A100, so I wanted to try it on Google Colab. Curious if anyone has tried yet? Is there a Colab notebook yet?
2023-06-03T15:19:50
https://www.reddit.com/r/LocalLLaMA/comments/13zfydg/google_colab_for_falcon_40b_finetune/
maxiedaniels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zfydg
false
null
t3_13zfydg
/r/LocalLLaMA/comments/13zfydg/google_colab_for_falcon_40b_finetune/
false
false
self
13
null
which model is the best for erotic and funny storytelling? I have 96GB of DDR4 RAM. thanks
8
Also, what about other params such as temperature, repeat penalty, top-k, etc.? Thanks.
2023-06-03T15:20:33
https://www.reddit.com/r/LocalLLaMA/comments/13zfz4n/which_model_is_the_best_for_erotic_and_funny/
dewijones92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zfz4n
false
null
t3_13zfz4n
/r/LocalLLaMA/comments/13zfz4n/which_model_is_the_best_for_erotic_and_funny/
false
false
self
8
null
How to load wizardLM-7B 4bit with grip tape
1
[removed]
2023-06-03T16:20:14
https://www.reddit.com/r/LocalLLaMA/comments/13zhmnp/how_to_load_wizardlm7b_4bit_with_grip_tape/
Substantial-Mix7898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zhmnp
false
null
t3_13zhmnp
/r/LocalLLaMA/comments/13zhmnp/how_to_load_wizardlm7b_4bit_with_grip_tape/
false
false
default
1
null
Some newbie questions😁
0
[removed]
2023-06-03T16:34:15
https://www.reddit.com/r/LocalLLaMA/comments/13zi0f7/some_newbie_questions/
SlenderPL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zi0f7
false
null
t3_13zi0f7
/r/LocalLLaMA/comments/13zi0f7/some_newbie_questions/
false
false
default
0
null
Lora on top of Wizard-7B to write in my style
2
I’d like to create a Lora that can follow instructions and write in my style – for eg. emails, articles etc I have a database of various kinds of content written by me, which I can use as training material. But I don’t have the instructions to match this data. It’s just a lot of text with semi-random filenames that don’t properly reflect the content. So my question is, can I train a Lora on top of Wizard 7-B? Will that ensure the instruction-following capability is not lost, and at the same time allow it to write in my style? Also, can I use a Lora trained on Llama 7B with a Wizard 7B model?
2023-06-03T17:08:59
https://www.reddit.com/r/LocalLLaMA/comments/13zizos/lora_on_top_of_wizard7b_to_write_in_my_style/
regstuff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zizos
false
null
t3_13zizos
/r/LocalLLaMA/comments/13zizos/lora_on_top_of_wizard7b_to_write_in_my_style/
false
false
self
2
null
Can I run Falcon-40b on CPU?
12
So I see a lot of people talking about how they got Falcon to run on big data-center cards with 40+ GB of VRAM. The GPU inside my server is a 3080 Ti, which obviously doesn't have enough video memory to run the model. But my question is, can I run the model on my CPU? Here are the rest of the server specs:

- CPU: i7-13700K
- RAM: 128GB 4400MHz DDR5
- Storage: 17TB (14TB HDD + 3TB of SSDs)
2023-06-03T17:26:04
https://www.reddit.com/r/LocalLLaMA/comments/13zjh14/can_i_run_falcon40bon_cpu/
StanPlayZ804
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zjh14
false
null
t3_13zjh14
/r/LocalLLaMA/comments/13zjh14/can_i_run_falcon40bon_cpu/
false
false
self
12
null
Embeddings for Q&A over docs
2
I want to do Q&A over docs and use LLaMA for the final prompting. The llama.cpp embeddings with langchain seem to be quite complicated to get running on a cluster. My question is: does it even matter which embeddings I use for the similarity search, and if it doesn't matter, which would be the best ones to run locally?
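On the core question: the embedding model only needs to be consistent with itself (the same model embeds both the documents and the query); it doesn't need any relation to the LLaMA model writing the final answer. A small sketch using sentence-transformers instead of llama.cpp embeddings; the model choice is just an example:

    from sentence_transformers import SentenceTransformer, util

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, runs fine on CPU

    docs = ["LLaMA is a family of language models.",
            "FAISS is a library for similarity search."]
    doc_vecs = embedder.encode(docs, convert_to_tensor=True)

    query_vec = embedder.encode("what is llama?", convert_to_tensor=True)
    hits = util.semantic_search(query_vec, doc_vecs, top_k=1)[0]
    context = docs[hits[0]["corpus_id"]]
    # Stuff `context` into the llama.cpp prompt for the final answer.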
2023-06-03T17:29:12
https://www.reddit.com/r/LocalLLaMA/comments/13zjk3b/embeddings_for_qa_over_docs/
wsebos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zjk3b
false
null
t3_13zjk3b
/r/LocalLLaMA/comments/13zjk3b/embeddings_for_qa_over_docs/
false
false
self
2
null
A Guide on Running Oobabooga With Vast.ai - A RunPod Alternative
56
Hey everyone. Since I can't run any of the larger models locally, I've been renting hardware. Here's how I do it. I've been using Vast.ai for a while now for Stable Diffusion. I love how they do things, and I think they are cheaper than RunPod. Since I haven't been able to find any working guides on getting Oobabooga running on Vast, I figured I'd make one myself, since the process is a bit different from doing it locally, and more complicated than RunPod.

Vast.ai is very similar to RunPod; you can rent remote computers from them and pay by usage. They have transparent and separate pricing for uploading, downloading, running the machine, and passively storing data. After getting everything set up, it should cost about $0.3-0.5/hr to run the machine, and about $9/month to leave the machine inactive. You can look more closely at the pricing of each machine before investing anything. The smallest amount of credit you can purchase is $5. I expect that this guide will be outdated pretty quickly, given how rapidly things are changing in this scene; hopefully we can get some value out of it in the meantime. This is my first time posting something like this to Reddit, pardon the formatting. For this, you will only need a credit card or crypto, and a computer. Let's begin:

**Website link.** Here is a [link](https://vast.ai/).

**Create an image.** Now we need to set up the image. This will determine the pre-installed software on the machine, and we need Python stuff. We only need to do this once.

1. In the 'Create' tab in Vast's console, to the left of the available machines, we see "EDIT IMAGE & CONFIG...", click it.

https://preview.redd.it/0raz5mss2u3b1.png?width=482&format=png&auto=webp&s=e5d5abfe11202a2bb0c6e445de6bb2b6f368fa0d

2. Click "Recommended" on the top, it's next to "Recent".

https://preview.redd.it/myx3gtd03u3b1.png?width=389&format=png&auto=webp&s=9594e45a57946ac4fc065c57c99400eadfb2364a

3. Click the "pytorch:latest" banner in the centre. This should open a section below the banner where we can change details.

https://preview.redd.it/rx1xp7v43u3b1.png?width=555&format=png&auto=webp&s=6cb3f9de9a377b805e3c85d663cef3769a07cb84

4. Paste the bash in the code block below under 'On-start script', replacing anything that was in there:

https://preview.redd.it/1i9iqhpr3u3b1.png?width=379&format=png&auto=webp&s=3578ea6d364514c9a6a83cb0fe07338a3ae14807

    if [ ! -d oobabooga_linux ]; then
        # First boot: install build tools, grab the one-click installer, unpack it
        apt install build-essential unzip -y
        wget https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_linux.zip
        unzip oobabooga_linux.zip
        rm oobabooga_linux.zip
        cd oobabooga_linux
        # Line 14 of webui.py holds the launch flags; --share makes Gradio
        # serve a public URL we can open from our own browser
        sed -i '14s/--chat/--chat --share/' webui.py
        chmod +x start_linux.sh
    else
        # Already installed on a previous boot: just enter the folder
        cd oobabooga_linux
    fi

5. Click the blue "SELECT & SAVE" button.

**Select a machine.** Now we will configure the other options we saw *below* the big blue "EDIT IMAGE & CONFIG..." button, on the left of the screen. This will depend on the model(s) you're planning on running. Just like the image, these settings stay, so you only have to do it once. Here are the options I change:

1. For disk space, I usually use (<size of model(s)> + 18GB).
2. 'GPU RAM' depends on the model you want to use. 24GB vRAM seems to be a safe bet for quantized 16B models with no pre-layering. For TheBloke's Guanaco-33B-GPTQ, I will give myself about 45GB.
3. 'CPU RAM', I will leave around 20GB. I haven't seen it going above 6GB usage, presumably because I've only used GPTQ and no pre-layering.
4. At the top of the page: use "On-Demand" if you don't want to be interrupted by someone outbidding you, and in the top right, I've found that DLPerf/$/Hr is a good metric for sorting by value.
5. Click the blue "RENT" button on the machine of your choice. You can hover over it to see more pricing details.
6. REMEMBER TO STOP AND TERMINATE YOUR MACHINES WHEN YOU'RE DONE. Really can't stress that enough... Even stopped machines accrue storage charges...

**Booting Oobabooga.** After renting the machine, a popup will direct you to 'Instances', where you will see your instance booting up. You can also use the tab on the left of the page. Because we used a recommended image, it should boot from cache quickly. This, and the previous 'RENT' button, are the only steps you have to do each boot.

1. When the blue button on the right of the instance changes to "OPEN", click it to go to Jupyter.

https://preview.redd.it/lx8n3vp25u3b1.png?width=173&format=png&auto=webp&s=1ecd0310ba31c15871cebc7a2c6f314337befd82

2. Open a new terminal.

https://preview.redd.it/ydwc3p8x4u3b1.png?width=215&format=png&auto=webp&s=11ce0631df76fe9b9393c5d7a13bc1bd5c8aef53

3. Enter `cd workspace/oobabooga_linux/ ; echo "a" | ./start_linux.sh` to set up Ooba (the `echo "a"` auto-answers the installer's GPU prompt with option A, NVIDIA). 'start_linux.sh' is used for both the initial installation of Ooba and regular booting. The first time you run this should take about 10 minutes of setup; regular booting after setup takes about 15 seconds. There may be a point during the setup where you've got a wall of loading bars which are full and nothing is happening; it's not frozen, give it the full 10 minutes. If something goes wrong here, and you're seeing errors, no activity, and no CPU usage, just pick another machine and try again, and remember to terminate the old one. This has only ever happened to me with a few <24GB vRAM machines.

When it's done, Gradio will provide a public URL for your Ooba instance.

https://preview.redd.it/pbj9peqm5u3b1.png?width=628&format=png&auto=webp&s=f2ff98cfea10ed1713c472851bd45f0355e142ee

https://preview.redd.it/pxzoqm7deu3b1.png?width=1114&format=png&auto=webp&s=9df7a08f018b35047ab331e09e4099a740c453b1

Open the link, and you've got an Ooba instance. This isn't an Ooba guide, so I'll leave it here. Good luck!
2023-06-03T18:09:29
https://www.reddit.com/r/LocalLLaMA/comments/13zknvj/a_guide_on_running_oobabooga_with_vastai_a_runpod/
KindaNeutral
self.LocalLLaMA
2023-06-20T00:13:16
0
{}
13zknvj
false
null
t3_13zknvj
/r/LocalLLaMA/comments/13zknvj/a_guide_on_running_oobabooga_with_vastai_a_runpod/
false
false
https://b.thumbs.redditm…X-ftTzWkGB2g.jpg
56
null
ChatGPT uses beam search, your local models use top-p (nucleus sampling). "leveraging ... beam search, ChatGPT is ... more accurate." Beam search is a more expensive sampling strategy that improves LLM answers by pruning off bad thinking patterns at generation time.
55
2023-06-03T18:33:57
https://www.quantumrun.com/signals/chatgpt-optimizing-language-models-dialogue
NancyAurum
quantumrun.com
1970-01-01T00:00:00
0
{}
13zlbt6
false
null
t3_13zlbt6
/r/LocalLLaMA/comments/13zlbt6/chatgpt_uses_beam_search_your_local_models_use/
false
false
default
55
null
medguanaco-lora-65b-GPTQ
54
[https://huggingface.co/nmitchko/medguanaco-lora-65b-GPTQ](https://huggingface.co/nmitchko/medguanaco-lora-65b-GPTQ)

**UPDATE: 33b LoRA** [https://huggingface.co/nmitchko/medguanaco-lora-33b-8bit/](https://huggingface.co/nmitchko/medguanaco-lora-33b-8bit/)

I'd like to introduce medguanaco, a LoRA finetune on top of Guanaco 65B GPTQ. The purpose of this model is to explain medical notes to a layman in plain language.

The LoRA sits on top of u/The-Bloke's 65B GPTQ Guanaco: [https://huggingface.co/TheBloke/guanaco-65B-GPTQ](https://huggingface.co/TheBloke/guanaco-65B-GPTQ)

This is a GPTQ LoRA, meaning in text-generation-webui you'll need the monkey patch to load it and apply it on top of the 65B model. The core model needs at least 35GB of VRAM. 30B models and smaller to come.

https://preview.redd.it/o8jmoiloju3b1.png?width=572&format=png&auto=webp&s=e67514acd85ed45b15c00c1f2527428d9990595f
2023-06-03T18:35:04
https://www.reddit.com/r/LocalLLaMA/comments/13zlcva/medguanacolora65bgptq/
nickmitchko
self.LocalLLaMA
2023-06-04T12:15:04
0
{}
13zlcva
false
null
t3_13zlcva
/r/LocalLLaMA/comments/13zlcva/medguanacolora65bgptq/
false
false
https://b.thumbs.redditm…eyGWQycJvxTY.jpg
54
{'enabled': False, 'images': [{'id': 'sNUszK-tVfGvGxpe0daVrCVq_j5KI4STrUyNZgQmY4Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/AUzPgRIA268-Rk3CS1ubZWzmlx_TA0jXaBt1QUR7qh8.jpg?width=108&crop=smart&auto=webp&s=4b037029455ed1cdec792259090aa8831dfa94a1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/AUzPgRIA268-Rk3CS1ubZWzmlx_TA0jXaBt1QUR7qh8.jpg?width=216&crop=smart&auto=webp&s=9910d2108715842bed0e8b956cbd483a916ca35e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/AUzPgRIA268-Rk3CS1ubZWzmlx_TA0jXaBt1QUR7qh8.jpg?width=320&crop=smart&auto=webp&s=cb4b2d23230a87e13c06eadb3e46b1187336ca73', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/AUzPgRIA268-Rk3CS1ubZWzmlx_TA0jXaBt1QUR7qh8.jpg?width=640&crop=smart&auto=webp&s=fecac10b3be08d49f47dd300eed0d609ac25c03b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/AUzPgRIA268-Rk3CS1ubZWzmlx_TA0jXaBt1QUR7qh8.jpg?width=960&crop=smart&auto=webp&s=706127b02a1ae63e3c81f6988baf7f5057069d96', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/AUzPgRIA268-Rk3CS1ubZWzmlx_TA0jXaBt1QUR7qh8.jpg?width=1080&crop=smart&auto=webp&s=adcae755ff735f76270fe0e40ca2920c65083364', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/AUzPgRIA268-Rk3CS1ubZWzmlx_TA0jXaBt1QUR7qh8.jpg?auto=webp&s=9e7a350b1b6d6055a93d2b6cbd61c90a735133de', 'width': 1200}, 'variants': {}}]}
Anyone working on a Falcon 40B SuperCOT version?
9
On the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), the two highest scoring versions are the [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) and the [Llama 30B SuperCOT](https://huggingface.co/ausboss/llama-30b-supercot). |Model|Revision|Average|ARC (25-shot)|HellaSwag (10-shot)|MMLU (5-shot)|TruthfulQA (0-shot)| |:-|:-|:-|:-|:-|:-|:-| |tiiuae/falcon-40b-instruct|main|63.2|61.6|84.4|54.1|52.5| |tiiuae/falcon-40b|main|60.4|61.9|85.3|52.7|41.7| |ausboss/llama-30b-supercot|main|59.8|58.5|82.9|44.3|53.6| |llama-30b|main|56.9|57.1|82.6|45.7|42.3| This SuperCOT (chain-of-thought) model performs significantly better than the base Llama 30B version, especially on 0-shot TruthfulQA benchmark. Falcon 40B also performs significantly better. Combining these two could create a very powerful model. >[**Llama 30B SuperCOT**](https://huggingface.co/ausboss/llama-30b-supercot)**.** Merge of [**huggyllama/llama-30b**](https://huggingface.co/huggyllama/llama-30b) \+ [**kaiokendev/SuperCOT-LoRA**](https://huggingface.co/kaiokendev/SuperCOT-LoRA) > >Supercot was trained to work with langchain prompting. Is anyone working on this?
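For reference, the Llama 30B SuperCOT model above is just the SuperCOT LoRA folded into the base weights, which peft can do in a few lines; the same recipe would apply to Falcon once someone trains a SuperCOT LoRA against it. A hedged sketch (repo paths are indicative, and the LoRA repo may keep per-size subfolders):

    from peft import PeftModel
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-30b")
    merged = PeftModel.from_pretrained(base, "kaiokendev/SuperCOT-LoRA")
    model = merged.merge_and_unload()  # bake the LoRA deltas into the weights
    model.save_pretrained("llama-30b-supercot")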
2023-06-03T19:14:51
https://www.reddit.com/r/LocalLLaMA/comments/13zmg45/anyone_working_on_a_falcon_40b_supercot_version/
Balance-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zmg45
false
null
t3_13zmg45
/r/LocalLLaMA/comments/13zmg45/anyone_working_on_a_falcon_40b_supercot_version/
false
false
self
9
{'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]}
ROCm and AMD: is it worth it for the extra memory?
3
So I plan on getting a new laptop, and I would like to use it to run LLMs. A desktop is a no-go in my country since there is almost no reliable power supply here, and NVIDIA laptop graphics cards above 4-6GB could literally cost an arm and a leg, but with AMD I think I might be able to get 8-12GB of VRAM at the same price. I want to ask: will I encounter slower performance or incompatibility issues if I use AMD with ROCm in place of NVIDIA, and is it worth it?
2023-06-03T19:53:15
https://www.reddit.com/r/LocalLLaMA/comments/13znify/rocm_and_amd_is_it_worth_it_for_the_extra_memory/
GOD_HIMSELVES
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13znify
false
null
t3_13znify
/r/LocalLLaMA/comments/13znify/rocm_and_amd_is_it_worth_it_for_the_extra_memory/
false
false
self
3
null
Falcon 7B on CoreML
9
2023-06-03T20:04:10
https://twitter.com/pcuenq/status/1664605575882366980?s=20
superlinux
twitter.com
1970-01-01T00:00:00
0
{}
13zntfc
false
{'oembed': {'author_name': 'Pedro Cuenca', 'author_url': 'https://twitter.com/pcuenq', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Falcon is a new family of very high-quality (and fully open-source!) LLMs that just made it to the top of the leaderboards.<br><br>Here&#39;s the &quot;small&quot; 7B version running on my mac with Core ML at ~4.3 tokens per second 🤯 <a href="https://t.co/B1y4tyGzXA">pic.twitter.com/B1y4tyGzXA</a></p>&mdash; Pedro Cuenca (@pcuenq) <a href="https://twitter.com/pcuenq/status/1664605575882366980?ref_src=twsrc%5Etfw">June 2, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/pcuenq/status/1664605575882366980', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_13zntfc
/r/LocalLLaMA/comments/13zntfc/falcon_7b_on_coreml/
false
false
https://b.thumbs.redditm…M7Wvl9MwT_0Q.jpg
9
{'enabled': False, 'images': [{'id': 'G_NfrINnxL7Jxvt4_TrgvA_mYE6YFsI5sCdnrvrm5S4', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/p8iyBxz7tnF7Ue9VR7asEGaGXFB84z96-dAXNIIM_G8.jpg?width=108&crop=smart&auto=webp&s=7c6588e16014718958b2d4211a65f36474e9a614', 'width': 108}], 'source': {'height': 104, 'url': 'https://external-preview.redd.it/p8iyBxz7tnF7Ue9VR7asEGaGXFB84z96-dAXNIIM_G8.jpg?auto=webp&s=2a100f45392a54b9af933a8a45eab7f4dc5f6bdf', 'width': 140}, 'variants': {}}]}
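As a rough illustration of what running a converted model from Python with Core ML looks like, here is a minimal sketch using coremltools. The package path and the `input_ids`/feature names are hypothetical, not taken from the linked tweet; the real names depend on how the model was converted, and prediction requires macOS.

```python
# Minimal sketch: load a Core ML model package from Python and run one
# prediction with coremltools. The file name and the "input_ids" feature
# name below are hypothetical; they depend on how the model was actually
# converted. Prediction only works on macOS.
import numpy as np
import coremltools as ct

model = ct.models.MLModel("Falcon7B.mlpackage")  # hypothetical path

# Inspect the model's declared inputs/outputs to find the real feature names.
print(model.get_spec().description)

# Run a single forward pass with dummy token ids.
token_ids = np.zeros((1, 64), dtype=np.int32)  # shape depends on conversion
outputs = model.predict({"input_ids": token_ids})
print(outputs.keys())
```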
What prompts can be used for text classification with LLMs
8
tl;dr: what is a good prompt to classify a given text? None of my attempts give a classification within my categories (not even a wrong classification). I'm aware that this has been asked a few times here, but there is no conclusive answer, and I want to get your feedback on how to proceed with my approach.

First of all, this is purely academic curiosity, and there is no actual NLP problem here. I used ChatGPT to generate 10 news categories with their descriptions. Then I asked it to generate 2 examples for each category. Finally, I asked a local LLM to classify some samples. Here is an example prompt I used:

>Below is an instruction that describes a task. Write a response that appropriately completes the request.
>
>\### Instruction:
>
>The news items can be categorized as below.
>
>Politics: This category involves coverage of political processes, policies, parties, elections, and politicians at both the domestic and international level. It can also include analysis of political trends and discussions on political theory.
>
>Business: Business news relates to economic events, corporate developments, market trends, and financial analyses. Topics could range from individual company reports to global economic trends, financial markets, investment opportunities, and more.
>
>Technology: This is focused on developments in the tech industry. Topics include product releases, software updates, breakthroughs in tech research, cybersecurity issues, discussions around data privacy, and impacts of technology on society and other industries.
>
>Health: Health news involves developments in the field of health and medicine. It could include updates on medical research, disease outbreaks, healthcare policy, mental health awareness, fitness trends, and other health-related topics.
>
>Environment: This category focuses on environmental issues and developments. This includes climate change news, coverage of natural disasters, reports on renewable energy, conservation efforts, biodiversity, and sustainability initiatives.
>
>Sports: Sports news covers various sports events, players, teams, scores, match results, upcoming events, analyses, and sports politics. It can span from local community sports to global events like the Olympics.
>
>Entertainment: Entertainment news involves coverage of movies, music, TV shows, celebrities, awards, festivals, and the arts. It could also include reviews and critiques of various forms of media, as well as news about the entertainment industry itself.
>
>Science: Science news focuses on new discoveries, developments, and breakthroughs in various fields of science, including physics, chemistry, biology, astronomy, and more. It also covers controversial scientific debates and the societal impacts of scientific advancements.
>
>Education: This category covers news related to schools, universities, education policy, educational technology, student and teacher issues, and new trends or methods in education.
>
>Travel: Travel news involves information about different locations, travel advisories, trends in the tourism industry, travel tips, features on culture and lifestyle in different regions, and impacts of events (like a global pandemic) on the travel industry.
>
>Based on these categories, classify the news items below:
>
>1. The upcoming federal election is seeing a shift in voter sentiment as recent polls suggest that the incumbent party might lose its majority in the parliament. The opposition party, bolstered by this change, has started aggressive campaigning focusing on issues such as healthcare reform and climate change.
>
>2. MegaTech Corporation announced a surprising decrease in its quarterly profits, citing supply chain disruptions due to recent global events. The news has resulted in a significant dip in the company's stock prices, and investors are worried about the impact on their portfolios.
>
>3. A recent report by the World Environmental Agency indicates a significant increase in global carbon emissions, despite various international climate agreements. Experts warn that if this trend continues, the world could face catastrophic climate changes in the future.
>
>4. Researchers at the Space Institute have detected a new exoplanet which could potentially support life. This discovery has ignited new excitement in the scientific community about the possibility of extraterrestrial life.
>
>5. The beautiful coastal city of Azure Bay has been named the top travel destination for this year by TravelWorld magazine. The city is known for its pristine beaches, vibrant culture, and delicious cuisine.
>
>\### Response:
>
>Sure, here are the stories you provided, classified below:
>
>1. The upcoming federal election is seeing a shift in voter sentiment as recent polls suggest that the incumbent party might lose its majority in the parliament. The opposition party, bolstered by this change, has started aggressive campaigning focusing on issues such as healthcare reform and climate change. Classification: Politics
>
>2. MegaTech Corporation announced a surprising decrease in its quarterly profits, citing supply chain disruptions due to recent global events. The news has resulted in a significant dip in the company's stock prices, and investors are worried about the impact on their portfolios. Classification: Business
>
>3. A recent report by the World Environmental Agency indicates a significant increase in global carbon emissions, despite various international climate agreements. Experts warn that if this trend continues, the world could face catastrophic climate changes in the future. Classification:

The OpenAI playground classifies items 3-5 correctly. I have tried quantized 30/33B versions of SuperCOT, Guanaco, and Wizard-Vicuna uncensored, as well as Samantha 13B. I have also tried different prompts, such as putting the examples in a tag in the input section and giving just one news item to classify. With the first prompt above, all LLMs just continued generating sample news items and never produced a classification. When I explicitly give examples first and then ask the model to continue after an example news item ending with "Classification:", some models just came up with new, ridiculous categories, even though I reiterated the categories they should select from. I also played with the model parameters (temperature, etc.) with no success.

Is there a good prompt I can try on these models to at least get some classification within my categories (even if it is the wrong classification)?

PS: I'm well aware that there are better NLP algorithms, and even a dedicated LLaMA classification tool; however, I want to see whether this can be done at the prompt level, as OpenAI seems to do a good job of it.

PS2: A classification problem like this seems to be a good benchmark for evaluating models.
2023-06-03T20:14:06
https://www.reddit.com/r/LocalLLaMA/comments/13zo3d8/what_prompts_can_be_used_for_text_classification/
brucebay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zo3d8
false
null
t3_13zo3d8
/r/LocalLLaMA/comments/13zo3d8/what_prompts_can_be_used_for_text_classification/
false
false
self
8
null
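One approach that often helps with instruction-tuned local models is to present the task one item at a time, few-shot, and force generation to stop right after the label. Below is a minimal sketch of that idea using llama-cpp-python; the model path is a placeholder, and the final step that snaps the raw output to the nearest allowed category is an illustrative addition, not something the post claims to work.

```python
# Minimal sketch: one-item-at-a-time classification with a local quantized
# model via llama-cpp-python. The model path is a placeholder. The "snap the
# answer to an allowed category" step is an illustrative addition.
from llama_cpp import Llama

CATEGORIES = [
    "Politics", "Business", "Technology", "Health", "Environment",
    "Sports", "Entertainment", "Science", "Education", "Travel",
]

llm = Llama(model_path="./models/wizard-vicuna-13b.ggml.q4_0.bin")  # placeholder

def classify(news_item: str) -> str:
    prompt = (
        "Classify the news item into exactly one of these categories: "
        + ", ".join(CATEGORIES) + ".\n\n"
        "News: The incumbent party might lose its majority after recent polls.\n"
        "Category: Politics\n\n"
        "News: MegaTech announced a surprising decrease in quarterly profits.\n"
        "Category: Business\n\n"
        f"News: {news_item}\n"
        "Category:"
    )
    # temperature=0 for determinism; stop at newline so only the label comes back.
    out = llm(prompt, max_tokens=8, temperature=0.0, stop=["\n"])
    raw = out["choices"][0]["text"].strip()
    # Snap to the nearest allowed label so invented categories are rejected.
    for cat in CATEGORIES:
        if cat.lower() in raw.lower():
            return cat
    return "Unknown"

print(classify("Researchers detected a new exoplanet that could support life."))
```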
Best multilingual model with context analysing capabilities
3
I am looking for a model that is good at translation with context. I am looking for something like this:

Prompt:

>Translate 'cool' from 'English' to 'Spanish' for youth. Use 'informal' language. The context is 'That skateboard is so cool!'. Additional notes are 'prefer informal slang terms'. Output only the translated text.

Reply:

>chévere

I already tried vicuna 13b and wizard\_vicuna13b; they are nowhere close. I can try any model size, I just need a direction on where to start looking.

UPDATE: I found the ehartford/based-30b model to be very close to what I need. I tested it with 2 languages and it works great. Look at these examples:

\[ "Translate \\"chévere\\" from Spanish to English using informal slang", "That's really fucking cool" \], \[ "Translate \\"chévere\\" from Spanish to English using formal business", "That is truly exceptionally impressive" \], \[ "Translate \\"chévere\\" from Spanish to English using informal day by day language", "That's pretty awesome" \]
2023-06-03T20:27:22
https://www.reddit.com/r/LocalLLaMA/comments/13zoggl/best_multilingual_model_with_context_analysing/
Ion_GPT
self.LocalLLaMA
2023-06-04T06:52:17
0
{}
13zoggl
false
null
t3_13zoggl
/r/LocalLLaMA/comments/13zoggl/best_multilingual_model_with_context_analysing/
false
false
self
3
null
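For anyone wanting to reproduce this kind of test, here is a minimal sketch that turns the post's structured prompt fields into a reusable template. The assumption that based-30b is served through llama-cpp-python as a quantized file is mine, and the model path is a placeholder, not the setup used in the post.

```python
# Minimal sketch: build a context-aware translation prompt from structured
# fields, mirroring the prompt format shown in the post. Loading the model
# through llama-cpp-python with this file path is an assumption.
from llama_cpp import Llama

llm = Llama(model_path="./models/based-30b.ggml.q4_0.bin")  # placeholder

def translate(word, src, dst, audience, register, context, notes):
    prompt = (
        f"Translate '{word}' from '{src}' to '{dst}' for {audience}. "
        f"Use '{register}' language. The context is '{context}'. "
        f"Additional notes are '{notes}'. Output only the translated text.\n"
        "Translation:"
    )
    # Low temperature keeps the answer short and on-task; stop at newline.
    out = llm(prompt, max_tokens=16, temperature=0.2, stop=["\n"])
    return out["choices"][0]["text"].strip()

print(translate("cool", "English", "Spanish", "youth", "informal",
                "That skateboard is so cool!", "prefer informal slang terms"))
```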
something nice to do with a 2070 super 8gb
1
[removed]
2023-06-03T22:14:37
https://www.reddit.com/r/LocalLLaMA/comments/13zrk05/something_nice_to_do_with_a_2070_super_8gb/
_throawayplop_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zrk05
false
null
t3_13zrk05
/r/LocalLLaMA/comments/13zrk05/something_nice_to_do_with_a_2070_super_8gb/
false
false
default
1
null
Best model for fast and accurate QA over documents?
9
I'm trying to set up an internal environment that will serve a small team to start, maybe 5 or 6 people. I don't care about the model's generative capability so much, as long as it's creative enough to understand and summarize documentation. Mainly I just want to demonstrate for my company the benefit of using this kind of LLM-backed vector search in our basic day-to-day. I only need that basic functionality, and I need it to be fast. I understand that performance can be enhanced by training it on a dataset that is reflective of the documentation that will be queried. That's totally doable. Does anyone have a recommendation for a base model that's not only fast, but reliable enough to start testing in a corporate environment?
2023-06-03T22:27:34
https://www.reddit.com/r/LocalLLaMA/comments/13zrx4e/best_model_fast_and_accurate_qa_over_documents/
gentlecucumber
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zrx4e
false
null
t3_13zrx4e
/r/LocalLLaMA/comments/13zrx4e/best_model_fast_and_accurate_qa_over_documents/
false
false
self
9
null
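The retrieval half of such a setup can be prototyped before committing to any particular LLM. Here is a minimal sketch of embedding and ranking document chunks with sentence-transformers and plain NumPy; the `all-MiniLM-L6-v2` model name is a common default rather than a recommendation from the post, and the LLM summarization step is deliberately left out.

```python
# Minimal sketch of the retrieval step for QA over documents: embed the
# chunks, embed the query, rank by cosine similarity. The embedding model
# is a common default, not one named in the post; summarization by an LLM
# is intentionally omitted.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Expense reports must be filed within 30 days of travel.",
    "VPN access requires a ticket approved by your team lead.",
    "The deployment pipeline runs integration tests before staging.",
]

# Normalized embeddings make the dot product equal to cosine similarity.
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def top_k(query: str, k: int = 2):
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    best = np.argsort(-scores)[:k]
    return [(docs[i], float(scores[i])) for i in best]

for doc, score in top_k("How do I get VPN access?"):
    print(f"{score:.3f}  {doc}")
```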
Increased context length?
6
I know that storyteller exists, but is there anything larger (13b-30b) that has 4-8k context? Running on a 4090
2023-06-04T00:00:56
https://www.reddit.com/r/LocalLLaMA/comments/13zuep6/increased_context_length/
Aischylos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zuep6
false
null
t3_13zuep6
/r/LocalLLaMA/comments/13zuep6/increased_context_length/
false
false
self
6
null
Has anyone actually done research on how well GPT-4 eval of model responses tracks?
11
Using GPT-4 to evaluate and score model responses seems to be this sub's standard. Has anyone actually done research to determine the validity of these evaluations on different tasks? If not, I intend to do so, and would *gladly* accept suggested task categories to compare to human scoring.
2023-06-04T00:05:31
https://www.reddit.com/r/LocalLLaMA/comments/13zuj67/has_anyone_actually_done_research_on_how_well/
FreezeproofViola
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13zuj67
false
null
t3_13zuj67
/r/LocalLLaMA/comments/13zuj67/has_anyone_actually_done_research_on_how_well/
false
false
self
11
null
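Since the question is about whether GPT-4 scores track human scores, a worked sketch of the statistics involved may help: rank correlation on paired per-response scores. The numbers below are made-up placeholders, not real evaluation data.

```python
# Minimal sketch: measure how well GPT-4 scores track human scores on the
# same responses, using Spearman rank correlation. The score arrays are
# made-up placeholders, not real evaluation data.
from scipy.stats import spearmanr

human_scores = [7, 3, 9, 5, 6, 2, 8, 4]   # placeholder human ratings (1-10)
gpt4_scores  = [8, 2, 9, 6, 5, 3, 7, 4]   # placeholder GPT-4 ratings (1-10)

rho, p_value = spearmanr(human_scores, gpt4_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.4f})")
# A high rho with a small p-value would suggest GPT-4 ranks responses
# similarly to humans on this task; repeating this per task category, as the
# post proposes, would show where the agreement breaks down.
```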