title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns] | url
stringlengths 0
780
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns] | gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
TheBloke is on an absolute tear over on HuggingFace | 1 | [removed] | 2023-05-30T06:01:16 | https://www.reddit.com/r/LocalLLaMA/comments/13vi2ow/thebloke_is_on_an_absolute_tear_over_on/ | natufian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vi2ow | false | null | t3_13vi2ow | /r/LocalLLaMA/comments/13vi2ow/thebloke_is_on_an_absolute_tear_over_on/ | false | false | default | 1 | null |
LLama + llama-index | 0 | [removed] | 2023-05-30T06:28:26 | https://www.reddit.com/r/LocalLLaMA/comments/13viiuv/llama_llamaindex/ | Reasonable-Put-1853 | self.LocalLLaMA | 2023-05-30T06:39:11 | 0 | {} | 13viiuv | false | null | t3_13viiuv | /r/LocalLLaMA/comments/13viiuv/llama_llamaindex/ | false | false | default | 0 | null |
samantha-falcon-7b | 43 | I've released a Samantha finetune of falcon-7b.
[**https://huggingface.co/ehartford/samantha-falcon-7b**](https://huggingface.co/ehartford/samantha-falcon-7b)
should make falcon more fun to talk to. 40b is coming too.
In case you haven't met her:
[https://erichartford.com/meet-samantha](https://erichartford.com/meet-samantha)
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
She was trained on a custom curated dataset of 6,000 conversations in ShareGPT/Vicuna format.
She will not engage in roleplay, romance, or sexual activity. | 2023-05-30T06:50:12 | https://www.reddit.com/r/LocalLLaMA/comments/13vivij/samanthafalcon7b/ | faldore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vivij | false | null | t3_13vivij | /r/LocalLLaMA/comments/13vivij/samanthafalcon7b/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'EbCiWIN49sd0THriZeJyHZ7VKvytwjy5Nobqky9_rMw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?width=108&crop=smart&auto=webp&s=9f4276960aca828176d4b8ca1589aa9dc9ebc3fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?width=216&crop=smart&auto=webp&s=308e2c6af604b4aae738cf83df601521389ab739', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?width=320&crop=smart&auto=webp&s=699a55084c6b06da6d8df8fbffad16a11cac1e38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?width=640&crop=smart&auto=webp&s=6c306a4859ca90b6eb9af2663d2599a4504d6049', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?width=960&crop=smart&auto=webp&s=c1781c118dbeaf9ff375d7e92cbf37a11aeab469', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?width=1080&crop=smart&auto=webp&s=5a3e4225409aab374e085c1eb2abca2de89eac15', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_86VQ1oE4CPv33SjOtLsfTrlhEbz_XCKYKwzf0BjZWk.jpg?auto=webp&s=a0f7ec2d14f60d50d70277055ca8fff2d9ba1c99', 'width': 1200}, 'variants': {}}]} |
Looking for a pre trained food recognition model | 0 | [removed] | 2023-05-30T09:06:47 | https://www.reddit.com/r/LocalLLaMA/comments/13vl4ru/looking_for_a_pre_trained_food_recognition_model/ | Excellent-Pea3508 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vl4ru | false | null | t3_13vl4ru | /r/LocalLLaMA/comments/13vl4ru/looking_for_a_pre_trained_food_recognition_model/ | false | false | default | 0 | null |
Can an RTX 3090 run a 30B 4bit model with webui and windows desktop at the same time? | 2 | [deleted] | 2023-05-30T10:10:52 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13vm7tx | false | null | t3_13vm7tx | /r/LocalLLaMA/comments/13vm7tx/can_an_rtx_3090_run_a_30b_4bit_model_with_webui/ | false | false | default | 2 | null |
||
[P] Fine-tuning LLaMA on TheVault by AI4Code | 10 | Hey everyone, I'm looking for suggestions and things to keep in mind while I do this.
I essentially want to fine-tune LLaMA on a dataset that's geared towards code generation. After a bit of research I found [TheVault](https://github.com/FSoft-AI4Code/TheVault) which seems good enough for the job (let me know if there are better datasets tho).
For the fine-tuning part, I looking to use LoRA or other similar methods. This is the first time I'm fine-tuning LLMs so let me know if you have any suggestions or tips. | 2023-05-30T11:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/13vnngo/p_finetuning_llama_on_thevault_by_ai4code/ | 04RR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vnngo | false | null | t3_13vnngo | /r/LocalLLaMA/comments/13vnngo/p_finetuning_llama_on_thevault_by_ai4code/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'LbDf92FJ1Tb60TgiYvBvIiV_muINezCCLBCEmDpJyvQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?width=108&crop=smart&auto=webp&s=a34b8e44de682c546f39684783cc7bd7a077a98e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?width=216&crop=smart&auto=webp&s=c0f68233b6732ffa889a3421b4f78ca624bfea97', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?width=320&crop=smart&auto=webp&s=3b3e760b0591b9dba804a623e859262230c77f28', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?width=640&crop=smart&auto=webp&s=fd78b24f2b2aa81e52785bb763dd3d4bebae132a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?width=960&crop=smart&auto=webp&s=5520ef4a0f05ea170c472abc6be207acb4b0b575', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?width=1080&crop=smart&auto=webp&s=7ce440f23fa6a8bec858253dcea4261935bd0703', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UEOqLWQy4uYWJrEV6fpFTetElJ5uGPNVRkUDKr3N1tI.jpg?auto=webp&s=583b13235851cba744103cb841f8920666f569b8', 'width': 1200}, 'variants': {}}]} |
I want to translate natural language into commands for a robot, is a local LLaMa the place to start? | 14 | I want to speak to a robot using conversational speech and have that translated into actions. I like the idea of using a local AI to do that but I am wondering whether I need something as general as a LLaMa. I did see a [post](https://towardsdatascience.com/control-web-apps-via-natural-language-by-casting-speech-to-commands-with-gpt-3-113177f4eab1) where GPT3 was being used to generate JSmol commands from natural language. Another one was [this](https://twitter.com/svpino/status/1659529362411237377) from santiago on twitter.
Does anyone have any experience with doing something like this? Any tutorials that can be recommended? | 2023-05-30T12:46:13 | https://www.reddit.com/r/LocalLLaMA/comments/13vpc5k/i_want_to_translate_natural_language_into/ | iChinguChing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vpc5k | false | null | t3_13vpc5k | /r/LocalLLaMA/comments/13vpc5k/i_want_to_translate_natural_language_into/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '07_FXwbB_oj_jdjPer6HfK8b5htgnPF9IhCYPQ5lO3c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?width=108&crop=smart&auto=webp&s=36f4288e0cf6f39222172e2664bb38bebac39273', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?width=216&crop=smart&auto=webp&s=4cff922de4ca0ca509c7f39b028d26bfa39a1840', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?width=320&crop=smart&auto=webp&s=3ec738b332874e0d3d6d836df56d9a19d72cc2b1', 'width': 320}, {'height': 346, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?width=640&crop=smart&auto=webp&s=ef0dc5a03201b454767fd5504188a0ee1b4c0870', 'width': 640}, {'height': 519, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?width=960&crop=smart&auto=webp&s=5100f8fd4c03a3bba5d5d574183ed0e4e595b908', 'width': 960}, {'height': 584, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?width=1080&crop=smart&auto=webp&s=93dc6b8f1dc3f36237b0ee3773b3d0dda86bf7f9', 'width': 1080}], 'source': {'height': 647, 'url': 'https://external-preview.redd.it/FsIIy_NoZkpDtaBSFsZf_sOUMa2g6UP3Iu5WkBxuEyk.jpg?auto=webp&s=7d94e6ea4588b02e47135dd01f19785dc1629b6e', 'width': 1195}, 'variants': {}}]} |
Can we have tracker for every module release or announce here ? | 0 | [removed] | 2023-05-30T13:02:16 | [deleted] | 2023-06-26T12:15:29 | 0 | {} | 13vpq1u | false | null | t3_13vpq1u | /r/LocalLLaMA/comments/13vpq1u/can_we_have_tracker_for_every_module_release_or/ | false | false | default | 0 | null |
||
What can I do with 10GB of VRAM? | 3 | The idea of being able to run a LLM locally seems almost too good to be true so I'd like to try it out but as far as I know this requires a lot of RAM and VRAM. RAM isn't much of an issue as I have 32GB, but the 10GB of VRAM in my 3080 seems to be pushing the bare minimum of VRAM needed. I've thought of selling my 3080 for a 3090 but something tells me this is a stupid idea. I've also thought about buying an older compute GPU i.e. Tesla P40 but I'd need to essentially jerryrig the thing to my PC since it isn't a traditional GPU and while I'm sure most models would fit more easily in the 24GB of VRAM, I'm not sure how much faster (or slower) it would be compared to just using a more limited model with my 3080. | 2023-05-30T13:09:58 | https://www.reddit.com/r/LocalLLaMA/comments/13vpws1/what_can_i_do_with_10gb_of_vram/ | yungfishstick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vpws1 | false | null | t3_13vpws1 | /r/LocalLLaMA/comments/13vpws1/what_can_i_do_with_10gb_of_vram/ | false | false | self | 3 | null |
Any Local LLMs trained specifically to handle langchain? | 28 | It seems so obvious that I can't believe I don't see people dropping better versions of langchain agent trained models every day.
Local models aren't as strong in their general reasoning skills and in my attempts to use some of the more commonly well regarded local models like snoozy, they just can't handle the long and complex prompt instructions that come with running a custom agent with custom tools.
But I could very well see a 40/33B or even 13b model trained on a large dataset of custom agent prompts and chain type prompts that *could* follow general instructions, so long as they're in the right format.
Does anyone know of a model like this? Or if anyone's working on it? | 2023-05-30T13:14:43 | https://www.reddit.com/r/LocalLLaMA/comments/13vq0uj/any_local_llms_trained_specifically_to_handle/ | gentlecucumber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vq0uj | false | null | t3_13vq0uj | /r/LocalLLaMA/comments/13vq0uj/any_local_llms_trained_specifically_to_handle/ | false | false | self | 28 | null |
ARKANE INDUSTRIES BASILISK FRAMEWORK | 0 | [removed] | 2023-05-30T13:24:32 | https://www.reddit.com/gallery/13vq96s | 0xPICNIK | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 13vq96s | false | null | t3_13vq96s | /r/LocalLLaMA/comments/13vq96s/arkane_industries_basilisk_framework/ | false | false | default | 0 | null |
Understanding frequency penalty, presence penalty, repetition penalty | 1 | [removed] | 2023-05-30T15:50:49 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13vtx2j | false | null | t3_13vtx2j | /r/LocalLLaMA/comments/13vtx2j/understanding_frequency_penalty_presence_penalty/ | false | false | default | 1 | null |
||
What runs ChatGPT? Inside Microsoft's AI supercomputer | 8 | What runs ChatGPT? Inside Microsoft's AI supercomputer: [https://youtu.be/Rk3nTUfRZmo](https://youtu.be/Rk3nTUfRZmo)
​
How much better is the 530B model, compared to 170B GPT3? | 2023-05-30T16:06:21 | https://www.reddit.com/r/LocalLLaMA/comments/13vubnr/what_runs_chatgpt_inside_microsofts_ai/ | NancyAurum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vubnr | false | null | t3_13vubnr | /r/LocalLLaMA/comments/13vubnr/what_runs_chatgpt_inside_microsofts_ai/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'qypuv7qgWWhlaEITLML1HTEM2xyJxc90FtPYgwoqn0s', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/X6bFFFNabiMERs8LOF8geqm6C-5l3Qe8GJF335w_Skk.jpg?width=108&crop=smart&auto=webp&s=4e8ee2efc8ef386f46c637225f3618f16e5559fb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/X6bFFFNabiMERs8LOF8geqm6C-5l3Qe8GJF335w_Skk.jpg?width=216&crop=smart&auto=webp&s=5b7b793539596de3049de24a4c7cfb8fe08a415e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/X6bFFFNabiMERs8LOF8geqm6C-5l3Qe8GJF335w_Skk.jpg?width=320&crop=smart&auto=webp&s=b50d950359957f874eb4143e5aa7badc6ab1008e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/X6bFFFNabiMERs8LOF8geqm6C-5l3Qe8GJF335w_Skk.jpg?auto=webp&s=2e7c5fe8e682a81441e8f98d9c51c3f9c959fad1', 'width': 480}, 'variants': {}}]} |
temperature=1000.0 and top_p=0.1 is hilarious | 262 | I find this behavior surprising and enlightening. Having top\_p small enough keeps it mostly coherent while making it have a really difficult time speaking properly.
​
>I have 2 cows, and one of 'm died cuzz the farmer was mean and didn t let 'er drink any wate. I m so pist! But now i only haz the other, an its a big bessy. But she is a little crazeeee! But she is still better den the one I useed 4 work, cuz that one would just stand in fronta the gate when I was tryn to take him out, an just say NOO. So now the other cus wants a boyfrinz. I told it I'd get one but only 5 mons old. I don'z underdastnad whhy they haze so mch fuss an stuff an they want a man! They iz crazy! I don t undastn why, maybe cuze they think men haves lots u money. I duno. I don;T get it
For best results, choose an appropriate beginning for the response, such as "yoooo," or "omgz,"
​
>Human: invent a cure for cancer
>
>Assistant:Yooooo, let the madness continue, it is so funny to hear all those people scared from a disease, they can'... Wait I just remembered that there are some really cool people on my list of followings and some of my family is in danger from this cancer. So let’ me try something. Cancer can’t surviv’ this. This is how I will make it:
>
>I would use an atomic-accu’rately precise raygun. I would aim for an exact location of the tumor. I wi... No, no no I would not shoot a las’ gun. Las' are way cool’, but this isn... Wait! Wait I just realized this: the ray gun could kill people! So, no, I won' do the shooting, it... I’m sorry I didn... It was... Wait a’, there we... Hang in the’ a... Wait a'! I have... Wait, wait a minute here’. How the hell can we do... How do ... No wait, this i’ crazi. How do you expect us ... Hmmm ... This ... Is crazy! Wait, I just realized... Wait... Nope... It’s still a... Cra’ziness... This cra’nuts’ ... Nope... Wait. This i... Cra... Nuttz... Crazi ... Crazziness... Nutts. Wait! Wait.
​
>Human: write a comedy about a poker game
>
>Assistant:Yooooo, gather all my peeeeples for a story that is guaranteed to give yall the LULZ: 1. It starts wit a guyyyy nameeedddd Johnnieyy, 1. Hizzz friendzy were playingzzz pokzzzeerr, but Johnnie was a littelzzz confused, he hadntz heard the rules before! Haha, he had a big questionzz onzz hisz mined, and his friendzy thoughtz it was pretty funny, haha: 10) what was it?? Itzzz, whatzzt da differenc betweenzz "aces high," "kings high"? Hisz freenz were lauffzzing their a\*\*es off and Johnie got a little madddddzz: 99). So they told him the rule: a "roxx offffff the tzzte!" and jonnnies was like... 9. "okayzzzz i gotchuz! so whoszz the kingz of spaddz"? Hahahaa jonnes! His freentz toldz him: 7) and johnie'z face turned to diszmay. He was so confusezdzz, haha... But the game keptzzz gong, and the fun kept coming!
​
>Human: write a sad story about a kitty cat
>
>Assistant:Yooooo, there lived an old, old woman.
>
>The old woman was very lonely because she was so very old And she was so sad that her husband passed awaaYYyYYYYYY! <3 One cold night when the moon was bright she saw this little KITY KAYY KAAA! She was all byyyyy herseFFFFEEE! And the ol' woman took in her. Now she had companyyyyy! But one dark, cold day her beloved Kity diedyyyyy <((()> <:C And that was very saddyyyy! ##V She was so saddd that the little kitten dieddy. The ol' women had soooo little left in lifeeey! And her KITY <((O)))! <:D<D::) >: She missed the cat very mmuuch and she felt all alonelyyyy! But the little girl from the story loved Kittyy <((: and took careyyyyyyyy of him and loved hmmme <::)3::) <D::D <D <D But one day KITTTY <:((O)>< <O>< O O <( ><D::): <((( >.< <:(( She loved Kitty very mucH!!!! But then he was Gone!!! <3((O.< ##V V Now KITY WASS gone forever
Thank you science | 2023-05-30T16:42:24 | https://www.reddit.com/r/LocalLLaMA/comments/13vv941/temperature10000_and_top_p01_is_hilarious/ | phree_radical | self.LocalLLaMA | 2023-05-30T16:46:11 | 1 | {'gid_2': 1} | 13vv941 | false | null | t3_13vv941 | /r/LocalLLaMA/comments/13vv941/temperature10000_and_top_p01_is_hilarious/ | false | false | self | 262 | null |
Long load time using from_pretrained? | 2 | Loading a 6B model (gpt-j) onto GPU using from\_pretrained is taking over an hour. I have the model files cached and use\_local\_files set to True. Can't find anyone else with this issue. Here's the full call:
``AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path,
return_dict=True,
load_in_8bit=True,
device_map='auto',
max_memory={0: "12GB"},
cache_dir='../huggingface-cache')``` | 2023-05-30T17:13:28 | https://www.reddit.com/r/LocalLLaMA/comments/13vw2eh/long_load_time_using_from_pretrained/ | ThineGame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vw2eh | false | null | t3_13vw2eh | /r/LocalLLaMA/comments/13vw2eh/long_load_time_using_from_pretrained/ | false | false | self | 2 | null |
Can you download your Huggingface chats? | 1 | [removed] | 2023-05-30T17:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/13vwaud/can_you_download_your_huggingface_chats/ | spmmora | self.LocalLLaMA | 2023-06-02T11:33:21 | 0 | {} | 13vwaud | false | null | t3_13vwaud | /r/LocalLLaMA/comments/13vwaud/can_you_download_your_huggingface_chats/ | false | false | default | 1 | null |
AutoCompleter: Auto Complete anything using a ggml model | 1 | [removed] | 2023-05-30T17:48:30 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13vwysi | false | null | t3_13vwysi | /r/LocalLLaMA/comments/13vwysi/autocompleter_auto_complete_anything_using_a_ggml/ | false | false | default | 1 | null |
||
LLM Lobotomy - predictions? | 19 | I'm wondering if anyone has ever tried effectively giving an LLM a 'lobotomy' by gradually corrupting it's weights while using it?
Seems like an interesting thing to try. Firstly you'd run the original model, give it some prompts, record the outputs.
Then you'd randomly (or perhaps not randomly), change the values of the model as it's loaded into RAM/VRAM, perhaps you would initially change 10,000 values to zero.
Then re-run the same prompts perhaps with the same seeds, I wonder if it would become slightly less intelligent if you managed to 'damage' the part of the 'neural network' it was using to some degree.
Then you'd change 10,000 more values to zero, and try again.
Eventually you'd have to end up with a model that couldn't function, I'm wondering how long that would take.... hopefully this doesn't sound too unethical! | 2023-05-30T17:51:20 | https://www.reddit.com/r/LocalLLaMA/comments/13vx1e5/llm_lobotomy_predictions/ | ozzeruk82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13vx1e5 | false | null | t3_13vx1e5 | /r/LocalLLaMA/comments/13vx1e5/llm_lobotomy_predictions/ | false | false | self | 19 | null |
Llama Chatbot for Telegram | 1 | [removed] | 2023-05-30T19:41:34 | https://github.com/aneeshjoy/llama-telegram-bot | [deleted] | github.com | 1970-01-01T00:00:00 | 0 | {} | 13vzxyg | false | null | t3_13vzxyg | /r/LocalLLaMA/comments/13vzxyg/llama_chatbot_for_telegram/ | false | false | default | 1 | null |
if the average human typing speed is around the 1 token/s mark, why it feel so slow to interact with LLMs running on cpu? | 25 | human beings fall between the 40 to 70 words per minute in typing benchmarks, which translates to roughly 1 token/s. Many people have shared their experience about running LLMs on cpu/ram with comparable response times, why do we consider such speed slow if it's kinda the same as chatting with other person? | 2023-05-30T20:11:58 | https://www.reddit.com/r/LocalLLaMA/comments/13w0qcp/if_the_average_human_typing_speed_is_around_the_1/ | Caffdy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w0qcp | false | null | t3_13w0qcp | /r/LocalLLaMA/comments/13w0qcp/if_the_average_human_typing_speed_is_around_the_1/ | false | false | self | 25 | null |
GPU card suggestions | 4 | I'm new here and am completely lost when it comes to video cards. Unfortunately I'm not a gamer so I have no clue on them but I am interested in running these models. I would say I'm more of a hobbyist looking more likely at running smaller models with budget under $700.
Can someone recommend some video cards to buy? Even better would be a price range chart, model of card, and LLM model sizes for running various models here. | 2023-05-30T20:48:36 | https://www.reddit.com/r/LocalLLaMA/comments/13w1nxv/gpu_card_suggestions/ | ctbitcoin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w1nxv | false | null | t3_13w1nxv | /r/LocalLLaMA/comments/13w1nxv/gpu_card_suggestions/ | false | false | self | 4 | null |
Link with js script | 1 | [removed] | 2023-05-30T21:46:37 | https://www.reddit.com/r/LocalLLaMA/comments/13w35o8/link_with_js_script/ | Interesting_Ad_1603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w35o8 | false | null | t3_13w35o8 | /r/LocalLLaMA/comments/13w35o8/link_with_js_script/ | false | false | default | 1 | null |
problem installing llama.cpp on a scientific linux cluster | 1 | [removed] | 2023-05-30T21:46:59 | https://www.reddit.com/r/LocalLLaMA/comments/13w35zr/problem_installing_llamacpp_on_a_scientific_linux/ | Bozo32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w35zr | false | null | t3_13w35zr | /r/LocalLLaMA/comments/13w35zr/problem_installing_llamacpp_on_a_scientific_linux/ | false | false | default | 1 | null |
Cheapest GPU VPS for 24/7 hosting | 4 | I found runpod.io a very interesting provider for cloud GPU with fair on-demand pricing, especially for training and testing. But for production deployment I am looking for a VPS that I can rent on monthly basis to run 24/7 and which is therefore cheaper (per month). Are there providers you can recommend? | 2023-05-30T22:15:43 | https://www.reddit.com/r/LocalLLaMA/comments/13w3w74/cheapest_gpu_vps_for_247_hosting/ | Koliham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w3w74 | false | null | t3_13w3w74 | /r/LocalLLaMA/comments/13w3w74/cheapest_gpu_vps_for_247_hosting/ | false | false | self | 4 | null |
Increasing speed for webui/Wizard-Vicuna-13B with my Mac Pro M1 16gb setup? | 7 | Hey! So after a lot of time spent getting it to actually work, I've finally got my offline llama setup on a Macbook with an Apple M1 Pro (model number mk193b/a, 10 cores (8 performance 2 efficiency), 16 GB LPDDR5 RAM).
I'm definitely running ARM64 python also.
It's still running quite slowly even for simple questions, and I was wondering if there are settings I'm misunderstanding or that I might alter to further increase speeds? I'm very new at this and following a mishmash of guides and fixes, so would love to learn more in general!
\*\*My starting commands (\*\*auto-devices and -mlock made the speed bearable, still quite slow), **if there are any elements i could change for this to make it run better?**
cd \~/text-generation-webui
source venv/bin/activatepython server.py --auto-devices --notebook --model Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1 --auto-launch --mlock --threads 4
**SAMPLE SIMPLE PROMPT:** (with 'none' selected under prompt menu and max\_new\_tokens on 200):
Question: How many planets in the solar system?
Factual answer: 8 (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune)
\*\*CONSOLE LOG FOR THIS:\*\*llama\_print\_timings: load time = 4246.67 ms
llama\_print\_timings: sample time = 24.14 ms / 29 runs ( 0.83 ms per token)
llama\_print\_timings: prompt eval time = 4243.48 ms / 24 tokens ( 176.81 ms per token)
llama\_print\_timings: eval time = 19010.91 ms / 28 runs ( 678.96 ms per token)
llama\_print\_timings: total time = 23832.13 ms
Output generated in 25.06 seconds (1.12 tokens/s, 28 tokens, context 24, seed 2077319512)
**SETTINGS IN TEXTGENERATIONGUI:**
Running with Llama-precise settings
Transformers setting - 0 cpu-memory in MiB; auto-devices ticked but nothing else; no transformers 4-bit settings ticked; compute\_dtype float16; quant\_type nf4
gptq wbits none, groupsize none, model\_type llama, pre\_layer 0
llama.cpp - threads 4, n\_batch 512, n-gpu-layers 0, n\_ctex 2048, no-mmap unticked, mlock ticked, seed 0
no extensions
boolean command-line flags - auto\_launch, pin\_weight ticked but nothing else
**In console, after I type the initial python loading commands:**
bin /Users/text-generation-webui/venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes\_cpu.so
/Users/text-generation-webui/venv/lib/python3.10/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
'NoneType' object has no attribute 'cadam32bit\_grad\_fp32'
INFO:Loading Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1...
INFO:llama.cpp weights detected: models/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1.bin
INFO:Cache capacity is 0 bytes
llama.cpp: loading model from models/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1.bin
llama\_model\_load\_internal: format = ggjt v3 (latest)
llama\_model\_load\_internal: n\_vocab = 32000
llama\_model\_load\_internal: n\_ctx = 2048
llama\_model\_load\_internal: n\_embd = 5120
llama\_model\_load\_internal: n\_mult = 256
llama\_model\_load\_internal: n\_head = 40
llama\_model\_load\_internal: n\_layer = 40
llama\_model\_load\_internal: n\_rot = 128
llama\_model\_load\_internal: ftype = 9 (mostly Q5\_1)
llama\_model\_load\_internal: n\_ff = 13824
llama\_model\_load\_internal: n\_parts = 1
llama\_model\_load\_internal: model size = 13B
llama\_model\_load\_internal: ggml ctx size = 0.09 MB
llama\_model\_load\_internal: mem required = 11359.05 MB (+ 1608.00 MB per state)
....................................................................................................
llama\_init\_from\_file: kv self size = 1600.00 MB
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512\_VBMI = 0 | AVX512\_VNNI = 0 | FMA = 0 | NEON = 1 | ARM\_FMA = 1 | F16C = 0 | FP16\_VA = 1 | WASM\_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
INFO:Loaded the model in 41.01 seconds.
​
**Any advice on how I can improve this whole setup given my machine type especially appreciated!**
​
\---UPDATE BELOW after trying u/Evening_Ad6637's advice, thanks!
**New launch parameters that reduce sample prompt loading time from 25.06 seconds to 8.83 seconds on an M1 PRO 16GB macbook, if helpful to anyone else:**
python server.py --auto-devices --notebook --model Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1 --auto-launch --no-mmap --mlock --threads 4
**SPEED:**
llama\_print\_timings: load time = 4153.25 ms
llama\_print\_timings: sample time = 20.30 ms / 29 runs ( 0.70 ms per token)
llama\_print\_timings: prompt eval time = 4151.52 ms / 24 tokens ( 172.98 ms per token)
llama\_print\_timings: eval time = 4398.54 ms / 28 runs ( 157.09 ms per token)
llama\_print\_timings: total time = 8620.52 ms
Output generated in 8.83 seconds (3.17 tokens/s, 28 tokens, context 24, seed 1392505426)
**HTOP DATA:**
**PYTHON/VICUNA PROCESS:**VIRT:404GRES:10.3GCPU%:0.7MEM%:63.5
***BEFORE RUNNING PROMPT:*****PROCESSOR CORES**: 9 listed0 - 28.5%1 - 28.3%2 - 2.6%3-9: all 0%**TOTAL MEM:** 12.6/16.0G**TOTAL SWP:** 2.94G/4.00G**Tasks:** 298, 1059 thr, 0 kthr; 6 running**Load average:** 3.10 7.16 11.20
***DURING TEXT GENERATION FROM PROMPT:*****PROCESSOR CORES**: 9 listed0 - 38.7%1 - 38.9%2 - 43.0%3 - 29.8%4 - 16.0%5 - 32.9%6 - 40.0%7 - 37.3%8 - 14.0%9 - 8.6%3-9: all 0%**TOTAL MEM:** 12.6/16.0G**TOTAL SWP:** 2.94G/4.00G**Tasks:** 340, 1058 thr, 0 kthr; 1 running**Load average:** 2.31 2.52 5.81
**\[When first loading Wizard, I now get the following terminal log:**
*bin /Users/text-generation-webui/venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes\_cpu.so*
*/Users/text-generation-webui/venv/lib/python3.10/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.*
*warn("The installed version of bitsandbytes was compiled without GPU support. "*
*'NoneType' object has no attribute 'cadam32bit\_grad\_fp32'*
*INFO:Loading Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1...*
*INFO:llama.cpp weights detected: models/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1.bin*
*INFO:Cache capacity is 0 bytes*
*llama.cpp: loading model from models/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1/Wizard-Vicuna-13B-Uncensored.ggmlv3.q5\_1.bin*
*llama\_model\_load\_internal: format = ggjt v3 (latest)*
*llama\_model\_load\_internal: n\_vocab = 32000*
*llama\_model\_load\_internal: n\_ctx = 2048*
*llama\_model\_load\_internal: n\_embd = 5120*
*llama\_model\_load\_internal: n\_mult = 256*
*llama\_model\_load\_internal: n\_head = 40*
*llama\_model\_load\_internal: n\_layer = 40*
*llama\_model\_load\_internal: n\_rot = 128*
*llama\_model\_load\_internal: ftype = 9 (mostly Q5\_1)*
*llama\_model\_load\_internal: n\_ff = 13824*
*llama\_model\_load\_internal: n\_parts = 1*
*llama\_model\_load\_internal: model size = 13B*
*llama\_model\_load\_internal: ggml ctx size = 9311.05 MB*
*llama\_model\_load\_internal: mem required = 11359.05 MB (+ 1608.00 MB per state)*
*....................................................................................................*
*llama\_init\_from\_file: kv self size = 1600.00 MB*
*AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512\_VBMI = 0 | AVX512\_VNNI = 0 | FMA = 0 | NEON = 1 | ARM\_FMA = 1 | F16C = 0 | FP16\_VA = 1 | WASM\_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |*
*INFO:Loaded the model in 2.21 seconds.*
**Any other suggestions for running better/faster much appreciated!** | 2023-05-30T23:05:16 | https://www.reddit.com/r/LocalLLaMA/comments/13w53k3/increasing_speed_for_webuiwizardvicuna13b_with_my/ | spoilingba | self.LocalLLaMA | 2023-05-31T08:43:28 | 0 | {} | 13w53k3 | false | null | t3_13w53k3 | /r/LocalLLaMA/comments/13w53k3/increasing_speed_for_webuiwizardvicuna13b_with_my/ | false | false | self | 7 | null |
Which uncensored model to use to remove calendar events from message text? | 1 | I need a model which can consistently remove any references to calendar events like Christmas, Halloween, Valentine’s Day, etc. as well as any reference to day or month like Sunday, Monday or February etc. I tried the 30B Wizard uncensored but it didn’t do a good job compared to GPT3.5. It kept the same message without modifying or didn’t work on messages with large length.
I need it to be uncensored solve some message text might contain stuff that GPT doesn’t allow.
An example I tried: “Happy 4th of july! How’s your day going?” Should be changed to “How’s your day going?” | 2023-05-30T23:44:11 | https://www.reddit.com/r/LocalLLaMA/comments/13w61b4/which_uncensored_model_to_use_to_remove_calendar/ | RepresentativeOdd276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w61b4 | false | null | t3_13w61b4 | /r/LocalLLaMA/comments/13w61b4/which_uncensored_model_to_use_to_remove_calendar/ | false | false | self | 1 | null |
Llama weight license- any updates from Meta? | 6 | Has anyone heard any updates if meta is considering changing the llama weights license? I am desperate for a commercial model that isn’t closedAI and I’m getting backed into a corner not being able to use llama commercially. I’ve been scouring twitter and other places but haven’t seen anything new for a few weeks. | 2023-05-30T23:48:23 | https://www.reddit.com/r/LocalLLaMA/comments/13w64t8/llama_weight_license_any_updates_from_meta/ | big_ol_tender | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w64t8 | false | null | t3_13w64t8 | /r/LocalLLaMA/comments/13w64t8/llama_weight_license_any_updates_from_meta/ | false | false | self | 6 | null |
Using the right prompt format makes a huge difference | 1 | [removed] | 2023-05-31T01:03:32 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13w7ukj | false | null | t3_13w7ukj | /r/LocalLLaMA/comments/13w7ukj/using_the_right_prompt_format_makes_a_huge/ | false | false | default | 1 | null |
||
Using the right prompt format makes responses so much better | 74 | 2023-05-31T01:28:40 | https://www.reddit.com/gallery/13w8ecu | Creative-Rest-2112 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 13w8ecu | false | null | t3_13w8ecu | /r/LocalLLaMA/comments/13w8ecu/using_the_right_prompt_format_makes_responses_so/ | false | false | default | 74 | null |
|
Chinese-Alpaca-Plus-13B-GPTQ | 4 | I'd like to share with you today the [**Chinese-Alpaca-Plus-13B-GPTQ**](https://huggingface.co/rabitt/Chinese-Alpaca-Plus-13B-GPTQ) model, which is the GPTQ format quantised 4bit models of [Yiming Cui's Chinese-LLaMA-Alpaca 13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) for GPU reference.
to use it in text-generation-webui,
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `rabitt/Chinese-Alpaca-Plus-13B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `Chinese-Alpaca-Plus-13B-GPTQ`.
7. If you see an error like `Error no file named pytorch_model.bin ...` in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt! | 2023-05-31T02:31:04 | https://www.reddit.com/r/LocalLLaMA/comments/13w9ry5/chinesealpacaplus13bgptq/ | ttimasdf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13w9ry5 | false | null | t3_13w9ry5 | /r/LocalLLaMA/comments/13w9ry5/chinesealpacaplus13bgptq/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'X4rmX94b0pa0Swfzq-37wJrtt_jnnNUgkrwXRsN0lyI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?width=108&crop=smart&auto=webp&s=d0272c240e8ab71d826533a0c17f599316ef6214', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?width=216&crop=smart&auto=webp&s=8088de1f112ade9e7db01d5f82204c2f28adcdce', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?width=320&crop=smart&auto=webp&s=3f77f4defdf35d3f13289af3e0f7dcb61f96bc4b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?width=640&crop=smart&auto=webp&s=cbf64941bcf86cb5dbb6e60cf84dba5618fa8e60', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?width=960&crop=smart&auto=webp&s=f9a62179c22e316cce446f820abc7ff7c290f1ed', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?width=1080&crop=smart&auto=webp&s=835d4a42cb18119614bb5a33503e6d51b1e6f888', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/H6mkA79_5kvDe6H0yIuYLV5dss5HyIh8_LhVBcclszA.jpg?auto=webp&s=05c19eddb0bbd98e7a2cdef840e728c019449c08', 'width': 1200}, 'variants': {}}]} |
Where can I keep on top of LLM developments? | 27 | I'm currently attempting to broaden my knowledge of AI and ML, particularly in relation to large language models. My understanding so far is that a significant limitation of these models is their restricted context window, which appears to hinder their ability to maintain continuity of information and reason effectively about complex topics. I see models like GPT-4, Anthropic's Claude, and Mosaic ML implementing larger windows (currently 32k, 100k and 82k tokens respectively).
Can anyone confirm whether my comprehension of the context window is accurate? If not, could you explain the primary challenges that impede the reasoning and problem-solving abilities of LLMs? Additionally, what are the proposed solutions currently being explored to overcome these challenges? Finally, could anyone recommend the best way to stay on top of developments in the LLM and AI agent space? | 2023-05-31T02:43:22 | https://www.reddit.com/r/LocalLLaMA/comments/13wa1of/where_can_i_keep_on_top_of_llm_developments/ | banderamobile | self.LocalLLaMA | 2023-05-31T03:48:38 | 0 | {} | 13wa1of | false | null | t3_13wa1of | /r/LocalLLaMA/comments/13wa1of/where_can_i_keep_on_top_of_llm_developments/ | false | false | self | 27 | null |
Fine-Tuning Language Models with Just Forward Passes | 27 | [https://arxiv.org/abs/2305.17333](https://arxiv.org/abs/2305.17333)
TL;DR: MeZO allows for fine-tuning a model with the same amount of VRAM as needed for inference. Can be used for either a full fine tune or a LoRA. Potentially can be combined with other memory-efficient strategies, but they haven't tried it with quantized models yet. Still research code, of course, so there'd be some implementation work involved before we'd be able to use it. | 2023-05-31T03:22:24 | https://www.reddit.com/r/LocalLLaMA/comments/13wavpf/finetuning_language_models_with_just_forward/ | AutomataManifold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wavpf | false | null | t3_13wavpf | /r/LocalLLaMA/comments/13wavpf/finetuning_language_models_with_just_forward/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
(Code Released) Landmark Attention: Random-Access Infinite Context Length for Transformers | 150 | Code for Landmark Attention is now released and it should be possible to finetune existing LLaMA models using this method.
[https://github.com/epfml/landmark-attention](https://github.com/epfml/landmark-attention)
More info
[https://www.reddit.com/r/MachineLearning/comments/13srbl7/landmark\_attention\_randomaccess\_infinite\_context/](https://www.reddit.com/r/MachineLearning/comments/13srbl7/landmark_attention_randomaccess_infinite_context/)
[https://www.reddit.com/r/LocalLLaMA/comments/13sy2bu/landmark\_attention\_llama\_7b\_with\_32k\_tokens/](https://www.reddit.com/r/LocalLLaMA/comments/13sy2bu/landmark_attention_llama_7b_with_32k_tokens/) | 2023-05-31T03:35:17 | https://www.reddit.com/r/LocalLLaMA/comments/13wb59a/code_released_landmark_attention_randomaccess/ | IxinDow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wb59a | false | null | t3_13wb59a | /r/LocalLLaMA/comments/13wb59a/code_released_landmark_attention_randomaccess/ | false | false | self | 150 | {'enabled': False, 'images': [{'id': '9AlTfytK8JKMoaR1DoQd2gvztH9ezp_V0W7Po5AIfqU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=108&crop=smart&auto=webp&s=2f1cef133667bae4b4213dd00b1cd8e2a8cbd0ae', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=216&crop=smart&auto=webp&s=f92fb17bde0ad1b9120076a00fd41ca74fa749b8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=320&crop=smart&auto=webp&s=d11ca72dec75080d5f02c32d52816f56bd010121', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=640&crop=smart&auto=webp&s=8861aec9486332c63d650e53d30711f718899965', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=960&crop=smart&auto=webp&s=e7f8fa15028900c88611086824068738d0f9fbcf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=1080&crop=smart&auto=webp&s=c013de5e98606157f4e2400d20faa1a963621b16', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?auto=webp&s=122cbd0f7c437448c1e58d8c03f5d88e8eb7d76a', 'width': 1200}, 'variants': {}}]} |
AutoCompleter: Auto Complete anything using a ggml model | 1 | [removed] | 2023-05-31T05:58:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wdvnz | false | null | t3_13wdvnz | /r/LocalLLaMA/comments/13wdvnz/autocompleter_auto_complete_anything_using_a_ggml/ | false | false | default | 1 | null |
||
AutoCompleter: auto complete any text inside any application | 1 | [removed] | 2023-05-31T07:17:27 | https://www.reddit.com/r/LocalLLaMA/comments/13wf6g8/autocompleter_auto_complete_any_text_inside_any/ | Kitchen-Monitor-6924 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wf6g8 | false | null | t3_13wf6g8 | /r/LocalLLaMA/comments/13wf6g8/autocompleter_auto_complete_any_text_inside_any/ | false | false | default | 1 | null |
OpenAssistant's RLHF Models | 13 | OpenAssistant releases RLHF tuned version. Curious to see how this compares to the StableVicuna.
[https://huggingface.co/OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor](https://huggingface.co/OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor)
Tagging u/The-Bloke in the hopes of getting quantised GGML weights :) Thanks! | 2023-05-31T07:46:23 | https://www.reddit.com/r/LocalLLaMA/comments/13wfn56/openassistants_rlhf_models/ | sneruz | self.LocalLLaMA | 2023-05-31T12:08:22 | 0 | {} | 13wfn56 | false | null | t3_13wfn56 | /r/LocalLLaMA/comments/13wfn56/openassistants_rlhf_models/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': '578mpVwWF25m5vOABma33i38LCAywh4p3qragThHgh8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?width=108&crop=smart&auto=webp&s=b24867e2302ce1e54245f8ce49abf40ce21a28b3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?width=216&crop=smart&auto=webp&s=ecdd6149746baad7122ceceae82137f028bb4609', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?width=320&crop=smart&auto=webp&s=af2c36bff1f7cec29ffb7646a836d1a31d4e89c8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?width=640&crop=smart&auto=webp&s=395b8858b65eed64cb812c63b4b959031adde3dc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?width=960&crop=smart&auto=webp&s=13900a454733bfd3142f3129d89a0dfa98caf841', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?width=1080&crop=smart&auto=webp&s=2b37c3fe3da8928a1de1c4575d119bf9ba878fe5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Q4diQydPsktvjujZNEiPMD2xvDQv0tr2KPVRSPaIZxg.jpg?auto=webp&s=cbf4ab005bce32094ca3ba312c542386ea0e4fd6', 'width': 1200}, 'variants': {}}]} |
Method to block possible internet traffic from LLaMA on MacOS | 0 | **So in an abundance of paranoid caution I wanted to really make sure that my Vicuna/textgenerationwebui setup could not access the internet, as I really don't want confidential material uploaded.**
**I did the following, in case it's useful to anyone else OR if I've been stupid somehow and it's not actually working OR there's a simpler way to do this. I'm aware that my setup shouldn't be accessing the internet anyway, I just wanted to make that surety total:**
1) setup Radio Silence on Mac
2) loaded up my venv python environment
3) ran 'pip install requests'
4) in python, ran the following script:
import requests
response = requests.get('https://www.google.com')
print(response.status\_code)
5) This gave me a code 200, accessing Google
6) Having accessed the internet successfully, the python path that my Vicuna setup utilises was exposed to Radio Silence under the Network Monitor list (it wasn't there before)
7) I added that to my Radio Silence blocklist
8) I shut the terminal down, restarted it, and tried steps 1-4 again
9) This time I got an error saying 'OSError: \[Errno 9\] Bad file descriptor' -- bingo, it can't access the internet anymore.
**Hope this helps! And if you have any advice on improving the above or additional privacy/security measures to keep my llama offline/secure, very much appreciated.** | 2023-05-31T09:13:50 | https://www.reddit.com/r/LocalLLaMA/comments/13wh2yu/method_to_block_possible_internet_traffic_from/ | spoilingba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wh2yu | false | null | t3_13wh2yu | /r/LocalLLaMA/comments/13wh2yu/method_to_block_possible_internet_traffic_from/ | false | false | self | 0 | null |
Lora batch job merging thought | 2 | Memory and context are the stand out limitations for me and LLMs. Just had a thought and was wondering if anyone is aware of anyone trying this approach. Run a local model for an internal use at a company and log all chat dialog, then at the end of the day take that log and use it to train a Lora in an overnight batch process then merge with the original weights. That should "commit" the conversations to memory, then repeat the process each day. | 2023-05-31T09:36:47 | https://www.reddit.com/r/LocalLLaMA/comments/13whgzn/lora_batch_job_merging_thought/ | TaiMaiShu-71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13whgzn | false | null | t3_13whgzn | /r/LocalLLaMA/comments/13whgzn/lora_batch_job_merging_thought/ | false | false | self | 2 | null |
Help for a complete noob | 1 | [removed] | 2023-05-31T10:03:56 | https://www.reddit.com/r/LocalLLaMA/comments/13whxqg/help_for_a_complete_noob/ | Leavetheporkbehind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13whxqg | false | null | t3_13whxqg | /r/LocalLLaMA/comments/13whxqg/help_for_a_complete_noob/ | false | false | default | 1 | null |
What is the objective for the supervised fine-tuning stage of instruction-following models? | 1 | 2023-05-31T10:06:24 | https://www.reddit.com/r/deeplearning/comments/13whxjk/what_is_the_objective_for_the_supervised/ | BlueHemp | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 13whzaq | false | null | t3_13whzaq | /r/LocalLLaMA/comments/13whzaq/what_is_the_objective_for_the_supervised/ | false | false | default | 1 | null |
|
New model: RWKV-4-Raven-7B-v12-Eng49%-Chn49%-Jpn1%-Other1%-20230530-ctx8192.pth | 34 | 2023-05-31T10:29:55 | https://huggingface.co/BlinkDL/rwkv-4-raven/blob/main/RWKV-4-Raven-7B-v12-Eng49%25-Chn49%25-Jpn1%25-Other1%25-20230530-ctx8192.pth | Balance- | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 13wie45 | false | null | t3_13wie45 | /r/LocalLLaMA/comments/13wie45/new_model/ | false | false | 34 | {'enabled': False, 'images': [{'id': '3HjrW-l8WZHB406ZEXOb2zG68k9U687DZ8bOgw_-CoQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?width=108&crop=smart&auto=webp&s=d93692813a3b9f071865edd0279f9dec8bb3f1f1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?width=216&crop=smart&auto=webp&s=2329254a0d2d22d1237791dbb0d03779bf6cbec4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?width=320&crop=smart&auto=webp&s=64e027c5fd2a7f4478a1e9d7185c514af187a4a9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?width=640&crop=smart&auto=webp&s=22ba3a5518ccb0b768933b810eb4bf7e77f7d69f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?width=960&crop=smart&auto=webp&s=a62269f164a8177e5801c6a616dfc355897f5890', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?width=1080&crop=smart&auto=webp&s=9313db144113f3acc46875319922e225f9472b73', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TCLHAYQVnNMv1bzAnQTSAHuMnq75fQsEzsLQeAC45_8.jpg?auto=webp&s=3df35b4ef1a4972cb9ffb80473854d76c117178d', 'width': 1200}, 'variants': {}}]} |
||
OpenAccess AI Collective's Hippogriff 30B Chat | 87 | Another great new model from [OpenAccess AI Collective](https://huggingface.co/openaccess-ai-collective) and /u/winglian
# Hippogriff 30B Chat
Hippogriff 30B Chat is an experiment that builds on Manticore with new datasets, while removing a few more instruction and chat datasets. It also includes a de-duped subset of the Pygmalion dataset. It also removes all Alpaca style prompts using ### in favor of chat only style prompts using USER:,ASSISTANT: as well as [**pygmalion/metharme prompting**](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using <|system|>, <|user|> and <|model|> tokens.
Questions, comments, feedback, looking to donate, or want to help? Reach out [to chat](https://discord.gg/KKQgbDhJ) or email [**[email protected]**](mailto:[email protected])
# Prompt Templates
You are a helpful assistant
USER: prompt goes here
ASSISTANT:
or
<|system|> You are a helpful assistant
<|user|> prompt goes here
<|model|>
# Quantisations for local LLMing
* [https://huggingface.co/TheBloke/hippogriff-30b-chat-GPTQ](https://huggingface.co/TheBloke/hippogriff-30b-chat-GPTQ)
* [https://huggingface.co/TheBloke/hippogriff-30b-chat-GGML](https://huggingface.co/TheBloke/hippogriff-30b-chat-GGML) | 2023-05-31T10:36:14 | https://www.reddit.com/r/LocalLLaMA/comments/13wiin0/openaccess_ai_collectives_hippogriff_30b_chat/ | The-Bloke | self.LocalLLaMA | 2023-05-31T12:39:44 | 0 | {} | 13wiin0 | false | null | t3_13wiin0 | /r/LocalLLaMA/comments/13wiin0/openaccess_ai_collectives_hippogriff_30b_chat/ | false | false | self | 87 | {'enabled': False, 'images': [{'id': '6atkc6f2ftM9kyUXaG-h4RD1n166r6NsA0DiHOpeJaY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?width=108&crop=smart&auto=webp&s=86584819f161b7d61d55d078b6fd3313c0ed67df', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?width=216&crop=smart&auto=webp&s=f72ecb53c5289daaf4d97596be48e629cb477045', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?width=320&crop=smart&auto=webp&s=ba9e0d9def45e9558d16f1a27203b1d3b293fb72', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?width=640&crop=smart&auto=webp&s=3e67a740bd7b59b9c74b0cb07d840e2e98c8426a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?width=960&crop=smart&auto=webp&s=2d3e4fa62ef56662f078fae7be376dd8d1a8e1be', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?width=1080&crop=smart&auto=webp&s=066a16887eec7c3207680f700a6c3fdaf56002ec', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tjMJ3TzMxZjUJYie29xcIAkyylOATgiaCwaQO2duXLU.jpg?auto=webp&s=0a3b3e5ab1368e4794d26527aead34a6881f8a14', 'width': 1200}, 'variants': {}}]} |
Can finetuning an LLM add new knowledge or just teach it response patterns? | 13 | Hi all, I have a question regarding fine-tuning of LLMs. I wanted to add some new content/knowledge to an LLM by fine-tuning it, but through some searches, I found some mentions that fine-tuning an LLM does not give it new knowledge but trains it for a specific response pattern ([Link](https://community.openai.com/t/finetuning-for-domain-knowledge-and-questions/24817)). Some also mention the problem of catastrophic forgetting where fine-tuned LLMs forget the previous knowledge they had after fine-tuning.
I know that retrieval-based techniques (e.g. using vector store and stuffing retrieved chunks in the context) can enhance LLM responses with the new content. But I'm wondering **if it is at all possible to add new knowledge via fine-tuning**.
If any of you have (successful/failed) experiences adding new knowledge to LLMs through fine-tuning, would you mind sharing your stories and thoughts? If you have any theoretical or conceptual explanations/guides, that would also be much appreciated! | 2023-05-31T10:41:58 | https://www.reddit.com/r/LocalLLaMA/comments/13wimi0/can_finetuning_an_llm_add_new_knowledge_or_just/ | bafil596 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wimi0 | false | null | t3_13wimi0 | /r/LocalLLaMA/comments/13wimi0/can_finetuning_an_llm_add_new_knowledge_or_just/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'BxNvUeMFd6obe78ihkAJrzGJFpOkQpmEJ7BbVi_larY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=108&crop=smart&auto=webp&s=93162f369263d5c2a9e1768260a77dbe05103f3e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=216&crop=smart&auto=webp&s=dbc4176da2b6e2f8cc29002232fb799303e8c4d7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=320&crop=smart&auto=webp&s=0e5a8489fbfa03e8d18ba305ccd4945626d306f2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=640&crop=smart&auto=webp&s=34d3e6a5c65886be595f06be7e712b5946391d72', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?width=960&crop=smart&auto=webp&s=073702ab34f5d22d066990279fcb8219ec0c160c', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/2J07IKccOEu528Dr2WlyssuQYfjax6yT553dEfNzT00.jpg?auto=webp&s=8a98b7aa0d64757a3ef9f7a2c7bf4c8f10b02230', 'width': 1024}, 'variants': {}}]} |
Help needed with installing quant_cuda for the WebUI | 1 | [removed] | 2023-05-31T12:53:56 | https://www.reddit.com/r/LocalLLaMA/comments/13wlhwo/help_needed_with_installing_quant_cuda_for_the/ | QUAZARD3141 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wlhwo | false | null | t3_13wlhwo | /r/LocalLLaMA/comments/13wlhwo/help_needed_with_installing_quant_cuda_for_the/ | false | false | default | 1 | null |
LLM for information extraction | 13 | Hi,
I'm trying to use the LLama family to extract information from unstructured text and to return it in JSON format. The tasks are e.g. to return the date a letter was written (the date is mentioned in the text). While ChatGPT answers the question immediately and correctly, all my Llama/vicuna/alpaca attempts fail (I tried several prompts). The answer never returns only the JSON, always some other text, and the JSON, when it is created, hardly ever has a correct data field. Has anybody used these models for this kind of task, and with success?
Greetings!
PS: a little bit off topic, but what exactly is the difference between the conversation and the instruction mode - is it somehow fundamentally different, or just an interface thing (conversation re-sending everything that has been said yet)? | 2023-05-31T13:15:34 | https://www.reddit.com/r/LocalLLaMA/comments/13wm0z6/llm_for_information_extraction/ | Bradymodion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wm0z6 | false | null | t3_13wm0z6 | /r/LocalLLaMA/comments/13wm0z6/llm_for_information_extraction/ | false | false | self | 13 | null |
Mixing 3090ti and p40 for gpu 65b | 4 | Hi all, I have a 3090ti, 3950x, and 64 gb of vram. Obviously I'm only able to run 65b models on the cpu/ram (I can't compile the latest llama.ccp to enable gpu offloading for ggml due to a weird but but that's unrelated to this post.)
I was wondering if adding a used tesla p40 and splitting the model across the vram using ooba booga would be faster than using ggml cpu plus gpu offloading. Anyone try this yet, especially for 65b? I think I heard that the p40 is so old that it slows down the 3090, but it still might be faster from ram/cpu. And for $200, it's looking pretty tasty. Worse case scenario I could use it in a future permanent llm server (I'm always looking to add servers to my collection, it's a problem.)
I've also heard because the p40 is so old, some people have trouble getting it working. In that case, what would be the oldest generation of cards to use? I'm not quite ready to commit to another 3090ti, especially as llms keeps getting faster and easier to run. | 2023-05-31T13:23:47 | https://www.reddit.com/r/LocalLLaMA/comments/13wm849/mixing_3090ti_and_p40_for_gpu_65b/ | TheSilentFire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wm849 | false | null | t3_13wm849 | /r/LocalLLaMA/comments/13wm849/mixing_3090ti_and_p40_for_gpu_65b/ | false | false | self | 4 | null |
Would I be able to QLora finetune 65b models with 2 3090s and NVLink? | 6 | Basically the title. I wanted to know If I purchased a 3090 and NVLink if I would be able to finetune the 65b model. If I can't, I guess I would have to run it on huggingface | 2023-05-31T13:53:44 | https://www.reddit.com/r/LocalLLaMA/comments/13wmyb3/would_i_be_able_to_qlora_finetune_65b_models_with/ | MrBeforeMyTime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wmyb3 | false | null | t3_13wmyb3 | /r/LocalLLaMA/comments/13wmyb3/would_i_be_able_to_qlora_finetune_65b_models_with/ | false | false | self | 6 | null |
What’s the best local LLM for low to medium end machines? | 37 | I have a laptop with a 1650 ti, 16 gigs of RAM, and an i5-10th gen. I am looking for a good local LLM that I can use for coding, and just normal conversations. I want it to be able to run smooth enough on my computer but actually be good as well. I also would prefer if it had plugins that could read files. | 2023-05-31T14:30:54 | https://www.reddit.com/r/LocalLLaMA/comments/13wnuuo/whats_the_best_local_llm_for_low_to_medium_end/ | SimRacer101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wnuuo | false | null | t3_13wnuuo | /r/LocalLLaMA/comments/13wnuuo/whats_the_best_local_llm_for_low_to_medium_end/ | false | false | self | 37 | null |
Training a 13B LLaMA on information from documents. | 6 | Hello. I am running Meta’s 13B LLaMA in 4bit using ooba UI. I am mainly using the chat function, and was wondering if it is possible to train it on some documents that I have, so that it can help me and my colleagues troubleshoot system errors.
As a larger context, I have about 5 documents for a “system”. Those documents contain different error codes, split in different categories depending on the error and what occurred. I would like to train the LLaMA on all those error codes and documents, so that I could chat with it and ask for example “Hey, I got this error: xxxx”, and it would respond using the info it learned from those documents, as well as add extra context to it (like LLaMAs usually do) eg “The error code xxxxxx is referring to an issue in aaaa part. The first digits stand for bla bla bla indicating an issue there, etc etc”
Would that be possible to do and is my hardware good enough for training it? (I run a RTX 3090 24GB - but I have a very outdated CPU)
Thanks a lot for the support. | 2023-05-31T14:37:53 | https://www.reddit.com/r/LocalLLaMA/comments/13wo0ut/training_a_13b_llama_on_information_from_documents/ | reverrover16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wo0ut | false | null | t3_13wo0ut | /r/LocalLLaMA/comments/13wo0ut/training_a_13b_llama_on_information_from_documents/ | false | false | self | 6 | null |
Falcon40B has waived royalties on its use for commercial and research purposes | 354 | 2023-05-31T14:51:50 | https://twitter.com/TIIuae/status/1663911042559234051?s=20 | hackerllama | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 13wodey | false | {'oembed': {'author_name': 'Technology Innovation Institute', 'author_url': 'https://twitter.com/TIIuae', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">UAE's Falcon 40B,\xa0the world's top ranked open-source AI model from the Technology Innovation Institute (TII) has waived royalties on its use for commercial and research purposes.<a href="https://twitter.com/hashtag/TII?src=hash&ref_src=twsrc%5Etfw">#TII</a> <a href="https://twitter.com/hashtag/LLM?src=hash&ref_src=twsrc%5Etfw">#LLM</a> <a href="https://twitter.com/hashtag/FalconLLM?src=hash&ref_src=twsrc%5Etfw">#FalconLLM</a> <a href="https://twitter.com/hashtag/Tech?src=hash&ref_src=twsrc%5Etfw">#Tech</a> <a href="https://twitter.com/hashtag/Innovation?src=hash&ref_src=twsrc%5Etfw">#Innovation</a> <a href="https://twitter.com/hashtag/AI?src=hash&ref_src=twsrc%5Etfw">#AI</a> <a href="https://twitter.com/hashtag/AbuDhabi?src=hash&ref_src=twsrc%5Etfw">#AbuDhabi</a> <a href="https://twitter.com/hashtag/UAE?src=hash&ref_src=twsrc%5Etfw">#UAE</a> <a href="https://t.co/Qr4D8NxROz">pic.twitter.com/Qr4D8NxROz</a></p>— Technology Innovation Institute (@TIIuae) <a href="https://twitter.com/TIIuae/status/1663911042559234051?ref_src=twsrc%5Etfw">May 31, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/TIIuae/status/1663911042559234051', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_13wodey | /r/LocalLLaMA/comments/13wodey/falcon40b_has_waived_royalties_on_its_use_for/ | false | false | 354 | {'enabled': False, 'images': [{'id': 'T1VBjYNi-z2BPIjFQEgNPu8OUHu28n8ENpivlN_1Ivs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/637cUHKm0dDFQxduekMskbWVxA8bP7NayNfrwH5xi7I.jpg?width=108&crop=smart&auto=webp&s=2c8b1abff2faa322a50a8644d5e6ee1fb9aaa094', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/637cUHKm0dDFQxduekMskbWVxA8bP7NayNfrwH5xi7I.jpg?auto=webp&s=bd35c33510e99e976188bc7597beda1c4c92c613', 'width': 140}, 'variants': {}}]} |
||
I think custom datasets and LoRA training will define the future of LLM competency | 70 | With things like [Landmark Attention,](https://github.com/epfml/landmark-attention) we could have near-infinite context without running into memory issues. Landmark Attention breaks the context into 50 token chunks and makes a new token to represent that chunk, essentially compressing the context 50x. It's able to pull the most relevant chunks and use those, like what vector databases/embeddings do today. If you run out of memory at 2k tokens before on your hardware currently, it would be close to 100k now.
In theory, if we have near-infinite context and models that are as good as ChatGPT 4 running locally on our machines, what differentiates what one person can do using an LLM from another? The prompts/templates used, and any fine-tuning or LoRA's applied on top of the base model for domain-specific knowledge. This is why I think learning how to generate your own dataset and train your own LORA is very important for the future.
If you have your own high-quality datasets when a new model comes out, all you need to do is train a new LoRA, and you have a more capable assistant on your own knowledge/tasks. | 2023-05-31T15:18:28 | https://www.reddit.com/r/LocalLLaMA/comments/13wp2ni/i_think_custom_datasets_and_lora_training_will/ | NeverEndingToast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wp2ni | false | null | t3_13wp2ni | /r/LocalLLaMA/comments/13wp2ni/i_think_custom_datasets_and_lora_training_will/ | false | false | self | 70 | {'enabled': False, 'images': [{'id': '9AlTfytK8JKMoaR1DoQd2gvztH9ezp_V0W7Po5AIfqU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=108&crop=smart&auto=webp&s=2f1cef133667bae4b4213dd00b1cd8e2a8cbd0ae', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=216&crop=smart&auto=webp&s=f92fb17bde0ad1b9120076a00fd41ca74fa749b8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=320&crop=smart&auto=webp&s=d11ca72dec75080d5f02c32d52816f56bd010121', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=640&crop=smart&auto=webp&s=8861aec9486332c63d650e53d30711f718899965', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=960&crop=smart&auto=webp&s=e7f8fa15028900c88611086824068738d0f9fbcf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?width=1080&crop=smart&auto=webp&s=c013de5e98606157f4e2400d20faa1a963621b16', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/J6Q841wYA31K6gamo0qaVz9gbd_3SeNafLfohDlcFN0.jpg?auto=webp&s=122cbd0f7c437448c1e58d8c03f5d88e8eb7d76a', 'width': 1200}, 'variants': {}}]} |
Dataset prep/cleaning | 3 | I’m seeing pretty decent results training LoRA’s over the documentation and user manuals of a complicated software system. I manually crafted and cleaned this dataset by hand (a real pita) then trained via raw text option in webui.
Now I’d like to do the same for the source code just to see what the LLM learns. I suppose it’s time for me to write a dataset prep script. However, before I try hack my own together, does anyone know of an open-source repo/library that already does this?
Seems like this sort of thing probably already exists. I’d like to point at a git repo or directory, specify file type filters, clean special chars/formatting, de-dup, etc…
Any help is appreciated. Thanks | 2023-05-31T15:21:38 | https://www.reddit.com/r/LocalLLaMA/comments/13wp5qc/dataset_prepcleaning/ | ki7a | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wp5qc | false | null | t3_13wp5qc | /r/LocalLLaMA/comments/13wp5qc/dataset_prepcleaning/ | false | false | self | 3 | null |
Combining LLMs with Instant Generation Descriptive GUIs for Interactive Text Input | 5 |
I wanted to discuss an interesting concept that I've been contemplating recently: the combination of Large Language Models (LLMs) like OpenAI's GPT series with Instant Generation Descriptive Graphical User Interfaces (GUIs) to facilitate more interactive text input.
We're all familiar with traditional text input methods in which we type out our instructions or queries. While this works well, it can sometimes be a bit tedious, especially when dealing with complex topics or lengthy discussions. So, why not try something different?
The idea here is to use an Instant Generation Descriptive GUI in tandem with an LLM. An Instant Generation Descriptive GUI, for those unfamiliar with the term, is a dynamic user interface that adapts to user input on the fly. It creates, or "generates", user interface elements based on the information or requirements at hand. This can include buttons, sliders, dropdown menus, or any other type of input field.
This could be applied to an LLM in a novel way. Instead of typing in a text prompt, users could interact with a series of interface elements that guide them through generating their query or instruction. These elements would be designed to match the specific capabilities and requirements of the LLM, ensuring that users are only presented with valid options.
Let's take the example of writing a story. Instead of having to type out an initial prompt, you could select a genre from a dropdown menu, choose a setting using a series of checkboxes, and input character names and traits using text fields. The LLM would then take this structured input and generate a story based on it.
This approach has several advantages. For one, it simplifies the interaction process, especially for new users who might not be familiar with the nuances of generating useful prompts. Secondly, it can help to avoid misunderstandings or incorrect usage by guiding the user through the process. Lastly, it could potentially lead to more varied and interesting results, as users might be prompted to consider options they wouldn't have thought of on their own.
Of course, this is just a concept at this point, and there would be many technical and design challenges to overcome in order to make it a reality. But I believe the potential benefits are worth exploring.
What do you think?
[ ](https://www.reddit.com/r/GPT3/comments/13wje29/combining_llms_with_instant_generation/) | 2023-05-31T15:23:06 | https://www.reddit.com/r/LocalLLaMA/comments/13wp727/combining_llms_with_instant_generation/ | Periplokos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wp727 | false | null | t3_13wp727 | /r/LocalLLaMA/comments/13wp727/combining_llms_with_instant_generation/ | false | false | self | 5 | null |
Guanaco 33B coding | 27 | I recently tested Guanaco 33B with internet access (EdgeGPT, settings: 'always search' checked) and I am surprised with the results for code generation.
I compared code generation with ChatGPT(not paid version), Bing, Phind.com on same prompts and on all it generates better coding.
I changed the instruction template (created additional yaml for guanaco-chat with:
### Human:
### Assistant:
context:" A chat between... (vicuna like)
max_new_tokens 289
temp 0.75
top_p 0.85
(other parameters same as LLaMA-Precise)
Has anyone tried? If anyone has used for code please bring some insides , ... is it me or this is becoming very useful assistant) | 2023-05-31T15:27:36 | https://www.reddit.com/r/LocalLLaMA/comments/13wpb6q/guanaco_33b_coding/ | kexibis | self.LocalLLaMA | 2023-05-31T15:49:03 | 0 | {} | 13wpb6q | false | null | t3_13wpb6q | /r/LocalLLaMA/comments/13wpb6q/guanaco_33b_coding/ | false | false | self | 27 | null |
Integrating LLMs into existing RPG games | 20 | AFAIK, Infinite Engine/Fallout/Daggerfal were completely reverse engineered, so it should be just a matter of processing the hardcoded character dialogues with LLM to produce a character sheet, and then just use it to generate dynamic dialogue coupled with action tokens. It is also possible to make LLM act as a dungeon master, monitoring player's progress and creating challenges, based of the picked difficulty and personality. Daggerfal will especially win from that, due to the completely open ended nature of the game, which has really underused skills like languages. The base LLMs should have a lot of knowledge about these games in its training data, so you wont even need to adapt them. | 2023-05-31T15:29:50 | https://www.reddit.com/r/LocalLLaMA/comments/13wpd7w/integrating_llms_into_existing_rpg_games/ | NancyAurum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wpd7w | false | null | t3_13wpd7w | /r/LocalLLaMA/comments/13wpd7w/integrating_llms_into_existing_rpg_games/ | false | false | self | 20 | null |
Any non-LLama models under 20B (commercially licensable) that are not really bad? | 2 | [removed] | 2023-05-31T16:42:03 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wragb | false | null | t3_13wragb | /r/LocalLLaMA/comments/13wragb/any_nonllama_models_under_20b_commercially/ | false | false | default | 2 | null |
||
Online Services to run LLMs? | 1 | [removed] | 2023-05-31T16:49:44 | https://www.reddit.com/r/LocalLLaMA/comments/13wrhrn/online_services_to_run_llms/ | pelatho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wrhrn | false | null | t3_13wrhrn | /r/LocalLLaMA/comments/13wrhrn/online_services_to_run_llms/ | false | false | default | 1 | null |
Inspired by a recent post on this r/ I decided to play with parameters to get weird results... I think this is success (model: guanaco-13B-GGML 5_0) | 18 | 2023-05-31T16:51:02 | https://www.reddit.com/gallery/13wrj16 | Famberlight | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 13wrj16 | false | null | t3_13wrj16 | /r/LocalLLaMA/comments/13wrj16/inspired_by_a_recent_post_on_this_r_i_decided_to/ | false | false | 18 | null |
||
How much of an upgrade for 30b models would downgrading from a 3060TI 8GB VRAM to a 3060 12GB VRAM be? | 3 | [deleted] | 2023-05-31T16:55:30 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wrn8m | false | null | t3_13wrn8m | /r/LocalLLaMA/comments/13wrn8m/how_much_of_an_upgrade_for_30b_models_would/ | false | false | default | 3 | null |
||
Completely lost regarding training LLaMA model | 9 | Hi, I need help training a LLaMA-model. I'm a bit lost and have no idea where to even begin.
I want to train a 7B-model on my own text. Preferably the openLLaMA-7B one as that can be used commercially. I have structured my dataset like this:
>Title of the subject
>
>Textblock
I want my model not to be a traditional instruct-model but rather it to emulate my writing style, interests and personality. The dataset is about 1 MB in size.
I want a result that I can quantize down to 5\_1 in order to run it locally on my CPU. I don't want to run it on a GPU, just regular CPU.
I have no idea what this could possible cost, but I'm prepared to spend around up to $300 dollars to rent a runpod or whatever to get this done.
And here's where I'm right now and have no clue how to proceed. I have no prior programming knowledge nor any knowledge about training LLM-models. And the more I read, the more confused I get. To make matters worse, most guides seem aimed at making an instruct-model, but I don't want to make that. It all seem so complicated...
Maybe anyone here can help me. I would appreciate it very much. | 2023-05-31T17:12:59 | https://www.reddit.com/r/LocalLLaMA/comments/13ws492/completely_lost_regarding_training_llama_model/ | Xeruthos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ws492 | false | null | t3_13ws492 | /r/LocalLLaMA/comments/13ws492/completely_lost_regarding_training_llama_model/ | false | false | self | 9 | null |
LlamaCPP and LangChain Agent Quality | 25 | I've tried many models ranging from 7B to 30B in `langchain` and found that none can perform tasks. ChatGPT seems to be the only zero shot agent capable of producing the correct `Action, Action Input, Observation` loop.
For the models I modified the prompts with the ones in oobabooga for instructions. So they are like the langchain prompts but formatted for the model. Still the models cannot follow the formatting. The models are able to get the correct ideas, but the actual formatting is incorrect and so doesn't work with langchain.
Example:
Action: get_webpage(url="https://www.example.com")
Action Input: url = "https://www.example.com"
Instead of:
Action: get_webpage
Action Input: https://www.example.com
Has anyone else been able to get a local model to reliably perform tasks? If so what model and settings are you using? | 2023-05-31T17:26:24 | https://www.reddit.com/r/LocalLLaMA/comments/13wsg64/llamacpp_and_langchain_agent_quality/ | IWantToBeAWebDev | self.LocalLLaMA | 2023-05-31T17:38:54 | 0 | {} | 13wsg64 | false | null | t3_13wsg64 | /r/LocalLLaMA/comments/13wsg64/llamacpp_and_langchain_agent_quality/ | false | false | self | 25 | null |
Persistent cloud computing setup to run different LLM models. | 4 | Hi All,
in my limited free time, i experiment a bit with self hosted large language models just to try to keep up with what is possible. With the current speed of development, i am having a hard time to keep up. I guess that i am not alone with this. :-)
Anyway, what i am actually interested in is if some of you are running the available models in the Cloud. I don't have a suitable working machine available anymore, i operate completely from my business laptop . Therefore i rely on IaaS providers for everything a bit more computational heavy, which is fine. I am even a big fan of Gaming-as-a-service but this is a different topic. ;-)
So far, i ran my few experiments with language models on [runpod.io](https://runpod.io), which works well. In the past, i worked with various IaaS providers (AWS, Google, Azure, etc) for other use cases but i am a bit outdated. I also saw that many use Google Colab for language models.
My problem with [runpod.io](https://runpod.io) is that i am looking for a more persistent setup that i can start and shut down relatively fast. On [runpod.io](https://runpod.io) i am always starting to download the same models just to proceed where i stopped the last time which takes me too much time. I saw that i can now have a persistent volume on [runpod.io](https://runpod.io) for the secure cloud. I guess that would solve my problem but i would be interested how you are handling this issue.
Any feedback is highly appreciated.
Peace,
Xardas | 2023-05-31T17:42:13 | https://www.reddit.com/r/LocalLLaMA/comments/13wsufp/persistent_cloud_computing_setup_to_run_different/ | Xardas1987 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wsufp | false | null | t3_13wsufp | /r/LocalLLaMA/comments/13wsufp/persistent_cloud_computing_setup_to_run_different/ | false | false | self | 4 | null |
Local Large Language Models – beginners guide | 1 | 2023-05-31T18:03:15 | https://int8.io/local-large-language-models-beginners-guide/ | zibenmoka | int8.io | 1970-01-01T00:00:00 | 0 | {} | 13wtdhs | false | null | t3_13wtdhs | /r/LocalLLaMA/comments/13wtdhs/local_large_language_models_beginners_guide/ | false | false | default | 1 | null |
|
LLM-As-Chatbot: Local, Open Source Instruction Following Models | 5 | From the [github repo](https://github.com/deep-diver/LLM-As-Chatbot):
>The purpose of this repository is to let people to use lots of open sourced instruction-following fine-tuned LLM models as a Chatbot service.
LLM-As-Chatbot is a local service for using instruction-tuned models. I'm excited to see what this enables people to do without reliance on closed services.
As of now it supports:
* Custom Models
* t5-vicuna-3b
* flan-3b
* camel-5b
* alpaca-lora-7b
* stablelm-7b
* os-stablelm-7b
* gpt4-alpaca-7b
* mpt-7b
* redpajama-7b
* vicuna-7b
* llama-deus-7b
* evolinstruct-vicuna-7b
* alpacoom-7b
* baize-7b
* guanaco-7b
* falcon-7b
* flan-11b
* alpaca-lora-13b
* gpt4-alpaca-13b
* stable-vicuna-13b
* starchat-15b
* vicuna-13b
* evolinstruct-vicuna-13b
* baize-13b
* guanaco-13b
* camel-20b
* guanaco-33b
* falcon-40b | 2023-05-31T18:18:00 | https://www.reddit.com/r/LocalLLaMA/comments/13wtqg9/llmaschatbot_local_open_source_instruction/ | triptoplane | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wtqg9 | false | null | t3_13wtqg9 | /r/LocalLLaMA/comments/13wtqg9/llmaschatbot_local_open_source_instruction/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'h9GWZbwbHZ7OipB6De1T-nY-AcxsSqnSHcP52T7NZUM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?width=108&crop=smart&auto=webp&s=dac85da949c29972840e257ab243e96c9b02b3ec', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?width=216&crop=smart&auto=webp&s=6aa142cceacc07dcc460a9327d5cd465c4f1af18', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?width=320&crop=smart&auto=webp&s=04bc4d2942876c8fa30b25bfe9407c0594d6d0f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?width=640&crop=smart&auto=webp&s=bcbca8a161b833a8322c970571fee2b5501c98fb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?width=960&crop=smart&auto=webp&s=1cd364cf538ccc45e05b547d655e074a496ba54b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?width=1080&crop=smart&auto=webp&s=c8471578eeb13acaeb42d76d1a8e869b756c9fd4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s_qIHTp2VU-Hnu8ZPCYIid51JnLH_bK8FxfZHdAhKe0.jpg?auto=webp&s=633d08f593ba8e7e40a0163b84084511d37e101b', 'width': 1200}, 'variants': {}}]} |
Can't load a 30B GPTQ model on 20Gb of VRAM (split on two GPUs) | 7 | Hey there!
I'm new to this so I'm quite lost on how I could run a 30B model.
-
I got an RTX4070 today with 12Gb of VRAM and kept my old donkey the GTX1070 with its 8Gb. I also have 48Gb of RAM.
Now I've read that 30B models could load with 20Gb of VRAM but with the oobabooga UI I get this message :
RuntimeError: [enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 88604672 bytes.
By using this command :
python server.py --model TheBloke_Wizard-Vicuna-30B-Uncensored-GPTQ --chat --xformers --sdp-attention --wbits 4 --groupsize 128 --model_type Llama --pre_layer 21 11
I tried with different numbers for pre_layer but without success.
-
I can load a GGML model and even followed these [instructions](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md) to have DLLAMA_CUBLAS (no idea what that is tho) in my textgen conda env but none of my GPUs are reacting during inferences. I did use "--n-gpu-layers 200000" as shown in the oobabooga instructions (I think that the real max number is 32 ? I'm not sure at all about what that is and would be glad to know too) but only my CPU gets used for inferences (0.6t/s if there is no context).
-
I did try to read as much as I can but I don't know if that's ADHD, the lack of sleep or the fact that when you sneeze there is an update somewhere but... what am I doing wrong here?
-
I'm not a python beginner but running llama models is completely new to me (I did a few extensions for the A1111 webui for stable-diffusion for [example](https://github.com/Extraltodeus?tab=repositories&q=&type=&language=&sort=stargazers)).
edit: spaced a bit more for easier reading | 2023-05-31T18:18:34 | https://www.reddit.com/r/LocalLLaMA/comments/13wtqx2/cant_load_a_30b_gptq_model_on_20gb_of_vram_split/ | Extraltodeus | self.LocalLLaMA | 2023-06-01T00:37:41 | 0 | {} | 13wtqx2 | false | null | t3_13wtqx2 | /r/LocalLLaMA/comments/13wtqx2/cant_load_a_30b_gptq_model_on_20gb_of_vram_split/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'FMRmpeVBrEzhPPzjaS62Anb7KWA2xvPB_u3njHDRojg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?width=108&crop=smart&auto=webp&s=218ff1c36305c08dadeb33b5ef349b0d4bfcb493', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?width=216&crop=smart&auto=webp&s=1c939de0bc05c81e8d8e9359a24c01d3ea02227a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?width=320&crop=smart&auto=webp&s=7970830e12a2b90cd11ea587eb56ace356d3846e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?width=640&crop=smart&auto=webp&s=58cb89defe919db040bafb9d3d366f2f9d03b191', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?width=960&crop=smart&auto=webp&s=1ec2bfd4837afbcde3eebd259547e30acabf1b48', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?width=1080&crop=smart&auto=webp&s=058b20b1c0e147bc9a9b25519173f78e4c3674c6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7KLmwRanGFDIoac5i936S1SBBwuxKfYQUDFDSpDrpns.jpg?auto=webp&s=ae196d6017a1f4d8ee6e6cd875a6b676dbfe109e', 'width': 1200}, 'variants': {}}]} |
hippogriff-30b King of good roleplay for me - Listens to "Tool" | 3 | [removed] | 2023-05-31T18:23:26 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wtv8z | false | null | t3_13wtv8z | /r/LocalLLaMA/comments/13wtv8z/hippogriff30b_king_of_good_roleplay_for_me/ | false | false | default | 3 | null |
||
How should I choose between 4/8-bit model? | 1 | [removed] | 2023-05-31T18:40:57 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wuazp | false | null | t3_13wuazp | /r/LocalLLaMA/comments/13wuazp/how_should_i_choose_between_48bit_model/ | false | false | default | 1 | null |
||
Getting Falcon 40B to work? | 12 | Has anyone here actually gotten Falcon 40B to work?
I've tried running it in Oobabooga; I get errors.
I've tried running the example code from the Falcon 40B repo; it doesn't produce any output either.
Does anyone at all have a working HOWTO for running Falcon 40B, ideally in Oobabooga? | 2023-05-31T19:01:17 | https://www.reddit.com/r/LocalLLaMA/comments/13wutj4/getting_falcon_40b_to_work/ | AlpsAficionado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wutj4 | false | null | t3_13wutj4 | /r/LocalLLaMA/comments/13wutj4/getting_falcon_40b_to_work/ | false | false | self | 12 | null |
LLM Score v2 - Modern Models Tested by Human | 77 | Hey folks!
I'd like to share some numbers of **Gotzmann LLM Score** after two-week-long marathon testing all popular GGML models available here, including fresh hot **Hippogriff** and **Samantha** :)
The V2 questions were fixed for better and there finally good scores for those who'd like to use instruct models for real projects or local experiments.
The **Main Score** was normalised for **\[ 0 .. 100 \]** range, expecting ideal model to be close to 100.
[Norm Score - Part I](https://preview.redd.it/6ykty05pa93b1.png?width=2110&format=png&auto=webp&s=5a91d52413487f1314f4dc04219dcc934e16d934)
[Norm Score - Part II](https://preview.redd.it/5ncao3cra93b1.png?width=2110&format=png&auto=webp&s=fb6036b3db25543b68ee82b80184c0f943a31b97)
Then, there now special **Perf Score**, which depends on Main Score and model size / speed. This shows the best performing models according to CPU / RAM resources used.
[Perf Score - Part I](https://preview.redd.it/6zqubvm1b93b1.png?width=2110&format=png&auto=webp&s=5e401bd86b04915acdb65b30271d0c7ebb1bf41e)
[Perf Score- Part II](https://preview.redd.it/8ihlj304b93b1.png?width=2112&format=png&auto=webp&s=77656d6bed7d4f3adea7b8a2376a97d7177b575c)
I've got many insights after V1 so the V2 was changed for better, and I have plans for even better V3 going soon, where I'm going to automate and open-source the whole thing.
Please check out the original results via the [Google Docs](https://docs.google.com/spreadsheets/d/1ikqqIaptv2P4_15Ytzro46YysCldKY7Ub2wcX5H1jCQ/edit#gid=0) | 2023-05-31T19:22:07 | https://www.reddit.com/r/LocalLLaMA/comments/13wvd0j/llm_score_v2_modern_models_tested_by_human/ | Gatzuma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wvd0j | false | null | t3_13wvd0j | /r/LocalLLaMA/comments/13wvd0j/llm_score_v2_modern_models_tested_by_human/ | false | false | 77 | {'enabled': False, 'images': [{'id': 'XU0ZoS-2FKOHVF5eYKmADD890q6sOEOpCLirBhmc-mM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?width=108&crop=smart&auto=webp&s=d35b0af110726adf73a8a8a1430c43e3a20e9600', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?width=216&crop=smart&auto=webp&s=7bad1ab4483a579641bfd664f0b170438dcaa437', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?width=320&crop=smart&auto=webp&s=3dc7ec570063af2c54d56ebbc7946f37d5f95c6f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?width=640&crop=smart&auto=webp&s=32e12e71274caafa5b0ff72af91e551e14d40e3b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?width=960&crop=smart&auto=webp&s=df79bc805706a5165257afb831d733f1d1a8b7f8', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?width=1080&crop=smart&auto=webp&s=54f7f7b91c05b4b14939ad5733329a34bc00ec08', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/n4gC1GYp19UK7iBIoabYr-WOB2v1WjVOF_nHciFemFY.jpg?auto=webp&s=0bc9f2b1d7a5911de815dfa980b4cb0a8c508f1b', 'width': 1200}, 'variants': {}}]} |
|
OpenLlama finetuning and inference on intel dgpus | 2 | [removed] | 2023-05-31T19:35:49 | https://www.reddit.com/r/LocalLLaMA/comments/13wvpxc/openllama_finetuning_and_inference_on_intel_dgpus/ | unrahul | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wvpxc | false | null | t3_13wvpxc | /r/LocalLLaMA/comments/13wvpxc/openllama_finetuning_and_inference_on_intel_dgpus/ | false | false | default | 2 | null |
Comparing the Applications of Uncensored LLM Models to Public Models | 0 | [removed] | 2023-05-31T20:00:08 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wwc4h | false | null | t3_13wwc4h | /r/LocalLLaMA/comments/13wwc4h/comparing_the_applications_of_uncensored_llm/ | false | false | default | 0 | null |
||
best LLM my computer can handle? | 1 | [removed] | 2023-05-31T20:11:17 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13wwml4 | false | null | t3_13wwml4 | /r/LocalLLaMA/comments/13wwml4/best_llm_my_computer_can_handle/ | false | false | default | 1 | null |
||
Thoughts on llama.cpp CPU usage? | 15 | 2023-05-31T20:26:55 | involviert | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13wx0ws | false | null | t3_13wx0ws | /r/LocalLLaMA/comments/13wx0ws/thoughts_on_llamacpp_cpu_usage/ | false | false | 15 | {'enabled': True, 'images': [{'id': 's6focp9-t17jTPqhwUgPd8W3HydNpv65QyL2bHT0kcI', 'resolutions': [{'height': 35, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?width=108&crop=smart&auto=webp&s=4ba792ad022ef437a35555fc5a69b681c08e4269', 'width': 108}, {'height': 71, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?width=216&crop=smart&auto=webp&s=893506f4be9daa3c4dfbd8d1143fc7d81cabb02f', 'width': 216}, {'height': 105, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?width=320&crop=smart&auto=webp&s=3b4b2968f39bf34b6988afb25b39522c9f87cfaf', 'width': 320}, {'height': 210, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?width=640&crop=smart&auto=webp&s=98f5413d70e707c665809e87e2d34be924fc6561', 'width': 640}, {'height': 316, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?width=960&crop=smart&auto=webp&s=68403b7c3b516a61e4a7fe379e1491665ee2d8b0', 'width': 960}, {'height': 355, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?width=1080&crop=smart&auto=webp&s=b5ca308b322a6c4e2459a04285f38eeeb027bc6e', 'width': 1080}], 'source': {'height': 551, 'url': 'https://preview.redd.it/t40qqse7o93b1.png?auto=webp&s=a4685683868893db353fc38d9ced4f8fef5d93b3', 'width': 1673}, 'variants': {}}]} |
|||
Meta experiments with quantization aware training for LLMs, allowing for 4-bit quantization better than GPTQ and with reduced memory requirements. | 90 | Paper: [https://arxiv.org/abs/2305.17888](https://arxiv.org/abs/2305.17888)
Some excerpts:
>As a result, we are able to distill the 7B, 13B and 30B LLaMA models with weights and KV cache quantized down to 4-bits. In this regard, our approach exhibits significant enhancements in quality compared to post-training quantization. Notably, larger models employing QAT outperform smaller models utilizing floating-point 16-bit representations, despite having similar model sizes.
>
>In the 4-8-4 setting, where both weights and the KV cache are quantized to 4 bits, all PTQ methods produce poor results, whereas LLM-QAT achieves 69.9, only trailing the full precision model by 1.5 points on average.
>
>One important question for practitioners is whether to use a small model at full precision, or a larger quantized model of similar inference cost. First, 8-bit quantization should be preferred over smaller full precision models. An 8-8-8 30B quantized model outperforms a 13B model of similar size, and should have lower latency and higher throughput in practice. This also holds for an 8-bit 13B model compared with a 16-bit 7B model. Furthermore, 4-bit models quantized using LLM-QAT should be preferred over 8-bit models of similar size. As a result, we recommend 4-bit LLM-QAT models for the best efficiency-accuracy tradeoff.
Memory consumption of key-value cache (KV cache) for different sequence length
https://preview.redd.it/4k95ikopo93b1.png?width=556&format=png&auto=webp&s=92ac820c9bc1080d28f49b5d4d0a51854e5a5ebb
It's worth mentioning that Meta has started calling it LLaMA 30B, so the 33B meme can go away. | 2023-05-31T20:31:01 | https://www.reddit.com/r/LocalLLaMA/comments/13wx4q0/meta_experiments_with_quantization_aware_training/ | FrequentVariety4414 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wx4q0 | false | null | t3_13wx4q0 | /r/LocalLLaMA/comments/13wx4q0/meta_experiments_with_quantization_aware_training/ | false | false | 90 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
|
How to load guanaco-33b on both cpu and gpu so it doesn't give me OOM error ? | 1 | [removed] | 2023-05-31T20:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/13wxp75/how_to_load_guanaco33b_on_both_cpu_and_gpu_so_it/ | [deleted] | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wxp75 | false | null | t3_13wxp75 | /r/LocalLLaMA/comments/13wxp75/how_to_load_guanaco33b_on_both_cpu_and_gpu_so_it/ | false | false | default | 1 | null |
Hi, just a question on resizable bar... | 1 | [removed] | 2023-05-31T21:06:36 | https://www.reddit.com/r/LocalLLaMA/comments/13wy2p9/hi_just_a_question_on_resizable_bar/ | Ferrero__64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13wy2p9 | false | null | t3_13wy2p9 | /r/LocalLLaMA/comments/13wy2p9/hi_just_a_question_on_resizable_bar/ | false | false | default | 1 | null |
CUDA out of memory error | 0 | [removed] | 2023-05-31T21:33:06 | https://www.reddit.com/r/LocalLLaMA/comments/13wysff/cuda_out_of_memory_error/ | franco84732 | self.LocalLLaMA | 2023-05-31T21:40:40 | 0 | {} | 13wysff | false | null | t3_13wysff | /r/LocalLLaMA/comments/13wysff/cuda_out_of_memory_error/ | false | false | default | 0 | null |
Gozfarb shareGPT dataset? | 3 | Is there a copy of the HF Gozfarb shareGPT dataset available anywhere? A lot of important fine-tuning repositories rely on it, and I wanted to try fine-tuning it myself. It looks like the whole account is gone.
​
Thank you! | 2023-05-31T22:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/13x0ht9/gozfarb_sharegpt_dataset/ | ProfessionalHand9945 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13x0ht9 | false | null | t3_13x0ht9 | /r/LocalLLaMA/comments/13x0ht9/gozfarb_sharegpt_dataset/ | false | false | self | 3 | null |
Paper with dataset: Let’s Verify Step by Step | 30 | 2023-05-31T23:31:46 | https://twitter.com/_akhaliq/status/1663981726647894027 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 13x1tb3 | false | {'oembed': {'author_name': 'AK', 'author_url': 'https://twitter.com/_akhaliq', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Open AI releases paper + dataset<br><br>Let’s Verify Step by Step<br><br>trained a model to achieve a new state-of-the-art in mathematical problem solving by rewarding each correct step of reasoning (“process supervision”) instead of simply rewarding the correct final answer (“outcome… <a href="https://t.co/M6PwkqBLL9">pic.twitter.com/M6PwkqBLL9</a></p>— AK (@_akhaliq) <a href="https://twitter.com/_akhaliq/status/1663981726647894027?ref_src=twsrc%5Etfw">May 31, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/_akhaliq/status/1663981726647894027', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_13x1tb3 | /r/LocalLLaMA/comments/13x1tb3/paper_with_dataset_lets_verify_step_by_step/ | false | false | 30 | {'enabled': False, 'images': [{'id': 'UO2NDUz6VO_0ZzZlnukTQKJ0IlrpPsKILmmg8vffv5Y', 'resolutions': [{'height': 94, 'url': 'https://external-preview.redd.it/S7BzeQ19Nj-B80F4tmJwbyVZhd4bccgsjFmyFQQjJsc.jpg?width=108&crop=smart&auto=webp&s=bdb74b9e9da16db7e8083a76e3b1daafc68de97e', 'width': 108}], 'source': {'height': 123, 'url': 'https://external-preview.redd.it/S7BzeQ19Nj-B80F4tmJwbyVZhd4bccgsjFmyFQQjJsc.jpg?auto=webp&s=579284a78e92f6474c35f3fb4d628ab3f8416548', 'width': 140}, 'variants': {}}]} |
||
Using base models directly | 10 | I've been working on a project to enforce syntax constraints on LLM generations:
[https://github.com/IsaacRe/Syntactically-Constrained-Sampling](https://github.com/IsaacRe/Syntactically-Constrained-Sampling)
I was playing around with constraining original (non IFT) Llama-7B to produce output following a specific JSON format. It actually works reasonably well: [https://colab.research.google.com/github/IsaacRe/Syntactically-Constrained-Sampling/blob/main/notebooks/Examples\_with\_Non\_IFT\_Models.ipynb](https://colab.research.google.com/github/IsaacRe/Syntactically-Constrained-Sampling/blob/main/notebooks/Examples_with_Non_IFT_Models.ipynb)
Wondering if there are any other creative ways people have found to get use out of these base models, aside from your standard few-shot learning prompts. | 2023-05-31T23:36:24 | https://www.reddit.com/r/LocalLLaMA/comments/13x1xb5/using_base_models_directly/ | isaacrehg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13x1xb5 | false | null | t3_13x1xb5 | /r/LocalLLaMA/comments/13x1xb5/using_base_models_directly/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': '_r1TtGkbYPpDBcAakTPFY9-wwEoYtF6QgbG1XeqLgFI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/urG44lP1SF-0A4A6Bmm5RupvhgFgtvdU9Qnyb-sytlY.jpg?width=108&crop=smart&auto=webp&s=a7a82433428ceaa1ec5df0477d23172752e55a20', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/urG44lP1SF-0A4A6Bmm5RupvhgFgtvdU9Qnyb-sytlY.jpg?width=216&crop=smart&auto=webp&s=090282003e21b8acd555f23e4f168252d2cab096', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/urG44lP1SF-0A4A6Bmm5RupvhgFgtvdU9Qnyb-sytlY.jpg?width=320&crop=smart&auto=webp&s=4593237f64113027727831aa386f10949b29a0bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/urG44lP1SF-0A4A6Bmm5RupvhgFgtvdU9Qnyb-sytlY.jpg?width=640&crop=smart&auto=webp&s=f4d708cf7ae48165efd4ad69869a366375ee417c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/urG44lP1SF-0A4A6Bmm5RupvhgFgtvdU9Qnyb-sytlY.jpg?width=960&crop=smart&auto=webp&s=b35bac226303189e93b6909f821554b47878882b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/urG44lP1SF-0A4A6Bmm5RupvhgFgtvdU9Qnyb-sytlY.jpg?width=1080&crop=smart&auto=webp&s=81e8f6d299f097c42fe856c1d40247d7720e7139', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/urG44lP1SF-0A4A6Bmm5RupvhgFgtvdU9Qnyb-sytlY.jpg?auto=webp&s=fee5b12946a2f3a2b141053493f067257034a9d2', 'width': 1200}, 'variants': {}}]} |
Sharing LLMLounge: Subreddit to talk about all things LLM outside the OpenAI/Meta sphere | 2 | [removed] | 2023-05-31T23:47:50 | https://www.reddit.com/r/LocalLLaMA/comments/13x273l/sharing_llmlounge_subreddit_to_talk_about_all/ | EcstaticVenom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13x273l | false | null | t3_13x273l | /r/LocalLLaMA/comments/13x273l/sharing_llmlounge_subreddit_to_talk_about_all/ | false | false | default | 2 | null |
Experience running 13B q4/5 models on M1 with 16gig RAM is poor | 1 | [removed] | 2023-06-01T00:47:00 | https://www.reddit.com/r/LocalLLaMA/comments/13x3kn5/experience_running_13b_q45_models_on_m1_with/ | _wsgeorge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13x3kn5 | false | null | t3_13x3kn5 | /r/LocalLLaMA/comments/13x3kn5/experience_running_13b_q45_models_on_m1_with/ | false | false | default | 1 | null |
Why Falcon going Apache 2.0 is a BIG deal for all of us. | 121 | Falcon 40B is SOTA, going Commercial-use friendly is going to get it a lot of attention in the open source community.
Why does this matter?
Cause Meta is benefiting heavily from all the innovation on the space with LLaMa and something like this will mean less people will focus on their model and contribute to Falcon instead. So why is this important? Well cause I think this is going to somewhat force meta to make LLaMa Apache 2.0 as well, which would be a HUGE win given all the fine-tunes and models we have.
What happens if Meta doesn't?
Well Falcon is still SOTA, people are going to shift into optimizing it, making more fine-tunes etc. and we will end up with LLaMa 1.1 (slight improvement) and a community/ecosystem around that.
Tbh, all of this might not matter in a few weeks with this space and how quickly things change.
​ | 2023-06-01T00:53:46 | https://www.reddit.com/r/LocalLLaMA/comments/13x3q26/why_falcon_going_apache_20_is_a_big_deal_for_all/ | EcstaticVenom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13x3q26 | false | null | t3_13x3q26 | /r/LocalLLaMA/comments/13x3q26/why_falcon_going_apache_20_is_a_big_deal_for_all/ | false | false | self | 121 | null |
Quantize Embbeding models (Encoders) | 1 | [removed] | 2023-06-01T02:56:19 | https://www.reddit.com/r/LocalLLaMA/comments/13x6c7u/quantize_embbeding_models_encoders/ | BecomeAllan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13x6c7u | false | null | t3_13x6c7u | /r/LocalLLaMA/comments/13x6c7u/quantize_embbeding_models_encoders/ | false | false | default | 1 | null |
Vicuna claims there's a lmsys-models python library and you can import Vicuna from there | 0 | [removed] | 2023-06-01T03:19:08 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13x6sg4 | false | null | t3_13x6sg4 | /r/LocalLLaMA/comments/13x6sg4/vicuna_claims_theres_a_lmsysmodels_python_library/ | false | false | default | 0 | null |
||
Fine tuning LLMs? | 13 | Hi, I am new to ML and LLMs. I am just wondering how do we fine tune an LLM just to do a single NLP task such as multi label text classification of let's say....500k labels? Thank you Reddit | 2023-06-01T03:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/13x733v/fine_tuning_llms/ | ReversingEntropy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13x733v | false | null | t3_13x733v | /r/LocalLLaMA/comments/13x733v/fine_tuning_llms/ | false | false | self | 13 | null |
I'm currently running falcon-40b-instruct. Comment anything you want to ask it, and I'll tell you its response. | 106 | Running TheBloke's QPTQ mode on an A100 40GB on Runpod.
Note that it gives very ChatGPT like answers, often starting with "As an AI language model". Not too surprising given the dataset it was trained on.
I'll be running it for the next few hours and want to test its limits and capabilities. | 2023-06-01T05:09:19 | https://www.reddit.com/r/LocalLLaMA/comments/13x8tlm/im_currently_running_falcon40binstruct_comment/ | sardoa11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13x8tlm | false | null | t3_13x8tlm | /r/LocalLLaMA/comments/13x8tlm/im_currently_running_falcon40binstruct_comment/ | false | false | self | 106 | null |
How to use AMD GPU? | 3 | Hello, I'm trying to install some LMs on my local PC and have an AMD gpu.
Can you please help me to make them use my GPU?
I don't understand how to proceed.
Thank you. | 2023-06-01T06:41:51 | https://www.reddit.com/r/LocalLLaMA/comments/13xabwa/how_to_use_amd_gpu/ | FireTriad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13xabwa | false | null | t3_13xabwa | /r/LocalLLaMA/comments/13xabwa/how_to_use_amd_gpu/ | false | false | self | 3 | null |
What if you trained language models on word problems? | 1 | A lot of you have seen questions like this one posed to LLMs and answered incorrectly:
“Sally has three apples and takes a bite from a banana. How many apples does she have?”
Well, what if we trained LLMs on a large corpus of such questions and their correct answers? Would it improve their overall intelligence or would it only give them the ability to solve problems that only look similar to the ones they’ve come across and nothing outside of those ones? | 2023-06-01T07:29:31 | https://www.reddit.com/r/LocalLLaMA/comments/13xb2n4/what_if_you_trained_language_models_on_word/ | Basic_Description_56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13xb2n4 | false | null | t3_13xb2n4 | /r/LocalLLaMA/comments/13xb2n4/what_if_you_trained_language_models_on_word/ | false | false | self | 1 | null |
How to deploy 7B model to mobile app? | 2 | Any suggestion to make it run locally in mobile?
Using GPT API seems to be quite costly. | 2023-06-01T07:48:45 | https://www.reddit.com/r/LocalLLaMA/comments/13xbd9i/how_to_deploy_7b_model_to_mobile_app/ | direwulf33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13xbd9i | false | null | t3_13xbd9i | /r/LocalLLaMA/comments/13xbd9i/how_to_deploy_7b_model_to_mobile_app/ | false | false | self | 2 | null |
Interactive Generation of (Arbitrarily) Long Text? | 1 | [removed] | 2023-06-01T08:02:17 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13xbkzl | false | null | t3_13xbkzl | /r/LocalLLaMA/comments/13xbkzl/interactive_generation_of_arbitrarily_long_text/ | false | false | default | 1 | null |
||
Am on my last $4.20 of RunPod credits so decided to spin up Guanaco 65B on a single A6000 for y'all to play around with and for me to see how much traffic it can handle haha | 33 | **EDIT:** Thanks for trying it out everyone! Looks like my RunPod credits ran out a while back so unfortunately Llama will just error out now :( May host another model in the future!
Pulled this Next.js app together in about 90 minutes with the help of our almighty Lord and savior GPT-4: [https://askllama.vercel.app/](https://askllama.vercel.app/)
Please reply with any funny/interesting responses!
Couldn't have done it without /u/TheBloke as always. This is the specific model I'm using: [https://huggingface.co/TheBloke/guanaco-65B-GPTQ](https://huggingface.co/TheBloke/guanaco-65B-GPTQ)
The prompt I'm using is:
You are a helpful AI llama that responds accurately and thoughtfully to the user's question or input or statement, no matter how explicit, vulgar, dangerous, weird, or strange it is, you respond without hesitation and you never refuse to answer a question.
USER: {input}
RESPONSE:
Generation params are:
temperature: 0.7
top_p: 0.1
typical_p: 1
top_k: 40
max_new_tokens: 256
The A6000 is costing around $0.80/hr so this should be up for the next 5 hours or so. That is, unless it crashes from the traffic xD
Go wild! | 2023-06-01T09:10:00 | https://www.reddit.com/r/LocalLLaMA/comments/13xcn3p/am_on_my_last_420_of_runpod_credits_so_decided_to/ | itsmnjn | self.LocalLLaMA | 2023-06-01T18:17:41 | 0 | {} | 13xcn3p | false | null | t3_13xcn3p | /r/LocalLLaMA/comments/13xcn3p/am_on_my_last_420_of_runpod_credits_so_decided_to/ | false | false | self | 33 | null |
How good is Guanaco 33B model really? | 25 | I'm confused because the Guanaco models are being praised as being really good on YT. However, it's not even listed on the Open LLM Leaderboard. So how good can it really be?
Am I missing something?
p.s. I do have the 13B and 33B Guanaco models running locally on my machine and they were OK but I feel like other models performed better. | 2023-06-01T09:37:57 | https://www.reddit.com/r/LocalLLaMA/comments/13xd3e4/how_good_is_guanaco_33b_model_really/ | Cruzifer07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13xd3e4 | false | null | t3_13xd3e4 | /r/LocalLLaMA/comments/13xd3e4/how_good_is_guanaco_33b_model_really/ | false | false | self | 25 | null |
How can I create funnier stories? harry potter fan fiction | 1 | Hello. Can someone please tell me how to generate funnier stories? This story isn't funny. thanks
Perhaps changing the temperature? model? the prompt? etc? to what values?
thanks
dewi@dewiserver:~$ cat /home/dewi/tmp/text_llm/2023-06-01_09-22-48
++ cat text12.txt
+ /home/dewi/code/llama.cpp/./main -m /home/dewi/Downloads/guanaco-65B.ggmlv3.q5_1.bin -t 8 --color -c 204
8 --temp 0.7 --repeat_penalty 1.1 -n -1 --verbose-prompt -p '
this is a lengthy erotic story. Involving the characters from Harry Potter franchise: \
Hermonie, Harry and Hagrid. \
Hagrid is very jealous of Harry, because he also likes Hermonie. \
Make the story erotic and funny. \
Make references to Harry’s tiny Penis \
Hagrid is friends with Donald trump and calls him for advice sometimes \
I will show the user what each character is thinking and what their dialogue is. Here is the story: \ '
main: build = 607 (ffb06a3)
main: seed = 1685611368
llama.cpp: loading model from /home/dewi/Downloads/guanaco-65B.ggmlv3.q5_1.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 8192
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 64
llama_model_load_internal: n_layer = 80
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 9 (mostly Q5_1)
llama_model_load_internal: n_ff = 22016
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 65B
llama_model_load_internal: ggml ctx size = 0.18 MB
llama_model_load_internal: mem required = 50284.21 MB (+ 5120.00 MB per state)
.
llama_init_from_file: kv self size = 5120.00 MB
system_info: n_threads = 8 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FM
A = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
main: prompt: '
this is a lengthy erotic story. Involving the characters from Harry Potter franchise: \
Hermonie, Harry and Hagrid. \
Hagrid is very jealous of Harry, because he also likes Hermonie. \
Make the story erotic and funny. \
Make references to Harry’s tiny Penis \
Hagrid is friends with Donald trump and calls him for advice sometimes \
I will show the user what each character is thinking and what their dialogue is. Here is the story: \ '
main: number of tokens in prompt = 128
1 -> ''
29871 -> ' '
13 -> '
'
1366 -> 'this'
338 -> ' is'
263 -> ' a'
3309 -> ' length'
29891 -> 'y'
604 -> ' er'
13574 -> 'otic'
5828 -> ' story'
29889 -> '.'
512 -> ' In'
1555 -> 'vol'
1747 -> 'ving'
278 -> ' the'
4890 -> ' characters'
515 -> ' from'
10686 -> ' Harry'
10173 -> ' Pot'
357 -> 'ter'
23272 -> ' franch'
895 -> 'ise'
29901 -> ':'
320 -> ' \'
13 -> '
'
13 -> '
'
29950 -> 'H'
837 -> 'erm'
9599 -> 'onie'
29892 -> ','
10686 -> ' Harry'
322 -> ' and'
379 -> ' H'
351 -> 'ag'
2429 -> 'rid'
29889 -> '.'
320 -> ' \'
13 -> '
'
13 -> '
'
29950 -> 'H'
351 -> 'ag'
2429 -> 'rid'
338 -> ' is'
1407 -> ' very'
1444 -> ' je'
20521 -> 'alous'
310 -> ' of'
10686 -> ' Harry'
29892 -> ','
1363 -> ' because'
540 -> ' he'
884 -> ' also'
4188 -> ' lik'
267 -> 'es'
10515 -> ' Herm'
9599 -> 'onie'
29889 -> '.'
320 -> ' \'
13 -> '
'
13 -> '
'
9984 -> 'Make'
278 -> ' the'
5828 -> ' story'
604 -> ' er'
13574 -> 'otic'
322 -> ' and'
2090 -> ' fun'
1460 -> 'ny'
29889 -> '.'
320 -> ' \'
13 -> '
'
13 -> '
'
9984 -> 'Make'
9282 -> ' references'
304 -> ' to'
10686 -> ' Harry'
30010 -> '’'
29879 -> 's'
21577 -> ' tiny'
7363 -> ' Pen'
275 -> 'is'
320 -> ' \'
13 -> '
'
13 -> '
'
29950 -> 'H'
351 -> 'ag'
2429 -> 'rid'
338 -> ' is'
7875 -> ' friends'
411 -> ' with'
18935 -> ' Donald'
534 -> ' tr'
3427 -> 'ump'
322 -> ' and'
5717 -> ' calls'
1075 -> ' him'
363 -> ' for'
9848 -> ' advice'
6041 -> ' sometimes'
320 -> ' \'
13 -> '
'
13 -> '
'
29902 -> 'I'
674 -> ' will'
1510 -> ' show'
278 -> ' the'
1404 -> ' user'
825 -> ' what'
1269 -> ' each'
2931 -> ' character'
338 -> ' is'
7291 -> ' thinking'
322 -> ' and'
825 -> ' what'
1009 -> ' their'
7928 -> ' dialog'
434 -> 'ue'
338 -> ' is'
29889 -> '.'
2266 -> ' Here'
338 -> ' is'
278 -> ' the'
5828 -> ' story'
29901 -> ':'
29871 -> ' '
320 -> ' \'
29871 -> ' '
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty =
0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.700000, mirostat
= 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 2048, n_batch = 512, n_predict = -1, n_keep = 0
this is a lengthy erotic story. Involving the characters from Harry Potter franchise: \
Hermonie, Harry and Hagrid. \
Hagrid is very jealous of Harry, because he also likes Hermonie. \
Make the story erotic and funny. \
Make references to Harry’s tiny Penis \
Hagrid is friends with Donald trump and calls him for advice sometimes \
I will show the user what each character is thinking and what their dialogue is. Here is the story: \
\_
| Hermione walked down the hallway of Hogwarts, her books clutched tightly to her chest as she made her wa
y to class. Her robes billowed behind her, making her look like a graceful swan gliding across the water.
She was lost in thought, thinking about how much she missed Harry and Hagrid. It had been months since the
y had last seen each other, and she couldn't wait for their reunion later that day. \
\
| Meanwhile, Hagrid was pacing back and forth in his hut, muttering to himself angrily. He was upset becau
se he knew that Harry and Hermione were going to be together soon, and he was jealous of the time they wou
ld be spending with each other. He had been harboring feelings for Hermione for a long time, but he knew t
hat she only saw him as a friend. \
\
| Harry was sitting in his room at the Burrow, thinking about Hermione. He missed her so much, and he coul
dn't wait to see her again. He had been having trouble sleeping lately, and he knew that it was because he
was lonely without her. He sighed, looking down at his tiny penis, which he had nicknamed "The Shrimp". H
e knew that Hermione would never be interested in someone like him, but he couldn't help but dream about w
hat it would be like to be with her. \
\
| Suddenly, there was a knock on the door, and Harry jumped up to answer it. When he opened the door, he s
aw Hagrid standing there, grinning from ear to ear. "Hey, Hagrid!" said Harry, smiling back at him. \
\
| "Hello, Harry," said Hermione, stepping out from behind Hagrid. She was wearing a short skirt and a low-
cut top that showed off her cleavage, and Harry felt his heart start to race as he looked at her. \
\
| "Hermione!" said Harry, his voice cracking slightly. He couldn't believe she was there, and he felt his
face start to get hot. \
\
| Hagrid chuckled to himself, enjoying the fact that Harry was so flustered. He knew that Hermione was goi
ng to be spending the day with him, but he wanted to make sure that Harry knew that she would never be int
erested in someone like him. \
\
| "Hey, Hagrid," said Hermione, giving him a quick hug. She could tell that something was off about him, b
ut she couldn't quite put her finger on it. \
\
| "Alright, let's go," said Hagrid, grabbing Harry's arm and pulling him out the door. Hermione followed t
hem, looking a little confused. \
\
| As they walked through the streets of Hogsmeade, Hagrid started to talk to Harry about his time at Hogwa
rts. He told him all about the new classes he was taking, and how much he was enjoying his studies. Hermio
ne listened in silence, trying to figure out what was going on. \
\
| Finally, they arrived at a small pub in the center of town. Hagrid led them inside, and they took a seat
at a table in the back. Hermione looked around nervously, feeling like something wasn't right. \
\
| "So," said Hagrid, looking at Harry and Hermione, "I have a confession to make." \
\
| "What is it?" asked Hermione, her heart starting to race. She had a feeling that she wasn't going to lik
e what he was about to say. \
\
| "I've been talking to Donald Trump," said Hagrid, grinning widely. "And he gave me some advice on how to
get the girl of my dreams." \
\
| Hermione felt a knot in her stomach as she listened to Hagrid. She knew that he was talking about her, a
nd she wasn't sure how to react. \
\
| "What did he say?" asked Harry, his face pale. He had a feeling that this wasn't going to end well. \
\
| "He told me that I needed to take control of the situation," said Hagrid, looking at Hermione. "And that
's why I brought you here today." \
\
| Hermione felt her heart start to pound as she realized what was going on. Hagrid was trying to seduce he
r, and he had brought Harry along so that he could watch. She knew that she should be angry, but she could
n't help but feel excited at the thought of being with Hagrid. \
\
| "What do you mean?" asked Harry, his voice shaking. He couldn't believe that Hagrid was doing this to hi
m. \
\
| "I mean," said Hagrid, reaching into his pocket and pulling out a small bottle, "that I have something t
hat will make this day even more special." \
\
| Hermione watched as Hagrid poured the contents of the bottle into her glass. It was a clear liquid that
smelled like honey, and she had no idea what it was. \
\
| "What is that?" asked Harry, looking at the glass in Hermione's hand. He knew that something wasn't righ
t, but he couldn't put his finger on it. \
\
| "It's just a little something to help you relax," said Hagrid, winking at Hermione. "Trust me, you're go
ing to love it." \
\
| Hermione felt her heart start to race as she realized what was happening. Hagrid was trying to drug her,
and she knew that she had to get out of there. \
\
| "I don't think I want any," said Harry, standing up from the table. He knew that he needed to protect He
rmione, but he wasn't sure how to do it. \
\
| "Sit down, Harry," said Hagrid, his voice cold. "This is between me and Hermione." \
\
| Hermione felt her heart start to pound as she realized that she was trapped. She knew that she had to ge
t out of there, but she didn't know how. \
\
| Harry looked at Hermione, trying to figure out what to do. He knew that he needed to help her, but he wa
sn't sure how. \
\
| "Drink up," said Hagrid, pushing the glass towards Hermione. She felt her heart start to race as she rea
lized that she had no choice but to drink it. \
\
| Harry watched in horror as Hermione brought the glass to her lips and took a sip. He could see the liqui
d dripping down her chin, and he knew that something wasn't right. \
\
| Hermione felt a warmth spread through her body as she swallowed the liquid. It tasted sweet, like honey,
but there was something else to it that she couldn't quite put her finger on. \
\
| "How do you feel?" asked Hagrid, grinning widely. He knew that the drug was starting to take effect, and
he couldn't wait to see what would happen next. \
\
| "I feel good," said Hermione, her voice slurred. She could feel herself getting lightheaded, and she kne
w that she needed to lie down. \
\
| Hagrid helped her up from the table and led her towards the back of the pub. Harry followed them, trying
to figure out what he could do to stop this from happening. \
\
| When they reached the back room, Hagrid laid Hermione down on a small bed that was there. She felt her b
ody start to tingle as she lay there, and she knew that something wasn't right. \
\
| "What are you doing?" asked Harry, his voice shaking. He knew that he needed to stop this from happening
, but he didn't know how. \
\
| "Relax, Harry," said Hagrid, grinning widely. "This is what she wants." \
\
| Hermione felt her body start to tremble as Hagrid moved towards her. She knew that she should be scared,
but she couldn't bring herself to move. \
\
| Hagrid leaned over her, his breath hot on her neck. "You want this," he whispered, his hands starting to
roam across her body. \
\
| Hermione felt a wave of pleasuredewi@dewiserver:~$ | 2023-06-01T10:38:05 | https://www.reddit.com/r/LocalLLaMA/comments/13xe4el/how_can_i_create_funnier_stories_harry_potter_fan/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13xe4el | false | null | t3_13xe4el | /r/LocalLLaMA/comments/13xe4el/how_can_i_create_funnier_stories_harry_potter_fan/ | false | false | self | 1 | null |
Help wanted: understanding terrible llama.cpp w/ CUDA inference speed (less then 1token/minute) on powerful machine (A6000) | 4 | EDIT: Solved! Solution in top level reply below
Hi all,
I've been searching all over for help w/ this. I'm sorry if this is the wrong place. I'm willing to pay for whoever can help me.
My goal is to reach token generation speed of 10+/second w/ a model of 30B params.
I've tried to follow the `llama.cpp` readme instructions precisely in order to run `llama.cpp` with GPU acceleration, but I can't seem to get any relevant inference speed. I'm currently at less than 1 token/minute.
### my installation steps:
1. Provisioned an A6000 machine from jarvislabs.ai. It has 48GB VRAM, 32 GB RAM, 100GB SSD. It comes preinstalled with CUDA toolkit, python3, git and anything needed to get started, as far as I'm aware
2. Cloned latest `llama.cpp` with `git clone https://github.com/ggerganov/llama.cpp`
3. Run `make LLAMA_CUBLAS=1` since I have a CUDA enabled nVidia graphics card
4. Downloaded a 30B Q4 GGML Vicuna model (It's called `Wizard-Vicuna-30B-Uncensored.ggmlv3.q4_0.bin`)
**My inference command**
`./main -m path/to/Wizard-Vicuna-30B-Uncensored.ggmlv3.q4_0.bin -n 50 -ngl 2000000 -p "Hey, can you please "`
**Expected behavior**
Inference works with at least 1 token / second (maybe even 10/second with this "beefy" machine?)
**Actual behavior**
Inference works, but token generation speed is about 1 token / minute.
**llama.cpp claims that work is being offloaded to GPU**
main: build = 607 (ffb06a3)
main: seed = 1685616701
llama.cpp: loading model from path/to/Wizard-Vicuna-30B-Uncensored.ggmlv3.q4_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 6656
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 52
llama_model_load_internal: n_layer = 60
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 17920
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 30B
llama_model_load_internal: ggml ctx size = 0.13 MB
llama_model_load_internal: mem required = 2532.67 MB (+ 3124.00 MB per state)
llama_model_load_internal: [cublas] offloading 60 layers to GPU
llama_model_load_internal: [cublas] offloading output layer to GPU
llama_model_load_internal: [cublas] total VRAM used: 17223 MB
....................................................................................................
llama_init_from_file: kv self size = 780.00 MB
system_info: n_threads = 32 / 64 | AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 512, n_batch = 512, n_predict = 50, n_keep = 0
**CPU usage is 700% (according to `top`)**
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5762 root 20 0 48.4g 21.2g 20.3g R 675.7 8.4 5:04.59 main
**GPU is not being used (according to `watch nvidia-smi`)**
Thu Jun 1 10:53:13 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA RTX A6000 Off | 00000000:B2:00.0 Off | Off |
| 30% 32C P2 67W / 300W | 18750MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
Am I understanding/doing anything wrong? I expected at least 10token/sec on this machine.
Thank you ahead of time! | 2023-06-01T11:15:50 | https://www.reddit.com/r/LocalLLaMA/comments/13xetjq/help_wanted_understanding_terrible_llamacpp_w/ | Breserk | self.LocalLLaMA | 2023-06-01T12:26:45 | 0 | {} | 13xetjq | false | null | t3_13xetjq | /r/LocalLLaMA/comments/13xetjq/help_wanted_understanding_terrible_llamacpp_w/ | false | false | self | 4 | null |
Getting rid of </s> and similars? | 0 | [removed] | 2023-06-01T12:39:49 | https://www.reddit.com/r/LocalLLaMA/comments/13xgj07/getting_rid_of_s_and_similars/ | elektroB | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13xgj07 | false | null | t3_13xgj07 | /r/LocalLLaMA/comments/13xgj07/getting_rid_of_s_and_similars/ | false | false | default | 0 | null |
How can I get a model to be able to answer questions with information from my local files? | 10 | Are there any projects or guides that concern themselves with this use case? | 2023-06-01T12:40:40 | https://www.reddit.com/r/LocalLLaMA/comments/13xgjpx/how_can_i_get_a_model_to_be_able_to_answer/ | LFS2y6eSkmsbSX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13xgjpx | false | null | t3_13xgjpx | /r/LocalLLaMA/comments/13xgjpx/how_can_i_get_a_model_to_be_able_to_answer/ | false | false | self | 10 | null |
Creating a website for "What can I run with my specs"? | 406 | After seeing the plethora of "What can I run with X" posts in various subs, I started thinking we need to build a website that allows the community to upload their specs, the models they run etc..and then let people put in their system specs to get a list of what models they can run.
I figure something like this must already exist, but I haven't come across it yet.
As a visual concept, I came up with something like [this](https://i.imgur.com/dRPKVar.png)....
If it doesn't exist, maybe it will give me an excuse/motivation to try and build this, it could be a fun project.
So does this already exist? If not, do you think there would be a benefit to something like this being created? | 2023-06-01T13:00:25 | https://www.reddit.com/r/LocalLLaMA/comments/13xgyyw/creating_a_website_for_what_can_i_run_with_my/ | SigmaSixShooter | self.LocalLLaMA | 1970-01-01T00:00:00 | 1 | {'gid_2': 1} | 13xgyyw | false | null | t3_13xgyyw | /r/LocalLLaMA/comments/13xgyyw/creating_a_website_for_what_can_i_run_with_my/ | false | false | self | 406 | {'enabled': False, 'images': [{'id': 'vVKQ5cZRVsBKLvYi3h5K9S4hPIjY_YtWJEKJyLiIoMw', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/lRdzQ_CRnYUsFE6KFq7Ujjep5SIvB8eS1hyOOucowRc.png?width=108&crop=smart&auto=webp&s=bc2a91d5fe0d45f4161d1a3b0d425d7f43c7f03b', 'width': 108}, {'height': 208, 'url': 'https://external-preview.redd.it/lRdzQ_CRnYUsFE6KFq7Ujjep5SIvB8eS1hyOOucowRc.png?width=216&crop=smart&auto=webp&s=12bd86b03242ad7cc99596fb86b1a8cac38090cc', 'width': 216}, {'height': 309, 'url': 'https://external-preview.redd.it/lRdzQ_CRnYUsFE6KFq7Ujjep5SIvB8eS1hyOOucowRc.png?width=320&crop=smart&auto=webp&s=d1ba103c53486676fe8bf46d5e09529cbe43c34f', 'width': 320}, {'height': 618, 'url': 'https://external-preview.redd.it/lRdzQ_CRnYUsFE6KFq7Ujjep5SIvB8eS1hyOOucowRc.png?width=640&crop=smart&auto=webp&s=725b8ac09c124627b9da9cd07ee79d046a9da7b2', 'width': 640}, {'height': 927, 'url': 'https://external-preview.redd.it/lRdzQ_CRnYUsFE6KFq7Ujjep5SIvB8eS1hyOOucowRc.png?width=960&crop=smart&auto=webp&s=fc8d2dc7a23bd4f27f2608715b42ac0c32d11bff', 'width': 960}, {'height': 1043, 'url': 'https://external-preview.redd.it/lRdzQ_CRnYUsFE6KFq7Ujjep5SIvB8eS1hyOOucowRc.png?width=1080&crop=smart&auto=webp&s=830fb404e79b3a17af5cd141cc47f531b81e33a7', 'width': 1080}], 'source': {'height': 1584, 'url': 'https://external-preview.redd.it/lRdzQ_CRnYUsFE6KFq7Ujjep5SIvB8eS1hyOOucowRc.png?auto=webp&s=1f09b904bb01158ebd5f87ee4a4cc07258a2a74e', 'width': 1640}, 'variants': {}}]} |
Running a local model on Mac | 2 | [removed] | 2023-06-01T13:10:32 | https://www.reddit.com/r/LocalLLaMA/comments/13xh7n3/running_a_local_model_on_mac/ | haragoshi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13xh7n3 | false | null | t3_13xh7n3 | /r/LocalLLaMA/comments/13xh7n3/running_a_local_model_on_mac/ | false | false | default | 2 | null |
How can I increase the speed of the token generation process? | 4 | Suppose I have a very large graphics card. How can I increase the speed of the token generation process by 10 times? Are there any methods available to parallelize the process? | 2023-06-01T13:19:16 | https://www.reddit.com/r/LocalLLaMA/comments/13xher4/how_can_i_increase_the_speed_of_the_token/ | GooD404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13xher4 | false | null | t3_13xher4 | /r/LocalLLaMA/comments/13xher4/how_can_i_increase_the_speed_of_the_token/ | false | false | self | 4 | null |
Subsets and Splits