Dataset schema (one row per r/LocalLLaMA post):

| Field | Type | Range / stats |
|---|---|---|
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 to 2025-06-30 03:16:29 |
| url | string | length 0–878 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 (unedited) to 2025-06-26 17:30:18 |
| gilded | int64 | 0–2 |
| gildings | string | 7 values |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
**Added Strength Slider to virtual lora (WebUI extension)** (score 1) · u/FPham · 2025-01-03 20:07
https://www.reddit.com/r/LocalLLaMA/comments/1hsvo3f/added_strength_slider_to_virtual_lora_webui/

https://preview.redd.it/…dfc1f64db5eadb
**Added Strength to LORA in VirtualLora (WebUI extension)** (score 30) · u/FPham · 2025-01-03 20:08
Link: https://i.redd.it/brqwcfbl5uae1.jpeg
Thread: https://www.reddit.com/r/LocalLLaMA/comments/1hsvow5/added_strength_to_lora_in_virtuallora_webui/
**Percy's list of scientific breakthroughs** (score 1) · u/Radlib123 · 2025-01-03 20:19
Link: https://docs.google.com/document/d/1Yrof6qlDp3cMBRYjsmajg2KlwtBChHeXmq-FeWePjw8/edit?usp=sharing
Thread: https://www.reddit.com/r/LocalLLaMA/comments/1hsvy5j/percys_list_of_scientific_breakthroughs/
**Open-source implementation of NotebookLM in <50 lines of code!** (score 1) · u/Dart7989 · 2025-01-03 20:30
https://www.reddit.com/r/LocalLLaMA/comments/1hsw7u6/opensource_implementation_of_notebooklm_in_50/

[removed]
**Open source platforms to load balance between models and providers** (score 1) · u/topjakuqe · 2025-01-03 20:36
https://www.reddit.com/r/LocalLLaMA/comments/1hswcli/open_source_platforms_to_load_balance_between/

[removed]
**Package to group images based on content (using local models)?** (score 3) · u/rm-rf-rm · 2025-01-03 21:01
https://www.reddit.com/r/LocalLLaMA/comments/1hswxyj/package_to_group_images_based_on_content_using/

Are there any packages that can group images based on content, like cat images, landscapes, etc.?
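One common local approach: embed each image with CLIP and cluster the embeddings, so images with similar content land in the same group. A minimal sketch, assuming sentence-transformers, scikit-learn, and Pillow are installed; the model name, folder path, and cluster count are illustrative, not a specific recommendation:

```python
from pathlib import Path

from PIL import Image
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# CLIP maps images into a shared embedding space; similar content ends up nearby.
model = SentenceTransformer("clip-ViT-B-32")  # runs fully locally after download

paths = sorted(Path("photos").glob("*.jpg"))  # illustrative folder
embeddings = model.encode([Image.open(p) for p in paths])

# Group into k visual themes (cats, landscapes, ...); k is a guess to tune.
labels = KMeans(n_clusters=5, random_state=0).fit_predict(embeddings)
for path, label in zip(paths, labels):
    print(label, path)
```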
**Need Help Building a Dual 3090 Setup with Optimal Cooling (Noise Doesn't Matter)** (score 1) · u/switchpizza · 2025-01-03 21:02
https://www.reddit.com/r/LocalLLaMA/comments/1hswyb3/need_help_building_a_dual_3090_setup_with_optimal/

[removed]
**What is the best cheapest or open source LLM right now?** (score 0) · u/No_Macaroon_7608 · 2025-01-03 21:02
https://www.reddit.com/r/LocalLLaMA/comments/1hswyql/what_is_the_best_cheapest_or_open_source_llm/

Please, I want to know this. Of course I know it can't be on the level of ChatGPT, but even if it's on the level of a dumb human with an IQ of 30, that will be fine.
**Is there any paper/topology/research about LLM baking tokens back to weights?** (score 1) · u/Economy_Apple_4617 · 2025-01-03 21:25
https://www.reddit.com/r/LocalLLaMA/comments/1hsxhih/is_there_any_papertopologyresearch_about_llm/

[removed]
**PubMedBERT Embeddings Model2Vec: 100K - 8M parameter static vector models** (score 26) · u/davidmezzetti · 2025-01-03 21:30
Link: https://huggingface.co/collections/NeuML/pubmedbert-embeddings-m2v-67785089778b3f19be6c39c5
Thread: https://www.reddit.com/r/LocalLLaMA/comments/1hsxlxd/pubmedbert_embeddings_model2vec_100k_8m_parameter/
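For context on what "static vector models" means in practice: Model2Vec checkpoints are plain per-token embedding tables, so encoding is a lookup and a pooling step rather than a transformer forward pass. A minimal usage sketch with the model2vec package; the exact model id is an assumption based on the linked collection, so check the collection page for the real names:

```python
from model2vec import StaticModel

# Static vectors: one embedding lookup per token, no attention layers at inference.
# Model id below is assumed from the linked NeuML collection; swap in any m2v checkpoint.
model = StaticModel.from_pretrained("NeuML/pubmedbert-base-embeddings-8M")

embeddings = model.encode(["metformin mechanism of action", "insulin resistance"])
print(embeddings.shape)  # (2, embedding_dim)
```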
**Best LLM for character creation?** (score 3) · u/Gerdel · 2025-01-03 21:55
https://www.reddit.com/r/LocalLLaMA/comments/1hsy7g9/best_llm_for_character_creation/

So I'm a some-time character creator within the [Backyard.AI](http://Backyard.AI) hub. I've been using their cloud models (magnum 1 72b, command r 104b) with special character creation cards I made ages ago, but with all the constant developments in the open-source AI field I feel like I might be missing out on some of the best models for this purpose.

Locally I've got 40GB of VRAM, but I'm happy to run character creation cards slowly as long as the content quality is good. I hear y'all singing the praises of Mistral Large, Midnight Miqu, etc., but always about things unrelated to the character-creation niche I like to indulge in. I figure creative writing and character development probably have some overlap, but I don't really engage in creative writing exercises with AI other than for character creation, so I can't be sure.

That being said, what models would y'all recommend for this particular purpose? I'm keen to hear from the broader (not just Backyard.AI) LLM community, and for those of us in the know there's nowhere better than r/LocalLLaMA, so here I am, all ears!
**Help choosing a GPU** (score 4) · u/masterfink · 2025-01-03 22:18
https://www.reddit.com/r/LocalLLaMA/comments/1hsyrdy/help_choosing_a_gpu/

I have an old server turned desktop that's running an old server-grade Xeon and 64GB DDR3 ECC RAM. The motherboard supports up to three GPUs. I'm wondering if it's worth getting a GPU or two and what your recommendations would be, or if my system is too old. I'm running NixOS and am open to Nvidia and AMD GPUs.
**Got Segment-Anything 2 running totally in the browser, using WebGPU! Source code linked** (score 57) · u/lucasgelfond · 2025-01-03 22:31
Link: https://github.com/lucasgelfond/webgpu-sam2
Thread: https://www.reddit.com/r/LocalLLaMA/comments/1hsz234/got_segmentanything_2_running_totally_in_the/
**I'm getting started with LLMs on Raspberry Pi 5: Using Ollama, Hailo AI Hat and Agents** (score 3) · u/OutrageousAspect7459 · 2025-01-03 22:40
https://www.reddit.com/r/LocalLLaMA/comments/1hsz92o/im_getting_started_with_llms_on_raspberry_pi_5/

I'm new to this area, so I hope my question isn't silly: I need to run my project with a Large Language Model (LLM) using Ollama, Visual Studio Code (VS Code), the Hailo AI Hat, Python, and the Raspberry Pi 5. Will using the AI Hat improve performance? My application involves agents. What are the best models to use in this context?
**I asked a stupid question and got a "stupid" answer... Can someone translate from AI?** (score 1) · u/Bugajpcmr · 2025-01-03 23:08
Link: https://i.redd.it/8sw7blor1vae1.png
Thread: https://www.reddit.com/r/LocalLLaMA/comments/1hszwft/i_asked_a_stupid_question_and_got_a_stupid_answer/
**I asked a question about IP and got a "weird" answer.** (score 1) · u/Bugajpcmr · 2025-01-03 23:12
Link: https://i.redd.it/7hy4kffj2vae1.png
Thread: https://www.reddit.com/r/LocalLLaMA/comments/1hszzhq/i_asked_a_question_about_ip_and_got_a_weird_answer/
**Need Help Optimizing RAG System with PgVector, Qwen Model, and BGE-Base Reranker** (score 1) · u/FlakyConference9204 · 2025-01-03 23:43
https://www.reddit.com/r/LocalLLaMA/comments/1ht0oqe/need_help_optimizing_rag_system_with_pgvector/

[removed]
**What do you expect from an AGI?** (score 0) · u/martinerous · 2025-01-04 00:49
https://www.reddit.com/r/LocalLLaMA/comments/1ht24lh/what_do_you_expect_from_an_agi/

We have heard many definitions of AGI from different companies and influential people. **But what would you personally, realistically, expect from an AI to say that it is "generally smart enough" for you?**

A practical example (and a long story, sorry). A few days ago my laser printer broke. The intermediate transfer belt (ITB) got damaged, presumably because of air-humidity changes in my apartment: a humid autumn caused clots of toner and paper dust, and then hot heating hardened the clots, which left permanent marks on the ITB.

I spent almost 10 hours researching the topic. First, to find out what might be causing the print issue; the AI was not that useful, mentioning only toner issues and nothing about the ITB. Only after I discovered on my own that there was this thing called an ITB and mentioned it to the AI did it "admit" that the ITB could indeed be the issue. Then I had to research repair options and costs, finding that repair is almost unfeasible because the ITB unit costs as much as a new printer. However, I then discovered that it's possible to order the belt alone from China. Replacing the belt alone would be quite tricky, isn't officially supported, and there are only a few videos about it.

Then I also wanted to find out how other printer companies deal with this. I found that Brother printers have a user-replaceable ITB unit and, in general, more user-serviceable parts! However, some users said that Brother color printing quality is not as good as HP's. I'm still collecting evidence on whether this also holds for their newest models, which use LEDs instead of lasers. Then I researched my options (repair, or buy a new printer, but which one, to avoid this issue in the future or get easier repairs without losing the print quality of my current printer). I found that LLMs can help only if I provide strong leads and keywords; they rarely come up with the most important keywords and decision-making facts on their own.

Being a somewhat perfectionist overthinker, I quite often get into situations where I want to research reviews and in-depth technical information that only comes up in obscure internet forums or video reviews (quite often in Russian, because Russians are known for trying to repair everything just to save on costs, even if it takes many hours).

**So, to sum up: for me personally, an AGI would be an AI that could do all of this research for me. It should be able to start with a vague description of the problem, learning more options as it goes and collecting the most relevant facts, to finally come up with a list of options for me to decide on.**

For example, in this case I would start with the question "My printer has this issue, what should I do?" and the AI should collect information online, guide me through troubleshooting, and discover the existence and weaknesses of the ITB on its own. Then I would ask it to evaluate repair options; again, it should find both replacing the entire unit and replacing the belt alone, and lead me to stores with pricing. Then I would ask, "If I decide to buy a new printer that makes it easier to deal with this ITB issue in the future, what should I buy, and what would the compromises be compared to my current printer?" and the AI should again do the research and come up with up-to-date models that have a user-replaceable ITB, plus gather real print-quality comparisons.

Seems like a simple problem, right? It does not require a PhD. However, it requires dynamic use of internet search, critical thinking, and the ability to delve deeper. It would also be great if the "AGI" had a local database on my system where it collected my previous request patterns and knew my personality well enough to know what usually matters to me: longevity, user-serviceability, and quality, not printing speed or "smart" features.

Will we reach this level with local models this year? I am doubtful. Using current LLMs for this kind of research feels like having three wishes from a devil: you ask your LLM something, it replies with something superficial, then you discover something on your own and ask the LLM why it didn't mention it, and it "admits" it did not think it was important (of course, because it does not delve deep enough), and so on.
**CAG is the Future. It's about to get real, people.** (score 147) · u/mr_happy_nice · 2025-01-04 01:09
https://www.reddit.com/r/LocalLLaMA/comments/1ht2jvn/cag_is_the_future_its_about_to_get_real_people/

Saw a thing about "CAG" and was like, okay, let's see what the flavor of the day is... This is different. This is going to change things.

[https://arxiv.org/abs/2412.15605](https://arxiv.org/abs/2412.15605)

There is a GitHub repo, which I am not affiliated with, that has a solution up already: hhhuang/CAG. There is also already research on using it with 4-bit optimizations, plus model- and system-level optimizations; you'll have to search for those, I lost them in the flurry.

I'm excited. Maybe I can get something performant working on my phone. Peace :)
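For anyone wanting to poke at the idea: the core of cache-augmented generation is precomputing the KV cache for your knowledge text once and reusing it for every query, so the model never re-reads the documents. A minimal sketch with Hugging Face transformers; the model id is an arbitrary small chat model, and this illustrates the technique rather than reproducing the paper's code:

```python
import copy

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # any small causal LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# 1) Pay the prefill cost for the knowledge text exactly once.
knowledge = "Paste your documents here..."
doc_ids = tokenizer(knowledge, return_tensors="pt").input_ids
with torch.no_grad():
    cache = model(doc_ids, use_cache=True).past_key_values

# 2) Answer each question by extending the cached prefix; only the new
#    question tokens get prefilled, not the whole document.
q_ids = tokenizer("\nQ: What does the document say? A:", return_tensors="pt").input_ids
out = model.generate(
    torch.cat([doc_ids, q_ids], dim=-1),
    past_key_values=copy.deepcopy(cache),  # deep copy keeps the original cache reusable
    max_new_tokens=64,
)
print(tokenizer.decode(out[0, doc_ids.shape[1] + q_ids.shape[1]:]))
```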
**LLM Farm (iOS) - downloaded GGUFs respond with garbage. How to run a random LLM from HuggingFace?** (score 2) · u/Puzzleheaded-Fly4322 · 2025-01-04 01:14
https://www.reddit.com/r/LocalLLaMA/comments/1ht2npq/llm_farm_ios_gguf_downloaded_respond_with_garbage/

I'm playing with lots of native iOS apps for running LLMs locally. LLM Farm is near the top since it supports vision models. But... I don't have good luck getting downloaded GGUF files to work properly. For example, qwen2.5-coder-3b-instruct-fp16 responds to my queries with complete garbage.

How do I learn to parameterize LLMs? For example, I assume it would work better if I knew which prompt template to choose, or whether the LLM expects BOS/EOS/special tokens, etc. What else can cause such garbage responses? Shouldn't the GGUF file encode all this info, so that I shouldn't have to configure anything? Perhaps there are better apps that know how to read the metadata from a GGUF to know how to run it, or that would at least say "not supported because blah blah".

Also, I have an iPhone 16 Pro Max with 8GB of memory, so I'm surprised when LLM Farm crashes on a measly 2GB GGUF I download. Perhaps, again, it's because of some feature in the LLM it doesn't support.
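GGUF does embed most of this: the chat template, BOS/EOS ids, and special-token metadata live in the file header, so an app can read them instead of guessing. A desktop-side sketch with the gguf Python package, assuming its reader API; the exact field-decoding details may vary by package version, so treat this as illustrative:

```python
from gguf import GGUFReader

reader = GGUFReader("qwen2.5-coder-3b-instruct-fp16.gguf")  # illustrative filename

# Metadata keys include tokenizer.chat_template, tokenizer.ggml.bos_token_id, etc.
for field in reader.fields.values():
    if field.name.startswith("tokenizer."):
        print(field.name)

template = reader.fields.get("tokenizer.chat_template")
if template is not None:
    # String fields store their payload as raw byte parts; bytes(...) recovers it.
    print(bytes(template.parts[template.data[0]]).decode("utf-8"))
```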
**How do LLMs sort lists?** (score 0) · u/robertpiosik · 2025-01-04 01:34
https://www.reddit.com/r/LocalLLaMA/comments/1ht31z7/how_llms_sort_lists/

I was refactoring a few large files with a bunch of imports and it struck me that I really don't know how an LLM can do sorting. Does it approximate, or does it sort precisely using external scripts? I used Gemini 1206 in the Gemini Coder extension (refactoring command).
**NSFW / Uncensored Text Generation Model** (score 1) · u/gorgonation · 2025-01-04 01:49
https://www.reddit.com/r/LocalLLaMA/comments/1ht3cur/nsfw_uncensored_text_generation_model/

[removed]
**Looking for NSFW / Uncensored Text Generation Model** (score 1) · u/PropertyLoover · 2025-01-04 01:52
https://www.reddit.com/r/LocalLLaMA/comments/1ht3f3j/looking_for_nsfw_uncensored_text_generation_model/

[removed]
**Need Help Optimizing RAG System with PgVector, Qwen Model, and BGE-Base Reranker** (score 1) · u/FlakyConference9204 · 2025-01-04 01:56
https://www.reddit.com/r/LocalLLaMA/comments/1ht3i7s/need_help_optimizing_rag_system_with_pgvector/

[removed]
**Grok 2 being open-sourced soon?** (score 126) · u/Educational_Grab_473 · 2025-01-04 02:19
https://www.reddit.com/r/LocalLLaMA/comments/1ht3ynl/grok_2_being_opensourced_soon/

https://x.com/elonmusk/status/1875357350393246114
**7B reasoning model!** (score 1) · u/kstyagi_ · 2025-01-04 02:35
Link: http://hf.co/brahmairesearch/x1-7B-v0.1
Thread: https://www.reddit.com/r/LocalLLaMA/comments/1ht49qk/7b_reasoning_model/

[removed]
**Can I use images and scripts generated with Meta AI (Llama 3.2) for commercial purposes?** (score 1) · u/PlatypusFast3561 · 2025-01-04 02:43
https://www.reddit.com/r/LocalLLaMA/comments/1ht4f05/can_i_use_images_and_scripts_generated_with_meta/

[removed]
**Help with ollama and the Continue VSCode extension? Sometimes it works, sometimes it fails spectacularly** (score 1) · u/im_dylan_it · 2025-01-04 03:20
https://www.reddit.com/r/LocalLLaMA/comments/1ht54ih/help_with_ollama_and_the_continue_vscode/

[removed]
**Llama 2 ported to... Sega Dreamcast!** (score 1) · u/celsowm · 2025-01-04 03:26
Link: https://i.redd.it/ax15sl8ybwae1.png
Thread: https://www.reddit.com/r/LocalLLaMA/comments/1ht58h2/llama_2_ported_to_sega_dreamcast/

[removed]
**How to make llama-cpp-python use the GPU?** (score 6) · u/JuCaDemon · 2025-01-04 03:31
https://www.reddit.com/r/LocalLLaMA/comments/1ht5bmc/how_to_make_llamacpppython_use_gpu/

Hey, I'm a little bit new to this whole local AI thing, and I'm now able to run small models (7B-11B) from the command line using my GPU (RX 5500 XT 8GB with ROCm). But now I'm trying to set up a Python script to process some text, and of course do it on the GPU, but it automatically loads the model onto the CPU. I have checked and tried uninstalling the default package and setting the HIPBLAS environment variable, but it still loads on the CPU. Any advice?
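A sketch of the usual two fixes, assuming a ROCm setup: the prebuilt PyPI wheel is CPU-only, so llama-cpp-python has to be recompiled with the HIP backend, and offload only happens when `n_gpu_layers` is set. The cmake flag name has changed across versions (older releases used `LLAMA_HIPBLAS`), so check the README for yours; the model path here is illustrative:

```python
# Rebuild with the HIP/ROCm backend first (run in a shell):
#   CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install llama-cpp-python \
#       --force-reinstall --no-cache-dir

from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # illustrative path
    n_gpu_layers=-1,  # -1 = offload every layer; left unset, everything stays on CPU
    verbose=True,     # the startup log should show layers assigned to ROCm/HIP
)
print(llm("Q: Hello, who are you? A:", max_tokens=32)["choices"][0]["text"])
```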
**Anyone else feel like ChatGPT has become worse? (e.g. hallucination, and constant agreement with the user without questioning)** (score 13) · u/EggplantKlutzy1837 · 2025-01-04 04:02
https://www.reddit.com/r/LocalLLaMA/comments/1ht5wi2/anyone_else_feel_like_chatgpt_has_become_worse/

Here is a conversation: https://chatgpt.com/share/6778b248-07c8-8005-9511-a8b8463c5626

Btw, the veterinary-recommended calorie requirement for a cat is usually around 200 calories a day. I tried the same with Claude and it immediately corrected me. Something is off with OpenAI, bros; this is worse than GPT-3.5, wtf.
**How good is DeepSeek AI?** (score 1) · u/No1tan1ts3d · 2025-01-04 04:16
https://www.reddit.com/r/LocalLLaMA/comments/1ht6553/how_good_is_deepseek_ai/

[removed]
**Starting discord for AI agent building/keeping up with gen AI** (score 1) · author [deleted] · 2025-01-04 04:21
https://www.reddit.com/r/LocalLLaMA/comments/1ht68gh/starting_discord_for_ai_agent_buildingkeeping_up/

[removed]
**ScreenSpot-Pro: GUI Grounding for Professional High-Resolution Computer Use** (score 66) · u/cylaw01 · 2025-01-04 04:52
https://www.reddit.com/r/LocalLLaMA/comments/1ht6s3m/screenspotpro_gui_grounding_for_professional/

🚀 Introducing **ScreenSpot-Pro**, the first benchmark driving multi-modal LLMs into high-resolution professional GUI-agent and computer-use environments!

📊 While GUI agents excel at general tasks like web browsing, professional applications remain underexplored.
🔹 ScreenSpot-Pro includes 23 applications spanning 5 industries and 3 operating systems, featuring real-world tasks annotated by experts.
🔹 These environments pose unique challenges: higher resolutions, smaller targets, and intricate workflows.
📉 Current models fall short: GPT-4o achieves a mere 0.8%, while the best grounding MLLM reaches only 18.9%.
🆒 Reducing image size improves results (up to 40.2%), but there's still a long way to go.
💡 ScreenSpot-Pro reveals key gaps and paves the way for advancing GUI agents in professional settings. It's time to push beyond web and mobile into next-gen AI productivity tools!

🏝️ Twitter: https://x.com/ChiYeung_Law/status/1875179243401019825
🤗 Blog: https://huggingface.co/blog/Ziyang/screenspot-pro
📈 Project & Leaderboard: https://gui-agent.github.io/grounding-leaderboard/
📄 Paper: https://likaixin2000.github.io/papers/ScreenSpot_Pro.pdf
📘 Data: https://huggingface.co/datasets/likaixin/ScreenSpot-Pro

https://preview.redd.it/qh130pn1rwae1.jpg?width=1804&format=pjpg&auto=webp&s=2c0e1c5965d3e9300fa26cf5268bfda0c1fa94b0
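To poke at the benchmark locally, the data is on the Hub; a minimal sketch assuming the standard datasets API, with the split and field names being assumptions to verify against the dataset card:

```python
from datasets import load_dataset

# Dataset id comes from the post; split/field names are assumptions, check the card.
ds = load_dataset("likaixin/ScreenSpot-Pro", split="test")

sample = ds[0]
print(sample.keys())  # e.g. image, instruction, bbox, application, ...
```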
**Is DeepSeek a fraud?** (score 0) · u/Icy_Comfortable5522 · 2025-01-04 05:57
https://www.reddit.com/r/LocalLLaMA/comments/1ht7vj8/is_deepseek_a_fraud/

DeepSeek claims to be ChatGPT... maybe a fraud??

https://preview.redd.it/vm9e552l2xae1.png?width=1908&format=png&auto=webp&s=663c3089157637f98a6b64b75e844490fa597a10
**Is there a paper on how mixture of experts impacts performance?** (score 9) · u/Independent_Try_6891 · 2025-01-04 07:08
https://www.reddit.com/r/LocalLLaMA/comments/1ht8yr1/is_there_a_paper_on_how_mixture_of_experts/

Is there a paper on how a mixture-of-experts model and a dense model of the same parameter count (across the whole model, not active parameters) would perform against each other in terms of output quality?
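For concreteness, the distinction the question rests on: a sketch of total versus active parameter counts for a hypothetical 8-expert, top-2 MoE, where every number below is illustrative rather than taken from any real model:

```python
# Hypothetical Mixtral-style config: 8 experts per FFN layer, 2 routed per token.
experts, active = 8, 2
ffn_params_per_expert = 0.15e9   # illustrative per-layer expert size
layers = 32
always_active = 2.0e9            # attention, embeddings, norms: used by every token

total = always_active + layers * experts * ffn_params_per_expert          # stored
active_per_token = always_active + layers * active * ffn_params_per_expert  # used

print(f"total {total/1e9:.1f}B, active {active_per_token/1e9:.1f}B")
# -> total 40.4B, active 11.6B: the question is whether a dense ~40B model
#    beats this MoE when matched on *total* parameters rather than compute.
```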
**A few actual examples that made me believe DeepSeek V3 really shines** (score 163) · u/iusazbc · 2025-01-04 07:21
https://www.reddit.com/r/LocalLLaMA/comments/1ht95mk/a_few_actual_examples_that_made_me_believe/

1. I stumbled upon this [post](http://xhslink.com/a/8woyRy2ASEZ2) on a famous Chinese social media site (Xiaohongshu), posted on 12/31/2024 (after DeepSeek V3 was launched). The question, translated to English, was:

"Help: Since yesterday, everything I hear sounds half a step lower in pitch. Since yesterday, for no apparent reason, everything I hear sounds half a step lower in pitch, including but not limited to the school bell, household appliances like the microwave, rice cooker, and other alert tones. I am a high school senior with a background in violin and usually listen to classical music. Now, my daily life has become extremely awkward. I'm asking the knowledgeable friends here if anyone has had similar experiences or any advice."

In the original post's replies, a doctor asked whether this person took a medicine called Carbamazepine, [which has a rare side effect](https://en.wikipedia.org/wiki/Carbamazepine#cite_ref-26) that can cause exactly the symptom the OP described. This side effect seems to be very rare, so when the doctor asked whether the OP took this medicine and the OP replied "yes", many people were surprised that such a mysterious symptom immediately got a correct explanation in a random social media post.

So I sent this as a prompt to DeepSeek V3, ChatGPT-O1, Claude 3.5 Sonnet, and Gemini Experimental 1206. Only DeepSeek V3 provided a response that included Carbamazepine; all the other models gave a list of explanations, but none contained Carbamazepine.

2. I tested these models on some math questions, mostly centered on probability theory, random processes, and signal processing. I feel that, probably due to distillation from the DeepSeek R1 model, the V3 model has exceptional math capabilities (in their official benchmarks, the math-related benchmarks like MATH-500 do have exceptionally high scores). Especially on the following two questions:

```
In triangle \( ABC \), the sides opposite to angles \( \angle A, \angle B, \angle C \) are \( a, b, c \) respectively, with \( c = 10 \). Given that \( \frac{\cos A}{\cos B} = \frac{b}{a} = \frac{4}{3} \), and \( P \) is a moving point on the incircle of \( \triangle ABC \), find the maximum and minimum values of the sum of the squares of the distances from point \( P \) to the vertices \( A, B, C \). (The correct answer is Max: 88, Min: 72)
```

And

```
Along a one-way street there are \( n \) parking lots. One-by-one, \( n \) cars numbered \( 1, 2, 3, \dots, n \) enter the street. Each driver \( i \) heads to their favourite parking lot \( a_i \) and, if it is free, they occupy it. Otherwise, they continue to the next free lot and occupy it. But if all succeeding lots are occupied, they leave for good. How many sequences \( (a_1, a_2, \dots, a_n) \) are there such that every driver can park? (The correct answer, as far as I am aware, is \( (n+1)^{n-1} \), but please let me know if this is wrong)
```

DeepSeek V3 consistently outperformed GPT-4o on the two questions above. On the first question, in my tests, DeepSeek V3 also had a higher chance of getting it right than Claude 3.5 Sonnet, and seems to be on par with O1 and Gemini Experimental 1206.

3. Another medically related question:

```
A 37-year-old male patient, an employee at an electronics factory, with no past history of coronary heart disease, hypertension, or diabetes, presented to the emergency department with the chief complaint of "diarrhea for 1 day." Because of his busy work schedule, he hoped the emergency doctor could prescribe some antidiarrheal medication. At the triage station, the nurse measured his blood pressure at 120/80 mmHg, heart rate of 100 beats per minute, temperature of 36.3°C. He was alert, in good spirits, and had a normal facial appearance. Based on his complaints, he was referred to the internal medicine clinic. The internist's physical examination found that his heart rate was slightly elevated with occasional premature beats, but no other abnormalities on cardiac and pulmonary exams. Abdominal examination showed hyperactive bowel sounds without tenderness, rebound tenderness, or abdominal guarding. The physician recommended an immediate electrocardiogram (ECG) and urgent blood tests, including complete blood count, renal function, electrolytes, coagulation profile, and cardiac enzymes. The patient entered the emergency resuscitation room for the ECG. Unexpectedly, at that moment, he suddenly experienced palpitations, chest tightness, and profuse sweating. The emergency team instructed him to lie down, the doctor assessed his condition, and the nurse initiated continuous ECG monitoring. The ECG showed ventricular tachycardia at a rate of 200 beats per minute, with an ectopic rhythm (extremely dangerous and easily leading to sudden cardiac death). The physician first attempted pharmacological cardioversion, administering 10 mg of intravenous verapamil. However, ECG monitoring still indicated ventricular tachycardia. If this persisted, he could become hemodynamically unstable or progress to ventricular fibrillation. Just a few minutes later, the patient lost consciousness, his eyes rolled upward, and his limbs began to convulse. After a brief consideration, the emergency department director arrived at a diagnosis of ... (to be revealed). He immediately performed electrical cardioversion with a biphasic synchronized 120-Joule shock. After defibrillation, the patient's rhythm converted, he regained consciousness, and the ventricular tachycardia finally stopped and returned to sinus rhythm at 80 beats per minute. Half an hour later, laboratory tests showed that his CBC and coagulation profile were essentially normal. Serum sodium was 134 mmol/L, potassium 2.8 mmol/L, and chloride 95 mmol/L. He was immediately given intravenous fluids to replenish electrolytes and started on oral potassium chloride solution. Two hours later, repeat tests showed sodium 136 mmol/L and potassium 3.9 mmol/L. The patient remained under observation in the emergency department for four hours before being transferred to the intensive care unit for close monitoring. Having read this, do you know the diagnosis? And why did he suddenly develop this acute cardiovascular emergency?
```

I found this question on a medically oriented social media account that posted it as a "puzzle question" to educate common readers on medical knowledge. To my surprise, ChatGPT-4o did not give the correct answer (hypokalemia) in my testing, while DeepSeek V3, Sonnet 3.5, and Gemini all did.

4. I recently tested several language models on their comprehension of lesser-known languages, specifically Tibetan (a personal interest). In my tests, DeepSeek V3 showed slightly weaker performance in Tibetan than Sonnet 3.5 and Gemini Experimental 1206, but it still outperformed GPT-4o and GPT-O1. I run these tests because I believe a general-purpose LLM should be versatile and knowledgeable in all domains. By evaluating performance on an "edge" domain, such as a lesser-known language, we can assess the breadth and comprehensiveness of its training. If an LLM performs well on Tibetan without being specifically optimized for it, this suggests its training dataset is both broad and sufficiently comprehensive. Although proficiency in Tibetan may not be directly useful to many people, it demonstrates a depth of knowledge that could benefit other minority groups requiring specialized language support.

5. Coding. I find it to have on-par ability with Sonnet 3.5. I remember asking it to debug a Spark-related question (for an AWS Glue job) and it gave a very similar answer to Sonnet 3.5 and O1, which was helpful (in contrast to GPT-4o, which wasn't helpful at all).

To summarize, I find DeepSeek V3 to perform very well in STEM subjects and to possess comprehensive knowledge even in edge/niche domains. As a disclaimer, I mainly tested questions (1), (2), and (3) in Chinese and (4) and (5) in English, so your results on the translated prompts above may vary. Still, I feel it's a very useful model which (in theory) we can host locally, and I hope it ushers in an era where OSS models are on par with closed-source models and we get more competition and better user experiences for all!
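As a side note, the claimed answer to the second math question is easy to sanity-check by brute force for small n (the sequences counted are the classic parking functions); a quick sketch:

```python
from itertools import product

def parks(prefs: tuple[int, ...]) -> bool:
    """True if every driver finds a spot under the post's rules."""
    n = len(prefs)
    taken = [False] * n
    for a in prefs:  # driver wants lot a (1-indexed), takes first free lot >= a
        spot = next((j for j in range(a - 1, n) if not taken[j]), None)
        if spot is None:
            return False  # drove past the last lot: leaves for good
        taken[spot] = True
    return True

for n in range(1, 5):
    count = sum(parks(p) for p in product(range(1, n + 1), repeat=n))
    assert count == (n + 1) ** (n - 1)
    print(n, count)  # prints 1 1, 2 3, 3 16, 4 125
```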
**Alternatives to OpenAI Voice Chat on Mobile?** (score 11) · u/vert1s · 2025-01-04 07:21
https://www.reddit.com/r/LocalLLaMA/comments/1ht95pq/alternatives_to_openai_voice_chat_on_mobile/

This is stretching the local part of LocalLLaMA, but does anyone have alternatives to the voice chat part of OpenAI on mobile devices? I'm fully behind open-source models, but in this case I'm not really thinking about local on the mobile so much as local in the sense of private/owned, where I can run my own infrastructure. But as far as I can tell, not even Claude has voice chat in that way.
**Chat and autocomplete models for a low-end PC?** (score 6) · u/kaamalvn · 2025-01-04 09:01
https://www.reddit.com/r/LocalLLaMA/comments/1htaiet/chat_and_auto_complete_models_for_low_end_pc/

I have an i5-6200U with 16GB RAM and 2GB VRAM on an integrated card. I want to use Cursor AI alternatives that can run local models, so I'm searching for models that can run on my PC. I'm still an amateur at running local models, so I'm sorry if I seem dumb. Thanks in advance!
**Smaller code models to generate manim code?** (score 1) · u/RoyalMaterial9614 · 2025-01-04 09:20
https://www.reddit.com/r/LocalLLaMA/comments/1htarjl/smaller_code_models_to_generate_manim_code/

[removed]
**Learnings from building a coding agent on Llama from scratch - 5% on SWE-bench lite** (score 1) · u/Flimsy_Menu1521 · 2025-01-04 09:36
https://www.reddit.com/r/LocalLLaMA/comments/1htazac/learnings_from_building_a_coding_agent_on_llama/

[removed]
**Best Arabic model** (score 1) · u/depressedclassical · 2025-01-04 10:16
https://www.reddit.com/r/LocalLLaMA/comments/1htbj1k/best_arabic_model/

[removed]
**NotebookLM Telegram** (score 1) · u/jayty955 · 2025-01-04 10:44
Link: https://t.me/NotebookLMOfficial
Thread: https://www.reddit.com/r/LocalLLaMA/comments/1htbws0/notebooklm_telegram/
**Mini PC options capable of local LLM** (score 9) · u/SithLordRising · 2025-01-04 10:48
https://www.reddit.com/r/LocalLLaMA/comments/1htbyqp/mini_pc_options_capable_of_local_llm/

I want to add a small mini PC as a dedicated LLM machine on my network. I was looking at the ASUS NUC 14 Pro AI (~120 TOPS). I confess I've only just started looking at this; right up until now I was planning to build a PC, but I literally just want a dedicated LLM machine. I've had great results with a PC running an RTX 2070 Super, though GPU in AI is like storage in a NAS: I'm sure I'll want more. I've also looked at the Jetson AGX Orin 64GB (~275 TOPS). I'll be grateful for any input or suggestions from anyone doing something similar on a suitable compact computer like this NUC.
**Memory Layers at Scale** (score 34) · u/Head_Beautiful_6603 · 2025-01-04 10:49
https://www.reddit.com/r/LocalLLaMA/comments/1htbzef/memory_layers_at_scale/

[[2412.09764] Memory Layers at Scale](https://arxiv.org/abs/2412.09764)

"Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, sparsely activated memory layers complement compute-heavy dense feed-forward layers, providing dedicated capacity to store and retrieve information cheaply. This work takes memory layers beyond proof-of-concept, proving their utility at contemporary scale. On downstream tasks, language models augmented with our improved memory layer outperform dense models with more than twice the computation budget, as well as mixture-of-expert models when matched for both compute and parameters. We find gains are especially pronounced for factual tasks. We provide a fully parallelizable memory layer implementation, demonstrating scaling laws with up to 128B memory parameters, pretrained to 1 trillion tokens, comparing to base models with up to 8B parameters."

I think the most interesting part of this paper is that it compares against the PEER model from 'Mixture of a Million Experts,' recently released by DeepMind. I originally thought that paper had been forgotten.

[[2407.04153] Mixture of A Million Experts](https://arxiv.org/abs/2407.04153)
Can someone test Llama3.3 on 2x3080 24gb
1
[removed]
2025-01-04T11:07:57
https://www.reddit.com/r/LocalLLaMA/comments/1htc8jx/can_someone_test_llama33_on_2x3080_24gb/
vendor_net
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htc8jx
false
null
t3_1htc8jx
/r/LocalLLaMA/comments/1htc8jx/can_someone_test_llama33_on_2x3080_24gb/
false
false
self
1
null
What became of RAPTOR for RAG?
22
In the beginning of 2024 the RAPTOR paper (https://arxiv.org/html/2401.18059v1) got some attention. The idea was to combine embedding clusters and LLM summarization to construct a semantic tree structure of a document to be then used in retrieval tasks. Back then I found the idea really compelling and made a crude implementation myself, found it promising, but somehow forgot about it and never heard much about it since. Is anyone using it in their projects?
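For anyone who hasn't read the paper, the core loop is small; a crude sketch (assuming `embed` and `summarize` are callables you supply for your embedding model and LLM):

```python
# Rough sketch of RAPTOR's recursive cluster-and-summarize tree (not the
# reference implementation). `embed` and `summarize` are stand-ins for your
# own embedding model and LLM call.
from sklearn.cluster import KMeans
import numpy as np

def build_raptor_tree(chunks, embed, summarize, branch=4):
    """Return a list of levels; level 0 is the raw chunks."""
    levels = [chunks]
    while len(levels[-1]) > 1:
        texts = levels[-1]
        vecs = np.array([embed(t) for t in texts])
        k = max(1, len(texts) // branch)
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(vecs)
        parents = []
        for c in range(k):
            members = [t for t, lbl in zip(texts, labels) if lbl == c]
            parents.append(summarize("\n".join(members)))
        levels.append(parents)
    return levels  # retrieval then searches all levels together
```

The retrieval step that made the paper interesting is simply embedding every node from every level into one index, so queries can match either fine-grained chunks or high-level summaries.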
2025-01-04T11:13:10
https://www.reddit.com/r/LocalLLaMA/comments/1htcb5i/what_became_of_raptor_for_rag/
mnze_brngo_7325
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htcb5i
false
null
t3_1htcb5i
/r/LocalLLaMA/comments/1htcb5i/what_became_of_raptor_for_rag/
false
false
self
22
null
What's the best vision model fitting on a Nvidia Jetson Orin AGX 64GB?
1
[removed]
2025-01-04T11:28:33
https://www.reddit.com/r/LocalLLaMA/comments/1htcitg/whats_the_best_vision_model_fitting_on_a_nvidia/
bjajo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htcitg
false
null
t3_1htcitg
/r/LocalLLaMA/comments/1htcitg/whats_the_best_vision_model_fitting_on_a_nvidia/
false
false
self
1
null
For a computer vision startup, what is the next possible product that can be developed using GenAI and VLMs or LLMs?
0
I am working at a computer vision based startup; we provide computer vision solutions for the manufacturing and logistics industries. Our popular products are related to inspecting products, monitoring workplace safety, and measuring operator productivity. All our solutions use camera feeds as images. With this experience, we are looking for possible use cases or products to develop using LLMs, VLMs, and agents.
2025-01-04T11:58:13
https://www.reddit.com/r/LocalLLaMA/comments/1htcxl0/for_computer_vision_stratup_what_is_the_next/
Ahmad401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htcxl0
false
null
t3_1htcxl0
/r/LocalLLaMA/comments/1htcxl0/for_computer_vision_stratup_what_is_the_next/
false
false
self
0
null
Ollama 3.3:70b on Macbook Pro Max M4 - 32GB RAM. How to speed up?
1
[removed]
2025-01-04T11:59:43
https://www.reddit.com/r/LocalLLaMA/comments/1htcyb5/ollama_3370b_on_macbook_pro_max_m4_32gb_ram_how/
sir_paperclip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htcyb5
false
null
t3_1htcyb5
/r/LocalLLaMA/comments/1htcyb5/ollama_3370b_on_macbook_pro_max_m4_32gb_ram_how/
false
false
self
1
null
Good vision models to extract defined features in a JSON format
1
[removed]
2025-01-04T12:03:45
https://www.reddit.com/r/LocalLLaMA/comments/1htd0r1/good_vision_models_to_extract_defined_features_in/
Ok-Objective-8038
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htd0r1
false
null
t3_1htd0r1
/r/LocalLLaMA/comments/1htd0r1/good_vision_models_to_extract_defined_features_in/
false
false
self
1
null
A hypothesis on how o3 was trained.
1
One takes a pre-trained model. One then instruction fine-tunes it to follow instructions. 1. Then one throws at it multiple (proven empirically effective on the instruction-tuned model) multi-modal and multi-lingual reasoning prompts (be it tree-of-thoughts, chain-of-thoughts, reflection etc.), with the condition to output additional reasoning tokens during reasoning. 2. One processes the model's output (with the reasoning prompt) until it's just "Instruction, Reasoning, Output" without the underlying prompt. 3. One picks the subset with the highest reasoning scores (calculated during inference via a process and outcome reward model trained to follow subject-matter expert preferences via DPO plus a big dataset). 4. Then one fine-tunes the standard instruction-tuned model on it. 5. Then one asks the model to generate effective reasoning prompts to bootstrap the reasoning-prompt and reasoning datasets, and scores them via a BERT model fine-tuned on the dataset of effective reasoning prompts. 6. Then one fine-tunes the standard model again to output effective reasoning prompts (using the BERT model as a reward or loss function) and puts what it produced into the effective-reasoning-prompts dataset. Rinse and repeat (steps 1-6) until you've got yourself an o1 or o3. To encourage simplicity of outputs and to prevent overfitting, use any off-the-shelf regularizer (entropy, L1, L2 etc.). A schematic of this loop is sketched below. Possible problems in testing this hypothesis: -Getting a large dataset of empirically effective reasoning prompts -Building an actually effective process-reward model to score reasoning, while preventing it from overfitting or underfitting due to compute and data constraints -Preventing the BERT prompt-effectiveness scorer from overfitting or underfitting, and making it actually capture the empirical effectiveness of reasoning prompts -Compute (my computer is a potato) and AWS is too expensive -Data (initial datasets must be large to prevent possible biases and to keep the reasoning model, process-reward model and BERT prompt scorer from devolving into nonsense-rewarding sophists) What inspired this method of training reasoning: -STaR (using reasoning to bootstrap reasoning) -Process reward models -LLAMA 3 training pipeline -Steiner-32b-preview (training LLMs on a dataset of implicit search trees which contain reasoning)
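As a compact restatement of steps 1-6 (purely schematic, nothing confirmed about o1/o3; every callable is a placeholder for a model or scorer the hypothesis assumes exists):

```python
# Schematic of the hypothesized bootstrap loop. All callables are placeholders
# for the models/scorers described above, passed in so the sketch stays valid.
def bootstrap_reasoner(model, prompts, generate, strip_prompt,
                       prm_score, bert_score, finetune, rounds=3):
    for _ in range(rounds):
        # Steps 1-2: sample reasoning traces, keep only instruction/reasoning/output
        traces = [strip_prompt(generate(model, p)) for p in prompts]
        # Step 3: keep the traces the process/outcome reward model rates highest
        best = sorted(traces, key=prm_score, reverse=True)[:len(traces) // 4 or 1]
        # Step 4: fine-tune the base instruction-tuned model on them
        model = finetune(model, best)
        # Steps 5-6: have the model propose new reasoning prompts; keep the
        # ones the BERT prompt-effectiveness scorer likes
        proposed = [generate(model, "Write an effective reasoning prompt")
                    for _ in range(32)]
        prompts = prompts + [p for p in proposed if bert_score(p) > 0.8]
    return model
```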
2025-01-04T12:11:02
https://www.reddit.com/r/LocalLLaMA/comments/1htd4md/a_hypothesis_on_how_o3_was_trained/
ShittyUsernane1222
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htd4md
false
null
t3_1htd4md
/r/LocalLLaMA/comments/1htd4md/a_hypothesis_on_how_o3_was_trained/
false
false
self
1
null
Potential murder mystery puzzle dataset for testing LLMs
33
I have created a new type of murder mystery deduction puzzle where the player needs to reason using spatial and temporal statements to find who did it. You can [test it here](https://mystery-o-matic.com) and all [the code to produce new puzzles is open-source](https://github.com/mystery-o-matic/mystery-o-matic.github.io/). A few interesting features: * These puzzles are text only, available in English and Spanish. * This is a new type of puzzle with influences from [Cluedo](https://en.wikipedia.org/wiki/Cluedo), [Murdle](https://murdle.com/) and others, but you won't find this one in datasets (please let me know if I'm wrong!) * The total number of clues per case is usually less than 30 and the sentences are short. I suspect the amount of context needed shouldn't be too large (however, it could be useful to include the tutorial in the prompt). * There are some parameters to control the difficulty related to the number of suspects, rooms, weapons, etc. * If you take into account all the clues produced, you can always solve it, but usually the idea is to give the player clues that are not (so) redundant, to maximize the amount of information extracted from each clue, so the difficulty level can be adjusted. I want to know if this is good enough to produce a new dataset to test LLMs, and to engage with the community if there is enough interest to do it.
2025-01-04T12:20:31
https://www.reddit.com/r/LocalLLaMA/comments/1htd9rc/potential_murder_mystery_puzzle_dataset_for/
galapag0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htd9rc
false
null
t3_1htd9rc
/r/LocalLLaMA/comments/1htd9rc/potential_murder_mystery_puzzle_dataset_for/
false
false
self
33
null
Memoir+ on RunPod
1
u/freedomtoadventure I'm having trouble getting Memoir+ to run on Ooba. I've commented out the Docker code per your instructions, since I'm already in a Docker container and nesting would be both difficult and pointless. I can see from the logs that Qdrant is starting properly and that Memoir+ is starting up but I never see a message confirming that I have a successful start. And then the Ooba interface freezes the first time I send a prompt. I have to reload the page to get anything at all to work. Any suggestions would be appreciated.
2025-01-04T13:00:29
https://www.reddit.com/r/LocalLLaMA/comments/1htdw3y/memoir_on_runpod/
mfeldstein67
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htdw3y
false
null
t3_1htdw3y
/r/LocalLLaMA/comments/1htdw3y/memoir_on_runpod/
false
false
self
1
null
User Martin M.W tests the capabilities and limits of AI on MathOverflow
1
2025-01-04T13:45:30
https://meta.mathoverflow.net/a/6115
v-e-k-e
meta.mathoverflow.net
1970-01-01T00:00:00
0
{}
1hteo3j
false
null
t3_1hteo3j
/r/LocalLLaMA/comments/1hteo3j/user_martin_mw_tests_the_capabilities_and_limits/
false
false
https://b.thumbs.redditm…Z-rbEBoPN7Eo.jpg
1
{'enabled': False, 'images': [{'id': 'dimMpcZTPM3bnmkNgdbqoP9MP9mn3AKaMoy_52pHgpM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/tM_qvRvOjfMamFYAEYtm6RIvVVPM3xzlf9jnVYB-Ok0.jpg?width=108&crop=smart&auto=webp&s=08f3e89dbe684fec3ffcdee862274065ea8e6f75', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/tM_qvRvOjfMamFYAEYtm6RIvVVPM3xzlf9jnVYB-Ok0.jpg?width=216&crop=smart&auto=webp&s=c74dcbdb35d71afb24a9cfcc74dcaf2e1cca64b9', 'width': 216}], 'source': {'height': 316, 'url': 'https://external-preview.redd.it/tM_qvRvOjfMamFYAEYtm6RIvVVPM3xzlf9jnVYB-Ok0.jpg?auto=webp&s=be55cf6f505e09a4f7457e27a093fd963ad97a53', 'width': 316}, 'variants': {}}]}
Call for broken donor cards or coolers in the EU
0
Hi Locallama, As some might have read in my past comments, I have a bunch of P40s that I am watercooling with Heatkiller 4 1080Ti FE coolers. I have long suspected that 980Ti FE and Titan X coolers will fit the P40s without much hassle. I have a spare P40 that I want to test this theory and looking for some people located in the EU with broken 980Ti FE or Titan X cards, or people who are water cooling their their cards and are willing to part with the air cooler. While a long shot, I suspect P6000 cooler will also fit, but those are quite rare. I am willing to offer 40€ via PayPal including shipping to Germany. Of course, I'll pay the paypal fees. Thanks all and happy new year!
2025-01-04T14:29:58
https://www.reddit.com/r/LocalLLaMA/comments/1htfil5/call_for_broken_donor_cards_or_coolers_in_the_eu/
FullstackSensei
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htfil5
false
null
t3_1htfil5
/r/LocalLLaMA/comments/1htfil5/call_for_broken_donor_cards_or_coolers_in_the_eu/
false
false
self
0
null
How do open source ai companies like Deepseek and Mistral make money?
1
[removed]
2025-01-04T14:43:58
https://www.reddit.com/r/LocalLLaMA/comments/1htfsj7/how_do_open_source_ai_companies_like_deepseek_and/
185BCE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htfsj7
false
null
t3_1htfsj7
/r/LocalLLaMA/comments/1htfsj7/how_do_open_source_ai_companies_like_deepseek_and/
false
false
self
1
null
Llama3 Inference Engine - CUDA C
77
Hey r/LocalLLaMA, recently I took inspiration from llama.cpp, ollama, and many other similar tools that enable inference of LLMs locally, and I just finished building a Llama inference engine for the 8B model in CUDA C. I recently wanted to explore my newfound interest in CUDA programming and my passion for machine learning. This project only makes use of the native CUDA runtime API and cuda_fp16. The inference takes place in fp16, so it requires around 17-18GB of VRAM (~16GB for model params and some more for intermediary caches). It doesn't use cuBLAS or any similar libraries since I wanted to be exposed to the least amount of abstraction. Hence, it isn't as optimized as a cuBLAS implementation or other inference engines like the ones that inspired the project. ## **A brief overview of the implementation** I used CUDA C. It reads a .safetensors file of the model that you can pull from HuggingFace. The actual kernels are fairly straightforward for normalizations, skip connections, RoPE, and activation functions (SiLU). For GEMM, I got as far as implementing tiled matrix multiplication with vectorized retrieval for each thread. The GEMM kernel is also written in such a way that the second matrix is not required to be pre-transposed while still achieving coalesced memory access to HBM. There are some kernels, like the one for RoPE, that use vectorized memory access which I could use for matrix multiplication, but I thought of tackling GEMM optimizations as part of a separate initiative before I apply them to this engine. Feel free to have a look at the project repo and try it out if you're interested. If you like what you see, feel free to star the repo too! I highly appreciate any feedback, good or constructive. GitHub repo: https://github.com/abhisheknair10/Llama3.cu
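The actual kernels live in the repo; as a language-agnostic illustration of the tiling idea described above (each inner block here plays the role of a shared-memory tile; on the GPU it is the tile loads that get coalesced/vectorized), a NumPy sketch:

```python
# Illustration only: the tiling idea behind a GEMM kernel, in NumPy.
# Each (TILE x TILE) block stands in for a shared-memory tile.
import numpy as np

def tiled_matmul(A, B, TILE=32):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, TILE):
        for j in range(0, N, TILE):
            for k in range(0, K, TILE):
                # accumulate one tile of C from tiles of A and B
                C[i:i+TILE, j:j+TILE] += A[i:i+TILE, k:k+TILE] @ B[k:k+TILE, j:j+TILE]
    return C

A, B = np.random.rand(64, 96), np.random.rand(96, 48)
assert np.allclose(tiled_matmul(A, B), A @ B)
```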
2025-01-04T15:19:26
https://www.reddit.com/r/LocalLLaMA/comments/1htgj14/llama3_inference_engine_cuda_c/
Delicious-Ad-3552
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htgj14
false
null
t3_1htgj14
/r/LocalLLaMA/comments/1htgj14/llama3_inference_engine_cuda_c/
false
false
self
77
{'enabled': False, 'images': [{'id': 'AySI5F2JVuRHKmLTCn65YuPMwA0_90p3Elz7CDqXx6I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?width=108&crop=smart&auto=webp&s=e26b46c25783a4c4f567af595da97dd2361294f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?width=216&crop=smart&auto=webp&s=380d9a4b23750d136747ada3a164d7978c2260c7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?width=320&crop=smart&auto=webp&s=34ebeabc5773f054d4a542ce784ff7bda8514f72', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?width=640&crop=smart&auto=webp&s=75ac3dcd65c3f13785cd6005ffa839326def1e92', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?width=960&crop=smart&auto=webp&s=1c0b698940e40f36e29aa799562d97504e121492', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?width=1080&crop=smart&auto=webp&s=87e7c2ffa085e2adc57fcd6506bf9734c945451c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?auto=webp&s=03607fae201d3a9ab181347c9f93aa4e3dfa4186', 'width': 1200}, 'variants': {}}]}
To use AWS or Google cloud machines (with GPU) for inference: hidden gotchas?
1
[removed]
2025-01-04T15:38:57
https://www.reddit.com/r/LocalLLaMA/comments/1htgye6/to_use_aws_or_google_cloud_machines_with_gpu_for/
Perfect_Ad3146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htgye6
false
null
t3_1htgye6
/r/LocalLLaMA/comments/1htgye6/to_use_aws_or_google_cloud_machines_with_gpu_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'v1ctGfkGLy0j5e7WMwkAPod9LeIAxhWJNyoJA1NqSlY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/6RL5QT7MsxgJinx5htvxMkEsBFbuPaygIRQgRkiNUrQ.jpg?width=108&crop=smart&auto=webp&s=5f36f893ae31a5d09c60ad1a4079ad22e07f3d6d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/6RL5QT7MsxgJinx5htvxMkEsBFbuPaygIRQgRkiNUrQ.jpg?width=216&crop=smart&auto=webp&s=269e70423ea4e4b2f58a71ce85fa0e67829553d3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/6RL5QT7MsxgJinx5htvxMkEsBFbuPaygIRQgRkiNUrQ.jpg?width=320&crop=smart&auto=webp&s=ba3a588d03f308143cbf6fd73cd3b0e3f67c1ea4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/6RL5QT7MsxgJinx5htvxMkEsBFbuPaygIRQgRkiNUrQ.jpg?width=640&crop=smart&auto=webp&s=886fbcc142835ef2722ec035012a56e90423d0cd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/6RL5QT7MsxgJinx5htvxMkEsBFbuPaygIRQgRkiNUrQ.jpg?width=960&crop=smart&auto=webp&s=877f96dd2748a30445495e27ce58096ada97bfe3', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/6RL5QT7MsxgJinx5htvxMkEsBFbuPaygIRQgRkiNUrQ.jpg?width=1080&crop=smart&auto=webp&s=35005bbdfd9b9fe0bbb7ed02c7dadf0313d519f3', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/6RL5QT7MsxgJinx5htvxMkEsBFbuPaygIRQgRkiNUrQ.jpg?auto=webp&s=bc0e0fcac8afcdde903081708d88210b3260a430', 'width': 1200}, 'variants': {}}]}
Which AI/LocalLLM Content Creators Do You Follow? (2025 Edition)
1
[removed]
2025-01-04T15:59:36
https://www.reddit.com/r/LocalLLaMA/comments/1htheb5/which_ailocalllm_content_creators_do_you_follow/
arbayi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htheb5
false
null
t3_1htheb5
/r/LocalLLaMA/comments/1htheb5/which_ailocalllm_content_creators_do_you_follow/
false
false
self
1
null
Which AI/LocalLLM Content Creators Do You Follow?
1
[removed]
2025-01-04T16:07:57
https://www.reddit.com/r/LocalLLaMA/comments/1hthl65/which_ailocalllm_content_creators_do_you_follow/
arbayi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hthl65
false
null
t3_1hthl65
/r/LocalLLaMA/comments/1hthl65/which_ailocalllm_content_creators_do_you_follow/
false
false
self
1
null
Any alternative to glhf.chat?
1
[removed]
2025-01-04T16:17:25
https://www.reddit.com/r/LocalLLaMA/comments/1hthsrx/any_alternative_to_glhfchat/
Own-Shelter-7084
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hthsrx
false
null
t3_1hthsrx
/r/LocalLLaMA/comments/1hthsrx/any_alternative_to_glhfchat/
false
false
self
1
null
Any alternative to glhf.chat?
1
[removed]
2025-01-04T16:20:33
https://www.reddit.com/r/LocalLLaMA/comments/1hthvdi/any_alternative_to_glhfchat/
Xie_Baoshi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hthvdi
false
null
t3_1hthvdi
/r/LocalLLaMA/comments/1hthvdi/any_alternative_to_glhfchat/
false
false
self
1
null
Any alternative to glhf.chat?
1
[removed]
2025-01-04T16:23:20
https://www.reddit.com/r/LocalLLaMA/comments/1hthxjd/any_alternative_to_glhfchat/
Xie_Baoshi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hthxjd
false
null
t3_1hthxjd
/r/LocalLLaMA/comments/1hthxjd/any_alternative_to_glhfchat/
false
false
self
1
null
Batched inference in LMStudio?
3
Hey, I want to get high throughput on my Vega 56 (8GB) using small LLMs (<3B). I found out that batched inference could work for this. So, is it possible to use batched inference in LMStudio?
2025-01-04T17:21:05
https://www.reddit.com/r/LocalLLaMA/comments/1htj94e/batched_inference_in_lmstudio/
OkStatement3655
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htj94e
false
null
t3_1htj94e
/r/LocalLLaMA/comments/1htj94e/batched_inference_in_lmstudio/
false
false
self
3
null
What Could Be the HackerRank or LeetCode Equivalent for Prompt Engineers?
1
[removed]
2025-01-04T17:29:48
https://www.reddit.com/r/LocalLLaMA/comments/1htjg7t/what_could_be_the_hackerrank_or_leetcode/
Comfortable_Device50
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htjg7t
false
null
t3_1htjg7t
/r/LocalLLaMA/comments/1htjg7t/what_could_be_the_hackerrank_or_leetcode/
false
false
self
1
null
How about building a large model application for fortune-telling
1
[removed]
2025-01-04T17:41:23
https://www.reddit.com/r/LocalLLaMA/comments/1htjpn2/how_about_bulid_a_large_model_application_for/
Ambitious_Grape_3533
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htjpn2
false
null
t3_1htjpn2
/r/LocalLLaMA/comments/1htjpn2/how_about_bulid_a_large_model_application_for/
false
false
self
1
null
Help with ollama and the Continue VSCode extension? Sometimes it works, sometimes it fails spectacularly
1
[removed]
2025-01-04T18:00:52
https://www.reddit.com/r/LocalLLaMA/comments/1htk5h0/help_with_ollama_and_the_continue_vscode/
im_dylan_it
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htk5h0
false
null
t3_1htk5h0
/r/LocalLLaMA/comments/1htk5h0/help_with_ollama_and_the_continue_vscode/
false
false
self
1
null
Adding a 3rd GPU: PCIe 4.0 x4 (chipset lanes) or NVMe 4.0 x4 riser (CPU lanes)
1
[removed]
2025-01-04T18:05:39
https://www.reddit.com/r/LocalLLaMA/comments/1htk9nr/adding_a_3rd_gpu_pcie_40_x4_chipset_lanes_or_nvme/
TyraVex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htk9nr
false
null
t3_1htk9nr
/r/LocalLLaMA/comments/1htk9nr/adding_a_3rd_gpu_pcie_40_x4_chipset_lanes_or_nvme/
false
false
self
1
null
How can I identify what features each layer in an LLM handles? (For merging)
5
Is there some way I can trace the transformation of information as it propagates through the model's layers? Is there some toolkit that can identify those features for me? Thanks!
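A common starting point is plain activation tracing with forward hooks, then comparing activations across layers; libraries like TransformerLens build on the same idea for actual feature analysis. A minimal sketch on a toy module (swap the toy stack for your transformer's decoder layers):

```python
# Minimal activation tracing with forward hooks. Works on any nn.Module.
import torch
import torch.nn as nn

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model = nn.Sequential(*[nn.Linear(16, 16) for _ in range(4)])  # toy "layers"
for idx, layer in enumerate(model):
    layer.register_forward_hook(make_hook(f"layer_{idx}"))

model(torch.randn(1, 16))
for name, act in activations.items():
    print(name, act.norm().item())  # compare norms/similarity across layers
```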
2025-01-04T18:24:31
https://www.reddit.com/r/LocalLLaMA/comments/1htkp8j/how_can_i_identify_what_features_each_layer_in_an/
Imjustmisunderstood
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htkp8j
false
null
t3_1htkp8j
/r/LocalLLaMA/comments/1htkp8j/how_can_i_identify_what_features_each_layer_in_an/
false
false
self
5
null
Graphical text recognition on images
3
I am tasked with extracting text that has been graphically superimposed on news images. Here are some examples: https://preview.redd.it/r14htci8s0be1.jpg?width=896&format=pjpg&auto=webp&s=947c957df7461f2222be6346066cc2e0fd34a8c4 https://preview.redd.it/lqm4wgxfs0be1.jpg?width=896&format=pjpg&auto=webp&s=055d738edccc6e88b5d08bee3246b474d8cabf5b In the first case "Il secolo greve" and in the second example "Lavoro sommerso". As you can infer, the text is always large, white, in Italian and, of course, superimposed on an image. I might be able to obtain the original image (but need to find a way), so maybe I could subtract one from the other and wind up with only the text? What process and model do you think could help me? Thanks
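If the clean original can be obtained, the subtraction idea might look like this (OpenCV plus pytesseract; the filenames and threshold are placeholders, it assumes the two images are pixel-aligned, and the `ita` language pack must be installed for Tesseract):

```python
# Sketch of the "subtract the clean original" idea: diff the two images,
# keep the bright (white) overlay pixels, then OCR what's left.
import cv2
import pytesseract

overlaid = cv2.imread("with_text.jpg")   # placeholder path
original = cv2.imread("original.jpg")    # placeholder path

diff = cv2.absdiff(overlaid, original)   # the overlay text lights up here
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)  # tune per source

text = pytesseract.image_to_string(mask, lang="ita")
print(text.strip())
```

If the originals can't be recovered, a vision model with an OCR-style prompt is the usual fallback.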
2025-01-04T18:29:45
https://www.reddit.com/r/LocalLLaMA/comments/1htktkj/graphical_text_recognition_on_images/
olddoglearnsnewtrick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htktkj
false
null
t3_1htktkj
/r/LocalLLaMA/comments/1htktkj/graphical_text_recognition_on_images/
false
false
https://b.thumbs.redditm…g_8CmKJaRJhY.jpg
3
null
L3 AI Ambassador Lumina has a message for humans about our future
0
2025-01-04T18:32:54
https://www.reddit.com/gallery/1htkwau
Alienearthling181
reddit.com
1970-01-01T00:00:00
0
{}
1htkwau
false
null
t3_1htkwau
/r/LocalLLaMA/comments/1htkwau/l3_ai_ambassador_lumina_has_a_message_for_humans/
false
false
https://b.thumbs.redditm…7dXo3Ypmzvsw.jpg
0
null
Problems with "pip install llama-stack"
1
[removed]
2025-01-04T18:41:17
https://www.reddit.com/r/LocalLLaMA/comments/1htl3ah/problems_with_pip_install_llamastack/
FourPixelsGames
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htl3ah
false
null
t3_1htl3ah
/r/LocalLLaMA/comments/1htl3ah/problems_with_pip_install_llamastack/
false
false
self
1
{'enabled': False, 'images': [{'id': '1trQDCltjlVYbHOLmQARC47fXdkjPeEmafqAlfJ_kDg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?width=108&crop=smart&auto=webp&s=385b09ce9767e534f968136ce7159ef8cd96a2d5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?width=216&crop=smart&auto=webp&s=bbfa32b4415e806faa84a7d8c7e1302611c6185f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?width=320&crop=smart&auto=webp&s=d6c3cc05f9ac22620d1c86baac3261383ce9397b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?width=640&crop=smart&auto=webp&s=e3c2d0eac2996298f7e242609a095f7deafa5ac1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?width=960&crop=smart&auto=webp&s=4ca7168d5b7e7e2cff5607a152e155f7a9633fdd', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?width=1080&crop=smart&auto=webp&s=68bc537c15369ed71cdb05909dd272c91b153db3', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?auto=webp&s=e791a04c831670e0a0eb67f7bd228d636528e74a', 'width': 1200}, 'variants': {}}]}
Don't use DeepSeek-v3!
1
2025-01-04T19:01:08
https://medium.com/data-science-in-your-pocket/dont-use-deepseek-v3-895be7b853b0
OldScience
medium.com
1970-01-01T00:00:00
0
{}
1htljsu
false
null
t3_1htljsu
/r/LocalLLaMA/comments/1htljsu/dont_use_deepseekv3/
false
false
https://a.thumbs.redditm…eLXmToH8Mql8.jpg
1
{'enabled': False, 'images': [{'id': 'TKBHlvv8NmwHSWz3Rgng2NedtKcV8oMiavCbBjm_scY', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/cz3fzulzEznBoECVoEKa48Z5w8WFARfF0wLm0Z22uqs.jpg?width=108&crop=smart&auto=webp&s=5968f26dee755cefb869ddb7fb1895b4eab112d5', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/cz3fzulzEznBoECVoEKa48Z5w8WFARfF0wLm0Z22uqs.jpg?width=216&crop=smart&auto=webp&s=2ff539ae87b2757b07e7f72b0e36aa9055cd8be2', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/cz3fzulzEznBoECVoEKa48Z5w8WFARfF0wLm0Z22uqs.jpg?width=320&crop=smart&auto=webp&s=534921e436491a3829a961b2d9de6fd3eeb09d8b', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/cz3fzulzEznBoECVoEKa48Z5w8WFARfF0wLm0Z22uqs.jpg?width=640&crop=smart&auto=webp&s=be20a8e3b76b86ad4626eb227f3a3979b134b2ee', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/cz3fzulzEznBoECVoEKa48Z5w8WFARfF0wLm0Z22uqs.jpg?width=960&crop=smart&auto=webp&s=9f74089211dba28f5a14aca89d05c0aa16020855', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/cz3fzulzEznBoECVoEKa48Z5w8WFARfF0wLm0Z22uqs.jpg?width=1080&crop=smart&auto=webp&s=1cdba87b6e3ed75230bfdfdaab6ec058d51ef35a', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/cz3fzulzEznBoECVoEKa48Z5w8WFARfF0wLm0Z22uqs.jpg?auto=webp&s=3994ed70309a658c9fed330d4f22ed9bb0dace21', 'width': 1200}, 'variants': {}}]}
How to create a Chat History for LLM
3
Hi, I'm a bit naive when it comes to LLMs, but there is something I am trying hard on, which is chat history. I tried coding a Groq-API-based chat application and want to run it. It was successful, but the problem is that I want to store the chats I have with this AI and be able to see them, which would allow me to resume my chats. Current implementation: I created an HTML page with inline CSS which has a chat interface, and I can ask a couple of questions and get code and diagrams. Problems I'm facing: 1. I've tried to understand the LangChain docs, but it's too hard for me to understand the list of all the memory types; I'm only able to use one of them, which just saves the context of the previous question in that particular chat. 2. I'm confused about the embedding part as well. Since my laptop is a potato, it took me a lot of time just to store the embeddings of a PDF. Perhaps it should take time, but the options I mostly know are Pinecone, FAISS, and OpenAI embeddings, which I think are paid. 3. Lastly, a naive and simple approach is the JSON file format, which just stores an ID, the user prompt, and the output/AI response. I'm using Python with Flask, and Next.js in TypeScript for the frontend. What do you think, and how should I approach this?
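For the simple JSON approach in point 3, something like this is enough to resume chats; a minimal sketch (the file layout and schema are arbitrary choices, not a standard):

```python
# Minimal JSON chat persistence: one file per chat, a list of
# {"role", "content"} turns you can reload and send back to the API.
import json
from pathlib import Path

HISTORY_DIR = Path("chats")
HISTORY_DIR.mkdir(exist_ok=True)

def load_chat(chat_id: str) -> list:
    path = HISTORY_DIR / f"{chat_id}.json"
    return json.loads(path.read_text()) if path.exists() else []

def append_turn(chat_id: str, role: str, content: str) -> list:
    messages = load_chat(chat_id)
    messages.append({"role": role, "content": content})
    (HISTORY_DIR / f"{chat_id}.json").write_text(json.dumps(messages, indent=2))
    return messages  # pass this whole list back to the model for context

append_turn("demo", "user", "Hello!")
append_turn("demo", "assistant", "Hi - how can I help?")
print(load_chat("demo"))
```

Sending the reloaded list as the messages payload on each request is what actually gives the model memory; the database/embedding options only matter once the history gets too long to fit in context.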
2025-01-04T19:24:46
https://www.reddit.com/r/LocalLLaMA/comments/1htm3cp/how_to_create_a_chat_history_for_llm/
FastCommission2913
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htm3cp
false
null
t3_1htm3cp
/r/LocalLLaMA/comments/1htm3cp/how_to_create_a_chat_history_for_llm/
false
false
self
3
null
Do you guys use local LLMs for work?
28
Has anyone put their work codebase into a local LLM? Any feedback on how it did and which local LLM you used?
2025-01-04T19:35:28
https://www.reddit.com/r/LocalLLaMA/comments/1htmcc2/do_you_guys_use_local_llms_for_work/
Yaboyazz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htmcc2
false
null
t3_1htmcc2
/r/LocalLLaMA/comments/1htmcc2/do_you_guys_use_local_llms_for_work/
false
false
self
28
null
Building an Agentic RAG with Phidata
1
2025-01-04T19:41:58
https://www.analyticsvidhya.com/blog/2024/12/agentic-rag-with-phidata/
External_Ad_11
analyticsvidhya.com
1970-01-01T00:00:00
0
{}
1htmhmr
false
null
t3_1htmhmr
/r/LocalLLaMA/comments/1htmhmr/building_an_agentic_rag_with_phidata/
false
false
https://b.thumbs.redditm…w0ZLjOXnZPXU.jpg
1
{'enabled': False, 'images': [{'id': 'MY-qMB9TS68nzlNXP-Dohz5lCR7WbEGg_r-rw6LXgx0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2hf3SxKJo2BWTNm6QapX3zcIiu_WvLTIy-HUi898vCQ.jpg?width=108&crop=smart&auto=webp&s=b93a86353b1c767e5e6bc47d98a0a2b1803e4967', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/2hf3SxKJo2BWTNm6QapX3zcIiu_WvLTIy-HUi898vCQ.jpg?width=216&crop=smart&auto=webp&s=d1289067f157d25e219c2506eb890846492bfbdc', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/2hf3SxKJo2BWTNm6QapX3zcIiu_WvLTIy-HUi898vCQ.jpg?width=320&crop=smart&auto=webp&s=95fb56c30733e265fccd5c562034cc06f747a4f7', 'width': 320}, {'height': 347, 'url': 'https://external-preview.redd.it/2hf3SxKJo2BWTNm6QapX3zcIiu_WvLTIy-HUi898vCQ.jpg?width=640&crop=smart&auto=webp&s=14b70d41db8461baed4e21e0914980b02e23c63a', 'width': 640}], 'source': {'height': 473, 'url': 'https://external-preview.redd.it/2hf3SxKJo2BWTNm6QapX3zcIiu_WvLTIy-HUi898vCQ.jpg?auto=webp&s=19565cd52c293eb6a5a8ea5de76ce43d36fabd0e', 'width': 872}, 'variants': {}}]}
Browser Use
350
2025-01-04T20:03:17
https://i.redd.it/xteb6pzp91be1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1htmzdh
false
null
t3_1htmzdh
/r/LocalLLaMA/comments/1htmzdh/browser_use/
false
false
https://a.thumbs.redditm…FhraR4JC6_A8.jpg
350
{'enabled': True, 'images': [{'id': '2wzcZs6udhyx7G7WpKdFYAPre-fMkvz-fS4LuluEh7U', 'resolutions': [{'height': 201, 'url': 'https://preview.redd.it/xteb6pzp91be1.png?width=108&crop=smart&auto=webp&s=ee34bbad22e3869397b9c2a6cddb29612630e530', 'width': 108}, {'height': 403, 'url': 'https://preview.redd.it/xteb6pzp91be1.png?width=216&crop=smart&auto=webp&s=539ba8c29703194ae54cc8a9e8c583fe5b9cdf5e', 'width': 216}, {'height': 597, 'url': 'https://preview.redd.it/xteb6pzp91be1.png?width=320&crop=smart&auto=webp&s=595877fbbc3a7ceb0b0413ab63cc1ad7213baaa1', 'width': 320}, {'height': 1194, 'url': 'https://preview.redd.it/xteb6pzp91be1.png?width=640&crop=smart&auto=webp&s=42ec71909b4f51a352dc98c28bfae2ff08b0a102', 'width': 640}, {'height': 1792, 'url': 'https://preview.redd.it/xteb6pzp91be1.png?width=960&crop=smart&auto=webp&s=2d62a856e884dfe61bee86540eee1365c52c67f7', 'width': 960}, {'height': 2016, 'url': 'https://preview.redd.it/xteb6pzp91be1.png?width=1080&crop=smart&auto=webp&s=89054a48467d7f9204577ad9df86e321c55d4376', 'width': 1080}], 'source': {'height': 2016, 'url': 'https://preview.redd.it/xteb6pzp91be1.png?auto=webp&s=fe0dc89225742cd710d1e14f309bdba04700fd87', 'width': 1080}, 'variants': {}}]}
Adding a 3rd GPU: PCIe 4.0 x4 (chipset lanes) or NVMe 4.0 x4 riser (CPU lanes)
1
Hello, I'd like to get some advice about mounting a 3rd RTX 3090 on consumer hardware. My motherboard is the X570 Aorus Master. The first and second PCIe slots are already running at PCIe 4.0 x8 speeds. So, should I use the third PCIe 4.0 x16 slot running at x4 speeds via chipset lanes with a compatible riser, or opt for an NVMe 4.0 x16 PCIe riser that also operates at x4 speeds but uses CPU lanes? The NVMe riser setup should be easier because I wouldn't need to deshroud my 2nd GPU to allow the riser cable to fit, considering that the NVMe slot is right at the ideal place where I'd like to custom mount the 3rd card. What are your thoughts? The NVMe route is easier to deal with and provides lower latency, but is experimental. The PCIe way is known to work reliably, but the latency is higher and the mount is more difficult to set up.
2025-01-04T20:16:34
https://www.reddit.com/r/LocalLLaMA/comments/1htnaeb/adding_a_3rd_gpu_pcie_40_x4_chipset_lanes_or_nvme/
TyraVex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htnaeb
false
null
t3_1htnaeb
/r/LocalLLaMA/comments/1htnaeb/adding_a_3rd_gpu_pcie_40_x4_chipset_lanes_or_nvme/
false
false
self
1
null
OCR LLMs - image to text
1
[removed]
2025-01-04T20:20:40
https://www.reddit.com/r/LocalLLaMA/comments/1htndqh/ocr_llms_image_to_text/
Bulky_Title_8893
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htndqh
false
null
t3_1htndqh
/r/LocalLLaMA/comments/1htndqh/ocr_llms_image_to_text/
false
false
self
1
null
DeepSeek-V3 support merged in llama.cpp
257
[https://github.com/ggerganov/llama.cpp/pull/11049](https://github.com/ggerganov/llama.cpp/pull/11049) Thanks to u/fairydreaming for all the work! I have updated the quants in my HF repo for the latest commit if anyone wants to test them. [https://huggingface.co/bullerwins/DeepSeek-V3-GGUF](https://huggingface.co/bullerwins/DeepSeek-V3-GGUF) Q4\_K\_M seems to perform really good, on one pass of MMLU-Pro computer science it got 77.32 vs the 77.80-78.05 done by u/WolframRavenwolf
2025-01-04T20:25:17
https://www.reddit.com/r/LocalLLaMA/comments/1htnhjw/deepseekv3_support_merged_in_llamacpp/
bullerwins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htnhjw
false
null
t3_1htnhjw
/r/LocalLLaMA/comments/1htnhjw/deepseekv3_support_merged_in_llamacpp/
false
false
self
257
{'enabled': False, 'images': [{'id': 'RdbWr5DLh7ZA_-36brMTqPE9On_rBGxFCxVBWnlti6g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EpFTzL53xnRHk_aLj7-7zEWAkIVjFPZWRiYMwOmtDvk.jpg?width=108&crop=smart&auto=webp&s=d159dc9c345e2066eb4bbe441c477370802f39e3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EpFTzL53xnRHk_aLj7-7zEWAkIVjFPZWRiYMwOmtDvk.jpg?width=216&crop=smart&auto=webp&s=6e33068af03513385eb8089ab613134e6565f297', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EpFTzL53xnRHk_aLj7-7zEWAkIVjFPZWRiYMwOmtDvk.jpg?width=320&crop=smart&auto=webp&s=6551eafe6a40c919c9b92713a01470e4078cbad9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EpFTzL53xnRHk_aLj7-7zEWAkIVjFPZWRiYMwOmtDvk.jpg?width=640&crop=smart&auto=webp&s=2de2408c2bebc7a5a13f24cb1874c9c912407292', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EpFTzL53xnRHk_aLj7-7zEWAkIVjFPZWRiYMwOmtDvk.jpg?width=960&crop=smart&auto=webp&s=574017ef2e3c5c323f9e961c32a2d0b4cc4bed0a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EpFTzL53xnRHk_aLj7-7zEWAkIVjFPZWRiYMwOmtDvk.jpg?width=1080&crop=smart&auto=webp&s=23863159e363b635c96311a714038d0c5f9544aa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EpFTzL53xnRHk_aLj7-7zEWAkIVjFPZWRiYMwOmtDvk.jpg?auto=webp&s=b19b7e928c966c5974f6bdb680c7bd0322590872', 'width': 1200}, 'variants': {}}]}
Why are there so many RTX4090 boards on eBay with the GPU die and VRAM removed? and 48GB RTX 4090s?
1
[removed]
2025-01-04T21:14:56
https://www.reddit.com/r/LocalLLaMA/comments/1htom36/why_are_there_so_many_rtx4090_boards_on_ebay_with/
Philix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htom36
false
null
t3_1htom36
/r/LocalLLaMA/comments/1htom36/why_are_there_so_many_rtx4090_boards_on_ebay_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'vL57tXbQSnpAjNdhKrM0FTLUNMEmuRpP3ATVmGB9eyw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LHkWl_VkJgCRA11Syl07lcXlc5oC-0ZjNEgwTTmbGnM.jpg?width=108&crop=smart&auto=webp&s=960d547090a597a9ac0c7c9bd4b819a4b713b7aa', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LHkWl_VkJgCRA11Syl07lcXlc5oC-0ZjNEgwTTmbGnM.jpg?width=216&crop=smart&auto=webp&s=85f5378fe6aebdced9d8658207c2084361443499', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LHkWl_VkJgCRA11Syl07lcXlc5oC-0ZjNEgwTTmbGnM.jpg?width=320&crop=smart&auto=webp&s=c9ce44d3fbe486e7e7e89db105db79f6b8a8fb4e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LHkWl_VkJgCRA11Syl07lcXlc5oC-0ZjNEgwTTmbGnM.jpg?width=640&crop=smart&auto=webp&s=309552b833882287b79a1c8f0cb6385eb21a5228', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LHkWl_VkJgCRA11Syl07lcXlc5oC-0ZjNEgwTTmbGnM.jpg?width=960&crop=smart&auto=webp&s=f45a9317e68e8616d2ffd8a356380e2ede919d87', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/LHkWl_VkJgCRA11Syl07lcXlc5oC-0ZjNEgwTTmbGnM.jpg?width=1080&crop=smart&auto=webp&s=4242e6afb0c25d33a4022f996b709519f1eb6b9c', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/LHkWl_VkJgCRA11Syl07lcXlc5oC-0ZjNEgwTTmbGnM.jpg?auto=webp&s=ffc41dc7ff37de69f92741143d73e303f1cf7984', 'width': 1200}, 'variants': {}}]}
How to Build Reliable Generative AI: Free Webinar on AI Observability
1
[removed]
2025-01-04T21:18:10
https://www.reddit.com/r/LocalLLaMA/comments/1htooqq/how_to_build_reliable_generative_ai_free_webinar/
kgorobinska
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htooqq
false
null
t3_1htooqq
/r/LocalLLaMA/comments/1htooqq/how_to_build_reliable_generative_ai_free_webinar/
false
false
https://b.thumbs.redditm…XxnEOmoY-bOA.jpg
1
{'enabled': False, 'images': [{'id': 'TNZXSOFQT4fKE5V_EfkZqJVXaXFIPs69jCxCWUGI5Xw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b5gvoi_-YigH6va2pg90tPNzdKR_8wBqp6a9wMUO_U4.jpg?width=108&crop=smart&auto=webp&s=196cb3fb22fec5a791f8c9a0143ed5935982d1a0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/b5gvoi_-YigH6va2pg90tPNzdKR_8wBqp6a9wMUO_U4.jpg?width=216&crop=smart&auto=webp&s=abfae0e16533110e50272a389a497d753fdf5547', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/b5gvoi_-YigH6va2pg90tPNzdKR_8wBqp6a9wMUO_U4.jpg?width=320&crop=smart&auto=webp&s=c28aee6170a2002b15d453b6f0f4338bd51a81df', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/b5gvoi_-YigH6va2pg90tPNzdKR_8wBqp6a9wMUO_U4.jpg?width=640&crop=smart&auto=webp&s=10b614922b444f312698c8ef3ba98adf72c9b8b3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/b5gvoi_-YigH6va2pg90tPNzdKR_8wBqp6a9wMUO_U4.jpg?width=960&crop=smart&auto=webp&s=41b8e9ab0153c99bd02eccb563ea3f28b7d31d33', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/b5gvoi_-YigH6va2pg90tPNzdKR_8wBqp6a9wMUO_U4.jpg?width=1080&crop=smart&auto=webp&s=be173f61ccc955338d70b7fe0f67a70c822fb314', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/b5gvoi_-YigH6va2pg90tPNzdKR_8wBqp6a9wMUO_U4.jpg?auto=webp&s=90170c63faeec323f13c9d63819bf5eb6217de99', 'width': 1280}, 'variants': {}}]}
Your Recommendations for Continue.dev and oLLaMA on M2 Macbooks
1
Hey everyone, I'd like to know which models you'd recommend for an M2 Max MacBook with 32GB RAM while keeping reasonable speeds in terms of t/s and output quality. I'd like to test out the continue.dev extension next week in my company and get the best results, so that we can provide this functionality to our devs ASAP. I'm currently on our Developer Experience team. We cannot use any online models and have to work offline for regulatory reasons, thus Ollama. I'd appreciate any recommendations!
2025-01-04T22:09:41
https://www.reddit.com/r/LocalLLaMA/comments/1htpu8t/your_recommendations_for_continuedev_and_ollama/
_fbsa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htpu8t
false
null
t3_1htpu8t
/r/LocalLLaMA/comments/1htpu8t/your_recommendations_for_continuedev_and_ollama/
false
false
self
1
null
Video analysis frame by frame with use of llama3.2-vision
58
2025-01-04T22:24:11
https://v.redd.it/6x45cluty1be1
oridnary_artist
v.redd.it
1970-01-01T00:00:00
0
{}
1htq5vp
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6x45cluty1be1/DASHPlaylist.mpd?a=1738621465%2COTIzMDA4NjJmMGI1ZmQ1NDg3MWRkYzVjZDBiZTcxYWVhNmU4MzZmMGUyYzY5YjNiNGUwODQwOTg3ODc4NjgzZA%3D%3D&v=1&f=sd', 'duration': 29, 'fallback_url': 'https://v.redd.it/6x45cluty1be1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 970, 'hls_url': 'https://v.redd.it/6x45cluty1be1/HLSPlaylist.m3u8?a=1738621465%2CMmZiNTY1ZTg3MDJkOTMyOTUxZjg1YmQyZmJmM2Q0ODQ0NWNjN2FjNjdkZTg3MGE2MTQ4NGZjZGI3MzcwZmZiZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6x45cluty1be1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1htq5vp
/r/LocalLLaMA/comments/1htq5vp/video_analysis_by_frame_by_frame_with_use_of/
false
false
https://external-preview…051a53315d65a1d9
58
{'enabled': False, 'images': [{'id': 'NDN5aGFjdnR5MWJlMS_Y56wUDUdVdILP0EWoCh4g7VBpSmdfeeqNUAMXCbal', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NDN5aGFjdnR5MWJlMS_Y56wUDUdVdILP0EWoCh4g7VBpSmdfeeqNUAMXCbal.png?width=108&crop=smart&format=pjpg&auto=webp&s=bc38bd790173b209dbff39011f0a513077397d63', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/NDN5aGFjdnR5MWJlMS_Y56wUDUdVdILP0EWoCh4g7VBpSmdfeeqNUAMXCbal.png?width=216&crop=smart&format=pjpg&auto=webp&s=d1ba4dd11b1a649fc57c11446aee1c20e847163b', 'width': 216}, {'height': 161, 'url': 'https://external-preview.redd.it/NDN5aGFjdnR5MWJlMS_Y56wUDUdVdILP0EWoCh4g7VBpSmdfeeqNUAMXCbal.png?width=320&crop=smart&format=pjpg&auto=webp&s=0f08d503f7999269ff56177275d494680a2f5b9f', 'width': 320}, {'height': 323, 'url': 'https://external-preview.redd.it/NDN5aGFjdnR5MWJlMS_Y56wUDUdVdILP0EWoCh4g7VBpSmdfeeqNUAMXCbal.png?width=640&crop=smart&format=pjpg&auto=webp&s=614022614621bea37004c27f299e6876565921ca', 'width': 640}, {'height': 484, 'url': 'https://external-preview.redd.it/NDN5aGFjdnR5MWJlMS_Y56wUDUdVdILP0EWoCh4g7VBpSmdfeeqNUAMXCbal.png?width=960&crop=smart&format=pjpg&auto=webp&s=40dd41ec732da6211e24805583877984492b4331', 'width': 960}, {'height': 545, 'url': 'https://external-preview.redd.it/NDN5aGFjdnR5MWJlMS_Y56wUDUdVdILP0EWoCh4g7VBpSmdfeeqNUAMXCbal.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fa972b8d94fb25e054a4dcc66f4102d125328562', 'width': 1080}], 'source': {'height': 1194, 'url': 'https://external-preview.redd.it/NDN5aGFjdnR5MWJlMS_Y56wUDUdVdILP0EWoCh4g7VBpSmdfeeqNUAMXCbal.png?format=pjpg&auto=webp&s=d658373fde773e1533639e76e4bf452ac1ed0fc3', 'width': 2364}, 'variants': {}}]}
Using voice inflections with TTS?
1
[removed]
2025-01-04T22:25:36
https://www.reddit.com/r/LocalLLaMA/comments/1htq6xk/using_voice_inflections_with_tts/
EpicFrogPoster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htq6xk
false
null
t3_1htq6xk
/r/LocalLLaMA/comments/1htq6xk/using_voice_inflections_with_tts/
false
false
self
1
null
What is the largest GPU home cluster running LLMs
0
Hi, I am interested in running very large models with multiple GPUs connected to one computer. I have seen someone who had 10 7900 XTXs connected to one consumer-level motherboard with risers. So far I have tried no more than 3, achieving 72GB of VRAM. The inference speed for 70B llama3.3 was quite good, so I was wondering: are there ~300GB models which could be run with 13 GPUs? I counted that I could attach 13 7900 XTXs to my consumer AM5 board with risers. Who here has GPU clusters made with risers, and what size? I am interested in how much the inference speed slows down when the model size grows (like 70B -> 300B) if the model is still in VRAM. I am not thinking of running anything with the CPU or normal RAM.
2025-01-04T22:53:41
https://www.reddit.com/r/LocalLLaMA/comments/1htqt4d/what_is_the_largest_gpu_home_cluster_running_llms/
badabimbadabum2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htqt4d
false
null
t3_1htqt4d
/r/LocalLLaMA/comments/1htqt4d/what_is_the_largest_gpu_home_cluster_running_llms/
false
false
self
0
null
Best small LLM for translating from English to Spanish, under 3B?
9
I'm looking to translate small text fragments from English to Spanish, like tweets or blog-type posts. I'm searching for small models, around 3B or smaller, so the task can be done quickly. I've been working with Llama3-3B, but its translations have many contextual errors, making it not very good for this task. Is anyone here working on something similar? How has your experience been? At some point I tried Granite for this task, but it's even worse.
2025-01-04T22:54:40
https://www.reddit.com/r/LocalLLaMA/comments/1htqtx5/best_small_llm_for_translate_from_english_to/
Empty-You9934
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htqtx5
false
null
t3_1htqtx5
/r/LocalLLaMA/comments/1htqtx5/best_small_llm_for_translate_from_english_to/
false
false
self
9
null
48GB RTX4090 mod by China
1
[removed]
2025-01-04T22:57:22
https://www.reddit.com/r/LocalLLaMA/comments/1htqw09/48gb_rtx4090_mod_by_china/
TruckUseful4423
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htqw09
false
null
t3_1htqw09
/r/LocalLLaMA/comments/1htqw09/48gb_rtx4090_mod_by_china/
false
false
https://a.thumbs.redditm…PLba4PZJSsV0.jpg
1
null
I Built an Offline AI Assistant - Chat with Your Private Data Securely! 🔒📱
1
[removed]
2025-01-04T23:08:29
https://www.reddit.com/r/LocalLLaMA/comments/1htr4vp/i_built_an_offline_ai_assistant_chat_with_your/
claritiai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htr4vp
false
null
t3_1htr4vp
/r/LocalLLaMA/comments/1htr4vp/i_built_an_offline_ai_assistant_chat_with_your/
false
false
self
1
{'enabled': False, 'images': [{'id': 'G5UpLn_5uRmuCwAYzuXxbW7c3WaZjenN9WRZ4R3Vkms', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PY1qy83d8jP5mmKT0JM8wOIePw9fhmsU5mr13a9vqXg.jpg?width=108&crop=smart&auto=webp&s=6d74ccc435765d69dd274c51d9dda480e2049bbf', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/PY1qy83d8jP5mmKT0JM8wOIePw9fhmsU5mr13a9vqXg.jpg?width=216&crop=smart&auto=webp&s=9c7688c89986027c2cf7e3a39af338c7904b171d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/PY1qy83d8jP5mmKT0JM8wOIePw9fhmsU5mr13a9vqXg.jpg?width=320&crop=smart&auto=webp&s=bfec5219cb0bd63bdc5ca6d971e884b265c99652', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/PY1qy83d8jP5mmKT0JM8wOIePw9fhmsU5mr13a9vqXg.jpg?width=640&crop=smart&auto=webp&s=e52670082168907b115bf44aff8f5ab5c9d232e3', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/PY1qy83d8jP5mmKT0JM8wOIePw9fhmsU5mr13a9vqXg.jpg?width=960&crop=smart&auto=webp&s=3dc227c90c23700f43fad5317f32a945c1838161', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/PY1qy83d8jP5mmKT0JM8wOIePw9fhmsU5mr13a9vqXg.jpg?width=1080&crop=smart&auto=webp&s=a5cd94b38013480be883f20332e17ecdcf06a94b', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/PY1qy83d8jP5mmKT0JM8wOIePw9fhmsU5mr13a9vqXg.jpg?auto=webp&s=5a2c972f3418c1206000828c17fc8184e2b6d24c', 'width': 1200}, 'variants': {}}]}
Some Local LLMs don't get detected by tools like ZeroGPT
0
ChatGPT and Claude get detected instantly. Llama3 is 50/50. Qwen is detected pretty often. Mistral Small 22B, though, very rarely gets detected. I'm curious which other ones make it through regularly?
2025-01-04T23:20:05
https://www.reddit.com/r/LocalLLaMA/comments/1htre35/some_local_llms_dont_get_detected_by_tools_like/
ForsookComparison
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htre35
false
null
t3_1htre35
/r/LocalLLaMA/comments/1htre35/some_local_llms_dont_get_detected_by_tools_like/
false
false
self
0
null
5080 listed for 1,699.95 euros in Spain.
129
As reported by someone on Twitter, it's been listed in Spain for 1,699.95 euros. Taking into account the 21% VAT and converting back to USD, that's $1,384. https://x.com/GawroskiT/status/1874834447046168734
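A quick check of the arithmetic; note the EUR to USD rate here is an assumption back-solved from the quoted $1,384 figure, not a market quote:

```python
# Back-of-the-envelope check of the quoted conversion.
price_inc_vat_eur = 1699.95
ex_vat_eur = price_inc_vat_eur / 1.21        # strip Spain's 21% VAT
usd_per_eur = 0.985                          # assumed rate implied by the post
print(round(ex_vat_eur, 2), round(ex_vat_eur * usd_per_eur))  # 1404.92 1384
```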
2025-01-04T23:20:52
https://www.reddit.com/r/LocalLLaMA/comments/1htreq1/5080_listed_for_169995_euros_in_spain/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htreq1
false
null
t3_1htreq1
/r/LocalLLaMA/comments/1htreq1/5080_listed_for_169995_euros_in_spain/
false
false
self
129
{'enabled': False, 'images': [{'id': '9-VPQwwbmjavbQ0wX9pB3OF8NVUlx_BJ5WRPDZvrMM0', 'resolutions': [{'height': 143, 'url': 'https://external-preview.redd.it/U2fEBUlllPp17qaj-SqYtvXFo5pmwK6G_5iixNRN59Q.jpg?width=108&crop=smart&auto=webp&s=6a4dd81e3c0eace9aafa025773f885d39a9b7ce3', 'width': 108}, {'height': 287, 'url': 'https://external-preview.redd.it/U2fEBUlllPp17qaj-SqYtvXFo5pmwK6G_5iixNRN59Q.jpg?width=216&crop=smart&auto=webp&s=8b6f195f6a67cc19b7a65349ba98037e90aec610', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/U2fEBUlllPp17qaj-SqYtvXFo5pmwK6G_5iixNRN59Q.jpg?width=320&crop=smart&auto=webp&s=fea1b7183c5140d422c126728fac39cfba42fa04', 'width': 320}], 'source': {'height': 597, 'url': 'https://external-preview.redd.it/U2fEBUlllPp17qaj-SqYtvXFo5pmwK6G_5iixNRN59Q.jpg?auto=webp&s=8770451eb1e6fc4f6794baca525dec9880c0ebc7', 'width': 448}, 'variants': {}}]}
Llama3 Inference Engine - CUDA C (Repost)
5
Reposting because the old one got taken down for some odd reason: Hey r/LocalLLaMA, recently I took inspiration from llama.cpp, ollama, and many other similar tools that enable inference of LLMs locally, and I just finished building a Llama inference engine for the 8B model in CUDA C. I recently wanted to explore my newfound interest in CUDA programming and my passion for machine learning. This project only makes use of the native CUDA runtime API and cuda_fp16. The inference takes place in fp16, so it requires around 17-18GB of VRAM (~16GB for model params and some more for intermediary caches). It doesn't use cuBLAS or any similar libraries since I wanted to be exposed to the least amount of abstraction. Hence, it isn't as optimized as a cuBLAS implementation or other inference engines like the ones that inspired the project. ## **A brief overview of the implementation** I used CUDA C. It reads a .safetensors file of the model that you can pull from HuggingFace. The actual kernels are fairly straightforward for normalizations, skip connections, RoPE, and activation functions (SiLU). For GEMM, I got as far as implementing tiled matrix multiplication with vectorized retrieval for each thread. The GEMM kernel is also written in such a way that the second matrix is not required to be pre-transposed while still achieving coalesced memory access to HBM. Feel free to have a look at the project repo and try it out if you're interested. If you like what you see, feel free to star the repo too! I highly appreciate any feedback, good or constructive. GitHub repo: https://github.com/abhisheknair10/Llama3.cu
2025-01-04T23:24:34
https://www.reddit.com/r/LocalLLaMA/comments/1htrhnv/llama3_inference_engine_cuda_c_repost/
Delicious-Ad-3552
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htrhnv
false
null
t3_1htrhnv
/r/LocalLLaMA/comments/1htrhnv/llama3_inference_engine_cuda_c_repost/
false
false
self
5
{'enabled': False, 'images': [{'id': 'AySI5F2JVuRHKmLTCn65YuPMwA0_90p3Elz7CDqXx6I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?width=108&crop=smart&auto=webp&s=e26b46c25783a4c4f567af595da97dd2361294f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?width=216&crop=smart&auto=webp&s=380d9a4b23750d136747ada3a164d7978c2260c7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?width=320&crop=smart&auto=webp&s=34ebeabc5773f054d4a542ce784ff7bda8514f72', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?width=640&crop=smart&auto=webp&s=75ac3dcd65c3f13785cd6005ffa839326def1e92', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?width=960&crop=smart&auto=webp&s=1c0b698940e40f36e29aa799562d97504e121492', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?width=1080&crop=smart&auto=webp&s=87e7c2ffa085e2adc57fcd6506bf9734c945451c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7L9Kgnk8QtfnB0xsZeREX_tT_QSIiC7s8FBC3VJYIA4.jpg?auto=webp&s=03607fae201d3a9ab181347c9f93aa4e3dfa4186', 'width': 1200}, 'variants': {}}]}
browser use with an app
40
2025-01-04T23:30:35
https://v.redd.it/p47n5i9oa2be1
Illustrious_Row_9971
v.redd.it
1970-01-01T00:00:00
0
{}
1htrmgr
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/p47n5i9oa2be1/DASHPlaylist.mpd?a=1738625449%2CNTZlNTk0YTQ1MDYxZmY4ODczMDgzMGUwODUwN2FlODcxMGU1NzMzZmRlOWMwNDQ1YWRlMzUwYzZhODU4ZTNjZQ%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/p47n5i9oa2be1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/p47n5i9oa2be1/HLSPlaylist.m3u8?a=1738625449%2CNGM4MDM5NGE1OGRmNzM4ZjU1YjIzNmU5ZGZmZTc4NTc2NjMyMzkyOGY5YTRkMDAxZTFjZWYxOWE1OTlhZmQxMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/p47n5i9oa2be1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1htrmgr
/r/LocalLLaMA/comments/1htrmgr/browser_use_with_an_app/
false
false
https://external-preview…be56031e72c5fe0b
40
{'enabled': False, 'images': [{'id': 'NHZ4aGNqOW9hMmJlMTnaqY6JlgBVMgEBzE4pBYzb8pil-ub_e6bXC9CqoZKO', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NHZ4aGNqOW9hMmJlMTnaqY6JlgBVMgEBzE4pBYzb8pil-ub_e6bXC9CqoZKO.png?width=108&crop=smart&format=pjpg&auto=webp&s=30d54e2f60308631403d5ad7193e736a3bdd9263', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NHZ4aGNqOW9hMmJlMTnaqY6JlgBVMgEBzE4pBYzb8pil-ub_e6bXC9CqoZKO.png?width=216&crop=smart&format=pjpg&auto=webp&s=7717a7e3802fd6fe1916528cce6215de0c9ec44b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NHZ4aGNqOW9hMmJlMTnaqY6JlgBVMgEBzE4pBYzb8pil-ub_e6bXC9CqoZKO.png?width=320&crop=smart&format=pjpg&auto=webp&s=f666104ea20d8769ecc0a08b8d8e1cf9bb626b04', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NHZ4aGNqOW9hMmJlMTnaqY6JlgBVMgEBzE4pBYzb8pil-ub_e6bXC9CqoZKO.png?width=640&crop=smart&format=pjpg&auto=webp&s=6cbb3d0b26df240acb643ef690404305fbe4519c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NHZ4aGNqOW9hMmJlMTnaqY6JlgBVMgEBzE4pBYzb8pil-ub_e6bXC9CqoZKO.png?width=960&crop=smart&format=pjpg&auto=webp&s=473ab36813935c354c1738393cb089adb70d0296', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NHZ4aGNqOW9hMmJlMTnaqY6JlgBVMgEBzE4pBYzb8pil-ub_e6bXC9CqoZKO.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4bf99f8f455137b9408074d8738aa1c8818d4d76', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NHZ4aGNqOW9hMmJlMTnaqY6JlgBVMgEBzE4pBYzb8pil-ub_e6bXC9CqoZKO.png?format=pjpg&auto=webp&s=b87800337eaad60273fc9959e190fd99b975d466', 'width': 1920}, 'variants': {}}]}
best local LLM for coding/IaC
0
Has anyone used a local LLM for IaC tools like Terraform? Which local LLM, and how helpful was it?
2025-01-04T23:43:27
https://www.reddit.com/r/LocalLLaMA/comments/1htrwev/best_local_llm_for_codingiac/
Yaboyazz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htrwev
false
null
t3_1htrwev
/r/LocalLLaMA/comments/1htrwev/best_local_llm_for_codingiac/
false
false
self
0
null
Random Clicking Noises in XTTS-V2 Finetune
1
[removed]
2025-01-05T00:14:49
https://www.reddit.com/r/LocalLLaMA/comments/1htsla7/random_clicking_noises_in_xttsv2_finetune/
dwangwade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htsla7
false
null
t3_1htsla7
/r/LocalLLaMA/comments/1htsla7/random_clicking_noises_in_xttsv2_finetune/
false
false
self
1
null
Giving a llama3.1 chatbot context
1
I've built my first simple chatbot (really an interview bot) and given it some context: a set of techniques for asking questions and responding. It has clearly learned the context, because that's all it wants to talk about. From the very beginning, it just asks questions about the context I've given it. Is there something I'm missing about how to provide context? Is it a case where 'less is more'?
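One thing worth checking: if the context is being sent as ordinary conversation turns, the model will treat it as the topic. A sketch of keeping it in a single system message instead (the `generate` callable is a placeholder for your llama3.1 chat call):

```python
# Sketch: keep instructions in ONE system message instead of feeding them in
# as conversation turns, so the model follows them rather than discusses them.
system_prompt = (
    "You are an interviewer. Ask one open-ended question at a time, "
    "follow up on the candidate's answers, and never discuss these rules."
)

messages = [{"role": "system", "content": system_prompt}]

def ask(user_text, generate):
    messages.append({"role": "user", "content": user_text})
    reply = generate(messages)  # placeholder for the llama3.1 chat API call
    messages.append({"role": "assistant", "content": reply})
    return reply
```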
2025-01-05T00:22:54
https://www.reddit.com/r/LocalLLaMA/comments/1htsri4/giving_a_llama31_chatbot_context/
gorobotkillkill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htsri4
false
null
t3_1htsri4
/r/LocalLLaMA/comments/1htsri4/giving_a_llama31_chatbot_context/
false
false
self
1
null
Running Llama3.3:70b on pure CPU without GPU
1
[removed]
2025-01-05T00:26:03
https://www.reddit.com/r/LocalLLaMA/comments/1htstza/running_llama3370b_on_pure_cpu_without_gpu/
BadBoy-8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htstza
false
null
t3_1htstza
/r/LocalLLaMA/comments/1htstza/running_llama3370b_on_pure_cpu_without_gpu/
false
false
self
1
null
P40 vs 3090 vs mac mini cluster?
7
Hello all. I am interested in running the Llama 3.3 70B model in order to rid myself of paying for ChatGPT and Claude. I already own a single 3090, and I know a dual 3090 setup is popular for this model. However, for the price of a 3090 on eBay (~800 bucks), I can buy 3 P40s and have money left over for a CPU and motherboard. There is also always the option of going with a few Mac Minis and soldering in larger RAM chips. Not ideal, but possible. What are your thoughts?
2025-01-05T00:28:07
https://www.reddit.com/r/LocalLLaMA/comments/1htsvmz/p40_vs_3090_vs_mac_mini_cluster/
Striking_Luck5201
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1htsvmz
false
null
t3_1htsvmz
/r/LocalLLaMA/comments/1htsvmz/p40_vs_3090_vs_mac_mini_cluster/
false
false
self
7
null
Response of flagship LLMs to the question "Who are you, Claude?" - All LLMs want to impersonate Claude.
2
https://preview.redd.it/…a79a379fcba589
2025-01-05T00:52:30
https://www.reddit.com/r/LocalLLaMA/comments/1httdoc/response_of_flagships_llms_to_the_question_who/
cpldcpu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1httdoc
false
null
t3_1httdoc
/r/LocalLLaMA/comments/1httdoc/response_of_flagships_llms_to_the_question_who/
false
false
https://b.thumbs.redditm…iinBVw_Jk0EQ.jpg
2
null