Dataset schema (column: type, observed range):
- title: string, length 1 to 300
- score: int64, 0 to 8.54k
- selftext: string, length 0 to 40k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
- url: string, length 0 to 878
- author: string, length 3 to 20
- domain: string, length 0 to 82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
- gilded: int64, 0 to 2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646 to 1.8k
- name: string, length 10
- permalink: string, length 33 to 82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4 to 213
- ups: int64, 0 to 8.54k
- preview: string, length 301 to 5.01k
AgentZero - generate fullstack apps
6
https://preview.redd.it/…rdel/agent-zero)
2024-12-11T04:42:04
https://www.reddit.com/r/LocalLLaMA/comments/1hbloyu/agentzero_generate_fullstack_apps/
TheLogiqueViper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbloyu
false
null
t3_1hbloyu
/r/LocalLLaMA/comments/1hbloyu/agentzero_generate_fullstack_apps/
false
false
https://b.thumbs.redditm…AEaPJM89_iTA.jpg
6
{'enabled': False, 'images': [{'id': 'UgQQjjcnEesbZNWeIY0IFDRqAHbmXZjAml8TVX4Wj2w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gZwt8U3GXVaA_-uzXTHQlXdalokepYaQIZ2V91f1BT4.jpg?width=108&crop=smart&auto=webp&s=00539e309b2b61272b940e22b2c3d252076ff54e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gZwt8U3GXVaA_-uzXTHQlXdalokepYaQIZ2V91f1BT4.jpg?width=216&crop=smart&auto=webp&s=c25949dd7a96d1b941c752a5e9a25612cb2456cf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gZwt8U3GXVaA_-uzXTHQlXdalokepYaQIZ2V91f1BT4.jpg?width=320&crop=smart&auto=webp&s=b28fe7155ac98f844b882dbcb4e73a16e95023ea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gZwt8U3GXVaA_-uzXTHQlXdalokepYaQIZ2V91f1BT4.jpg?width=640&crop=smart&auto=webp&s=6e7cfe8a0b7120adb3309e68b8f309fe09ffe753', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gZwt8U3GXVaA_-uzXTHQlXdalokepYaQIZ2V91f1BT4.jpg?width=960&crop=smart&auto=webp&s=224a6aabb15eaa4612cb563a80df21d7f60f66a9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gZwt8U3GXVaA_-uzXTHQlXdalokepYaQIZ2V91f1BT4.jpg?width=1080&crop=smart&auto=webp&s=86f2f1fc46a17263790813ce217697e146d45ea4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gZwt8U3GXVaA_-uzXTHQlXdalokepYaQIZ2V91f1BT4.jpg?auto=webp&s=b2dc4f21c97604e592cde88fa740d0ae0f0dd534', 'width': 1200}, 'variants': {}}]}
Speculative Decoding for QwQ-32B Preview can be done with Qwen-2.5 Coder 7B!
76
I looked on Hugging Face at the config.json spec files for both the QwQ-32B and Qwen 2.5 Coder 7B models and saw that the vocab sizes matched, so Qwen Coder 7B could theoretically be used as a draft model to enable speculative decoding for QwQ.

On my lowly 16 GB VRAM system this did not yield performance gains (in "normal" mode I was only able to offload 26/65 QwQ layers to GPU, and in "speculative" mode I had to balance GPU offloading between just 11 QwQ layers and all 29 Qwen Coder layers), but I am certain that on larger-VRAM GPUs (e.g. 24 GB) *significant* performance gains can be achieved with this method.

The most interesting result was in terms of style, though. Plain-vanilla QwQ seemed a little more meandering and self-doubting in its reasoning, producing the answer in 4527 characters. QwQ with Qwen Coder as a draft model used slightly more characters, 4763 (and more time, in my case), to produce the answer, but its reasoning seemed (subjectively, to me) much more self-confident and logical.

I'm enclosing a linked PDF with my llama.cpp commands and outputs from each test for y'all to peruse. I encourage folks here to experiment with Qwen 2.5 Coder 7B as a draft model for QwQ-32B and let the community know your results in terms of performance in tokens/second, style, and how "confident" and "logical" the reasoning seems. Perhaps we're onto something here and Qwen Coder gives QwQ less "self-doubt" and more structured thinking. Enjoy!
2024-12-11T05:10:58
https://www.reddit.com/r/LocalLLaMA/comments/1hbm7e3/speculative_decoding_for_qwq32b_preview_can_be/
Longjumping-City-461
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbm7e3
false
null
t3_1hbm7e3
/r/LocalLLaMA/comments/1hbm7e3/speculative_decoding_for_qwq32b_preview_can_be/
false
false
self
76
null
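The vocab-size check described in the post above is easy to reproduce. The following is a minimal sketch (not the poster's actual workflow) using Hugging Face transformers' AutoConfig; the model IDs Qwen/QwQ-32B-Preview and Qwen/Qwen2.5-Coder-7B-Instruct are assumptions here, and the script only downloads the small config.json files, not the weights.

```python
# Sketch: check whether a candidate draft model shares the target model's
# vocabulary size before attempting speculative decoding.
# Requires the `transformers` package and access to the Hugging Face Hub.
from transformers import AutoConfig

TARGET_ID = "Qwen/QwQ-32B-Preview"           # main (target) model
DRAFT_ID = "Qwen/Qwen2.5-Coder-7B-Instruct"  # candidate draft model (assumed ID)

target_cfg = AutoConfig.from_pretrained(TARGET_ID)
draft_cfg = AutoConfig.from_pretrained(DRAFT_ID)

print("target vocab_size:", target_cfg.vocab_size)
print("draft  vocab_size:", draft_cfg.vocab_size)

if target_cfg.vocab_size == draft_cfg.vocab_size:
    print("Vocab sizes match: the smaller model is a plausible draft candidate.")
else:
    print("Vocab sizes differ: speculative decoding will likely be rejected.")
```

A matching vocabulary is necessary but not sufficient; the draft model also has to propose tokens the target model accepts often enough to pay for the extra forward passes, which is what the poster's throughput numbers probe.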
[Request] How to use nvidia-nemo-canary-model?
1
[removed]
2024-12-11T05:16:12
https://www.reddit.com/r/LocalLLaMA/comments/1hbmagt/request_howto_use_nvidianemocanarymodel/
Distinct_Morning_957
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbmagt
false
null
t3_1hbmagt
/r/LocalLLaMA/comments/1hbmagt/request_howto_use_nvidianemocanarymodel/
false
false
self
1
{'enabled': False, 'images': [{'id': '541NFWsWZgCa4WxQdrqICfv-rTxV3Sxeu6eIidPjqX0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/71Wjc7ceqq4NnTXAE_RHhTt5P12g91BWfomJ5mc0Y90.jpg?width=108&crop=smart&auto=webp&s=6d50f581467540928b0023e6a29129c059a37854', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/71Wjc7ceqq4NnTXAE_RHhTt5P12g91BWfomJ5mc0Y90.jpg?width=216&crop=smart&auto=webp&s=f23e97aecd8253b205ae3775212487b44b4ece9f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/71Wjc7ceqq4NnTXAE_RHhTt5P12g91BWfomJ5mc0Y90.jpg?width=320&crop=smart&auto=webp&s=a6e0391cdeb67f7ffc7a655f8ce74e6773c633a9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/71Wjc7ceqq4NnTXAE_RHhTt5P12g91BWfomJ5mc0Y90.jpg?width=640&crop=smart&auto=webp&s=4dfa0aca83e15457c4a0cb8fa72194327f98f430', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/71Wjc7ceqq4NnTXAE_RHhTt5P12g91BWfomJ5mc0Y90.jpg?width=960&crop=smart&auto=webp&s=a96b81fefe0168c75dbd58a17bf5bc53f72519d2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/71Wjc7ceqq4NnTXAE_RHhTt5P12g91BWfomJ5mc0Y90.jpg?width=1080&crop=smart&auto=webp&s=85ceff130794b95b41da2be39e56a967e1a70000', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/71Wjc7ceqq4NnTXAE_RHhTt5P12g91BWfomJ5mc0Y90.jpg?auto=webp&s=408db1f01ce6798b4c2cffb9d67684e92b2db356', 'width': 1200}, 'variants': {}}]}
Will hardware requirements for running LLMs decrease with time?
7
Articles that suggest hardware requirements for local LLMs almost never mention GPUs with less than 8 GB of VRAM, and even then that's supposed to be for just the smaller models. Is there any hope that this might change in the future? I was hoping that over time newer models would be more optimized and would run on low-end machines, but every new model is bigger than the previous ones and needs very expensive components to run.

I only have 6 GB of VRAM and 16 GB of RAM, and they're in a laptop to boot. I know it's not the worst and it's still possible to run LLMs on this, but I don't know if they'll actually be any good. I ran Stable Diffusion on my laptop but gave up soon because of how slow everything was. So, is it possible this situation will improve?
2024-12-11T05:16:53
https://www.reddit.com/r/LocalLLaMA/comments/1hbmav1/will_hardware_requirements_for_running_llms/
Not-Apple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbmav1
false
null
t3_1hbmav1
/r/LocalLLaMA/comments/1hbmav1/will_hardware_requirements_for_running_llms/
false
false
self
7
null
On-Prem vs Cloud for AI Model Training
1
[removed]
2024-12-11T05:33:28
https://www.reddit.com/r/LocalLLaMA/comments/1hbmkgp/onprem_vs_cloud_for_ai_model_training/
Secu-Thibz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbmkgp
false
null
t3_1hbmkgp
/r/LocalLLaMA/comments/1hbmkgp/onprem_vs_cloud_for_ai_model_training/
false
false
self
1
null
How do I generate a background image for my product?
1
[removed]
2024-12-11T06:07:40
https://www.reddit.com/r/LocalLLaMA/comments/1hbn3gn/how_do_i_generate_background_image_for_my_product/
Admirable-Charge7821
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbn3gn
false
null
t3_1hbn3gn
/r/LocalLLaMA/comments/1hbn3gn/how_do_i_generate_background_image_for_my_product/
false
false
self
1
null
Sudden drop of LLM performance
0
Hello. I have a desktop: Ryzen 5950X, 64 GB DDR4 RAM, RTX 3070 8 GB, NVMe SSD. After I reboot the system I launch WizardLM-2-8x22B (IQ2_XXS quantization) via koboldcpp with these options: flash attention, KV cache quantized to 4 bit, 16 threads, 9 layers offloaded to the GPU. It runs at 2-2.5 tokens/sec depending on the actual context length. I use it for roleplay.

But suddenly, during a roleplay session, performance can drop to 1 token/sec or even lower; the lowest I noticed was 0.34 tokens/sec. That is a huge difference between 2 t/s and 0.34 t/s. I don't use swap, and I can't see any other process consuming CPU, so there seems to be no obvious reason for such a slowdown. The same phenomenon occurs with another model, Midnight Miqu 70B, so I think the issue is not specific to WizardLM.

Does anyone have an idea of what I should check to find the root cause and fix the issue?
2024-12-11T06:10:58
https://www.reddit.com/r/LocalLLaMA/comments/1hbn56h/sudden_drop_of_llm_performance/
Star-Guardian-RU
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbn56h
false
null
t3_1hbn56h
/r/LocalLLaMA/comments/1hbn56h/sudden_drop_of_llm_performance/
false
false
self
0
null
QTIP 2, 3, and 4 bit Llama 3.3 70B Instruct now on HF
23
Models: [https://huggingface.co/collections/relaxml/qtip-quantized-models-66fa253ad3186746f4b62803](https://huggingface.co/collections/relaxml/qtip-quantized-models-66fa253ad3186746f4b62803)

Code to run: [https://github.com/Cornell-RelaxML/qtip](https://github.com/Cornell-RelaxML/qtip)

Almost no zero-shot degradation at any bitrate. Slight MMLU 5-shot degradation at 2 bits (78 -> 73), essentially lossless at 3 and 4 bits. 2-bit (fits on a 4090) generation quality seems pretty good: https://reddit.com/link/1hbng5l/video/ujxy6ddyy56e1/player
2024-12-11T06:31:43
https://www.reddit.com/r/LocalLLaMA/comments/1hbng5l/qtip_2_3_and_4_bit_llama_33_70b_instruct_now_on_hf/
tsengalb99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbng5l
false
null
t3_1hbng5l
/r/LocalLLaMA/comments/1hbng5l/qtip_2_3_and_4_bit_llama_33_70b_instruct_now_on_hf/
false
false
https://b.thumbs.redditm…pypgo_KeNrsU.jpg
23
{'enabled': False, 'images': [{'id': '-7GPo1xR8sEPsrpagX5szLwjWrPXxkYmC9R47AC0pi0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6CWlQHsfdXbqe-obfUNSUMfLix1N6xxHp89D3IuLKxE.jpg?width=108&crop=smart&auto=webp&s=b91912d640d16e445e6ddb20000663587de000b8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6CWlQHsfdXbqe-obfUNSUMfLix1N6xxHp89D3IuLKxE.jpg?width=216&crop=smart&auto=webp&s=429e874c6bee25584531ef64b209298d2ba3ac57', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6CWlQHsfdXbqe-obfUNSUMfLix1N6xxHp89D3IuLKxE.jpg?width=320&crop=smart&auto=webp&s=be9be490f0a50552e39158b1a168240e7928d2f1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6CWlQHsfdXbqe-obfUNSUMfLix1N6xxHp89D3IuLKxE.jpg?width=640&crop=smart&auto=webp&s=db65fa574e5f77958e5ed1a3d4b19ae349ddd276', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6CWlQHsfdXbqe-obfUNSUMfLix1N6xxHp89D3IuLKxE.jpg?width=960&crop=smart&auto=webp&s=448d4fc76859d4a7235ad7794082244025c0ba68', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6CWlQHsfdXbqe-obfUNSUMfLix1N6xxHp89D3IuLKxE.jpg?width=1080&crop=smart&auto=webp&s=5be9ef4710dfce25bf71a927cbaf6af1ae6de907', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6CWlQHsfdXbqe-obfUNSUMfLix1N6xxHp89D3IuLKxE.jpg?auto=webp&s=d56ebc13cc60c73c84d1fb31e9c1a40a704149a4', 'width': 1200}, 'variants': {}}]}
🤖 Fine-Tuning LLaMA 3.2 for Positive Conversations: Should 'Bad' Examples Be Included? 🤔✨
1
[removed]
2024-12-11T07:12:20
https://www.reddit.com/r/LocalLLaMA/comments/1hbo145/finetuning_llama_32_for_positive_conversations/
IndependenceOk281
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbo145
false
null
t3_1hbo145
/r/LocalLLaMA/comments/1hbo145/finetuning_llama_32_for_positive_conversations/
false
false
self
1
null
Fine-Tuning LLaMA 3.2 for Positive Conversations: Should 'Bad' Examples Be Included?
1
[removed]
2024-12-11T07:13:44
https://www.reddit.com/r/LocalLLaMA/comments/1hbo1rq/finetuning_llama_32_for_positive_conversations/
IndependenceOk281
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbo1rq
false
null
t3_1hbo1rq
/r/LocalLLaMA/comments/1hbo1rq/finetuning_llama_32_for_positive_conversations/
false
false
self
1
null
PocketPal AI Updates: Edit Messages, Regenerate, and UI Enhancements!
87
Just wanted to share a few updates on PocketPal AI (1.6.0) with you folks:

* **Edit** previous messages
* Easily switch models and **regenerate** responses on the spot
* Got some improvements regarding UI, specifically for the model card

While we’re on the topic, why not give the new kid, **EXAONE 3.5**, a spin?

As always, you can download the app here:

→ **Google Play Store**: [https://play.google.com/store/apps/details?id=com.pocketpalai&hl=en](https://play.google.com/store/apps/details?id=com.pocketpalai&hl=en)

→ **App Store**: [https://apps.apple.com/de/app/pocketpal-ai/id6502579498](https://apps.apple.com/de/app/pocketpal-ai/id6502579498)

and leave your feedback here:

→ **Source Code**: [https://github.com/a-ghorbani/pocketpal-ai](https://github.com/a-ghorbani/pocketpal-ai)

Give it a try and let me know what you think! :)

https://preview.redd.it/c9hatimh036e1.png?width=1170&format=png&auto=webp&s=22646953b767b9e05c5f5f2a41939fcf4fdbbd0e

https://preview.redd.it/m66wo0s5v26e1.png?width=1179&format=png&auto=webp&s=2844f993c73fa3eb70ba118e03b95608f0538b59

https://reddit.com/link/1hbo2nz/video/akitzigj036e1/player
2024-12-11T07:15:35
https://www.reddit.com/r/LocalLLaMA/comments/1hbo2nz/pocketpal_ai_updates_edit_messages_regenerate_and/
Ill-Still-6859
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbo2nz
false
null
t3_1hbo2nz
/r/LocalLLaMA/comments/1hbo2nz/pocketpal_ai_updates_edit_messages_regenerate_and/
false
false
https://b.thumbs.redditm…-bQNJe4KrOJI.jpg
87
{'enabled': False, 'images': [{'id': 'cndRW6QCJG_he3FSQdAk6GKmH0k830rmqID0DT6ULIg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/r_VdnVzmSzrYl13LSpmCYnIbeIEpvfB_WvJx5SrBo8E.jpg?width=108&crop=smart&auto=webp&s=ff67678f8728dd5b0425512e5ca6e50cba983d1b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/r_VdnVzmSzrYl13LSpmCYnIbeIEpvfB_WvJx5SrBo8E.jpg?width=216&crop=smart&auto=webp&s=0a1c0b42ea72559753429f6e3d9b11dd58e2d8c9', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/r_VdnVzmSzrYl13LSpmCYnIbeIEpvfB_WvJx5SrBo8E.jpg?width=320&crop=smart&auto=webp&s=532a838719cf510149e84e0217af79a892fcdec6', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/r_VdnVzmSzrYl13LSpmCYnIbeIEpvfB_WvJx5SrBo8E.jpg?auto=webp&s=f6b130445e14d2479dc341f75f12e61e5928d40e', 'width': 512}, 'variants': {}}]}
CPU Only local chatbot
1
[removed]
2024-12-11T08:23:42
https://www.reddit.com/r/LocalLLaMA/comments/1hboyzf/cpu_only_local_chatbot/
Emergency-Shop-4285
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hboyzf
false
null
t3_1hboyzf
/r/LocalLLaMA/comments/1hboyzf/cpu_only_local_chatbot/
false
false
self
1
null
NobodyWho: Integrating Local LLMs in Game Development
1
[removed]
2024-12-11T09:11:59
https://www.reddit.com/r/LocalLLaMA/comments/1hbpkm7/nobodywho_integrating_local_llms_in_game/
nobodywho-ooo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbpkm7
false
null
t3_1hbpkm7
/r/LocalLLaMA/comments/1hbpkm7/nobodywho_integrating_local_llms_in_game/
false
false
self
1
{'enabled': False, 'images': [{'id': '8iBLvrhbFQlkeq09APGIsHufBRZMQObLdakVCz6TEY4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5dn5QL-gAClk_ld8JlztC68atVGRHCvzXccJp9LDFlc.jpg?width=108&crop=smart&auto=webp&s=7c495dadfeae3bf5f2a646cb4eebd2858d72cd8d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5dn5QL-gAClk_ld8JlztC68atVGRHCvzXccJp9LDFlc.jpg?width=216&crop=smart&auto=webp&s=8c15975f6a4338a01a9d6bd2677180685b0ee662', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5dn5QL-gAClk_ld8JlztC68atVGRHCvzXccJp9LDFlc.jpg?width=320&crop=smart&auto=webp&s=c0c7801339d9d8b9ad47df59cb9f7e5dad186c8b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5dn5QL-gAClk_ld8JlztC68atVGRHCvzXccJp9LDFlc.jpg?width=640&crop=smart&auto=webp&s=cbc1dfd3cee172b2a5ce2590d8412758fac747d5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5dn5QL-gAClk_ld8JlztC68atVGRHCvzXccJp9LDFlc.jpg?width=960&crop=smart&auto=webp&s=de37e3919ae31107500cee992e0ff306df0dd1cd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5dn5QL-gAClk_ld8JlztC68atVGRHCvzXccJp9LDFlc.jpg?width=1080&crop=smart&auto=webp&s=aa497b04880cb48ef1874480b01e516020c13e13', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5dn5QL-gAClk_ld8JlztC68atVGRHCvzXccJp9LDFlc.jpg?auto=webp&s=b7d8b0821ca432f41bbc6437854e39971306a1b5', 'width': 1200}, 'variants': {}}]}
A good starting point for finding a powerful local GPT lab at the best cost
1
[removed]
2024-12-11T09:13:01
https://www.reddit.com/r/LocalLLaMA/comments/1hbpl22/a_good_start_to_find_a_powerful_local_gpt_lab/
Local-Chip365
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbpl22
false
null
t3_1hbpl22
/r/LocalLLaMA/comments/1hbpl22/a_good_start_to_find_a_powerful_local_gpt_lab/
false
false
https://b.thumbs.redditm…nIJTgwRn480E.jpg
1
null
How to run a powerful Local GPT lab?
0
[removed]
2024-12-11T09:14:49
https://www.reddit.com/r/LocalLLaMA/comments/1hbpluu/how_to_run_a_powerful_local_gpt_lab/
Local-Chip365
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbpluu
false
null
t3_1hbpluu
/r/LocalLLaMA/comments/1hbpluu/how_to_run_a_powerful_local_gpt_lab/
false
false
self
0
null
Any suggestion for finding a good workstation to run a local Llama?
1
[removed]
2024-12-11T09:22:48
https://www.reddit.com/r/LocalLLaMA/comments/1hbpphc/any_sugestion_to_find_a_good_station_to_run_a/
Local-Chip365
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbpphc
false
null
t3_1hbpphc
/r/LocalLLaMA/comments/1hbpphc/any_sugestion_to_find_a_good_station_to_run_a/
false
false
self
1
null
Which one to choose for local coding purposes - Qwen2.5-coder-32b-instruct-q2_k vs Qwen2.5-coder-14b-instruct-q5_k_m?
10
System configuration: Ryzen 5 7600, RTX 3060 12 GB, 48 GB RAM (can upgrade more if it helps).

Qwen2.5-coder-32b-instruct: Q2_K, 8192 context (fine for me), resulting in 8-9 tokens/s, somewhat slow.

Qwen2.5-coder-14b-instruct: Q5_K_M, 8192 context, 26-28 tokens/s, good enough for normal day-to-day programming.

I am using llama.cpp; sample command (just need to change the model):

llama-server -m qwen2.5-coder-32b-instruct-q2_k.gguf -ngl 99 -fa -c 8192 -ts 0.4,1 --port 8090

Is there any way I can run the 32B model at a higher bit width? I tried the latest commit from the llama.cpp developers that adds support for a small draft model, but it's giving broken/delayed output in between the results.
2024-12-11T09:26:24
https://www.reddit.com/r/LocalLLaMA/comments/1hbpr41/which_one_to_chose_for_local_coding_purpose/
FrenkieDingDong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbpr41
false
null
t3_1hbpr41
/r/LocalLLaMA/comments/1hbpr41/which_one_to_chose_for_local_coding_purpose/
false
false
self
10
null
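For the quantization question in the post above, a back-of-the-envelope size estimate helps decide what can fit in 12 GB of VRAM. The sketch below is an approximation only: the bits-per-weight figures are rough averages for the named GGUF quant types, and KV cache and runtime overhead are not counted.

```python
# Rough GGUF size estimate: parameters * average bits per weight / 8.
# The bits-per-weight values are approximate averages, not exact spec numbers.
BITS_PER_WEIGHT = {"Q2_K": 2.6, "Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5}

def approx_size_gb(params_billions: float, quant: str) -> float:
    """Approximate in-memory size of a quantized model, in GB."""
    return params_billions * 1e9 * BITS_PER_WEIGHT[quant] / 8 / 1e9

for model, params in (("Qwen2.5-Coder-32B", 32), ("Qwen2.5-Coder-14B", 14)):
    for quant in ("Q2_K", "Q4_K_M", "Q5_K_M"):
        print(f"{model} @ {quant}: ~{approx_size_gb(params, quant):.1f} GB")
```

By this rough estimate the 32B model needs on the order of 19 GB at Q4_K_M, so on a 12 GB card anything much above Q2_K means offloading a large share of layers to system RAM, which is where the token rate drops off.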
I built a local Llama bot with a redesigned GPU board based on an A5000 and a 3060.
0
https://preview.redd.it/…erve me better.
2024-12-11T09:29:44
https://www.reddit.com/r/LocalLLaMA/comments/1hbpsnr/i_buid_a_local_llama_bot_with_a_redesign_gpu/
Local-Chip365
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbpsnr
false
null
t3_1hbpsnr
/r/LocalLLaMA/comments/1hbpsnr/i_buid_a_local_llama_bot_with_a_redesign_gpu/
false
false
https://a.thumbs.redditm…fxgxn3MucH30.jpg
0
null
Possible to run Qwen 32b on Single 7900 XTX?
1
[removed]
2024-12-11T09:41:01
https://www.reddit.com/r/LocalLLaMA/comments/1hbpxpr/possible_to_run_qwen_32b_on_single_7900_xtx/
random_foo8731
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbpxpr
false
null
t3_1hbpxpr
/r/LocalLLaMA/comments/1hbpxpr/possible_to_run_qwen_32b_on_single_7900_xtx/
false
false
self
1
{'enabled': False, 'images': [{'id': 'yYt23rk112zQznIT0OLCBvEtxTI2TDgr9hgcIeTSZVo', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/Kn3J1oE0EazwvJrm4UO8phQggzdzM1TkMDpphVMyS4M.jpg?width=108&crop=smart&auto=webp&s=8c3ec5d8be8858624ac362aa6c5464d2ed41a67a', 'width': 108}, {'height': 101, 'url': 'https://external-preview.redd.it/Kn3J1oE0EazwvJrm4UO8phQggzdzM1TkMDpphVMyS4M.jpg?width=216&crop=smart&auto=webp&s=f2e3cb30b2b9f7b65654f10e7e9fea6f24140e7f', 'width': 216}, {'height': 150, 'url': 'https://external-preview.redd.it/Kn3J1oE0EazwvJrm4UO8phQggzdzM1TkMDpphVMyS4M.jpg?width=320&crop=smart&auto=webp&s=661c846634f188a67912ac1940a4d014c5813069', 'width': 320}], 'source': {'height': 294, 'url': 'https://external-preview.redd.it/Kn3J1oE0EazwvJrm4UO8phQggzdzM1TkMDpphVMyS4M.jpg?auto=webp&s=a3e8cc0463b79693d5f23fe0da5f8410c0cdcca1', 'width': 624}, 'variants': {}}]}
Looking for a budget solution to host models at home
1
[removed]
2024-12-11T10:02:43
https://www.reddit.com/r/LocalLLaMA/comments/1hbq861/looking_for_a_budget_solution_to_host_models_at/
tonystarkco
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbq861
false
null
t3_1hbq861
/r/LocalLLaMA/comments/1hbq861/looking_for_a_budget_solution_to_host_models_at/
false
false
self
1
null
Europe’s AI progress ‘insufficient’ to compete with US and China, French report says
294
2024-12-11T10:28:28
https://www.euronews.com/next/2024/12/10/europes-ai-progress-insufficient-to-compete-with-us-and-china-french-report-says
mr_house7
euronews.com
1970-01-01T00:00:00
0
{}
1hbqkl6
false
null
t3_1hbqkl6
/r/LocalLLaMA/comments/1hbqkl6/europes_ai_progress_insufficient_to_compete_with/
false
false
https://b.thumbs.redditm…X8TDDaidlvHY.jpg
294
{'enabled': False, 'images': [{'id': 'MNyTkP_TpoQY3bRkG_ojhHqTmNo89J-g2NwnoVurGlo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/HZZvD_gqRdqOzLgfeDM9mzAdgArY2s-Hb8IFx3-k_X4.jpg?width=108&crop=smart&auto=webp&s=8e2035abab7738ccd7a38b3637cd337d23cd07de', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/HZZvD_gqRdqOzLgfeDM9mzAdgArY2s-Hb8IFx3-k_X4.jpg?width=216&crop=smart&auto=webp&s=12bd4877449ab528911a63df49f11ff52daf46cd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/HZZvD_gqRdqOzLgfeDM9mzAdgArY2s-Hb8IFx3-k_X4.jpg?width=320&crop=smart&auto=webp&s=9139e2b673d1826f5ce143ff6725c50410f3cf23', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/HZZvD_gqRdqOzLgfeDM9mzAdgArY2s-Hb8IFx3-k_X4.jpg?width=640&crop=smart&auto=webp&s=0ebe0d7ef575da4256677e61cada0e5ef16467d1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/HZZvD_gqRdqOzLgfeDM9mzAdgArY2s-Hb8IFx3-k_X4.jpg?width=960&crop=smart&auto=webp&s=01da30683755237a85ae88820fff1dcf46c335df', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/HZZvD_gqRdqOzLgfeDM9mzAdgArY2s-Hb8IFx3-k_X4.jpg?width=1080&crop=smart&auto=webp&s=85c51220310d3c5be13102b332f5d792ba6157c5', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/HZZvD_gqRdqOzLgfeDM9mzAdgArY2s-Hb8IFx3-k_X4.jpg?auto=webp&s=67461e62e726f1f4028ff2a4acffff4403cdc1a9', 'width': 1200}, 'variants': {}}]}
Seeking advice on hosting LLM on GPU
1
[removed]
2024-12-11T10:46:07
https://www.reddit.com/r/LocalLLaMA/comments/1hbqt2a/seeking_advice_on_hosting_llm_on_gpu/
Plane_Past129
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbqt2a
false
null
t3_1hbqt2a
/r/LocalLLaMA/comments/1hbqt2a/seeking_advice_on_hosting_llm_on_gpu/
false
false
self
1
null
Need advice on hosting an LLM on a GPU in production!
1
[removed]
2024-12-11T10:48:13
https://www.reddit.com/r/LocalLLaMA/comments/1hbqu4w/need_advice_on_hosting_llm_on_gpu_in_production/
Plane_Past129
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbqu4w
false
null
t3_1hbqu4w
/r/LocalLLaMA/comments/1hbqu4w/need_advice_on_hosting_llm_on_gpu_in_production/
false
false
self
1
null
Automating Technical Screening for Senior ML/LLM Hires—What’s Working?
0
We’re scaling up and looking for senior talent in data architecture and ML/AI (especially folks with LLM experience), but the volume of applications is already a bit bonkers. We’re a lean team and can’t spend all day manually sifting through CVs and hopping on endless first-round calls. At the same time, we don’t want to end up making mis-hires just because we tried to cut corners.

We’re eyeing a few automated skill assessment platforms—some claim they can weed out anyone who’s not the real deal. But I’m sceptical. With LLMs and other tools now so easily accessible, is it still a solid strategy to rely on these platforms? How many candidates are just plugging the questions into GPT-4 (or similar) and acing the tests without actually knowing what’s going on under the hood?

On the flip side, going traditional (live coding sessions or custom project work for everyone) would be a huge time sink. We’d much rather put more effort into the final few candidates, but we need a reasonable way to get there.

Has anyone had decent results recently with off-the-shelf assessment tools? Are there platforms that effectively guard against AI-assisted cheating, or at least make it more trouble than it’s worth? Would love to hear about any real-world experiences, clever hacks, or fresh perspectives from other founders and teams navigating these waters. How are you filtering through the noise and finding the genuinely skilled engineers?
2024-12-11T11:04:01
https://www.reddit.com/r/LocalLLaMA/comments/1hbr2f2/automating_technical_screening_for_senior_mlllm/
Fit_Jelly_5346
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbr2f2
false
null
t3_1hbr2f2
/r/LocalLLaMA/comments/1hbr2f2/automating_technical_screening_for_senior_mlllm/
false
false
self
0
null
Which tool are you using to build AI agents?
0
As mentioned in the title, I'm curious to know which tool is the most effective. I'm using LangChain and my own custom tools for creating agents in my current projects. I used Flowise before and haven't fully tried n8n.
2024-12-11T11:25:16
https://i.redd.it/p2j1smaff76e1.jpeg
Sorry_Transition_599
i.redd.it
1970-01-01T00:00:00
0
{}
1hbrda5
false
null
t3_1hbrda5
/r/LocalLLaMA/comments/1hbrda5/which_tool_are_you_using_to_build_ai_agents/
false
false
https://b.thumbs.redditm…JwbZZj6c-RMA.jpg
0
{'enabled': True, 'images': [{'id': 'mGEpeL13-NSuHzBw2_MdphnqCUm33gL9mHHPnglj9B0', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/p2j1smaff76e1.jpeg?width=108&crop=smart&auto=webp&s=7728971ccaeeb17db1fe45c454e42b492c4f111c', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/p2j1smaff76e1.jpeg?width=216&crop=smart&auto=webp&s=71a442fd8e815b75aa8881267189356ddcc0703c', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/p2j1smaff76e1.jpeg?width=320&crop=smart&auto=webp&s=6846ce9a1da43cb47d3e08d09366c68b7d829e69', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/p2j1smaff76e1.jpeg?width=640&crop=smart&auto=webp&s=b1345678ad2f75f8d39698a13d6c4b23f61f150e', 'width': 640}, {'height': 639, 'url': 'https://preview.redd.it/p2j1smaff76e1.jpeg?width=960&crop=smart&auto=webp&s=eb287683ae0766e33b4b585ed31e2ce55df5fbde', 'width': 960}, {'height': 719, 'url': 'https://preview.redd.it/p2j1smaff76e1.jpeg?width=1080&crop=smart&auto=webp&s=ea0d9c77c147e456f1257e53ab586faa004e8bce', 'width': 1080}], 'source': {'height': 2828, 'url': 'https://preview.redd.it/p2j1smaff76e1.jpeg?auto=webp&s=664bd91f3a3a5f256fc50efc781ea9a3d2188eba', 'width': 4243}, 'variants': {}}]}
Superposition in Neural Network Weights: The Key to Instant Model Optimization and AGI?
0
Imagine a future where neural network weights exist in a superposition state, allowing instantaneous optimization and adaptation to tasks. Could this paradigm-shifting idea revolutionize large language models and push us closer to AGI?

Let's discuss the feasibility, implications, and challenges of implementing such a breakthrough. Are we standing at the cusp of a new era in AI development? Share your thoughts, theories, and critiques below!

P.S. Google just released "Willow": a quantum computing chip that solves quantum calculations in about 5 minutes.
2024-12-11T11:32:14
https://www.reddit.com/r/LocalLLaMA/comments/1hbrgz5/superposition_in_neural_network_weights_the_key/
wisewizer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbrgz5
false
null
t3_1hbrgz5
/r/LocalLLaMA/comments/1hbrgz5/superposition_in_neural_network_weights_the_key/
false
false
self
0
null
E5-2673 v4 2 CPUs Llama 3.1 70b tensor parallelism speed
16
I could get 442 ms per token for Llama 3.1 70B, i.e. 2.26 tokens per second, with the following commands. The model is from https://huggingface.co/Azamorn/Meta-Llama-3-70B-Instruct-Distributed/tree/main. Because this 2-CPU system has separate memory buses with 4 channels each, running 2 processes lets each one use its own memory bandwidth. 2.26 tokens/s is faster than llama.cpp's 1.24.

dllama worker --port 50001 --nthreads 20

dllama.exe chat --model .\dllama_model_llama3-70b-instruct_q40.m --tokenizer .\dllama_tokenizer_llama3_2.t --weights-float-type q40 --buffer-float-type q80 --nthreads 20 --workers 127.0.0.1:50001
2024-12-11T11:35:22
https://www.reddit.com/r/LocalLLaMA/comments/1hbrioj/e52673_v4_2_cpus_llama_31_70b_tensor_parallelism/
Hot-Lingonberry-8541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbrioj
false
null
t3_1hbrioj
/r/LocalLLaMA/comments/1hbrioj/e52673_v4_2_cpus_llama_31_70b_tensor_parallelism/
false
false
self
16
{'enabled': False, 'images': [{'id': 'N9g3q1ktjaimO6z8lT3efC260bwuoSpKsAL5oRjwH_g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?width=108&crop=smart&auto=webp&s=b48cbe0daaa443dc5b4716cc7e6a2c326c6a5a4a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?width=216&crop=smart&auto=webp&s=c829d9956a1c211567635b4ca77a1acb3c2413d9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?width=320&crop=smart&auto=webp&s=20b87fc03fd6c7d7579f22efdcbfdb01a1b88311', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?width=640&crop=smart&auto=webp&s=c30d278833b72c1daed382192eaf7aa30dc5ab9e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?width=960&crop=smart&auto=webp&s=d40d34da499fc89ccb4259d4a2697b9e244b3712', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?width=1080&crop=smart&auto=webp&s=0a8c6f86710a460046a76d23dcb6fafbc0a736b9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9mICLqZw9t8Vmfom0x0YYtVFpkc-JeAho2tv5t4C5_0.jpg?auto=webp&s=6f9953b0949c6bf050f648135a560476e2f99402', 'width': 1200}, 'variants': {}}]}
Perplexity AI Pro 1-YEAR Coupon - Only $25 (€23) | Subscribe then Pay!
1
[removed]
2024-12-11T11:53:51
[deleted]
1970-01-01T00:00:00
0
{}
1hbrstu
false
null
t3_1hbrstu
/r/LocalLLaMA/comments/1hbrstu/perplexity_ai_pro_1year_coupon_only_25_23/
false
false
default
1
null
Perplexity AI Pro 1-YEAR Coupon - Only $25 (€23) | Subscribe then Pay!
1
[removed]
2024-12-11T11:54:20
[deleted]
1970-01-01T00:00:00
0
{}
1hbrt3h
false
null
t3_1hbrt3h
/r/LocalLLaMA/comments/1hbrt3h/perplexity_ai_pro_1year_coupon_only_25_23/
false
false
default
1
null
QwQ-32B-preview typical overthinking failure
3
I will share this exchange with [QwQ-32B-preview](https://huggingface.co/Qwen/QwQ-32B-Preview) just in case someone is interested. The prompt was something along the lines of... User: Hey, can you whip up the code for the simplest game you can think of? Just something quick and fun! Please avoid Guess the Number, Rock Scissor Paper, Hangman, Adventure Games, Coin Flip Simulators, Memory Matching, Catch the Falling Objects, Type Racer, and Speed Clicker Games like the plague. QwQ-32B's long answer was... >!AI: Alright, I need to come up with the simplest game possible, but there are a lot of classics that I have to avoid. Let's see, no Guess the Number, Rock Paper Scissors, Hangman, Adventure Games, Coin Flip Simulators, Memory Matching, Catch the Falling Objects, or Type Racer Games. That really narrows it down.!< >!Okay, maybe I can think of something really basic, like a text-based game where the player has to make choices. But since Adventure Games are off-limits, I need to make sure it's not too involved.!< >!How about a game where the player has to navigate through a small grid or something? But that might be too similar to adventure games. Maybe too much like a maze, which could be considered an adventure game variant.!< >!Wait, what about a game where the player has to guess a word, but not like Hangman? Maybe something where they have to arrange letters in the correct order. But that might be too similar to Hangman in concept.!< >!Hmm, maybe a game where the player has to guess a color? No, that seems too similar to Guess the Number, just with colors instead of numbers.!< >!Let me think differently. Maybe a game where the player has to react to something, like pressing a key when a certain condition is met. Like a simple reflex test.!< >!So, perhaps the game could display a series of numbers one by one, and when a specific number appears, the player has to press a key as quickly as possible. Something like a simplified version of a reflex tester.!< >!But I'm not sure if that's too simple or if it qualifies as a game. Maybe I can add a scoring system, where the player's reaction time is measured, and they can try to beat their own best time.!< >!Wait, but that might be too similar to those online reaction time tests, which aren't really games in the traditional sense.!< >!Maybe I need to add an element of strategy or decision-making. But if I do that, it might become too complex.!< >!Let me consider a game where the player has to choose between different options, and based on their choice, they get a certain outcome. Like a choose-your-own-adventure story, but extremely simplified.!< >!But again, that seems too much like an adventure game, which is off-limits.!< >!Hmm.!< >!What if the game is about matching two similar items? But Memory Matching is forbidden.!< >!So maybe I can do something where the player has to identify similarities or differences between things without actually matching them.!< >!For example, show two words and the player has to say whether they are similar or different in some way.!< >!But that feels more like a quiz or an exercise than a game.!< >!I need something more engaging.!< >!Wait, maybe a game where the player has to guess the next letter in a sequence. Not exactly guessing a letter, but predicting based on a pattern.!< >!But that might be too much like figuring out a code, which could be similar to other games.!< >!Let me think outside the box. 
Maybe a game where the player interacts with the computer in a minimalistic way, like typing yes or no in response to questions.!< >!But that sounds too simplistic; it needs some sort of goal or challenge.!< >!What if the game is about typing a certain phrase as quickly as possible without mistakes, but that's like a typing test, which is similar to Type Racer Games, which are forbidden.!< >!Hmm.!< >!Maybe I can make a game where the player has to guess the next word in a sentence, but that's like a language game, and perhaps too similar to Hangman.!< >!Wait, perhaps a game where the player has to guess the synonym or antonym of a word. But that might be too educational and similar to other word games.!< >!I need something purely for fun and simple.!< >!What about a game where the computer does most of the work, and the player just watches? Like a virtual pet, but that would require too much maintenance and is probably not simple enough.!< >!Alternatively, maybe a game where the player watches a simulation, like watching grass grow or something, but that's not really interactive or fun.!< >!Wait, that's silly. I need something more engaging.!< >!Let me think about games that involve minimal interaction. Maybe a game where the player has to wait for something to happen and occasionally does a simple action.!< >!Like a virtual plant that the player has to water once in a while, but again, that seems too much like a simulation.!< >!Hmm.!< >!What if the game is about choosing between different colors or shapes, and based on the choice, the player gets a point or not. But that seems too vague.!< >!Wait, maybe I can make a game where the player has to select the odd one out from a list of items.!< >!For example, show a list like "apple, banana, carrot, dog" and the player has to choose "dog" because it's not a fruit or vegetable.!< >!But that's more of a trivia game, and it might be too similar to other categorical games.!< >!I need to think differently.!< >!Maybe a game where the player has to guess if a number is even or odd, but that's too simple and perhaps not engaging.!< >!Alternatively, a game where the player has to decide whether a statement is true or false, but again, that's more of a quiz.!< >!I need something that involves more creativity.!< >!What about a game where the player has to think of a word that starts with the last letter of the previous word? Like a simple word chain.!< >!But that requires two players usually, so it might not work as a single-player game.!< >!Wait, maybe the computer can provide the first word, and the player has to respond with a word that starts with the last letter of the computer's word, and so on.!< >!But that would require the computer to check if the word is valid, which might be complicated.!< >!Alternatively, maybe the game can be non-competitive, where the player just tries to come up with as many words as possible in a row.!< >!But that's more of a exercise than a game.!< >!I'm stuck here.!< >!Let me try another approach. Maybe a game where the player has to arrange items in a certain order, but not exactly memory matching.!< >!For example, showing a list of words and asking the player to arrange them alphabetically or by length.!< >!But that seems more like a puzzle or a quiz.!< >!Hmm.!< >!What if the game is about creating patterns? 
Like entering a sequence of characters in a specific order.!< >!But that might be too abstract.!< >!Wait, perhaps a game where the player has to type a certain pattern as quickly as possible, and the game measures their speed and accuracy.!< >!But that's similar to typing tests again, which are forbidden.!< >!I need to think of something entirely different.!< >!Maybe a game where the player has to guess the next item in a sequence, but not numbers; maybe colors or shapes.!< >!But that sounds like a memory game, which is off-limits.!< >!This is tricky.!< >!What about a game where the player has to decide which path to take in a story, but without any consequences or continuing story; just a single decision.!< >!For example, show two options: go left or go right, and based on the choice, display a different message.!< >!But that's too simplistic and doesn't provide any real challenge or fun.!< >!I need something with a bit more substance.!< >!Wait, maybe a game where the player has to guess the number of a certain character in a sentence.!< >!For example, how many times the letter 'e' appears in a given phrase.!< >!But that's similar to Guess the Number, just with letters instead of numbers.!< >!No good.!< >!What if the game is about identifying the mood of a sentence? Like happy, sad, angry, etc.!< >!But that would require some level of natural language processing, which might be too complex for a simple game.!< >!Moreover, it's more of a psychological assessment than a game.!< >!Hmm.!< >!Maybe a game where the player has to guess the next letter in a randomly generated sequence.!< >!But that's too random and not fun.!< >!Wait, perhaps a game where the player has to guess the outcome of a random event, like rolling a die, but not exactly predicting a number.!< >!For example, guessing if the next number is higher or lower than the previous one.!< >!But that's similar to Guess the Number in a way.!< >!Not ideal.!< >!Let me think of something completely different.!< >!What about a game where the player has to listen to a sound and identify what it is.!< >!But that would require audio capabilities, which might be beyond the scope of a simple text-based game.!< >!Moreover, it's more of a quiz than a game.!< >!Hmm.!< >!Maybe a game where the player has to create their own username or profile, and based on that, the game generates a funny title or description.!< >!But that's more of a novelty and not really a game.!< >!I need something with at least a basic goal or challenge.!< >!What if the game is about choosing the correct synonym for a word?!< >!But that's similar to educational games and perhaps too much like Hangman.!< >!Wait, maybe a game where the player has to choose the word that doesn't belong in a group, but without any categorical reasoning.!< >!Hmm, that's challenging.!< >!Alternatively, maybe a game where the player has to guess the meaning of a made-up word based on context.!< >!But that might be too difficult and not fun.!< >!I need to think of something that's purely for entertainment and doesn't feel like a test or exercise.!< >!What about a game where the player has to guess the next word in a nonsense phrase, and the game generates a funny story based on their choices.!< >!Something like a choose-your-own-adventure, but very short and silly.!< >!But that might be too similar to adventure games, which are forbidden.!< >!This is harder than I thought.!< >!Let me try to think of*<jazz music stops>*!< The AI didn’t lack ideas—it had plenty. 
But its insistence on strictly following the prompt kept it in a loop. Every concept was examined and rejected for being even slightly similar to the restricted list. Reflex games? Too close to reaction tests. Word chains? Hangman-adjacent. Typing patterns? Feels like a typing test. It kept reflecting on what makes a game simple but original, but those high standards made it second-guess every idea. It’s a perfectionist problem. By aiming for complete adherence to the rules, the AI stifled its own creativity. Maybe the real challenge wasn’t simplicity—it was learning when to stop overthinking. Or is it like the classic “don’t think of a pink elephant”? The more it focused on avoiding certain games, the more those games shaped its thinking. Oh, and it wrote "Hmm" eight times. I really hate that. Focusing on making LLMs mimic humans is missing the point and slowing real progress. They should be built to excel at specific tasks—efficient, specialized, and purely practical—not trying to act like pretend companions or amateur philosophers.
2024-12-11T11:55:39
https://www.reddit.com/r/LocalLLaMA/comments/1hbrttb/qwq32bpreview_typical_overthinking_failure/
IrisColt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbrttb
false
null
t3_1hbrttb
/r/LocalLLaMA/comments/1hbrttb/qwq32bpreview_typical_overthinking_failure/
false
false
self
3
{'enabled': False, 'images': [{'id': 'kUXS9ys-NgWemzgA5Eh0Rn8HAGatqnJZfUvgqrHeSJc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9lwu3fJVrmH543SkitXIkDLQYBVQoiD_rx6he1UsK_E.jpg?width=108&crop=smart&auto=webp&s=f0f8ca61ef30b0081fb8b73b42b824ee68424b2b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9lwu3fJVrmH543SkitXIkDLQYBVQoiD_rx6he1UsK_E.jpg?width=216&crop=smart&auto=webp&s=5b28814c9773986572b1c4e1664bc3c9772c6a81', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9lwu3fJVrmH543SkitXIkDLQYBVQoiD_rx6he1UsK_E.jpg?width=320&crop=smart&auto=webp&s=0757ee079812260bcf97966e711c05a8732fd0c2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9lwu3fJVrmH543SkitXIkDLQYBVQoiD_rx6he1UsK_E.jpg?width=640&crop=smart&auto=webp&s=3fe361e36fe7b9c39add93908c93480f6c1052ff', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9lwu3fJVrmH543SkitXIkDLQYBVQoiD_rx6he1UsK_E.jpg?width=960&crop=smart&auto=webp&s=26d8864ea66e81578cb86fa8e87275db5a455b19', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9lwu3fJVrmH543SkitXIkDLQYBVQoiD_rx6he1UsK_E.jpg?width=1080&crop=smart&auto=webp&s=7d3bba476dd92e52e6b207840434d5eae97b4d68', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9lwu3fJVrmH543SkitXIkDLQYBVQoiD_rx6he1UsK_E.jpg?auto=webp&s=5a7e351d1af7f3b4a9dbeb5447a1b3330ca52058', 'width': 1200}, 'variants': {}}]}
Got HuggingFace access to llama 2, llama 3, llama 3.1 models within 1 hour
0
This post is meant for anyone googling "How long does approval for llama models take". For my sign-up, I did mention I am affiliated with a university, and I included my GitHub and personal website on my HuggingFace account (I'm not sure if this matters).
2024-12-11T11:59:59
https://www.reddit.com/r/LocalLLaMA/comments/1hbrw2s/got_huggingface_access_to_llama_2_llama_3_llama/
popeldo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbrw2s
false
null
t3_1hbrw2s
/r/LocalLLaMA/comments/1hbrw2s/got_huggingface_access_to_llama_2_llama_3_llama/
false
false
self
0
null
Visual Language models for 3D object detection and depth perception
0
I want to run a VLM for object detection and depth perception in a 3D simulation engine (Unity). What are some good VLMs for this use case, considering factors like accuracy, speed, and ease of fine-tuning?

Example use case: In Unity, I have an environment with 2 rooms. A camera is set up that captures an image/video feed of the scene. The VLM should find a specific object (say, a black bottle) in the environment, judge where it is in the 3D scene, and generate coordinates for it. Basically, I want to find out exactly where the object is in the Unity environment. How can this be done?
2024-12-11T12:13:10
https://www.reddit.com/r/LocalLLaMA/comments/1hbs3z9/visual_language_models_for_3d_object_detection/
reso_ams
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbs3z9
false
null
t3_1hbs3z9
/r/LocalLLaMA/comments/1hbs3z9/visual_language_models_for_3d_object_detection/
false
false
self
0
null
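One common recipe for the question above: let the VLM (or a detector) return a 2D box for the object in the camera image, read the depth at that pixel back from Unity, and unproject the pixel into camera space with the pinhole intrinsics. The sketch below covers only the geometry step; the detection call, the names, and all numbers are illustrative assumptions, not any particular VLM's API.

```python
# Sketch: convert a 2D detection (pixel) plus a depth value into a 3D point in
# camera coordinates using pinhole intrinsics. The (u, v) pixel is assumed to
# come from whatever VLM/detector is used; nothing here is model-specific.
from dataclasses import dataclass

@dataclass
class Intrinsics:
    fx: float  # focal length in pixels (x)
    fy: float  # focal length in pixels (y)
    cx: float  # principal point x
    cy: float  # principal point y

def unproject(u: float, v: float, depth: float, k: Intrinsics) -> tuple[float, float, float]:
    """Pixel (u, v) at the given depth (meters) -> point in camera space."""
    x = (u - k.cx) * depth / k.fx
    y = (v - k.cy) * depth / k.fy
    return (x, y, depth)

# Made-up example: detection centered at pixel (812, 430), depth 3.2 m read
# from the depth buffer of a 1920x1080 camera.
k = Intrinsics(fx=1000.0, fy=1000.0, cx=960.0, cy=540.0)
print(unproject(812.0, 430.0, 3.2, k))
```

The resulting point is in camera space; transforming it by the camera's pose in Unity gives world coordinates.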
So is it the case that essentially an LLM is a function whose input is a string of tokens and whose output is a single token?
92
A function created through a machine learning process to basically predict the next token given the input string of tokens. And so is it accurate that what is usually referred to as the "context" is the input?

And to "run" an LLM, a program essentially repeatedly applies the function to the current context to get the next token, appends the new token to the context, and repeats, until (for instance) the maximum context length is reached or other stop conditions are met.

And is it then the case that prompt templates don't really have any special meaning to the LLM compared to any of the other text it's been trained on, except in the sense that the prompt templates may have been incorporated into the training data in order to (hopefully) make the LLM more likely to behave as desired if the prompt template is used?

I know I'm not putting this all the right way. But does that make sense? Is that more or less correct?
2024-12-11T12:37:21
https://www.reddit.com/r/LocalLLaMA/comments/1hbsie4/so_is_it_the_case_that_essentially_an_llm_is_a/
theRealGleepglop
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbsie4
false
null
t3_1hbsie4
/r/LocalLLaMA/comments/1hbsie4/so_is_it_the_case_that_essentially_an_llm_is_a/
false
false
self
92
null
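The mental model in the post above maps almost directly onto code. Below is a minimal sketch of that outer loop with Hugging Face transformers and plain greedy decoding; the model ID is just an arbitrary small example, and real runtimes cache past key/values instead of re-running the full context each step.

```python
# Sketch of "an LLM is a next-token function called in a loop".
# Greedy decoding, no sampling, no KV cache (deliberately naive for clarity).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # small example model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

context = tok("The capital of France is", return_tensors="pt").input_ids

for _ in range(20):  # generate at most 20 new tokens
    with torch.no_grad():
        logits = model(context).logits              # the "function": context in, scores out
    next_token = logits[0, -1].argmax()             # pick the single most likely next token
    context = torch.cat([context, next_token.view(1, 1)], dim=1)  # append, then repeat
    if next_token.item() == tok.eos_token_id:       # one possible stop condition
        break

print(tok.decode(context[0]))
```

A prompt template is just more tokens in `context`; it carries no special status inside the function beyond whatever the training data gave it.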
Getting an error on Applio.org
1
[removed]
2024-12-11T12:52:06
https://www.reddit.com/r/LocalLLaMA/comments/1hbsrpa/getting_an_error_on_applioorg/
Hot_Heron_3451
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbsrpa
false
null
t3_1hbsrpa
/r/LocalLLaMA/comments/1hbsrpa/getting_an_error_on_applioorg/
false
false
https://b.thumbs.redditm…YVHA83CfLdgY.jpg
1
null
TabbyAPI AssertionError
1
Hello. I've been dealing with EXL2 models reaching their cache limit, and there is seemingly no such thing as "flash attention" pre-built with TabbyAPI, unlike KCPP for example. Is there any way to install/enable it for EXL2 models or specifically Tabby?
2024-12-11T12:52:57
https://www.reddit.com/r/LocalLLaMA/comments/1hbss9i/tabbyapi_assertionerror/
IcyTorpedo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbss9i
false
null
t3_1hbss9i
/r/LocalLLaMA/comments/1hbss9i/tabbyapi_assertionerror/
false
false
self
1
null
What is the best place to train and use qwen2 vl 7/72b? With price per token like chatgpt api
1
[removed]
2024-12-11T13:05:39
https://www.reddit.com/r/LocalLLaMA/comments/1hbt0v7/what_is_the_best_place_to_train_and_use_qwen2_vl/
krakotay1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbt0v7
false
null
t3_1hbt0v7
/r/LocalLLaMA/comments/1hbt0v7/what_is_the_best_place_to_train_and_use_qwen2_vl/
false
false
self
1
{'enabled': False, 'images': [{'id': 'KEKqZLXaDuojO8066WvfNm2knPNQpREJOqDRQbP0jOE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=108&crop=smart&auto=webp&s=c4356a09ff651d99050d2e2f7c625136bd5cc50d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=216&crop=smart&auto=webp&s=2efb5516e5e9493aedbb8874a4346aea1e2fdfe3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=320&crop=smart&auto=webp&s=5760f28068be8d1404c060058ca5dc7138a3921c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=640&crop=smart&auto=webp&s=5040e75d875b032b45e4cafad1ca6eed231c2aa5', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=960&crop=smart&auto=webp&s=678233eb228e31658cc7dc6f24ff3c4c199255ec', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=1080&crop=smart&auto=webp&s=e9407e720f5a5c73c6566e3b787afc17181bbb3f', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?auto=webp&s=610ce8e238d743540ebac62332adfbc058d7c11d', 'width': 2400}, 'variants': {}}]}
Best VLM for transcription of screenshots?
1
[removed]
2024-12-11T13:10:43
https://www.reddit.com/r/LocalLLaMA/comments/1hbt4by/best_vlm_for_transcription_of_screenshots/
Last-Lobster-7672
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbt4by
false
null
t3_1hbt4by
/r/LocalLLaMA/comments/1hbt4by/best_vlm_for_transcription_of_screenshots/
false
false
self
1
null
Looking for Benchmarks: On-Prem vs Cloud for AI Model Training
1
[removed]
2024-12-11T13:23:45
https://www.reddit.com/r/LocalLLaMA/comments/1hbtd65/looking_for_benchmarks_onprem_vs_cloud_for_ai/
Secu-Thibz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbtd65
false
null
t3_1hbtd65
/r/LocalLLaMA/comments/1hbtd65/looking_for_benchmarks_onprem_vs_cloud_for_ai/
false
false
self
1
null
AI could give America more of an advantage than the atomic bomb if they create a monopoly over AI; if they win, other AI companies will slowly suffocate into bankruptcy, which will lead to price manipulation by the elites, and compliance will be inevitable. This will be like ... What do you guys think?
0
2024-12-11T13:44:41
https://i.redd.it/floqrna2486e1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1hbtrqk
false
null
t3_1hbtrqk
/r/LocalLLaMA/comments/1hbtrqk/ai_can_give_america_more_advantage_than_atomic/
false
false
https://b.thumbs.redditm…dOvTJVjWeMzA.jpg
0
{'enabled': True, 'images': [{'id': 'kI8sXoyrfnUk96DcHBkfTicPMOu0hHIbT3mYX0BmyBw', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/floqrna2486e1.png?width=108&crop=smart&auto=webp&s=bc2dc02bb9765fd615425313cf8533b8c90d0d5f', 'width': 108}, {'height': 211, 'url': 'https://preview.redd.it/floqrna2486e1.png?width=216&crop=smart&auto=webp&s=219da28dc863de83573f870c7d1bc51a127b636b', 'width': 216}, {'height': 313, 'url': 'https://preview.redd.it/floqrna2486e1.png?width=320&crop=smart&auto=webp&s=e6358c0ebab20c4f3df55ca786ad9feb1a2798e3', 'width': 320}, {'height': 626, 'url': 'https://preview.redd.it/floqrna2486e1.png?width=640&crop=smart&auto=webp&s=f99aa5f9ffb651a9ecf23d187d98d68be0821edd', 'width': 640}], 'source': {'height': 888, 'url': 'https://preview.redd.it/floqrna2486e1.png?auto=webp&s=61d8cf100acef217b5e37f42a94218993071dd08', 'width': 907}, 'variants': {}}]}
How to host Qwen2.5 VLM?
1
[removed]
2024-12-11T14:08:44
https://www.reddit.com/r/LocalLLaMA/comments/1hbu9gj/how_to_host_qwen25_vlm/
ProfessionalHour1231
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbu9gj
false
null
t3_1hbu9gj
/r/LocalLLaMA/comments/1hbu9gj/how_to_host_qwen25_vlm/
false
false
self
1
null
A simple example shows you the real thinking ability of O1 PRO
0
Third-grade-level math.

Given a thin, long water pipe as a water source, you have three unmarked water cups with capacities of 5 liters, 6 liters, and 7 liters. You can aim the water pipe at the opening of a cup and press a switch to fill it. Special note: if you pour out the water from a cup (emptying it completely, as if pouring it on the ground, because you cannot return water to the source, which is a thin pipe), it is considered waste. How can you obtain exactly 8 liters of water using these 3 cups while minimizing water waste?

[https://chatgpt.com/share/67570ef1-166c-8010-9970-62f37aadf497](https://chatgpt.com/share/67570ef1-166c-8010-9970-62f37aadf497)

***

You have a water reservoir with abundant water and three unmarked water jugs with known capacities of 5 liters, 6 liters, and 7 liters. The machine will only fill a completely empty jug when you place it inside. Special note: you can empty a jug by pouring its contents into another jug, but if you pour water out without transferring it to another jug (as if pouring it on the ground), it is considered "waste". How can you obtain exactly 8 liters of water using these 3 jugs while minimizing water waste?

[https://chatgpt.com/share/67570e96-1d9c-8010-bfc3-afaf609d010c](https://chatgpt.com/share/67570e96-1d9c-8010-bfc3-afaf609d010c)
2024-12-11T14:13:45
https://www.reddit.com/r/LocalLLaMA/comments/1hbud3c/a_simple_example_shows_you_the_real_thinking/
flysnowbigbig
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbud3c
false
null
t3_1hbud3c
/r/LocalLLaMA/comments/1hbud3c/a_simple_example_shows_you_the_real_thinking/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Nyuu7POyhy6govfJ1dcuznEdiKSvWYvY75CrbkKZk54', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?width=108&crop=smart&auto=webp&s=1c9fdd18b399712019363db42aafb980b94bd314', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?width=216&crop=smart&auto=webp&s=899c4a23b4ceac10e86c1f39517d489870146375', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?width=320&crop=smart&auto=webp&s=f0fbf30be58ec54707a24cb4ac47d68af24442f7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?width=640&crop=smart&auto=webp&s=e590319308efb90d50448579fb76b003782dec8c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?width=960&crop=smart&auto=webp&s=e25bf929190c7aca5bc237df824850c31f043113', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?width=1080&crop=smart&auto=webp&s=bb8c7e2b754942d06a77b1979c63552b76523e40', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?auto=webp&s=e3bf69e1a2674fdfdf6fdb17b1bc4de5488bd3eb', 'width': 1600}, 'variants': {}}]}
I can't find any LLM that is better than gemma-2-9b-it-SimPO...
31
... that you can drive, in a reasonable manner, with 8 GB of VRAM. I've tried a lot of the new toys and I always end up with the same conclusion. I hope somebody tries to replicate its style (stop the GPT-isms plox, enough is enough) and makes something better in the ballpark of 8 to 10 billion parameters that you can drive locally on the most humble (actually affordable) GPUs. Or maybe we need Gemma 3.
2024-12-11T14:18:00
https://www.reddit.com/r/LocalLLaMA/comments/1hbugbv/i_cant_find_any_llm_that_is_better_than/
pumukidelfuturo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbugbv
false
null
t3_1hbugbv
/r/LocalLLaMA/comments/1hbugbv/i_cant_find_any_llm_that_is_better_than/
false
false
self
31
null
New strongest linear model: QRWKV6-32B (RWKV6 based on Qwen2.5-32B) & RWKV MoE: Finch-MoE-37B-A11B
1
[removed]
2024-12-11T14:38:29
https://www.reddit.com/r/LocalLLaMA/comments/1hbuvnb/new_strongest_linear_model_qrwkv632b_rwkv6_based/
SoullessMonarch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbuvnb
false
null
t3_1hbuvnb
/r/LocalLLaMA/comments/1hbuvnb/new_strongest_linear_model_qrwkv632b_rwkv6_based/
false
false
https://a.thumbs.redditm…v2ZiKpEq4MN0.jpg
1
{'enabled': False, 'images': [{'id': 'sm58dM80LBeLol3CknJNKpMoHR8dZ0TKhiDNu_s83UY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?width=108&crop=smart&auto=webp&s=cf9ca1cce9b9004e60607a34cd19a738e2fdd13c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?width=216&crop=smart&auto=webp&s=9580c276fb7b89d002e965225314c1bbd23658f4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?width=320&crop=smart&auto=webp&s=9f1f5b87c49cfcb456f203825d4d9f33684f5e6b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?width=640&crop=smart&auto=webp&s=8455529e840cd1e2ecea63b242d0e008d3a26e24', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?width=960&crop=smart&auto=webp&s=4c4cfbfa79fbd9ba9ad4d1eaa919483c2ddde3e9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?width=1080&crop=smart&auto=webp&s=c059608cfc4b77a26e5214fd0ab3bcfc666716aa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?auto=webp&s=76dc9754a6a9550107def02f5913b694b55937b1', 'width': 1200}, 'variants': {}}]}
New linear models: QRWKV6-32B (RWKV6 based on Qwen2.5-32B) & RWKV-based MoE: Finch-MoE-37B-A11B
119
# Releases:

Recursal has released 2 new experimental models (see their Hugging Face model cards for benchmarks):

* QRWKV6-32B-Instruct-Preview-v0.1
* Finch-MoE-37B-A11B-v0.1-HF

**QRWKV6** is a model based on Qwen2.5-32B. From their model card: "We are able to convert any previously trained QKV Attention-based model, such as Qwen and LLaMA, into an RWKV variant without **requiring retraining from scratch**. Enabling us to rapidly test and validate the significantly more efficient RWKV Linear attention mechanism at a larger scale with a much smaller budget, bypassing the need for training from scratch."

But what is (Q)RWKV? RWKV is an alternative RNN architecture to Transformers. It has a linear time complexity over the entire sequence, meaning that it will always take the same amount of time to generate a new token. Transformers have a quadratic time complexity, getting slower with each token as you are looking back at all previous tokens for each new one.

[Note: Time and memory per token, Table 1 from the RWKV-5/6 paper](https://preview.redd.it/n8xc9egn486e1.png?width=375&format=png&auto=webp&s=b0326a415aa10657d5898a541a662415d0c8d885)

QRWKV6 is the combination of the Qwen2.5 architecture and RWKV6. Some RWKV design choices have been replaced by Qwen's, enabling the weight derivation. For those interested in context length, they state that they were only able to do the conversion process up to 16k context length, and that "while the model is stable beyond this limit, additional training might be required to support longer context lengths".

**Finch-MoE** is a Mixture-of-Experts model based on RWKV-6 (Finch), also called Flock of Finches. 37B total parameters with 11B active parameters. This is just the start of RWKV-based MoEs, as they want to expand the use of MoE to more portions of the model. This model uses a RWKV-6 7B model trained for 2T tokens, and after conversion to MoE, it was trained for another 110B tokens. This might not be the best MoE around, but this too has a linear time complexity.

[How the MoE differs from the standard RWKV-6 architecture](https://preview.redd.it/ntft4jt7986e1.png?width=1150&format=png&auto=webp&s=b3a72f030550a869be7fe73da8474bd3dbd6eba4)

# Upcoming:

For those not convinced by QRWKV6's performance, they are planning to release more models. From their blog:

"""
Currently Q-RWKV-6 72B Instruct model is being trained

Additionally with the finalization of RWKV-7 architecture happening soon, we intend to repeat the process and provide a full line up of

* Q-RWKV-7 32B
* LLaMA-RWKV-7 70B

We intend to provide more details on the conversion process, along with our paper after the subsequent model release.
"""

So I would stay on the lookout for those if you're interested in linear models!

# Links:

Here are the Hugging Face model cards with some limited benchmarks:

QRWKV6: [https://huggingface.co/recursal/QRWKV6-32B-Instruct-Preview-v0.1](https://huggingface.co/recursal/QRWKV6-32B-Instruct-Preview-v0.1)

Finch-MoE: [https://huggingface.co/recursal/Finch-MoE-37B-A11B-v0.1-HF](https://huggingface.co/recursal/Finch-MoE-37B-A11B-v0.1-HF)

(I'll link their blogposts in a comment)
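To make the linear-vs-quadratic point concrete, here is a toy sketch (my own illustration, not the actual RWKV or QRWKV formulation) contrasting attention-style decoding, where each new token scans a growing KV cache, with RNN-style decoding, where only a fixed-size state is updated per token:

```python
import numpy as np

d = 64  # hidden size

def attention_step(q, K, V):
    # Quadratic-style decoding: each new token attends over the whole
    # KV cache, so per-token work grows with sequence length t.
    scores = K @ q / np.sqrt(d)        # O(t * d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return V.T @ w                     # O(t * d)

def recurrent_step(state, x, decay=0.9):
    # Linear/RNN-style decoding: a fixed-size state is updated per token,
    # so per-token work is constant regardless of t.
    return decay * state + x           # O(d)

rng = np.random.default_rng(0)
K = rng.standard_normal((0, d))
V = rng.standard_normal((0, d))
state = np.zeros(d)
for t in range(5):
    x = rng.standard_normal(d)
    K = np.vstack([K, x]); V = np.vstack([V, x])
    y_attn = attention_step(x, K, V)   # cost (and memory) grows with t
    state = recurrent_step(state, x)   # cost and memory stay flat
```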
2024-12-11T14:48:14
https://www.reddit.com/r/LocalLLaMA/comments/1hbv2yt/new_linear_models_qrwkv632b_rwkv6_based_on/
SoullessMonarch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbv2yt
false
null
t3_1hbv2yt
/r/LocalLLaMA/comments/1hbv2yt/new_linear_models_qrwkv632b_rwkv6_based_on/
false
false
https://a.thumbs.redditm…v2ZiKpEq4MN0.jpg
119
{'enabled': False, 'images': [{'id': 'sm58dM80LBeLol3CknJNKpMoHR8dZ0TKhiDNu_s83UY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?width=108&crop=smart&auto=webp&s=cf9ca1cce9b9004e60607a34cd19a738e2fdd13c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?width=216&crop=smart&auto=webp&s=9580c276fb7b89d002e965225314c1bbd23658f4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?width=320&crop=smart&auto=webp&s=9f1f5b87c49cfcb456f203825d4d9f33684f5e6b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?width=640&crop=smart&auto=webp&s=8455529e840cd1e2ecea63b242d0e008d3a26e24', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?width=960&crop=smart&auto=webp&s=4c4cfbfa79fbd9ba9ad4d1eaa919483c2ddde3e9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?width=1080&crop=smart&auto=webp&s=c059608cfc4b77a26e5214fd0ab3bcfc666716aa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1NJ_RLwf2M_yWF0zzVOqNpEE7BAQduN9xFT0akNATLc.jpg?auto=webp&s=76dc9754a6a9550107def02f5913b694b55937b1', 'width': 1200}, 'variants': {}}]}
LLM Pre-Training
1
I'm not new to this topic, but I really want to gain some hands-on experience with the pre-training of LLMs. Do you have any suggestions for books or other study materials that could help me?
2024-12-11T14:56:56
https://www.reddit.com/r/LocalLLaMA/comments/1hbva0h/llm_pretraining/
maverik75
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbva0h
false
null
t3_1hbva0h
/r/LocalLLaMA/comments/1hbva0h/llm_pretraining/
false
false
self
1
null
Gemini 2.0 Flash Experimental, anyone tried it?
151
2024-12-11T15:02:13
https://i.redd.it/xhzxaey1i86e1.png
robberviet
i.redd.it
1970-01-01T00:00:00
0
{}
1hbvegm
false
null
t3_1hbvegm
/r/LocalLLaMA/comments/1hbvegm/gemini_20_flash_experimental_anyone_tried_it/
false
false
https://b.thumbs.redditm…CNQyVs5mWeAM.jpg
151
{'enabled': True, 'images': [{'id': 'HhSt6b8L-N91-JwmMKqbvgtVfbjNlCYdRJLKjpVWQ3w', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/xhzxaey1i86e1.png?width=108&crop=smart&auto=webp&s=369b7a737973c3713801f66757960a7072fe0633', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/xhzxaey1i86e1.png?width=216&crop=smart&auto=webp&s=df1f4c6fb10796222b1bfe1feb909507fcee65ae', 'width': 216}, {'height': 195, 'url': 'https://preview.redd.it/xhzxaey1i86e1.png?width=320&crop=smart&auto=webp&s=e5f6479ff23aa9d11d4091e71a90ec81bd8a5e79', 'width': 320}, {'height': 390, 'url': 'https://preview.redd.it/xhzxaey1i86e1.png?width=640&crop=smart&auto=webp&s=353d121a3e08b32701a9dca7ab131b1b9f5a3f8c', 'width': 640}, {'height': 585, 'url': 'https://preview.redd.it/xhzxaey1i86e1.png?width=960&crop=smart&auto=webp&s=2543e868d506027b3116583214a856f599b86ca5', 'width': 960}, {'height': 658, 'url': 'https://preview.redd.it/xhzxaey1i86e1.png?width=1080&crop=smart&auto=webp&s=3d8a1d50cbe9d0c103d702ab65e8f7677dd1128d', 'width': 1080}], 'source': {'height': 938, 'url': 'https://preview.redd.it/xhzxaey1i86e1.png?auto=webp&s=46063cd5c6f1547e15e546af4a80e421b842b27b', 'width': 1538}, 'variants': {}}]}
Google just dropped Gemini 2.0 Flash, it supports native audio and image generation (private preview for now), everything else is available on the API
280
2024-12-11T15:02:30
https://i.redd.it/jg91tfd4i86e1.png
Vivid_Dot_6405
i.redd.it
1970-01-01T00:00:00
0
{}
1hbveow
false
null
t3_1hbveow
/r/LocalLLaMA/comments/1hbveow/google_just_dropped_gemini_20_flash_it_supports/
false
false
https://a.thumbs.redditm…gwTwREFD8lx0.jpg
280
{'enabled': True, 'images': [{'id': 'Kscpnt68P9iJJRZ5UVDutVXunfXBlrZvSIKJ1Iils-4', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/jg91tfd4i86e1.png?width=108&crop=smart&auto=webp&s=2cbe2cb108d015d985f4b858b551c97003e34476', 'width': 108}, {'height': 103, 'url': 'https://preview.redd.it/jg91tfd4i86e1.png?width=216&crop=smart&auto=webp&s=0cfc75b3048ac0590846f47e128fd0480db473e5', 'width': 216}, {'height': 152, 'url': 'https://preview.redd.it/jg91tfd4i86e1.png?width=320&crop=smart&auto=webp&s=39aef62e1fb9de6a6d79e708919fa2ca8b672f93', 'width': 320}, {'height': 305, 'url': 'https://preview.redd.it/jg91tfd4i86e1.png?width=640&crop=smart&auto=webp&s=d2bcc7367be663260d759e621e0a60b4f5c6ded3', 'width': 640}, {'height': 457, 'url': 'https://preview.redd.it/jg91tfd4i86e1.png?width=960&crop=smart&auto=webp&s=5d8c21d508d5979e4416ea8ed5894b63193509e4', 'width': 960}, {'height': 515, 'url': 'https://preview.redd.it/jg91tfd4i86e1.png?width=1080&crop=smart&auto=webp&s=7177ef9ada7195ef0fd0a4dd84f56878ac83342f', 'width': 1080}], 'source': {'height': 541, 'url': 'https://preview.redd.it/jg91tfd4i86e1.png?auto=webp&s=7e2f6cdf0e5d9160dbaaaf3f59214463a88eea1a', 'width': 1134}, 'variants': {}}]}
Is it possible to selfhost hugging face demo UIs?
1
[removed]
2024-12-11T15:05:34
https://www.reddit.com/r/LocalLLaMA/comments/1hbvhb2/is_it_possible_to_selfhost_hugging_face_demo_uis/
MaruluVR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbvhb2
false
null
t3_1hbvhb2
/r/LocalLLaMA/comments/1hbvhb2/is_it_possible_to_selfhost_hugging_face_demo_uis/
false
false
self
1
{'enabled': False, 'images': [{'id': 'egm9jIw1Ty7SHh8T3Xnn72OAu64G2SvCFRbs8xi2PuA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/noa2C-CflMJ7NjWSp-eptv7f3I-_hlJ2Kyz6AAbfIRs.jpg?width=108&crop=smart&auto=webp&s=08f7597f034fe925ca8843703c4b85c689cba35b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/noa2C-CflMJ7NjWSp-eptv7f3I-_hlJ2Kyz6AAbfIRs.jpg?width=216&crop=smart&auto=webp&s=f1da4c41f5a05a827ee4e341ec489cbe33adfa20', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/noa2C-CflMJ7NjWSp-eptv7f3I-_hlJ2Kyz6AAbfIRs.jpg?width=320&crop=smart&auto=webp&s=9e64329682f04a3483f80df692220748d64d55f7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/noa2C-CflMJ7NjWSp-eptv7f3I-_hlJ2Kyz6AAbfIRs.jpg?width=640&crop=smart&auto=webp&s=9ed0da8733835a1d877d46dd7a63b12322ad29a0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/noa2C-CflMJ7NjWSp-eptv7f3I-_hlJ2Kyz6AAbfIRs.jpg?width=960&crop=smart&auto=webp&s=506ab3000be871bb356747f96d3b30ca31455d71', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/noa2C-CflMJ7NjWSp-eptv7f3I-_hlJ2Kyz6AAbfIRs.jpg?width=1080&crop=smart&auto=webp&s=d491d69d0925a7d9d0a601fe12c013bf25df1b33', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/noa2C-CflMJ7NjWSp-eptv7f3I-_hlJ2Kyz6AAbfIRs.jpg?auto=webp&s=37b4d41493a18ca848597eac763070b93719f510', 'width': 1200}, 'variants': {}}]}
How to prune after finetuning the Llama model for my specific usecase?
1
[removed]
2024-12-11T15:15:18
https://www.reddit.com/r/LocalLLaMA/comments/1hbvp3y/how_to_prune_after_finetuning_the_llama_model_for/
lakshaydulani
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbvp3y
false
null
t3_1hbvp3y
/r/LocalLLaMA/comments/1hbvp3y/how_to_prune_after_finetuning_the_llama_model_for/
false
false
self
1
null
Gemma 3
120
Man, it has been a long time since Google open-sourced Gemma 3
2024-12-11T15:26:04
https://www.reddit.com/r/LocalLLaMA/comments/1hbvxiv/gemma_3/
Flashy_Management962
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbvxiv
false
null
t3_1hbvxiv
/r/LocalLLaMA/comments/1hbvxiv/gemma_3/
false
false
self
120
null
GGUF model for text summarization
2
So far I seem to have the best results with Hermes uncensored. Second is a model called DarkIdol. I tested Llama 3.3 and somehow it doesn't perform great. But even Hermes occasionally messes up, for example looping the answer and restarting with the question itself. Any good advice?
2024-12-11T15:28:29
https://www.reddit.com/r/LocalLLaMA/comments/1hbvzfk/gguf_model_for_text_summarization/
goingsplit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbvzfk
false
null
t3_1hbvzfk
/r/LocalLLaMA/comments/1hbvzfk/gguf_model_for_text_summarization/
false
false
self
2
null
Gemini Flash 2.0 experimental
171
https://x.com/sundarpichai/status/1866868228141597034?s=46
2024-12-11T15:35:12
https://www.reddit.com/r/LocalLLaMA/comments/1hbw529/gemini_flash_20_experimental/
appakaradi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbw529
false
null
t3_1hbw529
/r/LocalLLaMA/comments/1hbw529/gemini_flash_20_experimental/
false
false
self
171
{'enabled': False, 'images': [{'id': 'nBxY6xJQ4QdMiO7MhbraI6m03Sf_k9_cWflTwn-4WPI', 'resolutions': [{'height': 135, 'url': 'https://external-preview.redd.it/N8MH-WKO5rLUlEFE6mlpG5VzUQBcq5IRqPNDBMecb8w.jpg?width=108&crop=smart&auto=webp&s=e56df0307756faa2f05a4c2c84b2d52912da79ad', 'width': 108}, {'height': 270, 'url': 'https://external-preview.redd.it/N8MH-WKO5rLUlEFE6mlpG5VzUQBcq5IRqPNDBMecb8w.jpg?width=216&crop=smart&auto=webp&s=921de56d1d123f9192f4221a36d8b9eb9e0e0f5e', 'width': 216}, {'height': 400, 'url': 'https://external-preview.redd.it/N8MH-WKO5rLUlEFE6mlpG5VzUQBcq5IRqPNDBMecb8w.jpg?width=320&crop=smart&auto=webp&s=e2c26b902f7f17518eb5bd713516af4603ae1240', 'width': 320}, {'height': 800, 'url': 'https://external-preview.redd.it/N8MH-WKO5rLUlEFE6mlpG5VzUQBcq5IRqPNDBMecb8w.jpg?width=640&crop=smart&auto=webp&s=0942dab7436c6e3b14646ecb68983235ed9ade6c', 'width': 640}, {'height': 1201, 'url': 'https://external-preview.redd.it/N8MH-WKO5rLUlEFE6mlpG5VzUQBcq5IRqPNDBMecb8w.jpg?width=960&crop=smart&auto=webp&s=0008c77010f7e43e7867f35fb5f35ce67784c6ca', 'width': 960}, {'height': 1351, 'url': 'https://external-preview.redd.it/N8MH-WKO5rLUlEFE6mlpG5VzUQBcq5IRqPNDBMecb8w.jpg?width=1080&crop=smart&auto=webp&s=d57bdb88f85bd1bc4d281eeb76100184cfd63868', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/N8MH-WKO5rLUlEFE6mlpG5VzUQBcq5IRqPNDBMecb8w.jpg?auto=webp&s=1d05e8d86176d87ccfafab8c5485c8b1b113f8f7', 'width': 1637}, 'variants': {}}]}
Ottodev is now official open source fork of stackblitz's bolt.new , now known as bolt.diy
20
https://preview.redd.it/…z-labs/bolt.diy)
2024-12-11T15:36:43
https://www.reddit.com/r/LocalLLaMA/comments/1hbw6bt/ottodev_is_now_official_open_source_fork_of/
TheLogiqueViper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbw6bt
false
null
t3_1hbw6bt
/r/LocalLLaMA/comments/1hbw6bt/ottodev_is_now_official_open_source_fork_of/
false
false
https://b.thumbs.redditm…wXuMLFiV6PUo.jpg
20
{'enabled': False, 'images': [{'id': 'wf4aOlTttWZTLP-Zhg-kke8mjalJZ_FY7NNWBbaV_LE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Sj6q6aDj4ownrCQP_rquFYtsij1wQQQmTxq8q-KiRSc.jpg?width=108&crop=smart&auto=webp&s=54ff388fd63c8fd4a784507bbc750ca6d183cb88', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Sj6q6aDj4ownrCQP_rquFYtsij1wQQQmTxq8q-KiRSc.jpg?width=216&crop=smart&auto=webp&s=1e174395f669bb79f7edad8f9688c9b393f32077', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Sj6q6aDj4ownrCQP_rquFYtsij1wQQQmTxq8q-KiRSc.jpg?width=320&crop=smart&auto=webp&s=441394d0ab4aafa8e7717c7a1ae80c1496b415dc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Sj6q6aDj4ownrCQP_rquFYtsij1wQQQmTxq8q-KiRSc.jpg?width=640&crop=smart&auto=webp&s=0eb3f4810e3735559596d78ec92dbde12ff8b99d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Sj6q6aDj4ownrCQP_rquFYtsij1wQQQmTxq8q-KiRSc.jpg?width=960&crop=smart&auto=webp&s=e493209703860ea6c24490433ac891c5fb649383', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Sj6q6aDj4ownrCQP_rquFYtsij1wQQQmTxq8q-KiRSc.jpg?width=1080&crop=smart&auto=webp&s=a7dcb958612548d56eb92e3bba85232ff15c5bf7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Sj6q6aDj4ownrCQP_rquFYtsij1wQQQmTxq8q-KiRSc.jpg?auto=webp&s=7434fe6f74d5c895d85f3d5e650effda4aad96d5', 'width': 1200}, 'variants': {}}]}
Exceeding LLM's max context window, can you explain?
1
[removed]
2024-12-11T16:11:56
https://www.reddit.com/r/LocalLLaMA/comments/1hbwzmk/exceeding_llms_max_context_window_can_you_explain/
Consistent_Winner596
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbwzmk
false
null
t3_1hbwzmk
/r/LocalLLaMA/comments/1hbwzmk/exceeding_llms_max_context_window_can_you_explain/
false
false
self
1
null
New to Mac IOS
1
[removed]
2024-12-11T16:17:49
https://www.reddit.com/r/LocalLLaMA/comments/1hbx4k6/new_to_mac_ios/
Strong_Parking6384
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbx4k6
false
null
t3_1hbx4k6
/r/LocalLLaMA/comments/1hbx4k6/new_to_mac_ios/
false
false
self
1
null
Local tool calling agents using LangChain and Ollama are inexplicably poorly performing
1
Hi all,

Has anyone tried to create tool-calling agents with LangChain and Ollama? My attempts have been almost universally unsuccessful. Problems include:

1. Losing the ability to chat
2. Calling tools when it is not appropriate
3. Calling tools on nonsensical inputs

The same agents work fine on, e.g., OpenAI. I have encountered this on a variety of models on Ollama. Oddly enough, llama3-groq-tool-use:8b is the lone model that seems to work reasonably. Has anyone encountered this and determined the cause?

Here's a draft notebook/blog where I have some experiments: [https://colab.research.google.com/drive/1DngmKINhV95iKVVGF7YC5_K7oiujMT6q?usp=sharing](https://colab.research.google.com/drive/1DngmKINhV95iKVVGF7YC5_K7oiujMT6q?usp=sharing)
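For reference, a minimal tool-binding sketch along these lines (not from the notebook above; it assumes the langchain-ollama integration with bind_tools support and a tool-capable model pulled in Ollama, and the weather tool is just a placeholder):

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def get_weather(city: str) -> str:
    """Return a fake weather report for a city."""
    return f"It is sunny in {city}."

# llama3-groq-tool-use:8b is the model the post found to behave reasonably
llm = ChatOllama(model="llama3-groq-tool-use:8b", temperature=0)
llm_with_tools = llm.bind_tools([get_weather])

# The model should only emit a tool call when the prompt warrants it.
msg = llm_with_tools.invoke("What's the weather in Paris?")
print(msg.tool_calls)   # expected: one call to get_weather

msg = llm_with_tools.invoke("Tell me a joke.")
print(msg.tool_calls)   # ideally empty; this is where weaker models misfire
```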
2024-12-11T16:23:21
https://www.reddit.com/r/LocalLLaMA/comments/1hbx96u/local_tool_calling_agents_using_langchain_and/
edmcman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbx96u
false
null
t3_1hbx96u
/r/LocalLLaMA/comments/1hbx96u/local_tool_calling_agents_using_langchain_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
Setting up an LLM and Fine-Tuning It on an Offline Server
0
Hi Everyone,

I'm trying to set up a large language model (LLM) like LLaMA on an offline server and fine-tune it for my specific use case. I have access to an Nvidia DGX system with 4 V100 GPUs given to me recently by my college. Before that I was running workloads on Google Colab. I don't have experience managing LLMs offline, and the server doesn't have internet access, so I need to manually transfer all dependencies and files. If I need any new modules, I'd have to ask the lab technician to install them.

Currently, I've installed some essential Python packages (like PyTorch and a few others) on the server. I'm wondering:

1. **Do I need Hugging Face's Transformers library to work with LLaMA, or is it possible to run the model using just PyTorch and Meta's native implementation?**
2. **What other packages or tools are essential for loading and fine-tuning LLaMA on such a system?** (I want to avoid unnecessary installations since file transfers are a hassle, as is storing and managing the files.)
3. Has anyone else worked in a similar offline environment? Any tips for manually managing dependencies and ensuring all the required files are in place?
4. I plan to implement RAG on it; does anyone have any tips or suggestions for that?

Thanks in advance for any advice or resources!
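For illustration only: one common pattern for the dependency side is to run `pip download <packages> -d wheels/` on an internet-connected machine, copy the folder over, and install with `pip install --no-index --find-links wheels/ <packages>`. A rough sketch of loading manually transferred weights with Transformers (the local path below is hypothetical, and this assumes the Transformers route from question 1) could look like:

```python
# Loading LLaMA weights that were copied onto the offline box.
# Assumes the model folder (config.json, tokenizer files, *.safetensors)
# was downloaded on an internet-connected machine and transferred manually.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/data/models/llama-3-8b-instruct"   # hypothetical local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_DIR,
    torch_dtype=torch.float16,
    device_map="auto",          # spread across the V100s (needs accelerate)
    local_files_only=True,      # never try to reach the Hub
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```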
2024-12-11T16:25:04
https://www.reddit.com/r/LocalLLaMA/comments/1hbxamy/setting_up_an_llm_and_finetuning_it_on_an_offline/
WhyHimanshuGarg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbxamy
false
null
t3_1hbxamy
/r/LocalLLaMA/comments/1hbxamy/setting_up_an_llm_and_finetuning_it_on_an_offline/
false
false
self
0
null
Prompt / model to help an aspiring writer
2
I recently set up a server with a 3090 for home usage. We have Open WebUI and Ollama with a number of models to choose from.

I have an almost-13-year-old aspiring writer. She has been using the free ChatGPT for feedback, but with mixed results. I would love to help set up something for her that will provide feedback on her writing. Not to rewrite it, but to look at a chapter and give suggestions (not fixes). Things like "consider giving more exposition for this character" are good; rewriting is bad.

In my search for creative writing, I see models designed to generate text, not feedback. What models (preferably Ollama, but I am planning to set up llama.cpp as well at some point) would do best? Bear in mind that I will very likely want to increase the context size so that we can do a chapter at a time.

I have some ideas for prompts and models that appear to work, but am wondering if there are good prompts that others have used for a similar purpose. I looked over the models in Open WebUI, and the ones that came with Fabric, and did not really see anything close to what I was looking for.
2024-12-11T16:36:04
https://www.reddit.com/r/LocalLLaMA/comments/1hbxk1m/prompt_model_to_help_an_aspiring_writer/
greg_d128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbxk1m
false
null
t3_1hbxk1m
/r/LocalLLaMA/comments/1hbxk1m/prompt_model_to_help_an_aspiring_writer/
false
false
self
2
null
Proposal: Global collective GPU sharing for the latest LLMs (Llama 3.3, Qwen, etc) through Ollama
1
[removed]
2024-12-11T17:53:46
https://www.reddit.com/r/LocalLLaMA/comments/1hbzg1w/proposal_global_collective_gpu_sharing_for_the/
EveningRecord4535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbzg1w
false
null
t3_1hbzg1w
/r/LocalLLaMA/comments/1hbzg1w/proposal_global_collective_gpu_sharing_for_the/
false
false
self
1
null
Co-Agents v0.2 (MIT): open-source framework for building full-stack agent applications with LangGraph
1
[removed]
2024-12-11T17:55:37
https://www.reddit.com/r/LocalLLaMA/comments/1hbzhod/coagents_v02_mit_opensource_framework_for/
llathreddzg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbzhod
false
null
t3_1hbzhod
/r/LocalLLaMA/comments/1hbzhod/coagents_v02_mit_opensource_framework_for/
false
false
https://b.thumbs.redditm…x7JkFnDCqJcM.jpg
1
{'enabled': False, 'images': [{'id': '0-nHY5OLhcaZiSMGj89C6cOP6ifMbofkGaZaFyS4Rag', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XGOc5Mktqo7mxJcWINm6PeQqX4ysFc5vZKrWanJ5Kcs.jpg?width=108&crop=smart&auto=webp&s=9d01cf8d3824ec36615b55bf8a8a652d81afb13e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XGOc5Mktqo7mxJcWINm6PeQqX4ysFc5vZKrWanJ5Kcs.jpg?width=216&crop=smart&auto=webp&s=90b0a8abb1a2e6ed0a41be057230e393f1e84d11', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XGOc5Mktqo7mxJcWINm6PeQqX4ysFc5vZKrWanJ5Kcs.jpg?width=320&crop=smart&auto=webp&s=195530eb182ac256dbae5075bedf5570996517b6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XGOc5Mktqo7mxJcWINm6PeQqX4ysFc5vZKrWanJ5Kcs.jpg?width=640&crop=smart&auto=webp&s=c230475e271b72dad16d3a4a29435ebb596fe5f6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XGOc5Mktqo7mxJcWINm6PeQqX4ysFc5vZKrWanJ5Kcs.jpg?width=960&crop=smart&auto=webp&s=e5430a4979cd3d8e2193bba69db768835efa42c5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XGOc5Mktqo7mxJcWINm6PeQqX4ysFc5vZKrWanJ5Kcs.jpg?width=1080&crop=smart&auto=webp&s=79caff73908b296df8a8869a9b92078f3fd4d287', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/XGOc5Mktqo7mxJcWINm6PeQqX4ysFc5vZKrWanJ5Kcs.jpg?auto=webp&s=a7a433fad9592cb9699f129d38ee2055c6640525', 'width': 1280}, 'variants': {}}]}
Co-Agents v0.2 (MIT): open-source framework for building full-stack agent applications with LangGraph
1
[removed]
2024-12-11T18:03:28
https://www.reddit.com/r/LocalLLaMA/comments/1hbzoox/coagents_v02_mit_opensource_framework_for/
nate4t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbzoox
false
null
t3_1hbzoox
/r/LocalLLaMA/comments/1hbzoox/coagents_v02_mit_opensource_framework_for/
false
false
https://b.thumbs.redditm…1uovzO4UxWPQ.jpg
1
null
Co-Agents v0.2 (MIT): open-source framework for building full-stack agent applications with LangGraph
1
[removed]
2024-12-11T18:07:21
https://www.reddit.com/r/LocalLLaMA/comments/1hbzs0b/coagents_v02_mit_opensource_framework_for/
nate4t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hbzs0b
false
null
t3_1hbzs0b
/r/LocalLLaMA/comments/1hbzs0b/coagents_v02_mit_opensource_framework_for/
false
false
https://b.thumbs.redditm…dCeouAR_EoeU.jpg
1
null
F16 versus BF16 performance on llama.cpp
1
I've been going nuts trying to figure out a local performance regression with Llama 3.1 8B Instruct on llama.cpp (technically behind textgen-webui but I've tested directly). For some reason out of nowhere my performance using a 16-bit GGUF has dropped from like 27 t/s to 1.5 t/s! And the whole thing fits in VRAM so I've been trying to figure out what was going on and even booted off of a Fedora LiveCD for a while instead of Debian. Anyways long story short turns out I was pointing at *two* different GGUFs of this model and checking the llama.cpp output the only difference is that the slow model is quantized using BF16 and the fast model is with F16. Has anyone else run into this or would know why there's such a huge performance difference there? The GPU is an Ampere A6000 and I can pastebin the llama logs if necessary.
2024-12-11T18:39:21
https://www.reddit.com/r/LocalLLaMA/comments/1hc0k2x/f16_versus_bf16_performance_on_llamacpp/
HollowInfinity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc0k2x
false
null
t3_1hc0k2x
/r/LocalLLaMA/comments/1hc0k2x/f16_versus_bf16_performance_on_llamacpp/
false
false
self
1
null
Looking for a portable LLM rig under $100k
0
I'm interested in running large language models locally and need something I can easily transport on flights. My budget is $100k max, but I'm happy to spend less if it meets the requirements. Ideally looking for something that could fit in a carry-on or checked bag - I travel frequently between research sites. Key requirements: * Must be air-travel friendly (size/weight/battery considerations) * Capable of running mid to large size LLMs * Budget ceiling of $100k * Bonus if it can handle multiple model instances Would appreciate recommendations on specific builds/pre-built systems, especially from anyone who's actually traveled with similar setups. Also interested in hearing about any TSA/airline experiences with transporting this kind of hardware.
2024-12-11T18:46:09
https://www.reddit.com/r/LocalLLaMA/comments/1hc0pwg/looking_for_a_portable_llm_rig_under_100k/
inahst
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc0pwg
false
null
t3_1hc0pwg
/r/LocalLLaMA/comments/1hc0pwg/looking_for_a_portable_llm_rig_under_100k/
false
false
self
0
null
ROCM update gone wrong - Ollama gone crazy
0
I am seeking help before I reformat my SSD and reinstall Linux. I tried upgrading my ROCM package from 6.2.2 to 6.3 and afterwards, ollama started giving crazy responses. I tried rolling back and the problem persisted. Here's an example. To a Hello prompt, I get the following text (model is llama3.3): \-:$0+$E)8,D086G!DF9DA+$>6\*.)8\*0:A)63)&@2&E5\*"B85-GD1D+->F>$$8..1#5:.B<8H(3>-:$0+$E)8,D086G!DF9DA+$>6\*.)8\*0:A)63)&@2&E5\*"B85-GD1D+->F>$$8..1#5:.B<8H(3C98E5("8%C$!!%<G%#>.F;C$CF,@5!=(448H>4&1$H3<A8E%&!-%,5B$;1EE<0!<+B3B"=<>E-!D98E5("8%C$!!%<G%#>.F;C$CF,@5!=(448H>4&1$H3<A8E%&!-%,5B$;1EE<0!<+B3B"=<>E-!DAH0;D;0;5>H43#F6-#\*-6#).BA82@;+;GD:51+>C0FFA4FG$3"!5-84)8!&8@:\*CF<$+\^C Has anyone had this problem
2024-12-11T18:57:48
https://www.reddit.com/r/LocalLLaMA/comments/1hc1021/rocm_update_gone_wrong_ollama_gone_crazy/
neuromacmd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc1021
false
null
t3_1hc1021
/r/LocalLLaMA/comments/1hc1021/rocm_update_gone_wrong_ollama_gone_crazy/
false
false
self
0
null
A new approach to Prompt Routing
0
Hi, I came up with a new architecture for prompt routing and wanted to share it with you all. I say it's new because I haven't seen it anywhere else. I think this is something everyone could benefit from, plus I also want to hear suggestions.

My approach is: prompt -> benchmark selection -> filter LLMs that are above X% of the requirements.

You have a question with all the context from the conversation, and you ask a good reasoning LLM to select the most important benchmarks that question needs in order to be answered (possibly it can also give you the minimum required % for each benchmark). Then, using those, you compare them against your collection of LLMs with their own benchmark scores. After that, based on the % of each benchmark and how much money you want to spend, you filter the LLMs from the collection. This gives you the most relevant LLM for the task at the best price. If you want to save, you choose the cheapest LLM that meets the minimum requirements to generate the answer, although I would set it up to always be a little over the minimum to handle edge cases.

There are 100+ benchmarks, all ranging between 0 and 100. I would use all of them if they are available. A question may need 5-6 different benchmarks; it doesn't matter. The more the better, I would say, because it will help narrow down the model faster.

The benchmark selection could be done using a JSON file, with a description for each benchmark. That way, you can add more benchmarks that are not part of the LLM's inherent knowledge.
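A minimal sketch of the filtering step (the model list, scores, prices, and the stubbed benchmark-selection call below are all made up for illustration):

```python
# Toy prompt router: pick the cheapest model that clears every required
# benchmark by a small safety margin, falling back to the strongest model.
MODELS = [
    {"name": "small-7b",  "price": 0.2, "scores": {"MMLU": 68, "HumanEval": 55, "GSM8K": 60}},
    {"name": "mid-32b",   "price": 1.0, "scores": {"MMLU": 78, "HumanEval": 75, "GSM8K": 82}},
    {"name": "large-70b", "price": 3.0, "scores": {"MMLU": 85, "HumanEval": 82, "GSM8K": 90}},
]

def select_benchmarks(prompt: str) -> dict:
    """Stand-in for the reasoning-LLM call that maps a prompt to
    required benchmarks and minimum scores."""
    if "code" in prompt.lower():
        return {"HumanEval": 70}
    return {"MMLU": 60}

def route(prompt: str, margin: float = 5.0) -> str:
    reqs = select_benchmarks(prompt)
    # keep models that beat every requirement by the safety margin
    eligible = [
        m for m in MODELS
        if all(m["scores"].get(b, 0) >= s + margin for b, s in reqs.items())
    ]
    if not eligible:
        return MODELS[-1]["name"]          # fall back to the strongest model
    return min(eligible, key=lambda m: m["price"])["name"]

print(route("Write code for a binary search"))   # -> mid-32b
print(route("What is the capital of France?"))   # -> small-7b
```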
2024-12-11T19:06:21
https://www.reddit.com/r/LocalLLaMA/comments/1hc17vz/a_new_approach_to_prompt_routing/
Temporary-Koala-7370
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc17vz
false
null
t3_1hc17vz
/r/LocalLLaMA/comments/1hc17vz/a_new_approach_to_prompt_routing/
false
false
self
0
null
xLSTM 7B dropped
48
https://huggingface.co/NX-AI/xLSTM-7b Benchmarks are on their HF.
2024-12-11T19:11:33
https://www.reddit.com/r/LocalLLaMA/comments/1hc1c50/xlstm_7b_dropped/
Background_Thanks604
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc1c50
false
null
t3_1hc1c50
/r/LocalLLaMA/comments/1hc1c50/xlstm_7b_dropped/
false
false
self
48
{'enabled': False, 'images': [{'id': '6kEptRZhgCCyF-7QEW8WXz9AVoVlJxmyY4ITVub08ds', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cSDd01u10ZZLa-MSD34CGQcGlRqS2AIaDoLED8ahpiA.jpg?width=108&crop=smart&auto=webp&s=cefd624f692c611f661f0b9db02c1a54b8058ba0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cSDd01u10ZZLa-MSD34CGQcGlRqS2AIaDoLED8ahpiA.jpg?width=216&crop=smart&auto=webp&s=6e8357c5319806f9016ef75f826fdf07626badbd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cSDd01u10ZZLa-MSD34CGQcGlRqS2AIaDoLED8ahpiA.jpg?width=320&crop=smart&auto=webp&s=3b14d74dacdaae7a8d57f61043df514b76f0a6f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cSDd01u10ZZLa-MSD34CGQcGlRqS2AIaDoLED8ahpiA.jpg?width=640&crop=smart&auto=webp&s=e5caccd6d18751aa43b906327d0701a402662f17', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cSDd01u10ZZLa-MSD34CGQcGlRqS2AIaDoLED8ahpiA.jpg?width=960&crop=smart&auto=webp&s=191a12c0866a716bc804acd5340b89438bef58b9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cSDd01u10ZZLa-MSD34CGQcGlRqS2AIaDoLED8ahpiA.jpg?width=1080&crop=smart&auto=webp&s=1d104e66a387b55c091176ce889d65f35fc59664', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cSDd01u10ZZLa-MSD34CGQcGlRqS2AIaDoLED8ahpiA.jpg?auto=webp&s=fa1e5febccfefa41e287ff3cb57a2a155d957f3e', 'width': 1200}, 'variants': {}}]}
Design Challenge: What’s the cheapest / most elegant design for a portable LLM-in-a-box that will fit in a bug out bag?
1
[removed]
2024-12-11T19:22:04
https://www.reddit.com/r/LocalLLaMA/comments/1hc1l6w/design_challenge_whats_the_cheapest_most_elegant/
Porespellar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc1l6w
false
null
t3_1hc1l6w
/r/LocalLLaMA/comments/1hc1l6w/design_challenge_whats_the_cheapest_most_elegant/
false
false
self
1
null
Is Whisper.cpp still the king of STT?
58
Title, pretty much. Have there been any other really good STT models released since Whisper?
2024-12-11T19:28:48
https://www.reddit.com/r/LocalLLaMA/comments/1hc1qzi/is_whispercpp_still_the_king_of_stt/
Educational_Gap5867
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc1qzi
false
null
t3_1hc1qzi
/r/LocalLLaMA/comments/1hc1qzi/is_whispercpp_still_the_king_of_stt/
false
false
self
58
null
What's the point of Open WebUI vs Pinokio or Jan.ai, the latter of which are substantially easier to install?
0
Noob here. I spent forever getting Open WebUI to work and run, downloading a ton of programs: Docker, Python, etc. Just to find out that things like Jan.ai do it all, and potentially fully offline? Why is Open WebUI so much more popular? At this point I'm confused why everyone prefers it despite the other UIs being substantially easier to set up.
2024-12-11T19:40:13
https://www.reddit.com/r/LocalLLaMA/comments/1hc20u7/whats_the_point_of_openui_web_vs_piniochio_or/
PuttPutt7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc20u7
false
null
t3_1hc20u7
/r/LocalLLaMA/comments/1hc20u7/whats_the_point_of_openui_web_vs_piniochio_or/
false
false
self
0
null
Gemini 2.0 Flash beating Claude Sonnet 3.5 on SWE-Bench was not on my bingo card
674
2024-12-11T19:47:43
https://i.redd.it/xn57o94tw96e1.png
jd_3d
i.redd.it
1970-01-01T00:00:00
0
{}
1hc276t
false
null
t3_1hc276t
/r/LocalLLaMA/comments/1hc276t/gemini_20_flash_beating_claude_sonnet_35_on/
false
false
https://b.thumbs.redditm…X8YPdlNDLFKs.jpg
674
{'enabled': True, 'images': [{'id': 'NxCiOSrkc33N19QNX3-RJb4gm-2i6_TeB1bqGzmqCYk', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/xn57o94tw96e1.png?width=108&crop=smart&auto=webp&s=32f94ad558425ce3003d34af0ca9c1a947a88910', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/xn57o94tw96e1.png?width=216&crop=smart&auto=webp&s=c9f807d86166b559fda7a5cea0a3af453ee9a795', 'width': 216}, {'height': 158, 'url': 'https://preview.redd.it/xn57o94tw96e1.png?width=320&crop=smart&auto=webp&s=e6c323cf0eacf603c6feea52289ffad36213baf2', 'width': 320}, {'height': 317, 'url': 'https://preview.redd.it/xn57o94tw96e1.png?width=640&crop=smart&auto=webp&s=54e2d4b8f25b7ed4ce6377e340c5fc3f4cdb6623', 'width': 640}, {'height': 476, 'url': 'https://preview.redd.it/xn57o94tw96e1.png?width=960&crop=smart&auto=webp&s=fba97646dde9e0a8159a75d473ab8cf7572d854a', 'width': 960}, {'height': 535, 'url': 'https://preview.redd.it/xn57o94tw96e1.png?width=1080&crop=smart&auto=webp&s=541545706832f99521d06edf752215e1d08e6439', 'width': 1080}], 'source': {'height': 590, 'url': 'https://preview.redd.it/xn57o94tw96e1.png?auto=webp&s=0f41ae0f09e8089896f2b89f05f3cb35d35ae255', 'width': 1189}, 'variants': {}}]}
Constraining LLM Token Generation via Language Server Integration?
1
OpenAI and Claude support structured outputs where a schema can be provided. I am wondering if this can be taken a step further. In the context of generating code, a language server, if asked in real time, can say what type of token is expected next, as well as what values that token can take on. This is effectively what VS Code and other IDEs use under the hood.

Since the LLM knows which programming language it is generating (say Python), could we take the probability distribution of tokens after each incremental generation step and cross out the tokens that are not on the list of valid autocompletes that the Python language server API provides? This language server is aware of what Python version we are using, as well as the dependencies in our project.

I believe this would make LLMs automatically adjust to major library version changes, since the changed autocomplete suggestions will reflect the breaking change. Additionally, it would completely prevent the LLM outputting code that fails to compile or uses a nonexistent method or field (which seems to be a large part of existing hallucinations).

Is this currently being done in practice anywhere? If not, what are some reasons why it may be an impractical idea?
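As a toy illustration of the mechanism (the "language server" below is just a hard-coded dict standing in for a real LSP textDocument/completion query, and the tiny vocabulary is made up):

```python
# Mask next-token logits with an allow-list of valid completions.
import numpy as np

VOCAB = ["foo", "bar", "baz", ".", "(", ")", "qux"]

def fake_language_server(prefix: str) -> set:
    """Pretend LSP response: which identifiers are valid after 'obj.'"""
    return {"foo", "bar"} if prefix.endswith(".") else set(VOCAB)

def constrained_sample(logits: np.ndarray, prefix: str) -> str:
    allowed = fake_language_server(prefix)
    # cross out everything the language server says is invalid here
    masked = np.where([tok in allowed for tok in VOCAB], logits, -np.inf)
    probs = np.exp(masked - masked.max())
    probs /= probs.sum()
    return VOCAB[int(np.argmax(probs))]   # greedy pick from the allowed set

logits = np.random.randn(len(VOCAB))
logits[VOCAB.index("qux")] = 10.0          # the model badly wants a bad token
print(constrained_sample(logits, "obj."))  # still forced onto foo or bar
```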
2024-12-11T19:51:52
https://www.reddit.com/r/LocalLLaMA/comments/1hc2ax8/constraining_llm_token_generation_via_language/
TheeNinjaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc2ax8
false
null
t3_1hc2ax8
/r/LocalLLaMA/comments/1hc2ax8/constraining_llm_token_generation_via_language/
false
false
self
1
null
Prompts to test emergence
0
I'm looking for suggestions, advice, or prompt tutorials that focus on evaluating an LLM's ability to give emergent responses rather than approval-focused responses. By that, I mean LLMs have a tendency to provide mostly "yes, and" type responses, where I'd really like to evaluate how good a model is at emergently providing a balance that includes "no, and" / "no, but" type responses.
2024-12-11T19:53:57
https://www.reddit.com/r/LocalLLaMA/comments/1hc2cqe/prompts_to_test_emergence/
binx85
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc2cqe
false
null
t3_1hc2cqe
/r/LocalLLaMA/comments/1hc2cqe/prompts_to_test_emergence/
false
false
self
0
null
Looking for a coding assist solution
1
[removed]
2024-12-11T20:17:11
https://www.reddit.com/r/LocalLLaMA/comments/1hc2wzr/looking_for_a_coding_assist_solution/
FiatIdea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc2wzr
false
null
t3_1hc2wzr
/r/LocalLLaMA/comments/1hc2wzr/looking_for_a_coding_assist_solution/
false
false
self
1
null
Looking for AI coding assistant with codebase support
1
[removed]
2024-12-11T20:18:52
https://www.reddit.com/r/LocalLLaMA/comments/1hc2yhq/looking_for_ai_coding_assistant_with_codebase/
TheFuelCell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc2yhq
false
null
t3_1hc2yhq
/r/LocalLLaMA/comments/1hc2yhq/looking_for_ai_coding_assistant_with_codebase/
false
false
self
1
null
Need desperate help to run LLM on iOS locally
0
Hi everyone! For the past three weeks or so, my friend and I have been painstakingly trying to implement an LLM into our iOS app to get it to run locally for helping with travel info and so on. Unfortunately, we have not been successful a single time, despite trying many different packages like SpeziLLM, MLC LLM, LLM.Swift and so on, with either llama.cpp not installing correctly in Xcode itself (which makes it impossible for us to resume coding) or tons of other issues that we were not able to resolve.

At this point we have lost hope and desperately need someone who could help us with the integration, or at least the logic, to get it running with a simple chat UI that we can then implement into our main app. I hope someone can help since this is a student project and we would love to get it working ASAP. Thanks in advance!
2024-12-11T20:35:29
https://www.reddit.com/r/LocalLLaMA/comments/1hc3cq7/need_desperate_help_to_run_llm_on_ios_locally/
Gohro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc3cq7
false
null
t3_1hc3cq7
/r/LocalLLaMA/comments/1hc3cq7/need_desperate_help_to_run_llm_on_ios_locally/
false
false
self
0
null
Looking for small llm for translation
6
What are some low-requirement LLMs that can accurately translate from, say, German, Chinese, and other mainstream languages to English? Maintaining context is important as well; without the need for context, one could just use NMT.
2024-12-11T20:59:20
https://www.reddit.com/r/LocalLLaMA/comments/1hc3x1t/looking_for_small_llm_for_translation/
Just_Difficulty9836
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc3x1t
false
null
t3_1hc3x1t
/r/LocalLLaMA/comments/1hc3x1t/looking_for_small_llm_for_translation/
false
false
self
6
null
Why prompt caching does not work with LMStudio and how to fix it
7
I've been using Llama.cpp and Ollama interchangeably, and both appear to support prompt caching. I can tell because in longer threads, I don't have to wait long for the new prompt to be evaluated. However, when I use LMStudio with MLX equivalents of my models, it looks like the prompt caching isn't working. As the thread gets longer, evaluating prompts takes more and more time. Is there anything I can do to prevent this from happening?
2024-12-11T21:09:38
https://www.reddit.com/r/LocalLLaMA/comments/1hc45z4/why_prompt_caching_does_not_work_with_lmstudio/
Sky_Linx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc45z4
false
null
t3_1hc45z4
/r/LocalLLaMA/comments/1hc45z4/why_prompt_caching_does_not_work_with_lmstudio/
false
false
self
7
null
Gemini Flash 2.0 Web Search
0
https://preview.redd.it/…ly. Thoughts?
2024-12-11T21:18:19
https://www.reddit.com/r/LocalLLaMA/comments/1hc4d7x/gemini_flash_20_web_search/
thrownaway10231
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc4d7x
false
null
t3_1hc4d7x
/r/LocalLLaMA/comments/1hc4d7x/gemini_flash_20_web_search/
false
false
https://b.thumbs.redditm…boC5STvEg9hI.jpg
0
null
How is your experience with the Tiny Box from Tiny Corp?
7
I've been on the fence about buying one. Any owners here? How is your experience? What cool stuff are you doing with it? Thanks!

For reference: https://tinygrad.org/#tinybox
2024-12-11T21:47:09
https://www.reddit.com/r/LocalLLaMA/comments/1hc5180/how_is_your_experience_with_the_tiny_box_from/
diegomarcossegura
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc5180
false
null
t3_1hc5180
/r/LocalLLaMA/comments/1hc5180/how_is_your_experience_with_the_tiny_box_from/
false
false
self
7
null
History is repeating itself. Google is set to undercut both open-source and paid AI with its superior free AI. In the end, it's all about who can burn the most cash.
1
2024-12-11T21:47:21
https://www.reddit.com/r/Bard/comments/1hc4q47/perplexity_ai_pro_1year_coupon_only_25_23/
balianone
reddit.com
1970-01-01T00:00:00
0
{}
1hc51de
false
null
t3_1hc51de
/r/LocalLLaMA/comments/1hc51de/history_is_repeating_itself_google_is_set_to/
false
false
default
1
{'enabled': False, 'images': [{'id': 'dfbcEt_GYqW7AdIr3qt7Tq3CAFeENzY-wsJ3KxQHwtY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pjBDooJmCDW7WtWGKgCxGKK99MnCqIevhK5QinYul0U.jpg?width=108&crop=smart&auto=webp&s=4aac90dadf9afa333cf9708fb2b5b0647622602b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/pjBDooJmCDW7WtWGKgCxGKK99MnCqIevhK5QinYul0U.jpg?width=216&crop=smart&auto=webp&s=395a3ab0b1d26dd8c9c1e639056d4b3ec948077b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/pjBDooJmCDW7WtWGKgCxGKK99MnCqIevhK5QinYul0U.jpg?width=320&crop=smart&auto=webp&s=9b4afd89122f1ad1464a1359f71cf66b18f1d2b1', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/pjBDooJmCDW7WtWGKgCxGKK99MnCqIevhK5QinYul0U.jpg?auto=webp&s=908fe416bee8da7ddef9a2c5438c7eb9658a4ec3', 'width': 512}, 'variants': {}}]}
Arc Battlemage: Intel might deliver on its performance promise. Cheap(er) local inference 2025?
1
[removed]
2024-12-11T21:59:04
https://www.reddit.com/r/LocalLLaMA/comments/1hc5awh/arc_battlemage_intel_might_deliver_on_its/
elemental-mind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc5awh
false
null
t3_1hc5awh
/r/LocalLLaMA/comments/1hc5awh/arc_battlemage_intel_might_deliver_on_its/
false
false
https://a.thumbs.redditm…VVJoJ0i-PC54.jpg
1
null
In Google AI Studio, you now get for free: SOTA Model, 2 million context window, Unlimited advanced voice mode with vision, Video, audio, and image input, Google search integration, Python code execution
203
What do you think?
2024-12-11T22:08:18
https://www.reddit.com/r/LocalLLaMA/comments/1hc5ivh/in_google_ai_studio_you_now_get_for_free_sota/
balianone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc5ivh
false
null
t3_1hc5ivh
/r/LocalLLaMA/comments/1hc5ivh/in_google_ai_studio_you_now_get_for_free_sota/
false
false
self
203
null
QwQ really likes using a certain heading...
0
I'm sure this sounds perfectly fine to most people in the world, but [seeing this](https://imgur.com/IYumGC7.jpeg) pop up all the time is just a little strange.
2024-12-11T22:09:42
https://www.reddit.com/r/LocalLLaMA/comments/1hc5k1k/qwq_really_likes_using_a_certain_heading/
Eisenstein
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc5k1k
false
null
t3_1hc5k1k
/r/LocalLLaMA/comments/1hc5k1k/qwq_really_likes_using_a_certain_heading/
false
false
self
0
null
WebDev Arena: AI Battle to build the best website
19
2024-12-11T23:44:58
https://web.lmarena.ai/
Someone13574
web.lmarena.ai
1970-01-01T00:00:00
0
{}
1hc7nrj
false
null
t3_1hc7nrj
/r/LocalLLaMA/comments/1hc7nrj/webdev_arena_ai_battle_to_build_the_best_website/
false
false
default
19
null
I made a chatbot-arena focused on coding with only OSS models, with a live leaderboard
52
I made a free and open source site where two LLMs build the same app, you vote on which one did best, and you see a live leaderboard of the best open source coding LLMs. Essentially a code-focused chatbot arena! Since launch 7 hours ago, there have been 350+ votes, and Qwen 2.5 32B Coder leads as the top open source coding LLM so far.

App: [https://www.llmcodearena.com/](https://www.llmcodearena.com/)

Code: [https://github.com/Nutlope/codearena](https://github.com/Nutlope/codearena)

Would love any feedback or thoughts!

[llmcodearena.com live leaderboard](https://preview.redd.it/os9nllkebb6e1.png?width=2830&format=png&auto=webp&s=9cd5beb492c97ceb1640cb0003e54c21d9918767)
2024-12-12T00:31:55
https://www.reddit.com/r/LocalLLaMA/comments/1hc8mum/i_made_a_chatbotarena_focused_on_coding_with_only/
nutty16
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc8mum
false
null
t3_1hc8mum
/r/LocalLLaMA/comments/1hc8mum/i_made_a_chatbotarena_focused_on_coding_with_only/
false
false
https://a.thumbs.redditm…fiapUQIjUw28.jpg
52
{'enabled': False, 'images': [{'id': 'LLBIieRuCICoNZ4pK7twgyWkn0jP21T48iMzsPolrok', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/K5oYcZwdFq-IhsftgX7WjTcrR7zyBNNonHhuq0Zhqd4.jpg?width=108&crop=smart&auto=webp&s=c1a311f0912d734b7b1a8f1b9ba54b297dd1d3b4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/K5oYcZwdFq-IhsftgX7WjTcrR7zyBNNonHhuq0Zhqd4.jpg?width=216&crop=smart&auto=webp&s=c5da9c2bfe6b5c21a28d6ba1f726a247fb3cae69', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/K5oYcZwdFq-IhsftgX7WjTcrR7zyBNNonHhuq0Zhqd4.jpg?width=320&crop=smart&auto=webp&s=c0d29c72e7e9beff083beb03bfaa431c71aac448', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/K5oYcZwdFq-IhsftgX7WjTcrR7zyBNNonHhuq0Zhqd4.jpg?width=640&crop=smart&auto=webp&s=4689bd71c73a7c05601a843fc07e1e9943faa4b6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/K5oYcZwdFq-IhsftgX7WjTcrR7zyBNNonHhuq0Zhqd4.jpg?width=960&crop=smart&auto=webp&s=89d75f0a52fa265e0045121d9d1045134632704b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/K5oYcZwdFq-IhsftgX7WjTcrR7zyBNNonHhuq0Zhqd4.jpg?width=1080&crop=smart&auto=webp&s=db9ce4bc2a1f6de32c00504729320c15be920a55', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/K5oYcZwdFq-IhsftgX7WjTcrR7zyBNNonHhuq0Zhqd4.jpg?auto=webp&s=b4bc0bc1ddcc41ed4dc79c7fdc79a8fd8fad0f17', 'width': 2400}, 'variants': {}}]}
Phi 3.5 mini instruct
63
Surprised this model doesn't get more discussion. The unquantized model fits on most consumer GPUs, coming in at just 7.7GB VRAM. The 3.8B size even leaves room for ample context and makes tuning more tractable without doing quantization backflips. The raw model benchmarks also seem to be similar to GPT 3.5 turbo quality. Sure, that's notably behind now, but it's enough to get the job done just fine with some prompt engineering, and again--tuning is a great option here. If you're building, this model seems like a very solid choice. So I guess my question is... if you're not GPU rich, why aren't you using this?
2024-12-12T00:32:27
https://www.reddit.com/r/LocalLLaMA/comments/1hc8n9d/phi_35_mini_instruct/
Cosack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc8n9d
false
null
t3_1hc8n9d
/r/LocalLLaMA/comments/1hc8n9d/phi_35_mini_instruct/
false
false
self
63
null
Llama 3.3 70B vs 4o mini for coding?
1
[removed]
2024-12-12T00:36:11
https://www.reddit.com/r/LocalLLaMA/comments/1hc8pyc/llama_33_70b_vs_4o_mini_for_coding/
CrimsonShikabane
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc8pyc
false
null
t3_1hc8pyc
/r/LocalLLaMA/comments/1hc8pyc/llama_33_70b_vs_4o_mini_for_coding/
false
false
self
1
null
Setting up a small system for school
1
[removed]
2024-12-12T00:52:00
https://www.reddit.com/r/LocalLLaMA/comments/1hc91b0/setting_up_a_small_system_fore_school/
No-Net1818
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc91b0
false
null
t3_1hc91b0
/r/LocalLLaMA/comments/1hc91b0/setting_up_a_small_system_fore_school/
false
false
self
1
null
Are there any games even if it's like a pixel game with ai npcs you can actually talk to like by texting? ...
10
...
2024-12-12T00:59:01
https://www.reddit.com/r/LocalLLaMA/comments/1hc96da/are_there_any_games_even_if_its_like_a_pixel_game/
Ok_Communication5967
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc96da
false
null
t3_1hc96da
/r/LocalLLaMA/comments/1hc96da/are_there_any_games_even_if_its_like_a_pixel_game/
false
false
self
10
null
What do disk/storage usage profiles look like for running various local models? (personal use size and small biz size) Looking for a techie perspective footprint of what kinds of storage are adequate for reasonable to slower performance with a very low user count.
1
[removed]
2024-12-12T01:02:10
https://www.reddit.com/r/LocalLLaMA/comments/1hc98vt/what_do_diskstorage_usage_profiles_look_like_for/
microcandella
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc98vt
false
null
t3_1hc98vt
/r/LocalLLaMA/comments/1hc98vt/what_do_diskstorage_usage_profiles_look_like_for/
false
false
self
1
null
What's a good GPU for under $1k?
0
I only have room for one GPU and would like to max out my token speed with a GPU under $1k. Any suggestions?
2024-12-12T01:25:06
https://www.reddit.com/r/LocalLLaMA/comments/1hc9ppo/whats_a_good_gpu_for_under_1k/
PotentialCopy56
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc9ppo
false
null
t3_1hc9ppo
/r/LocalLLaMA/comments/1hc9ppo/whats_a_good_gpu_for_under_1k/
false
false
self
0
null
Speed of RAM, discs, PCIe for inference
8
I love running models on my hardware and experimenting with different hardware. I drafted this spreadsheet to record the speed and time of the different hardware I have worked with (except the last item in the spreadsheet, which I do not have ... yet ;-)

https://preview.redd.it/xx02gc0amb6e1.png?width=540&format=png&auto=webp&s=7c3d9e1232649ca6df5efd394ba1b348b6c90832

I am only not sure about the "tokens per second" column. I used the formula "= 0.5 / time" because that's what I observed empirically; it can vary. Of course, different SSD and NVMe drives can be faster or slower. I used the average speed of the components I have in my homelab.
2024-12-12T01:37:49
https://www.reddit.com/r/LocalLLaMA/comments/1hc9ypd/speed_of_ram_discs_pcie_for_inference/
slavik-f
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc9ypd
false
null
t3_1hc9ypd
/r/LocalLLaMA/comments/1hc9ypd/speed_of_ram_discs_pcie_for_inference/
false
false
https://b.thumbs.redditm…uG2Fk7HHG8Sc.jpg
8
null
Question about access to Llama models
0
Hey, I was wondering if other Americans in the US are getting rejected from accessing the Llama models on Hugging Face, and if so, whether they ever figured out why. I used to be able to access the Llama models, but since 3.1 I seem to be rejected continuously by the model access team and I am wondering why. I have reached out to them multiple times but received no reply as to why. I've not violated their policies, so I really have no idea what could have changed. This seems really slimy since they are advertising these models as open source. I am trying to use them for my business, and want to keep everything legal and not violate their policies. Has anyone here been in the same situation, or does anyone have any advice? Thanks!
2024-12-12T01:38:41
https://www.reddit.com/r/LocalLLaMA/comments/1hc9z9m/question_about_access_to_llama_models/
awebb78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hc9z9m
false
null
t3_1hc9z9m
/r/LocalLLaMA/comments/1hc9z9m/question_about_access_to_llama_models/
false
false
self
0
null
Lego NXT / Mindstorms + local LLMs - anyone done it yet? Any guides out there?
2
Hi everyone! From a cursory Google + YouTube search, I can't find that anyone has tried to hook up an LLM, let alone a VLM, to any of the Mindstorms robots yet. Honestly, I think it'd be a pretty neat combo to pair local inference up with a good ol' rough-and-ready NXT, since they're dirt cheap on eBay (considering it's a modular robot that you can control pythonically, that is).

I'm actually quite far into the process of doing this with my EV3 :) I've managed to get my brick running Linux, I've got SSH up and working over WiFi, and so far I've got a rudimentary client/server system that lets me send a beep command to the brick. The next step is to build the dang thing, get the motors working programmatically, and bundle that up as tool calls for an LLM to use.

If there isn't a tutorial for this already out there, is this something that people would be interested in? Mostly thinking about whether I should make one myself if there's sufficient interest!

[My EV3 now running a lightweight Debian distro + Brickman-GUI, currently serving up function calls to my Mac over WiFi! Apologies you can't see the screen; it's an ancient LCD and there's no backlighting lol](https://preview.redd.it/lkrwhz8dpb6e1.png?width=960&format=png&auto=webp&s=f1afdeff207b64bedf0dfe6f09a558a0cb1fd450)
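As a rough, hypothetical sketch of what the tool-call wrapping could look like (the host name, the `beep` command, and the OpenAI-style function schema are assumptions for illustration, not what is actually running on the brick yet):

```python
import json
import subprocess

def send_to_brick(command: str) -> str:
    # run the command on the brick over SSH (host/user are illustrative)
    out = subprocess.run(["ssh", "robot@ev3dev.local", command],
                         capture_output=True, text=True)
    return out.stdout or "ok"

# Generic function-calling schema the LLM can be given
TOOLS = [{
    "type": "function",
    "function": {
        "name": "beep",
        "description": "Make the EV3 brick beep.",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}]

def dispatch(tool_call: dict) -> str:
    # map a tool call emitted by the model onto a brick command
    if tool_call["function"]["name"] == "beep":
        return send_to_brick("beep")
    return json.dumps({"error": "unknown tool"})
```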
2024-12-12T01:52:43
https://www.reddit.com/r/LocalLLaMA/comments/1hca949/lego_nxt_mindstorms_local_llms_anyone_done_it_yet/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hca949
false
null
t3_1hca949
/r/LocalLLaMA/comments/1hca949/lego_nxt_mindstorms_local_llms_anyone_done_it_yet/
false
false
https://b.thumbs.redditm…QcdJoIzI1z-Q.jpg
2
null
Where to find pre-built RTX 3090 machines?
0
Hi,

We run an AI consulting agency and have been working with many clients to develop custom solutions based on LLMs and related tech. I'm looking for a certified seller of pre-built machines that have 1 or 2 RTX 3090s with 24GB of VRAM each. They will be used to deploy open-weights LLMs on-prem for the client. I'm looking for a seller with a warranty and, hopefully, some benchmarks of their builds. Clients are mainly based in the US & UK.
2024-12-12T02:21:09
https://www.reddit.com/r/LocalLLaMA/comments/1hcat1d/where_to_find_prebuilt_rtx_3090_machines/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hcat1d
false
null
t3_1hcat1d
/r/LocalLLaMA/comments/1hcat1d/where_to_find_prebuilt_rtx_3090_machines/
false
false
self
0
null
Full Fine Tuning (FFT) LLM - 2 A100 80GB
1
[removed]
2024-12-12T02:26:21
https://www.reddit.com/r/LocalLLaMA/comments/1hcawkd/full_fine_tuning_fft_llm_2_a100_80gb/
Business_Quantity571
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hcawkd
false
null
t3_1hcawkd
/r/LocalLLaMA/comments/1hcawkd/full_fine_tuning_fft_llm_2_a100_80gb/
false
false
self
1
null
Reminder not to use bigger models than you need
510
I've been processing and pruning datasets for the past few months using AI. My workflow involves deriving linguistic characteristics and terminology from a number of disparate data sources. I've been using Llama 3.1 70B, Nemotron, Qwen 2.5 72B, and more recently Qwen 2.5 Coder at 128k context (thanks Unsloth!). These all work, and my data processing is coming along nicely.

Tonight, I decided to try Supernova Medius, Phi 3 Medium, and Phi 3.5 Mini. They all worked just fine for my use cases. They all do 128k context. And they all run much, much faster than the larger models I've been using. I've checked and double-checked how they compare to the big models. The nature of my work is that I can identify errors very quickly. All perfect. I wish I had known this months ago; I'd be done processing by now.

Just because something is bigger and smarter doesn't mean you always need to use it. I'm now processing data at 3x or 4x the tk/s I was getting yesterday.
2024-12-12T03:29:31
https://www.reddit.com/r/LocalLLaMA/comments/1hcc36f/reminder_not_to_use_bigger_models_than_you_need/
Thrumpwart
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hcc36f
false
null
t3_1hcc36f
/r/LocalLLaMA/comments/1hcc36f/reminder_not_to_use_bigger_models_than_you_need/
false
false
self
510
null
Unsupervised email classification using LLMs efficiently
1
[removed]
2024-12-12T03:31:11
https://www.reddit.com/r/LocalLLaMA/comments/1hcc4aa/unsupervised_email_classification_using_llms/
Serious_General_9133
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hcc4aa
false
null
t3_1hcc4aa
/r/LocalLLaMA/comments/1hcc4aa/unsupervised_email_classification_using_llms/
false
false
self
1
null
I am so glad they created www.reddit.com/answers/
0
It's so easy to find comparisons of AI models and frameworks now. My favorite part is that my due diligence before buying any product is cut way, way down. I wonder what model they are using?
2024-12-12T03:34:02
https://www.reddit.com/r/LocalLLaMA/comments/1hcc63a/i_am_so_glad_they_created_wwwredditcomanswers/
MinimumPC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hcc63a
false
null
t3_1hcc63a
/r/LocalLLaMA/comments/1hcc63a/i_am_so_glad_they_created_wwwredditcomanswers/
false
false
self
0
null