Dataset schema (one record per post; value ranges from the dataset viewer):

title: string, length 1–300
score: int64, 0–8.54k
selftext: string, length 0–40k
created: timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29
url: string, length 0–878
author: string, length 3–20
domain: string, length 0–82
edited: timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18
gilded: int64, 0–2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, length 646–1.8k
name: string, length 10
permalink: string, length 33–82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4–213
ups: int64, 0–8.54k
preview: string, length 301–5.01k
Unsloth accuracy vs Transformers
1
[removed]
2025-01-22T02:11:41
https://www.reddit.com/r/LocalLLaMA/comments/1i7056h/unsloth_accuracy_vs_transformers/
BitAcademic9597
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7056h
false
null
t3_1i7056h
/r/LocalLLaMA/comments/1i7056h/unsloth_accuracy_vs_transformers/
false
false
self
1
null
(any) Routing - unified access and observability for API-based and Ollama-supported LLMs in v0.1.9
1
[removed]
2025-01-22T02:23:32
https://i.redd.it/17xgbf92hgee1.jpeg
AdditionalWeb107
i.redd.it
1970-01-01T00:00:00
0
{}
1i70dvu
false
null
t3_1i70dvu
/r/LocalLLaMA/comments/1i70dvu/any_routing_unified_access_and_observability_for/
false
false
https://b.thumbs.redditm…Nt_ewN9CVz9M.jpg
1
{'enabled': True, 'images': [{'id': 'ih4j-P8Gi7qMNX-zcAmCErFtVS2Tpg9Vqf6cGgWikuw', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/17xgbf92hgee1.jpeg?width=108&crop=smart&auto=webp&s=b666c95b7a553847373b4dfbf3bef845b8dd4dc7', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/17xgbf92hgee1.jpeg?width=216&crop=smart&auto=webp&s=456d960255c3755749106ba94528c5e29f6cc87c', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/17xgbf92hgee1.jpeg?width=320&crop=smart&auto=webp&s=ef1016e3cf218fca2ca183041b9d60ee8daf26f8', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/17xgbf92hgee1.jpeg?width=640&crop=smart&auto=webp&s=186b3b2e18dd48a6d941565426ab7b8bf4ea4992', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/17xgbf92hgee1.jpeg?width=960&crop=smart&auto=webp&s=eefc174e65dc7f95a189819097b4eb8deeb93b1d', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/17xgbf92hgee1.jpeg?width=1080&crop=smart&auto=webp&s=8765cf35f02475ab7f86bb511550af3c0fb8c7c5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/17xgbf92hgee1.jpeg?auto=webp&s=20feda324f28b3d711812939164466433a7172d4', 'width': 1080}, 'variants': {}}]}
Any Router - unified access and observability for API-based and Ollama-supported LLMs
1
[removed]
2025-01-22T02:33:25
https://i.redd.it/m78u81rtigee1.jpeg
AdditionalWeb107
i.redd.it
1970-01-01T00:00:00
0
{}
1i70kyv
false
null
t3_1i70kyv
/r/LocalLLaMA/comments/1i70kyv/any_router_unified_access_and_observability_for/
false
false
https://b.thumbs.redditm…fQUv1Jt3OWXE.jpg
1
{'enabled': True, 'images': [{'id': 'Z_20aEmrbtchcCMi9Pv0iCv32QoDB2ABjum8I7Igceg', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/m78u81rtigee1.jpeg?width=108&crop=smart&auto=webp&s=c6edef64cd745998377587d1e59b046fdff81e3a', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/m78u81rtigee1.jpeg?width=216&crop=smart&auto=webp&s=704f817f83f24483669314bf98e0ba0e5eaf8680', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/m78u81rtigee1.jpeg?width=320&crop=smart&auto=webp&s=a295d826273effffdd91d36a08a56226f9868246', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/m78u81rtigee1.jpeg?width=640&crop=smart&auto=webp&s=713d3652032092ae61a995483a86d06548ffec2b', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/m78u81rtigee1.jpeg?width=960&crop=smart&auto=webp&s=59d93d4694d5ff28051a7562755e61c64d42d001', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/m78u81rtigee1.jpeg?width=1080&crop=smart&auto=webp&s=e973abb789396f1942e20cdad43999c2db945d84', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/m78u81rtigee1.jpeg?auto=webp&s=f023f7ee24fb41230a8ebd40e864cf84858f8a73', 'width': 1080}, 'variants': {}}]}
Are there scenarios where your application needs access to both local LLMs and API-based ones?
3
The title says it all. I'd be curious to get your thoughts on using local LLMs and API-based ones in your application. I am seeing a lot of folks experiment with different models as they emerge, and I'm curious whether the community is mixing and matching models for specific application use cases.
2025-01-22T02:36:20
https://www.reddit.com/r/LocalLLaMA/comments/1i70n2t/are_there_scenarios_where_your_application_need/
AdditionalWeb107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i70n2t
false
null
t3_1i70n2t
/r/LocalLLaMA/comments/1i70n2t/are_there_scenarios_where_your_application_need/
false
false
self
3
null
Unsloth is fast and easy but is it accurate? (for professional tasks)
1
[removed]
2025-01-22T02:37:12
https://www.reddit.com/r/LocalLLaMA/comments/1i70noo/unsloth_is_fast_and_easy_but_is_it_accurate_for/
BitAcademic9597
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i70noo
false
null
t3_1i70noo
/r/LocalLLaMA/comments/1i70noo/unsloth_is_fast_and_easy_but_is_it_accurate_for/
false
false
self
1
null
R1 randomly started talking in chinese
0
https://preview.redd.it/…c2be2d784ec982
2025-01-22T02:43:18
https://www.reddit.com/r/LocalLLaMA/comments/1i70sml/r1_randomly_started_talking_in_chinese/
Reader3123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i70sml
false
null
t3_1i70sml
/r/LocalLLaMA/comments/1i70sml/r1_randomly_started_talking_in_chinese/
false
false
https://b.thumbs.redditm…0WDdXKqSdm3k.jpg
0
null
Unsloth accuracy vs Hugging Face Transformers for professional
1
[removed]
2025-01-22T02:56:08
https://www.reddit.com/r/LocalLLaMA/comments/1i712si/unsloth_accuracy_vs_hugging_face_transformers_for/
BitAcademic9597
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i712si
false
null
t3_1i712si
/r/LocalLLaMA/comments/1i712si/unsloth_accuracy_vs_hugging_face_transformers_for/
false
false
self
1
null
Apparently Deepseek Roleplaying Has More Freedom
2
[removed]
2025-01-22T03:11:27
https://www.reddit.com/gallery/1i71fcl
_HornyXytrek_
reddit.com
1970-01-01T00:00:00
0
{}
1i71fcl
false
null
t3_1i71fcl
/r/LocalLLaMA/comments/1i71fcl/appearancly_deepseek_roleplaying_has_more_freedom/
false
false
nsfw
2
null
Difference between DeepSeek and OpenAI?
20
How come OpenAI o1 + o1-mini have usage limits ($20 Plus plan for 50 messages a week, or $200 Pro plan for unlimited) while DeepSeek's "DeepThink" is completely free to use on their site? Am I missing something? Does DeepSeek's reasoning architecture have way lower compute costs, or is DeepSeek's reasoning not even close to what o1 can do?
2025-01-22T03:16:29
https://www.reddit.com/r/LocalLLaMA/comments/1i71j8q/difference_between_deepseek_and_openai/
devinak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i71j8q
false
null
t3_1i71j8q
/r/LocalLLaMA/comments/1i71j8q/difference_between_deepseek_and_openai/
false
false
self
20
null
Are 32x 32GB RTX 5090s enough for R1 and Llama 70B?
0
So they support 8 KV heads, right? Would that be clusters of 4 then? I'm thinking of buying 32 RTX 5090s for a company agent system.
2025-01-22T03:26:08
https://www.reddit.com/r/LocalLLaMA/comments/1i71qpw/is_32x32gb_rtx5090s_enough_for_r1_and_llama_70b/
BarnardWellesley
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i71qpw
false
null
t3_1i71qpw
/r/LocalLLaMA/comments/1i71qpw/is_32x32gb_rtx5090s_enough_for_r1_and_llama_70b/
false
false
self
0
null
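For the tensor-parallel question in the post above, here is a rough, hedged sizing sketch. All constants (FP16 weights, ignoring KV cache and activation overhead) are back-of-the-envelope assumptions, not a deployment plan; full R1 at 671B parameters is a much larger exercise than Llama 70B.

```python
# Rough sizing sketch: the tensor-parallel (TP) degree usually divides the
# KV-head count (8 for Llama 70B), hence the "clusters" intuition above.
GPUS = 32
VRAM_PER_GPU_GB = 32            # RTX 5090 (assumed)
PARAMS_B = 70                   # Llama 70B
BYTES_PER_PARAM = 2             # FP16/BF16 weights

weights_gb = PARAMS_B * BYTES_PER_PARAM   # ~140 GB of weights in total
for tp in (4, 8):                         # candidate cluster sizes
    per_gpu = weights_gb / tp
    fits = per_gpu < VRAM_PER_GPU_GB
    print(f"TP={tp}: {per_gpu:.1f} GB weights/GPU, "
          f"{GPUS // tp} replicas, fits={fits}")
# TP=4 -> 35 GB/GPU (does not fit 32 GB); TP=8 -> 17.5 GB/GPU, four
# 8-GPU replicas, leaving roughly half of each card for KV cache.
```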
“Any Router” in v0.1.9 - unify access and observability to ollama-supported and API-based LLMs
24
Not a new project; just a feature update on egress functionality to help developers unify access and observability for ollama-supported and API-based LLMs. Coincidentally, our #1 feature request last month. So if you want a simple way to access any LLM and get unified tracing and logs, this update might be useful for you. https://github.com/katanemo/archgw. Arch Gateway is an intelligent proxy server designed for prompts. Guides for egress LLM routing and ollama below: https://github.com/katanemo/archgw?tab=readme-ov-file#use-arch-gateway-as-llm-router https://github.com/katanemo/archgw/tree/main/demos/currency_exchange_ollamau
2025-01-22T03:53:08
https://i.redd.it/ks10onr1xgee1.jpeg
AdditionalWeb107
i.redd.it
1970-01-01T00:00:00
0
{}
1i72bip
false
null
t3_1i72bip
/r/LocalLLaMA/comments/1i72bip/any_router_in_v019_unify_access_and_observability/
false
false
https://b.thumbs.redditm…6qd61y0paaTg.jpg
24
{'enabled': True, 'images': [{'id': '3otV4kSUjq1KIcebneW56_Cvr-w7a7v4lnXutC7MgKo', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/ks10onr1xgee1.jpeg?width=108&crop=smart&auto=webp&s=56e27a9b5b05639962cfc554d9fb11afe95a4c08', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/ks10onr1xgee1.jpeg?width=216&crop=smart&auto=webp&s=90b78b81009bbdd104779a1d383a857d8a5f139a', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/ks10onr1xgee1.jpeg?width=320&crop=smart&auto=webp&s=8e975894f428b961a66ba94cb475ea1aa3724997', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/ks10onr1xgee1.jpeg?width=640&crop=smart&auto=webp&s=61fada01dff806ec53c0bd3dc33cb257f94507e7', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/ks10onr1xgee1.jpeg?width=960&crop=smart&auto=webp&s=3aa68ce31e8eba50621c476e2c8f907b612b6d21', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/ks10onr1xgee1.jpeg?width=1080&crop=smart&auto=webp&s=f4a6a5703a5b19f20b24ff44bf1a39f2f09b93b2', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/ks10onr1xgee1.jpeg?auto=webp&s=50a316721e2802933ed7ec7500cfbf39f4ed034f', 'width': 1080}, 'variants': {}}]}
DeepSeek R1-Zero is hilariously uncensored, and now available on Hyperbolic AI.
1
[removed]
2025-01-22T04:19:07
https://www.reddit.com/r/LocalLLaMA/comments/1i72tft/deepseek_r1zero_is_hilariously_uncensored_and_now/
georgejrjrjr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i72tft
false
null
t3_1i72tft
/r/LocalLLaMA/comments/1i72tft/deepseek_r1zero_is_hilariously_uncensored_and_now/
false
false
self
1
null
M4 Mini Pro for Training LLMs
2
I recently bought an M4 Mini, as a replacement for my old laptop, to run and train LLMs locally. I just wanted to know if my current specs would be enough, and what configurations people would recommend for this. Specs: 24GB unified memory, 512GB SSD, 12-core CPU, 16-core GPU.
2025-01-22T04:23:10
https://www.reddit.com/r/LocalLLaMA/comments/1i72w5b/m4_mini_pro_for_training_llms/
Scapegoat079
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i72w5b
false
null
t3_1i72w5b
/r/LocalLLaMA/comments/1i72w5b/m4_mini_pro_for_training_llms/
false
false
self
2
null
Missing vision capability or did we just uncover the truth?
0
https://preview.redd.it/…182724f2bd7e0c
2025-01-22T04:24:42
https://www.reddit.com/r/LocalLLaMA/comments/1i72x4o/missing_vision_capability_or_did_we_just_uncover/
thedgyalt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i72x4o
false
null
t3_1i72x4o
/r/LocalLLaMA/comments/1i72x4o/missing_vision_capability_or_did_we_just_uncover/
false
false
https://b.thumbs.redditm…OEW12j7prsfk.jpg
0
null
3090 nvlink… is there a point?
3
I've got two 3090s (one in a gaming PC and the other sitting around until I can get it to a family member). I'm wondering: is there a point to pooling the VRAM with NVLink to run better models, versus even a 32GB 5090? Curious on people's thoughts.
2025-01-22T04:37:17
https://www.reddit.com/r/LocalLLaMA/comments/1i7359q/3090_nvlink_is_there_a_point/
the_shek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7359q
false
null
t3_1i7359q
/r/LocalLLaMA/comments/1i7359q/3090_nvlink_is_there_a_point/
false
false
self
3
null
What are the commercial benefits of the open-source DeepSeek reasoning models?
5
With all the better benchmarks we are seeing from DeepSeek, and given that it is open source, my question is: apart from research purposes, where exactly can you use these open-source reasoning models, since there is thinking latency in every response? I would love to know the best use cases of open-source reasoning models.
2025-01-22T04:46:54
https://www.reddit.com/r/LocalLLaMA/comments/1i73b5q/what_are_the_commercial_benefits_from_the/
Maleficent_Repair359
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i73b5q
false
null
t3_1i73b5q
/r/LocalLLaMA/comments/1i73b5q/what_are_the_commercial_benefits_from_the/
false
false
self
5
null
Would it be better to download a highly quantized large model, or a raw small model?
2
Title, basically. Does the large model have enough base quality that the quantization beats the raw smaller version?
2025-01-22T04:50:45
https://www.reddit.com/r/LocalLLaMA/comments/1i73dhd/would_it_be_better_to_download_a_highly_quantized/
Donovanth1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i73dhd
false
null
t3_1i73dhd
/r/LocalLLaMA/comments/1i73dhd/would_it_be_better_to_download_a_highly_quantized/
false
false
self
2
null
AskDS - automatically send test failures and your repo to R1 to get actionable results to fix the failing tests
1
2025-01-22T04:57:54
https://github.com/mohsen1/askds
mehyay76
github.com
1970-01-01T00:00:00
0
{}
1i73huu
false
null
t3_1i73huu
/r/LocalLLaMA/comments/1i73huu/askds_automatically_send_test_failures_and_your/
false
false
https://b.thumbs.redditm…cFYCigX59-4Q.jpg
1
{'enabled': False, 'images': [{'id': 'CTi3I_IW8nH6aJxzbqlgAvtM6y0MNN-Ood3IpxOn9c8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4pRQcaVlgTffkpbnHAL3O7iqms1Xvjb1fXS6BGA6gq4.jpg?width=108&crop=smart&auto=webp&s=56329002a5450d890c093ed90760fe83e9bcaa58', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4pRQcaVlgTffkpbnHAL3O7iqms1Xvjb1fXS6BGA6gq4.jpg?width=216&crop=smart&auto=webp&s=809725a2ff2057907daf852629541d6795a6e249', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4pRQcaVlgTffkpbnHAL3O7iqms1Xvjb1fXS6BGA6gq4.jpg?width=320&crop=smart&auto=webp&s=ab85244ee280dfe7351fb11a7f8dd0f902fda8e3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4pRQcaVlgTffkpbnHAL3O7iqms1Xvjb1fXS6BGA6gq4.jpg?width=640&crop=smart&auto=webp&s=73261ac199f66685228b3308c4ffc106f07bcd68', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4pRQcaVlgTffkpbnHAL3O7iqms1Xvjb1fXS6BGA6gq4.jpg?width=960&crop=smart&auto=webp&s=2b26f89bc3c76ab79090efa6f078d8a38cd025a2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4pRQcaVlgTffkpbnHAL3O7iqms1Xvjb1fXS6BGA6gq4.jpg?width=1080&crop=smart&auto=webp&s=8cfa5a88d89d7b07c15b534a4f8fe60c0aa3a430', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4pRQcaVlgTffkpbnHAL3O7iqms1Xvjb1fXS6BGA6gq4.jpg?auto=webp&s=aef52d5e79b2adc5e189d584e49adaf887722456', 'width': 1200}, 'variants': {}}]}
AskDS - automatically send test failures and your repo to R1 to get actionable results to fix the failing test
10
2025-01-22T05:00:16
https://v.redd.it/l0zurgqy8hee1
mehyay76
v.redd.it
1970-01-01T00:00:00
0
{}
1i73jcv
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/l0zurgqy8hee1/DASHPlaylist.mpd?a=1740114032%2CZjYxOGUwZGUyN2VkY2E0YWVkYzJmYTA5NDI2MjYxNTgzYjdlZGVlZjM0NjQ0YTMwNGFiY2Q0MmRhYmZmMTdkOQ%3D%3D&v=1&f=sd', 'duration': 45, 'fallback_url': 'https://v.redd.it/l0zurgqy8hee1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/l0zurgqy8hee1/HLSPlaylist.m3u8?a=1740114032%2CODMxM2QyZTcxN2YyODBiNGY2N2I1MTAyNzhmY2FkZjYwMmEyNTJjNDUxMDk3ZGIyNDMxMjI2NTkxNzgwZGRlMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/l0zurgqy8hee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1428}}
t3_1i73jcv
/r/LocalLLaMA/comments/1i73jcv/askds_automatically_send_test_failures_and_your/
false
false
https://external-preview…40ee846be94e0a1c
10
{'enabled': False, 'images': [{'id': 'dTB6cXZocXk4aGVlMZ6Ql2t-AEpYbSz_aL0unkgEeC92L3d6cUaq68j-jVoP', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dTB6cXZocXk4aGVlMZ6Ql2t-AEpYbSz_aL0unkgEeC92L3d6cUaq68j-jVoP.png?width=108&crop=smart&format=pjpg&auto=webp&s=d8a63e70c0dbd79c57aee0abceda24fcd7f78a37', 'width': 108}, {'height': 163, 'url': 'https://external-preview.redd.it/dTB6cXZocXk4aGVlMZ6Ql2t-AEpYbSz_aL0unkgEeC92L3d6cUaq68j-jVoP.png?width=216&crop=smart&format=pjpg&auto=webp&s=3f0ab9e1b604fecb1ee76cd405a8679bb168c731', 'width': 216}, {'height': 242, 'url': 'https://external-preview.redd.it/dTB6cXZocXk4aGVlMZ6Ql2t-AEpYbSz_aL0unkgEeC92L3d6cUaq68j-jVoP.png?width=320&crop=smart&format=pjpg&auto=webp&s=727274a459ad810aa1b910df2263c864b319a2cb', 'width': 320}, {'height': 484, 'url': 'https://external-preview.redd.it/dTB6cXZocXk4aGVlMZ6Ql2t-AEpYbSz_aL0unkgEeC92L3d6cUaq68j-jVoP.png?width=640&crop=smart&format=pjpg&auto=webp&s=934efc8235c22ee19345ec6f3f7c023367c9b9d8', 'width': 640}, {'height': 726, 'url': 'https://external-preview.redd.it/dTB6cXZocXk4aGVlMZ6Ql2t-AEpYbSz_aL0unkgEeC92L3d6cUaq68j-jVoP.png?width=960&crop=smart&format=pjpg&auto=webp&s=cacea50e74f09a43131f85ecd1c29a0d8455f923', 'width': 960}, {'height': 817, 'url': 'https://external-preview.redd.it/dTB6cXZocXk4aGVlMZ6Ql2t-AEpYbSz_aL0unkgEeC92L3d6cUaq68j-jVoP.png?width=1080&crop=smart&format=pjpg&auto=webp&s=233270331436a5b9d3c1629c9b1f89157eef8a96', 'width': 1080}], 'source': {'height': 1342, 'url': 'https://external-preview.redd.it/dTB6cXZocXk4aGVlMZ6Ql2t-AEpYbSz_aL0unkgEeC92L3d6cUaq68j-jVoP.png?format=pjpg&auto=webp&s=8fe5ef9ac53acced1d338daa97c0979d98e8def7', 'width': 1774}, 'variants': {}}]}
does anyone know how many billion parameters the DeepSeek R1 model running on their website has?
2
just the title; in V3 I heard 37 billion parameters were active
2025-01-22T05:00:59
https://www.reddit.com/r/LocalLLaMA/comments/1i73jxw/does_anyone_know_how_many_billion_parameter_is/
pro_ut3104
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i73jxw
false
null
t3_1i73jxw
/r/LocalLLaMA/comments/1i73jxw/does_anyone_know_how_many_billion_parameter_is/
false
false
self
2
null
Exploring Local Server for Max 1k Active User Base
1
[removed]
2025-01-22T05:17:36
https://www.reddit.com/r/LocalLLaMA/comments/1i73ua7/exploring_local_server_for_max_1k_active_user_base/
Infamous_Jaguar_2151
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i73ua7
false
null
t3_1i73ua7
/r/LocalLLaMA/comments/1i73ua7/exploring_local_server_for_max_1k_active_user_base/
false
false
self
1
null
YOU CAN EXTRACT REASONING FROM R1 AND PASS IT ONTO ANY MODEL
516
From @skirano on Twitter: By the way, you can extract JUST the reasoning from deepseek-reasoner, which means you can send that thinking process to any model you want before it answers you. Like here, where I turn gpt-3.5-turbo into an absolute genius!
2025-01-22T05:22:22
https://v.redd.it/mbcqadwychee1
Sensitive-Finger-404
v.redd.it
1970-01-01T00:00:00
0
{}
1i73x81
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/mbcqadwychee1/DASHPlaylist.mpd?a=1740115356%2CNjEzOTY5NjUwNjk0OTNmZTEzMGU0MmQxZTE3ODNiZDZmYTU0YWEwNGNlZDBjNjAwMTgzZWVkY2UyNTdiMDcyNA%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/mbcqadwychee1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/mbcqadwychee1/HLSPlaylist.m3u8?a=1740115356%2CZjc0YTUxNjFiMDY5YmIwMGRiYjgwZTBhNDM4NzIyMjU1MGUxYjVkNTAxYzJiMDdkOGRjM2U5MTkwYmUyNjhiYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mbcqadwychee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 912}}
t3_1i73x81
/r/LocalLLaMA/comments/1i73x81/you_can_extract_reasoning_from_r1_and_pass_it/
false
false
https://external-preview…909f09639e9cb460
516
{'enabled': False, 'images': [{'id': 'OG1uaHRydHljaGVlMeGKc_GKsNSHC_YJy3k1hv6gZ336TNH-m_F1sXruvXhI', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/OG1uaHRydHljaGVlMeGKc_GKsNSHC_YJy3k1hv6gZ336TNH-m_F1sXruvXhI.png?width=108&crop=smart&format=pjpg&auto=webp&s=0fa874946f0c0c82f9181ff7c5c869a925b1686e', 'width': 108}, {'height': 170, 'url': 'https://external-preview.redd.it/OG1uaHRydHljaGVlMeGKc_GKsNSHC_YJy3k1hv6gZ336TNH-m_F1sXruvXhI.png?width=216&crop=smart&format=pjpg&auto=webp&s=6656dec21d8f45a956ade8e5ca479ffc054c8fed', 'width': 216}, {'height': 252, 'url': 'https://external-preview.redd.it/OG1uaHRydHljaGVlMeGKc_GKsNSHC_YJy3k1hv6gZ336TNH-m_F1sXruvXhI.png?width=320&crop=smart&format=pjpg&auto=webp&s=98b418c492125b7fb9a88bb5448a54bb2fd838e0', 'width': 320}, {'height': 505, 'url': 'https://external-preview.redd.it/OG1uaHRydHljaGVlMeGKc_GKsNSHC_YJy3k1hv6gZ336TNH-m_F1sXruvXhI.png?width=640&crop=smart&format=pjpg&auto=webp&s=802b58e3ae149850ab9276f0303e3d6984444c50', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OG1uaHRydHljaGVlMeGKc_GKsNSHC_YJy3k1hv6gZ336TNH-m_F1sXruvXhI.png?format=pjpg&auto=webp&s=d11b86b01fbc2a87eb165d3a556cb1c4c2bc2fc4', 'width': 912}, 'variants': {}}]}
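A minimal sketch of the trick described in the post above, using the OpenAI Python client against DeepSeek's OpenAI-compatible endpoint. The `reasoning_content` field is how DeepSeek documents exposing the chain of thought separately from the answer, but treat the field and endpoint names as assumptions to verify against their current docs.

```python
from openai import OpenAI

deepseek = OpenAI(api_key="DEEPSEEK_KEY", base_url="https://api.deepseek.com")
other = OpenAI(api_key="OPENAI_KEY")

question = "A farmer has 17 sheep; all but 9 run away. How many are left?"

# 1) Get R1's reasoning trace (not its final answer).
r1 = deepseek.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": question}],
)
reasoning = r1.choices[0].message.reasoning_content  # chain of thought only

# 2) Hand that trace to a cheaper model and let it answer.
answer = other.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": f"{question}\n\nHere is a reasoning trace to use:\n"
                   f"{reasoning}\n\nGive the final answer.",
    }],
)
print(answer.choices[0].message.content)
```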
Deepseek running in my basement finds the 5 odd numbers with the letter e in them
52
Amazing. I'm a believer. It's impossible to find an odd number in English that doesn't have the letter 'e' in its spelling because all single-digit odd numbers (1, 3, 5, 7, 9) contain the letter 'e' in their names. Consequently, any odd number, regardless of its size, will include one of these digits in its ones place, resulting in the presence of the letter 'e'. Therefore, no such numbers exist. **Answer:** There are no odd numbers in English that don't have the letter 'e' in their spelling. If you want to see the thinking [https://pastebin.com/rbvF7p2f](https://pastebin.com/rbvF7p2f)
2025-01-22T05:24:00
https://www.reddit.com/r/LocalLLaMA/comments/1i73y5t/deepseek_running_in_my_basement_finds_the_5_odd/
segmond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i73y5t
false
null
t3_1i73y5t
/r/LocalLLaMA/comments/1i73y5t/deepseek_running_in_my_basement_finds_the_5_odd/
false
false
self
52
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
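The claim in the post above is easy to machine-check for small n. A quick sketch using the third-party num2words package (an assumption: `pip install num2words`) confirms that no odd number below 10,000 has an e-free English name, since every odd name ends in one/three/five/seven/nine:

```python
from num2words import num2words  # third-party number-to-English converter

# Collect any odd number whose English spelling lacks the letter "e".
no_e = [n for n in range(1, 10_000, 2) if "e" not in num2words(n)]
print(no_e)  # expected: [] -- none escape the letter "e"
```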
Whats the best OSS alternative to cursor?
3
I've been using Continue in VS Code with qwen2.5-coder:7b for code generation and qwen2.5-coder:1.5b for autocomplete, and it hasn't come close to Cursor in terms of functionality. This isn't just about the models being less effective; the features themselves often don't work as expected. For example, clicking "apply" on an edited piece of code doesn't always replace it correctly. There's also no git-like interface showing green and red differences to highlight changes. I'm not sure which model would have enough context to chat effectively with the entire codebase. Additionally, there's no SWE agent (like Composer in Cursor). Are there alternative tools that offer smoother functionality? The priority here isn't necessarily high-quality code output; it's more about seamless usability/functionality.
2025-01-22T05:31:28
https://www.reddit.com/r/LocalLLaMA/comments/1i742ha/whats_the_best_oss_alternative_to_cursor/
CaptTechno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i742ha
false
null
t3_1i742ha
/r/LocalLLaMA/comments/1i742ha/whats_the_best_oss_alternative_to_cursor/
false
false
self
3
null
fine tuning Llama on a statistical data
1
[removed]
2025-01-22T05:34:29
https://www.reddit.com/r/LocalLLaMA/comments/1i7447g/fine_tuning_llama_on_a_statistical_data/
Aymankoos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7447g
false
null
t3_1i7447g
/r/LocalLLaMA/comments/1i7447g/fine_tuning_llama_on_a_statistical_data/
false
false
self
1
null
fine tuning Llama on a statistical data
1
[removed]
2025-01-22T05:35:45
https://www.reddit.com/r/LocalLLaMA/comments/1i744ym/fine_tuning_llama_on_a_statistical_data/
Aymankoos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i744ym
false
null
t3_1i744ym
/r/LocalLLaMA/comments/1i744ym/fine_tuning_llama_on_a_statistical_data/
false
false
self
1
null
Fine tuning llama 3 on a statistical data
1
[removed]
2025-01-22T05:43:47
https://www.reddit.com/r/LocalLLaMA/comments/1i749ie/fine_tuning_llama_3_on_a_statistical_data/
Aymankoos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i749ie
false
null
t3_1i749ie
/r/LocalLLaMA/comments/1i749ie/fine_tuning_llama_3_on_a_statistical_data/
false
false
self
1
null
Why my questions automatically removed
1
[removed]
2025-01-22T05:44:50
https://www.reddit.com/r/LocalLLaMA/comments/1i74a1p/why_my_questions_automatically_removed/
Aymankoos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i74a1p
false
null
t3_1i74a1p
/r/LocalLLaMA/comments/1i74a1p/why_my_questions_automatically_removed/
false
false
self
1
null
BAAI/bge-en-icl not working with OpenWebUI.
1
It tries to load the shards and successfully loads 2 shards, but then it says "Press any key to continue" and when I press a key it just closes:

Loading WEBUI_SECRET_KEY from file, not provided as an environment variable.
Loading WEBUI_SECRET_KEY from H:\AI\OpenWebUI\.webui_secret_key
CUDA is enabled, appending LD_LIBRARY_PATH to include torch/cudnn & cublas libraries.
CUDA seems to be working
H:\AI\OpenWebUI\venv\Lib\site-packages\open_webui
H:\AI\OpenWebUI\venv\Lib\site-packages
H:\AI\OpenWebUI\venv\Lib
Running migrations
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [open_webui.env] 'ENABLE_API_KEY' loaded from the latest database entry
INFO [open_webui.env] 'ENABLE_API_KEY_ENDPOINT_RESTRICTIONS' loaded from the latest database entry
INFO [open_webui.env] 'API_KEY_ALLOWED_ENDPOINTS' loaded from the latest database entry
INFO [open_webui.env] 'JWT_EXPIRES_IN' loaded from the latest database entry
INFO [open_webui.env] 'ENABLE_OLLAMA_API' loaded from the latest database entry
INFO [open_webui.env] 'OLLAMA_BASE_URLS' loaded from the latest database entry
INFO [open_webui.env] 'OLLAMA_API_CONFIGS' loaded from the latest database entry
INFO [open_webui.env] 'ENABLE_OPENAI_API' loaded from the latest database entry
INFO [open_webui.env] 'OPENAI_API_KEYS' loaded from the latest database entry
INFO [open_webui.env] 'OPENAI_API_BASE_URLS' loaded from the latest database entry
INFO [open_webui.env] 'OPENAI_API_CONFIGS' loaded from the latest database entry
INFO [open_webui.env] 'WEBUI_URL' loaded from the latest database entry
INFO [open_webui.env] 'ENABLE_SIGNUP' loaded from the latest database entry
INFO [open_webui.env] 'DEFAULT_LOCALE' loaded from the latest database entry
INFO [open_webui.env] 'DEFAULT_PROMPT_SUGGESTIONS' loaded from the latest database entry
INFO [open_webui.env] 'DEFAULT_USER_ROLE' loaded from the latest database entry
INFO [open_webui.env] 'ENABLE_CHANNELS' loaded from the latest database entry
INFO [open_webui.env] 'WEBHOOK_URL' loaded from the latest database entry
INFO [open_webui.env] 'ENABLE_COMMUNITY_SHARING' loaded from the latest database entry
INFO [open_webui.env] 'ENABLE_MESSAGE_RATING' loaded from the latest database entry
WARNI [open_webui.env] WARNING: CORS_ALLOW_ORIGIN IS SET TO '*' - NOT RECOMMENDED FOR PRODUCTION DEPLOYMENTS.
INFO [open_webui.env] 'SHOW_ADMIN_DETAILS' loaded from the latest database entry
INFO [open_webui.env] 'ENABLE_GOOGLE_DRIVE_INTEGRATION' loaded from the latest database entry
INFO [open_webui.env] 'CONTENT_EXTRACTION_ENGINE' loaded from the latest database entry
INFO [open_webui.env] 'TIKA_SERVER_URL' loaded from the latest database entry
INFO [open_webui.env] 'RAG_TOP_K' loaded from the latest database entry
INFO [open_webui.env] 'RAG_RELEVANCE_THRESHOLD' loaded from the latest database entry
INFO [open_webui.env] 'ENABLE_RAG_HYBRID_SEARCH' loaded from the latest database entry
INFO [open_webui.env] 'RAG_EMBEDDING_ENGINE' loaded from the latest database entry
INFO [open_webui.env] 'PDF_EXTRACT_IMAGES' loaded from the latest database entry
INFO [open_webui.env] 'RAG_EMBEDDING_MODEL' loaded from the latest database entry
INFO [open_webui.env] Embedding model set: BAAI/bge-en-icl
INFO [open_webui.env] 'RAG_EMBEDDING_BATCH_SIZE' loaded from the latest database entry
INFO [open_webui.env] 'RAG_RERANKING_MODEL' loaded from the latest database entry
INFO [open_webui.env] Reranking model set: mixedbread-ai/mxbai-rerank-large-v1
INFO [open_webui.env] 'RAG_TEXT_SPLITTER' loaded from the latest database entry
INFO [open_webui.env] 'CHUNK_SIZE' loaded from the latest database entry
INFO [open_webui.env] 'CHUNK_OVERLAP' loaded from the latest database entry
INFO [open_webui.env] 'RAG_TEMPLATE' loaded from the latest database entry
INFO [open_webui.env] 'RAG_OPENAI_API_BASE_URL' loaded from the latest database entry
INFO [open_webui.env] 'RAG_OPENAI_API_KEY' loaded from the latest database entry
INFO [open_webui.env] 'RAG_OLLAMA_BASE_URL' loaded from the latest database entry
INFO [open_webui.env] 'RAG_OLLAMA_API_KEY' loaded from the latest database entry
INFO [open_webui.env] 'IMAGE_GENERATION_ENGINE' loaded from the latest database entry
INFO [open_webui.env] 'ENABLE_IMAGE_GENERATION' loaded from the latest database entry
INFO [open_webui.env] 'AUTOMATIC1111_BASE_URL' loaded from the latest database entry
INFO [open_webui.env] 'AUTOMATIC1111_API_AUTH' loaded from the latest database entry
INFO [open_webui.env] 'COMFYUI_BASE_URL' loaded from the latest database entry
INFO [open_webui.env] 'COMFYUI_WORKFLOW' loaded from the latest database entry
INFO [open_webui.env] 'COMFYUI_WORKFLOW' loaded from the latest database entry
INFO [open_webui.env] 'IMAGES_OPENAI_API_BASE_URL' loaded from the latest database entry
INFO [open_webui.env] 'IMAGES_OPENAI_API_KEY' loaded from the latest database entry
WARNI [langchain_community.utils.user_agent] USER_AGENT environment variable not set, consider setting it to identify your requests.
[Open WebUI ASCII banner]
v0.5.4 - building the best open-source AI user interface. https://github.com/open-webui/open-webui
Fetching 17 files: 100%|███████████████████████████████████████████████████████████████████████| 17/17 [00:00<?, ?it/s]
Loading checkpoint shards: 33%|███████████████████ | 1/3 [00:02<00:04, 2.30s/it]
Press any key to continue . . .
2025-01-22T05:58:22
https://www.reddit.com/r/LocalLLaMA/comments/1i74ho3/baaibgeenicl_not_working_with_openwebui/
AlgorithmicKing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i74ho3
false
null
t3_1i74ho3
/r/LocalLLaMA/comments/1i74ho3/baaibgeenicl_not_working_with_openwebui/
false
false
self
1
null
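One way to narrow down a crash like the one above is to load the model outside Open WebUI. BAAI/bge-en-icl is a ~7B-parameter embedding model, so a silent exit partway through "Loading checkpoint shards" is consistent with running out of RAM/VRAM. A hedged standalone test with transformers (assuming the machine has enough memory for an FP16 load):

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "BAAI/bge-en-icl"
tok = AutoTokenizer.from_pretrained(name)
# float16 halves memory versus the default float32 load
model = AutoModel.from_pretrained(name, torch_dtype=torch.float16)

inputs = tok("hello world", return_tensors="pt")
emb = model(**inputs).last_hidden_state.mean(dim=1)  # crude mean-pooled embedding
print(emb.shape)
```

If this also dies at the shard-loading step, the problem is resources rather than Open WebUI configuration.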
Exploring Local Server for Max 1k Active User Base
2
Hi, asking for a friend: if one wanted to serve a 32B model for a local application to around 1k max (likely 500) monthly active users, what would be a good and cost-effective option? Could he put together an EPYC server with some A6000s? I'm having difficulty understanding how the hardware needs to scale for this in terms of VRAM, RAM, and CPU compute power. He would essentially be hosting an AI application, hoping to save over cloud computing costs.
2025-01-22T06:00:57
https://www.reddit.com/r/LocalLLaMA/comments/1i74j9k/exploring_local_server_for_max_1k_active_user_base/
NewBronzeAge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i74j9k
false
null
t3_1i74j9k
/r/LocalLLaMA/comments/1i74j9k/exploring_local_server_for_max_1k_active_user_base/
false
false
self
2
null
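A hedged way to reason about the question above: estimate peak concurrency, then add KV-cache headroom on top of the quantized weights. Every constant below is an assumption chosen to illustrate the method, not measured data; real capacity is usually throughput-bound (tokens/s), so a replica or two beyond the VRAM minimum is prudent.

```python
users_peak = 50            # assume ~5-10% of 500-1000 MAU concurrent at peak
model_gb = 32 * 0.5 + 2    # ~18 GB: 32B params at 4-bit plus runtime overhead
kv_gb_per_seq = 0.5        # ballpark KV cache per active sequence (8k ctx, GQA)

vram_needed = model_gb + users_peak * kv_gb_per_seq
a6000s = int(-(-vram_needed // 48))   # ceil-divide by 48 GB per A6000
print(f"~{vram_needed:.0f} GB VRAM -> ~{a6000s}x A6000 for one replica")
```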
DeepSeek R1 32B is way better than 7B Distill, even at Q4 quant
51
I've been quite impressed by the model. I'm using the Qwen distill and so far it's working well, although as is typical with these models, they tend to overthink a lot! But it answered my trick question in one shot (See comments).
2025-01-22T06:01:35
https://www.reddit.com/r/LocalLLaMA/comments/1i74jqg/deepseek_r1_32b_is_way_better_than_7b_distill/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i74jqg
false
null
t3_1i74jqg
/r/LocalLLaMA/comments/1i74jqg/deepseek_r1_32b_is_way_better_than_7b_distill/
false
false
self
51
null
How trusted is LM Studio?
7
I apologize if this isn't the place to ask; I will delete the post if not. I'm an absolute beginner with this stuff, so I apologize for my ignorance. I had seen that LM Studio was a beginner-friendly intro to the world of local LLMs and wanted to try it out. My more experienced friend said that I shouldn't bother with it and not to trust it. Is it something that the community stands by, or is it not trustworthy?
2025-01-22T06:06:05
https://www.reddit.com/r/LocalLLaMA/comments/1i74mao/how_trusted_is_lm_studio/
DevilBirb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i74mao
false
null
t3_1i74mao
/r/LocalLLaMA/comments/1i74mao/how_trusted_is_lm_studio/
false
false
self
7
null
M4 Max 16-Core, 40-Core Graphics - 128 GB Ram
1
[removed]
2025-01-22T06:33:42
https://www.reddit.com/r/LocalLLaMA/comments/1i7514d/m4_max_16core_40core_graphics_128_gb_ram/
Vect0r_YT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7514d
false
null
t3_1i7514d
/r/LocalLLaMA/comments/1i7514d/m4_max_16core_40core_graphics_128_gb_ram/
false
false
self
1
null
How to debug eval metrics? [See description]
1
Hi All, I am looking to host an offline/local solution to view/interpret the standard-eval outputs from different LLMs. Is there something I can use locally? I have the outputs in a local jsonl file, but I want some locally-hosted frontend which takes in the filename and then gives an easy way to play around with the outputs. Having metadata like average len of inputs, avg output tokens etc would also be useful. Any pointers? Thanks.
2025-01-22T06:34:00
https://www.reddit.com/r/LocalLLaMA/comments/1i751a2/how_to_debug_eval_metrics_see_description/
weight_matrix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i751a2
false
null
t3_1i751a2
/r/LocalLLaMA/comments/1i751a2/how_to_debug_eval_metrics_see_description/
false
false
self
1
null
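Not a full frontend, but a sketch of the metadata pass the post above asks for: average input length and average output tokens from a local JSONL of eval outputs. The filename and field names ("input", "output") are assumptions; adjust to your schema. A small Streamlit or Gradio app could wrap the same logic for a local UI.

```python
import json
import statistics

with open("eval_outputs.jsonl") as f:              # hypothetical filename
    rows = [json.loads(line) for line in f if line.strip()]

in_lens = [len(r["input"]) for r in rows]          # field names are assumptions
out_toks = [len(r["output"].split()) for r in rows]  # crude whitespace tokens
print(f"{len(rows)} rows | avg input chars {statistics.mean(in_lens):.0f} "
      f"| avg output tokens {statistics.mean(out_toks):.0f}")
```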
generating open-source synthetic datasets for data-scarce tasks
1
[removed]
2025-01-22T06:36:38
https://www.reddit.com/r/LocalLLaMA/comments/1i752n9/generating_opensource_synthetic_datasets_for/
Classic_Eggplant8827
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i752n9
false
null
t3_1i752n9
/r/LocalLLaMA/comments/1i752n9/generating_opensource_synthetic_datasets_for/
false
false
self
1
null
Best frameworks for fine-tuning models—what’s everyone using?
1
[removed]
2025-01-22T06:38:14
https://www.reddit.com/r/LocalLLaMA/comments/1i753he/best_frameworks_for_finetuning_modelswhats/
Vivid-Entertainer752
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i753he
false
null
t3_1i753he
/r/LocalLLaMA/comments/1i753he/best_frameworks_for_finetuning_modelswhats/
false
false
self
1
null
I don’t believe the $500 AI investment
1
Looking at this deal, several things don't add up. The $500 billion figure is wildly optimistic - that's almost double what the entire US government committed to semiconductor manufacturing through the CHIPS Act. When you dig deeper, you see lots of vague promises but no real details about where the money's coming from or how they'll actually build anything. The legal language is especially fishy. Instead of making firm commitments, they're using weasel words like "intends to," "evaluating," and "potential partnerships." This isn't accidental - by running everything through Stargate, a new private company, and using this careful language, they've created a perfect shield for bigger players like SoftBank and Microsoft. If things go south, they can just blame "market conditions" and walk away with minimal exposure. Private companies like Stargate don't face the same strict disclosure requirements as public ones. The timing is also telling - announcing this massive investment right after Trump won the presidency was clearly designed for maximum political impact. It fits perfectly into the narrative of bringing jobs and investment back to America. Using inflated job numbers for data centers (which typically employ relatively few people once built) while making vague promises about US technological leadership? That's politics. My guess? There's probably a real data center project in the works, but it's being massively oversold for publicity and political gains. The actual investment will likely be much smaller, take longer to complete, and involve different partners than what's being claimed. This announcement is just a deal structured by lawyers who wanted to generate maximum headlines while minimizing any legal risk for their clients.
2025-01-22T07:01:24
https://www.reddit.com/r/LocalLLaMA/comments/1i75fde/i_dont_believe_the_500_ai_investment/
MattDTO
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i75fde
false
null
t3_1i75fde
/r/LocalLLaMA/comments/1i75fde/i_dont_believe_the_500_ai_investment/
false
false
self
1
null
I (a content writer) built a Shopify store using AI code generators
0
The process was a lot simpler than you'd expect. I signed up for Shopify, chose a plan, generated code snippets using AI (this was the most complex part, I used HTML, CSS, and JS for the backend), and added them to Shopify's theme editor. It displayed errors for certain things (like Google fonts) but those I was able to fix with AI as well. I wrote a detailed showcase of the process (with screenshots) for anyone interested: [https://blog.getbind.co/2025/01/21/how-to-create-a-shopify-store-with-ai/](https://blog.getbind.co/2025/01/21/how-to-create-a-shopify-store-with-ai/) **NOTE:** this was only a project and not a real venture.
2025-01-22T07:02:20
https://www.reddit.com/r/LocalLLaMA/comments/1i75fvr/i_a_content_writer_built_a_shopify_store_using_ai/
johnzakma10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i75fvr
false
null
t3_1i75fvr
/r/LocalLLaMA/comments/1i75fvr/i_a_content_writer_built_a_shopify_store_using_ai/
false
false
self
0
{'enabled': False, 'images': [{'id': '1_MnsoBOjHUVlBv8s1AW8GF3ZoHqy4Q7Cx8Vh-5po64', 'resolutions': [{'height': 23, 'url': 'https://external-preview.redd.it/qucmYrGLR_ezD9eMXgpepPDC5n8MtQ-JNXioY_ynCHg.jpg?width=108&crop=smart&auto=webp&s=5d3f084b1f24c6be1b219ed06d50ede11039ae20', 'width': 108}, {'height': 47, 'url': 'https://external-preview.redd.it/qucmYrGLR_ezD9eMXgpepPDC5n8MtQ-JNXioY_ynCHg.jpg?width=216&crop=smart&auto=webp&s=4c4c27a0375b804db5d90bf12bf5c57a81b64386', 'width': 216}], 'source': {'height': 60, 'url': 'https://external-preview.redd.it/qucmYrGLR_ezD9eMXgpepPDC5n8MtQ-JNXioY_ynCHg.jpg?auto=webp&s=58df702c38afd9cce5d0d8f1b6181031aa15e77b', 'width': 272}, 'variants': {}}]}
I don’t believe the $500 Billion OpenAI investment
547
Looking at this deal, several things don't add up. The $500 billion figure is wildly optimistic - that's almost double what the entire US government committed to semiconductor manufacturing through the CHIPS Act. When you dig deeper, you see lots of vague promises but no real details about where the money's coming from or how they'll actually build anything. The legal language is especially fishy. Instead of making firm commitments, they're using weasel words like "intends to," "evaluating," and "potential partnerships." This isn't accidental - by running everything through Stargate, a new private company, and using this careful language, they've created a perfect shield for bigger players like SoftBank and Microsoft. If things go south, they can just blame "market conditions" and walk away with minimal exposure. Private companies like Stargate don't face the same strict disclosure requirements as public ones. The timing is also telling - announcing this massive investment right after Trump won the presidency was clearly designed for maximum political impact. It fits perfectly into the narrative of bringing jobs and investment back to America. Using inflated job numbers for data centers (which typically employ relatively few people once built) while making vague promises about US technological leadership? That's politics. My guess? There's probably a real data center project in the works, but it's being massively oversold for publicity and political gains. The actual investment will likely be much smaller, take longer to complete, and involve different partners than what's being claimed. This announcement is just a deal structured by lawyers who wanted to generate maximum headlines while minimizing any legal risk for their clients.
2025-01-22T07:03:02
https://www.reddit.com/r/LocalLLaMA/comments/1i75g7p/i_dont_believe_the_500_billion_openai_investment/
MattDTO
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i75g7p
false
null
t3_1i75g7p
/r/LocalLLaMA/comments/1i75g7p/i_dont_believe_the_500_billion_openai_investment/
false
false
self
547
null
deepseek-r1:14b - attempting to answer Qt version-difference questions that are a bit tricky
4
I had not expected local models to give reasonable answers this soon. I am running deepseek-r1:14b on an RTX 3060. Previously, every time I tried local models I was not satisfied with the answers after a couple of tries, so I switched back to LLMs like OpenAI, Claude, or Gemini. Now I see we can start using local models reasonably. Though the answers are not exact, with a reasoning model like this showing its thinking along the way, I feel comfortable using it. https://preview.redd.it/772ekq3ruhee1.png?width=3480&format=png&auto=webp&s=db2ea99d3fc168372f0b9b1e48f6aae2e002fe91
2025-01-22T07:07:46
https://www.reddit.com/r/LocalLLaMA/comments/1i75iny/deepseekr114b_attempting_to_answer_version/
prabhic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i75iny
false
null
t3_1i75iny
/r/LocalLLaMA/comments/1i75iny/deepseekr114b_attempting_to_answer_version/
false
false
https://b.thumbs.redditm…6Ye2ln3YOKso.jpg
4
null
AI News Recommendations?
6
Hello! I'm looking for recommendations for AI news sources, similar to what I currently listen to. I regularly follow these audio/video sources: * Latent Space: The AI Engineer Podcast * Machine Learning Street Talk * aiexplained-official (YouTube) * MattVidPro (YouTube) I download the audio from these videos each morning using Newpipe and listen to them in Antennapod. I also read the Buttondown AI newsletter (recommended by karpathy) and wish there was an audio version of that too. Based on my current sources, what other AI news sources would you recommend, especially audio-based ones?
2025-01-22T07:14:21
https://www.reddit.com/r/LocalLLaMA/comments/1i75m1u/ai_news_recommendations/
dewijones92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i75m1u
false
null
t3_1i75m1u
/r/LocalLLaMA/comments/1i75m1u/ai_news_recommendations/
false
false
self
6
null
[R] giving out gpus for free to make any niche specialized model
8
Hello, so the story is that I raised money from VCs to start a company; I actually hate this (I'm extremely unhappy). I decided that what would make me extremely happy is to fund a community research effort around what is possible with synthetic data. I am giving GPUs to people to generate cool function-calling or multi-turn datasets and populate Hugging Face with really cool niche SOTA agents for tasks where LLMs are notoriously bad (Rust, text2sql, Verilog, etc.). If you have something you want to work on, DM me!
2025-01-22T07:25:47
https://www.reddit.com/r/LocalLLaMA/comments/1i75rnq/r_giving_out_gpus_for_free_to_make_any_niche/
Classic_Eggplant8827
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i75rnq
false
null
t3_1i75rnq
/r/LocalLLaMA/comments/1i75rnq/r_giving_out_gpus_for_free_to_make_any_niche/
false
false
self
8
null
Bytedance Model: Doubao 1.5pro - Doubao Team
21
2025-01-22T07:36:36
https://team.doubao.com/en/special/doubao_1_5_pro
a445141126
team.doubao.com
1970-01-01T00:00:00
0
{}
1i75wyo
false
null
t3_1i75wyo
/r/LocalLLaMA/comments/1i75wyo/bytedance_model_doubao_15pro_doubao_team/
false
false
default
21
null
Cloud hosted LLaMA or ChatGPT Embeddings
1
[removed]
2025-01-22T07:47:39
https://www.reddit.com/r/LocalLLaMA/comments/1i762ec/cloud_hosted_llama_or_chatgpt_embeddings/
Extension-Lettuce623
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i762ec
false
null
t3_1i762ec
/r/LocalLLaMA/comments/1i762ec/cloud_hosted_llama_or_chatgpt_embeddings/
false
false
self
1
null
R1-Zero: Pure RL Creates a Mind We Can’t Decode—Is This AGI’s Dark Mirror?
261
The AI world is losing its mind over DeepSeek-R1-Zero, a model that skipped supervised fine-tuning (SFT) entirely and learned purely through reinforcement learning (RL). Unlike its sibling R1—which uses **some** SFT data to stay "human-readable"—R1-Zero’s training mirrors AlphaZero’s trial-and-error self-play. The result? **Jaw-dropping performance** (AIME math scores jumped from 15.6% → 86.7%) paired with **bizarre, uninterpretable reasoning**. Researchers observed "aha moments" where it autonomously rechecked flawed logic mid-process and allocated more compute to harder problems—**without human guidance**. But here’s the kicker: its outputs are riddled with garbled language mixes (e.g., Chinese/English spaghetti code) and logic leaps that even its creators can’t fully explain. Meanwhile, R1 (the SFT-hybrid version) achieves similar performance **without the chaos**, proving that human-curated data still tames the beast. But at what cost? R1-Zero’s pure RL approach hints at a terrifying possibility: **minds that optimize truth beyond human comprehension**. And with API costs 50x cheaper than OpenAI’s, scaling this could democratize superintelligence—or unleash unreadable black-box AI. If R1-Zero’s "alien logic" solves problems we can’t, does readability even matter… or is this how alignment dies?
2025-01-22T07:54:24
https://www.reddit.com/r/LocalLLaMA/comments/1i765q0/r1zero_pure_rl_creates_a_mind_we_cant_decodeis/
Fun_Dragonfruit_4613
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i765q0
false
null
t3_1i765q0
/r/LocalLLaMA/comments/1i765q0/r1zero_pure_rl_creates_a_mind_we_cant_decodeis/
false
false
self
261
null
rStar-Math?
2
Here is rStar-Math. Has there been any exploration of its integration with DeepSeek distilled models? I am interested in obtaining an informed critique of such undertakings. [https://github.com/microsoft/rStar](https://github.com/microsoft/rStar)
2025-01-22T08:11:52
https://www.reddit.com/r/LocalLLaMA/comments/1i76e1z/rstarmath/
Wonderful_Second5322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i76e1z
false
null
t3_1i76e1z
/r/LocalLLaMA/comments/1i76e1z/rstarmath/
false
false
self
2
{'enabled': False, 'images': [{'id': 'a-XdqHvGm-BQEECrtM5DxV5UrUOx6307m1WuUZqPmUM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EBkeap0N69awf8OnUeRy8W1_ZIhMW7VTuxrBYV1pWrE.jpg?width=108&crop=smart&auto=webp&s=dde75b754ab5f0783c3236039d8e22a82739779c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EBkeap0N69awf8OnUeRy8W1_ZIhMW7VTuxrBYV1pWrE.jpg?width=216&crop=smart&auto=webp&s=d363e7641caedebf49add2434c45d7459e218ed1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EBkeap0N69awf8OnUeRy8W1_ZIhMW7VTuxrBYV1pWrE.jpg?width=320&crop=smart&auto=webp&s=9ff7e2cdb1b25e1c23b1707773d9e2798d0dbd65', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EBkeap0N69awf8OnUeRy8W1_ZIhMW7VTuxrBYV1pWrE.jpg?width=640&crop=smart&auto=webp&s=f194fbd19a9e4e0251c01c3241b6bdcabef718d7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EBkeap0N69awf8OnUeRy8W1_ZIhMW7VTuxrBYV1pWrE.jpg?width=960&crop=smart&auto=webp&s=755d0f2d78097e1a63540bcc8da971eea1700203', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EBkeap0N69awf8OnUeRy8W1_ZIhMW7VTuxrBYV1pWrE.jpg?width=1080&crop=smart&auto=webp&s=f179568a7e93a3de77bdba9ce6dd9ba63b2a971e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EBkeap0N69awf8OnUeRy8W1_ZIhMW7VTuxrBYV1pWrE.jpg?auto=webp&s=c43400325bd3b5c598650256882db93d84387bd2', 'width': 1200}, 'variants': {}}]}
LoRA for LLM?
1
[removed]
2025-01-22T08:15:41
https://www.reddit.com/r/LocalLLaMA/comments/1i76frg/lora_for_llm/
berrj0108
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i76frg
false
null
t3_1i76frg
/r/LocalLLaMA/comments/1i76frg/lora_for_llm/
false
false
self
1
null
Is the Deepseek R1 training code available anywhere?
2
I tried searching online, but everywhere I look it's just the code to run inference rather than the complete training code. I would like to know what they did and how it works, sort of reverse-engineer it, but I cannot find the code anywhere.
2025-01-22T08:43:52
https://www.reddit.com/r/LocalLLaMA/comments/1i76sjb/is_the_deepseek_r1_training_code_available/
riansar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i76sjb
false
null
t3_1i76sjb
/r/LocalLLaMA/comments/1i76sjb/is_the_deepseek_r1_training_code_available/
false
false
self
2
null
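There is indeed no official training code in the R1 release; the paper describes the method, GRPO (group relative policy optimization). A toy sketch of its core step, paraphrased from the published description rather than DeepSeek's code: sample a group of completions per prompt, score them, and use the group-normalized reward as the advantage, with no separate value network.

```python
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """rewards: (group_size,) scalar rewards for one prompt's samples.

    GRPO's advantage is the reward standardized within the group,
    so correct samples get positive advantage and wrong ones negative.
    """
    return (rewards - rewards.mean()) / (rewards.std() + 1e-6)

# e.g. rule-based exact-match grading of 5 sampled answers to one prompt
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0, 0.0])
print(grpo_advantages(rewards))
```

These advantages then weight a clipped policy-gradient loss (plus a KL penalty to a reference model) in the usual PPO-style fashion.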
Test `DeepSeek R1 70B` for free using private Fluid API
8
2025-01-22T08:53:55
https://v.redd.it/721oc32ceiee1
PrivacyIsImportan1
v.redd.it
1970-01-01T00:00:00
0
{}
1i76x7u
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/721oc32ceiee1/DASHPlaylist.mpd?a=1740128049%2CZDc3MGMzNDRkM2YyYjUyNGU4YzdkNTg3ZGNhZDkwMDcxOGFkYzIyNGUzN2ExZDNkMzZmNzFlZWU2NjUzMWFjMA%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/721oc32ceiee1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/721oc32ceiee1/HLSPlaylist.m3u8?a=1740128049%2CODE0NmYxY2U0YTdjNmVlZDA3ZjQ1NzJjMWM1YTcyMjIyMGEwMDc0NmJhNzJlODBmYzhkN2IzMWJkMTZjMzM4OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/721oc32ceiee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1176}}
t3_1i76x7u
/r/LocalLLaMA/comments/1i76x7u/test_deepseek_r1_70b_for_free_using_private_fluid/
false
false
https://external-preview…e6c4db0b77065cf6
8
{'enabled': False, 'images': [{'id': 'dW55ajc3dmZlaWVlMdx5GDc3_ez7Z-Gp5NX8lFb_ntCghspMTNPuJiKJAyH0', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/dW55ajc3dmZlaWVlMdx5GDc3_ez7Z-Gp5NX8lFb_ntCghspMTNPuJiKJAyH0.png?width=108&crop=smart&format=pjpg&auto=webp&s=0dd446924cfbf8002469bedce5c11a6d6a01327e', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/dW55ajc3dmZlaWVlMdx5GDc3_ez7Z-Gp5NX8lFb_ntCghspMTNPuJiKJAyH0.png?width=216&crop=smart&format=pjpg&auto=webp&s=9b071776d79df9c59d745914925eb28b1b9ef69b', 'width': 216}, {'height': 195, 'url': 'https://external-preview.redd.it/dW55ajc3dmZlaWVlMdx5GDc3_ez7Z-Gp5NX8lFb_ntCghspMTNPuJiKJAyH0.png?width=320&crop=smart&format=pjpg&auto=webp&s=aa5e0091ed95aeb8d51dbb21430a7598063caa90', 'width': 320}, {'height': 391, 'url': 'https://external-preview.redd.it/dW55ajc3dmZlaWVlMdx5GDc3_ez7Z-Gp5NX8lFb_ntCghspMTNPuJiKJAyH0.png?width=640&crop=smart&format=pjpg&auto=webp&s=396282c142fd182b4a90923a0f7ada448249496b', 'width': 640}, {'height': 587, 'url': 'https://external-preview.redd.it/dW55ajc3dmZlaWVlMdx5GDc3_ez7Z-Gp5NX8lFb_ntCghspMTNPuJiKJAyH0.png?width=960&crop=smart&format=pjpg&auto=webp&s=40fefafcad52c85a3e906a48f3e8189ed35d3459', 'width': 960}, {'height': 661, 'url': 'https://external-preview.redd.it/dW55ajc3dmZlaWVlMdx5GDc3_ez7Z-Gp5NX8lFb_ntCghspMTNPuJiKJAyH0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5b14969013aca01041a5550d59bd5f5bf0f1b3c9', 'width': 1080}], 'source': {'height': 982, 'url': 'https://external-preview.redd.it/dW55ajc3dmZlaWVlMdx5GDc3_ez7Z-Gp5NX8lFb_ntCghspMTNPuJiKJAyH0.png?format=pjpg&auto=webp&s=238e30a59a15c44152dac3cf18bcc8fd04dfc0cb', 'width': 1604}, 'variants': {}}]}
Built an AI Flow for Client Research and Insights
1
I created an AI workflow to automate client research, quickly gathering and analysing insights about potential clients. Here's the process:

**Step 1: Search for the prospect** - Fetch the prospect's professional background, role, and recent activities using a search block. Takes in name and company details as input.

**Step 2: Search company details** - Gather company information like industry, market position, and key data.

**Step 3: Crawl the company website** - Extract details about products, solutions, and news via a web crawl block.

**Step 4: Synthesize insights** - Use an LLM call block to analyze and summarize findings, highlighting opportunities and pain points.

**Step 5: Email the summary** - Automatically send the synthesized report via email.

**You can copy my flow, try it out for free and build yours here:** [https://app.athina.ai/flows/templates/ee41f8e0-5958-45bb-ad82-b60f0e69320d](https://app.athina.ai/flows/templates/ee41f8e0-5958-45bb-ad82-b60f0e69320d)

**Output:** Tried this for a friend and here are the results: https://preview.redd.it/wedemg4vfiee1.png?width=1494&format=png&auto=webp&s=ce972327a5661b9b238eebab84fc651baff2b8c0
2025-01-22T09:02:04
https://www.reddit.com/r/LocalLLaMA/comments/1i771a0/built_an_ai_flow_for_client_research_and_insights/
Sam_Tech1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i771a0
false
null
t3_1i771a0
/r/LocalLLaMA/comments/1i771a0/built_an_ai_flow_for_client_research_and_insights/
false
false
https://a.thumbs.redditm…xzgWJQF_jkV4.jpg
1
null
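The five blocks in the post above map naturally onto plain functions. A skeleton of that flow, with every service left as a hypothetical stub (these are not Athina's actual block APIs, just placeholders showing the data flow):

```python
def search_prospect(name: str, company: str) -> str: ...   # search block
def search_company(company: str) -> str: ...               # search block
def crawl_site(url: str) -> str: ...                       # web crawl block
def synthesize(*context: str) -> str: ...                  # LLM call block
def send_email(to: str, body: str) -> None: ...            # email block

def client_research_flow(name, company, site, recipient):
    person = search_prospect(name, company)    # step 1
    firm = search_company(company)             # step 2
    pages = crawl_site(site)                   # step 3
    report = synthesize(person, firm, pages)   # step 4
    send_email(recipient, report)              # step 5
```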
Alright, so R1 is great, but have you noticed this?
1
[removed]
2025-01-22T09:04:20
https://www.reddit.com/r/LocalLLaMA/comments/1i772bf/alright_so_r1_is_great_but_have_you_noticed_this/
culoacido69420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i772bf
false
null
t3_1i772bf
/r/LocalLLaMA/comments/1i772bf/alright_so_r1_is_great_but_have_you_noticed_this/
false
false
self
1
null
Is DeepSeek an OpenAI product?? It says yes!!
0
I asked DeepSeek, on its own platform, about the existence of limits on the number of messages, tokens, etc... and it answered me as if it were Open AI: “Check the OpenAI Help or Support page for up-to-date details, as limits may be adjusted periodically.” I then asked it: “but... are you openai?” and it answered me: “Ah, good question! I am a language model based on OpenAI technology (like GPT-4), but I am not the OpenAI company itself.... ... In short: I'm an OpenAI product, but I don't represent the company in real time or manage its policies. was that clear to you? 😊” ????
2025-01-22T09:11:47
https://www.reddit.com/r/LocalLLaMA/comments/1i775uq/is_deepseek_an_openai_product_it_says_yes/
Epictetito
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i775uq
false
null
t3_1i775uq
/r/LocalLLaMA/comments/1i775uq/is_deepseek_an_openai_product_it_says_yes/
false
false
self
0
null
Rust code generation with small models
1
Can any of the small models generate consistent Rust code, at least for simple tasks? I can't find any benchmarks of Rust code generation for small models between about 7B and 13B params. I'm trying CodeGemma 7B and CodeLlama 13B, but they are kinda bad: they just talk about implementing features without writing any compilable code, even though I prompt them to write it. I'm using a 3060, so VRAM is limited to 12 GB. Any advice would be greatly appreciated.
2025-01-22T09:15:47
https://www.reddit.com/r/LocalLLaMA/comments/1i777pw/rust_code_generation_with_small_models/
ChazychazZz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i777pw
false
null
t3_1i777pw
/r/LocalLLaMA/comments/1i777pw/rust_code_generation_with_small_models/
false
false
self
1
null
How to Optimize GPU Utilization Across Multiple vLLM Docker Instances on a 4-GPU Server?
1
[removed]
2025-01-22T09:16:05
https://www.reddit.com/r/LocalLLaMA/comments/1i777uf/how_to_optimize_gpu_utilization_across_multiple/
EstimateFearless4742
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i777uf
false
null
t3_1i777uf
/r/LocalLLaMA/comments/1i777uf/how_to_optimize_gpu_utilization_across_multiple/
false
false
self
1
null
Need Advice: Building a Local RAG-Based Chatbot for a 1000-User Company - OpenWebUI vs. AnythingLLM?
0
I'm planning to build a local RAG (Retrieval-Augmented Generation)-based chatbot for a 1000-user company, and I'm stuck between using OpenWebUI and AnythingLLM. Here's what I'm aiming for: 1. Implementing custom tools, pipelines, and API integrations. 2. Experimenting with different RAG approaches, including LightRAG. 3. Managing user roles, permissions, and groupings effectively. Given these requirements, which project would be a better investment? I want to ensure scalability, flexibility, and a clear development path. Any insights, experiences, or recommendations would be greatly appreciated!
2025-01-22T09:21:07
https://www.reddit.com/r/LocalLLaMA/comments/1i77a2o/need_advice_building_a_local_ragbased_chatbot_for/
PickkNickk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i77a2o
false
null
t3_1i77a2o
/r/LocalLLaMA/comments/1i77a2o/need_advice_building_a_local_ragbased_chatbot_for/
false
false
self
0
null
Stubborn Strawberry Deepseek R1 14b
1
deepseek-r1:14b 8:20 PM

<think> Alright, the user is pointing out specific positions for 'r's in "strawberry," claiming there are three. I need to carefully analyze the word again, perhaps assigning numerical indices to each letter.

Wait, if we number letters starting from 1:
1. S
2. T
3. R
4. A
5. W
6. B
7. E
8. R
9. R
10. Y

Here, 'R's are at positions 3, 8, and 9. Wait a second—that means there are three 'r's? But I thought it was two. I must have made a mistake in my earlier analysis.

Hold on, the correct spelling of "strawberry" is **S-T-R-A-W-B-E-R-R-Y**, which indeed has two 'R's, not three. The confusion might arise from how we count positions. Let me clarify this to avoid further misunderstanding. </think>

Ah, I see where you're coming from! Let’s assign numerical indices to each letter in the word "strawberry":
1. S
2. T
3. R
4. A
5. W
6. B
7. E
8. R
9. R
10. Y

As shown above, there are **two "R"s** in positions 3 and 9. There is no third "R" in the word. If you're counting three "R"s, it might be due to a misunderstanding of how letters are positioned or a different spelling variation. Let me know if you'd like further clarification! 😊
2025-01-22T09:27:38
https://www.reddit.com/r/LocalLLaMA/comments/1i77d0c/stubborn_strawberry_deepseek_r1_14b/
Rogue_NPC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i77d0c
false
null
t3_1i77d0c
/r/LocalLLaMA/comments/1i77d0c/stubborn_strawberry_deepseek_r1_14b/
false
false
self
1
null
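For reference, a few lines of Python settle the count the model above keeps getting wrong; a minimal sketch (not from the post):

```python
# Enumerate 1-based positions of 'r' in "strawberry".
word = "strawberry"
positions = [i for i, ch in enumerate(word, start=1) if ch.lower() == "r"]
print(positions)       # [3, 8, 9]
print(len(positions))  # 3 -- three 'r's, exactly as the model's own indexing showed
```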
Exploring Advanced RAG Techniques: Are Current Solutions Too Naive?
10
In projects like AnythingLLM, Dify, and OpenWebUI, I often find their RAG (Retrieval-Augmented Generation) techniques to be quite naive. While they get the job done for simple use cases, I'm curious about the potential for implementing more complex and effective RAG approaches. For example:
* What are the best practices for achieving higher retrieval precision and context relevancy?
* Are there open-source frameworks or tools that allow for more advanced pipeline customization?
* How can we go beyond the basics and implement state-of-the-art RAG techniques effectively?
I'm particularly interested in hearing about projects or ideas that push the boundaries of RAG implementation. Any insights or resources on this topic would be greatly appreciated!
2025-01-22T09:28:09
https://www.reddit.com/r/LocalLLaMA/comments/1i77d8t/exploring_advanced_rag_techniques_are_current/
PickkNickk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i77d8t
false
null
t3_1i77d8t
/r/LocalLLaMA/comments/1i77d8t/exploring_advanced_rag_techniques_are_current/
false
false
self
10
null
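One concrete step beyond naive top-k retrieval is cross-encoder reranking of candidate chunks before generation; a minimal sketch, assuming some retriever already produced `candidates` and using one common public checkpoint (both are assumptions, not recommendations from the post):

```python
# Rerank naive top-k hits with a cross-encoder before handing them to the LLM.
from sentence_transformers import CrossEncoder

def rerank(query: str, candidates: list[str], keep: int = 5) -> list[str]:
    scorer = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = scorer.predict([(query, doc) for doc in candidates])  # one score per pair
    ranked = sorted(zip(scores, candidates), key=lambda p: p[0], reverse=True)
    return [doc for _, doc in ranked[:keep]]
```

The same pattern extends to hybrid (BM25 + dense) retrieval, query rewriting, and graph-based approaches like LightRAG; the reranker is just the cheapest precision win.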
Best frameworks for fine-tuning models—what’s everyone using?
1
[removed]
2025-01-22T09:33:32
https://www.reddit.com/r/LocalLLaMA/comments/1i77foq/best_frameworks_for_finetuning_modelswhats/
Vivid-Entertainer752
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i77foq
false
null
t3_1i77foq
/r/LocalLLaMA/comments/1i77foq/best_frameworks_for_finetuning_modelswhats/
false
false
self
1
null
Test `DeepSeek R1 70B` for free using private Fluid API
2
Hi, someone asked where to test DeepSeek R1 Llama 70B, so I thought I'd share a few free accounts here:
Endpoint: https://api.getfluid.app/v1 # OpenAI compatible (partial support)
API Keys: # Authorization: Bearer {account_no}
5022973707673231
3272534560577170
4578564232362932
5156468097321794
7111624403950538
1791875353587484
5678862424161157
Model name: Fluid-Latest
(Please don't overuse them)
You can also use my app [https://getfluid.app](https://getfluid.app) as a client if you have a Mac. (Here's [how to unlock](https://changelog.getfluid.app/#demo) Pro)
If you are seeking a private API provider with a strict privacy policy (no logging, no persistent storage, no sharing, hosted in the EU) - check out [details here](https://getfluid.app/#fluidpro). Use the `LOCAL_LLAMA` code to get a 50% perpetual discount (30 codes available).
\--------------------------------------------------
So far I'm very impressed with R1 70B, although it's clearly limited by the underlying Llama 3.3. For many everyday tasks it's noticeably better, sometimes a bit disappointing (e.g. coding in SwiftUI). I tested both FP4 (AWQ) and FP8 quants and didn't notice much difference (maybe I had the wrong FP8 quant?), so I went with AWQ for its speed.
Fun thing: within the Fluid app I switched to local Llama 8B to continue an R1 conversation, and it immediately started thinking in the same way as R1, using <think> tags - I thought I had a bug for a moment.
(Note: reposting this, as my previous post didn't display the comment with the API keys for some reason.)
2025-01-22T09:47:07
https://www.reddit.com/r/LocalLLaMA/comments/1i77m08/test_deepseek_r1_70b_for_free_using_private_fluid/
PrivacyIsImportan1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i77m08
false
null
t3_1i77m08
/r/LocalLLaMA/comments/1i77m08/test_deepseek_r1_70b_for_free_using_private_fluid/
false
false
self
2
{'enabled': False, 'images': [{'id': 'dvoD3QHOC6zmJFTNoOo7-Do6OcHwF_xlrSRYbkSnzMs', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/JT6-SrnAvvvG9Bbs6RHeqrEkBhNTjNdD89aMIRASVhE.jpg?width=108&crop=smart&auto=webp&s=969f4ab96196b3ef88a01b12e4f55e2fe4be33c3', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/JT6-SrnAvvvG9Bbs6RHeqrEkBhNTjNdD89aMIRASVhE.jpg?width=216&crop=smart&auto=webp&s=ff6a54f853352312da6266a017473c470ca536fc', 'width': 216}, {'height': 190, 'url': 'https://external-preview.redd.it/JT6-SrnAvvvG9Bbs6RHeqrEkBhNTjNdD89aMIRASVhE.jpg?width=320&crop=smart&auto=webp&s=b37e764a9cc46b402299fb32ad6e13ae881ccc58', 'width': 320}, {'height': 380, 'url': 'https://external-preview.redd.it/JT6-SrnAvvvG9Bbs6RHeqrEkBhNTjNdD89aMIRASVhE.jpg?width=640&crop=smart&auto=webp&s=71704f304eb18d08a695c22dd0743e67b6a9da02', 'width': 640}, {'height': 571, 'url': 'https://external-preview.redd.it/JT6-SrnAvvvG9Bbs6RHeqrEkBhNTjNdD89aMIRASVhE.jpg?width=960&crop=smart&auto=webp&s=37e7d56f0e644629ccca9527cac4ac96513315a5', 'width': 960}, {'height': 642, 'url': 'https://external-preview.redd.it/JT6-SrnAvvvG9Bbs6RHeqrEkBhNTjNdD89aMIRASVhE.jpg?width=1080&crop=smart&auto=webp&s=37b6348b021a964465669248e7e9da9f72fae23b', 'width': 1080}], 'source': {'height': 714, 'url': 'https://external-preview.redd.it/JT6-SrnAvvvG9Bbs6RHeqrEkBhNTjNdD89aMIRASVhE.jpg?auto=webp&s=46dbd09fb294bf736d07eeaedfee39e549264497', 'width': 1200}, 'variants': {}}]}
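Since the endpoint is OpenAI-compatible, it can be called with the standard client; a sketch using the endpoint, key format, and model name listed in the post ("partial support" means some OpenAI parameters may be ignored):

```python
# Call the shared Fluid endpoint with the OpenAI-compatible client.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.getfluid.app/v1",
    api_key="5022973707673231",  # one of the shared account numbers above
)
resp = client.chat.completions.create(
    model="Fluid-Latest",
    messages=[{"role": "user", "content": "Hello, R1!"}],
)
print(resp.choices[0].message.content)
```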
How it feels...
788
2025-01-22T10:01:42
https://i.redd.it/rocl1zwsqiee1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1i77sy1
false
null
t3_1i77sy1
/r/LocalLLaMA/comments/1i77sy1/how_it_feels/
false
false
https://b.thumbs.redditm…z-G8E2Fa0nAI.jpg
788
{'enabled': True, 'images': [{'id': 'nb1tugP6mUBF75kaTCDNv9yEMzdBeCXYQHzuSCPjxrA', 'resolutions': [{'height': 161, 'url': 'https://preview.redd.it/rocl1zwsqiee1.png?width=108&crop=smart&auto=webp&s=2b707f5a9aff527803097536e169098785f9c768', 'width': 108}, {'height': 323, 'url': 'https://preview.redd.it/rocl1zwsqiee1.png?width=216&crop=smart&auto=webp&s=e827be10d1a8fbd1bd0f91e73ccb0b8a30467f40', 'width': 216}, {'height': 479, 'url': 'https://preview.redd.it/rocl1zwsqiee1.png?width=320&crop=smart&auto=webp&s=3e2760d53f037795375c623661e55d86333f5bbd', 'width': 320}, {'height': 958, 'url': 'https://preview.redd.it/rocl1zwsqiee1.png?width=640&crop=smart&auto=webp&s=0cf91de1f8277f67834bb8422547f586738360e5', 'width': 640}, {'height': 1438, 'url': 'https://preview.redd.it/rocl1zwsqiee1.png?width=960&crop=smart&auto=webp&s=041a779739d055fa5dcaa70653dc78a257f261d5', 'width': 960}, {'height': 1618, 'url': 'https://preview.redd.it/rocl1zwsqiee1.png?width=1080&crop=smart&auto=webp&s=af7a5d7ae08d81e0480c67420431c83089227bb2', 'width': 1080}], 'source': {'height': 1618, 'url': 'https://preview.redd.it/rocl1zwsqiee1.png?auto=webp&s=254f3a68e602e79ed9faa857e230008721e52b90', 'width': 1080}, 'variants': {}}]}
How to train an LLM
1
[removed]
2025-01-22T10:12:09
https://www.reddit.com/r/LocalLLaMA/comments/1i77y3b/how_to_train_an_llm/
Asleep_Cartoonist460
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i77y3b
false
null
t3_1i77y3b
/r/LocalLLaMA/comments/1i77y3b/how_to_train_an_llm/
false
false
self
1
null
Stargate - Any reason to be optimistic here for local models..? Or is it just a boost in the arm for ClosedAI?
1
[removed]
2025-01-22T10:20:12
https://www.reddit.com/r/LocalLLaMA/comments/1i7823a/stargate_any_reason_to_be_optimistic_here_for/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7823a
false
null
t3_1i7823a
/r/LocalLLaMA/comments/1i7823a/stargate_any_reason_to_be_optimistic_here_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SYq4GjrSRFtv6nNi8-46v83gBQCgWS-yX2tjnQIvhLk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fpQ8njT_DRG46SuSohSLfEZQ1Lh_u_zuWHNrje9mZWs.jpg?width=108&crop=smart&auto=webp&s=56539b3fe6ab692fb77f34f81b934af2eef6b770', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/fpQ8njT_DRG46SuSohSLfEZQ1Lh_u_zuWHNrje9mZWs.jpg?auto=webp&s=5d4eeeb07362522c8536fc845025c40577ea68df', 'width': 200}, 'variants': {}}]}
Stargate - Any reason to be optimistic here..? Or is it just a boost in the arm for ClosedAI?
1
[removed]
2025-01-22T10:23:56
https://www.reddit.com/r/LocalLLaMA/comments/1i783za/stargate_any_reason_to_be_optimistic_here_or_is/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i783za
false
null
t3_1i783za
/r/LocalLLaMA/comments/1i783za/stargate_any_reason_to_be_optimistic_here_or_is/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SYq4GjrSRFtv6nNi8-46v83gBQCgWS-yX2tjnQIvhLk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fpQ8njT_DRG46SuSohSLfEZQ1Lh_u_zuWHNrje9mZWs.jpg?width=108&crop=smart&auto=webp&s=56539b3fe6ab692fb77f34f81b934af2eef6b770', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/fpQ8njT_DRG46SuSohSLfEZQ1Lh_u_zuWHNrje9mZWs.jpg?auto=webp&s=5d4eeeb07362522c8536fc845025c40577ea68df', 'width': 200}, 'variants': {}}]}
Stargate - Any reason to be optimistic here..?
1
[removed]
2025-01-22T10:24:27
https://www.reddit.com/r/LocalLLaMA/comments/1i7847m/stargate_any_reason_to_be_optimistic_here/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7847m
false
null
t3_1i7847m
/r/LocalLLaMA/comments/1i7847m/stargate_any_reason_to_be_optimistic_here/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SYq4GjrSRFtv6nNi8-46v83gBQCgWS-yX2tjnQIvhLk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fpQ8njT_DRG46SuSohSLfEZQ1Lh_u_zuWHNrje9mZWs.jpg?width=108&crop=smart&auto=webp&s=56539b3fe6ab692fb77f34f81b934af2eef6b770', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/fpQ8njT_DRG46SuSohSLfEZQ1Lh_u_zuWHNrje9mZWs.jpg?auto=webp&s=5d4eeeb07362522c8536fc845025c40577ea68df', 'width': 200}, 'variants': {}}]}
Stargate - Any reason to be optimistic here for local models..? Or is it just a boost in the arm for ClosedAI?
9
**TL;DR -** **Is closed vs open source zero sum?** **What if OAI succeeds at regulatory capture, what then?** **Will some of the $500bn go to orgs that release open weights e.g. Meta?** **---** Hey everyone! I imagine many here will probably have the same initial reaction I did to the news - which was *uh oh.* But on the flip-side of the coin I've actually seen mostly positive reactions on my {bird-app} timeline. So I thought I'd do a vibe check / start a discussion here specifically, since the ripples made by this kind of project will for sure trickle down into the local scene. So: **Is Stargate going to be good for local models?**
2025-01-22T10:27:12
https://www.reddit.com/r/LocalLLaMA/comments/1i785kc/stargate_any_reason_to_be_optimistic_here_for/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i785kc
false
null
t3_1i785kc
/r/LocalLLaMA/comments/1i785kc/stargate_any_reason_to_be_optimistic_here_for/
false
false
self
9
null
How easy is Fine-Tuning?
1
[removed]
2025-01-22T10:27:59
https://www.reddit.com/r/LocalLLaMA/comments/1i785xn/how_easy_is_finetuning/
Chance_Break6628
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i785xn
false
null
t3_1i785xn
/r/LocalLLaMA/comments/1i785xn/how_easy_is_finetuning/
false
false
self
1
null
DeepSeek using intuition vs logic
10
2025-01-22T10:39:38
https://www.reddit.com/gallery/1i78bp3
DeviantPlayeer
reddit.com
1970-01-01T00:00:00
0
{}
1i78bp3
false
null
t3_1i78bp3
/r/LocalLLaMA/comments/1i78bp3/deepseek_using_intuition_vs_logic/
false
false
https://b.thumbs.redditm…bMo1wWXhC1mg.jpg
10
null
R1 Distill + Rag
6
Did anybody already try this? R1 14B always gave me almost-working, complicated, and nuanced Python code, but sometimes with wrong function names. When I fed it the API docs from the packages, it always stunned me how good it is. So before I build a RAG solution: has somebody already tried it? I think a beautiful solution would be agentic: an agent retrieves all the necessary package documentation and sets it as the additional context
2025-01-22T10:44:55
https://www.reddit.com/r/LocalLLaMA/comments/1i78ebv/r1_distill_rag/
Flashy_Management962
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i78ebv
false
null
t3_1i78ebv
/r/LocalLLaMA/comments/1i78ebv/r1_distill_rag/
false
false
self
6
null
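A minimal sketch of the agentic idea above - fetch a package's documentation and set it as extra context before asking R1 for code. The doc URL and the local Ollama endpoint are illustrative assumptions, not a tested pipeline:

```python
# Pull package docs and prepend them as context for a local R1 distill.
import urllib.request
from openai import OpenAI

def fetch_docs(url: str) -> str:
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="ignore")

# Ollama exposes an OpenAI-compatible endpoint on port 11434.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
docs = fetch_docs("https://raw.githubusercontent.com/psf/requests/main/README.md")
resp = client.chat.completions.create(
    model="deepseek-r1:14b",
    messages=[
        {"role": "system", "content": "Use only the API documented below.\n\n" + docs},
        {"role": "user", "content": "Write a retry wrapper around requests.get."},
    ],
)
print(resp.choices[0].message.content)
```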
Venting
1
[removed]
2025-01-22T10:47:19
https://www.reddit.com/r/LocalLLaMA/comments/1i78fir/venting/
Upset_Acanthaceae_18
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i78fir
false
null
t3_1i78fir
/r/LocalLLaMA/comments/1i78fir/venting/
false
false
self
1
null
R2 when (your opinion)
1
[removed]
2025-01-22T11:05:31
https://www.reddit.com/r/LocalLLaMA/comments/1i78p3q/r2_when_your_opinion/
Optimal_Hamster5789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i78p3q
false
null
t3_1i78p3q
/r/LocalLLaMA/comments/1i78p3q/r2_when_your_opinion/
false
false
self
1
null
R2 will be a disaster for ClosedAI
1
[removed]
2025-01-22T11:08:44
https://www.reddit.com/r/LocalLLaMA/comments/1i78qsl/r2_will_be_disaster_to_closedai/
Optimal_Hamster5789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i78qsl
false
null
t3_1i78qsl
/r/LocalLLaMA/comments/1i78qsl/r2_will_be_disaster_to_closedai/
false
false
self
1
null
Deepseek R1 GRPO code open sourced 🤯
1
https://preview.redd.it/…e/trl/pull/2565)
2025-01-22T11:09:36
https://www.reddit.com/r/LocalLLaMA/comments/1i78r8p/deepseek_r1_grpo_code_open_sourced/
eliebakk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i78r8p
false
null
t3_1i78r8p
/r/LocalLLaMA/comments/1i78r8p/deepseek_r1_grpo_code_open_sourced/
false
false
https://a.thumbs.redditm…EB-O-dK_hTL8.jpg
1
{'enabled': False, 'images': [{'id': 'TaCioUI7kR6Y8_CK8HdPTZc7J1hJOK1UbB54bvqtjgE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/geCTeSIVMwNFsc6dmRyWBPp_gEBb1U86STRBWfXJy6w.jpg?width=108&crop=smart&auto=webp&s=b110b8a840fd1d9ec305b02caa89b6df67604d1a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/geCTeSIVMwNFsc6dmRyWBPp_gEBb1U86STRBWfXJy6w.jpg?width=216&crop=smart&auto=webp&s=1417397f29b70de653d939a4d9f0e498f7859d24', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/geCTeSIVMwNFsc6dmRyWBPp_gEBb1U86STRBWfXJy6w.jpg?width=320&crop=smart&auto=webp&s=a4648d1d590c2b42c7312aeb31d9ae92e10a6d85', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/geCTeSIVMwNFsc6dmRyWBPp_gEBb1U86STRBWfXJy6w.jpg?width=640&crop=smart&auto=webp&s=2318e8b2a495fb8730a49369b4564294ffa6c271', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/geCTeSIVMwNFsc6dmRyWBPp_gEBb1U86STRBWfXJy6w.jpg?width=960&crop=smart&auto=webp&s=5403525e2285d6ee8c82228d33cef0f85e315ee0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/geCTeSIVMwNFsc6dmRyWBPp_gEBb1U86STRBWfXJy6w.jpg?width=1080&crop=smart&auto=webp&s=5d5e682eebace4272424fd7fda95c743e49f9fe8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/geCTeSIVMwNFsc6dmRyWBPp_gEBb1U86STRBWfXJy6w.jpg?auto=webp&s=cd82763f753c2aee0e5e320b93e7076c862f438e', 'width': 1200}, 'variants': {}}]}
Deepseek R1 GRPO code open sourced 🤯
362
2025-01-22T11:11:56
https://i.redd.it/ryfnofs83jee1.png
eliebakk
i.redd.it
1970-01-01T00:00:00
0
{}
1i78sfs
false
null
t3_1i78sfs
/r/LocalLLaMA/comments/1i78sfs/deepseek_r1_grpo_code_open_sourced/
false
false
https://b.thumbs.redditm…CEs6cV14favc.jpg
362
{'enabled': True, 'images': [{'id': 'clhNbLsBecIiwYwQ18HfLTUbMHr94iaEx72pXDMvCVI', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/ryfnofs83jee1.png?width=108&crop=smart&auto=webp&s=008f449a57b38c98751f7c236f56a13112e41bf5', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/ryfnofs83jee1.png?width=216&crop=smart&auto=webp&s=ca6aa78423f1918c934fd296107d2a058782515c', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/ryfnofs83jee1.png?width=320&crop=smart&auto=webp&s=30094fb20beb43002b7b0d9d0457aaac820f0436', 'width': 320}, {'height': 337, 'url': 'https://preview.redd.it/ryfnofs83jee1.png?width=640&crop=smart&auto=webp&s=40e8bb4b0b8a7dd1b82c628a825c88559a17aff0', 'width': 640}, {'height': 506, 'url': 'https://preview.redd.it/ryfnofs83jee1.png?width=960&crop=smart&auto=webp&s=0221000e72520d27e419bd5bf877eb750de9edba', 'width': 960}, {'height': 570, 'url': 'https://preview.redd.it/ryfnofs83jee1.png?width=1080&crop=smart&auto=webp&s=44d033510c37cfd0e4a108f9e4ca14616a7f994b', 'width': 1080}], 'source': {'height': 1861, 'url': 'https://preview.redd.it/ryfnofs83jee1.png?auto=webp&s=2466e78558db98eb350061d44614d65502897c1b', 'width': 3525}, 'variants': {}}]}
How I Used GPT-O1 Pro to Discover My Autoimmune Disease (After Spending $100k and Visiting 30+ Hospitals with No Success)
499
**TLDR:**
* **Suffered from various health issues for 5 years, visited 30+ hospitals with no answers**
* **Finally diagnosed with axial spondyloarthritis through genetic testing**
* **Built a personalized health analysis system using GPT-O1 Pro, which actually suggested this condition earlier**
I'm a guy in my mid-30s who started having weird health issues about 5 years ago. Nothing major, but lots of annoying symptoms - getting injured easily during workouts, slow recovery, random fatigue, and sometimes the pain was so bad I could barely walk.
At first, I went to different doctors for each symptom. Tried everything - MRIs, chiropractic care, meds, steroids - nothing helped. I followed every doctor's advice perfectly. Started getting into longevity medicine thinking it might be early aging. Changed my diet, exercise routine, sleep schedule - still no improvement. The cause remained a mystery.
Recently, after a month-long toe injury wouldn't heal, I ended up seeing a rheumatologist. They did genetic testing and boom - diagnosed with axial spondyloarthritis. This was the answer I'd been searching for over 5 years.
Here's the crazy part - I fed all my previous medical records and symptoms into GPT-O1 Pro before the diagnosis, and it actually listed this condition as the top possibility!
This got me thinking - why didn't any doctor catch this earlier? Well, it's a rare condition, and autoimmune diseases affect the whole body. Joint pain isn't just joint pain, dry eyes aren't just eye problems. The usual medical workflow isn't set up to look at everything together.
So I had an idea: What if we created an open-source system that could analyze someone's complete medical history, including family history (which was a huge clue in my case), and create personalized health plans? It wouldn't replace doctors but could help both patients and medical professionals spot patterns.
Building my personal system was challenging:
1. Every hospital uses different formats and units for test results. Had to create a GPT workflow to standardize everything.
2. RAG wasn't enough - needed a large context window to analyze everything at once for the best results.
3. Finding reliable medical sources was tough. Combined official guidelines with recent papers and trusted YouTube content.
4. GPT-O1 Pro was best at root cause analysis, Google Note LLM worked great for citations, and Examine excelled at suggesting actions.
In the end, I built a system using Google Sheets to view my data and interact with trusted medical sources. It's been incredibly helpful in managing my condition and understanding my health better.
2025-01-22T11:12:33
https://www.reddit.com/r/LocalLLaMA/comments/1i78src/how_i_used_gpto1_pro_to_discover_my_autoimmune/
Dry_Steak30
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i78src
false
null
t3_1i78src
/r/LocalLLaMA/comments/1i78src/how_i_used_gpto1_pro_to_discover_my_autoimmune/
false
false
self
499
null
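On the first pain point (every hospital reporting in different units), the standardization step boils down to a conversion table per test; a tiny illustrative sketch with a made-up subset of conversions, not the author's actual pipeline and not medical advice:

```python
# Normalize lab values to one canonical unit per test.
UNIT_FACTORS = {
    ("glucose", "mg/dL"): 1.0,
    ("glucose", "mmol/L"): 18.0,  # 1 mmol/L glucose ~ 18 mg/dL
    ("crp", "mg/L"): 1.0,
    ("crp", "mg/dL"): 10.0,       # 1 mg/dL = 10 mg/L
}

def normalize(test: str, value: float, unit: str) -> float:
    """Convert a lab value to the canonical unit for that test."""
    return value * UNIT_FACTORS[(test, unit)]

print(normalize("glucose", 5.5, "mmol/L"))  # 99.0 mg/dL
```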
Deepseek-r1 going rogue with Ollama
1
This just happened when I tested the new version of deepseek-r1 8b on Ollama, with some basic queries about its own CoT process as well as a random question about a movie release date, to see how the CoT would handle something involving fetching a specific date. I was simultaneously running some pip installs in VS Code when Ollama seems to have picked up on it, causing DeepSeek to produce a nonstop CoT. It was quite freaky, to be honest. Has anyone seen something similar? It just went on and on - the log of one terminal was feeding input to Ollama...
2025-01-22T11:37:33
https://www.reddit.com/r/LocalLLaMA/comments/1i795wf/deepseekr1_going_rogue_with_ollama/
cybercyrus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i795wf
false
null
t3_1i795wf
/r/LocalLLaMA/comments/1i795wf/deepseekr1_going_rogue_with_ollama/
false
false
self
1
null
Best offline model for commercial use
1
Hi, I'm currently working on an AI agent which will use an LLM. What are the best LLMs to run locally? I need something which can be used for commercial purposes, can be fine-tuned, and can run locally. Also, what kind of GPU is needed just for inference, and what is needed to fine-tune those LLMs? Thank you all in advance.
2025-01-22T11:49:11
https://www.reddit.com/r/LocalLLaMA/comments/1i79cav/best_offline_model_for_commercial_use/
Giux99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i79cav
false
null
t3_1i79cav
/r/LocalLLaMA/comments/1i79cav/best_offline_model_for_commercial_use/
false
false
self
1
null
A question about DeepSeek R1 pricing and in general reasoning based LLM pricing
1
[removed]
2025-01-22T12:34:57
https://www.reddit.com/r/LocalLLaMA/comments/1i7a428/a_question_about_deepseek_r1_pricing_and_in/
liveforfiction
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7a428
false
null
t3_1i7a428
/r/LocalLLaMA/comments/1i7a428/a_question_about_deepseek_r1_pricing_and_in/
false
false
self
1
null
Repurposing ETH Mining Rigs for AI Workloads
1
[removed]
2025-01-22T12:42:01
[deleted]
1970-01-01T00:00:00
0
{}
1i7a8h0
false
null
t3_1i7a8h0
/r/LocalLLaMA/comments/1i7a8h0/repurposing_eth_mining_rigs_for_ai_workloads/
false
false
default
1
null
Repurposing ETH Mining Rigs for AI Workloads
1
[removed]
2025-01-22T12:45:07
https://www.reddit.com/r/LocalLLaMA/comments/1i7aaed/repurposing_eth_mining_rigs_for_ai_workloads/
Gioxxy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7aaed
false
null
t3_1i7aaed
/r/LocalLLaMA/comments/1i7aaed/repurposing_eth_mining_rigs_for_ai_workloads/
false
false
self
1
null
Can DeepSeek-R1 be used as a better frontend for Claude?
1
[removed]
2025-01-22T12:46:45
https://www.reddit.com/r/LocalLLaMA/comments/1i7abfo/can_deepseekr1_be_used_as_a_better_frontend_for/
dT4YGAKxT6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7abfo
false
null
t3_1i7abfo
/r/LocalLLaMA/comments/1i7abfo/can_deepseekr1_be_used_as_a_better_frontend_for/
false
false
self
1
null
Lightweight web scraping with ollama
32
Hi LocalLLaMA people,
It’s been almost 6 months since I’ve shared my lightweight library for easy web scraping. Today, I want to share some updates, starting with the fact that **Parsera now officially supports Ollama!**
Some notable changes include:
* **Cookies support** for accessing login-required content
* **Enhanced page rendering** to ensure no content is missed
* **Custom script execution**, allowing scraping of interactive elements
* **Page chunking**, to process pages that exceed the model’s context limits
* Plus many other minor enhancements and fixes
Check it out on GitHub: [https://github.com/raznem/parsera](https://github.com/raznem/parsera)
P. S. We are now using Parsera as the foundation for our upcoming Scraping Agent, so hope I’ll share something exciting soon.
2025-01-22T12:49:48
https://www.reddit.com/r/LocalLLaMA/comments/1i7adec/lightweight_web_scraping_with_ollama/
Financial-Article-12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7adec
false
null
t3_1i7adec
/r/LocalLLaMA/comments/1i7adec/lightweight_web_scraping_with_ollama/
false
false
self
32
{'enabled': False, 'images': [{'id': 'XiHC3l1zAwCmGAPyzE3JfRwMM5v6mYJtQnXJi7og_o4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/36rF9YETkivuXVSIbLTODTa_A-EbXoZPq5Xu4i60tvI.jpg?width=108&crop=smart&auto=webp&s=233ec7cfa27470418f75f82253b929ff348aa5c7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/36rF9YETkivuXVSIbLTODTa_A-EbXoZPq5Xu4i60tvI.jpg?width=216&crop=smart&auto=webp&s=49045dc7d0fe923666d396b9e0efaf29a2ed2456', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/36rF9YETkivuXVSIbLTODTa_A-EbXoZPq5Xu4i60tvI.jpg?width=320&crop=smart&auto=webp&s=c8530888e2412082113e83ba3d5362e0a095a99f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/36rF9YETkivuXVSIbLTODTa_A-EbXoZPq5Xu4i60tvI.jpg?width=640&crop=smart&auto=webp&s=54ff04bfd8df4a7da2bf5985feb426a2f64f0f5c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/36rF9YETkivuXVSIbLTODTa_A-EbXoZPq5Xu4i60tvI.jpg?width=960&crop=smart&auto=webp&s=7b59cfc3b3f5e305ff2799bc7038ca50cdb5ec2f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/36rF9YETkivuXVSIbLTODTa_A-EbXoZPq5Xu4i60tvI.jpg?width=1080&crop=smart&auto=webp&s=5c20758c6dc5249ec45abe9073bd2f5389015d0b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/36rF9YETkivuXVSIbLTODTa_A-EbXoZPq5Xu4i60tvI.jpg?auto=webp&s=a5d6f0c66cc602f31245705aec018366ef25aa3c', 'width': 1200}, 'variants': {}}]}
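For anyone curious what usage looks like, here is a sketch based on the project README: Parsera driven by a local Ollama model through LangChain. The exact class and argument names are assumptions; check the repo for the current API:

```python
# Scrape structured fields from a page with Parsera + a local Ollama model.
from langchain_ollama import ChatOllama
from parsera import Parsera

llm = ChatOllama(model="llama3.1", temperature=0)
scraper = Parsera(model=llm)
result = scraper.run(
    url="https://news.ycombinator.com/",
    elements={"Title": "News title", "Points": "Number of points"},
)
print(result)
```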
I am compelled to create this post (Extreme rant: very sensitive!)
1
[removed]
2025-01-22T13:04:21
https://www.reddit.com/r/LocalLLaMA/comments/1i7an1p/i_am_compelled_to_create_this_post_extreme_rant/
BLsaw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7an1p
false
null
t3_1i7an1p
/r/LocalLLaMA/comments/1i7an1p/i_am_compelled_to_create_this_post_extreme_rant/
false
false
nsfw
1
null
Editing <thinking> process is a super effective way to guide model output
30
R1 is my first real foray into reasoning models. One thing I have gotten in the habit of with Claude is to edit my prior prompt to guide model output, rather than engaging in a long back and forth. This is even more powerful with reasoning models, as the thinking tags act as a scaffold for model output. You can edit, or simply add on, new thinking content in the reasoning section and re-run the completion to get what you want. In my experience, it is a bit more efficient than modifying the prompt directly, as it's already interpreted the prompt and so the thinking tags are closer to the output than the prompt is (so less of a chance to misinterpret the user instructions, and more of an opportunity to subtly guide output). Here is an example. I'm a scientist, so my default test of the model is to have it output a classic model in evolutionary biology, the Price Equation applied to multilevel selection. The details don't really matter here, but in the revision, I simply added a thinking tag telling it to output a numerical example. This is a coarse example that definitely could have been accomplished in the prompt, but I figured for this example something coarse is probably easier to understand by those unfamiliar with the subject than something subtle. Still, you get the idea! For this test I am using R1-70b distill, q8, Mac Studio in LM Studio. **Version 1.** **User** Use the Price equation to partition selection acting on individual bees in a colony from that acting on the colony as a whole. **Assistant** **<think>**First, I remember that the Price equation is used in evolutionary biology to partition selection into different components. It's about HOW selection acts on traits, whether at the individual level or group level. So, applying it here: we have a honeybee colony with many workers. Each bee has some trait—maybe foraging behavior—that varies among them. The success of the colony depends on how good the bees are at this trait. The Price equation basically breaks down change in a trait into two parts: selection and transmission. Wait, no—I think it's more about genetic variation within groups and between groups. Oh right, the equation is Δz = V\_S + VT. Wait, actually, the Price equation formula is typically written as: Δz̄ = (Cov(w, z) / w̄ ) Where w is fitness, z is the trait. But when applied hierarchically, it can be split into within-group and between-group components, right? So in this case, individual selection would act on the variation among bees within a colony, affecting their relative contributions (maybe how many offspring each contributes). Colony-level selection would be about how well each colony does compared to others based on average traits. For honeybees, maybe it's about how efficient foragers are. Some colonies have more efficient workers on average, so those colonies survive or propagate better. But within each colony, bees with better traits might contribute more via reproduction, but wait, in bees, queens are the ones who reproduce, not workers. Oh, that complicates things. Wait, so worker bees don't have direct reproduction since they're sterile. So individual selection on them is...Hmm. Maybe this is a way to model group-level adaptation through individual contributions, even if those individuals aren't reproducing themselves. Or perhaps the Price equation can still be applied to see how colony-level fitness aggregates from individual behaviors. So structure-wise: Each colony has bees with trait z\_i. The average of z\_i in a colony is Z\_j. 
Colony j's fitness W\_j depends on Z\_j. The overall change in trait would come from two sources: selection within colonies (how each bee's trait affects the colony's success) and selection between colonies based on their averages. Putting it into equations, I need to show the Price equation for both individual and group levels. But wait, since worker bees don't reproduce directly, what constitutes 'fitness' w for them? Maybe it's about how much they contribute to the colony's fitness. So each bee contributes to W\_j based on z\_i. So in that case, within-colony selection is based on individual contributions to Wj, and between-colony selection is based on colonies with higher average Zj having higher fitness. Yeah, so I think that's how it breaks down: part of the equation captures variation among bees in how they help their colony (within-colony), and another part captures how colonies' averages affect their success (between-colonies). **</think>** To apply the Price Equation to partition selection acting on individual bees within a hive versus selection at the colony level, we consider the following model: Let ( z\_i ) represent an altruistic trait of bee ( i ) in colony ( j ). The fitness ( w\_{i} ) contributes to mortality/survival rates depending on this trait. Colony reproductive success is proportional to ( w\_j = average(w\_i for all bees in colony j) ). The Price Equation quantifies the change in mean trait from one generation to the next as: \[ \\Delta z̄ = \\frac{\\text{Cov}(z, w)}{\\bar{w}} + (\\bar{w}\_j - \\bar{w}) \] Here, the first term represents individual-level selection: covariation between bee trait ( z\_i ) and their contribution to colony success. The second term is colony-level selection, where colonies with higher average ( w\_j ) contribute disproportionately more to new colonies. Thus, selection on bees has two components: 1. **Within-colony**: The effect of each bee's individual traits on its own survival/efficiency. 2. **Between-colonies**: The fitness difference between colonies due to their collective trait averages. This partition highlights how both levels contribute to evolutionary change in the population, even if worker bees themselves do not reproduce directly. **Version 2** I just added this to the end of the thinking tags, deleted the output below the thinking tags, and then re-ran the completion in LM Studio: **Once I have derived the general case, I will provide a fully worked numerical example, to make it clear how this is used. </think>** To apply the Price equation to understand selection acting on individual bees versus the colony as a whole, we need to partition selective forces into within-group and between-group components. Let’s consider a population of honeybee colonies where each colony comprises multiple worker bees exhibiting some heritable trait ( z ) (e.g., foraging efficiency). The fitness of a colony ( j ), denoted ( W\_j ), depends on the average value of ( z ) across its workers, as well as potentially other factors such as environmental conditions and inter-colony competition. **Step 1: Define Variables** * Let ( Z\_j = \\frac{1}{N\_j} \\sum\_{i=1}\^{N\_j} z\_i ), where ( N\_j ) is the number of workers in colony ( j ). * The aggregate fitness of colony ( j ) is some function of ( Z\_j ), i.e., ( W\_j = f(Z\_j) ).
**Step 2: Apply the Price Equation** The standard form of the Price equation is: \[ \\Delta z = \\frac{1}{\\overline{W}} \\sum\_{j=1}\^{M} (W\_j - \\overline{W}) Z\_j + \\frac{1}{\\overline{W}} \\sum\_{j=1}\^{M} W\_j \\left( z\_j - Z\_j \\right). \] Here, ( z ) is the trait of interest, ( W ) is fitness, and a bar denotes an average over all colonies. **Step 3: Interpretation** The first term in the equation represents selection acting at the colony level (between-group selection). This measures how differences in the average trait (( Z\_j )) between colonies contribute to differences in colony fitness (( W\_j )). Colonies with higher values of ( Z\_j ) will have higher fitness, so this term quantifies multilevel selection at the colony level. The second term represents within-group selection. This measures how much individual deviations from the group mean (( z\_j - Z\_j )) are associated with differences in individual contributions to colony success. If certain workers contribute more to colony-level traits (e.g., foraging efficiency), this difference contributes to anisman evolutionary outcome at this level. **Step 4: Connection Back to Honeybees** For honeybees, the within-colony term can be thought of as selection acting on worker behaviors that enhance colony success. For example, more efficient foragers contribute disproportionately to colony resources and survival. The between-colony term reflects differences in overall colony performance based on the average traits of their workers. By partitioning these effects using the Price equation, we can quantify the relative strength of individual versus group selection in shaping evolutionary outcomes in social bees. **Step 5: Numerical Example** Assume we have three colonies with different mean foraging efficiencies (( Z\_1 = 3 ), ( Z\_2 = 4 ), ( Z\_3 = 5 )) on a scale where higher values represent better efficiency. Let the fitness of each colony be ( W\_1 = 10 ), ( W\_2 = 20 ), and ( W\_3 = 30 ). Suppose the mean fitness across all colonies is ( \\overline{W} = (10 + 20 + 30)/3 = \\frac{60}{3} = 20 ). Compute each term: 1. **Between-group selection:** \[ \\frac{1}{\\overline{W}} \\sum\_{j=1}\^{M} (W\_j - \\overline{W}) Z\_j = \\frac{1}{20} \[(10-20) \\cdot 3 + (20-20) \\cdot 4 + (30-20) \\cdot 5\] = \\frac{1}{20} \[(-10)\\cdot3 + 0\\cdot4 + 10\\cdot5\] = \\frac{1}{20} (-30 + 50) = \\frac{20}{20} = 1. \] 1. **Within-group selection:** Assume within-colony deviations (( z\_j - Z\_j )) are uncorrelated with colony fitness for simplicity, so this term is negligible (or zero). Thus, the overall change in ( z ) comes entirely from between-group selection:( \\Delta z = 1 + 0 = 1 ).
2025-01-22T13:06:20
https://www.reddit.com/r/LocalLLaMA/comments/1i7aoet/editing_thinking_process_is_a_super_effective_way/
Berberis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7aoet
false
null
t3_1i7aoet
/r/LocalLLaMA/comments/1i7aoet/editing_thinking_process_is_a_super_effective_way/
false
false
self
30
null
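Mechanically, the trick is to re-run the completion with the edited <think> prefix already in place, so the model continues from the scaffold instead of starting over. A sketch against a raw completion endpoint (LM Studio and llama.cpp expose one); the chat template below is illustrative, not the exact R1 template:

```python
# Re-run a completion with an edited reasoning prefix prefilled.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

prompt = (
    "<|User|>Use the Price equation to partition selection acting on "
    "individual bees in a colony from that acting on the colony as a whole."
    "<|Assistant|><think>\n"
    "...original reasoning, kept or edited as needed...\n"
    "Once I have derived the general case, I will provide a fully worked "
    "numerical example, to make it clear how this is used.\n"
    "</think>\n"
)
resp = client.completions.create(
    model="deepseek-r1-distill-llama-70b",
    prompt=prompt,
    max_tokens=1024,
)
print(resp.choices[0].text)  # continues from the edited scaffold
```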
Memory bandwidth of Nvidia RTX Laptop graphics compared
52
2025-01-22T13:09:26
https://i.redd.it/3ec8i695ojee1.png
Balance-
i.redd.it
1970-01-01T00:00:00
0
{}
1i7aqhg
false
null
t3_1i7aqhg
/r/LocalLLaMA/comments/1i7aqhg/memory_bandwidth_of_nvidia_rtx_laptop_graphics/
false
false
https://b.thumbs.redditm…c7bBP6QwM4Ik.jpg
52
{'enabled': True, 'images': [{'id': 'CByCxQC2bBTqJ5yzvR9DeSLj68FYO_8QU7xofAuqfHI', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/3ec8i695ojee1.png?width=108&crop=smart&auto=webp&s=dfbd109c576d2e6d4c142a06c0e579d82d962c9a', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/3ec8i695ojee1.png?width=216&crop=smart&auto=webp&s=a7e84c52ec38eb433870bd9021a4dfea80bb4943', 'width': 216}, {'height': 194, 'url': 'https://preview.redd.it/3ec8i695ojee1.png?width=320&crop=smart&auto=webp&s=186988e87ba70b0a4287e18317ffcef2e456ffba', 'width': 320}, {'height': 388, 'url': 'https://preview.redd.it/3ec8i695ojee1.png?width=640&crop=smart&auto=webp&s=3fbd835d18c9f6df088cfd0931409590eaf41d47', 'width': 640}, {'height': 582, 'url': 'https://preview.redd.it/3ec8i695ojee1.png?width=960&crop=smart&auto=webp&s=fe471d32a654f1751865a1abdf6126d2ba8dd13d', 'width': 960}, {'height': 655, 'url': 'https://preview.redd.it/3ec8i695ojee1.png?width=1080&crop=smart&auto=webp&s=157d8c8282544bad5da65a88d9bb3409354b3a8e', 'width': 1080}], 'source': {'height': 1342, 'url': 'https://preview.redd.it/3ec8i695ojee1.png?auto=webp&s=99a83b4e0154d6ce7561bfcb30631aeef9501c69', 'width': 2212}, 'variants': {}}]}
About DeepSeek R1 Zero
1
[removed]
2025-01-22T13:17:10
https://www.reddit.com/r/LocalLLaMA/comments/1i7avut/about_deepseek_r1_zero/
Berion-Reviador
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7avut
false
null
t3_1i7avut
/r/LocalLLaMA/comments/1i7avut/about_deepseek_r1_zero/
false
false
self
1
null
Suggestion needed: future proofing set up
1
[removed]
2025-01-22T13:22:10
https://www.reddit.com/r/LocalLLaMA/comments/1i7azcq/suggestion_needed_future_proofing_set_up/
kpolany
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7azcq
false
null
t3_1i7azcq
/r/LocalLLaMA/comments/1i7azcq/suggestion_needed_future_proofing_set_up/
false
false
self
1
null
I did a quick test of MacBook M4 Max 128 GB token/second throughput across a few popular local LLMs (in the MLX format)
87
I'm sharing this in case you were wondering what kind of throughput you might expect to get on a machine like this. E.g. if you are considering whether it's worth buying or not (as for me, I have no regrets, I'm loving this beast). Same single query, set the context to 8K, tok/sec numbers are reported below, as measured by LMStudio:
LLaMA 3.2 3B 4bit -- 181
LLaMA 3 8B 8bit -- 55
LLaMA 3.3 70B 4bit -- 11.8
Mistral Large 123B 4bit -- 6.6
Mistral Nemo 12B 4bit -- 63
Mistral Nemo 12B 8bit -- 36
Mistral Small 22B 4bit -- 34.5
Mistral Small 22B 8bit -- 19.6
Qwen2.5 32B 4bit -- 24
Qwen2.5 32B 8bit -- 13.5
Qwen2.5 72B 4bit -- 10.9
Qwen2.5 72B 8bit -- 6.2
WizardLM-2 8x22B 4bit -- 19.4!!
For comparison, here are some numbers obtained in the same setting on my other MacBook, M1 Pro with 32 GB:
Mistral Nemo 12B 4bit -- 22.8
Mistral Small 22B 4bit -- 12.9
Qwen2.5 32B 4bit -- 8.8
Hope it's interesting / useful.
2025-01-22T13:28:52
https://www.reddit.com/r/LocalLLaMA/comments/1i7b3r1/i_did_a_quick_test_of_macbook_m4_max_128_gb/
Pure_Refrigerator988
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7b3r1
false
null
t3_1i7b3r1
/r/LocalLLaMA/comments/1i7b3r1/i_did_a_quick_test_of_macbook_m4_max_128_gb/
false
false
self
87
null
DeepSeek in k8s
3
We have 6 H100 servers; as you know, each one has 8 (80 GB) GPUs. The biggest model we have is Llama 405B, which takes 8 GPUs, so it fits on one server. Now, DeepSeek needs more than 8 GPUs - does anybody know a way to run the model across multiple H100 servers? PS: our stack is fully k8s.
2025-01-22T13:34:15
https://www.reddit.com/r/LocalLLaMA/comments/1i7b7hp/deepseek_in_k8s/
No-Emphasis6569
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7b7hp
false
null
t3_1i7b7hp
/r/LocalLLaMA/comments/1i7b7hp/deepseek_in_k8s/
false
false
self
3
null
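One common route is vLLM on a Ray cluster spanning two of the servers (e.g. via the KubeRay operator), with tensor parallelism inside each node and pipeline parallelism across nodes. A sketch under those assumptions - the parallel sizes are illustrative, not a tested DeepSeek config, and pipeline parallelism in the offline LLM class needs a recent vLLM:

```python
# Split the model 8-way per node and 2-way across two H100 nodes.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-R1",
    tensor_parallel_size=8,     # 8 GPUs per server
    pipeline_parallel_size=2,   # across 2 servers in the Ray cluster
    trust_remote_code=True,
)
out = llm.generate(["Hello"], SamplingParams(max_tokens=32))
print(out[0].outputs[0].text)
```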
Serverless Endpoint /w deepseek-ai/DeepSeek-R1 on Runpod?
1
[removed]
2025-01-22T13:38:51
https://www.reddit.com/r/LocalLLaMA/comments/1i7bal5/serverless_endpoint_w_deepseekaideepseekr1_on/
LLattus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7bal5
false
null
t3_1i7bal5
/r/LocalLLaMA/comments/1i7bal5/serverless_endpoint_w_deepseekaideepseekr1_on/
false
false
self
1
null
Help with model for information extraction.
2
Hello folks! I'm new to LLM usage aside from chat/assistant things. I'm trying to create an automation which would receive an unstructured file and output the information that I need in a structured format (like JSON). I tried the Phi4 and Qwen2:7b models, but they don't seem to output what I ask; they just give me a summary. I got pretty good results using Gemini-2.0-flash-exp, but the output is only 8K, which is unsuitable for processing files which, on average, have 200K tokens. So I was wondering if there is a model which could do this task and also has a bigger output size, like 128K or more. Thanks!
2025-01-22T13:49:25
https://www.reddit.com/r/LocalLLaMA/comments/1i7bi1w/help_with_model_for_information_extraction/
afonsolage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7bi1w
false
null
t3_1i7bi1w
/r/LocalLLaMA/comments/1i7bi1w/help_with_model_for_information_extraction/
false
false
self
2
null
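One lever worth trying before switching models: constrain the local model to emit JSON instead of a summary. The ollama client accepts format="json"; the schema in the prompt is an illustrative assumption:

```python
# Force structured JSON output from a local model via Ollama.
import json
import ollama

doc = open("unstructured.txt", encoding="utf-8").read()
resp = ollama.chat(
    model="qwen2.5:7b",
    messages=[{
        "role": "user",
        "content": 'Extract {"name": str, "date": str, "amount": float} '
                   "from the document below. Reply with JSON only.\n\n" + doc,
    }],
    format="json",  # constrains decoding to valid JSON
)
data = json.loads(resp["message"]["content"])
print(data)
```

Note that an 8K output cap is separate from input context: extraction usually needs a large input window but only a small structured output, so chunking the 200K-token files and merging the per-chunk JSON may matter more than raw output size.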
New AI IDE, Trae
1
2025-01-22T13:50:06
https://v.redd.it/gq6tnnwhvjee1
ozgrozer
v.redd.it
1970-01-01T00:00:00
0
{}
1i7bij4
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gq6tnnwhvjee1/DASHPlaylist.mpd?a=1740145822%2CZWI2NzliODY0MDYyMTlmOWFiODNkZDY3NTA1MmJmYjZlZWVlZWRhYWVmMWYxOGE0MmYyMjhjNjhhYWI2N2VhNg%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/gq6tnnwhvjee1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/gq6tnnwhvjee1/HLSPlaylist.m3u8?a=1740145822%2CNWI1NDE0YTE1MDM3NTk0NTY4NmE5NTM2YTg1OWU4ZWFlMjJhZTg0MGIyMWJhMTRjOTE3MzA1OGQ5Y2ZlYzY0MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gq6tnnwhvjee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1600}}
t3_1i7bij4
/r/LocalLLaMA/comments/1i7bij4/new_ai_ide_trae/
false
false
https://external-preview…0ab4669642856c24
1
{'enabled': False, 'images': [{'id': 'aG9pcHdud2h2amVlMc2joKsfluBg59-vRd-_kihMffjdNW8R1p-eEeOoKZY-', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/aG9pcHdud2h2amVlMc2joKsfluBg59-vRd-_kihMffjdNW8R1p-eEeOoKZY-.png?width=108&crop=smart&format=pjpg&auto=webp&s=ff6e02136c014b5a7c1c7c6728fbaa0523eea37f', 'width': 108}, {'height': 145, 'url': 'https://external-preview.redd.it/aG9pcHdud2h2amVlMc2joKsfluBg59-vRd-_kihMffjdNW8R1p-eEeOoKZY-.png?width=216&crop=smart&format=pjpg&auto=webp&s=b662687d82ec19d8cf0b2bf1b43f92de6d067d57', 'width': 216}, {'height': 216, 'url': 'https://external-preview.redd.it/aG9pcHdud2h2amVlMc2joKsfluBg59-vRd-_kihMffjdNW8R1p-eEeOoKZY-.png?width=320&crop=smart&format=pjpg&auto=webp&s=38e190ac2eec74bd34d4945d4cfab7dd5b9d5110', 'width': 320}, {'height': 432, 'url': 'https://external-preview.redd.it/aG9pcHdud2h2amVlMc2joKsfluBg59-vRd-_kihMffjdNW8R1p-eEeOoKZY-.png?width=640&crop=smart&format=pjpg&auto=webp&s=5591c1932d2fd27c7891e1e0cfa107223f2f0309', 'width': 640}, {'height': 648, 'url': 'https://external-preview.redd.it/aG9pcHdud2h2amVlMc2joKsfluBg59-vRd-_kihMffjdNW8R1p-eEeOoKZY-.png?width=960&crop=smart&format=pjpg&auto=webp&s=43694bdff15d5f9adb2629c39bbea8133e274470', 'width': 960}, {'height': 729, 'url': 'https://external-preview.redd.it/aG9pcHdud2h2amVlMc2joKsfluBg59-vRd-_kihMffjdNW8R1p-eEeOoKZY-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=665205782edaf3f48a64c530ad1557b7a03356c5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aG9pcHdud2h2amVlMc2joKsfluBg59-vRd-_kihMffjdNW8R1p-eEeOoKZY-.png?format=pjpg&auto=webp&s=4c7dc93302808e516f7e13bebe6495684269b726', 'width': 1600}, 'variants': {}}]}
Any way to issue permanent commands?
1
[removed]
2025-01-22T14:02:38
https://www.reddit.com/r/LocalLLaMA/comments/1i7brny/any_way_to_issue_permanent_commands/
foulplayjamm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7brny
false
null
t3_1i7brny
/r/LocalLLaMA/comments/1i7brny/any_way_to_issue_permanent_commands/
false
false
self
1
null
Help me justify upgrading 64gb M1 Max MBP to 128gb M4?
4
I have a maxed-out 64 GB M1 Max MBP with 8 TB of storage. I always wanted more RAM, and in 2023 and early 2024 it was to run larger models, but now it seems like:
A) inference in the cloud has gotten so cheap, plummeting 80-90% from the hourly cost in early 2023
B) smaller models have gotten so much better, for my super private offline conversations
C) the resale value of the M1 Max is not nearly close to the $7k I paid for it, or the $7k I would need for a maxed-out M4
D) the speed of RAM, processing, and other use cases doesn't seem thaaaat much of an improvement - like maybe $1,000 more, but not $5,000 more
E) the extra-large models wouldn't run on 128 GB of RAM either
I do like new things, and this machine, the M1, will only get older. So I'm curious about other ways to justify it - like, what am I missing?
2025-01-22T14:05:58
https://www.reddit.com/r/LocalLLaMA/comments/1i7bu7c/help_me_justify_upgrading_64gb_m1_max_mbp_to/
thetaFAANG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7bu7c
false
null
t3_1i7bu7c
/r/LocalLLaMA/comments/1i7bu7c/help_me_justify_upgrading_64gb_m1_max_mbp_to/
false
false
self
4
null
Mass email communication management using LLM
1
[removed]
2025-01-22T14:13:18
https://www.reddit.com/r/LocalLLaMA/comments/1i7bzmm/mass_email_communication_management_using_llm/
No-Temporary9401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7bzmm
false
null
t3_1i7bzmm
/r/LocalLLaMA/comments/1i7bzmm/mass_email_communication_management_using_llm/
false
false
self
1
null
Mass email storage, management and visualization
1
[removed]
2025-01-22T14:16:52
https://www.reddit.com/r/LocalLLaMA/comments/1i7c2c5/mass_email_storage_management_and_visualization/
nabroski18
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7c2c5
false
null
t3_1i7c2c5
/r/LocalLLaMA/comments/1i7c2c5/mass_email_storage_management_and_visualization/
false
false
self
1
null
AMA with llama.cpp quantization god Iwan Kawrakow at FOSDEM
1
[removed]
2025-01-22T14:27:59
https://www.reddit.com/r/LocalLLaMA/comments/1i7cawz/ama_with_llamacpp_quantization_god_iwan_kawrakow/
asf-rvs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7cawz
false
null
t3_1i7cawz
/r/LocalLLaMA/comments/1i7cawz/ama_with_llamacpp_quantization_god_iwan_kawrakow/
false
false
self
1
{'enabled': False, 'images': [{'id': 'cgSmh06wEoky2xD_QDCRBAGsBvTpB_i0pA2iS6WSslI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P3lyfhnClpqOXoYr5hnW1nzsKvn-H05FYkcfYIDxqrU.jpg?width=108&crop=smart&auto=webp&s=e20724b837a14c3dc4539c2b1e94263a68911e9d', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/P3lyfhnClpqOXoYr5hnW1nzsKvn-H05FYkcfYIDxqrU.jpg?auto=webp&s=2f50f05544d563f3228af7801e140e17d8bbc2d4', 'width': 200}, 'variants': {}}]}
Reasoning LLM vs standard LLM
1
[removed]
2025-01-22T14:36:57
https://www.reddit.com/r/LocalLLaMA/comments/1i7chs8/reasoning_llm_vs_standard_llm/
FormerNegotiation428
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7chs8
false
null
t3_1i7chs8
/r/LocalLLaMA/comments/1i7chs8/reasoning_llm_vs_standard_llm/
false
false
self
1
null
AMD HX370 LLM performance
6
I just got an AMD HX 370-based MiniPC, and at this time (January 2025), it's not really suitable for serious LLM work. The NPU isn't supported even by AMD's ROCm, so it's basically useless. CPU-based inference with ollama and deepseek-r1:14b yields 7.5 tok/s. GPU-based inference with llama.cpp and the Vulkan API yields almost the same result, 7.8 tok/s (while leaving the CPU cores free to do other work). The similarity of the results suggests that memory bandwidth is the probable bottleneck. I did these tests on a stock configuration with LPDDR5X at 7500 MT/s, arranged in 4 channels of 8 GB; each channel's bus is 32-bit, so it's 128 bits of total width. AIDA64 reports less than 90 GB/s memory read performance. AMD calls it an "AI" chip, but - no, it's not. At least not until drivers start supporting the NPU. OTOH, by every other benchmark, it's blazing fast!
2025-01-22T14:38:34
https://www.reddit.com/r/LocalLLaMA/comments/1i7cj11/amd_hx370_llm_performance/
ivoras
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7cj11
false
null
t3_1i7cj11
/r/LocalLLaMA/comments/1i7cj11/amd_hx370_llm_performance/
false
false
self
6
null
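The bandwidth-bottleneck reading checks out on the back of an envelope: each generated token has to stream the full set of weights through memory once, so tok/s is capped near bandwidth divided by weight bytes. A quick sketch (the 9 GB weight size is a rough assumption for a 14B model at 4-bit):

```python
# Memory-bound throughput ceiling for deepseek-r1:14b on the HX 370.
bandwidth_gb_s = 90.0  # AIDA64 read bandwidth from the post
weights_gb = 9.0       # ~14B params at ~4-bit quantization (assumption)

ceiling = bandwidth_gb_s / weights_gb
print(f"theoretical ceiling: {ceiling:.1f} tok/s")        # ~10 tok/s
print(f"observed 7.8 tok/s is {7.8 / ceiling:.0%} of it")  # ~78%
```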
Running Phi-4 on NPU?
2
Hi guys, I'm new to this and I would like to know if it's possible to run the Phi-4 model from Microsoft on an NPU. I installed this model on one of my laptops, which has an RTX 4090 GPU, and it runs fantastically. I tried this model on my other laptop, which doesn't have a GPU but has an NPU, and the NPU is not used at all by ollama. It's only using the CPU, and it's too slow to be usable. Any idea what I can do to make use of my NPU? Thanks a lot in advance!
2025-01-22T14:42:08
https://www.reddit.com/r/LocalLLaMA/comments/1i7clss/running_phi4_on_npu/
LocSta29
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7clss
false
null
t3_1i7clss
/r/LocalLLaMA/comments/1i7clss/running_phi4_on_npu/
false
false
self
2
null
Reasoning LLM vs Standard LLM
5
With the new "generation" of reasoning LLMs, I'm starting to ask myself: are reasoning models always the best fit? I can understand the use of reasoning models for tasks like math or logic puzzles, but what about RAG (agentic or vanilla)? Are you comparing the precision of your RAG system between two comparable standard and reasoning LLMs? DeepSeek V3 vs R1: which one do you use and why? Is V3 to be considered outdated?
2025-01-22T14:49:19
https://www.reddit.com/r/LocalLLaMA/comments/1i7cr85/reasoning_llm_vs_standard_llm/
SK33LA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7cr85
false
null
t3_1i7cr85
/r/LocalLLaMA/comments/1i7cr85/reasoning_llm_vs_standard_llm/
false
false
self
5
null