title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
FUTURE OF CLOSED VS OPEN SOURCE AI | 4 | # I know room-temperature-IQ andies will try to "erm, actually" me, and the cope after seeing Chinese AI is just too much, but hear me out
- Innovation is good for everyone and better when it's open sourced
- Competition is good for the end consumer
- Any model that can reach state-of-the-art results with massive cost reductions is preferable, period
- Any model that is free and open source is preferable over closed and paid ones for big orgs and companies. Even if you are in your mom's basement, you have no idea how quickly prices add up when you are inferencing a 2x more expensive model and serving it to a million people
- The Chinese are more serious about openness than US labs, and this will pay off in the long run (no matter who has better models, Grok or OpenAI or anyone else): any open model can be improved and tested by everyone very quickly, driving the cost down even more
- More people are waking up to the idea of open-source AI and privacy, so closed labs will be at a disadvantage in the future; the lack of trust is 100x worse with tech as highly personalized as LLMs
- The Chinese will do everything to keep this open source just to disrupt the market, and Zuck will also keep Llama open source for the time being
- Closed AI labs can only win if their models are much, much more intelligent than open ones while also being cheaper to run
- Companies use Llama instead of SOTA all the time because it is free and can be adapted however you want, and this will be the case until AI models are so cheap to run that using the smartest model through an API costs a company just a few dollars a day | 2025-01-31T16:46:15 | https://www.reddit.com/r/LocalLLaMA/comments/1iej3uf/future_of_closed_vs_open_source_ai/ | bilalazhar72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iej3uf | false | null | t3_1iej3uf | /r/LocalLLaMA/comments/1iej3uf/future_of_closed_vs_open_source_ai/ | false | false | self | 4 | null
DeepSeek-R1 70B: Runs best on H100 clusters - Me: Spins up my 5-year-old PC ‘Yeah, that should be fine. | 1 | [removed] | 2025-01-31T16:48:49 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1iej61t | false | null | t3_1iej61t | /r/LocalLLaMA/comments/1iej61t/deepseekr1_70b_runs_best_on_h100_clusters_me/ | false | false | default | 1 | null |
||
Has anyone run R1 with flash attention? | 10 | I see a lot of posts about hacky R1 rigs, which is very interesting, but has anyone run it with flash attention on a long context yet? Llama.cpp has FA implemented for CPU so it's compatible with the various hacky rigs. Curious about the performance and other characteristics at that scale. | 2025-01-31T16:48:50 | https://www.reddit.com/r/LocalLLaMA/comments/1iej62q/has_anyone_run_r1_with_flash_attention/ | 1ncehost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iej62q | false | null | t3_1iej62q | /r/LocalLLaMA/comments/1iej62q/has_anyone_run_r1_with_flash_attention/ | false | false | self | 10 | null
DeepSeek-R1 70B: Runs best on H100 clusters Me: Spins up my 10-year-old PC ‘Yeah, that should be fine. | 1 | 2025-01-31T16:49:46 | Content-Cookie-7992 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iej6wq | false | null | t3_1iej6wq | /r/LocalLLaMA/comments/1iej6wq/deepseekr1_70b_runs_best_on_h100_clusters_me/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'fJYjqRnC3HFR0diMxAEk1xRogCsbK9weZ9jTBQfCd3I', 'resolutions': [{'height': 137, 'url': 'https://preview.redd.it/f8f8424tzcge1.jpeg?width=108&crop=smart&auto=webp&s=47bdb96a256a5b9e789faf26ced4e0eb776c0e22', 'width': 108}, {'height': 274, 'url': 'https://preview.redd.it/f8f8424tzcge1.jpeg?width=216&crop=smart&auto=webp&s=693c4440ba7ad1e606baa511dca24f50fe4c3e5b', 'width': 216}, {'height': 407, 'url': 'https://preview.redd.it/f8f8424tzcge1.jpeg?width=320&crop=smart&auto=webp&s=517d2692fcf105bb09d564b45e990fe5da0a4af3', 'width': 320}], 'source': {'height': 636, 'url': 'https://preview.redd.it/f8f8424tzcge1.jpeg?auto=webp&s=28ece07c525e5136febc75b2e927bf257f5daa1a', 'width': 500}, 'variants': {}}]} |
|||
What is the best VS code AI extension? | 9 | What is the best AI VS Code extension in your opinion? Which one do you use or have used before? Why did you switch, and which extension did you choose to go with? I am mainly interested in autocompletion, and preferably free. I use Codeium for autocompletion; it is absolutely free, but I think it is far from being the best. Not sure which model it uses for autocompletion, probably GPT-4, the non-o version. I heard that Cursor AI is great, but it's like an entirely new code editor and I don't want to switch, even though it is based on VS Code and very similar. Sometimes I use DeepSeek V3 on their website; it really helps, not just with solving problems I encounter and can't solve myself, but also with showing new ways to program stuff - I look at its code and learn new things. I think having DeepSeek V3 as a code extension would be the best choice for me, since it is free and very capable at code, but I don't know of such an extension. So, what is the best VS Code AI extension as of January 2025? Thanks in advance. | 2025-01-31T16:51:21 | https://www.reddit.com/r/LocalLLaMA/comments/1iej88m/what_is_the_best_vs_code_ai_extension/ | SkylarNox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iej88m | false | null | t3_1iej88m | /r/LocalLLaMA/comments/1iej88m/what_is_the_best_vs_code_ai_extension/ | false | false | self | 9 | null
DeepSeek 8B gets surprised by the 3 R's in strawberry, but manages to do it | 456 | 2025-01-31T16:54:33 | Fusseldieb | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iejazu | false | null | t3_1iejazu | /r/LocalLLaMA/comments/1iejazu/deepseek_8b_gets_surprised_by_the_3_rs_in/ | false | false | 456 | {'enabled': True, 'images': [{'id': 'K3klQArkoWma5_iljX7g-EP-EJNGBtNTBza5JoROye8', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/oemawg4i0dge1.png?width=108&crop=smart&auto=webp&s=e6b489af57d724a9b55adb8d676dd19b364e20c1', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/oemawg4i0dge1.png?width=216&crop=smart&auto=webp&s=406eb60a6906e83c98b2ffe6f279ac7ebd6f863a', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/oemawg4i0dge1.png?width=320&crop=smart&auto=webp&s=d52527550ab7d884fc7469ecb3c553afbac77048', 'width': 320}, {'height': 408, 'url': 'https://preview.redd.it/oemawg4i0dge1.png?width=640&crop=smart&auto=webp&s=1c75d540a6d15cd68cdeabc673be92b5e657f0e0', 'width': 640}, {'height': 612, 'url': 'https://preview.redd.it/oemawg4i0dge1.png?width=960&crop=smart&auto=webp&s=52492aebbc744d3203494a06cc2a4a1ad83c72ed', 'width': 960}], 'source': {'height': 630, 'url': 'https://preview.redd.it/oemawg4i0dge1.png?auto=webp&s=eaca804013966a4654e8518dac8a7ce7bf6f14e3', 'width': 988}, 'variants': {}}]} |
|||
Is there a way to set up an AI to help me learn Levantine Arabic? | 12 | Hey guys,
Beginner here so any help is appreciated. I’m learning Levantine Arabic and have a bunch of texts and short novels collected in pdf format. Is there a way I can set up a system where AI can teach me from these texts?
I've tried ChatGPT and some others - and while it's good with more formal Arabic, I'd prefer to work from just the resources I collected for the dialect.
I’d also love to work from a local solution.
Grateful to hear any ideas on how I might go about setting this up.
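One way to do this locally is a small retrieval-augmented (RAG) setup: pull the text out of your PDFs, embed the chunks, then hand the most relevant passages plus your question to a local model. A rough sketch follows, assuming `pypdf`, `sentence-transformers`, and an OpenAI-compatible local server such as Ollama or llama-server; the file name, model names, and port are placeholders, not things from your setup:

    # Minimal local RAG sketch for studying from your own PDFs (all names are placeholders).
    import requests
    from pypdf import PdfReader
    from sentence_transformers import SentenceTransformer, util

    def load_chunks(pdf_path, chunk_chars=800):
        """Extract the PDF text and split it into rough fixed-size chunks."""
        text = " ".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
        return [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]

    chunks = load_chunks("levantine_reader.pdf")
    embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # multilingual embedder
    chunk_vecs = embedder.encode(chunks, convert_to_tensor=True)

    question = "Explain the dialect vocabulary in this passage and quiz me on it."
    q_vec = embedder.encode(question, convert_to_tensor=True)
    top = util.cos_sim(q_vec, chunk_vecs)[0].topk(3).indices.tolist()
    context = "\n\n".join(chunks[i] for i in top)

    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",  # Ollama's OpenAI-compatible endpoint (assumption)
        json={
            "model": "llama3.1",  # placeholder local model
            "messages": [
                {"role": "system", "content": "Tutor the user in Levantine Arabic using only the provided excerpts."},
                {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
            ],
        },
    )
    print(resp.json()["choices"][0]["message"]["content"])

Front-ends like Open WebUI also ship a document/knowledge feature that does roughly this without any scripting, if you'd rather not write code.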
| 2025-01-31T17:12:02 | https://www.reddit.com/r/LocalLLaMA/comments/1iejq8c/is_there_a_way_to_set_up_an_ai_to_help_me_learn/ | MumblingManuscript | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iejq8c | false | null | t3_1iejq8c | /r/LocalLLaMA/comments/1iejq8c/is_there_a_way_to_set_up_an_ai_to_help_me_learn/ | false | false | self | 12 | null |
Can’t Get the RTX 5090? Don’t Wait—Grab a 3090 24GB for AI Work! | 0 | Struggling to get your hands on the RTX 5090? Don’t let your projects stall! I’ve got RTX 3090 24GB GPUs available to fill the gap.
Perfect for AI tasks, model training, or any GPU-intensive work.
telegram : sw141921 | 2025-01-31T17:13:20 | https://www.reddit.com/r/LocalLLaMA/comments/1iejrem/cant_get_the_rtx_5090_dont_waitgrab_a_3090_24gb/ | That_Mud7241 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iejrem | false | null | t3_1iejrem | /r/LocalLLaMA/comments/1iejrem/cant_get_the_rtx_5090_dont_waitgrab_a_3090_24gb/ | false | false | self | 0 | null |
DeepSeek Has Gotten OpenAI Fired Up | 1 | 2025-01-31T17:14:26 | https://www.wired.com/story/openai-deepseek-stargate-sam-altman/ | murodbeck | wired.com | 1970-01-01T00:00:00 | 0 | {} | 1iejscj | false | null | t3_1iejscj | /r/LocalLLaMA/comments/1iejscj/deepseek_has_gotten_openai_fired_up/ | false | false | 1 | {'enabled': False, 'images': [{'id': '-3-hX50cLZrCCQhSaq3yFb3MEDB-tz6ezVMGNdNVXh4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/map-h2MC5IIGQURw1Q1XZe7gAe5CgX98hHaOoSHLArk.jpg?width=108&crop=smart&auto=webp&s=1234db86cb4718428c0b345a67d867ea2e8a5b8b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/map-h2MC5IIGQURw1Q1XZe7gAe5CgX98hHaOoSHLArk.jpg?width=216&crop=smart&auto=webp&s=0a06901e8bc3d15c00df15b1ec7674742358812a', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/map-h2MC5IIGQURw1Q1XZe7gAe5CgX98hHaOoSHLArk.jpg?width=320&crop=smart&auto=webp&s=7f4e28bda5739736de26c2560e6aeda041fcd62b', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/map-h2MC5IIGQURw1Q1XZe7gAe5CgX98hHaOoSHLArk.jpg?width=640&crop=smart&auto=webp&s=9047810332da96c8d583a19625b4c41d9c1c68e9', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/map-h2MC5IIGQURw1Q1XZe7gAe5CgX98hHaOoSHLArk.jpg?width=960&crop=smart&auto=webp&s=5171dcef5439cd76862e8a9340ec7d3e8e162d98', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/map-h2MC5IIGQURw1Q1XZe7gAe5CgX98hHaOoSHLArk.jpg?width=1080&crop=smart&auto=webp&s=7600fbb85b4e004729dc5dd5e8dfafb3cab5864b', 'width': 1080}], 'source': {'height': 670, 'url': 'https://external-preview.redd.it/map-h2MC5IIGQURw1Q1XZe7gAe5CgX98hHaOoSHLArk.jpg?auto=webp&s=56e618dcd5d671e43aae24e2856b1d263295b78f', 'width': 1280}, 'variants': {}}]} |
||
call for (Q)LORA extractor development cooperation | 1 | Storing both Deepseek v3 and R1 on my phone for local inference sucks, so I'd like to have the R1 as an adapter to easily put on/off in llama-server/cli.
Surely we could try reproducing one via distillation, but let's be more clever and implement *LORE-ADAPT* of [https://openreview.net/pdf?id=ebnyMCM63m](https://openreview.net/pdf?id=ebnyMCM63m),
test and publish this hot tooling. | 2025-01-31T17:16:06 | https://www.reddit.com/r/LocalLLaMA/comments/1iejts2/call_for_qlora_extractor_development_cooperation/ | uhuge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iejts2 | false | null | t3_1iejts2 | /r/LocalLLaMA/comments/1iejts2/call_for_qlora_extractor_development_cooperation/ | false | false | self | 1 | null |
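For anyone wondering what the core of such an extractor looks like: the generic approach (not the exact LORE-ADAPT recipe from the paper, which I haven't reimplemented) is to take the weight delta between the tuned and base checkpoints and compress it with a truncated SVD into LoRA A/B factors. A rough sketch, with the rank, file paths, and key naming as placeholder assumptions:

    # Sketch: extract a rank-r LoRA from the delta between two checkpoints.
    # This is the generic SVD-based idea, not the exact LORE-ADAPT algorithm.
    import torch

    def extract_lora(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 16):
        """Return (A, B) with B @ A approximating (w_tuned - w_base)."""
        delta = (w_tuned - w_base).float()
        U, S, V = torch.svd_lowrank(delta, q=rank)  # truncated SVD of the delta
        sqrt_s = torch.diag(S.sqrt())
        B = U @ sqrt_s      # [out_features, rank]
        A = sqrt_s @ V.T    # [rank, in_features]
        return A, B

    # Usage sketch over matching 2-D weights of two state_dicts (placeholder paths).
    # For a model the size of V3/R1 you would stream layer by layer, not load it all.
    base_sd = torch.load("base_model.pt")
    tuned_sd = torch.load("tuned_model.pt")
    lora_sd = {}
    for name, w in tuned_sd.items():
        if w.ndim == 2 and name in base_sd:
            A, B = extract_lora(base_sd[name], w, rank=16)
            lora_sd[name + ".lora_A"] = A
            lora_sd[name + ".lora_B"] = B
    torch.save(lora_sd, "extracted_lora.pt")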
Is DeepSeek a violation of Scaling Laws? | 0 | Hey everyone,
Something's been bugging me about DeepSeek that I haven't seen discussed yet. Aren't their results a direct violation of scaling laws?
Think about it:
* GPT-4 cost \~$100M to train
* DeepSeek does the same (or better) for $5.6M
* That's a 20x cost reduction while maintaining/exceeding performance
Traditional scaling laws suggest you need proportional compute increases to get proportional performance gains. But DeepSeek just showed you can get GPT-4 level performance with a fraction of the resources.
This feels like it breaks the fundamental relationship that scaling laws describe. It's not just more efficient training - it's a complete violation of the predicted compute-capability relationship.
Am I missing something here? Would love to hear the community's thoughts on this.
Does this mean scaling laws need to be revised, or are they just... wrong? | 2025-01-31T17:17:39 | https://www.reddit.com/r/LocalLLaMA/comments/1iejv4i/is_deepseek_a_violation_of_scaling_laws/ | atlasspring | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iejv4i | false | null | t3_1iejv4i | /r/LocalLLaMA/comments/1iejv4i/is_deepseek_a_violation_of_scaling_laws/ | false | false | self | 0 | null |
How much bandwidth is required for tensor parallelism vs pipeline parallelism? | 13 | I'm thinking about running deepseek r1 on 24 rx 7900xtxs.
I could get 8 gpus per node and 3 nodes. This allows for the attention heads to distribute on the 8 gpus and for 3x pipeline parallelism.
I could use infiniband or 100g ethernet for the pipeline parallelism, but is it necessary? What bandwidth would I need?
Is PCIE 4.0 enough for tensor parallelism for a 678B model?
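For a rough sense of scale, here is a back-of-envelope sketch. The model numbers are assumptions (hidden size 7168 and 61 layers, which I believe matches the DeepSeek-V3/R1 config), and it only covers single-token decode traffic, not prefill or MoE expert routing:

    # Back-of-envelope: inter-GPU traffic per generated token (decode, batch size 1).
    # Assumed model shape: hidden size 7168, 61 layers, fp16 activations.
    hidden, layers, bytes_per = 7168, 61, 2
    tp = 8  # GPUs sharing tensor-parallel work inside one node

    # Pipeline parallelism: one activation vector crosses each stage boundary per token.
    pp_bytes = hidden * bytes_per
    print(f"pipeline parallel: ~{pp_bytes / 1e3:.1f} KB per token per stage boundary")

    # Tensor parallelism: roughly two all-reduces of that vector per layer, and a ring
    # all-reduce moves about 2*(tp-1)/tp of the data per GPU.
    tp_bytes = layers * 2 * (2 * (tp - 1) / tp) * hidden * bytes_per
    print(f"tensor parallel:   ~{tp_bytes / 1e6:.1f} MB per token per GPU")

Even at 10 tokens/s both figures sit well under what PCIe 4.0 x16 (~32 GB/s) or 100G Ethernet can move, so for decode the usual pain point tends to be the latency of the many small all-reduces in tensor parallelism rather than raw bandwidth; long-prompt prefill is where the links actually get stressed.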
| 2025-01-31T17:19:32 | https://www.reddit.com/r/LocalLLaMA/comments/1iejwr9/how_much_bandwidth_is_required_for_tensor/ | LeptinGhrelin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iejwr9 | false | null | t3_1iejwr9 | /r/LocalLLaMA/comments/1iejwr9/how_much_bandwidth_is_required_for_tensor/ | false | false | self | 13 | null |
US should ‘steal’ China’s best AI talent to keep pace, Senate hears | 1 | 2025-01-31T17:26:46 | https://www.scmp.com/news/china/article/3296852/us-should-steal-chinas-best-ai-talent-keep-pace-senate-hears | bruhlmaocmonbro | scmp.com | 1970-01-01T00:00:00 | 0 | {} | 1iek356 | false | null | t3_1iek356 | /r/LocalLLaMA/comments/1iek356/us_should_steal_chinas_best_ai_talent_to_keep/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NCFq1NIN-WGnAIPF3JbqG_eCO0QRnPGhAHT2hrvuhQ4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/gVgjuHUvFo99KJSeB02ebk3APqexH-rxaBmEoBU28c0.jpg?width=108&crop=smart&auto=webp&s=e6c802648ce0afd59461ab52c9d717a499cd9fcb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/gVgjuHUvFo99KJSeB02ebk3APqexH-rxaBmEoBU28c0.jpg?width=216&crop=smart&auto=webp&s=6395380e666e7a0ddeb77aa89359797e9b5384f3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/gVgjuHUvFo99KJSeB02ebk3APqexH-rxaBmEoBU28c0.jpg?width=320&crop=smart&auto=webp&s=b0fb54e05f0e1908682d29764d54e0e08b318b08', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/gVgjuHUvFo99KJSeB02ebk3APqexH-rxaBmEoBU28c0.jpg?width=640&crop=smart&auto=webp&s=16969e76e2c416a6efd135d6b393dcf5f4927e00', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/gVgjuHUvFo99KJSeB02ebk3APqexH-rxaBmEoBU28c0.jpg?width=960&crop=smart&auto=webp&s=6e3861f5dbd6e7a6edf3713080ed6ff9d6658fa2', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/gVgjuHUvFo99KJSeB02ebk3APqexH-rxaBmEoBU28c0.jpg?width=1080&crop=smart&auto=webp&s=afd2f319378f522735127607852681d9f70eb212', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/gVgjuHUvFo99KJSeB02ebk3APqexH-rxaBmEoBU28c0.jpg?auto=webp&s=99506623fa53f4a365716f381b88b7d500b2e630', 'width': 1200}, 'variants': {}}]} |
||
Improving Reliability of Structured Outputs | 1 | Hey all, I'm running into some issues when trying to use structured outputs due to limitations on properties supported by OpenAI (see https://platform.openai.com/docs/guides/structured-outputs/some-type-specific-keywords-are-not-yet-supported#some-type-specific-keywords-are-not-yet-supported)
This is particularly problematic for some of the array-related keywords (namely `minItems` and `maxItems`).
Furthermore, the schema limitations mean that TS won't warn about this error at compile time - it has to be encountered at runtime.
I'm assuming that the solution requires wrapping the schema input and converting it to a different format (e.g. an array with `minItems` & `maxItems` of 3 becomes `{ item1: ..., item2: ..., item3 ... }`)
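A minimal sketch of that rewrite (in Python for brevity; the same recursive walk ports directly to TypeScript). It only handles the case where `minItems` equals `maxItems`, and the generated `item1`/`item2`/... property names are my own convention, not anything OpenAI prescribes:

    # Sketch: rewrite fixed-size arrays (minItems == maxItems) into an object with
    # numbered properties, so the schema stays inside the supported subset.
    def rewrite_fixed_arrays(schema):
        if isinstance(schema, list):
            return [rewrite_fixed_arrays(s) for s in schema]
        if not isinstance(schema, dict):
            return schema
        schema = {k: rewrite_fixed_arrays(v) for k, v in schema.items()}
        n = schema.get("minItems")
        if schema.get("type") == "array" and n is not None and n == schema.get("maxItems"):
            item_schema = schema.get("items", {})
            props = {f"item{i + 1}": item_schema for i in range(n)}
            return {
                "type": "object",
                "properties": props,
                "required": list(props),
                "additionalProperties": False,
            }
        return schema

    # Example: an array of exactly 3 strings becomes an object with item1..item3.
    fixed = {"type": "array", "items": {"type": "string"}, "minItems": 3, "maxItems": 3}
    print(rewrite_fixed_arrays(fixed))

You would still need to map the `itemN` fields back into an array after parsing the response, and arrays where `minItems` differs from `maxItems` need a different workaround.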
I was wondering if there's a library or GitHub gist of a solution out there for this problem? | 2025-01-31T17:42:14 | https://www.reddit.com/r/LocalLLaMA/comments/1iekgpv/improving_reliability_of_structured_outputs/ | DoctorNootNoot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iekgpv | false | null | t3_1iekgpv | /r/LocalLLaMA/comments/1iekgpv/improving_reliability_of_structured_outputs/ | false | false | self | 1 | null |
Issue with Local AI Models Displaying \boxed{} Instead of Rendering Content in Anything LLM -- Need Help | 0 | **TL;DR:** Just started with Local AI using Anything LLM & LM Studio but ran into an issue—models output `\boxed{}` instead of rendering boxed content. Switched to `\fbox{}`, but no change. Using Gemini for embeddings since it's free. How can I fix this?
Hey guys! I have just got into Local AI stuff and I first tried using Ollama in CLI and it worked decently, I encountered some looping issues where the model would just print the same output again and again in one single go and it did not stop so I cleared that session and restarted and did not encounter that error again. But then I searched about GUI's for these models and stumbled upon Anything LLM and LM Studio. I started using these two in the last 4 hours and have encountered one problem, it never shows output in boxed form, like it just prints "\\boxed{ math stuff or some other stuff }" and it just ends. I am using gemini as embedding provider as it is free and I do not want to pay for other models. So, How do I solve this issue? I tried telling the model to use \\fbox instead of \\boxed and the output was still the same. | 2025-01-31T17:47:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ieklfv/issue_with_local_ai_models_displaying_boxed/ | that_chubby_guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieklfv | false | null | t3_1ieklfv | /r/LocalLLaMA/comments/1ieklfv/issue_with_local_ai_models_displaying_boxed/ | false | false | self | 0 | null |
What is the best open-source OCR that can handle both well written and handwritten text on docs? | 7 | Tesseract seems to be the best choice for printed text, but is basically useless for handwritten
PaddleOCR and co. have the same issue
docTR models don't seem to be enough?
Transformer-based approaches promise a lot but don't quite deliver when it's a mix of handwritten+printed (DTrOCR, HtOCR, GOT_OCR2...)
The only great outputs I get are from multimodal vision based LLMs (Llama, Qwen and co.) on huggingface, but these are obviously heavier than standalone OCR models.
Does anybody have/know a fast precise alternative that works on both printed and handwritten text for latin characters? | 2025-01-31T18:02:28 | https://www.reddit.com/r/LocalLLaMA/comments/1iekyid/what_is_the_best_opensource_ocr_that_can_handle/ | tauri_mionZer0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iekyid | false | null | t3_1iekyid | /r/LocalLLaMA/comments/1iekyid/what_is_the_best_opensource_ocr_that_can_handle/ | false | false | self | 7 | null |
Coding AI on macbook air M1 | 1 | [removed] | 2025-01-31T18:12:16 | https://www.reddit.com/r/LocalLLaMA/comments/1iel76a/coding_ai_on_macbook_air_m1/ | Noxi_FR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iel76a | false | null | t3_1iel76a | /r/LocalLLaMA/comments/1iel76a/coding_ai_on_macbook_air_m1/ | false | false | self | 1 | null |
I combined web search with DeepSeek-R1-Llama-70B and made it API | 1 | [removed] | 2025-01-31T18:19:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ieldi5/i_combined_web_search_with_deepseekr1llama70b_and/ | sickleRunner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieldi5 | false | null | t3_1ieldi5 | /r/LocalLLaMA/comments/1ieldi5/i_combined_web_search_with_deepseekr1llama70b_and/ | false | false | self | 1 | null |
What was the actual cost of training deepseek? | 0 | If the 6 million figure being tossed around everywhere was really just for its final bout of training, is there any reliable information about the actual cost from start to finish? | 2025-01-31T18:22:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ielg85/what_was_the_actual_cost_of_training_deepseek/ | DsDman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ielg85 | false | null | t3_1ielg85 | /r/LocalLLaMA/comments/1ielg85/what_was_the_actual_cost_of_training_deepseek/ | false | false | self | 0 | null
Tutorial: How to Run DeepSeek-R1 (671B) 1.58bit on Open WebUI | 127 | Hey guys! Daniel & I (Mike) at [Unsloth](https://github.com/unslothai/unsloth) collabed with Tim from [Open WebUI](https://github.com/open-webui/open-webui) to bring you this step-by-step on how to run the non-distilled DeepSeek-R1 Dynamic 1.58-bit model locally!
This guide is summarized so I highly recommend you read the full guide (with pics) here: [https://docs.openwebui.com/tutorials/integrations/deepseekr1-dynamic/](https://docs.openwebui.com/tutorials/integrations/deepseekr1-dynamic/)
# To Run DeepSeek-R1:
**1. Install Llama.cpp**
* Download prebuilt binaries or build from source following [this guide](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md).
**2. Download the Model (1.58-bit, 131GB) from** [Unsloth](https://github.com/unslothai/unsloth)
* Get the model from [Hugging Face](https://huggingface.co/unsloth/DeepSeek-R1-GGUF).
* Use Python to download it programmatically:
​
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="unsloth/DeepSeek-R1-GGUF",
        local_dir="DeepSeek-R1-GGUF",
        allow_patterns=["*UD-IQ1_S*"],
    )
* Once the download completes, you’ll find the model files in a directory structure like this:
​
    DeepSeek-R1-GGUF/
    ├── DeepSeek-R1-UD-IQ1_S/
    │   ├── DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf
    │   ├── DeepSeek-R1-UD-IQ1_S-00002-of-00003.gguf
    │   ├── DeepSeek-R1-UD-IQ1_S-00003-of-00003.gguf
* Ensure you know the path where the files are stored.
**3. Install and Run Open WebUI**
* If you don’t already have it installed, no worries! It’s a simple setup. Just follow the Open WebUI docs here: [https://docs.openwebui.com/](https://docs.openwebui.com/)
* Once installed, start the application - we’ll connect it in a later step to interact with the DeepSeek-R1 model.
**4. Start the Model Server with Llama.cpp**
Now that the model is downloaded, the next step is to run it using Llama.cpp’s server mode.
# 🛠️Before You Begin:
1. **Locate the llama-server Binary**

   If you built Llama.cpp from source, the llama-server executable is located in `llama.cpp/build/bin`. Navigate to this directory using:

        cd [path-to-llama-cpp]/llama.cpp/build/bin

   Replace `[path-to-llama-cpp]` with your actual Llama.cpp directory. For example:

        cd ~/Documents/workspace/llama.cpp/build/bin

2. **Point to Your Model Folder**

   Use the full path to the downloaded GGUF files. When starting the server, specify the first part of the split GGUF files (e.g., `DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf`).
# 🚀Start the Server
Run the following command:
    ./llama-server \
        --model /[your-directory]/DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
        --port 10000 \
        --ctx-size 1024 \
        --n-gpu-layers 40
# Example (If Your Model is in /Users/tim/Documents/workspace):
    ./llama-server \
        --model /Users/tim/Documents/workspace/DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
        --port 10000 \
        --ctx-size 1024 \
        --n-gpu-layers 40
✅ Once running, the server will be available at:
http://127.0.0.1:10000
🖥️ Llama.cpp Server Running
[After running the command, you should see a message confirming the server is active and listening on port 10000.](https://preview.redd.it/erjbg5v5cbge1.png?width=3428&format=png&auto=webp&s=fff4de133562bb6f67076db17285860b7294f2ad)
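Before wiring it into Open WebUI you can sanity-check the endpoint directly; a quick sketch with Python's `requests` against llama-server's OpenAI-compatible route (the prompt and token limit are arbitrary):

    # Quick sanity check of the llama-server endpoint started above.
    import requests

    resp = requests.post(
        "http://127.0.0.1:10000/v1/chat/completions",
        json={
            "model": "DeepSeek-R1",  # llama-server serves only the loaded model; any name should do (assumption)
            "messages": [{"role": "user", "content": "Say hello in one sentence."}],
            "max_tokens": 64,
        },
        timeout=600,  # the first request can be slow while the model warms up
    )
    print(resp.json()["choices"][0]["message"]["content"])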
**Step 5: Connect Llama.cpp to Open WebUI**
1. Open Admin Settings in Open WebUI.
2. Go to Connections > OpenAI Connections.
3. Add the following details:
   - URL → http://127.0.0.1:10000/v1
   - API Key → none
# Adding Connection in Open WebUI
https://preview.redd.it/8eja3yugcbge1.png?width=3456&format=png&auto=webp&s=3d890d2ed9c7bb20f6b2293a84c9c294a16de0a2
# Notes
* You don't need a GPU to run this model but it will make it faster especially when you have at least 24GB of VRAM.
* Try to have a sum of RAM + VRAM = 120GB+ to get decent tokens/s
If you have any questions please let us know and also - any suggestions are also welcome! Happy running folks! :) | 2025-01-31T18:24:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ielhyu/tutorial_how_to_run_deepseekr1_671b_158bit_on/ | yoracale | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ielhyu | false | null | t3_1ielhyu | /r/LocalLLaMA/comments/1ielhyu/tutorial_how_to_run_deepseekr1_671b_158bit_on/ | false | false | 127 | {'enabled': False, 'images': [{'id': 'oUAe34zUCLxMUIpYtOvOz6aYou2CnbtJjhJZ0bwJ6Jg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=108&crop=smart&auto=webp&s=6481fbac644d8a96c2918c63e805d1c62e24cbe5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=216&crop=smart&auto=webp&s=941b00cf4a68a70df266160fe06769bc2a817a41', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=320&crop=smart&auto=webp&s=e794c7cbf042b8d8e6fdd8f8c239e0f5cb398261', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=640&crop=smart&auto=webp&s=57fbf9c89972d5c31e3bd2d3354696be4e8d5b9d', 'width': 640}, {'height': 505, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=960&crop=smart&auto=webp&s=557f9a403410be41c1438b6d2b1a2acd9d507da4', 'width': 960}, {'height': 568, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=1080&crop=smart&auto=webp&s=989ea96f774aa62c199da9564be3b7b646db1494', 'width': 1080}], 'source': {'height': 834, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?auto=webp&s=fb46a23aaa0ed1c5044eaea486ff79352cce2675', 'width': 1584}, 'variants': {}}]} |
|
What are the best model per ram size? | 4 | Like the title says it would be cool to have a table listing the best model per ram size currently like:
8gb
16gb
24gb
32gb
40gb
64gb
128gb
I will keep it updated with the responses I get. If you think of other dimensions to add, or even if you only tested one bracket, tell us so I can update it | 2025-01-31T18:32:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ielp0b/what_are_the_best_model_per_ram_size/ | InternalMode8159 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ielp0b | false | null | t3_1ielp0b | /r/LocalLLaMA/comments/1ielp0b/what_are_the_best_model_per_ram_size/ | false | false | self | 4 | null
Performance difference | 0 | Hello
Right now I’m using my gaming pc to host some models. I’m using a amd 7800xt but currently looking to get a new server just for other reasons but would also like to move my hosting into that machine. I would looking at running a p40 but with the new 5090 drop maybe the 3090 will come down I plan to run olama deep seek obviously and maybe a couple of other I mostly use it for school but just want to know the tokens/s the 7800xt is getting around 10-20 tk/s running 14b models just want to the the price difference because I can get two p40s right now for the price of one 3090(dual slot to fit in my case).
(Sorry I know this is asked a lot) | 2025-01-31T18:34:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ielqe8/performance_difference/ | Rasr123105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ielqe8 | false | null | t3_1ielqe8 | /r/LocalLLaMA/comments/1ielqe8/performance_difference/ | false | false | self | 0 | null |
Investors refusing to believe their investments are going to zero | 0 | 2025-01-31T18:35:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ielr5b/investors_refusing_to_believe_their_investments/ | atlasspring | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ielr5b | false | null | t3_1ielr5b | /r/LocalLLaMA/comments/1ielr5b/investors_refusing_to_believe_their_investments/ | false | false | 0 | null |
||
I added Live Web Search on top of DeepSeek-R1-LLama-70b and made it API | 53 | 2025-01-31T18:39:31 | sickleRunner | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ielupk | false | null | t3_1ielupk | /r/LocalLLaMA/comments/1ielupk/i_added_live_web_search_on_top_of/ | false | false | 53 | {'enabled': True, 'images': [{'id': 'iqNSJQHZBJQ0o5wgLL1it2n6vPfh-iWNxn6s2ETG2Zw', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/0pguliybjdge1.png?width=108&crop=smart&auto=webp&s=b9a4d36a0930b4a98187b191235f057b53a6910d', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/0pguliybjdge1.png?width=216&crop=smart&auto=webp&s=24cade41b40e3e6eecc07a7c7bc6439ab9918040', 'width': 216}, {'height': 214, 'url': 'https://preview.redd.it/0pguliybjdge1.png?width=320&crop=smart&auto=webp&s=b92c1ffc790804fbe78f3ca41e3fa09e09710f28', 'width': 320}, {'height': 429, 'url': 'https://preview.redd.it/0pguliybjdge1.png?width=640&crop=smart&auto=webp&s=9348eced5b7103939d1b6beb8b2ee81e5d8c3c83', 'width': 640}, {'height': 644, 'url': 'https://preview.redd.it/0pguliybjdge1.png?width=960&crop=smart&auto=webp&s=5420d5ae23d329e49b8fe7bdd0ba80030dee2f81', 'width': 960}], 'source': {'height': 677, 'url': 'https://preview.redd.it/0pguliybjdge1.png?auto=webp&s=0c0b3003cc20c0c396c633a693f5b1ef0fdb9b52', 'width': 1008}, 'variants': {}}]} |
|||
r1 shenanigans | 1 | 2025-01-31T18:39:48 | diligentgrasshopper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ieluyt | false | null | t3_1ieluyt | /r/LocalLLaMA/comments/1ieluyt/r1_shenanigans/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Opkh8RcJcGxznQA-pqxCiQigB-YXvf6JBOwmXcP8Mr0', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/qdgxtehbjdge1.png?width=108&crop=smart&auto=webp&s=bc664216ae0faf9fe4079db66784a4150fc48ff2', 'width': 108}, {'height': 222, 'url': 'https://preview.redd.it/qdgxtehbjdge1.png?width=216&crop=smart&auto=webp&s=8349b709edc8384385e131becf1816efc2985420', 'width': 216}, {'height': 329, 'url': 'https://preview.redd.it/qdgxtehbjdge1.png?width=320&crop=smart&auto=webp&s=79330ffee0e9733f8718b6fb36883729c9c92e8c', 'width': 320}, {'height': 658, 'url': 'https://preview.redd.it/qdgxtehbjdge1.png?width=640&crop=smart&auto=webp&s=341f020011df0f244b4ed78ce9865286e127eb88', 'width': 640}, {'height': 987, 'url': 'https://preview.redd.it/qdgxtehbjdge1.png?width=960&crop=smart&auto=webp&s=dc6f10b8af054e81cb018a541093a01ae38b45e1', 'width': 960}, {'height': 1111, 'url': 'https://preview.redd.it/qdgxtehbjdge1.png?width=1080&crop=smart&auto=webp&s=073e746bf8a64b206a68934bc17c937097d9699f', 'width': 1080}], 'source': {'height': 2428, 'url': 'https://preview.redd.it/qdgxtehbjdge1.png?auto=webp&s=40e874dd7e70478f88d420ce47bfa58e43d63361', 'width': 2360}, 'variants': {}}]} |
|||
AI Tools as a Hiring Requirement? Just Saw a Job Post That Blew My Mind | 1 | [removed] | 2025-01-31T18:52:32 | https://www.reddit.com/r/LocalLLaMA/comments/1iem5xf/ai_tools_as_a_hiring_requirement_just_saw_a_job/ | Far_Flamingo5333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iem5xf | false | null | t3_1iem5xf | /r/LocalLLaMA/comments/1iem5xf/ai_tools_as_a_hiring_requirement_just_saw_a_job/ | false | false | self | 1 | null |
Deepseek Qwen and Llama confusion | 5 | I was wondering if anyone can explain to me like I'm a little kid why DeepSeek's distilled versions use Llama for some models and Qwen for others?
And how does it work - do they distill the model to make it smaller and pick a Llama or Qwen base depending on the model size, and what are the implications?
Does anyone know, thanks! | 2025-01-31T18:55:22 | https://www.reddit.com/r/LocalLLaMA/comments/1iem89u/deepseek_qwen_and_llama_confusion/ | novus_nl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iem89u | false | null | t3_1iem89u | /r/LocalLLaMA/comments/1iem89u/deepseek_qwen_and_llama_confusion/ | false | false | self | 5 | null |
What can I run with 56gb of VRAM? | 2 | I've currently got a rtx3090 and I'm thinking about making a purchase for a 5090 (if I can even get one) but I'm in the preliminary stages of figuring on what I can run.
I am currently setting up a RAG pipeline and thinking of running phi4 (distilled?) just to ask some basic questions. I would like to supercharge this with a better model and more context by adding a 5090. I had a couple of questions though:
1) What is a good model for RAG if I suddenly have more vram?
2) What would be a good tradeoff for context/model capabilities?
3) Are you able to run the 5090 parallel to a 3090? There is no SLI any more but I don't think that matters anymore right?
4) Has the ability to undervolt the 3090/4090 been consistent and can we expect it from the 5090? The energy price running the 5090 scares me a bit.
Thanks for your time! | 2025-01-31T18:55:59 | https://www.reddit.com/r/LocalLLaMA/comments/1iem8sn/what_can_i_run_with_56gb_of_vram/ | JFHermes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iem8sn | false | null | t3_1iem8sn | /r/LocalLLaMA/comments/1iem8sn/what_can_i_run_with_56gb_of_vram/ | false | false | self | 2 | null |
Is there any proof that deep seek was trained on Open AI's data? | 0 | For the past couple of days following OpenAI's accusation, people have been focused on pointing out the irony relating to intellectual property, since ChatGPT was trained on copyrighted data; they almost accept as fact that DeepSeek did distill OpenAI's model, but argue that this isn't a problem. But is there any actual proof that this happened, apart from a few Chinese IP addresses? I'm asking because, regardless of the moral standpoint, this undermines the whole premise of the DeepSeek paper that an entirely new model could be trained for fewer than $5 million. | 2025-01-31T18:57:06 | https://www.reddit.com/r/LocalLLaMA/comments/1iem9q4/is_there_any_proof_that_deep_seek_was_trained_on/ | Unusual_Guidance2095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iem9q4 | false | null | t3_1iem9q4 | /r/LocalLLaMA/comments/1iem9q4/is_there_any_proof_that_deep_seek_was_trained_on/ | false | false | self | 0 | null
What the actual hell | 1 | 2025-01-31T19:07:54 | https://www.reddit.com/gallery/1iemje3 | diligentgrasshopper | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1iemje3 | false | null | t3_1iemje3 | /r/LocalLLaMA/comments/1iemje3/what_the_actual_hell/ | false | false | 1 | null |
||
Can we train the deepseek LLM with our data set? | 1 | [removed] | 2025-01-31T19:16:19 | https://www.reddit.com/r/LocalLLaMA/comments/1iemqpx/can_we_train_the_deepseek_llm_with_our_data_set/ | IndependentNormal708 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iemqpx | false | null | t3_1iemqpx | /r/LocalLLaMA/comments/1iemqpx/can_we_train_the_deepseek_llm_with_our_data_set/ | false | false | self | 1 | null |
How to run an AWQ multimodal LLM (with SGLang) | 1 | Hi
I am trying to run an AWQ quant model on SGLang Qwen/Qwen2-VL-72B-Instruct-AWQ (for testing currently on runpod).
First I tried to split it between 2x A40 with this starting command:
python3 -m sglang.launch_server \
--model-path Qwen/Qwen2-VL-72B-Instruct-AWQ \
--context-length 8192 \
--host 0.0.0.0 --port 8000 \
--chat-template chatml-llava \
--grammar-backend outlines \
--quantization awq_marlin \
--tp 2
It started, and with text-only requests through the /v1/chat/completions API endpoint, it works as expected.
But once I've added an image, it throws this error:
    [2025-01-31 00:59:16 TP0] Scheduler hit an exception: Traceback (most recent call last):
      File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 1784, in run_scheduler_process
        scheduler.event_loop_normal()
      File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
        return func(*args, **kwargs)
      File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 471, in event_loop_normal
        self.process_input_requests(recv_reqs)
      File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 579, in process_input_requests
        output = self._request_dispatcher(recv_req)
      File "/sgl-workspace/sglang/python/sglang/utils.py", line 374, in __call__
        return fn(obj)
      File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 649, in handle_generate_request
        req.origin_input_ids = self.pad_input_ids_func(
      File "/sgl-workspace/sglang/python/sglang/srt/models/qwen2_vl.py", line 412, in pad_input_ids
        non_image_tokens = input_ids[: image_indices[image_cnt]]
    IndexError: list index out of range
    [2025-01-31 00:59:16] Received sigquit from a child process. It usually means the child failed.
This is my curl request:
curl --location 'https://xxxxxxxxx.proxy.runpod.net/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
"model": "Qwen/Qwen2-VL-72B-Instruct-AWQ",
"messages": [
{
"role": "user",
"content": "prompt goes here...."
},
{
"role": "user",
"content": [
{
"type": "text",
"text": "another one here..."
},
{
"type": "image_url",
"image_url": {
"url": "https://supersecreturltomy.com/image.jpg"
}
}
]
}
],
"max_tokens": 600,
"temperature": 0.5,
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "response",
"schema": {
"type": "object",
"properties": {
"corrected_text": {
"type": "string"
},
"changes_made": {
"type": "array",
"items": {
"type": "string"
}
}
},
"required": [
"corrected_text",
"changes_made"
]
}
}
}
}'
I've then tested it with the smaller model Qwen/Qwen2-VL-7B-Instruct-AWQ,
deactivated --tp 2, and used the exact same request (including the image).
With this model, everything works fine and I get the expected response.
So I thought it might be because of the GPU split (--tp 2).
So I tested it again with a single A100 and Qwen/Qwen2-VL-72B-Instruct-AWQ,
but I still get the exact same error.
---
Of course, I am a complete noob and have no real understanding of all this stuff.
I am just doing my best to learn how everything works.
I've tried searching (Google, Perplexity, and Reddit), but couldn't find a clear explanation
why the smaller model works and the larger one doesn't.
Can someone help me to understand this?
| 2025-01-31T19:16:25 | https://www.reddit.com/r/LocalLLaMA/comments/1iemqtf/how_to_run_an_awq_multimodal_llm_with_sglang/ | Stunning-Storage-587 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iemqtf | false | null | t3_1iemqtf | /r/LocalLLaMA/comments/1iemqtf/how_to_run_an_awq_multimodal_llm_with_sglang/ | false | false | self | 1 | null |
Quantifying MCQ questions on parameters like factual knowledge, concept depth etc | 1 | I am new to this so pardon me if I say some technically naive things. I have a repository of MCQ questions for a competitive exam. The questions have pretty defined reading material from where they lend most of the facts and concepts but the questions are framed in a way that requires you to use a lot of inferential reasoning and option elimination.
I wanted to analyze all the questions by identifying the factual part of the question in the source material and quantifying the amount of "factuality" and "logic" for the questions. I certainly cannot expect LLMs to give me a consistent answer to such a query. Is there some way I can do it? I heard that there are embedding models that vectorize the semantic information of a text. I can do some manipulation on these vectors to quantify the "factuality" and "logic" but again will this be consistent across all the different questions?
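One low-tech way to get something consistent is to skip the LLM entirely for the "factuality" part and score each question against the source material with an embedding model: the maximum cosine similarity between the question (plus its options) and the reference chunks is a crude but repeatable "how much of this is lifted from the text" signal, and one minus that can stand in for the inferential load. A sketch, with the model name, source passages, and example question as placeholders:

    # Sketch: score how "factual" (directly supported by the source text) an MCQ is,
    # using embedding similarity instead of an LLM so the score is deterministic.
    from sentence_transformers import SentenceTransformer, util

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice

    source_chunks = [
        "Chunk 1 of the official reading material...",  # placeholder source passages
        "Chunk 2 of the official reading material...",
    ]
    chunk_vecs = embedder.encode(source_chunks, convert_to_tensor=True)

    def factuality_score(question: str, options: list[str]) -> float:
        """Max cosine similarity of the question and options against any source chunk."""
        q_vecs = embedder.encode([question] + options, convert_to_tensor=True)
        return util.cos_sim(q_vecs, chunk_vecs).max().item()

    score = factuality_score(
        "Which treaty ended the war described in the passage?",  # placeholder question
        ["Treaty A", "Treaty B", "Treaty C", "Treaty D"],
    )
    print(f"factuality ~ {score:.2f}, inferential load ~ {1 - score:.2f}")

Whether the numbers are comparable across very different question styles is still an open question, but at least the same input always gives the same score.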
| 2025-01-31T19:17:46 | https://www.reddit.com/r/LocalLLaMA/comments/1iems0k/quantifying_mcq_questions_on_parameters_like/ | No-Flight-2821 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iems0k | false | null | t3_1iems0k | /r/LocalLLaMA/comments/1iems0k/quantifying_mcq_questions_on_parameters_like/ | false | false | self | 1 | null |
The O3 mini is now available in two flavors. | 9 | 2025-01-31T19:31:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ien3fv/the_o3_mini_is_now_available_in_two_flavors/ | PerformanceRound7913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ien3fv | false | null | t3_1ien3fv | /r/LocalLLaMA/comments/1ien3fv/the_o3_mini_is_now_available_in_two_flavors/ | false | false | 9 | null |
||
One of the most illuminating ways to assess these models to me so far | 10 | I would feed the model articles on social science findings which have innumerable contradictions and ask the model if it spots any contradiction. The social sciences are rife with sophistic studies like these. These articles could go like this:
”We find weak to no correlation between X and Y.”
Then later, the article would phrase the findings in a much more headline-grabbing way, like this:
”Therefore X is much more likely to be Y.”
It includes explicit contradictions within the text, so you don’t have to examine the methodologies, but it’s nonetheless phrased in compelling enough ways to dupe lay readers. So far, none of the LLMs seem able to spot the contradictions, despite being able to summarize the text. So to me it's not clear whether these models have "reasoning abilities". | 2025-01-31T19:33:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ien50m/one_of_the_most_illuminating_ways_to_assess_these/ | feixiangtaikong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ien50m | false | null | t3_1ien50m | /r/LocalLLaMA/comments/1ien50m/one_of_the_most_illuminating_ways_to_assess_these/ | false | false | self | 10 | null |
NASA becomes latest federal agency to block China's DeepSeek on 'security and privacy concerns' | 1 | 2025-01-31T19:33:50 | https://www.cnbc.com/2025/01/31/nasa-becomes-latest-federal-agency-to-block-chinas-deepseek.html | fallingdowndizzyvr | cnbc.com | 1970-01-01T00:00:00 | 0 | {} | 1ien5nd | false | null | t3_1ien5nd | /r/LocalLLaMA/comments/1ien5nd/nasa_becomes_latest_federal_agency_to_block/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'gsujgPXmEDYEbP88fVRkkIzwh7LFQTML-ui1otaXUvU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5vztcQH3Naz90UZeq4hREdIYCeAO3RfA03-l7Uy6Lsg.jpg?width=108&crop=smart&auto=webp&s=47a1a0ce40e91ad039ed9d866e8b1829396e9187', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5vztcQH3Naz90UZeq4hREdIYCeAO3RfA03-l7Uy6Lsg.jpg?width=216&crop=smart&auto=webp&s=34f2c4549190706b0b83e2703ba02a63536399c4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5vztcQH3Naz90UZeq4hREdIYCeAO3RfA03-l7Uy6Lsg.jpg?width=320&crop=smart&auto=webp&s=0cf97e4e01d10b51afc3767adf5aa96871e4efdd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/5vztcQH3Naz90UZeq4hREdIYCeAO3RfA03-l7Uy6Lsg.jpg?width=640&crop=smart&auto=webp&s=818eb6a54b9dbf3b8ff15b6aa431a0c8596df45e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/5vztcQH3Naz90UZeq4hREdIYCeAO3RfA03-l7Uy6Lsg.jpg?width=960&crop=smart&auto=webp&s=6945b26e1d6a76a6ef00fbd9a166b931c534a85f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/5vztcQH3Naz90UZeq4hREdIYCeAO3RfA03-l7Uy6Lsg.jpg?width=1080&crop=smart&auto=webp&s=0e7b4858fa50532c66baa438da081b87eb44fa03', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/5vztcQH3Naz90UZeq4hREdIYCeAO3RfA03-l7Uy6Lsg.jpg?auto=webp&s=fef5304b507fe8927c3c9f761a98ec8436ef9929', 'width': 1920}, 'variants': {}}]} |
||
OpenC/crypto-gpt-o3-mini: Worth a look for AI + Crypto? | 0 | Hey everyone, I’ve been testing **OpenC/crypto-gpt-o3-mini**, an open-source AI model designed for crypto-related tasks. It’s optimized for processing blockchain data, analyzing transactions, and supporting DeFi/NFT/Web3 applications.
Link huggingface: [https://huggingface.co/OpenC/crypto-gpt-o3-mini](https://huggingface.co/OpenC/crypto-gpt-o3-mini)
A few things stand out:
* It’s lightweight and runs efficiently on standard hardware.
* It supports real-time blockchain data analysis.
* It could be useful for automating transactions and on-chain analytics.
I’m still exploring whether it’s actually useful for crypto projects or just another hyped-up model. Has anyone else tried it? Would love to hear your thoughts. | 2025-01-31T19:36:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ien88y/openccryptogpto3mini_worth_a_look_for_ai_crypto/ | Different_Prune_3529 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ien88y | false | null | t3_1ien88y | /r/LocalLLaMA/comments/1ien88y/openccryptogpto3mini_worth_a_look_for_ai_crypto/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '4nCevzmecQS5Ukrg46UA31pob-nX7AkVwbbPS98I7nk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Vx6Zhvq-cgvD5owlrd2FIgj5K7_0ITPbU1yz4N0Mho0.jpg?width=108&crop=smart&auto=webp&s=35dc78aa6c740218d5bb5572e62186d60ff00fec', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Vx6Zhvq-cgvD5owlrd2FIgj5K7_0ITPbU1yz4N0Mho0.jpg?width=216&crop=smart&auto=webp&s=a08aa2db35c2e22f3e01f09087aa72ff12d8ba28', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Vx6Zhvq-cgvD5owlrd2FIgj5K7_0ITPbU1yz4N0Mho0.jpg?width=320&crop=smart&auto=webp&s=0862e6b89cdc6702a43ae0ceffec8966bc0b6e76', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Vx6Zhvq-cgvD5owlrd2FIgj5K7_0ITPbU1yz4N0Mho0.jpg?width=640&crop=smart&auto=webp&s=9d9cec6a848dcf3c90f8703775d6ec19c5da922f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Vx6Zhvq-cgvD5owlrd2FIgj5K7_0ITPbU1yz4N0Mho0.jpg?width=960&crop=smart&auto=webp&s=5d0f4b68964620d82902ea4f315b8af0ece94264', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Vx6Zhvq-cgvD5owlrd2FIgj5K7_0ITPbU1yz4N0Mho0.jpg?width=1080&crop=smart&auto=webp&s=9c5cdaf35dd53ba6db54a50e58f89254beb5cb23', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Vx6Zhvq-cgvD5owlrd2FIgj5K7_0ITPbU1yz4N0Mho0.jpg?auto=webp&s=c957048f613d4ba357b47d8c4095f7095fb98f3f', 'width': 1200}, 'variants': {}}]} |
Best Cloud VM for LoRA Fine-Tuning on a Budget? | 1 | [removed] | 2025-01-31T19:38:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ien9vc/best_cloud_vm_for_lora_finetuning_on_a_budget/ | Tiny_Yellow_7869 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ien9vc | false | null | t3_1ien9vc | /r/LocalLLaMA/comments/1ien9vc/best_cloud_vm_for_lora_finetuning_on_a_budget/ | false | false | self | 1 | null |
LlaMaCpp integration in Unreal Engine 5 | 1 | [removed] | 2025-01-31T19:40:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ienavc/llamacpp_integration_in_unreal_engine_5/ | Soulshellgames | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ienavc | false | null | t3_1ienavc | /r/LocalLLaMA/comments/1ienavc/llamacpp_integration_in_unreal_engine_5/ | false | false | self | 1 | null |
Deepseek R1 is now hosted by Nvidia | 655 | NVIDIA just brought the DeepSeek-R1 671-bn param model to the NVIDIA NIM microservice on build.nvidia.com
- The DeepSeek-R1 NIM microservice can deliver up to 3,872 tokens per second on a single NVIDIA HGX H200 system.
- Using NVIDIA Hopper architecture, DeepSeek-R1 can deliver high-speed inference by leveraging FP8 Transformer Engines and 900 GB/s NVLink bandwidth for expert communication.
- As usual with NVIDIA's NIM, it's an enterprise-scale setup to securely experiment with and deploy AI agents with industry-standard APIs.
| 2025-01-31T19:44:44 | Outrageous-Win-3244 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ienetu | false | null | t3_1ienetu | /r/LocalLLaMA/comments/1ienetu/deepseek_r1_is_now_hosted_by_nvidia/ | false | false | 655 | {'enabled': True, 'images': [{'id': 'XUzQAM9Zompr5_WUDmNa6g1s506Z7G8l0WIFMjOlUU8', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/1zufl131vdge1.jpeg?width=108&crop=smart&auto=webp&s=d2b4fe17d46dcbc78e765f79d0dbdbc526a41441', 'width': 108}, {'height': 87, 'url': 'https://preview.redd.it/1zufl131vdge1.jpeg?width=216&crop=smart&auto=webp&s=82b85248c3501f20dcec0d42ea87797e1781a24d', 'width': 216}, {'height': 129, 'url': 'https://preview.redd.it/1zufl131vdge1.jpeg?width=320&crop=smart&auto=webp&s=926be036c0bc374304a77a78925cc47475fe9ba0', 'width': 320}, {'height': 258, 'url': 'https://preview.redd.it/1zufl131vdge1.jpeg?width=640&crop=smart&auto=webp&s=c70d8c80da395577b63301493ed66fac0dc6c408', 'width': 640}, {'height': 387, 'url': 'https://preview.redd.it/1zufl131vdge1.jpeg?width=960&crop=smart&auto=webp&s=5bc68068e9fbb05ccf48b67adebdf419087ad545', 'width': 960}, {'height': 435, 'url': 'https://preview.redd.it/1zufl131vdge1.jpeg?width=1080&crop=smart&auto=webp&s=1620ea7300a46477ec509baa8328c03e67a85db3', 'width': 1080}], 'source': {'height': 507, 'url': 'https://preview.redd.it/1zufl131vdge1.jpeg?auto=webp&s=c670948ae81277e97525409a628c40615d67b6a6', 'width': 1256}, 'variants': {}}]} |
||
Reaction to Nvidia's recent showcase of their AI system | 1 | 2025-01-31T19:49:39 | https://www.youtube.com/watch?v=aDmblZMV43U | Typical-Interest-543 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1ienj1c | false | {'oembed': {'author_name': 'Dallas Drapeau', 'author_url': 'https://www.youtube.com/@DallasDrap', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/aDmblZMV43U?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Game Devs React to Nvidia ACE @ UnrealFest"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/aDmblZMV43U/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Game Devs React to Nvidia ACE @ UnrealFest', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ienj1c | /r/LocalLLaMA/comments/1ienj1c/reaction_to_nvidias_recent_showcase_of_their_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'w5Lx_nmMRTjkfFVTBqjnvbO1tsxdldABkUlCAl_shE4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/AGpOKI8cmgBSOzAhLUhbtWg9UH7ngBzAiUNpGnP-GMQ.jpg?width=108&crop=smart&auto=webp&s=5f09c0e0b56d26409f95ae379e3722b5080133f0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/AGpOKI8cmgBSOzAhLUhbtWg9UH7ngBzAiUNpGnP-GMQ.jpg?width=216&crop=smart&auto=webp&s=03140b2e7794946237e24d77f237937b7575edc2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/AGpOKI8cmgBSOzAhLUhbtWg9UH7ngBzAiUNpGnP-GMQ.jpg?width=320&crop=smart&auto=webp&s=d89684d8770f6c0cde8af366f24c56a0c2052fa3', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/AGpOKI8cmgBSOzAhLUhbtWg9UH7ngBzAiUNpGnP-GMQ.jpg?auto=webp&s=9efb3a4fa4108c530c4103d9e6c4279e8a91c2da', 'width': 480}, 'variants': {}}]} |
||
DeepSeek way of thinking | 0 |
Kept asking deepseek-r1-distill-qwen-1.5b "what are you?" and "What are you designed for"
The model picks up a random piece of information and starts doing internal thinking and reasoning
https://preview.redd.it/y2akgptexdge1.png?width=2342&format=png&auto=webp&s=c47e3762be1d5f62053ba8261c63d8683ef67b3b
| 2025-01-31T19:58:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ienqbg/deek_seek_way_of_thinking/ | Better_Athlete_JJ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ienqbg | false | null | t3_1ienqbg | /r/LocalLLaMA/comments/1ienqbg/deek_seek_way_of_thinking/ | false | false | 0 | null |
|
Simple web app to run local instances of R1 | 1 | Ollama UI became outdated for R1, so I made a simple chat app for R1. It's distributed under AGPL. If there is enough demand I might change the license in a future version. Outside of that, enjoy.
I plan on making it look nicer later, with better formatting, saved chats, or RAG. But this is something that fulfills my needs. Here is the repo. It's just a simple git clone and npm install.
Default of the R1 LLAMA 8b distillation
npm version is 10.8.1
[https://github.com/kodecreer/easy\_ai/tree/master](https://github.com/kodecreer/easy_ai/tree/master) | 2025-01-31T20:04:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ienvtp/simple_web_app_to_run_local_instances_of_r1/ | Temporary-Gene-3609 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ienvtp | false | null | t3_1ienvtp | /r/LocalLLaMA/comments/1ienvtp/simple_web_app_to_run_local_instances_of_r1/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'q_NR8MF8VZ0KyrcCclIOqZflCT_IeQe2p2-YtQ2z8h4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nKoHMa3iHUEADXyXW-lgYsLbaB-JvjFGxofUwVYBf2o.jpg?width=108&crop=smart&auto=webp&s=afce19ca90641dd9cd302031c416e6fad7ca9dbf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nKoHMa3iHUEADXyXW-lgYsLbaB-JvjFGxofUwVYBf2o.jpg?width=216&crop=smart&auto=webp&s=b73e6f2a14943ff1c0d117058346a190ceb079b3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nKoHMa3iHUEADXyXW-lgYsLbaB-JvjFGxofUwVYBf2o.jpg?width=320&crop=smart&auto=webp&s=92684f706d278a6e3008944303fbaab4a1903910', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nKoHMa3iHUEADXyXW-lgYsLbaB-JvjFGxofUwVYBf2o.jpg?width=640&crop=smart&auto=webp&s=d5d5fe3e974210751bb7ecf6741a5d0c84a9d3e4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nKoHMa3iHUEADXyXW-lgYsLbaB-JvjFGxofUwVYBf2o.jpg?width=960&crop=smart&auto=webp&s=190a947a4ca0a6d51b8ac40ad1b1a372bc781815', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nKoHMa3iHUEADXyXW-lgYsLbaB-JvjFGxofUwVYBf2o.jpg?width=1080&crop=smart&auto=webp&s=61876eec98bcd8f1f1e8abfd4f5de331afbfaeb5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nKoHMa3iHUEADXyXW-lgYsLbaB-JvjFGxofUwVYBf2o.jpg?auto=webp&s=07f18084c1dd0eacc193dfee49712d95fb014fcf', 'width': 1200}, 'variants': {}}]} |
Cheapest way to run R1 locally? | 5 | I have got Ryzen 5600G, 16Gb ddr4 ram, 12gb RTX3060.
This PC is obviously a potato. But if I upgrade to 128GB RAM for example, will it work? I don't mind it being slow, it just needs to work
| 2025-01-31T20:14:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ieo4ik/cheapest_way_to_run_r1_locally/ | Glass-Driver2160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieo4ik | false | null | t3_1ieo4ik | /r/LocalLLaMA/comments/1ieo4ik/cheapest_way_to_run_r1_locally/ | false | false | self | 5 | null |
AGI | 1 | [removed] | 2025-01-31T20:19:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ieo8nd/agi/ | Fun_Spread_1802 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieo8nd | false | null | t3_1ieo8nd | /r/LocalLLaMA/comments/1ieo8nd/agi/ | false | false | self | 1 | null |
Need help with inference using unsloth | 1 | [removed] | 2025-01-31T20:29:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ieoh2b/need_help_with_inference_using_unsloth/ | rushat_bahi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieoh2b | false | null | t3_1ieoh2b | /r/LocalLLaMA/comments/1ieoh2b/need_help_with_inference_using_unsloth/ | false | false | self | 1 | null |
Best model for book translation | 5 | I'm interested in Asian literature and lectures, however often there are no high-quality translations available.
What would be a local model that can translate books (400-800 pages) consistently? | 2025-01-31T20:32:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ieojg7/best_model_for_book_translation/ | dasnabla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieojg7 | false | null | t3_1ieojg7 | /r/LocalLLaMA/comments/1ieojg7/best_model_for_book_translation/ | false | false | self | 5 | null |
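Whichever model you end up with, consistency over a 400-800 page book is mostly a chunking problem: translate a few paragraphs at a time and carry the previous translated chunk (or a running glossary) into each request so names and terms stay stable. A rough loop against a local OpenAI-compatible server; the endpoint, model name, and prompt are placeholders:

    # Sketch: chunked book translation with rolling context for consistent terminology.
    import requests

    def translate_book(paragraphs, endpoint="http://localhost:8000/v1/chat/completions",
                       model="local-model"):  # placeholder endpoint and model name
        translated, previous = [], ""
        for para in paragraphs:
            messages = [
                {"role": "system",
                 "content": "Translate to English. Keep names and terminology consistent "
                            "with the previous translated paragraph provided as context."},
                {"role": "user",
                 "content": f"Previous translation:\n{previous}\n\nTranslate this:\n{para}"},
            ]
            resp = requests.post(endpoint, json={"model": model, "messages": messages})
            previous = resp.json()["choices"][0]["message"]["content"]
            translated.append(previous)
        return "\n\n".join(translated)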
DeepSeek R1 takes #1 overall on a Creative Short Story Writing Benchmark | 336 | 2025-01-31T20:38:45 | zero0_one1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ieooqe | false | null | t3_1ieooqe | /r/LocalLLaMA/comments/1ieooqe/deepseek_r1_takes_1_overall_on_a_creative_short/ | false | false | 336 | {'enabled': True, 'images': [{'id': 'r0lZ7G_iu8uq97HCCkEnn2umetekmkVqGE9uhEEyNUc', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/i2p0m8em4ege1.png?width=108&crop=smart&auto=webp&s=203d96b2eaa53b7a1d0215abaa35ac41288f6f57', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/i2p0m8em4ege1.png?width=216&crop=smart&auto=webp&s=36a60d3da2e631cca84ff89f4f690f5efe880488', 'width': 216}, {'height': 147, 'url': 'https://preview.redd.it/i2p0m8em4ege1.png?width=320&crop=smart&auto=webp&s=953f8ec3e35af78f2d705052856f11ce560be157', 'width': 320}, {'height': 295, 'url': 'https://preview.redd.it/i2p0m8em4ege1.png?width=640&crop=smart&auto=webp&s=c4859ed3af650610750eb873e1231f2d526388ec', 'width': 640}, {'height': 443, 'url': 'https://preview.redd.it/i2p0m8em4ege1.png?width=960&crop=smart&auto=webp&s=a6541b1f5bdb4af55a99a802bb34a472517d46eb', 'width': 960}, {'height': 498, 'url': 'https://preview.redd.it/i2p0m8em4ege1.png?width=1080&crop=smart&auto=webp&s=68b3ae8643b31663e3312b78a7a30ca63689def5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://preview.redd.it/i2p0m8em4ege1.png?auto=webp&s=177a12a65fd84fea67ea6aa6943ea83dd6901561', 'width': 1300}, 'variants': {}}]} |
|||
Relatively budget 671B R1 CPU inference workstation setup, 2-3T/s | 63 | I saw a post going over how to do Q2 R1 inference with a gaming rig by reading the weights directly from SSDs. It's a very neat technique and I would also like to share my experiences with CPU inference with a regular EPYC workstation setup. This setup has good memory capacity and relatively decent CPU inference performance, while also providing a great backbone for GPU or SSD expansions. Being a workstation rather than a server means this rig should be rather easily worked with and integrated into your bedroom.
I am using a Q4KM GGUF and still experimenting with turning cores/CCDs/SMT on and off on my 7773X and trying different context lengths to better understand where the limit is at, but 3T/s seems to be the limit as everything is still extremely memory bandwidth starved.
CPU: Any Milan EPYC over 32 cores should be okay. The price of these things varies greatly depending on the part number and if they are ES/QS/OEM/Production chips. I recommend buying an ES or OEM 64-core variant, some of them go for $500-$600. Some cheapest 32-core OEM models can go as low as $200-$300. Make sure you ask the seller CPU/board/BIOSver compatibility before purchasing. **Never buy Lenovo or DELL locked EPYC chips unless you know what you are doing!** They are never going to work on consumer motherboards. Rome EPYCs can also work since they also support DDR4 3200, but they aren't too much cheaper and have quite a bit lower CPU performance compared to Milan. There are several overclockable ES/OEM Rome chips out here such as 32 core ZS1711E3VIVG5 and 100-000000054-04. 64 core ZS1406E2VJUG5 and 100-000000053-04. I had both ZS1711 and 54-04 and it was super fun to tweak around and OC them to 3.7GHz all core, if you can find one at a reasonable price, they are also great options.
Motherboard: H12SSL goes for around $500-600, and ROMED8-2T goes for $600-700. I recommend ROMED8-2T over H12SSL for the total 7x16 PCIe connectors rather than H12SSL's 5x16 + 2x8.
DRAM: This is where most money should be spent. You will want to get 8 sticks of 64GB DDR4 3200MT/s RDIMM. **It has to be RDIMM (Registered DIMM), and it also has to be the same model of memory.** Each stick costs around $100-125, so in total you should spend $800-1000 on memory. This will give you 512GB capacity and 200GB/s bandwidth. The stick I got is HMAA8GR7AJR4N-XN, which works well with my ROMED8-2T. You don't have to pick from the QVL list of the motherboard vendor, just use it as a reference. 3200MT/s is not a strict requirement, if your budget is tight, you can go down to 2933 or 2666. Also, I would avoid 64GB LRDIMMs (Load Reduced DIMM). They are earlier DIMMs in DDR4 era when per DRAM chip density was still low, so each DRAM package has 2 or 4 chips packed inside (DDP or 3DS), the buffers on them are also additional points of failure. 128GB and 256GB LRDIMMs are the cutting edge for DDR4, but they are outrageously expensive and hard to find. 8x64GB is enough for Q4 inference.
CPU cooler: I would limit the spending here to around $50. Any SP3 heatsink should be OK. If you bought 280W TDP CPUs, consider maybe getting better ones but there is no need to go above $100.
PSU: This system should be a backbone for more GPUs to one day be installed. I would start with a pretty beefy one, maybe around 1200W ish. I think around $200 is a good spot to shop for.
Storage: Any 2TB+ NVME SSD should be fairly flexible, they are fairly cheap these days. $100
Case: I recommend a full-tower with dual PSU support. I highly recommend Lianli's o11 and o11 XL family. They are quite pricy but done really well. $200
In conclusion, this whole setup should cost around $2000-2500 from scratch, not too much more expensive than a single 4090 nowadays. It can do Q4 R1 inference with usable context length and it's going to be a good starting point for future local inference. The 7 x16 PCIe gen 4 expansion provided is really handy and can do so much more once you can afford more GPUs.
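For anyone who wants to see what actually driving this looks like in code, here is a minimal sketch using the llama-cpp-python bindings; the file name, thread count, and context size are placeholders rather than a benchmarked config:

from llama_cpp import Llama

# Minimal CPU-only setup; tune n_threads to your core count and n_ctx to your RAM budget.
llm = Llama(
    model_path="DeepSeek-R1-Q4_K_M-00001-of-00009.gguf",  # placeholder shard name
    n_ctx=4096,          # the KV cache also eats RAM, so keep this modest
    n_threads=64,        # roughly one thread per physical core
    n_gpu_layers=0,      # pure CPU
    use_mmap=True,       # let the OS page weights in instead of copying them
)

out = llm("Explain RDIMM vs LRDIMM in one paragraph.", max_tokens=256)
print(out["choices"][0]["text"])

Since R1 is a MoE model with roughly 37B active parameters per token, the CPU only has to stream a fraction of the full 671B weights each step, which is a big part of why 2-3T/s is achievable at all on ~200GB/s of memory bandwidth.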
I am also looking into testing some old Xeons such as running dual E5v4s, they are dirt cheap right now. Will post some results once I have them running! | 2025-01-31T20:43:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ieosbx/relatively_budget_671b_r1_cpu_inference/ | xinranli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieosbx | false | null | t3_1ieosbx | /r/LocalLLaMA/comments/1ieosbx/relatively_budget_671b_r1_cpu_inference/ | false | false | self | 63 | null |
Smallest, cheapest option for running local LLMs. | 3 | I have very limited space. My goal is to get something good enough running at 25 tokens/second minimum. I don’t want to spend more than $800 if possible.
Would I be crazy to buy a M4 Mac mini? I think it will hit 25 tokens/second easily. And will be super small and power-efficient.
I know I could get much better results with a discrete GPU, but that would be more space, power, and money.
Willing to mess around with a Raspberry Pi or similar if there is any way to hit 25 tokens/second without breaking the bank. I already have a 16GB Pi 5.
But even with the Pi as an option, I’m thinking I’ll wind up spending less if I go the Mac mini route. Would also be helpful to know which upgrades would be best worth my money on a Mac mini. Like if I get the base M4 chip but max out the RAM, what will bottleneck me first?
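For the bandwidth question, some rough napkin math (the ~120GB/s figure for the base M4 is my assumption from published specs, so treat these as estimates):

# Decode speed is roughly memory bandwidth divided by bytes read per token.
bandwidth_gb_s = 120      # assumed base M4 unified memory bandwidth
model_size_gb = 5         # e.g. an 8B model at ~4-bit quantization
print(f"~{bandwidth_gb_s / model_size_gb:.0f} tok/s upper bound")  # ~24, before overhead

So on a base M4, memory bandwidth (not capacity) is likely the first bottleneck for the 25 tokens/second target; extra RAM mainly buys you larger models or longer context rather than speed.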
As far as models, DeepSeek or Llama 3.x maybe and quantized but the largest I can fit in memory. Tbh I’ve only used these a little and I’m not sure how much I’m giving up in quality, but I want OpenAI out of my data. | 2025-01-31T20:44:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ieotry/smallest_cheapest_option_for_running_local_llms/ | moseschrute19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieotry | false | null | t3_1ieotry | /r/LocalLLaMA/comments/1ieotry/smallest_cheapest_option_for_running_local_llms/ | false | false | self | 3 | null |
vLLM quantization performance: which kinds work best? | 5 | vLLM supports GGUF but the documentation seems to suggest that the speed will be better with AWQ. Does anyone have any experience with the current status? Is there a significant speed difference?
It's easier to run GGUF models in the exact size that fits, and there aren't very many AWQ quantizations in comparison. I'm trying to figure out if I need to start doing the AWQ quantization myself.
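For what it's worth, the offline-inference API makes a quick A/B test pretty painless; a minimal sketch (the model id is a placeholder for whatever AWQ checkpoint you're comparing):

from vllm import LLM, SamplingParams

# Swap `model`/`quantization` between an AWQ repo and a GGUF file and time both.
llm = LLM(model="some-org/Some-Model-AWQ", quantization="awq")  # placeholder repo id
params = SamplingParams(max_tokens=128, temperature=0.7)
outputs = llm.generate(["Write a haiku about quantization."], params)
print(outputs[0].outputs[0].text)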
Aphrodite builds on vLLM, so that might be another point of comparison. | 2025-01-31T20:49:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ieoxk0/vllm_quantization_performance_which_kinds_work/ | AutomataManifold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieoxk0 | false | null | t3_1ieoxk0 | /r/LocalLLaMA/comments/1ieoxk0/vllm_quantization_performance_which_kinds_work/ | false | false | self | 5 | null |
Chatgpt scares me. Is it post-truth area? | 1 | [removed] | 2025-01-31T20:49:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ieoxk8/chatgpt_scares_me_is_it_posttruth_area/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieoxk8 | false | null | t3_1ieoxk8 | /r/LocalLLaMA/comments/1ieoxk8/chatgpt_scares_me_is_it_posttruth_area/ | false | false | self | 1 | null |
Who is using DeepSeeks RL technique? | 3 | Curious who all has stated using their Reinforcement learning technique locally for use cases. What have you tried, how successful has it been? | 2025-01-31T20:50:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ieoyim/who_is_using_deepseeks_rl_technique/ | Vegetable_Sun_9225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieoyim | false | null | t3_1ieoyim | /r/LocalLLaMA/comments/1ieoyim/who_is_using_deepseeks_rl_technique/ | false | false | self | 3 | null |
DeepSeek AI blocked by Italian authorities | 295 | 2025-01-31T20:54:04 | https://www.euronews.com/next/2025/01/31/deepseek-ai-blocked-by-italian-authorities-as-others-member-states-open-probes | ApprehensiveCook2236 | euronews.com | 1970-01-01T00:00:00 | 0 | {} | 1iep1i4 | false | null | t3_1iep1i4 | /r/LocalLLaMA/comments/1iep1i4/deepseek_ai_blocked_by_italian_authorities/ | false | false | 295 | {'enabled': False, 'images': [{'id': 'AOiRP86KAYwnyvpycVKmjHypXwGmxU3rendzPON8X-w', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cwVbdtxL_MOCraBvILhveGZjsoXBHPHOS4Ik8eBEAT4.jpg?width=108&crop=smart&auto=webp&s=7ee08b14560644f937f776a3e03a67565a5c81aa', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cwVbdtxL_MOCraBvILhveGZjsoXBHPHOS4Ik8eBEAT4.jpg?width=216&crop=smart&auto=webp&s=c7a39d8b20e35f851768034465a05b3fe4e89c7c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cwVbdtxL_MOCraBvILhveGZjsoXBHPHOS4Ik8eBEAT4.jpg?width=320&crop=smart&auto=webp&s=a9af604009c8035a7bc7e5b55d3b952347e09cfb', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cwVbdtxL_MOCraBvILhveGZjsoXBHPHOS4Ik8eBEAT4.jpg?width=640&crop=smart&auto=webp&s=c74e97f5b3d89f2b950da588950bdaa1d7f71e9d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cwVbdtxL_MOCraBvILhveGZjsoXBHPHOS4Ik8eBEAT4.jpg?width=960&crop=smart&auto=webp&s=4a8953023d5ce39373ca377206e1123828ae3550', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cwVbdtxL_MOCraBvILhveGZjsoXBHPHOS4Ik8eBEAT4.jpg?width=1080&crop=smart&auto=webp&s=04ee5a7056833db20b336f3dc7d8904056d1da23', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/cwVbdtxL_MOCraBvILhveGZjsoXBHPHOS4Ik8eBEAT4.jpg?auto=webp&s=294737a4153fc946c129da15b1ef61c09564d7e9', 'width': 1200}, 'variants': {}}]} |
||
Is there a local "projects" feature? | 3 | Hey folks,
I love using Claude's projects feature, and I wonder if there's any local solution to this. Note that this would have to be run entirely locally, not using a third party company that allows you to use a locally-served API. Thanks! | 2025-01-31T20:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/1iep412/is_there_a_local_projects_feature/ | Berberis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iep412 | false | null | t3_1iep412 | /r/LocalLLaMA/comments/1iep412/is_there_a_local_projects_feature/ | false | false | self | 3 | null |
Add data to Deepseek? | 2 | Good afternoon:
I'm wondering, is it possible to add "custom" data to Deepseek? I'm looking into setting it up for work but we have lots of custom, super specific stuff that I wouldn't expect it to know anything about. Is it possible to "teach" it? Is there a way to give it more info? | 2025-01-31T21:06:41 | https://www.reddit.com/r/LocalLLaMA/comments/1iepccv/add_data_to_deepseek/ | rmp5s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iepccv | false | null | t3_1iepccv | /r/LocalLLaMA/comments/1iepccv/add_data_to_deepseek/ | false | false | self | 2 | null |
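One framing that may help: for company-specific documents the usual first step is retrieval (RAG) rather than retraining, i.e. embed the documents, pull the most relevant chunks per question, and prepend them to the prompt; fine-tuning comes later if the style or format itself needs teaching. A tiny sketch of the retrieval half (model choice and chunking are just illustrative):

import numpy as np
from sentence_transformers import SentenceTransformer

docs = ["chunked internal SOP text ...", "chunked product spec text ..."]  # your custom data
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def top_k(question, k=3):
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q            # cosine similarity, since vectors are normalized
    return [docs[i] for i in np.argsort(-scores)[:k]]

context = "\n".join(top_k("How do we handle warranty claims?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."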
Reasoning models struggling with this question based on Bowser's Vault in Mario Party Jamboree. Wondering if it's a weakness or I'm just not phrasing the question/prompt correctly. | 2 | I was trying to explain to someone the value and odds of Bowser's Vault in Mario Party Jamboree and thought, *hey! this would make a good question to ask a reasoning model!* So I did, and was shocked by the poor answers I was getting. I figured maybe it's me, maybe I'm not being clear, so I tried to refine the question and even fed the answer to the models (o1 and R1), but out of maybe 30 attempts R1 was the only one to get it right, and only once, failing on subsequent tries with the same prompt.
I even created a Python script of the "game", had the model analyze it, and asked the same question based on the script, and still no luck.
So I figured I'd come here and see what you guys think about the question: is it me, is the question unclear, and is there a way to prompt it so the models can solve it correctly every time?
**The question:**
A guessing game where the player tries to identify two hidden numbers, each ranging from 1 to 9. The two numbers may be the same or different. In each turn, the player guesses both numbers, and the system provides feedback on whether they got one, both, or neither correct. The goal is to reveal both numbers and win the game.
Game System
At the start, the system randomly generates two hidden numbers, each between 1 and 9.
The game proceeds in turns, with the player making a guess for both numbers each turn.
The system checks the guess and provides feedback:
If both numbers are correct, the player wins.
If either hidden number is correct, regardless of order, it is revealed; the player will know what that number is and continues guessing the remaining number.
If neither is correct, the player keeps trying.
The game ends when both numbers are revealed.
Based on the logic of the game: What is the maximum number of turns a player would need to guess both hidden numbers if they try every possibility while keeping track of their previous guesses?
Here is the py script:
import random

def main():
    hidden_num1 = random.randint(1, 9)
    hidden_num2 = random.randint(1, 9)
    revealed1 = False
    revealed2 = False
    print("Welcome to the Two-Number Guessing Game!")
    print("Guess two numbers between 1 and 9. Try to guess both to win!\n")
    while True:
        try:
            guess1 = int(input("Enter your guess for the first number: "))
            guess2 = int(input("Enter your guess for the second number: "))
        except ValueError:
            print("Please enter valid integers between 1 and 9.\n")
            continue
        # Check guesses and update revealed status
        current_correct1, new_reveal1 = check_guess(guess1, hidden_num1, revealed1)
        current_correct2, new_reveal2 = check_guess(guess2, hidden_num2, revealed2)
        if current_correct1:
            revealed1 = True
        if current_correct2:
            revealed2 = True
        # Check win conditions
        if current_correct1 and current_correct2:
            print(f"\nCongratulations! You've guessed both numbers correctly! They were {hidden_num1} and {hidden_num2}. You win!")
            break
        if revealed1 and revealed2:
            print(f"\nBoth numbers have been revealed! They were {hidden_num1} and {hidden_num2}. You win!")
            break
        # Provide feedback
        correct_count = sum([current_correct1, current_correct2])
        feedback = []
        if new_reveal1:
            feedback.append(f"First number revealed: {hidden_num1}")
        if new_reveal2:
            feedback.append(f"Second number revealed: {hidden_num2}")
        if correct_count == 1 and not (new_reveal1 or new_reveal2):
            if current_correct1:
                feedback.append(f"First number correct ({hidden_num1})")
            else:
                feedback.append(f"Second number correct ({hidden_num2})")
        elif correct_count == 0:
            feedback.append("No correct guesses")
        print("\n" + "\n".join(feedback))
        print_revealed_status(revealed1, revealed2, hidden_num1, hidden_num2)
        print()

def check_guess(guess, hidden_num, revealed):
    if revealed:
        return (guess == hidden_num, False)
    return (guess == hidden_num, guess == hidden_num)

def print_revealed_status(revealed1, revealed2, num1, num2):
    status = []
    if revealed1:
        status.append(f"First number: {num1}")
    else:
        status.append("First number: _")
    if revealed2:
        status.append(f"Second number: {num2}")
    else:
        status.append("Second number: _")
    print("Current revealed status:", ", ".join(status))

if __name__ == "__main__":
    main()
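A quick way to sanity-check the worst case under these rules, assuming the simple strategy of guessing the same value for both slots on each turn:

# Slot 1 is revealed on turn a, slot 2 on turn b, so this strategy ends on max(a, b).
worst_case = max(max(a, b) for a in range(1, 10) for b in range(1, 10))
print(worst_case)  # matches the spoilered answer below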
Answer is >!9 turns, any more and it's the player's inefficiency!< | 2025-01-31T21:09:13 | https://www.reddit.com/r/LocalLLaMA/comments/1iepekk/reasoning_models_struggling_with_this_question/ | IamVeryBraves | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iepekk | false | null | t3_1iepekk | /r/LocalLLaMA/comments/1iepekk/reasoning_models_struggling_with_this_question/ | false | false | self | 2 | null |
He is still just llama | 1 | 2025-01-31T21:17:35 | MarinatedPickachu | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ieplr4 | false | null | t3_1ieplr4 | /r/LocalLLaMA/comments/1ieplr4/he_is_still_just_llama/ | false | false | 1 | {'enabled': True, 'images': [{'id': '39E1KsJdttVFjOC1DT3X4a3rv-QyGbjXA1OoaqQNyTk', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ybg64w9lbege1.jpeg?width=108&crop=smart&auto=webp&s=b2d5e40d1113cd91991f67bf23904f3c566a3f1e', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ybg64w9lbege1.jpeg?width=216&crop=smart&auto=webp&s=57b8da4168766c67a0922614af0085ed129bca1a', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ybg64w9lbege1.jpeg?width=320&crop=smart&auto=webp&s=dd99110e516d0cef798172b1910a7a958fb17569', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/ybg64w9lbege1.jpeg?width=640&crop=smart&auto=webp&s=6b68c088df9453be906c54ca3dadf23ee2e638e0', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/ybg64w9lbege1.jpeg?width=960&crop=smart&auto=webp&s=095040db3507de5590cd2d19293752d82b3e9bed', 'width': 960}], 'source': {'height': 2048, 'url': 'https://preview.redd.it/ybg64w9lbege1.jpeg?auto=webp&s=ea31f4d6f3d3b617b516e8a34f81e6e975d85ed5', 'width': 1024}, 'variants': {}}]} |
|||
UI where I can make 2 LLMs talk to each other with RAG and Document handling + display? Not Sillytavern please | 1 | [removed] | 2025-01-31T21:27:07 | https://www.reddit.com/r/LocalLLaMA/comments/1iepttp/ui_where_i_can_make_2_llms_talk_to_each_other/ | WombatMask | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iepttp | false | null | t3_1iepttp | /r/LocalLLaMA/comments/1iepttp/ui_where_i_can_make_2_llms_talk_to_each_other/ | false | false | self | 1 | null |
The future of LLM performance isn't "self-reasoning" it's "self-loathing". | 0 | 2025-01-31T21:34:56 | cmndr_spanky | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ieq0di | false | null | t3_1ieq0di | /r/LocalLLaMA/comments/1ieq0di/the_future_of_llm_performance_isnt_selfreasoning/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'oZ0UjiqM_LhRa0QfgTLOHZ6R4iFez0jHwLn4_ndd198', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/5zcwqw8ieege1.png?width=108&crop=smart&auto=webp&s=392d3b03fcb45e45217ef27fdd12a78c45e8b2b4', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/5zcwqw8ieege1.png?width=216&crop=smart&auto=webp&s=52e5ac69daae0f74d6836d9ced0aad54a5db3741', 'width': 216}, {'height': 195, 'url': 'https://preview.redd.it/5zcwqw8ieege1.png?width=320&crop=smart&auto=webp&s=1a331fa638eb5a624501ad5b85a783a73619b465', 'width': 320}, {'height': 390, 'url': 'https://preview.redd.it/5zcwqw8ieege1.png?width=640&crop=smart&auto=webp&s=f9e802a89047d8ec7165e10e50edd05126138752', 'width': 640}, {'height': 585, 'url': 'https://preview.redd.it/5zcwqw8ieege1.png?width=960&crop=smart&auto=webp&s=2f74bf852d6a3cdd6119de1496973debf28a8b23', 'width': 960}, {'height': 658, 'url': 'https://preview.redd.it/5zcwqw8ieege1.png?width=1080&crop=smart&auto=webp&s=63d67b3814c377ce2ba58bf974415eb51421ab02', 'width': 1080}], 'source': {'height': 885, 'url': 'https://preview.redd.it/5zcwqw8ieege1.png?auto=webp&s=6b5bc72e48d6ed009538d45913db2d46ed365f35', 'width': 1451}, 'variants': {}}]} |
|||
anybody use the deepseek r1 from nvda? | 1 | [removed] | 2025-01-31T21:35:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ieq0fv/anybody_use_the_deepseek_r1_from_nvda/ | Junior-Education8608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieq0fv | false | null | t3_1ieq0fv | /r/LocalLLaMA/comments/1ieq0fv/anybody_use_the_deepseek_r1_from_nvda/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bDBtR3TunTkvgRYojn0XqDeBIKzBtXMLGIZO_q-cAwo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/e_zd-58KsfInR61YXtvDdp1zboqyLfXLN5s93v6quM4.jpg?width=108&crop=smart&auto=webp&s=c62e8721a6e4db3bd3de8b812202e1da10dc4ae5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/e_zd-58KsfInR61YXtvDdp1zboqyLfXLN5s93v6quM4.jpg?width=216&crop=smart&auto=webp&s=ac3752db1669ab9a59c2ba416726ac0f130fd414', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/e_zd-58KsfInR61YXtvDdp1zboqyLfXLN5s93v6quM4.jpg?width=320&crop=smart&auto=webp&s=2540776faaabafce1c5701494a84e4b621615932', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/e_zd-58KsfInR61YXtvDdp1zboqyLfXLN5s93v6quM4.jpg?width=640&crop=smart&auto=webp&s=f069a105b9b50511edf5cc1043929c743c4f1348', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/e_zd-58KsfInR61YXtvDdp1zboqyLfXLN5s93v6quM4.jpg?width=960&crop=smart&auto=webp&s=4e4fe7439c7043d1e4cf04a0b69d6b918fcd635e', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/e_zd-58KsfInR61YXtvDdp1zboqyLfXLN5s93v6quM4.jpg?auto=webp&s=0913f080cae89f248bf191616652dde12c91f305', 'width': 1024}, 'variants': {}}]} |
o3-mini? | 3 | I was about to ask a trivial question so I open the model selection to go with 4o, when a wild o3-mini appeared. Btw, this is using the android app. This is getting exciting | 2025-01-31T21:41:39 | jalbanesi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ieq5yp | false | null | t3_1ieq5yp | /r/LocalLLaMA/comments/1ieq5yp/o3mini/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'xP_FlnbBOaeK1joURxTW3N4Byf26AuMw_o_wk3VkKu4', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/aior4ixvfege1.jpeg?width=108&crop=smart&auto=webp&s=298a9beb3d8444c49f7ccc528843821e39a09771', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/aior4ixvfege1.jpeg?width=216&crop=smart&auto=webp&s=dd6a1fa364904570002a0b0ec9042286d5ca11b9', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/aior4ixvfege1.jpeg?width=320&crop=smart&auto=webp&s=496a1dba83dc14d2bbbf16b787bcddb69da9fa21', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/aior4ixvfege1.jpeg?width=640&crop=smart&auto=webp&s=922866aa5e68851dea76b3264380aa7296c6f4f1', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/aior4ixvfege1.jpeg?width=960&crop=smart&auto=webp&s=832c5f43ca7c26109f0a5780aa898b73a7220cd0', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/aior4ixvfege1.jpeg?width=1080&crop=smart&auto=webp&s=544b8a254486cfadd4be75b3886b10af571a4130', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/aior4ixvfege1.jpeg?auto=webp&s=ef6f61ee9ed9d45d10cf8e777c77338d7bc0d336', 'width': 1080}, 'variants': {}}]} |
||
its not local but the new o3-mini from openAI just came available | 1 | [removed] | 2025-01-31T21:48:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ieqbcn/its_not_local_but_the_new_o3mini_from_openai_just/ | DHamov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieqbcn | false | null | t3_1ieqbcn | /r/LocalLLaMA/comments/1ieqbcn/its_not_local_but_the_new_o3mini_from_openai_just/ | false | false | 1 | null |
|
Is stacking 3090s the best for inference of mid size llms ? | 3 | I've been looking into the hardware side of running mid-size language models—like those in the 24B-70B parameter range. I'm curious about using multiple RTX 3090s for inference at a decent speed, but I’m not entirely sure if this is the optimal approach.
The 3090 is relatively cost-effective compared to some of the enterprise-grade GPUs. However, is there a point where stacking 3090s becomes less efficient compared to investing in fewer, but more powerful, GPUs (like the A6000 or even A100 etc,,,)? | 2025-01-31T21:51:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ieqe5w/is_stacking_3090s_the_best_for_inference_of_mid/ | 11TheM11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieqe5w | false | null | t3_1ieqe5w | /r/LocalLLaMA/comments/1ieqe5w/is_stacking_3090s_the_best_for_inference_of_mid/ | false | false | self | 3 | null |
He is still just llama | 1 | [removed] | 2025-01-31T22:03:25 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ieqohd | false | null | t3_1ieqohd | /r/LocalLLaMA/comments/1ieqohd/he_is_still_just_llama/ | false | false | default | 1 | null |
||
Running deepseek r1? | 1 | [removed] | 2025-01-31T22:15:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ieqz0i/running_deepseek_r1/ | Illustrious-Row6858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieqz0i | false | null | t3_1ieqz0i | /r/LocalLLaMA/comments/1ieqz0i/running_deepseek_r1/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OqAvtQ4tlA8vKt4R_1outxRodFTo7HM0fblhK0y5vrk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cCXkGVCVnPScIWZm9HqARTG-ieEMdGHLlzWGG7wf-kE.jpg?width=108&crop=smart&auto=webp&s=8a480083ca56e1cbe810b428889ead7407dc79b0', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/cCXkGVCVnPScIWZm9HqARTG-ieEMdGHLlzWGG7wf-kE.jpg?auto=webp&s=ac0c5a4567d2d1c72fdb480636106815d2b6b352', 'width': 200}, 'variants': {}}]} |
Multiple Local Sessions | 0 | Got six non-GPU, small memory PCs on a LAN. I want to drop in a capable new machine running a LOCAL Llama model where I can additionally RAG a bunch of private data PDFs in one place. Prompt arrivals will be generally sparse, but there is still a good likelihood there will be overlap even with some sensible investment to keep token speed timely. Want it no-training simple on the PCs.
Anyone have a suggestion on how to build a prompt/completion traffic cop/queue (or equivalent) so the desktop windows appear in-session 24x7 … accepting completion delays will occur during the overlaps? | 2025-01-31T22:17:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ier0nc/multiple_local_sessions/ | GaltEngineering | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ier0nc | false | null | t3_1ier0nc | /r/LocalLLaMA/comments/1ier0nc/multiple_local_sessions/ | false | false | self | 0 | null |
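One minimal shape for that traffic cop: a tiny proxy that serializes requests to the model server with a lock, so each desktop just points its existing client at the proxy and simply waits during overlaps. A sketch assuming an OpenAI-compatible backend such as llama-server (ports are placeholders):

import threading
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
lock = threading.Lock()                                  # one completion at a time
BACKEND = "http://localhost:8080/v1/chat/completions"    # placeholder model server

@app.route("/v1/chat/completions", methods=["POST"])
def proxy():
    with lock:                                           # callers queue here during overlaps
        r = requests.post(BACKEND, json=request.get_json(), timeout=600)
    return jsonify(r.json()), r.status_code

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=9000)                   # the six PCs point at this port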
Running deepseek r1? | 1 | [removed] | 2025-01-31T22:18:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ier19e/running_deepseek_r1/ | Illustrious-Row6858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ier19e | false | null | t3_1ier19e | /r/LocalLLaMA/comments/1ier19e/running_deepseek_r1/ | false | false | self | 1 | null |
How can I run an llm using cpu instead of GPU - running on mac m2 arm chips | 0 | Hello, I'm currently running mistral 7B with ollama locally.
I'm connecting through the API (localhost) and looking to run different instances to speed up the process.
It's using a lot of RAM and almost nothing from the CPU. Should I switch to a model that leverages more CPU, or can I make a modification to force it to use my CPU?
[What I see when monitoring the llm running \(multiple threads\)](https://preview.redd.it/xj6yvm0wlege1.png?width=779&format=png&auto=webp&s=70d9e971678d3d700eea90ca8fc80b683f9000d4)
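If the goal is just to compare, Ollama exposes a per-request num_gpu option (the number of layers offloaded to Metal/GPU); setting it to 0 forces CPU-only inference. A sketch against the default local API:

import requests

r = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral:7b",
        "prompt": "Say hello in one sentence.",
        "stream": False,
        "options": {"num_gpu": 0},   # 0 offloaded layers -> CPU-only
    },
)
print(r.json()["response"])

Note that on Apple Silicon the memory is unified, so high RAM use is expected either way, and Metal is usually faster than the CPU cores; this mainly lets you measure the difference rather than gain speed.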
| 2025-01-31T22:21:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ier3fu/how_can_i_run_an_llm_using_cpu_instead_of_gpu/ | voidwater1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ier3fu | false | null | t3_1ier3fu | /r/LocalLLaMA/comments/1ier3fu/how_can_i_run_an_llm_using_cpu_instead_of_gpu/ | false | false | 0 | null |
|
Deploy Deepseek R1 8B on AWS | 0 | 2025-01-31T22:26:40 | https://www.slashml.com/blog/host-deepseek-r1-on-aws | fazkan | slashml.com | 1970-01-01T00:00:00 | 0 | {} | 1ier7z6 | false | null | t3_1ier7z6 | /r/LocalLLaMA/comments/1ier7z6/deploy_deepseek_r1_8b_on_aws/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Z6gDJER1QkwrMgDfJwUyRpxiLulc2mHLGuqwWg6dgCQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/t2kkIWlCHa7MLtBrx7LwgXZL-Qy5k5es5DbDaOa3mKA.jpg?width=108&crop=smart&auto=webp&s=d79b93293fd629ff751d33bf02cdb6a4c0697388', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/t2kkIWlCHa7MLtBrx7LwgXZL-Qy5k5es5DbDaOa3mKA.jpg?width=216&crop=smart&auto=webp&s=82a9dd34852e1926cae84e5666887ca14678bf79', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/t2kkIWlCHa7MLtBrx7LwgXZL-Qy5k5es5DbDaOa3mKA.jpg?width=320&crop=smart&auto=webp&s=256bc78266b5cbb8885d6357528072bb34fc1398', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/t2kkIWlCHa7MLtBrx7LwgXZL-Qy5k5es5DbDaOa3mKA.jpg?width=640&crop=smart&auto=webp&s=b3ddb7433d2e52175b4b768b07fc105f921fe136', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/t2kkIWlCHa7MLtBrx7LwgXZL-Qy5k5es5DbDaOa3mKA.jpg?width=960&crop=smart&auto=webp&s=2095528f67ecd41d79fce26c646df4bce0492247', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/t2kkIWlCHa7MLtBrx7LwgXZL-Qy5k5es5DbDaOa3mKA.jpg?auto=webp&s=738394dc73a151d3ee6e039e0169783a3498de25', 'width': 1024}, 'variants': {}}]} |
||
Hardware provisioning challenges while selling local LLM apps to customers | 2 | I founded a startup that offers a suite of applications that leverage local AI (running on a MacBook or an NVIDIA GPU-enabled machine, etc.). People love the idea of running the AI locally and not having to worry about privacy or token counts. But our initial sales outreach has shown that a huge percentage of users don't have a MacBook or a GPU-enabled notebook. I am offering to let them bring their own GPUs or to host my application on their VPCs. They seem a bit hesitant about the idea in theory, and I feel like this is a source of friction in customer onboarding. The product runs very smoothly on my MacBook and my GPU-enabled Windows laptop.
What are your thoughts on the practicality of local llm inferences. Is this something that can become mainstream? There are a lot of products that use ChatGPT or claude APIs today, but how do I convince customers to invest in a local setup? | 2025-01-31T22:32:43 | https://www.reddit.com/r/LocalLLaMA/comments/1iercuo/hardware_provisioning_challenges_while_selling/ | __amberluz__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iercuo | false | null | t3_1iercuo | /r/LocalLLaMA/comments/1iercuo/hardware_provisioning_challenges_while_selling/ | false | false | self | 2 | null |
Oh o3-mini you cheeky dawg . HELP ME Government Daddy HELP ME . DeepSeek IS STEALING MY DATA. Oh you knew this was gonna happen didn't you? Cry wolf before the other party does. | 0 | 2025-01-31T22:38:35 | Educational_Gap5867 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ierhgm | false | null | t3_1ierhgm | /r/LocalLLaMA/comments/1ierhgm/oh_o3mini_you_cheeky_dawg_help_me_government/ | true | false | spoiler | 0 | {'enabled': True, 'images': [{'id': 'BvGv6ub6p0Fh3HV5_5hkoXgT_JUIzOdH4aPbOnQ-CB8', 'resolutions': [{'height': 22, 'url': 'https://preview.redd.it/no8b17cppege1.png?width=108&crop=smart&auto=webp&s=d5952a64845dc29d8b8c20f1f1a7571425fe8635', 'width': 108}, {'height': 45, 'url': 'https://preview.redd.it/no8b17cppege1.png?width=216&crop=smart&auto=webp&s=d2e99031f3dec5dfd8257372537ead3d43083780', 'width': 216}, {'height': 68, 'url': 'https://preview.redd.it/no8b17cppege1.png?width=320&crop=smart&auto=webp&s=9c3fa999d51c267bfbaa58b1ab45935e29726bd5', 'width': 320}, {'height': 136, 'url': 'https://preview.redd.it/no8b17cppege1.png?width=640&crop=smart&auto=webp&s=9dd14796eb8bb1251379c460298bd142e1ddd6ce', 'width': 640}], 'source': {'height': 159, 'url': 'https://preview.redd.it/no8b17cppege1.png?auto=webp&s=f6dc47e8b0adf7493b168a24c185b7361a94282a', 'width': 747}, 'variants': {'obfuscated': {'resolutions': [{'height': 22, 'url': 'https://preview.redd.it/no8b17cppege1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=ffd7610bd1d741640fc5c64d47c6c6676a0d9b6e', 'width': 108}, {'height': 45, 'url': 'https://preview.redd.it/no8b17cppege1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=88d18aa76a12912069cef4ed8398ced479c918b9', 'width': 216}, {'height': 68, 'url': 'https://preview.redd.it/no8b17cppege1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=677e1e7c702b718093432b5c1515e48c9e449789', 'width': 320}, {'height': 136, 'url': 'https://preview.redd.it/no8b17cppege1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=1ecc31fe3d9e4c7fc8b268cbd11d55c746834735', 'width': 640}], 'source': {'height': 159, 'url': 'https://preview.redd.it/no8b17cppege1.png?blur=40&format=pjpg&auto=webp&s=d6f84aab8f564b7bb76f8567d710bce577299307', 'width': 747}}}}]} |
||
Thoughts on running Polaris cards for local inference? | 2 | So I've been wanting to get a really really beefy AI setup since deepseek launched, and one thing that's just been going through my head is the RX 580 cards, they have 580gb each, they're around as old as the Tesla P100 so software support could not be there is my main worry especially for AMD, but you can find the 16gb version that was made for miners for 100 dollars online, the normal 8 gig version's 40 dollars and you can flash the BIOS pretty easily since I've done it before to tune the memory timings so I'm thinking about getting one of those and buying the VRAM chips separately and trying to solder them on and flash the 16gb card's bios, but yeah I was just wondering if this is something anyone else has been doing or thinking of doing since you can get 12 of those for (lower range assuming free VRAM so obviously not) 480-1200 dollars? and that's 192gb of vram in total, now I've always thought this was a bad idea because they don't have NVlink but I've been told the software for this over PCIe is fast enough now that people run models using even Nvidia and AMD cards mismatched and still manage to get the benefits of their combined memory so I'm just sorta thinking about why this is a bad idea or why you wouldn't really do this? I used to mine with these cards 24/7 so I can tell you straight to the face that power consumption is likely not even as high as two 3090s with a setup like this. | 2025-01-31T22:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ieri5t/thoughts_on_running_polaris_cards_for_local/ | Illustrious-Row6858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieri5t | false | null | t3_1ieri5t | /r/LocalLLaMA/comments/1ieri5t/thoughts_on_running_polaris_cards_for_local/ | false | false | self | 2 | null |
OpenAI becoming open Ai? | 21 | 2025-01-31T22:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ierlm9/openai_becoming_open_ai/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ierlm9 | false | null | t3_1ierlm9 | /r/LocalLLaMA/comments/1ierlm9/openai_becoming_open_ai/ | false | false | 21 | null |
||
How many parameters in Meta AI, Gemini, ChatGPT chats? | 0 | I asked in the Meta AI chat, and it says it has 70B parameters. But on the Meta blog, the link to the Meta chat says it is 405B.
Meta chat: "I'm based on Llama 3, which has 3 model sizes: 8B, 70B, and 405B. I have 70B parameters, not 405B." | 2025-01-31T22:48:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ierphz/how_many_parameters_in_meta_ai_gemini_chatgpt/ | medgel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ierphz | false | null | t3_1ierphz | /r/LocalLLaMA/comments/1ierphz/how_many_parameters_in_meta_ai_gemini_chatgpt/ | false | false | self | 0 | null |
LM Studio crash | 2 | I have a 7900 XT. When I try to download the model in LM Studio, the app crashes and the GPU spikes to 100%. When I go back into the app it says "timed out, please try to resume," and when I try, it crashes again.
Does anyone know why?
i have 7900xt 20gb vram and 16gb ram | 2025-01-31T22:49:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ierqqc/lm_studio_crash/ | JamesJackL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ierqqc | false | null | t3_1ierqqc | /r/LocalLLaMA/comments/1ierqqc/lm_studio_crash/ | false | false | self | 2 | null |
Definition of a small LLM | 1 | [removed] | 2025-01-31T22:51:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ierrjv/definition_of_a_small_llm/ | as2307 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ierrjv | false | null | t3_1ierrjv | /r/LocalLLaMA/comments/1ierrjv/definition_of_a_small_llm/ | false | false | self | 1 | null |
Is there an orchestration for AIs? | 2 | I have a few models locally, each excelling at something. I wanna build a system where, after I send a prompt, maybe with a file (like a CSV or an image), it gets sent to the models and they each decide whether they should do something about it, or even ask other models. I could send an image and say "crop this image to include only what is relevant". Llava would tell what is important in the image and where, and send the answer to Llama and DeepSeek. Llama would call the tool to crop the image. DeepSeek would summarize the description that Llava gave. The example is maybe far-fetched, it's just an example; each model could send messages to each other.
I wanna orchestrate them to have my own local compound ai system. How should I go about it?
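One low-tech way to start before reaching for a framework: a small dispatcher that calls each local model through its API and passes outputs along as messages. A sketch assuming everything is served by Ollama (model names are whatever you have pulled; real image input would also need the base64 images field):

import requests

OLLAMA = "http://localhost:11434/api/generate"

def ask(model, prompt):
    r = requests.post(OLLAMA, json={"model": model, "prompt": prompt, "stream": False})
    return r.json()["response"]

# Each "message between models" is just another ask() fed the previous output.
description = ask("llava", "Describe the most relevant region of this image.")
decision = ask("llama3", f"Given this description, which tool should run and why?\n{description}")
summary = ask("deepseek-r1", f"Summarize for the user:\n{description}\n{decision}")
print(summary)

Frameworks like AutoGen or LangGraph give you the same idea with routing, tools, and memory built in, but a dispatcher like this is usually enough to prototype the flow.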
I tried asking Perplexity but honestly even that can be mistaken at times. I haven't found an "AI orchestration", but if I came up with this idea, someone definelly took care of it already. If you wanna orchestrate docker containers, you use kubernets. For microservices, it's rabbitmq. What about compound AI? | 2025-01-31T22:51:59 | https://www.reddit.com/r/LocalLLaMA/comments/1iersbt/is_there_an_orchestration_for_ais/ | Blender-Fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iersbt | false | null | t3_1iersbt | /r/LocalLLaMA/comments/1iersbt/is_there_an_orchestration_for_ais/ | false | false | self | 2 | null |
First PC from my Arc Cluster, 9 more to build... | 0 | 2025-01-31T22:53:24 | https://v.redd.it/074fm5e4sege1 | Ragecommie | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iertfy | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/074fm5e4sege1/DASHPlaylist.mpd?a=1740956018%2CMWNiMmE3NGIzM2VlNzhiODcxZWE4MmVkNDFkN2Q1M2RjMTMwOTEzNzEwNmRkODVjYjVhYzRjMjc4NTFjMzVmOA%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/074fm5e4sege1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/074fm5e4sege1/HLSPlaylist.m3u8?a=1740956018%2CM2UzNDBlMjg3YmYzM2MyN2JhYzc2NDA1M2NmZWUyYjhmYjBlMWVhMmFhYzhiZTA5MDYwMThjN2U4ZmNjMzc3ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/074fm5e4sege1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1iertfy | /r/LocalLLaMA/comments/1iertfy/first_pc_from_my_arc_cluster_9_more_to_build/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'c3p6OHA1ZTRzZWdlMfrzokP8o8-VaDBwaAN1-Gf4sRqZjK1AWxSnrDgfSg-N', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c3p6OHA1ZTRzZWdlMfrzokP8o8-VaDBwaAN1-Gf4sRqZjK1AWxSnrDgfSg-N.png?width=108&crop=smart&format=pjpg&auto=webp&s=084c5af25270e6993d79c9c8af4f6b5270fef022', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/c3p6OHA1ZTRzZWdlMfrzokP8o8-VaDBwaAN1-Gf4sRqZjK1AWxSnrDgfSg-N.png?width=216&crop=smart&format=pjpg&auto=webp&s=08dd7d4e47fb55a60c9b5cb90f0d20cb91d3d241', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/c3p6OHA1ZTRzZWdlMfrzokP8o8-VaDBwaAN1-Gf4sRqZjK1AWxSnrDgfSg-N.png?width=320&crop=smart&format=pjpg&auto=webp&s=6daba79e589b8e8fe7f00ed52612e80be033a1cd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/c3p6OHA1ZTRzZWdlMfrzokP8o8-VaDBwaAN1-Gf4sRqZjK1AWxSnrDgfSg-N.png?width=640&crop=smart&format=pjpg&auto=webp&s=677feb7ffd4109b7e63ee5e64c4ccd3e0ed36d94', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/c3p6OHA1ZTRzZWdlMfrzokP8o8-VaDBwaAN1-Gf4sRqZjK1AWxSnrDgfSg-N.png?width=960&crop=smart&format=pjpg&auto=webp&s=2d31708783fd7af73fb23bee5479ad7c62f22ab6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/c3p6OHA1ZTRzZWdlMfrzokP8o8-VaDBwaAN1-Gf4sRqZjK1AWxSnrDgfSg-N.png?width=1080&crop=smart&format=pjpg&auto=webp&s=650e0317d5a57a8936aa64dcf791cd7f178f2fe1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/c3p6OHA1ZTRzZWdlMfrzokP8o8-VaDBwaAN1-Gf4sRqZjK1AWxSnrDgfSg-N.png?format=pjpg&auto=webp&s=5c5143527c19b7a1f821f56eeb65126a915b56f8', 'width': 1920}, 'variants': {}}]} |
||
Is there more to mobile LLM apps than NSFW chat bots? | 4 | Seems like the mobile space for LLMs is dominated by red light district style apps. Anyone finding any sorta value that's not that on mobile? | 2025-01-31T22:54:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ierug6/is_there_more_to_mobile_llm_apps_than_nsfw_chat/ | Vegetable_Sun_9225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ierug6 | false | null | t3_1ierug6 | /r/LocalLLaMA/comments/1ierug6/is_there_more_to_mobile_llm_apps_than_nsfw_chat/ | false | false | nsfw | 4 | null
5090 Astral draws >620watts, future problem? | 0 | I cannot understand how 5090 Astral and 5090 suprim consistently draw >620 watts while gaming and is considered safe. I thought the cable can handle up to 600watts. If 24/7 long term, this could be a seeious issue, especially for AI models. What am I missing? | 2025-01-31T22:57:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ierwxc/5090_astral_draws_620watts_future_problem/ | Dry-Bunch-7448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ierwxc | false | null | t3_1ierwxc | /r/LocalLLaMA/comments/1ierwxc/5090_astral_draws_620watts_future_problem/ | false | false | self | 0 | null |
Generating videos | 1 | [removed] | 2025-01-31T23:00:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ieryph/generating_videos/ | sKemo12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieryph | false | null | t3_1ieryph | /r/LocalLLaMA/comments/1ieryph/generating_videos/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'yeHAmedlZveKaA1pAoQqzrYMSfteaUwlwFejl0q5_Cw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Dyo5Vp5AIgldntYC0o3UWukvhIcRwuzDPrG5rATWd4k.jpg?width=108&crop=smart&auto=webp&s=4f1774d9ca04fb8b12fb5ca15ec7de716c915dfc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Dyo5Vp5AIgldntYC0o3UWukvhIcRwuzDPrG5rATWd4k.jpg?width=216&crop=smart&auto=webp&s=a1d4dc86d5ec67a61920652c92ed19d4e5004713', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Dyo5Vp5AIgldntYC0o3UWukvhIcRwuzDPrG5rATWd4k.jpg?width=320&crop=smart&auto=webp&s=3747279f773c22307fea6714a2f0fee2663c8dd2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Dyo5Vp5AIgldntYC0o3UWukvhIcRwuzDPrG5rATWd4k.jpg?auto=webp&s=64a1e6bd4e98535ee97e86070cc6779fe6de23a2', 'width': 480}, 'variants': {}}]} |
Generating images/videos | 1 | [removed] | 2025-01-31T23:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ies13d/generating_imagesvideos/ | sKemo12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ies13d | false | null | t3_1ies13d | /r/LocalLLaMA/comments/1ies13d/generating_imagesvideos/ | false | false | self | 1 | null |
GUYS ! We might have OpenAI back !! | 0 | 2025-01-31T23:04:00 | https://www.reddit.com/gallery/1ies230 | citaman | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ies230 | false | null | t3_1ies230 | /r/LocalLLaMA/comments/1ies230/guys_we_might_have_openai_back/ | false | false | 0 | null |
||
openai can be opening again | 693 | 2025-01-31T23:09:00 | tensorsgo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ies630 | false | null | t3_1ies630 | /r/LocalLLaMA/comments/1ies630/openai_can_be_opening_again/ | false | false | 693 | {'enabled': True, 'images': [{'id': 'zvb4ywEG0mbs7VQywOgAVJELaHdLVdLZJbe86VqVK-s', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/1oovs3vgvege1.jpeg?width=108&crop=smart&auto=webp&s=d557abef5cbc4b0e827baf7ae104346c53d27e19', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/1oovs3vgvege1.jpeg?width=216&crop=smart&auto=webp&s=5fc78cdf2fc727107d26c6cc6b6ffa5bbbe2e081', 'width': 216}, {'height': 199, 'url': 'https://preview.redd.it/1oovs3vgvege1.jpeg?width=320&crop=smart&auto=webp&s=3d0c2733ee827440a413a172729b8dbaebd940b5', 'width': 320}, {'height': 398, 'url': 'https://preview.redd.it/1oovs3vgvege1.jpeg?width=640&crop=smart&auto=webp&s=acc945b69ba442d4e66865f5e83ab96ac9b83b7b', 'width': 640}], 'source': {'height': 530, 'url': 'https://preview.redd.it/1oovs3vgvege1.jpeg?auto=webp&s=2d0ffbbd37130d425d6f9630b7dd793068b271bd', 'width': 852}, 'variants': {}}]} |
|||
I asked Deepseek r1 why so many models have "strawberry" problem. | 1 | [removed] | 2025-01-31T23:11:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ies8fm/i_asked_deepseek_r1_why_so_many_models_have/ | Maximum-Ad-1070 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ies8fm | false | null | t3_1ies8fm | /r/LocalLLaMA/comments/1ies8fm/i_asked_deepseek_r1_why_so_many_models_have/ | false | false | self | 1 | null |
The new Mistral Small model is disappointing | 78 | I was super excited to see a brand new 24B model from Mistral but after actually using it for more than single-turn interaction... I just find it to be disappointing
In my experience with the model it has a really hard time taking into account any information that is not crammed down its throat. It easily gets off track or confused
For single-turn question -> response it's good. For conversation, or anything that requires paying attention to context, it shits the bed. I've quadruple-checked and I'm using the right prompt format and system prompt...
Bonus question:
Why is the rope theta value 100M? The model is not long context. I think this was a misstep in choosing the architecture
Am I alone on this? Have any of you gotten it to work properly on tasks that require intelligence and instruction following?
Cheers | 2025-01-31T23:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/1iesirf/the_new_mistral_small_model_is_disappointing/ | Master-Meal-77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iesirf | false | null | t3_1iesirf | /r/LocalLLaMA/comments/1iesirf/the_new_mistral_small_model_is_disappointing/ | false | false | self | 78 | null |
They said what in the email?! | 0 | A | 2025-01-31T23:29:12 | https://www.reddit.com/r/LocalLLaMA/comments/1iesm44/they_said_what_in_the_email/ | kannthu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iesm44 | false | null | t3_1iesm44 | /r/LocalLLaMA/comments/1iesm44/they_said_what_in_the_email/ | false | false | self | 0 | null |
LLM feasibility of running on virtual memory | 4 | I'm wondering what the feasibility of running a LLM like the full DeepSeek R1 670b parameter model is on a high read/write speed M.2 card (12400MB/s read/11800MB/s write) in virtual memory.
M.2 for speeds above : [https://www.newegg.com/gigabyte-2tb-aorus/p/N82E16820009046R?srsltid=AfmBOooLhBO8Lhd1C7Zxjv744-oc2TOogvR2QVbcsDjrmhvoVNIx\_5xY](https://www.newegg.com/gigabyte-2tb-aorus/p/N82E16820009046R?srsltid=AfmBOooLhBO8Lhd1C7Zxjv744-oc2TOogvR2QVbcsDjrmhvoVNIx_5xY)
Is it possible? Is it recommended? And if it isn't possible, what's the reason? I'm no expert, and it's a genuine question; I have no idea if it would work.
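One thing worth knowing before buying anything: llama.cpp memory-maps GGUF files by default, so weights are paged in from the SSD on demand without touching the OS swap settings at all, which is effectively the run-it-from-the-NVMe setup. A sketch of what that looks like with the Python bindings (the file name is a placeholder, and expect disk bandwidth to cap the speed hard):

from llama_cpp import Llama

# With mmap (the default) the model can be larger than RAM; cold weights are read
# from the NVMe as needed and the OS page cache keeps the hot ones resident.
llm = Llama(
    model_path="DeepSeek-R1-Q2_K-00001-of-00005.gguf",  # placeholder shard name
    n_ctx=2048,
    use_mmap=True,
    use_mlock=False,    # do not pin pages, so the OS can evict cold data
)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])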
Thanks in advance | 2025-01-31T23:45:20 | https://www.reddit.com/r/LocalLLaMA/comments/1iesypx/llm_feasibility_of_running_on_virtual_memory/ | mgalbraith81 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iesypx | false | null | t3_1iesypx | /r/LocalLLaMA/comments/1iesypx/llm_feasibility_of_running_on_virtual_memory/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '6F01iO1tpNIoN4m6EEydz8Rktk4INIt8yk8aEAmZYQ8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/eXrpNWocKzcGhCdhtzRhngLstAshF789Joakmqzrhpc.jpg?width=108&crop=smart&auto=webp&s=98ab888ff0f349157f125c3e6fbe1e7401c9ade0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/eXrpNWocKzcGhCdhtzRhngLstAshF789Joakmqzrhpc.jpg?width=216&crop=smart&auto=webp&s=407a90597fda280916ffeed3f4bcd3bc58d02a01', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/eXrpNWocKzcGhCdhtzRhngLstAshF789Joakmqzrhpc.jpg?width=320&crop=smart&auto=webp&s=d765de506f7ace293684431ce2f1119b649bf4af', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/eXrpNWocKzcGhCdhtzRhngLstAshF789Joakmqzrhpc.jpg?width=640&crop=smart&auto=webp&s=8319056b04d82fa6cb522c8799d6e1f80d8691d8', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/eXrpNWocKzcGhCdhtzRhngLstAshF789Joakmqzrhpc.jpg?auto=webp&s=d1d33f352a22069a6b2f2dcf67a12c196835c433', 'width': 640}, 'variants': {}}]} |
Now shows the chain of thought | 1 | [removed] | 2025-01-31T23:45:31 | https://www.reddit.com/r/LocalLLaMA/comments/1iesyv8/now_shows_the_chain_of_thought/ | JumpyAbies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iesyv8 | false | null | t3_1iesyv8 | /r/LocalLLaMA/comments/1iesyv8/now_shows_the_chain_of_thought/ | false | false | 1 | null |
|
cloud hardware make more sense economically? | 1 | [removed] | 2025-02-01T00:05:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ieteo2/cloud_hardware_make_more_sense_economically/ | Pandalovebeer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieteo2 | false | null | t3_1ieteo2 | /r/LocalLLaMA/comments/1ieteo2/cloud_hardware_make_more_sense_economically/ | false | false | self | 1 | null |
Deepseek R1 671b Running and Testing on a $2000 Local AI Server | 9 | 2025-02-01T00:07:23 | https://youtu.be/Tq_cmN4j2yY | bi4key | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1ietg0s | false | {'oembed': {'author_name': 'Digital Spaceport', 'author_url': 'https://www.youtube.com/@DigitalSpaceport', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Tq_cmN4j2yY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Deepseek R1 671b Running and Testing on a $2000 Local AI Server"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Tq_cmN4j2yY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Deepseek R1 671b Running and Testing on a $2000 Local AI Server', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ietg0s | /r/LocalLLaMA/comments/1ietg0s/deepseek_r1_671b_running_and_testing_on_a_2000/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'Eg489_47u5FWDAUR4-eyo4OOs9qaLavaZZvIViEvUx0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wiRV_Ddh77ivzN6k6PXFVB0XXy_spTX4M8v2cdX_nu0.jpg?width=108&crop=smart&auto=webp&s=019191f77c309470ac9762bf9e1dfeb3fd526ef0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/wiRV_Ddh77ivzN6k6PXFVB0XXy_spTX4M8v2cdX_nu0.jpg?width=216&crop=smart&auto=webp&s=ce017a517216aa8adad99b9a14614e4af34b089f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/wiRV_Ddh77ivzN6k6PXFVB0XXy_spTX4M8v2cdX_nu0.jpg?width=320&crop=smart&auto=webp&s=72a936baa9bf08b6f3716c8853325142bae68091', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/wiRV_Ddh77ivzN6k6PXFVB0XXy_spTX4M8v2cdX_nu0.jpg?auto=webp&s=140f711d6bb974bf506fd7d8e3e33c36db10142f', 'width': 480}, 'variants': {}}]} |
||
Can someone please explain to me why it's risky to run DeepSeek locally. I mean, if I isolate it (on its own Docker network) there is no way it gets out, right? | 0 | Mostly what the title says. I feel like people have been pointing out that we don't really have a good understanding of this model's origins; however, I generally trust anything open source. For what it's worth, I wouldn't even know where to start if I was going to validate its safety based on its source. Is it best to spin up an isolated llama.cpp instance if I want to play with DeepSeek, just to be on the safe side? | 2025-02-01T00:11:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ietixy/can_someone_please_explain_to_me_why_its_risky_to/ | EvolveOrDie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ietixy | false | null | t3_1ietixy | /r/LocalLLaMA/comments/1ietixy/can_someone_please_explain_to_me_why_its_risky_to/ | false | false | self | 0 | null
Deepseek bitnet | 104 | 2025-02-01T00:25:36 | Thistleknot | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iettv1 | false | null | t3_1iettv1 | /r/LocalLLaMA/comments/1iettv1/deepseek_bitnet/ | false | false | 104 | {'enabled': True, 'images': [{'id': 'QICZUreJDa166FQh04uTU8XlRJu7Lzn1Jrdfc-Y-n00', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/cm74ybjy8fge1.jpeg?width=108&crop=smart&auto=webp&s=ef163f936cb530537d496cb2362f24b2c7f3349f', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/cm74ybjy8fge1.jpeg?width=216&crop=smart&auto=webp&s=bc6c8620435bc227eb616717a57550dbeef78e13', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/cm74ybjy8fge1.jpeg?width=320&crop=smart&auto=webp&s=b5f487b4c0c89b15a52940c874bbbf0b73e917b4', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/cm74ybjy8fge1.jpeg?width=640&crop=smart&auto=webp&s=cf896e28b8d16473f9f4db98d30877291e849edd', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/cm74ybjy8fge1.jpeg?width=960&crop=smart&auto=webp&s=ba4c21a884976f174e819bb282d19d7c9cc2803e', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/cm74ybjy8fge1.jpeg?width=1080&crop=smart&auto=webp&s=770607525d0e3a5941ed45ab4bd6cc9b5a01fae3', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/cm74ybjy8fge1.jpeg?auto=webp&s=8a7e4107363cd563ede599277684156e23b18b48', 'width': 1080}, 'variants': {}}]} |
|||
F5-TTS fine tuneed model Vs XTTS V2 for non english/chinese languages, like spanish, portuguese, french - which one is the best? | 6 | F5-TTS I was only training in English and Portuguese. There are finetunes in other languages. But, since the basic model has only 2 languages, is it worse than XTTSV2 for Romance languages? | 2025-02-01T00:49:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ieubwx/f5tts_fine_tuneed_model_vs_xtts_v2_for_non/ | More_Bid_2197 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ieubwx | false | null | t3_1ieubwx | /r/LocalLLaMA/comments/1ieubwx/f5tts_fine_tuneed_model_vs_xtts_v2_for_non/ | false | false | self | 6 | null |
My PC 10 seconds after I typed “ollama run deepseek-r1:671b”: | 1,169 | 2025-02-01T01:11:25 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ieurv8 | false | null | t3_1ieurv8 | /r/LocalLLaMA/comments/1ieurv8/my_pc_10_seconds_after_i_typed_ollama_run/ | false | false | 1,169 | {'enabled': True, 'images': [{'id': '2PIjI6tgb7x9DbgEp2pqr96RHGUqXFFm-ypvqXD_yQ4', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/jixqkaabhfge1.gif?width=108&crop=smart&format=png8&s=c556a6f1caa7a86822be826548879d39da5486b8', 'width': 108}, {'height': 210, 'url': 'https://preview.redd.it/jixqkaabhfge1.gif?width=216&crop=smart&format=png8&s=999a170e4fe30aae9a3b5bec45a15c0269c3000b', 'width': 216}], 'source': {'height': 214, 'url': 'https://preview.redd.it/jixqkaabhfge1.gif?format=png8&s=8dd05c461c3efdff043b4d88cc5fc33c64cdb408', 'width': 220}, 'variants': {'gif': {'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/jixqkaabhfge1.gif?width=108&crop=smart&s=a3ef20ab61e1321790229cb2a775c77da244db3b', 'width': 108}, {'height': 210, 'url': 'https://preview.redd.it/jixqkaabhfge1.gif?width=216&crop=smart&s=c67a878b6f732544b4693cf47d6dc14a8220e551', 'width': 216}], 'source': {'height': 214, 'url': 'https://preview.redd.it/jixqkaabhfge1.gif?s=b4d0645fe681102140fc4f0416269369dbba361f', 'width': 220}}, 'mp4': {'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/jixqkaabhfge1.gif?width=108&format=mp4&s=0769c990b2bb4fed50d1be130e6cd6c2b8efdac4', 'width': 108}, {'height': 210, 'url': 'https://preview.redd.it/jixqkaabhfge1.gif?width=216&format=mp4&s=cd2cea5915afe4e854e79c1f82f9ae8c5ccdbf0b', 'width': 216}], 'source': {'height': 214, 'url': 'https://preview.redd.it/jixqkaabhfge1.gif?format=mp4&s=2d2d7afc955f99b9f08351c78e0bbda5fa0f1c97', 'width': 220}}}}]} |
|||
I'm trying to run DeepSeek 7B but it's using CPU, not GPU | 1 | I just got DeepSeek 7B and it runs, but it's really slow compared to the smaller one and uses a massive amount of CPU. Is there a way to offload it to my GPU instead? | 2025-02-01T01:29:50 | https://www.reddit.com/r/LocalLLaMA/comments/1iev4wp/im_trying_to_run_deep_seek_7b_but_its_using_cpu/ | Danie_nooboficial | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iev4wp | false | null | t3_1iev4wp | /r/LocalLLaMA/comments/1iev4wp/im_trying_to_run_deep_seek_7b_but_its_using_cpu/ | false | false | self | 1 | null
Is it possible to run some quantized model of deepseek r1 32 b or deepseek r1 16b on a cheap hardware like - i5 rtx 2050 4gb vram 16 ram | 1 | Content | 2025-02-01T02:00:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ievqa5/is_it_possible_to_run_some_quantized_model_of/ | bhagwano-ka-bhagwan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ievqa5 | false | null | t3_1ievqa5 | /r/LocalLLaMA/comments/1ievqa5/is_it_possible_to_run_some_quantized_model_of/ | false | false | self | 1 | null |
Is openai going open source because of Deepseek? | 0 | 2025-02-01T02:01:19 | bruhlmaocmonbro | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ievqqp | false | null | t3_1ievqqp | /r/LocalLLaMA/comments/1ievqqp/is_openai_going_open_source_because_of_deepseek/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'KbjeTPPQyXVSJtXnc7E5AywGRxREyc9s6kd4a0r9XNo', 'resolutions': [{'height': 173, 'url': 'https://preview.redd.it/pkri2xl6qfge1.jpeg?width=108&crop=smart&auto=webp&s=895e5693b0b31e4bbbc7d53678317cb83214c34b', 'width': 108}, {'height': 347, 'url': 'https://preview.redd.it/pkri2xl6qfge1.jpeg?width=216&crop=smart&auto=webp&s=a59472559ba88cb46ac540737f17bc98593d02ad', 'width': 216}, {'height': 514, 'url': 'https://preview.redd.it/pkri2xl6qfge1.jpeg?width=320&crop=smart&auto=webp&s=6df6824e85adc00cec3fbbfa8ddeed4373152c59', 'width': 320}, {'height': 1029, 'url': 'https://preview.redd.it/pkri2xl6qfge1.jpeg?width=640&crop=smart&auto=webp&s=622e1f9acb0f563a8202d6eb51f10c38d77e9bd9', 'width': 640}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/pkri2xl6qfge1.jpeg?auto=webp&s=bea24dd8c50ca4fc19705bae4153db0dfc337109', 'width': 746}, 'variants': {}}]} |
|||
Help me choose a laptop for local experiments | 1 | [removed] | 2025-02-01T02:33:04 | https://www.reddit.com/r/LocalLLaMA/comments/1iewcbj/help_me_choose_a_laptop_for_local_experiments/ | World_of_Reddit_21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iewcbj | false | null | t3_1iewcbj | /r/LocalLLaMA/comments/1iewcbj/help_me_choose_a_laptop_for_local_experiments/ | false | false | self | 1 | null |