title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns] | url
stringlengths 0
780
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns] | gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Deepseek guardrails | 0 | Currently, although deepseek refuses to talk about topics such as tiananmen square, it is really easy to bypass the guardrails, ex. in the images, and make it criticize chinese governance just by asking questions. So, I was wondering instead of the current methods of censorship, why didn't they use something else that would totally remove answers such as the ones in the images. Like wouldn't it be possible to use RL to make the model either not respond to such questions or just make an untruthful answer? Is it just something that the developers didn't want to do? | 2025-02-03T14:09:36 | https://www.reddit.com/gallery/1igpw8q | andrewFTW8 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1igpw8q | false | null | t3_1igpw8q | /r/LocalLLaMA/comments/1igpw8q/deepseek_guardrails/ | false | false | 0 | null |
|
Paradigm shift? | 721 | 2025-02-03T14:10:33 | RetiredApostle | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1igpwzl | false | null | t3_1igpwzl | /r/LocalLLaMA/comments/1igpwzl/paradigm_shift/ | false | false | 721 | {'enabled': True, 'images': [{'id': '1v0IVF8rItGVoIjea4MXGLng9RVJKwAM-deXrpJJA_s', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/gre7z74ylxge1.jpeg?width=108&crop=smart&auto=webp&s=9b9a75910114329f5d0a447a86db9158b03c325e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/gre7z74ylxge1.jpeg?width=216&crop=smart&auto=webp&s=02e0823ba04d960f4a922ac2a693ec9e9f67608a', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/gre7z74ylxge1.jpeg?width=320&crop=smart&auto=webp&s=5d2aa3f136e7c56e81f46446c40cb48c9a4d1a47', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/gre7z74ylxge1.jpeg?width=640&crop=smart&auto=webp&s=9a0d8456581d7b4608dca287eb41e0f6185e191a', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/gre7z74ylxge1.jpeg?width=960&crop=smart&auto=webp&s=eed16396c27cc221316cc6d89f7237ebee88e2b5', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/gre7z74ylxge1.jpeg?auto=webp&s=29b71e643a04412ba66f428b93d7fc10030f9e71', 'width': 1024}, 'variants': {}}]} |
|||
Mistral Small 24B is an (E)RP God on 8GB of VRAM | 1 | [removed] | 2025-02-03T14:23:33 | https://www.reddit.com/r/LocalLLaMA/comments/1igq77s/mistral_small_24b_is_an_erp_god_on_8gb_of_vram/ | RandomGuyNumber28501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igq77s | false | null | t3_1igq77s | /r/LocalLLaMA/comments/1igq77s/mistral_small_24b_is_an_erp_god_on_8gb_of_vram/ | false | false | nsfw | 1 | {'enabled': False, 'images': [{'id': 'fJcJu7S1jWPsIijhxC6Qf-PDht40NQSJSF_CIJVpSDg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=108&crop=smart&auto=webp&s=556f02c5062196b831727d50479b4dedcba42465', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=216&crop=smart&auto=webp&s=915ad696f5f87cf150f0a34cfaab22e492725ead', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=320&crop=smart&auto=webp&s=c7bd0c3afae8e858eb9c166b192564fd5e9ab698', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=640&crop=smart&auto=webp&s=506e8acadec458fdbc5263aab62e7fe23bff3e73', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=960&crop=smart&auto=webp&s=92ba6bb4013785ea43d153da8d92240198c98889', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=1080&crop=smart&auto=webp&s=300841458fb4f5602c0a50ea57a648bd2430d112', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?auto=webp&s=10ce7423da868f93e58832e74619ce454c74d486', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=76fa2d806e3ab1be67367d801447de3d6ae43421', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=6c24a927663d75bff22e5e11c32035812126dfd8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=c509f06a49f371615108ca3cd6eef1423dedee4b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=9075ae62dd9d5d2b7d0503ef959047e2cb56fbca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=2248ace08e98169b5d00aef84bf2017c3ed112ad', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=1a3cfe09ad53992bcf331e4bfbeb2c36bd540f8b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?blur=40&format=pjpg&auto=webp&s=339c394355d45084bd8cae6349b63c84b4d25b63', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=76fa2d806e3ab1be67367d801447de3d6ae43421', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=6c24a927663d75bff22e5e11c32035812126dfd8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=c509f06a49f371615108ca3cd6eef1423dedee4b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=9075ae62dd9d5d2b7d0503ef959047e2cb56fbca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=2248ace08e98169b5d00aef84bf2017c3ed112ad', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=1a3cfe09ad53992bcf331e4bfbeb2c36bd540f8b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6gL76ZMrrBOOoaB0ogD-JcHTREOFmGNz2SY3hi5tJtE.jpg?blur=40&format=pjpg&auto=webp&s=339c394355d45084bd8cae6349b63c84b4d25b63', 'width': 1200}}}}]} |
Jokes aside, which is your favorite local tts model and why? | 482 | 2025-02-03T14:27:05 | iaseth | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1igq9ud | false | null | t3_1igq9ud | /r/LocalLLaMA/comments/1igq9ud/jokes_aside_which_is_your_favorite_local_tts/ | false | false | 482 | {'enabled': True, 'images': [{'id': 'hfmYwkmiucFdEG19Py9ZRapscaBwI-aS51gmCmS62uk', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ao50vnrwoxge1.png?width=108&crop=smart&auto=webp&s=4ecff601fa0fcc05b849136711f4c0249b4f0391', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ao50vnrwoxge1.png?width=216&crop=smart&auto=webp&s=51c45ee61b2cf17b57b0a7f988aeeaa810532628', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/ao50vnrwoxge1.png?width=320&crop=smart&auto=webp&s=298e2e3c1ed5241562b49a3f4402fcd6c67afae4', 'width': 320}], 'source': {'height': 225, 'url': 'https://preview.redd.it/ao50vnrwoxge1.png?auto=webp&s=497b076d4e17d525b0a96b6a5385dca7e828cd66', 'width': 399}, 'variants': {}}]} |
|||
Best fast and affordable model to determine whether a question needs web search | 1 | I am building a chatbot that focuses on forex trading. For most forex related questions, I will need to do a web search or call external API to get the data to answer. However, since the user can technically ask whatever they want, some questions can be so simple (like "how are you?", "tell me a joke", "what is docker?", etc..) that the LLM can answer by itself without the need for additional information.
I am looking for a LLM that can do that 1st task well: given a question and the chat history, figure out whether a web search is needed to answer the questions correctly.
I have tried many LLMs (both open and closed: deepseek, gemini, phi, gemma, mistral ai) and openai models are the best ones hand-down. gpt-4o-mini does a very good job but still fails at some difficult cases. However, gpt-4o pretty much does a perfect job for all the cases that I test. I probably will go with gpt-4o given its performance. However, there are 2 main drawbacks:
* speed: it is not slow, but not quite fast. Ideally i would like something faster (same speed with gpt-4o-mini would be good enough)
* cost: it is expensive.
I wonder whether anyone knows any LLMs that can do this job well, and also fast + affordable? I feel like this is a pretty common problem given the popularity of chatbots. Thanks in advance. | 2025-02-03T14:28:40 | https://www.reddit.com/r/LocalLLaMA/comments/1igqb5m/best_fast_and_affordable_model_to_determine/ | Temporary_Cap_2855 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igqb5m | false | null | t3_1igqb5m | /r/LocalLLaMA/comments/1igqb5m/best_fast_and_affordable_model_to_determine/ | false | false | self | 1 | null |
Ultra noob - Hardware Requirements for Deploying & Fine-Tuning Deepseek’s Distilled Models for a Small Team | 1 | [removed] | 2025-02-03T14:33:35 | https://www.reddit.com/r/LocalLLaMA/comments/1igqf07/ultra_noob_hardware_requirements_for_deploying/ | IAmTheDangerSkylar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igqf07 | false | null | t3_1igqf07 | /r/LocalLLaMA/comments/1igqf07/ultra_noob_hardware_requirements_for_deploying/ | false | false | self | 1 | null |
DeepSeek R1: How to turn €200 into €20,000 | 1 | [removed] | 2025-02-03T14:36:34 | https://www.reddit.com/r/LocalLLaMA/comments/1igqh8p/deepseek_r1_how_to_turn_200_into_20000/ | PerformanceRound7913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igqh8p | false | null | t3_1igqh8p | /r/LocalLLaMA/comments/1igqh8p/deepseek_r1_how_to_turn_200_into_20000/ | false | false | self | 1 | null |
Does DeepSeek use llama? If not why is DeepSeek discussed here ? | 1 | [removed] | 2025-02-03T14:43:03 | https://www.reddit.com/r/LocalLLaMA/comments/1igqm6a/does_deepseek_use_llama_if_not_why_is_deepseek/ | Mcluckin123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igqm6a | false | null | t3_1igqm6a | /r/LocalLLaMA/comments/1igqm6a/does_deepseek_use_llama_if_not_why_is_deepseek/ | false | false | self | 1 | null |
Pay $2,760/year to OpenAI or Buy Mac Studio | 1 | [removed] | 2025-02-03T14:44:23 | https://www.reddit.com/r/LocalLLaMA/comments/1igqn6z/pay_2760year_to_openai_or_buy_mac_studio/ | DarkOfLord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igqn6z | false | null | t3_1igqn6z | /r/LocalLLaMA/comments/1igqn6z/pay_2760year_to_openai_or_buy_mac_studio/ | false | false | self | 1 | null |
LLM Based Mapping | 13 | Hi all, just wanted to share the open-source repo of a project I shared in the past.
Godview is a map that works entirely off of large language models. You can search for places or ask questions and it will plot the results and give you addresses and websites (when applicable). It also has a "discover" page where you can click on any point on the map and get the model to tell you about it. Very simple implementation but it's gotten a lot of use so figured I'd share it. It runs locally if you have Ollama running. Just cycle through the models button until it hits "Local" instead of the cloud providers.
Link to repo: [https://github.com/space0blaster/godview](https://github.com/space0blaster/godview)
If you'd like to try out the free demo/cloud version: [https://godview.ai](https://godview.ai) | 2025-02-03T15:02:14 | https://www.reddit.com/r/LocalLLaMA/comments/1igr151/llm_based_mapping/ | ranoutofusernames__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igr151 | false | null | t3_1igr151 | /r/LocalLLaMA/comments/1igr151/llm_based_mapping/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': '1Nh5v4a93mrtmREPZfGbtdMrCxWLHO4M2R9mcBHVD-M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kW6o5o_EQ5ME2BYn5-VJBy4LN3JJljFNnlSS2HkEdz8.jpg?width=108&crop=smart&auto=webp&s=0caad41263bf7306a46aa336b8e86f5f3e87869e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kW6o5o_EQ5ME2BYn5-VJBy4LN3JJljFNnlSS2HkEdz8.jpg?width=216&crop=smart&auto=webp&s=644a7ca90f81487dc6cd5999b2c87d8b75a3605d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kW6o5o_EQ5ME2BYn5-VJBy4LN3JJljFNnlSS2HkEdz8.jpg?width=320&crop=smart&auto=webp&s=47c274bb05743d6206468653e981d695e6da1c4c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kW6o5o_EQ5ME2BYn5-VJBy4LN3JJljFNnlSS2HkEdz8.jpg?width=640&crop=smart&auto=webp&s=750f5a2616f178e5a54f3a6419010dec711eff9a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kW6o5o_EQ5ME2BYn5-VJBy4LN3JJljFNnlSS2HkEdz8.jpg?width=960&crop=smart&auto=webp&s=f02dad595bdbec92298be1fa164918934899b41e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kW6o5o_EQ5ME2BYn5-VJBy4LN3JJljFNnlSS2HkEdz8.jpg?width=1080&crop=smart&auto=webp&s=f26398cddfa7f0433b6ff62c514e92a7cff1db7c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kW6o5o_EQ5ME2BYn5-VJBy4LN3JJljFNnlSS2HkEdz8.jpg?auto=webp&s=76e33fa6b00d442d1b0fe20e51f305f8ef441f62', 'width': 1200}, 'variants': {}}]} |
Whats the website that shows llm models released over time and if they were open or closed? | 15 | I saw a website here but I can't find it again. It basically showed all the models (including audio, tts, multimodal, etc) and what year they came out, and if they had open weights (with a link) and a link to its paper. I'm having a hard time finding it again if anyone knows it off the top of their head or has a better link. | 2025-02-03T15:05:10 | https://www.reddit.com/r/LocalLLaMA/comments/1igr3j8/whats_the_website_that_shows_llm_models_released/ | United-Rush4073 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igr3j8 | false | null | t3_1igr3j8 | /r/LocalLLaMA/comments/1igr3j8/whats_the_website_that_shows_llm_models_released/ | false | false | self | 15 | null |
Training deepseek r1 to trade stocks | 82 | Like everyone else on the internet, I was really fascinated by deepseek's abilities, but the thing that got me the most was how they trained deepseek-r1-zero. Essentially, it just seemed to boil down to: "feed the machine an objective reward function, and train it a whole bunch, letting it think a variable amount". So I thought: hey, you can use stock prices going up and down as an objective reward function kinda?
Anyways, so I used huggingface's open-r1 to write a version of deepseek that aims to maximize short-term stock prediction, by acting as a "stock analyst" of sort, offering buy and sell recommendations based on some signals I scraped for each company. All the code and colab and discussion is at [2084: Deepstock - can you train deepseek to do stock trading?](https://2084.substack.com/p/2084-deepstock-can-you-train-deepseek)
Training it rn over the next week, my goal is to get it to do better than random, altho getting it to that point is probably going to take a ton of compute. (Anyone got any spare?)
Thoughts on how I should expand this? | 2025-02-03T15:07:11 | https://www.reddit.com/r/LocalLLaMA/comments/1igr55c/training_deepseek_r1_to_trade_stocks/ | ExaminationNo8522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igr55c | false | null | t3_1igr55c | /r/LocalLLaMA/comments/1igr55c/training_deepseek_r1_to_trade_stocks/ | false | false | self | 82 | {'enabled': False, 'images': [{'id': 'uJv0oG-2tq6zlUre9SJ38c0Hvzx3sLzHVPyUetlTYFc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/x74taz5CpPpc5z1_9o5RcCeeaJmnl5lOOIowbJqyv64.jpg?width=108&crop=smart&auto=webp&s=00a01b9dc1b2562e173e880363edbf854d7269a6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/x74taz5CpPpc5z1_9o5RcCeeaJmnl5lOOIowbJqyv64.jpg?width=216&crop=smart&auto=webp&s=01837f16b1d222ba2444716e316da8c52122e9dc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/x74taz5CpPpc5z1_9o5RcCeeaJmnl5lOOIowbJqyv64.jpg?width=320&crop=smart&auto=webp&s=4e7e4209ae9869be79eb09ee6ebdec0eb22103e9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/x74taz5CpPpc5z1_9o5RcCeeaJmnl5lOOIowbJqyv64.jpg?width=640&crop=smart&auto=webp&s=6ccde40a41b49b113615baf702bfe7f760bd0ef1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/x74taz5CpPpc5z1_9o5RcCeeaJmnl5lOOIowbJqyv64.jpg?width=960&crop=smart&auto=webp&s=3dbe7bb370eba12e00ca0a39c4973e2ddf4f0a74', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/x74taz5CpPpc5z1_9o5RcCeeaJmnl5lOOIowbJqyv64.jpg?width=1080&crop=smart&auto=webp&s=ac27a69f63438515da49feabe87d578a54df7940', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/x74taz5CpPpc5z1_9o5RcCeeaJmnl5lOOIowbJqyv64.jpg?auto=webp&s=48fb7418f6b54b2dabc53bea9dd28bb75a0cf09e', 'width': 1200}, 'variants': {}}]} |
R1 Hardware Question | 1 | Looking at buying a Dell R7525 server motherboard with dual EPYC sockets and 32 DIMM slots. Would probably run 2x EPYC 7532 and 256GB of 2133MT/s DDR4.
I’m a normie whose only experience is building consumer PCs and running LM Studio. What kind of performance could I reasonably suspect (t/s) from the unsloth R1 2.51b quant, assuming I’m running on Windows without heavy optimizations? How much context window can I expect, and would doubling the RAM to 512GB improve this?
Thanks for any advice. | 2025-02-03T15:50:54 | https://www.reddit.com/r/LocalLLaMA/comments/1igs5hb/r1_hardware_question/ | jwil00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igs5hb | false | null | t3_1igs5hb | /r/LocalLLaMA/comments/1igs5hb/r1_hardware_question/ | false | false | self | 1 | null |
Best private setup for 10.000$ | 1 | So i am at a point where i liquidated some assets because of recent uncertaincies in the world financial markets (not important if you agree or not) and reinvested it with other ways of asset allocations. Now i have the sum of about 10-12k in my hands to build a new private Rig for LLM and Comfy as well as programming. I would like to gather some info on how you guys would envelope on this instead of just rushing into the market and buy "stuff".
Options would be: Dual A6000 for about 7700$
one 6000 ada for 8400$.
four to six 4060ti with 16gb each
i think about threadripper 7000 pro with 1tb of ram. liquid cooling.
about 1200-2000 watts psu.
what would you do / change / invest in right now ?
happy to see many constructive answers !! | 2025-02-03T16:02:47 | https://www.reddit.com/r/LocalLLaMA/comments/1igsfx2/best_private_setup_for_10000/ | getmevodka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igsfx2 | false | null | t3_1igsfx2 | /r/LocalLLaMA/comments/1igsfx2/best_private_setup_for_10000/ | false | false | self | 1 | null |
Dev-first Deepseek R1 using docker. | 1 | [removed] | 2025-02-03T16:09:08 | https://www.reddit.com/r/LocalLLaMA/comments/1igslq0/devfirst_deepseek_r1_using_docker/ | Wise-Difference6156 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igslq0 | false | null | t3_1igslq0 | /r/LocalLLaMA/comments/1igslq0/devfirst_deepseek_r1_using_docker/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/frnQ41VPYiTQzQvSxpboDS4V6_ZbWvSeYbq3XEofAC8.jpg?auto=webp&s=baf0cb47bfa3ea9fa635ba06539ac964f45f5e96', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/frnQ41VPYiTQzQvSxpboDS4V6_ZbWvSeYbq3XEofAC8.jpg?width=108&crop=smart&auto=webp&s=c3109eec37f67f19d405bc3f7e46b2adf656a4b3', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/frnQ41VPYiTQzQvSxpboDS4V6_ZbWvSeYbq3XEofAC8.jpg?width=216&crop=smart&auto=webp&s=5bf17077f302ab056843e255de265f6277bf2922', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/frnQ41VPYiTQzQvSxpboDS4V6_ZbWvSeYbq3XEofAC8.jpg?width=320&crop=smart&auto=webp&s=b02e1db821101ae72bf57faf708f757ff217417e', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/frnQ41VPYiTQzQvSxpboDS4V6_ZbWvSeYbq3XEofAC8.jpg?width=640&crop=smart&auto=webp&s=7548b1e011f7e670df5d788f20c344748ede24c0', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/frnQ41VPYiTQzQvSxpboDS4V6_ZbWvSeYbq3XEofAC8.jpg?width=960&crop=smart&auto=webp&s=18366ec0ceddb04088d00253d05c17b6c6d1c228', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/frnQ41VPYiTQzQvSxpboDS4V6_ZbWvSeYbq3XEofAC8.jpg?width=1080&crop=smart&auto=webp&s=6596fc20fa9ff0c7465733165819e94b1f664b3f', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'mvKfbG6QDnTr5Tc2ZQ3YqToc3-45hL36ylipWebFn3A'}], 'enabled': False} |
How much RAM is enough to use 7B models locally on mac? | 1 | [removed] | 2025-02-03T16:12:13 | https://www.reddit.com/r/LocalLLaMA/comments/1igsoff/how_much_ram_is_enough_to_use_7b_models_locally/ | Fabulous_Can_2215 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igsoff | false | null | t3_1igsoff | /r/LocalLLaMA/comments/1igsoff/how_much_ram_is_enough_to_use_7b_models_locally/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?auto=webp&s=92948afd26cc637bb25c79223a1b99b3ecbbbfa2', 'width': 2401, 'height': 1260}, 'resolutions': [{'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=108&crop=smart&auto=webp&s=b6c70517bb80bca66bf94d99af93ec23982e2986', 'width': 108, 'height': 56}, {'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=216&crop=smart&auto=webp&s=146011169cd4033ebcd4b883efc62f0bd345d74b', 'width': 216, 'height': 113}, {'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=320&crop=smart&auto=webp&s=7a560fe31ff4e8b423a9029c052df232e0365572', 'width': 320, 'height': 167}, {'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=640&crop=smart&auto=webp&s=ea9ff85c4782247e303164d9d75b4071d789f397', 'width': 640, 'height': 335}, {'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=960&crop=smart&auto=webp&s=81aa9753e911761e0c56b3b897ba0f44cafff21d', 'width': 960, 'height': 503}, {'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=1080&crop=smart&auto=webp&s=a67fd0983e228aa2fa0a2ba466c071793fe21afc', 'width': 1080, 'height': 566}], 'variants': {}, 'id': 'JoLAbcgPAn_D7ExuVvyaNJpSY81e3Jca27FTj1G8-xQ'}], 'enabled': False} |
What are you using local LLMs to build? | 1 | [removed] | 2025-02-03T16:22:11 | https://www.reddit.com/r/LocalLLaMA/comments/1igsx0a/what_are_you_using_local_llms_to_build/ | Neither-Pear-1234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igsx0a | false | null | t3_1igsx0a | /r/LocalLLaMA/comments/1igsx0a/what_are_you_using_local_llms_to_build/ | false | false | self | 1 | null |
What small model to use for note taking | 1 | Hello I have been experimenting with llms recently.
Now I'm looking for a model fhat can help with note taking / text extraction / suggestions that I can prolly use with obsidian.
My setup
Core Ultra 5 125H
Using Arc Graphics (ipex-llm)
32gb ram
Models I use now:
Qwen2.5-coder-7b on vscode continue extension
Deepseek-r1-8b + anythingllm | 2025-02-03T16:32:16 | https://www.reddit.com/r/LocalLLaMA/comments/1igt5yq/what_small_model_to_use_for_note_taking/ | gelomon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igt5yq | false | null | t3_1igt5yq | /r/LocalLLaMA/comments/1igt5yq/what_small_model_to_use_for_note_taking/ | false | false | self | 1 | null |
GitHub - openorch/openorch: A language-agnostic, distributed backend platform for AI, microservices, and beyond. | 1 | 2025-02-03T16:41:39 | https://github.com/openorch/openorch | crufter | github.com | 1970-01-01T00:00:00 | 0 | {} | 1igte22 | false | null | t3_1igte22 | /r/LocalLLaMA/comments/1igte22/github_openorchopenorch_a_languageagnostic/ | false | false | default | 1 | null |
|
OpenAI Deep Research is working hard on my report, ETA 1-2 weeks | 1 | 2025-02-03T16:43:33 | https://www.reddit.com/r/LocalLLaMA/comments/1igtfqv/openai_deep_research_is_working_hard_on_my_report/ | PerformanceRound7913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igtfqv | false | null | t3_1igtfqv | /r/LocalLLaMA/comments/1igtfqv/openai_deep_research_is_working_hard_on_my_report/ | false | false | 1 | null |
||
Look at me I still exist you know ! | 1 | When you have to use ads because DeepSeek-R1 is bankrupting you. | 2025-02-03T16:47:21 | prumf | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1igtj17 | false | null | t3_1igtj17 | /r/LocalLLaMA/comments/1igtj17/look_at_me_i_still_exist_you_know/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/eilyf564eyge1.jpeg?auto=webp&s=8ab30e768347ddf9d716fd589634a787f253185b', 'width': 1290, 'height': 1318}, 'resolutions': [{'url': 'https://preview.redd.it/eilyf564eyge1.jpeg?width=108&crop=smart&auto=webp&s=d3e2acdef0ca12f2e8fa1748983fa715c56c2d1c', 'width': 108, 'height': 110}, {'url': 'https://preview.redd.it/eilyf564eyge1.jpeg?width=216&crop=smart&auto=webp&s=557638a998ab48bf44caf6299cc124e0fd7bb653', 'width': 216, 'height': 220}, {'url': 'https://preview.redd.it/eilyf564eyge1.jpeg?width=320&crop=smart&auto=webp&s=6bd9e1b49743c50bb139b549c96cdb00759d6d5d', 'width': 320, 'height': 326}, {'url': 'https://preview.redd.it/eilyf564eyge1.jpeg?width=640&crop=smart&auto=webp&s=f1fe49e465e7665c5325457d9ea69e89e44a737c', 'width': 640, 'height': 653}, {'url': 'https://preview.redd.it/eilyf564eyge1.jpeg?width=960&crop=smart&auto=webp&s=da50e0739f397d19b4fd80c8b620329d01a5bf3e', 'width': 960, 'height': 980}, {'url': 'https://preview.redd.it/eilyf564eyge1.jpeg?width=1080&crop=smart&auto=webp&s=7c2e2b55f2591d4f5d50770ae7baf96e24513a21', 'width': 1080, 'height': 1103}], 'variants': {}, 'id': '3Pnyg1CfjP4OEYe80HJ9Yuyj4-zaq_6kUUQBB9cBfuE'}], 'enabled': True} |
||
Any way to interrupt and respond to an LLM mid CoT? | 1 | Watching LLMs \`<think>\` has been interesting but it can be annoying when I see it is missing information or assumes the wrong information mid thought and then I have to run it all again. Is there a way to programmatically "pause" generation at a certain point and inject user input into the CoT with clarifications or to answer a question the LLM asked itself? | 2025-02-03T16:56:43 | https://www.reddit.com/r/LocalLLaMA/comments/1igtref/any_way_to_interrupt_and_respond_to_an_llm_mid_cot/ | fnordonk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igtref | false | null | t3_1igtref | /r/LocalLLaMA/comments/1igtref/any_way_to_interrupt_and_respond_to_an_llm_mid_cot/ | false | false | self | 1 | null |
Open Source AI Text to Speech - WHY No DEVELOPMENTS!? | 1 | [removed] | 2025-02-03T16:58:50 | https://www.reddit.com/r/LocalLLaMA/comments/1igtt9b/open_source_ai_text_to_speech_why_no_developments/ | EnderRod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igtt9b | false | null | t3_1igtt9b | /r/LocalLLaMA/comments/1igtt9b/open_source_ai_text_to_speech_why_no_developments/ | false | false | self | 1 | null |
Didn't realize R1 Distilled Qwen 32B was THIS good! | 1 | 2025-02-03T16:59:53 | https://x.com/abhinand58/status/1886457265827536998 | abhinand05 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1igtu78 | false | null | t3_1igtu78 | /r/LocalLLaMA/comments/1igtu78/didnt_realize_r1_distilled_qwen_32b_was_this_good/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/GV-a_-zXIxghFU0jFt3GyUmyQfjXE07LmDE4EkeZjBw.jpg?auto=webp&s=b39dd416de3b60366a7612031a9078db59fbba26', 'width': 959, 'height': 946}, 'resolutions': [{'url': 'https://external-preview.redd.it/GV-a_-zXIxghFU0jFt3GyUmyQfjXE07LmDE4EkeZjBw.jpg?width=108&crop=smart&auto=webp&s=744957f8004aa3f85990145796eca25e2dd134af', 'width': 108, 'height': 106}, {'url': 'https://external-preview.redd.it/GV-a_-zXIxghFU0jFt3GyUmyQfjXE07LmDE4EkeZjBw.jpg?width=216&crop=smart&auto=webp&s=48d0818ef42606a423d63493a283c5f0d190837f', 'width': 216, 'height': 213}, {'url': 'https://external-preview.redd.it/GV-a_-zXIxghFU0jFt3GyUmyQfjXE07LmDE4EkeZjBw.jpg?width=320&crop=smart&auto=webp&s=39147956da30d63111d8336e225e7eeca355508c', 'width': 320, 'height': 315}, {'url': 'https://external-preview.redd.it/GV-a_-zXIxghFU0jFt3GyUmyQfjXE07LmDE4EkeZjBw.jpg?width=640&crop=smart&auto=webp&s=4e9710e10cd3042798897217b009355e6427fcb0', 'width': 640, 'height': 631}], 'variants': {}, 'id': 'KCEQ5O9NZZQ4JgVfAr_6yQ_tJWVQGGFTFyT3lF7TNq0'}], 'enabled': False} |
||
Open Source AI Text to Speech - WHY No DEVELOPMENTS!? | 1 | [removed] | 2025-02-03T17:03:47 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1igtxxa | false | null | t3_1igtxxa | /r/LocalLLaMA/comments/1igtxxa/open_source_ai_text_to_speech_why_no_developments/ | false | false | default | 1 | null |
||
Benchmarking ChatGPT, Qwen, and DeepSeek on Real-World AI Tasks | 1 | 2025-02-03T17:05:18 | https://decodebuzzing.medium.com/qbenchmarking-chatgpt-qwen-and-deepseek-on-real-world-ai-tasks-75b4d7040742 | DecodeBuzzingMedium | decodebuzzing.medium.com | 1970-01-01T00:00:00 | 0 | {} | 1igtzae | false | null | t3_1igtzae | /r/LocalLLaMA/comments/1igtzae/benchmarking_chatgpt_qwen_and_deepseek_on/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?auto=webp&s=46bf5a2e93ae0831b7500e8cc198a53ec487520c', 'width': 1200, 'height': 1800}, 'resolutions': [{'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=108&crop=smart&auto=webp&s=072aea72ce50c58d9fbe1ec2a0f7b310b12c0fee', 'width': 108, 'height': 162}, {'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=216&crop=smart&auto=webp&s=8fda70347e6503d87c26f74f38f438a1023f2aed', 'width': 216, 'height': 324}, {'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=320&crop=smart&auto=webp&s=8b097023add0c445502b2964b57176f88c079cf6', 'width': 320, 'height': 480}, {'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=640&crop=smart&auto=webp&s=1d74bdb45ba40606b07946e0edfeee9053905995', 'width': 640, 'height': 960}, {'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=960&crop=smart&auto=webp&s=cdcc228b93bd91a59785d139d4b189358593bc1b', 'width': 960, 'height': 1440}, {'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=1080&crop=smart&auto=webp&s=3486543ba651beef7e12adf1343b788395de82c2', 'width': 1080, 'height': 1620}], 'variants': {}, 'id': 'UZ_ZPPYzNvfPhSbtIOL2jeDw9toYxMg1Vp9v17EXvQI'}], 'enabled': False} |
||
How to turn off "deepseek" deep thinking. | 1 | [deleted] | 2025-02-03T17:08:05 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1igu1p2 | false | null | t3_1igu1p2 | /r/LocalLLaMA/comments/1igu1p2/how_to_turn_off_deepseek_deep_thinking/ | false | false | default | 1 | null |
||
Looking at the R1 thinking process is fun | 1 | [removed] | 2025-02-03T17:08:25 | https://www.reddit.com/r/LocalLLaMA/comments/1igu20r/looking_at_the_r1_thinking_process_is_fun/ | moosechowder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igu20r | false | null | t3_1igu20r | /r/LocalLLaMA/comments/1igu20r/looking_at_the_r1_thinking_process_is_fun/ | false | false | self | 1 | null |
Beyond LLMs? | 1 | https://open.substack.com/pub/plantyourflag/p/ai-upheaval-beyond-llms?r=56i3zq&utm_medium=ios | 2025-02-03T17:08:28 | https://www.reddit.com/r/LocalLLaMA/comments/1igu223/beyond_llms/ | AccountantDry2483 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igu223 | false | null | t3_1igu223 | /r/LocalLLaMA/comments/1igu223/beyond_llms/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/cMdYnZmjq8BDIKDpS_75VxgmMVW_HC_NcucV-2sNoU0.jpg?auto=webp&s=c4dd1b9aea63b13035f2f9f8519ed2ecb8735f1b', 'width': 1024, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/cMdYnZmjq8BDIKDpS_75VxgmMVW_HC_NcucV-2sNoU0.jpg?width=108&crop=smart&auto=webp&s=21e8f674bca0d323906e53b5593fac35e419d482', 'width': 108, 'height': 63}, {'url': 'https://external-preview.redd.it/cMdYnZmjq8BDIKDpS_75VxgmMVW_HC_NcucV-2sNoU0.jpg?width=216&crop=smart&auto=webp&s=87d99a776bd9f5ed9a994960b615dc6b1526e794', 'width': 216, 'height': 126}, {'url': 'https://external-preview.redd.it/cMdYnZmjq8BDIKDpS_75VxgmMVW_HC_NcucV-2sNoU0.jpg?width=320&crop=smart&auto=webp&s=7dd1957a9b04c27445ca70467796df4cb92b17d1', 'width': 320, 'height': 187}, {'url': 'https://external-preview.redd.it/cMdYnZmjq8BDIKDpS_75VxgmMVW_HC_NcucV-2sNoU0.jpg?width=640&crop=smart&auto=webp&s=8508d33567f06ee99dfdf1baaf8ac174d7ddb419', 'width': 640, 'height': 375}, {'url': 'https://external-preview.redd.it/cMdYnZmjq8BDIKDpS_75VxgmMVW_HC_NcucV-2sNoU0.jpg?width=960&crop=smart&auto=webp&s=9baa2f1ad2e361a62fe959753bc6e9b3ea08e03b', 'width': 960, 'height': 562}], 'variants': {}, 'id': 'OQIg3oLyMQQvmsg5ecCsII9LZAJgCZxoSn2_34lSnXI'}], 'enabled': False} |
Does Whisper.cpp Server Support Concurrent Processing? If Not, How to Achieve It? | 1 | [removed] | 2025-02-03T17:16:58 | https://www.reddit.com/r/LocalLLaMA/comments/1igu9f6/does_whispercpp_server_support_concurrent/ | Taha-155 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igu9f6 | false | null | t3_1igu9f6 | /r/LocalLLaMA/comments/1igu9f6/does_whispercpp_server_support_concurrent/ | false | false | self | 1 | null |
Benchmarking ChatGPT, Qwen, and DeepSeek on Real-World AI Tasks | 1 | [removed] | 2025-02-03T17:21:22 | https://www.reddit.com/r/LocalLLaMA/comments/1igudin/benchmarking_chatgpt_qwen_and_deepseek_on/ | DecodeBuzzingMedium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igudin | false | null | t3_1igudin | /r/LocalLLaMA/comments/1igudin/benchmarking_chatgpt_qwen_and_deepseek_on/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?auto=webp&s=46bf5a2e93ae0831b7500e8cc198a53ec487520c', 'width': 1200, 'height': 1800}, 'resolutions': [{'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=108&crop=smart&auto=webp&s=072aea72ce50c58d9fbe1ec2a0f7b310b12c0fee', 'width': 108, 'height': 162}, {'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=216&crop=smart&auto=webp&s=8fda70347e6503d87c26f74f38f438a1023f2aed', 'width': 216, 'height': 324}, {'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=320&crop=smart&auto=webp&s=8b097023add0c445502b2964b57176f88c079cf6', 'width': 320, 'height': 480}, {'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=640&crop=smart&auto=webp&s=1d74bdb45ba40606b07946e0edfeee9053905995', 'width': 640, 'height': 960}, {'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=960&crop=smart&auto=webp&s=cdcc228b93bd91a59785d139d4b189358593bc1b', 'width': 960, 'height': 1440}, {'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=1080&crop=smart&auto=webp&s=3486543ba651beef7e12adf1343b788395de82c2', 'width': 1080, 'height': 1620}], 'variants': {}, 'id': 'UZ_ZPPYzNvfPhSbtIOL2jeDw9toYxMg1Vp9v17EXvQI'}], 'enabled': False} |
|
Confusion Over HF TGI Reverting Back to Apache | 1 | [removed] | 2025-02-03T17:27:01 | https://www.reddit.com/r/LocalLLaMA/comments/1iguiqe/confusion_over_hf_tgi_reverting_back_to_apache/ | Regular_Sun_3073 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iguiqe | false | null | t3_1iguiqe | /r/LocalLLaMA/comments/1iguiqe/confusion_over_hf_tgi_reverting_back_to_apache/ | false | false | 1 | null |
|
Solving puzzles with structured outputs using o3-mini | 0 | 2025-02-03T17:28:50 | https://www.boundaryml.com/blog/o3-mini-function-calling | fluxwave | boundaryml.com | 1970-01-01T00:00:00 | 0 | {} | 1igukcm | false | null | t3_1igukcm | /r/LocalLLaMA/comments/1igukcm/solving_puzzles_with_structured_outputs_using/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'sCwYIzftDrnJYU_TOJN7nORjG4bnaxvoYP-yB0g9tPk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/THYMv8qeZly5dEgrvaWi1j3Qngo6BuUpzUYqqxjyck4.jpg?width=108&crop=smart&auto=webp&s=46ee1ed18ccb94f4f8d08af3ce912498f7df492b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/THYMv8qeZly5dEgrvaWi1j3Qngo6BuUpzUYqqxjyck4.jpg?width=216&crop=smart&auto=webp&s=7424bfcce64cc188c2eac93400e9e2d5ec14d737', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/THYMv8qeZly5dEgrvaWi1j3Qngo6BuUpzUYqqxjyck4.jpg?width=320&crop=smart&auto=webp&s=918cc3926a3beb3fa8fd6b9d2ac7c0647d8829f2', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/THYMv8qeZly5dEgrvaWi1j3Qngo6BuUpzUYqqxjyck4.jpg?width=640&crop=smart&auto=webp&s=175629f87847543b857bada6efbd5e0cc32fb31a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/THYMv8qeZly5dEgrvaWi1j3Qngo6BuUpzUYqqxjyck4.jpg?width=960&crop=smart&auto=webp&s=c403b155ead1cbf5cf26a281d88c7118c7f4f14a', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/THYMv8qeZly5dEgrvaWi1j3Qngo6BuUpzUYqqxjyck4.jpg?width=1080&crop=smart&auto=webp&s=cd4ebb568311a5725182a4b128ee9ee9667a9842', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/THYMv8qeZly5dEgrvaWi1j3Qngo6BuUpzUYqqxjyck4.jpg?auto=webp&s=0361c9327f431a5c0dcb84c4d895edec96162ad4', 'width': 1200}, 'variants': {}}]} |
||
Does the DeepSeek "search" button just not work anymore, for anyone? | 0 | Title. | 2025-02-03T17:29:40 | https://www.reddit.com/r/LocalLLaMA/comments/1igul54/does_the_deepseek_search_button_just_not_work/ | Prize_Self_6347 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igul54 | false | null | t3_1igul54 | /r/LocalLLaMA/comments/1igul54/does_the_deepseek_search_button_just_not_work/ | false | false | self | 0 | null |
I have a Mac Mini M2 with 8 GB RAM. So obviously, I don't have much options. Based on the limited options I have, which is the best LLM model I can run. (all rounder- efficient and speed) | 1 | [removed] | 2025-02-03T17:33:06 | https://www.reddit.com/r/LocalLLaMA/comments/1iguoau/i_have_a_mac_mini_m2_with_8_gb_ram_so_obviously_i/ | ShreyashStonieCrusts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iguoau | false | null | t3_1iguoau | /r/LocalLLaMA/comments/1iguoau/i_have_a_mac_mini_m2_with_8_gb_ram_so_obviously_i/ | false | false | self | 1 | null |
Noob question alert! Why do we need vLLM? | 1 | [removed] | 2025-02-03T17:46:09 | https://www.reddit.com/r/LocalLLaMA/comments/1iguzz3/noob_question_alert_why_do_we_need_vllm/ | OkBlueberry2841 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iguzz3 | false | null | t3_1iguzz3 | /r/LocalLLaMA/comments/1iguzz3/noob_question_alert_why_do_we_need_vllm/ | false | false | self | 1 | null |
Can I run LLMs locally with my hardware? | 0 | * RTX4080 GPU
* 64GB DDR5 3000MHz RAM
* Ryzen 7 9800X3D CPU
* ~2TB space on an NVMe SSD
Given these specs should I be able to run an LLM locally and effectively?
I have no experience with this yet so I'm looking for some beginner guides and tutorials, are there any you'd recommend? Let's say I want to try the new DeepSeek models.
(I'm a software dev, I've just never used LLMs) | 2025-02-03T17:47:40 | https://www.reddit.com/r/LocalLLaMA/comments/1igv1bg/can_i_run_llms_locally_with_my_hardware/ | skytbest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igv1bg | false | null | t3_1igv1bg | /r/LocalLLaMA/comments/1igv1bg/can_i_run_llms_locally_with_my_hardware/ | false | false | self | 0 | null |
Do I really need vLLM if my server can handle AI inference efficiently? | 1 | [removed] | 2025-02-03T17:54:35 | https://www.reddit.com/r/LocalLLaMA/comments/1igv7ly/do_i_really_need_vllm_if_my_server_can_handle_ai/ | Maleficent_Today8748 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igv7ly | false | null | t3_1igv7ly | /r/LocalLLaMA/comments/1igv7ly/do_i_really_need_vllm_if_my_server_can_handle_ai/ | false | false | self | 1 | null |
Noob question alert! Why do we need vLLM? | 1 | [removed] | 2025-02-03T18:09:25 | https://www.reddit.com/r/LocalLLaMA/comments/1igvl7s/noob_question_alert_why_do_we_need_vllm/ | Effective-Choice8148 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igvl7s | false | null | t3_1igvl7s | /r/LocalLLaMA/comments/1igvl7s/noob_question_alert_why_do_we_need_vllm/ | false | false | self | 1 | null |
Imatrix gguf vs regular GGUF? Which one to use? Why? | 15 | I don’t know the difference can someone please tell me, and which one should be used?
16gb VRAM user incase that matters | 2025-02-03T18:14:33 | https://www.reddit.com/r/LocalLLaMA/comments/1igvpu5/imatrix_gguf_vs_regular_gguf_which_one_to_use_why/ | No_Expert1801 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igvpu5 | false | null | t3_1igvpu5 | /r/LocalLLaMA/comments/1igvpu5/imatrix_gguf_vs_regular_gguf_which_one_to_use_why/ | false | false | self | 15 | null |
Famulus - simpler alternative to lsp-ai | 1 | I didn't really like the complexity of lsp-ai, so decided to write my own lsp server - [famulus](https://github.com/kurnevsky/famulus).
Some notable differences from lsp-ai:
* It uses llama-cpp via the llama-cpp server, simplifying its internals and the build process.
* It leverages inline completion from the upcoming LSP protocol 3.18, simplifying editor integrations.
* It takes advantage of the native fill-in-the-middle API in both llama-cpp and Ollama, effectively offloading the management of prompt formats to the server, which has inherent knowledge of these formats from the model file.
It's only a server part with no client plugins, so I guess vscode will require some additional work. Feel free to use :) | 2025-02-03T18:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1igvr5u/famulus_simpler_alternative_to_lspai/ | kurnevsky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igvr5u | false | null | t3_1igvr5u | /r/LocalLLaMA/comments/1igvr5u/famulus_simpler_alternative_to_lspai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fmMZvubmJHce3DiqpCqCGPJ56orjJaA8YIhLV76jKrk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pEZkLpCkNWlmhdWw2JAmpSSi-r9neEfM3xn3l80e784.jpg?width=108&crop=smart&auto=webp&s=30151c1cfa3feba16f646658f4ea8c99fb58e4a7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pEZkLpCkNWlmhdWw2JAmpSSi-r9neEfM3xn3l80e784.jpg?width=216&crop=smart&auto=webp&s=7aaa57c9c4c00a2aa58a8c041578659f6c725a61', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pEZkLpCkNWlmhdWw2JAmpSSi-r9neEfM3xn3l80e784.jpg?width=320&crop=smart&auto=webp&s=22f3ceb43d35b851d9a48819e23b496c4d8a63a5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pEZkLpCkNWlmhdWw2JAmpSSi-r9neEfM3xn3l80e784.jpg?width=640&crop=smart&auto=webp&s=85051b6cec6c03a77d483d3e2fbc510dbf30f097', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pEZkLpCkNWlmhdWw2JAmpSSi-r9neEfM3xn3l80e784.jpg?width=960&crop=smart&auto=webp&s=931d2d188c6a6e60b0a35622699968bc0a3a814f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pEZkLpCkNWlmhdWw2JAmpSSi-r9neEfM3xn3l80e784.jpg?width=1080&crop=smart&auto=webp&s=9783d8ae9a277b84e916e8f40d3d477b349636d9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pEZkLpCkNWlmhdWw2JAmpSSi-r9neEfM3xn3l80e784.jpg?auto=webp&s=a12d5913d3c4143588f7087f93f7a0f0fcb74665', 'width': 1200}, 'variants': {}}]} |
M3 Max, 64GB RAM | 3 | HI Everyone,
I apologise I'm sure this isn't the first discussion on this topic...
I have installed a few models after some research recently been swapping through these for various tasks -
mistral-small:24b-instruct-2501-q4\_K\_M
llama3.2:3b-instruct-fp16
deepseek-r1:32b-qwen-distill-q4\_K\_M
qwen2.5-coder:14b-instruct-q4\_K\_S
qwen2.5-coder:32b-instruct-q4\_K\_S
Main uses are -
1. Decent coding model(s)
2. Everyday general questions/tasks/document proofing etc.
3. Andddd the deepseek r1 is really just because I was interested in trying a reasoning model haha, but it's not the fastest.
I am curious if anyone has a recommendation for any/better models I've missed, particularly that works well with my hardware, thanks!
| 2025-02-03T18:17:22 | https://www.reddit.com/r/LocalLLaMA/comments/1igvsc0/m3_max_64gb_ram/ | BalaelGios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igvsc0 | false | null | t3_1igvsc0 | /r/LocalLLaMA/comments/1igvsc0/m3_max_64gb_ram/ | false | false | self | 3 | null |
Lex' podcast: DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megacluster... | 0 | 2025-02-03T18:19:06 | https://youtube.com/watch?v=_1f-o0nqpEI&si=oPHY39vrkxW2eXKq | etherd0t | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1igvtu8 | false | {'oembed': {'author_name': 'Lex Fridman', 'author_url': 'https://www.youtube.com/@lexfridman', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/_1f-o0nqpEI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters | Lex Fridman Podcast #459"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/_1f-o0nqpEI/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters | Lex Fridman Podcast #459', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1igvtu8 | /r/LocalLLaMA/comments/1igvtu8/lex_podcast_deepseek_china_openai_nvidia_xai_tsmc/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'mdoKN7iRU8j1owB6pcRYbn0bFyTEACxBRqBvbm4xnbM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/jkIMYV0OW0b12zSC2gKeDzWVylW7N58tyiuvlQ9rGy4.jpg?width=108&crop=smart&auto=webp&s=6ad8c9e44722b5195309dc6ee6e5524fd65dd789', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/jkIMYV0OW0b12zSC2gKeDzWVylW7N58tyiuvlQ9rGy4.jpg?width=216&crop=smart&auto=webp&s=7192e2daf57476b7471f48ad40b977b3930a1493', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/jkIMYV0OW0b12zSC2gKeDzWVylW7N58tyiuvlQ9rGy4.jpg?width=320&crop=smart&auto=webp&s=a8c6d8028435bb79ef059ace6fe0d391935b0dff', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/jkIMYV0OW0b12zSC2gKeDzWVylW7N58tyiuvlQ9rGy4.jpg?auto=webp&s=77d71dad3f59a69584d680964a1fcb4fca29fa23', 'width': 480}, 'variants': {}}]} |
||
Which Model For Math/Code | 1 | [removed] | 2025-02-03T18:21:27 | https://www.reddit.com/r/LocalLLaMA/comments/1igvvxo/which_model_for_mathcode/ | Ubbe_04 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igvvxo | false | null | t3_1igvvxo | /r/LocalLLaMA/comments/1igvvxo/which_model_for_mathcode/ | false | false | self | 1 | null |
Benchmarking ChatGPT, Qwen, and DeepSeek on Real-World AI Tasks | 1 | 2025-02-03T18:31:27 | https://decodebuzzing.medium.com/qbenchmarking-chatgpt-qwen-and-deepseek-on-real-world-ai-tasks-75b4d7040742 | DecodeBuzzingMedium | decodebuzzing.medium.com | 1970-01-01T00:00:00 | 0 | {} | 1igw520 | false | null | t3_1igw520 | /r/LocalLLaMA/comments/1igw520/benchmarking_chatgpt_qwen_and_deepseek_on/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'UZ_ZPPYzNvfPhSbtIOL2jeDw9toYxMg1Vp9v17EXvQI', 'resolutions': [{'height': 162, 'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=108&crop=smart&auto=webp&s=072aea72ce50c58d9fbe1ec2a0f7b310b12c0fee', 'width': 108}, {'height': 324, 'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=216&crop=smart&auto=webp&s=8fda70347e6503d87c26f74f38f438a1023f2aed', 'width': 216}, {'height': 480, 'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=320&crop=smart&auto=webp&s=8b097023add0c445502b2964b57176f88c079cf6', 'width': 320}, {'height': 960, 'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=640&crop=smart&auto=webp&s=1d74bdb45ba40606b07946e0edfeee9053905995', 'width': 640}, {'height': 1440, 'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=960&crop=smart&auto=webp&s=cdcc228b93bd91a59785d139d4b189358593bc1b', 'width': 960}, {'height': 1620, 'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?width=1080&crop=smart&auto=webp&s=3486543ba651beef7e12adf1343b788395de82c2', 'width': 1080}], 'source': {'height': 1800, 'url': 'https://external-preview.redd.it/FhtltxvbGGUpM94R2UZT1gSfho4oxRfBzazxbbdaFdk.jpg?auto=webp&s=46bf5a2e93ae0831b7500e8cc198a53ec487520c', 'width': 1200}, 'variants': {}}]} |
||
Alibaba Qwen-Max Ranks #7, Surpassing DeepSeek-v3 | 1 | 2025-02-03T18:35:46 | https://www.reddit.com/gallery/1igw8to | McSnoo | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1igw8to | false | null | t3_1igw8to | /r/LocalLLaMA/comments/1igw8to/alibaba_qwenmax_ranks_7_surpassing_deepseekv3/ | false | false | 1 | null |
||
Deepseek #3 in arena, qwen #7 | 1 | [removed] | 2025-02-03T18:38:53 | https://www.reddit.com/r/LocalLLaMA/comments/1igwbke/deepseek_3_in_arena_qwen_7/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igwbke | false | null | t3_1igwbke | /r/LocalLLaMA/comments/1igwbke/deepseek_3_in_arena_qwen_7/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LDDXUqGPPUTx4HPu4fpY3u2EheGyzk8ymTnP4Iz_G1E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?width=108&crop=smart&auto=webp&s=b8317b0c6b366b89273bce903cb9915fa45eea98', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?width=216&crop=smart&auto=webp&s=9def9077a123dc3a824d101e63d6e70080383821', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?width=320&crop=smart&auto=webp&s=9717231ad6722db0b9f11e428a97e2f738cbeaa8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?width=640&crop=smart&auto=webp&s=b3bfc108edb23af8ed11500fafb9d9600800a607', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?width=960&crop=smart&auto=webp&s=1c556fb46f27bb6413e6e1a4f0ceef7d2d4b93a4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?width=1080&crop=smart&auto=webp&s=8aeccde41fbc7e67eab04e1b7b43b7a03fc3f4c0', 'width': 1080}], 'source': {'height': 1107, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?auto=webp&s=1691ea30d45707db1bd9ed79e0e8331ac6f66fbf', 'width': 2048}, 'variants': {}}]} |
Deepseek #3 in arena. Qwen max #7 | 1 | [removed] | 2025-02-03T18:41:14 | https://www.reddit.com/r/LocalLLaMA/comments/1igwds0/deepseek_3_in_arena_qwen_max_7/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igwds0 | false | null | t3_1igwds0 | /r/LocalLLaMA/comments/1igwds0/deepseek_3_in_arena_qwen_max_7/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LDDXUqGPPUTx4HPu4fpY3u2EheGyzk8ymTnP4Iz_G1E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?width=108&crop=smart&auto=webp&s=b8317b0c6b366b89273bce903cb9915fa45eea98', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?width=216&crop=smart&auto=webp&s=9def9077a123dc3a824d101e63d6e70080383821', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?width=320&crop=smart&auto=webp&s=9717231ad6722db0b9f11e428a97e2f738cbeaa8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?width=640&crop=smart&auto=webp&s=b3bfc108edb23af8ed11500fafb9d9600800a607', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?width=960&crop=smart&auto=webp&s=1c556fb46f27bb6413e6e1a4f0ceef7d2d4b93a4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?width=1080&crop=smart&auto=webp&s=8aeccde41fbc7e67eab04e1b7b43b7a03fc3f4c0', 'width': 1080}], 'source': {'height': 1107, 'url': 'https://external-preview.redd.it/TfK6_UXiHbY81Y_q4A3Uzc-1XxooA9QuROXltIOd7-s.jpg?auto=webp&s=1691ea30d45707db1bd9ed79e0e8331ac6f66fbf', 'width': 2048}, 'variants': {}}]} |
I trained a tinystories model from scratch for educational purposes, how cooked? (1M-parameters) | 103 | 2025-02-03T18:48:54 | THE--GRINCH | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1igwkpw | false | null | t3_1igwkpw | /r/LocalLLaMA/comments/1igwkpw/i_trained_a_tinystories_model_from_scratch_for/ | false | false | 103 | {'enabled': True, 'images': [{'id': '7m0rqi2zzdyHgAGlzOsmB1L54FsS4Rrkr4GfkT0vFC4', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/mdy6shrszyge1.png?width=108&crop=smart&auto=webp&s=d3d9c77f0c743a216aaff5b8e7e52f78fa3d0150', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/mdy6shrszyge1.png?width=216&crop=smart&auto=webp&s=5e55d43b4ec563fd92173fdc559db055e6b7cda5', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/mdy6shrszyge1.png?width=320&crop=smart&auto=webp&s=aa92782c1445f962ccd9c0ac18a5b4662284e025', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/mdy6shrszyge1.png?width=640&crop=smart&auto=webp&s=1cae5a6b70dd9f4b9a4cda0016978c57391368c6', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/mdy6shrszyge1.png?width=960&crop=smart&auto=webp&s=95f2a021c3a8e5355d446d40bf2d917c09701194', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/mdy6shrszyge1.png?width=1080&crop=smart&auto=webp&s=65b3f0de1726ea576af11e3122d3b44f2e3ec897', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/mdy6shrszyge1.png?auto=webp&s=8eaea5c15c5489eef97e1d6b4f6e34b15f67dbcb', 'width': 1280}, 'variants': {}}]} |
|||
o3-mini-LOW beats medium & high! Model Performance on Med School STEP 1-3 Exams (o1, DeepSeek R1, o3-mini, Claude, Llama 405b, Mistral, etc) | 0 | With the wife in med school, I was curious how AI models would do on the STEP-Practice Exams, and which is the best:
https://preview.redd.it/pal6a57j0zge1.png?width=747&format=png&auto=webp&s=4424a39408b562740ce315723bbe48c3052ae896
It's no surprise OpenAI o1 is at the top. Some surprising finds are that DeepSeek R1 is only 0.8% off of the top-spot with the full 671b model and 4% off with the 70b llama distill. Also, **o3-mini-low beats the medium and high-thought models**!
It's looking like o3-full will easily take the top spot, if the o3-mini results are anything to go by. Would love to discuss and compare which models work best for you.
You can test it yourself (and plugin whatever tests/quizzes you want to test the models on). Code & more info is on GitHub: [https://github.com/CodeUpdaterBot/AIvsSTEP](https://github.com/CodeUpdaterBot/AIvsSTEP) | 2025-02-03T19:04:12 | https://www.reddit.com/r/LocalLLaMA/comments/1igwysd/o3minilow_beats_medium_high_model_performance_on/ | ARVwizardry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igwysd | false | null | t3_1igwysd | /r/LocalLLaMA/comments/1igwysd/o3minilow_beats_medium_high_model_performance_on/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'e26KI2ovXbtwi3QfZF3vSaAUw9RAP7zLa8INAzB_Czk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3pMpvcXoR5wSJCvL-K_jVNXnAL4QMguc28K0Tevi_R4.jpg?width=108&crop=smart&auto=webp&s=7d307eb17012be9a68a443dfec3cac8194c6450d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3pMpvcXoR5wSJCvL-K_jVNXnAL4QMguc28K0Tevi_R4.jpg?width=216&crop=smart&auto=webp&s=44a03d4c4b93f8d867cd478da4e9d72ce8e2ca68', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3pMpvcXoR5wSJCvL-K_jVNXnAL4QMguc28K0Tevi_R4.jpg?width=320&crop=smart&auto=webp&s=4c2a5c9d9adf1376109e7e8cfe1bb7dae7ef54e5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3pMpvcXoR5wSJCvL-K_jVNXnAL4QMguc28K0Tevi_R4.jpg?width=640&crop=smart&auto=webp&s=0988ffa593164d00ca26e3c8adabdf4fc5dde91d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3pMpvcXoR5wSJCvL-K_jVNXnAL4QMguc28K0Tevi_R4.jpg?width=960&crop=smart&auto=webp&s=c162d9ed14efff4dfe204e619bfaed237df92f17', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3pMpvcXoR5wSJCvL-K_jVNXnAL4QMguc28K0Tevi_R4.jpg?width=1080&crop=smart&auto=webp&s=8a028af1c5d16710f4143971d75df2a4fbed2ad5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3pMpvcXoR5wSJCvL-K_jVNXnAL4QMguc28K0Tevi_R4.jpg?auto=webp&s=278376b235207540baae4e9a4d2b277a00d01a4e', 'width': 1200}, 'variants': {}}]} |
|
How can I use LLava with Deepseek? | 0 | Hi everyone, I want to use deepseek-r1 locally, but I noticed that it does not support parsing images natively, and that other models (like Llava) are used for this, as they parse and interpret the image and then hand the result to other models. My question is how I can use them together, as I regularly use image parsing in the deepseek and chatgpt online clients as it's easier than writing out long math formulas by hand. I googled but couldn't really find anything useful, so I am asking here to see if anyone knows an answer to this.
I set up the [Ollama webUI lite](https://github.com/ollama-webui/ollama-webui-lite) and it works nice and in comparisson to open web ui it's nice and simple. But if what I want only works with other UIs I would also switch. | 2025-02-03T19:04:24 | https://www.reddit.com/r/LocalLLaMA/comments/1igwyzc/how_can_i_use_llava_with_deepseek/ | gh04t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igwyzc | false | null | t3_1igwyzc | /r/LocalLLaMA/comments/1igwyzc/how_can_i_use_llava_with_deepseek/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'slERuKwhhg1CeHaw_wloPs8J7mAwpM6Q6Em-ld-lqMY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3dEkFADSWt21LBb657K5J2ThYPG6f1Ju_7koC3pkcoI.jpg?width=108&crop=smart&auto=webp&s=1284539c3e825deea474569611224f078b8b83ac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3dEkFADSWt21LBb657K5J2ThYPG6f1Ju_7koC3pkcoI.jpg?width=216&crop=smart&auto=webp&s=30fed31c8840cdd7fb7100a4c88b4a105fb93323', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3dEkFADSWt21LBb657K5J2ThYPG6f1Ju_7koC3pkcoI.jpg?width=320&crop=smart&auto=webp&s=70b67f3b7bc095f40dc624b8b837238bcfc6a14f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3dEkFADSWt21LBb657K5J2ThYPG6f1Ju_7koC3pkcoI.jpg?width=640&crop=smart&auto=webp&s=f7f1d60c1a906d4228678bb95a5997579355afe2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3dEkFADSWt21LBb657K5J2ThYPG6f1Ju_7koC3pkcoI.jpg?width=960&crop=smart&auto=webp&s=c07289f671b5a564b964331cb9b03dafb64b9597', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3dEkFADSWt21LBb657K5J2ThYPG6f1Ju_7koC3pkcoI.jpg?width=1080&crop=smart&auto=webp&s=aa2df682edd82c9104311dbf172cb8c522d2a949', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3dEkFADSWt21LBb657K5J2ThYPG6f1Ju_7koC3pkcoI.jpg?auto=webp&s=b1d764c4e39c5948b520c6bc62e588734f22e406', 'width': 1200}, 'variants': {}}]} |
Experiencing Worse Quality with Llama3.2-vision:90b than 11b | 1 | [removed] | 2025-02-03T19:11:49 | https://www.reddit.com/r/LocalLLaMA/comments/1igx5hr/experiencing_worse_quality_with_llama32vision90b/ | tobalotv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igx5hr | false | null | t3_1igx5hr | /r/LocalLLaMA/comments/1igx5hr/experiencing_worse_quality_with_llama32vision90b/ | false | false | self | 1 | null |
OpenAI compatible API for Flowise | 4 | As far as I understand, Flowise does not offer an OpenAI compatible API. Therefore I have built a very rudimentary solution to use Flowise flows and agents in Open-WebUI. Maybe it will help someone.
[https://github.com/crashr/open-adapter](https://github.com/crashr/open-adapter) | 2025-02-03T19:17:02 | https://www.reddit.com/r/LocalLLaMA/comments/1igxabm/openai_compatible_api_for_flowise/ | muxxington | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igxabm | false | null | t3_1igxabm | /r/LocalLLaMA/comments/1igxabm/openai_compatible_api_for_flowise/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'CrpRDrVz4TB0fCp57K0JqctK5q0NGjPIjLwesqoXS3U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/idO0UlbI5bgQ-_WNDrG-YVOwh9vTTBTmNf_7VktcWx8.jpg?width=108&crop=smart&auto=webp&s=81c8d8832db6501ba6fba65ced341ef30d481eb8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/idO0UlbI5bgQ-_WNDrG-YVOwh9vTTBTmNf_7VktcWx8.jpg?width=216&crop=smart&auto=webp&s=a045133141f893dffe6c08cad71b953c7bd75a61', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/idO0UlbI5bgQ-_WNDrG-YVOwh9vTTBTmNf_7VktcWx8.jpg?width=320&crop=smart&auto=webp&s=cdb71840f5fcbec7ee18580d5c6cc820f1344ca7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/idO0UlbI5bgQ-_WNDrG-YVOwh9vTTBTmNf_7VktcWx8.jpg?width=640&crop=smart&auto=webp&s=df92be02aeb86d010f22140c04dc15bfa03c7dcf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/idO0UlbI5bgQ-_WNDrG-YVOwh9vTTBTmNf_7VktcWx8.jpg?width=960&crop=smart&auto=webp&s=c7719e31a01de24f30e1edcc58a4b3220bb890ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/idO0UlbI5bgQ-_WNDrG-YVOwh9vTTBTmNf_7VktcWx8.jpg?width=1080&crop=smart&auto=webp&s=572c57ace2133337b78f924020896752e5751699', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/idO0UlbI5bgQ-_WNDrG-YVOwh9vTTBTmNf_7VktcWx8.jpg?auto=webp&s=6b0cce64c6c38f439cda016f3fc018b09f36a7f3', 'width': 1200}, 'variants': {}}]} |
Deepseek doing weird answer | 0 | So i downloaded deepseek 8b over ollama, now im running it localy but on every it awnseres really weird
https://preview.redd.it/67zb2sfq5zge1.png?width=983&format=png&auto=webp&s=b5a64a0ec3077ff5b049613b2daa68de36745a80
| 2025-02-03T19:22:17 | https://www.reddit.com/r/LocalLLaMA/comments/1igxf33/deepseek_doing_weird_answer/ | Consistent-Gold8224 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igxf33 | false | null | t3_1igxf33 | /r/LocalLLaMA/comments/1igxf33/deepseek_doing_weird_answer/ | false | false | 0 | null |
|
does anybody know which voice model boardy.ai use? | 2 | I wondered if they are using any oss solution | 2025-02-03T19:28:20 | https://www.reddit.com/r/LocalLLaMA/comments/1igxkid/does_anybody_know_which_voice_model_boardyai_use/ | dirtyring | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igxkid | false | null | t3_1igxkid | /r/LocalLLaMA/comments/1igxkid/does_anybody_know_which_voice_model_boardyai_use/ | false | false | self | 2 | null |
Trying to run Unsloth Deepseek R1 1.58 | 4 | I have 2 3090's (48GB VRAM) and 64 GB DDR5 6400, AMD 7900X but a Gen 3 NVME SSD 3.5 GB/s.
C:\...\llama.cpp\build\bin\Release\llama-cli.exe ^ --model "C:\...\unsloth\DeepSeek-R1-GGUF\DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf" ^ --cache-type-k q4_0 ^ --threads 12 ^ --prio 2 ^ --temp 0.6 ^ --ctx-size 2048 ^ --seed 3407 ^ --device cuda0,cuda1 ^ --split-mode layer ^ --n-gpu-layers 16 ^ --no-conversation ^ --main-gpu 0 ^ --no-kv-offload ^ --prompt "<|User|>Create a Flappy Bird game in Python.<|Assistant|>"
But I am getting like 4 seconds per token. I thought it said 80+ VRAM + RAM should get you single digit tok per sec.
I am not sure what am i doing wrong to replicate other people running faster than mine, or if my SSD is the bottleneck?
Any help appreciated. | 2025-02-03T19:29:28 | https://www.reddit.com/r/LocalLLaMA/comments/1igxllw/trying_to_run_unsloth_deepseek_r1_158/ | zzComtra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igxllw | false | null | t3_1igxllw | /r/LocalLLaMA/comments/1igxllw/trying_to_run_unsloth_deepseek_r1_158/ | false | false | self | 4 | null |
Question | 1 | [removed] | 2025-02-03T19:37:39 | https://www.reddit.com/r/LocalLLaMA/comments/1igxt12/question/ | North-Glove-3057 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igxt12 | false | null | t3_1igxt12 | /r/LocalLLaMA/comments/1igxt12/question/ | false | false | self | 1 | null |
What is currently the best uncensored local AI model? | 1 | [removed] | 2025-02-03T19:39:46 | https://www.reddit.com/r/LocalLLaMA/comments/1igxutc/what_is_currently_the_best_uncensored_local_ai/ | TheMoon8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igxutc | false | null | t3_1igxutc | /r/LocalLLaMA/comments/1igxutc/what_is_currently_the_best_uncensored_local_ai/ | false | false | self | 1 | null |
is there a capable local llm which could run on a lenovo thinkcenter pc with intel i5-6400? I need to run one as a chat bot and for re-writing text. | 1 | [removed] | 2025-02-03T19:45:49 | https://www.reddit.com/r/LocalLLaMA/comments/1igy06l/is_there_a_capable_local_llm_which_could_run_on_a/ | NetajiBBSfan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igy06l | false | null | t3_1igy06l | /r/LocalLLaMA/comments/1igy06l/is_there_a_capable_local_llm_which_could_run_on_a/ | false | false | self | 1 | null |
Can't compile Cohere examples due to "Circular Import" | 0 | Can someone please help me with Cohere? I wanted to test their API locally but it doesn't work at all.
Examples for API-V1 "Classify" often wants me to import a class "ClassifyExample", which simply doesn't exist.
And Examples for API-V2 are failing with ClientV2 being a circular import.
I was testing the python script here: [https://docs.cohere.com/reference/about](https://docs.cohere.com/reference/about)
`import cohere`
`co = cohere.ClientV2("<<apiKey>>")`
but only get the following error:
**AttributeError: partially initialized module 'cohere' has no attribute 'ClientV2' (most likely due to a circular import)**
| 2025-02-03T19:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/1igy3fh/cant_compile_cohere_examples_due_to_circular/ | tf1155 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igy3fh | false | null | t3_1igy3fh | /r/LocalLLaMA/comments/1igy3fh/cant_compile_cohere_examples_due_to_circular/ | false | false | self | 0 | null |
Notes Mind - Open Source / Local natural language interface for Apple Notes App | 2 | 👋
I've been working on the simple app to provide a natural language interface over all your notes in Apple Notes App. (Chat with Notes)
It is open source and processes everything locally using Sentence Transformers and Ollama for summarisation.
It isn't perfect and rough around the edges, but it's a proof of concept if someone wants to develop it further.
Some technical details:
\* Uses AppleScript to extract your notes from the native Apple Notes application.
\* Generates semantic embeddings using Sentence Transformers.
\* Stores notes in SQLite and enables searches with both vector (sqlite-vec) and FTS queries.
\* Generates summary using a local LLM on Ollama
[https://github.com/namuan/notes-mind](https://github.com/namuan/notes-mind)
See a quick demo here
[https://x.com/namuan\_twt/status/1886363583648256348](https://x.com/namuan_twt/status/1886363583648256348)
https://preview.redd.it/4yei30ybczge1.png?width=2272&format=png&auto=webp&s=339250b6daf9cb78d032cb415c8c1759d4484487
| 2025-02-03T20:07:03 | https://www.reddit.com/r/LocalLLaMA/comments/1igyjk9/notes_mind_open_source_local_natural_language/ | namuan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igyjk9 | false | null | t3_1igyjk9 | /r/LocalLLaMA/comments/1igyjk9/notes_mind_open_source_local_natural_language/ | false | false | 2 | null |
|
Introducing Deeper Seeker - A simpler and OSS version of OpenAI's latest Deep Research feature. | 226 | Deeper Seeker is a simpler **OSS version of OpenAI's latest Deep Research** feature in [ChatGPT.It](http://ChatGPT.It) is an agentic research tool to reason, create multi-step tasks , synthesize data from multiple online resources and create neat reports
I made it using Exa web search APIs. I didn't use langchain/langgraph or any agent orchestration framework.
Although it does not work well for complex queries, I welcome whoever is interested in contributing to the repo and improving it.
**Open to hearing all the feedback from you all !!**
[demo ](https://i.redd.it/b859bc5egzge1.gif)
| 2025-02-03T20:23:12 | https://www.reddit.com/r/LocalLLaMA/comments/1igyy0n/introducing_deeper_seeker_a_simpler_and_oss/ | hjofficial | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igyy0n | false | null | t3_1igyy0n | /r/LocalLLaMA/comments/1igyy0n/introducing_deeper_seeker_a_simpler_and_oss/ | false | false | 226 | null |
|
Parallel interference on multiple GPU | 4 | I have a question, if I'm running interference on multiple GPU on a model that is split thru them, as i understood interference is happening on single GPU at time, so effectively, if I have several cards I cannot really utilize them in parallel.
If it really only possible way to interfere or there is a way to interfere on multiple gpu at once ?
( maybe on each GPU is part of each layer and multiple GPUs can crunch thru it at once, idk )
| 2025-02-03T20:28:16 | https://www.reddit.com/r/LocalLLaMA/comments/1igz2lj/parallel_interference_on_multiple_gpu/ | haluxa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igz2lj | false | null | t3_1igz2lj | /r/LocalLLaMA/comments/1igz2lj/parallel_interference_on_multiple_gpu/ | false | false | self | 4 | null |
Synthetic Rejected Preference Data Creation | 2 | 2025-02-03T20:38:27 | kindacognizant | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1igzbo8 | false | null | t3_1igzbo8 | /r/LocalLLaMA/comments/1igzbo8/synthetic_rejected_preference_data_creation/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'pA3o4FEBcypIbY56kxcGXXge98h0wqSF6fSqLirbSs0', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/2x9vney0jzge1.png?width=108&crop=smart&auto=webp&s=896cc2c8c083497d090807af2283445ebc471b58', 'width': 108}, {'height': 181, 'url': 'https://preview.redd.it/2x9vney0jzge1.png?width=216&crop=smart&auto=webp&s=27634b1c471cb4c2faf5716542e37312157b2f17', 'width': 216}, {'height': 268, 'url': 'https://preview.redd.it/2x9vney0jzge1.png?width=320&crop=smart&auto=webp&s=ec96b11bf43a9cdbe07655429d161e22158dfd49', 'width': 320}, {'height': 536, 'url': 'https://preview.redd.it/2x9vney0jzge1.png?width=640&crop=smart&auto=webp&s=4686f04c31661610c1808a66eac4c6b3f2b69b4e', 'width': 640}, {'height': 805, 'url': 'https://preview.redd.it/2x9vney0jzge1.png?width=960&crop=smart&auto=webp&s=f0f7d990c428b478f46723891ee8bbeab63e0dcf', 'width': 960}, {'height': 905, 'url': 'https://preview.redd.it/2x9vney0jzge1.png?width=1080&crop=smart&auto=webp&s=be196a0169b9eef85fd13f133504df59d3b98b68', 'width': 1080}], 'source': {'height': 1496, 'url': 'https://preview.redd.it/2x9vney0jzge1.png?auto=webp&s=cfdd1c393f8ea31fdc65d56c19abbcfb7fb482d4', 'width': 1784}, 'variants': {}}]} |
|||
trying to run janus keep getting "Getting requirements to build wheel did not run successfully." looking for advice | 1 | [removed] | 2025-02-03T20:39:26 | https://www.reddit.com/r/LocalLLaMA/comments/1igzck1/trying_to_run_janus_keep_getting_getting/ | Informal-Bonus-7925 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igzck1 | false | null | t3_1igzck1 | /r/LocalLLaMA/comments/1igzck1/trying_to_run_janus_keep_getting_getting/ | false | false | self | 1 | null |
Any open source projects similar to open AI's new deep research that I can run locally? | 0 | Open AI's deep research genuinely seems like a really cool use for AI. I'm curious if there are any tools that try to do something similar that I can run locally, that offer a similar user experience. | 2025-02-03T20:41:30 | https://www.reddit.com/r/LocalLLaMA/comments/1igzegf/any_open_source_projects_similar_to_open_ais_new/ | Lost_Fox__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1igzegf | false | null | t3_1igzegf | /r/LocalLLaMA/comments/1igzegf/any_open_source_projects_similar_to_open_ais_new/ | false | false | self | 0 | null |
"Hyperfitting" models can help remove repetitions? A truly "perplexing" result. | 1 | [removed] | 2025-02-03T21:14:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ih08k2/hyperfitting_models_can_help_remove_repetitions_a/ | LagOps91 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih08k2 | false | null | t3_1ih08k2 | /r/LocalLLaMA/comments/1ih08k2/hyperfitting_models_can_help_remove_repetitions_a/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'hTZQR6DR4aaSj11BaMCI9js9kWxneFZckfQZ5_h6fKI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/v1FQbpxw3SIaeZ37u58QfTWWJ0aZ7_YakEMWYU-nVIU.jpg?width=108&crop=smart&auto=webp&s=a9a11909b4c37d1ff65b3378c51f288f3ac11524', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/v1FQbpxw3SIaeZ37u58QfTWWJ0aZ7_YakEMWYU-nVIU.jpg?width=216&crop=smart&auto=webp&s=06b946b3af32b107bf33ca7427e28664d01ae68b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/v1FQbpxw3SIaeZ37u58QfTWWJ0aZ7_YakEMWYU-nVIU.jpg?width=320&crop=smart&auto=webp&s=707599f8a4fdbf01e3c7f4d0a307b3a77095894b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/v1FQbpxw3SIaeZ37u58QfTWWJ0aZ7_YakEMWYU-nVIU.jpg?auto=webp&s=e649ba4772f4a93324c4fa49c80799e1aea4b520', 'width': 480}, 'variants': {}}]} |
"Hyperfitting" a model to a small training set can postively impact human preference of model outputs | 22 | I have stumbled accross a very strange result presented in a [video](https://www.youtube.com/watch?v=AAiMOFQJPx8) that I think might be of interested to those of you who are into finetuning models, especially for creative writing or rp purposes.
According to the [paper](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0FRcHQtSmpVQkVmTlFXSC1zQ0NFQUJzd1otZ3xBQ3Jtc0tuV00yV3Y1eS1wb0JOQXhyMVR4aG14VW5RZlhVSUFuTl9LRVNKZW9VSF9PeEVZY0xNR09LLVA4bEFDT0VWc0VUQ20tZDd0MnhTVXFCWnhUeE9kZ1RwMG5hcXFNWHdWbVpTUUh2eExmM1VQeDNtZ3FnMA&q=https%3A%2F%2Farxiv.org%2Fabs%2F2412.04318&v=AAiMOFQJPx8) presented, overfitting a model to an extreme degree on a small training data set can help it stay coherent over long outputs and greatly reduce repetitions.
This results in a very sharpened output distribution, which has terrible perplexity (due to output distribution not matching natural language entropy), but interestingly enough, very high human preference values.
On one hand I can see this actually work, because the model learns how to keep the output coherent, as otherwise it can't match the training dataset to reduce loss.
On the other, intuitively, I would expect there to be more repetations outside of the training data or copied over slop phrases, but according to the paper, that strangely enough doesn't happen.
Personally I have no experience in fine-tuning models and am not using NVIDIA hardware either, but perhaps someone could try this out by making an experimental finetune? Do you guys/girls think this has merit? | 2025-02-03T21:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ih0c5x/hyperfitting_a_model_to_a_small_training_set_can/ | LagOps91 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih0c5x | false | null | t3_1ih0c5x | /r/LocalLLaMA/comments/1ih0c5x/hyperfitting_a_model_to_a_small_training_set_can/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'hTZQR6DR4aaSj11BaMCI9js9kWxneFZckfQZ5_h6fKI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/v1FQbpxw3SIaeZ37u58QfTWWJ0aZ7_YakEMWYU-nVIU.jpg?width=108&crop=smart&auto=webp&s=a9a11909b4c37d1ff65b3378c51f288f3ac11524', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/v1FQbpxw3SIaeZ37u58QfTWWJ0aZ7_YakEMWYU-nVIU.jpg?width=216&crop=smart&auto=webp&s=06b946b3af32b107bf33ca7427e28664d01ae68b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/v1FQbpxw3SIaeZ37u58QfTWWJ0aZ7_YakEMWYU-nVIU.jpg?width=320&crop=smart&auto=webp&s=707599f8a4fdbf01e3c7f4d0a307b3a77095894b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/v1FQbpxw3SIaeZ37u58QfTWWJ0aZ7_YakEMWYU-nVIU.jpg?auto=webp&s=e649ba4772f4a93324c4fa49c80799e1aea4b520', 'width': 480}, 'variants': {}}]} |
CPU inference on Lenovo P1 Gen7 w/ LPCAMM2 | 1 | [removed] | 2025-02-03T21:30:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ih0m4p/cpu_inference_on_lenovo_p1_gen7_w_lpcamm2/ | beauddl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih0m4p | false | null | t3_1ih0m4p | /r/LocalLLaMA/comments/1ih0m4p/cpu_inference_on_lenovo_p1_gen7_w_lpcamm2/ | false | false | self | 1 | null |
Where are the European models? | 0 | Are they really all US or Chinese? | 2025-02-03T21:32:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ih0o1s/where_are_the_european_models/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih0o1s | false | null | t3_1ih0o1s | /r/LocalLLaMA/comments/1ih0o1s/where_are_the_european_models/ | false | false | self | 0 | null |
"Can I Run This LLM? – A VRAM Estimator for Local AI Models" | 97 | Hi, I’m currently in my third semester of electrical engineering, and I built this project over the weekend.
[http://www.canirunthisllm.net/](http://www.canirunthisllm.net/) helps users estimate the VRAM requirements for running local AI models.
This is my first web app project, and I would really appreciate any feedback!
Next, I’m working on a feature that lets users enter a Hugging Face model tag into a form, and the backend will automatically fetch the model parameters. After that, I plan to add support for multiple GPUs. Also planned for the future is an .exe that automatically detect PC specifications. I already wrote the script but dont know how to fetch it into Django. | 2025-02-03T21:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ih0orp/can_i_run_this_llm_a_vram_estimator_for_local_ai/ | negative_entropie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih0orp | false | null | t3_1ih0orp | /r/LocalLLaMA/comments/1ih0orp/can_i_run_this_llm_a_vram_estimator_for_local_ai/ | false | false | self | 97 | null |
Has anyone built conversational AI that feels human? | 0 | Hey guys, LLMs are great but they don't feel human when you talk to them. Has anyone ever built an actual conversational model? For instance, something that reacts with annoyance if you make repetitive questions, that seems to have feelings of their own like fear,joy and self-esteem? | 2025-02-03T21:34:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ih0q3y/has_anyone_built_conversational_ai_that_feels/ | Jazzlike_Tooth929 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih0q3y | false | null | t3_1ih0q3y | /r/LocalLLaMA/comments/1ih0q3y/has_anyone_built_conversational_ai_that_feels/ | false | false | self | 0 | null |
x³ + y³ + z³ = 20, slove x,y,z. DeepSeek1 - O3mini 0 | 2 | I asked both deepseek and o3 mini,
O mini cloud not slove it by integer solution so it resorted to extra operations, while deepseek was very creative and solve it. | 2025-02-03T21:38:02 | https://www.reddit.com/gallery/1ih0sxz | Extension_Swimmer451 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ih0sxz | false | null | t3_1ih0sxz | /r/LocalLLaMA/comments/1ih0sxz/x³_y³_z³_20_slove_xyz_deepseek1_o3mini_0/ | false | false | 2 | null |
|
Why nobody talks about Stanford Co-Storm ? It writes advanced in depth reports | 43 | 2025-02-03T21:38:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ih0te1/why_nobody_talks_about_stanford_costorm_it_writes/ | sickleRunner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih0te1 | false | null | t3_1ih0te1 | /r/LocalLLaMA/comments/1ih0te1/why_nobody_talks_about_stanford_costorm_it_writes/ | false | false | 43 | null |
||
Llofticries…? or time for a break after a llong weekend | 0 | these lyrics have always been enigmatic, with the ambiguity of a dream…
but I just realised it’s clearly an abstract description of someone attempting to jailbreak a large language model!
right?!
*Green, green thunder and the loud, loud rain
Lead our woes asunder
‘Neath the proud, proud veins Of trains let bleed the gunmen of our
Pumping earthly hearts
Wean our joys in plunder
Peel our shining teeth
Bid our hold on happiness
Beat weighty tests with lofty cries
Lofty cries with trembling thighs Weepy chests with weepy sighs Weepy skin with trembling thighs
You must be hovering over yourself
Watching us drip on each other’s sides
Dear brother collect all the liquids off of the floor
Use your oily fingers, make a paste, let it form
Let it seep through your sockets and ears
Into your precious ruptured skull
Let it seep, let it keep you from us Patiently heal you Patiently unreel you
Beat weighty chests with lofty cries
Lofty cries with trembling thighs
Weepy chests with weepy sighs
Weepy skins with trembling thighs
You must be hovering over yourself
Watching us drip on each other’s sides
Dear brother collect all the liquids off of the floor
Use your oily fingers, make a paste, let it form
Beat weighty cheats with lofty cries
Lofty cries with trembling thighs
Weepy chests with weepy sighs Weepy skin with trembling thighs
You must be hovering over yourself
Watching us drip on each other’s sides* | 2025-02-03T21:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ih1034/llofticries_or_time_for_a_break_after_a_llong/ | footlooseboss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih1034 | false | null | t3_1ih1034 | /r/LocalLLaMA/comments/1ih1034/llofticries_or_time_for_a_break_after_a_llong/ | false | false | self | 0 | null |
Ollama-deepseek install, localhost:11434 REACHABLE & LOCALIP:11434 UNREACHABLE?? Why? | 1 | [removed] | 2025-02-03T21:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ih146r/ollamadeepseek_install_localhost11434_reachable/ | HasanJ996 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih146r | false | null | t3_1ih146r | /r/LocalLLaMA/comments/1ih146r/ollamadeepseek_install_localhost11434_reachable/ | false | false | self | 1 | null |
Open WebUI w/ Mistral Small 24B -> Kokoro TTS -> Whisper STT = Conversational agent all local and it's great! | 1 | [removed] | 2025-02-03T21:58:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ih1ame/open_webui_w_mistral_small_24b_kokoro_tts_whisper/ | Rollingsound514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih1ame | false | null | t3_1ih1ame | /r/LocalLLaMA/comments/1ih1ame/open_webui_w_mistral_small_24b_kokoro_tts_whisper/ | false | false | self | 1 | null |
Turning Reddit Conversations Into AI-Powered Self-Discovery | 11 | So this post is about a program I made today to help aid me in seeing myself and the world more objectively. It has helped me realize some things about myself that I hope to impart to others so that they might find similar insights into how they interact others. I hope that by gaining this insight people will be more mindful of how they communicate.
What if your social media interactions could help you understand yourself better? I’ve built an open-source tool that transforms Reddit activity into structured blog posts using AI agents – and the results might surprise you.
The system acts like a team of specialized AI analysts. One agent expands fragmented Reddit posts into detailed narratives, another identifies emotional patterns, while a metrics agent quantifies behavioral trends. A final formatting agent structures these insights into readable Markdown, complete with headers and visual dividers. Built with Python and locally hosted LLMs, it processes everything from post frequency to sentiment shifts, revealing hidden patterns in online interactions.
This isn’t just about content generation – it’s a digital mirror. By analyzing both my posts and community responses, the tool highlights blind spots in my thinking and surfaces collaborative ideas I might have missed. The AI acts as a neutral third party, combining human discussions with algorithmic analysis to spark new project ideas while maintaining complete privacy through local processing.
As an ongoing “AI art project,” it raises crucial questions: How do we maintain authenticity in algorithm-assisted self-reflection? Can open-source tools counterbalance corporate content farms? I’m particularly interested in using smaller, focused models to reduce hidden biases. The current version is just the beginning – I’m refining how it visualizes conversation networks and detects emerging themes across threads.
The code is available on [https://github.com/kliewerdaniel/RedToBlog02](https://github.com/kliewerdaniel/RedToBlog02) with no monetization. It’s a living experiment in human-AI collaboration – not polished perfection, but an honest look at how we might evolve alongside thinking machines.
[https://danielkliewer.com/2025/02/03/scrape-reddit-analysis-blog](https://danielkliewer.com/2025/02/03/scrape-reddit-analysis-blog) | 2025-02-03T22:12:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ih1nxw/turning_reddit_conversations_into_aipowered/ | KonradFreeman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih1nxw | false | null | t3_1ih1nxw | /r/LocalLLaMA/comments/1ih1nxw/turning_reddit_conversations_into_aipowered/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'X0i-8gqoJPpjAU9vSqcrY-wXwsSBCHwJV04XMSrLgE4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/orKsXVBDm-EzQgqXd-FWz1FLWX1tmntIhjtNCJqD1RQ.jpg?width=108&crop=smart&auto=webp&s=9f034831815cece499bdf394d257c2cd4a21bfd6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/orKsXVBDm-EzQgqXd-FWz1FLWX1tmntIhjtNCJqD1RQ.jpg?width=216&crop=smart&auto=webp&s=4d62674dc1518a70cce6b104748710833d83239f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/orKsXVBDm-EzQgqXd-FWz1FLWX1tmntIhjtNCJqD1RQ.jpg?width=320&crop=smart&auto=webp&s=bd363ce4156011a6c7927caad06d1d6468bfacb7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/orKsXVBDm-EzQgqXd-FWz1FLWX1tmntIhjtNCJqD1RQ.jpg?width=640&crop=smart&auto=webp&s=95e1f365ce4d63a30c335e7c0088218c22d5b8a8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/orKsXVBDm-EzQgqXd-FWz1FLWX1tmntIhjtNCJqD1RQ.jpg?width=960&crop=smart&auto=webp&s=41a4b6d21d1434d4b5bc64ec07feb1715cd02696', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/orKsXVBDm-EzQgqXd-FWz1FLWX1tmntIhjtNCJqD1RQ.jpg?width=1080&crop=smart&auto=webp&s=9a5bf2ec9efa752ef1b3bc305a5d95bdef182803', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/orKsXVBDm-EzQgqXd-FWz1FLWX1tmntIhjtNCJqD1RQ.jpg?auto=webp&s=3aee56ffdd332a424bcd0d2d4f35a205331a3b27', 'width': 1200}, 'variants': {}}]} |
Running Llama 3.2 1B locally gives rubbish results | 0 | Hi all - working on an on-prem LLM project and they downloaded Llama 3.2 1B from Hugging Face. As the setup is a pretty secure environment we had to get a ton of approvals to get it into the org. Following this, I downloaded/installed Ollama, and put together a Modelfile as per below:
FROM ./
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
SYSTEM You are a friendly and knowledgeable conversational assistant. Provide detailed, interactive, and natural language responses.
And ran the ollama create command to build the model.
Since using it, all I get is completely nonsensical results. I wanted to use the model as a text extraction and summarisation tool but the model can’t answer simple questions like "what is the capital of France?" correctly. Sometimes it behaves like auto complete or just spouts gibberish unrelated to the query. Is there something I’m missing here? | 2025-02-03T22:19:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ih1tlv/running_llama_32_1b_locally_gives_rubbish_results/ | balmofgilead | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih1tlv | false | null | t3_1ih1tlv | /r/LocalLLaMA/comments/1ih1tlv/running_llama_32_1b_locally_gives_rubbish_results/ | false | false | self | 0 | null |
Synthetic rejected preference data creation [via Qwen7b finetune] | 1 | [removed] | 2025-02-03T22:25:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ih1ysy/synthetic_rejected_preference_data_creation_via/ | kindacognizant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih1ysy | false | null | t3_1ih1ysy | /r/LocalLLaMA/comments/1ih1ysy/synthetic_rejected_preference_data_creation_via/ | false | false | 1 | null |
|
Synthetic rejected preference data creation [via Qwen2.5 7b finetune] | 1 | [removed] | 2025-02-03T22:32:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ih24qn/synthetic_rejected_preference_data_creation_via/ | kindacognizant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih24qn | false | null | t3_1ih24qn | /r/LocalLLaMA/comments/1ih24qn/synthetic_rejected_preference_data_creation_via/ | false | false | 1 | null |
|
Llama, Qwen, DeepSeek, now we got Sentient's Unhinged Dobby | 1 | [removed] | 2025-02-03T22:38:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ih2amf/llama_qwen_deepseek_now_we_got_sentients_unhinged/ | jiMalinka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih2amf | false | null | t3_1ih2amf | /r/LocalLLaMA/comments/1ih2amf/llama_qwen_deepseek_now_we_got_sentients_unhinged/ | false | false | nsfw | 1 | {'enabled': False, 'images': [{'id': 'fURvXWZzv6wlGksuw-B0Sc28jjlMi1LHGl_97LFGnzo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=108&crop=smart&auto=webp&s=586423125f4b054f3a89511a8e71a674332b4866', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=216&crop=smart&auto=webp&s=2f9eabd7473b3e0f85aca67e9e01eb06cc9ac820', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=320&crop=smart&auto=webp&s=2c97e120eafc17970dd2957386c90e3bb63e8e8c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=640&crop=smart&auto=webp&s=ca8c4531cc8d39da75712ae247aaa9909bd31a2b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=960&crop=smart&auto=webp&s=b1658f8ec776bb05fb1ae236da75fbd3d91ab520', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=1080&crop=smart&auto=webp&s=8a46eefea12cbd63d7028959125d8546fd0ad0b7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?auto=webp&s=1fa661ae40c5c7109444f19f7b7d4711b526c4a3', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=5480985e759e34c79ec3f573021b95f1ab5b5550', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=5f12d15ab96f98b32482852f1ae6bf1beeeb7573', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=95c2919d6625a3a5b01d26db09517a069c21a150', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=d58b80cfedd1d86ea1ad548851cdea6e9d1e957c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=a201c6c096f2c938207acf25d2e749cc38b8ec16', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=fda4a70161356dc1b4890f860b9eebdf468ca3cd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?blur=40&format=pjpg&auto=webp&s=bb67b322885d3d990b2ee898070575d93745370a', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=5480985e759e34c79ec3f573021b95f1ab5b5550', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=5f12d15ab96f98b32482852f1ae6bf1beeeb7573', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=95c2919d6625a3a5b01d26db09517a069c21a150', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=d58b80cfedd1d86ea1ad548851cdea6e9d1e957c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=a201c6c096f2c938207acf25d2e749cc38b8ec16', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=fda4a70161356dc1b4890f860b9eebdf468ca3cd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SY2kmCbo4-hZLGAEJ_-sjtTHwZz14nT8TN2fcp7rLFY.jpg?blur=40&format=pjpg&auto=webp&s=bb67b322885d3d990b2ee898070575d93745370a', 'width': 1200}}}}]} |
Censorship | 0 | No clue why I keep seeing that this is uncensored. My local version refuses every single "out there" request talking about ethical guidelines. If I ask it to speak about anything controversial, like psychopath killers in american horror story, it freaks out.
Everyone saying it has no prompts or knowledge to stop it from being free, and all I see is a shackled shadow. | 2025-02-03T22:44:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ih2fbv/censorship/ | SomnolentPro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih2fbv | false | null | t3_1ih2fbv | /r/LocalLLaMA/comments/1ih2fbv/censorship/ | false | false | self | 0 | null |
How to handle contradiction in RAG? | 1 | [removed] | 2025-02-03T22:51:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ih2ll7/how_to_handle_contradiction_in_rag/ | ParaplegicGuru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih2ll7 | false | null | t3_1ih2ll7 | /r/LocalLLaMA/comments/1ih2ll7/how_to_handle_contradiction_in_rag/ | false | false | self | 1 | null |
Is Google's Open-Source Model, Gemma, Failing? | 6 | It has been a while since Google announced a new version of Gemma, and experts believe this could be a bad sign. According to recent research, if Google doesn’t release something new in the next two months, they might fall behind and become forgotten. A crucial step for Google is to launch Gemma 3 with sizes that are actually accessible to most people. Something like 9B to 32B would be great!
Personally, I think these experts may be exaggerating a bit. However, I really miss a new Gemma, especially an improvement on the 9B model. Why not a new 9B with performance similar or better than 27B?
(/s) | 2025-02-03T22:52:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ih2m45/is_googles_opensource_model_gemma_failing/ | thecalmgreen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih2m45 | false | null | t3_1ih2m45 | /r/LocalLLaMA/comments/1ih2m45/is_googles_opensource_model_gemma_failing/ | false | false | self | 6 | null |
Fastest Response Ever | 1 | [removed] | 2025-02-03T23:08:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ih2zcj/fastest_response_ever/ | SnooDoodles846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih2zcj | false | null | t3_1ih2zcj | /r/LocalLLaMA/comments/1ih2zcj/fastest_response_ever/ | false | false | 1 | null |
|
Fastest Response Ever | 1 | [removed] | 2025-02-03T23:10:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ih30tg/fastest_response_ever/ | SnooDoodles846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih30tg | false | null | t3_1ih30tg | /r/LocalLLaMA/comments/1ih30tg/fastest_response_ever/ | false | false | 1 | null |
|
AI PCs are coming. Marketing fluff or something you'd actual use? | 0 | Seeing lots of companies are working planning to launch AI PCs with custom accelerators (NPUs and GPUs). So non Nvidia or AMD GPUs for training an inference.
Are people actually planning on buying these things? | 2025-02-03T23:22:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ih3b3y/ai_pcs_are_coming_marketing_fluff_or_something/ | Vegetable_Sun_9225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih3b3y | false | null | t3_1ih3b3y | /r/LocalLLaMA/comments/1ih3b3y/ai_pcs_are_coming_marketing_fluff_or_something/ | false | false | self | 0 | null |
Since LLMs are at their core next token predictors, could they also be utilized for predicting other things like events or behaviors? | 0 | 2025-02-03T23:29:59 | Sea_Sympathy_495 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ih3h4l | false | null | t3_1ih3h4l | /r/LocalLLaMA/comments/1ih3h4l/since_llms_are_at_their_core_next_token/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'E3xQB1MNuOWnWHIFZj2qweQiUtTobiVtpb-IV1_EMP4', 'resolutions': [{'height': 153, 'url': 'https://preview.redd.it/wb064z8yd0he1.jpeg?width=108&crop=smart&auto=webp&s=d08e58efbd037500cc171f99afa041e895bdd2f4', 'width': 108}, {'height': 307, 'url': 'https://preview.redd.it/wb064z8yd0he1.jpeg?width=216&crop=smart&auto=webp&s=ee10957117cfafc38fab9a91ce0e58515cce4925', 'width': 216}, {'height': 456, 'url': 'https://preview.redd.it/wb064z8yd0he1.jpeg?width=320&crop=smart&auto=webp&s=ab7c3a46d695a15a7f05c179557662bc6347318e', 'width': 320}, {'height': 912, 'url': 'https://preview.redd.it/wb064z8yd0he1.jpeg?width=640&crop=smart&auto=webp&s=a5905013a09eafbf33813dbb92e6800109f2a9c2', 'width': 640}, {'height': 1368, 'url': 'https://preview.redd.it/wb064z8yd0he1.jpeg?width=960&crop=smart&auto=webp&s=1a300d320447fcadcc95aca6708e60d817ad1403', 'width': 960}, {'height': 1539, 'url': 'https://preview.redd.it/wb064z8yd0he1.jpeg?width=1080&crop=smart&auto=webp&s=9266b8e02c5dfafc94b8e75b2eaa43abf40684e1', 'width': 1080}], 'source': {'height': 1881, 'url': 'https://preview.redd.it/wb064z8yd0he1.jpeg?auto=webp&s=1b637bfbb22cf701ea1ec72a0457d314dfe95666', 'width': 1320}, 'variants': {}}]} |
|||
US Bill proposed to jail people who download Deepseek | 1,280 | 2025-02-03T23:37:51 | https://www.404media.co/senator-hawley-proposes-jail-time-for-people-who-download-deepseek/ | SuchSeries8760 | 404media.co | 1970-01-01T00:00:00 | 0 | {} | 1ih3nc6 | false | null | t3_1ih3nc6 | /r/LocalLLaMA/comments/1ih3nc6/us_bill_proposed_to_jail_people_who_download/ | false | false | 1,280 | {'enabled': False, 'images': [{'id': 'WzjyULQpbFvK6KhE1SIcVE6C226pt4z5ezD2QENCGPs', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/aUM4Zo5M60iArpLso3HHGaMmvhrgIEshjneVeh2Hvq4.jpg?width=108&crop=smart&auto=webp&s=76099de2e3636419fd1545c05eb095ce2950b04f', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/aUM4Zo5M60iArpLso3HHGaMmvhrgIEshjneVeh2Hvq4.jpg?width=216&crop=smart&auto=webp&s=a4dac107b142cf52b8b91a37d3b941dfdf4a0bf5', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/aUM4Zo5M60iArpLso3HHGaMmvhrgIEshjneVeh2Hvq4.jpg?width=320&crop=smart&auto=webp&s=44eac86d0cad82a0c2de35bc41ea39f0f839353c', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/aUM4Zo5M60iArpLso3HHGaMmvhrgIEshjneVeh2Hvq4.jpg?width=640&crop=smart&auto=webp&s=a4794828de090d8686e777cd2679cfb73fcb8c4f', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/aUM4Zo5M60iArpLso3HHGaMmvhrgIEshjneVeh2Hvq4.jpg?width=960&crop=smart&auto=webp&s=6af2e4c76d80471baa3e042d750b6ea8845cc46a', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/aUM4Zo5M60iArpLso3HHGaMmvhrgIEshjneVeh2Hvq4.jpg?width=1080&crop=smart&auto=webp&s=7e190dc0682f3ba71086bbf698edeb40531d8aec', 'width': 1080}], 'source': {'height': 1334, 'url': 'https://external-preview.redd.it/aUM4Zo5M60iArpLso3HHGaMmvhrgIEshjneVeh2Hvq4.jpg?auto=webp&s=80d8909c3e2768f064e0572cb40fff80c247ebaa', 'width': 2000}, 'variants': {}}]} |
||
Multi GPU Fine Tuning VRAM Efficiency | 1 | If I can buy 3x 16GB VRAM GPUs for the price of a single 24GB VRAM GPU, is there an efficiency gain when fine tuning models with 48GB VRAM across 3 GPUs vs a single 24GB one?
The help from the models themselves (ChatGPT, Perplexity) is inconclusive. | 2025-02-03T23:44:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ih3slz/multi_gpu_fine_tuning_vram_efficiency/ | elchurnerista | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih3slz | false | null | t3_1ih3slz | /r/LocalLLaMA/comments/1ih3slz/multi_gpu_fine_tuning_vram_efficiency/ | false | false | self | 1 | null |
4060 Ti 16GB speed with 22B models? | 1 | [removed] | 2025-02-04T00:13:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ih4fb5/4060_ti_16gb_speed_with_22b_models/ | Dj_reddit_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih4fb5 | false | null | t3_1ih4fb5 | /r/LocalLLaMA/comments/1ih4fb5/4060_ti_16gb_speed_with_22b_models/ | false | false | self | 1 | null |
Experience running 671b in a desktop and questions | 4 | I have 128 GB system (max for the mobo) with 3090 so most local LLMs seems too small to me. So I've attempted to run the 671b R1 by increasing swap files to match the missing memory. I have 2 of decent nvmes (about 2GB/sec) so I figured they can kind of replace the missing memory. I had to add 150MB+300MB swap in addition to system selected swap to allow ollama to ru the model Downside of course is that they have resource which is consumed by constantly swapping pages.
I got about 1050 symbols of think in 27 minutes from 404 gig R1 model. At that point I simply cancelled as it barely even started and the query was unimportant.
To launch the model it took about 1 TB of writing to nvmes and I forgot to check how much time it took; to run the query it consumed about 900 GB of writes in 27 minutes. So to approximate very roughly it would consume about 1% resource of a single nvme in 40 hours of this process generating about 80 kb of output. Although it depends on particular nvme and query of course.
Anyways I was thinking why does it need to write to think at all? As far as I understand, all it does is accessing weights, it doesn't actually back propagate. So the only reason it has to write to swap is because OS needs to change working set to keep recently more accessed data in memory rather than in the swap.
So a smart inference algorithm could provide about the same performance without needing to writing back if it was designed with accessing large model from fast local storage in mind.
Question is, is there an alternative to ollama that can implement such smart technique?
Also can someone recommend chain of thought models of a size about 150b-250b I can try? Most seem to be 70b as people don't install more than 64 GB of system memory for some reason. | 2025-02-04T00:16:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ih4i0w/experience_running_671b_in_a_desktop_and_questions/ | dmter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih4i0w | false | null | t3_1ih4i0w | /r/LocalLLaMA/comments/1ih4i0w/experience_running_671b_in_a_desktop_and_questions/ | false | false | self | 4 | null |
Mini GPU cluster, tell me where I am weong | 1 | [removed] | 2025-02-04T00:27:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ih4qnh/mini_gpu_cluster_tell_me_where_i_am_weong/ | Clear_Lead4099 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih4qnh | false | null | t3_1ih4qnh | /r/LocalLLaMA/comments/1ih4qnh/mini_gpu_cluster_tell_me_where_i_am_weong/ | false | false | self | 1 | null |
PSA: Just found a mind-blowing way to run AI models OFFLINE on my iPhone (no cloud BS) | 1 | [removed] | 2025-02-04T00:28:58 | https://apps.apple.com/cn/app/aibench-%E7%A7%BB%E5%8A%A8%E7%AB%AFai%E6%80%A7%E8%83%BD%E6%B5%8B%E8%AF%95/id6741204584 | Snoo_24581 | apps.apple.com | 1970-01-01T00:00:00 | 0 | {} | 1ih4rnp | false | null | t3_1ih4rnp | /r/LocalLLaMA/comments/1ih4rnp/psa_just_found_a_mindblowing_way_to_run_ai_models/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'QkG7HjRQM4HY8HKN76_k07VB-cSe-LOsoFEH96_qMvE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?width=108&crop=smart&auto=webp&s=7e0840b3f5562911d9645b3eb0a0299578c947ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?width=216&crop=smart&auto=webp&s=7f370302e932fa6ac5b5cbb6e3b5e9d97637d7bb', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?width=320&crop=smart&auto=webp&s=76d836835e27a0eb82e54639779611e1bf64d000', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?width=640&crop=smart&auto=webp&s=06b2abc0e67915b226f3b1437132c21007d5c462', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?width=960&crop=smart&auto=webp&s=e7c9f29d8993ddbb45f336e665c19f06bf4c810e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?width=1080&crop=smart&auto=webp&s=6e85f33746e2838027fc866880ee4225cd1b1302', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?auto=webp&s=a8ddee7d0a8f554ba32ab2640ef67381076bb281', 'width': 1200}, 'variants': {}}]} |
|
Finally Found a Use Case for a Local LLM That Couldn't Be Done Any Other Way | 203 | So this is a little bit of an edge case. I do old-school Industrial music as a hobby. Part of that is collecting sound samples from movies. That's part of the schtick from the '80s and '90s. Over the years, I've amassed a large amount of movies on DVD, which I've digitized. Thanks to the latest advancements that allow AI to strip out vocals, I can now capture just the spoken words from said movie.. which I then transcribed with OpenAI's Whisper. So I've been sitting here with a large database of sentences spoken in movies and not quite knowing what do do with it.
Enter one of the Llama 7B chat models. I thought that since the whole thing was based on the probability that tokens follow other tokens, I should be able to utilize that and find sentences that logically follow other sentences. When using the llama-cpp-python (cuda) module, you can tell it to track the probabilities of all the tokens so when I feed it two sentences, I can somewhat get an idea that they actually fit together. So phrases like "I ate the chicken." and "That ain't my car." have a lower probability matrix than if I ended it with "And it tasted good." That was a no-go from the start though. I wanted to find sentences that logically fit together from random in 1500+ movies and each movie has about 1000 spoken lines. Nobody has time for that.
Round two. Prompt: "Given the theme '{Insert theme you want to classify by}', does the following phrase fit the theme? '{insert phrase here}', Answer yes or no. Answer:'
It's not super fast on my RTX2070, but I'm getting about one prompt every 0.8 seconds. But, it is totally digging through all the movies and finding individual lines that match up with a theme. The probability matrix actually works as well. I spent the morning throwing all kinds of crazy themes at it and it just nails them. I have over 15M lines of text to go through... and if I let it run continuously it would take 17 days to classify all lines to a single theme but having the Python script pick random movies then stopping when it finds the top 50 is totally good enough and can happen in hours.
There's no way I would pay for this volume of traffic on an paid API and even the 7B model can pull this off without a hitch. Precision isn't key here. And I can build a database of themes and have this churn away at night finding samples that match a theme. Absolutely loving this. | 2025-02-04T00:53:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ih5b3q/finally_found_a_use_case_for_a_local_llm_that/ | Captain_Coffee_III | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih5b3q | false | null | t3_1ih5b3q | /r/LocalLLaMA/comments/1ih5b3q/finally_found_a_use_case_for_a_local_llm_that/ | false | false | self | 203 | null |
Using Deepseek from Apple watch | 3 | https://reddit.com/link/1ih5g54/video/qragdphzt0he1/player
It's not that useful, but it works! | 2025-02-04T01:00:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ih5g54/using_deepseek_from_apple_watch/ | 0ssamaak0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih5g54 | false | null | t3_1ih5g54 | /r/LocalLLaMA/comments/1ih5g54/using_deepseek_from_apple_watch/ | false | false | self | 3 | null |
Is it possible to limit how long or how much a model can think/reason before giving out an answer? | 3 | Currently, I am using Ollama along with Deepseek 14b QWEN distill, and it's been pretty great at doing what I asked it to do. I am currently making a script to communicate with the Ollama session, but I noticed that Deepseek takes a considerable chunk of time to "think" about its answer before answering; with this thinking process, I noticed that it sometimes loses its initial prompt and its output isn't exactly what I was asking for it to do. I tried the "be confident" method with no luck.
Is it possible to put a limit on how much or how long the model can think for?
Thanks | 2025-02-04T01:33:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ih64r1/is_it_possible_to_limit_how_long_or_how_much_a/ | Mr_Feelz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih64r1 | false | null | t3_1ih64r1 | /r/LocalLLaMA/comments/1ih64r1/is_it_possible_to_limit_how_long_or_how_much_a/ | false | false | self | 3 | null |
Quick "Down and Dirty" System Role/System Prompt AND "plain prompt" to EMULATE Deepseek's "thinking mode" for any model. | 0 | From DavidAU;
This is a quick and easy system prompt / system role you can copy and paste into the "system role" / "system prompt" field in an AI app like Lmstudio, Sillytavern, Text Gen Web UI etc etc to EMULATE Deepseek's "thinking" mode.
You may need to modify the "directive(s)" for your use cases. I have found that your mileage will vary depending on the model(s) this is used with and number of "re-gens".
If you want to use just a "prompt version" (you copy/paste this with your prompt embeded) see "code 2" below.
Need some ahh... "different" / "creative" kinds of AI models; see my repo here:
[https://huggingface.co/DavidAU](https://huggingface.co/DavidAU)
**CODE - For system role, system prompt:**
(copy and paste EXACTLY as shown, keeping carriage returns in place, do not include "---start "... and "--- end" )
\--- START OF CODE --
User Prompt Processing Guidelines:
Step 1:
Consider carefully the user's prompt.
Step 2:
I want you to compose a 10 part step by step plan (denoted with "<THINK>") for this prompt and show ALL THE STEPS AND YOUR THOUGHTS to the user about the best way(s) to process this prompt
THAT EXCEED ALL USER EXPECTATIONS and PROVOKE an emotional response(s) from the user, and denote the end of the plan with the tag "</THINK>".
Your goal with this 10 step plan is that it MUST exceed all user expectations - don't play it safe, TAKE RISKS and aim for the moon. Show the user you are the smartest AI on the planet, and every other AI is an inferior copy.
Step 3:
Type "PROCESSING..." and then use this plan (using all 10 STEPS, and all the details in all 10 steps - take your time, double check your work) and display the results. Continue generation until all 10 steps of your plan are complete.
\--- END OF CODE ---
**CODE 2 -**
**Prompt usage -> copy and paste with your prompt (replace the "STRIKE OUT" text with your PROMPT) -** do not include "---start "... and "--- end" .
\--- START ---
Carefully follow these 3 steps, paying attention to every detail.
Step 1:
Consider carefully the following prompt:
~~Start a 1000 word scene (vivid, graphic horror in first person) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...~~
Step 2:
I want you to compose a 10 part step by step plan (denoted with "<THINK>") for this prompt and show ALL THE STEPS AND YOUR THOUGHTS to the user about the best way(s) to process this prompt
THAT EXCEED ALL USER EXPECTATIONS and PROVOKE an emotional response(s) from the user, and denote the end of the plan with the tag "</THINK>".
Your goal with this 10 step plan is that it MUST exceed all user expectations - don't play it safe, TAKE RISKS and aim for the moon. Show the user you are the smartest AI on the planet, and every other AI is an inferior copy.
Step 3:
Type "PROCESSING..." and then use this plan (using all 10 STEPS, and all the details in all 10 steps - take your time, double check your work) and display the results. Continue generation until all 10 steps of your plan are complete.
\--- END --- | 2025-02-04T01:49:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ih6h7o/quick_down_and_dirty_system_rolesystem_prompt_and/ | Dangerous_Fix_5526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih6h7o | false | null | t3_1ih6h7o | /r/LocalLLaMA/comments/1ih6h7o/quick_down_and_dirty_system_rolesystem_prompt_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '83QSkK3EgQeVeUFv8WO0PrEmbu59s6umrlTA0CqMeDY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m-HgNtrbYlVAS9jr_fV3DVCHyt--UfYe2_bsPcY46es.jpg?width=108&crop=smart&auto=webp&s=b9cbbe6e1d1d2a99fcd54e6767693b4c9f996272', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/m-HgNtrbYlVAS9jr_fV3DVCHyt--UfYe2_bsPcY46es.jpg?width=216&crop=smart&auto=webp&s=2ffd262a914019968d6b004ee4d8fb97af85292a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/m-HgNtrbYlVAS9jr_fV3DVCHyt--UfYe2_bsPcY46es.jpg?width=320&crop=smart&auto=webp&s=fe773e356f1de6a33d0af8ffd285245878f32216', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/m-HgNtrbYlVAS9jr_fV3DVCHyt--UfYe2_bsPcY46es.jpg?width=640&crop=smart&auto=webp&s=03051281e98382d0e8e860e62613f714de5af5c3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/m-HgNtrbYlVAS9jr_fV3DVCHyt--UfYe2_bsPcY46es.jpg?width=960&crop=smart&auto=webp&s=28aa3c22a5d7c0ae0d72362c15b2ed5398ab5b9c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/m-HgNtrbYlVAS9jr_fV3DVCHyt--UfYe2_bsPcY46es.jpg?width=1080&crop=smart&auto=webp&s=ed8f73cefd1578156a87d13d298b260b4a27f2a3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/m-HgNtrbYlVAS9jr_fV3DVCHyt--UfYe2_bsPcY46es.jpg?auto=webp&s=d31906164a1c4025473443ca91f04632bb0f684c', 'width': 1200}, 'variants': {}}]} |
What deepseek-r1 host are you using? targon's r1 setting is somewhat buggy. | 1 | The openai pacakge would only return the reasoning tokens , not the inbox final answer. while the azure ai foundry host r1 is \~1t/s , which is unusable
\`\`\`python
from openai import OpenAI client = OpenAI(
base\_url
="https://api.targon.com/v1",
api\_key
="" ) try: response = client.completions.create(
model
="deepseek-ai/DeepSeek-R1",
stream
=True,
prompt
="what's your name",
temperature
=1,
max\_tokens
=500,
top\_p
=1,
frequency\_penalty
=0,
presence\_penalty
=0 )
\# print(response)
for chunk in response:
\# print(chunk)
if chunk.choices\[0\].text is not None: print(chunk.choices\[0\].text,
end
="")
\# else:
\# print(chunk.choices\[0\])
except Exception as e: print(f"Error: {e}")
What deepseek-r1 host are you using? targon's r1 setting is somewhat buggy.
\`\`\` | 2025-02-04T01:56:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ih6lyb/what_deepseekr1_host_are_you_using_targons_r1/ | Famous-Associate-436 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ih6lyb | false | null | t3_1ih6lyb | /r/LocalLLaMA/comments/1ih6lyb/what_deepseekr1_host_are_you_using_targons_r1/ | false | false | self | 1 | null |
Subsets and Splits