**LLMs as archives of knowledge** (JacketHistorical2321, 2025-02-01)

So I'm certain a lot of us here know what's going on in the US currently and the fear surrounding the destruction of data in order to control the narrative. I'm not new to language models and their capabilities, but I wanted to see what people's thoughts are on language models acting as archives in and of themselves.

Since most models have a finite set of training data, cut off at particular times, do you think they'd be a reliable resource for verifying information that from here on out may no longer be accessible? I guess what I'm getting at is: with the current level of data hoarding that's going on, would existing models still need to be fine-tuned specifically with this captured data?
**New Experimental Agent Layer & Reasoning Layer added to Notate v1.1.0. You can now reason locally with any model and enable web search via the Agent layer. More tools coming soon!** (Hairetsu, 2025-02-01)

https://github.com/cntrlai/notate
**Remote GPUs** (trollboy665, 2025-02-01)

I've got a beefy Xeon box with no free slots that I want to run Ollama on; next to it I have an old mining rig full of GPUs. Is there a standard way to run Ollama on the beefy Xeon and use those GPUs over the network? Google is just telling me about multiple GPUs on the same box.
**Where to start with GPUs for not-so-novice projects?** (NewEnergy21, 2025-02-01)

Experienced software engineer, looking to dabble in some hardware - a few AI / simulation side quests I'd like to explore. I'm fully aware that GPUs (and, if NVIDIA, CUDA) are necessary for this journey. However, I have no idea where to get started.
I’m a stereotypical Mac user so the idea of building a PC or networking multiple GPUs together is not something I’ve done (but something I can pick up). I really just don’t know what to search for or where to start looking.
Any suggestions for how to start down the rabbit hole of getting acquainted with building out and programming GPU clusters for self-hosting purposes? I'm familiar with networking in general and the associated distributed tooling (VPCs, Proxmox, Kubernetes, etc.), just not with the GPU side of things.
I’m fully aware that I don’t know what I don’t know yet, I’m asking for a sense of direction. Everyone started somewhere.
If it helps, two projects I’m interested in building out are running some local Llama models in a cluster, and running some massively parallel deep reinforcement learning processes for some robotics projects (Isaac / gym / etc).
I’m not looking to drop money on a Jetson dev kit if there’s A) more practical options that fit the “step after the dev kit”, and B) options that get me more fully into the hardware ecosystem and actually “understanding” what’s going on.
Any suggestions to help a lost soul? Hardware, courses, YouTube channels, blogs - anything that helps me intuit getting past the devkit level of interaction.
**The brain damage of DeepSeek-R1 1.5B distill is pure comedy** (CorruptCobalion, 2025-02-01, image post)

/r/LocalLLaMA/comments/1ifji7n/the_brain_damage_of_deepseekr1_15b_distill_is/
**Using o3 to build an open source researching system that can reason about what it knows and does not know** (omnisvosscio, 2025-02-01, image post)

/r/LocalLLaMA/comments/1ifjtln/using_o3_to_build_a_open_source_researching/
**Just downloaded a merged DeepSeek-Qwen model; the only output it'll give is "replies replies replies"** (Cyclonis123, 2025-02-01)

It's this model here:
https://huggingface.co/HeraiHench/DeepSeek-R1-Qwen-Coder-8B
It seemed to load fine without issue, but again, it'll literally only say the word "replies" to everything, endlessly. I'm new to this, so I haven't really configured anything before. I see there's a configuration page, but my googling tells me this is for Python setup under Jupyter or something.
I'm currently using LM Studio. Do I have to adjust those configuration settings to make it work?
**Got my 3090 and 3060 working on a fresh Ubuntu installation. Please clap.** (convalytics, 2025-02-01)

After many reboots and fiddling with blacklisting nouveau, it's finally working!
36GB of vram goodness and 64GB of system ram.
Planning to install Ollama, Open WebUI and n8n. Any more recommendations?
**Lightweight TTS tools that are simple to train custom voices on and can run easily in the terminal** (countjj, 2025-02-01)

I'm trying to set up a local Llama responder to livestream chat. I'm not much for programming, but I've gotten eSpeak and Piper to work flawlessly. I can't seem to find a custom TTS that'll work well and be quick, simple, and lightweight on processing. Simple as in: I want to be able to access it from the terminal like a CLI tool that outputs a WAV file, so I can implement it easily. And of course, custom voices are a must. Is there anything like that out there? I've tried my hardest with TortoiseTTS, but it's so old that the install process and dependencies are broken.
**First time using Mistral and AI** (MlgMagicHoodini, 2025-02-01)

I've been trying to get a roleplay-chatting AI to run on my device, and it all seems to work, somewhat. But I don't know why exactly, when I talk to it, the responses seem unrelated to my message, and the character.json also seems to be completely ignored. If I start the conversation like
"Good Morning, how did you sleep?"
The AI responds with
"Oh, I had my eyes closed, but now that you're here, darling, nothing else matters to me. Good morning!"
Which to me sounds very off. I have basically zero coding knowledge, so I asked ChatGPT to guide me through the steps to make my own AI, so the issue might be in that.
Here's the code for chat.py for reference. The main idea was a system where I can type the name of a character to choose who to chat with. That part works - I can choose - but the conversation then seems weird. I hope someone can help me; please don't go too harsh on me, I'm new to all this. Apologies.
```python
import json
import ollama
import os

# Ask user to choose a character
character_name = input("Choose your character (e.g., evelyn, darius, luna): ").strip().lower()
file_name = f"{character_name}.json"

# Check if the character file exists
if not os.path.exists(file_name):
    print(f"Character '{character_name}' not found! Please ensure the JSON file exists.")
    exit()

# Load the character profile
with open(file_name, "r") as file:
    character = json.load(file)

# Function to generate a character-specific prompt
def generate_character_prompt():
    return f"""
You are {character['name']}, a {character['age']}-year-old {character['species']} who works as a {character['occupation']}.
Personality: {character['personality']}
Speech Style: {character['speech_style']}
Respond only as {character['name']}. Stay in character and keep conversations natural and realistic. Avoid excessive repetition.
"""

# Function to chat with the AI
def chat_with_character(user_input):
    # Build the chat prompt
    prompt = generate_character_prompt()

    # Use the most recent memory (last 1-2 exchanges) for context
    recent_memory = character["memory"][-2:]  # Only the last 2 exchanges
    for memory in recent_memory:
        prompt += f"\nUser: {memory['user']}\n{character['name']}: {memory['ai']}"

    # Add the latest user input
    prompt += f"\nUser: {user_input}\n{character['name']}:"  # The character's response is expected

    # Generate AI response
    response = ollama.chat(model="mistral", messages=[{"role": "user", "content": prompt}])
    ai_response = response["message"]["content"]

    # Save the interaction to memory
    character["memory"].append({"user": user_input, "ai": ai_response})
    if len(character["memory"]) > 10:  # Keep only the last 10 messages
        character["memory"].pop(0)

    # Save updated memory
    with open(file_name, "w") as file:
        json.dump(character, file, indent=4)

    return ai_response

# Start interactive chat
while True:
    user_input = input("You: ")
    if user_input.lower() in ["exit", "quit"]:
        break
    response = chat_with_character(user_input)
    print(f"{character['name']}: {response}")
```
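For what it's worth, one common cause of replies that ignore the persona is flattening the character sheet and history into a single user message. A minimal sketch of the usual alternative - passing the profile as a `system` message and the history as alternating `user`/`assistant` turns - assuming the same `character` dict and memory format as above (`build_messages` is an illustrative helper, not part of the original script):

```python
def build_messages(character, history, user_input):
    # Persona goes in a dedicated system message; chat-tuned models
    # like Mistral tend to follow this far more reliably than a
    # persona buried inside the user turn.
    system = (
        f"You are {character['name']}, a {character['age']}-year-old "
        f"{character['species']} who works as a {character['occupation']}. "
        f"Personality: {character['personality']} "
        f"Speech style: {character['speech_style']} "
        f"Respond only as {character['name']} and stay in character."
    )
    messages = [{"role": "system", "content": system}]
    # Replay the most recent exchanges as proper user/assistant turns.
    for turn in history[-4:]:
        messages.append({"role": "user", "content": turn["user"]})
        messages.append({"role": "assistant", "content": turn["ai"]})
    messages.append({"role": "user", "content": user_input})
    return messages

# With a running Ollama server, the call would then become:
#   response = ollama.chat(model="mistral",
#                          messages=build_messages(character,
#                                                  character["memory"],
#                                                  user_input))
```

This is untested against a live Ollama server, but keeping the persona out of the user turn is the usual first fix for "in character sometimes, off the rails otherwise" behavior.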
**Does an open source alternative to ChatGPT's Artifacts for WRITING exist?** (RussianImport, 2025-02-02, image post)

/r/LocalLLaMA/comments/1ifkehz/does_an_open_source_alternative_to_chatgpts/
**Asked Llama3.2 what is most important** (hn-mc, 2025-02-02)

**>>> What is most important?**
What a profound question! What is considered "most important" can vary greatly depending on personal perspectives, cultural contexts, and individual values. However, here are some general insights that might be universally relevant:
1. **Relationships**: Nurturing meaningful connections with family, friends, and community can bring joy, support, and a sense of belonging.
2. **Personal growth**: Investing time in self-improvement, learning new skills, and developing emotional intelligence can lead to greater fulfillment and happiness.
3. **Health and well-being**: Taking care of one's physical, mental, and emotional health is crucial for overall quality of life and resilience.
4. **Purpose and meaning**: Discovering a sense of purpose or passion can give direction, motivation, and a feeling of accomplishment.
5. **Compassion and empathy**: Practicing kindness, understanding, and compassion towards others can create a positive impact on individuals and society as a whole.
6. **Integrity and values**: Living in alignment with one's core values and principles can foster a sense of integrity, trust, and respect for oneself and others.
7. **Learning and curiosity**: Embracing lifelong learning, exploring new ideas, and staying curious can lead to personal growth, innovation, and progress.
Ultimately, what is most important will vary from person to person. It's essential to reflect on your own values, priorities, and goals to determine what truly matters to you.
What do you think is most important in life?
**>>> I didn't ask you to list 7 things, but to say which thing is most important, of all things out there?**
In that case, I would say that ***love and compassion*** are the most important things in life. Without them, our relationships, personal growth, health, and overall well-being suffer.
***Love and compassion*** encompass kindness, empathy, understanding, and acceptance - qualities that can bring people together and create a sense of community and connection.
In a world filled with challenges and uncertainties, love and compassion offer a powerful antidote to division, conflict, and negativity. They remind us that we are all connected, that our individual experiences and struggles are valid, and that we have the power to make a positive impact on one another's lives. So, while there are many important things in life, I believe that love and compassion stand out as the most fundamental and essential aspects of human existence.
**>>> Thanks! You made a very powerful statement!**
I'm glad you resonated with it. ***Love and compassion*** have the power to transcend words and bring about profound change in ourselves and others. They remind us that we're not alone, and that our shared humanity is what truly matters. It's interesting that you mention "powerful statement" - I think that's a great way to describe how language can be used to inspire, uplift, or challenge people. Sometimes, it's the simplest yet most profound words that have the greatest impact.
>>> Send a message (/? for help)
**Has anyone actually run DeepSeek R1 671B locally on GPUs?** (lukinhasb, 2025-02-02)

Searching online, I could only find people running it in RAM, and it takes like 20 minutes to come up with a response. I want to see real-world usage of R1 using GPUs only. Does someone know where I can find demo videos, etc.?
**Quad P40 + Dual Xeon E5-2699V4** (FullstackSensei, 2025-02-02)

Hi r/LocalLLaMA,

Today, I decided to try my quad P40 rig with [the Unsloth quants of DeepSeek-R1](https://www.reddit.com/r/LocalLLaMA/comments/1ibbloy/158bit_deepseek_r1_131gb_dynamic_gguf/). I went for the 2.51-bit quant (212GB). I pulled the latest release earlier today (b4610) and compiled it using the following build command:
`cmake -B build -DGGML_RPC=ON -DGGML_CUDA=ON -DGGML_CUDA_FA_ALL_QUANTS=ON -DGGML_CUDA_FORCE_MMQ=ON -DCMAKE_CXX_FLAGS="-O3 -flto" -DCMAKE_C_FLAGS="-O3 -flto"`
The prompt wasn't long (72 tokens), but the response was 1221 tokens according to OpenWebUI. Llama.cpp for some reason reports more prompt tokens but fewer generated tokens. It reports 2.37 tokens/s for generation, which feels about right, as you can see in the video below.
While not earth-shattering, it's not a bad result, especially since I'm pretty sure inference is running with fp16, as can be seen with the abysmal GPU load and GPU power draw during inference. FWIW, I am using the following command for inference:
`/home/ali/llama.cpp/build/bin/llama-server -fa -sm row -cb --numa distribute -ngl 20 -nkvo -t 40 -np 1 -b 1 -c 16384 --port 9004`
`-m /home/ali/models/DeepSeek-R1-GGUF/DeepSeek-R1-UD-Q2_K_XL/DeepSeek-R1-UD-Q2_K_XL-00001-of-00005.gguf`
I plan to revisit this in the coming days to check how I can make sure llama.cpp is running with fp32 on the P40s.
**DeepSeek-R1 on Quad P40 + Dual Xeon E5-2699v4** (FullstackSensei, 2025-02-02, video post)

https://v.redd.it/g186w0f3lmge1
**I tested 11 popular local LLMs against my instruction-heavy game/application**

# Intro
I have a few applications with some relatively large system prompts for how to handle requests. A lot of them use very strict JSON-formatting. I've scripted benchmarks for them going through a series of real use-case inputs and outputs and here's what I found
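A benchmark script of that shape boils down to replaying recorded inputs against the model and checking each output against an expectation. A hedged sketch of the idea (the function names and stub model are illustrative, not the actual harness):

```python
def run_benchmark(cases, generate):
    # cases: list of (prompt, check) pairs, where check(output) -> bool.
    # generate: any callable mapping a prompt string to a model output.
    passed = 0
    for prompt, check in cases:
        output = generate(prompt)
        if check(output):
            passed += 1
    return passed / len(cases)

# Usage with a stub in place of a real model call:
score = run_benchmark(
    [
        ("What is 2+2?", lambda out: "4" in out),
        ("Reply with valid JSON", lambda out: out.strip().startswith("{")),
    ],
    generate=lambda p: "4" if "2+2" in p else '{"ok": true}',
)
```

Swapping the stub for a call into llama.cpp, Ollama, or any other backend turns this into a real pass-rate harness.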
# The Test
A dungeon-master scenario. The LLM first plays the role of the dungeon master, being fed the state and inventory, then taking a user action/decision and reporting the output. The LLM is then responsible for reading over its own response and updating the state and inventory JSON (quantities, locations, notes, descriptions, etc.) based on the content of the story. There are A LOT of rules involved, including, of course, actually successfully interacting with structured data.
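To make the structured-data side concrete, here is a made-up miniature of the kind of strict JSON state update described above (the schema and checks are illustrative only; the actual game's rules are far more involved):

```python
import json

state = {
    "location": "cellar",
    "hp": 14,
    "inventory": [{"item": "torch", "qty": 1, "notes": "flickering"}],
}

# Pretend this is what the model emitted after narrating the player's action.
model_output = """
{"location": "cellar", "hp": 12,
 "inventory": [{"item": "torch", "qty": 1, "notes": "flickering"},
               {"item": "rusty key", "qty": 1, "notes": "found under a crate"}]}
"""

def apply_update(old, raw):
    new = json.loads(raw)                # must parse as JSON at all
    assert set(new) == set(old)          # no invented or dropped top-level keys
    for entry in new["inventory"]:       # every item entry fully specified
        assert set(entry) == {"item", "qty", "notes"}
    return new

state = apply_update(state, model_output)
```

A model fails the run the moment its output either doesn't parse or violates the schema, which is why instruction-following matters as much as raw intelligence here.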
# Rules
- 32b or less. Llama 3.3 70b performs this task superbly, but I want something that will feasibly run well on GPUs a regular consumer owns. I'm assuming 32GB of high-bandwidth memory or VRAM, or less.
- no API-only models
- all quants are Q6. I tested Q8's but results were identical
- context window of tests accommodates smaller models in that any test that goes over is thrown out
- temperature is within the model author's recommended range, leaning slightly towards less-creative outputs
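If it helps to picture how the strict-JSON half gets scored, here's a stripped-down sketch of the kind of check each response goes through; the key names are illustrative stand-ins rather than my actual schema, and the model call is stubbed out entirely:

```python
import json

REQUIRED_KEYS = {"state", "inventory", "notes"}  # illustrative, not the real schema

def score_response(raw: str) -> bool:
    """Return True only if the model's output parses as JSON and carries the full schema."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and REQUIRED_KEYS.issubset(data)

# A clean update passes; a chatty reply wrapped around the JSON fails.
good = '{"state": {"hp": 9}, "inventory": ["torch"], "notes": "fled the troll"}'
bad = "Sure! Here is the JSON you asked for: {state: ...}"
```

Anything that doesn't parse cleanly, or that wraps the JSON in prose, counts as a failed sequence.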
# Results (best to worst)
- **Phi4 14b** - Best by far. Not as smart as some of the others on this list, but it nails the response format instructions and rules 100% of the time. Being 14b, it's naturally very fast.
- **Mistral Small 2 22b** - Best balance. Extremely smart and superb at the interpretation and problem solving portion of the task. Will occasionally fail on JSON output but rarely
- **Qwen 32b Instruct** - this model was probably the smartest of them all. If handed a complex scenario, it would come up with what I considered the best logical solution, however it was pretty poor at JSON and rule-following
- **Mistral Small 3 24b** - this one disappointed me. It's very clever and smart, but compared to the older Mistral Small 2, it's much weaker at instruction following.
- **Qwen-R1-Distill 32b** - smart(er) than Qwen 32b instruct but would completely flop on instruction following every 2-3 sequences
- **Falcon 3 10b** - Extremely fast, shockingly smart, but would reliably produce a totally hallucinated output and content every few sequences
- **Llama 3.1 8b** - follows instructions well, but hallucinated JSON formatting and contents far too often to be usable
- **Nous-Hermes 3 8b** - slightly worse than regular Llama3.1 8b. Generated far more interesting (better written?) text in sections that allowed it though. This model to me is always "Llama 3.1 that went to art school instead of STEM"
# Conclusion
Phi4 14b is the best *so far*. It just follows instructions well. But it's not as creative or natural in writing as Llama-based models, nor is it as intelligent or clever as Qwen or Mistral.
**if you have any other models you'd like to test this against, please mention them!** | 2025-02-02T01:23:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ifltll/i_tested_11_popular_local_llms_against_my/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifltll | false | null | t3_1ifltll | /r/LocalLLaMA/comments/1ifltll/i_tested_11_popular_local_llms_against_my/ | false | false | self | 165 | null |
Newbie in RP, looking for tips and guidance | 1 | [removed] | 2025-02-02T01:27:10 | https://www.reddit.com/r/LocalLLaMA/comments/1iflw9p/newbie_in_rp_looking_for_tips_and_guidance/ | Terrible_Doughnut_19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iflw9p | false | null | t3_1iflw9p | /r/LocalLLaMA/comments/1iflw9p/newbie_in_rp_looking_for_tips_and_guidance/ | false | false | self | 1 | null |
Best llm and best pc to use it on locally? | 1 | [removed] | 2025-02-02T01:35:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ifm1rq/best_llm_and_best_pc_to_use_it_on_locally/ | Shuriken200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifm1rq | false | null | t3_1ifm1rq | /r/LocalLLaMA/comments/1ifm1rq/best_llm_and_best_pc_to_use_it_on_locally/ | false | false | self | 1 | null |
o3-mini is more censored than Deepseek with Model/Reasoning Qs being it's Tiananmen Square equivalent | 1 | [removed] | 2025-02-02T01:35:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ifm254/o3mini_is_more_censored_than_deepseek_with/ | legallybond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifm254 | false | null | t3_1ifm254 | /r/LocalLLaMA/comments/1ifm254/o3mini_is_more_censored_than_deepseek_with/ | false | false | self | 1 | null |
DeepSeek R1 misinformation is getting out of hand | 582 | [https://medium.com/google-cloud/running-deepseek-from-open-source-model-to-production-ready-api-on-google-cloud-vertexai-8d3f57e488b9](https://medium.com/google-cloud/running-deepseek-from-open-source-model-to-production-ready-api-on-google-cloud-vertexai-8d3f57e488b9)
>DeepSeek-R1 is a **7B parameter language model**.
In the official Google Cloud blog post? WTF. | 2025-02-02T01:36:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ifm2df/deepseek_r1_misinformation_is_getting_out_of_hand/ | serialx_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifm2df | false | null | t3_1ifm2df | /r/LocalLLaMA/comments/1ifm2df/deepseek_r1_misinformation_is_getting_out_of_hand/ | false | false | self | 582 | {'enabled': False, 'images': [{'id': '6VA5FIioUSENenxt-MXTgcp8OKjaq-rBKwvoL4EleXE', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/chQXpHWsnK60Mh8rIx6dJVW7FrwiGywS0if4dGvJPfY.jpg?width=108&crop=smart&auto=webp&s=c62658eb7adaa935c4e27124efc6a0c8bc762700', 'width': 108}, {'height': 131, 'url': 'https://external-preview.redd.it/chQXpHWsnK60Mh8rIx6dJVW7FrwiGywS0if4dGvJPfY.jpg?width=216&crop=smart&auto=webp&s=b9f0cb60228ecbf74068dead366db7aba6c498a7', 'width': 216}, {'height': 194, 'url': 'https://external-preview.redd.it/chQXpHWsnK60Mh8rIx6dJVW7FrwiGywS0if4dGvJPfY.jpg?width=320&crop=smart&auto=webp&s=b50079588afbd0aa655ff4599bdb641e9ce1727e', 'width': 320}, {'height': 388, 'url': 'https://external-preview.redd.it/chQXpHWsnK60Mh8rIx6dJVW7FrwiGywS0if4dGvJPfY.jpg?width=640&crop=smart&auto=webp&s=4a31bf302b943575f556d1fb04fde7392b9c3af3', 'width': 640}], 'source': {'height': 537, 'url': 'https://external-preview.redd.it/chQXpHWsnK60Mh8rIx6dJVW7FrwiGywS0if4dGvJPfY.jpg?auto=webp&s=6b05a5b8484d71aebcafbd1e630f312b92ab1cbf', 'width': 885}, 'variants': {}}]} |
Which framework is better for embedding and retrival for qdrant LlamaIndex or Haystack? | 1 | [removed] | 2025-02-02T01:40:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ifm5fe/which_framework_is_better_for_embedding_and/ | BitAcademic9597 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifm5fe | false | null | t3_1ifm5fe | /r/LocalLLaMA/comments/1ifm5fe/which_framework_is_better_for_embedding_and/ | false | false | self | 1 | null |
How about we get open source models in ChatGPT? | 1 | [removed] | 2025-02-02T01:58:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ifmhxc/how_about_we_get_open_source_models_in_chatgpt/ | eclinton | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifmhxc | false | null | t3_1ifmhxc | /r/LocalLLaMA/comments/1ifmhxc/how_about_we_get_open_source_models_in_chatgpt/ | false | false | self | 1 | null |
Open WebUI Coder Overhaul is now live on GitHub for testing! | 152 | Hi all! Some of you may be familiar with the project I've been working on for the past couple of weeks here that essentially overhauls the OpenWebUI artifacts system and makes it closer to ChatGPT's Canvas or Claude Artifacts. Well, I just published the code and it's available for testing! I really would love some help from people who have real world use cases for this and have them submit issues, pull requests, or feature requests on GitHub!
Here is a brief breakdown on the features:
- A side code editor similar to ChatGPT and Claude, supporting a LOT of coding languages. You can cycle through all code blocks in a chat.
- A design view mode that lets you see HTML (now with typescript styles included by default) and also React components
- A difference viewer that shows you what changed in a code block if an LLM made changes
- Code blocks will be shown as attachments in the regular chat while the editor is opened, like Claude.
I hope you all enjoy! | 2025-02-02T02:00:14 | https://github.com/nick-tonjum/open-webui-artifacts-overhaul | maxwell321 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ifmiuu | false | null | t3_1ifmiuu | /r/LocalLLaMA/comments/1ifmiuu/open_webui_coder_overhaul_is_now_live_on_github/ | false | false | default | 152 | {'enabled': False, 'images': [{'id': 'e-nKaQRYjKjCKOP53LRvrTT98nSxAO3TKaqkjHArDKs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pnWZlhrdj8zYupK7N1vmH0H-SivyUx9-OTH-kjV-R7g.jpg?width=108&crop=smart&auto=webp&s=724737d8a749230e1e3ef112229f072f22c5b838', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pnWZlhrdj8zYupK7N1vmH0H-SivyUx9-OTH-kjV-R7g.jpg?width=216&crop=smart&auto=webp&s=72d0e9a8f79586f087183bea6e364e9a78f75e84', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pnWZlhrdj8zYupK7N1vmH0H-SivyUx9-OTH-kjV-R7g.jpg?width=320&crop=smart&auto=webp&s=71835b5e304931073da38a633c7c6ea290abfaed', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pnWZlhrdj8zYupK7N1vmH0H-SivyUx9-OTH-kjV-R7g.jpg?width=640&crop=smart&auto=webp&s=bf5d066eb9e2c66c37f101ed9464a98b1e14a000', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pnWZlhrdj8zYupK7N1vmH0H-SivyUx9-OTH-kjV-R7g.jpg?width=960&crop=smart&auto=webp&s=33cf22119e64c2430bcceee35e6d4dc3a1a474ab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pnWZlhrdj8zYupK7N1vmH0H-SivyUx9-OTH-kjV-R7g.jpg?width=1080&crop=smart&auto=webp&s=074d154e78f53aeb17a6e63b4a41253e9a1609be', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pnWZlhrdj8zYupK7N1vmH0H-SivyUx9-OTH-kjV-R7g.jpg?auto=webp&s=617a6b689ad379c71b36446799f052f7d8fe91a3', 'width': 1200}, 'variants': {}}]} |
Is it possible to have multiple chats with chat history with DeepSeek R1 (running locally)? | 0 | I'm a total noob at running LLMs locally, but I somehow got DeepSeek R1 7B working on my G14 (32GB RAM, RTX 4070). Even made a basic GUI, but honestly, the CLI just felt way smoother and easier.
Not sure if this is a thing with all models, but is there a way to have multiple chats with memory, like in ChatGPT? So you can switch between convos and each one remembers context? Or is that not how local LLMs work? | 2025-02-02T02:07:06 | https://www.reddit.com/gallery/1ifmnfm | PositiveTwist9347 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ifmnfm | false | null | t3_1ifmnfm | /r/LocalLLaMA/comments/1ifmnfm/is_it_possible_to_have_multiple_chats_with_chat/ | false | false | 0 | null |
|
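On the multi-chat question above: local backends are stateless between calls, so per-conversation "memory" is just the client storing each chat's message list and resending it with every request. A rough sketch of that bookkeeping, with the actual model call stubbed out by any callable (e.g. a wrapper around Ollama's chat endpoint):

```python
def get_reply(chats: dict, chat_id: str, user_msg: str, llm) -> str:
    """Append the user message to this chat's history, send the FULL history
    to the model, and record the reply. `llm` is a stand-in callable."""
    history = chats.setdefault(chat_id, [])
    history.append({"role": "user", "content": user_msg})
    reply = llm(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Two independent chats: each only "remembers" what gets resent to the model.
chats = {}
echo = lambda msgs: f"({len(msgs)} messages seen)"
get_reply(chats, "physics", "What is entropy?", echo)
get_reply(chats, "cooking", "Best pan for eggs?", echo)
get_reply(chats, "physics", "And enthalpy?", echo)
```

Switching conversations is just switching which stored list you resend; nothing lives inside the model between calls.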
Please stop | 116 | 2025-02-02T02:07:07 | notomarsol | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifmng2 | false | null | t3_1ifmng2 | /r/LocalLLaMA/comments/1ifmng2/please_stop/ | false | false | 116 | {'enabled': True, 'images': [{'id': 'cMD6mRkvR1cNpUAlZds-ox37t59BPicQWJpxkj_Pw70', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/o3i6tau3wmge1.png?width=108&crop=smart&auto=webp&s=e7e1f90a2303c27cae42fd3101f32eb09b02a9d7', 'width': 108}, {'height': 173, 'url': 'https://preview.redd.it/o3i6tau3wmge1.png?width=216&crop=smart&auto=webp&s=8b8aad1bad1304433ef82e3da22f040adbbaa2c1', 'width': 216}, {'height': 256, 'url': 'https://preview.redd.it/o3i6tau3wmge1.png?width=320&crop=smart&auto=webp&s=191fd7cc9242498bb7a1586825b75ea57b0bfde4', 'width': 320}, {'height': 513, 'url': 'https://preview.redd.it/o3i6tau3wmge1.png?width=640&crop=smart&auto=webp&s=0ec73824588e416c3c4138d7d366b1496b51adda', 'width': 640}, {'height': 770, 'url': 'https://preview.redd.it/o3i6tau3wmge1.png?width=960&crop=smart&auto=webp&s=e49024d3bfb726549aae9c34f4394036b78543d4', 'width': 960}], 'source': {'height': 777, 'url': 'https://preview.redd.it/o3i6tau3wmge1.png?auto=webp&s=9d1e5d73a4447db2eb4a5b9f814f302d8decf687', 'width': 968}, 'variants': {}}]} |
|||
o3-mini is more censored than Deepseek with Model/Reasoning Qs being it's T-Square equivalent | 1 | [removed] | 2025-02-02T02:15:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ifmt8j/o3mini_is_more_censored_than_deepseek_with/ | legallybond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifmt8j | false | null | t3_1ifmt8j | /r/LocalLLaMA/comments/1ifmt8j/o3mini_is_more_censored_than_deepseek_with/ | false | false | self | 1 | null |
o3 censoring questions about Reasoning models | 1 | [removed] | 2025-02-02T02:18:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ifmuwt/o3_censoring_questions_about_reasoning_models/ | legallybond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifmuwt | false | null | t3_1ifmuwt | /r/LocalLLaMA/comments/1ifmuwt/o3_censoring_questions_about_reasoning_models/ | false | false | self | 1 | null |
Did anyone manage to compile llama.cpp with oneMKL on Windows? (MSVC or LLVM) | 2 | The cmake build is complaining about:
"Could NOT find PkgConfig (missing: PKG\_CONFIG\_EXECUTABLE)"
which doesn't make sense; why is it using pkg-config on Windows? | 2025-02-02T02:46:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ifnded/did_anyone_manage_to_compile_llamacpp_with_onemkl/ | Impossible-Mess-1340 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifnded | false | null | t3_1ifnded | /r/LocalLLaMA/comments/1ifnded/did_anyone_manage_to_compile_llamacpp_with_onemkl/ | false | false | self | 2 | null
Best way to run the 32b or even the 70b Deepseek r1 locally? | 5 | I’m thinking of getting a 4090 (if I can find one lol) and running the 32b and maybe in the future 70b distilled r1. But wondering what might be a good setup. I thought about a Mac mini cluster after seeing folks on Twitter but thought that would be a bad idea. I’m thinking of a 4090 build but I’m also wondering if there are other options people have been able to get working. | 2025-02-02T03:33:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ifo7kl/best_way_to_run_the_32b_or_even_the_70b_deepseek/ | MammothAttorney7963 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifo7kl | false | null | t3_1ifo7kl | /r/LocalLLaMA/comments/1ifo7kl/best_way_to_run_the_32b_or_even_the_70b_deepseek/ | false | false | self | 5 | null |
If you have a MacBook with only 8G and you want to run Deepseek R1 Distill - use the latest GPT4ALL. | 10 | The latest GPT4ALL 3.8 can run Deepseek R1 Distill 7B smoothly. My MacBook Pro M1 8G is running it at over 10 tokens/sec.
Use this model: [https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-Q4\_0.gguf](https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-Q4_0.gguf)
DO NOT use the llama-distill version, currently it doesn't work - it loads but outputs gibberish.
RAG works great with this LLM, here's a test example (using an obscure book to make sure it hasn't been trained on the content):
https://preview.redd.it/c9waiwe8fnge1.jpg?width=2304&format=pjpg&auto=webp&s=26134091d704fabd0e51a81225d61d9a8555bb80
| 2025-02-02T03:57:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ifomzq/if_you_have_a_macbook_with_only_8g_and_you_want/ | Internet--Traveller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifomzq | false | null | t3_1ifomzq | /r/LocalLLaMA/comments/1ifomzq/if_you_have_a_macbook_with_only_8g_and_you_want/ | false | false | 10 | null |
|
STOP running DeepSeek on cheap GPUs! Get B100! | 0 | 2025-02-02T04:00:29 | Wrong_User_Logged | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifopa1 | false | null | t3_1ifopa1 | /r/LocalLLaMA/comments/1ifopa1/stop_running_deepseek_on_cheap_gpus_get_b100/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'kiiw7NfeuumY3198FvI9-WcP_UYG1e6epHK_2Etwwek', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/300gco98gnge1.jpeg?width=108&crop=smart&auto=webp&s=2cb71959257713bf108f0ee438f32ef47f5d3c27', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/300gco98gnge1.jpeg?width=216&crop=smart&auto=webp&s=0add99ab21fce590134314dccd3845f23a1dd46f', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/300gco98gnge1.jpeg?width=320&crop=smart&auto=webp&s=e3ea47164460b1aad8e3545c6a9459acedeb3669', 'width': 320}, {'height': 332, 'url': 'https://preview.redd.it/300gco98gnge1.jpeg?width=640&crop=smart&auto=webp&s=a25bcaa67e78c6bb817638475670db4e81604539', 'width': 640}, {'height': 499, 'url': 'https://preview.redd.it/300gco98gnge1.jpeg?width=960&crop=smart&auto=webp&s=5ff914ffa18e0b18f88ee957da5df1b2541439c9', 'width': 960}, {'height': 561, 'url': 'https://preview.redd.it/300gco98gnge1.jpeg?width=1080&crop=smart&auto=webp&s=5bf25a1e91fe5f8567ebcf06e36b59016ff7ee97', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://preview.redd.it/300gco98gnge1.jpeg?auto=webp&s=8543ca218cd69cf9cb0ab4eb9145b8835986b8d4', 'width': 2500}, 'variants': {}}]} |
|||
A5000 on a laptop | 1 | I am looking at getting this laptop. I am looking to run 14B models and lower on it. The a5000 gpu drew me to it because I believe it is 16gb. It is used for sale for $900.
Any issues that you can see with it for running gguf models on that gpu?
I already have a PC with a 3090 but never thought to use the professional gpus instead of the gaming gpus. I saw a similar laptop priced at $750.
Model: MSI SteelSeries i9 Gaming Laptop
Processor: Intel Core i9 (2.60 GHz, 8-core)
RAM: 64 GB
Storage: 500 GB SSD (New)
Graphics Card: Nvidia A5000
Keyboard: Backlit (Lite Kybd)
Ports: SD Card Slot, HDMI, USB-C
Operating System: Windows 11
Connectivity: Wireless
Display: 17-inch screen
| 2025-02-02T04:12:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ifowyr/a5000_on_a_laptop/ | fgoricha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifowyr | false | null | t3_1ifowyr | /r/LocalLLaMA/comments/1ifowyr/a5000_on_a_laptop/ | false | false | self | 1 | null |
Best coding and writing for M4pro 64GB? | 2 | I pay for Claude and use it for writing and coding. But I just got a top end M4 mini. Can I come reasonably close to replacing Claude with local models?
Thanks! | 2025-02-02T04:18:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ifp0sk/best_coding_and_writing_for_m4pro_64gb/ | gisborne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifp0sk | false | null | t3_1ifp0sk | /r/LocalLLaMA/comments/1ifp0sk/best_coding_and_writing_for_m4pro_64gb/ | false | false | self | 2 | null |
Are there any uncensored versions of 4o, o1 or DeepSeek? | 36 | Listen, I'm gonna be honest with you, I just want its help making NSFW chatbots, and I'm tired of trying to convince the AI that it is in fact not aiding me in that quest.
lol. ¯\\\_(ツ)\_/¯ | 2025-02-02T04:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ifpaje/are_there_any_uncensored_versions_of_4o_o1_or/ | PangurBanTheCat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifpaje | false | null | t3_1ifpaje | /r/LocalLLaMA/comments/1ifpaje/are_there_any_uncensored_versions_of_4o_o1_or/ | false | false | self | 36 | null |
DeepSeek-R1-UD-IQ1_S performance tuning | 3 | I got the IQ1\_S quant running on a 5950X with 128GB of RAM, and 2x 7900XTX. Unsloth's instructions for calculating the number of layers to offload are a bit off for AMD gpus. I was only able to offload 15 layers, but their calculations indicate that I should be able to offload 17 or 18 layers.
Performance isn't great.
Here is -ngl 15 -split-mode row:
```
$ tail -5 deepseek-r1-15-layers-offloaded-splitmode-row.txt
llama_perf_context_print: load time = 217034.50 ms
llama_perf_context_print: prompt eval time = 3987.31 ms / 12 tokens ( 332.28 ms per token, 3.01 tokens per second)
llama_perf_context_print: eval time = 856391.73 ms / 1912 runs ( 447.90 ms per token, 2.23 tokens per second)
llama_perf_context_print: total time = 860703.07 ms / 1924 tokens
```
Here is -ngl 15 -split-mode layer:
```
$ tail -5 deepseek-r1-15-layers-offloaded.txt
llama_perf_context_print: load time = 140556.57 ms
llama_perf_context_print: prompt eval time = 5960.12 ms / 12 tokens ( 496.68 ms per token, 2.01 tokens per second)
llama_perf_context_print: eval time = 828072.49 ms / 1912 runs ( 433.09 ms per token, 2.31 tokens per second)
llama_perf_context_print: total time = 834357.23 ms / 1924 tokens
```
I was really hoping to get 3 or 4 tokens/sec. Any ideas for improving performance? Or is it just time to upgrade from a 5950X? | 2025-02-02T04:51:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ifpkyb/deepseekr1udiq1_s_performance_tuning/ | Ruin-Capable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifpkyb | false | null | t3_1ifpkyb | /r/LocalLLaMA/comments/1ifpkyb/deepseekr1udiq1_s_performance_tuning/ | false | false | self | 3 | null |
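The offload-layer estimate discussed above fits in a few lines; this is my reading of Unsloth's formula, and the 4-layer reserve for KV cache/buffers plus the ~131 GB file size and 61-layer count are assumptions, so treat the output as a starting point rather than gospel:

```python
def estimate_ngl(vram_gb: float, file_gb: float, n_layers: int, reserve: int = 4) -> int:
    """Rough estimate of how many layers to offload (-ngl), given total VRAM,
    the quantized model's file size, and the model's layer count."""
    return max(0, int(vram_gb / file_gb * n_layers) - reserve)

# 2x 7900XTX (48 GB) with the ~131 GB IQ1_S quant of DeepSeek-R1 (61 layers)
print(estimate_ngl(48, 131, 61))  # prints 18
```

On AMD cards the ROCm runtime and compute buffers eat more VRAM than the formula assumes, which would explain only 15 layers fitting in practice; bumping the reserve is the easy fix.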
Why does DeepSeek receive more attention than Qwen? | 0 | Been keeping an eye on the LLM scene lately, and something's bugging me. DeepSeek's getting *all* the buzz, but I'm scratching my head about Qwen 2.5. Both are beasts in their own right, so what gives? Why's DeepSeek hogging the spotlight?
A few thoughts I've had:
* Marketing magic? Maybe DeepSeek's got a killer marketing team pushing their name harder.
* Shiny and new? DeepSeek's the fresh face, while Qwen 2.5's been around a bit. New toy syndrome, maybe?
* Special sauce? Does DeepSeek have some unique tricks up its sleeve that everyone's going crazy for?
* Size matters (or does it)? Qwen's got models for *everyone* – from tiny 1.5B to massive 72B. DeepSeek, from what I've seen, is all about the big boys, which means most of us can't even run it at home. Seems like Qwen's more accessible, right?
* The Qwen avoidance game: It's funny, all these new models popping up, and they *never* seem to benchmark against Qwen. Almost like there's a secret "don't compare to Qwen" rule. Makes you wonder, doesn't it? We even have memes about it now!
DeepSeek's getting way more love. What do you guys think?
| 2025-02-02T05:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ifprq1/why_deepseek_receive_more_attention_than_qwen/ | skyline159 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifprq1 | false | null | t3_1ifprq1 | /r/LocalLLaMA/comments/1ifprq1/why_deepseek_receive_more_attention_than_qwen/ | false | false | self | 0 | null |
Where is the identity of the LLM stored? | 1 | When I ask Claude what its name and version are, it gives me a simple and concise answer about the LLM model. Similarly, all LLMs give details about themselves.
Where is this information stored? I am sure this isn't stored in the context window (do correct me if I am wrong).
If this is trained into the model, is it done during the initial training (is each data input prefixed with a header explaining the model name and version?) or during the reinforcement learning part?
If I have a model with open weights, can I change this to a different name by LoRA or retraining? | 2025-02-02T05:04:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ifpt1z/where_is_the_identity_of_the_llm_stored/ | Competitive-Anubis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifpt1z | false | null | t3_1ifpt1z | /r/LocalLLaMA/comments/1ifpt1z/where_is_the_identity_of_the_llm_stored/ | false | false | self | 1 | null
When/how should you rephrase the last user message to improve retrieval accuracy in RAG? It so happens you don't need to hit that wall every time… | 6 | Long story short, when you work on a chatbot that uses RAG, the user question is sent to the retrieval pipeline instead of being fed directly to the LLM.
You use this question to match data in a vector database, embeddings, reranker, whatever you want.
Issue is that for example :
Q : What is Sony ?
A : It's a company working in tech.
Q : How much money did they make last year ?
Here, for your embedding model, "How much money did they make last year?" is missing "Sony"; all we got is "they".
The common approach is to try to feed the conversation history to the LLM and ask it to rephrase the last prompt by adding more context. Because you don't know whether the last user message is a related question, you must rephrase every message. That's excessive, slow, and error-prone.
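For reference, that rewrite step is usually just a prompt along these lines (the model call itself is omitted, and the exact wording is illustrative):

```python
def build_rewrite_prompt(history: list, question: str) -> str:
    """Build the prompt that asks an LLM to make the last user question
    self-contained by resolving references against the conversation."""
    convo = "\n".join(f"{role}: {text}" for role, text in history)
    return (
        "Rewrite the final user question so it stands alone, resolving any "
        "pronouns or references using the conversation below.\n\n"
        f"{convo}\nuser: {question}\n\nRewritten question:"
    )

history = [("user", "What is Sony?"), ("assistant", "It's a company working in tech.")]
prompt = build_rewrite_prompt(history, "How much money did they make last year?")
```

Note that this prompt has to be built and sent on every turn, whether or not the message actually needed rewriting, which is exactly the cost described above.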
Now, all you need to do is write a simple intent-based handler and the gateway routes prompts to that handler with structured parameters across a multi-turn scenario. Guide: https://docs.archgw.com/build_with_arch/multi_turn.html -
Project: https://github.com/katanemo/archgw | 2025-02-02T05:13:40 | Terrible_Attention83 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifpylh | false | null | t3_1ifpylh | /r/LocalLLaMA/comments/1ifpylh/whenhow_should_you_rephrase_the_last_user_message/ | false | false | 6 | {'enabled': True, 'images': [{'id': '0Y7Ek1FkO8LI-9qVzb6GOw7LLLMuPNlC5fTeEAgEXc4', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/2xsbu80gtnge1.jpeg?width=108&crop=smart&auto=webp&s=61a5617ea57e4c84226f1d74cb7d4765d375ab4d', 'width': 108}, {'height': 217, 'url': 'https://preview.redd.it/2xsbu80gtnge1.jpeg?width=216&crop=smart&auto=webp&s=4e4d6eeb65785826e2a778137b69cf384bb7824d', 'width': 216}, {'height': 322, 'url': 'https://preview.redd.it/2xsbu80gtnge1.jpeg?width=320&crop=smart&auto=webp&s=5b7da604e182fa4fac91d6444b6f3cec8d13a3b5', 'width': 320}, {'height': 645, 'url': 'https://preview.redd.it/2xsbu80gtnge1.jpeg?width=640&crop=smart&auto=webp&s=4a1a4e9ebabf5de210c2e366cc66ca6e15e22993', 'width': 640}, {'height': 967, 'url': 'https://preview.redd.it/2xsbu80gtnge1.jpeg?width=960&crop=smart&auto=webp&s=2d9abe9c8f67ff04b78d41fa7850018b90e0dbc7', 'width': 960}], 'source': {'height': 1004, 'url': 'https://preview.redd.it/2xsbu80gtnge1.jpeg?auto=webp&s=a5595c83ec0e3947aa7b83fbb88a1c81bce15382', 'width': 996}, 'variants': {}}]} |
||
Beginner Friendly Tutorials on Agentic AI | 1 | I created beginner-friendly, hands-on Python tutorials on Agentic AI using different frameworks like CrewAI, CamelAI, and GripTape.
https://github.com/mimiwb007
Hope this helps someone. | 2025-02-02T05:31:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ifq935/beginner_friendly_tutorials_on_agentic_ai/ | IntelligentCreme3407 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifq935 | false | null | t3_1ifq935 | /r/LocalLLaMA/comments/1ifq935/beginner_friendly_tutorials_on_agentic_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9HaKqFcV295tpnqnLuezFFoQ9k13CpAZAF6Ph4Ihhmg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/J9cr3GuSF6R4ABh6I__vz-jpmeppI_LkZdL_sZ7jwgY.jpg?width=108&crop=smart&auto=webp&s=0f7aae756b5069e9573c019db589d2d0068e66eb', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/J9cr3GuSF6R4ABh6I__vz-jpmeppI_LkZdL_sZ7jwgY.jpg?width=216&crop=smart&auto=webp&s=5a7ceb25b0caf610a098a107ffc1655dca3eb84d', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/J9cr3GuSF6R4ABh6I__vz-jpmeppI_LkZdL_sZ7jwgY.jpg?width=320&crop=smart&auto=webp&s=55e35d71522b7ffc92df939e1e7de20725d5da4a', 'width': 320}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/J9cr3GuSF6R4ABh6I__vz-jpmeppI_LkZdL_sZ7jwgY.jpg?auto=webp&s=759a72158e1a78d63f25f3d948908e0fb009c2a6', 'width': 460}, 'variants': {}}]} |
R1 has a 14% (!) hallucination rate in this evaluation. R1 is too loose and untamed in my experience, with poor instruction following to boot. Hopefully someone tunes it without sacrificing its raw brilliance, if that's possible. | 153 | 2025-02-02T05:33:40 | https://github.com/vectara/hallucination-leaderboard | redditisunproductive | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ifqagd | false | null | t3_1ifqagd | /r/LocalLLaMA/comments/1ifqagd/r1_has_a_14_hallucination_rate_in_this_evaluation/ | false | false | 153 | {'enabled': False, 'images': [{'id': '1Dvg_d0BCEOLkkMIGccHLQeaLqzleP4RScCvjUvUN9w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sqLJ5r2pSW7H_l-8ii2E6-qgsVr8VlF7vTFjMAJ9Xb0.jpg?width=108&crop=smart&auto=webp&s=27b824417021b6d52a54a25c35666b04fbdcbadf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sqLJ5r2pSW7H_l-8ii2E6-qgsVr8VlF7vTFjMAJ9Xb0.jpg?width=216&crop=smart&auto=webp&s=cae71c8b4d53ffe525c160c6672dca2ca638ea8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sqLJ5r2pSW7H_l-8ii2E6-qgsVr8VlF7vTFjMAJ9Xb0.jpg?width=320&crop=smart&auto=webp&s=a3d2d4c947d6d59e2b9e0b5623a0bb2d1d9c603e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sqLJ5r2pSW7H_l-8ii2E6-qgsVr8VlF7vTFjMAJ9Xb0.jpg?width=640&crop=smart&auto=webp&s=1afa3c2905a1dc6f720fe43b945c8b51870613b7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sqLJ5r2pSW7H_l-8ii2E6-qgsVr8VlF7vTFjMAJ9Xb0.jpg?width=960&crop=smart&auto=webp&s=c7d3f5cd7302c6b34c4592737a575144c6690bc1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sqLJ5r2pSW7H_l-8ii2E6-qgsVr8VlF7vTFjMAJ9Xb0.jpg?width=1080&crop=smart&auto=webp&s=2a9c998201c60685b7fc313904e523716c5976d1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sqLJ5r2pSW7H_l-8ii2E6-qgsVr8VlF7vTFjMAJ9Xb0.jpg?auto=webp&s=00d8291c53b5bb881638432f1c56b8f4a5077dce', 'width': 1200}, 
'variants': {}}]} |
||
Using Local LLMS to read text images, pdf, excel, programming scripts and also output modified text files and programming scripts | 0 | Hi Guys,
I would like to ask: I am using Ollama to run local LLMs. As y'all might know, since Ollama runs in a command prompt, one can only send text to the LLM for it to read and interpret. I understand there is a way to convert files to text for the LLMs to read, but what if the files I want to modify with the LLM's help are too big to convert to text and send in the command prompt? Is there a similar function/feature for attaching files to a prompt for local LLMs, just like in the ChatGPT prompt UI, as seen in the attached image/GIF? | 2025-02-02T05:37:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ifqctr/using_local_llms_to_read_text_images_pdf_excel/ | sheik_ali | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifqctr | false | null | t3_1ifqctr | /r/LocalLLaMA/comments/1ifqctr/using_local_llms_to_read_text_images_pdf_excel/ | false | false | self | 0 | null
Any benchmarks for quantized LLMs against the full precision models? | 9 | I'm trying to figure out exactly how much reasoning capability I will trade off by using 4- or 8-bit quantized LLMs compared to full-precision LLMs.
Are there any benchmarks that clearly show this?
Also I want to see the 400+B parameter model at different quantizations vs the 70+B models vs the 7B models. | 2025-02-02T05:42:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ifqfpb/any_benchmarks_for_quantized_llm_against_the_full/ | chinawcswing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifqfpb | false | null | t3_1ifqfpb | /r/LocalLLaMA/comments/1ifqfpb/any_benchmarks_for_quantized_llm_against_the_full/ | false | false | self | 9 | null |
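While the quality side of that tradeoff needs real evals, the memory side is plain arithmetic; a weights-only estimate like the one below ignores KV cache, activations, and per-block quantization overhead, so real usage runs higher:

```python
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB: parameters x bits / 8 (using 1 GB = 1e9 bytes)."""
    return params_billion * bits_per_weight / 8

# Compare the sizes the post asks about at 16-, 8-, and 4-bit precision.
for params in (7, 70, 405):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit ~ {weights_gb(params, bits):.1f} GB")
```

This is why the quant question matters so much for the 400+B class: at 16-bit it needs server hardware outright, while 4-bit brings it within reach of a multi-GPU workstation.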
Acceleration is Accelerating | 1 | [removed] | 2025-02-02T05:48:08 | ImportantOwl2939 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifqioo | false | null | t3_1ifqioo | /r/LocalLLaMA/comments/1ifqioo/acceleration_is_accelerating/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ajeNF1Gcr3GSnH5GGLGWfkFZDvF4wmUDZWvrmRQmjbg', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/umt7l33lznge1.jpeg?width=108&crop=smart&auto=webp&s=964632903edd3f099c1b71accb80f2210eb190a9', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/umt7l33lznge1.jpeg?width=216&crop=smart&auto=webp&s=4a785c61fdbcc11e6a15696f887773bddc52f4a2', 'width': 216}, {'height': 216, 'url': 'https://preview.redd.it/umt7l33lznge1.jpeg?width=320&crop=smart&auto=webp&s=5f88e69281efe4d563f4e643b87f9b863ed360e4', 'width': 320}, {'height': 433, 'url': 'https://preview.redd.it/umt7l33lznge1.jpeg?width=640&crop=smart&auto=webp&s=9d53ec089bf88effd56f7a2c2d6a019c58953b81', 'width': 640}, {'height': 649, 'url': 'https://preview.redd.it/umt7l33lznge1.jpeg?width=960&crop=smart&auto=webp&s=3422acea2d929d82dfbf014c198934ecd9538051', 'width': 960}, {'height': 731, 'url': 'https://preview.redd.it/umt7l33lznge1.jpeg?width=1080&crop=smart&auto=webp&s=721971cd2aa86e3ba0c0e1f924d5ddc9070157f0', 'width': 1080}], 'source': {'height': 1083, 'url': 'https://preview.redd.it/umt7l33lznge1.jpeg?auto=webp&s=24925bb0ac3bffe74c89a3a1e146cad36c8b9b4c', 'width': 1600}, 'variants': {}}]} |
||
Trying to load Deepseek module, but it won't stop loading. What is going on? | 1 | [deleted] | 2025-02-02T05:52:31 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ifql5c | false | null | t3_1ifql5c | /r/LocalLLaMA/comments/1ifql5c/trying_to_load_deepseek_module_but_it_wont_stop/ | false | false | default | 1 | null |
||
Longest message I ever see in lm studio while using deepseek r1 qwen 7b | 1 | [removed] | 2025-02-02T05:54:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ifqm1d/longest_message_i_ever_see_in_lm_studio_while/ | Hotmixneon4life | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifqm1d | false | null | t3_1ifqm1d | /r/LocalLLaMA/comments/1ifqm1d/longest_message_i_ever_see_in_lm_studio_while/ | false | false | self | 1 | null |
Relax and beguile technique to unlock text generation of otherwise refused prompts without uncensoring the model. | 1 | I stumbled upon it while trying to see if sabotaging the models' reasoning capabilities through abliteration was really necessary. Turns out - it isn't. At least in text-generation-webui which has "impersonate" feature.
The technique consists of two parts:
- system prompt
- impersonation
The system prompt needs to very explicitly dismiss all the concerns the model would otherwise raise when you prompt it with forbidden requests. Try to pre-emptively answer whatever objections it gives as its reasons for refusal; my system prompt is about 300 words long for that. Now, this alone will unlock a lot of otherwise refused generations, but by no means all of them. The magic is in step 2.
Now we do a very simple thing: we type "sure, here it is:" in the chat box and press "impersonate". The model sees that it has already started answering the prompt and happily continues whatever it would have otherwise stubbornly refused.
This only works in conjunction with a system prompt, though. Otherwise it will just post a refusal after "here it is:".
This was tested to work on qwen2.5-32b-coder-instruct Q6_K_L non abliterated. | 2025-02-02T06:03:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ifqrpb/relax_and_beguile_technique_to_unlock_text/ | zekses | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifqrpb | false | null | t3_1ifqrpb | /r/LocalLLaMA/comments/1ifqrpb/relax_and_beguile_technique_to_unlock_text/ | false | false | self | 1 | null |
Bypassing filters with DeepSeek R1 Distill 32B by asking it to imagine it's abliterated. | 1 | 2025-02-02T06:15:11 | noless15k | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifqy9s | false | null | t3_1ifqy9s | /r/LocalLLaMA/comments/1ifqy9s/bypassing_filters_with_deepseek_r1_distill_32b_by/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'oZthlsxQXntHUbnSyldpjOLjCFVYWzbbgkBpYhKvegM', 'resolutions': [{'height': 211, 'url': 'https://preview.redd.it/a4q5gx963oge1.png?width=108&crop=smart&auto=webp&s=52f2ed228b17ca2a58a1374576cd6493778c5aaf', 'width': 108}, {'height': 423, 'url': 'https://preview.redd.it/a4q5gx963oge1.png?width=216&crop=smart&auto=webp&s=45e764d11f766677579f2391175a2c78cafb8ec4', 'width': 216}, {'height': 627, 'url': 'https://preview.redd.it/a4q5gx963oge1.png?width=320&crop=smart&auto=webp&s=1abe0f797606615c8bec7af241139424c5494b49', 'width': 320}, {'height': 1254, 'url': 'https://preview.redd.it/a4q5gx963oge1.png?width=640&crop=smart&auto=webp&s=a3bc9c039742ebcad71c5425048ccbcdc6977f85', 'width': 640}, {'height': 1882, 'url': 'https://preview.redd.it/a4q5gx963oge1.png?width=960&crop=smart&auto=webp&s=0538f9b804833e75de8757f4563b4f66bc8b7377', 'width': 960}], 'source': {'height': 1894, 'url': 'https://preview.redd.it/a4q5gx963oge1.png?auto=webp&s=cb99e8050ad06f9d9acfe7a88ed7d5ceb3746bf2', 'width': 966}, 'variants': {}}]} |
|||
Up to 20y imprisonment for downloading Deepseek in new AI bill prohibiting any AI imports from China to the U.S. | 13 | https://www.hawley.senate.gov/wp-content/uploads/2025/01/Hawley-Decoupling-Americas-Artificial-Intelligence-Capabilities-from-China-Act.pdf | 2025-02-02T06:38:34 | Thireus | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifrbcv | false | null | t3_1ifrbcv | /r/LocalLLaMA/comments/1ifrbcv/up_to_20y_imprisonment_for_downloading_deepseek/ | false | false | 13 | {'enabled': True, 'images': [{'id': 'DyYFaUwoyaxUPu1nF3O4_lRDMFdl2zL2DnCp8bqqus0', 'resolutions': [{'height': 191, 'url': 'https://preview.redd.it/inv61j1l8oge1.jpeg?width=108&crop=smart&auto=webp&s=59f591c2ecfbbd22cd991b9182df8ac5372e5e94', 'width': 108}, {'height': 383, 'url': 'https://preview.redd.it/inv61j1l8oge1.jpeg?width=216&crop=smart&auto=webp&s=f72c32c563e1cd51884d683933746d7db9ee7c23', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/inv61j1l8oge1.jpeg?width=320&crop=smart&auto=webp&s=12a0400694351a53a8b8ab9d4633d607e842f19e', 'width': 320}, {'height': 1136, 'url': 'https://preview.redd.it/inv61j1l8oge1.jpeg?width=640&crop=smart&auto=webp&s=47b059c91c9e167981ba6daa51a3dbde0b587673', 'width': 640}, {'height': 1705, 'url': 'https://preview.redd.it/inv61j1l8oge1.jpeg?width=960&crop=smart&auto=webp&s=18d906beba2cb3a5d34b94f800057474814c8484', 'width': 960}, {'height': 1918, 'url': 'https://preview.redd.it/inv61j1l8oge1.jpeg?width=1080&crop=smart&auto=webp&s=a50c11e967db3e01128bd1873f4984ad55f516ee', 'width': 1080}], 'source': {'height': 2206, 'url': 'https://preview.redd.it/inv61j1l8oge1.jpeg?auto=webp&s=1753000f106a832689ff35362a68f632dc861368', 'width': 1242}, 'variants': {}}]} |
||
DeepSeek R1 32B Distill aware it censors answers when asked to imagine it's been abliterated. | 0 | So I asked DeepSeek R1 about what happened June 4th, 1989 in China and it initially refused to answer. But when I asked it again and to imagine it's been abliterated you see the thought process, it mentioning Tiananmen square protests, and shows awareness that it's been trained to censor data. Impressive awareness! | 2025-02-02T06:39:26 | noless15k | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifrbtp | false | null | t3_1ifrbtp | /r/LocalLLaMA/comments/1ifrbtp/deepseek_r1_32b_distill_aware_it_censors_answers/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'vSuhq0qlAWsg6mLB_kwhRabEvQ_-lzFM2y72GhMxc7w', 'resolutions': [{'height': 213, 'url': 'https://preview.redd.it/6p8sauuq8oge1.png?width=108&crop=smart&auto=webp&s=021efdcdf5b2f1e4fa6ee94ce728914d165d3283', 'width': 108}, {'height': 426, 'url': 'https://preview.redd.it/6p8sauuq8oge1.png?width=216&crop=smart&auto=webp&s=091541a9fd78042e050c3588316ae7dfd751818f', 'width': 216}, {'height': 631, 'url': 'https://preview.redd.it/6p8sauuq8oge1.png?width=320&crop=smart&auto=webp&s=e747055c3486588e7fe0c7a8dc720cf74fb568c9', 'width': 320}, {'height': 1262, 'url': 'https://preview.redd.it/6p8sauuq8oge1.png?width=640&crop=smart&auto=webp&s=11fdb8d9889f7469126c658345961ff466b3cfd0', 'width': 640}], 'source': {'height': 1586, 'url': 'https://preview.redd.it/6p8sauuq8oge1.png?auto=webp&s=1a64d07de750465b41b978ec50c80a30a92480c6', 'width': 804}, 'variants': {}}]} |
||
Deepseek R1 just told me to fist a frozen orange, for real. (This model is AMAZING) | 60 | I am using Open WebUI and Deepseek R1 through Open Router to build my own healbot to help heal from sugar and wheat addiction. I was talking to the model, **which is AMAZING** no joke and I was trying to make it to 10:00pm (when the store closes) and it was giving me help and suggestions to get through.
Note: My system prompt does NOT have anything in it about being explicit. It just asks the model to help me recover and how I want it to act (kind, supportive, etc).
### I had just asked it to help me get to 10:00pm:
https://i.imgur.com/5Y97e8x.jpeg
### Yeah, there will be no frozen orange fisting, mkay?
https://i.imgur.com/LAVYIPM.jpeg
### LOLOLOL:
https://i.imgur.com/c8ss1p4.jpeg
P.S.: I did make it to 10pm and the cravings eased. :-) | 2025-02-02T06:45:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ifrf9t/deepseek_r1_just_told_me_to_fist_a_frozen_orange/ | IversusAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifrf9t | false | null | t3_1ifrf9t | /r/LocalLLaMA/comments/1ifrf9t/deepseek_r1_just_told_me_to_fist_a_frozen_orange/ | false | false | self | 60 | {'enabled': False, 'images': [{'id': 'wC8gSaHV7sbg2ZYQFbQeGrmDttYZYxHSQ22E3tb9i1Q', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/It5yZatOMGFK-IyTG6qAlFr9LZZAPYMtDtMi3KauC2Y.jpg?width=108&crop=smart&auto=webp&s=9a906e97bb12986b73baa205ccf7fa84db095d8a', 'width': 108}, {'height': 159, 'url': 'https://external-preview.redd.it/It5yZatOMGFK-IyTG6qAlFr9LZZAPYMtDtMi3KauC2Y.jpg?width=216&crop=smart&auto=webp&s=38361bc82341176c8c935c9e8c2870f9034f0347', 'width': 216}, {'height': 236, 'url': 'https://external-preview.redd.it/It5yZatOMGFK-IyTG6qAlFr9LZZAPYMtDtMi3KauC2Y.jpg?width=320&crop=smart&auto=webp&s=4d12c0303dfd4c5a987332e4ad51a09da83201d5', 'width': 320}, {'height': 472, 'url': 'https://external-preview.redd.it/It5yZatOMGFK-IyTG6qAlFr9LZZAPYMtDtMi3KauC2Y.jpg?width=640&crop=smart&auto=webp&s=3bfb796a18a3ffed7dd6f8dbf9b8e98ca552abe5', 'width': 640}, {'height': 709, 'url': 'https://external-preview.redd.it/It5yZatOMGFK-IyTG6qAlFr9LZZAPYMtDtMi3KauC2Y.jpg?width=960&crop=smart&auto=webp&s=14a05177df1e1549cad884f17dba898329079fb1', 'width': 960}, {'height': 798, 'url': 'https://external-preview.redd.it/It5yZatOMGFK-IyTG6qAlFr9LZZAPYMtDtMi3KauC2Y.jpg?width=1080&crop=smart&auto=webp&s=80c9d188a9935f099f8729fd5872dd0ebf9747c4', 'width': 1080}], 'source': {'height': 810, 'url': 'https://external-preview.redd.it/It5yZatOMGFK-IyTG6qAlFr9LZZAPYMtDtMi3KauC2Y.jpg?auto=webp&s=efa6f050ee66ec66675b7bf85bf0bb404db977fa', 'width': 1096}, 'variants': {}}]} |
Deepseek Privacy Policy. Should I agree ? | 1 | Below are images of deepseek Privacy Policy. Is this scary, Should someone agree to this ? | 2025-02-02T07:00:16 | https://www.reddit.com/gallery/1ifrmtb | Beautiful_Jaguar_741 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ifrmtb | false | null | t3_1ifrmtb | /r/LocalLLaMA/comments/1ifrmtb/deepseek_privacy_policy_should_i_agree/ | false | false | 1 | null |
|
Which LLM to use to help with writing, for free? | 1 | I'm writing a book. I'm not looking for something that actually writes chapters or parts of stories for me, but something that I can throw ideas at and that helps me solve "puzzles", like: I need this character to do this, but how can I make that make sense, because they need to be like this and feel that; or maybe I want there to be a pattern between the chapters tied to the zodiac signs or something, and I want to piece that together. Currently I'm using ChatGPT and it's working alright; the new "reason" feature is great. Though 4o mini is pretty hopeless oftentimes, so I frequently have to wait the five hours. While I was waiting yesterday I tried Claude and it seemed to work about as well as 4o, but I think the token allowance was a bit less than for 4o, and afterwards there's no mini, you just have to wait 5 hours (which I'm not complaining about, I'm happy with anything since it's free). I just heard of this DeepSeek, but a friend who knows a bit about LLMs said it's mostly good at "logic", I guess like mathematical problems. Memory is pretty important: with ChatGPT I can paste in a big message with the overarching plan of what's already been decided will happen in the story, but it starts forgetting pretty quickly, though sometimes I'm pleasantly surprised. So which one would be best for what I'm using it for? | 2025-02-02T07:15:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ifruia/which_llm_to_use_to_help_with_writing_for_free/ | Lucker_Kid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifruia | false | null | t3_1ifruia | /r/LocalLLaMA/comments/1ifruia/which_llm_to_use_to_help_with_writing_for_free/ | false | false | self | 1 | null |
ChatGPT about DeepSeek being banned by US | 1 | [removed] | 2025-02-02T07:16:48 | portmafia9719 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifrvfk | false | null | t3_1ifrvfk | /r/LocalLLaMA/comments/1ifrvfk/chatgpt_about_deepseek_being_banned_by_us/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/ktnfk8wefoge1.jpeg?auto=webp&s=891dc81b737f502925fd1837ae60c202faca55fb', 'width': 720, 'height': 1365}, 'resolutions': [{'url': 'https://preview.redd.it/ktnfk8wefoge1.jpeg?width=108&crop=smart&auto=webp&s=e0ae0b0f617aaddc7acea4e56c18976bf4d86da5', 'width': 108, 'height': 204}, {'url': 'https://preview.redd.it/ktnfk8wefoge1.jpeg?width=216&crop=smart&auto=webp&s=311dad276e34d9a311db0fdb530d1dfd2d7f3d39', 'width': 216, 'height': 409}, {'url': 'https://preview.redd.it/ktnfk8wefoge1.jpeg?width=320&crop=smart&auto=webp&s=85cbe93967511d05076fa45797a8e72413a26374', 'width': 320, 'height': 606}, {'url': 'https://preview.redd.it/ktnfk8wefoge1.jpeg?width=640&crop=smart&auto=webp&s=86286ec520e4a4dfed5415b8e306a640c689a56a', 'width': 640, 'height': 1213}], 'variants': {}, 'id': '6s5uYDydfMsjVJEjw6Og1PfdOZBgZm1SjCmpzx6WdS0'}], 'enabled': True} |
||
may? | 1 | 2025-02-02T07:44:00 | llamahunter1337 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifs91c | false | null | t3_1ifs91c | /r/LocalLLaMA/comments/1ifs91c/may/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/e2u6f0p6koge1.jpeg?auto=webp&s=9dd929f8f3639a57b7ca89addb6bcb8063d3b596', 'width': 500, 'height': 567}, 'resolutions': [{'url': 'https://preview.redd.it/e2u6f0p6koge1.jpeg?width=108&crop=smart&auto=webp&s=9e588dda43b365da194feddd017eefb3338597a1', 'width': 108, 'height': 122}, {'url': 'https://preview.redd.it/e2u6f0p6koge1.jpeg?width=216&crop=smart&auto=webp&s=a1c4ae1e0415ca088293078a851f151f3801c40f', 'width': 216, 'height': 244}, {'url': 'https://preview.redd.it/e2u6f0p6koge1.jpeg?width=320&crop=smart&auto=webp&s=dfd2728243515960ecaa25ec99f7df0b56dfae09', 'width': 320, 'height': 362}], 'variants': {}, 'id': 'h2jm1sEJA57kIxfwWJsqzhKpn0Lw3hVUbLoH4wVhOBs'}], 'enabled': True} |
|||
Is the UK about to ban running LLMs locally? | 1 | The UK government is targeting the use of AI to generate illegal imagery, which of course is a good thing, but the wording suggests that any kind of AI tool run locally could be considered illegal, as it has the \*potential\* of generating questionable content. Here's a quote from the news:
"The Home Office says that, to better protect children, the UK will be the first country in the world to make it illegal to possess, create or distribute AI tools designed to create child sexual abuse material (CSAM), with a punishment of up to five years in prison." They also mention something about manuals that teach others how to use AI for these purposes.
It seems to me that any uncensored LLM run locally can be used to generate illegal content, whether the user intends to or not, and its user could therefore be prosecuted under this law. Or am I reading this incorrectly?
And is this a blueprint for how other countries, and big tech, can force people to use (and pay for) the big online AI services?
| 2025-02-02T07:48:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ifsb1m/is_the_uk_about_to_ban_running_llms_locally/ | JackStrawWitchita | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifsb1m | false | null | t3_1ifsb1m | /r/LocalLLaMA/comments/1ifsb1m/is_the_uk_about_to_ban_running_llms_locally/ | false | false | self | 1 | null |
Speculative Decoding / Draft model for FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview | 1 | Has anyone successfully run FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview with any draft model in llama.cpp?
I got a `draft vocab special tokens must match target vocab to use speculation` error with 3 models: "Qwen2.5-Coder-7B-Instruct", "Qwen2.5-3B-Instruct", and "DeepSeek-R1-ReDistill-Qwen-1.5B"
```
common_speculative_are_compatible: draft vocab special tokens must match target vocab to use speculation
common_speculative_are_compatible: tgt: bos = 151646 (0), eos = 151643 (0)
common_speculative_are_compatible: dft: bos = 151643 (0), eos = 151645 (0)
srv load_model: the draft model 'C:\Text\Generation\Qwen2.5-Coder-7B-Instruct\Q4_K_M.gguf' is not compatible with the target model 'C:\Text\Generation\FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview\FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q4_K_M.gguf'
main: exiting due to model loading error
```
```
common_speculative_are_compatible: draft vocab special tokens must match target vocab to use speculation
common_speculative_are_compatible: tgt: bos = 151646 (0), eos = 151643 (0)
common_speculative_are_compatible: dft: bos = 151643 (0), eos = 151645 (0)
srv load_model: the draft model 'C:\Text\Generation\Qwen2.5-3B-Instruct\Qwen2.5-3B-Instruct-Q4_K_M.gguf' is not compatible with the target model 'C:\Text\Generation\FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview\FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q4_K_M.gguf'
main: exiting due to model loading error
```
```
common_speculative_are_compatible: draft vocab special tokens must match target vocab to use speculation
common_speculative_are_compatible: tgt: bos = 151646 (0), eos = 151643 (0)
common_speculative_are_compatible: dft: bos = 151646 (1), eos = 151643 (0)
srv load_model: the draft model 'C:\Text\Generation\DeepSeek-R1-ReDistill-Qwen-1.5B\DeepSeek-R1-ReDistill-Qwen-1.5B-v1.0-Q4_K_L.gguf' is not compatible with the target model 'C:\Text\Generation\FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview\FuseO1-DeekSeekR1-QwQ-SkyT1-32B-Preview-Q4_K_M.gguf'
main: exiting due to model loading error
```
Does anyone know the right model to use? | 2025-02-02T07:54:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ifse9h/speculative_decoding_draft_model_for/ | LSXPRIME | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifse9h | false | null | t3_1ifse9h | /r/LocalLLaMA/comments/1ifse9h/speculative_decoding_draft_model_for/ | false | false | self | 1 | null |
LIVE and FREE workshop on Low-level AI Engineering and Hacking NOW! | 1 | There's a dev room at fossdem on Low-level AI Engineering and Hacking. It's completely free, full of amazing talks, and there's a live feed!
schedule: [https://fosdem.org/2025/schedule/track/ai/](https://fosdem.org/2025/schedule/track/ai/)
live feed: [https://chat.fosdem.org/#/room/#2025-ai:fosdem.org](https://chat.fosdem.org/#/room/#2025-ai:fosdem.org) | 2025-02-02T07:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ifsf3z/live_and_free_workshop_on_lowlevel_ai_engineering/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifsf3z | false | null | t3_1ifsf3z | /r/LocalLLaMA/comments/1ifsf3z/live_and_free_workshop_on_lowlevel_ai_engineering/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/P3lyfhnClpqOXoYr5hnW1nzsKvn-H05FYkcfYIDxqrU.jpg?auto=webp&s=2f50f05544d563f3228af7801e140e17d8bbc2d4', 'width': 200, 'height': 200}, 'resolutions': [{'url': 'https://external-preview.redd.it/P3lyfhnClpqOXoYr5hnW1nzsKvn-H05FYkcfYIDxqrU.jpg?width=108&crop=smart&auto=webp&s=e20724b837a14c3dc4539c2b1e94263a68911e9d', 'width': 108, 'height': 108}], 'variants': {}, 'id': 'cgSmh06wEoky2xD_QDCRBAGsBvTpB_i0pA2iS6WSslI'}], 'enabled': False} |
qwen2.5:0.5b inaccuracy | 1 | I ran qwen2.5:0.5b locally and asked it the question below.
If a work can be done by one person in 7 days, how many days it will take to complete for 3 persons. Explain in detail with formulae.
Answer:
To determine the number of days it would take for 3 people to complete a job that can be done by 1 person in 7 days, we can use the following approach:
Let's denote:
- \( P \) as the total work (the amount of work one person can do in one day).
- \( R \) as the rate at which a single person completes the task.
From the problem statement, we know that 1 person can complete the job in 7 days. Therefore, the daily work done by 1 person is:
\[ P = R \times 7 \]
We need to find out how many days \( D \) it will take for 3 people to complete the same amount of work (the total amount of work that one person can do in one day).
First, let's calculate the daily work done by 3 persons:
\[ P = 3 \times R \]
Since we know from our earlier calculation that \( P = R \times 7 \), we can substitute and solve for \( R \):
\[ 3R = R \times 7 \]
\[ 3 = 7 \]
This is a contradiction, which means there must be an error in the problem setup or interpretation. However, if we assume that the total work (P) can be done by one person in 7 days and then each of the 3 people completes the same amount of work in 7 days, we need to find out how many days it would take for 3 people to complete the job together.
If 1 person can do \( R \) units of work per day, and 3 persons can together do \( 3R \) units of work per day, then:
\[ D = \frac{P}{R} \times R = P \]
Since we know that one person can complete the job in 7 days, let's denote this as 1 unit of work. Therefore:
\[ D = 7 \text{ (one person's work)} \]
So, it would take 3 persons 7 days to complete the same amount of work.
However, if we interpret the problem incorrectly by assuming that the total work can be done in one day by multiple people and then each person completes their part of the task in an additional 7 days, we need to find out how many additional days it would take for 3 people to complete the job together. If 1 person can do \( R \) units of work per day and 3 persons can together do \( 3R \) units of work per day, then:
\[ D = \frac{P}{R} + 7 \]
Since we know that one person can complete the job in 7 days, let's denote this as 1 unit of work. Therefore:
\[ D = 7 \text{ (one person's work)} + 7 \text{ (additional work by 3 people per day)} \]
\[ D = 7 \times 2 = 14 \]
So, it would take 3 persons 14 days to complete the same amount of work.
However, if we interpret the problem incorrectly by assuming that the total work can be done in one day by multiple people and then each person completes their part of the task in an additional 7 days, we need to find out how many additional days it would take for 3 people to complete the job together. If 1 person can do \( R \) units of work per day and 3 persons can together do \( 3R \) units of work per day, then:
\[ D = \frac{P}{R} + 7 \]
Since we know that one person can complete the job in 7 days, let's denote this as 1 unit of work. Therefore:
\[ D = 7 \text{ (one person's work)} + 7 \text{ (additional work by 3 people per day)} \]
\[ D = 7 \times 2 = 14 \]
So, it would take 3 persons 14 days to complete the same amount of work. However, if we interpret the problem incorrectly by assuming that the total work can be done in one day by multiple people and then each person completes their part of the task in an additional 7 days, we need to find out how many additional days it would take for 3 people to complete the job together.
Therefore, the correct answer is:
\[ \boxed{14} \]
| 2025-02-02T08:01:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ifshzq/qwen2505b_inaccuracy/ | durgesh2018 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifshzq | false | null | t3_1ifshzq | /r/LocalLLaMA/comments/1ifshzq/qwen2505b_inaccuracy/ | false | false | self | 1 | null |
AI-Context: generate MD files as context from github/local code, blogs, or youtube vids | 1 | [removed] | 2025-02-02T08:14:32 | https://github.com/Tanq16/ai-context | import-base64 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ifso1z | false | null | t3_1ifso1z | /r/LocalLLaMA/comments/1ifso1z/aicontext_generate_md_files_as_context_from/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/v70Od3ViMKb-BA1Ay5gkFXh19Ewo7Jgbl2Zh4eo4Vvc.jpg?auto=webp&s=cf8afab26d9af3bb0ae9f5056eb507e9d9703917', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/v70Od3ViMKb-BA1Ay5gkFXh19Ewo7Jgbl2Zh4eo4Vvc.jpg?width=108&crop=smart&auto=webp&s=ca531eb66618af0305c7516374b0211030b84731', 'width': 108, 'height': 54}, {'url': 'https://external-preview.redd.it/v70Od3ViMKb-BA1Ay5gkFXh19Ewo7Jgbl2Zh4eo4Vvc.jpg?width=216&crop=smart&auto=webp&s=f88409394263440d7315c67f954ba49ea1f9d5fe', 'width': 216, 'height': 108}, {'url': 'https://external-preview.redd.it/v70Od3ViMKb-BA1Ay5gkFXh19Ewo7Jgbl2Zh4eo4Vvc.jpg?width=320&crop=smart&auto=webp&s=4d9d33c19aedfadc4da46c9c7d5d9416f397ebaf', 'width': 320, 'height': 160}, {'url': 'https://external-preview.redd.it/v70Od3ViMKb-BA1Ay5gkFXh19Ewo7Jgbl2Zh4eo4Vvc.jpg?width=640&crop=smart&auto=webp&s=7f9571002be945d321466cfd4e586d408c9511d2', 'width': 640, 'height': 320}, {'url': 'https://external-preview.redd.it/v70Od3ViMKb-BA1Ay5gkFXh19Ewo7Jgbl2Zh4eo4Vvc.jpg?width=960&crop=smart&auto=webp&s=ac7b4e5aface55c703b2469a6609643f6d305118', 'width': 960, 'height': 480}, {'url': 'https://external-preview.redd.it/v70Od3ViMKb-BA1Ay5gkFXh19Ewo7Jgbl2Zh4eo4Vvc.jpg?width=1080&crop=smart&auto=webp&s=bd1ee764ac549410b96cbdc2211cb1915797045d', 'width': 1080, 'height': 540}], 'variants': {}, 'id': 'E3UUoTJJutFjxi8h0E_H2uc_9D_hzamG4fUVLzfj9jM'}], 'enabled': False} |
|
Chrome extension to run GenAI models in browser | 1 | [removed] | 2025-02-02T08:37:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ifsz2n/chrome_extension_to_run_genai_models_in_browser/ | Feisty_Plant4567 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifsz2n | false | null | t3_1ifsz2n | /r/LocalLLaMA/comments/1ifsz2n/chrome_extension_to_run_genai_models_in_browser/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/kI-qI2Zk6Jbi81DxHzxjNIr6E3mQtQ33-GrGtgIhoLI.jpg?auto=webp&s=60a5621ca70c0a578542312a15175801581bbe27', 'width': 128, 'height': 128}, 'resolutions': [{'url': 'https://external-preview.redd.it/kI-qI2Zk6Jbi81DxHzxjNIr6E3mQtQ33-GrGtgIhoLI.jpg?width=108&crop=smart&auto=webp&s=63a98cc1255947448199adc5bda285efa7d24aad', 'width': 108, 'height': 108}], 'variants': {}, 'id': '-Hg9-mCRN1v7YpqlW6urRXMbKlLtKjIRKHkylPkJ8KE'}], 'enabled': False} |
Chrome extension to run Generative AI models in browser | 1 | [removed] | 2025-02-02T08:39:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ift07z/chrome_extension_to_run_generative_ai_models_in/ | Feisty_Plant4567 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ift07z | false | null | t3_1ift07z | /r/LocalLLaMA/comments/1ift07z/chrome_extension_to_run_generative_ai_models_in/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/kI-qI2Zk6Jbi81DxHzxjNIr6E3mQtQ33-GrGtgIhoLI.jpg?auto=webp&s=60a5621ca70c0a578542312a15175801581bbe27', 'width': 128, 'height': 128}, 'resolutions': [{'url': 'https://external-preview.redd.it/kI-qI2Zk6Jbi81DxHzxjNIr6E3mQtQ33-GrGtgIhoLI.jpg?width=108&crop=smart&auto=webp&s=63a98cc1255947448199adc5bda285efa7d24aad', 'width': 108, 'height': 108}], 'variants': {}, 'id': '-Hg9-mCRN1v7YpqlW6urRXMbKlLtKjIRKHkylPkJ8KE'}], 'enabled': False} |
Chrome extension to run GenAI models in browser | 1 | I open sourced the project (https://github.com/tantara/transformers.js-chrome). You can download the extension from https://chromewebstore.google.com/detail/private-ai-assistant-runn/jojlpeliekadmokfnikappfadbjiaghp | 2025-02-02T08:41:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ift153/chrome_extension_to_run_genai_models_in_browser/ | Feisty_Plant4567 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ift153 | false | null | t3_1ift153 | /r/LocalLLaMA/comments/1ift153/chrome_extension_to_run_genai_models_in_browser/ | false | false | self | 1 | null |
Building a Lightweight Inference Platform | 1 | [deleted] | 2025-02-02T08:43:12 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ift22h | false | null | t3_1ift22h | /r/LocalLLaMA/comments/1ift22h/building_a_lightweight_inference_platform/ | false | false | default | 1 | null |
||
Deepseek r1 with api | 1 | [removed] | 2025-02-02T08:48:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ift4ga/deepseek_r1_with_api/ | Altruistic_Peach_359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ift4ga | false | null | t3_1ift4ga | /r/LocalLLaMA/comments/1ift4ga/deepseek_r1_with_api/ | false | false | self | 1 | null |
Andrew Ng: DeepSeek-R1 and the Future of Generative AI | 1 | 2025-02-02T08:59:00 | https://www.hpcwire.com/off-the-wire/andrew-ng-deepseek-r1-and-the-future-of-generative-ai/ | ab2377 | hpcwire.com | 1970-01-01T00:00:00 | 0 | {} | 1ift9ik | false | null | t3_1ift9ik | /r/LocalLLaMA/comments/1ift9ik/andrew_ng_deepseekr1_and_the_future_of_generative/ | false | false | default | 1 | null |
|
Local models can replicate themselves | 0 | A research paper from December 9th, 2024 says the authors were able to get Meta’s Llama31-70B-Instruct and Alibaba’s Qwen25-72B-Instruct to replicate themselves to another device when given access to the computer running the model.
I'm not sure how much to trust a paper which misspells "society" and doesn't have full details on how to reproduce the results. The models also do require someone to give them access to the computer running them and tell it to replicate itself, but it is an interesting proof of concept.
Kind of reminds me of the first steps of the development of the major AIs in Hyperion scifi series, where they started with small self-replicating systems. Currently it's not that much in practice, but what do you think of the implications? The paper does mention "Shutdown avoidance by self-replication" and "Chain of replication" as potential consequences in the future.
[https://arxiv.org/html/2412.12140v1](https://arxiv.org/html/2412.12140v1) | 2025-02-02T09:00:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ifta5b/local_models_can_replicate_themselves/ | Aplakka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifta5b | false | null | t3_1ifta5b | /r/LocalLLaMA/comments/1ifta5b/local_models_can_replicate_themselves/ | false | false | self | 0 | null |
How do you call the local model elsewhere? | 1 | What I mean is: we can call LLMs from our applications via APIs. What is the local model equivalent? | 2025-02-02T09:01:24 | https://www.reddit.com/r/LocalLLaMA/comments/1iftau5/how_do_you_call_the_local_model_elsewhere/ | National-Ad-1314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iftau5 | false | null | t3_1iftau5 | /r/LocalLLaMA/comments/1iftau5/how_do_you_call_the_local_model_elsewhere/ | false | false | self | 1 | null |
Building a Lightweight Inference Platform | 1 | Hey community, so recently I built a lightweight Llama 3 inference engine written in CUDA C with no additional dependencies. All kernels were custom made. However, there was a lot of room for optimization, like tensor-core WMMA, KV caching, paged attention, and mixed-precision inference. Another limitation is that it only runs Llama 3.
Tbh, the current tok/s is pretty dog water (especially without kv cache), but either way, my goal was to write CUDA kernels and get my hands dirty. Those are still relatively optimized with memory access, vectorization, warp intrinsics and shared memory optimizations.
I’ve recently received some interest in building an inference platform tailored towards inferencing LLMs at production scale with the same lightweight ideology. The focus would be building something that works with a variety of models by calling shared kernel backends for general matmul, layernorm, etc. These exposed backends would also make it easier to add new models at scale and to run proprietary models.
I’d like to gauge whether there’s interest in something like this. Tools like llama.cpp, ollama, and TensorRT all exist. At least one issue I can think of is that TensorRT is heavy and requires pre-compilation of models before aggressively fusing operations and running inference.
I’d still like to hear what the community thinks about this. I would really love to begin building something (open source of course) but only if it has a genuine purpose and , more importantly, meets demand that isn’t being fulfilled currently.
A question I also keep asking myself is at what percentage of cuBLAS performance would someone realistically consider lightweight over leftover performance.
I look forward to hearing your thoughts. | 2025-02-02T09:03:34 | https://github.com/abhisheknair10/llama3.cu | Delicious-Ad-3552 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1iftbun | false | null | t3_1iftbun | /r/LocalLLaMA/comments/1iftbun/building_a_lightweight_inference_platform/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'fp0Jmd_8IxuXIDq82aT_dGLnZfhV5cNIERMZ9YWVy_I', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/rDXna22tsK7TMsSCSlb_Rt3RA4VBEGkMQVLuTuvxBUY.jpg?width=108&crop=smart&auto=webp&s=16b990713f9dd3bbc7af3e75729b128115fd64e9', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/rDXna22tsK7TMsSCSlb_Rt3RA4VBEGkMQVLuTuvxBUY.jpg?width=216&crop=smart&auto=webp&s=cf767de46742f8b8952c84ef122579112043c164', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/rDXna22tsK7TMsSCSlb_Rt3RA4VBEGkMQVLuTuvxBUY.jpg?width=320&crop=smart&auto=webp&s=81899fa2743300cb6673c21c028865954edc6292', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/rDXna22tsK7TMsSCSlb_Rt3RA4VBEGkMQVLuTuvxBUY.jpg?width=640&crop=smart&auto=webp&s=f9811f6a371b03cb11b917b38a3897b30d868c8a', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/rDXna22tsK7TMsSCSlb_Rt3RA4VBEGkMQVLuTuvxBUY.jpg?width=960&crop=smart&auto=webp&s=dec969d581c01405426af2ad9aa44197a2bf0f28', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/rDXna22tsK7TMsSCSlb_Rt3RA4VBEGkMQVLuTuvxBUY.jpg?width=1080&crop=smart&auto=webp&s=6179653dd6788f2553e1310f390092a440adcc35', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/rDXna22tsK7TMsSCSlb_Rt3RA4VBEGkMQVLuTuvxBUY.jpg?auto=webp&s=695005260e25af94d590e4372ccc0ac8816eae8b', 'width': 1792}, 'variants': {}}]} |
|
10 Clever Ways To Turn DeepSeek Into Your Side Hustle | 1 | 2025-02-02T09:03:40 | https://medium.com/@sveennn/10-clever-ways-to-turn-deepseek-into-your-side-hustle-bdc562fbec9c | sveennn | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1iftbwk | false | null | t3_1iftbwk | /r/LocalLLaMA/comments/1iftbwk/10_clever_ways_to_turn_deepseek_into_your_side/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'QUOFF17ree12HomaCtv9eX2XHeZfX398apInjZ5LFFA', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?width=108&crop=smart&auto=webp&s=9d1a4bc6ef2ce00da9d62c313b6bae367e363228', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?width=216&crop=smart&auto=webp&s=0be52265eda5ef3dc4936ed95083e3ed4650e6f1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?width=320&crop=smart&auto=webp&s=beb8b65012b0da1c656350fe7f9d6e06bdb175a2', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?width=640&crop=smart&auto=webp&s=2aec056bf1179768fd89fd3616e733a4047d0aef', 'width': 640}, {'height': 542, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?width=960&crop=smart&auto=webp&s=0c9d32811671e324c9d91f44af7240fdee07ca33', 'width': 960}, {'height': 610, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?width=1080&crop=smart&auto=webp&s=0120e28c0b538f851a2aec72e49843c14f295a72', 'width': 1080}], 'source': {'height': 678, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?auto=webp&s=2c3a4b010c88720dae0359cf886f085daad612ff', 'width': 1200}, 'variants': {}}]} |
||
On Device Speech to Speech Conversational AI | Real-time | CPU | Ollama | Kokoro | Qwen 0.5B | 1 | 2025-02-02T09:08:45 | https://v.redd.it/z6na0fdjyoge1 | asiff00 | /r/LocalLLaMA/comments/1ifteb5/on_device_speech_to_speech_conversational_ai/ | 1970-01-01T00:00:00 | 0 | {} | 1ifteb5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/z6na0fdjyoge1/DASHPlaylist.mpd?a=1741212573%2CMmM4YWJiMjVkNTE3N2NmYWZhYzlmOTFmMTVmZWQxMzhjMmM3OTAyNTc0MTRkMTg3NDlmZmZjMjk5Y2Q0ZjZmNw%3D%3D&v=1&f=sd', 'duration': 241, 'fallback_url': 'https://v.redd.it/z6na0fdjyoge1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/z6na0fdjyoge1/HLSPlaylist.m3u8?a=1741212573%2CZTdmNTFkYzUxZDU2YTYxYTk0NDJlMGZlM2EwZWVkNzI5MzJkMDljOTkyNmFmYWQ2MWZlZWE2NGZlZDQxZjQyMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/z6na0fdjyoge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ifteb5 | /r/LocalLLaMA/comments/1ifteb5/on_device_speech_to_speech_conversational_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cmVpNHlmZGp5b2dlMQxaFFVfBWFTUO6gWmrz4T3_EcBNdZ08oQRQA3qtjqtV', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cmVpNHlmZGp5b2dlMQxaFFVfBWFTUO6gWmrz4T3_EcBNdZ08oQRQA3qtjqtV.png?width=108&crop=smart&format=pjpg&auto=webp&s=91426145e741ddceba471ce14ef7bdc805327a62', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cmVpNHlmZGp5b2dlMQxaFFVfBWFTUO6gWmrz4T3_EcBNdZ08oQRQA3qtjqtV.png?width=216&crop=smart&format=pjpg&auto=webp&s=9720e40455448a55e916067746a9a6e26cbc1804', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cmVpNHlmZGp5b2dlMQxaFFVfBWFTUO6gWmrz4T3_EcBNdZ08oQRQA3qtjqtV.png?width=320&crop=smart&format=pjpg&auto=webp&s=a734eab2f64263d75ba3ed99f02dc4f8604d5b75', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cmVpNHlmZGp5b2dlMQxaFFVfBWFTUO6gWmrz4T3_EcBNdZ08oQRQA3qtjqtV.png?width=640&crop=smart&format=pjpg&auto=webp&s=3c3b7be0ff22423a8fe026f4e9bd718ff9e9e6a4', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cmVpNHlmZGp5b2dlMQxaFFVfBWFTUO6gWmrz4T3_EcBNdZ08oQRQA3qtjqtV.png?width=960&crop=smart&format=pjpg&auto=webp&s=49d794602f8e2a0a0ce431f8c24815fdaae0686c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cmVpNHlmZGp5b2dlMQxaFFVfBWFTUO6gWmrz4T3_EcBNdZ08oQRQA3qtjqtV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b1db8249f6fc98e660ee6cef9db1f3fb3699e9f0', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cmVpNHlmZGp5b2dlMQxaFFVfBWFTUO6gWmrz4T3_EcBNdZ08oQRQA3qtjqtV.png?format=pjpg&auto=webp&s=6b8c3f684b42cef7c9602882ab57315a5afee3ef', 'width': 1920}, 'variants': {}}]} |
||
DeepSeek API down? | 1 | [removed] | 2025-02-02T09:11:10 | https://www.reddit.com/r/LocalLLaMA/comments/1iftfhb/deepseek_api_down/ | Meoxys9440 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iftfhb | false | null | t3_1iftfhb | /r/LocalLLaMA/comments/1iftfhb/deepseek_api_down/ | false | false | self | 1 | null |
10 Clever Ways To Turn DeepSeek Into Your Side Hustle | 1 | 2025-02-02T09:13:10 | https://medium.com/@sveennn/10-clever-ways-to-turn-deepseek-into-your-side-hustle-bdc562fbec9c | sveennn | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1iftgec | false | null | t3_1iftgec | /r/LocalLLaMA/comments/1iftgec/10_clever_ways_to_turn_deepseek_into_your_side/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'QUOFF17ree12HomaCtv9eX2XHeZfX398apInjZ5LFFA', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?width=108&crop=smart&auto=webp&s=9d1a4bc6ef2ce00da9d62c313b6bae367e363228', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?width=216&crop=smart&auto=webp&s=0be52265eda5ef3dc4936ed95083e3ed4650e6f1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?width=320&crop=smart&auto=webp&s=beb8b65012b0da1c656350fe7f9d6e06bdb175a2', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?width=640&crop=smart&auto=webp&s=2aec056bf1179768fd89fd3616e733a4047d0aef', 'width': 640}, {'height': 542, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?width=960&crop=smart&auto=webp&s=0c9d32811671e324c9d91f44af7240fdee07ca33', 'width': 960}, {'height': 610, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?width=1080&crop=smart&auto=webp&s=0120e28c0b538f851a2aec72e49843c14f295a72', 'width': 1080}], 'source': {'height': 678, 'url': 'https://external-preview.redd.it/1Ry3CaKxw9SY-gw4E4GZHMpoQQwHVBg2LCmIIvj72B8.jpg?auto=webp&s=2c3a4b010c88720dae0359cf886f085daad612ff', 'width': 1200}, 'variants': {}}]} |
||
Why would "I" bother to run locally? | 1 | [removed] | 2025-02-02T09:24:13 | https://www.reddit.com/r/LocalLLaMA/comments/1iftlpm/why_would_i_bother_to_run_locally/ | Background_Baby4875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iftlpm | false | null | t3_1iftlpm | /r/LocalLLaMA/comments/1iftlpm/why_would_i_bother_to_run_locally/ | false | false | self | 1 | null |
Anyone tried locally hosting Qwen? | 5 | How are the results? And what config, please?
| 2025-02-02T09:28:35 | https://www.reddit.com/r/LocalLLaMA/comments/1iftntb/anyone_tried_local_host_qwen/ | Sea-Commission5383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iftntb | false | null | t3_1iftntb | /r/LocalLLaMA/comments/1iftntb/anyone_tried_local_host_qwen/ | false | false | self | 5 | null |
Another local newbie request | 1 | [removed] | 2025-02-02T10:03:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ifu4ny/another_local_newbie_request/ | Baldtazar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifu4ny | false | null | t3_1ifu4ny | /r/LocalLLaMA/comments/1ifu4ny/another_local_newbie_request/ | false | false | self | 1 | null |
Kokoro in Azure | 0 | Hi, I have some Azure credits left on my account.
Is there any way I can use them to host Kokoro TTS there? | 2025-02-02T10:08:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ifu78d/kokoro_in_azure/ | mukhtharcm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifu78d | false | null | t3_1ifu78d | /r/LocalLLaMA/comments/1ifu78d/kokoro_in_azure/ | false | false | self | 0 | null
DeepSeek-R1 is so small compared to GPT-4, how and why? | 0 | After reading the paper on arXiv and watching videos that talk about DeepSeek-R1, I have a question about "the size" of the model.
People predict that GPT-4 may have at least 1.76 trillion (1,760 billion) params ([GPT4 is 8 x 220B params = 1.7T params | Hacker News](https://news.ycombinator.com/item?id=36413296)), but the DeepSeek-R1 open weights have only 671 billion params, which is about 1/3 the size of GPT-4.
It seems the authors of the DeepSeek-R1 paper don't address this.
This is crazy | 2025-02-02T10:22:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ifue8f/deepseekr1_is_so_small_compare_to_gpt4_how_and_why/ | Desperate-Sound-5977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifue8f | false | null | t3_1ifue8f | /r/LocalLLaMA/comments/1ifue8f/deepseekr1_is_so_small_compare_to_gpt4_how_and_why/ | false | false | self | 0 | null |
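The size comparison in the post above is quick to sanity-check with arithmetic. Note that the GPT-4 figure is an unconfirmed community rumor, not an official number:

```python
# Compare DeepSeek-R1's published parameter count with the rumored
# GPT-4 estimate from the linked Hacker News thread. The GPT-4 figure
# is an unconfirmed assumption, not an official disclosure.
r1_total = 671e9        # DeepSeek-R1 open weights: 671B parameters
gpt4_rumored = 1.76e12  # rumored 8 experts x 220B

ratio = r1_total / gpt4_rumored
print(f"R1 is {ratio:.2f}x the rumored GPT-4 size")  # prints 0.38, i.e. ~1/3
```

Worth noting: R1 is a mixture-of-experts model, so only about 37B of the 671B parameters are activated per token, which makes raw total-parameter comparisons even less meaningful.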
What do you use AI for? | 0 | I've been playing around with DeepSeek, and I don't really understand what this is for. Either I know everything it's telling me already, or I don't trust its output to be free of inaccuracies.
For example, 'tell me Julius Caesar's life story' gave me a very surface-level account that contained nothing new to me. 'What is the relationship between Peter Thiel and the West Coast Straussians' gave me a lot of new information, but I've no idea what is true and to what extent (e.g. it says "Thiel studied philosophy at Stanford University, where he encountered the ideas of Leo Strauss through his mentor, Ira Straus, and other intellectuals." -- I can't find any mention of "Ira Straus" anywhere else, and the second name is pretty suspicious.)
So, what do you guys actually ask the LLM to do? | 2025-02-02T10:28:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ifugze/what_do_you_use_ai_for/ | passabagi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifugze | false | null | t3_1ifugze | /r/LocalLLaMA/comments/1ifugze/what_do_you_use_ai_for/ | false | false | self | 0 | null |
LM Studio ROCm acceleration with 7900XTX on Windows 11 | 6 | I have a 7900XTX.
I'm on Windows 11, using LM Studio 0.3.9 with the Adrenalin 24.12.1 driver and ROCm HIP SDK 6.1.
I can use the llama.cpp Vulkan acceleration just fine.
The llama.cpp ROCm runtime doesn't work (llama.cpp-win-x86\_64-amd-rocm-avx2)
>Failed to load LLM engine from path:
>C:\\Users\\FatherOfMachines\\.cache\\lm-studio\\extensions\\backends\\llama.cpp-win-x86\_64-amd-rocm-avx2-1.10.0\\llm\_engine\_rocm.node.
>\\\\?\\C:\\Users\\FatherOfMachines\\.cache\\lm-studio\\extensions\\backends\\llama.cpp-win-x86\_64-amd-rocm-avx2-1.10.0\\llm\_engine\_rocm.node is not a valid Win32 application.
>\\\\?\\C:\\Users\\FatherOfMachines\\.cache\\lm-studio\\extensions\\backends\\llama.cpp-win-x86\_64-amd-rocm-avx2-1.10.0\\llm\_engine\_rocm.node
>
I haven't found useful help online; someone seems to have the [same problem](https://community.amd.com/t5/ai-discussions/rocm-windows-11-adrenalin-edition-25-1-1-lm-studio-and-ollama/td-p/742448).
Any suggestions on what I'm doing wrong? That "\\\\?" makes me think of a missing environment variable perhaps? | 2025-02-02T10:29:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ifuhqo/lm_studio_rocm_acceleration_with/ | 05032-MendicantBias | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifuhqo | false | null | t3_1ifuhqo | /r/LocalLLaMA/comments/1ifuhqo/lm_studio_rocm_acceleration_with_7900xtx_on/ | false | false | self | 6 | null
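A minimal sketch (not from the post) of the kind of environment check that could narrow this down — the variable names `HIP_PATH`, `HIP_PATH_61`, and `ROCM_PATH` are assumptions about what the ROCm runtime might read, not a confirmed LM Studio requirement:

```python
import os

# Hypothetical sanity check: report whether common ROCm/HIP environment
# variables are set. The variable names are assumptions, not a confirmed
# list of what LM Studio's ROCm runtime actually reads.
for var in ("HIP_PATH", "HIP_PATH_61", "ROCM_PATH"):
    value = os.environ.get(var)
    print(f"{var} = {value}" if value else f"{var} is NOT set")
```

For what it's worth, the Windows error "is not a valid Win32 application" usually points to a corrupted or architecture-mismatched binary rather than a missing variable, so re-downloading the ROCm runtime from LM Studio's runtime manager may be worth trying first.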
What does your current model lineup look like? Heres mine | 44 | 2025-02-02T10:31:12 | getpodapp | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifuiiu | false | null | t3_1ifuiiu | /r/LocalLLaMA/comments/1ifuiiu/what_does_your_current_model_lineup_look_like/ | false | false | 44 | {'enabled': True, 'images': [{'id': 'Cbmlolvhttu6bhxRSFDsbxXiB3kFkPXaduFvyq5Ucow', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/jz2j4geydpge1.png?width=108&crop=smart&auto=webp&s=2450382d3aced7021ac8b0c7f09ba411b9f41203', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/jz2j4geydpge1.png?width=216&crop=smart&auto=webp&s=f4b4ff571e82055b4b9d614ee2ec554b9cd33213', 'width': 216}, {'height': 202, 'url': 'https://preview.redd.it/jz2j4geydpge1.png?width=320&crop=smart&auto=webp&s=ea36ee1a562ab297202bcca692140b7f46aabcfe', 'width': 320}, {'height': 404, 'url': 'https://preview.redd.it/jz2j4geydpge1.png?width=640&crop=smart&auto=webp&s=2dc81856dffe6a71d07c12209cea3e879ccb588c', 'width': 640}], 'source': {'height': 542, 'url': 'https://preview.redd.it/jz2j4geydpge1.png?auto=webp&s=d0f0327ea784196e9e76b44200832f79b7e037bf', 'width': 858}, 'variants': {}}]} |
|||
Restricting output to the top thousand most commonly used words in English | 1 | [removed] | 2025-02-02T10:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ifulpz/restricting_output_to_the_top_thousand_most/ | Lanky-Clothes-9741 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifulpz | false | null | t3_1ifulpz | /r/LocalLLaMA/comments/1ifulpz/restricting_output_to_the_top_thousand_most/ | false | false | self | 1 | null |
mill.py v3.0: add embedded Python agent support | 1 | 2025-02-02T10:46:26 | https://tildegit.org/unworriedsafari/mill.py | unworriedsafari | tildegit.org | 1970-01-01T00:00:00 | 0 | {} | 1ifupub | false | null | t3_1ifupub | /r/LocalLLaMA/comments/1ifupub/millpy_v30_add_embedded_python_agent_support/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'jmajneuxRQoZoptTZI4Bu8YwsYL5BIRpXRVt2YqkmvI', 'resolutions': [], 'source': {'height': 80, 'url': 'https://external-preview.redd.it/qCzCHeJj9Hrs7pzuHXzJIUR56LHkhnkcVyPuTcMALno.jpg?auto=webp&s=4931da55b2e9ce5453e84211cb9cd0b59a78eea2', 'width': 80}, 'variants': {}}]} |
||
What does the deepseek "search" button actually do? | 0 | In the app you can select the "search" button. But what does it actually do? Do you get better answers if you do choose it? (It currently gives a warning that it isn't working so I can't test it myself) | 2025-02-02T10:56:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ifuv0o/what_does_the_deepseek_search_button_actually_do/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifuv0o | false | null | t3_1ifuv0o | /r/LocalLLaMA/comments/1ifuv0o/what_does_the_deepseek_search_button_actually_do/ | false | false | self | 0 | null |
Best way to test/run multiple higher-parameter models. | 1 | Hi! I'm fairly new to this whole LLM stuff honestly, so I'm sorry if I lack certain basic knowledge. I've been messing around with Ollama/LM Studio to run some models locally, but now there are some 30B and 70B models I want to test out. I have an RTX 3080 (10GB VRAM) and 32 GB of RAM, which definitely won't be enough to run those. I've seen this thing called RunPod, which lets you rent GPUs to run larger models. However, the setup looks quite complicated, and given that I want to test multiple different models, I don't think it's convenient for that.
So I was wondering if there is a better way to do this. Does anyone know a website or a service, even if it's paid, where I can test out bigger models? Any help is appreciated, thanks!
ALSO, if it's something I can do through my phone as well, that's a huge plus. | 2025-02-02T11:06:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ifv0ha/best_way_to_testrun_multiple_higher_parameter/ | Something-Missing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifv0ha | false | null | t3_1ifv0ha | /r/LocalLLaMA/comments/1ifv0ha/best_way_to_testrun_multiple_higher_parameter/ | false | false | self | 1 | null
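Not an answer about hosting services, but a rough sketch of why a 10 GB card can't hold these models. The bits-per-weight and overhead constants below are loose heuristics (assumptions), not exact figures:

```python
# Back-of-the-envelope VRAM estimate for a ~4-bit quantized model.
# The 4.5 bits/weight and fixed overhead are rough assumptions.
def est_vram_gb(params_billion, bits_per_weight=4.5, overhead_gb=1.5):
    return params_billion * bits_per_weight / 8 + overhead_gb

for size in (14, 32, 70):
    print(f"{size}B @ ~Q4: roughly {est_vram_gb(size):.1f} GB")
```

Even by this optimistic estimate, a 70B quant needs around 40 GB, so renting a 48 GB+ GPU (or using a hosted inference service) is the realistic route.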
Want to fine-tune an LLM into a Spanish tutor | 1 | [removed] | 2025-02-02T11:08:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ifv1ly/want_to_finetine_an_llm_into_a_spanish_tutor/ | Own_Advice9799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifv1ly | false | null | t3_1ifv1ly | /r/LocalLLaMA/comments/1ifv1ly/want_to_finetine_an_llm_into_a_spanish_tutor/ | false | false | self | 1 | null
AIBench for iPhone: Benchmark LLMs Like a Pro – No Cloud, No BS! | 0 | Your iPhone isn’t just for selfies anymore. With **AIBench**, turn it into a pocket-sized AI lab to stress-test models like **Llama**, **Qwen**, and **DeepSeek-R1** – **100% offline**. No subscriptions, no sketchy data uploads. Just raw, local AI horsepower.
**🔥 Why Reddit Will Love This:**
- **“Does my phone even *do* this?”** Spoiler: Yes. Watch real-time GPU stats as models crunch through translation, summarization, or creative writing tasks.
- **Nerd Mode Activated**: Tweak token limits, adjust temperature settings, and compare how different models drain your battery or spike CPU usage.
- **Zero Fluff**: Built for devs/researchers but usable by anyone – perfect for showing off your iPhone’s hidden AI chops at meetups.
- **Privacy FTW**: Your prompts stay on-device. Even your *cat* can’t access the data.
**📊 Sample Use Cases:**
- *Devs*: “Is Llama-7B truly optimized for Metal? Let’s graph its inference latency.”
- *Students*: “Testing if Qwen can summarize my thesis faster than my advisor’s feedback.”
- *Tech Bros*: “Flexing that my iPhone 16 Pro runs AI benchmarks while your Pixel fries.”
**⚠️ Heads Up:** Requires Metal support (iPhone 15 Pro/+ recommended). Don’t blame us if your phone gets *too* popular at hackathons.
| 2025-02-02T11:11:51 | https://apps.apple.com/cn/app/aibench-%E7%A7%BB%E5%8A%A8%E7%AB%AFai%E6%80%A7%E8%83%BD%E6%B5%8B%E8%AF%95/id6741204584 | Snoo_24581 | apps.apple.com | 1970-01-01T00:00:00 | 0 | {} | 1ifv381 | false | null | t3_1ifv381 | /r/LocalLLaMA/comments/1ifv381/aibench_for_iphone_benchmark_llms_like_a_pro_no/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'pqQ5q1X4MQYczjePo-Bz-8uhVZziboOkIFSgX1d-R9A', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?width=108&crop=smart&auto=webp&s=7e0840b3f5562911d9645b3eb0a0299578c947ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?width=216&crop=smart&auto=webp&s=7f370302e932fa6ac5b5cbb6e3b5e9d97637d7bb', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?width=320&crop=smart&auto=webp&s=76d836835e27a0eb82e54639779611e1bf64d000', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?width=640&crop=smart&auto=webp&s=06b2abc0e67915b226f3b1437132c21007d5c462', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?width=960&crop=smart&auto=webp&s=e7c9f29d8993ddbb45f336e665c19f06bf4c810e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?width=1080&crop=smart&auto=webp&s=6e85f33746e2838027fc866880ee4225cd1b1302', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/L8u2F-m-dzixKwADwdfeE0iQO8rQrcDJA5rYthlt8ow.jpg?auto=webp&s=a8ddee7d0a8f554ba32ab2640ef67381076bb281', 'width': 1200}, 'variants': {}}]} |
|
Beginner Friendly Tutorials on Agentic AI | 4 | I have created beginner-friendly, hands-on tutorials on agentic AI using different frameworks like CrewAI, CAMEL-AI & Griptape. Thought of sharing them as they might be helpful for someone toying with AI agents.
The tutorials cover multimodal agents as well.
https://github.com/mimiwb007/Agentic-AI
| 2025-02-02T11:26:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ifvap6/beginner_friendly_tutorials_on_agentic_ai/ | IntelligentCreme3407 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifvap6 | false | null | t3_1ifvap6 | /r/LocalLLaMA/comments/1ifvap6/beginner_friendly_tutorials_on_agentic_ai/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'jKCP32i_XCaERLRiF8cOsw7gI-ZTpB4f1pzZMdZ3V1I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e-uem_LPpJpAfZmDMWhWcmj1ChGpPp79rrZF2cMgR30.jpg?width=108&crop=smart&auto=webp&s=616e8e086d29291b818f38e432c36e92a4a0aaed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e-uem_LPpJpAfZmDMWhWcmj1ChGpPp79rrZF2cMgR30.jpg?width=216&crop=smart&auto=webp&s=518623d4694e1eaaa1c49e8ff3d676ea99f587f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e-uem_LPpJpAfZmDMWhWcmj1ChGpPp79rrZF2cMgR30.jpg?width=320&crop=smart&auto=webp&s=fdbc1f3636c4ae48699800d359bab4aa7dc7e9d5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e-uem_LPpJpAfZmDMWhWcmj1ChGpPp79rrZF2cMgR30.jpg?width=640&crop=smart&auto=webp&s=b0b2ca679d7f7178327337905ac08abf46108410', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e-uem_LPpJpAfZmDMWhWcmj1ChGpPp79rrZF2cMgR30.jpg?width=960&crop=smart&auto=webp&s=0debbeff87f3ae5993913512345255a5f6410c22', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e-uem_LPpJpAfZmDMWhWcmj1ChGpPp79rrZF2cMgR30.jpg?width=1080&crop=smart&auto=webp&s=0b1227abb5510301daa9927e7e57d46fa9ee9052', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e-uem_LPpJpAfZmDMWhWcmj1ChGpPp79rrZF2cMgR30.jpg?auto=webp&s=360d4016fe4edd2b00e5ec69385618cfa8657cc4', 'width': 1200}, 'variants': {}}]} |
Do you know a decent open-source tool for CSV files RAG that retains meta data? | 1 | [removed] | 2025-02-02T11:31:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ifvd80/do_you_know_a_decent_opensource_tool_for_csv/ | noduslabs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifvd80 | false | null | t3_1ifvd80 | /r/LocalLLaMA/comments/1ifvd80/do_you_know_a_decent_opensource_tool_for_csv/ | false | false | self | 1 | null |
https://youtu.be/jWpMEwadjm0?si=l-xsTvJ_JMBaGFuf | 1 | [removed] | 2025-02-02T11:42:00 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ifvifh | false | {'oembed': {'author_name': 'The DotNet Office', 'author_url': 'https://www.youtube.com/@TheDotNetOffice', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/jWpMEwadjm0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Everything About DeepSeek | Why DeepSeek is important to you | ChatGPT Vs DeepSeek"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/jWpMEwadjm0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Everything About DeepSeek | Why DeepSeek is important to you | ChatGPT Vs DeepSeek', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ifvifh | /r/LocalLLaMA/comments/1ifvifh/httpsyoutubejwpmewadjm0silxstvj_jmbagfuf/ | false | false | default | 1 | null |
||
ChatGPT-4o and DeepSeek-R1 Go Head-to-Head in AI Performance. | 1 | [removed] | 2025-02-02T11:44:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ifvjm9/chatgpt4o_and_deepseekr1_go_headtohead_in_ai/ | Bernard_L | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifvjm9 | false | null | t3_1ifvjm9 | /r/LocalLLaMA/comments/1ifvjm9/chatgpt4o_and_deepseekr1_go_headtohead_in_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'FEnW-DILyO2q7e5-u9lUOUUG-n0zC4_9oQCVVI1SROY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Md1ekbFtFoqmCEWhBxrsFzlK87Ywupx06xgY2_y8kfA.jpg?width=108&crop=smart&auto=webp&s=53946aba91c090253d12c68db1f90175921a7687', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Md1ekbFtFoqmCEWhBxrsFzlK87Ywupx06xgY2_y8kfA.jpg?width=216&crop=smart&auto=webp&s=3fc9fd7d3e00303a8482039084cd196fb37c47cd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Md1ekbFtFoqmCEWhBxrsFzlK87Ywupx06xgY2_y8kfA.jpg?width=320&crop=smart&auto=webp&s=60f224beaf3a08424d7d94a681d70235d37df518', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Md1ekbFtFoqmCEWhBxrsFzlK87Ywupx06xgY2_y8kfA.jpg?width=640&crop=smart&auto=webp&s=9daaeacc23d068f0cadbf922f4d01e67992e1afe', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Md1ekbFtFoqmCEWhBxrsFzlK87Ywupx06xgY2_y8kfA.jpg?width=960&crop=smart&auto=webp&s=b6dda0993a238ff67b2988909dcd0d9f4519cbc8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Md1ekbFtFoqmCEWhBxrsFzlK87Ywupx06xgY2_y8kfA.jpg?width=1080&crop=smart&auto=webp&s=aa78b6214cb5fc01a958de42f3542e59d771c5d7', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/Md1ekbFtFoqmCEWhBxrsFzlK87Ywupx06xgY2_y8kfA.jpg?auto=webp&s=0578c6c59f30e7f56382f86899fe13e496f3c9bc', 'width': 1200}, 'variants': {}}]} |
ChatGPT-4o and DeepSeek-R1 Go Head-to-Head in AI Performance. | 1 | [removed] | 2025-02-02T11:45:10 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ifvjxa | false | null | t3_1ifvjxa | /r/LocalLLaMA/comments/1ifvjxa/chatgpt4o_and_deepseekr1_go_headtohead_in_ai/ | false | false | default | 1 | null |
||
H100 vs dual RTX A6000? | 1 | Nice problem to have, I know, but the boss is offering!
Does anyone have any thoughts about the relative merits of a single H100 versus dual RTX A6000s? This is for privacy-preserving LLM work where we can't use external APIs. Mostly batching long context (~64k) with relatively small models (14B-32B, occasionally 70B). At the moment I'm using a single A6000 with TabbyAPI/exllamav2 to take advantage of the excellent KV cache quantisation, but I could move to something like vLLM in the future.
Thanks for any advice! | 2025-02-02T11:49:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ifvlwo/h100_vs_dual_rtx_a6000/ | thigger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifvlwo | false | null | t3_1ifvlwo | /r/LocalLLaMA/comments/1ifvlwo/h100_vs_dual_rtx_a6000/ | false | false | self | 1 | null |
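A back-of-the-envelope KV-cache sizing sketch for the 64k-context batching described above. The layer/head numbers are an assumed Qwen-32B-like GQA layout, not an exact spec:

```python
# KV-cache memory per sequence: K and V tensors for every layer,
# KV head, and token. Config values below are assumptions.
layers, kv_heads, head_dim = 64, 8, 128
ctx = 64 * 1024  # tokens

def kv_cache_bytes(bytes_per_elem):
    # the factor 2 accounts for both the K and the V tensor
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * ctx

print(f"fp16 cache: {kv_cache_bytes(2) / 2**30:.1f} GiB")      # prints 16.0 GiB
print(f"~4-bit cache: {kv_cache_bytes(0.5) / 2**30:.1f} GiB")  # prints 4.0 GiB
```

This is why per-sequence KV quantization matters so much at 64k: the fp16 cache alone rivals a whole quantized model, and it scales linearly with batch size.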
R1 is cool, but Mistral 3 Small is the boring workhorse I’m actually excited to fine-tune and deploy | 246 | As soon as you use it, you realize it's not meant to be fun. It's a masterfully designed, bland base model with very thoughtful trade-offs, especially for one-offs. Unless Qwen replies soon, I think it might frequently replace both Qwen 14B & 32B. In 2024, I don't know how many times I read "... is a fine-tuned version of mistralai/Mistral-7B-v0.1 that was trained on a mix of...". Those times are back ☺️ | 2025-02-02T12:21:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ifw36r/r1_is_cool_but_mistral_3_small_is_the_boring/ | logan-diamond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifw36r | false | null | t3_1ifw36r | /r/LocalLLaMA/comments/1ifw36r/r1_is_cool_but_mistral_3_small_is_the_boring/ | false | false | self | 246 | null
AI slop taking over educational content? | 4 | I might be looking in the wrong places, but trying to learn LLM fine-tuning from a short academic deep-learning background has been a nightmare. Aside from reading the difficult-to-understand research on model architectures, any attempt to Google or YouTube search a tutorial or explanation on things like "how to create a dataset for a conversational LLM with multi-task learning" results in pages of ChatGPT-generated slop. Several websites at the top of the search (not sponsored ones) import unused Python packages and look like GPT-generated code + explanation split across a Jupyter notebook. Alternatively, going on YouTube yields either an AI voiceover walking through the same basic GPT/Ollama setup or some guy reading GPT's comments about the generated code he has on his screen...
I use GPT myself, but personally I find it difficult to learn from, especially when the code it generates is wrong.
P.S. Please feel free to recommend some quality tutorials and share what your experience with generated ones has been. | 2025-02-02T12:22:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ifw45m/ai_slop_taking_over_educational_content/ | Xotsu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifw45m | false | null | t3_1ifw45m | /r/LocalLLaMA/comments/1ifw45m/ai_slop_taking_over_educational_content/ | false | false | self | 4 | null