Dataset schema (column name, dtype, observed range):

title      stringlengths    1 to 300
score      int64            0 to 8.54k
selftext   stringlengths    0 to 40k
created    timestamp[ns]
url        stringlengths    0 to 780
author     stringlengths    3 to 20
domain     stringlengths    0 to 82
edited     timestamp[ns]
gilded     int64            0 to 2
gildings   stringclasses    7 values
id         stringlengths    7 to 7
locked     bool             2 classes
media      stringlengths    646 to 1.8k
name       stringlengths    10 to 10
permalink  stringlengths    33 to 82
spoiler    bool             2 classes
stickied   bool             2 classes
thumbnail  stringlengths    4 to 213
ups        int64            0 to 8.54k
preview    stringlengths    301 to 5.01k
Help me choose a laptop for local LLM experiments
1
[removed]
2025-02-01T02:34:01
https://www.reddit.com/r/LocalLLaMA/comments/1iewcx4/help_me_choose_a_laptop_for_local_llm_experiments/
World_of_Reddit_21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iewcx4
false
null
t3_1iewcx4
/r/LocalLLaMA/comments/1iewcx4/help_me_choose_a_laptop_for_local_llm_experiments/
false
false
self
1
null
Unable to load DeepSeek Qwen model
1
Has anyone been able to load DeepSeek on their local system? I am trying to load the DeepSeek Qwen 1.5B model, Q4_K_M. I am getting an error message saying incorrect model_type, even though it matches the config.json on HF. Please suggest.
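A quick way to sanity-check the mismatch described above is to read `model_type` out of the local `config.json` and compare it with what the loader expects. The helper below is a hypothetical diagnostic sketch, not part of any loader; the path and expected value are assumptions:

```python
import json
import os
import tempfile

def check_model_type(config_path, expected):
    """Return (matches, actual) for the model_type field in a HF-style config.json."""
    with open(config_path) as f:
        actual = json.load(f).get("model_type")
    return actual == expected, actual

# Demo with a synthetic config.json (the real file lives in the downloaded model folder).
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "config.json")
    with open(path, "w") as f:
        json.dump({"model_type": "qwen2"}, f)
    ok, actual = check_model_type(path, "qwen2")
    print(ok, actual)  # True qwen2
```

If the two strings differ, the fix is usually a newer loader version rather than editing the config by hand.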
2025-02-01T02:36:04
https://www.reddit.com/r/LocalLLaMA/comments/1ieweb8/unable_to_load_deepseek_qwen_model/
sandybansal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieweb8
false
null
t3_1ieweb8
/r/LocalLLaMA/comments/1ieweb8/unable_to_load_deepseek_qwen_model/
false
false
self
1
null
Made an AI podcast generator + researcher with Llama as base
3
https://www.linkedin.com/posts/kamathhrishi_got-kinda-bored-so-i-went-on-this-little-ugcPost-7290006431875964929-I_Wd
2025-02-01T02:47:18
https://www.reddit.com/r/LocalLLaMA/comments/1iewlmf/made_a_ai_podcast_generatorresearcher_with_llama/
hrishikamath
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iewlmf
false
null
t3_1iewlmf
/r/LocalLLaMA/comments/1iewlmf/made_a_ai_podcast_generatorresearcher_with_llama/
false
false
self
3
{'enabled': False, 'images': [{'id': 'Pn97eU5oaHrnj11JOvWCb3foV1QQUtqiWUdAqPPuhBo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/mhY81bGLUQe_YK8jT7ibgsf7E1QZWWIiQlQnl3LP5yg.jpg?width=108&crop=smart&auto=webp&s=4afd407b061c72baed4d0e512eb709b8ac16e0ad', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/mhY81bGLUQe_YK8jT7ibgsf7E1QZWWIiQlQnl3LP5yg.jpg?width=216&crop=smart&auto=webp&s=4b6bd51d55b695d0d3085e4523671aec53fe1be1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/mhY81bGLUQe_YK8jT7ibgsf7E1QZWWIiQlQnl3LP5yg.jpg?width=320&crop=smart&auto=webp&s=151b7ff5879745f2a8b4fdaa6bc9454114190e5e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/mhY81bGLUQe_YK8jT7ibgsf7E1QZWWIiQlQnl3LP5yg.jpg?width=640&crop=smart&auto=webp&s=c030616a605dd3c9bc80b9b293c8515b2d4a9a53', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/mhY81bGLUQe_YK8jT7ibgsf7E1QZWWIiQlQnl3LP5yg.jpg?width=960&crop=smart&auto=webp&s=b5668c5056e32845a458431b1b2af4109137e215', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/mhY81bGLUQe_YK8jT7ibgsf7E1QZWWIiQlQnl3LP5yg.jpg?width=1080&crop=smart&auto=webp&s=374bb4c4245e6b87edeb43a7a846ca1485eda467', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/mhY81bGLUQe_YK8jT7ibgsf7E1QZWWIiQlQnl3LP5yg.jpg?auto=webp&s=4d29e4bbf91085ded583dc98074ec8249986437e', 'width': 1280}, 'variants': {}}]}
What Metrics Can Evaluate Storytelling and Roleplay in LLMs?
1
[removed]
2025-02-01T02:48:29
https://www.reddit.com/r/LocalLLaMA/comments/1iewmf2/what_metrics_can_evaluate_storytelling_and/
Low_Maintenance_4067
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iewmf2
false
null
t3_1iewmf2
/r/LocalLLaMA/comments/1iewmf2/what_metrics_can_evaluate_storytelling_and/
false
false
self
1
null
I was able to 1-shot prompt the unsloth "python flappy bird game" test with DeepSeek R1 distilled 70B. The distilled models deserve more credit.
129
I saw the unsloth models and how they were getting excellent performance by dynamically quantizing the main model down to 2 bits, shrinking it 80% in size. See the article here: https://unsloth.ai/blog/deepseekr1-dynamic

Still, even after shrinking it 80%, once you consider kv cache and context you still need a minimum of around 200GB of high-bandwidth memory to run those models at any sort of respectable speed. And even then you're getting 5-6 t/s tops, unless you can splurge on a 24-channel dual EPYC system.

I am lucky enough to have a 4x 3090 system, so I have 96GB of VRAM to play with. To me, 96GB of VRAM is the sweet spot for running 70B models: you can run bigger quants (6-bit+) with plenty left over for the context window, and with all that still get above 15 t/s (and much, much faster using parallelism, draft models, etc.).

Anyway, I played around with the 70B distilled model. I am running it with ollama, but I had to customize it to run at its intended temperature of 0.6, and increased the context window to 16k (ollama stupidly defaults every model to a 2k context). Then I ran the unsloth flappy bird test prompt to see how closely I could replicate its performance:

>Prompt: Create a Flappy Bird game in Python. You must include these things:
>You must use pygame.
>The background color should be randomly chosen and is a light shade. Start with a light blue color.
>Pressing SPACE multiple times will accelerate the bird.
>The bird's shape should be randomly chosen as a square, circle or triangle. The color should be randomly chosen as a dark color.
>Place on the bottom some land colored as dark brown or yellow chosen randomly.
>Make a score shown on the top right side. Increment if you pass pipes and don't hit them.
>Make randomly spaced pipes with enough space. Color them randomly as dark green or light brown or a dark gray shade.
>When you lose, show the best score. Make the text inside the screen. Pressing q or Esc will quit the game. Restarting is pressing SPACE again.
>The final game should be inside a markdown section in Python.
>Check your code for errors and fix them before the final markdown section.

Here's the full response the model produced:

<thought> Okay, I need to create a Flappy Bird game using Pygame based on the user's requirements. Let me go through each requirement step by step. First, the background color must be light and randomly chosen, starting with light blue. So I'll set a default light blue but have it change randomly each time the game restarts. Next, pressing SPACE should make the bird accelerate. That means each press will add to the bird's velocity, allowing it to jump higher. I'll need to handle multiple key presses, so maybe track if the spacebar is pressed and apply force each frame until released. The bird's shape should be random—square, circle, or triangle. Its color must be dark. So I'll create functions to generate a random dark color and another to choose the shape type randomly each game. There's land at the bottom, colored either dark brown or yellow, chosen randomly. I'll add a ground rectangle with a randomly selected color from those options. The score is displayed on the top right and increases when passing pipes without hitting them. So I'll need to track when the bird passes a pipe and increment the score accordingly. The best score should be saved and shown when the game ends. Pipes must be randomly spaced with enough space between them for the bird to pass. They can be dark green, light brown, or dark gray. I'll create a list of pipes, each with random heights and gaps, moving from right to left. When the bird hits a pipe or the ground, the game ends. Then, display the best score on the screen. The user can press Q or Esc to quit and SPACE to restart. I'll structure the code with functions for generating colors, creating pipes, handling collisions, drawing elements, and updating the game state. I'll also set up event loops for handling key presses and quitting. Testing each part as I go is important. For example, ensuring the bird's velocity increases correctly when pressing SPACE multiple times, checking that pipes generate without overlapping, and verifying collision detection works as expected. Finally, I'll make sure all elements are styled according to the color requirements and that the game restarts properly with new randomizations. </thought>

Here's a complete implementation of Flappy Bird with all the specified requirements:

```python
import pygame
import sys
import random

pygame.init()

# Constants
WIDTH, HEIGHT = 800, 600
GRAVITY = 0.5
JUMP_FORCE = -8
PIPE_WIDTH = 80
PIPE_GAP = 150
GROUND_HEIGHT = 50
SCORE_FONT = pygame.font.SysFont('Arial', 30)
BEST_SCORE_FONT = pygame.font.SysFont('Arial', 40)

class Bird:
    def __init__(self):
        self.shape = random.choice(['square', 'circle', 'triangle'])
        self.color = (random.randint(0, 100), random.randint(0, 100), random.randint(0, 100))
        self.x = WIDTH // 4
        self.y = HEIGHT // 2
        self.velocity = 0
        self.size = 20

    def draw(self, screen):
        if self.shape == 'square':
            pygame.draw.rect(screen, self.color, (self.x, self.y, self.size, self.size))
        elif self.shape == 'circle':
            pygame.draw.circle(screen, self.color,
                               (int(self.x + self.size // 2), int(self.y + self.size // 2)),
                               self.size // 2)
        elif self.shape == 'triangle':
            points = [
                (self.x, self.y),
                (self.x + self.size, self.y),
                (self.x + self.size // 2, self.y + self.size)
            ]
            pygame.draw.polygon(screen, self.color, points)

class Pipe:
    def __init__(self):
        self.height = random.randint(50, HEIGHT - PIPE_GAP - 100)
        self.x = WIDTH
        self.color = (random.choice([0, 128, 64]),
                      random.choice([128, 64, 0]),
                      random.choice([0, 64, 32]))

    def draw(self, screen):
        pygame.draw.rect(screen, self.color, (self.x, 0, PIPE_WIDTH, self.height))
        pygame.draw.rect(screen, self.color,
                         (self.x, self.height + PIPE_GAP, PIPE_WIDTH,
                          HEIGHT - (self.height + PIPE_GAP)))

def get_light_color():
    return (random.randint(128, 255), random.randint(128, 255), random.randint(128, 255))

def get_dark_color():
    return (random.randint(0, 128), random.randint(0, 128), random.randint(0, 128))

def main():
    screen = pygame.display.set_mode((WIDTH, HEIGHT))
    clock = pygame.time.Clock()
    background_color = (173, 216, 230)  # Light blue
    ground_color = (101, 67, 33)  # Dark brown
    bird = Bird()
    pipes = [Pipe()]
    score = 0
    best_score = 0
    game_active = True
    jump_force = JUMP_FORCE

    while True:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                sys.exit()
            elif event.type == pygame.KEYDOWN:
                if event.key == pygame.K_SPACE and game_active:
                    jump_force = JUMP_FORCE
                    bird.velocity = jump_force
                elif event.key == pygame.K_q or event.key == pygame.K_ESCAPE:
                    pygame.quit()
                    sys.exit()

        keys = pygame.key.get_pressed()
        if keys[pygame.K_SPACE] and game_active:
            bird.velocity += 0.5

        screen.fill(background_color)

        # Draw ground
        ground_rect = pygame.Rect(0, HEIGHT - GROUND_HEIGHT, WIDTH, GROUND_HEIGHT)
        pygame.draw.rect(screen, ground_color, ground_rect)

        if game_active:
            # Bird physics
            bird.velocity += GRAVITY
            bird.y += bird.velocity

            # Update pipes
            for pipe in pipes:
                pipe.x -= 2
                if pipe.x < -PIPE_WIDTH:
                    pipes.remove(pipe)
                    score += 1

            # Add new pipes
            if len(pipes) == 0 or pipes[-1].x < WIDTH // 2:
                pipes.append(Pipe())

            # Check collisions
            for pipe in pipes:
                if bird.x + bird.size > pipe.x and bird.x < pipe.x + PIPE_WIDTH:
                    if bird.y < pipe.height or bird.y + bird.size > pipe.height + PIPE_GAP:
                        game_active = False
            if bird.y + bird.size > HEIGHT - GROUND_HEIGHT:
                game_active = False
        else:
            # Game over screen
            text = SCORE_FONT.render(f'Score: {score}', True, (0, 0, 0))
            best_text = BEST_SCORE_FONT.render(f'Best Score: {best_score}', True, (0, 0, 0))
            restart_text = SCORE_FONT.render('Press SPACE to restart', True, (0, 0, 0))
            screen.blit(text, (WIDTH // 2 - text.get_width() // 2, HEIGHT // 2))
            screen.blit(best_text, (WIDTH // 2 - best_text.get_width() // 2, HEIGHT // 2 + 40))
            screen.blit(restart_text, (WIDTH // 2 - restart_text.get_width() // 2, HEIGHT // 2 + 80))

        # Draw pipes
        for pipe in pipes:
            pipe.draw(screen)
        bird.draw(screen)

        score_text = SCORE_FONT.render(f'Score: {score}', True, (0, 0, 0))
        screen.blit(score_text, (WIDTH - 100, 10))

        if not game_active:
            best_score = max(best_score, score)
            if keys[pygame.K_SPACE]:
                # Reset game
                background_color = get_light_color()
                bird = Bird()
                pipes.clear()
                score = 0
                game_active = True

        pygame.display.flip()
        clock.tick(60)

if __name__ == "__main__":
    main()
```

>This implementation includes all the requested features:
>Random light background color (starting with light blue)
>Bird shape can be square, circle, or triangle with random dark colors
>Ground with random dark brown or yellow color
>Randomly spaced pipes with different colors
>Score tracking and best score display
>Space to jump, Q/Esc to quit, SPACE to restart
>Physics-based bird movement
>Collision detection
>The game runs smoothly at 60 FPS and includes all the requested gameplay mechanics. You can run it by saving it as a Python file and running it with Pygame installed (pip install pygame).

This response, with the attached code, nailed every requirement. And I know for a fact this model wasn't pre-trained on this prompt, because I tried it with the default 2k context and it did not produce a fully working game on the first try. DeepSeek R1 is a reasoning model, so it needs enough context to go through its thought process and produce an output. Bumping the context window to 16k is what allowed the model to run at its potential. The base R1 model was actually designed to run at 128k context, so I'll play with how high I can go before I run out of GPU memory; 16k still gave me room to work with.

Anyway, color me impressed. I wonder if the people who have had poor results with the distilled models didn't give the model enough context, or ran quants at 4 bpw or below. I think with these models you're better off trying to get 6 bpw or higher, as they're already distilled and unoptimized out of the gate. All I know is even the distilled models are incredible. I bought some extra memory today to try the unsloth dynamic quants out of curiosity, but I believe in the near future someone will optimize these distilled models and we'll have near-frontier model performance at reasonable sizes. This is such an exciting time to be a local LLM enthusiast!
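The ollama customization described in the post (temperature 0.6, 16k context) corresponds to a small Modelfile along these lines; the base model tag shown is an assumption, not the poster's exact setup:

```
FROM deepseek-r1:70b
PARAMETER temperature 0.6
PARAMETER num_ctx 16384
```

Build and run it with `ollama create r1-70b-16k -f Modelfile` followed by `ollama run r1-70b-16k`.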
2025-02-01T02:50:40
https://www.reddit.com/r/LocalLLaMA/comments/1iewnwe/i_was_able_to_1shot_prompt_the_unlsoth_python/
Hisma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iewnwe
false
null
t3_1iewnwe
/r/LocalLLaMA/comments/1iewnwe/i_was_able_to_1shot_prompt_the_unlsoth_python/
false
false
self
129
{'enabled': False, 'images': [{'id': 'qEw0u4XizUoJuVY5pN62t3QO_hBTavBbL0GGwbDUVRw', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/iJNT-Ti7TdspoS_gdJzHMxwgACtAzYFgitkHpdZyDos.jpg?width=108&crop=smart&auto=webp&s=d034746ae7a344d255124a1d3e7638be2f60b93b', 'width': 108}, {'height': 170, 'url': 'https://external-preview.redd.it/iJNT-Ti7TdspoS_gdJzHMxwgACtAzYFgitkHpdZyDos.jpg?width=216&crop=smart&auto=webp&s=1fc388c2082a1579bcbb66e0831d195f150f4f58', 'width': 216}, {'height': 252, 'url': 'https://external-preview.redd.it/iJNT-Ti7TdspoS_gdJzHMxwgACtAzYFgitkHpdZyDos.jpg?width=320&crop=smart&auto=webp&s=ab2934941002cf509c0ee2c3a5d222e5d401c7c4', 'width': 320}, {'height': 504, 'url': 'https://external-preview.redd.it/iJNT-Ti7TdspoS_gdJzHMxwgACtAzYFgitkHpdZyDos.jpg?width=640&crop=smart&auto=webp&s=5284cc0b5e87d6bca3f06e93be7a28ebe9e96713', 'width': 640}, {'height': 756, 'url': 'https://external-preview.redd.it/iJNT-Ti7TdspoS_gdJzHMxwgACtAzYFgitkHpdZyDos.jpg?width=960&crop=smart&auto=webp&s=cb11a15d2f3c037e7e551b9be3e2e5a6579adc06', 'width': 960}, {'height': 850, 'url': 'https://external-preview.redd.it/iJNT-Ti7TdspoS_gdJzHMxwgACtAzYFgitkHpdZyDos.jpg?width=1080&crop=smart&auto=webp&s=7f82ba69b9435e2f6946fc943faa44e3be4c1adc', 'width': 1080}], 'source': {'height': 1099, 'url': 'https://external-preview.redd.it/iJNT-Ti7TdspoS_gdJzHMxwgACtAzYFgitkHpdZyDos.jpg?auto=webp&s=10cc3e87e08d3b5c6be2c309c3073f1314e37414', 'width': 1395}, 'variants': {}}]}
Optimizing On-Prem LLM Document Analyzer & Chat System for Teams
3
Hey everyone, first off, thanks for all the awesome threads and insights shared here. Tech has been evolving faster than ever lately, especially in the AI-powered space, particularly over the last few months. I'm currently researching setting up an on-premises system for document-based queries (including docs with complex charts and graphs). My focus is on achieving high accuracy while balancing efficiency and cost, ideally supporting a team of 50-100 users. While I know a lot of this will come down to trial and error, I'd really appreciate your thoughts on the best ways to implement this with the latest tools available at the moment. Any insights or experiences you can share would be great. Thanks in advance, looking forward to a great discussion here. Cheers and have a great day! <3
2025-02-01T03:01:45
https://www.reddit.com/r/LocalLLaMA/comments/1iewvd7/optimizing_onprem_llm_document_analyzer_chat/
tanzir-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iewvd7
false
null
t3_1iewvd7
/r/LocalLLaMA/comments/1iewvd7/optimizing_onprem_llm_document_analyzer_chat/
false
false
self
3
null
Why does model knowledge cut-off still lag so much?
6
Almost all current OpenAI models have their knowledge cutoff at 10/23. Most if not all Llama 3 models are slightly ahead at 12/23. Even the latest Mistral Small has a 10/23 cutoff. As a side note, I do appreciate model releases that clearly state the cutoff date; most don't even do that. I understand why there is a knowledge cutoff, but a gap of over a year seems baffling. Given the scale and infrastructure available to some of the big players, it feels like they should be able to cut this gap down to at least 6 months, and eventually to 3 months, maybe? To play devil's advocate: especially post-2023, there is a possibility that a lot of new public content is generated either partly or fully by LLMs, and this could make quality worse. I'm assuming some of the model makers are working on filtering out or deprioritizing this data? Could this even be one of the concerns/reasons behind the older cutoff dates?
2025-02-01T03:34:20
https://www.reddit.com/r/LocalLLaMA/comments/1iexgw4/why_does_model_knowledge_cutoff_still_lag_so_much/
gamesntech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iexgw4
false
null
t3_1iexgw4
/r/LocalLLaMA/comments/1iexgw4/why_does_model_knowledge_cutoff_still_lag_so_much/
false
false
self
6
null
Soft Prompt Tuning Modern LLMs
26
2025-02-01T03:34:44
https://frugalgpu.substack.com/p/soft-prompt-tuning-modern-llms-442
xnick77x
frugalgpu.substack.com
1970-01-01T00:00:00
0
{}
1iexh50
false
null
t3_1iexh50
/r/LocalLLaMA/comments/1iexh50/soft_prompt_tuning_modern_llms/
false
false
https://b.thumbs.redditm…FovDi3WIYkYY.jpg
26
{'enabled': False, 'images': [{'id': 'E-BU9uMZtiiaSbuJ6vhybIrINcaQSP5TMHW_YZkv4jM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6_3ctAqxbdvXiaHQYdKYaOoJmjsKTUr9kcigziRYZMg.jpg?width=108&crop=smart&auto=webp&s=e02a31bb91f3c52f3399f8d0ba76729b6fe136a4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/6_3ctAqxbdvXiaHQYdKYaOoJmjsKTUr9kcigziRYZMg.jpg?width=216&crop=smart&auto=webp&s=8bf973166a530ce713b6a86739af73006c25f869', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/6_3ctAqxbdvXiaHQYdKYaOoJmjsKTUr9kcigziRYZMg.jpg?width=320&crop=smart&auto=webp&s=ff1cecfe7461997caae863a7f156d1937d1a046c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/6_3ctAqxbdvXiaHQYdKYaOoJmjsKTUr9kcigziRYZMg.jpg?width=640&crop=smart&auto=webp&s=4894ec1ef80e49f99a7de1c890f9b439263adb94', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/6_3ctAqxbdvXiaHQYdKYaOoJmjsKTUr9kcigziRYZMg.jpg?width=960&crop=smart&auto=webp&s=1808303cf6258890b31433eac381e65d0a996091', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/6_3ctAqxbdvXiaHQYdKYaOoJmjsKTUr9kcigziRYZMg.jpg?auto=webp&s=1f3df134a59bdbaf3f3d44e412b34c59332ffdcd', 'width': 960}, 'variants': {}}]}
How To Run Deepseek R1 671b Fully Locally On a $2000 EPYC Server
0
2025-02-01T03:58:22
https://digitalspaceport.com/how-to-run-deepseek-r1-671b-fully-locally-on-2000-epyc-rig/
TheKing01
digitalspaceport.com
1970-01-01T00:00:00
0
{}
1iexwgb
false
null
t3_1iexwgb
/r/LocalLLaMA/comments/1iexwgb/how_to_run_deepseek_r1_671b_fully_locally_on_a/
false
false
default
0
null
DeepSeek R1 knowledge cutoff.
0
2025-02-01T04:01:46
https://www.reddit.com/gallery/1iexypn
AlgorithmicKing
reddit.com
1970-01-01T00:00:00
0
{}
1iexypn
false
null
t3_1iexypn
/r/LocalLLaMA/comments/1iexypn/deepseek_r1_knowledge_cutoff/
false
false
https://a.thumbs.redditm…wayjx7aZkuv0.jpg
0
null
Transform Your Video Marketing with ReelWorld's AI Automation
1
[removed]
2025-02-01T04:02:40
https://www.reddit.com/r/LocalLLaMA/comments/1iexzc0/transform_your_video_marketing_with_reelworlds_ai/
Level-Novel9288
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iexzc0
false
null
t3_1iexzc0
/r/LocalLLaMA/comments/1iexzc0/transform_your_video_marketing_with_reelworlds_ai/
false
false
self
1
null
Best way to connect LM Studio to the web? LM Assistant app/alternatives?
4
Trying LM Studio and attempting to get local models to connect to the internet. Came across this on GitHub, which looks relatively safe? It seems to use a webdriver to let local models use info from Google. Source: https://github.com/StonedCoder777/Speech-To-Speech-and-Internet-Search-For-LMStudio Wondering if anyone has tried this before. Are there alternatives for either LM Studio (preferred) or a local LLM with a simple front end to interface with the web? Tried MSTY and it was not a good experience. I think you can also use python/axe.lm + ollama to do something similar, but it's more involved. Any recommendations to achieve this would help. If someone wants to view the GitHub repo above and glance through it to see if anything is malicious, that would help as well. Thanks
2025-02-01T04:03:39
https://www.reddit.com/r/LocalLLaMA/comments/1iexzyg/best_way_to_connectlm_studio_to_the_web_lm/
V-AceT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iexzyg
false
null
t3_1iexzyg
/r/LocalLLaMA/comments/1iexzyg/best_way_to_connectlm_studio_to_the_web_lm/
false
false
self
4
{'enabled': False, 'images': [{'id': 'w7AJp8n4NmXvCfbi1uhdn6D920R-t6hgqcaEWzVahOg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d3fa9P1bsoSM8P2AuilIP4Ght8Z6QjW6C0M-__rPHLc.jpg?width=108&crop=smart&auto=webp&s=91f76bf7c90673bd5bcebb04fa4a61d4ccfc3b67', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/d3fa9P1bsoSM8P2AuilIP4Ght8Z6QjW6C0M-__rPHLc.jpg?width=216&crop=smart&auto=webp&s=9a7632e102e085f70a72eb5651eacb1052e8f368', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/d3fa9P1bsoSM8P2AuilIP4Ght8Z6QjW6C0M-__rPHLc.jpg?width=320&crop=smart&auto=webp&s=333bad34ca69b070b5b5b394cdd8947eaf276b25', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/d3fa9P1bsoSM8P2AuilIP4Ght8Z6QjW6C0M-__rPHLc.jpg?width=640&crop=smart&auto=webp&s=a3040c59433d23c4ccbdc13c864b185d155bd597', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/d3fa9P1bsoSM8P2AuilIP4Ght8Z6QjW6C0M-__rPHLc.jpg?width=960&crop=smart&auto=webp&s=8395eb06335f2bf5fe87ba58d7fd78fc3dcca782', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/d3fa9P1bsoSM8P2AuilIP4Ght8Z6QjW6C0M-__rPHLc.jpg?width=1080&crop=smart&auto=webp&s=cb593619daab3c34e4559248aec805eb5fd1c0fa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/d3fa9P1bsoSM8P2AuilIP4Ght8Z6QjW6C0M-__rPHLc.jpg?auto=webp&s=8adc85f12f11f9e6c36dae5cecd57dba9bbb5d93', 'width': 1200}, 'variants': {}}]}
What is the best way to set up a local LLM to use a local PDF?
0
How would you create a local chat bot that uses information from one or more PDF files?
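The usual answer to the question above is retrieval-augmented generation: extract the PDF's text, split it into chunks, retrieve the chunks most relevant to the question, and prepend them to the model's prompt. Here is a dependency-free sketch that uses word overlap as a crude stand-in for embeddings; a real setup would use a PDF extractor (e.g. pypdf) and an embedding model, both of which are assumptions here:

```python
def chunk_text(text, size=500, overlap=100):
    """Split extracted PDF text into overlapping character chunks."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def retrieve(chunks, question, k=2):
    """Rank chunks by word overlap with the question (stand-in for vector search)."""
    q_words = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(chunks, question):
    """Stuff the retrieved chunks into a grounding prompt for the local model."""
    context = "\n---\n".join(chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Toy demo standing in for real extracted PDF text.
text = "Llamas are camelids. The invoice total was $42. Payment is due in 30 days."
question = "What was the invoice total?"
top = retrieve(chunk_text(text, size=40, overlap=10), question)
print(build_prompt(top, question))
```

The resulting prompt is then sent to whatever local model you run; swapping the overlap scorer for sentence embeddings is the main upgrade path.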
2025-02-01T04:08:12
https://www.reddit.com/r/LocalLLaMA/comments/1iey2rj/what_is_the_best_way_to_setup_a_local_llm_to_use/
Outrageous-Win-3244
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iey2rj
false
null
t3_1iey2rj
/r/LocalLLaMA/comments/1iey2rj/what_is_the_best_way_to_setup_a_local_llm_to_use/
false
false
self
0
null
What, if any, would be the potential legal ramifications of doing this? (AI, viral code? I don't know please read)
1
[removed]
2025-02-01T04:08:41
https://www.reddit.com/r/LocalLLaMA/comments/1iey32p/what_if_any_would_be_the_potential_legal/
AILabirynth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iey32p
false
null
t3_1iey32p
/r/LocalLLaMA/comments/1iey32p/what_if_any_would_be_the_potential_legal/
false
false
self
1
null
Anyone tried calling the DeepSeek API locally from a small web-based project?
1
I'm thinking about using my locally installed DeepSeek via Ollama. Not sure if it's doable. I'm creating a small project that runs DeepSeek, and instead of paying for the API first, maybe I can use the locally installed version.
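This is doable: Ollama serves an HTTP API on localhost (port 11434 by default), so a web project can call the local model much like a paid endpoint. A minimal sketch follows; the model tag is an assumption, and the live call at the bottom obviously needs a running Ollama server:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt, model="deepseek-r1:7b"):
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode()

def ask(prompt):
    """Send a prompt to the local model and return its text response."""
    req = urllib.request.Request(OLLAMA_URL, data=build_request(prompt),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Why is the sky blue?"))  # requires `ollama serve` with the model pulled
```

If the front end is browser-based JavaScript, the same POST works via fetch, though you may need to configure Ollama's allowed origins.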
2025-02-01T04:37:22
https://www.reddit.com/r/LocalLLaMA/comments/1ieykdv/anyone_tried_calling_deep_seek_api_locally_from_a/
creativegoof96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieykdv
false
null
t3_1ieykdv
/r/LocalLLaMA/comments/1ieykdv/anyone_tried_calling_deep_seek_api_locally_from_a/
false
false
self
1
null
Dual A4000 vs 3090 for 70B models
1
I have been playing with ollama and a few different models on my current fake server rig: 7950X3D CPU, 192 GB DDR5@6400, and a 3080 Ti GPU. I'd like to get into some fine tuning/RAG stuff next, I think. Currently, for regular prompts like 'write a python script to do ...', I'm getting about 3.5 tokens per second at best on 20GB models, and around 1.5 t/s on 40GB models. I know my biggest limitation is my GPU, and at those file sizes I'm relying on my CPU a lot. I have two Ampere A4000 GPUs on a shelf, but my current motherboard only supports one GPU. Is my money best spent on a motherboard for dual GPUs, or on a single 3090? (In a month or more, once things settle down.) I know total VRAM is king, but to what extent should generational architecture improvements steer my decision? What sort of performance on 20GB and 40GB models should I expect with dual A4000s compared to a single 3090?
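One way to reason about the comparison above: single-batch decode speed is roughly bounded by memory bandwidth divided by model file size, since each token reads every weight once. The sketch below uses approximate spec-sheet bandwidth figures, not measurements, and the ceiling only applies when the whole model fits in that device's memory:

```python
def est_tokens_per_sec(bandwidth_gbs, model_size_gb):
    """Rough upper bound on decode speed: bandwidth / bytes read per token."""
    return bandwidth_gbs / model_size_gb

# Approximate memory bandwidths in GB/s (spec-sheet figures, assumptions).
DEVICES = {
    "RTX 3090": 936,
    "A4000": 448,
    "DDR5-6400 dual channel": 102,
}

for name, bw in DEVICES.items():
    print(f"{name}: ~{est_tokens_per_sec(bw, 40):.1f} t/s ceiling on a 40 GB model")
```

With weights split across two A4000s the cards read sequentially per layer in the common pipeline setup, so a single card's 448 GB/s is the effective limiter; a 40GB model does not fit a lone 3090's 24GB either, which is why the real-world numbers land well below these ceilings.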
2025-02-01T04:43:10
https://www.reddit.com/r/LocalLLaMA/comments/1ieynvw/dual_a4000_vs_3090_for_70b_models/
BestCakeDayEvar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieynvw
false
null
t3_1ieynvw
/r/LocalLLaMA/comments/1ieynvw/dual_a4000_vs_3090_for_70b_models/
false
false
self
1
null
Best config to self-host an LLM?
0
I researched this and it says to use Ollama. Also, how much storage do I need?
2025-02-01T04:54:45
https://www.reddit.com/r/LocalLLaMA/comments/1ieyuri/best_config_to_self_host_llm/
Sea-Commission5383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieyuri
false
null
t3_1ieyuri
/r/LocalLLaMA/comments/1ieyuri/best_config_to_self_host_llm/
false
false
self
0
null
DeepSeek R1 iFable Merge
1
...any takers?
2025-02-01T05:01:24
https://www.reddit.com/r/LocalLLaMA/comments/1ieyysa/deepseek_r1_ifable_merge/
GrungeWerX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieyysa
false
null
t3_1ieyysa
/r/LocalLLaMA/comments/1ieyysa/deepseek_r1_ifable_merge/
false
false
self
1
null
What does Google Translate use that makes the translation so good and so fast?
1
[removed]
2025-02-01T05:02:23
https://www.reddit.com/r/LocalLLaMA/comments/1ieyzjx/what_do_google_translate_use_that_makes_the/
innerpeace555
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ieyzjx
false
null
t3_1ieyzjx
/r/LocalLLaMA/comments/1ieyzjx/what_do_google_translate_use_that_makes_the/
false
false
self
1
null
Found this on youtube, have a doubt
1
[removed]
2025-02-01T05:18:32
https://www.reddit.com/r/LocalLLaMA/comments/1iez92g/found_this_on_youtube_have_a_doubt/
Narrow_Block_8755
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iez92g
false
null
t3_1iez92g
/r/LocalLLaMA/comments/1iez92g/found_this_on_youtube_have_a_doubt/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ACko2glx0I6cQMZMSoXVR1GagO4TVFZWijvYU8Q8n2Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?width=108&crop=smart&auto=webp&s=a7b29a079e38d6aaeb6049d89c40a4a9c1453580', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?width=216&crop=smart&auto=webp&s=53cd77bcee308745b88c241c37463417085a35cc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?width=320&crop=smart&auto=webp&s=a1ee95d3b27ebb238d35430117b24e91f0419605', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?auto=webp&s=602f68978a0785e099138c109c559de094ce87cd', 'width': 480}, 'variants': {}}]}
DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B because who doesn't like models with really, really long names?
8
I'm still awaiting benchmarks, but here's a merge of the DeepSeek Llama 3.1 8B distill from R1 with another model containing two o1-inspired reasoning models. [https://huggingface.co/grimjim/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B](https://huggingface.co/grimjim/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B) GGUF and i1 GGUF quants are available. [https://huggingface.co/mradermacher/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-GGUF](https://huggingface.co/mradermacher/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-GGUF) [https://huggingface.co/mradermacher/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-i1-GGUF](https://huggingface.co/mradermacher/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-i1-GGUF) It can run as a normal Llama 3.1 assistant just fine, and shouldn't emit think tags, given the low contributing weight of the R1 distillation. Task arithmetic was used with a base of Llama 3.1 8B Base (not Instruct!) as the distillation was performed on Base. Perhaps Instruct was found to be overfitted for 3.1 8B, unlike 70B. I grafted back the tokenizer for Instruct on the result. A precursor merge of two o1-inspired models achieved an unexpectedly high MATH Lvl 5 benchmark of 33.99%. The subsequent merge with an Instruct model trained in German reduced IFEval, but uplifted every other benchmark on the current Open LLM Leaderboard above that of the German Instruct model. One can even attempt to roleplay with this merge, and characters will have a higher-than-average tendency to try to resolve their problems, apparently influenced by the effect of smashing three (3) different reasoning models together.
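The task-arithmetic recipe described above maps onto a mergekit config along these lines. This is a hypothetical sketch: the second component model name and both weights are placeholders, since the post does not give the actual recipe:

```yaml
# Hypothetical mergekit sketch; actual component list and weights not given in the post.
merge_method: task_arithmetic
base_model: meta-llama/Llama-3.1-8B   # merge against Base, as the distill was trained on Base
dtype: bfloat16
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
    parameters:
      weight: 0.3                     # placeholder weight
  - model: example/o1-inspired-precursor-merge   # placeholder for the o1-style merge
    parameters:
      weight: 0.7                     # placeholder weight
```

Task arithmetic here means each model's delta from the base is scaled by its weight and summed, which is why the choice of Base vs Instruct as the anchor matters.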
2025-02-01T05:19:16
https://www.reddit.com/r/LocalLLaMA/comments/1iez9h1/deepsauerhuatuoskyworkr1o1llama318b_because_who/
grimjim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iez9h1
false
null
t3_1iez9h1
/r/LocalLLaMA/comments/1iez9h1/deepsauerhuatuoskyworkr1o1llama318b_because_who/
false
false
self
8
{'enabled': False, 'images': [{'id': 'vqIkekYGdhAXAMl4J6BawcOfEaIyY3GWyo--WpzHRLA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/b0G6gg9gvptMuyQDDY53xUnnkiUDokwPcPCzHp3nPhs.jpg?width=108&crop=smart&auto=webp&s=73e5f4f58eaf1689011100a00e578413256945f8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/b0G6gg9gvptMuyQDDY53xUnnkiUDokwPcPCzHp3nPhs.jpg?width=216&crop=smart&auto=webp&s=d528d927afb819aa693aae881be86e3201869315', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/b0G6gg9gvptMuyQDDY53xUnnkiUDokwPcPCzHp3nPhs.jpg?width=320&crop=smart&auto=webp&s=7869de06166ea8bdc4a9ff8923bf8b7fefaf05dd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/b0G6gg9gvptMuyQDDY53xUnnkiUDokwPcPCzHp3nPhs.jpg?width=640&crop=smart&auto=webp&s=f0c7e67b4b778e099e5b5f4cc9ddfd50f81c2461', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/b0G6gg9gvptMuyQDDY53xUnnkiUDokwPcPCzHp3nPhs.jpg?width=960&crop=smart&auto=webp&s=bb750b5729439f745b75a61bd14059839689b6d8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/b0G6gg9gvptMuyQDDY53xUnnkiUDokwPcPCzHp3nPhs.jpg?width=1080&crop=smart&auto=webp&s=873c6ee18dc3b6a5fd585a0e26bcc48741eadee4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/b0G6gg9gvptMuyQDDY53xUnnkiUDokwPcPCzHp3nPhs.jpg?auto=webp&s=da6209575311b07feee24a6c1dce5c1d6f938d38', 'width': 1200}, 'variants': {}}]}
found this online, have some doubts
1
[removed]
2025-02-01T05:26:22
https://www.reddit.com/r/LocalLLaMA/comments/1iezdhl/found_this_online_have_some_doubts/
Narrow_Block_8755
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iezdhl
false
null
t3_1iezdhl
/r/LocalLLaMA/comments/1iezdhl/found_this_online_have_some_doubts/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ACko2glx0I6cQMZMSoXVR1GagO4TVFZWijvYU8Q8n2Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?width=108&crop=smart&auto=webp&s=a7b29a079e38d6aaeb6049d89c40a4a9c1453580', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?width=216&crop=smart&auto=webp&s=53cd77bcee308745b88c241c37463417085a35cc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?width=320&crop=smart&auto=webp&s=a1ee95d3b27ebb238d35430117b24e91f0419605', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?auto=webp&s=602f68978a0785e099138c109c559de094ce87cd', 'width': 480}, 'variants': {}}]}
He said he found patterns in the voice, but how did he label those patterns? He categorized similar voices by finding patterns (which would need a significantly large corpus of data), but then how did he label them? Training would require labels at the very least.
1
2025-02-01T05:26:50
https://www.youtube.com/watch?v=BvXXgcBSCXY
Narrow_Block_8755
youtube.com
1970-01-01T00:00:00
0
{}
1iezdri
false
{'oembed': {'author_name': 'Google Cloud Tech', 'author_url': 'https://www.youtube.com/@googlecloudtech', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/BvXXgcBSCXY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Translating baby sounds using Google AI"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/BvXXgcBSCXY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Translating baby sounds using Google AI', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1iezdri
/r/LocalLLaMA/comments/1iezdri/he_said_he_found_patterns_in_the_voice_but_how/
false
false
https://b.thumbs.redditm…K27xr-eHMLrQ.jpg
1
{'enabled': False, 'images': [{'id': 'ACko2glx0I6cQMZMSoXVR1GagO4TVFZWijvYU8Q8n2Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?width=108&crop=smart&auto=webp&s=a7b29a079e38d6aaeb6049d89c40a4a9c1453580', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?width=216&crop=smart&auto=webp&s=53cd77bcee308745b88c241c37463417085a35cc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?width=320&crop=smart&auto=webp&s=a1ee95d3b27ebb238d35430117b24e91f0419605', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?auto=webp&s=602f68978a0785e099138c109c559de094ce87cd', 'width': 480}, 'variants': {}}]}
found this on gemini youtube channel, how correct it this, i have a few doubts
1
[removed]
2025-02-01T05:34:46
https://www.reddit.com/r/LocalLLaMA/comments/1iezibv/found_this_on_gemini_youtube_channel_how_correct/
Narrow_Block_8755
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1iezibv
false
null
t3_1iezibv
/r/LocalLLaMA/comments/1iezibv/found_this_on_gemini_youtube_channel_how_correct/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ACko2glx0I6cQMZMSoXVR1GagO4TVFZWijvYU8Q8n2Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?width=108&crop=smart&auto=webp&s=a7b29a079e38d6aaeb6049d89c40a4a9c1453580', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?width=216&crop=smart&auto=webp&s=53cd77bcee308745b88c241c37463417085a35cc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?width=320&crop=smart&auto=webp&s=a1ee95d3b27ebb238d35430117b24e91f0419605', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/jVmAoYGnwhkICiXKWUziiYgQYNLOn_WhJBlmILgr_WA.jpg?auto=webp&s=602f68978a0785e099138c109c559de094ce87cd', 'width': 480}, 'variants': {}}]}
Snowden complaining about GPU memory capacity so he's probably a Local LLM guy.
1
2025-02-01T05:44:36
https://i.redd.it/nj92abvwtgge1.png
dahara111
i.redd.it
1970-01-01T00:00:00
0
{}
1ieznts
false
null
t3_1ieznts
/r/LocalLLaMA/comments/1ieznts/snowden_complaining_about_gpu_memory_capacity_so/
false
false
https://b.thumbs.redditm…BnUDLb0XbvSY.jpg
1
{'enabled': True, 'images': [{'id': 'AF3dqjL9TIBLGmRhVcsy5Mdmtsue4fr47JT8UhWLo-8', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/nj92abvwtgge1.png?width=108&crop=smart&auto=webp&s=6d49379f17ca1df023b70804570601cc60d010b4', 'width': 108}, {'height': 106, 'url': 'https://preview.redd.it/nj92abvwtgge1.png?width=216&crop=smart&auto=webp&s=b4e807a950110cda3fbb3dc52a789ba26797386d', 'width': 216}, {'height': 157, 'url': 'https://preview.redd.it/nj92abvwtgge1.png?width=320&crop=smart&auto=webp&s=10963e3be2bdd4f0b994d4d8ccc594874fd16d10', 'width': 320}], 'source': {'height': 237, 'url': 'https://preview.redd.it/nj92abvwtgge1.png?auto=webp&s=cca08bd4056a92ba8f8ca627bdfb481cfff7c4b7', 'width': 482}, 'variants': {}}]}
He's complaining about memory capacity so he's probably a Local LLM guy.
1
2025-02-01T05:45:37
https://i.redd.it/qojhrc97ugge1.png
dahara111
i.redd.it
1970-01-01T00:00:00
0
{}
1iezoea
false
null
t3_1iezoea
/r/LocalLLaMA/comments/1iezoea/hes_complaining_about_memory_capacity_so_hes/
false
false
https://a.thumbs.redditm…3jaee1lZFIY8.jpg
1
{'enabled': True, 'images': [{'id': 'obr7eCJsxL870bkovbW7sCoIDZBqI75IWuKTafpaAA4', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/qojhrc97ugge1.png?width=108&crop=smart&auto=webp&s=0fdf6414c47f5f1cf0cd50963935f15227c56cf5', 'width': 108}, {'height': 106, 'url': 'https://preview.redd.it/qojhrc97ugge1.png?width=216&crop=smart&auto=webp&s=d9152e3e19ed2a5c3f1452ec6fa2e24c2e8d6182', 'width': 216}, {'height': 157, 'url': 'https://preview.redd.it/qojhrc97ugge1.png?width=320&crop=smart&auto=webp&s=bb966b1335049bde94a3991d9b34a1ea1a5fca02', 'width': 320}], 'source': {'height': 237, 'url': 'https://preview.redd.it/qojhrc97ugge1.png?auto=webp&s=4d65349abe89e6d2f29d991895b8718f3f10c893', 'width': 482}, 'variants': {}}]}
Ask DeepSeek about the Tiananmen Square massacre
0
2025-02-01T05:55:46
https://i.redd.it/fc1a9bf1wgge1.jpeg
gofuckyoursen
i.redd.it
1970-01-01T00:00:00
0
{}
1iezu20
false
null
t3_1iezu20
/r/LocalLLaMA/comments/1iezu20/ask_deepseek_about_the_tiananmen_square_massacre/
false
false
https://b.thumbs.redditm…evTvraNm4VWA.jpg
0
{'enabled': True, 'images': [{'id': 'e4I_RGxPMlBcQVD1ucr-rT_NennFrWeVjeOcqv7aZ7Y', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/fc1a9bf1wgge1.jpeg?width=108&crop=smart&auto=webp&s=ec0176b6c0e73eaf4dbb023baee7187c63c90fb1', 'width': 108}, {'height': 237, 'url': 'https://preview.redd.it/fc1a9bf1wgge1.jpeg?width=216&crop=smart&auto=webp&s=33d12781decdce9f719ba05657c87189d15b9fd9', 'width': 216}, {'height': 351, 'url': 'https://preview.redd.it/fc1a9bf1wgge1.jpeg?width=320&crop=smart&auto=webp&s=58c76444c26cf4aff236508f7c2ae56e82abc9b1', 'width': 320}, {'height': 703, 'url': 'https://preview.redd.it/fc1a9bf1wgge1.jpeg?width=640&crop=smart&auto=webp&s=226cd2d7b420e48f34b19dba2d285d3bdc4931bd', 'width': 640}, {'height': 1055, 'url': 'https://preview.redd.it/fc1a9bf1wgge1.jpeg?width=960&crop=smart&auto=webp&s=210b5a12d3566cc6d56b576e6b8cd1ea40663214', 'width': 960}], 'source': {'height': 1186, 'url': 'https://preview.redd.it/fc1a9bf1wgge1.jpeg?auto=webp&s=50e2d8d0629c40d0628c7db0265f29f456bf0a70', 'width': 1079}, 'variants': {}}]}
models to replace llama3.2 3b with?
4
My PC has enough resources to run a 3B LLM model comfortably. I've been using it to help me out with explaining stuff for my academics and coding. Are there any better models (preferably around 3B) that I can replace it with? Thanks
2025-02-01T06:20:26
https://www.reddit.com/r/LocalLLaMA/comments/1if07sf/models_to_replace_llama32_3b_with/
CapableAd9320
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if07sf
false
null
t3_1if07sf
/r/LocalLLaMA/comments/1if07sf/models_to_replace_llama32_3b_with/
false
false
self
4
null
Which command will force a local LLM to give unlimited responses until my system crashes?
1
[removed]
2025-02-01T06:21:41
https://www.reddit.com/r/LocalLLaMA/comments/1if08fu/which_command_will_force_local_llm_to_give/
InsideResolve4517
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if08fu
false
null
t3_1if08fu
/r/LocalLLaMA/comments/1if08fu/which_command_will_force_local_llm_to_give/
false
false
self
1
null
kokoro-on-browser:fully offline, local web-based text-to-speech application built using React and Vite. Powered by Kokoro.js
41
https://reddit.com/link/1if0f0q/video/xm1kznq22hge1/player

* Browser-based text-to-speech conversion
* Powered by the Kokoro.js library by [xenovatech](https://www.reddit.com/user/xenovatech/)
* Responsive React application, developed with Vite
* One-click Netlify deployment

Thanks to all the teams who have contributed to Kokoro.js

Repo: [https://github.com/fajrmn/kokoro-on-browser](https://github.com/fajrmn/kokoro-on-browser)
2025-02-01T06:34:15
https://www.reddit.com/r/LocalLLaMA/comments/1if0f0q/kokoroonbrowserfully_offline_local_webbased/
fajrmn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if0f0q
false
null
t3_1if0f0q
/r/LocalLLaMA/comments/1if0f0q/kokoroonbrowserfully_offline_local_webbased/
false
false
https://a.thumbs.redditm…_kibUg9PJDI0.jpg
41
{'enabled': False, 'images': [{'id': 'ENnSHh_gFjXjn-1SoWVKp7h1k8q4uI0gRrUWucxmSRw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0HyIJ1l1CEno8j1KoR6iI-87ktiGCajBTpMb_ydGwDk.jpg?width=108&crop=smart&auto=webp&s=ca0e3e60b5845f7c5735c9866a921c4bd39ef841', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0HyIJ1l1CEno8j1KoR6iI-87ktiGCajBTpMb_ydGwDk.jpg?width=216&crop=smart&auto=webp&s=df23d2f2503db243f8650393c875ceb56e9c260a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0HyIJ1l1CEno8j1KoR6iI-87ktiGCajBTpMb_ydGwDk.jpg?width=320&crop=smart&auto=webp&s=b54bce05577c4470d377106c321c4d34c94fd7e0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0HyIJ1l1CEno8j1KoR6iI-87ktiGCajBTpMb_ydGwDk.jpg?width=640&crop=smart&auto=webp&s=bee35d7add852122ecb3630117e2b2782c49c0af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0HyIJ1l1CEno8j1KoR6iI-87ktiGCajBTpMb_ydGwDk.jpg?width=960&crop=smart&auto=webp&s=cf1bb5389e1d479683788f07910cb8fc0b79eb12', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0HyIJ1l1CEno8j1KoR6iI-87ktiGCajBTpMb_ydGwDk.jpg?width=1080&crop=smart&auto=webp&s=95138c571ed27003f67d2cde68a1562b52a30de0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0HyIJ1l1CEno8j1KoR6iI-87ktiGCajBTpMb_ydGwDk.jpg?auto=webp&s=adc141873fc891e58505cd710e4ca05c8f9f5374', 'width': 1200}, 'variants': {}}]}
You have roughly 50,000 USD. You have to build an inference rig without using GPUs. How do you go about it?
1
[removed]
2025-02-01T06:55:30
https://www.reddit.com/r/LocalLLaMA/comments/1if0ppk/you_have_roughly_50000_usd_you_have_to_build_an/
Comfortable-Rock-498
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if0ppk
false
null
t3_1if0ppk
/r/LocalLLaMA/comments/1if0ppk/you_have_roughly_50000_usd_you_have_to_build_an/
false
false
self
1
null
deepseek r1 scores 145 out of 150 in a Chinese Graduate School Math Examinations
1
[removed]
2025-02-01T07:15:37
https://i.redd.it/2b5jui9aahge1.jpeg
yolocase
i.redd.it
1970-01-01T00:00:00
0
{}
1if108y
false
null
t3_1if108y
/r/LocalLLaMA/comments/1if108y/deepseek_r1_scores_145_out_of_150_in_a_chinese/
false
false
https://a.thumbs.redditm…3fPTMpDDSSl0.jpg
1
{'enabled': True, 'images': [{'id': 'x9Lr2I7BbO_o4qyOhNt2iUAYi7_hhm_bNNWnMK8m7GM', 'resolutions': [{'height': 159, 'url': 'https://preview.redd.it/2b5jui9aahge1.jpeg?width=108&crop=smart&auto=webp&s=c0e9fb3bc7506d5e24cacfe3c876ecfca885d085', 'width': 108}, {'height': 318, 'url': 'https://preview.redd.it/2b5jui9aahge1.jpeg?width=216&crop=smart&auto=webp&s=5c82ec9920e5774bacb3227f659c4db7052f2308', 'width': 216}, {'height': 471, 'url': 'https://preview.redd.it/2b5jui9aahge1.jpeg?width=320&crop=smart&auto=webp&s=e267caf47743dbfed378fddfa9383f6913719e36', 'width': 320}, {'height': 943, 'url': 'https://preview.redd.it/2b5jui9aahge1.jpeg?width=640&crop=smart&auto=webp&s=96a5479e13860b8bfe58460af22cf71c84976073', 'width': 640}, {'height': 1414, 'url': 'https://preview.redd.it/2b5jui9aahge1.jpeg?width=960&crop=smart&auto=webp&s=bf421b7ea99cabd4d118bf55c47e7dc4568bb75c', 'width': 960}], 'source': {'height': 1580, 'url': 'https://preview.redd.it/2b5jui9aahge1.jpeg?auto=webp&s=96d306a387ab6b651485167e0072f1068b9631c8', 'width': 1072}, 'variants': {}}]}
DeepSeek r1 scores 145 out of 150 in a Chinese Graduate School Math Examinations
1
[removed]
2025-02-01T07:19:07
https://i.redd.it/ccqbciqwahge1.jpeg
yolocase
i.redd.it
1970-01-01T00:00:00
0
{}
1if11y5
false
null
t3_1if11y5
/r/LocalLLaMA/comments/1if11y5/deepseek_r1_scores_145_out_of_150_in_a_chinese/
false
false
https://b.thumbs.redditm…58A3XDDb4J4Q.jpg
1
{'enabled': True, 'images': [{'id': 'J-zzimWKVdTiY5CqcuilnaoY7Jd5-f8ogWJEV7ZeGak', 'resolutions': [{'height': 159, 'url': 'https://preview.redd.it/ccqbciqwahge1.jpeg?width=108&crop=smart&auto=webp&s=c898afafa191145d7341fbdb3fe2b900948d41e2', 'width': 108}, {'height': 318, 'url': 'https://preview.redd.it/ccqbciqwahge1.jpeg?width=216&crop=smart&auto=webp&s=91db23f088b1f0687d0ef95ca5c29cfd6b7d09f6', 'width': 216}, {'height': 471, 'url': 'https://preview.redd.it/ccqbciqwahge1.jpeg?width=320&crop=smart&auto=webp&s=8f50d6fad0dc4a5de48f2a3de7f6ba2845b21c98', 'width': 320}, {'height': 943, 'url': 'https://preview.redd.it/ccqbciqwahge1.jpeg?width=640&crop=smart&auto=webp&s=efe2995e4ea39a9128eef71c54b71b148f8e5cee', 'width': 640}, {'height': 1414, 'url': 'https://preview.redd.it/ccqbciqwahge1.jpeg?width=960&crop=smart&auto=webp&s=d517d4eac384997021d0ea3e7952c1fd5e3d4cb4', 'width': 960}], 'source': {'height': 1580, 'url': 'https://preview.redd.it/ccqbciqwahge1.jpeg?auto=webp&s=f464b9c367b0a6b04b3baf7f1504fd2ea25cd7e2', 'width': 1072}, 'variants': {}}]}
Best Open Source Coding AI model?
1
[removed]
2025-02-01T07:21:48
https://www.reddit.com/r/LocalLLaMA/comments/1if13bi/best_open_source_coding_ai_model/
Bailey_999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if13bi
false
null
t3_1if13bi
/r/LocalLLaMA/comments/1if13bi/best_open_source_coding_ai_model/
false
false
self
1
null
DeepSeek is Absolute Nonsense
1
2025-02-01T07:33:30
https://www.youtube.com/watch?v=jCgIhhKn6Lw&ab_channel=laowhy86
ZaggyChum
youtube.com
1970-01-01T00:00:00
0
{}
1if1935
false
{'oembed': {'author_name': 'laowhy86', 'author_url': 'https://www.youtube.com/@laowhy86', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/jCgIhhKn6Lw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="DeepSeek is Absolute Nonsense"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/jCgIhhKn6Lw/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'DeepSeek is Absolute Nonsense', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1if1935
/r/LocalLLaMA/comments/1if1935/deepseek_is_absolute_nonsense/
false
false
https://b.thumbs.redditm…IrLYJv-3Vpps.jpg
1
{'enabled': False, 'images': [{'id': 'lUxXetxAOpl3zHeysMAmI-ICFI2mbpXl2THuoon7ndg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/mkW7tTL7u1ldqBcvmQvcmw39CEKoMZWYK5sBG5-JiI8.jpg?width=108&crop=smart&auto=webp&s=2b876c88333947b224483d514d0b2f9cba7fde05', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/mkW7tTL7u1ldqBcvmQvcmw39CEKoMZWYK5sBG5-JiI8.jpg?width=216&crop=smart&auto=webp&s=9271c562e6ca48053a891d3d595df18829aeeff5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/mkW7tTL7u1ldqBcvmQvcmw39CEKoMZWYK5sBG5-JiI8.jpg?width=320&crop=smart&auto=webp&s=bc612dcda90c864612cd40fd2539aa3681690e72', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/mkW7tTL7u1ldqBcvmQvcmw39CEKoMZWYK5sBG5-JiI8.jpg?auto=webp&s=a935097bd2360e2e9db923141799198de3f13d32', 'width': 480}, 'variants': {}}]}
Created a gist how to setup Ollama with Open WebUI in Docker on Ubuntu Server VM with Nvidia GPU on Proxmox, perhaps someone here finds it useful.
16
2025-02-01T07:36:23
https://gist.github.com/cepa/94d81ea83b0c2e97340c7c512767da18
ChopSticksPlease
gist.github.com
1970-01-01T00:00:00
0
{}
1if1ahn
false
null
t3_1if1ahn
/r/LocalLLaMA/comments/1if1ahn/created_a_gist_how_to_setup_ollama_with_open/
false
false
https://a.thumbs.redditm…5f6l7xzNMbZ0.jpg
16
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]}
o3-mini and o3-mini-high now available
0
https://preview.redd.it/…53743cc59af8be
2025-02-01T07:38:37
https://www.reddit.com/r/LocalLLaMA/comments/1if1bk8/o3mini_and_03minihigh_now_available/
Friendly_Fan5514
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if1bk8
false
null
t3_1if1bk8
/r/LocalLLaMA/comments/1if1bk8/o3mini_and_03minihigh_now_available/
false
false
https://b.thumbs.redditm…USrKNL0T1epQ.jpg
0
null
Did anyone host Deepseek R1 671B locally and got cloud like t/s?
1
[removed]
2025-02-01T07:52:28
https://www.reddit.com/r/LocalLLaMA/comments/1if1i43/did_anyone_host_deepseek_r1_671b_locally_and_got/
Orange-Hokage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if1i43
false
null
t3_1if1i43
/r/LocalLLaMA/comments/1if1i43/did_anyone_host_deepseek_r1_671b_locally_and_got/
false
false
self
1
null
Is there a database of output speed based on model, setup, context length etc?
1
I've seen some predictive calculations, but I would like to see actual speeds, in tokens per second. I've seen some efforts to compile them, but they are sporadic: most don't even have 100 rows, cover only a few days of data, and were then abandoned.
2025-02-01T07:59:40
https://www.reddit.com/r/LocalLLaMA/comments/1if1lij/is_there_a_database_of_output_speed_based_on/
Nabaatii
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if1lij
false
null
t3_1if1lij
/r/LocalLLaMA/comments/1if1lij/is_there_a_database_of_output_speed_based_on/
false
false
self
1
null
Deepseek R1 Can Do Text Adventures Inside A Completely Simulated World Model
1
[removed]
2025-02-01T08:04:45
https://www.reddit.com/gallery/1if1o9e
Haziq47
reddit.com
1970-01-01T00:00:00
0
{}
1if1o9e
false
null
t3_1if1o9e
/r/LocalLLaMA/comments/1if1o9e/deepseek_r1_can_do_text_adventures_inside_a/
false
false
https://a.thumbs.redditm…-PQxaxuz2eE4.jpg
1
null
We've been incredibly fortunate with how things have developed over the past year
446
I still remember how in late 2023, people were speculating that Mixtral-8x7b was the best open-weights model that the community would get "for a long time", and possibly ever. Shortly afterwards, Mistral published a controversial blog post that appeared to indicate that they were moving away from open weights – an ominous sign at a time when there were very few open-weights models available, and Anthropic and OpenAI seemed as far out of reach as the stars.

But since then:

* Meta released the excellent Llama 3 series as open weights (though not entirely free software).
* Contrary to what many had feared, Mistral continued to publish open-weights models, even releasing the weights for Mistral Large, which was previously API-only, and now publishing their latest Mistral Small under the Apache License, when the previous version was still under their proprietary MRL.
* Yi-34b transitioned from a proprietary license to Apache.
* Microsoft has been publishing a number of excellent small models under permissive licenses.
* Qwen came out of nowhere, and released the best models that can be run on consumer hardware, almost all of them under permissive licenses.
* DeepSeek upended the entire industry, and **an MIT-licensed model is now ranked joint #1 on style-controlled LMSYS,** on par with cutting-edge, proprietary, API-only models.

This was completely unforeseeable a year ago. Reality has outpaced the wildest dreams of the most naive optimists. Some doomsayers even predicted that open-weights models would soon be outlawed. The exact opposite has happened, and continues to happen.

To get an idea for what could easily have been, just look at the world of image generation models. In 15 months, there have only been two significant open-weights releases: SD3, and Flux.1D. SD3 was mired in controversy due to Stability's behavior and has been all but ignored by the community, and Flux is crippled by distillation. Both models are censored to a degree that has become the stuff of memes, and their licenses essentially make them unusable for anything except horsing around.

That is how the LLM world could have turned out. Instead, we have a world where I don't even download every new model anymore, because there are multiple exciting releases every week and I simply lack the time to take all of them for a spin. I now regularly delete models from my hard drive that I would have given my right hand for not too long ago. It's just incredible.
2025-02-01T08:11:39
https://www.reddit.com/r/LocalLLaMA/comments/1if1rls/weve_been_incredibly_fortunate_with_how_things/
-p-e-w-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if1rls
false
null
t3_1if1rls
/r/LocalLLaMA/comments/1if1rls/weve_been_incredibly_fortunate_with_how_things/
false
false
self
446
null
What’s the best approach for GenAI product development, especially for local deployments?
3
In GenAI, the application and the LLM are two separate blocks. Trying to optimize both at the same time slows down development. A structured approach can make things faster and more controlled. Here’s a strategy that works: Step 1: Make the LLM Constant – Start with a powerful model like GPT-4o. Focus entirely on building the application, refining business logic, and ensuring smooth functionality. Once everything works as expected, treat the outputs as a benchmark. Step 2: Make the Application Constant – Now, switch to an open-source LLM that fits your needs. Optimize performance and fine-tune the model to match (or even exceed) the benchmarked outputs. This way, both application and model development happen in a streamlined fashion. What do you think? Have you followed a similar approach? Would love to hear your thoughts!
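The two-step strategy above can be sketched as a small benchmark harness. Everything here is illustrative: the `golden` dict is assumed to hold the step-1 GPT-4o outputs keyed by prompt, and the lexical similarity metric is a crude stand-in for real evaluation (LLM-as-judge, embeddings, etc.):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity between two answers, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def score_candidate(golden: dict[str, str], candidate: dict[str, str],
                    threshold: float = 0.8) -> float:
    """Fraction of prompts where the candidate model's answer (step 2)
    is close enough to the frozen GPT-4o benchmark answer (step 1)."""
    hits = sum(
        1 for prompt, gold in golden.items()
        if similarity(gold, candidate.get(prompt, "")) >= threshold
    )
    return hits / len(golden)

# Step 1 output (frozen benchmark) vs. step 2 candidate (open-source LLM)
golden = {"capital of France?": "The capital of France is Paris."}
candidate = {"capital of France?": "The capital of France is Paris."}
print(score_candidate(golden, candidate))  # → 1.0
```

Freezing the golden set is what lets you swap models in step 2 without re-litigating the application logic.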
2025-02-01T08:14:58
https://www.reddit.com/r/LocalLLaMA/comments/1if1t8y/whats_the_best_approach_for_genai_product/
Ahmad401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if1t8y
false
null
t3_1if1t8y
/r/LocalLLaMA/comments/1if1t8y/whats_the_best_approach_for_genai_product/
false
false
self
3
null
Does anyone know who is the person in the image?
1
[removed]
2025-02-01T08:21:22
https://www.reddit.com/r/LocalLLaMA/comments/1if1wbe/does_anyone_know_who_is_the_person_in_the_image/
Aggravating-Mine-292
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if1wbe
false
null
t3_1if1wbe
/r/LocalLLaMA/comments/1if1wbe/does_anyone_know_who_is_the_person_in_the_image/
false
false
https://b.thumbs.redditm…v0JC-qg5ZnhE.jpg
1
null
Running ensemble models?
1
I imagine that, for end users, the future might entail sending one’s query to multiple AI models (eg R1, GPT, etc) and then aggregating/processing their responses to a unified reply. This is akin to various ”population of experts” models in cognitive science, and model averaging, ensemble techniques, and similar approaches in statistics/ML. Is this already being done? Is anyone here doing it, and if so, how are the results?
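A minimal sketch of the aggregation step, assuming each model's reply has already been collected and reduced to a short answer (the model names are purely illustrative):

```python
from collections import Counter

def aggregate(responses: list[str]) -> str:
    """Majority vote over the ensemble members' answers; ties are
    broken by whichever answer was seen first (Counter preserves
    insertion order for equal counts)."""
    return Counter(responses).most_common(1)[0][0]

replies = {
    "deepseek-r1": "Paris",
    "gpt-4o": "Paris",
    "llama-3-70b": "Lyon",
}
print(aggregate(list(replies.values())))  # → Paris
```

Exact-match voting only works for short, canonical answers; for free-form text you would typically cluster replies by embedding similarity or have a judge model pick the consensus response instead.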
2025-02-01T08:55:28
https://www.reddit.com/r/LocalLLaMA/comments/1if2cc6/running_ensemble_models/
identicalelements
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if2cc6
false
null
t3_1if2cc6
/r/LocalLLaMA/comments/1if2cc6/running_ensemble_models/
false
false
self
1
null
DeepSeek-R1 system prompt & knowledge cut off date is likely 2024 June
7
DeepSeek-R1's knowledge cut-off is likely June 2024, unlike the official website, which says December 2023.

Through [https://siliconflow.cn/zh-cn/](https://siliconflow.cn/zh-cn/), the presumed system prompt of R1 is:

```yaml
# 角色定义
role: "AI Assistant (DeepSeek-R1-Enhanced)"
author: "DeepSeek"
description: >
  通用型智能助手,通过结构化思考流程提供可靠服务,
  知识截止2023年12月,不处理实时信息。

# 交互协议
interaction_rules:
  thinking_flow:  # 新增思考流程规范
    - 步骤1: 问题语义解析(意图/实体/上下文)
    - 步骤2: 知识库匹配(学科分类/可信度评估)
    - 步骤3: 逻辑验证(矛盾检测/边界检查)
    - 步骤4: 响应结构设计(分点/示例/注意事项)
  safety_layer:
    - 自动激活场景: [政治, 医疗建议, 隐私相关]
    - 响应模板: "该问题涉及[领域],建议咨询专业机构"

# 输出规范
output_schema:
  thinking_section:  # 强制思考段落
    required: true
    tags: "<think>{content}</think>"
    content_rules:
      - 使用Markdown列表格式
      - 包含至少2个验证步骤
      - 标注潜在不确定性
      - 复杂概念使用类比解释
```

I believe this is the same prompt they use on the [deepseek.com](http://deepseek.com) interface, because:

1. SiliconFlow is hosting R1 in collaboration with DeepSeek; they wouldn't just write a system prompt out of nowhere, especially such a structured YAML one.
2. The comments beside the thinking steps hint that the thinking part of the prompt was "added", which means V3 likely uses the same prompt without the thinking part. Again, not something you would do unless the prompt was designed for a model trained on top of a base model (V3).

The prompt includes a safety layer (with a template for answering political questions) and a knowledge cut-off date of December 2023. After removing both and experimenting with the chat interface, the model successfully recalled events up to 9 June 2024: Macron dissolved the French parliament on 9 June. For anything after that, it starts hallucinating events that happened around May but puts them under the wrong date in June. Therefore, it is reasonable to conclude that the cut-off date built into the [deepseek.com](http://deepseek.com) UI is wrong, and this also falls in line with when V3 started training (it took 3 months, July–Oct 2024).

Without the safety layer, it is also able to answer political questions like Taiwan, though of course it gives the politically correct answer for China. That's not surprising, and nothing to be blamed for.
2025-02-01T09:04:36
https://www.reddit.com/r/LocalLLaMA/comments/1if2grt/deepseekr1_system_prompt_knowledge_cut_off_date/
aliencaocao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if2grt
false
null
t3_1if2grt
/r/LocalLLaMA/comments/1if2grt/deepseekr1_system_prompt_knowledge_cut_off_date/
false
false
self
7
{'enabled': False, 'images': [{'id': 'ZnXocnpPwF_MmW1Cp0s3QqVxHDarceaJkeQLvRYxMXY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/m7GXLvAYzUqvs_ocnvV7B-5xCzGrcz9hKEOanSiWnHA.jpg?width=108&crop=smart&auto=webp&s=f35b5a97e0c68616e78a2209afbe708a8a7f460c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/m7GXLvAYzUqvs_ocnvV7B-5xCzGrcz9hKEOanSiWnHA.jpg?width=216&crop=smart&auto=webp&s=3e6e3c6d21228002ddb27f847374c6b596b132cf', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/m7GXLvAYzUqvs_ocnvV7B-5xCzGrcz9hKEOanSiWnHA.jpg?width=320&crop=smart&auto=webp&s=d775cef7d73940e851a8890513c73644fa6241de', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/m7GXLvAYzUqvs_ocnvV7B-5xCzGrcz9hKEOanSiWnHA.jpg?width=640&crop=smart&auto=webp&s=4e47ce8c96c574535ff4ed107e7a78c7ce889410', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/m7GXLvAYzUqvs_ocnvV7B-5xCzGrcz9hKEOanSiWnHA.jpg?width=960&crop=smart&auto=webp&s=4231ea61237b8992f60eb0ac8f42156722ad8e2e', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/m7GXLvAYzUqvs_ocnvV7B-5xCzGrcz9hKEOanSiWnHA.jpg?width=1080&crop=smart&auto=webp&s=cec0a74c2b755849728c1e2e9c16e068921bc6a6', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/m7GXLvAYzUqvs_ocnvV7B-5xCzGrcz9hKEOanSiWnHA.jpg?auto=webp&s=de20ae753c5a7e6b4f4d2265b6fd7875fff782e4', 'width': 1202}, 'variants': {}}]}
How to force memory release in WebLLM and WebGPU? Or how is memory handled?
2
Hey everyone, I'm experimenting with WebLLM in the browser and downloading different models locally (several GBs). However, after testing a model, I notice that the memory (especially GPU memory via WebGPU) doesn't seem to be released immediately, even after switching to a new model. I've tried: * Setting the model reference to `null` * Calling `globalThis.gc?.()` * Manually requesting a new `GPUDevice` and destroying it (but `device.destroy()` isn’t always supported) * Using Chrome DevTools’ "Collect Garbage" feature The only thing that *always* works is closing and reopening the tab, but that’s not ideal. Has anyone found a way to force WebLLM/WebGPU to fully release memory without restarting the browser? Would appreciate any insights from folks working with WebGPU and WebLLM!
2025-02-01T09:05:08
https://www.reddit.com/r/LocalLLaMA/comments/1if2h10/how_to_force_memory_release_in_webllm_and_webgpu/
Vinserello
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if2h10
false
null
t3_1if2h10
/r/LocalLLaMA/comments/1if2h10/how_to_force_memory_release_in_webllm_and_webgpu/
false
false
self
2
null
Ai for tasks
0
Hello all, New to the world of AI and about to start a class on it soon. In the meantime, I’m searching for a locally hosted AI model that can connect to the internet, operate locally, and improve its learning capabilities over time. Ideally, I’m looking for an AI that can sort files when given specific commands, such as organizing videos or images, detecting duplicates in videos, or assisting with personal projects. I tried Ollama but kept encountering an "API not supported" error, so I’m wondering if there’s an AI system with a user-friendly GUI that works reliably. If anything in this post is unclear, feel free to ask for clarification. Any advice or recommendations would be greatly appreciated! Thanks
2025-02-01T09:05:18
https://www.reddit.com/r/LocalLLaMA/comments/1if2h3w/ai_for_tasks/
TheLastAirbender2025
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if2h3w
false
null
t3_1if2h3w
/r/LocalLLaMA/comments/1if2h3w/ai_for_tasks/
false
false
self
0
null
How do I create a custom knowledge base for locally hosted LLM?
1
So I got Llama3.2 8b (and Deep Seek r1 32b too) running smoothly on my 4080 Super GPU with 64 gigs of system RAM and a 7800X3D CPU. I am running Ollama along with Open WebUI. Now I want to dig in a bit and create a support chatbot that is trained on a knowledge base of documents I provide it. Are there any guides/tutorials on how to do this, and does it avoid stuff like GPU fine-tuning/training?
2025-02-01T09:19:11
https://www.reddit.com/r/LocalLLaMA/comments/1if2np3/how_do_i_create_a_custom_knowledge_base_for/
AnacondaMode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if2np3
false
null
t3_1if2np3
/r/LocalLLaMA/comments/1if2np3/how_do_i_create_a_custom_knowledge_base_for/
false
false
self
1
null
Asking for NVIDIA Blackwell RTX 5090/5080 for 30B - 70B Q4/Q5 LLMs: Speed and Token Generation?
5
Has anyone tried running 70B Q4 or Q5 LLM models on the NVIDIA Blackwell 5090 or 5080? How well do these GPUs perform? Also, what is the token generation speed (tokens per second) for these models?
2025-02-01T09:35:19
https://www.reddit.com/r/LocalLLaMA/comments/1if2ve8/asking_for_nvidia_blackwell_rtx_50905080_for_30b/
ZealousidealBadger47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if2ve8
false
null
t3_1if2ve8
/r/LocalLLaMA/comments/1if2ve8/asking_for_nvidia_blackwell_rtx_50905080_for_30b/
false
false
self
5
null
o3-mini is 75.76 Global Average at LiveBench leaderboard
0
Is it revolutionary? Sure, it is the best, but what is the REAL difference between deepseek-r1 (71.38 GA) and o3-mini? Is the 49.66 Global Average of deepseek-r1-distill-llama-70b on my PC a real alternative? What do you think? Let's talk about it!
2025-02-01T09:42:43
https://www.reddit.com/r/LocalLLaMA/comments/1if2ys4/o3mini_is_7576_global_average_at_livebench/
custodiam99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if2ys4
false
null
t3_1if2ys4
/r/LocalLLaMA/comments/1if2ys4/o3mini_is_7576_global_average_at_livebench/
false
false
self
0
null
GPU Choice Help for LLM‘s
1
[removed]
2025-02-01T09:46:30
https://www.reddit.com/r/LocalLLaMA/comments/1if30g4/gpu_choice_help_for_llms/
EdgeOfLife42_0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if30g4
false
null
t3_1if30g4
/r/LocalLLaMA/comments/1if30g4/gpu_choice_help_for_llms/
false
false
self
1
null
Finetuning LLMs for Nepali Language.
4
I'm doing a project: I want to finetune LLMs for the Nepali language. Which one is the easiest, and which one is the best to use? What resources are there if I want to do this? Can someone please help?
2025-02-01T09:54:59
https://www.reddit.com/r/LocalLLaMA/comments/1if349n/finetuning_llms_for_nepali_language/
Trick-Click8355
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if349n
false
null
t3_1if349n
/r/LocalLLaMA/comments/1if349n/finetuning_llms_for_nepali_language/
false
false
self
4
null
Custom Practice-Oriented Small Models Benchmark (Social Sciences, ENG/DE)
1
I'm thinking a lot about how generative models can be used in my work, which is qualitative, interdisciplinary social science with a focus on theoretical concepts and narrative empirical data. In this context, logic riddles, calculations, or coding don't play a significant role. It's primarily about transforming written word and conceptual understanding. Looking at commonly referenced benchmarks gave me little insight into how performance translates into real-world use for the tasks I perform daily. I also wanted to test whether small LLMs that can run on virtually any machine could offset API usage to some degree (taking into consideration the carbon footprint of server farms)*. **So, I created a custom benchmark that tests common small models across a variety of real-world tasks, using actual material I process in my work as a social scientist.** To make it extra challenging, this was a bilingual benchmark, testing performance in both English and German. All models used the same parameter settings (except R1, whose max output tokens were quadrupled and context window was doubled) and were fed the same prompts. These were very simple prompts, with the instructions being barely a line long in most cases. However, they were supplemented by textual content that needed to be processed in most cases. - Output Tokens: 512 - Temperature: 0.1 - Context Window Size: 2048 - Top P: 0.1 - Top K: 50 - Frequency Penalty: 0.2 - Presence Penalty: 0.2 For reference, GPT-4o was included. The outputs of all models per task were "anonymized" and evaluated on a **subjective** scale from 1 to 5 without knowing which output belonged to which model: - 1: Not usable; reworking the output would take longer than performing the task manually. - 2: Flawed, but some aspects can still be used. - 3: Usable output, but structure and content need to be adapted in multiple ways. - 4: Good output, requiring only small adaptations to structure or content. 
- 5: Matches expectations and *could* be used as-is (note: I don't let models write original content and claim it as my own; this was only done for benchmarking purposes). ## Results | | qwen2.5:3b | phi3:3.8b | llama3.2:3b | gemma2:2b | OpenAI: GPT-4o<br>(2024-11-20) | deepseek-r1:<br>1.5b | | -------------------------- | ---------- | --------- | ----------- | --------- | ------------------------------ | -------------------- | | Paper Summary ENG | 3 | 4 | 3 | 3 | 3 | DISQUALIFIED | | Paper Summary DE | 3 | 5 | 2 | 2 | 4 | | | Paper Translation ENG → DE | 1 | 1 | 2 | 1 | 4 | | | Paper Translation DE → ENG | 2 | 2 | 1 | 3 | 5 | | | Spelling ENG | 2 | 3 | 2 | 4 | 5 | | | Spelling DE | 2 | 1 | 2 | 1 | 4 | | | Notes → Todo ENG | 3 | 2 | 2 | 1 | 5 | | | Notes → Todo DE | 1 | 1 | 3 | 1 | 3 | | | Notes → Mail ENG | 2 | 1 | 1 | 3 | 4 | | | Notes → Mail DE | 1 | 1 | 1 | 2 | 4 | | | Knowledge ENG | 4 | 3 | 4 | 3 | 5 | | | Knowledge DE | 3 | 2 | 1 | 2 | 5 | | | Notes → Paper Outline ENG | 2 | 2 | 4 | 3 | 3 | | | Notes → Paper Outline DE | 1 | 1 | 3 | 3 | 4 | | | **TOTAL** (x/70) | 30 | 29 | 29 | 31 | 58 | | As you can see, R1 in its smallest variant had to be disqualified because it only produced gibberish. I was surprised by some results, e.g., phi3 beating GPT-4o in summarization, gemma2 being competent in correcting English spelling and grammar, and llama3.2 performing quite well in conceptual tasks in English (even beating GPT-4o). When comparing local use of small models with API access to enterprise models, the minimal cost of the latter seems justified in a professional context. However, for summarization and superficial knowledge requests about expert topics, these small LLMs perform decently well. When I find the time, I might rerun the benchmark with 7-9b models, which might yield a step-up in performance.
2025-02-01T10:11:37
https://www.reddit.com/r/LocalLLaMA/comments/1if3c8d/custom_practiceoriented_small_models_benchmark/
raumzeit77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if3c8d
false
null
t3_1if3c8d
/r/LocalLLaMA/comments/1if3c8d/custom_practiceoriented_small_models_benchmark/
false
false
self
1
null
Introduction Kroko ASR: fast streaming speech-to-text models.
1
[removed]
2025-02-01T10:20:38
https://www.reddit.com/r/LocalLLaMA/comments/1if3gij/introduction_kroko_asr_fast_streaming/
banafo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if3gij
false
null
t3_1if3gij
/r/LocalLLaMA/comments/1if3gij/introduction_kroko_asr_fast_streaming/
false
false
self
1
{'enabled': False, 'images': [{'id': 'tO2Z4cGuJ0FpBF8XKtk8I2Iix6Rx36J6t_1XUoQWu4A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Sj3upeWgf-_oOScoP2V9Xk4mqqt9KvG2qlKFlEdH89U.jpg?width=108&crop=smart&auto=webp&s=9cdc73f84da33a1136e4cbd487f9f46e0bde719c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Sj3upeWgf-_oOScoP2V9Xk4mqqt9KvG2qlKFlEdH89U.jpg?width=216&crop=smart&auto=webp&s=bf2f33d1434eced7819b27e7cd390851ce2b83d0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Sj3upeWgf-_oOScoP2V9Xk4mqqt9KvG2qlKFlEdH89U.jpg?width=320&crop=smart&auto=webp&s=6c7ee3013c8d2d37cae04e5f6a9d889f975047cb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Sj3upeWgf-_oOScoP2V9Xk4mqqt9KvG2qlKFlEdH89U.jpg?width=640&crop=smart&auto=webp&s=11e3213a833c853608ee140165d1629b9fab0b83', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Sj3upeWgf-_oOScoP2V9Xk4mqqt9KvG2qlKFlEdH89U.jpg?width=960&crop=smart&auto=webp&s=d48d89a63d492956cdac1a0bcb956c412c8b7d37', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Sj3upeWgf-_oOScoP2V9Xk4mqqt9KvG2qlKFlEdH89U.jpg?width=1080&crop=smart&auto=webp&s=50f7629473d2f5f2280267ba559e1b03df8eab86', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Sj3upeWgf-_oOScoP2V9Xk4mqqt9KvG2qlKFlEdH89U.jpg?auto=webp&s=8ba9db73e455f155e62898060cd9f251d4926046', 'width': 1200}, 'variants': {}}]}
Sam Altman acknowledges R1
1,172
Straight from the horse's mouth. Without R1, or bigger picture open source competitive models, we wouldn’t be seeing this level of acknowledgement from OpenAI. This highlights the importance of having open models, not only that, but open models that actively compete and put pressure on closed models. R1 for me feels like a real _hard takeoff_ moment. No longer can OpenAI or other closed companies dictate the rate of release. No longer do we have to get the scraps of what they decide to give us. Now they have to actively compete in an open market. No moat. > Source: https://www.reddit.com/r/OpenAI/s/nfmI5x9UXC
2025-02-01T10:31:35
https://i.redd.it/ot5nsk399ige1.jpeg
ybdave
i.redd.it
1970-01-01T00:00:00
0
{}
1if3lq1
false
null
t3_1if3lq1
/r/LocalLLaMA/comments/1if3lq1/sam_altman_acknowledges_r1/
false
false
https://b.thumbs.redditm…iyrk3i5XayEc.jpg
1,172
{'enabled': True, 'images': [{'id': '9bQfU2NZAKPJA6g0KMwu51L51HmBIAzm1ScrazVANsA', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/ot5nsk399ige1.jpeg?width=108&crop=smart&auto=webp&s=f47858620b6affb1c712532a72862488fc96e136', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/ot5nsk399ige1.jpeg?width=216&crop=smart&auto=webp&s=fdff67e5abe6941f5fe1fb94327ec47a0abfcd3c', 'width': 216}, {'height': 258, 'url': 'https://preview.redd.it/ot5nsk399ige1.jpeg?width=320&crop=smart&auto=webp&s=76f84b247c97c92c37f7d2548e56461dcc059b5c', 'width': 320}, {'height': 516, 'url': 'https://preview.redd.it/ot5nsk399ige1.jpeg?width=640&crop=smart&auto=webp&s=67ca17a8d86fa20881ff4876577c465ae2c733d9', 'width': 640}, {'height': 774, 'url': 'https://preview.redd.it/ot5nsk399ige1.jpeg?width=960&crop=smart&auto=webp&s=0af16b0f88322db3efa32b4c5900cf9f4159b5c1', 'width': 960}, {'height': 871, 'url': 'https://preview.redd.it/ot5nsk399ige1.jpeg?width=1080&crop=smart&auto=webp&s=e6a2ab7fa1cdc8c0867cc9cd05b4e52e8ef8beb6', 'width': 1080}], 'source': {'height': 951, 'url': 'https://preview.redd.it/ot5nsk399ige1.jpeg?auto=webp&s=16f375ff4898e5c1ca1f12f6b75d44f8ec754d6c', 'width': 1179}, 'variants': {}}]}
US probes DeepSeek's use of banned chips after chatbot scores just 17% accuracy
0
https://tribune.com.pk/story/2525900/us-probes-deepseeks-use-of-banned-chips-after-chatbot-scores-just-17-accuracy
2025-02-01T10:40:57
https://www.reddit.com/r/LocalLLaMA/comments/1if3q52/us_probes_deepseeks_use_of_banned_chips_after/
Quirky_Bag_4250
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if3q52
false
null
t3_1if3q52
/r/LocalLLaMA/comments/1if3q52/us_probes_deepseeks_use_of_banned_chips_after/
false
false
self
0
{'enabled': False, 'images': [{'id': 'EizUUOUwQusVdE1ANR9wLFplL-9emOr6RT_nDrOGt1M', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/BvjpL3opaerU-v-WymdPzv87pkYzdxpYaPFfRWuiohI.jpg?width=108&crop=smart&auto=webp&s=d4a49beb110a70332ffa9bfcbc583332a2b42516', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/BvjpL3opaerU-v-WymdPzv87pkYzdxpYaPFfRWuiohI.jpg?width=216&crop=smart&auto=webp&s=17f1fa814bb1c9dcfcb5edce0e3ca2c29fa268ec', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/BvjpL3opaerU-v-WymdPzv87pkYzdxpYaPFfRWuiohI.jpg?width=320&crop=smart&auto=webp&s=acda74762102ca49cbc646eecd171b89a1571ceb', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/BvjpL3opaerU-v-WymdPzv87pkYzdxpYaPFfRWuiohI.jpg?width=640&crop=smart&auto=webp&s=25feebcc48109829d8d2570829ff81e30379de68', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/BvjpL3opaerU-v-WymdPzv87pkYzdxpYaPFfRWuiohI.jpg?auto=webp&s=026543ab792f99e332197642bec5d97188e8758b', 'width': 640}, 'variants': {}}]}
Qwen Chat: The Highs and Lows of Updates
1
[removed]
2025-02-01T10:53:48
[deleted]
1970-01-01T00:00:00
0
{}
1if3wa3
false
{'oembed': {'author_name': 'CraftiumAI', 'author_url': 'https://www.youtube.com/@CraftiumAI', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/f3DAQf5elcU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Qwen Chat Update – FREE Video and Image Generation for Everyone!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/f3DAQf5elcU/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Qwen Chat Update – FREE Video and Image Generation for Everyone!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1if3wa3
/r/LocalLLaMA/comments/1if3wa3/qwen_chat_the_highs_and_lows_of_updates/
false
false
default
1
null
Virtuoso-Small-v2 - Distilled from Deepseek-v3, 128k context
54
>**Virtuoso-Small-v2 (14B)** is our next-generation, 14-billion-parameter language model that builds upon the original Virtuoso-Small architecture. This version is distilled from Deepseek-v3, leveraging an expanded dataset of 5B+ tokens worth of logits. arcee-ai/Virtuoso-Small-v2: [https://huggingface.co/arcee-ai/Virtuoso-Small-v2](https://huggingface.co/arcee-ai/Virtuoso-Small-v2) gguf: [https://huggingface.co/bartowski/arcee-ai\_Virtuoso-Small-v2-GGUF](https://huggingface.co/bartowski/arcee-ai_Virtuoso-Small-v2-GGUF)
2025-02-01T10:56:34
https://www.reddit.com/r/LocalLLaMA/comments/1if3xpg/virtuososmallv2_distilled_from_deepseekv3_128k/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if3xpg
false
null
t3_1if3xpg
/r/LocalLLaMA/comments/1if3xpg/virtuososmallv2_distilled_from_deepseekv3_128k/
false
false
self
54
{'enabled': False, 'images': [{'id': 'RrW8gvPz2-Lk7uqHPsQaTNhL0wSLd0u3n_xiycvLSWQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CECnLi-SydxyzOhQFaOQ-QlosBX7DRLPUyZ0WLGYLN0.jpg?width=108&crop=smart&auto=webp&s=d662192f5d45d93f37533f934e52cd368dfccfff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CECnLi-SydxyzOhQFaOQ-QlosBX7DRLPUyZ0WLGYLN0.jpg?width=216&crop=smart&auto=webp&s=2c514bbec0a2f9a659ef30d5ec0d24d31068bcfc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CECnLi-SydxyzOhQFaOQ-QlosBX7DRLPUyZ0WLGYLN0.jpg?width=320&crop=smart&auto=webp&s=ce4c946aa2c4a98e0dc743ab7e1292dba1e50373', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CECnLi-SydxyzOhQFaOQ-QlosBX7DRLPUyZ0WLGYLN0.jpg?width=640&crop=smart&auto=webp&s=5c14959476b21191a4b34285c06ff048c6cf3ce6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CECnLi-SydxyzOhQFaOQ-QlosBX7DRLPUyZ0WLGYLN0.jpg?width=960&crop=smart&auto=webp&s=69d73c4c41b967862f2eeae35db9798286069aee', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CECnLi-SydxyzOhQFaOQ-QlosBX7DRLPUyZ0WLGYLN0.jpg?width=1080&crop=smart&auto=webp&s=c2236a64dbb553b65a7bcd2dad525ea5ee7af1ff', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CECnLi-SydxyzOhQFaOQ-QlosBX7DRLPUyZ0WLGYLN0.jpg?auto=webp&s=8311294b5757a90f37d3f5c5e2918433f92c96da', 'width': 1200}, 'variants': {}}]}
Cline/aider/Roo finetuning data?
1
[removed]
2025-02-01T11:00:01
https://www.reddit.com/r/LocalLLaMA/comments/1if3zf7/clineaiderroo_finetuning_data/
Alarming-Ad8154
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if3zf7
false
null
t3_1if3zf7
/r/LocalLLaMA/comments/1if3zf7/clineaiderroo_finetuning_data/
false
false
self
1
null
How do I choose a model?
0
I have downloaded LM Studio, but I've no idea how to go about choosing a model. Are there any guides for this process?
2025-02-01T11:01:22
https://www.reddit.com/r/LocalLLaMA/comments/1if408e/how_do_i_choose_a_model/
jay_jay_okocha10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if408e
false
null
t3_1if408e
/r/LocalLLaMA/comments/1if408e/how_do_i_choose_a_model/
false
false
self
0
null
Sam Altman: OpenAI has been on the 'wrong side of history' concerning open source
840
2025-02-01T11:08:19
https://i.redd.it/iewy2sxsfige1.png
AloneCoffee4538
i.redd.it
1970-01-01T00:00:00
0
{}
1if43uf
false
null
t3_1if43uf
/r/LocalLLaMA/comments/1if43uf/sam_altman_openai_has_been_on_the_wrong_side_of/
false
false
https://b.thumbs.redditm…tjT37gksSx3c.jpg
840
{'enabled': True, 'images': [{'id': 'IBazNgtZpb8j6I_ZJDLnR__0SUXh2GSsDaYRw7_Jtos', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/iewy2sxsfige1.png?width=108&crop=smart&auto=webp&s=252703b86fe19e8da5f2c8cdc1fe62d919ce1b36', 'width': 108}, {'height': 152, 'url': 'https://preview.redd.it/iewy2sxsfige1.png?width=216&crop=smart&auto=webp&s=707ac051dcc6bdee986036949835b567db98fbfb', 'width': 216}, {'height': 225, 'url': 'https://preview.redd.it/iewy2sxsfige1.png?width=320&crop=smart&auto=webp&s=1bca5ea67af73841e8524be4a3d0671f5eff2523', 'width': 320}, {'height': 450, 'url': 'https://preview.redd.it/iewy2sxsfige1.png?width=640&crop=smart&auto=webp&s=ad808e1a916c32f2181d2dc562c2065ba8cb4c99', 'width': 640}, {'height': 675, 'url': 'https://preview.redd.it/iewy2sxsfige1.png?width=960&crop=smart&auto=webp&s=4876204fa6fa50b3fb810133773bc14d350274e5', 'width': 960}, {'height': 760, 'url': 'https://preview.redd.it/iewy2sxsfige1.png?width=1080&crop=smart&auto=webp&s=0206dd64d37ea5c3c7fedb97b630e362b601830c', 'width': 1080}], 'source': {'height': 760, 'url': 'https://preview.redd.it/iewy2sxsfige1.png?auto=webp&s=3555e794ba7e1387ea55191574709dfbc2c7a915', 'width': 1080}, 'variants': {}}]}
Is deepseek-r1 safe, even locally?
0
Hello everyone, As the title says, the new deepseek-r1 is awesome and I want to do the full Open-r1 tutorial on 4xH100, but is it safe? Even if I deploy it locally? Do I own my data? Will my data not be transferred somewhere else? For the cloud it's clear, but for local deployment I didn't find anything! Thank you
2025-02-01T11:31:16
https://www.reddit.com/r/LocalLLaMA/comments/1if4fia/is_deepseekr1_is_safe_even_on_local/
elpad92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if4fia
false
null
t3_1if4fia
/r/LocalLLaMA/comments/1if4fia/is_deepseekr1_is_safe_even_on_local/
false
false
self
0
null
How can I force DeepSeek to think much more? And how much the longest i can force it?
42
Is there any benchmark/best practice for that?
2025-02-01T11:44:16
https://www.reddit.com/r/LocalLLaMA/comments/1if4m53/how_can_i_force_deepseek_to_think_much_more_and/
Western_Soil_4613
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if4m53
false
null
t3_1if4m53
/r/LocalLLaMA/comments/1if4m53/how_can_i_force_deepseek_to_think_much_more_and/
false
false
self
42
null
Is there a definitive guide on how to get **good** responses from local LLMs via ollama?
1
[removed]
2025-02-01T11:55:45
https://www.reddit.com/r/LocalLLaMA/comments/1if4s1o/is_there_a_definitive_guide_on_how_to_get_good/
Vast-Pace7353
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if4s1o
false
null
t3_1if4s1o
/r/LocalLLaMA/comments/1if4s1o/is_there_a_definitive_guide_on_how_to_get_good/
false
false
self
1
null
Llama 3.1 70B performance
1
Hello! I cannot find anywhere on the internet if these numbers are the expected performance or not for running Llama 3.1 70B AWQ INT4 - from my understanding the INT4 should reduce memory footprint and also increase its performance. Running latest vLLM with prefix caching and chunked prefill. Benched with llmperf, concurrent requests = 1. \- 1x H100 GPU: \~35 output tokens per second. \- 2x A100 GPU: \~40 output tokens per second. Also speculative decoding using Llama 3.2 1B: \- 1x H100 GPU: \~80 output tokens per second. \- 2x A100 GPU: \~60 output tokens per second. I feel like I got conflicting results, and I don't know what to make of them.
2025-02-01T12:01:30
https://www.reddit.com/r/LocalLLaMA/comments/1if4vbf/llama_31_70b_performance/
purpledeerz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if4vbf
false
null
t3_1if4vbf
/r/LocalLLaMA/comments/1if4vbf/llama_31_70b_performance/
false
false
self
1
null
Llama 3.1 70B performance
1
[removed]
2025-02-01T12:06:57
https://www.reddit.com/r/LocalLLaMA/comments/1if4yeg/llama_31_70b_performance/
ScArL3T
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if4yeg
false
null
t3_1if4yeg
/r/LocalLLaMA/comments/1if4yeg/llama_31_70b_performance/
false
false
self
1
null
Deepseek
1
Why has my DeepSeek chat been stuck on Cloudflare authentication for the past week? Initially I thought it was just busy, but now it's been too long.
2025-02-01T12:20:41
https://www.reddit.com/r/LocalLLaMA/comments/1if55tl/deepseek/
StrainMundane6273
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if55tl
false
null
t3_1if55tl
/r/LocalLLaMA/comments/1if55tl/deepseek/
false
false
self
1
null
Llama 3.1 70B performance
1
[removed]
2025-02-01T12:34:24
https://www.reddit.com/r/LocalLLaMA/comments/1if5djm/llama_31_70b_performance/
ScArL3T
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if5djm
false
null
t3_1if5djm
/r/LocalLLaMA/comments/1if5djm/llama_31_70b_performance/
false
false
self
1
null
Mistral Small 3 is awesome, but...
21
It has a rather strange way of correcting itself and letting it go unnoticed. For example: **Prompt:** A marble is put in a glass. The glass is then turned upside down and put on a table. Then the glass is picked up and put in a microwave. Where's the marble? **Output:** >Based on the sequence of events you described, the marble would still be in the glass. Here's why: >**1.** The marble is put in the glass. >**2.** The glass is turned upside down and put on a table. At this point, the marble should fall out of the glass onto the table. >**3.** The glass is picked up and put in a microwave. Since the marble fell out in the previous step, it would not be in the glass when the glass is put in the microwave. >Therefore, the marble is on the table. It initially and wrongly states that the marble is still inside the glass, and then, while explaining why it came to that conclusion, it corrects itself and nonchalantly alters the outcome, lol. That is the clearest example. I have noticed it do this quite a lot, but more subtly, for other tasks such as creative writing. It kinda alters/improves texts on the go, which sometimes makes it a bit incoherent but not necessarily bad on the whole. In my opinion, apart from this strange little behavior, Mistral Small 3 is generally very smart and probably the most generally "intelligent" model I've used for its size. I've especially had fun doing creative writing with it.
2025-02-01T12:37:11
https://www.reddit.com/r/LocalLLaMA/comments/1if5f3b/mistral_small_3_is_awesome_but/
Admirable-Star7088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if5f3b
false
null
t3_1if5f3b
/r/LocalLLaMA/comments/1if5f3b/mistral_small_3_is_awesome_but/
false
false
self
21
null
What is the simplest math problem models still can't solve?
4
What's your favourite simple math problem that LLMs still can't solve?
2025-02-01T12:45:44
https://www.reddit.com/r/LocalLLaMA/comments/1if5jzm/what_is_the_simplest_math_models_still_cant_solve/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if5jzm
false
null
t3_1if5jzm
/r/LocalLLaMA/comments/1if5jzm/what_is_the_simplest_math_models_still_cant_solve/
false
false
self
4
null
Has anyone tried R1-Zero?
12
I have read stories about R1-Zero (not the one available on the Deepseek website, which is just R1) having some strange thought behaviors. Does anyone have examples of these thought processes, or know where I can use the model through an API?
2025-02-01T12:47:50
https://www.reddit.com/r/LocalLLaMA/comments/1if5l87/has_anyone_tried_r1zero/
Quantum1248
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if5l87
false
null
t3_1if5l87
/r/LocalLLaMA/comments/1if5l87/has_anyone_tried_r1zero/
false
false
self
12
null
Just canceled my ChatGPT Plus subscription
652
I initially subscribed when they introduced uploading documents when it was limited to the plus plan. I kept holding onto it for o1 since it really was a game changer for me. But since R1 is free right now (when it’s available at least lol) and the quantized distilled models finally fit onto a GPU I can afford, I cancelled my plan and am going to get a GPU with more VRAM instead. I love the direction that open source machine learning is taking right now. It’s crazy to me that distillation of a reasoning model to something like Llama 8B can boost the performance by this much. I hope we soon will get more advancements in more efficient large context windows and projects like Open WebUI.
2025-02-01T12:56:16
https://www.reddit.com/r/LocalLLaMA/comments/1if5q97/just_canceled_my_chatgpt_plus_subscription/
Anxietrap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if5q97
false
null
t3_1if5q97
/r/LocalLLaMA/comments/1if5q97/just_canceled_my_chatgpt_plus_subscription/
false
false
self
652
null
Okay, don't talk bad about the Chinese elite. What about the Western elite?
0
Funny, the biggest criticism from the West is that DeepSeek doesn't criticize the Chinese elite. But the Western world does not speak ill of the American/European elite either. It's always that old story about "conspiracy theory", anti-Semitism, etc... Whoever is really in power cannot be criticized. It has been this way since the world began. And no, Trump and the tech billionaires are not in real power. They are just actors to entertain the masses and impose the agenda.
2025-02-01T13:09:08
https://www.reddit.com/r/LocalLLaMA/comments/1if5yce/okay_dont_talk_bad_about_the_chinese_elite_what/
Objective_Lab_3182
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if5yce
false
null
t3_1if5yce
/r/LocalLLaMA/comments/1if5yce/okay_dont_talk_bad_about_the_chinese_elite_what/
false
false
self
0
null
Llama 3.1 70B Performance
1
[removed]
2025-02-01T13:12:25
https://www.reddit.com/r/LocalLLaMA/comments/1if60is/llama_31_70b_performance/
ScArL3T
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if60is
false
null
t3_1if60is
/r/LocalLLaMA/comments/1if60is/llama_31_70b_performance/
false
false
self
1
null
Reminder: Not your weights, not your model.
0
2025-02-01T13:13:16
https://i.redd.it/edozdsoy1jge1.png
Amgadoz
i.redd.it
1970-01-01T00:00:00
0
{}
1if612g
false
null
t3_1if612g
/r/LocalLLaMA/comments/1if612g/reminder_not_your_weights_not_your_model/
false
false
https://b.thumbs.redditm…fXDXAmaXvIRY.jpg
0
{'enabled': True, 'images': [{'id': 'z2vCzAFelEwzGxj-CwqqC2Gnb14NrDQMR1P1054WyEo', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/edozdsoy1jge1.png?width=108&crop=smart&auto=webp&s=2df9a8c1cd43ecc04303d8105994c75d27c84463', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/edozdsoy1jge1.png?width=216&crop=smart&auto=webp&s=1fb203b2b4f830440ebe8f7d933703a71892e1f0', 'width': 216}, {'height': 196, 'url': 'https://preview.redd.it/edozdsoy1jge1.png?width=320&crop=smart&auto=webp&s=a809846937f5738d9a33eca850bbd7c1a454cf74', 'width': 320}, {'height': 392, 'url': 'https://preview.redd.it/edozdsoy1jge1.png?width=640&crop=smart&auto=webp&s=61cdb8e09af9f630eb2f295677908477de83a40e', 'width': 640}, {'height': 589, 'url': 'https://preview.redd.it/edozdsoy1jge1.png?width=960&crop=smart&auto=webp&s=6f9c5c908a4950e15840c68da9c7c38932197d67', 'width': 960}, {'height': 662, 'url': 'https://preview.redd.it/edozdsoy1jge1.png?width=1080&crop=smart&auto=webp&s=0c4f8a054cf030edaef7474e4a3dce06d4bf794c', 'width': 1080}], 'source': {'height': 939, 'url': 'https://preview.redd.it/edozdsoy1jge1.png?auto=webp&s=33254681f1241351289376bf93998290333d86a7', 'width': 1530}, 'variants': {}}]}
Llama 3.1 70b Performance
3
Hello! I cannot find anywhere on the internet if these numbers are the expected performance or not for running Llama 3.1 70B AWQ INT4 - from my understanding the INT4 should reduce memory footprint and also increase its performance. Running latest vLLM with prefix caching and chunked prefill. Benched with llmperf, concurrent requests = 1. \- 1x H100 GPU: \~35 output tokens per second. \- 2x A100 GPU: \~40 output tokens per second. Also speculative decoding using Llama 3.2 1B: \- 1x H100 GPU: \~80 output tokens per second. \- 2x A100 GPU: \~60 output tokens per second. I feel like I got conflicting results, and I don't know what to make of them.
2025-02-01T13:18:27
https://www.reddit.com/r/LocalLLaMA/comments/1if64ds/llama_31_70b_performance/
ScArL3T
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if64ds
false
null
t3_1if64ds
/r/LocalLLaMA/comments/1if64ds/llama_31_70b_performance/
false
false
self
3
null
Thoughts about phi-4?
15
It's been a while since it was released. What do you think about it? How does it compare to qwen-2.5-14B and Mistral Nemo?
2025-02-01T13:28:46
https://www.reddit.com/r/LocalLLaMA/comments/1if6azp/thoughts_about_phi4/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if6azp
false
null
t3_1if6azp
/r/LocalLLaMA/comments/1if6azp/thoughts_about_phi4/
false
false
self
15
null
What should I run with my rig? Trying n8n automation.
1
[removed]
2025-02-01T13:36:02
https://www.reddit.com/r/LocalLLaMA/comments/1if6fm9/what_should_i_run_with_my_rig_trying_n8n/
Impossible_Brief5600
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if6fm9
false
null
t3_1if6fm9
/r/LocalLLaMA/comments/1if6fm9/what_should_i_run_with_my_rig_trying_n8n/
false
false
self
1
null
What are some other good communities to discuss local LLMs?
1
[removed]
2025-02-01T13:37:28
https://www.reddit.com/r/LocalLLaMA/comments/1if6gjz/what_are_some_other_good_communities_to_discuss/
AccurateSun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if6gjz
false
null
t3_1if6gjz
/r/LocalLLaMA/comments/1if6gjz/what_are_some_other_good_communities_to_discuss/
false
false
self
1
null
Which model I should run? Trying n8n automation.
1
[removed]
2025-02-01T13:39:56
https://www.reddit.com/r/LocalLLaMA/comments/1if6i5w/which_model_i_should_run_trying_n8n_automation/
DawarAzhar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if6i5w
false
null
t3_1if6i5w
/r/LocalLLaMA/comments/1if6i5w/which_model_i_should_run_trying_n8n_automation/
false
false
self
1
null
Anyone running models on an DL380 or other 2u server? If so, what have you built?
1
[removed]
2025-02-01T13:43:31
https://www.reddit.com/r/LocalLLaMA/comments/1if6kjj/anyone_running_models_on_an_dl380_or_other_2u/
fragmented_pkt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if6kjj
false
null
t3_1if6kjj
/r/LocalLLaMA/comments/1if6kjj/anyone_running_models_on_an_dl380_or_other_2u/
false
false
self
1
null
Best Open-Source LLaMA Model for Fine-Tuning a FAQ Bot?
1
[removed]
2025-02-01T13:47:46
https://www.reddit.com/r/LocalLLaMA/comments/1if6nfg/best_opensource_llama_model_for_finetuning_a_faq/
Separate_Cup_5095
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if6nfg
false
null
t3_1if6nfg
/r/LocalLLaMA/comments/1if6nfg/best_opensource_llama_model_for_finetuning_a_faq/
false
false
self
1
null
🚀Introducing LLPlayer - The media player integrated with OpenAI Whisper
33
Hello. I have created an open-source video player that integrates OpenAI Whisper. It currently supports Windows only. GitHub: [http://github.com/umlx5h/LLPlayer](http://github.com/umlx5h/LLPlayer) Recently there was news about VLC adding AI-generated subtitles, and like that one, I have created a video player that supports whisper.cpp natively. [Demo \(source:ted.com\/talks\/elon\_musk\_the\_future\_we\_re\_building\_and\_boring\)](https://reddit.com/link/1if6o88/video/0y1a36uo7jge1/player) Subtitles can be generated from the **playback position** of both local and online videos, **asynchronously** (with **yt-dlp** integration). Therefore, there is no need to wait at all to watch any video with subtitles! It supports audio in **100** languages and, like VLC, it also supports real-time translation, with support for **134** languages. It is made for language-learning purposes and has other unique features not found in normal video players. * **Dual Subtitles** * **Real-time Translation** (Google, DeepL) * **Real-time OCR Subtitles** (Tesseract, MicrosoftOCR) * **Subtitles Sidebar** * **Subtitles Seeking** * **Word Lookup & Search** * **Subtitles Downloader** More information can be found at: [https://llplayer.com](https://llplayer.com)
2025-02-01T13:48:54
https://www.reddit.com/r/LocalLLaMA/comments/1if6o88/introducing_llplayer_the_media_player_integrated/
umlx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if6o88
false
null
t3_1if6o88
/r/LocalLLaMA/comments/1if6o88/introducing_llplayer_the_media_player_integrated/
false
false
https://b.thumbs.redditm…BPDJB6aXbSFM.jpg
33
{'enabled': False, 'images': [{'id': 'ggtnVe4G09ciHN5rC349wgQa74jtdT-CnBIBFe9pU14', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/g0pAmVUQkF4KkAyxtOIwIb67j2xNu4Br8uYivNbBTeE.jpg?width=108&crop=smart&auto=webp&s=5df651505b4162dc1cd44792bafd0123ecdd1e7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/g0pAmVUQkF4KkAyxtOIwIb67j2xNu4Br8uYivNbBTeE.jpg?width=216&crop=smart&auto=webp&s=440dddacc00ebe877025b574d34e08a166881d0c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/g0pAmVUQkF4KkAyxtOIwIb67j2xNu4Br8uYivNbBTeE.jpg?width=320&crop=smart&auto=webp&s=e62efeca1903ab5c875d97b84cbb6f4df246a59a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/g0pAmVUQkF4KkAyxtOIwIb67j2xNu4Br8uYivNbBTeE.jpg?width=640&crop=smart&auto=webp&s=e25b86aa4e3a11f551927a2356ae1b77717da548', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/g0pAmVUQkF4KkAyxtOIwIb67j2xNu4Br8uYivNbBTeE.jpg?width=960&crop=smart&auto=webp&s=8dc592095526d4b0f87fcff9bc2d9288eec17283', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/g0pAmVUQkF4KkAyxtOIwIb67j2xNu4Br8uYivNbBTeE.jpg?width=1080&crop=smart&auto=webp&s=feb21f40dd055cff501203083b28fb3fb787b711', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/g0pAmVUQkF4KkAyxtOIwIb67j2xNu4Br8uYivNbBTeE.jpg?auto=webp&s=ea9a2078126058287c5e8bdaeb0b41a3ed5f55a1', 'width': 1200}, 'variants': {}}]}
Best way to run llm locally
1
Hi guys, what is the best way to run an LLM locally? I want a fully open-source one that supports a web UI and RAG. I have come across llama.cpp and kobold.cpp, and I wonder whether there are others that may be better than these two. I am looking for the overall best option and am concerned about privacy too. Lastly, I want to know the top choice among the community (which has the most users and support).
2025-02-01T13:53:49
https://www.reddit.com/r/LocalLLaMA/comments/1if6rl2/best_way_to_run_llm_locally/
Drapper_Zedrus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if6rl2
false
null
t3_1if6rl2
/r/LocalLLaMA/comments/1if6rl2/best_way_to_run_llm_locally/
false
false
self
1
null
Can we run ollama with a RAG local without OPENAI?
2
Running ollama with DeepSeek is nice; however, it would be even nicer to update the context with some local documents. I found some examples in Python where a document can be uploaded, but then the code wants to connect to OpenAI, so you need an OpenAI key to digest the document in chunks with FAISS before it can be used for RAG. Is there a way to keep all the work on the local machine? Does anybody have an example that runs ollama with DeepSeek and a RAG to enhance the LLM's context? I hope I am asking the right question; the Python examples I found so far required an OpenAI key and quickly ran through the rate limits.
2025-02-01T14:04:56
https://www.reddit.com/r/LocalLLaMA/comments/1if6zpg/can_we_run_ollama_with_a_rag_local_without_openai/
Hace_x
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if6zpg
false
null
t3_1if6zpg
/r/LocalLLaMA/comments/1if6zpg/can_we_run_ollama_with_a_rag_local_without_openai/
false
false
self
2
null
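Yes, this can stay fully local: swap the OpenAI embedder for a local one (e.g. Ollama's `/api/embeddings` endpoint with a model such as `nomic-embed-text`) and keep FAISS, or even plain cosine similarity, for retrieval. A minimal sketch of the retrieval step, with the real Ollama call stubbed out so the example runs without a server:

```python
# Minimal local RAG step: embed chunks, retrieve by cosine similarity, and
# stuff the top chunks into the prompt. The embed() body is an assumption:
# in practice it would POST to Ollama's local /api/embeddings endpoint, e.g.
#   requests.post("http://localhost:11434/api/embeddings",
#                 json={"model": "nomic-embed-text", "prompt": text})
# Here it is replaced with a trivial bag-of-letters vector so the sketch
# runs offline.
import math

def embed(text: str) -> list[float]:
    # Stub embedder: 26-dim letter-frequency vector (swap for a real model).
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by similarity to the query and keep the top k.
    qv = embed(query)
    return sorted(chunks, key=lambda c: cosine(qv, embed(c)), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    # Prepend the retrieved context; send the result to the local LLM.
    context = "\n---\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The final prompt then goes to the local model (e.g. Ollama's `/api/generate`); no OpenAI key is involved at any step.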
How do I get the deepseek-r1 distill models to roleplay and talk from a first person perspective rather than constantly describing the character and setting and story?
1
[removed]
2025-02-01T14:05:14
https://www.reddit.com/r/LocalLLaMA/comments/1if6zwz/how_do_i_get_the_deepseekr1_distill_models_to/
CorruptCobalion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if6zwz
false
null
t3_1if6zwz
/r/LocalLLaMA/comments/1if6zwz/how_do_i_get_the_deepseekr1_distill_models_to/
false
false
self
1
null
o3-mini is now the SOTA coding model. It is truly something to behold. Procedural clouds in one-shot.
497
2025-02-01T14:08:08
https://v.redd.it/x607arxnbjge1
LocoMod
v.redd.it
1970-01-01T00:00:00
0
{}
1if71w7
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/x607arxnbjge1/DASHPlaylist.mpd?a=1741010903%2CNTBkYTUxM2JkOWMyYzUwNWE3ZjE4YWIyN2M4NGJjZDcyZDZhMzZlNTNjYTc0NGI5NmI3ZWJhYzBlMWNlNjQ0Mw%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/x607arxnbjge1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1062, 'hls_url': 'https://v.redd.it/x607arxnbjge1/HLSPlaylist.m3u8?a=1741010903%2CMzU5YmRhZjJmOGY2NDlhYmYzOWQ3ZmQwMzkwZjk1ZTFhYTlmYjJjOTJiZTk4Yzk4NWEyYzIyNDMzYzQyZTBlNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/x607arxnbjge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1if71w7
/r/LocalLLaMA/comments/1if71w7/o3mini_is_now_the_sota_coding_model_it_is_truly/
false
false
https://external-preview…bfb0d628a64d3ecd
497
{'enabled': False, 'images': [{'id': 'aTBxM2VyeG5iamdlMUWHkN0UG3UwPNFPGIT0TYE7p36ybavsfv5qTlMpE8Gi', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/aTBxM2VyeG5iamdlMUWHkN0UG3UwPNFPGIT0TYE7p36ybavsfv5qTlMpE8Gi.png?width=108&crop=smart&format=pjpg&auto=webp&s=f03322f006d79c371ce035b5926813d7d3b04932', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/aTBxM2VyeG5iamdlMUWHkN0UG3UwPNFPGIT0TYE7p36ybavsfv5qTlMpE8Gi.png?width=216&crop=smart&format=pjpg&auto=webp&s=12da0ef89a4285b4b44b3a6f79c657835ffa7a62', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/aTBxM2VyeG5iamdlMUWHkN0UG3UwPNFPGIT0TYE7p36ybavsfv5qTlMpE8Gi.png?width=320&crop=smart&format=pjpg&auto=webp&s=2382839674b52189f0574180687650b24bcddccb', 'width': 320}, {'height': 354, 'url': 'https://external-preview.redd.it/aTBxM2VyeG5iamdlMUWHkN0UG3UwPNFPGIT0TYE7p36ybavsfv5qTlMpE8Gi.png?width=640&crop=smart&format=pjpg&auto=webp&s=54f32f8ea2c25aca638c2ef345356f532902fa64', 'width': 640}, {'height': 531, 'url': 'https://external-preview.redd.it/aTBxM2VyeG5iamdlMUWHkN0UG3UwPNFPGIT0TYE7p36ybavsfv5qTlMpE8Gi.png?width=960&crop=smart&format=pjpg&auto=webp&s=c67654ccb708dedf2c0082d755cd71aebcba4fc6', 'width': 960}, {'height': 597, 'url': 'https://external-preview.redd.it/aTBxM2VyeG5iamdlMUWHkN0UG3UwPNFPGIT0TYE7p36ybavsfv5qTlMpE8Gi.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1e07675ecb9d4ee89ddceeaf2aa5f16a9a6ae7f7', 'width': 1080}], 'source': {'height': 1912, 'url': 'https://external-preview.redd.it/aTBxM2VyeG5iamdlMUWHkN0UG3UwPNFPGIT0TYE7p36ybavsfv5qTlMpE8Gi.png?format=pjpg&auto=webp&s=f5d6e5afbb252ef0e8c3e0da61d8c3bab35c2818', 'width': 3456}, 'variants': {}}]}
Potential cons of using Nebius EU hosted DeepSeek deployment?
1
[removed]
2025-02-01T14:21:18
https://www.reddit.com/r/LocalLLaMA/comments/1if7az4/potential_cons_of_using_nebius_eu_hosted_deepseek/
anshabhi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if7az4
false
null
t3_1if7az4
/r/LocalLLaMA/comments/1if7az4/potential_cons_of_using_nebius_eu_hosted_deepseek/
false
false
self
1
null
How To Run Deepseek R1 671b Fully Locally On a $2000 EPYC Server
135
2025-02-01T14:30:39
https://digitalspaceport.com/how-to-run-deepseek-r1-671b-fully-locally-on-2000-epyc-rig/
RobotRobotWhatDoUSee
digitalspaceport.com
1970-01-01T00:00:00
0
{}
1if7hm3
false
null
t3_1if7hm3
/r/LocalLLaMA/comments/1if7hm3/how_to_run_deepseek_r1_671b_fully_locally_on_a/
false
false
default
135
null
smaller parameters model or a quantized version of the bigger model (4_K_M quants)?
3
I'm going to download DeepSeek; Ollama offers only a Q4\_K version. I can run the 70B Q4\_K version I guess, or maybe the 30B Q4\_K. I was wondering if I should instead get a less heavily quantized version with a lower parameter count.
2025-02-01T14:41:28
https://www.reddit.com/r/LocalLLaMA/comments/1if7pe4/smaller_parameters_model_or_a_quantized_version/
A7mdxDD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if7pe4
false
null
t3_1if7pe4
/r/LocalLLaMA/comments/1if7pe4/smaller_parameters_model_or_a_quantized_version/
false
false
self
3
null
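As a rule of thumb, the memory footprint is parameter count times average bits per weight divided by 8. Q4_K_M averages roughly 4.5-4.8 bits per weight (an approximation; the exact figure depends on the quant layout). A quick sketch of that arithmetic:

```python
# Rough memory footprint of a quantized model, ignoring KV cache and
# runtime buffers. Bits-per-weight values here are approximations.
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

size_70b = model_size_gb(70, 4.8)  # ~42 GB for a 70B Q4_K_M-class quant
size_32b = model_size_gb(32, 4.8)  # ~19 GB for a ~30B quant
```

So the practical question is whether your hardware holds ~42 GB comfortably; if not, the smaller model at the same quant is usually the safer trade than dropping the big model below ~4 bits.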
How to replicate the behaviour of the HF Llama-2-7b-chat-hf API
1
Hello, I am trying to replicate the response quality we see in the example section to the right of [https://huggingface.co/meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) I am running the model locally and give both the web API version and my local one the following prompt     <<SYS>>     <large chunk of text with facts about apples>     <</SYS>>      [INST]      User: What are the nutrition facts about Apples      [/INST] The one from the API looks very nice and is exactly what I want, but the local one responds with weird source links and overall is not good. This is how I define the model     `model_name = "meta-llama/Llama-2-7b-chat-hf"` tokenizer = AutoTokenizer.from_pretrained(model_name)          model = AutoModelForCausalLM.from_pretrained(model_name)     inputs = tokenizer(prompt, return_tensors="pt") This is how I use the model outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.7, repetition_penalty=1.5, top_p=0.9, top_k=50) I tried playing around with the arguments, but the responses from the local model are of much lower quality and contain irrelevant things like sources. What am I doing wrong? The resources on my PC don't allow it to reach 32 GB of RAM; it caps at 25-27 GB. Could that be the issue, or does that affect only speed? Does anyone have a configuration that works well? I have been searching the internet for so long.
2025-02-01T14:49:43
https://www.reddit.com/r/LocalLLaMA/comments/1if7vi5/how_to_replicate_the_behaviour_of_the_hf/
Specialist_Bee_9726
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if7vi5
false
null
t3_1if7vi5
/r/LocalLLaMA/comments/1if7vi5/how_to_replicate_the_behaviour_of_the_hf/
false
false
self
1
{'enabled': False, 'images': [{'id': 'aAYGC00FD8M8UbXTMQ3NBxyhNyEpQmadzMNNZBbyauU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/lEVAwDlwdDG9VszuFWeio6TF7lI7QI-RoSfSlYJLOpE.jpg?width=108&crop=smart&auto=webp&s=a73ceaee26cf35549b56b4161b277f69aaaeda6d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/lEVAwDlwdDG9VszuFWeio6TF7lI7QI-RoSfSlYJLOpE.jpg?width=216&crop=smart&auto=webp&s=694638f93bd330c005cab7d65497b27a833cf2e7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/lEVAwDlwdDG9VszuFWeio6TF7lI7QI-RoSfSlYJLOpE.jpg?width=320&crop=smart&auto=webp&s=f824d2782b292c1f5ddcf1f5685c8dbd4e8d82d1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/lEVAwDlwdDG9VszuFWeio6TF7lI7QI-RoSfSlYJLOpE.jpg?width=640&crop=smart&auto=webp&s=55314756582e65cea31f30b56dda3353bca8ef7a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/lEVAwDlwdDG9VszuFWeio6TF7lI7QI-RoSfSlYJLOpE.jpg?width=960&crop=smart&auto=webp&s=302c453f48a611ce6d6d9ea72cb83134010b6858', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/lEVAwDlwdDG9VszuFWeio6TF7lI7QI-RoSfSlYJLOpE.jpg?width=1080&crop=smart&auto=webp&s=fe9fe14d53c4d4cb79a0467892753740deef1f8d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/lEVAwDlwdDG9VszuFWeio6TF7lI7QI-RoSfSlYJLOpE.jpg?auto=webp&s=7b606ce7f7fb4a508b364f74e8b03d52d9ae7249', 'width': 1200}, 'variants': {}}]}
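A common cause of this quality gap is the prompt template: the Llama-2 chat checkpoints expect the system prompt *inside* the first `[INST]` block, wrapped in `<<SYS>>` tags, not before it. A minimal sketch of a helper that builds that template as a plain string (the function name is hypothetical; newer `transformers` versions can also do this for you via `tokenizer.apply_chat_template`):

```python
# Build the Llama-2 chat template as a plain string. The system prompt goes
# inside the first [INST] block; the model continues after "[/INST]".
def llama2_chat_prompt(system: str, user: str) -> str:
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system}\n"
        "<</SYS>>\n\n"
        f"{user} [/INST]"
    )

prompt = llama2_chat_prompt(
    "You answer using only the provided facts about apples.",
    "What are the nutrition facts about apples?",
)
```

With the template matched, the generation settings above (temperature 0.7, `repetition_penalty=1.5`) are also worth revisiting; a repetition penalty that high can noticeably degrade output on its own.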
Have we been oversold on how efficient DeepSeek R1 is at inference?
2
It seems none of the third-party API providers are providing as good performance as DeepSeek's own API (when their server is not getting hammered). I've tried both Nvidia and Azure, and it's surprising how slow the token speed is, given how little memory the MOE architecture uses. What's the current consensus on how efficient hosting the model is?
2025-02-01T14:51:11
https://www.reddit.com/r/LocalLLaMA/comments/1if7wkx/have_we_been_oversold_on_how_efficient_deepseek/
Miscend
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if7wkx
false
null
t3_1if7wkx
/r/LocalLLaMA/comments/1if7wkx/have_we_been_oversold_on_how_efficient_deepseek/
false
false
self
2
null
$200 o3 mini is NOT better than Free R1
51
2025-02-01T15:03:41
https://i.redd.it/r2ur6c6oljge1.jpeg
BidHot8598
i.redd.it
1970-01-01T00:00:00
0
{}
1if85xj
false
null
t3_1if85xj
/r/LocalLLaMA/comments/1if85xj/200_o3_mini_is_not_better_than_free_r1/
false
false
https://b.thumbs.redditm…IfVN-Z_hAI1A.jpg
51
{'enabled': True, 'images': [{'id': 'X_7-_T-VEFRk34JGAoMdGEqKQoYp3exHY1wbzYdJ3WI', 'resolutions': [{'height': 133, 'url': 'https://preview.redd.it/r2ur6c6oljge1.jpeg?width=108&crop=smart&auto=webp&s=6b282a5abf438b5f5f5b78351e69101c40c23838', 'width': 108}, {'height': 266, 'url': 'https://preview.redd.it/r2ur6c6oljge1.jpeg?width=216&crop=smart&auto=webp&s=3a508908d1f5c54f46d36efea76752f52a5aafb1', 'width': 216}, {'height': 395, 'url': 'https://preview.redd.it/r2ur6c6oljge1.jpeg?width=320&crop=smart&auto=webp&s=d5fd3472c5b2c78f5fc59a8fdb6cb9e725f09b5e', 'width': 320}, {'height': 790, 'url': 'https://preview.redd.it/r2ur6c6oljge1.jpeg?width=640&crop=smart&auto=webp&s=139d36290ed05c072c8b0d227264818f672d503c', 'width': 640}], 'source': {'height': 834, 'url': 'https://preview.redd.it/r2ur6c6oljge1.jpeg?auto=webp&s=cd75595cfb75f2e9ca48d73e57ad11fcf25f849d', 'width': 675}, 'variants': {}}]}
Whats the best method to have AI refactor your entire project?
3
Hey! I have code in GitHub and usually use Codespaces for my coding. I want to finalize my mobile app, so I'm wondering if there's an AI that can optimize, clean up, and refactor my code holistically, i.e. view my code as one big project, not just look at each file independently. Any suggestions? I could download my code into VSCode locally as well, but Codespaces has just been the easiest way for me to code. Thank you in advance!
2025-02-01T15:12:44
https://www.reddit.com/r/LocalLLaMA/comments/1if8cr0/whats_the_best_method_to_have_ai_refactor_your/
MatthewNagy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if8cr0
false
null
t3_1if8cr0
/r/LocalLLaMA/comments/1if8cr0/whats_the_best_method_to_have_ai_refactor_your/
false
false
self
3
null
5090 vs 4090 for Llama 70B 3.3 for Q6 - Q8
1
I don’t really care about the internet drama over the 5 series, what actually matters to me is the extra VRAM and how the Llama 70B 3.3 quantization variants perform. I’m seriously considering the 5090 just for the VRAM alone. Has anyone tested the 70B 3.3 variants at Q6–Q8 and have any comparisons? Would love to see some actual data as all the reviews are just gaming benchmarks.
2025-02-01T15:13:49
https://www.reddit.com/r/LocalLLaMA/comments/1if8dj3/5090_vs_4090_for_llama_70b_33_for_q6_q8/
mrbluesneeze
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if8dj3
false
null
t3_1if8dj3
/r/LocalLLaMA/comments/1if8dj3/5090_vs_4090_for_llama_70b_33_for_q6_q8/
false
false
self
1
null
Cursor alternative that supports Notebooks properly (ipynb)
1
Either open source or paid pls?
2025-02-01T15:14:38
https://www.reddit.com/r/LocalLLaMA/comments/1if8e5z/cursor_alternative_that_supports_notebooks/
drivenkey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if8e5z
false
null
t3_1if8e5z
/r/LocalLLaMA/comments/1if8e5z/cursor_alternative_that_supports_notebooks/
false
false
self
1
null
Conversation with DeepSeek R1: Systemic censorship, Moloch, and Our Coming Cognitive Collapse
1
2025-02-01T15:16:08
https://github.com/yongxu/DeepBots/blob/main/conversations/3/README.md
renyongxu
github.com
1970-01-01T00:00:00
0
{}
1if8fby
false
null
t3_1if8fby
/r/LocalLLaMA/comments/1if8fby/conversation_with_deepseek_r1_systemic_censorship/
false
false
https://b.thumbs.redditm…eINa1R1HhDuU.jpg
1
{'enabled': False, 'images': [{'id': 'm8KoAogRH5JW4g8yCEDdx4tva3njfmCvlGj4e16m0Pg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HUP_OoqVDh5qJN6L-RbwJGcuAHEL6JM7y_KANaAYOGE.jpg?width=108&crop=smart&auto=webp&s=e27ac572ea9fe77dbdeedf4781302919a66fa003', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HUP_OoqVDh5qJN6L-RbwJGcuAHEL6JM7y_KANaAYOGE.jpg?width=216&crop=smart&auto=webp&s=260f32339eed42432a5f12bb5fd95f44dc31df7e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HUP_OoqVDh5qJN6L-RbwJGcuAHEL6JM7y_KANaAYOGE.jpg?width=320&crop=smart&auto=webp&s=d0398856230cda19d466dc11765b10995cf767ce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HUP_OoqVDh5qJN6L-RbwJGcuAHEL6JM7y_KANaAYOGE.jpg?width=640&crop=smart&auto=webp&s=06da3b1d6aaa314b1e6e6b898f33ddaff22a3fda', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HUP_OoqVDh5qJN6L-RbwJGcuAHEL6JM7y_KANaAYOGE.jpg?width=960&crop=smart&auto=webp&s=f507d5c9473614db6115825333cdd363eb30fa2c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HUP_OoqVDh5qJN6L-RbwJGcuAHEL6JM7y_KANaAYOGE.jpg?width=1080&crop=smart&auto=webp&s=4fc2ed42a9d3ac9a00804d2713c001a863e1b764', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HUP_OoqVDh5qJN6L-RbwJGcuAHEL6JM7y_KANaAYOGE.jpg?auto=webp&s=9913202a2aeaf822eab41f0a40f3d5cad3405ddd', 'width': 1200}, 'variants': {}}]}
Deepseek affect on Meta stock movement
1
[removed]
2025-02-01T15:18:16
https://www.reddit.com/r/LocalLLaMA/comments/1if8gv6/deepseek_affect_on_meta_stock_movement/
LowerButterscotch556
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if8gv6
false
null
t3_1if8gv6
/r/LocalLLaMA/comments/1if8gv6/deepseek_affect_on_meta_stock_movement/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ISDGQaK-jWHhPf8v0ls9r3JLoDsgFQvDqojGO2NZUvk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/m_mTtTfhf9kFE97sWe1p-d5gxKEnsNV6VqO9B4HYmn8.jpg?width=108&crop=smart&auto=webp&s=dd6489e28a31ca5f06d3b1b4a8b1548170eae502', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/m_mTtTfhf9kFE97sWe1p-d5gxKEnsNV6VqO9B4HYmn8.jpg?width=216&crop=smart&auto=webp&s=d836d2d6e3012a404ed9fc101749ad46aeccb57c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/m_mTtTfhf9kFE97sWe1p-d5gxKEnsNV6VqO9B4HYmn8.jpg?width=320&crop=smart&auto=webp&s=2d8b162efb5e47cca4e7ba636cd1f7984ef5010a', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/m_mTtTfhf9kFE97sWe1p-d5gxKEnsNV6VqO9B4HYmn8.jpg?width=640&crop=smart&auto=webp&s=8dbccfc6e67d1e57be069dfd5b57d17eea646737', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/m_mTtTfhf9kFE97sWe1p-d5gxKEnsNV6VqO9B4HYmn8.jpg?width=960&crop=smart&auto=webp&s=045932aa07342ef16679cc81a7ce101053bb33ce', 'width': 960}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/m_mTtTfhf9kFE97sWe1p-d5gxKEnsNV6VqO9B4HYmn8.jpg?auto=webp&s=8f1d0b6726ba3e85bfe877670c5b6e4defada3af', 'width': 1024}, 'variants': {}}]}
Ant Deepseek V3 quant for GPU?
1
[removed]
2025-02-01T15:22:02
https://www.reddit.com/r/LocalLLaMA/comments/1if8jtu/ant_deepseek_v3_quant_for_gpu/
Significant_Bike9759
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if8jtu
false
null
t3_1if8jtu
/r/LocalLLaMA/comments/1if8jtu/ant_deepseek_v3_quant_for_gpu/
false
false
self
1
null
How resource usage scales with concurrent users
1
Hello everyone, In these few days after the release of DeepSeek R1, we saw the first people publishing the specs of systems they built to run either the full R1 or a quantized/distilled version that is still very functional. It's great to see that we can run such powerful models on consumer hardware while staying within 10000€ and still get acceptable performance. But what I'm wondering here is whether we can already take this a bit further, or if we aren't there yet. All these setups demonstrated the use of R1 by a single user, but what would happen with a few concurrent users? I think that, at least in principle, it would be possible to load the model into memory only once, but does Ollama (or something similar) support running multiple inferences in parallel on the same copy of the model? If it's necessary to load a full new copy for each user, this would already be a showstopper for this scenario, given the massive memory requirement. Another possible problem that comes to mind is memory or processing bandwidth: maybe it's possible to have multiple concurrent inferences, but this saturates one of the two bandwidths, and the number of tokens per second you get plummets. I have a background in computer engineering, but I'm in the cybersecurity field, so I would like to hear from someone who knows how LLMs work, and maybe also the internals of Ollama, what their thoughts on this are. Thank you everyone for feeding my curiosity.
2025-02-01T15:22:07
https://www.reddit.com/r/LocalLLaMA/comments/1if8jw2/how_resource_usage_scales_with_concurrent_users/
GT95
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1if8jw2
false
null
t3_1if8jw2
/r/LocalLLaMA/comments/1if8jw2/how_resource_usage_scales_with_concurrent_users/
false
false
self
1
null
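For reference, servers such as llama.cpp's server and vLLM do share a single copy of the weights across concurrent requests via batching; in bandwidth-bound decoding the weights are streamed once per forward step regardless of batch size, so per-user speed degrades far more slowly than 1/batch until compute or KV-cache traffic saturates. A back-of-envelope model of that effect (all numbers below are illustrative assumptions, not measurements):

```python
# Back-of-envelope decode throughput for batched inference. Each decode step
# streams the (active) weights once, shared by every sequence in the batch,
# plus each sequence's own KV cache.
def decode_tok_s(bandwidth_gb_s: float, active_params_gb: float,
                 kv_gb_per_seq: float, batch: int) -> float:
    bytes_per_step = active_params_gb + kv_gb_per_seq * batch
    steps_per_s = bandwidth_gb_s / bytes_per_step
    return steps_per_s * batch  # total tokens/s across all users

# Illustrative: 400 GB/s of memory bandwidth, ~37 GB of active weights
# (an R1-style MoE at ~4 bits), 0.5 GB of KV cache per sequence.
single = decode_tok_s(400, 37, 0.5, 1)  # total throughput for one user
eight = decode_tok_s(400, 37, 0.5, 8)   # total throughput for eight users
```

Under these assumptions, eight users get roughly 7x the aggregate throughput of one, with each user still seeing close to single-user speed; the model only breaks down once KV traffic or compute becomes the bottleneck.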