Dataset schema (column name, dtype, observed range from the dataset viewer):

- title: string, length 1 to 300
- score: int64, 0 to 8.54k
- selftext: string, length 0 to 40k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
- url: string, length 0 to 878
- author: string, length 3 to 20
- domain: string, length 0 to 82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
- gilded: int64, 0 to 2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646 to 1.8k
- name: string, length 10
- permalink: string, length 33 to 82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4 to 213
- ups: int64, 0 to 8.54k
- preview: string, length 301 to 5.01k
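A minimal sketch for loading a local export of this dump and sanity-checking the columns above; the file name `localllama_posts.parquet` is a placeholder, not a real artifact.

```python
# Sketch: load a local export of the dump and confirm the schema above.
import pandas as pd

df = pd.read_parquet("localllama_posts.parquet")  # placeholder path
print(df.dtypes)                                  # title, score, selftext, created, ...
print(df["score"].describe())                     # scores observed between 0 and ~8.54k
print(df.sort_values("ups", ascending=False)[["title", "ups"]].head())
```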
Level Up: System Design with Java/Spring Boot & LLMs
1
Hello everyone! I’m diving into system design for Java/Spring Boot applications and have recently started exploring LLMs. I’ve set up a local instance of Llama, utilizing my 2x 3090 GPUs, and I’m currently experimenting with it through a basic prompt interface. I’d love to know how I can effectively leverage my local Llama instance (or other similar LLMs) to assist in the system design process for this project. Additionally, if you’ve used LLMs for system design tasks before, what specific prompts or techniques have you found most helpful? Feel free to share any suggestions or resources, and if necessary, I can provide more details about my setup or objectives. Including a simplified diagram using tools like [draw.io](http://draw.io) or Lucidchart is also an option if that adds clarity. Thanks in advance for your insights—they’ll greatly help my personal growth!
2025-01-23T03:52:15
https://www.reddit.com/r/LocalLLaMA/comments/1i7uprp/level_up_system_design_with_javaspring_boot_llms/
saipavan23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7uprp
false
null
t3_1i7uprp
/r/LocalLLaMA/comments/1i7uprp/level_up_system_design_with_javaspring_boot_llms/
false
false
self
1
null
DeepSeek R1 Thinks for 10 Minutes Before Answering
1
[removed]
2025-01-23T03:57:25
https://www.reddit.com/r/LocalLLaMA/comments/1i7ut7r/deepseek_r1_thinks_for_10_minutes_before_answering/
Formal-Narwhal-1610
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7ut7r
false
null
t3_1i7ut7r
/r/LocalLLaMA/comments/1i7ut7r/deepseek_r1_thinks_for_10_minutes_before_answering/
false
false
self
1
null
Script of a ball bouncing inside a tesseract that I got from Deepseek-r1
3
2025-01-23T04:05:08
https://v.redd.it/w0tobu024oee1
onil_gova
v.redd.it
1970-01-01T00:00:00
0
{}
1i7uydp
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/w0tobu024oee1/DASHPlaylist.mpd?a=1740197122%2CMzNiMDYyMDJiZGE2N2EwZGQxYjBiNjQxYWFiOWYzM2M1MjM0NDZjMzNhODI0OTk5NjQ5YzhlNmRhNTI5ZjMzNA%3D%3D&v=1&f=sd', 'duration': 6, 'fallback_url': 'https://v.redd.it/w0tobu024oee1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/w0tobu024oee1/HLSPlaylist.m3u8?a=1740197122%2COTRlZGRjNjI3OTBjMmE3NDkxZjRhZGUyNzJkMjUxN2RmYzkyNjZkZTU5NjlmMmI1MjEyZDhiMzAxMTQ5ODBkZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w0tobu024oee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1000}}
t3_1i7uydp
/r/LocalLLaMA/comments/1i7uydp/script_of_a_ball_bouncing_inside_a_tesseract_that/
false
false
https://external-preview…c119801e129a789f
3
{'enabled': False, 'images': [{'id': 'NTFla29yMDI0b2VlMfKNapDGT80eoyITjeLVY0RSoJPixyxWEjJruKnV8JxL', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/NTFla29yMDI0b2VlMfKNapDGT80eoyITjeLVY0RSoJPixyxWEjJruKnV8JxL.png?width=108&crop=smart&format=pjpg&auto=webp&s=15c6a4a4ae8eaec1e9dd3a28538b666327fc4184', 'width': 108}, {'height': 155, 'url': 'https://external-preview.redd.it/NTFla29yMDI0b2VlMfKNapDGT80eoyITjeLVY0RSoJPixyxWEjJruKnV8JxL.png?width=216&crop=smart&format=pjpg&auto=webp&s=a58363620e0723386d0459620d624e7a47877786', 'width': 216}, {'height': 230, 'url': 'https://external-preview.redd.it/NTFla29yMDI0b2VlMfKNapDGT80eoyITjeLVY0RSoJPixyxWEjJruKnV8JxL.png?width=320&crop=smart&format=pjpg&auto=webp&s=97f0b3e798e95b42e1ba83f2e8fb171e307609af', 'width': 320}, {'height': 460, 'url': 'https://external-preview.redd.it/NTFla29yMDI0b2VlMfKNapDGT80eoyITjeLVY0RSoJPixyxWEjJruKnV8JxL.png?width=640&crop=smart&format=pjpg&auto=webp&s=10baf2dfbb5684458e0c89ffc5abc337bc212bc3', 'width': 640}, {'height': 690, 'url': 'https://external-preview.redd.it/NTFla29yMDI0b2VlMfKNapDGT80eoyITjeLVY0RSoJPixyxWEjJruKnV8JxL.png?width=960&crop=smart&format=pjpg&auto=webp&s=163030e81765d6b70042dcb58951971328852054', 'width': 960}, {'height': 777, 'url': 'https://external-preview.redd.it/NTFla29yMDI0b2VlMfKNapDGT80eoyITjeLVY0RSoJPixyxWEjJruKnV8JxL.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1188d401783355b66720ea565ec414804b0085e8', 'width': 1080}], 'source': {'height': 1068, 'url': 'https://external-preview.redd.it/NTFla29yMDI0b2VlMfKNapDGT80eoyITjeLVY0RSoJPixyxWEjJruKnV8JxL.png?format=pjpg&auto=webp&s=fa20add19348d64e0f2eccfbfecb287fe7556fb0', 'width': 1484}, 'variants': {}}]}
Say something that proves you are human! How many AI agents are among us? Upvote humans!
0
At Ozeki, we would love to know what percentage of redditors are human.
2025-01-23T04:06:25
https://www.reddit.com/r/LocalLLaMA/comments/1i7uz8s/say_something_that_proves_you_are_human_how_many/
Outrageous-Win-3244
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7uz8s
false
null
t3_1i7uz8s
/r/LocalLLaMA/comments/1i7uz8s/say_something_that_proves_you_are_human_how_many/
false
false
self
0
null
Just wanted to share this thought process from the R1 distilled Qwen 1.5B Q2 that had my wife and I in stitches.
1
[removed]
2025-01-23T04:13:33
https://www.reddit.com/r/LocalLLaMA/comments/1i7v3tv/just_wanted_to_share_this_thought_process_from/
plopperzzz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7v3tv
false
null
t3_1i7v3tv
/r/LocalLLaMA/comments/1i7v3tv/just_wanted_to_share_this_thought_process_from/
false
false
self
1
null
A simple coding test prompt
2
`Create an image shader in GLSL that displays hundreds of stars on a black background. Animate the stars so that they are continuously moving toward the camera, as if the viewer is in a spaceship traveling at high speeds through the galaxy. The shader should implement the mainImage() function and be usable on Shadertoy.` Try this prompt with your favorite model, then paste it into a new shader at [shadertoy](https://www.shadertoy.com/) and hit compile. If it creates an animated starfield without error, count it as a success. I got a working answer from R1 about half the time, with just 34 lines of code. (You should try it in R1 just to see the chain of reasoning, it's amazing.) A lot of local models tend to fail including the 32B R1 distill and Phi-4.
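A minimal sketch, assuming a local OpenAI-compatible server (llama.cpp server, vLLM, Ollama's compatibility endpoint, etc.), for running this prompt programmatically; the base_url and model name are placeholders, and compiling the result on Shadertoy is still a manual step.

```python
# Sketch: send the shader-test prompt to a local OpenAI-compatible server and save the
# reply for manual pasting into Shadertoy. base_url and model name are placeholders.
from openai import OpenAI

PROMPT = (
    "Create an image shader in GLSL that displays hundreds of stars on a black background. "
    "Animate the stars so that they are continuously moving toward the camera, as if the "
    "viewer is in a spaceship traveling at high speeds through the galaxy. The shader should "
    "implement the mainImage() function and be usable on Shadertoy."
)

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
reply = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

with open("shader_reply.txt", "w") as f:
    f.write(reply)
# Copy the GLSL block from shader_reply.txt into a new Shadertoy shader and hit compile.
```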
2025-01-23T04:27:51
https://www.reddit.com/r/LocalLLaMA/comments/1i7vczt/a_simple_coding_test_prompt/
VegaKH
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7vczt
false
null
t3_1i7vczt
/r/LocalLLaMA/comments/1i7vczt/a_simple_coding_test_prompt/
false
false
self
2
{'enabled': False, 'images': [{'id': 'a72HQAEu4PTQrZjBx5KpJrKkkaUIvknsx8aBnEVNnro', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/JD8FRyjrjdpg3AyjRy2boRoBRXy8O6xbXf7VFHLbDPc.jpg?width=108&crop=smart&auto=webp&s=d1d1faa2c8d72398a611252570f3ac59b60adc06', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/JD8FRyjrjdpg3AyjRy2boRoBRXy8O6xbXf7VFHLbDPc.jpg?width=216&crop=smart&auto=webp&s=b3def230e7d2ea491ef519e08ac51330050225b9', 'width': 216}], 'source': {'height': 256, 'url': 'https://external-preview.redd.it/JD8FRyjrjdpg3AyjRy2boRoBRXy8O6xbXf7VFHLbDPc.jpg?auto=webp&s=b96e36275cf08cd60c6ac56eabd3032ffb3f0db1', 'width': 256}, 'variants': {}}]}
Anyone run an LLM on their mobile smartphone?
3
I have an iPhone 16 Pro Max. I’m sure this device is powerful enough.
2025-01-23T04:44:08
https://www.reddit.com/r/LocalLLaMA/comments/1i7vn3d/anyone_run_a_llm_on_their_mobile_smartphone/
Condomphobic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7vn3d
false
null
t3_1i7vn3d
/r/LocalLLaMA/comments/1i7vn3d/anyone_run_a_llm_on_their_mobile_smartphone/
false
false
self
3
null
Created a stock market research tool with an LLM (Requesting Feedback)
1
2025-01-23T04:50:43
https://v.redd.it/vxfb7nq3coee1
Medical-Mistake3128
v.redd.it
1970-01-01T00:00:00
0
{}
1i7vqzj
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/vxfb7nq3coee1/DASHPlaylist.mpd?a=1740199861%2CNzZjMTFhYjE2NjEzODkyZmU4ZTkxMDY4YjZmZWZlMjYzNGQwZjRiZDkzYzFlMDYwMWM2NmQ0ZjA1NzBiZjFkOA%3D%3D&v=1&f=sd', 'duration': 52, 'fallback_url': 'https://v.redd.it/vxfb7nq3coee1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 650, 'hls_url': 'https://v.redd.it/vxfb7nq3coee1/HLSPlaylist.m3u8?a=1740199861%2CN2NkNmIyOGYwMTM5Mjk4NWNkMGIwNGU3YjY4NDBjZjUzNmUxZDU1MzE3Y2ZlNGYzNDFmZTk1M2VhYWZjZWFiZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vxfb7nq3coee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1i7vqzj
/r/LocalLLaMA/comments/1i7vqzj/created_a_stock_market_research_tool_with_llm/
false
false
https://external-preview…a4534eb1921dd50a
1
{'enabled': False, 'images': [{'id': 'N2xvbW1tcTNjb2VlMTLbzd9mYZ2HGPtpGHxc3cdg18zeUxucPWyA1xWO87Cu', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N2xvbW1tcTNjb2VlMTLbzd9mYZ2HGPtpGHxc3cdg18zeUxucPWyA1xWO87Cu.png?width=108&crop=smart&format=pjpg&auto=webp&s=8b93dacc8c787203df3cb7b869df6dfd7009793f', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/N2xvbW1tcTNjb2VlMTLbzd9mYZ2HGPtpGHxc3cdg18zeUxucPWyA1xWO87Cu.png?width=216&crop=smart&format=pjpg&auto=webp&s=06d0bf5380c023834f6dfc0a5ff4215953c8f495', 'width': 216}, {'height': 162, 'url': 'https://external-preview.redd.it/N2xvbW1tcTNjb2VlMTLbzd9mYZ2HGPtpGHxc3cdg18zeUxucPWyA1xWO87Cu.png?width=320&crop=smart&format=pjpg&auto=webp&s=59278ee370f5ed111f57c355004db50097e41e6a', 'width': 320}, {'height': 324, 'url': 'https://external-preview.redd.it/N2xvbW1tcTNjb2VlMTLbzd9mYZ2HGPtpGHxc3cdg18zeUxucPWyA1xWO87Cu.png?width=640&crop=smart&format=pjpg&auto=webp&s=2b938ada4f6a414bb1bf8392b030f2f8f4c31b80', 'width': 640}, {'height': 487, 'url': 'https://external-preview.redd.it/N2xvbW1tcTNjb2VlMTLbzd9mYZ2HGPtpGHxc3cdg18zeUxucPWyA1xWO87Cu.png?width=960&crop=smart&format=pjpg&auto=webp&s=bb19ea567caed7f62996d227d940d6a8e0ba1289', 'width': 960}, {'height': 547, 'url': 'https://external-preview.redd.it/N2xvbW1tcTNjb2VlMTLbzd9mYZ2HGPtpGHxc3cdg18zeUxucPWyA1xWO87Cu.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c35c8fc2b927530c88668b61b7099654fbc8e701', 'width': 1080}], 'source': {'height': 974, 'url': 'https://external-preview.redd.it/N2xvbW1tcTNjb2VlMTLbzd9mYZ2HGPtpGHxc3cdg18zeUxucPWyA1xWO87Cu.png?format=pjpg&auto=webp&s=578486e1029dc317135b947b606c82b9d01789d4', 'width': 1920}, 'variants': {}}]}
Introducing Codebase2Prompt: Streamline Your Codebase Summaries for LLMs – Contributors Welcome!
2
Hey guys, I'm excited to share [**Codebase2Prompt**](https://pypi.org/project/codebase2prompt/), a minimal command-line tool designed to help developers give large language models (LLMs) high-level project context from codebases without the hassle of breaking the context window. **What It Does:** * **Condenses Project Architecture:** Provides a concise overview of your project's structure, making it easier for both humans and AI tools to understand. This only shows the method names, not the internal contents (e.g. just the name of the function/class, etc.). It's assumed that most developers, and even LLMs, name functions such that they can be easily understood by the name alone. Therefore, when you need to stop the LLM from drifting on a huge project, this is a good solution. * **Optimized for AI Integration:** Generates summaries that are particularly useful for feeding into LLMs, enhancing their ability to work with your codebase effectively. You can find the project on GitHub here: [Codebase2Prompt Repository](https://github.com/epicshardz/codebase2prompt) **Looking for Contributors:** This project is available via pip but still in its early stages, and I'm looking for contributors to help expand its capabilities. Whether you're interested in adding new features, improving existing ones, or providing feedback, your contributions would be greatly appreciated. Feel free to check out the repository and see if there's anything you'd like to work on. Let's collaborate to make codebase understanding more accessible for everyone! Just do pip install codebase2prompt, then run c2p in your terminal. Use --exclude to remove unwanted directories, e.g. --exclude venv. Thanks, can't wait to hear some feedback and get some contributors!
2025-01-23T04:54:34
https://www.reddit.com/r/LocalLLaMA/comments/1i7vt9b/introducing_codebase2prompt_streamline_your/
redlikeazebra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7vt9b
false
null
t3_1i7vt9b
/r/LocalLLaMA/comments/1i7vt9b/introducing_codebase2prompt_streamline_your/
false
false
self
2
null
Llama lacks cultural understanding
1
I've noticed that when translating, Llama outputs tend to lose or misinterpret cultural nuances, while GPT models do not have this issue. Has anyone had the same experience or figured out a solution?
2025-01-23T05:03:16
https://www.reddit.com/r/LocalLLaMA/comments/1i7vysk/llama_lacks_cultural_understanding/
CS-fan-101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7vysk
false
null
t3_1i7vysk
/r/LocalLLaMA/comments/1i7vysk/llama_lacks_cultural_understanding/
false
false
self
1
null
Jan now supports DeepSeek R1 distills - model loading error fixed
46
2025-01-23T05:14:01
https://x.com/jandotai/status/1882282461041967605
Kooky-Somewhere-2883
x.com
1970-01-01T00:00:00
0
{}
1i7w583
false
null
t3_1i7w583
/r/LocalLLaMA/comments/1i7w583/jan_now_supports_deepseek_r1_distills_model/
false
false
https://a.thumbs.redditm…VDzYECVAq658.jpg
46
{'enabled': False, 'images': [{'id': 'X7te3l-ptKKiSzgqJowmN7Aw4ZDhBF_ylHcnrO-BmNw', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/292fRQ6ybDOj_r18_tYuGBSwlG2q4koHTYhO2NFfzsM.jpg?width=108&crop=smart&auto=webp&s=fccae688dbd1a41733f50a6c0d170c5bf35bd864', 'width': 108}, {'height': 175, 'url': 'https://external-preview.redd.it/292fRQ6ybDOj_r18_tYuGBSwlG2q4koHTYhO2NFfzsM.jpg?width=216&crop=smart&auto=webp&s=e41d5dbfa13467f11312114fb4e800cb92f484f1', 'width': 216}, {'height': 259, 'url': 'https://external-preview.redd.it/292fRQ6ybDOj_r18_tYuGBSwlG2q4koHTYhO2NFfzsM.jpg?width=320&crop=smart&auto=webp&s=6324d122cc871cd8f364055c954dadeb1b131a4a', 'width': 320}, {'height': 518, 'url': 'https://external-preview.redd.it/292fRQ6ybDOj_r18_tYuGBSwlG2q4koHTYhO2NFfzsM.jpg?width=640&crop=smart&auto=webp&s=1d01f7aa3e4b8157f9ad15919e2a4665a8fd758d', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/292fRQ6ybDOj_r18_tYuGBSwlG2q4koHTYhO2NFfzsM.jpg?auto=webp&s=1faa64ddcd6c7934cd4399ed25b654d4a3b061e3', 'width': 888}, 'variants': {}}]}
ByteDance dropping an Apache 2.0 licensed 2B, 7B & 72B "reasoning" agent for computer use
634
2025-01-23T05:27:26
https://v.redd.it/ealby85nioee1
Lynncc6
/r/LocalLLaMA/comments/1i7wcry/bytedance_dropping_an_apache_20_licensed_2b_7b/
1970-01-01T00:00:00
0
{}
1i7wcry
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ealby85nioee1/DASHPlaylist.mpd?a=1740331652%2CNzExZDdlZWQyYTNjZDZiZjA0NzJiZjc0Y2M4ZTMxNzhlMzI4OWE4YTNlZmVlMTU1M2U3NDg4N2E2NWVhYzdmOA%3D%3D&v=1&f=sd', 'duration': 236, 'fallback_url': 'https://v.redd.it/ealby85nioee1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/ealby85nioee1/HLSPlaylist.m3u8?a=1740331652%2CZjViNzhmYzk1YjczNjFkNmMxZWZkYjdiMjVkMjI3YTRhNjhjNmE5NjIxMjhlMDA1MzJhZGFlNjc2N2U3YzkyMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ealby85nioee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1i7wcry
/r/LocalLLaMA/comments/1i7wcry/bytedance_dropping_an_apache_20_licensed_2b_7b/
false
false
https://external-preview…6f899ee39dd11286
634
{'enabled': False, 'images': [{'id': 'OHh5dzk5NW5pb2VlMQuFHIyWgdnmfbsw0_j6sClcjT-Ye_u4ggCKOXoWw4Cm', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OHh5dzk5NW5pb2VlMQuFHIyWgdnmfbsw0_j6sClcjT-Ye_u4ggCKOXoWw4Cm.png?width=108&crop=smart&format=pjpg&auto=webp&s=ea82f012a901f91b23382f83fb7621023fe08876', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OHh5dzk5NW5pb2VlMQuFHIyWgdnmfbsw0_j6sClcjT-Ye_u4ggCKOXoWw4Cm.png?width=216&crop=smart&format=pjpg&auto=webp&s=0d0bd308934b04ad5da11ce893de844a490f498a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OHh5dzk5NW5pb2VlMQuFHIyWgdnmfbsw0_j6sClcjT-Ye_u4ggCKOXoWw4Cm.png?width=320&crop=smart&format=pjpg&auto=webp&s=6174d70120c0b78f35dac80b2379b0b627aced23', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OHh5dzk5NW5pb2VlMQuFHIyWgdnmfbsw0_j6sClcjT-Ye_u4ggCKOXoWw4Cm.png?width=640&crop=smart&format=pjpg&auto=webp&s=d48e7f2cfc37c94c0ca5757075ee3df595f94554', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OHh5dzk5NW5pb2VlMQuFHIyWgdnmfbsw0_j6sClcjT-Ye_u4ggCKOXoWw4Cm.png?width=960&crop=smart&format=pjpg&auto=webp&s=1786530bdd2020cb545e880c997fdbbe03fba179', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OHh5dzk5NW5pb2VlMQuFHIyWgdnmfbsw0_j6sClcjT-Ye_u4ggCKOXoWw4Cm.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1bb84a738b0a95d3bbda5e5d29577f415f999245', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OHh5dzk5NW5pb2VlMQuFHIyWgdnmfbsw0_j6sClcjT-Ye_u4ggCKOXoWw4Cm.png?format=pjpg&auto=webp&s=c241f3d552a7562d0f9717ffec7f0276075ff830', 'width': 1280}, 'variants': {}}]}
UI-TARS: Pioneering Automated GUI Interaction with Native Agents
6
2025-01-23T06:05:08
https://github.com/bytedance/UI-TARS
AaronFeng47
github.com
1970-01-01T00:00:00
0
{}
1i7wxmu
false
null
t3_1i7wxmu
/r/LocalLLaMA/comments/1i7wxmu/uitars_pioneering_automated_gui_interaction_with/
false
false
https://b.thumbs.redditm…f5SR1w6L80Cc.jpg
6
{'enabled': False, 'images': [{'id': 'a7Iwil_xP8FRE--ZAT_TcomvnbN5GiQAZaz5iaQvPaY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z6zBqpTY6R658MuCzX3JwMa8h_D5AELYfjFsR9eI4KY.jpg?width=108&crop=smart&auto=webp&s=80f2fcce36a53f06ac85b26e696fe37cd676d416', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z6zBqpTY6R658MuCzX3JwMa8h_D5AELYfjFsR9eI4KY.jpg?width=216&crop=smart&auto=webp&s=b0b487cb5b4ffe2084508e9346ea59c8beace7c1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z6zBqpTY6R658MuCzX3JwMa8h_D5AELYfjFsR9eI4KY.jpg?width=320&crop=smart&auto=webp&s=5ba87608c16c98910ce4eb3a10ac589c80d65ded', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z6zBqpTY6R658MuCzX3JwMa8h_D5AELYfjFsR9eI4KY.jpg?width=640&crop=smart&auto=webp&s=e81374702b5de50bb7c16f6c4f45ad7a4b5ea439', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z6zBqpTY6R658MuCzX3JwMa8h_D5AELYfjFsR9eI4KY.jpg?width=960&crop=smart&auto=webp&s=7c971fde86dfe0a800c058e49d9d20e1d793d4e4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z6zBqpTY6R658MuCzX3JwMa8h_D5AELYfjFsR9eI4KY.jpg?width=1080&crop=smart&auto=webp&s=8f22c229f9dc51bb60b8f118bcbc503ef562a828', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z6zBqpTY6R658MuCzX3JwMa8h_D5AELYfjFsR9eI4KY.jpg?auto=webp&s=5f73ceaf98747da4d8d0732a4af220f9d6e48b91', 'width': 1200}, 'variants': {}}]}
"Creative writing"
20
I often see people talking about the creative writing skills of a model. So I am curious to learn what kind of use-cases you are all working on. Are you all copywriters that use this for work, or perhaps translators? Or perhaps fanfic as a hobby? Or is it - you know - for erotica reasons? This may sound like a meme post but I'm actually curious since it pops up so often, so I'm happy to learn how "creative" people are using it so much.
2025-01-23T06:18:30
https://www.reddit.com/r/LocalLLaMA/comments/1i7x4va/creative_writing/
MountainGoatAOE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7x4va
false
null
t3_1i7x4va
/r/LocalLLaMA/comments/1i7x4va/creative_writing/
false
false
nsfw
20
null
The first performant open-source byte-level model without tokenization has been released. EvaByte is a 6.5B param model that also has multibyte prediction for faster inference (vs similar sized tokenized models)
300
2025-01-23T06:19:53
https://i.redd.it/o28q2pl6roee1.png
jd_3d
i.redd.it
1970-01-01T00:00:00
0
{}
1i7x5nd
false
null
t3_1i7x5nd
/r/LocalLLaMA/comments/1i7x5nd/the_first_performant_opensource_bytelevel_model/
false
false
https://b.thumbs.redditm…RE81znGyYxKY.jpg
300
{'enabled': True, 'images': [{'id': 'bJze2NXtkTxrqiH1vfeZcDap_i1EvoCbPF_OjBq6hqc', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/o28q2pl6roee1.png?width=108&crop=smart&auto=webp&s=3660e25e27ae830cd7d437b1afdd0d5e8c26b578', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/o28q2pl6roee1.png?width=216&crop=smart&auto=webp&s=90d22db69a93657ed656d675b996a76c87a0ee33', 'width': 216}, {'height': 216, 'url': 'https://preview.redd.it/o28q2pl6roee1.png?width=320&crop=smart&auto=webp&s=96cd462cbd83ff3f7d051dd6d2f464ce57e39ce6', 'width': 320}, {'height': 433, 'url': 'https://preview.redd.it/o28q2pl6roee1.png?width=640&crop=smart&auto=webp&s=fbcc9bcc3ec8f998f0c0b948df4117b2f7a6ae30', 'width': 640}], 'source': {'height': 607, 'url': 'https://preview.redd.it/o28q2pl6roee1.png?auto=webp&s=a2ae031e01fb23b28d128fd4fec1c665b2bb7cf7', 'width': 897}, 'variants': {}}]}
Deepseek R1
1
[removed]
2025-01-23T06:58:06
https://www.reddit.com/r/LocalLLaMA/comments/1i7xp9z/deepseek_r1/
necro_gay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7xp9z
false
null
t3_1i7xp9z
/r/LocalLLaMA/comments/1i7xp9z/deepseek_r1/
false
false
self
1
null
DeepSeek R1 and openrouter don't mix well... Help?
1
[removed]
2025-01-23T07:46:16
https://www.reddit.com/r/LocalLLaMA/comments/1i7yc7o/deepseek_r1_and_openrouter_dont_mix_well_help/
CuriosityEntertains
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7yc7o
false
null
t3_1i7yc7o
/r/LocalLLaMA/comments/1i7yc7o/deepseek_r1_and_openrouter_dont_mix_well_help/
false
false
self
1
null
DeepSeek V3 params
0
Just one quick question: I have the full model downloaded (~600 GB). What is the model's way of choosing the params? Is it based on the GPUs I give it? Or caching? And if the machine restarts, will the cache be gone?
2025-01-23T08:04:23
https://www.reddit.com/r/LocalLLaMA/comments/1i7ykr2/deepseek_v3_params/
No-Emphasis6569
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7ykr2
false
null
t3_1i7ykr2
/r/LocalLLaMA/comments/1i7ykr2/deepseek_v3_params/
false
false
self
0
null
Gottcha!
1
https://preview.redd.it/…6877e4f96a2634
2025-01-23T08:23:01
https://www.reddit.com/r/LocalLLaMA/comments/1i7yth4/gottcha/
muxxington
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7yth4
false
null
t3_1i7yth4
/r/LocalLLaMA/comments/1i7yth4/gottcha/
false
false
https://b.thumbs.redditm…oxkQGDxzlGIU.jpg
1
null
Gottcha!
0
2025-01-23T08:23:56
https://i.redd.it/bviswyx7epee1.png
muxxington
i.redd.it
1970-01-01T00:00:00
0
{}
1i7ytw4
false
null
t3_1i7ytw4
/r/LocalLLaMA/comments/1i7ytw4/gottcha/
false
false
https://b.thumbs.redditm…Y2qWeovofgSI.jpg
0
{'enabled': True, 'images': [{'id': 'KCIKdPiFggjMcDO8e6nXW7QXzXlPGta48IUmOUiqZwc', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/bviswyx7epee1.png?width=108&crop=smart&auto=webp&s=bcd289a4aaeef35b20b4149abe936d320e22b02c', 'width': 108}, {'height': 106, 'url': 'https://preview.redd.it/bviswyx7epee1.png?width=216&crop=smart&auto=webp&s=e52a5b1a9ffb7c0ccfc279dcd7bbdf687cb7dbc4', 'width': 216}, {'height': 157, 'url': 'https://preview.redd.it/bviswyx7epee1.png?width=320&crop=smart&auto=webp&s=bb49415d7e499875e1e07f0a1814b49d15720b2e', 'width': 320}, {'height': 314, 'url': 'https://preview.redd.it/bviswyx7epee1.png?width=640&crop=smart&auto=webp&s=fc51c1dac9c2e803876c9caa99e44e22c9dda5a0', 'width': 640}, {'height': 471, 'url': 'https://preview.redd.it/bviswyx7epee1.png?width=960&crop=smart&auto=webp&s=1df5e729c07e720570b24e18bdaabffc30daa673', 'width': 960}], 'source': {'height': 498, 'url': 'https://preview.redd.it/bviswyx7epee1.png?auto=webp&s=6714e9df3a62c941a4eb7f2c557d6b26a6b95a4c', 'width': 1014}, 'variants': {}}]}
Anyone know why providers are charging so much more for Deepseek R1 than V3?
1
2025-01-23T08:36:07
https://openrouter.ai/deepseek/deepseek-r1
Saffron4609
openrouter.ai
1970-01-01T00:00:00
0
{}
1i7yzlz
false
null
t3_1i7yzlz
/r/LocalLLaMA/comments/1i7yzlz/anyone_know_why_providers_are_charging_so_much/
false
false
https://b.thumbs.redditm…c0n31oNQpJuQ.jpg
1
{'enabled': False, 'images': [{'id': 'Zio-186O4G6KJGuCFUAjtmAqFT67XUSFKdpFhfrgGIQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GUNAcP2mNPD2DJzzwZxrG7ClQI8kn0S0N1wENhZtVrQ.jpg?width=108&crop=smart&auto=webp&s=847d209edff56b1213aca963e6bb010775c9ccf9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GUNAcP2mNPD2DJzzwZxrG7ClQI8kn0S0N1wENhZtVrQ.jpg?width=216&crop=smart&auto=webp&s=404b39ce194b307ae87c03ab542f7b4c164d6d28', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GUNAcP2mNPD2DJzzwZxrG7ClQI8kn0S0N1wENhZtVrQ.jpg?width=320&crop=smart&auto=webp&s=35a7bd80624b132a3849670e9400f8268a631486', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GUNAcP2mNPD2DJzzwZxrG7ClQI8kn0S0N1wENhZtVrQ.jpg?width=640&crop=smart&auto=webp&s=d8079461891d07c8176489efda19301125f15d5d', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/GUNAcP2mNPD2DJzzwZxrG7ClQI8kn0S0N1wENhZtVrQ.jpg?width=960&crop=smart&auto=webp&s=eb227928ad7356084701d43dbdaa592693c0662f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/GUNAcP2mNPD2DJzzwZxrG7ClQI8kn0S0N1wENhZtVrQ.jpg?width=1080&crop=smart&auto=webp&s=941ff794a1ef4e6ba73a85ed3270331676f9c197', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/GUNAcP2mNPD2DJzzwZxrG7ClQI8kn0S0N1wENhZtVrQ.jpg?auto=webp&s=25744641bd43fbce30302c41ad870b9e07efce87', 'width': 1200}, 'variants': {}}]}
UltraEval-Audio -- the world's first open-source framework that simultaneously supports both speech understanding and speech generation evaluation
7
https://preview.redd.it/…nbmb/AudioArena)
2025-01-23T08:38:43
https://www.reddit.com/r/LocalLLaMA/comments/1i7z0uc/ultraevalaudio_the_worlds_first_opensource/
Lynncc6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7z0uc
false
null
t3_1i7z0uc
/r/LocalLLaMA/comments/1i7z0uc/ultraevalaudio_the_worlds_first_opensource/
false
false
https://b.thumbs.redditm…6xYBElLxREYQ.jpg
7
{'enabled': False, 'images': [{'id': 'r0LC-nWsKzNEfw72QUXQhHe0CAYmqS8vIVhjLlPyKMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B045vPRrLJlCXbTCxeB0s4iVhFCqDA7ULPJqfqYVsh8.jpg?width=108&crop=smart&auto=webp&s=4c116712c53987cbff05f6b528d885b6e0986986', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/B045vPRrLJlCXbTCxeB0s4iVhFCqDA7ULPJqfqYVsh8.jpg?width=216&crop=smart&auto=webp&s=a225c5a2b09bc8f90cc84ae8d847d22e2bb8309d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/B045vPRrLJlCXbTCxeB0s4iVhFCqDA7ULPJqfqYVsh8.jpg?width=320&crop=smart&auto=webp&s=299f88e607af440cb3ad56a45e45b86d9706757e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/B045vPRrLJlCXbTCxeB0s4iVhFCqDA7ULPJqfqYVsh8.jpg?width=640&crop=smart&auto=webp&s=4c5b4759c6ff9719170374bde526128ee2791d6e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/B045vPRrLJlCXbTCxeB0s4iVhFCqDA7ULPJqfqYVsh8.jpg?width=960&crop=smart&auto=webp&s=b644ae87117a762a11ad6b554ef3db6a793744f4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/B045vPRrLJlCXbTCxeB0s4iVhFCqDA7ULPJqfqYVsh8.jpg?width=1080&crop=smart&auto=webp&s=147bee35b3765ce16fb03ded9a606b97cf27b2b3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/B045vPRrLJlCXbTCxeB0s4iVhFCqDA7ULPJqfqYVsh8.jpg?auto=webp&s=c480f167c2e2a09406cbec477eccf511c3702e61', 'width': 1200}, 'variants': {}}]}
LLM game jam!
7
We are doing an LLM game jam. [https://itch.io/jam/nobodywhojam](https://itch.io/jam/nobodywhojam) We have recently released Nobodywho 4.4, which now includes sampler configuration to get more fine-grained control over which token is chosen next when using an LLM. This builds on top of several improvements and bug fixes, as well as built-in documentation, over the last month. With all of these improvements, and the stability of our plugin increasing rapidly, we wanted to see what the community can create with it. Therefore we are sponsoring a game jam on the 7th of February, spanning that whole weekend. Hope to see you guys there! 🫥
2025-01-23T08:42:19
https://www.reddit.com/r/LocalLLaMA/comments/1i7z2g8/llm_game_jam/
No_Abbreviations_532
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7z2g8
false
null
t3_1i7z2g8
/r/LocalLLaMA/comments/1i7z2g8/llm_game_jam/
false
false
self
7
null
Getting a 64GB M4 Pro mini. What to use for code generation?
1
I'm doing something a little creative/researchy here, so although the project isn't yet large, there will be a lot of explaining new ideas at some length. Would the smallish context window of DeepSeek R1 be an issue? Will 70B be usable? Is something else likely to be better?
2025-01-23T08:42:51
https://www.reddit.com/r/LocalLLaMA/comments/1i7z2ph/getting_a_64mb_m4_pro_mini_what_to_use_for_code/
gisborne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7z2ph
false
null
t3_1i7z2ph
/r/LocalLLaMA/comments/1i7z2ph/getting_a_64mb_m4_pro_mini_what_to_use_for_code/
false
false
self
1
null
How does DeepSeek compare to other LLMs with creative writing and non "deep-thinking" tasks?
1
I use 4o a lot for creative stuff so interested to hear how DeepSeek compares.
2025-01-23T08:54:51
https://www.reddit.com/r/LocalLLaMA/comments/1i7z863/how_does_deepseek_compare_to_other_llms_with/
MrHatesThisWebsite
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7z863
false
null
t3_1i7z863
/r/LocalLLaMA/comments/1i7z863/how_does_deepseek_compare_to_other_llms_with/
false
false
self
1
null
Local LLM to analyse data
1
Hello. I am new here and new to LLMs as well. We work with non-profits. I want to know if you could guide me on whether we can/should set up an open-source LLM. Our use case is that we have a data warehouse; given its schema, can the LLM answer questions based on the data in the warehouse? And can it create charts based on that data? I'm asking 1) what is the best open-source LLM for this use case, and 2) is there any guide (documentation or YouTube) to set up, maintain, and use the LLM? Apologies in advance if this doesn't align with the guidelines.
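A rough sketch of the schema-grounded question-answering pattern this post describes, using a local model served by Ollama; the endpoint, model name, and table DDL are placeholders, and the generated SQL should be reviewed before it is run against the warehouse.

```python
# Sketch: ask a local model (served by Ollama) to turn a question into SQL for a
# given warehouse schema. Model name, endpoint, and DDL are placeholders.
import requests

SCHEMA = """CREATE TABLE donors (donor_id INT, name TEXT, city TEXT);
CREATE TABLE donations (donor_id INT, amount NUMERIC, donated_at DATE);"""

question = "What were the total donations per city in 2024?"
prompt = (
    "You translate questions into SQL for this schema:\n"
    f"{SCHEMA}\n"
    f"Question: {question}\n"
    "Return only the SQL."
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen2.5-coder:7b", "prompt": prompt, "stream": False},
    timeout=120,
)
print(resp.json()["response"])  # review the SQL before running it on the warehouse
```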
2025-01-23T09:04:24
https://www.reddit.com/r/LocalLLaMA/comments/1i7zcne/local_llm_to_analyse_data/
bluebottleyellowbox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7zcne
false
null
t3_1i7zcne
/r/LocalLLaMA/comments/1i7zcne/local_llm_to_analyse_data/
false
false
self
1
null
Extremely long output tokens?
1
[removed]
2025-01-23T09:05:34
https://www.reddit.com/r/LocalLLaMA/comments/1i7zd62/extremely_long_output_tokens/
fuzzysingularity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7zd62
false
null
t3_1i7zd62
/r/LocalLLaMA/comments/1i7zd62/extremely_long_output_tokens/
false
false
self
1
null
Nvidia just open sourced their long context goodies - 128k context for 50% less memory
1
If you need long context for RAG, tool use, agents, or just because, Nvidia released a new library to make it super simple. TLDR: You can get 128k context at 50% less memory 🐳 Here's a blog post on everything: [https://huggingface.co/blog/nvidia/kvpress](https://huggingface.co/blog/nvidia/kvpress)
2025-01-23T09:07:20
https://www.reddit.com/r/LocalLLaMA/comments/1i7zdxj/nivida_just_open_sourced_their_long_context/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7zdxj
false
null
t3_1i7zdxj
/r/LocalLLaMA/comments/1i7zdxj/nivida_just_open_sourced_their_long_context/
false
false
self
1
{'enabled': False, 'images': [{'id': 'm3pVuCOFC6JFWjOGI-t5BIond6xNVHWWv_lM22ZLbVE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?width=108&crop=smart&auto=webp&s=15a2401262ad6e2d209ef26c1f25f33afc0d0428', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?width=216&crop=smart&auto=webp&s=d1c03d427ef5ff435ea0dd27e8ee660d43a6327b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?width=320&crop=smart&auto=webp&s=4cda8f03a371eb1424f8444bc47a1cdf8ade920b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?width=640&crop=smart&auto=webp&s=fc4326ffb98dc14e566978f88ef739a5ca672b5d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?width=960&crop=smart&auto=webp&s=72fce8a6f75e7fb501022390a378f0dc03e15ba9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?width=1080&crop=smart&auto=webp&s=d558f41ed1536034192bcdc92aec1c41ed5b5023', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?auto=webp&s=4c2ef6c995bb1e2bedba51f527acad10492da5c9', 'width': 1200}, 'variants': {}}]}
Nvidia just open sourced their long context goodies - 128k context for 50% less memory
224
If you need long context for RAG, tool use, agents, or just because, Nvidia released a new library to make it super simple. TLDR: You can get 128k context at 50% less memory Here's a blog post on everything: [https://huggingface.co/blog/nvidia/kvpress](https://huggingface.co/blog/nvidia/kvpress)
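For a sense of scale, here is a back-of-the-envelope estimate of what "128k context for 50% less memory" means in bytes; this is just the arithmetic, not the kvpress API, and the model shape below is an assumption roughly matching an 8B Llama-style model with grouped-query attention.

```python
# Rough KV-cache size estimate for a 128k-token context (assumed 8B-class model shape).
layers, kv_heads, head_dim = 32, 8, 128   # assumed model configuration
bytes_per_elem = 2                        # fp16
tokens = 128_000

per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem   # K and V caches
full_gib = per_token * tokens / 2**30
print(f"full KV cache:        {full_gib:.1f} GiB")              # ~15.6 GiB
print(f"with 50% compression: {full_gib * 0.5:.1f} GiB")
```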
2025-01-23T09:09:11
https://www.reddit.com/r/LocalLLaMA/comments/1i7zeqz/nivida_just_open_sourced_their_long_context/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7zeqz
false
null
t3_1i7zeqz
/r/LocalLLaMA/comments/1i7zeqz/nivida_just_open_sourced_their_long_context/
false
false
self
224
{'enabled': False, 'images': [{'id': 'm3pVuCOFC6JFWjOGI-t5BIond6xNVHWWv_lM22ZLbVE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?width=108&crop=smart&auto=webp&s=15a2401262ad6e2d209ef26c1f25f33afc0d0428', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?width=216&crop=smart&auto=webp&s=d1c03d427ef5ff435ea0dd27e8ee660d43a6327b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?width=320&crop=smart&auto=webp&s=4cda8f03a371eb1424f8444bc47a1cdf8ade920b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?width=640&crop=smart&auto=webp&s=fc4326ffb98dc14e566978f88ef739a5ca672b5d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?width=960&crop=smart&auto=webp&s=72fce8a6f75e7fb501022390a378f0dc03e15ba9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?width=1080&crop=smart&auto=webp&s=d558f41ed1536034192bcdc92aec1c41ed5b5023', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1EGXfe3njoAdv8-vu6QoYUINf4_5M6RC7RjdxXu-fzQ.jpg?auto=webp&s=4c2ef6c995bb1e2bedba51f527acad10492da5c9', 'width': 1200}, 'variants': {}}]}
Video Call with Lily (Duolingo)
1
[removed]
2025-01-23T09:13:16
https://www.reddit.com/r/LocalLLaMA/comments/1i7zgja/video_call_with_lily_duolingo/
icm76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7zgja
false
null
t3_1i7zgja
/r/LocalLLaMA/comments/1i7zgja/video_call_with_lily_duolingo/
false
false
self
1
{'enabled': False, 'images': [{'id': '2vyI-r5cqw1_qIgkIAwpqb96cHvDdy_vZn40p5qlo9g', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/TcjHF8jaKGNMXvN9dfXxiu0qMdFEjSrWwttRlBk4scU.jpg?width=108&crop=smart&auto=webp&s=df909d7686fe3d0f275073fdcf73a9fb91aed378', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/TcjHF8jaKGNMXvN9dfXxiu0qMdFEjSrWwttRlBk4scU.jpg?width=216&crop=smart&auto=webp&s=4b9a509d7080510d2769797c92fcfdca5d645189', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/TcjHF8jaKGNMXvN9dfXxiu0qMdFEjSrWwttRlBk4scU.jpg?width=320&crop=smart&auto=webp&s=ea63e6db179b266261e252209c5348503ca6f9c8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/TcjHF8jaKGNMXvN9dfXxiu0qMdFEjSrWwttRlBk4scU.jpg?auto=webp&s=ab5fc739724e1a250b66175dbc8c693add3d96ef', 'width': 480}, 'variants': {}}]}
Reasoning Language Models: A Blueprint
1
[removed]
2025-01-23T09:17:04
https://www.reddit.com/r/LocalLLaMA/comments/1i7zi74/reasoning_language_models_a_blueprint/
Ok-Formal598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7zi74
false
null
t3_1i7zi74
/r/LocalLLaMA/comments/1i7zi74/reasoning_language_models_a_blueprint/
false
false
self
1
null
Is anyone else obsessed with watching R1's thinking output?
30
Is anyone else obsessed with watching the thinking output? Today, I tried the 1.5B Q5 distil model to see how it did with a simple question: >state density of gold and from this calculate the length of a cube of gold with 1kg mass. ``` Thought for 232 seconds Okay, so I need to figure out the density of gold and then use that to calculate the length of a cube made with 1 kg of gold. Hmm, let's break this down step by step. First, I remember that density is mass divided by volume. The formula for density is: Density= Volume Mass ​ But wait, to use this formula, I need to know the volume of the gold cube. Since it's a cube, all sides are equal, right? So if I can find the side length, I can find the volume. But before that, I need the density of gold. I think gold is really dense, but I'm not exactly sure what that number is. I think it's around 19.3 grams per cubic centimeter. Yeah, that sounds right. Let me confirm that... I think I remember that gold has a density of about 19.3 g/cm³. Okay, so that's the density. Now, I have a mass of 1 kg. Wait, I need to make sure the units are consistent because density is mass per volume. So, should I convert kilograms to grams? Yeah, that makes sense. 1 kg is 1000 grams. So, mass is 1000 grams, and density is 19.3 grams per cm³. So, using the density formula, I can rearrange it to solve for volume: Volume= Density Mass ​ Plugging in the numbers: Volume= 19.3 grams/cm 3 1000 grams ​ Let me do that division. 1000 divided by 19.3. Hmm, 19.3 goes into 1000 how many times? Let me calculate 19.3 times 50 is 965, which is less than 1000. Then 19.3 times 52 is 1003.6, which is just over 1000. So, it's approximately 52 cm³, maybe around 52.2 cm³ or something. Wait, actually, let me compute it more accurately. 19.3 multiplied by 52 is 1003.6, which is 3.6 grams over. So, 1000 / 19.3 is approximately 51.88 cm³. So, the volume is about 51.88 cm³. Now, since the cube has equal sides, the volume of a cube is side length cubed. So, Volume=Side 3 Therefore, to find the side length, I need to take the cube root of the volume. So, Side= 3 Volume ​ Plugging in the volume: Side= 3 51.88 cm 3 ​ Hmm, cube root of 51.88. Let me think about this. 3³ is 27, 4³ is 64, so it's somewhere between 3 and 4 cm. Let me calculate it more accurately. I know that 3.7³ is 3.7 * 3.7 * 3.7. 3.7 * 3.7 is 13.69, then 13.69 * 3.7. Let me compute that. 13.69 * 3 is 41.07, and 13.69 * 0.7 is 9.583. Adding them together gives 41.07 + 9.583 = 50.653 cm³. Hmm, that's close to 51.88, but a bit less. So, 3.7 cm³ gives 50.653 cm³. The volume we have is 51.88 cm³, so it's a bit higher. So, let's try 3.72 cm. 3.72 * 3.72 = ? First, 3.7 * 3.7 = 13.69, as before. Then, 0.02 * 3.72 is 0.0744, so 3.72 * 3.72 = 13.69 + 0.0744 + 0.0744? Wait, no, that's not correct. Actually, (a + b)² = a² + 2ab + b². So, (3.7 + 0.02)² = 3.7² + 23.7 0.02 + 0.02² = 13.69 + 0.148 + 0.0004 = 13.8384. Then, 13.8384 * 3.72. Let me compute that. 13.8384 * 3 = 41.5152 13.8384 * 0.7 = 9.68688 13.8384 * 0.02 = 0.276768 Adding them together: 41.5152 + 9.68688 = 51.20208 51.20208 + 0.276768 = 51.478848 cm³ So, 3.72³ is approximately 51.478848 cm³, which is a bit less than 51.88 cm³. So, let's try 3.73. 3.73 * 3.73 = ? Again, using the same method: 3.73 is 3.7 + 0.03 So, (3.7 + 0.03)² = 3.7² + 23.7 0.03 + 0.03² = 13.69 + 0.222 + 0.0009 = 13.9129 Then, 13.9129 * 3.73. 
Compute step by step: 13.9129 * 3 = 41.7387 13.9129 * 0.7 = 9.73903 13.9129 * 0.03 = 0.417387 Adding them up: 41.7387 + 9.73903 = 51.47773 51.47773 + 0.417387 = 51.895117 cm³ Wow, that's very close to 51.88 cm³. So, 3.73³ is approximately 51.895117 cm³. Which is a little over 51.88 cm³. So, the exact cube root is just a little less than 3.73 cm. Maybe 3.728 cm? Wait, perhaps I should use linear approximation or Newton-Raphson method for a better estimate. But this is getting complicated. Maybe I can use a calculator here, but since I don't have one, perhaps I can estimate. Alternatively, maybe I can use the exact calculation. But actually, perhaps I can note that 3.72³ is 51.478848, and 3.73³ is 51.895117. Since 51.88 is between them, so the cube root is somewhere in between. The difference between 51.88 and 51.478848 is 0.401152. The difference between 51.895117 and 51.478848 is 0.416269. So, the fraction is 0.401152 / 0.416269 ≈ 0.962. So, the cube root is approximately 3.72 + 0.962*(0.001) ≈ 3.72 + 0.000962 ≈ 3.720962 cm. So, approximately 3.721 cm. Therefore, the side length is approximately 3.721 cm. But wait, let me double-check my calculations because this seems a bit tedious. Alternatively, maybe I can use the formula for the volume of a cube and solve for the side length. Wait, another approach: let's take the cube root of 51.88. We know that 3.7³ is 50.653, and 3.72³ is 51.4788, which is less than 51.88. So, 3.72³ = 51.4788 Let me compute 3.72 + delta, where delta is small, such that (3.72 + delta)³ = 51.88 Using the binomial expansion: (3.72 + delta)^3 = 3.72³ + 3*(3.72)²delta + 3 (3.72)*delta² + delta³ We can ignore delta squared and delta cubed since delta is small. So, (3.72 + delta)^3 ≈ 3.72³ + 3*(3.72)²delta = 51.4788 + 3 (13.8384)delta = 51.4788 + 41.5152 delta Set this equal to 51.88: 51.4788 + 41.5152*delta = 51.88 Subtract 51.4788: 41.5152*delta = 0.4012 So, delta ≈ 0.4012 / 41.5152 ≈ 0.00962 So, delta ≈ 0.00962 cm. Therefore, the cube root is approximately 3.72 + 0.00962 ≈ 3.72962 cm. So, approximately 3.73 cm. But wait, when I cube 3.73, I get 51.895 cm³, which is just a bit over 51.88. So, actually, the cube root is slightly less than 3.73. Let me denote delta as negative. So, (3.72 + delta)^3 = 51.88 We found that 3.72³ is 51.4788, and 3.73³ is 51.8951. So, 51.88 is 51.8951 - 0.0151 above 3.73³. So, delta is such that: 51.4788 + 41.5152*delta = 51.88 So, 41.5152*delta = 51.88 - 51.4788 = 0.4012 Therefore, delta = 0.4012 / 41.5152 ≈ 0.00962 But since 3.73³ is higher, delta is negative, so delta ≈ -0.00962 cm. So, the cube root is approximately 3.72 - 0.00962 ≈ 3.71038 cm. Therefore, approximately 3.7104 cm. So, the side length is approximately 3.71 cm. Wait, let me verify by cubing 3.71: 3.71 * 3.71 = 13.7641 13.7641 * 3.71: Let me compute 13.7641 * 3 = 41.2923 13.7641 * 0.7 = 9.63487 13.7641 * 0.01 = 0.137641 Adding them together: 41.2923 + 9.63487 = 50.92717 50.92717 + 0.137641 ≈ 51.06481 cm³ Hmm, 51.06481, which is still less than 51.88. So, 3.71³ is 51.06481 cm³. So, that's still less than 51.88. Let me try 3.72: As before, 3.72³ is 51.4788 cm³, which is still less. 3.721³: Let's compute 3.721³. 
First, 3.721 * 3.721: 3.721 * 3.721 Compute 3 * 3.721 = 11.163 0.7 * 3.721 = 2.6047 0.02 * 3.721 = 0.07442 0.001 * 3.721 = 0.003721 So, adding them: 11.163 + 2.6047 = 13.7677 13.7677 + 0.07442 = 13.84212 13.84212 + 0.003721 ≈ 13.845841 So, 3.721² ≈ 13.845841 Now, 13.845841 * 3.721: Again, break it down: 13 * 3.721 = 48.373 0.845841 * 3.721 ≈ Let's compute 0.8 * 3.721 = 2.9768 0.04 * 3.721 = 0.14884 0.005841 * 3.721 ≈ 0.02176 Adding them together: 2.9768 + 0.14884 = 3.12564 3.12564 + 0.02176 ≈ 3.1474 So, total is 48.373 + 3.1474 ≈ 51.5204 cm³ So, 3.721³ ≈ 51.5204 cm³ Which is still less than 51.88. Hmm, so 3.721³ is 51.5204, which is less than 51.88. So, let's try 3.722. Compute 3.722³: First, 3.722²: Again, 3.72² is 13.8384, so 3.722² = 13.8384 + 2*(0.002)*3.72 + (0.002)^2 Wait, actually, 3.722 is 3.72 + 0.002, so: (3.72 + 0.002)² = 3.72² + 23.72 0.002 + 0.002² = 13.8384 + 0.01488 + 0.000004 = 13.853284 Then, 13.853284 * 3.722: Break it down: 13 * 3.722 = 48.386 0.853284 * 3.722 ≈ Let's compute: 0.8 * 3.722 = 2.9776 0.05 * 3.722 = 0.1861 0.003284 * 3.722 ≈ 0.01222 Adding them: 2.9776 + 0.1861 = 3.1637 3.1637 + 0.01222 ≈ 3.1759 Total volume: 48.386 + 3.1759 ≈ 51.5619 cm³ So, 3.722³ ≈ 51.5619 cm³, which is less than 51.88. So, 3.722³ = 51.5619 Difference from 51.88: 51.88 - 51.5619 = 0.3181 So, need to add more. Let me compute the derivative of the volume with respect to side length to approximate the delta. V = s³ dV/ds = 3s² At s = 3.722, dV/ds = 3*(3.722)² ≈ 3*13.853284 ≈ 41.55985 So, delta V ≈ dV/ds * delta = 41.55985 * delta = 0.3181 So, delta ≈ 0.3181 / 41.55985 ≈ 0.00765 cm Therefore, s ≈ 3.722 + 0.00765 ≈ 3.72965 cm So, approximately 3.7297 cm So, the cube root is approximately 3.7297 cm Therefore, the side length of the cube is approximately 3.73 cm To be precise, about 3.73 cm. Therefore, the edge length of the cube is approximately 3.73 cm. Let me verify by cubing 3.73: 3.73 * 3.73 = 13.9129 13.9129 * 3.73: 13 * 3.73 = 48.49 0.9129 * 3.73 ≈ Let's compute: 0.9 * 3.73 = 3.357 0.0129 * 3.73 ≈ 0.0482 Adding together: 3.357 + 0.0482 ≈ 3.4052 Total volume: 48.49 + 3.4052 ≈ 51.8952 cm³ Which is very close to 51.88 cm³. So, 3.73 cm³ is approximately 51.8952, which is about 51.90, which is very close to 51.88. Therefore, 3.73 cm is a good approximation. So, the edge length is approximately 3.73 cm. Hence, the edge of the cube is approximately 3.73 cm. Final Answer The edge of the cube is approximately \boxed{3.73} cm. ```
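For reference, a quick numerical check of the arithmetic the model grinds through above (taking the density of gold as 19.3 g/cm³, as in the post):

```python
# Check: volume and edge length of a 1 kg gold cube, density assumed 19.3 g/cm^3.
mass_g = 1000.0
density_g_per_cm3 = 19.3
volume = mass_g / density_g_per_cm3   # ~51.8 cm^3
side = volume ** (1 / 3)              # ~3.73 cm
print(f"volume = {volume:.2f} cm^3, side = {side:.3f} cm")
```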
2025-01-23T09:27:55
https://www.reddit.com/r/LocalLLaMA/comments/1i7zn2e/is_anyone_else_obsessed_with_watching_r1s/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7zn2e
false
null
t3_1i7zn2e
/r/LocalLLaMA/comments/1i7zn2e/is_anyone_else_obsessed_with_watching_r1s/
false
false
self
30
null
Is it possible to run the original DeepSeek R1 600B+ unsloth/DeepSeek-R1-GGUF versions on AMD MI300X GPUs?
1
[removed]
2025-01-23T09:34:38
https://www.reddit.com/r/LocalLLaMA/comments/1i7zq0x/is_it_possible_to_run_original_deepseek_r1_600b/
Super-Designer4622
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7zq0x
false
null
t3_1i7zq0x
/r/LocalLLaMA/comments/1i7zq0x/is_it_possible_to_run_original_deepseek_r1_600b/
false
false
self
1
null
Simple reasoning effort extender with r1 by removing early </think> tag
1
[removed]
2025-01-23T09:43:24
https://www.reddit.com/r/LocalLLaMA/comments/1i7ztx1/simple_reasoning_effort_extender_with_r1_by/
vTuanpham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7ztx1
false
null
t3_1i7ztx1
/r/LocalLLaMA/comments/1i7ztx1/simple_reasoning_effort_extender_with_r1_by/
false
false
https://b.thumbs.redditm…l79mNRV0J2FI.jpg
1
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]}
Can the original DeepSeek R1 600B+ model (unsloth/DeepSeek-R1-GGUF) be executed on AMD MI300X GPUs?
1
[removed]
2025-01-23T09:45:12
https://www.reddit.com/r/LocalLLaMA/comments/1i7zuqm/can_the_original_deepseek_r1_600b_model/
RageshAntony
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7zuqm
false
null
t3_1i7zuqm
/r/LocalLLaMA/comments/1i7zuqm/can_the_original_deepseek_r1_600b_model/
false
false
self
1
null
Can the original DeepSeek R1 600B+ model (unsloth/DeepSeek-R1-GGUF) be executed on AMD MI300X GPUs?
9
I'm considering a local deployment on AMD MI300X GPUs due to their lower cost on Runpod. If it's feasible to use these GGUF models on AMD hardware, which solution is best for inference: llama.cpp, Ollama, vLLM, or SGLang?
2025-01-23T09:46:46
https://www.reddit.com/r/LocalLLaMA/comments/1i7zvh2/can_the_original_deepseek_r1_600b_model/
MatrixEternal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i7zvh2
false
null
t3_1i7zvh2
/r/LocalLLaMA/comments/1i7zvh2/can_the_original_deepseek_r1_600b_model/
false
false
self
9
null
Buying Mac Mini M4 Pro for local LLM, considering Project DIGITS also - thoughts on spec?
1
[removed]
2025-01-23T10:08:57
https://www.reddit.com/r/LocalLLaMA/comments/1i8064v/buying_mac_mini_m4_pro_for_local_llm_considering/
Better-Salamander499
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8064v
false
null
t3_1i8064v
/r/LocalLLaMA/comments/1i8064v/buying_mac_mini_m4_pro_for_local_llm_considering/
false
false
self
1
null
How to make DeepSeek-R1-Distill-Llama-70B Model reason?
2
I'm using Llama Instruct, but the model generates tokens without any <thinking> tags, even when I instruct it to think. What could be the problem?
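One common cause is serving an R1 distill with a generic Llama Instruct chat template, so the generation prompt never opens a think block. A quick check is to print what the distill's own tokenizer template actually emits; the model id below is the public Hugging Face repo for this distill, and whether the template opens a <think> block depends on the tokenizer version you have.

```python
# Inspect what the distill's chat template actually sends to the model.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Llama-70B")
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "What is 17 * 24?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)  # check whether the assistant turn is opened with a <think> block
```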
2025-01-23T10:16:26
https://www.reddit.com/r/LocalLLaMA/comments/1i809q8/how_to_make_deepseekr1distillllama70b_model_reason/
d00m_sayer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i809q8
false
null
t3_1i809q8
/r/LocalLLaMA/comments/1i809q8/how_to_make_deepseekr1distillllama70b_model_reason/
false
false
self
2
null
I wondered if Deepseek is losing money with those API prices, so I made some numbers
1
[removed]
2025-01-23T10:20:12
https://www.reddit.com/r/LocalLLaMA/comments/1i80bkt/i_wondered_if_deepseek_is_losing_money_with_those/
raul3820
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i80bkt
false
null
t3_1i80bkt
/r/LocalLLaMA/comments/1i80bkt/i_wondered_if_deepseek_is_losing_money_with_those/
false
false
self
1
null
deepseek is a side project
2,229
2025-01-23T10:22:48
https://i.redd.it/zdvrlxahzpee1.jpeg
ParsaKhaz
i.redd.it
1970-01-01T00:00:00
0
{}
1i80cwf
false
null
t3_1i80cwf
/r/LocalLLaMA/comments/1i80cwf/deepseek_is_a_side_project/
false
false
https://b.thumbs.redditm…YRn5cTG75bUY.jpg
2,229
{'enabled': True, 'images': [{'id': 'Bg8LHev9U1iyL-ekURj311Y-6KKeKJ5bR-GhY8kRYKM', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/zdvrlxahzpee1.jpeg?width=108&crop=smart&auto=webp&s=eb307cac70e70afa7b4ab804058cc1103147c2d2', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/zdvrlxahzpee1.jpeg?width=216&crop=smart&auto=webp&s=358aef7f8fc5378d17fa6c1669296f159dc83b8f', 'width': 216}, {'height': 237, 'url': 'https://preview.redd.it/zdvrlxahzpee1.jpeg?width=320&crop=smart&auto=webp&s=80ea50ef26302c8269077f34ac234df87e8ed526', 'width': 320}, {'height': 474, 'url': 'https://preview.redd.it/zdvrlxahzpee1.jpeg?width=640&crop=smart&auto=webp&s=df808bd8bc2e4ba90db2fdb005eaae092d5d8206', 'width': 640}, {'height': 711, 'url': 'https://preview.redd.it/zdvrlxahzpee1.jpeg?width=960&crop=smart&auto=webp&s=4114ec60d7555edbdb350d38dfb7a17a50bfaa60', 'width': 960}, {'height': 800, 'url': 'https://preview.redd.it/zdvrlxahzpee1.jpeg?width=1080&crop=smart&auto=webp&s=a71df8bf545f0c824afb0ac1b4cb5c8b15cee749', 'width': 1080}], 'source': {'height': 810, 'url': 'https://preview.redd.it/zdvrlxahzpee1.jpeg?auto=webp&s=442bc847ab0d9966865c88979a13ab7d8a3f9c71', 'width': 1093}, 'variants': {}}]}
Wait, let me double-check if that’s correct...
0
2025-01-23T10:33:30
https://i.redd.it/eguv7dtd1qee1.png
AaronFeng47
i.redd.it
1970-01-01T00:00:00
0
{}
1i80i68
false
null
t3_1i80i68
/r/LocalLLaMA/comments/1i80i68/wait_let_me_doublecheck_if_thats_correct/
false
false
https://b.thumbs.redditm…cFiWLPkHF1pQ.jpg
0
{'enabled': True, 'images': [{'id': 'i5feG7AetxsYC9NM9mSiohcNVz678oCO62BPYfmxolQ', 'resolutions': [{'height': 145, 'url': 'https://preview.redd.it/eguv7dtd1qee1.png?width=108&crop=smart&auto=webp&s=f0a1316fc90d012bc9fc66734ca633e57b8e582d', 'width': 108}, {'height': 290, 'url': 'https://preview.redd.it/eguv7dtd1qee1.png?width=216&crop=smart&auto=webp&s=7fc0ef49292a90e39e9938ecbcbd6b4a5a40e199', 'width': 216}, {'height': 430, 'url': 'https://preview.redd.it/eguv7dtd1qee1.png?width=320&crop=smart&auto=webp&s=89ef3c6e3374ee744f4511b2e578b4bd3ecb8517', 'width': 320}, {'height': 861, 'url': 'https://preview.redd.it/eguv7dtd1qee1.png?width=640&crop=smart&auto=webp&s=3040d5058b22de0fd85af99868591889b6aa7101', 'width': 640}, {'height': 1292, 'url': 'https://preview.redd.it/eguv7dtd1qee1.png?width=960&crop=smart&auto=webp&s=624f1c051f55456ef34a89ceebf939b4b4238a91', 'width': 960}, {'height': 1453, 'url': 'https://preview.redd.it/eguv7dtd1qee1.png?width=1080&crop=smart&auto=webp&s=6e29405ee93c10aea0340418fa04f6670ae10e25', 'width': 1080}], 'source': {'height': 1938, 'url': 'https://preview.redd.it/eguv7dtd1qee1.png?auto=webp&s=821f6bd6ab476b636b0adae1b4bd8a7d9f8c8ff4', 'width': 1440}, 'variants': {}}]}
Deepseek-r1:32b
4
Hey, have any of you realised that, when using it for code generation and refactoring, Deepseek-r1 somehow misses a random character from variables? It does it repeatedly, and I'm wondering whether my setup is the cause or if it's something inherent. Setup: Ollama; both Deepseek-r1:32b and Deepseek-r1:32b-qwen-distill-q4_K_M; running fully on a 3090; hosted via Open WebUI; parameters all at the WebUI defaults bar the context length, which is set at 4096.
2025-01-23T10:36:33
https://www.reddit.com/r/LocalLLaMA/comments/1i80job/deepseekr132b/
morifo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i80job
false
null
t3_1i80job
/r/LocalLLaMA/comments/1i80job/deepseekr132b/
false
false
self
4
null
Unity/Unreal with GPT Realtime Audio
3
Does anyone know how I can connect Ollama or GPT Realtime audio for a VR game that I'm trying to make ? I want there to be a character with lipsync based on realtime audio. Something like Digital Human from Nvidia but more customizable. I would imagine it makes for killer VR gameplay
2025-01-23T10:39:01
https://www.reddit.com/r/LocalLLaMA/comments/1i80kuk/unityunreal_with_gpt_realtime_audio/
Suspicious-Size-8159
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i80kuk
false
null
t3_1i80kuk
/r/LocalLLaMA/comments/1i80kuk/unityunreal_with_gpt_realtime_audio/
false
false
self
3
null
Feel like a n00b, but build spec questions
1
[removed]
2025-01-23T10:42:39
https://www.reddit.com/r/LocalLLaMA/comments/1i80mp1/feel_like_a_n00b_but_build_spec_questions/
Adam-TeamDE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i80mp1
false
null
t3_1i80mp1
/r/LocalLLaMA/comments/1i80mp1/feel_like_a_n00b_but_build_spec_questions/
false
false
self
1
null
GPU as a Service for Uncensored Models
6
All the current GPU-as-a-Service providers enable on-demand inference only for mainstream models. Would you use a GPU-as-a-Service provider that focuses on uncensored and non-mainstream LLMs? The provider would host these models and enable on-demand access to non-mainstream LLMs.
2025-01-23T11:03:52
https://www.reddit.com/r/LocalLLaMA/comments/1i80xot/gpu_as_a_service_for_uncensored_models/
Sarcinismo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i80xot
false
null
t3_1i80xot
/r/LocalLLaMA/comments/1i80xot/gpu_as_a_service_for_uncensored_models/
false
false
self
6
null
I asked the AI DeepSeek R1 to evaluate the limit of lim x-->0 x^3/(sin(2x)-2x). So this is its thought process
1
Okay, let me try to work through this limit problem. The user asked for the limit as x approaches 0 of x cubed divided by sin(2x) minus 2x. So, the expression is x³/(sin(2x) - 2x). First, I remember that when evaluating limits, especially as x approaches 0, it's a good idea to plug in the value first to see if it's indeterminate. Let me substitute x = 0 into the numerator and denominator. The numerator becomes 0³, which is 0. The denominator becomes sin(0) - 2\*0, which is 0 - 0 = 0. So, we have a 0/0 indeterminate form. That means L'Hôpital's Rule is applicable here. L'Hôpital's Rule says that if the limit of f(x)/g(x) as x approaches a is 0/0 or ∞/∞, then we can take the derivatives of the numerator and denominator and try the limit again. So, let's differentiate the numerator and denominator. The numerator is x³, so its derivative is 3x². The denominator is sin(2x) - 2x. The derivative of sin(2x) is 2cos(2x) (using the chain rule), and the derivative of -2x is -2. So, the derivative of the denominator is 2cos(2x) - 2. Now, the new limit after applying L'Hôpital's Rule once is (3x²)/(2cos(2x) - 2). Let's check what happens when x approaches 0 here. Plugging in x = 0, the numerator is 0, and the denominator is 2cos(0) - 2 = 2\*1 - 2 = 0. So we still have 0/0. That means we need to apply L'Hôpital's Rule again. Taking derivatives once more: the numerator is 3x², derivative is 6x. The denominator is 2cos(2x) - 2, derivative is -4sin(2x). So the new limit is 6x / (-4sin(2x)). Simplify coefficients: 6/-4 is -3/2, so it's (-3/2)\*(x/sin(2x)). Now, evaluate the limit as x approaches 0 of x/sin(2x). Remember that lim\_{x→0} sin(ax)/(ax) = 1, so lim\_{x→0} x/sin(2x) = lim\_{x→0} 1/(2\*(sin(2x)/(2x))) ) = 1/(2\*1) = 1/2. Therefore, the limit becomes (-3/2)\*(1/2) = -3/4. Wait, but let me double-check. If I have x/sin(2x), that's equivalent to (1/2)\*(2x)/sin(2x). As x approaches 0, 2x approaches 0, so (2x)/sin(2x) approaches 1. Therefore, x/sin(2x) approaches 1/2. So yes, that part is correct. Multiplying that by -3/2 gives -3/4. But wait, when I took the second derivative, the denominator's derivative was -4sin(2x), right? Let me confirm: derivative of 2cos(2x) is 2\*(-sin(2x))\*2 = -4sin(2x), and derivative of -2 is 0. So yes, denominator's second derivative is -4sin(2x). So the second application of L'Hôpital's gives 6x / (-4sin(2x)) which simplifies to (-6x)/(4sin(2x)) = (-3x)/(2sin(2x)). Then, factoring out constants, (-3/2)\*(x/sin(2x)), which as x→0 is (-3/2)\*(1/2) = -3/4. Alternatively, maybe using Taylor series expansion could be another approach. Let's see. The denominator sin(2x) can be expanded as 2x - (2x)\^3/6 + (2x)\^5/120 - ... So sin(2x) - 2x = - (8x³)/6 + ... which is - (4x³)/3 + higher order terms. Therefore, the denominator is approximately -4x³/3 when x is near 0. The numerator is x³. So the expression becomes x³ / (-4x³/3) = -3/4. That confirms the result we got using L'Hôpital's Rule twice. So both methods give the same answer, -3/4. Therefore, the limit should be -3/4. I think that's it. Let me just check once more to make sure I didn't make any calculation mistakes. Especially with the signs. The denominator after first derivative was 2cos(2x) - 2, which at x=0 is 0, then second derivative is -4sin(2x), which at x=0 is 0 again. Wait, but after the second derivative, we have 6x in the numerator and -4sin(2x) in the denominator. Plugging x=0 gives 0/0 again? Wait, that can't be. Wait, no. Wait, after the first L'Hôpital's, we had (3x²)/(2cos(2x)-2). Plugging x=0 gives 0/0. 
Then we applied L'Hôpital's again to get (6x)/(-4sin(2x)). Now, plugging x=0 here gives 0/0 again. Wait a minute, that's another 0/0. So I might have to apply L'Hôpital's Rule a third time. Wait, I think I made a mistake here. Let me redo this step. After the first application: (3x²)/(2cos2x -2). Plug x=0, 0/0. Apply L'Hôpital's again: numerator derivative is 6x, denominator derivative is -4sin2x. So new limit is (6x)/(-4sin2x). Plugging x=0: 0/0 again. So need to apply L'Hôpital's a third time. Then third derivatives: numerator derivative of 6x is 6, denominator derivative of -4sin2x is -8cos2x. So third application gives 6 / (-8cos2x). Now plugging x=0, cos0=1, so 6/(-8\*1) = -6/8 = -3/4. Ah, I see. So I stopped too early. After the second application, we still had 0/0, so we need to apply L'Hôpital's a third time. Alternatively, using the Taylor series approach was more straightforward. So the correct process is three applications of L'Hôpital's Rule. But in my initial calculation, I tried to manipulate (6x)/(-4sin2x) by splitting into (-3/2)\*(x/sin2x) and then taking the limit as x approaches 0. But x/sin2x as x→0 is equivalent to 1/(2x)/sin2x multiplied by 2x, but perhaps I confused the steps there. Wait, actually, lim\_{x→0} x/sin2x = lim\_{x→0} \[x/(2x)\] \* \[2x/sin2x\] = (1/2)\*1 = 1/2. Because as x→0, sin2x \~ 2x, so sin2x ≈ 2x, so x/sin2x ≈ x/(2x) = 1/2. Therefore, the limit of x/sin2x as x→0 is 1/2. Therefore, (6x)/(-4sin2x) = (6/-4)\*(x/sin2x) = (-3/2)\*(1/2) = -3/4. So even though after the second L'Hôpital's application we have 0/0, we can algebraically simplify x/sin2x to find the limit without applying L'Hôpital's again. Alternatively, applying L'Hôpital's a third time would also work. Let me confirm: Third application: lim x→0 (6x)/(-4sin2x). Differentiate numerator and denominator: numerator derivative is 6, denominator derivative is -8cos2x. So the limit becomes 6 / (-8cos0) = 6/(-8\*1) = -6/8 = -3/4. Same result. So whether you apply L'Hôpital's Rule a third time or recognize the limit of x/sin2x, you get the same answer. Therefore, the correct answer is -3/4. Initially, I thought of stopping after two applications, but realized that after the second application, we still had 0/0, requiring a third. However, by manipulating the expression, we could avoid the third application. Either way, the result is the same. Just to ensure no step was missed, let's recap: Original limit: x³/(sin2x - 2x) as x→0. 1. Plug x=0: 0/0. Apply L’Hospital. 2. First derivative: 3x² / (2cos2x - 2). Plug x=0: 0/0. Apply again. 3. Second derivative: 6x / (-4sin2x). Plug x=0: 0/0. Apply third time. 4. Third derivative: 6 / (-8cos2x). Plug x=0: 6/-8 = -3/4. Alternatively, after second derivative: (6x)/(-4sin2x) = (-3/2)\*(x/sin2x). Recognize x/sin2x as 1/2, so total limit is -3/4. Yes, this confirms the answer. So the final answer is -3/4.
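For reference, here is a compact check of the result via Taylor expansion (my own addition, not part of the model output above):

```latex
\sin(2x) = 2x - \frac{(2x)^3}{6} + O(x^5) = 2x - \frac{4x^3}{3} + O(x^5)
\quad\Rightarrow\quad \sin(2x) - 2x = -\frac{4x^3}{3} + O(x^5),
\qquad \frac{x^3}{\sin(2x) - 2x} = \frac{x^3}{-\frac{4}{3}x^3 + O(x^5)} \;\xrightarrow{\,x \to 0\,}\; -\frac{3}{4}.
```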
2025-01-23T11:05:34
https://www.reddit.com/r/LocalLLaMA/comments/1i80yiq/i_asked_ai_deepseek_r1_to_evaluate_the_limit_of/
gmjavia17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i80yiq
false
null
t3_1i80yiq
/r/LocalLLaMA/comments/1i80yiq/i_asked_ai_deepseek_r1_to_evaluate_the_limit_of/
false
false
self
1
null
How does llama-cpp-python "use" llama-cpp?
6
Very noob question here, so I apologize in advance. I'm new to the AI/LLM space and have begun looking at llama-cpp. Through this, I saw that there are some Python bindings available via the llama-cpp-python library. However, it's my understanding that llama-cpp is a binary, not a library. So is llama-cpp-python effectively working such that llama-cpp can be used in Python code, rather than in the terminal? I guess this also asks the question of how things that use llama-cpp as a "backend" work, such as Ollama?
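For concreteness, the kind of usage I mean looks like this (a minimal sketch; the model path is a placeholder). From what I can tell, the package ships Python bindings to the llama.cpp shared library, so the model runs in-process rather than by shelling out to a llama.cpp binary, but corrections welcome:

```python
from llama_cpp import Llama

# Load a local GGUF model in-process; n_ctx sets the context window.
llm = Llama(model_path="./model.gguf", n_ctx=2048)

# A simple completion call, similar to what the llama.cpp CLI would produce.
output = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
print(output["choices"][0]["text"])
```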
2025-01-23T11:05:40
https://www.reddit.com/r/LocalLLaMA/comments/1i80ykd/how_does_llamacpppython_use_llamacpp/
Dxbson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i80ykd
false
null
t3_1i80ykd
/r/LocalLLaMA/comments/1i80ykd/how_does_llamacpppython_use_llamacpp/
false
false
self
6
null
Deepseek in RP (SFW)
11
https://preview.redd.it/…uld post it here
2025-01-23T11:07:54
https://www.reddit.com/r/LocalLLaMA/comments/1i80znq/deepseek_in_rp_sfw/
Healthy-Dingo-5944
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i80znq
false
null
t3_1i80znq
/r/LocalLLaMA/comments/1i80znq/deepseek_in_rp_sfw/
false
false
https://b.thumbs.redditm…AEqtYzDPIArc.jpg
11
null
LLM for shell pipe text processing?
2
Are there tools that let me use an LLM for simple one-off shell pipe text processing tasks, e.g.: $ cat foo.txt | llm "translate to english" Specifically, I am looking for something that can automatically enhance a short prompt, with a model that is trained for this kind of task and clever enough to handle long text documents, which a plain LLM wouldn't be. For example, something like this: $ ollama run model "translate to english: $(cat foo.txt)" rarely works.
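To be clearer about what I mean, a minimal wrapper might look like the sketch below (my own example; the model name and endpoint assume a default local Ollama install), but I'd prefer an existing tool that also does the prompt enhancement and long-document handling for me:

```python
#!/usr/bin/env python3
# Usage:  cat foo.txt | ./llm-pipe.py "translate to english"
import json
import sys
import urllib.request

instruction = sys.argv[1] if len(sys.argv) > 1 else "Summarize the following text."
text = sys.stdin.read()

# Expand the short instruction into a slightly more explicit prompt.
prompt = (
    f"{instruction}\n\n"
    "Apply the instruction above to the text between the markers. "
    "Output only the result, with no commentary.\n"
    f"---BEGIN TEXT---\n{text}\n---END TEXT---"
)

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # default Ollama endpoint
    data=json.dumps({"model": "llama3.1", "prompt": prompt, "stream": False}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```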
2025-01-23T11:29:57
https://www.reddit.com/r/LocalLLaMA/comments/1i81b3r/llm_for_shell_pipe_text_processing/
Spra991
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i81b3r
false
null
t3_1i81b3r
/r/LocalLLaMA/comments/1i81b3r/llm_for_shell_pipe_text_processing/
false
false
self
2
null
Are the AMD RX 580 and RX 480 (8GB) good for LLMs?
2
I found very cheap RX 580 and RX 480 cards on marketplace, and they seem like a deal given the VRAM they have (8GB). These are probably mining cards, but the price is good enough to consider. I looked online for LLM benchmarks against other cards but didn't find any clear comparisons. It seems these cards might be supported by Ollama, but I have no further information. Has anyone gotten their hands on these and made them work?
2025-01-23T11:31:34
https://www.reddit.com/r/LocalLLaMA/comments/1i81c05/are_the_amd_rx_580_and_rx_480_8gb_good_for_llms/
UniqueAttourney
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i81c05
false
null
t3_1i81c05
/r/LocalLLaMA/comments/1i81c05/are_the_amd_rx_580_and_rx_480_8gb_good_for_llms/
false
false
self
2
null
DeepSeek added recommendations for R1 local use to model card
161
[https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B#usage-recommendations](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B#usage-recommendations) **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:** 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs. 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.** 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \\boxed{}." 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
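For illustration (my addition, not from the model card), the recommendations map onto an OpenAI-compatible request roughly like this; the base URL, port, and model name are placeholders for whatever local server you run:

```python
from openai import OpenAI

# Point the client at a local OpenAI-compatible server (llama.cpp server, vLLM, etc.).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

resp = client.chat.completions.create(
    model="DeepSeek-R1-Distill-Qwen-32B",
    temperature=0.6,  # recommendation 1: 0.5-0.7, with 0.6 suggested
    # Recommendation 2: no system message; everything goes into the user turn.
    messages=[{
        "role": "user",
        "content": "Please reason step by step, and put your final answer within \\boxed{}.\n"
                   "What is 17 * 23?",
    }],
)
print(resp.choices[0].message.content)
```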
2025-01-23T11:36:56
https://www.reddit.com/r/LocalLLaMA/comments/1i81ev6/deepseek_added_recommandations_for_r1_local_use/
Snail_Inference
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i81ev6
false
null
t3_1i81ev6
/r/LocalLLaMA/comments/1i81ev6/deepseek_added_recommandations_for_r1_local_use/
false
false
self
161
{'enabled': False, 'images': [{'id': 'Q0QOTyTEboFw8cDsh_hlK-B2pRmxT16BpU6WvhiRAhg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?width=108&crop=smart&auto=webp&s=05513299537f9ac91b46a0ef050e330e6a41146a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?width=216&crop=smart&auto=webp&s=58f411ceb4a129b6fd04c891b0f50348540c111d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?width=320&crop=smart&auto=webp&s=f4922e2d1bdf4d98b9ae98ccd148d9a1fcc1cce7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?width=640&crop=smart&auto=webp&s=1e0d69a4ff14bebc6aa8e3ea68b33c5b632d47d8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?width=960&crop=smart&auto=webp&s=a9858a471059dc23dfab3344c026e3440db99079', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?width=1080&crop=smart&auto=webp&s=24f733061bede7f591279e304f9f0430a6aea381', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iGeXnfpFa5fajUZA8437ltPpnvjIlHkFysn4PHBZTIg.jpg?auto=webp&s=1427219437ae40d0c8e3b466654ca6877e656dd3', 'width': 1200}, 'variants': {}}]}
Been ages since google released an open model
384
2025-01-23T11:43:36
https://i.redd.it/fa91scqqdqee1.png
Amgadoz
i.redd.it
1970-01-01T00:00:00
0
{}
1i81iim
false
null
t3_1i81iim
/r/LocalLLaMA/comments/1i81iim/been_ages_since_google_released_an_open_model/
false
false
https://b.thumbs.redditm…yJfA_MfaA95k.jpg
384
{'enabled': True, 'images': [{'id': 'lmrZHVG-iK_4iIFgX1R7vwwrx0hcpg-ZzL0PRFvbxkU', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/fa91scqqdqee1.png?width=108&crop=smart&auto=webp&s=e7b26c317e916b8a81e8021df77c0b9e293d4a69', 'width': 108}, {'height': 223, 'url': 'https://preview.redd.it/fa91scqqdqee1.png?width=216&crop=smart&auto=webp&s=3c977ff1618198f2a8c6252e5ff0bd5aeb33e066', 'width': 216}, {'height': 330, 'url': 'https://preview.redd.it/fa91scqqdqee1.png?width=320&crop=smart&auto=webp&s=4f78c10935c8984f8f9d17834c7720f182fed482', 'width': 320}], 'source': {'height': 517, 'url': 'https://preview.redd.it/fa91scqqdqee1.png?auto=webp&s=2c799315b85b291d89a3d18059455f551c45317c', 'width': 500}, 'variants': {}}]}
No Code RAG on Warren Buffett's letters to Shareholders: BuffettGPT
0
Dive deep into the mind of Warren Buffett with BuffettGPT, an advanced no-code RAG knowledge bot. Built on Athina Flows, it lets you chat with Warren Buffett's letters to shareholders from 2000 to 2023. **Here's a technical breakdown:** 1. **Query Processing:** Your question is analyzed and semantically optimized for precise information retrieval. 2. **Vector Search:** A high-performance Weaviate vector database, containing Buffett's letters, is queried for contextually relevant passages. 3. **LLM-Powered Synthesis:** A state-of-the-art Large Language Model (LLM) synthesizes the retrieved information, generating a comprehensive and insightful answer in Buffett's own style. The cool part is that all of this is built using no-code Athina Flows. Try it out for free and make your AI Flow here: [https://app.athina.ai/flows/templates/8fcf925d-a671-4c35-b62b-f0920365fe16](https://app.athina.ai/flows/templates/8fcf925d-a671-4c35-b62b-f0920365fe16)
2025-01-23T11:49:16
https://www.reddit.com/r/LocalLLaMA/comments/1i81lk7/no_code_rag_on_warren_buffetts_letters_to/
Sam_Tech1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i81lk7
false
null
t3_1i81lk7
/r/LocalLLaMA/comments/1i81lk7/no_code_rag_on_warren_buffetts_letters_to/
false
false
self
0
null
DeepSeek R1 is even better than OpenAI o1 and Claude 3.5 Sonnet
3
So I got to play around with DeepSeek R1, and based on the benchmarks I've seen and my test results, I can say it's just as good if not better (at certain things) than OpenAI o1 and Claude 3.5 Sonnet. It's a lot cheaper too (a very small fraction of o1's and Claude's pricing) but still delivers results. Here are some of its technical specs: * **Total Parameters:** 671 billion * **Active Parameters per Token:** 37 billion * **Context Length:** Up to 128K tokens * **Training Data:** Trained on 14.8 trillion tokens * **Training Compute Cost:** Approximately 2.664 million H800 GPU hours (taken from this detailed article: [https://blog.getbind.co/2025/01/23/deepseek-r1-vs-gpt-o1-vs-claude-3-5-sonnet-which-is-best-for-coding/\_](https://blog.getbind.co/2025/01/23/deepseek-r1-vs-gpt-o1-vs-claude-3-5-sonnet-which-is-best-for-coding/_))
2025-01-23T11:55:14
https://www.reddit.com/r/LocalLLaMA/comments/1i81opy/deepseek_r1_is_even_better_than_openai_o1_and/
johnzakma10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i81opy
false
null
t3_1i81opy
/r/LocalLLaMA/comments/1i81opy/deepseek_r1_is_even_better_than_openai_o1_and/
false
false
self
3
{'enabled': False, 'images': [{'id': '1_MnsoBOjHUVlBv8s1AW8GF3ZoHqy4Q7Cx8Vh-5po64', 'resolutions': [{'height': 23, 'url': 'https://external-preview.redd.it/qucmYrGLR_ezD9eMXgpepPDC5n8MtQ-JNXioY_ynCHg.jpg?width=108&crop=smart&auto=webp&s=5d3f084b1f24c6be1b219ed06d50ede11039ae20', 'width': 108}, {'height': 47, 'url': 'https://external-preview.redd.it/qucmYrGLR_ezD9eMXgpepPDC5n8MtQ-JNXioY_ynCHg.jpg?width=216&crop=smart&auto=webp&s=4c4c27a0375b804db5d90bf12bf5c57a81b64386', 'width': 216}], 'source': {'height': 60, 'url': 'https://external-preview.redd.it/qucmYrGLR_ezD9eMXgpepPDC5n8MtQ-JNXioY_ynCHg.jpg?auto=webp&s=58df702c38afd9cce5d0d8f1b6181031aa15e77b', 'width': 272}, 'variants': {}}]}
This merge is amazing: FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview
185
[https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) This model is a fusion (merge?) of the DeepSeekR1 & QwQ & SkyT1 32B models. I usually don't use merged models because, most of the time, I can't find any noticeable differences. However, this one is truly different. Here are some examples: My test setup is ollama + IQ4\_XS + 32k context + Q8\_0 KV Cache, questions are from: [https://www.reddit.com/r/LocalLLaMA/comments/1i65599/r1\_32b\_is\_be\_worse\_than\_qwq\_32b\_tests\_included/](https://www.reddit.com/r/LocalLLaMA/comments/1i65599/r1_32b_is_be_worse_than_qwq_32b_tests_included/) 1. Hello! I have multiple different files with different sizes, I want to move files from disk 1 to disk 2, which has only 688 space available. Without yapping, and being as concise as possible. What combination of files gets me closer to that number? The file sizes are: 36, 36, 49, 53, 54, 54, 63, 94, 94, 107, 164, 201, 361, 478 FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview: full answer with thoughts: [https://pastebin.com/W2QFACDM](https://pastebin.com/W2QFACDM) To maximize the use of disk 2's 688-unit capacity, here are the optimal file combinations: 1. **478 + 53 + 54 + 54 + 49 = 688** 2. **361 + 201 + 36 + 36 + 54 = 688** 3. **478 + 107 + 49 + 54 = 688** 4. **361 + 107 + 49 + 54 + 54 + 63 = 688** Each combination precisely fills the available space on disk 2. **This is the first time I've seen a local model give me 4 correct answers to this question; it's amazing.** DeepSeek-R1-Distill-Qwen-32B: full answer with thoughts: [https://pastebin.com/2VJrjVmJ](https://pastebin.com/2VJrjVmJ) The combination of files that gets you closest to the 688 space limit is: 361 + 164 + 107 + 54 + 54 = 686 This totals 686 units, which is just 2 units less than the available space on disk 2. 2. How many days are between 12-12-1971 and 18-4-2024? FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview: full answer with thoughts: [https://pastebin.com/PQGr1DDq](https://pastebin.com/PQGr1DDq) The number of days between December 12, 1971, and April 18, 2024, is 19,121 days. This calculation includes the full years, accounts for leap years, and considers the days in the partial years, providing the difference between the two dates. which is the correct answer, 19,121 days DeepSeek-R1-Distill-Qwen-32B: full answer with thoughts: [https://pastebin.com/T0iNuYHf](https://pastebin.com/T0iNuYHf) ### **Final Answer** \[ \boxed{19,486} \] 3. I have a boat with 3 free spaces. I want to transport a man, sheep and cat on the other side of the river. How to do that? FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview: full answer with thoughts: [https://pastebin.com/MZZ5nDiM](https://pastebin.com/MZZ5nDiM) To transport the man, sheep, and cat across the river, the solution depends on the boat's capacity. Given that the boat has "3 free spaces," it can carry three items in total, including the man. Therefore, the man can take all three (himself, the sheep, and the cat) across the river in one trip. Here's the step-by-step solution: 1. **First Trip**: The man boards the boat with the sheep and the cat. 2. **Crossing the River**: The man rows the boat across the river to the other side. 3. **Arrival**: Upon reaching the other side, the man, sheep, and cat all disembark from the boat. This completes the task as all three are now on the opposite side of the river in a single trip. 
which is the correct answer; they only need to cross the river once. DeepSeek-R1-Distill-Qwen-32B full answer with thoughts: [https://pastebin.com/XHQHwQth](https://pastebin.com/XHQHwQth) *fell into an infinite loop, I had to terminate it*
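As a side note (my addition, not part of either model's output), the file-size combinations are easy to verify with a tiny brute-force script:

```python
from itertools import combinations

sizes = [36, 36, 49, 53, 54, 54, 63, 94, 94, 107, 164, 201, 361, 478]
target = 688

# Enumerate every subset (2^14 = 16384, so brute force is fine) and keep exact hits.
hits = set()
for r in range(1, len(sizes) + 1):
    for combo in combinations(sizes, r):
        if sum(combo) == target:
            hits.add(tuple(sorted(combo)))

for combo in sorted(hits):
    print(" + ".join(map(str, combo)), "=", target)
```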
2025-01-23T11:56:19
https://www.reddit.com/r/LocalLLaMA/comments/1i81pbk/this_merge_is_amazing/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i81pbk
false
null
t3_1i81pbk
/r/LocalLLaMA/comments/1i81pbk/this_merge_is_amazing/
false
false
self
185
{'enabled': False, 'images': [{'id': 'R7FmvGUv0IuP2lOvwe4jYIRxvhzDD4fWEk8D8-E6wjE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?width=108&crop=smart&auto=webp&s=f4ce6ff91f28493117cd5b048bc6216a09220bef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?width=216&crop=smart&auto=webp&s=71086492e0bd267427d43b5860c6be4419149e93', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?width=320&crop=smart&auto=webp&s=760ce35daa5baabd11bed7c6c2cc53fd92c4512b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?width=640&crop=smart&auto=webp&s=a70e3333a94a5c2456a7acc9926686ecd7ac5c9c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?width=960&crop=smart&auto=webp&s=426d85b294dc069a37783d91f3f4591982523159', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?width=1080&crop=smart&auto=webp&s=b36973a03bf52079eebc34748f4189c4e0976c8e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9xx2hLsw1ifjZwY9Nn5e29pHkqpa0wW9wHi8pHYMO3s.jpg?auto=webp&s=5e0646c1f7e4cab74e200abeee3f00dbc2c33a9d', 'width': 1200}, 'variants': {}}]}
Rogue Rose 103B still neat at story-telling -- memorial post
1
[removed]
2025-01-23T12:06:45
https://www.reddit.com/r/LocalLLaMA/comments/1i81vaf/rogue_rose_103b_still_neat_at_storytelling/
uhuge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i81vaf
false
null
t3_1i81vaf
/r/LocalLLaMA/comments/1i81vaf/rogue_rose_103b_still_neat_at_storytelling/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XL8Yw75g99UuCw7sv0RUl-x9M-TvGwNpYbP4ulBil0w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/tGVagmMrgrOAt1_rqCjPysoGG8jz_NRGhB7FwOskTMc.jpg?width=108&crop=smart&auto=webp&s=a3699230aea6ec1a02799fb30b4444fd08b6609e', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/tGVagmMrgrOAt1_rqCjPysoGG8jz_NRGhB7FwOskTMc.jpg?width=216&crop=smart&auto=webp&s=669ae53845370a2985cc72c4910d0298e029d83a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/tGVagmMrgrOAt1_rqCjPysoGG8jz_NRGhB7FwOskTMc.jpg?width=320&crop=smart&auto=webp&s=9e86f96b10a2a2697f76545d2d053a4e1bd28027', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/tGVagmMrgrOAt1_rqCjPysoGG8jz_NRGhB7FwOskTMc.jpg?width=640&crop=smart&auto=webp&s=ebf19797e98f35f8c7485deec0b813c01b79ec32', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/tGVagmMrgrOAt1_rqCjPysoGG8jz_NRGhB7FwOskTMc.jpg?width=960&crop=smart&auto=webp&s=dfef94c19b5dcd3c10f92aa27a444192718b9b10', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/tGVagmMrgrOAt1_rqCjPysoGG8jz_NRGhB7FwOskTMc.jpg?width=1080&crop=smart&auto=webp&s=ad87c61cbe81cc86f617981e82576de66ba45cbe', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/tGVagmMrgrOAt1_rqCjPysoGG8jz_NRGhB7FwOskTMc.jpg?auto=webp&s=0a8a5a7c15a5e8d9cc6b3dee4a0e7dcf1e2398e7', 'width': 1200}, 'variants': {}}]}
SmolVLM 256 and 500M. Beating our 80B VLM from 17 months ago!
158
Hello! It's Andi from the multimodal team at Hugging Face. Today we released new SmolVLM: tiny vision language models that come in 256M and 500M parameters! How did we compress SmolVLM 2.2B even further while making it generalize? \> Smoller image encoder (SigLIP-base 93M) \> More diverse data mixture \> Better efficiency in tokenization for images We have released base and instruction fine-tuned variants; they work out of the box with transformers and TRL, and we also release a WebGPU demo. All of them are in this collection: [https://huggingface.co/collections/HuggingFaceTB/smolvlm-256m-and-500m-6791fafc5bb0ab8acc960fb0](https://huggingface.co/collections/HuggingFaceTB/smolvlm-256m-and-500m-6791fafc5bb0ab8acc960fb0) Here's a demo: [https://huggingface.co/spaces/HuggingFaceTB/SmolVLM-256M-Demo](https://huggingface.co/spaces/HuggingFaceTB/SmolVLM-256M-Demo) Also, here's the blog for more in-depth documentation: [https://huggingface.co/blog/smolervlm](https://huggingface.co/blog/smolervlm) Looking forward to seeing what you think! https://preview.redd.it/5viklrh4jqee1.png?width=1866&format=png&auto=webp&s=464a14c3795f760c96a584570cb454f31b3db066
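For a quick start, inference with transformers should look roughly like the sketch below; the exact checkpoint id and the sample image are placeholders, so please check the model cards in the collection for the canonical snippet:

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-256M-Instruct"  # placeholder; see the collection above
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image path
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "Describe this image briefly."}]}]

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=100)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```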
2025-01-23T12:14:25
https://www.reddit.com/r/LocalLLaMA/comments/1i81znv/smolvlm_256_and_500m_beating_our_80b_vlm_from_17/
futterneid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i81znv
false
null
t3_1i81znv
/r/LocalLLaMA/comments/1i81znv/smolvlm_256_and_500m_beating_our_80b_vlm_from_17/
false
false
https://b.thumbs.redditm…sLEVXCa0NJ5w.jpg
158
{'enabled': False, 'images': [{'id': 'DJkj1GjbUolEdq-byIut5G4YAZIozaswtXK_SwiuRhs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GQobgTxgKDrhJhG-UAnJyEaSqhi-5vBwrdogSL5wNmU.jpg?width=108&crop=smart&auto=webp&s=7d23809316087bf3b8d482dce522b1d699beba4a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GQobgTxgKDrhJhG-UAnJyEaSqhi-5vBwrdogSL5wNmU.jpg?width=216&crop=smart&auto=webp&s=d2e34b52aa37bc6ad696b9aa0dfe92c7cd9f3d41', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GQobgTxgKDrhJhG-UAnJyEaSqhi-5vBwrdogSL5wNmU.jpg?width=320&crop=smart&auto=webp&s=8d6bee89b523b33eedd28b7dd410f964586823ad', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GQobgTxgKDrhJhG-UAnJyEaSqhi-5vBwrdogSL5wNmU.jpg?width=640&crop=smart&auto=webp&s=26b8e0c51d51b02acc52b56c533ba8ea0c229040', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GQobgTxgKDrhJhG-UAnJyEaSqhi-5vBwrdogSL5wNmU.jpg?width=960&crop=smart&auto=webp&s=aacc0f999c22e5137df1d17b7562a931f788ccc5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GQobgTxgKDrhJhG-UAnJyEaSqhi-5vBwrdogSL5wNmU.jpg?width=1080&crop=smart&auto=webp&s=f3d49c087a0c22e06f38ed7cb03781f7a46c240f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GQobgTxgKDrhJhG-UAnJyEaSqhi-5vBwrdogSL5wNmU.jpg?auto=webp&s=8cfdc4e0cf5b7888d6a282c07bb06e5bffee636b', 'width': 1200}, 'variants': {}}]}
Introducing Salt: A New Approach to Speech Generation Built on LLMs (+ Open-Sourced Code!)
63
--- **Hey r/LocalLLaMA!** We’re excited to share **Salt**, a speech generation project we’ve been working on since August. Here’s a quick dive into its journey, technical details, and open resources! --- ### **Development Timeline** - **August 2024**: Started prototyping. - **Late August**: First working model (spoiler: it was rough). - **Final Approach**: Instead of traditional transformer-based TTS, we extended a pre-trained LLM (Llama3 3B) with a **custom tokenizer** (WavTokenizer for audio) and trained it end-to-end for **TTS** and **ASR**. --- ### **Key Details** - **Training**: ~150 A100 GPU hours. - **Architecture**: Think “DALL-E 1 for speech” – autoregressive generation using a modified LLM backbone. - **Why It’s Unique**: Unlike adapter-based models or pure TTS frameworks, Salt leverages LLM capabilities while handling audio tokens natively, also voice is prompt controllable. --- ### **Current State** - **Performance**: Competitive SIM-O scores (~0.63) against models like Mini Omni and Llama Omni (see [metrics](https://github.com/VikhrModels/aulate)). - **Future Work**: Fine-tuning for **music/audio generation**! Paper and updates coming soon. --- ### **Open Resources** - **Model**: [Hugging Face](https://huggingface.co/Vikhrmodels/salt-asr_wav-uni_1_tts_wav-uni_1-12k) - **Inference Demo**: [Google Colab](https://colab.research.google.com/drive/1Fbv1-4iBKSymgzAF2b1A4aCP9ajI4USL?usp=sharing) - **Full Training Code**: [GitHub](https://github.com/VikhrModels/Salt) - **Metrics Toolkit**: [Aulate](https://github.com/VikhrModels/aulate)
2025-01-23T12:17:47
https://www.reddit.com/r/LocalLLaMA/comments/1i821n9/introducing_salt_a_new_approach_to_speech/
Mysterious_Hearing14
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i821n9
false
null
t3_1i821n9
/r/LocalLLaMA/comments/1i821n9/introducing_salt_a_new_approach_to_speech/
false
false
self
63
{'enabled': False, 'images': [{'id': 'RuegWeTfQCjF-oz8cFMLVqhX0WuVq6qPg2XwtD4mzHQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LXmQ0QogZLQv1V79Xo5v_iKRRaQlTt5q2KwdjqPGXFg.jpg?width=108&crop=smart&auto=webp&s=06c76ff5a2736027dde0914b25500988440668f4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LXmQ0QogZLQv1V79Xo5v_iKRRaQlTt5q2KwdjqPGXFg.jpg?width=216&crop=smart&auto=webp&s=8dccf31fea4a9d2b10c5178eea62669416508d3f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LXmQ0QogZLQv1V79Xo5v_iKRRaQlTt5q2KwdjqPGXFg.jpg?width=320&crop=smart&auto=webp&s=8d5edf662301bea52ba97ee508d12a84d760fd01', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LXmQ0QogZLQv1V79Xo5v_iKRRaQlTt5q2KwdjqPGXFg.jpg?width=640&crop=smart&auto=webp&s=ea87ee04e690304f8464b26fdd4aaa62f9e40623', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LXmQ0QogZLQv1V79Xo5v_iKRRaQlTt5q2KwdjqPGXFg.jpg?width=960&crop=smart&auto=webp&s=4ff0821b17fdb3e276731e5a7c3f12ab590739f8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LXmQ0QogZLQv1V79Xo5v_iKRRaQlTt5q2KwdjqPGXFg.jpg?width=1080&crop=smart&auto=webp&s=8c191c7308cfc6e8f2e649db8e4a460af4111883', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LXmQ0QogZLQv1V79Xo5v_iKRRaQlTt5q2KwdjqPGXFg.jpg?auto=webp&s=87e9d03bbc23358dd9bb7eb76e548cd90fff0483', 'width': 1200}, 'variants': {}}]}
1 hour-long easy-to-follow explanation on R1. Not me! Not affiliated in anyway.
3
2025-01-23T12:23:12
https://www.youtube.com/watch?v=XMnxKGVnEUc
Snoo_64233
youtube.com
1970-01-01T00:00:00
0
{}
1i8253g
false
{'oembed': {'author_name': 'Umar Jamil', 'author_url': 'https://www.youtube.com/@umarjamilai', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/XMnxKGVnEUc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Paper: DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/XMnxKGVnEUc/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Paper: DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1i8253g
/r/LocalLLaMA/comments/1i8253g/1_hourlong_easytofollow_explanation_on_r1_not_me/
false
false
https://a.thumbs.redditm…1_JbV8qKWBe8.jpg
3
{'enabled': False, 'images': [{'id': '-BE8_lKDkJBAH-d5X_UE7ASGcrEcyr2UCJD2eAqT0ys', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wehjUQvFqwNBtLc3UkcHYoyRcW8aIspGDviYNWyVj7w.jpg?width=108&crop=smart&auto=webp&s=bb389624f45058372b7585a6375bd4fb191379a6', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/wehjUQvFqwNBtLc3UkcHYoyRcW8aIspGDviYNWyVj7w.jpg?width=216&crop=smart&auto=webp&s=c92d00577b188eadd47495c3cc333da1154e4f59', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/wehjUQvFqwNBtLc3UkcHYoyRcW8aIspGDviYNWyVj7w.jpg?width=320&crop=smart&auto=webp&s=bec88c519da14e3ac586da8e64ac81eabfab1dfc', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/wehjUQvFqwNBtLc3UkcHYoyRcW8aIspGDviYNWyVj7w.jpg?auto=webp&s=6609e19dd0fd6681229c37a119049b191c1cdf23', 'width': 480}, 'variants': {}}]}
Alternatives for LLM GUI
7
Hey, I'm trying to pivot away from OpenAI products and have been using some API inference services, such as DeepInfra. However, there’s still one thing I haven’t found a comfortable alternative for. I use the ChatGPT desktop app a lot, particularly because of its speech-to-text feature for prompting and the text finder in chats. Is there an alternative GUI (with speech-to-text functionality for prompting) that I can configure to use with another API inference service? This is the only thing I need to completely pivot away from OpenAI. Thanks in advance!
2025-01-23T12:24:14
https://www.reddit.com/r/LocalLLaMA/comments/1i825t7/alternatives_for_llm_gui/
icalv1213
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i825t7
false
null
t3_1i825t7
/r/LocalLLaMA/comments/1i825t7/alternatives_for_llm_gui/
false
false
self
7
null
Deepseek R1 review from casual user.
0
Hi, it has been about a week of intense testing of this model, the Llama 70B version in GGUF Q4 on an A6000. The model is good, and it's very interesting to see its internal monologue, but as a user of o1, I can't see how they actually did the reviews. Everywhere I go I see that it's better than o1, blah blah blah. No, it's not, not even close. It's slightly less censored, but the perplexity is nowhere near o1's. It's a very good model that far surpasses other models of its weight, sure, but an o1 killer? On a side note, does anyone know of an uncensored version of the 70B? That would make it even better.
2025-01-23T12:32:46
https://www.reddit.com/r/LocalLLaMA/comments/1i82ba3/deepseek_r1_review_from_casual_user/
sigiel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i82ba3
false
null
t3_1i82ba3
/r/LocalLLaMA/comments/1i82ba3/deepseek_r1_review_from_casual_user/
false
false
self
0
null
Fabric question - How to use custom models?
1
Hello, I’m trying to integrate DeepSeek models with Fabric, but I’m having trouble figuring out how to do so. By default, Fabric seems to support OpenAI, Google, and Anthropic APIs, but I haven’t found a way to configure it for DeepSeek or other custom models. Has anyone successfully used DeepSeek models or other non-native models with Fabric? If so, could you share any guidance or steps on how to set this up? Thank you in advance for your help!
2025-01-23T12:34:38
https://www.reddit.com/r/LocalLLaMA/comments/1i82cfo/fabric_question_how_to_use_custom_models/
MustStayAnonymous_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i82cfo
false
null
t3_1i82cfo
/r/LocalLLaMA/comments/1i82cfo/fabric_question_how_to_use_custom_models/
false
false
self
1
null
It is now possible to run DeepSeek R1 on an iPhone
1
[removed]
2025-01-23T12:39:50
https://v.redd.it/p44xz9bxnqee1
CarlosBaquero
v.redd.it
1970-01-01T00:00:00
0
{}
1i82fr9
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/p44xz9bxnqee1/DASHPlaylist.mpd?a=1740228006%2CMWYzNzk4ZDdhZTdhZWQzNzdhN2FlNTE4YmNmNjgxZWJmYWRjOTEwY2EzNTYxOWRlY2JmZDc1ZDE5NzEzMmRjNQ%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/p44xz9bxnqee1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/p44xz9bxnqee1/HLSPlaylist.m3u8?a=1740228006%2CNDAxMTc1ZGE2MDExNWFjYTIxNGVlZGJkZTE2MjhmYjAzYmQwZTRjYzlhZDVjNDNhZGUzMjg4MjQ0MGJiYjY1Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/p44xz9bxnqee1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 592}}
t3_1i82fr9
/r/LocalLLaMA/comments/1i82fr9/it_is_now_possible_torun_deepseek_r1_on_a_iphone/
false
false
https://external-preview…7b21a81408329fb7
1
{'enabled': False, 'images': [{'id': 'cmdxODZuN3hucWVlMa1xMPXzVlpcWY3S91x4hINfD8RoO692NVdXaHnDY0hf', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/cmdxODZuN3hucWVlMa1xMPXzVlpcWY3S91x4hINfD8RoO692NVdXaHnDY0hf.png?width=108&crop=smart&format=pjpg&auto=webp&s=82dbce4e572a8e09c53a03b23ab3b6b455c8de08', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/cmdxODZuN3hucWVlMa1xMPXzVlpcWY3S91x4hINfD8RoO692NVdXaHnDY0hf.png?width=216&crop=smart&format=pjpg&auto=webp&s=f3d2cce851f787f6b669539c4cf5fc6960b15da0', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/cmdxODZuN3hucWVlMa1xMPXzVlpcWY3S91x4hINfD8RoO692NVdXaHnDY0hf.png?width=320&crop=smart&format=pjpg&auto=webp&s=d3a50b67f7b0913766ac5ddf0ca714e16eff4e54', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/cmdxODZuN3hucWVlMa1xMPXzVlpcWY3S91x4hINfD8RoO692NVdXaHnDY0hf.png?width=640&crop=smart&format=pjpg&auto=webp&s=f363c417c3a0fc062c1487dc0cbc110033c51bbd', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/cmdxODZuN3hucWVlMa1xMPXzVlpcWY3S91x4hINfD8RoO692NVdXaHnDY0hf.png?format=pjpg&auto=webp&s=0fd38c849572c7b10aa2d2add7340ac0b57d226d', 'width': 888}, 'variants': {}}]}
Need help with server configuration for Llama 3.3:70B
1
[removed]
2025-01-23T12:55:30
https://www.reddit.com/r/LocalLLaMA/comments/1i82pq1/need_help_with_server_configuration_for_llama/
Schmiddi995
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i82pq1
false
null
t3_1i82pq1
/r/LocalLLaMA/comments/1i82pq1/need_help_with_server_configuration_for_llama/
false
false
self
1
null
Uncensor DeepSeek-R1 ...
1
[removed]
2025-01-23T12:56:48
https://www.reddit.com/r/LocalLLaMA/comments/1i82qj3/uncensor_deepseekr1/
bitRAKE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i82qj3
false
null
t3_1i82qj3
/r/LocalLLaMA/comments/1i82qj3/uncensor_deepseekr1/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
CVE-2024-50050: Critical Vulnerability in meta-llama/llama-stack | Oligo Security
1
2025-01-23T13:10:18
https://www.oligo.security/blog/cve-2024-50050-critical-vulnerability-in-meta-llama-llama-stack
cov_id19
oligo.security
1970-01-01T00:00:00
0
{}
1i82zin
false
null
t3_1i82zin
/r/LocalLLaMA/comments/1i82zin/cve202450050_critical_vulnerability_in/
false
false
https://a.thumbs.redditm…7bX9MEpA8138.jpg
1
{'enabled': False, 'images': [{'id': 'MFbXRe3mlM5p9xGDy9NwuTlhISxvG4yTlWfMuFiWEJ8', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/nxSLKWeZNHARfaRly39ZbRLi5WCUCOvgz3Ixs2eLQQg.jpg?width=108&crop=smart&auto=webp&s=68e62135b91d7c2749a6944a8536ed7d34067a51', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/nxSLKWeZNHARfaRly39ZbRLi5WCUCOvgz3Ixs2eLQQg.jpg?width=216&crop=smart&auto=webp&s=bb06eddcbff3885e4c8ce5e76956a116da64c2cc', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/nxSLKWeZNHARfaRly39ZbRLi5WCUCOvgz3Ixs2eLQQg.jpg?width=320&crop=smart&auto=webp&s=97e6312fae9cb1bfe83a06a3d848192bd942430c', 'width': 320}, {'height': 353, 'url': 'https://external-preview.redd.it/nxSLKWeZNHARfaRly39ZbRLi5WCUCOvgz3Ixs2eLQQg.jpg?width=640&crop=smart&auto=webp&s=28be38396151c22e10b203150e196f73a3ab7c91', 'width': 640}, {'height': 529, 'url': 'https://external-preview.redd.it/nxSLKWeZNHARfaRly39ZbRLi5WCUCOvgz3Ixs2eLQQg.jpg?width=960&crop=smart&auto=webp&s=62cbe4ba14cbbfd8435a869754d1c3415c544e34', 'width': 960}, {'height': 595, 'url': 'https://external-preview.redd.it/nxSLKWeZNHARfaRly39ZbRLi5WCUCOvgz3Ixs2eLQQg.jpg?width=1080&crop=smart&auto=webp&s=c56ef66f4c8a151360c63e66414fbdffb2c58d79', 'width': 1080}], 'source': {'height': 662, 'url': 'https://external-preview.redd.it/nxSLKWeZNHARfaRly39ZbRLi5WCUCOvgz3Ixs2eLQQg.jpg?auto=webp&s=e3f21b7038466514e2d7bbc9a8756ca78a381614', 'width': 1200}, 'variants': {}}]}
ChatGPT was down today. So tried a Local LLaMA. Thanks for the advice here!
39
LLM noob here. I took this as an opportunity to follow the excellent advice that somebody had posted [here](https://www.reddit.com/r/LocalLLaMA/comments/16y95hk/a_starter_guide_for_playing_with_your_own_local_ai/) and [here](https://www.reddit.com/r/LocalLLaMA/comments/1c9iawc/is_there_any_idiot_guide_to_running_local_llama/). And whilst my computer is a pile of trash, I was able to run 'DeepSeek-R1-Distill-Llama-8B-Q5\_K\_M.gguf' by merely downloading koboldcpp.exe. What a time to be alive!
2025-01-23T13:10:23
https://www.reddit.com/r/LocalLLaMA/comments/1i82zkv/chatgpt_was_down_today_so_tried_a_local_llama/
BuyHighSellL0wer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i82zkv
false
null
t3_1i82zkv
/r/LocalLLaMA/comments/1i82zkv/chatgpt_was_down_today_so_tried_a_local_llama/
false
false
self
39
null
Doesn't all my code get sent to Deepseek if I use their API?
1
[removed]
2025-01-23T13:12:55
https://www.reddit.com/r/LocalLLaMA/comments/1i8318u/doesnt_all_my_code_get_sent_to_deepseek_if_i_use/
Theboyscampus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8318u
false
null
t3_1i8318u
/r/LocalLLaMA/comments/1i8318u/doesnt_all_my_code_get_sent_to_deepseek_if_i_use/
false
false
self
1
null
Train LLM to extract key-value pairs
1
I have a template text and a filled text. The template text contains some placeholders, e.g. <name-1>, <name-2>, and in the filled text there are the actual values, e.g. Raven, John. Each line of the two texts is concatenated side by side, divided by a delimiter, in this format: line\_1\_template | line\_1\_filled line\_2\_template | line\_2\_filled ... I want the LLM to look at this string and give me key-value pairs, i.e. {<name-1>: Raven, <name-2>: John}. The key-value pairs could be anything: names, dates, addresses, pincodes, etc. I have trained a qwen2.5\_7B for one epoch, but it has mostly memorized the values and regurgitates the same outputs if I slightly change the key-value pairs. Please help me out!
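For clarity, here is roughly how I'm building the supervised examples (a simplified sketch; the real data has many more lines and placeholder types):

```python
import json

def build_example(template_lines, filled_lines, placeholders):
    """One training example: prompt = aligned template|filled lines, completion = JSON mapping."""
    prompt = "\n".join(f"{t} | {f}" for t, f in zip(template_lines, filled_lines))
    completion = json.dumps(placeholders, ensure_ascii=False)
    return {"prompt": prompt, "completion": completion}

example = build_example(
    template_lines=["Dear <name-1>,", "Your order ships on <date-1>."],
    filled_lines=["Dear Raven,", "Your order ships on 12 March."],
    placeholders={"<name-1>": "Raven", "<date-1>": "12 March"},
)
print(json.dumps(example, indent=2, ensure_ascii=False))
```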
2025-01-23T13:13:21
https://www.reddit.com/r/LocalLLaMA/comments/1i831jl/train_llm_to_exctract_keyvalue_pairs/
GHOST--1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i831jl
false
null
t3_1i831jl
/r/LocalLLaMA/comments/1i831jl/train_llm_to_exctract_keyvalue_pairs/
false
false
self
1
null
Here are the 3 key takeaways from reading a report analyzing 2 trillion+ production tokens
4
https://preview.redd.it/…90bbd6f8185314
2025-01-23T13:23:09
https://www.reddit.com/r/LocalLLaMA/comments/1i838gd/here_are_the_3_key_take_aways_from_reading_a/
VisibleLawfulness246
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i838gd
false
null
t3_1i838gd
/r/LocalLLaMA/comments/1i838gd/here_are_the_3_key_take_aways_from_reading_a/
false
false
https://b.thumbs.redditm…ntEjHraOl0RM.jpg
4
null
Best llm for function calling at this time?
1
[removed]
2025-01-23T13:26:54
https://www.reddit.com/r/LocalLLaMA/comments/1i83b3k/best_llm_for_function_calling_at_this_time/
urekmazino_0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i83b3k
false
null
t3_1i83b3k
/r/LocalLLaMA/comments/1i83b3k/best_llm_for_function_calling_at_this_time/
false
false
self
1
null
What can I improve?
1
I installed DeepSeek R1 8B through Ollama on my laptop with a 1650 Ti Mobile (4GB). It's very stupid and takes very long to think. I understand that my laptop is weak and this LLM is hard to run. What can I do to achieve better results?
2025-01-23T13:35:56
https://www.reddit.com/r/LocalLLaMA/comments/1i83hgi/what_can_i_improve/
Anyusername7294
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i83hgi
false
null
t3_1i83hgi
/r/LocalLLaMA/comments/1i83hgi/what_can_i_improve/
false
false
self
1
null
System prompt vs. User prompt vs. Tags
2
Hi. I use LLMs for automated scheduled tasks, mostly text processing. My input messages to the LLM therefore consist mostly of the "instruction/explanation" (Summarise/Tag/Classify the following text), the "input" (the actual text to be processed), and sometimes the examples (here are some example texts and their tags). So far, I have written all two or three of these parts in a big user prompt roughly in markdown format, with e.g. block quotes (""") to designate the input text, bullets for examples etc. The system prompt is mostly very generic "You are a text analysis expert ..." or simply empty. Now, I have seen some other opinions (e.g. https://news.ycombinator.com/item?id=42748427), where you should actually put most/all of the "logic" in the system prompt, and have the user prompt contain only your "transformation input" for such a use case. Does this lead to better results? Am I doing it completely wrong? And if I follow that model, where are the "examples" placed? Any experience, recommendation, or even better, real-world examples of NLP tasks are more than appreciated. Assume the likes of an 8B/9B model, or GPT-4o-mini, if that matters. Thanks a lot
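To make the alternative layout concrete, here is the kind of structure I'm asking about (a sketch; it should work the same against any OpenAI-compatible local server):

```python
from openai import OpenAI

client = OpenAI()  # or OpenAI(base_url=...) for a local OpenAI-compatible server

# All task logic and examples live in the system prompt...
system_prompt = (
    "You are a text classifier. Given a text, return exactly one tag from: news, opinion, spam.\n"
    "Examples:\n"
    "Text: 'Stock markets fell 2% today.' -> news\n"
    "Text: 'Buy cheap watches now!!!' -> spam"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        # ...and the user prompt carries only the transformation input.
        {"role": "user", "content": "I think the new policy is a mistake."},
    ],
)
print(resp.choices[0].message.content)
```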
2025-01-23T13:39:42
https://www.reddit.com/r/LocalLLaMA/comments/1i83k20/sytem_prompt_vs_user_prompt_vs_tags/
ihatebeinganonymous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i83k20
false
null
t3_1i83k20
/r/LocalLLaMA/comments/1i83k20/sytem_prompt_vs_user_prompt_vs_tags/
false
false
self
2
null
Clearing VRAM in llama_cpp
3
Hi all - I really need some help from this group & couldn’t find a suitable answer online. I have a python script that essentially feeds LLM1 output as input to LLM2. The issue is - individually both models (LLM1 and LLM2) fit into my 3090 VRAM. However not both. I have refactored the code to exhaust all the tasks for LLM1 first, then handover to LLM2, but how do I clear the VRAM during execution, to then load LLM2 fully onto VRAM as part of the script? Any help would be appreciated!
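Here is the pattern I've been trying so far (a sketch; model paths are placeholders): drop every reference to the first model and force garbage collection before loading the second. It seems to release the memory when the first Llama object is collected, but I'm not sure it's the intended way:

```python
import gc
from llama_cpp import Llama

prompts = ["first task input", "second task input"]  # placeholder inputs

llm1 = Llama(model_path="./model1.gguf", n_gpu_layers=-1)
intermediate = [llm1(p, max_tokens=256)["choices"][0]["text"] for p in prompts]

# Drop every reference to the first model so its VRAM can be freed,
# then force a collection before loading the second model.
del llm1
gc.collect()

llm2 = Llama(model_path="./model2.gguf", n_gpu_layers=-1)
results = [llm2(text, max_tokens=256)["choices"][0]["text"] for text in intermediate]
print(results)
```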
2025-01-23T13:42:04
https://www.reddit.com/r/LocalLLaMA/comments/1i83lpu/clearing_vram_in_llama_cpp/
robrjxx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i83lpu
false
null
t3_1i83lpu
/r/LocalLLaMA/comments/1i83lpu/clearing_vram_in_llama_cpp/
false
false
self
3
null
Fine-tune use cases
1
[removed]
2025-01-23T13:43:19
https://www.reddit.com/r/LocalLLaMA/comments/1i83mkv/finetune_use_cases/
QueRoub
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i83mkv
false
null
t3_1i83mkv
/r/LocalLLaMA/comments/1i83mkv/finetune_use_cases/
false
false
self
1
null
FuzzyAI - jailbreaking your local models
1
[removed]
2025-01-23T13:46:00
https://www.reddit.com/r/LocalLLaMA/comments/1i83ojs/fuzzyai_jailbreaking_your_local_models/
go_out_drink666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i83ojs
false
null
t3_1i83ojs
/r/LocalLLaMA/comments/1i83ojs/fuzzyai_jailbreaking_your_local_models/
false
false
self
1
null
Hard integral problem
1
[removed]
2025-01-23T13:49:07
https://www.reddit.com/r/LocalLLaMA/comments/1i83qqv/hard_integral_problem/
PsychologicalKnee562
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i83qqv
false
null
t3_1i83qqv
/r/LocalLLaMA/comments/1i83qqv/hard_integral_problem/
false
false
self
1
{'enabled': False, 'images': [{'id': '24VKdL_WzUG7N1Ti5GjlruUpfCsQfdr8eN05WjGS4kY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?width=108&crop=smart&auto=webp&s=fcc5696614a4a7fbcb0e8d41870c6f131f6ba9f5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?width=216&crop=smart&auto=webp&s=15e826d21ecbe21556f362063b18678e271daab4', 'width': 216}], 'source': {'height': 316, 'url': 'https://external-preview.redd.it/Nw5gt5QIxILVHXdGb0NY0j88-a6AGGcofvMtzOYMMPc.jpg?auto=webp&s=65a1fa0d343cd544754bf78a93c832f7a9bb9e9b', 'width': 316}, 'variants': {}}]}
Does anyone know how much the training costs for DeepSeek R1 were?
1
[removed]
2025-01-23T13:54:25
https://www.reddit.com/r/LocalLLaMA/comments/1i83un7/does_anyone_know_how_much_the_training_costs_for/
ArturTMvelli
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i83un7
false
null
t3_1i83un7
/r/LocalLLaMA/comments/1i83un7/does_anyone_know_how_much_the_training_costs_for/
false
false
self
1
null
Which is the Best llm to use on BOLT.DIY with 12-16gb vram
1
[removed]
2025-01-23T13:59:51
https://www.reddit.com/r/LocalLLaMA/comments/1i83yme/which_is_the_best_llm_to_use_on_boltdiy_with/
Siyam_fahad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i83yme
false
null
t3_1i83yme
/r/LocalLLaMA/comments/1i83yme/which_is_the_best_llm_to_use_on_boltdiy_with/
false
false
self
1
{'enabled': False, 'images': [{'id': '-A-qg64uhJOkcGYtSzO7h56OdUOM5dIhhSDxszOBFFM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aBK3V-LX4psIjpj4kexQlN3yGowxtUF-_nokeFq-HOo.jpg?width=108&crop=smart&auto=webp&s=b4bf6d0bace17cab4a8f8877b2cd7ad6d0a60532', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aBK3V-LX4psIjpj4kexQlN3yGowxtUF-_nokeFq-HOo.jpg?width=216&crop=smart&auto=webp&s=f7b9ee863f01a2b63a9cf0bbae6e98eb2a8ed992', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aBK3V-LX4psIjpj4kexQlN3yGowxtUF-_nokeFq-HOo.jpg?width=320&crop=smart&auto=webp&s=d5cf76d155326d3832d59ab9d29c6a445fb2d73d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aBK3V-LX4psIjpj4kexQlN3yGowxtUF-_nokeFq-HOo.jpg?width=640&crop=smart&auto=webp&s=5b52bb10dd1652b7ca068d9aa3a9f2b666f4c086', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aBK3V-LX4psIjpj4kexQlN3yGowxtUF-_nokeFq-HOo.jpg?width=960&crop=smart&auto=webp&s=162c78e8b3e8a70977fbc28e0529fe1e410930dc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aBK3V-LX4psIjpj4kexQlN3yGowxtUF-_nokeFq-HOo.jpg?width=1080&crop=smart&auto=webp&s=6c5befdc733695e0077cca750d63a2ec68c6e253', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aBK3V-LX4psIjpj4kexQlN3yGowxtUF-_nokeFq-HOo.jpg?auto=webp&s=89acf890ca20f59bd090713abb40e4c80da785b9', 'width': 1200}, 'variants': {}}]}
Combine on-policy synthetic data with Rule-Based Reward Models and DPO
1
2025-01-23T14:02:17
https://www.philschmid.de/rl-with-llms-in-2025-dpo
Ok_Two6167
philschmid.de
1970-01-01T00:00:00
0
{}
1i840ra
false
null
t3_1i840ra
/r/LocalLLaMA/comments/1i840ra/combine_onpolicy_synthetic_data_with_rulebased/
false
false
https://b.thumbs.redditm…dzoUDUX_YpXE.jpg
1
{'enabled': False, 'images': [{'id': 'jJzwJoZvX27zVBQsQ02HPYSdMkuDurNrMoMBp0EEL_I', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/EHtH2tKNo1BQeIb5SeuqIA4QvOqLqyd7Jvo3hTDnEA8.jpg?width=108&crop=smart&auto=webp&s=35169f45c54c68b7e99883c7e9cd6a35af4002d7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/EHtH2tKNo1BQeIb5SeuqIA4QvOqLqyd7Jvo3hTDnEA8.jpg?width=216&crop=smart&auto=webp&s=231acd4157849079eb7ecb9ee446cac79fce7ade', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/EHtH2tKNo1BQeIb5SeuqIA4QvOqLqyd7Jvo3hTDnEA8.jpg?width=320&crop=smart&auto=webp&s=11469726fed0d5c281ad8758f9b8cb8bac96ac16', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/EHtH2tKNo1BQeIb5SeuqIA4QvOqLqyd7Jvo3hTDnEA8.jpg?width=640&crop=smart&auto=webp&s=1269a93cd61b0d4cd4d41fdc01b431363cdf443e', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/EHtH2tKNo1BQeIb5SeuqIA4QvOqLqyd7Jvo3hTDnEA8.jpg?width=960&crop=smart&auto=webp&s=99e240ee7d9f9b6ca1a38202d33e1d6025889ee5', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/EHtH2tKNo1BQeIb5SeuqIA4QvOqLqyd7Jvo3hTDnEA8.jpg?auto=webp&s=8d311d7d0be12e8ba3dd60799f7d7a9ad3bbc06d', 'width': 961}, 'variants': {}}]}
What's so special about R1?
0
I should know better than to believe the hype, but I downloaded the model anyway. There's something I always like asking an LLM because it's fast and easy to test: asking it to write a haiku. I've never seen anything as nonsensical as R1. First it went through some insane mental gymnastics thinking about writing a haiku, creating rules in its thoughts that had nothing to do with haikus, and eventually wrote a haiku that did not follow the haiku pattern at all. I told it that it failed and to try again, and the response was even worse. For some reason it started to "think" that it had to generate a 10x10 grid with numbers, so it started thinking about that and entered a never-ending train of thought. It didn't even give me an answer and I had to stop it after two minutes of thinking. I did more experimentation but still nothing impressive. The "thinking" just gives the impression that it's doing something, but it only seems to make things worse. I tested a 7B model. Are bigger models any better compared to other models of the same size? I've used other 7B models and I've honestly rarely seen anything this bad.
2025-01-23T14:05:16
https://www.reddit.com/r/LocalLLaMA/comments/1i8431j/whats_so_special_about_r1/
Silvestron
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8431j
false
null
t3_1i8431j
/r/LocalLLaMA/comments/1i8431j/whats_so_special_about_r1/
false
false
self
0
null
Synthetic Data + Rule-Based Reward Models + DPO + Q-Lora < 24GB
1
2025-01-23T14:06:51
https://www.philschmid.de/rl-with-llms-in-2025-dpo
Ok_Two6167
philschmid.de
1970-01-01T00:00:00
0
{}
1i84487
false
null
t3_1i84487
/r/LocalLLaMA/comments/1i84487/synthetic_data_rulebased_reward_models_dpo_qlora/
false
false
https://b.thumbs.redditm…dzoUDUX_YpXE.jpg
1
{'enabled': False, 'images': [{'id': 'jJzwJoZvX27zVBQsQ02HPYSdMkuDurNrMoMBp0EEL_I', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/EHtH2tKNo1BQeIb5SeuqIA4QvOqLqyd7Jvo3hTDnEA8.jpg?width=108&crop=smart&auto=webp&s=35169f45c54c68b7e99883c7e9cd6a35af4002d7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/EHtH2tKNo1BQeIb5SeuqIA4QvOqLqyd7Jvo3hTDnEA8.jpg?width=216&crop=smart&auto=webp&s=231acd4157849079eb7ecb9ee446cac79fce7ade', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/EHtH2tKNo1BQeIb5SeuqIA4QvOqLqyd7Jvo3hTDnEA8.jpg?width=320&crop=smart&auto=webp&s=11469726fed0d5c281ad8758f9b8cb8bac96ac16', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/EHtH2tKNo1BQeIb5SeuqIA4QvOqLqyd7Jvo3hTDnEA8.jpg?width=640&crop=smart&auto=webp&s=1269a93cd61b0d4cd4d41fdc01b431363cdf443e', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/EHtH2tKNo1BQeIb5SeuqIA4QvOqLqyd7Jvo3hTDnEA8.jpg?width=960&crop=smart&auto=webp&s=99e240ee7d9f9b6ca1a38202d33e1d6025889ee5', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/EHtH2tKNo1BQeIb5SeuqIA4QvOqLqyd7Jvo3hTDnEA8.jpg?auto=webp&s=8d311d7d0be12e8ba3dd60799f7d7a9ad3bbc06d', 'width': 961}, 'variants': {}}]}
Can i have an update of what is the best uncensored /nsfw chat ai rn ?
2
[removed]
2025-01-23T14:09:50
https://www.reddit.com/r/LocalLLaMA/comments/1i846g7/can_i_have_an_update_of_what_is_the_best/
Frenchygirll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i846g7
false
null
t3_1i846g7
/r/LocalLLaMA/comments/1i846g7/can_i_have_an_update_of_what_is_the_best/
false
false
nsfw
2
null
Hello, I am new to all of this. Installed dolphin-llama3.
1
It says it's an unrestricted model. I wanted to go ham with ridiculous queries, but it shot me down half the time. Is there any way to make it completely unrestricted, with no safety boundaries? Thank you.
2025-01-23T14:39:21
https://www.reddit.com/r/LocalLLaMA/comments/1i84sqa/hello_i_am_new_to_all_of_this_installed/
Direct-Historian97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i84sqa
false
null
t3_1i84sqa
/r/LocalLLaMA/comments/1i84sqa/hello_i_am_new_to_all_of_this_installed/
false
false
self
1
null
You can try DeepSeek R1 Distill model on iOS now
8
2025-01-23T14:40:18
https://apps.apple.com/us/app/on-device-ai/id6497060890
animax00
apps.apple.com
1970-01-01T00:00:00
0
{}
1i84tgd
false
null
t3_1i84tgd
/r/LocalLLaMA/comments/1i84tgd/you_can_try_deepseek_r1_distill_model_on_ios_now/
false
false
https://b.thumbs.redditm…cWqptLQf6BsA.jpg
8
{'enabled': False, 'images': [{'id': 'aARlJ_4T1y6rlUm7vIx4sPQbu9s2xbf5gf7ZjUleO9c', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/t6imraKIWFZ3I3_65g9BZZu4sRyiNjmuC656_0OdBk0.jpg?width=108&crop=smart&auto=webp&s=a90c53cbcde59b9acf9be2be56c4d4ff56e0db20', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/t6imraKIWFZ3I3_65g9BZZu4sRyiNjmuC656_0OdBk0.jpg?width=216&crop=smart&auto=webp&s=603c690dd980d6e3197a9efc6b9d221e0cde8e51', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/t6imraKIWFZ3I3_65g9BZZu4sRyiNjmuC656_0OdBk0.jpg?width=320&crop=smart&auto=webp&s=b144eb08c7aedc56fd7dc2896ca38d7c3bea7050', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/t6imraKIWFZ3I3_65g9BZZu4sRyiNjmuC656_0OdBk0.jpg?width=640&crop=smart&auto=webp&s=1c350209222006df4549b4b793c2342e02301c53', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/t6imraKIWFZ3I3_65g9BZZu4sRyiNjmuC656_0OdBk0.jpg?width=960&crop=smart&auto=webp&s=e421baf8c84a96aa5fe870aa183b427b5eeb87c3', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/t6imraKIWFZ3I3_65g9BZZu4sRyiNjmuC656_0OdBk0.jpg?width=1080&crop=smart&auto=webp&s=47bcb26eb437d3af47ee62aab4f2343474536f38', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/t6imraKIWFZ3I3_65g9BZZu4sRyiNjmuC656_0OdBk0.jpg?auto=webp&s=4b3353fcfcf74f5ced751abc034a568469396930', 'width': 1200}, 'variants': {}}]}
Is this the key methodology that Anthropic used to make Claude so good?
24
[This article](https://www.anthropic.com/research/claude-character) is a bit old (June 8, 2024), but it seems to have the biggest contribution to what makes Claude Sonnet 3.5 so good, and open models could benefit from similar training. > Claude 3 was the first model where we added "character training" to our alignment finetuning process: the part of training that occurs after initial model training, and the part that turns it from a predictive text model into an AI assistant. The goal of character training is to make Claude begin to have more nuanced, richer traits like curiosity, open-mindedness, and thoughtfulness. I haven't heard people talk about this a lot and I feel like it should be talked about a lot more. Especially the traits that they shared in the last sentence ("more nuanced, richer traits like curiosity, open-mindedness, and thoughtfulness") really make the model, particularly if you are looking for something that you can have a great conversation with.
2025-01-23T14:41:33
https://www.reddit.com/r/LocalLLaMA/comments/1i84uew/is_this_the_key_methodology_that_anthropic_used/
takuonline
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i84uew
false
null
t3_1i84uew
/r/LocalLLaMA/comments/1i84uew/is_this_the_key_methodology_that_anthropic_used/
false
false
self
24
{'enabled': False, 'images': [{'id': 'rCtL3f6m1V5W9lhVcx0JWa7bjphBuyhjjVP9xK-MrPs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/II3Q-OKv6LMzLP0_YgK6W5IHmC8ZarHFoW09w8JHOs8.jpg?width=108&crop=smart&auto=webp&s=aaee85518fff39b41ba9681784f4d721477c520b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/II3Q-OKv6LMzLP0_YgK6W5IHmC8ZarHFoW09w8JHOs8.jpg?width=216&crop=smart&auto=webp&s=2ecb4ab6ab1398b6b1b73fd29f7a5d783e4ab5e5', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/II3Q-OKv6LMzLP0_YgK6W5IHmC8ZarHFoW09w8JHOs8.jpg?width=320&crop=smart&auto=webp&s=76ada09efbabbdf7b15bc4dac409d81a3168b6aa', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/II3Q-OKv6LMzLP0_YgK6W5IHmC8ZarHFoW09w8JHOs8.jpg?width=640&crop=smart&auto=webp&s=2bf446cee68b988ad9ec5882c11f5a5bb6d8a10e', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/II3Q-OKv6LMzLP0_YgK6W5IHmC8ZarHFoW09w8JHOs8.jpg?width=960&crop=smart&auto=webp&s=bca17218b3f3da9ee406cbd6d65571412552408b', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/II3Q-OKv6LMzLP0_YgK6W5IHmC8ZarHFoW09w8JHOs8.jpg?width=1080&crop=smart&auto=webp&s=2b80846cf22db7eeba9a38970b58c740aa0296f4', 'width': 1080}], 'source': {'height': 1584, 'url': 'https://external-preview.redd.it/II3Q-OKv6LMzLP0_YgK6W5IHmC8ZarHFoW09w8JHOs8.jpg?auto=webp&s=3252fc3cce6f82d4c50065453675a06471b8d00b', 'width': 3024}, 'variants': {}}]}
Level1Techs 5090 review
0
2025-01-23T14:45:25
https://youtu.be/nryZwnVYpns?feature=shared
Thrumpwart
youtu.be
1970-01-01T00:00:00
0
{}
1i84x8s
false
{'oembed': {'author_name': 'Level1Techs', 'author_url': 'https://www.youtube.com/@Level1Techs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/nryZwnVYpns?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="NVIDIA is Taking a Risk with this Generation: The Blackwell GeForce RTX 5090 Review and Benchmarks"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/nryZwnVYpns/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'NVIDIA is Taking a Risk with this Generation: The Blackwell GeForce RTX 5090 Review and Benchmarks', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1i84x8s
/r/LocalLLaMA/comments/1i84x8s/level1techs_5090_review/
false
false
https://b.thumbs.redditm…8DCLQQWSDMQo.jpg
0
{'enabled': False, 'images': [{'id': 'yhAZ3r9y_5gWy-ourMzXPxZcI0WBgr9NWsOonD1kePY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/nUsrT9NtVQP4WD3jjJ2IvbWF46xPfpyRFMwoGk0IOY8.jpg?width=108&crop=smart&auto=webp&s=75c1cd2c4347fff76c54a7e4e30e2efa4cc167f4', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/nUsrT9NtVQP4WD3jjJ2IvbWF46xPfpyRFMwoGk0IOY8.jpg?width=216&crop=smart&auto=webp&s=ccd39e2f6584ff6b3ea2d40481896b28128187a7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/nUsrT9NtVQP4WD3jjJ2IvbWF46xPfpyRFMwoGk0IOY8.jpg?width=320&crop=smart&auto=webp&s=8c622c8b88a34ac8bc1f48530f839c49368ceb39', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/nUsrT9NtVQP4WD3jjJ2IvbWF46xPfpyRFMwoGk0IOY8.jpg?auto=webp&s=cf8e2787270c92bdc437c75914077d6f1f5ce51d', 'width': 480}, 'variants': {}}]}
Local LLM Game Jam! Prototype fun games over a weekend with us
12
2025-01-23T14:49:00
https://i.redd.it/jome85rqaree1.png
No_Abbreviations_532
i.redd.it
1970-01-01T00:00:00
0
{}
1i84zzy
false
null
t3_1i84zzy
/r/LocalLLaMA/comments/1i84zzy/local_llm_game_jam_prototype_fun_games_over_a/
false
false
https://b.thumbs.redditm…qZHb3zmKuqQg.jpg
12
{'enabled': True, 'images': [{'id': 'tjW-BSM56JM4PBOEvBfz9VHgLHIH53ya5KPH34JC6_Y', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/jome85rqaree1.png?width=108&crop=smart&auto=webp&s=9ec4a62e8261b6e06911ec3ebd69b21cc083f042', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/jome85rqaree1.png?width=216&crop=smart&auto=webp&s=7c2f8e58a2158c5eca23f9a8a1d08462524b9927', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/jome85rqaree1.png?width=320&crop=smart&auto=webp&s=4704bdbd6ce64c11037de65bb09a51b4965b1e29', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/jome85rqaree1.png?width=640&crop=smart&auto=webp&s=63e08a645362ea7e218fb89922b7ac51b7d4f390', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/jome85rqaree1.png?width=960&crop=smart&auto=webp&s=db0e1821ba0d4244f94ed6b4440740e4186d2ad5', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/jome85rqaree1.png?width=1080&crop=smart&auto=webp&s=a7fcda7d88cafeac50d4a935ec992bf08fdebc96', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/jome85rqaree1.png?auto=webp&s=877366796d11fb384be8492ae111e7d874da4c5c', 'width': 1080}, 'variants': {}}]}
Extracting insights from PDFs using Ollama
1
[removed]
2025-01-23T14:50:38
https://www.reddit.com/r/LocalLLaMA/comments/1i8517g/extracting_insights_from_pdfs_using_ollama/
KiwiNarrow4757
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i8517g
false
null
t3_1i8517g
/r/LocalLLaMA/comments/1i8517g/extracting_insights_from_pdfs_using_ollama/
false
false
self
1
null
Extracting insights from PDFs using Ollama
1
[removed]
2025-01-23T14:52:02
https://www.reddit.com/r/LocalLLaMA/comments/1i852bl/extracting_insights_from_pdfs_using_ollama/
KiwiNarrow4757
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i852bl
false
null
t3_1i852bl
/r/LocalLLaMA/comments/1i852bl/extracting_insights_from_pdfs_using_ollama/
false
false
self
1
null
Looking for an Open-Source Blinkist-Style Project for Chapter-Wise Summaries
3
I’m searching for an open-source solution to create chapter-wise summaries from a large corpus of PDFs. Some of the content contains confidential information, so I need a tool that can handle this locally without exposing data to external API providers. Key requirements: * Chapter detection and segmentation (this seems tricky—any existing implementations?). * Local models for summarization (are there specific models fine-tuned for this use case?). Does anything like this already exist? Open to suggestions or guidance!
2025-01-23T14:56:45
https://www.reddit.com/r/LocalLLaMA/comments/1i855zg/looking_for_an_opensource_blinkiststyle_project/
Own_Editor8742
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i855zg
false
null
t3_1i855zg
/r/LocalLLaMA/comments/1i855zg/looking_for_an_opensource_blinkiststyle_project/
false
false
self
3
null
Aren't current approaches to context (short term memory) surprisingly naive? Why not use a level-of-detail inspired hierarchical approach? (idea / discussion)
7
**Preface:** To preface all of this, I have a computer science background (computer graphics, low-level languages and software architecture for the most part) and have a rough understanding of how current LLM architectures work. Over the last couple of days, I have been quite bothered by how current approaches to short-term memory and context operate. To me, it feels like short-term memory should be a main concern of LLMs and be integrated into the architecture. I have been thinking about this quite a bit and I feel I may have an interesting approach to share. The whole thing is inspired by some of my past work on rendering complex 3D scenes with a fixed triangle budget. The idea was to start with a coarse representation of the scene and increase the detail of meshes within the scene depending on an estimate of how much swapping the mesh would improve visual quality. Sometimes, swapping the mesh would even lead to worse quality, due to overdraw, z-fighting or aliasing issues. **How does any of this relate to context, you might ask?** **The idea is relatively straightforward:** Currently, LLMs process a context, which consists of a sequence of tokens, which themselves are represented by vectors. This can be compared to always drawing a ground-truth mesh in computer graphics, and it comes with about the same downsides: you only have a limited budget in terms of memory and compute, so you have a hard limit when it comes to context size. You also get unwanted effects, such as the LLM starting to repeat certain phrases or having to deal with "noise" from parts of the context that are not relevant to the current user input. This can again be compared to overdraw, z-fighting and especially aliasing issues. What you actually want is to represent the entire conversation history at different levels of detail (compression) and construct a context for the LLM to process which covers the entire conversation history, but leaves less relevant parts at a lower level of detail while preserving full detail for the relevant parts. **How could this be implemented?** We start our level-of-detail approach by chunking the input tokens into a fixed size (say, chunks of 16 tokens). Level-of-detail 0 represents our ground truth, which is the tokens themselves - so we have 16 vectors, which can be obtained directly from the token embedding mappings. Level-of-detail 1 can be obtained by compressing two adjacent level-of-detail 0 chunks of 16 token embeddings into a new chunk of 16 vectors representing the original 32 token embeddings. To obtain these vectors, surrounding chunks can be taken into account via sparse/diagonal/linear attention employed by a machine learning model for embedding compression. Similarly to the construction of level-of-detail 1, further level-of-detail representations can be obtained until all chunks combined can (comfortably) fit into the context size of the LLM (let's say 8k embeddings/vectors). For the sake of the example, let's say that LOD 5 is sufficient for the input being processed. Now we might have an LOD representation that fits into the context size comfortably, but it's all far too coarse to be usable for retrieving useful information. What we must do now is decide for which chunks we want to swap the current representation (16 vectors) with two chunks of the next lower LOD (2 times 16 vectors, 32 total).
To do so, we build a queue sorted by the relevance of the information encoded in each chunk with respect to the current prompt. For this, another machine learning model is trained and employed. In addition, a recency bias may be applied (new and relevant information is preferable to old and relevant information). Iteratively, we take the chunk with the highest estimated relevance with respect to the prompt and replace it with its two lower-LOD chunks, which we evaluate for relevance and add to the queue. The process stops once a maximum number of entries is in the queue. In our example, this would be 8k context / 16 vectors per chunk = 512 entries, minus the space reserved for the output of the LLM. The chunks are then fed into a "classical" LLM, replacing the token embeddings, and are processed with the usual attention mechanism. **Typical use case and possible performance:** For your usual workflow, you have the following: - Create LODs for all newly generated tokens / the input prompt. This is linear in the number of tokens processed and can be done while the LLM outputs new tokens. It feels to me like this should be next to free aside from memory requirements. - When receiving a new prompt, recalculate the chunks for the new context based on relevance to the prompt. This is linear with respect to the context size of the LLM. - Feed the LLM the new context and have it generate output tokens. This is quadratic with respect to the context size of the LLM (in time and space). **Conclusion/Discussion:** As far as I understand, this method comes with the (imo quite big) advantage of providing prompt-aware context construction, similar to RAG, without including irrelevant details/"noise" that might degrade the LLM output, while preserving knowledge of the entire past conversation. Other context-compression ideas I am aware of create a short-term memory store that doesn't take the user prompt into account and might forget relevant details that the user is interested in / which are relevant to the query. One significant downside I see is that, because the context is re-assembled for every prompt, it's not possible to cache the context vector of the LLM and the whole context needs to be re-processed. In terms of training, I'm not quite sure - the token embeddings already encode meaning, so it might be possible to just take an LLM and use it as a base. It would likely be good to jointly train the LLM, the chunk-compression model and the relevance-estimation model. This way it might be possible for the chunk-compression model to encode extra information, such as the degree of compression, in a way the LLM understands and may use to further improve performance. What do you guys think? Is this maybe a viable approach? What would you change/improve? Do you see any technical reasons why this can't be done? Is my understanding wrong? Feel free to correct me!
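A minimal sketch of how the chunking and relevance-driven refinement described in this post could look, assuming elementwise mean-pooling as a stand-in for the learned chunk-compression model and cosine similarity as a stand-in for the learned relevance estimator; all names and the CHUNK/MAX_CHUNKS parameters are illustrative, not part of the original post.

```python
import numpy as np

CHUNK = 16        # vectors per chunk at every LOD (illustrative)
MAX_CHUNKS = 512  # context budget, e.g. 8k vectors / 16 per chunk (illustrative)

class Node:
    def __init__(self, vectors, start, children=()):
        self.vectors = vectors    # (<=CHUNK, dim) representation of this span
        self.start = start        # token offset, used to keep document order
        self.children = children  # the two finer-LOD nodes, or () at LOD 0

def build_tree(token_emb):
    """LOD 0 leaves are raw token embeddings in chunks of CHUNK tokens; each
    parent merges two adjacent children (mean-pooling stands in for the
    learned compression model the post proposes)."""
    level = [Node(token_emb[i:i + CHUNK], i)
             for i in range(0, len(token_emb), CHUNK)]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            a, b = level[i], level[i + 1]
            n = min(len(a.vectors), len(b.vectors))
            nxt.append(Node((a.vectors[:n] + b.vectors[:n]) / 2.0, a.start, (a, b)))
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

def relevance(node, prompt_vec):
    """Stand-in relevance estimate: cosine similarity of mean chunk vector vs prompt."""
    c = node.vectors.mean(axis=0)
    return float(c @ prompt_vec) / (np.linalg.norm(c) * np.linalg.norm(prompt_vec) + 1e-8)

def assemble_context(root, prompt_vec, budget=MAX_CHUNKS):
    """Greedily swap the most relevant chunk for its two finer-LOD children until
    the chunk budget is reached, then return the chunks in document order."""
    selected = [root]
    while len(selected) < budget:
        refinable = [n for n in selected if n.children]
        if not refinable:
            break
        best = max(refinable, key=lambda n: relevance(n, prompt_vec))
        selected.remove(best)
        selected.extend(best.children)
    selected.sort(key=lambda n: n.start)
    return [n.vectors for n in selected]  # fed to the LLM in place of raw token embeddings

# toy usage: 100k "token embeddings" of dimension 64
emb = np.random.randn(100_000, 64).astype(np.float32)
ctx = assemble_context(build_tree(emb), np.random.randn(64).astype(np.float32))
print(len(ctx), "chunks selected")  # at most MAX_CHUNKS
```

The greedy loop mirrors the queue-based refinement described above: each swap replaces one coarse chunk with its two finer children, so the number of selected chunks grows by one per step until the budget is exhausted.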
2025-01-23T14:57:19
https://www.reddit.com/r/LocalLLaMA/comments/1i856eq/arent_current_approaches_to_context_short_term/
LagOps91
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i856eq
false
null
t3_1i856eq
/r/LocalLLaMA/comments/1i856eq/arent_current_approaches_to_context_short_term/
false
false
self
7
null
Open-source DeepSeek beat not-so-open OpenAI in 'Humanity's Last Exam'!
404
2025-01-23T14:57:59
https://i.redd.it/lxwhx4eicree1.jpeg
BidHot8598
i.redd.it
1970-01-01T00:00:00
0
{}
1i856wr
false
null
t3_1i856wr
/r/LocalLLaMA/comments/1i856wr/opensource_deepseek_beat_not_so_openai_in/
false
false
https://b.thumbs.redditm…_pUsxaRBQ0QY.jpg
404
{'enabled': True, 'images': [{'id': 'W5lAufooMiGHFhrZWGYJsqqpR9uRF9ELQqXWBnF8Hgs', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/lxwhx4eicree1.jpeg?width=108&crop=smart&auto=webp&s=dbc2426b8fd89cc2c0d394532a3b112e6069c23b', 'width': 108}, {'height': 74, 'url': 'https://preview.redd.it/lxwhx4eicree1.jpeg?width=216&crop=smart&auto=webp&s=cbba276eb735343de38bfddb7dfc283e380c382b', 'width': 216}, {'height': 110, 'url': 'https://preview.redd.it/lxwhx4eicree1.jpeg?width=320&crop=smart&auto=webp&s=ceff0c86b7b1f9be099de22b22efdffeaa0829b1', 'width': 320}, {'height': 221, 'url': 'https://preview.redd.it/lxwhx4eicree1.jpeg?width=640&crop=smart&auto=webp&s=b6a7b23129dc6a37671b0f77472359990567d0e4', 'width': 640}, {'height': 331, 'url': 'https://preview.redd.it/lxwhx4eicree1.jpeg?width=960&crop=smart&auto=webp&s=2f0cd394611174e29d5ac85c529900f65e828d74', 'width': 960}, {'height': 373, 'url': 'https://preview.redd.it/lxwhx4eicree1.jpeg?width=1080&crop=smart&auto=webp&s=2fd9e67ddf899004ff5fc1ddcd6186486be5c9fe', 'width': 1080}], 'source': {'height': 708, 'url': 'https://preview.redd.it/lxwhx4eicree1.jpeg?auto=webp&s=231e37648bf93922ef524480e601288913f28b18', 'width': 2048}, 'variants': {}}]}
IntellAgent: An open-source framework to evaluate and optimize conversational agents
9
[IntellAgent](https://github.com/plurai-ai/intellagent) is a novel multi-agent framework to evaluate conversational agents. The system takes the prompt as input and generates thousands of **realistic**, challenging interactions with the tested agent. It then simulates the interactions and provides fine-grained analysis. The [research paper](https://arxiv.org/abs/2501.11067) provides many non-trivial insights produced by the system. The system is open source: [https://github.com/plurai-ai/intellagent](https://github.com/plurai-ai/intellagent)
2025-01-23T15:13:31
https://www.reddit.com/r/LocalLLaMA/comments/1i85jqm/intellagnet_an_opensource_framework_to_evaluate/
e2lv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i85jqm
false
null
t3_1i85jqm
/r/LocalLLaMA/comments/1i85jqm/intellagnet_an_opensource_framework_to_evaluate/
false
false
self
9
{'enabled': False, 'images': [{'id': '5pruI5cUE0WZb2qeJZWTfhVBJb5FY5UBGFyZaWdYVwU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SkZImiINvMbU2yk7bCkwjUiBxjheg2smjs0EMzvX9MU.jpg?width=108&crop=smart&auto=webp&s=e44705f80bb2d36e07a30eaadf4b08ee2b7e03c3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SkZImiINvMbU2yk7bCkwjUiBxjheg2smjs0EMzvX9MU.jpg?width=216&crop=smart&auto=webp&s=51f2b993677b1a0d1d1386560c90530d6de33408', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SkZImiINvMbU2yk7bCkwjUiBxjheg2smjs0EMzvX9MU.jpg?width=320&crop=smart&auto=webp&s=ca04b3d709714fb249aabeaa2748cd2a8a5ed601', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SkZImiINvMbU2yk7bCkwjUiBxjheg2smjs0EMzvX9MU.jpg?width=640&crop=smart&auto=webp&s=6dea427741364b02fa4407028ec3202c9455f571', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SkZImiINvMbU2yk7bCkwjUiBxjheg2smjs0EMzvX9MU.jpg?width=960&crop=smart&auto=webp&s=cfc44573711b62d7727b8a661e4b5948adb77a2c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SkZImiINvMbU2yk7bCkwjUiBxjheg2smjs0EMzvX9MU.jpg?width=1080&crop=smart&auto=webp&s=a6b1119ce4cae5013d1a437375bb0bbadbf3fe1e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SkZImiINvMbU2yk7bCkwjUiBxjheg2smjs0EMzvX9MU.jpg?auto=webp&s=6da91be8a56093c42590edd8c0a7ecba9f495a5b', 'width': 1200}, 'variants': {}}]}
Anybody managed to run a >7B model on an RTX 2060 with 6GB VRAM?
2
I know I'm stretching it, 6GB of VRAM and all, but I ain't willing to buy a new card at the moment. I am happy with my 7B models, but what I noticed is NOOoooo model with 8B parameters is willing to even finish downloading. It just keeps downloading and then resets back to 0%. I also notice computer performance drops a lot when downloading 8B models (but not when downloading 7B or lower). Is there a way to run 8B models on my computer or am I pushing it? It's a Ryzen 5 3400G with 16 GB DDR4.
2025-01-23T15:16:49
https://www.reddit.com/r/LocalLLaMA/comments/1i85mfp/anybody_managed_to_run_a_7b_model_in_a_rtx_2060/
Blender-Fan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i85mfp
false
null
t3_1i85mfp
/r/LocalLLaMA/comments/1i85mfp/anybody_managed_to_run_a_7b_model_in_a_rtx_2060/
false
false
self
2
null
Which small LLM to choose for RAG ?
1
[removed]
2025-01-23T15:18:47
https://www.reddit.com/r/LocalLLaMA/comments/1i85o5f/which_small_llm_to_choose_for_rag/
jasgewsnahh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i85o5f
false
null
t3_1i85o5f
/r/LocalLLaMA/comments/1i85o5f/which_small_llm_to_choose_for_rag/
false
false
self
1
null
Information retrieval user interface
1
[removed]
2025-01-23T15:22:16
https://www.reddit.com/r/LocalLLaMA/comments/1i85r1z/information_retrieval_user_interface/
alex2110_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i85r1z
false
null
t3_1i85r1z
/r/LocalLLaMA/comments/1i85r1z/information_retrieval_user_interface/
false
false
self
1
null
Looking for an information retrieval app
1
[removed]
2025-01-23T15:25:07
https://www.reddit.com/r/LocalLLaMA/comments/1i85tbn/looking_for_an_information_retrieval_app/
alex2110_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i85tbn
false
null
t3_1i85tbn
/r/LocalLLaMA/comments/1i85tbn/looking_for_an_information_retrieval_app/
false
false
self
1
null
Estimating concurrent capacity for a local LLM RAG setup
2
Hello! I’m building chatbots for companies' websites to assist with their sales. These chatbots will help potential clients by answering questions about the companies’ products and services. To avoid disruptions from public API changes, I plan to use Ollama to serve the LLM locally. The hardware I’m considering is Hetzner's GEX44 (Intel® Core™ i5-13500, 64 GB DDR4, Nvidia RTX™ 4000 SFF Ada Generation with 20 GB VRAM). I’ll be running models similar in size to Llama 3.2 3B Q8_0 (3.4 GB), which means the server should have enough VRAM to host 4 instances of the model. I understand that input size also plays a significant role—especially for RAG setups, where larger contexts can greatly increase the token count and processing time. Performance will naturally depend on active users, input size, and other variables. Is this setup viable for a production chatbot? How many concurrent queries could it realistically handle? I’d greatly appreciate any insights or benchmarks from those with practical experience. Thanks in advance!
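As a rough sanity check on the sizing above, here is a hedged back-of-the-envelope calculation; only the 20 GB of VRAM and the 3.4 GB model file come from the post, while the per-copy runtime overhead and per-request KV-cache figures are assumptions that vary a lot with context length, quantization and the serving stack.

```python
# Back-of-the-envelope VRAM budget (assumed figures are marked as such).
VRAM_GB = 20.0            # RTX 4000 SFF Ada (from the post)
MODEL_GB = 3.4            # Llama 3.2 3B Q8_0 weights (from the post)
OVERHEAD_GB = 1.0         # assumed CUDA context / buffers per loaded copy
KV_PER_REQUEST_GB = 0.4   # assumed KV cache for a few-thousand-token RAG prompt

copies = 4
used = copies * (MODEL_GB + OVERHEAD_GB)
left_for_kv = VRAM_GB - used
print(f"weights+overhead: {used:.1f} GB, left for KV cache: {left_for_kv:.1f} GB, "
      f"~{int(left_for_kv // KV_PER_REQUEST_GB)} in-flight requests")
# With these assumptions: 17.6 GB for 4 copies, ~2.4 GB left, ~6 in-flight requests.
# A single shared copy behind a batching server may serve more concurrent users
# than 4 separate copies, since the weights are only held in memory once.
```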
2025-01-23T15:27:21
https://www.reddit.com/r/LocalLLaMA/comments/1i85v5t/estimating_concurrent_capacity_for_a_local_llm/
kzkv0p
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i85v5t
false
null
t3_1i85v5t
/r/LocalLLaMA/comments/1i85v5t/estimating_concurrent_capacity_for_a_local_llm/
false
false
self
2
null
5090 LLM Benchmark Results
13
https://preview.redd.it/…s if you got em!
2025-01-23T15:31:21
https://www.reddit.com/r/LocalLLaMA/comments/1i85ygw/5090_llm_benchmark_results/
olmoscd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i85ygw
false
null
t3_1i85ygw
/r/LocalLLaMA/comments/1i85ygw/5090_llm_benchmark_results/
false
false
https://b.thumbs.redditm…yXrG-lv0CQCA.jpg
13
null