Schema: title (stringlengths 1-300) | score (int64 0-8.54k) | selftext (stringlengths 0-40k) | created (timestamp[ns] 2023-04-01 04:30:41 to 2025-06-30 03:16:29, ⌀) | url (stringlengths 0-878) | author (stringlengths 3-20) | domain (stringlengths 0-82) | edited (timestamp[ns] 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64 0-2) | gildings (stringclasses 7 values) | id (stringlengths 7) | locked (bool 2 classes) | media (stringlengths 646-1.8k, ⌀) | name (stringlengths 10) | permalink (stringlengths 33-82) | spoiler (bool 2 classes) | stickied (bool 2 classes) | thumbnail (stringlengths 4-213) | ups (int64 0-8.54k) | preview (stringlengths 301-5.01k, ⌀)

title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How China’s New AI Model DeepSeek Is Threatening U.S. Dominance | 0 | 2025-01-25T02:03:34 | https://www.youtube.com/watch?v=WEBiebbeNCA | fallingdowndizzyvr | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1i9cfdv | false | {'oembed': {'author_name': 'CNBC', 'author_url': 'https://www.youtube.com/@CNBC', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/WEBiebbeNCA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="How China’s New AI Model DeepSeek Is Threatening U.S. Dominance"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/WEBiebbeNCA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'How China’s New AI Model DeepSeek Is Threatening U.S. Dominance', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1i9cfdv | /r/LocalLLaMA/comments/1i9cfdv/how_chinas_new_ai_model_deepseek_is_threatening/ | false | false | 0 | {'enabled': False, 'images': [{'id': '67pdQr3OslWMY-UzmjmQYHBg2QERJa8X9pmP2sk_HC0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/gMIK_riKKBzJcYRSOybUojILoTtjNXmzT9dWIGmY3FY.jpg?width=108&crop=smart&auto=webp&s=31836c7ef75bb88fc9b43818ecc236786717619e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/gMIK_riKKBzJcYRSOybUojILoTtjNXmzT9dWIGmY3FY.jpg?width=216&crop=smart&auto=webp&s=909c3951123973cc5cba6308d56d833c6b54bd5c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/gMIK_riKKBzJcYRSOybUojILoTtjNXmzT9dWIGmY3FY.jpg?width=320&crop=smart&auto=webp&s=6437e4743ba2647ec2fcfb88ec9c07b1f901f8f6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/gMIK_riKKBzJcYRSOybUojILoTtjNXmzT9dWIGmY3FY.jpg?auto=webp&s=65f04728692d9ba5a6dcaac141a77d1a1decca31', 'width': 480}, 'variants': {}}]} |
||
Google Drive Apk Free Download | 1 | 2025-01-25T02:04:15 | https://www.ywsf12.com/2025/01/google-drive-apk-free-download.html | FormalInsurance1080 | ywsf12.com | 1970-01-01T00:00:00 | 0 | {} | 1i9cfu8 | false | null | t3_1i9cfu8 | /r/LocalLLaMA/comments/1i9cfu8/google_drive_apk_free_download/ | false | false | 1 | {'enabled': False, 'images': [{'id': '77GwLOrGnozYpiylJr9POxuiCyYr6tSUK19DekMZxdk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/zIFJDIjMxA6pOBfb6ym-iuqdty1-Z_B1bththc92PbY.jpg?width=108&crop=smart&auto=webp&s=a927c0232b0e1358083a8891d63b7418243c550a', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/zIFJDIjMxA6pOBfb6ym-iuqdty1-Z_B1bththc92PbY.jpg?auto=webp&s=d6cdf591bd444940cad81886ad6a53a7b90708d9', 'width': 200}, 'variants': {}}]} |
||
Optimum quanto getting memory error with activation quantization calibration | 2 | Using Optimum Quanto, I am trying to quantize both weights and activations and then calibrate the model in Google Colab.
But no matter which samples I try, it gives a memory error.
Below is the code for this.
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.2-1B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

from optimum.quanto import quantize, qint8

quantize(model, weights=qint8, activations=qint8)

# Example using Hugging Face dataset
from datasets import load_dataset
from optimum.quanto import Calibration

calibration_samples = load_dataset("allenai/c4", data_files="en/c4-train.00001-of-01024.json.gz", split='train[:20]')
tokenizer.pad_token = tokenizer.eos_token
samples = [item['text'] for item in calibration_samples]
inputs = tokenizer(samples, return_tensors='pt', padding=True, truncation=True)

# Use these inputs in calibration
with Calibration(momentum=0.9):
    model(inputs['input_ids'])  # <- the kernel restarts here
``` | 2025-01-25T02:14:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i9cmue/optimum_quanto_getting_memory_error_with/ | infinite-Joy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9cmue | false | null | t3_1i9cmue | /r/LocalLLaMA/comments/1i9cmue/optimum_quanto_getting_memory_error_with/ | false | false | self | 2 | null |
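One workaround worth trying (a sketch on my part; it assumes Optimum Quanto's `Calibration` context accepts incremental forward passes, which its momentum argument suggests): stream one short sample at a time under `torch.no_grad()`, so the activations for the whole padded 20-sample batch never sit in memory at once.

```
import torch

# continues from the snippet above (model, tokenizer, samples, Calibration)
with torch.no_grad(), Calibration(momentum=0.9):
    for text in samples:
        batch = tokenizer(text, return_tensors='pt',
                          truncation=True, max_length=512)
        model(batch['input_ids'])  # one short sequence per forward pass
```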
Sky-T1-32B-Flash - Think Less, Achieve More: Cut Reasoning Costs by 50% Without Sacrificing Accuracy | 248 | Hugging Face:
[https://huggingface.co/NovaSky-AI/Sky-T1-32B-Flash](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Flash)
Blog post:
[https://novasky-ai.github.io/posts/reduce-overthinking/](https://novasky-ai.github.io/posts/reduce-overthinking/) | 2025-01-25T02:54:59 | https://www.reddit.com/r/LocalLLaMA/comments/1i9ddj1/skyt132bflash_think_less_achieve_more_cut/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9ddj1 | false | null | t3_1i9ddj1 | /r/LocalLLaMA/comments/1i9ddj1/skyt132bflash_think_less_achieve_more_cut/ | false | false | self | 248 | {'enabled': False, 'images': [{'id': 'fiuuJqFxphsHUDiyTThNQFTC_JN07fnrNsbI1mL4-C8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9WGPYB42SvpKADlypNtOJwZkMOILyqcETOaipgPz0HE.jpg?width=108&crop=smart&auto=webp&s=c3f5ee3b4c6047d8d0dc8e25652c12734aad3aab', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9WGPYB42SvpKADlypNtOJwZkMOILyqcETOaipgPz0HE.jpg?width=216&crop=smart&auto=webp&s=62b652080d969048b60107a6ecc237d6415e5fae', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9WGPYB42SvpKADlypNtOJwZkMOILyqcETOaipgPz0HE.jpg?width=320&crop=smart&auto=webp&s=41cde3bb46863aa1f20c993daceedc7bc08ca22a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9WGPYB42SvpKADlypNtOJwZkMOILyqcETOaipgPz0HE.jpg?width=640&crop=smart&auto=webp&s=2ffa225434e01363451c9801210c3f34cca06e17', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9WGPYB42SvpKADlypNtOJwZkMOILyqcETOaipgPz0HE.jpg?width=960&crop=smart&auto=webp&s=5a85515e77e7183ed57f92e2d8891fc657142a50', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9WGPYB42SvpKADlypNtOJwZkMOILyqcETOaipgPz0HE.jpg?width=1080&crop=smart&auto=webp&s=42378ad0dd5962c271afe31c2fd66efab30a6a11', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9WGPYB42SvpKADlypNtOJwZkMOILyqcETOaipgPz0HE.jpg?auto=webp&s=57c5cb34d05771b41562e95b7e7acce946a86749', 'width': 1200}, 'variants': {}}]} |
Snowflake claims breakthrough can cut AI inferencing times by more than 50% | 77 | 2025-01-25T02:57:24 | https://siliconangle.com/2025/01/16/snowflake-claims-breakthrough-can-cut-ai-inferencing-times-50/?utm_source=tldrai | naytres | siliconangle.com | 1970-01-01T00:00:00 | 0 | {} | 1i9df4h | false | null | t3_1i9df4h | /r/LocalLLaMA/comments/1i9df4h/snowflake_claims_breakthrough_can_cut_ai/ | false | false | 77 | {'enabled': False, 'images': [{'id': 'CyJnRf6qESIs8wkb9aHDrLHouL_jA98ryJBHbVOSPMU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/rFow1BiIxlzLULRqkmp4MQmAOsr_RX2LigTLUndYEX0.jpg?width=108&crop=smart&auto=webp&s=2a94f57c59ab86bff06cebf1b4d8d87ab04a0560', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/rFow1BiIxlzLULRqkmp4MQmAOsr_RX2LigTLUndYEX0.jpg?width=216&crop=smart&auto=webp&s=70d59876e29b68a669a36c4f3a7fa8438c8bbeed', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/rFow1BiIxlzLULRqkmp4MQmAOsr_RX2LigTLUndYEX0.jpg?width=320&crop=smart&auto=webp&s=742f82a5f435d706f764fdf31a0c05e2fb4584ab', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/rFow1BiIxlzLULRqkmp4MQmAOsr_RX2LigTLUndYEX0.jpg?width=640&crop=smart&auto=webp&s=3a3a7651d8f8f1ff902a9f794c7c939c280a9067', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/rFow1BiIxlzLULRqkmp4MQmAOsr_RX2LigTLUndYEX0.jpg?width=960&crop=smart&auto=webp&s=367420ee3ae803b55ab15f4d015a1b591fb44dd2', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/rFow1BiIxlzLULRqkmp4MQmAOsr_RX2LigTLUndYEX0.jpg?width=1080&crop=smart&auto=webp&s=96cf51c872df913ad73147745e9dce918763c90c', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/rFow1BiIxlzLULRqkmp4MQmAOsr_RX2LigTLUndYEX0.jpg?auto=webp&s=f34a2d737f5319040fe93fb1f88571260176eefc', 'width': 1792}, 'variants': {}}]} |
||
Replicating DeepSeek-R3-Zero RL recipe on 3B LLM for <30$, the model develops self-verification and search abilities all on its own | 148 | 2025-01-25T03:08:14 | https://x.com/jiayi_pirate/status/1882839370505621655 | Happysedits | x.com | 1970-01-01T00:00:00 | 0 | {} | 1i9dmll | false | null | t3_1i9dmll | /r/LocalLLaMA/comments/1i9dmll/replicating_deepseekr3zero_rl_recipe_on_3b_llm/ | false | false | 148 | {'enabled': False, 'images': [{'id': 'J_0GdxFPW0QUdI1963OaQFYeKNirVgnRxmeJixBdPqE', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/-U0AHUhVyO7_HDJ7f69uyMJA3j37LtkD4mztbS9f2x8.jpg?width=108&crop=smart&auto=webp&s=d8dbec4b6c1cbc7edb4da312a5545a235fedc9d3', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/-U0AHUhVyO7_HDJ7f69uyMJA3j37LtkD4mztbS9f2x8.jpg?width=216&crop=smart&auto=webp&s=4b7e6673bcea65175fdbb61357ccf338e62f69d2', 'width': 216}, {'height': 191, 'url': 'https://external-preview.redd.it/-U0AHUhVyO7_HDJ7f69uyMJA3j37LtkD4mztbS9f2x8.jpg?width=320&crop=smart&auto=webp&s=d57a80d9f36c497d697cae88a91e4b8f8e436712', 'width': 320}, {'height': 382, 'url': 'https://external-preview.redd.it/-U0AHUhVyO7_HDJ7f69uyMJA3j37LtkD4mztbS9f2x8.jpg?width=640&crop=smart&auto=webp&s=311e0b7bf41c19947928451d0391722ceeb36c65', 'width': 640}, {'height': 573, 'url': 'https://external-preview.redd.it/-U0AHUhVyO7_HDJ7f69uyMJA3j37LtkD4mztbS9f2x8.jpg?width=960&crop=smart&auto=webp&s=ebaf6f735542c6629e25e5ae5fca86b7bc6e2e97', 'width': 960}, {'height': 644, 'url': 'https://external-preview.redd.it/-U0AHUhVyO7_HDJ7f69uyMJA3j37LtkD4mztbS9f2x8.jpg?width=1080&crop=smart&auto=webp&s=1845e9965e728e1115ac102c82f6b8a99d666443', 'width': 1080}], 'source': {'height': 1008, 'url': 'https://external-preview.redd.it/-U0AHUhVyO7_HDJ7f69uyMJA3j37LtkD4mztbS9f2x8.jpg?auto=webp&s=bec2d6c5212d543809944bb37c0b479477986e03', 'width': 1688}, 'variants': {}}]} |
||
Ollama and Internet search | 1 | [removed] | 2025-01-25T03:09:38 | https://www.reddit.com/r/LocalLLaMA/comments/1i9dnin/ollama_and_internet_search/ | DependentMore5540 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9dnin | false | null | t3_1i9dnin | /r/LocalLLaMA/comments/1i9dnin/ollama_and_internet_search/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OiqJ8APHuTrO3N7H-PvGDHCjwdBM0ufhWoAdP3o3eFs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/GBl0pi2K0QGLzoKKmk559Rz_1ZX94Pii9znEhgDh0Q8.jpg?width=108&crop=smart&auto=webp&s=558c4f978bbd5e54c225d72971ac5f566b5fe821', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/GBl0pi2K0QGLzoKKmk559Rz_1ZX94Pii9znEhgDh0Q8.jpg?width=216&crop=smart&auto=webp&s=d4049e44b87a9796fc73ef08637cd7a6f62c1044', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/GBl0pi2K0QGLzoKKmk559Rz_1ZX94Pii9znEhgDh0Q8.jpg?width=320&crop=smart&auto=webp&s=4d41fd2b4f272c654b2966bc0bea1aa9e543ba33', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/GBl0pi2K0QGLzoKKmk559Rz_1ZX94Pii9znEhgDh0Q8.jpg?width=640&crop=smart&auto=webp&s=f0b2081f49d676156b6d28b7dc65dcb62c08f789', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/GBl0pi2K0QGLzoKKmk559Rz_1ZX94Pii9znEhgDh0Q8.jpg?width=960&crop=smart&auto=webp&s=edd516194b8dd28de20584518bd86c12e5d79ed7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/GBl0pi2K0QGLzoKKmk559Rz_1ZX94Pii9znEhgDh0Q8.jpg?width=1080&crop=smart&auto=webp&s=03db9717105782574a3748f940df8c0443d74fc3', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/GBl0pi2K0QGLzoKKmk559Rz_1ZX94Pii9znEhgDh0Q8.jpg?auto=webp&s=3a96dd63c5f8c36d57e538a7a0464bd6258a2379', 'width': 2560}, 'variants': {}}]} |
DeepSeek released deepseek/deepseek-r1-distill-llama-70b via OpenRouter; Use it with Cursor now! | 1 | Hey guys, I've written a proxy that enables Cursor to use deepseek/deepseek-r1-distill-llama-70b, which according to benchmarks is superior to DeepSeek V3 at coding. Check this project out: [https://github.com/danilofalcao/cursor-deepseek](https://github.com/danilofalcao/cursor-deepseek)
Let me know your thoughts! | 2025-01-25T03:10:19 | https://www.reddit.com/r/LocalLLaMA/comments/1i9dnz7/deepseek_released/ | danilofs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9dnz7 | false | null | t3_1i9dnz7 | /r/LocalLLaMA/comments/1i9dnz7/deepseek_released/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WmnexHJ9OifBIxsoJYVIMI3DLOFISSQQj8gy5pXIzeA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EnJFp62JnFZRy52n6aUj0qkRooT_B9UHLlFV0T1yAFE.jpg?width=108&crop=smart&auto=webp&s=a97fbb8237d9b75244b2365b2695fa2bde46c7d0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EnJFp62JnFZRy52n6aUj0qkRooT_B9UHLlFV0T1yAFE.jpg?width=216&crop=smart&auto=webp&s=c1fe9c16bd5c30ed757868b7d3938fcbdbd136c2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EnJFp62JnFZRy52n6aUj0qkRooT_B9UHLlFV0T1yAFE.jpg?width=320&crop=smart&auto=webp&s=7174343e3707ec89169202ab2c95fe3960318ae9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EnJFp62JnFZRy52n6aUj0qkRooT_B9UHLlFV0T1yAFE.jpg?width=640&crop=smart&auto=webp&s=34e718bd85c3f5fed7edb0c16da0480e53f13cbe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EnJFp62JnFZRy52n6aUj0qkRooT_B9UHLlFV0T1yAFE.jpg?width=960&crop=smart&auto=webp&s=413eff243ff31d1d6731c189700d189798e1706f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EnJFp62JnFZRy52n6aUj0qkRooT_B9UHLlFV0T1yAFE.jpg?width=1080&crop=smart&auto=webp&s=d27e052bf9c81de8eeebd9b8b08e83e8b9dcf7ae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EnJFp62JnFZRy52n6aUj0qkRooT_B9UHLlFV0T1yAFE.jpg?auto=webp&s=57e80381d978e691517d376f463b3586783b5d78', 'width': 1200}, 'variants': {}}]} |
Deepseek v3 will make MoE opensource models wayyy more common. | 397 | IDK why no one is talking about this, but I just finished reading DeepSeek V3's technical report, and they've found an innovative, novel solution for one of the biggest challenges with training MoE architectures: irregular loss spiking.
This issue was probably the major reason we haven't seen widespread adoption of MoE models before. But now, with their solutions laid out in this open report, it's likely that other companies will start implementing similar approaches.
I can already imagine an MoE-powered Qwen or Llama becoming a flagship model in the future, just like DeepSeek | 2025-01-25T03:22:08 | https://www.reddit.com/r/LocalLLaMA/comments/1i9dvrk/deepseek_v3_will_make_moe_opensource_models_wayyy/ | tensorsgo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9dvrk | false | null | t3_1i9dvrk | /r/LocalLLaMA/comments/1i9dvrk/deepseek_v3_will_make_moe_opensource_models_wayyy/ | false | false | self | 397 | null |
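For context, the headline trick in the report is auxiliary-loss-free load balancing: each expert gets a bias term that is added to its routing score only when selecting experts, and the bias is nudged after each step depending on whether that expert was over- or under-loaded. A rough sketch of the idea (my paraphrase of the report, not DeepSeek's actual code):

```
import torch

def biased_topk_routing(scores, bias, k=8, gamma=1e-3):
    # scores: (num_tokens, num_experts) router affinities
    # bias:   (num_experts,) balancing bias, used only for expert selection
    picks = torch.topk(scores + bias, k, dim=-1).indices
    load = torch.bincount(picks.flatten(), minlength=bias.numel()).float()
    bias -= gamma * torch.sign(load - load.mean())  # discourage overloaded experts
    return picks  # gating weights still come from the raw scores
```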
Deepseek seems to not be trained on wikipedia data, despite what the model claims | 1 | [removed] | 2025-01-25T03:48:09 | https://www.reddit.com/r/LocalLLaMA/comments/1i9ecdq/deepseek_seems_to_not_be_trained_on_wikipedia/ | Logical-Bet-5372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9ecdq | false | null | t3_1i9ecdq | /r/LocalLLaMA/comments/1i9ecdq/deepseek_seems_to_not_be_trained_on_wikipedia/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dFY36Lt-192R0DQPybfiUE8PDyAFSvVVOx_k-4g4Swo', 'resolutions': [], 'source': {'height': 39, 'url': 'https://external-preview.redd.it/YFbGmsJKXlJ-ExCfsRZc_U3IiTRzwlsD0tnSCyH-VUU.jpg?auto=webp&s=722ffa5740fb5a9042d8bb125c2acbc3d1b59e2f', 'width': 50}, 'variants': {}}]} |
GitHub - meta-llama/llama-stack: Composable building blocks to build Llama Apps | 9 | 2025-01-25T03:55:08 | https://github.com/meta-llama/llama-stack | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i9egut | false | null | t3_1i9egut | /r/LocalLLaMA/comments/1i9egut/github_metallamallamastack_composable_building/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'nhPdcBLN-x1vEyBw7icF7BSDWGb3QwdXSB-kzrEG7V8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IwYhG2_NZBhLDfd8uqDPBYRH6A-wf02OfY4YhJfSqx8.jpg?width=108&crop=smart&auto=webp&s=587520c985a0053fa026b194208e025bda883d23', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IwYhG2_NZBhLDfd8uqDPBYRH6A-wf02OfY4YhJfSqx8.jpg?width=216&crop=smart&auto=webp&s=4d46ac16079f581eef9e66f66278c58504fbf959', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IwYhG2_NZBhLDfd8uqDPBYRH6A-wf02OfY4YhJfSqx8.jpg?width=320&crop=smart&auto=webp&s=3928cd66c3eaf229f59037c67c22022f37006aff', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IwYhG2_NZBhLDfd8uqDPBYRH6A-wf02OfY4YhJfSqx8.jpg?width=640&crop=smart&auto=webp&s=021709f8319899af6cbfce0780d2a99e297c18dc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IwYhG2_NZBhLDfd8uqDPBYRH6A-wf02OfY4YhJfSqx8.jpg?width=960&crop=smart&auto=webp&s=e1624c24b4a2131bee45832e9e094bb33a4da283', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IwYhG2_NZBhLDfd8uqDPBYRH6A-wf02OfY4YhJfSqx8.jpg?width=1080&crop=smart&auto=webp&s=545f136de8360ef910096589e57e046c3f4b07af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IwYhG2_NZBhLDfd8uqDPBYRH6A-wf02OfY4YhJfSqx8.jpg?auto=webp&s=e57ef5fd10c3f4476dccca25a40ced0b7d922b3a', 'width': 1200}, 'variants': {}}]} |
||
Why does DeepSeek-R1 claim that it's actually OpenAI GPT-4? | 1 | I've been playing with DeepSeek-R1 recently (using their official website, chat.deepseek.com), and noticed something weird. When, after a casual conversation, I asked whether it would use our conversation to improve itself, R1 said it doesn't learn on the fly (obviously), but my feedback matters. And then it said: "If you report a bug, ambiguity, or idea (e.g., through the official OpenAI channels), it may help improve future versions of the models." Wait, what? Official **OpenAI** channels? Why would it even mention OpenAI? So I played with it a bit more and after a few prompts, it openly admitted that I'm actually talking to OpenAI's GPT-4.
It seems that it's quite reproducible (I tried it about a dozen times and it worked for me 100% of the time), try it yourself: just start a casual conversation, ask it a riddle, or something like that. Then ask how this conversation helps improve the model and how can you help improve it. And/or what you should do if you notice a bug or have an idea how to improve something. Do not mention or suggest any model or company name (OpenAI/DeepSeek), so that the answers are not biased. It's the DeepSeek-R1 who mentions OpenAI first (while not mentioning DeepSeek). If you continue the conversation casually, it will openly admit it's based on GPT-4, see an example below (sorry for attaching long screenshots, but I think it's worth it).
Only when you start the topic by directly asking about the model it is based on, it will say it's DeepSeek-R1 and stick to it.
Now, why is that? One possible answer would be that the chat on DeepSeek's website is just an interface to GPT, with cleverly added chain-of-thought functionality (that's what the chatbot itself suggests). It even happened to me once that in the chain of thought it said:
https://preview.redd.it/lervref582fe1.png?width=755&format=png&auto=webp&s=33c8d87dea465e52e730b51924912d6a7dda811b
But in the final response, it said:
https://preview.redd.it/emmcou2h92fe1.png?width=768&format=png&auto=webp&s=792faaaf56d5739019bde5aad1ab359c4ff2b2e7
like there were some additional safeguards not to admit the actual origin of the model. But since DeepSeek published the whole model with detailed technical information, that's unlikely (did anyone run the full model independently?) So why? Did DeepSeek use so much GPT-generated synthetic data in training that R1 "thinks" it's GPT-4? Whatever the answer is, that's quite an embarrassing mishap for DeepSeek.
Below is an example of my (almost) full conversation. Before that, I was just giving it some riddles and asking whether it would use my answers to improve the model. I didn't mention OpenAI or DeepSeek.
https://preview.redd.it/98f5mvx5d2fe1.png?width=846&format=png&auto=webp&s=c9ca9757a01d6df41fb06ab03c65b01f300cc756
https://preview.redd.it/i8ifpkt4f2fe1.png?width=846&format=png&auto=webp&s=493e5656f680b32a6b76719eb274466f10fad3e5
https://preview.redd.it/ypa5cgdef2fe1.png?width=846&format=png&auto=webp&s=17a9e099b39aa787cfbb1792c5db2e5f3cf85674 | 2025-01-25T04:26:43 | https://www.reddit.com/r/LocalLLaMA/comments/1i9f0o0/why_does_deepseekr1_claim_that_its_actually/ | Academic-Kangaroo-83 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9f0o0 | false | null | t3_1i9f0o0 | /r/LocalLLaMA/comments/1i9f0o0/why_does_deepseekr1_claim_that_its_actually/ | false | false | self | 1 | null |
Deepseek R1 vs GPT o1 on Math | 0 | Curious to see if anyone has tested which one is better at math. | 2025-01-25T04:27:52 | https://www.reddit.com/r/LocalLLaMA/comments/1i9f1c6/deepseek_r1_vs_gpt_o1_on_math/ | SmellyFoot1m | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9f1c6 | false | null | t3_1i9f1c6 | /r/LocalLLaMA/comments/1i9f1c6/deepseek_r1_vs_gpt_o1_on_math/ | false | false | self | 0 | null |
How to put GGUF models on Open WebUI | 0 | Please God somebody help me, this is so goddamn complicated. I downloaded a GGUF model off Hugging Face; why can't I just import it onto Open WebUI? Why? Why is this so fucking difficult? IM NOT USING OLLAMA I DONT WANT OLLAMA I WANT MY MODEL
Kill me now | 2025-01-25T04:45:28 | https://www.reddit.com/r/LocalLLaMA/comments/1i9fbve/how_to_put_gguf_models_on_open_web_ui/ | DreamDragonP7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9fbve | false | null | t3_1i9fbve | /r/LocalLLaMA/comments/1i9fbve/how_to_put_gguf_models_on_open_web_ui/ | false | false | self | 0 | null |
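For anyone hitting the same wall: Open WebUI expects a backend to serve the model, so one Ollama-free route (a sketch; it assumes llama-cpp-python installed with server extras via `pip install "llama-cpp-python[server]"`) is to serve the GGUF over an OpenAI-compatible API and point Open WebUI at it:

```
import subprocess

# serves the downloaded GGUF over an OpenAI-compatible HTTP API
subprocess.run([
    "python", "-m", "llama_cpp.server",
    "--model", "/path/to/your-model.gguf",  # the file you downloaded
    "--host", "127.0.0.1", "--port", "8080",
    "--n_gpu_layers", "-1",                 # offload all layers if you have VRAM
])
# Then in Open WebUI: Admin Settings -> Connections -> add an OpenAI API
# connection pointing at http://127.0.0.1:8080/v1 (any API key string works).
```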
Serve llm endpoint with tools pre-configured | 0 | Hi there,
Is there a service that allows you to serve an API endpoint which has pre-configured tools? In short, I built a Chrome Extension (https://chromewebstore.google.com/detail/prompt-paul-ai-insights-a/bbphcdpangkcegolapkmchkigjdbjgid) and want to serve an endpoint with pre-configured tools to allow PromptPaul to get information from our business systems and other APIs.
This will be rolled out to my team (who are not tech savy). | 2025-01-25T04:46:09 | https://www.reddit.com/r/LocalLLaMA/comments/1i9fcb5/serve_llm_endpoint_with_tools_preconfigured/ | klippers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9fcb5 | false | null | t3_1i9fcb5 | /r/LocalLLaMA/comments/1i9fcb5/serve_llm_endpoint_with_tools_preconfigured/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'gnKbNCwP7bjogEI-ZVkfIhwcHYVbhDIlQLAWPlJwH6s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/i1dc3XZ6xz43M3uRgsJdUwL65sIb-XVAVHeqZiE-l3Y.jpg?width=108&crop=smart&auto=webp&s=b403e0bd224ebc28b03f5df0b54b089158750ddb', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/i1dc3XZ6xz43M3uRgsJdUwL65sIb-XVAVHeqZiE-l3Y.jpg?auto=webp&s=b869bce8ecdf3bae40041d2928881cf57078f736', 'width': 128}, 'variants': {}}]} |
Another sneak peek of OpenWebUI Artifacts overhaul (Canvas / Claude Artifacts) | 65 | https://preview.redd.it/z7wlunjmk2fe1.png?width=1483&format=png&auto=webp&s=a14db1516ee46a23cb0c7dafbafb80d2d5413339

https://i.redd.it/99vtt93qn2fe1.gif

Another update on what I'm working on! This has gotten a little bit bigger than I thought. It's almost done though!

Here are the main changes:

A working difference checker that shows you what changed since the last revision. This is easily toggleable with a single button click!

You can cycle between different code blocks in one message. The code viewer will also be able to show file names or code block titles at the top (determined by header text starting with two or three # symbols; Llama 3.3 70B sometimes does this, and it works 100% of the time with a system prompt). If it can't find a header or title, it will default to the coding language as normal.

Code in the chat can be compacted and shown as files while the Artifacts window is open; this is also toggleable!

I also added WAYYY more coding languages, pretty much everything you guys mentioned on my last post.

Hope to share this with you guys soon! | 2025-01-25T05:13:53 | https://www.reddit.com/r/LocalLLaMA/comments/1i9ft5x/another_sneak_peek_of_openwebui_artifacts/ | maxwell321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9ft5x | false | null | t3_1i9ft5x | /r/LocalLLaMA/comments/1i9ft5x/another_sneak_peek_of_openwebui_artifacts/ | false | false | 65 | null |
What does Deepseek r1 mean for Stargate? | 3 | What impact, if any, does Deepseek have on Stargate and its value? Does it take the wind out of Stargate’s sails?
| 2025-01-25T05:14:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i9ftjg/what_does_deepseek_r1_mean_for_stargate/ | Important_Wind_2026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9ftjg | false | null | t3_1i9ftjg | /r/LocalLLaMA/comments/1i9ftjg/what_does_deepseek_r1_mean_for_stargate/ | false | false | self | 3 | null |
LLM or Just SQE? Need Advice + Best Places for Trainee Contracts! | 1 | [removed] | 2025-01-25T05:20:26 | https://www.reddit.com/r/LocalLLaMA/comments/1i9fwyj/llm_or_just_sqe_need_advice_best_places_for/ | Naruto4040 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9fwyj | false | null | t3_1i9fwyj | /r/LocalLLaMA/comments/1i9fwyj/llm_or_just_sqe_need_advice_best_places_for/ | false | false | self | 1 | null |
Deepseek-R1: Guide to running multiple variants on the GPU that suits you best | 7 | Hi LocalLlama fam!
Deepseek-R1 is everywhere. So, we have done the heavy lifting for you to run each variant on the cheapest and highest-availability GPUs. All these configurations have been tested with vLLM for high throughput and auto-scale with the Tensorfuse serverless runtime.
Below is the table that summarizes the configurations you can run.
|Model Variant|Dockerfile Model Name|GPU Type|Num GPUs / Tensor parallel size|
|:-|:-|:-|:-|
|DeepSeek-R1 1.5B|deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B|A10G|1|
|DeepSeek-R1 7B|deepseek-ai/DeepSeek-R1-Distill-Qwen-7B|A10G|1|
|DeepSeek-R1 8B|deepseek-ai/DeepSeek-R1-Distill-Llama-8B|A10G|1|
|DeepSeek-R1 14B|deepseek-ai/DeepSeek-R1-Distill-Qwen-14B|L40S|1|
|DeepSeek-R1 32B|deepseek-ai/DeepSeek-R1-Distill-Qwen-32B|L4|4|
|DeepSeek-R1 70B|deepseek-ai/DeepSeek-R1-Distill-Llama-70B|L40S|4|
|DeepSeek-R1 671B|deepseek-ai/DeepSeek-R1|H100|8|
# Take it for an experimental spin
You can find the Dockerfile and all configurations in the GitHub repo below. Simply open up a GPU VM on your cloud provider, clone the repo, and run the Dockerfile.
Github Repo: [https://github.com/tensorfuse/tensorfuse-examples/tree/main/deepseek\_r1](https://github.com/tensorfuse/tensorfuse-examples/tree/main/deepseek_r1)
Or, if you use AWS or Lambda Labs, run it via [Tensorfuse Dev containers](https://tensorfuse.io/docs/concepts/devcontainers) that sync your local code to remote GPUs.
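If you just want to sanity-check a variant in plain Python before containerizing anything, here is a minimal vLLM sketch (pick the model and `tensor_parallel_size` from the table above; sampling settings are illustrative):

```
from vllm import LLM, SamplingParams

llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
          tensor_parallel_size=4)  # 4 GPUs, matching the 32B row in the table
params = SamplingParams(temperature=0.6, max_tokens=2048)
out = llm.generate(["Prove that sqrt(2) is irrational."], params)
print(out[0].outputs[0].text)
```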
# Deploy a production-ready service on AWS using Tensorfuse
If you are looking to use Deepseek-R1 models in your production application, [follow our detailed guide to deploy it on your AWS account using Tensorfuse](https://tensorfuse.io/docs/guides/deepseek_r1).
The guide covers all the steps necessary to deploy open-source models in production:
1. Deployed with the vLLM inference engine for high throughput
2. Support for autoscaling based on traffic
3. Prevent unauthorized access with token-based authentication
4. Configure a TLS endpoint with a custom domain
# Ask
If you like this guide, please like and retweet our post on X 🙏: [https://x.com/tensorfuse/status/1882486343080763397](https://x.com/tensorfuse/status/1882486343080763397)
| 2025-01-25T05:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/1i9g6zv/deepseekr1_guide_to_running_multiple_variants_on/ | tempNull | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9g6zv | false | null | t3_1i9g6zv | /r/LocalLLaMA/comments/1i9g6zv/deepseekr1_guide_to_running_multiple_variants_on/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'JlYotNR27b4dSVV07caU5BK4Xng9tG_m5QSdAYpwoBQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TmoYw0Dn9jy4SAR1bmkj9FZGCALqzcHF7qkK5YJkYbk.jpg?width=108&crop=smart&auto=webp&s=c5196abe8f7fa23d5f9b07ae514233808d7b5e1f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TmoYw0Dn9jy4SAR1bmkj9FZGCALqzcHF7qkK5YJkYbk.jpg?width=216&crop=smart&auto=webp&s=7c4a230cc749e9ac11e5b8c5e408c982aacd2195', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TmoYw0Dn9jy4SAR1bmkj9FZGCALqzcHF7qkK5YJkYbk.jpg?width=320&crop=smart&auto=webp&s=718984e0522d767888580aad425d355cbf2e5542', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TmoYw0Dn9jy4SAR1bmkj9FZGCALqzcHF7qkK5YJkYbk.jpg?width=640&crop=smart&auto=webp&s=6f832a90c1f2b0c6efb16ad8f1e83f03f721e070', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TmoYw0Dn9jy4SAR1bmkj9FZGCALqzcHF7qkK5YJkYbk.jpg?width=960&crop=smart&auto=webp&s=7d706dd150630850b6dbe8e9187700378c87baac', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TmoYw0Dn9jy4SAR1bmkj9FZGCALqzcHF7qkK5YJkYbk.jpg?width=1080&crop=smart&auto=webp&s=19d9a978c7205ae8d54e88fe8055047ea0aa6cc7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TmoYw0Dn9jy4SAR1bmkj9FZGCALqzcHF7qkK5YJkYbk.jpg?auto=webp&s=e307b8d7c75b335d4be589582a0aa6d9a7b3a3c4', 'width': 1200}, 'variants': {}}]} |
Ollama doesn't use GPU? | 0 | Hello, I have an AMD RX 6600 GPU and it's not officially supported by Ollama, so the model runs on CPU only. Is there any way I can make models run on the GPU? | 2025-01-25T05:40:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i9g8ji/ollama_dont_use_gup/ | hiren11478 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9g8ji | false | null | t3_1i9g8ji | /r/LocalLLaMA/comments/1i9g8ji/ollama_dont_use_gup/ | false | false | self | 0 | null |
is the NVIDIA RTX A4000 a good performer? | 1 | [removed] | 2025-01-25T05:41:10 | https://www.reddit.com/r/LocalLLaMA/comments/1i9g8w1/is_the_nvidia_rtx_a4000_a_good_performer/ | Felino_Wottgald | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9g8w1 | false | null | t3_1i9g8w1 | /r/LocalLLaMA/comments/1i9g8w1/is_the_nvidia_rtx_a4000_a_good_performer/ | false | false | self | 1 | null |
I flipped the function-calling pattern on its head. More responsive and less boiler plate for common agentic scenarios. | 42 | So I built Arch-Function LLM ( the #1 trending OSS function calling model on HuggingFace) and talked about it here: https://www.reddit.com/r/LocalLLaMA/comments/1hr9ll1/i_built_a_small_function_calling_llm_that_packs_a/
But one interesting property of building a lean and powerful LLM was that we could flip the function calling pattern on its head if engineered the right way and improve developer velocity for a lot of common scenarios for an agentic app.
Rather than the laborious 1) the application send the prompt to the LLM with function definitions 2) LLM decides response or to use tool 3) responds with function details and arguments to call 4) your application parses the response and executes the function 5) your application calls the LLM again with the prompt and the result of the function call and 6) LLM responds back that is send to the user
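For concreteness, that client-side loop typically looks like this (a generic OpenAI-style sketch; the tool, model name, and weather result are illustrative, not archgw's API):

```
import json
from openai import OpenAI

client = OpenAI()
tools = [{"type": "function", "function": {
    "name": "get_weather",  # hypothetical tool
    "description": "Get current weather for a city",
    "parameters": {"type": "object",
                   "properties": {"city": {"type": "string"}},
                   "required": ["city"]}}}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
resp = client.chat.completions.create(model="gpt-4o-mini",
                                      messages=messages, tools=tools)  # steps 1-3
call = resp.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)            # step 4: parse...
result = {"city": args["city"], "temp_c": 11}         # ...and execute the function
messages.append(resp.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id,
                 "content": json.dumps(result)})
final = client.chat.completions.create(model="gpt-4o-mini",
                                       messages=messages)  # steps 5-6
print(final.choices[0].message.content)
```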
Now, that complexity for many common agentic scenarios can be pushed upstream to the reverse proxy, which calls into the API as and when necessary and routes the message to a fallback endpoint if no clear intent is found. This simplifies a lot of the code, improves responsiveness, and lowers token cost. You can learn more about the project below.
Of course, for complex planning scenarios the gateway simply forwards the request to an endpoint designed to handle them, but we are working on the leanest "planning" LLM too. Check it out; I'd be curious to hear your thoughts.
https://github.com/katanemo/archgw
| 2025-01-25T06:00:37 | AdditionalWeb107 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i9gjok | false | null | t3_1i9gjok | /r/LocalLLaMA/comments/1i9gjok/i_flipped_the_functioncalling_pattern_on_its_head/ | false | false | 42 | {'enabled': True, 'images': [{'id': '-gwmxD_2A65q6Mt_ZM6nq76u3_trTABVpIMW1aoZQeY', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/xy8y911jy2fe1.jpeg?width=108&crop=smart&auto=webp&s=b4acfb8fe9ae4753e697d1f37657504e5f9c8915', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/xy8y911jy2fe1.jpeg?width=216&crop=smart&auto=webp&s=a664598e9b866c8238e5d78b43396b19a15ee0ba', 'width': 216}, {'height': 190, 'url': 'https://preview.redd.it/xy8y911jy2fe1.jpeg?width=320&crop=smart&auto=webp&s=dcbc139933fb6bababbf1461cd5239babd1b900d', 'width': 320}, {'height': 381, 'url': 'https://preview.redd.it/xy8y911jy2fe1.jpeg?width=640&crop=smart&auto=webp&s=ddfda590492ba4d1f38fdf8525f782b794bad4bd', 'width': 640}, {'height': 572, 'url': 'https://preview.redd.it/xy8y911jy2fe1.jpeg?width=960&crop=smart&auto=webp&s=7bc976e72df7e27e69d83dd406f1ffe61bd0f4f8', 'width': 960}, {'height': 643, 'url': 'https://preview.redd.it/xy8y911jy2fe1.jpeg?width=1080&crop=smart&auto=webp&s=7fa57723305b4fb9d11dcd12a2e430f6cf06f9ce', 'width': 1080}], 'source': {'height': 1348, 'url': 'https://preview.redd.it/xy8y911jy2fe1.jpeg?auto=webp&s=554f31b2a889db30bb55d618544d667b14f7579a', 'width': 2262}, 'variants': {}}]} |
||
How's Anthropic's "15% improved" RAG different from normal RAG ? | 3 | It's been a while since I've built a RAG application powered by an LLM from scratch, but as far as I recall, chunking text from a source document and passing it to the LLM as part of a prompt / context is how it's always worked. According to this article [https://arstechnica.com/ai/2025/01/anthropic-adds-citations-in-bid-to-avoid-confabulating-ai-models/](https://arstechnica.com/ai/2025/01/anthropic-adds-citations-in-bid-to-avoid-confabulating-ai-models/) they've improved it somehow, but it's poorly explained or I'm too stupid and missed it??
If it is actually new / different, how would this be implemented with locally run LLMs? | 2025-01-25T06:10:19 | https://www.reddit.com/r/LocalLLaMA/comments/1i9gp97/hows_anthropics_15_improved_rag_different_from/ | cmndr_spanky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9gp97 | false | null | t3_1i9gp97 | /r/LocalLLaMA/comments/1i9gp97/hows_anthropics_15_improved_rag_different_from/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'P1y50eo9f8Z_AXnMKoIqz8wW81vXPbloCUbvaNY79XI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/rRc5QiFgYsAcSh5MwYPWxor7_Fpp6kGfU8ESHQ2pQKI.jpg?width=108&crop=smart&auto=webp&s=6e8ab182277888434347d684af8696fb838e2611', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/rRc5QiFgYsAcSh5MwYPWxor7_Fpp6kGfU8ESHQ2pQKI.jpg?width=216&crop=smart&auto=webp&s=565e8a5c9644980155e538be39edab1246f8a0bb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/rRc5QiFgYsAcSh5MwYPWxor7_Fpp6kGfU8ESHQ2pQKI.jpg?width=320&crop=smart&auto=webp&s=0e1b4b4fce09a9c6b07dcdac08d3eea8babeca46', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/rRc5QiFgYsAcSh5MwYPWxor7_Fpp6kGfU8ESHQ2pQKI.jpg?width=640&crop=smart&auto=webp&s=d4416666897f27dac945efbed8ebd6191c899d71', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/rRc5QiFgYsAcSh5MwYPWxor7_Fpp6kGfU8ESHQ2pQKI.jpg?width=960&crop=smart&auto=webp&s=f7659cd23fbd922408e3d50d2066672d79af1173', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/rRc5QiFgYsAcSh5MwYPWxor7_Fpp6kGfU8ESHQ2pQKI.jpg?width=1080&crop=smart&auto=webp&s=092ed9427b0ffed347ada27e9a40260292c2ed96', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rRc5QiFgYsAcSh5MwYPWxor7_Fpp6kGfU8ESHQ2pQKI.jpg?auto=webp&s=17592fc0ce460029aea019efa565a188d031d8d8', 'width': 1152}, 'variants': {}}]} |
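If it's mainly the citation behavior you want locally, a rough approximation (my sketch, not Anthropic's implementation, which passes documents as a dedicated content block and returns cited spans) is to tag each retrieved chunk with an ID and instruct the model to cite the IDs it used:

```
# chunks would come from your retriever; the contents here are illustrative
chunks = [
    "The Rosetta Stone was found in 1799.",
    "It is housed in the British Museum.",
]
context = "\n".join(f"[{i}] {c}" for i, c in enumerate(chunks))
prompt = (
    "Answer using ONLY the sources below, and cite the source ID in "
    f"brackets after each claim.\n\nSources:\n{context}\n\n"
    "Question: Where is the Rosetta Stone kept?"
)
# feed `prompt` to any local model, then parse the bracketed IDs out of the answer
```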
Need help with understanding how to write good prompt to llama / mistral | 4 | Bit of background I love coding and love building stuff by my own.
So lately I started messing around with small LLMs (7B-13B). When I use Ollama I get great results for the questions and tasks I give the model, but once I start using Transformers the results are quite bad: same models (non-quantized, of course), same questions, and I get very different results. Sometimes the models talk to themselves, sometimes they just repeat the input.
It's probably something with the prompts / parameters etc., so I would appreciate it if someone could point me to a good guide on how to tinker with them, or at least provide a Jupyter notebook that yields good results.
I'm not looking to copy Ollama's settings / prompts; I'm interested in understanding how to improve, and why Ollama provides such good results in the first place.
I tried following the documentation on their website, but again the results were not good; it improved things, but not much. | 2025-01-25T06:43:32 | https://www.reddit.com/r/LocalLLaMA/comments/1i9h6df/need_help_with_understanding_how_to_write_good/ | ResponsibleTruck4717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9h6df | false | null | t3_1i9h6df | /r/LocalLLaMA/comments/1i9h6df/need_help_with_understanding_how_to_write_good/ | false | false | self | 4 | null |
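One thing I suspect is the chat template: Ollama applies each model's template automatically, while with raw Transformers you have to apply it yourself, or the model just sees an unformatted string. This is the kind of minimal setup I've been testing (the model name is just an example); corrections welcome:

```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.3"  # example model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain KV caching briefly."}]
input_ids = tok.apply_chat_template(messages, add_generation_prompt=True,
                                    return_tensors="pt").to(model.device)
out = model.generate(input_ids, max_new_tokens=256, do_sample=True,
                     temperature=0.7, repetition_penalty=1.1)
print(tok.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```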
How are LLMs capable of arithmetic calculations? | 3 | I was trying out Gemini Flash Thinking, and to my surprise all the arithmetic calculations were correct. How is it able to do that? I vaguely recall that arithmetic was a problem when GPT-3.5 was the best model. Then OpenAI came up with Code Interpreter, so that the model uses external tools to check its own calculations. Was there code execution under Gemini Flash Thinking's hood? Also, not many AI services run code. The only ones that do are OpenAI and Google AI Studio, IIRC. Claude somehow doesn't. | 2025-01-25T06:49:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i9h9n8/how_are_llms_capable_of_arithmetic_calculations/ | Alanthisis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9h9n8 | false | null | t3_1i9h9n8 | /r/LocalLLaMA/comments/1i9h9n8/how_are_llms_capable_of_arithmetic_calculations/ | false | false | self | 3 | null |
Why model breaks after SFT? | 1 | [removed] | 2025-01-25T06:59:18 | https://www.reddit.com/r/LocalLLaMA/comments/1i9he63/why_model_breaks_after_sft/ | Old-Acanthisitta-574 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9he63 | false | null | t3_1i9he63 | /r/LocalLLaMA/comments/1i9he63/why_model_breaks_after_sft/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'rGOJg3Lt23JqNO5-8wlbkkH_PrTv10IxAcDeUbn7xPM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?width=108&crop=smart&auto=webp&s=c8f630bbd77d1381441f1d24d2a40c947e86a698', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?width=216&crop=smart&auto=webp&s=ababbe3ac18dabc00e92e74d1645f47e5c9f43c6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?width=320&crop=smart&auto=webp&s=4bacb441497d8d69bb496ef9ee3d9cc8d59ee94e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?width=640&crop=smart&auto=webp&s=fc3e5fc94d8c131f31015fa0a7bf2c083e10e50d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?width=960&crop=smart&auto=webp&s=3a3c80c0a5fd80d2ff70b44dde2985f7316c57e2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?width=1080&crop=smart&auto=webp&s=fbd0b53940f79ee5175e91b9b7d04407dfa56bde', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/apiFqB7edC2V-JmJ94_rtGaZxn9AEQFX2opWXAClKeM.jpg?auto=webp&s=c66dbf2acc72294a9b644d74879588d5ae2807fe', 'width': 1200}, 'variants': {}}]} |
|
[AI Workflow] Track "White House" updates using GPT 4o and Firecrawl | 1 | [removed] | 2025-01-25T07:17:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i9hni1/ai_workflow_track_white_house_updates_using_gpt/ | 0xhbam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9hni1 | false | null | t3_1i9hni1 | /r/LocalLLaMA/comments/1i9hni1/ai_workflow_track_white_house_updates_using_gpt/ | false | false | self | 1 | null |
DeepSeek r1 produced correct response but deleted after result. for sensitive topic | 1 | [removed] | 2025-01-25T07:17:51 | https://www.reddit.com/gallery/1i9hnn0 | DeathShot7777 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1i9hnn0 | false | null | t3_1i9hnn0 | /r/LocalLLaMA/comments/1i9hnn0/deepseek_r1_produced_correct_response_but_deleted/ | false | false | 1 | null |
|
The only way to get Deep Seek to talk about Xi Jin Ping | 1 | 2025-01-25T07:25:42 | CriticalAd3475 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i9hrfj | false | null | t3_1i9hrfj | /r/LocalLLaMA/comments/1i9hrfj/the_only_way_to_get_deep_seek_to_talk_about_xi/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ugrlAr9UG04t_oqUIe8UmsfgrqbAu5zJhVRYqi40yTs', 'resolutions': [{'height': 194, 'url': 'https://preview.redd.it/uc8xmccpd3fe1.png?width=108&crop=smart&auto=webp&s=fcd3d8da7729bc4cc45c88d7c720983ee0778a3d', 'width': 108}, {'height': 389, 'url': 'https://preview.redd.it/uc8xmccpd3fe1.png?width=216&crop=smart&auto=webp&s=4fe2902eddc9d7fa5bd2f8c65609d0c9770ef4fa', 'width': 216}, {'height': 577, 'url': 'https://preview.redd.it/uc8xmccpd3fe1.png?width=320&crop=smart&auto=webp&s=f00493c31c24293069d55d2a9776b8570b155319', 'width': 320}, {'height': 1154, 'url': 'https://preview.redd.it/uc8xmccpd3fe1.png?width=640&crop=smart&auto=webp&s=748e09d1b19765934b52e22144a1006ead5ed6fe', 'width': 640}, {'height': 1731, 'url': 'https://preview.redd.it/uc8xmccpd3fe1.png?width=960&crop=smart&auto=webp&s=0234429dc62b5c2708873b251a94f706c24a96c2', 'width': 960}, {'height': 1948, 'url': 'https://preview.redd.it/uc8xmccpd3fe1.png?width=1080&crop=smart&auto=webp&s=afbcc7b888fc4077c2e3f329da6a0fdf20ec3c09', 'width': 1080}], 'source': {'height': 1948, 'url': 'https://preview.redd.it/uc8xmccpd3fe1.png?auto=webp&s=e1bddc1334df1340e8d6053994e6a2d5508129b0', 'width': 1080}, 'variants': {}}]} |
|||
Book Translation using ollama | 1 | [removed] | 2025-01-25T07:26:05 | https://www.reddit.com/r/LocalLLaMA/comments/1i9hrmi/book_translation_using_ollama/ | Hefty_Cup_8160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9hrmi | false | null | t3_1i9hrmi | /r/LocalLLaMA/comments/1i9hrmi/book_translation_using_ollama/ | false | false | self | 1 | null |
Cheapest llm setup | 1 | [removed] | 2025-01-25T07:43:14 | https://www.reddit.com/r/LocalLLaMA/comments/1i9hzve/cheapest_llm_setup/ | sameralhaswe21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9hzve | false | null | t3_1i9hzve | /r/LocalLLaMA/comments/1i9hzve/cheapest_llm_setup/ | false | false | self | 1 | null |
Deepseek R1 distilled 1.5 B model tells INCORRECT data | 1 | [removed] | 2025-01-25T07:44:29 | https://www.reddit.com/r/LocalLLaMA/comments/1i9i0gq/deepseek_r1_distilled_15_b_model_tells_incorrect/ | Head_Nectarine_772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9i0gq | false | null | t3_1i9i0gq | /r/LocalLLaMA/comments/1i9i0gq/deepseek_r1_distilled_15_b_model_tells_incorrect/ | false | false | 1 | null |
|
Initial perplexity values testing with lower active experts | 5 | | Model | [1] | [2] | [3] | [4] | [5] | [6] | [7] | [8] |
|---------------------|-------|-------|-------|-------|-------|-------|-------|-------|
| **IQ2_XXS** | 3.39 | 4.56 | 3.44 | 3.27 | 3.27 | 3.20 | 3.12 | 3.12 |
| **IQ3_XXS** (exp=3) | 3.12 | 4.03 | 2.93 | 2.63 | 2.52 | 2.48 | 2.45 | 2.48 |
| **IQ3_XXS** (exp=4) | 2.87 | 3.61 | 2.60 | 2.25 | 2.09 | 1.97 | 1.89 | 1.87 |
| **IQ3_XXS** (exp=6) | 2.67 | 3.53 | 2.53 | 2.13 | 1.94 | 1.80 | 1.71 | 1.65 |
| **IQ3_XXS** (def) | 2.69 | 3.53 | 2.51 | 2.11 | 1.91 | 1.78 | 1.69 | 1.62 |
(This is not a full perplexity evaluation, just a sanity check for low bpw models.) It seems IQ2_XXS 2bpw deviates greatly from IQ3_XXS 3bpw. If it helps, then we'd have a better quality quant type for the 1st layer of this model. If you offload then it's probably first in line to go in vram? Also **maybe** selectively keep shared experts high bpw?
8 experts are the default; I want to see how these perform in long-context benchmarks! There are a lot of needed experiments that would be useful to local users, so we can make optimal choices when manufacturers release the new 200 GB/s bandwidth devices for consumers. The expert slashing doubled speeds, and maybe we can get a speculative model if the MTP head or module is not easy to implement. | 2025-01-25T07:45:39 | https://www.reddit.com/r/LocalLLaMA/comments/1i9i10r/initial_perplexity_values_testing_with_lower/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9i10r | false | null | t3_1i9i10r | /r/LocalLLaMA/comments/1i9i10r/initial_perplexity_values_testing_with_lower/ | false | false | self | 5 | null |
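For anyone who wants to reproduce a sweep like this locally, a sketch (it assumes llama-cpp-python's `kv_overrides` can override the GGUF key holding the active-expert count; `deepseek2.expert_used_count` is my guess for this architecture, so check it against your file's metadata):

```
from llama_cpp import Llama

for n_experts in (3, 4, 6, 8):
    llm = Llama(model_path="DeepSeek-R1-IQ3_XXS.gguf",  # illustrative filename
                n_ctx=2048,
                kv_overrides={"deepseek2.expert_used_count": n_experts})
    # ...run the perplexity chunks here and record per-chunk PPL...
    del llm
```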
Deepseek R1 distilled 1.5 B model locally running on low end hardware might be a problem for you | 0 | I was running the DeepSeek 1.5B model locally on my old PC (without GPU; 2nd-gen i5, 16 GB RAM) to test how well it performs.
When asked about the Prime Minister of India, the model responded with the name "Mr Narendra Shreshtha", where it got the first name correct but the surname wrong.
https://preview.redd.it/yvlil8wqi3fe1.png?width=1366&format=png&auto=webp&s=9489f8cff1af463d0b1a0fbb8727407a0f872fa7
On being told its mistake, the model made up another name, "Mr Narendra Singh Tomar", where it again got the surname wrong.
Finally, when I told it the right answer, it somehow remembered it and also gave his term duration.
It also somehow said that it was the user who misunderstood!! (underlined in yellow)
https://preview.redd.it/sb6n1bksi3fe1.png?width=1366&format=png&auto=webp&s=da14c78d6d4466a60eba86e3249a1f8ed06e05df
That means the model had information on this topic but somehow messed up, maybe because it was running on old hardware, or because of the cut-down measures taken on the original model to come up with this one.
Now I totally understand that with such a small model, mistakes are expected, but I still wanted to point it out. | 2025-01-25T07:54:39 | https://www.reddit.com/r/LocalLLaMA/comments/1i9i5e8/deepseek_r1_distilled_15_b_model_locally_running/ | Head_Nectarine_772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9i5e8 | false | null | t3_1i9i5e8 | /r/LocalLLaMA/comments/1i9i5e8/deepseek_r1_distilled_15_b_model_locally_running/ | false | false | 0 | null |
|
OpenAI is such a bubble | 1 | 2025-01-25T07:54:40 | Jack_Hackerman | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i9i5eb | false | null | t3_1i9i5eb | /r/LocalLLaMA/comments/1i9i5eb/openai_is_such_a_bubble/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'WyfSmPtu9yd_hwVKmDmI8xY20ryYtVsXusDR-6KNgiQ', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/ogkfn8qui3fe1.png?width=108&crop=smart&auto=webp&s=775a72b106e1f5df5329acb70da0dd101b713fbc', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/ogkfn8qui3fe1.png?width=216&crop=smart&auto=webp&s=4e069c96d3d6dd8408a178528d454d1f271ff76d', 'width': 216}, {'height': 233, 'url': 'https://preview.redd.it/ogkfn8qui3fe1.png?width=320&crop=smart&auto=webp&s=072d25b67010a149234d5b0513d92c274b808529', 'width': 320}], 'source': {'height': 368, 'url': 'https://preview.redd.it/ogkfn8qui3fe1.png?auto=webp&s=bd0988571a7f668e9aab3d792f677f1eaaefb521', 'width': 504}, 'variants': {}}]} |
|||
So DeepSeek accidentally admits it copied OpenAI? | 0 | 2025-01-25T07:54:49 | https://www.reddit.com/r/LocalLLaMA/comments/1i9i5h1/so_deepseek_accidentally_admits_it_copied_openai/ | saMusle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9i5h1 | false | null | t3_1i9i5h1 | /r/LocalLLaMA/comments/1i9i5h1/so_deepseek_accidentally_admits_it_copied_openai/ | false | false | 0 | null |
||
Deepseek-R1 is the filthiest model I’ve ever used | 74 | I’ve tried so so many different models for ERP.
When I heard about R1 being a SOTA model I decided to try it out for some spicy chat. I honestly wasn’t expecting much considering it’s a “vanilla” (i.e not finetuned for roleplay by the usual huggingface crowd) model designed for reasoning. I was expecting something like the original llama3 instruct which will generate smut if you prompt it to but is not particularly good at it.
Oh boy.
Until yesterday I considered the magnum finetunes to be the most nsfw models available but R1 is just on a whole new level. I honestly am completely shocked. Idk what the hell was in the training data but my god. The absolute depravity of this model is astounding. There’s no positivity bias as far as I can tell. I dropped it into the middle of a raunchy but *mostly* wholesome scene I had going on and a few of the responses it generated were so filthy I almost didn’t even want to keep reading. The prose is also much much better than any llama or qwen finetune I’ve used. The fact that it’s dirt fucking cheap is icing on the cake. Between yesterday and today I used it for about six hours and I think I paid like 2 bucks on openrouter.
That said, it’s a very unruly model when it comes to roleplay and can be frustrating to get it to work well. You need a really solid system prompt to keep it in line, and even then it likes to ramble and produce walls of text. It also way overuses asterisks and hyphens, randomly putting them everywhere even when I prompt it not to. I had to go through lots of swipes, but the good ones were better than anything I’ve ever seen from an LLM. | 2025-01-25T07:59:46 | https://www.reddit.com/r/LocalLLaMA/comments/1i9i7ug/deepseekr1_is_the_filthiest_model_ive_ever_used/ | nsfw_throwitaway69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9i7ug | false | null | t3_1i9i7ug | /r/LocalLLaMA/comments/1i9i7ug/deepseekr1_is_the_filthiest_model_ive_ever_used/ | false | false | nsfw | 74 | null |
What local model should I run? | 0 | 4070 Ti Super, 13900K, 128 GB RAM.
I want to run the best current model for general usage. I've never run a model locally before and would like to try it out. Thanks in advance! | 2025-01-25T09:06:08 | https://www.reddit.com/r/LocalLLaMA/comments/1i9j3sl/what_local_model_i_should_run/ | TCaller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9j3sl | false | null | t3_1i9j3sl | /r/LocalLLaMA/comments/1i9j3sl/what_local_model_i_should_run/ | false | false | self | 0 | null |
Best use case for local vision agents? | 3 | With smolagents adding vision and browser capabilities, and SmolVLM getting even smaller, it makes me think about the core local use case. What are your favourites and predictions?
Smolvlm blog https://huggingface.co/blog/smolervlm
Smolagents https://search.app/rw2dkvYnR5P8WeZV7 | 2025-01-25T09:17:40 | https://www.reddit.com/r/LocalLLaMA/comments/1i9j99u/best_use_case_for_local_vision_agents/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9j99u | false | null | t3_1i9j99u | /r/LocalLLaMA/comments/1i9j99u/best_use_case_for_local_vision_agents/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'tc9WWGsxOU0Zz8tW4Bdpr-CEQ9i7dUSqNU_EIUJ-wN4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3m05h8DKxOcWLYPNwjLhH7d6yq4nTIHCOZa-ls1ip5E.jpg?width=108&crop=smart&auto=webp&s=b08701b0f0fbdaf8b01c68306cbbd39630e82d29', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3m05h8DKxOcWLYPNwjLhH7d6yq4nTIHCOZa-ls1ip5E.jpg?width=216&crop=smart&auto=webp&s=70af87f3d3caec10001d7ea4f73496e642a31ce8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3m05h8DKxOcWLYPNwjLhH7d6yq4nTIHCOZa-ls1ip5E.jpg?width=320&crop=smart&auto=webp&s=ce65857b071e548003c826b520864b43794f5d02', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3m05h8DKxOcWLYPNwjLhH7d6yq4nTIHCOZa-ls1ip5E.jpg?width=640&crop=smart&auto=webp&s=8cd871c6a960c44bfbcaf796798c32d004fa8b29', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3m05h8DKxOcWLYPNwjLhH7d6yq4nTIHCOZa-ls1ip5E.jpg?width=960&crop=smart&auto=webp&s=458b10e36cd14454972bcc22cd97ce72806b0ce3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3m05h8DKxOcWLYPNwjLhH7d6yq4nTIHCOZa-ls1ip5E.jpg?width=1080&crop=smart&auto=webp&s=62a6b26d6b12c66785bce9f4d73a2cb2ff5f9ad3', 'width': 1080}], 'source': {'height': 1450, 'url': 'https://external-preview.redd.it/3m05h8DKxOcWLYPNwjLhH7d6yq4nTIHCOZa-ls1ip5E.jpg?auto=webp&s=f08e975f1a58a1b6ce2cc9de4dfa63b5cdcebb51', 'width': 2900}, 'variants': {}}]} |
GitHub - OpenBMB/UltraRAG: Yet another RAG framework, automated "data construction-model fine-tuning-inference evaluation" | 5 | 2025-01-25T09:19:16 | https://github.com/OpenBMB/UltraRAG | foldl-li | github.com | 1970-01-01T00:00:00 | 0 | {} | 1i9j9zq | false | null | t3_1i9j9zq | /r/LocalLLaMA/comments/1i9j9zq/github_openbmbultrarag_yet_another_rag_framework/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'wgULQgsZEDiAK7Q6u2cKrmm_cq-P3zYG8DWBpvnnfIE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/otgT4yOy0G7qack6kwisyOtE5Xn4tE-XotOCRzZNHY8.jpg?width=108&crop=smart&auto=webp&s=58e5a8bb06087fc7fa0907620dc3f2b87d473fa2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/otgT4yOy0G7qack6kwisyOtE5Xn4tE-XotOCRzZNHY8.jpg?width=216&crop=smart&auto=webp&s=c9319a9debd3e92f750e9cea5a26c30dd6779040', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/otgT4yOy0G7qack6kwisyOtE5Xn4tE-XotOCRzZNHY8.jpg?width=320&crop=smart&auto=webp&s=b4e0544c1275aed74acddf8237cd65f02ebd5bcd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/otgT4yOy0G7qack6kwisyOtE5Xn4tE-XotOCRzZNHY8.jpg?width=640&crop=smart&auto=webp&s=db7ec841f6a2cbff0f42af93834d6ea97d135db7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/otgT4yOy0G7qack6kwisyOtE5Xn4tE-XotOCRzZNHY8.jpg?width=960&crop=smart&auto=webp&s=0efc5f789bb2ba0795abdac5fbd8c4f67f80cbc8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/otgT4yOy0G7qack6kwisyOtE5Xn4tE-XotOCRzZNHY8.jpg?width=1080&crop=smart&auto=webp&s=ed67610305924cc7b27fe7b72fd5a32d7f75b6b8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/otgT4yOy0G7qack6kwisyOtE5Xn4tE-XotOCRzZNHY8.jpg?auto=webp&s=09c12a5d26f191f617b827488c73c0c8bcdfb1f1', 'width': 1200}, 'variants': {}}]} |
||
Deepseek R1 beats o1 in trick question 4+4+4=448 | 1 | [removed] | 2025-01-25T09:23:11 | https://www.reddit.com/r/LocalLLaMA/comments/1i9jbq2/deepseek_r1_beats_o1_in_trick_question_444448/ | sendel85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9jbq2 | false | null | t3_1i9jbq2 | /r/LocalLLaMA/comments/1i9jbq2/deepseek_r1_beats_o1_in_trick_question_444448/ | false | false | 1 | null |
|
Evaluation of PLM’s | 1 | [removed] | 2025-01-25T09:35:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i9jhfr/evaluation_of_plms/ | MIKOLOZ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9jhfr | false | null | t3_1i9jhfr | /r/LocalLLaMA/comments/1i9jhfr/evaluation_of_plms/ | false | false | self | 1 | null |
How are the Pretrained Language Models evaluated? | 1 | [removed] | 2025-01-25T09:38:09 | https://www.reddit.com/r/LocalLLaMA/comments/1i9jirl/how_are_the_pretrained_language_models_evaluated/ | MIKOLOZ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9jirl | false | null | t3_1i9jirl | /r/LocalLLaMA/comments/1i9jirl/how_are_the_pretrained_language_models_evaluated/ | true | false | spoiler | 1 | null |
⌨️ VaaK: Type Less, Speak Freely - OpenAI Whisper Keyboard That *Actually* Works (Now with 1-Tap Switching!) | 1 | Hey Folks,
Sharing a new keyboard I built using OpenAI's Whisper ASR. Please try it and share your feedback. It can also be used for voice chatting with new open-source models like DeepSeek.
What if your keyboard understood you perfectly - **even with accents** - and let you switch between voice/typing without app-juggling? Meet **[VaaK](https://github.com/amanhigh/vaak)**, where **OpenAI's Whisper ASR** (benchmark leader) meets **smart keyboard design**.
This gives you a speech interface for modern AI models like DeepSeek V3/R1 that lack one.
**Why You’ll Keep VaaK Installed** 🔥
- 🎙️ **Whisper > Google/Samsung**: 20-40% fewer errors in real-world use
- 🤯 Works with ANY AI Model: While DeepSeek/Sonnet dominate benchmarks, they have NO or Poor voice input - until now.
- ✋ **No Switching Hell**: Single tap to:
→ Voice dictation
→ System keyboard
→ Numpad (long-press spacebar)
→ Clipboard Buttons
- 🌍 **Accent-Friendly**: Tested with Indian, European, and East Asian English speakers
- 💸 **Cheap to Run**: $5 OpenAI credit ≈ 15 hours of voice typing
**Designed for Real Humans** 🧑💻
- Color-coded recording timer (green → yellow → red)
- **Hold to PASTE** saved prompts (emails, addresses)
- **Instant translation** while dictating (EN→HI, PA→FR, etc)
- **Zero learning curve**: Works like your default keyboard
**Try It If You…**
✓ Hate thumb-typing essays
✓ Need multilingual support
✓ Want future-ready AI integration
📥 [Download APK](https://github.com/amanhigh/vaak/releases) | 🐙 [GitHub](https://github.com/amanhigh/vaak)
⭐️ Please Star [GitHub Repo](https://github.com/amanhigh/vaak) if you like it!
**Planned Features**:
➜ AI text rephrasing
➜ Usage stats dashboard
➜ Play Store submission
➜ Enhanced theme engine
**Hidden Time-Savers** ⏳
- Long-press BACKSPACE to delete entire words
- ‘AUTO’ language mode detects German/Punjabi/English seamlessly
- Check out more [features](https://github.com/amanhigh/vaak?tab=readme-ov-file#5-advanced-usage) in the README!
**FAQ** ❓
- **Cost?** OpenAI charges ~$0.33/hour (way cheaper than typing-induced rage)
- **Privacy?** Uses OpenAI’s API (read their [policy](https://openai.com/policies/privacy-policy)), no extra data mining
- **Why not local ASR?** Whisper’s accuracy justifies the API call | 2025-01-25T09:41:46 | https://www.reddit.com/r/LocalLLaMA/comments/1i9jkev/vaak_type_less_speak_freely_openai_whisper/ | amanfdk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9jkev | false | null | t3_1i9jkev | /r/LocalLLaMA/comments/1i9jkev/vaak_type_less_speak_freely_openai_whisper/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Ect8hWw_TCa_PBEH5vj76LHxLW_lvCTu91D8H3U594E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tyeGA7cUs_JKwQi7GgDpoGJGAyIT0OCHfY6xG_GQBUE.jpg?width=108&crop=smart&auto=webp&s=6d9f4142bcfad1189a9cc074be0140591286e71f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tyeGA7cUs_JKwQi7GgDpoGJGAyIT0OCHfY6xG_GQBUE.jpg?width=216&crop=smart&auto=webp&s=d743d5fe633efbfdbfb6f40ff4606c8bcee1da97', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tyeGA7cUs_JKwQi7GgDpoGJGAyIT0OCHfY6xG_GQBUE.jpg?width=320&crop=smart&auto=webp&s=4bea747e7177137a466bdfc526a056eecaab19c6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tyeGA7cUs_JKwQi7GgDpoGJGAyIT0OCHfY6xG_GQBUE.jpg?width=640&crop=smart&auto=webp&s=370aac09c0918d2a026218578d61f0bf5c7037e6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tyeGA7cUs_JKwQi7GgDpoGJGAyIT0OCHfY6xG_GQBUE.jpg?width=960&crop=smart&auto=webp&s=a9e4878c16b88200b24359216842b6cd4c24f6dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tyeGA7cUs_JKwQi7GgDpoGJGAyIT0OCHfY6xG_GQBUE.jpg?width=1080&crop=smart&auto=webp&s=c7f1289257f517fcd3693102241365964de4394c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tyeGA7cUs_JKwQi7GgDpoGJGAyIT0OCHfY6xG_GQBUE.jpg?auto=webp&s=9284bb0e1d80c75ba6842cd63b3d33e2242310a7', 'width': 1200}, 'variants': {}}]} |
The browser version of DeepSeek R1 allows for maximum of 50 messages, the API version has no such limitations, you buy the number of tokens you want to use. My issue is I cant find a nice application to hook the Deepseek API so I can run it somewhat similarly to the browser version. | 23 | Fully explained in title. | 2025-01-25T09:56:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i9jre5/the_browser_version_of_deepseek_r1_allows_for/ | PurpleCartoonist3336 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9jre5 | false | null | t3_1i9jre5 | /r/LocalLLaMA/comments/1i9jre5/the_browser_version_of_deepseek_r1_allows_for/ | false | false | self | 23 | null |
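For anyone stuck on the same thing: the DeepSeek API is OpenAI-compatible, so any OpenAI-style client or frontend can be pointed at it. Below is a minimal Python sketch of a chat loop that keeps history like the browser version does; the base URL and model name follow DeepSeek's API docs, but verify them before relying on this.

```python
# Minimal chat loop against DeepSeek's OpenAI-compatible API.
# Assumes the `openai` package and an API key in DEEPSEEK_API_KEY;
# "deepseek-reasoner" is the documented R1 model name (double-check).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key=os.environ["DEEPSEEK_API_KEY"],
)

history = []  # keep the running conversation, like the web chat does
while True:
    user_msg = input("> ")
    history.append({"role": "user", "content": user_msg})
    resp = client.chat.completions.create(
        model="deepseek-reasoner",  # R1; "deepseek-chat" is the V3 model
        messages=history,
    )
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)
```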
Deepseek censorship - local | 0 | 2025-01-25T10:09:01 | Quiet-Isopod-8113 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i9jxk3 | false | null | t3_1i9jxk3 | /r/LocalLLaMA/comments/1i9jxk3/deepseek_censorship_local/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'J2czsWmAEeuApFl2COgrIcyYQ145ZqqJqWovomRiFNs', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/egjkecau64fe1.png?width=108&crop=smart&auto=webp&s=eed8abdc8d5c6a711a24b61f6b76436a7a7e0057', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/egjkecau64fe1.png?width=216&crop=smart&auto=webp&s=d6ea0fca144f2b9a051c0dbe5507863f4930d8d4', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/egjkecau64fe1.png?width=320&crop=smart&auto=webp&s=e6d118fb3f684354bafe9878aeb1fc28df64761b', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/egjkecau64fe1.png?width=640&crop=smart&auto=webp&s=0d5b2373f06ae71e472e0cb2903ba42d80c995c1', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/egjkecau64fe1.png?width=960&crop=smart&auto=webp&s=614b214f9991ca1d889efa543dbeb09eec8bdc3d', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/egjkecau64fe1.png?width=1080&crop=smart&auto=webp&s=fd80c114bd56c77cf023db0ef834ead48a430d48', 'width': 1080}], 'source': {'height': 2412, 'url': 'https://preview.redd.it/egjkecau64fe1.png?auto=webp&s=3cb1d904b80f1f3fbe8d7bb6d46734071fec9fde', 'width': 1080}, 'variants': {}}]} |
What is the easiest way to run something like Google Image Search on my own PC? | 2 | I have thousands of photos on my PC. I want to be able to show AI a photo of \[cat.jpg\] and have it look thru all my local photos for photos that contain a cat. Is it possible? | 2025-01-25T10:15:34 | https://www.reddit.com/r/LocalLLaMA/comments/1i9k0q9/what_is_the_easiest_way_to_run_something_like/ | tryingnew757 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9k0q9 | false | null | t3_1i9k0q9 | /r/LocalLLaMA/comments/1i9k0q9/what_is_the_easiest_way_to_run_something_like/ | false | false | self | 2 | null |
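One common approach is CLIP-style embeddings: embed every photo once, embed the query image, and rank by cosine similarity. A minimal sketch with the `sentence-transformers` library follows; the folder path and query filename are placeholders, and for thousands of photos you would encode in chunks and cache the embeddings to disk. The same model also supports text queries like "a photo of a cat".

```python
# Local reverse image search with CLIP embeddings (sketch).
# Assumes `sentence-transformers` and `pillow` are installed.
from pathlib import Path
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

photo_paths = list(Path("photos").rglob("*.jpg"))  # placeholder folder
photo_embs = model.encode(
    [Image.open(p) for p in photo_paths],  # encode in chunks for big libraries
    batch_size=32,
    convert_to_tensor=True,
)

query_emb = model.encode(Image.open("cat.jpg"), convert_to_tensor=True)
scores = util.cos_sim(query_emb, photo_embs)[0]

# Print the ten best matches
for score, path in sorted(zip(scores.tolist(), photo_paths), reverse=True)[:10]:
    print(f"{score:.3f}  {path}")
```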
I am simply blown away by this 32B model. It's a Sky-T1 + Fuse-O1 + DeepseekR1 + Qwen32B fusion. Please read the full post | 145 | Model available here: [https://huggingface.co/sm54/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview-Q4\_K\_M-GGUF](https://huggingface.co/sm54/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview-Q4_K_M-GGUF)
Original reddit post by u/AaronFeng47 : [https://www.reddit.com/r/LocalLLaMA/comments/1i9ddj1/skyt132bflash\_think\_less\_achieve\_more\_cut/](https://www.reddit.com/r/LocalLLaMA/comments/1i9ddj1/skyt132bflash_think_less_achieve_more_cut/)
Leetcode 1430. Prompt + Thought process here: [https://pastebin.com/hqWfbG7e](https://pastebin.com/hqWfbG7e)
I am simply blown away. This AI's thought process is so smooth. Every time I thought it was overthinking, it wasn't; it was carefully judging the cases.
Only caveat: I think if the question had been unknown to it, it would have fumbled the final output. It seemed to get distracted; it thought all the way through in Python but then suddenly started writing the final code in Java, and I am not sure what that was about. I had not pre-prompted it or anything like that.
Commands I ran
`ollama pull hf.co/sm54/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview-Q4_K_M-GGUF`
`ollama run hf.co/sm54/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview-Q4_K_M-GGUF`
`> PROMPT (check the Pastebin link above for the prompt)`
Final code here:
```java
// Assumes the standard LeetCode TreeNode definition:
// class TreeNode { int val; TreeNode left, right; }
import java.util.Stack;

public class Solution {
    public boolean doesPathExist(TreeNode root, int[] arr) {
        if (root == null || arr.length == 0) {
            return false;
        }
        // Iterative DFS over (node, index-into-arr) pairs
        Stack<Pair<TreeNode, Integer>> stack = new Stack<>();
        stack.push(new Pair<>(root, 0));
        while (!stack.isEmpty()) {
            Pair<TreeNode, Integer> currentPair = stack.pop();
            TreeNode currentNode = currentPair.getKey();
            int currentIndex = currentPair.getValue();
            if (currentIndex >= arr.length) {
                continue;
            }
            if (currentNode.val != arr[currentIndex]) {
                continue;
            }
            // Check if this is the last element and a leaf
            if (currentIndex == arr.length - 1) {
                if (currentNode.left == null && currentNode.right == null) {
                    return true;
                }
            } else {
                // Push right first to process left later, maintaining order
                if (currentNode.right != null) {
                    stack.push(new Pair<>(currentNode.right, currentIndex + 1));
                }
                if (currentNode.left != null) {
                    stack.push(new Pair<>(currentNode.left, currentIndex + 1));
                }
            }
        }
        return false;
    }
}

// Helper class to store node and index
class Pair<K, V> {
    private K key;
    private V value;

    public Pair(K key, V value) {
        this.key = key;
        this.value = value;
    }

    public K getKey() {
        return key;
    }

    public V getValue() {
        return value;
    }
}
```
Here is a dry run the model did to confirm all the different test cases. Simply immaculate.
```plaintext
Let me test it against some examples.

Test case 1:
Tree:
    0
   / \
  1   2
arr = [0,1]
Expected output: True (path root -> left child)
Code execution:
stack starts with [(root,0)]
pop root, i=0.
value matches arr[0] = 0.
i is not last index, so push right and left children with i+1=1.
Now stack has (2,1), then (1,1).
Next iteration: pop (1,1). node 1's value is 1 == arr[1]=1.
Check if it's last index: yes. Check if leaf. Node 1 has no left or right? In this tree, node 1 is a leaf (assuming original tree as in test case).
Yes: return True.
Test case passes.

Another test case:
Tree:
    0
   / \
  1   2
arr = [0,2]
Expected output: True.
In code:
stack has root with i=0. matches.
push right (2,1), then left (1,1).
pop (1,1): node.val is 1 != arr[1]=2? So skip.
then pop (2,1). val is 2 == arr[1].
Check if last index: yes, and node 2 is a leaf. So return True.
Test passes.

Another test case:
Tree:
      0
     / \
    1   2
   /
  3
arr = [0,1,3]
Expected output: True.
Code:
stack starts with (root,0). val matches.
i is not last index. Push left (1,1) and right (2,1).
Next pop (2,1): value 2 != arr[1]=1? So skip.
pop (1,1): val 1 == arr[1].
Not last index (len(arr)=3). push its children: left (3,2), no right.
stack now has (3,2).
Pop (3,2): val is 3 == arr[2]=3.
Check if i==2 (last) and node is leaf. Node 3's left and right are None? So yes: return True.
Test passes.

Another test case:
Tree:
      0
     / \
    1   2
   /
  3
arr = [0,1,4]
In code:
At (3,2), val is 3 != 4. so continue.
No other possibilities. stack becomes empty: return False.
Yes.

Another test case where arr is longer than any path.
Tree:
    0
   / \
  1   2
arr = [0,1,3]
So the tree has no node beyond left child's left (which doesn't exist). So len(arr)=3, but all root-to-leaf paths have length 2. So expected output is False.
In code:
stack pops (root,0): val matches, push children with i=1.
pop (2,1): not arr[1]=1? So no.
pop (1,1): match. since len(arr)=3 > 1+1=2?
Wait: node 1 has left child 3? That was the previous example; in this test case's tree, node 1 doesn't have any children, because the tree is:
root 0 with left 1 (no children) and right 2.
So in code:
At node 1, i=1. since len(arr)-1 = 2, which is greater than 1.
so node 1's val matches arr[1] (1), then push its children: none.
Thus, stack after popping (1,1): nothing pushed. So stack becomes empty. No other elements.
Return False.
Which is correct.

Another edge case:
root is None and arr is [0]: return False.
Yes.
root is a single node with val 5 and arr [5]. Return True.
In code: yes.

So the code seems to handle all these cases.
Thus, I think this should be the correct approach.
```
Wow.. just wow. I am really gonna use these think tags for my own leetcode prep. | 2025-01-25T10:16:36 | https://www.reddit.com/r/LocalLLaMA/comments/1i9k18t/i_am_simply_blown_away_by_this_32b_model_its_a/ | Educational_Gap5867 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9k18t | false | null | t3_1i9k18t | /r/LocalLLaMA/comments/1i9k18t/i_am_simply_blown_away_by_this_32b_model_its_a/ | false | false | self | 145 | {'enabled': False, 'images': [{'id': 'TEqyeDXn0LsbcmIyK5qHxi3lpz-RUP7p5rfXHNDX6AY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/f1RoNi878zbSQH-pU2eJAe4rECQVV7dAN8buV-lcDiA.jpg?width=108&crop=smart&auto=webp&s=e46e678167d47fe0c0fa7eb17e1f95d84cf40c29', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/f1RoNi878zbSQH-pU2eJAe4rECQVV7dAN8buV-lcDiA.jpg?width=216&crop=smart&auto=webp&s=9fb8f61fb1379dd1aac6e0208b3b73e61c441793', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/f1RoNi878zbSQH-pU2eJAe4rECQVV7dAN8buV-lcDiA.jpg?width=320&crop=smart&auto=webp&s=245612d0f9b1f8d04989c62e251327db88567610', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/f1RoNi878zbSQH-pU2eJAe4rECQVV7dAN8buV-lcDiA.jpg?width=640&crop=smart&auto=webp&s=cda43153fcc2634f926a2502ca603f1e0abda7ae', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/f1RoNi878zbSQH-pU2eJAe4rECQVV7dAN8buV-lcDiA.jpg?width=960&crop=smart&auto=webp&s=8857f5800e6434a5777bb5d80be7ffdb9a3d57e6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/f1RoNi878zbSQH-pU2eJAe4rECQVV7dAN8buV-lcDiA.jpg?width=1080&crop=smart&auto=webp&s=aab4182ef393bd99b4e988bf70a03a275c779e5b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/f1RoNi878zbSQH-pU2eJAe4rECQVV7dAN8buV-lcDiA.jpg?auto=webp&s=bea65db561e33074ffcfa32def2e5ac0b95bfe7a', 'width': 1200}, 'variants': {}}]} |
Gigabyte AI TOP motherboards | 1 | I'm thinking of building a system for my home. I find the idea of offloading VRAM to system RAM
attractive. I could get an RTX 4070 Super and still run fairly large LLMs (with a speed penalty, I would assume), while having great performance for more general ML, like TTS and YOLOv8.
Has anyone had experience with these motherboards?
Do the graphics card and SSD also need to be Gigabyte-brand, AI-series parts?
| 2025-01-25T10:17:47 | https://www.reddit.com/r/LocalLLaMA/comments/1i9k1s2/gigabyte_ai_top_motherboards/ | MycologistNeither470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9k1s2 | false | null | t3_1i9k1s2 | /r/LocalLLaMA/comments/1i9k1s2/gigabyte_ai_top_motherboards/ | false | false | self | 1 | null |
Why should one avoid adding a system prompt with DeepSeek-R1? | 10 | I read on https://huggingface.co/deepseek-ai/DeepSeek-R1#usage-recommendations:
> Avoid adding a system prompt; all instructions should be contained within the user prompt.
Why should one avoid adding a system prompt with DeepSeek-R1? | 2025-01-25T10:18:40 | https://www.reddit.com/r/LocalLLaMA/comments/1i9k284/why_should_one_avoid_adding_a_system_prompt_with/ | Franck_Dernoncourt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9k284 | false | null | t3_1i9k284 | /r/LocalLLaMA/comments/1i9k284/why_should_one_avoid_adding_a_system_prompt_with/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'EW9KzApmzFQOWfOeaBOFj0nk5o3bUHoeyDi_xD0iLLo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?width=108&crop=smart&auto=webp&s=9577640536ba5273620fd0480b3a5ebfe441c736', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?width=216&crop=smart&auto=webp&s=a31f920964030694d34d99e6af85a92e37ede1fd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?width=320&crop=smart&auto=webp&s=7baa788de1cc26a316da0c3936a77b5d8c6638f7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?width=640&crop=smart&auto=webp&s=cfe6acc456fe810e684e2549f82a4f400608da67', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?width=960&crop=smart&auto=webp&s=b5c2a167f26b2a775924151badedd1641594c389', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?width=1080&crop=smart&auto=webp&s=3c2b96522db9d74900aa2f630038d61a244f5fde', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xCP95O-e963Wkcg4zsFa0x35jJRRGJ69TOc664LDsj0.jpg?auto=webp&s=3da07efc6555d27ee00573823977a4aa6439f0f4', 'width': 1200}, 'variants': {}}]} |
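The model card doesn't spell out the reason, but the usual explanation is that R1 was post-trained without a system prompt, so a system message falls outside its training distribution and can degrade the reasoning. In practice the recommendation just means folding any instructions into the user turn; a minimal sketch against DeepSeek's OpenAI-compatible API (endpoint and model name assumed) follows.

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="sk-...")  # placeholder key

instructions = "You are a careful math tutor. Show your work step by step."
question = "What is the sum of the first 50 odd numbers?"

resp = client.chat.completions.create(
    model="deepseek-reasoner",
    # instead of a separate {"role": "system", "content": instructions}:
    messages=[{"role": "user", "content": f"{instructions}\n\n{question}"}],
)
print(resp.choices[0].message.content)
```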
Vision integration into Hugging Face smolagents | 1 | We have added vision support to smolagents, which unlocks the use of vision language models in agentic pipelines natively:
* Blog post: [https://huggingface.co/blog/smolagents-can-see](https://huggingface.co/blog/smolagents-can-see)
* GitHub repo: [https://github.com/huggingface/smolagents](https://github.com/huggingface/smolagents) | 2025-01-25T10:19:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i9k2nb/vision_integration_into_hugging_face_smolagents/ | Glittering-Debt2721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9k2nb | false | null | t3_1i9k2nb | /r/LocalLLaMA/comments/1i9k2nb/vision_integration_into_hugging_face_smolagents/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '4MZoLkk5pt_zR3wRhojRp9f1frBXRUl8IZN_a1YqH5c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/0pAaB4IeL5nqlFMQzxYXzPYWJj4eh5Oo3onQEh8Qsd0.jpg?width=108&crop=smart&auto=webp&s=7bc56ae9230d90c56592c6420007a6eb26aefb06', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/0pAaB4IeL5nqlFMQzxYXzPYWJj4eh5Oo3onQEh8Qsd0.jpg?width=216&crop=smart&auto=webp&s=efc8ec53df8d9ed469656bf76e5ee52409cf096b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/0pAaB4IeL5nqlFMQzxYXzPYWJj4eh5Oo3onQEh8Qsd0.jpg?width=320&crop=smart&auto=webp&s=6c828b646b150fc04b3cd13e6dd7bc30c5a99694', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/0pAaB4IeL5nqlFMQzxYXzPYWJj4eh5Oo3onQEh8Qsd0.jpg?width=640&crop=smart&auto=webp&s=819483889fd9d41de4b001c830d316ec4ab4ff41', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/0pAaB4IeL5nqlFMQzxYXzPYWJj4eh5Oo3onQEh8Qsd0.jpg?width=960&crop=smart&auto=webp&s=62afd82633d48039a78ccd290996fdc473bf41fa', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/0pAaB4IeL5nqlFMQzxYXzPYWJj4eh5Oo3onQEh8Qsd0.jpg?width=1080&crop=smart&auto=webp&s=abf9f4ee0bfada27a8539306de4ca7e5833bc760', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/0pAaB4IeL5nqlFMQzxYXzPYWJj4eh5Oo3onQEh8Qsd0.jpg?auto=webp&s=887912034e2d42ff123eab158744476fc39216cc', 'width': 1920}, 'variants': {}}]} |
I just asked a local DeepSeek R1 "Can you show me what you can do?" - Here is its response. | 1 | [removed] | 2025-01-25T10:47:41 | https://www.reddit.com/r/LocalLLaMA/comments/1i9kgc0/i_just_asked_a_local_deepseek_r1_can_you_show_me/ | GoofAckYoorsElf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9kgc0 | false | null | t3_1i9kgc0 | /r/LocalLLaMA/comments/1i9kgc0/i_just_asked_a_local_deepseek_r1_can_you_show_me/ | false | false | self | 1 | null |
Improve a Small Language Model for a specific language (cheap recipe) | 47 | 📓 [https://www.kaggle.com/code/anakin87/post-training-gemma-for-italian-and-beyond](https://www.kaggle.com/code/anakin87/post-training-gemma-for-italian-and-beyond)
Hey! I recently took part in a Kaggle competition to fine-tune Gemma.
I fine-tuned the model to improve 🇮🇹 Italian performance, but I believe my recipe is adaptable to other languages and models.
**Key Steps**
📊 Choose reference metrics
🧑🔬 Data curation for Instruction Fine Tuning: identify existing datasets + generate synthetic data
🏋️♂️ Efficient Instruction Fine Tuning with Spectrum
🧑🔬 Data curation for Preference Tuning: identify existing datasets + generate synthetic data
👍👎 Efficient Direct Preference Optimization with Spectrum
📈 Evaluation
**In the attached notebook, you can find all code + datasets + models.**
**I hope it can be useful to someone.** | 2025-01-25T11:04:40 | https://www.reddit.com/r/LocalLLaMA/comments/1i9kp43/improve_a_small_language_model_for_a_specific/ | anakin_87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9kp43 | false | null | t3_1i9kp43 | /r/LocalLLaMA/comments/1i9kp43/improve_a_small_language_model_for_a_specific/ | false | false | self | 47 | {'enabled': False, 'images': [{'id': 'vNC-CUSHMUuX-ZiDsog2OyAIqtJdwPJwxJZWHGGvX8o', 'resolutions': [], 'source': {'height': 100, 'url': 'https://external-preview.redd.it/jnj02eRv5_ViVx0bg5Artls5yqQ1_wRUWgsOuRq4jzM.jpg?auto=webp&s=32d31228aa55610f03b4dcf8904ae7f379642216', 'width': 100}, 'variants': {}}]} |
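For anyone who wants the shape of the preference-tuning step without opening the notebook, here is a minimal DPO sketch using Hugging Face TRL. The model and dataset names are placeholders, the Spectrum-based layer selection from the actual recipe is omitted, and TRL argument names shift between versions, so treat this as orientation rather than the author's exact code.

```python
# Generic DPO step with TRL (sketch; not the notebook's exact recipe).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "google/gemma-2-2b-it"  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Expects "prompt" / "chosen" / "rejected" columns.
dataset = load_dataset("my-org/italian-preferences", split="train")  # hypothetical

config = DPOConfig(
    output_dir="gemma-dpo-it",
    beta=0.1,  # strength of the preference constraint
    per_device_train_batch_size=2,
    num_train_epochs=1,
)
trainer = DPOTrainer(
    model=model,                 # TRL builds the reference model internally if omitted
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions use `tokenizer=` instead
)
trainer.train()
```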
I couldn't have my local Deepseek R1 analyse word files, is there anything I could do? | 1 | [removed] | 2025-01-25T11:04:45 | https://www.reddit.com/r/LocalLLaMA/comments/1i9kp5h/i_couldnt_have_my_local_deepseek_r1_analyse_word/ | jacky1001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9kp5h | false | null | t3_1i9kp5h | /r/LocalLLaMA/comments/1i9kp5h/i_couldnt_have_my_local_deepseek_r1_analyse_word/ | false | false | self | 1 | null |
DeepSeek R1 vs o1 Pro | 33 | Curious what people think of this. I personally have a ChatGPT Plus subscription which allows me to access o1 (not Pro). I feel that R1 definitely beats o1, but there are lots of people claiming o1 Pro is just a completely different level of model. For the people who have access to o1 Pro: how does it compare? | 2025-01-25T11:06:09 | https://www.reddit.com/r/LocalLLaMA/comments/1i9kpu4/deepseek_r1_vs_o1_pro/ | YourAverageDev0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9kpu4 | false | null | t3_1i9kpu4 | /r/LocalLLaMA/comments/1i9kpu4/deepseek_r1_vs_o1_pro/ | false | false | self | 33 | null
Will we continue to tolerate political bots? | 125 | Recently, any thread about Chinese models gets many comments from random accounts initiating full-scale discussions of communism vs. capitalism. It always seems to trend toward "explaining" how China is "not that bad".
Even though I have no presuppositions regarding China in this post, one thing I would like to discuss is whether this is really the right place for all of that glorification.
My second question: are we, as a spearhead of the LLM community, in any way protected from bots pushing specific narratives?
It was not like this a year ago. | 2025-01-25T11:46:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i9lae2/will_we_continue_to_tolerate_politic_bots/ | NickNau | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9lae2 | false | null | t3_1i9lae2 | /r/LocalLLaMA/comments/1i9lae2/will_we_continue_to_tolerate_politic_bots/ | false | false | self | 125 | null |
Built a White House Tracker using GPT 4o and Firecrawl | 1 | [removed] | 2025-01-25T11:56:44 | https://www.reddit.com/r/LocalLLaMA/comments/1i9lfd5/built_a_white_house_tracker_using_gpt_4o_and/ | Sam_Tech1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9lfd5 | false | null | t3_1i9lfd5 | /r/LocalLLaMA/comments/1i9lfd5/built_a_white_house_tracker_using_gpt_4o_and/ | false | false | self | 1 | null |
large for cheap | 1 | I have concluded that I want to test all these large models as cheaply as I can, so it's CPU inference on a second-hand PowerEdge. I know it's going to be dog slow. Impulse buy; what do you think it could run?
Dell PowerEdge R720xd, 24x SFF, 2x E5-2650 v2, 384GB RAM, 8x 1.2TB, iDRAC Enterprise
Refurbished
Price:
£275.00 | 2025-01-25T12:01:01 | https://www.reddit.com/r/LocalLLaMA/comments/1i9lhpb/large_for_cheap/ | megadonkeyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9lhpb | false | null | t3_1i9lhpb | /r/LocalLLaMA/comments/1i9lhpb/large_for_cheap/ | false | false | self | 1 | null |
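A rough way to sanity-check this build: CPU decode speed is approximately memory bandwidth divided by the bytes read per generated token. Each E5-2650 v2 tops out around 59.7 GB/s (quad-channel DDR3-1866), and NUMA plus real-world efficiency eats a good chunk of that. Note also that the full DeepSeek R1 at Q4 (~400 GB) would not fit in 384 GB, so only smaller models or lower quants apply. The numbers in the sketch below are assumptions, not measurements:

```python
# Back-of-envelope decode-speed estimate for CPU inference:
# tokens/s ≈ usable memory bandwidth / bytes read per token.
bandwidth_gbs = 2 * 59.7 * 0.6  # two sockets of 4-ch DDR3-1866, ~60% efficiency (guess)

models = {
    "70B dense @ Q4 (~40 GB)": 40,
    "Mistral Large 123B @ Q4 (~70 GB)": 70,
    "R1-style MoE, ~37B active @ Q4 (~21 GB read/token)": 21,
}
for name, gb_per_token in models.items():
    print(f"{name}: ~{bandwidth_gbs / gb_per_token:.1f} tok/s")
```

So roughly 1-3 tok/s on big models: slow, but arguably usable for testing.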
Google did not update this for a long time. What level is the full r1 at? | 105 | 2025-01-25T12:30:30 | Notdesciplined | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i9ly7f | false | null | t3_1i9ly7f | /r/LocalLLaMA/comments/1i9ly7f/google_did_not_update_this_for_a_long_time_what/ | false | false | 105 | {'enabled': True, 'images': [{'id': 'Mr3VmOukHRoyO2mqkBgGqim8zkjMJDpBL7eQz8avmbI', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/yxcnat7wv4fe1.jpeg?width=108&crop=smart&auto=webp&s=e1ad214d20ff3d63be9b586b3b1e6cd7f398a6b4', 'width': 108}, {'height': 240, 'url': 'https://preview.redd.it/yxcnat7wv4fe1.jpeg?width=216&crop=smart&auto=webp&s=58a80ae02495964f1c23f79afab9aff0372c58fa', 'width': 216}, {'height': 356, 'url': 'https://preview.redd.it/yxcnat7wv4fe1.jpeg?width=320&crop=smart&auto=webp&s=2e180fbd7e3841941fa70059a5347eeca60e43f0', 'width': 320}, {'height': 713, 'url': 'https://preview.redd.it/yxcnat7wv4fe1.jpeg?width=640&crop=smart&auto=webp&s=c4a74c90ac2af64f98872d371693355cf2d8e41e', 'width': 640}, {'height': 1070, 'url': 'https://preview.redd.it/yxcnat7wv4fe1.jpeg?width=960&crop=smart&auto=webp&s=c3951bfcb13cd79ab4911f0c4db3f972ef42dd2d', 'width': 960}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/yxcnat7wv4fe1.jpeg?auto=webp&s=69cb90be3729f335465b10e37f989ff222918446', 'width': 1076}, 'variants': {}}]} |
Are there any jailbreak prompts for the DeepSeek distilled models? | 4 | I'm just wondering, because if you ask anything unethical, or ask it to argue for or against something unethical, it will clam up.
It will begin to think about how to respond, eventually see that the request goes against whatever social, ethical, or moral guidelines it has, realize it needs to refuse, and then clam up.
So is there any prompt one can make for it not to do that?
I've tried the qwen 30b distilled variant and the llama 70b one | 2025-01-25T12:45:10 | https://www.reddit.com/r/LocalLLaMA/comments/1i9m6rv/is_there_any_jailbreak_prompts_for_the_deepseek/ | Bandit-level-200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9m6rv | false | null | t3_1i9m6rv | /r/LocalLLaMA/comments/1i9m6rv/is_there_any_jailbreak_prompts_for_the_deepseek/ | false | false | self | 4 | null |
What are your favorite local models for autocompletion using continuedev? | 6 | What local ollama models do you guys use for vscode autocompletion on continuedev?
I've tried qwen2.5:1.5b; it makes my IDE lag and is inaccurate compared to GitHub Copilot's autocomplete | 2025-01-25T12:59:12 | https://www.reddit.com/r/LocalLLaMA/comments/1i9mf1y/what_are_your_favorite_local_models_for/ | nocnoc94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9mf1y | false | null | t3_1i9mf1y | /r/LocalLLaMA/comments/1i9mf1y/what_are_your_favorite_local_models_for/ | false | false | self | 6 | null
What questions have you asked reasoning models to solve that you couldn't get done with non-reasoning models? | 31 | Aside from testing r1 out of curiosity, I haven't had much cause to use reasoning models. I found that normal models could handle the tasks I wanted, and for the tasks they couldn't handle, the reasoning models were also unable to handle them. | 2025-01-25T13:02:52 | https://www.reddit.com/r/LocalLLaMA/comments/1i9mhlx/what_questions_have_you_asked_reasoning_models_to/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9mhlx | false | null | t3_1i9mhlx | /r/LocalLLaMA/comments/1i9mhlx/what_questions_have_you_asked_reasoning_models_to/ | false | false | self | 31 | null
R1 is an amazing writer, even if it's only a reasoning model | 1 | [removed] | 2025-01-25T13:04:44 | https://www.reddit.com/r/LocalLLaMA/comments/1i9mir1/r1_is_an_amazing_writer_even_if_its_only_a/ | SirSourPlum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9mir1 | false | null | t3_1i9mir1 | /r/LocalLLaMA/comments/1i9mir1/r1_is_an_amazing_writer_even_if_its_only_a/ | false | false | self | 1 | null |
DeepSeek Hosting | 1 | [removed] | 2025-01-25T13:09:59 | https://www.reddit.com/r/LocalLLaMA/comments/1i9mm12/deepseek_hosting/ | namayra02 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9mm12 | false | null | t3_1i9mm12 | /r/LocalLLaMA/comments/1i9mm12/deepseek_hosting/ | false | false | self | 1 | null |
Memory makes computation universal, remember? | 8 | IMO current ML/AI/deep learning has forgotten how important memory is for general intelligence. To the tune of removing "thinking tokens" of past iterations from the context from o1, even when that access to the history of thought is theoretically the thing giving these chain of thought models their power. WDYT? | 2025-01-25T13:24:35 | http://thinks.lol/2025/01/memory-makes-computation-universal/ | waxbolt | thinks.lol | 1970-01-01T00:00:00 | 0 | {} | 1i9mv9q | false | null | t3_1i9mv9q | /r/LocalLLaMA/comments/1i9mv9q/memory_makes_computation_universal_remember/ | false | false | default | 8 | null |
Pointers for a beginner | 1 | [removed] | 2025-01-25T13:39:17 | https://www.reddit.com/r/LocalLLaMA/comments/1i9n4mj/pointers_for_a_beginner/ | RecordingShort1028 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9n4mj | false | null | t3_1i9n4mj | /r/LocalLLaMA/comments/1i9n4mj/pointers_for_a_beginner/ | false | false | self | 1 | null |
I really like the "Murderers in a field" question | 4 | \# Quantitative Analyst Interview Question
\## Murderers in a field
You are guarding 100 murderers in a field, and you have a gun with a single bullet. If any one of the murderers has a non-zero probability of surviving, he will attempt to escape. If a murderer is certain of death, he will not attempt an escape. How do you stop them from escaping?
o1, deepseek-r1 definitely solves this question.
FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview-Q4_K_M was also able to solve this question with temperature 0.6. First local model that solved this question. Interesting!
Where is local computer use? | 2 | Since the Anthropic demo was made available through one of the local front ends, I figured it was only a matter of time until someone fully carved it into a local model. It seems odd if simply figuring out how to command it to move the mouse is the problem. If anyone is looking for solutions, I've had success having local models take screenshots and then, using accessibility software, "say" the commands they want to make on the PC. It just seems like a weird place to get held up. | 2025-01-25T13:48:54 | https://www.reddit.com/r/LocalLLaMA/comments/1i9nauj/where_is_local_computer_use/ | derallo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9nauj | false | null | t3_1i9nauj | /r/LocalLLaMA/comments/1i9nauj/where_is_local_computer_use/ | false | false | self | 2 | null
RTX 3060 12GB vs RTX 4060 8GB | 1 | Well, a little birdy told me VRAM is very important when it comes to running local LLMs. I have the RTX 2060 right now and performance is good, but I can't run some models (mostly 8B and above) and I feel like it's a VRAM issue (SDXL won't run here either, but maybe that's a misconfiguration).
I researched a lot and the 4060 is about 15% more powerful, but since I barely game anymore, I don't think it will hurt much to sacrifice the power for an extra 50% of VRAM. I'd like to know what you guys think: does the extra VRAM justify *"downgrading"* my GPU?
It's a bit of a bummer; it feels like I'm "losing something" either way.
The cards are almost the same price any way | 2025-01-25T13:49:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i9nbbo/rtx_3060_12gb_vs_rtx_4060_8gb/ | Blender-Fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9nbbo | false | null | t3_1i9nbbo | /r/LocalLLaMA/comments/1i9nbbo/rtx_3060_12gb_vs_rtx_4060_8gb/ | false | false | self | 1 | null |
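A rough sizing sketch may help with the decision: what fits on a card is approximately weights plus KV cache plus runtime overhead. The dimensions below assume a typical Llama-style architecture and Q4_K_M weights (~4.5 bits/weight), so treat the outputs as ballpark figures only:

```python
# Approximate VRAM (GB) to run a GGUF model fully on GPU.
def vram_gb(params_b, bits_per_weight=4.5, ctx=8192, layers=32,
            kv_heads=8, head_dim=128, kv_bytes=2):
    weights = params_b * bits_per_weight / 8                      # weight tensors
    kv = 2 * layers * kv_heads * head_dim * ctx * kv_bytes / 1e9  # K and V cache
    return weights + kv + 0.8                                     # ~0.8 GB overhead

for size in (7, 8, 12, 14):
    print(f"{size}B @ Q4_K_M: ~{vram_gb(size):.1f} GB")
```

By that math an 8 GB card caps out around 8B models at modest context, while 12 GB comfortably fits 12-14B, which is why the extra VRAM usually wins for LLM work even at lower compute.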
Full open source reproduction of R1 in progress ⏳ | 1,581 | 2025-01-25T14:11:35 | eliebakk | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i9nqj9 | false | null | t3_1i9nqj9 | /r/LocalLLaMA/comments/1i9nqj9/full_open_source_reproduction_of_r1_in_progress/ | false | false | 1,581 | {'enabled': True, 'images': [{'id': 'gwdT1qthO2Mikx2VT7uIbjyNdEU8s8kmqdCMizpMovo', 'resolutions': [{'height': 149, 'url': 'https://preview.redd.it/s5rmvdhtd5fe1.png?width=108&crop=smart&auto=webp&s=c8545b2968ac7826f272173f4e2171b7d1a91265', 'width': 108}, {'height': 298, 'url': 'https://preview.redd.it/s5rmvdhtd5fe1.png?width=216&crop=smart&auto=webp&s=55c8ff8dcb262ef1857d1c54ebeff2fa12dd81bd', 'width': 216}, {'height': 442, 'url': 'https://preview.redd.it/s5rmvdhtd5fe1.png?width=320&crop=smart&auto=webp&s=1cad2dfcec7c41c7344f3face7718aab298eb898', 'width': 320}, {'height': 884, 'url': 'https://preview.redd.it/s5rmvdhtd5fe1.png?width=640&crop=smart&auto=webp&s=fbf96bf7e9979be87994f66f0537b9e70492b54b', 'width': 640}, {'height': 1326, 'url': 'https://preview.redd.it/s5rmvdhtd5fe1.png?width=960&crop=smart&auto=webp&s=ea0e6be16097b6db8f6094d8b92bbf60083a6a33', 'width': 960}, {'height': 1492, 'url': 'https://preview.redd.it/s5rmvdhtd5fe1.png?width=1080&crop=smart&auto=webp&s=19fa713df77030353367d5c5e66daf817fbf4e88', 'width': 1080}], 'source': {'height': 1824, 'url': 'https://preview.redd.it/s5rmvdhtd5fe1.png?auto=webp&s=c0354c5bc067fa6d2c4d4507c577521ad5d7a9a3', 'width': 1320}, 'variants': {}}]} |
The Ture OpenAI | 1 | 2025-01-25T14:16:57 | Financial_Baby5162 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i9nu9g | false | null | t3_1i9nu9g | /r/LocalLLaMA/comments/1i9nu9g/the_ture_openai/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ITkmd2HoKg66C_7Jsj6Hi6g2Xf8RRogGiegEyCaPQME', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/8um9brm0f5fe1.jpeg?width=108&crop=smart&auto=webp&s=fe53b4e2d111d460acf11148f2082becd4c902e6', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/8um9brm0f5fe1.jpeg?width=216&crop=smart&auto=webp&s=a93bc73a65afa5baeddcc13aa0d18e5d6c25c885', 'width': 216}, {'height': 254, 'url': 'https://preview.redd.it/8um9brm0f5fe1.jpeg?width=320&crop=smart&auto=webp&s=b097f3c89713050e53dc6d3d604edca64f3535a5', 'width': 320}, {'height': 508, 'url': 'https://preview.redd.it/8um9brm0f5fe1.jpeg?width=640&crop=smart&auto=webp&s=957ebe297e88c3fb1f0ea0872e51bbac7989f18b', 'width': 640}], 'source': {'height': 540, 'url': 'https://preview.redd.it/8um9brm0f5fe1.jpeg?auto=webp&s=0dba95014c0a20237fc1355e27788ffa5aa28b6d', 'width': 680}, 'variants': {}}]} |
The Ture OpenAI | 1 | [removed] | 2025-01-25T14:18:24 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1i9nv99 | false | null | t3_1i9nv99 | /r/LocalLLaMA/comments/1i9nv99/the_ture_openai/ | false | false | default | 1 | null |
Models for translation and education | 3 | I would like to run up some models with mainly translation purposes but also for education purposes in those languages.
Could someone suggest me some good multilanguage models and specifically some finetunes for Japanese and German, if they exists? | 2025-01-25T14:35:43 | https://www.reddit.com/r/LocalLLaMA/comments/1i9o7ye/models_for_translation_and_education/ | Chaotic_Alea | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9o7ye | false | null | t3_1i9o7ye | /r/LocalLLaMA/comments/1i9o7ye/models_for_translation_and_education/ | false | false | self | 3 | null |
Fine tuned code model | 1 | [removed] | 2025-01-25T14:39:49 | https://www.reddit.com/r/LocalLLaMA/comments/1i9oayi/fine_tuned_code_model/ | Beginning-Pack-3564 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9oayi | false | null | t3_1i9oayi | /r/LocalLLaMA/comments/1i9oayi/fine_tuned_code_model/ | false | false | self | 1 | null |
4090 or wait? | 1 | [removed] | 2025-01-25T14:44:16 | https://www.reddit.com/r/LocalLLaMA/comments/1i9oe6d/4090_or_wait/ | ImZackSong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9oe6d | false | null | t3_1i9oe6d | /r/LocalLLaMA/comments/1i9oe6d/4090_or_wait/ | false | false | self | 1 | null |
Which Web ui to use | 3 | Hi guys, I am trying to get back into textgeneration ai and was wondering what zu to use? Oobabooga seems to have changed a lot. Was hoping for something different because until now I only have issues with obabooga. Any recommendations? | 2025-01-25T14:50:33 | https://www.reddit.com/r/LocalLLaMA/comments/1i9oisz/which_web_ui_to_use/ | Cleptomanixxxx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9oisz | false | null | t3_1i9oisz | /r/LocalLLaMA/comments/1i9oisz/which_web_ui_to_use/ | false | false | self | 3 | null |
Nvidia to wind down CUDA support for Maxwell and Pascal | 187 | "Nvidia's release notes for CUDA 12.8 revealed that Maxwell, Pascal, and Volta GPUs will likely transition to the legacy driver branch. The document states that "architecture support for Maxwell, Pascal, and Volta is considered feature-complete and will be frozen in an upcoming release."
I think most of us knew this day was coming soon. I wouldn't fret too much about it though. This doesn't mean that the cards will stop working or any software built on CUDA will stop working anytime soon. Even if CUDA 12.8 is the last version to support Pascal, I think open source projects like llama.cpp will continue supporting those cards for a few more years, given how widely used Pascal is in the community and the lack of any decently priced alternatives until now.
If anyone is considering buying a P40 for a new build, I don't think they should change their plans because of this announcement, especially if they find a good deal on the P40.
Personally, I have 10 P40s (just bought 5 last week at $180/card), 4 P100s, and 4 V100s and I'm not planning on retiring them anytime soon. They're great and work really well for my use cases. | 2025-01-25T15:01:14 | https://www.tomshardware.com/pc-components/gpu-drivers/nvidia-starts-phasing-out-maxwell-pascal-and-volta-gpus-geforce-driver-support-status-unclear | FullstackSensei | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 1i9oqou | false | null | t3_1i9oqou | /r/LocalLLaMA/comments/1i9oqou/nvidia_to_wind_down_cuda_support_for_maxwell_and/ | false | false | default | 187 | null |
Llama 3 speech understanding? | 1 | [removed] | 2025-01-25T15:07:09 | https://www.reddit.com/r/LocalLLaMA/comments/1i9ov7e/llama_3_speech_understanding/ | Cucumberbatch99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9ov7e | false | null | t3_1i9ov7e | /r/LocalLLaMA/comments/1i9ov7e/llama_3_speech_understanding/ | false | false | self | 1 | null |
Llama 3 speech understanding? | 1 | [removed] | 2025-01-25T15:08:45 | https://www.reddit.com/r/LocalLLaMA/comments/1i9owcp/llama_3_speech_understanding/ | Cucumberbatch99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9owcp | false | null | t3_1i9owcp | /r/LocalLLaMA/comments/1i9owcp/llama_3_speech_understanding/ | false | false | self | 1 | null |
20 million tokens in just a few days! | 1 | [removed] | 2025-01-25T15:24:54 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1i9p8qf | false | null | t3_1i9p8qf | /r/LocalLLaMA/comments/1i9p8qf/20_million_tokens_in_just_a_few_days/ | false | false | default | 1 | null |
Is it just me, or is O1 better than R1 for coding? | 1 | [removed] | 2025-01-25T15:26:59 | https://www.reddit.com/r/LocalLLaMA/comments/1i9paav/is_it_just_me_or_is_o1_better_than_r1_for_coding/ | 0uternet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9paav | false | null | t3_1i9paav | /r/LocalLLaMA/comments/1i9paav/is_it_just_me_or_is_o1_better_than_r1_for_coding/ | false | false | self | 1 | null |
How to run deepseek on mbp | 1 | [removed] | 2025-01-25T15:31:35 | https://www.reddit.com/r/LocalLLaMA/comments/1i9pds5/how_to_run_deepseek_on_mbp/ | sks8100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9pds5 | false | null | t3_1i9pds5 | /r/LocalLLaMA/comments/1i9pds5/how_to_run_deepseek_on_mbp/ | false | false | self | 1 | null |
Deepseek-R1 prompt for ERP | 1 | [removed] | 2025-01-25T15:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/1i9pwy8/deepseekr1_prompt_for_erp/ | nsfw_throwitaway69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9pwy8 | false | null | t3_1i9pwy8 | /r/LocalLLaMA/comments/1i9pwy8/deepseekr1_prompt_for_erp/ | false | false | nsfw | 1 | null |
R1 when it says 'Wait' in its chain-of-thought | 1 | 2025-01-25T15:58:30 | Fadawah | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i9pzf2 | false | null | t3_1i9pzf2 | /r/LocalLLaMA/comments/1i9pzf2/r1_when_it_says_wait_in_its_chainofthought/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'r5vyvmvo7hqFpniHmFuTVX92yxga487IGv-f9XtRFWo', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/vwhyvh47x5fe1.jpeg?width=108&crop=smart&auto=webp&s=19900679cc5498483d501e4a9676c269f046dff3', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/vwhyvh47x5fe1.jpeg?width=216&crop=smart&auto=webp&s=f6ccbe7bff6d668ca6821e9b9d21e75e5dbcc86d', 'width': 216}], 'source': {'height': 185, 'url': 'https://preview.redd.it/vwhyvh47x5fe1.jpeg?auto=webp&s=dd1442eaec3001b7ac5f69ec847b504bb7fe3dd2', 'width': 258}, 'variants': {}}]} |
Best local ai setup for writing? | 1 | [removed] | 2025-01-25T16:00:16 | https://www.reddit.com/r/LocalLLaMA/comments/1i9q0vy/best_local_ai_setup_for_writing/ | Game-Lover44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9q0vy | false | null | t3_1i9q0vy | /r/LocalLLaMA/comments/1i9q0vy/best_local_ai_setup_for_writing/ | false | false | self | 1 | null |
Exporting Deepseek R1 to executorch? | 4 | Has anyone tried exporting a DeepSeek R1 model to a `.pte` file?
I've tried with no luck, using the qwen 1.5B model, but I get segmentation fault errors.
Also, the fact that I have zero experience with Python or C++ could be part of the reason.
I just followed the tutorials and Cursor'd my way there | 2025-01-25T16:16:49 | https://www.reddit.com/r/LocalLLaMA/comments/1i9qekr/exporting_deepseek_r1_to_executorch/ | gabrielmoncha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9qekr | false | null | t3_1i9qekr | /r/LocalLLaMA/comments/1i9qekr/exporting_deepseek_r1_to_executorch/ | false | false | self | 4 | null |
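For what it's worth, the generic export path looks roughly like the sketch below; the API names follow the ExecuTorch docs but have changed between releases, so verify against your installed version. Segfaults at export time are often memory pressure or unsupported ops, and the ExecuTorch repo ships dedicated LLM export scripts in its examples that are usually the saner route for models like the R1 distills.

```python
# Generic ExecuTorch export sketch (verify API names for your version).
import torch
import torch.nn as nn
from torch.export import export
from executorch.exir import to_edge

class TinyModel(nn.Module):  # stand-in for the real model
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.linear(x))

exported = export(TinyModel().eval(), (torch.randn(1, 16),))
edge = to_edge(exported)
et_program = edge.to_executorch()

with open("model.pte", "wb") as f:
    f.write(et_program.buffer)
```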
Scientists Experiment With Subjecting AI to Pain | 0 | 2025-01-25T16:21:27 | https://futurism.com/scientists-experiment-with-subjecting-ai-to-pain | DeltaSqueezer | futurism.com | 1970-01-01T00:00:00 | 0 | {} | 1i9qid0 | false | null | t3_1i9qid0 | /r/LocalLLaMA/comments/1i9qid0/scientists_experiment_with_subjecting_ai_to_pain/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'aquXGkvOoT5HriRhgPCEsAI8pe6YXfhsB3_GvGdDlFA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/HYrIufUdsC4ipwoqLJG3oGymisa5o6L8qr05Xlrn83A.jpg?width=108&crop=smart&auto=webp&s=a9d3e5f4af2a54b436d2f7bfbfc525f19a624c39', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/HYrIufUdsC4ipwoqLJG3oGymisa5o6L8qr05Xlrn83A.jpg?width=216&crop=smart&auto=webp&s=082a233eccc1eb4730d2a6c6394f985e292dccef', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/HYrIufUdsC4ipwoqLJG3oGymisa5o6L8qr05Xlrn83A.jpg?width=320&crop=smart&auto=webp&s=05973df14b9346f0c537e10c1dce8ed71c921881', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/HYrIufUdsC4ipwoqLJG3oGymisa5o6L8qr05Xlrn83A.jpg?width=640&crop=smart&auto=webp&s=6e78440e9db90c061ed0e63fe84e0e061e3d86aa', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/HYrIufUdsC4ipwoqLJG3oGymisa5o6L8qr05Xlrn83A.jpg?width=960&crop=smart&auto=webp&s=a54ee02610003ce79de3d1e0c786c98870561ed7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/HYrIufUdsC4ipwoqLJG3oGymisa5o6L8qr05Xlrn83A.jpg?width=1080&crop=smart&auto=webp&s=afc3a6991e5789d29c787549e047a3c6ff538369', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/HYrIufUdsC4ipwoqLJG3oGymisa5o6L8qr05Xlrn83A.jpg?auto=webp&s=b49e5853fa267650b34b0d8aaed99e4ac5d10b00', 'width': 1200}, 'variants': {}}]} |
Everyone is obsessed with R1 lol | 0 | 2025-01-25T16:43:23 | https://www.technologyreview.com/2025/01/24/1110526/china-deepseek-top-ai-despite-sanctions/ | ab2377 | technologyreview.com | 1970-01-01T00:00:00 | 0 | {} | 1i9r092 | false | null | t3_1i9r092 | /r/LocalLLaMA/comments/1i9r092/everyone_is_obsessed_with_r1_lol/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'rmRU4UFwnR-JMrWP9P7haqCJCRGnl_cP0aD8-KSfxPk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ALpZzRv9EjRDsRb0NfWYOhjPF9mYLxZT3aMsQqjW4zU.jpg?width=108&crop=smart&auto=webp&s=7ec6a15a7b4b8f76d042300c4cb5bd72a02ced73', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ALpZzRv9EjRDsRb0NfWYOhjPF9mYLxZT3aMsQqjW4zU.jpg?width=216&crop=smart&auto=webp&s=224e3163e4c944012f07d83f1d8bfb1bf116f6ce', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ALpZzRv9EjRDsRb0NfWYOhjPF9mYLxZT3aMsQqjW4zU.jpg?width=320&crop=smart&auto=webp&s=146cdb933f7fe963196a05acbc73cfdd3c74c503', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ALpZzRv9EjRDsRb0NfWYOhjPF9mYLxZT3aMsQqjW4zU.jpg?width=640&crop=smart&auto=webp&s=41888010eeca87b9a090bd0648e2c69883a85475', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ALpZzRv9EjRDsRb0NfWYOhjPF9mYLxZT3aMsQqjW4zU.jpg?width=960&crop=smart&auto=webp&s=435f8c48fcbc95ed3a592bdf3574de7a3613021a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ALpZzRv9EjRDsRb0NfWYOhjPF9mYLxZT3aMsQqjW4zU.jpg?width=1080&crop=smart&auto=webp&s=b1e5f41987aa993b36f054814943375c165f9cd4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ALpZzRv9EjRDsRb0NfWYOhjPF9mYLxZT3aMsQqjW4zU.jpg?auto=webp&s=356d00c1f435f9067d7dcc35f9427f2953126fad', 'width': 1200}, 'variants': {}}]} |
Recreating R1 | 1 | 2025-01-25T17:05:01 | https://llmstock.com/post/297 | Vegetable_Sun_9225 | llmstock.com | 1970-01-01T00:00:00 | 0 | {} | 1i9rihe | false | null | t3_1i9rihe | /r/LocalLLaMA/comments/1i9rihe/recreating_r1/ | false | false | 1 | {'enabled': False, 'images': [{'id': '-2VbjAXvrrZL0RjtZTqC9zezUB1tCYmDw2ccrFVDVws', 'resolutions': [{'height': 43, 'url': 'https://external-preview.redd.it/rncrqeeHwHOgs4bbISDh0bG52tEK0G3e8X2wqNngNEw.jpg?width=108&crop=smart&auto=webp&s=40a5f7b886036119a9379d6d7aed9241ca63a92d', 'width': 108}, {'height': 86, 'url': 'https://external-preview.redd.it/rncrqeeHwHOgs4bbISDh0bG52tEK0G3e8X2wqNngNEw.jpg?width=216&crop=smart&auto=webp&s=43c7576c2cee899d2b0d8fe33f9948c606e92dfb', 'width': 216}, {'height': 128, 'url': 'https://external-preview.redd.it/rncrqeeHwHOgs4bbISDh0bG52tEK0G3e8X2wqNngNEw.jpg?width=320&crop=smart&auto=webp&s=606759c877a4407bc52f5407d441f908091abcc1', 'width': 320}, {'height': 256, 'url': 'https://external-preview.redd.it/rncrqeeHwHOgs4bbISDh0bG52tEK0G3e8X2wqNngNEw.jpg?width=640&crop=smart&auto=webp&s=0bed9607788f9073bb1db36aecb900f4da535bf8', 'width': 640}, {'height': 385, 'url': 'https://external-preview.redd.it/rncrqeeHwHOgs4bbISDh0bG52tEK0G3e8X2wqNngNEw.jpg?width=960&crop=smart&auto=webp&s=42f1b7c42cc0272ca4a48f82715882be2a0a8fd7', 'width': 960}, {'height': 433, 'url': 'https://external-preview.redd.it/rncrqeeHwHOgs4bbISDh0bG52tEK0G3e8X2wqNngNEw.jpg?width=1080&crop=smart&auto=webp&s=2aa97ee17da12d4cd0cae3f7221b3590e25bbdec', 'width': 1080}], 'source': {'height': 506, 'url': 'https://external-preview.redd.it/rncrqeeHwHOgs4bbISDh0bG52tEK0G3e8X2wqNngNEw.jpg?auto=webp&s=0cab401b0afc71f2f393d628a093505efc0add84', 'width': 1261}, 'variants': {}}]} |
Qwen-2.5-Coder 32B – can I run it with out downloading it? | 1 | [removed] | 2025-01-25T17:14:57 | https://www.reddit.com/r/LocalLLaMA/comments/1i9rqua/qwen25coder_32b_can_i_run_it_with_out_downloading/ | Crazy_Emotion7258 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9rqua | false | null | t3_1i9rqua | /r/LocalLLaMA/comments/1i9rqua/qwen25coder_32b_can_i_run_it_with_out_downloading/ | false | false | self | 1 | null |
The Short Case for Nvidia Stock | 0 | [removed] | 2025-01-25T17:24:04 | https://youtubetranscriptoptimizer.com/blog/05_the_short_case_for_nvda | dicklesworth | youtubetranscriptoptimizer.com | 1970-01-01T00:00:00 | 0 | {} | 1i9ryhv | false | null | t3_1i9ryhv | /r/LocalLLaMA/comments/1i9ryhv/the_short_case_for_nvidia_stock/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'thJ5BxEk87EZVp7QqMbp7nTITRyn1hXsN8gRr0W5ipU', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?width=108&crop=smart&auto=webp&s=3effac6e758bd4704331f88a46398c7ae0c99c3b', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?width=216&crop=smart&auto=webp&s=b00d836cb56bb2d008a84c61af1becef4da2f33d', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?width=320&crop=smart&auto=webp&s=58b9e885e34d6216c21d40e40935e51cfaba0605', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?width=640&crop=smart&auto=webp&s=b20e65876fa5792071fa0019f787278be05ebcf7', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?width=960&crop=smart&auto=webp&s=09032fc1d528072cd9bd3f4d676aeec4f3720cac', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?width=1080&crop=smart&auto=webp&s=19800fdf5ab979a84b61400d7841a2762150a34b', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/RN04t8ZmhcTwxR5vdBW9PBOvw4x8w9WRKihf3vCk99c.jpg?auto=webp&s=106f531822053b79291cb94625e1ea554a3b1619', 'width': 1792}, 'variants': {}}]} |
VRAM for Fine-Tuning Llama 3.2 3b | 1 | Hi! I'm wanting to fine-tune Llama 3.2 3b, but I'm on a heavy budget. I'm going to be using Unsloth and I'm going to be using QLora. The question is, can I get away with something like an 8GB 4060, or should I go for the 12GB 3060? I know the 4060 is more performant, but the 3060 allows for larger batch sizes. Just trying to figure out which trade offs would be more beneficial in the long run. Thank you for any suggestions! | 2025-01-25T17:31:06 | https://www.reddit.com/r/LocalLLaMA/comments/1i9s4ez/vram_for_finetuning_llama_32_3b/ | MushroomGecko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9s4ez | false | null | t3_1i9s4ez | /r/LocalLLaMA/comments/1i9s4ez/vram_for_finetuning_llama_32_3b/ | false | false | self | 1 | null |
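For scale: a 3B model in 4-bit is roughly 2 GB of weights, and QLoRA adds comparatively little on top (adapters, optimizer state, activations), so 8 GB is generally workable at short contexts and small batches, while the 3060's 12 GB mainly buys batch size and sequence length. A minimal Unsloth-style setup sketch follows; the checkpoint and argument names follow Unsloth's docs but should be checked against the current release.

```python
# QLoRA setup with Unsloth (sketch; verify API against current docs).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Llama-3.2-3B-Instruct-bnb-4bit",  # pre-quantized 4-bit checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",  # trades compute for memory
)
```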
What are the best 12b- 27b models for story telling (preferably uncensored) | 1 | [removed] | 2025-01-25T17:31:37 | https://www.reddit.com/r/LocalLLaMA/comments/1i9s4uq/what_are_the_best_12b_27b_models_for_story/ | itis_whatit-is | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9s4uq | false | null | t3_1i9s4uq | /r/LocalLLaMA/comments/1i9s4uq/what_are_the_best_12b_27b_models_for_story/ | false | false | self | 1 | null |
Some people on Reddit suggest that the $3000 Digits may have a limited amount of memory | 0 | 2025-01-25T17:32:11 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1i9s5bc | false | null | t3_1i9s5bc | /r/LocalLLaMA/comments/1i9s5bc/some_people_on_reddit_suggest_that_the_3000/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'WQXAZgILxrOFfJXEBVVD_DKyy581MTZcEX22oyarGy8', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/4woy6p4sd6fe1.png?width=108&crop=smart&auto=webp&s=44e55bacaeccfbd21aaab14792d9fbdba87b9644', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/4woy6p4sd6fe1.png?width=216&crop=smart&auto=webp&s=cfaec3e925adc6d310769186f7d9f67bf6441208', 'width': 216}, {'height': 186, 'url': 'https://preview.redd.it/4woy6p4sd6fe1.png?width=320&crop=smart&auto=webp&s=653befe28311c997a74ba03482fb2b2d9fe89537', 'width': 320}, {'height': 372, 'url': 'https://preview.redd.it/4woy6p4sd6fe1.png?width=640&crop=smart&auto=webp&s=2b9d9c5b0c6d1825ceff33c068fe7f45a44c7789', 'width': 640}, {'height': 559, 'url': 'https://preview.redd.it/4woy6p4sd6fe1.png?width=960&crop=smart&auto=webp&s=d9623f1f83daf7053b277ea6e65ee092a9249603', 'width': 960}, {'height': 629, 'url': 'https://preview.redd.it/4woy6p4sd6fe1.png?width=1080&crop=smart&auto=webp&s=33ae5d2a6fca41a878d02873242c47e996f58c6a', 'width': 1080}], 'source': {'height': 994, 'url': 'https://preview.redd.it/4woy6p4sd6fe1.png?auto=webp&s=25c10886d2a16f3c7cf7a2b09b15f4500e2cd2a2', 'width': 1706}, 'variants': {}}]} |
|||
DeepSeek R1 does not compile shaders and does not warm up the model | 0 | hi
As the title suggests, I am stuck at the message saying "Compiling shaders and warming up model...".
It happens both when using [this Hugging Face link](https://huggingface.co/spaces/webml-community/deepseek-r1-webgpu) and when running it locally in my browser.
I am on Linux Mint and I am using Firefox Nightly. WebGPU works fine in this browser.
thanks | 2025-01-25T17:36:14 | https://www.reddit.com/r/LocalLLaMA/comments/1i9s8qe/deepseek_r1_does_not_compile_shaders_and_does_not/ | Impolioid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9s8qe | false | null | t3_1i9s8qe | /r/LocalLLaMA/comments/1i9s8qe/deepseek_r1_does_not_compile_shaders_and_does_not/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '9fF3UylqBOnfW8jU91qhq_IBNgoKsXW_LUu0tDJRjmg', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/vJB4UFQVZHwFuSpkKRAIao5Dro2V6Om-XJMu85dMq-8.jpg?width=108&crop=smart&auto=webp&s=920262c263e5d3a2fe44c2468f0eac6afe6bbd43', 'width': 108}, {'height': 101, 'url': 'https://external-preview.redd.it/vJB4UFQVZHwFuSpkKRAIao5Dro2V6Om-XJMu85dMq-8.jpg?width=216&crop=smart&auto=webp&s=50fc0009968918174b0a6d5d65ceb993b46799ab', 'width': 216}, {'height': 149, 'url': 'https://external-preview.redd.it/vJB4UFQVZHwFuSpkKRAIao5Dro2V6Om-XJMu85dMq-8.jpg?width=320&crop=smart&auto=webp&s=bc0e03a869e8176635576a4c7a13ec26d4522a59', 'width': 320}, {'height': 299, 'url': 'https://external-preview.redd.it/vJB4UFQVZHwFuSpkKRAIao5Dro2V6Om-XJMu85dMq-8.jpg?width=640&crop=smart&auto=webp&s=c1d89998fbb286d7000fe27d60e790c455673af4', 'width': 640}, {'height': 449, 'url': 'https://external-preview.redd.it/vJB4UFQVZHwFuSpkKRAIao5Dro2V6Om-XJMu85dMq-8.jpg?width=960&crop=smart&auto=webp&s=90b2b21179f32e61cd1c82cb4dd0af26616000ef', 'width': 960}, {'height': 505, 'url': 'https://external-preview.redd.it/vJB4UFQVZHwFuSpkKRAIao5Dro2V6Om-XJMu85dMq-8.jpg?width=1080&crop=smart&auto=webp&s=3cfc4e571779ea1ccfcecb4d87f7d71dc44383bf', 'width': 1080}], 'source': {'height': 710, 'url': 'https://external-preview.redd.it/vJB4UFQVZHwFuSpkKRAIao5Dro2V6Om-XJMu85dMq-8.jpg?auto=webp&s=ce15cdb281c431d2e116dad1b2233db08372df9d', 'width': 1518}, 'variants': {}}]} |
Man this is crazy it kept thinking all the anime names for more than 10 minutes 😭 | 1 | [removed] | 2025-01-25T17:52:24 | https://www.reddit.com/r/LocalLLaMA/comments/1i9slzq/man_this_is_crazy_it_kept_thinking_all_the_anime/ | No-Point-6492 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9slzq | false | null | t3_1i9slzq | /r/LocalLLaMA/comments/1i9slzq/man_this_is_crazy_it_kept_thinking_all_the_anime/ | false | false | self | 1 | null |
How Chinese AI Startup DeepSeek Made a Model that Rivals OpenAI | 93 | 2025-01-25T18:10:15 | https://www.wired.com/story/deepseek-china-model-ai/ | CarbonTail | wired.com | 1970-01-01T00:00:00 | 0 | {} | 1i9t0x2 | false | null | t3_1i9t0x2 | /r/LocalLLaMA/comments/1i9t0x2/how_chinese_ai_startup_deepseek_made_a_model_that/ | false | false | 93 | {'enabled': False, 'images': [{'id': 'Uzs7TRBFbiMyREkgUcaef2dNNw_2N6Oc9X3t0xvNzsY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GaYe6FpTRtNr23ADdM65dvNw3TVMjwFcEfKfrHC4ukE.jpg?width=108&crop=smart&auto=webp&s=cd540f290ba469a36c2338042eb9f77ba54694b4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GaYe6FpTRtNr23ADdM65dvNw3TVMjwFcEfKfrHC4ukE.jpg?width=216&crop=smart&auto=webp&s=b21c0f64a2856187f68ad9fac120b1755cab95fe', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/GaYe6FpTRtNr23ADdM65dvNw3TVMjwFcEfKfrHC4ukE.jpg?width=320&crop=smart&auto=webp&s=2045b63bef8f3b11323fce9dd16c4f1c17a615f5', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/GaYe6FpTRtNr23ADdM65dvNw3TVMjwFcEfKfrHC4ukE.jpg?width=640&crop=smart&auto=webp&s=aa1a135c85bd082bf94671971fb8ea8e80f02eb2', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/GaYe6FpTRtNr23ADdM65dvNw3TVMjwFcEfKfrHC4ukE.jpg?width=960&crop=smart&auto=webp&s=df533ffc334c591484a101da8d724c89411b8132', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/GaYe6FpTRtNr23ADdM65dvNw3TVMjwFcEfKfrHC4ukE.jpg?width=1080&crop=smart&auto=webp&s=233b18e79f1f8c5cab1f8176072317c569f47b45', 'width': 1080}], 'source': {'height': 670, 'url': 'https://external-preview.redd.it/GaYe6FpTRtNr23ADdM65dvNw3TVMjwFcEfKfrHC4ukE.jpg?auto=webp&s=5ff4912de3b1f89b47df384268e701bdf34c48d1', 'width': 1280}, 'variants': {}}]} |
Canceling GPT Subscription | 0 | I would cancel my ChatGPT subscription, but there isn't a DeepSeek widget for the Opera GX sidebar yet. Guess I'll keep paying $20 a month. | 2025-01-25T18:16:04 | https://www.reddit.com/r/LocalLLaMA/comments/1i9t5ny/canceling_gpt_subscription/ | PersonalityUseful962 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9t5ny | false | null | t3_1i9t5ny | /r/LocalLLaMA/comments/1i9t5ny/canceling_gpt_subscription/ | false | false | self | 0 | null
DeepSeek suspended my account for 5 days for generating Resident Evil 4 ASCII art | 1 | [removed] | 2025-01-25T18:16:09 | https://www.reddit.com/r/LocalLLaMA/comments/1i9t5q3/deepseek_suspended_my_account_for_5_days_for/ | Few-Enthusiasm5030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9t5q3 | false | null | t3_1i9t5q3 | /r/LocalLLaMA/comments/1i9t5q3/deepseek_suspended_my_account_for_5_days_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'CB4ufN1aILmQjfy6zx8RR4Bi-Kpr7upzpy_pCGMm_l0', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?width=108&crop=smart&auto=webp&s=aaab9498359e4ddbc2ebbb141cf3f265b8d610d2', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?width=216&crop=smart&auto=webp&s=fdecb35367409460ed146a57942e0aa3391758ac', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?width=320&crop=smart&auto=webp&s=b1fe0e90d8a450f49c168b173d0d8ec843a62780', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?width=640&crop=smart&auto=webp&s=3220b737616889337aa2ab22addbd9486bb10ef3', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?width=960&crop=smart&auto=webp&s=684e9161c0cc1c98dd3dd0a462274f421dbd1c5f', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?width=1080&crop=smart&auto=webp&s=ecc193f2d58f053bef6f523e7a0ca7d3d173c81e', 'width': 1080}], 'source': {'height': 2532, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?auto=webp&s=db4fd396433e38497f19c0f0d7c7232b3be18b6f', 'width': 1170}, 'variants': {}}]} |
DeepSeek suspended my account for 5 days for generating Resident Evil 4 ASCII art... | 1 | [removed] | 2025-01-25T18:20:22 | https://www.reddit.com/r/LocalLLaMA/comments/1i9t92n/deepseek_suspended_my_account_for_5_days_for/ | TransmissionOfTerror | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1i9t92n | false | null | t3_1i9t92n | /r/LocalLLaMA/comments/1i9t92n/deepseek_suspended_my_account_for_5_days_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'CB4ufN1aILmQjfy6zx8RR4Bi-Kpr7upzpy_pCGMm_l0', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?width=108&crop=smart&auto=webp&s=aaab9498359e4ddbc2ebbb141cf3f265b8d610d2', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?width=216&crop=smart&auto=webp&s=fdecb35367409460ed146a57942e0aa3391758ac', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?width=320&crop=smart&auto=webp&s=b1fe0e90d8a450f49c168b173d0d8ec843a62780', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?width=640&crop=smart&auto=webp&s=3220b737616889337aa2ab22addbd9486bb10ef3', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?width=960&crop=smart&auto=webp&s=684e9161c0cc1c98dd3dd0a462274f421dbd1c5f', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?width=1080&crop=smart&auto=webp&s=ecc193f2d58f053bef6f523e7a0ca7d3d173c81e', 'width': 1080}], 'source': {'height': 2532, 'url': 'https://external-preview.redd.it/oao5cqj4aTRzrqw5zVSiq8w3ZdV8H5TeIB-xCnK0gFE.jpg?auto=webp&s=db4fd396433e38497f19c0f0d7c7232b3be18b6f', 'width': 1170}, 'variants': {}}]} |