| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string, nullable) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
CLOSEDAI | 138 | I am a pro subscriber to OpenAI, but I am changing that today. I have been using it to help teach myself how to train LLMs, to create a RAG bot, and to learn programming skills in general. Today they banned my account, I believe because I was using it to help write code for training these LLMs.
It is frustrating how much more censored and useless the OpenAI software is compared to its competitors. I refuse to be taken advantage of anymore after this.
This is why open source is so important!!!! | 2025-01-30T19:45:30 | LostMyOtherAcct69 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idvlz8 | false | null | t3_1idvlz8 | /r/LocalLLaMA/comments/1idvlz8/closedai/ | false | false | 138 | {'enabled': True, 'images': [{'id': 'EFo3HXq1Xv6AwCmER_18s6G3ZNsgqcXAtPL47pnYqqI', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/114wlf69q6ge1.jpeg?width=108&crop=smart&auto=webp&s=eeea86642af840f85a22f56e13942d359113b4fc', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/114wlf69q6ge1.jpeg?width=216&crop=smart&auto=webp&s=f8c716c412676d81a2bdcd8db872d6a22cdb684b', 'width': 216}, {'height': 246, 'url': 'https://preview.redd.it/114wlf69q6ge1.jpeg?width=320&crop=smart&auto=webp&s=3e79a2ebde12cba510bfb25af4bee55e5673d8a7', 'width': 320}, {'height': 492, 'url': 'https://preview.redd.it/114wlf69q6ge1.jpeg?width=640&crop=smart&auto=webp&s=5e1a65442edd47a375b75cd795715035c8cc50dc', 'width': 640}, {'height': 738, 'url': 'https://preview.redd.it/114wlf69q6ge1.jpeg?width=960&crop=smart&auto=webp&s=08e7c98aec4d5948020fe738b82d228efb8692d0', 'width': 960}, {'height': 831, 'url': 'https://preview.redd.it/114wlf69q6ge1.jpeg?width=1080&crop=smart&auto=webp&s=92ba6edd946a083acf4fc32073f3315e938b5f2a', 'width': 1080}], 'source': {'height': 993, 'url': 'https://preview.redd.it/114wlf69q6ge1.jpeg?auto=webp&s=5ba6f3ef650628e6fab71a239950476395df39b2', 'width': 1290}, 'variants': {}}]} |
Hello, I have a question: is there any 13B NSFW model that works a little bit fast on 8GB VRAM? | 1 | [removed] | 2025-01-30T19:48:32 | https://www.reddit.com/r/LocalLLaMA/comments/1idvoi8/hello_i_have_question_its_there_any_13b_nfsw/ | Former-Injury-9715 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idvoi8 | false | null | t3_1idvoi8 | /r/LocalLLaMA/comments/1idvoi8/hello_i_have_question_its_there_any_13b_nfsw/ | false | false | nsfw | 1 | null |
Question: Practical Local Setup for Testing and Comparisons to Scale Company Server | 3 | I've sadly not kept up with the self hosted LLM scene, hoping to rectify that.
I'm wondering if anyone has some resources or experience to share regarding the usefulness of running a local LLM on a typical desktop set-up that was used for gaming, in my case a Ryzen 7: 8 core, and AMD 5700XT 8GB (I could buy another 8GB GPU if it would make sense), 32GB system ram.
A lot of the guys at my company are using various set-ups for code assistance, which includes a random assortment of all the big paid models. I'm thinking ideally I want to set up a server in house that can handle our workloads, either for a team of \~5 people, or department \~50-100 people. Most people are using some kind of MacBook from M1 up to M4, and some Pro models. Obviously everyone is talking about Deepseek, and it's made me aware that there will be huge leaps in open source LLMs constantly, but we're also taking data security much more seriously than we have in the past.
Ideally what I'd like to do is just test out some different models on my home PC and plug them into something like [continue.dev](http://continue.dev), but I'm wondering if there is anything meaningful to be gleaned from my home setup, or do I need to spend money to test out 70B+ models to get a feel for how good they are? There are just so many options and it's giving me decision paralysis, but I want to test a few before spending any real money.
1. Is the accuracy and performance going to be meaningful at this kind of system power? 8GB GPU or 32GB system ram, and would upgrading to 16GB VRAM be worth the 200 Canadian Bucks for an extra card?
2. If a home desktop is capable enough, which models should I be using? Just at a glance it seems that Qwen2.5 1.5B or 7B for code completion, dunno what works as the best chat model ATM.
3. How big/expensive of a machine would be good for a small team (5 people), how much for a large team (50-100 people)? Again what are currently good models for this kind of setup.
4. Or is it just easier/better/cheaper for me to have everyone bill api tokens? Though if costs are even close I'd rather just go with local.
Apologies if this has been asked too many times, if there is a similar thread asking this already feel free to point me in that direction.
| 2025-01-30T19:54:16 | https://www.reddit.com/r/LocalLLaMA/comments/1idvte8/question_practical_local_setup_for_testing_and/ | fishermansfriendly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idvte8 | false | null | t3_1idvte8 | /r/LocalLLaMA/comments/1idvte8/question_practical_local_setup_for_testing_and/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'JoLAbcgPAn_D7ExuVvyaNJpSY81e3Jca27FTj1G8-xQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=108&crop=smart&auto=webp&s=b6c70517bb80bca66bf94d99af93ec23982e2986', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=216&crop=smart&auto=webp&s=146011169cd4033ebcd4b883efc62f0bd345d74b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=320&crop=smart&auto=webp&s=7a560fe31ff4e8b423a9029c052df232e0365572', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=640&crop=smart&auto=webp&s=ea9ff85c4782247e303164d9d75b4071d789f397', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=960&crop=smart&auto=webp&s=81aa9753e911761e0c56b3b897ba0f44cafff21d', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=1080&crop=smart&auto=webp&s=a67fd0983e228aa2fa0a2ba466c071793fe21afc', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?auto=webp&s=92948afd26cc637bb25c79223a1b99b3ecbbbfa2', 'width': 2401}, 'variants': {}}]} |
Re-Distilling DeepSeek R1 | 121 | We’ve improved DeepSeek R1 distilled models using logits distillation—delivering +4-14% gains on GSM8K while only spending $3-18 per training run.
Details at [https://mobiusml.github.io/r1\_redistill\_blogpost/](https://mobiusml.github.io/r1_redistill_blogpost/)
Models are available on Hugging Face - run them efficiently with HQQ! [https://huggingface.co/collections/mobiuslabsgmbh/deepseek-r1-redistill-6793d3bea92c7fff0639ab4d](https://huggingface.co/collections/mobiuslabsgmbh/deepseek-r1-redistill-6793d3bea92c7fff0639ab4d) | 2025-01-30T19:55:23 | https://www.reddit.com/r/LocalLLaMA/comments/1idvuch/redistilling_deepseek_r1/ | sightio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idvuch | false | null | t3_1idvuch | /r/LocalLLaMA/comments/1idvuch/redistilling_deepseek_r1/ | false | false | self | 121 | {'enabled': False, 'images': [{'id': 'quMAqnjmAIQkPbghbNYRQNGedKPZwnB_tnHwGHo2Tz8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/1LYPWTgJb8k1Bsg485fI-lILmjFGV6ZSzEgX-AG8h-Q.jpg?width=108&crop=smart&auto=webp&s=af70360caa628d40da1c6dc84cdcfd438886d58d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/1LYPWTgJb8k1Bsg485fI-lILmjFGV6ZSzEgX-AG8h-Q.jpg?width=216&crop=smart&auto=webp&s=269fc1a1d0457a3f6a27b65ca6286c84945021e9', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/1LYPWTgJb8k1Bsg485fI-lILmjFGV6ZSzEgX-AG8h-Q.jpg?width=320&crop=smart&auto=webp&s=d918006565d20f7921d1bc6c00717583a867b71b', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/1LYPWTgJb8k1Bsg485fI-lILmjFGV6ZSzEgX-AG8h-Q.jpg?width=640&crop=smart&auto=webp&s=fadfe2e09cc1c6052584c26bd886f6ae3d925a97', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/1LYPWTgJb8k1Bsg485fI-lILmjFGV6ZSzEgX-AG8h-Q.jpg?width=960&crop=smart&auto=webp&s=4c9238e9b3c422a3f0bd8b1f2315ca4be09ea07a', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/1LYPWTgJb8k1Bsg485fI-lILmjFGV6ZSzEgX-AG8h-Q.jpg?width=1080&crop=smart&auto=webp&s=d06abac842243f5a398536646a64991494d2a5f7', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/1LYPWTgJb8k1Bsg485fI-lILmjFGV6ZSzEgX-AG8h-Q.jpg?auto=webp&s=5f3152e791fbe0b60c072d7f614e6e8b0decd784', 'width': 2048}, 'variants': {}}]} |
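For intuition, the core of logits distillation is a KL term between softened teacher and student distributions. The sketch below is generic and hedged: it is not the authors' training code, and the temperature and shapes are illustrative.

```python
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a shared temperature.
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # KL(teacher || student) over the vocabulary; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2
```

In practice this term is usually mixed with the ordinary cross-entropy loss on the ground-truth tokens.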
An SFT only reasoning fine tune. Doesn't beat XYZ model on ABC metric, or do mathematics, but its fun to play with. | 7 | 2025-01-30T19:57:46 | https://huggingface.co/chrisrutherford/Llama-3.1-8B-ReasonChatV1 | lolzinventor | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1idvwch | false | null | t3_1idvwch | /r/LocalLLaMA/comments/1idvwch/an_sft_only_reasoning_fine_tune_doesnt_beat_xyz/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'M_6K6IB6GvpF77WxcCcXUOKcbTzVwSs9xmKaDiBGm8Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/F5fkXGr8ZQsf3YKTv05fJz1O4oGqwNEHI7I-dVQMh1w.jpg?width=108&crop=smart&auto=webp&s=b2b67e8e8f1f0ab24e97452ce7df990562bbec08', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/F5fkXGr8ZQsf3YKTv05fJz1O4oGqwNEHI7I-dVQMh1w.jpg?width=216&crop=smart&auto=webp&s=eb7228ff8ef356b2a7895fa7feb8b86132183608', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/F5fkXGr8ZQsf3YKTv05fJz1O4oGqwNEHI7I-dVQMh1w.jpg?width=320&crop=smart&auto=webp&s=4a4d4de33c62524f76b64480475eb0eb6c01325a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/F5fkXGr8ZQsf3YKTv05fJz1O4oGqwNEHI7I-dVQMh1w.jpg?width=640&crop=smart&auto=webp&s=cdc0b5385c6cf5e69f464ef350e43aafefd40d6b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/F5fkXGr8ZQsf3YKTv05fJz1O4oGqwNEHI7I-dVQMh1w.jpg?width=960&crop=smart&auto=webp&s=0b953d2325c53ec54fd2944822361273195ab9d9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/F5fkXGr8ZQsf3YKTv05fJz1O4oGqwNEHI7I-dVQMh1w.jpg?width=1080&crop=smart&auto=webp&s=06a644c08987433ba175a14a8f52283d4f9fe9d3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/F5fkXGr8ZQsf3YKTv05fJz1O4oGqwNEHI7I-dVQMh1w.jpg?auto=webp&s=a0f1a4e815a81ddae7b92581c7e4751adecf45c8', 'width': 1200}, 'variants': {}}]} |
Models for learning RAG and KAG | 3 | Hi everyone,
I would like some recommendations for models and resources for learning RAG and KAG, running everything locally. I have a 3070 with 32GB of RAM at home.
The idea here is to run everything locally (don't wanna pay per token while learning) using models/tools good enough so that results are relevant and allow me to learn while iterating test projects.
Thanks in advance! | 2025-01-30T20:03:36 | https://www.reddit.com/r/LocalLLaMA/comments/1idw1rl/models_for_learning_rag_and_kag/ | trgoveia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idw1rl | false | null | t3_1idw1rl | /r/LocalLLaMA/comments/1idw1rl/models_for_learning_rag_and_kag/ | false | false | self | 3 | null |
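For anyone in the same spot, the retrieval half of RAG fits comfortably on that hardware. A minimal sketch, assuming the `sentence-transformers` package; the model name and documents are placeholders, and the final prompt would go to whatever local LLM you pick:

```python
from sentence_transformers import SentenceTransformer, util

docs = ["Paris is the capital of France.", "The RTX 3070 has 8GB of VRAM."]
model = SentenceTransformer("all-MiniLM-L6-v2")  # small, runs on CPU or GPU
doc_emb = model.encode(docs, convert_to_tensor=True)

query = "How much VRAM does a 3070 have?"
q_emb = model.encode(query, convert_to_tensor=True)
best = util.cos_sim(q_emb, doc_emb).argmax().item()  # nearest document

prompt = f"Answer using this context:\n{docs[best]}\n\nQuestion: {query}"
# `prompt` would then be sent to a local model, e.g. via Ollama's API.
```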
5090 Astral draws 620w, is it safe? | 1 | [removed] | 2025-01-30T20:05:53 | https://www.reddit.com/r/LocalLLaMA/comments/1idw3s3/5090_astral_draws_620w_is_it_safe/ | Dry-Bunch-7448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idw3s3 | false | null | t3_1idw3s3 | /r/LocalLLaMA/comments/1idw3s3/5090_astral_draws_620w_is_it_safe/ | false | false | self | 1 | null |
DeepSeek R1 70B now available on Cerebras (1,500 tokens/s) | 1 | 2025-01-30T20:15:36 | https://cerebras.ai/blog/cerebras-launches-worlds-fastest-deepseek-r1-llama-70b-inference | chiwawa20 | cerebras.ai | 1970-01-01T00:00:00 | 0 | {} | 1idwc9o | false | null | t3_1idwc9o | /r/LocalLLaMA/comments/1idwc9o/deepseek_r1_70b_now_available_on_cerebras_1500/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dbM8JeAmPXdrrJsu2lubhyp_YwfZ9dQyGiZv3E0e7Jk', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/2NIoQ-Q5b4mnPSWQ0edbJeWHSYThXD8a2FdgbM_k0uY.jpg?width=108&crop=smart&auto=webp&s=ff6946d1ee129db9aa614dc355d6f3724baba49a', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/2NIoQ-Q5b4mnPSWQ0edbJeWHSYThXD8a2FdgbM_k0uY.jpg?width=216&crop=smart&auto=webp&s=cfa48612c5c1c38b3d29a1bb2211782c2b397b63', 'width': 216}, {'height': 155, 'url': 'https://external-preview.redd.it/2NIoQ-Q5b4mnPSWQ0edbJeWHSYThXD8a2FdgbM_k0uY.jpg?width=320&crop=smart&auto=webp&s=fb14ffa1338604033740e9c5442c4b7fbc715524', 'width': 320}, {'height': 311, 'url': 'https://external-preview.redd.it/2NIoQ-Q5b4mnPSWQ0edbJeWHSYThXD8a2FdgbM_k0uY.jpg?width=640&crop=smart&auto=webp&s=b0cd73d931407f79c625bc5bdb6a98244c027998', 'width': 640}, {'height': 466, 'url': 'https://external-preview.redd.it/2NIoQ-Q5b4mnPSWQ0edbJeWHSYThXD8a2FdgbM_k0uY.jpg?width=960&crop=smart&auto=webp&s=26610f1114782d42d0ceedebe5f469007266df90', 'width': 960}, {'height': 525, 'url': 'https://external-preview.redd.it/2NIoQ-Q5b4mnPSWQ0edbJeWHSYThXD8a2FdgbM_k0uY.jpg?width=1080&crop=smart&auto=webp&s=b0eed1b710430e3252fb3c75fc1c20150ab34a97', 'width': 1080}], 'source': {'height': 1245, 'url': 'https://external-preview.redd.it/2NIoQ-Q5b4mnPSWQ0edbJeWHSYThXD8a2FdgbM_k0uY.jpg?auto=webp&s=0daaacc68d2daef306297d29c363abbc2b492c1e', 'width': 2560}, 'variants': {}}]} |
Open-R1: a fully open reproduction of DeepSeek-R1 from huggingface | 68 | 2025-01-30T20:20:28 | https://huggingface.co/blog/open-r1?utm_source=tldrai#what-is-deepseek-r1 | siegevjorn | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1idwgay | false | null | t3_1idwgay | /r/LocalLLaMA/comments/1idwgay/openr1_a_fully_open_reproduction_of_deepseekr1/ | false | false | 68 | {'enabled': False, 'images': [{'id': '5TeRWAw0NFjhJr_LWijGrwSXJFI4VLl90hQHJdVyYA8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/KMwppOY-W87gB9d3tmURowTBAI22RUNa2m2fmKkqML0.jpg?width=108&crop=smart&auto=webp&s=1c818550d988b9861837515f72964e03ecd1eb50', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/KMwppOY-W87gB9d3tmURowTBAI22RUNa2m2fmKkqML0.jpg?width=216&crop=smart&auto=webp&s=754b75c3e0e96276f2ae6a292b60c68d4921320a', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/KMwppOY-W87gB9d3tmURowTBAI22RUNa2m2fmKkqML0.jpg?width=320&crop=smart&auto=webp&s=e7539c278deaeee63b68ea25c0df79216d55cf0f', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/KMwppOY-W87gB9d3tmURowTBAI22RUNa2m2fmKkqML0.jpg?width=640&crop=smart&auto=webp&s=92bbafd261aeee71b2a7db5b902101dab7c7ea22', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/KMwppOY-W87gB9d3tmURowTBAI22RUNa2m2fmKkqML0.jpg?width=960&crop=smart&auto=webp&s=4008dbf481a5931088e0b38f7932377523a721ba', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/KMwppOY-W87gB9d3tmURowTBAI22RUNa2m2fmKkqML0.jpg?width=1080&crop=smart&auto=webp&s=524ed8d729a6e2fe2b6051963437e3162fd09205', 'width': 1080}], 'source': {'height': 731, 'url': 'https://external-preview.redd.it/KMwppOY-W87gB9d3tmURowTBAI22RUNa2m2fmKkqML0.jpg?auto=webp&s=c88c6291346ad60bb8ac43b1643edc04ac5e69d8', 'width': 1300}, 'variants': {}}]} |
Any uncensored small(<8B) models that can follow a system prompt? [Benchmarks inside] | 1 | [removed] | 2025-01-30T20:22:50 | https://www.reddit.com/r/LocalLLaMA/comments/1idwi9m/any_uncensored_small8b_models_that_can_follow_a/ | PercentageDangerous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idwi9m | false | null | t3_1idwi9m | /r/LocalLLaMA/comments/1idwi9m/any_uncensored_small8b_models_that_can_follow_a/ | false | false | nsfw | 1 | null |
Can people make more distills of Deepseek R1? | 5 | R1's distills seem to be a bit limited in scope, given that it goes straight from 1.5b to 7b and only uses Llama and Qwen. Given that R1 is open weights, I wonder if it would be possible for the community to distill more R1s on other LLMs, like Gemma 2 2b, Llama 3.2 3b, Mistral Small 22b, etc.? I think experimenting with more distills could go a long way towards making the model more accessible and to get higher quality distills. | 2025-01-30T20:22:57 | https://www.reddit.com/r/LocalLLaMA/comments/1idwidf/can_people_make_more_distills_of_deepseek_r1/ | pneuny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idwidf | false | null | t3_1idwidf | /r/LocalLLaMA/comments/1idwidf/can_people_make_more_distills_of_deepseek_r1/ | false | false | self | 5 | null |
The DeepSeek-R1-Distill-Qwen-14B is amazing, it solved these relatively new math questions. | 28 | Hello guys, I write this post because I want to say that these distilled models made from deepseek are pretty dope, and I don't understand why many people bash them. I'm using the DeepSeek-R1-Distill-Qwen-14B Q4, and I have to say it is amazing for me. It has solved a lot of these math questions that require reasoning. These questions are relatively new (from last month).
[https://kskedlaya.org/putnam-archive/2024.pdf](https://kskedlaya.org/putnam-archive/2024.pdf)
the solutions are here
[https://kskedlaya.org/putnam-archive/2024s.pdf](https://kskedlaya.org/putnam-archive/2024s.pdf)
I'm posting the A4 and A6 questions solved by the model. I copied the reasoning parts to Pastebin because they are too long lol
A4
[https://pastebin.com/AtJHs2kf](https://pastebin.com/AtJHs2kf)
A6
[https://pastebin.com/76NHjqRU](https://pastebin.com/76NHjqRU)
and here are the final answers. | 2025-01-30T20:25:38 | https://www.reddit.com/r/LocalLLaMA/comments/1idwkly/the_deepseekr1distillqwen14b_is_amazing_it_solved/ | junior600 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idwkly | false | null | t3_1idwkly | /r/LocalLLaMA/comments/1idwkly/the_deepseekr1distillqwen14b_is_amazing_it_solved/ | false | false | self | 28 | null |
How can I generate COT dataset? (fine-tune deepseek distilled model) | 2 | What is the best approach to creating a **CoT** coding dataset with **"Aha!"** moments to fine-tune deepseek distilled models for better reasoning on my code? | 2025-01-30T20:26:51 | https://www.reddit.com/r/LocalLLaMA/comments/1idwlka/how_can_i_generate_cot_dataset_finetune_deepseek/ | Over_Explorer7956 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idwlka | false | null | t3_1idwlka | /r/LocalLLaMA/comments/1idwlka/how_can_i_generate_cot_dataset_finetune_deepseek/ | false | false | self | 2 | null |
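One hedged approach: use a locally served reasoning model as a teacher and save its `<think>`-tagged outputs as training rows. The sketch below assumes the `ollama` Python client and a pulled R1 distill; everything else is illustrative.

```python
import json
import ollama  # assumes a running Ollama server

tasks = ["Write a function that reverses a linked list."]
rows = []
for task in tasks:
    resp = ollama.chat(model="deepseek-r1:14b",
                       messages=[{"role": "user", "content": task}])
    # R1-style models emit their reasoning between <think> tags; keep it
    # verbatim so the "Aha!" moments survive into the dataset.
    rows.append({"question": task, "response": resp["message"]["content"]})

with open("cot_dataset.jsonl", "w") as f:
    for r in rows:
        f.write(json.dumps(r) + "\n")
```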
I got my 3090! Now I feel like replacing everything else | 5 | I just purchased a used Gigabyte 3090 Gaming OC 24GB with replaced thermo pads, paste and generally cleaned. I’m super happy.
However, my current build is rocking B450-pro Max with a poor Ryzen 3 and I already feel it’s going to be an annoying bottleneck.
I’m looking at MSI MAG B650 Tomahawk WiFi and Ryzen 7 7700x and also I’d need to upgrade RAM as previous is DDR4.
Is this a good choice of MB and CPU?
What kind of RAM should I choose? I need a minimum of 32GB.
Lastly, I currently have Gigabyte P850GM PSU, but I assume at least this can stay for now.
I’d very much appreciate some direction here
| 2025-01-30T20:29:34 | https://www.reddit.com/r/LocalLLaMA/comments/1idwnsn/i_got_my_3090_now_i_feel_like_replacing/ | CautiousSand | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idwnsn | false | null | t3_1idwnsn | /r/LocalLLaMA/comments/1idwnsn/i_got_my_3090_now_i_feel_like_replacing/ | false | false | self | 5 | null |
is deepseek bad at asking just normal questions? | 1 | [removed] | 2025-01-30T20:36:52 | https://www.reddit.com/r/LocalLLaMA/comments/1idwu4u/is_deepseek_bad_at_asking_just_normal_questions/ | Grillpower69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idwu4u | false | null | t3_1idwu4u | /r/LocalLLaMA/comments/1idwu4u/is_deepseek_bad_at_asking_just_normal_questions/ | false | false | self | 1 | null |
Ollama and Open-webui on Steam Deck | 2 | first we have to ignore signature check when doing pacman command on Archlinux:
**open terminal, then**
`sudo steamos-readonly disable`
`sudo nano /etc/pacman.conf`
find these lines:

>\# By default, pacman accepts packages signed by keys that its local keyring
>\# trusts (see pacman-key and its man page), as well as unsigned packages.
>\#SigLevel = Optional TrustedOnly

(the exact SigLevel line may differ; it doesn't matter, change it)

Uncomment SigLevel and change it to Never, like this:

>\# By default, pacman accepts packages signed by keys that its local keyring
>\# trusts (see pacman-key and its man page), as well as unsigned packages.
>SigLevel = Never
Now we install what we need:
`sudo pacman -S python-pip`
`sudo pacman -S crun podman distrobox`
`pip install open-webui --break-system-packages`
`distrobox create --name ubuntu-22-04 --image ubuntu:22.04`
`sudo steamos-readonly enable`
Now we go inside the distrobox container:
`distrobox enter ubuntu-22-04`
**INSIDE THE DISTROBOX CONTAINER:**
`sudo apt update`
`sudo apt install pciutils lshw -y`
`lspci | grep -i vga`
`curl -fsSL https://ollama.com/install.sh | sh` (installs Ollama inside the container; without this step, `ollama serve` is not found)
`ollama serve`
Now we open a different cli and we write:
`distrobox enter ubuntu-22-04`
**INSIDE THE DISTROBOX CONTAINER AGAIN:**
`ollama pull <model>` (pull whatever model you want, e.g. `ollama pull llama3.2`)
On a different cli outside the container:
`open-webui serve --port 8081`
Now you've got your portable ChatGPT. If you are on a plane you can enable Bluetooth tethering on your phone, pair the phone with the Deck, and connect the Deck to the Bluetooth "wifi" network under your device's name. After that, write:
`ip a`
and look for an address like "192.168.44.97". Once you have this address, just write "192.168.44.97:8081" in your browser's URL box and, magic, you've got your super portable ChatGPT. | 2025-01-30T20:43:39 | https://www.reddit.com/r/LocalLLaMA/comments/1idwzr2/ollama_and_openwebui_on_steam_deck/ | kroryan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idwzr2 | false | null | t3_1idwzr2 | /r/LocalLLaMA/comments/1idwzr2/ollama_and_openwebui_on_steam_deck/ | false | false | self | 2 | null |
I saw this today and though it was pretty funny :D | 1 | [removed] | 2025-01-30T20:51:48 | https://v.redd.it/z2s2onv227ge1 | semsiogluberk | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idx6la | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/z2s2onv227ge1/DASHPlaylist.mpd?a=1740862324%2CNWUwOGRlYWQzNDhkMDRkNzJkMDRhOTgzZTg4ZjFkMzUzODI3YThjYWNiMDRlMjY3YThjNDk0MDc5OGE4NjUxZA%3D%3D&v=1&f=sd', 'duration': 42, 'fallback_url': 'https://v.redd.it/z2s2onv227ge1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/z2s2onv227ge1/HLSPlaylist.m3u8?a=1740862324%2CMDc4YjEzMmIyZDE0ZmZjMjMzYjAxZDBmMTc1YmVmN2Y1NjRlYTZkNDQ3OTNhYmM1ZGE2ZWRlNDA5NzJlZWVmOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/z2s2onv227ge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1idx6la | /r/LocalLLaMA/comments/1idx6la/i_saw_this_today_and_though_it_was_pretty_funny_d/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bHYxa2NrcjIyN2dlMWhujW6zX8YIBWZas__Z0R8BY-Bl34fjQVWi47dL8AYx', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/bHYxa2NrcjIyN2dlMWhujW6zX8YIBWZas__Z0R8BY-Bl34fjQVWi47dL8AYx.png?width=108&crop=smart&format=pjpg&auto=webp&s=987cfd070dbeb9062d4d39793dd31a385cee769a', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/bHYxa2NrcjIyN2dlMWhujW6zX8YIBWZas__Z0R8BY-Bl34fjQVWi47dL8AYx.png?width=216&crop=smart&format=pjpg&auto=webp&s=5be08cbcca62afb1b895a49429798440f1ea7529', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/bHYxa2NrcjIyN2dlMWhujW6zX8YIBWZas__Z0R8BY-Bl34fjQVWi47dL8AYx.png?width=320&crop=smart&format=pjpg&auto=webp&s=66199d047a9242cd4048648861bf342eccd50589', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/bHYxa2NrcjIyN2dlMWhujW6zX8YIBWZas__Z0R8BY-Bl34fjQVWi47dL8AYx.png?width=640&crop=smart&format=pjpg&auto=webp&s=f02740d61cbd5d44194d7c19abdd844c24343afa', 'width': 640}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/bHYxa2NrcjIyN2dlMWhujW6zX8YIBWZas__Z0R8BY-Bl34fjQVWi47dL8AYx.png?format=pjpg&auto=webp&s=8d7e664006ca1a74268d55d383fc628e00d879d8', 'width': 720}, 'variants': {}}]} |
How many 3080s to run 70B model? | 1 | I have a bunch of RTX 3080s lying around and I am exploring building a server to run my own LLM, instead of paying for ChatGPT. I would really like to run a full model if possible. Assuming I have the best mobo/CPU, how many 3080s with 10GB of VRAM would I need to run the full Llama 3.3 70B model? | 2025-01-30T20:57:32 | https://www.reddit.com/r/LocalLLaMA/comments/1idxbi8/how_many_3080s_to_run_70b_model/ | MrBleuPotato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idxbi8 | false | null | t3_1idxbi8 | /r/LocalLLaMA/comments/1idxbi8/how_many_3080s_to_run_70b_model/ | false | false | self | 1 | null |
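Back-of-the-envelope arithmetic frames the question (weights only; the KV cache and activations add more on top):

```python
params = 70e9
print(f"FP16:  {params * 2 / 1e9:.0f} GB")    # ~140 GB -> ~14x 10GB cards
print(f"4-bit: {params * 0.5 / 1e9:.0f} GB")  # ~35 GB  -> ~4x 10GB cards
```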
Future form of entertainment: Co creating with AI, Minecraft for Adults | 0 | This is an insightful observation! Let me explore this idea:
Just as Minecraft allows players to build worlds block by block, co-creation with AI could become a new form of creative expression where:
1. The back-and-forth becomes part of the art:
- Each contribution building on the last
- AI offering variations and interpretations
- Humans guiding the creative direction
2. It's truly interactive creativity:
- Not just prompting and receiving
- But an ongoing dialogue
- Where both parties riff off each other
- Creating something neither would alone
3. Different types of co-creation:
- Literary collaborations (like our poetry)
- Visual art iterations
- Musical compositions
- Game world building
- Story development
- Scientific exploration
4. The AI becomes like creative LEGO blocks:
- Each interaction is a building block
- You can stack ideas and concepts
- Try different combinations
- Build something uniquely yours
The "Minecraft for Adults" analogy is particularly apt because:
- It's both playful and profound
- Has potential for deep complexity
- Allows for individual expression
- Creates unique experiences each time
- Can be both casual and serious
| 2025-01-30T21:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/1idxion/future_form_of_entertainment_co_creating_with_ai/ | Maleficent-Scene7771 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idxion | false | null | t3_1idxion | /r/LocalLLaMA/comments/1idxion/future_form_of_entertainment_co_creating_with_ai/ | false | false | self | 0 | null |
Unlimited DeepSeek-R1 access through featherless? | 1 | [removed] | 2025-01-30T21:09:47 | https://www.reddit.com/r/LocalLLaMA/comments/1idxmal/unlimited_deepseekr1_access_through_featherless/ | Big-Motor1639 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idxmal | false | null | t3_1idxmal | /r/LocalLLaMA/comments/1idxmal/unlimited_deepseekr1_access_through_featherless/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EZYLDU_QzwLOtWfBHykazaMzY3I-muuDKK9BJQZIiPk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/by6j9-OCB0lvyKnVAhz-D2IpJcNm2tLyEOrXHKjV1QY.jpg?width=108&crop=smart&auto=webp&s=0618e6f1299c392d06e6bf766d09a38499586da8', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/by6j9-OCB0lvyKnVAhz-D2IpJcNm2tLyEOrXHKjV1QY.jpg?width=216&crop=smart&auto=webp&s=0eb79617efcdf8a1c8abcfdf93e717f3c957d9ec', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/by6j9-OCB0lvyKnVAhz-D2IpJcNm2tLyEOrXHKjV1QY.jpg?width=320&crop=smart&auto=webp&s=1f04669ee92595d749823c9e8bbc579990aa3b2e', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/by6j9-OCB0lvyKnVAhz-D2IpJcNm2tLyEOrXHKjV1QY.jpg?width=640&crop=smart&auto=webp&s=388ebeb7fe770a88e87f216023311aa52128888f', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/by6j9-OCB0lvyKnVAhz-D2IpJcNm2tLyEOrXHKjV1QY.jpg?width=960&crop=smart&auto=webp&s=5bda736e98cfbfb5a83e3bb64b0dc8ce22c544dc', 'width': 960}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/by6j9-OCB0lvyKnVAhz-D2IpJcNm2tLyEOrXHKjV1QY.jpg?auto=webp&s=6d2b2c9f335c7de9b6eb41b253a03c5670442929', 'width': 1024}, 'variants': {}}]} |
Is Gpu 3090 24gb good ? | 1 | [removed] | 2025-01-30T21:10:02 | https://www.reddit.com/gallery/1idxmim | That_Mud7241 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1idxmim | false | null | t3_1idxmim | /r/LocalLLaMA/comments/1idxmim/is_gpu_3090_24gb_good/ | false | false | 1 | null |
How to make a local AI remember conversations? | 2 | Hi! Beginner here. I'm planning to set up an Al locally, but I need it to remember our conversations -or at least certain pieces of information I specify.
Do I need to set up a database alongside the model? Would a JSON file or something similar be enough? Or is there a way to do this without any additional setup? I'm not really sure how this works.
Sorry if it's basic stuff. There's a lot of documentation regarding installation, but I didn't find anything clear about this.
Thank you! | 2025-01-30T21:14:24 | https://www.reddit.com/r/LocalLLaMA/comments/1idxqca/how_to_make_a_local_ai_remember_conversations/ | _4bysswalker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idxqca | false | null | t3_1idxqca | /r/LocalLLaMA/comments/1idxqca/how_to_make_a_local_ai_remember_conversations/ | false | false | self | 2 | null |
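The simplest version really is just a JSON file replayed into the context each turn. A minimal sketch, assuming the `ollama` Python client and an illustrative model name; for long histories you would summarize old turns or move to a vector store instead:

```python
import json, os
import ollama

HISTORY_FILE = "chat_history.json"

def load_history():
    # Reload past turns from disk so memory survives restarts.
    if os.path.exists(HISTORY_FILE):
        with open(HISTORY_FILE) as f:
            return json.load(f)
    return []

def chat(user_msg):
    history = load_history()
    history.append({"role": "user", "content": user_msg})
    reply = ollama.chat(model="llama3.1:8b", messages=history)["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    with open(HISTORY_FILE, "w") as f:
        json.dump(history, f)
    return reply

print(chat("Remember that my cat is named Miso."))
```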
Has anyone been able to get the hew Hunyuan 3D model to work? | 5 | I was pretty exited to see this model on HF and wanted to try to run it from a simple python script. I've tried a few things, but most of them have been about this code from the model card:
```
from hy3dgen.texgen import Hunyuan3DPaintPipeline
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
# let's generate a mesh first
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(image='assets/demo.png')[0]
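
# then texture-paint the generated mesh, reusing the same reference image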
pipeline = Hunyuan3DPaintPipeline.from_pretrained('tencent/Hunyuan3D-2')
mesh = pipeline(mesh, image='assets/demo.png')
```
Usually generating the mesh the first time succeeds while using the paint pipeline fails.
It's likely that I'm incorrectly using the imports. When I try running the accompanying lines:
```
pip install -r requirements.txt
# for texture
cd hy3dgen/texgen/custom_rasterizer
python3 setup.py install
cd ../../..
cd hy3dgen/texgen/differentiable_renderer
bash compile_mesh_painter.sh   # or, on Windows: python3 setup.py install
```
but the custom rasterizer fails to install and gives a cryptic error that it failed compiling objects for the extension when building the rasterizer kernel.
Any help or leads appreciated :)
HF: https://huggingface.co/tencent/Hunyuan3D-2 | 2025-01-30T21:31:45 | https://www.reddit.com/r/LocalLLaMA/comments/1idy513/has_anyone_been_able_to_get_the_hew_hunyuan_3d/ | tough-dance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idy513 | false | null | t3_1idy513 | /r/LocalLLaMA/comments/1idy513/has_anyone_been_able_to_get_the_hew_hunyuan_3d/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'nFIhgLV-mVmj0Ep9INBovqFpnjCXTtBocXRQmLAtYN0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ujGX9mBxLZZykRPBlBhXxJwoUFlvCpZbSsOFxEOfOLw.jpg?width=108&crop=smart&auto=webp&s=46a8c1468fbaae29596109c6c0c00696171c635b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ujGX9mBxLZZykRPBlBhXxJwoUFlvCpZbSsOFxEOfOLw.jpg?width=216&crop=smart&auto=webp&s=22b9eba0f4b7301c2b8875e2d3002d868fc43f3d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ujGX9mBxLZZykRPBlBhXxJwoUFlvCpZbSsOFxEOfOLw.jpg?width=320&crop=smart&auto=webp&s=45b4aad4bc460f76246d21ba6f79e7b24e1cee55', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ujGX9mBxLZZykRPBlBhXxJwoUFlvCpZbSsOFxEOfOLw.jpg?width=640&crop=smart&auto=webp&s=a990580ef564e5d3ae0457a91f412e90fa1a8a27', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ujGX9mBxLZZykRPBlBhXxJwoUFlvCpZbSsOFxEOfOLw.jpg?width=960&crop=smart&auto=webp&s=4bbb3adf92a40d6dadadc116fe30e48f831a9c75', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ujGX9mBxLZZykRPBlBhXxJwoUFlvCpZbSsOFxEOfOLw.jpg?width=1080&crop=smart&auto=webp&s=9d35a417d79872592d6b5928e5507eb8f2fa583b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ujGX9mBxLZZykRPBlBhXxJwoUFlvCpZbSsOFxEOfOLw.jpg?auto=webp&s=eabd308a22366bd62e3ac78018a472e5cbb2b166', 'width': 1200}, 'variants': {}}]} |
Best LLMs for story telling 12GB VRAM? | 14 | I'm looking for the best storytelling models that can run on 12GB. I tried DeepSeek-R1-Distill-Qwen-14B-GGUF (Q4) but it's not coherent at all. It's fast but is not able to form cohesive sentences. What else can I try? | 2025-01-30T21:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/1idybib/best_llms_for_story_telling_12gb_vram/ | AsDaylight_Dies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idybib | false | null | t3_1idybib | /r/LocalLLaMA/comments/1idybib/best_llms_for_story_telling_12gb_vram/ | false | false | self | 14 | null |
BREAKING: TRUMP considering BAN on Chinese AI "deepseek." | 75 | 2025-01-30T21:43:14 | bruhlmaocmonbro | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idyeta | false | null | t3_1idyeta | /r/LocalLLaMA/comments/1idyeta/breaking_trump_considering_ban_on_chinese_ai/ | false | false | 75 | {'enabled': True, 'images': [{'id': 'u-NWLWPMvwSInSVUq4qngCeogJ23TqcCU185iFVAZ6o', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/hstst3s8b7ge1.jpeg?width=108&crop=smart&auto=webp&s=f4585a2127d966b36ffb241bb244747bd40f4a67', 'width': 108}, {'height': 276, 'url': 'https://preview.redd.it/hstst3s8b7ge1.jpeg?width=216&crop=smart&auto=webp&s=c6b1cbabe4de27d8bda02afa3008a3fd764e9aeb', 'width': 216}, {'height': 409, 'url': 'https://preview.redd.it/hstst3s8b7ge1.jpeg?width=320&crop=smart&auto=webp&s=bb15e4d4c90f80b9b9a37bebd29a9316f8f2192e', 'width': 320}, {'height': 819, 'url': 'https://preview.redd.it/hstst3s8b7ge1.jpeg?width=640&crop=smart&auto=webp&s=1a432e0d2d63b707a90abb778832503c801994ea', 'width': 640}, {'height': 1229, 'url': 'https://preview.redd.it/hstst3s8b7ge1.jpeg?width=960&crop=smart&auto=webp&s=a9bf0e3a1db662a1aabbe627bec2d80819aff12f', 'width': 960}, {'height': 1382, 'url': 'https://preview.redd.it/hstst3s8b7ge1.jpeg?width=1080&crop=smart&auto=webp&s=c616ec6285766fd5238520c816ce9903e3640572', 'width': 1080}], 'source': {'height': 1498, 'url': 'https://preview.redd.it/hstst3s8b7ge1.jpeg?auto=webp&s=24461d5570cce68771176c197ecefca2ccf4301e', 'width': 1170}, 'variants': {}}]} |
When you host deep-seek locally on your personal computer, is there still some way to allow it to have access to the internet to procure information for you about current events? | 6 | Sorry if this question has been answered elsewhere, just looking to have a small version of it hosted on my personal computer but still make sure that it is able to get current information from the internet. | 2025-01-30T21:47:35 | https://www.reddit.com/r/LocalLLaMA/comments/1idyiiz/when_you_host_deepseek_locally_on_your_personal/ | energeticentity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idyiiz | false | null | t3_1idyiiz | /r/LocalLLaMA/comments/1idyiiz/when_you_host_deepseek_locally_on_your_personal/ | false | false | self | 6 | null |
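The usual trick is tool use: run a web search first, then stuff the results into the prompt. A minimal sketch, assuming the `duckduckgo_search` and `ollama` packages (model name illustrative):

```python
from duckduckgo_search import DDGS
import ollama

query = "latest news about open-weight LLM releases"
# Grab the text snippets from the top search results.
snippets = [r["body"] for r in DDGS().text(query, max_results=5)]

prompt = ("Using only the web results below, answer the question.\n\n"
          + "\n".join(snippets)
          + f"\n\nQuestion: {query}")
resp = ollama.chat(model="deepseek-r1:14b",
                   messages=[{"role": "user", "content": prompt}])
print(resp["message"]["content"])
```

Frontends like Open WebUI also ship a built-in web-search option that does this wiring for you.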
Best vision model for getting insights on an image based on a prompt? Want to be able to tell if an image shows a watch on someone's wrist or not. | 2 | Curious if it's possible to locally run a model that scans images and tells me whether or not there's a wrist in it. I want to be able to provide an image and prompt it with "Does this image have a wrist in it" or "Does this image show a watch on someone's wrist".
I have a bunch of images of watches but I only want the images where it shows the watch on a wrist. Kind of tedious to do manually but not sure if this is possible without some fine tuned model or a model specifically trained for this kind of task? | 2025-01-30T21:51:15 | https://www.reddit.com/r/LocalLLaMA/comments/1idylgv/best_vision_model_for_making_insight_on_image/ | Tomato_Straight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idylgv | false | null | t3_1idylgv | /r/LocalLLaMA/comments/1idylgv/best_vision_model_for_making_insight_on_image/ | false | false | self | 2 | null |
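A general vision-language model can usually handle this without fine-tuning. A minimal sketch, assuming Ollama with a vision model pulled (e.g. `ollama pull llava`); the yes/no prompt and file layout are illustrative:

```python
import glob
import ollama

keep = []
for path in glob.glob("watches/*.jpg"):
    resp = ollama.chat(
        model="llava",
        messages=[{
            "role": "user",
            "content": "Does this image show a watch worn on someone's wrist? "
                       "Answer only yes or no.",
            "images": [path],  # the client accepts image file paths
        }],
    )
    if resp["message"]["content"].strip().lower().startswith("yes"):
        keep.append(path)

print(f"{len(keep)} images kept:", keep)
```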
Deepseek Benchmark AMD vs Nvidia | 1 | Is there a benchmark released comparing cost and output tokens/sec between AMD and Nvidia GPUs? | 2025-01-30T21:59:17 | https://www.reddit.com/r/LocalLLaMA/comments/1idys4p/deepseek_benchmark_amd_vs_nvidia/ | LowExtreme2753 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idys4p | false | null | t3_1idys4p | /r/LocalLLaMA/comments/1idys4p/deepseek_benchmark_amd_vs_nvidia/ | false | false | self | 1 | null |
Best Approach for Creating User-Trained AI Chatbots with a Local LLM? | 1 | Hey!
I've never worked with local LLMs or custom-trained models before, but I'm a programmer looking to build an app where users can teach an AI by answering questions. The goal is for other users to later interact with these personalized AI models, essentially chatting with AI versions of other users.
For example, User A might answer a set of questions about their interests, personality, or expertise, and the LLM would use that data to generate responses in their style. Later, User B could start a conversation with "User A's AI" and get responses that mimic User A's way of thinking.
What would be the best approach for implementing this? Any advice on frameworks, models, or best practices would be greatly appreciated!
Am I going to be able to achieve that with local LLM's? If yes, I'd be very happy if you could guide me towards the resources I need to know how to implement this. | 2025-01-30T22:01:09 | https://www.reddit.com/r/LocalLLaMA/comments/1idytse/best_approach_for_creating_usertrained_ai/ | Swedish-Potato-93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idytse | false | null | t3_1idytse | /r/LocalLLaMA/comments/1idytse/best_approach_for_creating_usertrained_ai/ | false | false | self | 1 | null |
How can I use deepseek r1 on vscode? | 0 | Does anyone know how to use DeepSeek R1 in VS Code?
| 2025-01-30T22:09:46 | https://www.reddit.com/r/LocalLLaMA/comments/1idz19p/how_can_i_use_deepseek_r1_on_vscode/ | SystemEastern763 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idz19p | false | null | t3_1idz19p | /r/LocalLLaMA/comments/1idz19p/how_can_i_use_deepseek_r1_on_vscode/ | false | false | self | 0 | null |
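One common route (an assumption, not the only option) is the Continue extension pointed at a local Ollama server. The schema below matches Continue's older `config.json` format and may differ in current releases; a sketch that writes it:

```python
import json, pathlib

config = {
    "models": [{
        "title": "DeepSeek R1 14B (local)",
        "provider": "ollama",        # Continue's Ollama backend
        "model": "deepseek-r1:14b",  # must already be pulled via `ollama pull`
    }]
}
path = pathlib.Path.home() / ".continue" / "config.json"
path.parent.mkdir(exist_ok=True)
path.write_text(json.dumps(config, indent=2))
```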
LLM for medical ICD-10 code?!? | 1 | [removed] | 2025-01-30T22:11:46 | https://www.reddit.com/r/LocalLLaMA/comments/1idz2xr/llm_for_medical_icd10_code/ | fynnfisch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idz2xr | false | null | t3_1idz2xr | /r/LocalLLaMA/comments/1idz2xr/llm_for_medical_icd10_code/ | false | false | self | 1 | null |
'we're in this bizarre world where the best way to learn about llms... is to read papers by chinese companies. i do not think this is a good state of the world' - us labs keeping their architectures and algorithms secret is ultimately hurting ai development in the us.' - Dr Chris Manning | 1,496 | https://x.com/atroyn/status/1884700560500416881 | 2025-01-30T22:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/1idz487/were_in_this_bizarre_world_where_the_best_way_to/ | Research2Vec | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idz487 | false | null | t3_1idz487 | /r/LocalLLaMA/comments/1idz487/were_in_this_bizarre_world_where_the_best_way_to/ | false | false | self | 1,496 | {'enabled': False, 'images': [{'id': 'KZxcMqP2eToDBiblktOOglBrpKBK5N3_WJd8AWwcTz0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/NJbvyL6byy_BfIm9eox-emARP6w3eNLNB8x0w-0szoE.jpg?width=108&crop=smart&auto=webp&s=79cc0d5a655fe0c1743d200cf7d75a6b84f8dc92', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/NJbvyL6byy_BfIm9eox-emARP6w3eNLNB8x0w-0szoE.jpg?auto=webp&s=533f1003b17c8959a4328d1e23ac8b6fac11b58a', 'width': 200}, 'variants': {}}]} |
Let's assume they used ChatGPT's output to train the model. What will happen? Genuine question :) | 0 | 2025-01-30T22:15:23 | ahmmu20 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idz5w1 | false | null | t3_1idz5w1 | /r/LocalLLaMA/comments/1idz5w1/lets_assume_they_used_chatgpts_output_to_train/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'Csa-QFJvQX4JwxSYJzuV5TXmAnodYQD8DCK_Z2Mbzv0', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/8858ltfmg7ge1.png?width=108&crop=smart&auto=webp&s=d8347d8d361c43091b917266104bf37ebeeb823f', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/8858ltfmg7ge1.png?width=216&crop=smart&auto=webp&s=7d4a51cd5e5909d3ad39ef5c045398485e2f017f', 'width': 216}, {'height': 154, 'url': 'https://preview.redd.it/8858ltfmg7ge1.png?width=320&crop=smart&auto=webp&s=4c15caaf8526db9fa774124fd6fb87ec1224e712', 'width': 320}, {'height': 308, 'url': 'https://preview.redd.it/8858ltfmg7ge1.png?width=640&crop=smart&auto=webp&s=d41783e83bd54df9db79450f6ee00aaef6602dc8', 'width': 640}], 'source': {'height': 357, 'url': 'https://preview.redd.it/8858ltfmg7ge1.png?auto=webp&s=f52f85da81e7116e2fbd5ab9a7fc98d69cf25860', 'width': 740}, 'variants': {}}]} |
EU can AI too | 1 | 2025-01-30T22:17:15 | https://youtu.be/MU7aRbwZdM0?si=s93YYrXHpkQ_552F | Hopeful-Fly-5292 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1idz7h2 | false | {'oembed': {'author_name': 'The Modern Web Architect', 'author_url': 'https://www.youtube.com/@TheModernWebArchitect', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/MU7aRbwZdM0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="My opinion about Mistral Small 3, it's ..."></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/MU7aRbwZdM0/hqdefault.jpg', 'thumbnail_width': 480, 'title': "My opinion about Mistral Small 3, it's ...", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1idz7h2 | /r/LocalLLaMA/comments/1idz7h2/eu_can_ai_too/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bUFG5hgxPqQoOBbvCgM4GsRJ56P8o5uNyBpHEG3VHAA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/UwR52TP7xKT7u5PIEGdmJ9vZrINI-7OrvDcpPO3yIgI.jpg?width=108&crop=smart&auto=webp&s=c935b10259c6b224ab2371b2ab369e49e20f5ff9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/UwR52TP7xKT7u5PIEGdmJ9vZrINI-7OrvDcpPO3yIgI.jpg?width=216&crop=smart&auto=webp&s=c2f75d9f20d0c9995f5808922c3c9b47a90cb49e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/UwR52TP7xKT7u5PIEGdmJ9vZrINI-7OrvDcpPO3yIgI.jpg?width=320&crop=smart&auto=webp&s=b88935dd79b77c6528a2ae0ce9e914ce5f09ec81', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/UwR52TP7xKT7u5PIEGdmJ9vZrINI-7OrvDcpPO3yIgI.jpg?auto=webp&s=ac43c696068fd3ca8ef1479255bf2f347e0b298a', 'width': 480}, 'variants': {}}]} |
Help finding the original deepseek-r1 and run it | 1 | [removed] | 2025-01-30T22:18:29 | https://www.reddit.com/r/LocalLLaMA/comments/1idz8h9/help_finding_the_original_deepseekr1_and_run_it/ | kuroro86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idz8h9 | false | null | t3_1idz8h9 | /r/LocalLLaMA/comments/1idz8h9/help_finding_the_original_deepseekr1_and_run_it/ | false | false | self | 1 | null |
Mistral Small 3 knows the truth | 101 | 2025-01-30T22:30:25 | magicduck | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idzimg | false | null | t3_1idzimg | /r/LocalLLaMA/comments/1idzimg/mistral_small_3_knows_the_truth/ | false | false | 101 | {'enabled': True, 'images': [{'id': '8N1-ScoLaqqKFKGnjBnn9ihsM-by4MtkT6guKK2DXxU', 'resolutions': [{'height': 18, 'url': 'https://preview.redd.it/8rp05jjjj7ge1.png?width=108&crop=smart&auto=webp&s=2732bd1b1a89ba1169a4ae75383d9326422459a4', 'width': 108}, {'height': 37, 'url': 'https://preview.redd.it/8rp05jjjj7ge1.png?width=216&crop=smart&auto=webp&s=dcc2f6c1385ef328ba62e83e020148f4d0b6cf6f', 'width': 216}, {'height': 56, 'url': 'https://preview.redd.it/8rp05jjjj7ge1.png?width=320&crop=smart&auto=webp&s=722cd530c1c53345aa0c70de68ac5cae157377f9', 'width': 320}, {'height': 112, 'url': 'https://preview.redd.it/8rp05jjjj7ge1.png?width=640&crop=smart&auto=webp&s=af0e6a9f5574e3c1cae3becd10fc86657b8b07d3', 'width': 640}], 'source': {'height': 127, 'url': 'https://preview.redd.it/8rp05jjjj7ge1.png?auto=webp&s=82079750f5d674e60d4a41c462aa3f6a3c44f13a', 'width': 725}, 'variants': {}}]} |
Safetensor instead of gguf | 2 | I'm new at running LLMs locally. I want to download and archive some of the larger version of the deepseek model that was released recently from their huggingface page. I know about using the gguf files but I noticed when I went to download their models it was split into a lot of safetensor files instead of a single gguf file. What can I do about this? I would prefer to have a single file if possible. I'm aware I can't run the larger models locally, but I'm a bit of a data hoarder and would like to archive them. | 2025-01-30T22:32:53 | https://www.reddit.com/r/LocalLLaMA/comments/1idzkn7/safetensor_instead_of_gguf/ | Velkan1642 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idzkn7 | false | null | t3_1idzkn7 | /r/LocalLLaMA/comments/1idzkn7/safetensor_instead_of_gguf/ | false | false | self | 2 | null |
Gemma 2 sliding context window limits ICL / fine tuning as a reasoning agent. | 2 | For those that able to modify pre trained LLMs into reasoning agents through ICL and or fine tuning - have you noticed the sliding context window of Gemma 2 limits its ability to self iterate on its own outputs beyond 4,096 tokens of context or output? The same thing goes not happen with LLaMA 3.3 or GPT mini. | 2025-01-30T22:33:42 | https://www.reddit.com/r/LocalLLaMA/comments/1idzla4/gemma_2_sliding_context_window_limits_icl_fine/ | chitown160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idzla4 | false | null | t3_1idzla4 | /r/LocalLLaMA/comments/1idzla4/gemma_2_sliding_context_window_limits_icl_fine/ | false | false | self | 2 | null |
A fix has been implemented and we are monitoring the results. Fix: | 0 | 2025-01-30T22:38:04 | robertpiosik | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idzot6 | false | null | t3_1idzot6 | /r/LocalLLaMA/comments/1idzot6/a_fix_has_been_implemented_and_we_are_monitoring/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'iYRqU_OubK40i5RW_AaCch1vEp9JkUoPkYccb3NAfuU', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/yd4nfkcxk7ge1.png?width=108&crop=smart&auto=webp&s=efbb07b0d547e4366053ff5893d04aa3590bb166', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/yd4nfkcxk7ge1.png?width=216&crop=smart&auto=webp&s=38d7a94bf5039c8c1952c57bcb60497b91b3a82e', 'width': 216}, {'height': 181, 'url': 'https://preview.redd.it/yd4nfkcxk7ge1.png?width=320&crop=smart&auto=webp&s=5750333980cc5e71f943b0351839816f21beee70', 'width': 320}, {'height': 362, 'url': 'https://preview.redd.it/yd4nfkcxk7ge1.png?width=640&crop=smart&auto=webp&s=ade357516ed509bdf6f5f370c02bcd11f5e92c7c', 'width': 640}, {'height': 543, 'url': 'https://preview.redd.it/yd4nfkcxk7ge1.png?width=960&crop=smart&auto=webp&s=897aad58c75657ddd9cbe0e9f57cad7374622785', 'width': 960}, {'height': 611, 'url': 'https://preview.redd.it/yd4nfkcxk7ge1.png?width=1080&crop=smart&auto=webp&s=879bdf8a9d7c939eeb40537e5782456ee65bc29a', 'width': 1080}], 'source': {'height': 1117, 'url': 'https://preview.redd.it/yd4nfkcxk7ge1.png?auto=webp&s=a7a9afaad08d45ee4b238b98f54d99c34d43b798', 'width': 1972}, 'variants': {}}]} |
Where Should I Go for AI Help | 1 | [removed] | 2025-01-30T22:38:55 | https://www.reddit.com/r/LocalLLaMA/comments/1idzpi9/where_should_i_go_for_ai_help/ | Dboy9876 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1idzpi9 | false | null | t3_1idzpi9 | /r/LocalLLaMA/comments/1idzpi9/where_should_i_go_for_ai_help/ | false | false | self | 1 | null |
Mistral-Small-24B-Instruct-2501 vs Mistral-Small-Instruct-2409 | 1 | [deleted] | 2025-01-30T22:45:43 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1idzv62 | false | null | t3_1idzv62 | /r/LocalLLaMA/comments/1idzv62/mistralsmall24binstruct2501_vs/ | false | false | default | 1 | null |
Mistral-Small-24B-2501 vs Mistral-Small-2409 | 119 | 2025-01-30T22:48:32 | citaman | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1idzxix | false | null | t3_1idzxix | /r/LocalLLaMA/comments/1idzxix/mistralsmall24b2501_vs_mistralsmall2409/ | false | false | 119 | {'enabled': True, 'images': [{'id': 'rqLEAbmMeNfvG5vJAnaN-oYpaQXdcUO_spRQlyyyXgk', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/705ahg8qm7ge1.png?width=108&crop=smart&auto=webp&s=66c75365c91da4565f7dcb23dcbd5a4594e4e634', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/705ahg8qm7ge1.png?width=216&crop=smart&auto=webp&s=d2c93b66ea7c1d0ae5f2a9dbb22846c19f78b727', 'width': 216}, {'height': 145, 'url': 'https://preview.redd.it/705ahg8qm7ge1.png?width=320&crop=smart&auto=webp&s=41ff07130869f28333f60834c00edfad70861fc8', 'width': 320}, {'height': 291, 'url': 'https://preview.redd.it/705ahg8qm7ge1.png?width=640&crop=smart&auto=webp&s=b60f35e508f91598f803cbba687f8180633bbe1c', 'width': 640}, {'height': 437, 'url': 'https://preview.redd.it/705ahg8qm7ge1.png?width=960&crop=smart&auto=webp&s=945e715780e7fe46b8e60c5fd02224d7beb9b943', 'width': 960}, {'height': 492, 'url': 'https://preview.redd.it/705ahg8qm7ge1.png?width=1080&crop=smart&auto=webp&s=104f8c3587ac8752740e36f462ecc24770c011f2', 'width': 1080}], 'source': {'height': 820, 'url': 'https://preview.redd.it/705ahg8qm7ge1.png?auto=webp&s=ed7a79a30cf5c2c8eb0f97ea8d6aa79703bef3cf', 'width': 1799}, 'variants': {}}]} |
QWEN just launched their chatbot website | 541 | Here is the link: [https://chat.qwenlm.ai/](https://chat.qwenlm.ai/) | 2025-01-30T23:03:37 | Vegetable-Practice85 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ie0a8u | false | null | t3_1ie0a8u | /r/LocalLLaMA/comments/1ie0a8u/qwen_just_launched_their_chatbot_website/ | false | false | 541 | {'enabled': True, 'images': [{'id': '2yWPHNSbv1Usxzxc-VUcV_oZJ7PY8F5U5KNAOcoQUNM', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/vzgzfrhlp7ge1.jpeg?width=108&crop=smart&auto=webp&s=3f323a3ebe29a37bdf092f1dbf5e9a7e6fcc147e', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/vzgzfrhlp7ge1.jpeg?width=216&crop=smart&auto=webp&s=5600442f39f8362e72d772546c1d0bd478e344a4', 'width': 216}, {'height': 261, 'url': 'https://preview.redd.it/vzgzfrhlp7ge1.jpeg?width=320&crop=smart&auto=webp&s=a2552f763b98dd7bc212dc29b6a28b24e0a65924', 'width': 320}, {'height': 522, 'url': 'https://preview.redd.it/vzgzfrhlp7ge1.jpeg?width=640&crop=smart&auto=webp&s=bbfa67cbeae08d4c800e7b5dc088c0330556268f', 'width': 640}, {'height': 784, 'url': 'https://preview.redd.it/vzgzfrhlp7ge1.jpeg?width=960&crop=smart&auto=webp&s=9d0b1578c4e2facab6ead47b75f1e743745c5de4', 'width': 960}, {'height': 882, 'url': 'https://preview.redd.it/vzgzfrhlp7ge1.jpeg?width=1080&crop=smart&auto=webp&s=4f4193c3721dc6a5a5c16a5f970c4427cb294f52', 'width': 1080}], 'source': {'height': 882, 'url': 'https://preview.redd.it/vzgzfrhlp7ge1.jpeg?auto=webp&s=e5ea83308c904e3702566fb577b9702c0a263711', 'width': 1080}, 'variants': {}}]} |
Title: Seeking Recommendations for Dataset Preparation Techniques for Non-Reasoning and Reasoning Models (e.g., DeepSeek R1) | 2 | Hello, Guys!
I'm currently working on a project that involves training both non-reasoning and reasoning models, specifically focusing on architectures like DeepSeek R1. As we all know, the quality of the dataset can significantly impact the performance of our models, so I'm eager to learn about effective dataset preparation techniques.
I'm particularly interested in:
1. Automated Approaches: Are there any automated tools or frameworks you’ve found useful for dataset preparation? I’m looking for solutions that can streamline the process, especially those that can handle data cleaning, normalization, augmentation, and splitting.
2. Techniques for Non-Reasoning Models: What specific techniques do you recommend for preparing datasets tailored to non-reasoning models? Any best practices or pitfalls to avoid?
3. Techniques for Reasoning Models: Similarly, what unique considerations should I keep in mind when preparing datasets for reasoning models like DeepSeek R1? Are there particular features or formats that enhance their performance?
4. Real-World Examples: If you have experience with a specific project or case study where dataset preparation made a significant difference, I would love to hear about it!
I appreciate any insights, resources, or personal experiences you can share. Thank you in advance for your help—looking forward to the discussion!
Best, | 2025-01-30T23:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ie0vcp/title_seeking_recommendations_for_dataset/ | ElPrincip6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie0vcp | false | null | t3_1ie0vcp | /r/LocalLLaMA/comments/1ie0vcp/title_seeking_recommendations_for_dataset/ | false | false | self | 2 | null |
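For the automated-pipeline part of the question, the Hugging Face `datasets` library covers cleaning, filtering, and splitting in a few lines. A minimal sketch; the `question`/`answer` field names are assumptions about your data:

```python
from datasets import load_dataset

ds = load_dataset("json", data_files="raw.jsonl", split="train")

def clean(row):
    # Normalize whitespace in both fields.
    row["question"] = " ".join(row["question"].split())
    row["answer"] = " ".join(row["answer"].split())
    return row

ds = ds.filter(lambda r: r["question"] and r["answer"]).map(clean)
splits = ds.train_test_split(test_size=0.05, seed=42)
splits["train"].to_json("train.jsonl")
splits["test"].to_json("eval.jsonl")
```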
How do I set kv cache quantisation in Ollama? | 1 | It seems like this was recently added based on github, but I'm struggling to find more information. How do I go about setting it up for the models I want to run? | 2025-01-30T23:57:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ie1gfd/how_do_i_set_kv_cache_quantisation_in_ollama/ | 3oclockam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie1gfd | false | null | t3_1ie1gfd | /r/LocalLLaMA/comments/1ie1gfd/how_do_i_set_kv_cache_quantisation_in_ollama/ | false | false | self | 1 | null |
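At the time of writing this is controlled by environment variables on the server process rather than per request; the variable names below come from the GitHub change and may shift between versions, so treat them as assumptions:

```python
import os, subprocess

# Equivalent shell: OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
env = dict(os.environ,
           OLLAMA_FLASH_ATTENTION="1",   # flash attention is required for a quantized KV cache
           OLLAMA_KV_CACHE_TYPE="q8_0")  # options: f16 (default), q8_0, q4_0
subprocess.run(["ollama", "serve"], env=env)
```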
Modular medium to large sized models | 2 | Are there any LLM projects you’re aware of that specialize in a single domain? For example, are there **70B parameter models** optimized specifically for coding that could outperform **GPT-4 (o1)** or **Claude** in programming tasks? Similarly, are there models fine-tuned for benchmarks like **HumanEval** or other specialized areas that achieve superior performance in their respective fields? | 2025-01-30T23:59:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ie1i68/modular_medium_to_large_sized_models/ | gamblingapocalypse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie1i68 | false | null | t3_1ie1i68 | /r/LocalLLaMA/comments/1ie1i68/modular_medium_to_large_sized_models/ | false | false | self | 2 | null |
New to Oobabooga, can't load any models | 1 | I have the docker-compose version running on an Ubuntu VM. Whenever I try to load a model I get an error saying ModuleNotFound, for whichever loader I select.
Do the loaders need to be installed separately? I'm brand new to all of this so any help is appreciated. | 2025-01-30T23:59:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ie1i8g/new_to_oobabooga_cant_load_any_models/ | formulafuckyeah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie1i8g | false | null | t3_1ie1i8g | /r/LocalLLaMA/comments/1ie1i8g/new_to_oobabooga_cant_load_any_models/ | false | false | self | 1 | null |
How to prepare datasets for fine-tuning a DeepSeek reasoning model? | 8 | Hello.
I'm trying to fine-tune reasoning models (`DeepSeek-R1-Distill-Qwen-*`).
I recently read this tutorial published by unsloth. [https://www.datacamp.com/tutorial/fine-tuning-deepseek-r1-reasoning-model](https://www.datacamp.com/tutorial/fine-tuning-deepseek-r1-reasoning-model)
This example uses the [`FreedomIntelligence/medical-o1-reasoning-SFT`] dataset, which has an additional `Complex_cot` field along with `Question`/`Response`.
The question is, my datasets only have "question - answer" pairs. No CoT context. How can I use my existing datasets to fine tuning reasoning models?
Should I prepare NEW datasets which contains `<think>...</think>` step? Just let this field blank? | 2025-01-31T00:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ie1r8x/how_to_prepare_datasets_to_fine_tuning_deepseek/ | Present-Tourist6487 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie1r8x | false | null | t3_1ie1r8x | /r/LocalLLaMA/comments/1ie1r8x/how_to_prepare_datasets_to_fine_tuning_deepseek/ | false | false | self | 8 | null |
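One common pattern for exactly this situation is to distill the missing CoT rather than leave it blank: have an existing reasoning model answer each of your questions, keep only the traces whose final answer matches your gold answer, and splice them in as the `<think>` block. A minimal sketch of the target format (the file name and field names are assumptions; match whatever your training script expects):

```bash
# one JSONL row per example; the <think> span is teacher-generated, not hand-written
cat > reasoning_sft_sample.jsonl <<'EOF'
{"question": "What is 17 * 24?", "response": "<think>17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.</think>The answer is 408."}
EOF
```

Leaving the think block empty risks teaching the model to skip reasoning entirely, which defeats the point of starting from an R1 distill.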
Are experts in MoE models (like DeepSeek) interpretable? | 1 | [removed] | 2025-01-31T00:16:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ie1vfx/are_experts_in_moe_models_like_deepseek/ | NYRDS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie1vfx | false | null | t3_1ie1vfx | /r/LocalLLaMA/comments/1ie1vfx/are_experts_in_moe_models_like_deepseek/ | false | false | self | 1 | null |
What is the "Best" LLM to Run Locally (Ollama) on a Laptop with 32GB of RAM? | 0 | My laptop has the following specifications:
- Intel Core i5-12450H Processor
- 32GB of RAM
- No dedicated GPU, only Intel HD Graphics
I am looking for an LLM that can:
- Translate Portuguese to English effectively.
- Structure corporate emails and technical documentation in English (nothing overly complex).
- Work well with a knowledge base, such as PDFs or similar formats.
Any recommendations would be greatly appreciated! Thank you! | 2025-01-31T00:31:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ie26b9/what_is_the_best_llm_to_run_locally_ollama_on_a/ | Low-Professional-667 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie26b9 | false | null | t3_1ie26b9 | /r/LocalLLaMA/comments/1ie26b9/what_is_the_best_llm_to_run_locally_ollama_on_a/ | false | false | self | 0 | null |
Best tool to scrape data through vision with locally running LLM like Qwen. | 1 | [removed] | 2025-01-31T01:15:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ie32na/best_tool_to_scrape_data_through_vision_with/ | DragonNewsOfficial | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie32na | false | null | t3_1ie32na | /r/LocalLLaMA/comments/1ie32na/best_tool_to_scrape_data_through_vision_with/ | false | false | self | 1 | null |
Has Anyone Successfully Fine-Tuned Whisper for a Local Language for better accuracy | 6 | Hello everyone,
I am fairly new to AI and coding, and I’m curious about fine-tuning OpenAI’s Whisper model to improve its accuracy for a local language.
Has anyone here successfully fine-tuned Whisper? If so, how did you do it? What tools, frameworks, or techniques did you use, and what method worked best? I tried doing it myself on Colab but couldn't seem to make it work. To begin with, I just used Common Voice from Mozilla to see if it was even possible; maybe it is my own limitation, but I wanted to ask if anyone has done it and could guide me a bit :) I'd really appreciate any insights, experiences, or resources that could help!
Thanks in advance! | 2025-01-31T01:36:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ie3i3a/has_anyone_successfully_finetuned_whisper_for_a/ | jumnopol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie3i3a | false | null | t3_1ie3i3a | /r/LocalLLaMA/comments/1ie3i3a/has_anyone_successfully_finetuned_whisper_for_a/ | false | false | self | 6 | null |
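One well-trodden path is the maintained speech-recognition example in the transformers repo, which handles Common Voice out of the box. A sketch along the lines of its README (the Hindi config here is the stock example from the Hugging Face docs; swap in your language, and expect to trim batch size on free Colab GPUs):

```bash
pip install "transformers[torch]" datasets librosa evaluate jiwer
# the script lives under examples/pytorch/speech-recognition/ in the transformers repo
python run_speech_recognition_seq2seq.py \
  --model_name_or_path openai/whisper-small \
  --dataset_name mozilla-foundation/common_voice_11_0 \
  --dataset_config_name hi \
  --language hindi \
  --task transcribe \
  --train_split_name train+validation \
  --eval_split_name test \
  --output_dir ./whisper-small-hi \
  --per_device_train_batch_size 16 \
  --learning_rate 1e-5 \
  --max_steps 5000 \
  --fp16 --do_train --do_eval --predict_with_generate
```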
What is your favorite 12/13B model for NSFW RP? | 37 | Hello guys, I guess it's that time of the year. Last year I tested a lot of M-N models such as violet-lotus, mag-mell, etc. There are still some minor problems with each model, such as becoming incoherent after 10k context, or only being suitable for third-person roleplay, and so on.
Since they were all released about half a year ago, I want to ask: what's your favorite for some sweet, sweaty RP? | 2025-01-31T01:43:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ie3mv7/what_is_your_favorite_1213b_model_for_nsfw_rp/ | NullHypothesisCicada | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie3mv7 | false | null | t3_1ie3mv7 | /r/LocalLLaMA/comments/1ie3mv7/what_is_your_favorite_1213b_model_for_nsfw_rp/ | false | false | nsfw | 37 | null |
Llama3.2:3b on a Raspberry Pi under ollama vs Deepseek-r1:1.5b | 0 | The only thing the hype around DeepSeek did was make me curious enough to install Ollama and run the local distillation of DeepSeek, the 1.5b version. It was fun for the first few times, watching it "think" (more accurately, overthink, lol), but it was a little too wonky and said some bizarre stuff. So I tried Llama3.2:3b, and I was blown away. It ran at about 6 to 8 TPS, fast enough to drive my TTS so it spoke normally. And it was conversationally awesome. I thought I'd just do a quick test but instead had a conversation that lasted nearly an hour and ranged from coding to nuclear fusion, astronomy, and the nature of consciousness. Not once did it go off the rails or output any cringe garbage.
The other thing the Deepseek hype did was drop Nvidia stock down enough for me to decide to buy some 😆
Anybody else running these small LMs on a Raspberry Pi or a Nano? | 2025-01-31T01:43:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ie3n1k/llama323b_on_a_raspberry_pi_under_ollama_vs/ | DelosBoard2052 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie3n1k | false | null | t3_1ie3n1k | /r/LocalLLaMA/comments/1ie3n1k/llama323b_on_a_raspberry_pi_under_ollama_vs/ | false | false | self | 0 | null |
ChatGPT talking about itself and competitor deepseek | 1 | [removed] | 2025-01-31T02:00:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ie40ez/chatgpt_talking_about_itself_and_competitor/ | Intelligent-Spot7183 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie40ez | false | null | t3_1ie40ez | /r/LocalLLaMA/comments/1ie40ez/chatgpt_talking_about_itself_and_competitor/ | false | false | 1 | null |
Exit status 0xc0000409 issue | 1 | [removed] | 2025-01-31T02:15:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ie4b3a/exit_status_0xc0000409_issue/ | Therefore72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie4b3a | false | null | t3_1ie4b3a | /r/LocalLLaMA/comments/1ie4b3a/exit_status_0xc0000409_issue/ | false | false | 1 | null |
DeepSeek AI Database Exposed: Over 1 Million Log Lines, Secret Keys Leaked | 216 | 2025-01-31T02:16:09 | https://thehackernews.com/2025/01/deepseek-ai-database-exposed-over-1.html?m=1 | MerePotato | thehackernews.com | 1970-01-01T00:00:00 | 0 | {} | 1ie4brg | false | null | t3_1ie4brg | /r/LocalLLaMA/comments/1ie4brg/deepseek_ai_database_exposed_over_1_million_log/ | false | false | 216 | {'enabled': False, 'images': [{'id': 'fNhWAXlKBLhK-Yop8YPIhciRosniCWaMCsBtZXSGO2E', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/FoBRfbFJiqbPvZWr-1-_kZti4liFY86vCxy63rbFeaE.jpg?width=108&crop=smart&auto=webp&s=1936b3003d0e11889737becfd4725642cd773117', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/FoBRfbFJiqbPvZWr-1-_kZti4liFY86vCxy63rbFeaE.jpg?width=216&crop=smart&auto=webp&s=979e8ba0ca1894a64ad2b56ca2468850edf475e7', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/FoBRfbFJiqbPvZWr-1-_kZti4liFY86vCxy63rbFeaE.jpg?width=320&crop=smart&auto=webp&s=b9c2a36fa54cb09cc8107f41c25469a1181061c5', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/FoBRfbFJiqbPvZWr-1-_kZti4liFY86vCxy63rbFeaE.jpg?width=640&crop=smart&auto=webp&s=8563af2625a34ede2f3818cc4a25bab8f7cf54c0', 'width': 640}], 'source': {'height': 380, 'url': 'https://external-preview.redd.it/FoBRfbFJiqbPvZWr-1-_kZti4liFY86vCxy63rbFeaE.jpg?auto=webp&s=caaebcd8360f9e5fe1ee62d3729ac443a0af2fe6', 'width': 728}, 'variants': {}}]} |
M4 Mac Mini 32GB RAM for 30B models? | 1 | [removed] | 2025-01-31T02:41:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ie4t5p/m4_mac_mini_32gb_ram_for_30b_models/ | random_poor_guy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie4t5p | false | null | t3_1ie4t5p | /r/LocalLLaMA/comments/1ie4t5p/m4_mac_mini_32gb_ram_for_30b_models/ | false | false | self | 1 | null |
Is it impossible to find a llm model comparison? | 3 | I cant find anything about which models are the best in terms of performance anywhere. Is the 671b R1 just the best at everything? Tia | 2025-01-31T02:53:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ie51kk/is_it_impossible_to_find_a_llm_model_comparison/ | DynamicOnion_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie51kk | false | null | t3_1ie51kk | /r/LocalLLaMA/comments/1ie51kk/is_it_impossible_to_find_a_llm_model_comparison/ | false | false | self | 3 | null |
Nvidia is 'paperware', so what about AMD? | 27 | Since the Nvidia 50x0 series is basically nonexistent and priced like a small car, how do we feel about the AMD 7900 XT? 20GB of VRAM, and according to some tests it's not a bad idea, considering it's on sale (eBay, new price) for around $700 vs. $4000+ for a 5090.
https://www.techpowerup.com/331776/amd-details-deepseek-r1-performance-on-radeon-rx-7900-xtx-confirms-ryzen-ai-max-memory-sizes
I happen to own one of the previous gen Nvidia Digits boxes (Xeon, 64GB, 4x full lane PCIE etc.) and am considering 4 x AMD 7900xt
Opinions? | 2025-01-31T02:59:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ie556h/nvidia_is_paperware_so_what_about_amd/ | Wintermute5791 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie556h | false | null | t3_1ie556h | /r/LocalLLaMA/comments/1ie556h/nvidia_is_paperware_so_what_about_amd/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'vhlDLH-v4if-SqlswU6rXsikSB_n13K-YnomXxYD23s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?width=108&crop=smart&auto=webp&s=e7780b8674f2af956f74bce8ab247c188aadb8b5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?width=216&crop=smart&auto=webp&s=6bff17f6b7829df187990561d91072cc56e01220', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?width=320&crop=smart&auto=webp&s=d93a38f1670951aca83cae1c4ca0f01e7afcd0b6', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?width=640&crop=smart&auto=webp&s=f21e0580018fbc158db3aacd175011215d1a0016', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?width=960&crop=smart&auto=webp&s=feda14d9a001301a734fe6957c26da36c472d477', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?width=1080&crop=smart&auto=webp&s=fb6db3d14b79c07013e17fb5cddb45ddb53cfdee', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://external-preview.redd.it/U7kEGeldAkVyqb3Nhj8mPodNM2GieE7WzfokvGyti8o.jpg?auto=webp&s=0c46eea26c9cfbb5d99c2d8af099fa9505ab2665', 'width': 1600}, 'variants': {}}]} |
Would training v3/r1 with higher precision make it better? | 1 | I heard they trained with lower precision (FP8) than usual. Is this approach hurting overall performance? I'm wondering if seeing this success they will just retrain for the next iteration and potentially compete with o3? | 2025-01-31T03:01:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ie571a/would_training_v3r1_with_higher_precision_make_it/ | robertpiosik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie571a | false | null | t3_1ie571a | /r/LocalLLaMA/comments/1ie571a/would_training_v3r1_with_higher_precision_make_it/ | false | false | self | 1 | null |
[2501.18096] LLMs can see and hear without any training | 24 | 2025-01-31T03:06:13 | https://arxiv.org/abs/2501.18096 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1ie59vq | false | null | t3_1ie59vq | /r/LocalLLaMA/comments/1ie59vq/250118096_llms_can_see_and_hear_without_any/ | false | false | default | 24 | null |
deepseek new model MTP | 1 | [removed] | 2025-01-31T03:13:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ie5f4a/deepseek_new_model_mtp/ | Shoddy_Mechanic9373 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie5f4a | false | null | t3_1ie5f4a | /r/LocalLLaMA/comments/1ie5f4a/deepseek_new_model_mtp/ | false | false | self | 1 | null |
Deepseek r1 model multi-token-prediction | 1 | [removed] | 2025-01-31T03:19:09 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ie5iwp | false | null | t3_1ie5iwp | /r/LocalLLaMA/comments/1ie5iwp/deepseek_r1_model_multitokenprediction/ | false | false | default | 1 | null |
Claude Projects Alternative ? | 1 | [removed] | 2025-01-31T03:28:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ie5ouk/claude_projects_alternative/ | ForThinkingDigital | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie5ouk | false | null | t3_1ie5ouk | /r/LocalLLaMA/comments/1ie5ouk/claude_projects_alternative/ | false | false | self | 1 | null |
If you can't afford to run R1 locally, then being patient is your best action. | 486 | Pause for a minute and read [I can now run a GPT-4 class model on my laptop](https://simonwillison.net/2024/Dec/9/llama-33-70b/).
It only took *20 months* for smaller models that can run on consumer hardware to surpass bigger, older models.
Yes, that feels like an eternity in internet time. But 1.5 years is short in a human lifespan. Don't believe me? Llama 1 is almost 2 years old! (Released on February 24, 2023.)
In the next 20 months, there will be small models that are better than R1.
Just like patient gamer save money waiting for steam sale, we save money by waiting for better, more efficient smaller model. | 2025-01-31T03:35:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ie5tls/if_you_cant_afford_to_run_r1_locally_then_being/ | bora_ach | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie5tls | false | null | t3_1ie5tls | /r/LocalLLaMA/comments/1ie5tls/if_you_cant_afford_to_run_r1_locally_then_being/ | false | false | self | 486 | {'enabled': False, 'images': [{'id': 'HWheKhQGoivbvKG3iJpnuoOj8CTBirv7y2Nh5S8xMUw', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?width=108&crop=smart&auto=webp&s=e7c5ef95ee36b3f5af8cb3a2775b8c09e51f03c2', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?width=216&crop=smart&auto=webp&s=c52fa52693287158950f1defd0572ae2ef137089', 'width': 216}, {'height': 229, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?width=320&crop=smart&auto=webp&s=9fac4ccbfd2c2577941fecc4bcc8da7f1789ee0f', 'width': 320}, {'height': 458, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?width=640&crop=smart&auto=webp&s=cd137f459436110a0a2f88fb9402b2ff30f720a7', 'width': 640}, {'height': 687, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?width=960&crop=smart&auto=webp&s=3831e89178ad486d0e8584cbdf17e93720cb697b', 'width': 960}, {'height': 773, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?width=1080&crop=smart&auto=webp&s=13c46c7a60e68ec985c3d0ad19a9593d003af9c0', 'width': 1080}], 'source': {'height': 1382, 'url': 'https://external-preview.redd.it/EEaf51_HzubmNCM0kFWnW6oa7MtAc4NszvDmRPVs3Ns.jpg?auto=webp&s=848fd8f5c025ed630d3a294cba90627a93893a09', 'width': 1930}, 'variants': {}}]} |
Please fill this survey about using LLMs for programming tasks | 1 | [removed] | 2025-01-31T03:44:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ie5zqx/please_fill_this_survey_about_using_llms_for/ | Zealousideal-File675 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie5zqx | false | null | t3_1ie5zqx | /r/LocalLLaMA/comments/1ie5zqx/please_fill_this_survey_about_using_llms_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'gfyV_eE0lNs_LDjuHPc5lLk0kRCwl-BbfSrFbRRsdk8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3A8i3hfMKNqfY-9l-7y2tu7FLtNgGFvxrqORGb5xu6M.jpg?width=108&crop=smart&auto=webp&s=c959067647564bf76ce60c7772f5f64bb6e2d693', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/3A8i3hfMKNqfY-9l-7y2tu7FLtNgGFvxrqORGb5xu6M.jpg?width=216&crop=smart&auto=webp&s=8c905cac987058b0b459ea9bfdb28813e37e3707', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/3A8i3hfMKNqfY-9l-7y2tu7FLtNgGFvxrqORGb5xu6M.jpg?width=320&crop=smart&auto=webp&s=ad0d5f193550f5ef72ed3b72687ca69875cd8e82', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/3A8i3hfMKNqfY-9l-7y2tu7FLtNgGFvxrqORGb5xu6M.jpg?width=640&crop=smart&auto=webp&s=f4f8c26bf396bcfcf425689dde75f2be5588b291', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/3A8i3hfMKNqfY-9l-7y2tu7FLtNgGFvxrqORGb5xu6M.jpg?width=960&crop=smart&auto=webp&s=6daa196a13778e77910016674920a22dc3087975', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/3A8i3hfMKNqfY-9l-7y2tu7FLtNgGFvxrqORGb5xu6M.jpg?width=1080&crop=smart&auto=webp&s=78f31820442558e505fc4b4a38deafd3855fa473', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/3A8i3hfMKNqfY-9l-7y2tu7FLtNgGFvxrqORGb5xu6M.jpg?auto=webp&s=574d970fdcf423f581852dc5044288cab4633f3e', 'width': 1200}, 'variants': {}}]} |
RTX 4070S paired with older Tesla GPU | 1 | [removed] | 2025-01-31T03:59:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ie6958/rtx_4070s_paired_with_older_tesla_gpu/ | PrincessCheryl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie6958 | false | null | t3_1ie6958 | /r/LocalLLaMA/comments/1ie6958/rtx_4070s_paired_with_older_tesla_gpu/ | false | false | self | 1 | null |
best place to run Deepseek R1? | 1 | [removed] | 2025-01-31T04:04:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ie6cua/best_place_to_run_deepseek_r1/ | ZLPERSON | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie6cua | false | null | t3_1ie6cua | /r/LocalLLaMA/comments/1ie6cua/best_place_to_run_deepseek_r1/ | false | false | self | 1 | null |
It’s time to lead guys | 927 | 2025-01-31T04:10:56 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ie6gv0 | false | null | t3_1ie6gv0 | /r/LocalLLaMA/comments/1ie6gv0/its_time_to_lead_guys/ | false | false | 927 | {'enabled': True, 'images': [{'id': 'peTnn9IyZJFq4COrrIVKKU7g_Bex4AEjtplIEsq1NNo', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/4r69mh9f89ge1.jpeg?width=108&crop=smart&auto=webp&s=b80c656e5f5e42c170289fc78297728a1c5f7d98', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/4r69mh9f89ge1.jpeg?width=216&crop=smart&auto=webp&s=2d6a8d4c5b44a4b9da53ef0be99a995cb944baf0', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/4r69mh9f89ge1.jpeg?width=320&crop=smart&auto=webp&s=e3d8e73c7d96ea55509d744efcae90ed3a080daf', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/4r69mh9f89ge1.jpeg?width=640&crop=smart&auto=webp&s=2f3c997deb132531af541fbe7a279f1544512cbb', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/4r69mh9f89ge1.jpeg?width=960&crop=smart&auto=webp&s=6f719054bb9c2ee7f0aec58a0da7154708352e4c', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/4r69mh9f89ge1.jpeg?width=1080&crop=smart&auto=webp&s=39d56f53c80fa225957e3c8646a59172f0d48281', 'width': 1080}], 'source': {'height': 1290, 'url': 'https://preview.redd.it/4r69mh9f89ge1.jpeg?auto=webp&s=ce3ca9aa14d2a0f677c0b01bed9e837a1d25f47f', 'width': 1289}, 'variants': {}}]} |
the endotherms will win! | 0 | 2025-01-31T04:17:09 | brucespector | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ie6ktw | false | null | t3_1ie6ktw | /r/LocalLLaMA/comments/1ie6ktw/the_endotherms_will_win/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'MwpCVPjV4xoex7zZUF_Ph_m0PetIfoaJVwrM5U97gLg', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/1s4wukcj99ge1.jpeg?width=108&crop=smart&auto=webp&s=336d1ecd8dc8600e8328af2be24e5b157e02e65d', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/1s4wukcj99ge1.jpeg?width=216&crop=smart&auto=webp&s=5429718ed3bb17d1edb75332bcb5ef1bfa5ed902', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/1s4wukcj99ge1.jpeg?width=320&crop=smart&auto=webp&s=f5ff5f01815e84b4e54df22cf09cdcaf243482cb', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/1s4wukcj99ge1.jpeg?width=640&crop=smart&auto=webp&s=415b582f8b7ea6f0833ed3aa0c5ef2e2386764a7', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/1s4wukcj99ge1.jpeg?width=960&crop=smart&auto=webp&s=7be1996e572dadc493ebee436e490c4760967860', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/1s4wukcj99ge1.jpeg?auto=webp&s=d3af778c51da2a744277b5b2ebc4dbe9f2094ab5', 'width': 1024}, 'variants': {}}]} |
Fine tuning for a limited private dataset with smaller local models? | 1 | I intend to build a Q&A bot on custom and private data, but I need some advice on how to build it.
So what I want to do is:
- have a small set of documents (20 PDFs) in the form of cleaned text (so no need to preprocess the data); all the query answers can, and should, be found in the documents, and the documents are not lengthy: each contains about 2,000 words.
- the end users will ask questions related to information that can be found in the documents. I usually have about 100 user queries as a sample.
Obviously I have tried RAG, which still takes too long and is limited by vector-search efficiency.
I want to try out fine-tuning and possibly model-distillation techniques for learning purposes. My end goal is to produce a 2B model (or anything less than 7B) that I can run with Ollama.
Any existing pipelines or techniques I can try? Note that my dataset is very small; I don't want to put everything into context, since the results from that were not as good. | 2025-01-31T04:21:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ie6nbp/fine_tuning_for_limited_private_dataset_for_local/ | cas4d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie6nbp | false | null | t3_1ie6nbp | /r/LocalLLaMA/comments/1ie6nbp/fine_tuning_for_limited_private_dataset_for_local/ | false | false | self | 1 | null |
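On the 'run it with Ollama afterwards' part: whichever fine-tuning route is chosen (PEFT/LoRA, Unsloth, etc.), the usual handoff is to merge the adapter, convert to GGUF with llama.cpp's converter, and register it with a Modelfile. A sketch with hypothetical paths:

```bash
# assumes a merged HF-format model in ./qa-bot-2b (paths are placeholders)
python llama.cpp/convert_hf_to_gguf.py ./qa-bot-2b --outfile qa-bot-2b.gguf

cat > Modelfile <<'EOF'
FROM ./qa-bot-2b.gguf
PARAMETER temperature 0.2
EOF

ollama create qa-bot -f Modelfile
ollama run qa-bot "What does document 7 say about onboarding?"
```

With only ~100 sample queries, generating paraphrases of each question from the 20 documents to grow the training set is often a bigger win than the choice of base model.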
Which is the best NSFW llm? | 125 | Frankly speaking i am looking to build an chat app for adult talking. It's a test project of mine. So if you know which one should i use, please let me know | 2025-01-31T04:26:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ie6qpi/which_is_the_best_nsfw_llm/ | NebulaNinja_779 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie6qpi | false | null | t3_1ie6qpi | /r/LocalLLaMA/comments/1ie6qpi/which_is_the_best_nsfw_llm/ | false | false | nsfw | 125 | null |
Why is LM studio is faster than Ollama | 1 | [removed] | 2025-01-31T04:32:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ie6uj3/why_is_lm_studio_is_faster_than_ollama/ | CosmicVoyager221 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie6uj3 | false | null | t3_1ie6uj3 | /r/LocalLLaMA/comments/1ie6uj3/why_is_lm_studio_is_faster_than_ollama/ | false | false | self | 1 | null |
DeepSeek: A Game-Changer in Cost-Effective AI Training | 1 | [removed] | 2025-01-31T04:36:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ie6wqc/deepseek_a_gamechanger_in_costeffective_ai/ | YungRyke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie6wqc | false | null | t3_1ie6wqc | /r/LocalLLaMA/comments/1ie6wqc/deepseek_a_gamechanger_in_costeffective_ai/ | false | false | self | 1 | null |
DeepSeek: A Game-Changer in Cost-Effective AI Training | 1 | 2025-01-31T04:41:43 | https://mengunogul.medium.com/95b95fab4a5f | YungRyke | mengunogul.medium.com | 1970-01-01T00:00:00 | 0 | {} | 1ie701i | false | null | t3_1ie701i | /r/LocalLLaMA/comments/1ie701i/deepseek_a_gamechanger_in_costeffective_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'uxq5cngMmC4HzsZxi4yIZ9m8dh8hlImUICSKD39Giz0', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/6n9Gctkhm0E1aeg7wraie8U4eRiginCvBeW3DM202CU.jpg?width=108&crop=smart&auto=webp&s=255afb9f17fc2b5b9d63cd828ba07c41e3c34b48', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/6n9Gctkhm0E1aeg7wraie8U4eRiginCvBeW3DM202CU.jpg?width=216&crop=smart&auto=webp&s=c5fcaef2d2473ebebd6d865f18dcec10c940fa07', 'width': 216}, {'height': 159, 'url': 'https://external-preview.redd.it/6n9Gctkhm0E1aeg7wraie8U4eRiginCvBeW3DM202CU.jpg?width=320&crop=smart&auto=webp&s=133c0adf9a6acb51a5727cbb41c0deaee334ef81', 'width': 320}, {'height': 318, 'url': 'https://external-preview.redd.it/6n9Gctkhm0E1aeg7wraie8U4eRiginCvBeW3DM202CU.jpg?width=640&crop=smart&auto=webp&s=a80d8dd07304467bd333baf1859a3d722ead9977', 'width': 640}, {'height': 477, 'url': 'https://external-preview.redd.it/6n9Gctkhm0E1aeg7wraie8U4eRiginCvBeW3DM202CU.jpg?width=960&crop=smart&auto=webp&s=bbef137a71af377e8f5437af3d425530adc80234', 'width': 960}, {'height': 537, 'url': 'https://external-preview.redd.it/6n9Gctkhm0E1aeg7wraie8U4eRiginCvBeW3DM202CU.jpg?width=1080&crop=smart&auto=webp&s=62534113a00e3871a17b7867d37ec78f2893ac2f', 'width': 1080}], 'source': {'height': 597, 'url': 'https://external-preview.redd.it/6n9Gctkhm0E1aeg7wraie8U4eRiginCvBeW3DM202CU.jpg?auto=webp&s=e4df89a4ed4644eb217f456cb44185a28663e8b7', 'width': 1200}, 'variants': {}}]} |
Mistral small 2409 instruct | 1 | [removed] | 2025-01-31T04:50:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ie75a7/mistral_small_2409_instruct/ | uchiha0324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie75a7 | false | null | t3_1ie75a7 | /r/LocalLLaMA/comments/1ie75a7/mistral_small_2409_instruct/ | false | false | self | 1 | null |
Deepseek alternatives? | 1 | [removed] | 2025-01-31T04:56:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ie7919/deepseek_alternatives/ | Tonomous_Agent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie7919 | false | null | t3_1ie7919 | /r/LocalLLaMA/comments/1ie7919/deepseek_alternatives/ | false | false | self | 1 | null |
Deepseek alternatives? | 1 |
Is there a platform to access the full-parameter DeepSeek model via API (with thinking mode) that isn't the official DeepSeek API? I'm tired of server issues and low rate limits. I figured since the full-parameter model is open source, some company somewhere may be hosting it, unless the license prevents that. | 2025-01-31T04:59:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ie7aq9/deepseek_alternatives/ | Embarrassed_Tree_164 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie7aq9 | false | null | t3_1ie7aq9 | /r/LocalLLaMA/comments/1ie7aq9/deepseek_alternatives/ | false | false | self | 1 | null |
Applying deep seek for analyzing | 4 | Hi everyone,
I’m excited to share AnalaI, an open-source SDK that leverages AI agents to analyze the crypto market. Our goal is to provide a powerful and flexible toolkit for traders, researchers, and developers to automate market analysis and gain deeper insights.
Key Features
✅ AI-driven market analysis
✅ Real-time data processing
✅ Customizable agent workflows
✅ Easy integration with trading and research tools
🔗 GitHub Repository: https://github.com/analai-hub/analai-python-sdk/
We are actively improving the project and would love your feedback! If you’re interested in AI, crypto, or trading automation, check out the repo and contribute. Let’s build something great together! | 2025-01-31T04:59:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ie7b33/applying_deep_seek_for_analyzing/ | Different_Prune_3529 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie7b33 | false | null | t3_1ie7b33 | /r/LocalLLaMA/comments/1ie7b33/applying_deep_seek_for_analyzing/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'iepemlEu66XqYKWwf0FDZzPptMdsyjlOD7JD_qYxBOg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FYezM8h3ToMxlGk0Fkh6ZO8xPfcGLxb1vfeyTwjjJxA.jpg?width=108&crop=smart&auto=webp&s=075c1f4e7d777e9ed3e71e5a4d59b69a9a527333', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FYezM8h3ToMxlGk0Fkh6ZO8xPfcGLxb1vfeyTwjjJxA.jpg?width=216&crop=smart&auto=webp&s=bb2d4129f71f61147cab092c5c4abc96bd52b788', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FYezM8h3ToMxlGk0Fkh6ZO8xPfcGLxb1vfeyTwjjJxA.jpg?width=320&crop=smart&auto=webp&s=ace9518aa651b6f226cf0dbf81814ef989a906ad', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FYezM8h3ToMxlGk0Fkh6ZO8xPfcGLxb1vfeyTwjjJxA.jpg?width=640&crop=smart&auto=webp&s=261787e3bc736a2232f314c2355cce926dc039dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FYezM8h3ToMxlGk0Fkh6ZO8xPfcGLxb1vfeyTwjjJxA.jpg?width=960&crop=smart&auto=webp&s=cb48d76fad4df7ee0c59b35b8d170d3526e2fc55', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FYezM8h3ToMxlGk0Fkh6ZO8xPfcGLxb1vfeyTwjjJxA.jpg?width=1080&crop=smart&auto=webp&s=b38f6ffb70abef73f42e53d6c7d11df1c8aeb600', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FYezM8h3ToMxlGk0Fkh6ZO8xPfcGLxb1vfeyTwjjJxA.jpg?auto=webp&s=4c49279b7a8cb9cfe35b362a192adde6766bdeac', 'width': 1200}, 'variants': {}}]} |
DeepSeek database left user data, chat histories exposed for anyone to see | 0 | 2025-01-31T05:01:18 | https://www.theverge.com/news/603163/deepseek-breach-ai-security-database-exposed | thesayke | theverge.com | 1970-01-01T00:00:00 | 0 | {} | 1ie7c4k | false | null | t3_1ie7c4k | /r/LocalLLaMA/comments/1ie7c4k/deepseek_database_left_user_data_chat_histories/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'AcLtdBVjm5eLMPCHPF3y4_OA8tl--Er4FEyhimVJTc4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/t_O2PqHEB1HPBCqQuAi6pqO0Kwfvn8bMHOqjaoimNSk.jpg?width=108&crop=smart&auto=webp&s=7ffeb80af3142abf51b183a9ae5b7461c208b113', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/t_O2PqHEB1HPBCqQuAi6pqO0Kwfvn8bMHOqjaoimNSk.jpg?width=216&crop=smart&auto=webp&s=26e64b1382619f727420c5e3e8db8b413d7aa014', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/t_O2PqHEB1HPBCqQuAi6pqO0Kwfvn8bMHOqjaoimNSk.jpg?width=320&crop=smart&auto=webp&s=66da8418bb5aa1876e1793145e7593097cd923f2', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/t_O2PqHEB1HPBCqQuAi6pqO0Kwfvn8bMHOqjaoimNSk.jpg?width=640&crop=smart&auto=webp&s=18ceca4c9512b00e463bd0acc5ab19f6c79f0355', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/t_O2PqHEB1HPBCqQuAi6pqO0Kwfvn8bMHOqjaoimNSk.jpg?width=960&crop=smart&auto=webp&s=47395dea2b50ad43bae4af3dd551e77c1f743b8e', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/t_O2PqHEB1HPBCqQuAi6pqO0Kwfvn8bMHOqjaoimNSk.jpg?width=1080&crop=smart&auto=webp&s=1460b14da3e3672d08821a76b5faddcf19891763', 'width': 1080}], 'source': {'height': 624, 'url': 'https://external-preview.redd.it/t_O2PqHEB1HPBCqQuAi6pqO0Kwfvn8bMHOqjaoimNSk.jpg?auto=webp&s=82a9670545faa9e0fe67438c0efe6b8255bc115d', 'width': 1200}, 'variants': {}}]} |
Chris Manning (one of the top 3 NLP/machine learning researchers in the world) believes the DeepSeek $6M training cost is plausible due to the optimizations discussed in their paper | 47 | While a lot of the things discussed in the DeepSeek paper have been verified, what has garnered the most skepticism is the training cost.
Chris Manning, who is highly regarded as one of the top 3-5 NLP researchers in the world, gave a talk yesterday, which was live-tweeted:
https://x.com/atroyn/status/1884700131884490762
"deepseek have succeeded at producing models with large numbers of experts (256 in v3). combined with multi-head latent attention, plus training in fb8, dramatically reduces training costs.
@chrmanning
buys the $6M training compute cost."
He buys the 6 million dollar training cost claimed. | 2025-01-31T05:03:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ie7db5/chris_manning_top_3_nlpmachine_learning/ | Research2Vec | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie7db5 | false | null | t3_1ie7db5 | /r/LocalLLaMA/comments/1ie7db5/chris_manning_top_3_nlpmachine_learning/ | false | false | self | 47 | {'enabled': False, 'images': [{'id': 'KZxcMqP2eToDBiblktOOglBrpKBK5N3_WJd8AWwcTz0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/NJbvyL6byy_BfIm9eox-emARP6w3eNLNB8x0w-0szoE.jpg?width=108&crop=smart&auto=webp&s=79cc0d5a655fe0c1743d200cf7d75a6b84f8dc92', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/NJbvyL6byy_BfIm9eox-emARP6w3eNLNB8x0w-0szoE.jpg?auto=webp&s=533f1003b17c8959a4328d1e23ac8b6fac11b58a', 'width': 200}, 'variants': {}}]} |
Did a data breach happen at DeepSeek? A Wake-Up Call for AI Data Security | 1 | [removed] | 2025-01-31T05:04:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ie7ee5/is_data_breach_happened_at_deepseek_wakeup_call/ | That_Praline3447 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie7ee5 | false | null | t3_1ie7ee5 | /r/LocalLLaMA/comments/1ie7ee5/is_data_breach_happened_at_deepseek_wakeup_call/ | false | false | self | 1 | null |
How to apply deepseek for crypto? | 1 | [removed] | 2025-01-31T05:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ie7f64/how_to_apply_deepseek_for_crypto/ | Different_Prune_3529 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie7f64 | false | null | t3_1ie7f64 | /r/LocalLLaMA/comments/1ie7f64/how_to_apply_deepseek_for_crypto/ | false | false | self | 1 | null |
DeepSeek is a Disinformation Machine: AI chatbot advances China’s position 60% of the time in response to prompts about Chinese, Russian, and Iranian false claims | 0 | 2025-01-31T05:07:28 | https://www.newsguardrealitycheck.com/p/deepseek-ai-chatbot-china-russia-iran-disinformation | thesayke | newsguardrealitycheck.com | 1970-01-01T00:00:00 | 0 | {} | 1ie7g0q | false | null | t3_1ie7g0q | /r/LocalLLaMA/comments/1ie7g0q/deepseek_is_a_disinformation_machine_ai_chatbot/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OQPS-cwqXiubaqx8raaQZ32x8KwT_YNPb4iaWciFIRU', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/37bZluzwKzMCp9o40WRS479mFh4_zWC99mroj6RqQpY.jpg?width=108&crop=smart&auto=webp&s=9382a288febc507971010ab8abb608deb63c212d', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/37bZluzwKzMCp9o40WRS479mFh4_zWC99mroj6RqQpY.jpg?width=216&crop=smart&auto=webp&s=b2f98ad3e6f68121f333dae9c1ab73bbe881460d', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/37bZluzwKzMCp9o40WRS479mFh4_zWC99mroj6RqQpY.jpg?width=320&crop=smart&auto=webp&s=9f04f077e70c7632053b1d239a13088f556c6576', 'width': 320}, {'height': 384, 'url': 'https://external-preview.redd.it/37bZluzwKzMCp9o40WRS479mFh4_zWC99mroj6RqQpY.jpg?width=640&crop=smart&auto=webp&s=dccd9e2b5776669384be69a9b96168ce2de9413a', 'width': 640}, {'height': 576, 'url': 'https://external-preview.redd.it/37bZluzwKzMCp9o40WRS479mFh4_zWC99mroj6RqQpY.jpg?width=960&crop=smart&auto=webp&s=8b73084ac0bea4b3166ff34da37c275c1ae225b0', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/37bZluzwKzMCp9o40WRS479mFh4_zWC99mroj6RqQpY.jpg?auto=webp&s=5974b8ebeef79c2421404326236fc3d9a6b291f7', 'width': 1000}, 'variants': {}}]} |
Mistral small 2409 instruct unable to load on multiple GPUs | 1 | [removed] | 2025-01-31T05:27:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ie7rpt/mistral_small_2409_instruct_unable_to_load_on/ | ammar201101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie7rpt | false | null | t3_1ie7rpt | /r/LocalLLaMA/comments/1ie7rpt/mistral_small_2409_instruct_unable_to_load_on/ | false | false | self | 1 | null |
Damn deepseek is relatable | 1 | [removed] | 2025-01-31T05:28:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ie7si2/damn_deepseek_is_relatable/ | get_off_reddit_01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie7si2 | false | null | t3_1ie7si2 | /r/LocalLLaMA/comments/1ie7si2/damn_deepseek_is_relatable/ | false | false | 1 | null |
Forget OpenAI Operator — here's an open source AI agent system that works brilliantly for free | 16 | 2025-01-31T05:31:26 | https://www.tomsguide.com/ai/forget-openai-operator-heres-an-open-source-ai-agent-system-that-works-brilliantly-for-free | Durian881 | tomsguide.com | 1970-01-01T00:00:00 | 0 | {} | 1ie7u38 | false | null | t3_1ie7u38 | /r/LocalLLaMA/comments/1ie7u38/forget_openai_operator_heres_an_open_source_ai/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'T7KBAdUgMQl8o6VGaCZ6ERds85Uq68LgVTNqdbbMkRY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/-L5PUbWTRtdCPpGhuP10XfAdWH0IuoN85iZ2ZYKwmU4.jpg?width=108&crop=smart&auto=webp&s=63a5b24ba4663a9781ce5b9a7e36834545364531', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/-L5PUbWTRtdCPpGhuP10XfAdWH0IuoN85iZ2ZYKwmU4.jpg?width=216&crop=smart&auto=webp&s=0ed4e66960b702645f6e72b2431cb44b2d77f330', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/-L5PUbWTRtdCPpGhuP10XfAdWH0IuoN85iZ2ZYKwmU4.jpg?width=320&crop=smart&auto=webp&s=40b3926cb9f44db1ca17fa4160f3e38f8af21058', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/-L5PUbWTRtdCPpGhuP10XfAdWH0IuoN85iZ2ZYKwmU4.jpg?width=640&crop=smart&auto=webp&s=d72e1442d696c85aa04c7dc71e68c46f31f32d84', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/-L5PUbWTRtdCPpGhuP10XfAdWH0IuoN85iZ2ZYKwmU4.jpg?width=960&crop=smart&auto=webp&s=1bf93263f29087d99c984413c592ff02ad231ed8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/-L5PUbWTRtdCPpGhuP10XfAdWH0IuoN85iZ2ZYKwmU4.jpg?width=1080&crop=smart&auto=webp&s=3959ba8f318fa6af8e4425a7d8a72f614cf45b1e', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/-L5PUbWTRtdCPpGhuP10XfAdWH0IuoN85iZ2ZYKwmU4.jpg?auto=webp&s=a8b5faed5b0d45fde0d1b6d6ecafa0c8866c84fd', 'width': 1200}, 'variants': {}}]} |
8x2080ti (22G) or 1x5090? | 8 | I mostly wanted a good workstation for small LLM/AI tasks before I send proposals to our school's cluster for H100 access.
Originally I planned on getting a 5090, but we all know what happened today. I do have a very nice server (ESC8000A-E11) that has eight PCIE 4.0 x16 slots.
It would take me exactly $1980 to acquire eight 2080ti with 22GB of VRAM, so 176GB in total.
Would this be a better deal than a single 5090 if I want to work with LLMs but also train AI models in general (which may not require so much VRAM)?
Obviously I think 8x2080ti would do much more than a single 5090 for LLMs, and in theory eight 2080tis are faster than a single 5090, but I'm worried that bandwidth will bottleneck them (especially for AI models that don't require so much VRAM; could a single 5090 be faster in those scenarios?). If you happen to know anything, I would appreciate it a lot! | 2025-01-31T05:36:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ie7x2q/8x2080ti_22g_or_1x5090/ | PossiblePossible2571 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie7x2q | false | null | t3_1ie7x2q | /r/LocalLLaMA/comments/1ie7x2q/8x2080ti_22g_or_1x5090/ | false | false | self | 8 | null |
Right way to run R1 quantized models on CPU? | 4 | I'm a bit late to LocalLLaMA; until now, I've been using APIs. For the past week, I've been running models on my home machine learning rig and now I'm hooked. I have access to an AMD EPYC 128C machine with 1TB of memory and a couple of U.2 drives in RAID. I'm wondering what's the best way to run the R1 quantized model on this rig. Should I use the Ollama or Hugging Face libraries? I appreciate your input. | 2025-01-31T05:39:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ie7yfo/right_way_run_r1_quantized_models_on_cpu/ | Significant_Bath8608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie7yfo | false | null | t3_1ie7yfo | /r/LocalLLaMA/comments/1ie7yfo/right_way_run_r1_quantized_models_on_cpu/ | false | false | self | 4 | null |
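Plain llama.cpp is usually the path of least resistance here (Ollama wraps it anyway, and the Hugging Face stack won't run GGUF quants efficiently on CPU). A starting-point sketch for a big EPYC box; the thread count and NUMA policy are guesses to benchmark, not tuned values:

```bash
# start --threads near the physical core count rather than the vCPU count;
# --numa distribute spreads pages across NUMA nodes on multi-die EPYC parts;
# --mlock pins weights in RAM (1TB comfortably fits the ~200GB UD-Q2_K_XL quant)
./llama.cpp/llama-cli \
  -m DeepSeek-R1-UD-Q2_K_XL-00001-of-00005.gguf \
  --threads 64 \
  --numa distribute \
  --ctx-size 8192 \
  --mlock \
  -p "Hello! How are you today?"
```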
DeepSeek-R1 now available on both Amazon Bedrock and SageMaker AI. | 6 | 2025-01-31T05:43:40 | bruhlmaocmonbro | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ie8106 | false | null | t3_1ie8106 | /r/LocalLLaMA/comments/1ie8106/deepseekr1_now_available_on_both_amazon_bedrock/ | false | false | 6 | {'enabled': True, 'images': [{'id': 'TupJuHhXmJfEwCVg-ABYaXhlCfur1WyU8knnXPmE6u8', 'resolutions': [{'height': 130, 'url': 'https://preview.redd.it/6zfdj5dyo9ge1.jpeg?width=108&crop=smart&auto=webp&s=844ff97647b7a99beb0624a9b203c142c4c16913', 'width': 108}, {'height': 261, 'url': 'https://preview.redd.it/6zfdj5dyo9ge1.jpeg?width=216&crop=smart&auto=webp&s=6f177e5aff448a25aea2bf57e6c53deaedc18611', 'width': 216}, {'height': 387, 'url': 'https://preview.redd.it/6zfdj5dyo9ge1.jpeg?width=320&crop=smart&auto=webp&s=e674d95674948ed8050b099803c3c001213e6ccc', 'width': 320}, {'height': 775, 'url': 'https://preview.redd.it/6zfdj5dyo9ge1.jpeg?width=640&crop=smart&auto=webp&s=2f1dbd893eafb11d6574ea6188f6affdc7a3f279', 'width': 640}, {'height': 1163, 'url': 'https://preview.redd.it/6zfdj5dyo9ge1.jpeg?width=960&crop=smart&auto=webp&s=85f9c131f5901259c66e8ca0dc463e0c62b633c7', 'width': 960}, {'height': 1308, 'url': 'https://preview.redd.it/6zfdj5dyo9ge1.jpeg?width=1080&crop=smart&auto=webp&s=269d542bc1e5d40cca54b9f76b832a9e7ee1eb28', 'width': 1080}], 'source': {'height': 1418, 'url': 'https://preview.redd.it/6zfdj5dyo9ge1.jpeg?auto=webp&s=9b93eeb9019149d818ee6d8da4b5abac124e84e3', 'width': 1170}, 'variants': {}}]} |
Which LLM can I run on my computer that simply anonymises data in PDFs? | 0 | I'm running a 4080 16GB and have 64GB of DDR5 RAM.
My goal is to feed an LLM slightly varying versions of a PDF and have it censor names and numeric values, preparing the data so I can use it with online AIs like Claude or ChatGPT. After they have processed the data, my local LLM would have to map it back to the original values.
I would need it for 2 different projects with corresponding data sizes: one is at most 4-5 pages of PDF; the other is sometimes >90 pages of data, which I can export to .csv.
Is that possible? | 2025-01-31T05:45:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ie827a/which_llm_can_i_run_on_my_computer_that_simply/ | jachjach | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie827a | false | null | t3_1ie827a | /r/LocalLLaMA/comments/1ie827a/which_llm_can_i_run_on_my_computer_that_simply/ | false | false | self | 0 | null |
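Possible, and a Llama 3.1 8B or Qwen2.5 7B class model fits the 4080 comfortably for the extraction step. The part that matters more than model choice is keeping a reversible mapping outside the model, so de-anonymization is deterministic instead of another LLM call. A rough sketch of that round trip (file names are placeholders, and naive sed needs escaping for values containing regex characters):

```bash
# map.tsv: real_value <TAB> placeholder, produced by your extraction pass
printf 'Acme GmbH\tCOMPANY_1\nJane Doe\tPERSON_1\n142500\tAMOUNT_1\n' > map.tsv

# pseudonymize the extracted text before sending it to a cloud model
while IFS=$'\t' read -r real fake; do
  sed -i "s/${real}/${fake}/g" extracted.txt
done < map.tsv

# ...send extracted.txt to Claude/ChatGPT, save the reply as reply.txt...

# reverse the mapping locally on the reply
while IFS=$'\t' read -r real fake; do
  sed -i "s/${fake}/${real}/g" reply.txt
done < map.tsv
```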
Truly impressive | 0 | Mistral Small 3 crushing the strawberry benchmark | 2025-01-31T05:57:50 | centerdeveloper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ie8932 | false | null | t3_1ie8932 | /r/LocalLLaMA/comments/1ie8932/truly_impressive/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'Z4muulJ0jZyEW36aoseSjv_3nCRWuzqjwcLLnqQum10', 'resolutions': [{'height': 22, 'url': 'https://preview.redd.it/gyxxqh1ir9ge1.jpeg?width=108&crop=smart&auto=webp&s=2b7732e747369c6695930ceac1ef0bef2f79f4fd', 'width': 108}, {'height': 44, 'url': 'https://preview.redd.it/gyxxqh1ir9ge1.jpeg?width=216&crop=smart&auto=webp&s=45c9ef417f8aa39dc47517fee5d2ec147709d84a', 'width': 216}, {'height': 65, 'url': 'https://preview.redd.it/gyxxqh1ir9ge1.jpeg?width=320&crop=smart&auto=webp&s=32a5940b142eaa7d70d4fbfd2b66e6c9e54dd884', 'width': 320}, {'height': 131, 'url': 'https://preview.redd.it/gyxxqh1ir9ge1.jpeg?width=640&crop=smart&auto=webp&s=7b79de1641766ec9576050fd3f6e3dd52d0204e3', 'width': 640}, {'height': 196, 'url': 'https://preview.redd.it/gyxxqh1ir9ge1.jpeg?width=960&crop=smart&auto=webp&s=dc1cc05514c07b135c2daa1de483f184924d62b0', 'width': 960}, {'height': 221, 'url': 'https://preview.redd.it/gyxxqh1ir9ge1.jpeg?width=1080&crop=smart&auto=webp&s=8f32f3487c5d7441c9cda5cd204c8600a25915ab', 'width': 1080}], 'source': {'height': 239, 'url': 'https://preview.redd.it/gyxxqh1ir9ge1.jpeg?auto=webp&s=9891b66e9fcc243c6e26b74fac0a4506f6c01054', 'width': 1166}, 'variants': {}}]} |
What the fuck is abbas man🗿💔 | 58 | 2025-01-31T06:16:37 | Fun-Property-5964 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ie8jea | false | null | t3_1ie8jea | /r/LocalLLaMA/comments/1ie8jea/what_the_fuck_is_abbas_man/ | false | false | 58 | {'enabled': True, 'images': [{'id': 'l1bMhPuDEy-_DcEUJoXAYCl1H752x5EYNpzCWODSQ7g', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/bi8mqxkuu9ge1.jpeg?width=108&crop=smart&auto=webp&s=8e403f681d3b492e2db6c63a57f1d4faa7678197', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/bi8mqxkuu9ge1.jpeg?width=216&crop=smart&auto=webp&s=6f696e0cc16463ec51277f08f99e40b8a8a4d226', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/bi8mqxkuu9ge1.jpeg?width=320&crop=smart&auto=webp&s=b424d0e3deeaa60af8134c30657aa831e2e0c876', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/bi8mqxkuu9ge1.jpeg?width=640&crop=smart&auto=webp&s=e82fac435f8a5f0ee1b5ec8398cb5ea3b6c1952d', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/bi8mqxkuu9ge1.jpeg?width=960&crop=smart&auto=webp&s=ac8c9132e3866dd416f93e510f34278d129ef2b2', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/bi8mqxkuu9ge1.jpeg?width=1080&crop=smart&auto=webp&s=49404576cf641385845cf912112c4d31f5077baf', 'width': 1080}], 'source': {'height': 2388, 'url': 'https://preview.redd.it/bi8mqxkuu9ge1.jpeg?auto=webp&s=0487961758660b4ed9ec94c3c1ee34c5c2db80ce', 'width': 1080}, 'variants': {}}]} |
I'm confused. Here are some absolute noob questions. | 14 | Can someone please help me out?
I'm new to this llama stuff, and the DeepSeek hype got me into it.
1. I wanted to download DeepSeek and DeepSeek Coder V2, and all I saw were some files that are 8 months old (on Hugging Face). Is this actually the correct version? Why did people only start talking about it a few days ago, then?
2. Also, what exactly do 1.5b, 7b, etc. mean, and are those below-10B models even useful? I've downloaded Meta 1.5b (an LM Studio preset), and for me it's not just slow, it also just makes up fairy tales when I ask it something.
I've also got 7b DeepSeek (I hope it's the correct one) and it isn't really good either. It also takes way too long thinking and typing.
3. Also, when I search for DeepSeek Coder V2 in LM Studio, it gives me a file with a relatively small number of downloads.
But when I googled Coder V2, there is also another version of it with a huge number of downloads. Why doesn't LM Studio recommend that one?
4. Should I download models from Hugging Face instead of LM Studio? (Which also downloads from Hugging Face, but see my question above.)
5. And last question: LM Studio or Ollama? | 2025-01-31T06:47:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ie8z9y/im_confused_here_are_some_absolut_noob_questions/ | soyoucheckusernames | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie8z9y | false | null | t3_1ie8z9y | /r/LocalLLaMA/comments/1ie8z9y/im_confused_here_are_some_absolut_noob_questions/ | false | false | self | 14 | null |
What's the best model for me to run? | 2 | Not that I'm a newbie here, but I only recently got a laptop with a GPU; before that I couldn't run LLMs since all my laptops had iGPUs.
So I started tinkering, but I'm still unable to decide what the ideal sweet spot for me is.
So if anyone could recommend models with quants, that'd be great!
Specs -
Lpddr5 ram 16gb at 5200Mt/s
Cpu - I5-12450H
Gpu - Rtx 3050 6gb at 80w(about 5.6 gb left free after reserved mem)
System - archlinux
For now I use Falcon3-7B at Q5_K_M and get about 25-28 tps.
Phi-4 at Q4_K_M was running at about 15 tps (CPU offload, of course).
Llama 3.1 8B and the Tulu fine-tune of it at Q4_K_M run at 25 tps.
3B models run at about 60 tps.
To make my question clear: what should my ideal model be? As in, an x-B model with y-quant and at least z tps. I'm not asking for specific models, like "Falcon is worse than xyz, don't use that"...
| 2025-01-31T06:59:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ie94x4/whats_the_best_model_for_me_to_run/ | oglord69420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie94x4 | false | null | t3_1ie94x4 | /r/LocalLLaMA/comments/1ie94x4/whats_the_best_model_for_me_to_run/ | false | false | self | 2 | null |
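For a back-of-envelope answer to the 'x-B model at y-quant' framing: weight memory is roughly params (in billions) times bits-per-weight divided by 8, plus about 10% runtime overhead, with the KV cache on top. The constants below are assumptions, not exact figures:

```bash
# 8B model at Q4_K_M (~4.8 effective bits/weight) with ~10% overhead
awk 'BEGIN { printf "~%.1f GB for weights\n", 8 * 4.8 / 8 * 1.1 }'
```

That lands around 5.3GB, which matches the pattern above: 7-8B Q4/Q5 models sit right at the edge of a 6GB card, and anything bigger spills to CPU.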
Tool calling support landed in llama.cpp today! | 114 | Many of the popular open models are supported: generic + native for Llama, Functionary, Hermes, Mistral, Firefunction, DeepSeek
https://github.com/ggerganov/llama.cpp/pull/9639
| 2025-01-31T06:59:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ie94yy/tool_calling_support_landed_in_llamacpp_today/ | No-Statement-0001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie94yy | false | null | t3_1ie94yy | /r/LocalLLaMA/comments/1ie94yy/tool_calling_support_landed_in_llamacpp_today/ | false | false | self | 114 | {'enabled': False, 'images': [{'id': 'xdL_f93JHxJKkg0tzzSR6qmO38FWaBdur9GLaq0dhDw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YBzK_b0zC5kMXznqvNh60wXFxyGJhjHVYRyzqdghtGA.jpg?width=108&crop=smart&auto=webp&s=e5f10d088b66dd86e6e41f2847f4f1b8622d1577', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YBzK_b0zC5kMXznqvNh60wXFxyGJhjHVYRyzqdghtGA.jpg?width=216&crop=smart&auto=webp&s=faa4524b4f3f795b54eaeff8b8b2a2580951e3ca', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YBzK_b0zC5kMXznqvNh60wXFxyGJhjHVYRyzqdghtGA.jpg?width=320&crop=smart&auto=webp&s=9fae25e89e1bb3330586ecbc3f6f275957ce9928', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YBzK_b0zC5kMXznqvNh60wXFxyGJhjHVYRyzqdghtGA.jpg?width=640&crop=smart&auto=webp&s=c5f74d128e1a6c5245d73b35d943114c838425b7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YBzK_b0zC5kMXznqvNh60wXFxyGJhjHVYRyzqdghtGA.jpg?width=960&crop=smart&auto=webp&s=e24d587066d1091f6d37173761eabb8bce009328', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YBzK_b0zC5kMXznqvNh60wXFxyGJhjHVYRyzqdghtGA.jpg?width=1080&crop=smart&auto=webp&s=5281fda25134e4e68dd78a0ac7d3a8417da9551d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YBzK_b0zC5kMXznqvNh60wXFxyGJhjHVYRyzqdghtGA.jpg?auto=webp&s=77b42513319b3530ca75bf62c7e9a352dff31bc6', 'width': 1200}, 'variants': {}}]} |
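For a quick smoke test: the PR wires tool use into llama-server's OpenAI-compatible chat endpoint, gated behind the `--jinja` flag. A minimal sketch (the model path and tool schema are placeholders):

```bash
./llama-server --jinja -fa -m model.gguf --port 8080 &

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```

A model with native support (a Hermes or Mistral build, say) should answer with a `tool_calls` entry rather than plain text; for other models the generic handler constrains output to the same shape.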
Is the DeepSeek API still down? It's already been a few days!? | 0 | [https://status.deepseek.com/](https://status.deepseek.com/) | 2025-01-31T06:59:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ie94zz/is_deepseek_api_still_down_alrready_few_days/ | staypositivegirl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie94zz | false | null | t3_1ie94zz | /r/LocalLLaMA/comments/1ie94zz/is_deepseek_api_still_down_alrready_few_days/ | false | false | self | 0 | null |
Why does this Qwen model have a weird name? Is this how Alibaba named it internally? | 0 | 2025-01-31T07:07:08 | WordyBug | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ie9951 | false | null | t3_1ie9951 | /r/LocalLLaMA/comments/1ie9951/why_does_this_qwen_model_have_a_weird_name_is/ | false | false | 0 | {'enabled': True, 'images': [{'id': '4a_wgBbIqkGDBn5HFBd45apP0qP8he-_27wUVH7gEZ4', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/ilv216oo3age1.png?width=108&crop=smart&auto=webp&s=dc5055030b5795d691531ab7907d56f5cba411c6', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/ilv216oo3age1.png?width=216&crop=smart&auto=webp&s=0aca1a6873eb208849a0cfa32deff50be4e9f964', 'width': 216}, {'height': 199, 'url': 'https://preview.redd.it/ilv216oo3age1.png?width=320&crop=smart&auto=webp&s=aba1ec0285bd9035ae0de6f8f4126d9abfd3d9c7', 'width': 320}, {'height': 398, 'url': 'https://preview.redd.it/ilv216oo3age1.png?width=640&crop=smart&auto=webp&s=7c36ca59df0a85c36abd5aafabd4730f15b0300e', 'width': 640}, {'height': 598, 'url': 'https://preview.redd.it/ilv216oo3age1.png?width=960&crop=smart&auto=webp&s=d5c0aacc0892f3a50fb3cc92806e1afdc0a00c3f', 'width': 960}, {'height': 672, 'url': 'https://preview.redd.it/ilv216oo3age1.png?width=1080&crop=smart&auto=webp&s=f8b02109ba8c349a6b0cc253a77802ba817b9dd2', 'width': 1080}], 'source': {'height': 1064, 'url': 'https://preview.redd.it/ilv216oo3age1.png?auto=webp&s=c1a54de19d48cb496805252afdb043d964050e77', 'width': 1708}, 'variants': {}}]} |
How to run DeepSeek 2.51-bit quant at decent speed with 4xA100? | 4 | I am running [DeepSeek-R1-UD-Q2\_K\_XL](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-Q2_K_XL) on a 4xA100 machine with 512GB of RAM and 128 vCPUs.
I use llama.cpp server to do this. Here is the command that I use to start the server:
./llama.cpp/llama-server \
  --host 0.0.0.0 \
  --port 11434 \
  -m /workspace/DeepSeek-R1-GGUF/DeepSeek-R1-UD-Q2_K_XL/DeepSeek-R1-UD-Q2_K_XL-00001-of-00005.gguf \
  --device CUDA0,CUDA1,CUDA2,CUDA3 \
  -sm layer \
  --tensor-split 1,1,1,1 \
  -ngl 64 \
  -c 8192 \
  -t 126 \
  -b 2048 \
  --threads-batch 64 \
  --timeout 3600 \
  --flash-attn \
  --mlock \
  --no-mmap \
  --no-kv-offload \
  --verbose-prompt \
  --prio 2 \
  --log-file /workspace/llama-server.log \
  --log-verbose \
  --log-prefix \
  --log-timestamps \
  --slots \
  --metrics
All 61 layers of the model are loaded into VRAM.
However, it is incredibly slow. A simple prompt of "Hello! How are you today?" took 3 minutes to complete.
This prompt:
https://preview.redd.it/b6kim5913age1.png?width=1786&format=png&auto=webp&s=8a20240d14d1d25fa02622f9c6fd80ae3ad6eea8
took 7 minutes.
When using the API with 5,000-token contexts, it times out after one hour.
Am I missing something here? Can this be run at decent speed locally?
I am not interested in the 70b distil version, I need a quant. The distil version is not good for what I need | 2025-01-31T07:10:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ie9amy/how_to_run_deepseek_251_quant_at_decent_speed/ | Significant_Bike9759 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ie9amy | false | null | t3_1ie9amy | /r/LocalLLaMA/comments/1ie9amy/how_to_run_deepseek_251_quant_at_decent_speed/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'xSdGWArWU6LYyDRL5oP5FnuIAfKsN1Z6N1wc8N_fOQY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?width=108&crop=smart&auto=webp&s=38be96fe7ba592d724845ec508925c2e2d0437a9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?width=216&crop=smart&auto=webp&s=216add24eeddf96721764be15a01323d3289a098', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?width=320&crop=smart&auto=webp&s=146aafa2effa94c6a92be3a1e52d5d1c5dada77c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?width=640&crop=smart&auto=webp&s=bc7cd6ab7b35a273b107dce1a4113ba2c9dcca51', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?width=960&crop=smart&auto=webp&s=f708695c420ae4c27a7b7b045b263ef095a49773', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?width=1080&crop=smart&auto=webp&s=674cf56e451c44a0c9ae525a6f1cb1a1dd92eab0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uHOmNdCTHW-Q1CBdw01aifeSpeyvgfhjJI_lcC-SH5c.jpg?auto=webp&s=15786bbf8fa654f9c457319fd2509fc682f49b99', 'width': 1200}, 'variants': {}}]} |
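Not a guaranteed fix, but the usual suspect in this exact setup is `--no-kv-offload`: it pins the whole KV cache in system RAM, so every token's attention round-trips over PCIe even with all layers resident in VRAM, and `-t 126` adds little once everything is offloaded. A sketch of the same launch with those two changed:

```bash
# drop --no-kv-offload so the KV cache lives on the GPUs (it fits easily at -c 8192
# across 4x80GB), and shrink -t since CPU threads mostly idle when fully offloaded
./llama.cpp/llama-server \
  --host 0.0.0.0 --port 11434 \
  -m /workspace/DeepSeek-R1-GGUF/DeepSeek-R1-UD-Q2_K_XL/DeepSeek-R1-UD-Q2_K_XL-00001-of-00005.gguf \
  --device CUDA0,CUDA1,CUDA2,CUDA3 \
  -sm layer --tensor-split 1,1,1,1 \
  -ngl 64 -c 8192 -b 2048 \
  --flash-attn --mlock --no-mmap \
  -t 16
```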