title: string (length 1–300)
score: int64 (0 – 8.54k)
selftext: string (length 0 – 40k)
created: timestamp[ns] (2023-04-01 04:30:41 – 2025-06-30 03:16:29)
url: string (length 0 – 878)
author: string (length 3 – 20)
domain: string (length 0 – 82)
edited: timestamp[ns] (1970-01-01 00:00:00 – 2025-06-26 17:30:18)
gilded: int64 (0 – 2)
gildings: string (7 classes)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646 – 1.8k)
name: string (length 10)
permalink: string (length 33 – 82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4 – 213)
ups: int64 (0 – 8.54k)
preview: string (length 301 – 5.01k)
Impressive results on the Strawberry test from qwen2.5-coder:1.5b at Q4K_M
0
ollama run qwen2.5-coder:1.5b

>>> /show info
Model
  architecture        qwen2
  parameters          1.5B
  context length      32768
  embedding length    1536
  quantization        Q4_K_M
System
  You are Qwen, created by Alibaba Cloud. You are a helpful assistant.
License
  Apache License Version 2.0, January 2004

>>> Use step by step reasoning to plan how to solve the following question and then write python code that will provide the answer rather than trying to do the math yourself: How many R's are in the word Strawberry?

To determine how many R's are in the word "Strawberry", I'll use a straightforward approach:

Step 1: Identify all occurrences of the letter 'R' in the word.
Step 2: Count the number of occurrences.

Here's the Python code to solve this problem:

```python
# Define the word
word = "Strawberry"

# Use the count() method to find the number of occurrences of the letter 'R'
num_rs = word.count('R')

# Print the result
print(f"The number of R's in the word '{word}' is {num_rs}.")
```

When you run this code, it will output:

```
The number of R's in the word 'Strawberry' is 3.
```

I don't know about you, but I'm pretty impressed with that, especially considering it's a 4-bit quant of a 1.5B parameter model. Meanwhile, Gemini Flash 2.0 Experimental seems to think it's 2 R's.
2025-01-08T10:36:05
https://www.reddit.com/r/LocalLLaMA/comments/1hwh7fb/impressive_results_on_the_strawberry_test_from/
ServeAlone7622
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwh7fb
false
null
t3_1hwh7fb
/r/LocalLLaMA/comments/1hwh7fb/impressive_results_on_the_strawberry_test_from/
false
false
self
0
null
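A side note on the snippet the model generated above (not from the original post): `str.count` is case-sensitive, so `word.count('R')` on "Strawberry" actually returns 0, and a case-insensitive count is what produces the 3 the model reports. A minimal sketch of that check:

```python
# Minimal check: str.count is case-sensitive, so normalise case
# before counting to get the intended answer.
word = "Strawberry"

print(word.count('R'))          # 0 -- only uppercase 'R' would match
print(word.lower().count('r'))  # 3 -- lowercase everything, then count
```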
Best GPU suggestions for running models that rival ChatGPT 4o?
1
[removed]
2025-01-08T10:51:29
https://www.reddit.com/r/LocalLLaMA/comments/1hwhg6c/best_gpu_suggestions_for_running_models_that/
Material-Lock6756
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwhg6c
false
null
t3_1hwhg6c
/r/LocalLLaMA/comments/1hwhg6c/best_gpu_suggestions_for_running_models_that/
false
false
self
1
null
2 months ago, ct3003 tested a computer with similar specs to the Nvidia Digits: Grace CPU and 96GB VRAM. Price: 32k Euro. Conclusion: ARM is problematic for AI
0
2025-01-08T10:51:55
https://www.youtube.com/watch?v=Wrv1sEn6hiw
yhodda
youtube.com
1970-01-01T00:00:00
0
{}
1hwhgf2
false
{'oembed': {'author_name': "c't 3003", 'author_url': 'https://www.youtube.com/@ct3003', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Wrv1sEn6hiw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Rechenpower wie bei OpenAI – auf meinem Schreibtisch"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Wrv1sEn6hiw/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Rechenpower wie bei OpenAI – auf meinem Schreibtisch', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1hwhgf2
/r/LocalLLaMA/comments/1hwhgf2/2_months_ago_ct3003_tested_a_computer_simlar/
false
false
https://b.thumbs.redditm…hLYarHAB6rKI.jpg
0
{'enabled': False, 'images': [{'id': 'faI5-eC30roZaVYvk4uxxlzt83rZ6UbjYkZSQv75KAw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/JADnJCDGUu05EVSTuZHbUu2IedCQdbH371jSHYMQ9wY.jpg?width=108&crop=smart&auto=webp&s=2c1ad3bfffc8b2eb4f9d1a74cb9977f1c78d61a7', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/JADnJCDGUu05EVSTuZHbUu2IedCQdbH371jSHYMQ9wY.jpg?width=216&crop=smart&auto=webp&s=7ae7959ca4898c1d33e96bdebb53c52e2a0479d3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/JADnJCDGUu05EVSTuZHbUu2IedCQdbH371jSHYMQ9wY.jpg?width=320&crop=smart&auto=webp&s=1017f2b37f88f01dcf25dd0da3931a12a79989b2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/JADnJCDGUu05EVSTuZHbUu2IedCQdbH371jSHYMQ9wY.jpg?auto=webp&s=23775c23bf9824fe6a943792222415dcc468227c', 'width': 480}, 'variants': {}}]}
Working with Multiple Pdf files with tables (only tables :') )
1
[removed]
2025-01-08T11:19:59
https://www.reddit.com/r/LocalLLaMA/comments/1hwhwov/working_with_multiple_pdf_files_with_tables_only/
idcmuch1805
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwhwov
false
null
t3_1hwhwov
/r/LocalLLaMA/comments/1hwhwov/working_with_multiple_pdf_files_with_tables_only/
false
false
self
1
null
How to train my documents on top of an existing model?
1
I would like to include/train my local documents into an existing model. Is there any guide or training course that does that?
2025-01-08T11:30:48
https://www.reddit.com/r/LocalLLaMA/comments/1hwi36g/how_to_train_my_documents_on_top_of_on_existing/
yelling-at-clouds-40
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwi36g
false
null
t3_1hwi36g
/r/LocalLLaMA/comments/1hwi36g/how_to_train_my_documents_on_top_of_on_existing/
false
false
self
1
null
PDF Knowledge Base
1
[removed]
2025-01-08T11:43:59
https://www.reddit.com/r/LocalLLaMA/comments/1hwib5h/pdf_knowledge_base/
Extension_Leave9652
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwib5h
false
null
t3_1hwib5h
/r/LocalLLaMA/comments/1hwib5h/pdf_knowledge_base/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uxrXkrMCjGWnftQfjR01PHlkv5K5ADmQHGbwzh89tm8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ccWLOwOGWVDraiIIn5hOd-X7Cvajiqa687SLU_HncZQ.jpg?width=108&crop=smart&auto=webp&s=ee595d240ad01c3fc28fa05aafe04cdf8b6efbeb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ccWLOwOGWVDraiIIn5hOd-X7Cvajiqa687SLU_HncZQ.jpg?width=216&crop=smart&auto=webp&s=8d49039e77cab2785d1bcf07aaa4a8213dd5c947', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ccWLOwOGWVDraiIIn5hOd-X7Cvajiqa687SLU_HncZQ.jpg?width=320&crop=smart&auto=webp&s=6720be458112ff5b3a82eee94085b53290e2f9c2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ccWLOwOGWVDraiIIn5hOd-X7Cvajiqa687SLU_HncZQ.jpg?width=640&crop=smart&auto=webp&s=23b524b3e8fb864dd07f07a0e557c1a26a06d94d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ccWLOwOGWVDraiIIn5hOd-X7Cvajiqa687SLU_HncZQ.jpg?width=960&crop=smart&auto=webp&s=cd77566a41c73a41d92471f9665d2459dc8ecd5c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ccWLOwOGWVDraiIIn5hOd-X7Cvajiqa687SLU_HncZQ.jpg?width=1080&crop=smart&auto=webp&s=8bee4e4aad7a4c5ccf7f25c0a7eacc3cd0eb6e48', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ccWLOwOGWVDraiIIn5hOd-X7Cvajiqa687SLU_HncZQ.jpg?auto=webp&s=d86121be109971a55c4a7b294f4f778d515d0295', 'width': 1200}, 'variants': {}}]}
How do you keep up with the SOTA of everything? Where's the best leaderboards?
25
My knowledge basically only covers LLMs, and image and video generators. Even within that there's a lot I just don't know. My clumsy searching on Arxiv can only go so far. For example, I would like to get into the voice side of things, and not just in English. How the hell do I start? I'd need to know which models know certain languages the best, which can decipher my speech the best, which can output the most natural and low-latency speech, which can best see/transcribe foreign videos/images, etc. What about the kinds of models now popping up, that can see and understand your screen, and even perform computer tasks for you? That's several different skills there too. Not to mention the countless RAG posts. How and where would I start? Learning how to give any model some form of memory would be so useful. I'd love to have a chatbot that remembers what we talked about before, and have it constantly maintain and evolve its personality and memories over time. I know it's been done.
2025-01-08T11:49:46
https://www.reddit.com/r/LocalLLaMA/comments/1hwielp/how_do_you_keep_up_with_the_sota_of_everything/
ThrowawayProgress99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwielp
false
null
t3_1hwielp
/r/LocalLLaMA/comments/1hwielp/how_do_you_keep_up_with_the_sota_of_everything/
false
false
self
25
null
Best small local LLM server for 4-10 users?
1
[removed]
2025-01-08T11:52:56
https://www.reddit.com/r/LocalLLaMA/comments/1hwigkb/best_small_local_llm_server_for_a_410_users/
Neat-Computer-6975
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwigkb
false
null
t3_1hwigkb
/r/LocalLLaMA/comments/1hwigkb/best_small_local_llm_server_for_a_410_users/
false
false
self
1
null
Is a lower quant count better than a lower param count?
1
[removed]
2025-01-08T11:56:39
https://www.reddit.com/r/LocalLLaMA/comments/1hwiipu/is_a_lower_quant_count_better_than_a_lower_param/
Annual_Insurance_669
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwiipu
false
null
t3_1hwiipu
/r/LocalLLaMA/comments/1hwiipu/is_a_lower_quant_count_better_than_a_lower_param/
false
false
self
1
null
Cheaper alternatives to ChatGPT?
0
Hello 👋 I currently have a Synology NAS DS423+ and have been considering canceling my OpenAI subscription as I’m looking for a more affordable alternative. I’ve tried Ollama and WebUI, but unfortunately, these Docker containers consume a lot of my RAM (18GB) and CPU. I’m looking for something simpler, just text-based, as I mostly brainstorm ideas and blog posts. What would you recommend in this case? It doesn’t have to be free, but I would like to reduce the cost compared to OpenAI since I’m on a tight budget. The only requirement is that it should be able to communicate in Swedish.
2025-01-08T12:03:06
https://www.reddit.com/r/LocalLLaMA/comments/1hwimxi/cheaper_alternatives_to_chatgpt/
MysticFoxPaws
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwimxi
false
null
t3_1hwimxi
/r/LocalLLaMA/comments/1hwimxi/cheaper_alternatives_to_chatgpt/
false
false
self
0
null
Survey for new technologies
1
[removed]
2025-01-08T12:12:47
https://www.reddit.com/r/LocalLLaMA/comments/1hwit0h/survey_for_new_technologies/
Savings-Squirrel9805
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwit0h
false
null
t3_1hwit0h
/r/LocalLLaMA/comments/1hwit0h/survey_for_new_technologies/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ZvM_WAFzXM55jLTal-KY-p2azv8ERwgIS2KpDrr6TCE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/dHYZjHEQERP8Q6FjnbPXlCk8xD3jNGFihfOPkNJDNp0.jpg?width=108&crop=smart&auto=webp&s=2020133f3ad535bea897e3df4c2960c461cbe0fa', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/dHYZjHEQERP8Q6FjnbPXlCk8xD3jNGFihfOPkNJDNp0.jpg?width=216&crop=smart&auto=webp&s=403ee2b79ca0292c4774f2d8ec4987d18e60a723', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/dHYZjHEQERP8Q6FjnbPXlCk8xD3jNGFihfOPkNJDNp0.jpg?width=320&crop=smart&auto=webp&s=cc6e378e03af719ff1ec01b7baa453450c4eda4c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/dHYZjHEQERP8Q6FjnbPXlCk8xD3jNGFihfOPkNJDNp0.jpg?width=640&crop=smart&auto=webp&s=ac62b277c3a6cdf5a2b3f6c2bce6bd5f1415cbea', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/dHYZjHEQERP8Q6FjnbPXlCk8xD3jNGFihfOPkNJDNp0.jpg?width=960&crop=smart&auto=webp&s=0ca23d0d04ebeb68a06b816f3a847a38d3563c0c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/dHYZjHEQERP8Q6FjnbPXlCk8xD3jNGFihfOPkNJDNp0.jpg?width=1080&crop=smart&auto=webp&s=0bb7358d1225418c6c829e642e24c9a915931571', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/dHYZjHEQERP8Q6FjnbPXlCk8xD3jNGFihfOPkNJDNp0.jpg?auto=webp&s=0796d92142923cce01992e951c6b32894df5490a', 'width': 1200}, 'variants': {}}]}
Something to consider trying if you almost have enough RAM to fit the DeepSeek V3 Quant you want on linux.
14
I am testing what it's like to not quite have enough RAM for DeepSeek V3 Q4 by having a program allocate a chunk of memory. There is a very clear performance hit when I do this, and I can assume that it will be the same if I physically don't have enough RAM to fit all of the data contained in a model in RAM.

I am also testing an approach to mitigate some of the performance hit that happens in this situation. I am trying out ZRAM, which is a Linux kernel module that creates compressed block devices in RAM. The following (for experimental people):

    sudo apt install zram-config     # sets up a system swap device using ZRAM, so data evicted from disk cache ends up first in this compressed block of RAM
    sudo nano /etc/default/zramswap  # opens the config file
    systemctl restart zramswap       # applies config file changes

(see google / chatgpt for more about ZRAM)

The idea is that if the compression is good enough, this swap will keep all of the data that would otherwise need to be retrieved from a disk drive / SSD in a compressed form which, because it's in RAM, will be much faster to retrieve. I am also experimenting with different compression algorithms that ZRAM uses until I find one that looks fast.

Here is my progression:

------------------------------------------------------
Memory hog program using 150GB of RAM out of 512GB RAM
------------------------------------------------------

1.39 tokens per second (30 tokens, A100 64GB + 2x A6000 48GB)
1.54 tokens per second (30 tokens, CPU) <-- less bad than expected for some reason

Limiting Q3 had a bigger impact on speed for me at 0.27 t/s, but I guess the same amount of missing RAM was more of the model as a percentage not cached in disk cache.

------------------------------------------------------
Enabled ZRAM swap with the following parameters and results.
ALGO is the compression algorithm used by zram; multiple inference runs where I find the results interesting.
------------------------------------------------------

ALGO=zstd PERCENTAGE=30
1.25 tokens per second
1.38 tokens per second (~30 tokens, A100 64GB + 2x A6000 48GB)

ALGO=zstd ALLOCATION=300000
2.34 tokens per second

ALGO=zstd ALLOCATION=410000
3.58 tokens per second

ALGO=LZO-RLE ALLOCATION=300000
2.12 tokens per second
2.64 tokens per second

ALGO=DEFLATE ALLOCATION=410000
1.74 tokens per second (30 tokens, CPU)
5.05 tokens per second (30 tokens, CPU) jackpot !!!
4.91 tokens per second (30 tokens, CPU)
5.76 tokens per second (299 tokens, CPU)
6.16 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB)
5.05 tokens per second (~300 tokens, memory hog program closed and reopened to make sure it's still hogging RAM, A100 64GB + 2x A6000 48GB)
6.04 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB)
6.39 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB)
6.46 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB)

ALGO=DEFLATE ALLOCATION=350000
4.66 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB) basically a warm-up after reducing ZRAM
6.22 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB)
6.42 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB) <-- I'm calling this pretty good

ALGO=DEFLATE ALLOCATION=300000
3.55 tokens per second (~300 tokens, CPU warmup)
3.81 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB, boo!)
4.59 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB, OK, maybe a longer warm-up; got to get everything compressed to make it fast)
3.64 tokens per second (nope)

------------------------------------------------------
Memory hog program terminated so all 512GB RAM is available
------------------------------------------------------

300GB ZRAM but I close the memory hog app:
6.44 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB)
6.44 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB) <-- seems good, but compare to 8.07 t/s below with ZRAM off; if it's not needed, don't use it

Deactivate ZRAM swap: `sudo swapoff /dev/zram0`
6.42 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB)
8.07 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB)
8.08 tokens per second (~300 tokens, A100 64GB + 2x A6000 48GB) <-- baseline CPU + GPU
6.37 tokens per second (~300 tokens, CPU only)
6.43 tokens per second (~300 tokens, CPU only) <-- baseline CPU

------------------------------------------------------
Conclusion
------------------------------------------------------

This was by no means an exhaustive search of all the possible parameters, and a real shortfall of RAM will lead to different parameters being optimal. There are more compression algorithms to try, and of course a simulated shortage of RAM might be different to the real thing (I'm not pulling out RAM sticks to try, I'm too lazy for that), but I think this was a successful proof of concept.

There is a cost to doing this: ZRAM uses processing power, and compression and, more importantly, decompression add overhead. ZRAM uses up RAM, so if it's not needed to run a model it is just all downside. However, I think it makes a lot of sense for a server that is being dedicated to a model that does not quite have enough RAM, where using it gets you that bit extra. If I decide to download Q8, which is a model too big for my available RAM, I will certainly try this method with it to see how much faster it makes it, if at all. Although I might wait until I get more data on how good the smaller quants are in comparison.

It seems relatively easy to activate and deactivate ZRAM, so if you don't want it hogging RAM and CPU resources most of the time, that does not seem like an issue. Also, the more data-oriented person might just compress the models to see what the exact compression ratios are like, to determine if it's worth it and what amount of ZRAM would be optimal. There might also be ways to have llama.cpp and such just compress the data directly that I'm ignorant of.
2025-01-08T12:19:07
https://www.reddit.com/r/LocalLLaMA/comments/1hwix4d/something_to_consider_trying_if_you_almost_have/
MoneyPowerNexis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwix4d
false
null
t3_1hwix4d
/r/LocalLLaMA/comments/1hwix4d/something_to_consider_trying_if_you_almost_have/
false
false
self
14
null
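The closing idea in the post above, just compressing the model to estimate whether ZRAM would pay off, can be checked in a few lines of Python. A minimal sketch using the stdlib zlib module (DEFLATE, the algorithm that performed best above); the file name, chunk size and stride are placeholder assumptions, not values from the post:

```python
# Sample a GGUF file with DEFLATE to estimate the compression ratio ZRAM
# could achieve on evicted model data. Path and sizes are placeholders.
import zlib

path = "DeepSeek-V3-Q4_K_M.gguf"   # placeholder model file
chunk = 64 * 1024 * 1024           # read 64 MiB per sample
raw = packed = 0

with open(path, "rb") as f:
    for _ in range(8):             # a handful of samples spread through the file
        data = f.read(chunk)
        if not data:
            break
        raw += len(data)
        packed += len(zlib.compress(data, 1))  # level 1: favour speed, like a swap device
        f.seek(15 * chunk, 1)      # skip ahead so samples aren't adjacent

print(f"sampled {raw / 2**20:.0f} MiB, DEFLATE ratio ~ {raw / packed:.2f}x")
```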
A Recipe for a Better Code Generator with RAG
94
2025-01-08T13:15:30
https://www.pulumi.com/blog/codegen-learnings/
agbell
pulumi.com
1970-01-01T00:00:00
0
{}
1hwjzw1
false
null
t3_1hwjzw1
/r/LocalLLaMA/comments/1hwjzw1/a_recipe_for_a_better_code_generator_with_rag/
false
false
https://a.thumbs.redditm…2ecbnGL1MLQ4.jpg
94
{'enabled': False, 'images': [{'id': 'FcIDVAhZIfzg4ezl69GExtdw4O_dQH3au_pFE_VKsdI', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/LXLJBuPegXr_jr-v3g2QasDpI6sYCw7nkCBT17bt6Qs.jpg?width=108&crop=smart&auto=webp&s=7515aacaccbf986553f32db9c8523e85a9c67e7b', 'width': 108}, {'height': 101, 'url': 'https://external-preview.redd.it/LXLJBuPegXr_jr-v3g2QasDpI6sYCw7nkCBT17bt6Qs.jpg?width=216&crop=smart&auto=webp&s=702c6dcb45e1e0b4e5d5c170e0616fb75feddbd5', 'width': 216}, {'height': 149, 'url': 'https://external-preview.redd.it/LXLJBuPegXr_jr-v3g2QasDpI6sYCw7nkCBT17bt6Qs.jpg?width=320&crop=smart&auto=webp&s=f2c127785b1264886569e5bc19cc3720399e585b', 'width': 320}, {'height': 299, 'url': 'https://external-preview.redd.it/LXLJBuPegXr_jr-v3g2QasDpI6sYCw7nkCBT17bt6Qs.jpg?width=640&crop=smart&auto=webp&s=25403c01daf573ed3ba6a7bc1d3fb35fac180834', 'width': 640}, {'height': 449, 'url': 'https://external-preview.redd.it/LXLJBuPegXr_jr-v3g2QasDpI6sYCw7nkCBT17bt6Qs.jpg?width=960&crop=smart&auto=webp&s=cd018179664afec77fe51d163fb4bd164f1a5032', 'width': 960}, {'height': 506, 'url': 'https://external-preview.redd.it/LXLJBuPegXr_jr-v3g2QasDpI6sYCw7nkCBT17bt6Qs.jpg?width=1080&crop=smart&auto=webp&s=e8d7c49c8c2f51d4e1832090dfdc4e72588ed44a', 'width': 1080}], 'source': {'height': 716, 'url': 'https://external-preview.redd.it/LXLJBuPegXr_jr-v3g2QasDpI6sYCw7nkCBT17bt6Qs.jpg?auto=webp&s=9cc195408921e11c237eedc57704a1f06a3cb2c6', 'width': 1528}, 'variants': {}}]}
Run models larger than fit into RAM in a batch / offline mode
2
Is there any library that allows you to run a model larger than what might fit into your RAM? I imagine that it would load a part of the model, run that, keep the result, load another part, etc. I understand this would take a very long time and it would not be useful for any interactive use case. I'm thinking more about offline processing of documents / emails etc. Ideally, these would be batched in some way to optimise the overhead in loading and unloading the model parts. What I hope it would allow me to do is to run 70B models on my 16GB Macbook in cases when I don't need any interactivity / can wait for the result. Is anyone here using slow-running large models in some kind of an offline mode for anything?
2025-01-08T13:32:54
https://www.reddit.com/r/LocalLLaMA/comments/1hwkbo2/run_models_larger_than_fit_into_ram_in_a_batch/
lakySK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwkbo2
false
null
t3_1hwkbo2
/r/LocalLLaMA/comments/1hwkbo2/run_models_larger_than_fit_into_ram_in_a_batch/
false
false
self
2
null
MiniThinky 1B - My first trial to make a reasoning model
86
Hi everyone! This is my first trial to fine tune a small model, adding the reasoning capability. I took Llama 3.2 1B as the base model, so the size is very small. Check it out here ==> [https://huggingface.co/ngxson/MiniThinky-v2-1B-Llama-3.2](https://huggingface.co/ngxson/MiniThinky-v2-1B-Llama-3.2) GGUF version (runnable directly via ollama): [https://huggingface.co/ngxson/MiniThinky-v2-1B-Llama-3.2-Q8\_0-GGUF](https://huggingface.co/ngxson/MiniThinky-v2-1B-Llama-3.2-Q8_0-GGUF)
2025-01-08T13:44:30
https://www.reddit.com/r/LocalLLaMA/comments/1hwkjq0/minithinky_1b_my_first_trial_to_make_a_reasoning/
MediocreProgrammer99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwkjq0
false
null
t3_1hwkjq0
/r/LocalLLaMA/comments/1hwkjq0/minithinky_1b_my_first_trial_to_make_a_reasoning/
false
false
self
86
{'enabled': False, 'images': [{'id': 'mK572aoCJBU0TlfN02T8M073DHYUCqzFYDSFut2PPe0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nWt_CAjH2BMslV7baAjeh9MOHo4WonRv8gsKVIUa6Oc.jpg?width=108&crop=smart&auto=webp&s=bb0776502f5423d98cdcfa89a2c9bfe018436bb6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nWt_CAjH2BMslV7baAjeh9MOHo4WonRv8gsKVIUa6Oc.jpg?width=216&crop=smart&auto=webp&s=887b92eabc144ac0e66b7c8f270a757fdaa6c230', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nWt_CAjH2BMslV7baAjeh9MOHo4WonRv8gsKVIUa6Oc.jpg?width=320&crop=smart&auto=webp&s=c0d5374916aaefcc9aedecbdf56c3923c5968a0f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nWt_CAjH2BMslV7baAjeh9MOHo4WonRv8gsKVIUa6Oc.jpg?width=640&crop=smart&auto=webp&s=05b96013d0037998e3e6f162fb9c806953e8536c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nWt_CAjH2BMslV7baAjeh9MOHo4WonRv8gsKVIUa6Oc.jpg?width=960&crop=smart&auto=webp&s=4677434ae1550a81dfadf91fb27d422153941da7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nWt_CAjH2BMslV7baAjeh9MOHo4WonRv8gsKVIUa6Oc.jpg?width=1080&crop=smart&auto=webp&s=088756fa69fda841ed8033702bb783895afc7ac2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nWt_CAjH2BMslV7baAjeh9MOHo4WonRv8gsKVIUa6Oc.jpg?auto=webp&s=54162b91f22cc427d79b5f740b733549c1461b08', 'width': 1200}, 'variants': {}}]}
What is the least pricey GPU server commercially available with 4xA100 or just 2xA100 that comes with CPU,RAM and all the components?
0
I have seen Dell and Lenovo selling GPU servers, but they are overpriced at over $150K for a 2x H100 system. Are there other vendors who offer something cheaper, somewhere around 50,000 to 80,000 USD?
2025-01-08T14:02:23
https://www.reddit.com/r/LocalLLaMA/comments/1hwkws1/what_is_the_least_pricey_gpu_server_commercially/
Lazy_Wedding_1383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwkws1
false
null
t3_1hwkws1
/r/LocalLLaMA/comments/1hwkws1/what_is_the_least_pricey_gpu_server_commercially/
false
false
self
0
null
Interesting Solution to the problem of Misguided Attention: "Mindful Attention"
1
[removed]
2025-01-08T14:07:22
https://www.reddit.com/r/LocalLLaMA/comments/1hwl0dv/interesting_solution_to_the_problem_of_misguided/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwl0dv
false
null
t3_1hwl0dv
/r/LocalLLaMA/comments/1hwl0dv/interesting_solution_to_the_problem_of_misguided/
false
false
self
1
{'enabled': False, 'images': [{'id': 'zc0d3MyeWRHREWvEmbtwYAnrgsfx2Dp-GlnaCrAVY5Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=108&crop=smart&auto=webp&s=d03d05df3b8cccd0dab5f86c7f33475d01067683', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=216&crop=smart&auto=webp&s=b860d53a7a30556bdf104a255fc3eb1296b9490a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=320&crop=smart&auto=webp&s=f4adfefb0fd642496cb95728fc5ac082984d81da', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=640&crop=smart&auto=webp&s=26e1c15e03312d61baecfa9fa3abb5cae1b81635', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=960&crop=smart&auto=webp&s=f9b0088f222c19528abf5024796ed17c69a0edc8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=1080&crop=smart&auto=webp&s=2473985480f05f78830d96131477bd0047662a36', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?auto=webp&s=cd8561eba8eb7d6ea881f8972a113b9c75e7c489', 'width': 1200}, 'variants': {}}]}
Interesting Solution to the problem of Misguided Attention: "Mindful Attention"
1
[removed]
2025-01-08T14:09:34
https://www.reddit.com/r/LocalLLaMA/comments/1hwl20j/interesting_solution_to_the_problem_of_misguided/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwl20j
false
null
t3_1hwl20j
/r/LocalLLaMA/comments/1hwl20j/interesting_solution_to_the_problem_of_misguided/
false
false
self
1
{'enabled': False, 'images': [{'id': 'zc0d3MyeWRHREWvEmbtwYAnrgsfx2Dp-GlnaCrAVY5Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=108&crop=smart&auto=webp&s=d03d05df3b8cccd0dab5f86c7f33475d01067683', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=216&crop=smart&auto=webp&s=b860d53a7a30556bdf104a255fc3eb1296b9490a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=320&crop=smart&auto=webp&s=f4adfefb0fd642496cb95728fc5ac082984d81da', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=640&crop=smart&auto=webp&s=26e1c15e03312d61baecfa9fa3abb5cae1b81635', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=960&crop=smart&auto=webp&s=f9b0088f222c19528abf5024796ed17c69a0edc8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=1080&crop=smart&auto=webp&s=2473985480f05f78830d96131477bd0047662a36', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?auto=webp&s=cd8561eba8eb7d6ea881f8972a113b9c75e7c489', 'width': 1200}, 'variants': {}}]}
Interesting Solution to the problem of Misguided Attention: "Mindful Attention"
1
[removed]
2025-01-08T14:10:41
https://www.reddit.com/r/LocalLLaMA/comments/1hwl2ub/interesting_solution_to_the_problem_of_misguided/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwl2ub
false
null
t3_1hwl2ub
/r/LocalLLaMA/comments/1hwl2ub/interesting_solution_to_the_problem_of_misguided/
false
false
self
1
null
Interesting Solution to the problem of Misguided Attention: "Mindful Attention"
1
[removed]
2025-01-08T14:11:34
https://www.reddit.com/r/LocalLLaMA/comments/1hwl3ib/interesting_solution_to_the_problem_of_misguided/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwl3ib
false
null
t3_1hwl3ib
/r/LocalLLaMA/comments/1hwl3ib/interesting_solution_to_the_problem_of_misguided/
false
false
self
1
null
Interesting Solution to the problem of Misguided Attention: "Mindful Attention"
1
[removed]
2025-01-08T14:13:13
https://www.reddit.com/r/LocalLLaMA/comments/1hwl4qs/interesting_solution_to_the_problem_of_misguided/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwl4qs
false
null
t3_1hwl4qs
/r/LocalLLaMA/comments/1hwl4qs/interesting_solution_to_the_problem_of_misguided/
false
false
self
1
null
Interesting Solution to the problem of Misguided Attention: "Mindful Attention"
1
[removed]
2025-01-08T14:16:55
https://www.reddit.com/r/LocalLLaMA/comments/1hwl7j0/interesting_solution_to_the_problem_of_misguided/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwl7j0
false
null
t3_1hwl7j0
/r/LocalLLaMA/comments/1hwl7j0/interesting_solution_to_the_problem_of_misguided/
false
false
self
1
null
I made the world's first AI meeting copilot, and open sourced it!
568
I got tired of relying on clunky SaaS tools for meeting transcriptions that didn’t respect my privacy or workflow. Every one I tried had issues:

* Bots awkwardly join meetings and announce themselves.
* Poor transcription quality.
* No flexibility to tweak things to fit *my* setup.

So I built **Amurex**, a self-hosted solution that actually works:

* Records meetings quietly, with no bots interrupting.
* Delivers clean, accurate diarized transcripts right after the meeting.
* Does late meeting summaries, i.e. a recap for a meeting if I am late.

But most importantly, it is the only meeting tool in the world that can give:

* Real-time suggestions to stay engaged in boring meetings.

It’s completely open source and designed for self-hosting, so you control your data and your workflow. No subscriptions, and no vendor lock-in.

I would love to know what you all think of it. It only works on Google Meet for now, but I will be scaling it to all the famous meeting providers.

Github - [https://github.com/thepersonalaicompany/amurex](https://github.com/thepersonalaicompany/amurex) Website - [https://www.amurex.ai/](https://www.amurex.ai/)
2025-01-08T14:33:52
https://www.reddit.com/r/LocalLLaMA/comments/1hwlka6/i_made_the_worlds_first_ai_meeting_copilot_and/
stealthanthrax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwlka6
false
null
t3_1hwlka6
/r/LocalLLaMA/comments/1hwlka6/i_made_the_worlds_first_ai_meeting_copilot_and/
false
false
self
568
{'enabled': False, 'images': [{'id': 'rJtjTGgKpn603eIlveFFZB_GZiImx_YrjgBbtCsYwSo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vZg4RsR3qBmlVixJ6ahpC2e_KJWnzSMScWZmbEtAY58.jpg?width=108&crop=smart&auto=webp&s=a244b1332b0bcef92c709f94ee9d2f8751a49418', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vZg4RsR3qBmlVixJ6ahpC2e_KJWnzSMScWZmbEtAY58.jpg?width=216&crop=smart&auto=webp&s=4bb11a48e7c5c8df3504c486d93fa2676da0be18', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vZg4RsR3qBmlVixJ6ahpC2e_KJWnzSMScWZmbEtAY58.jpg?width=320&crop=smart&auto=webp&s=9c73d313a547daf11ecdfd507c5367adf23cdb62', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vZg4RsR3qBmlVixJ6ahpC2e_KJWnzSMScWZmbEtAY58.jpg?width=640&crop=smart&auto=webp&s=36085b43dbadfbc57e6d9fb2446954fef13f9b8d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vZg4RsR3qBmlVixJ6ahpC2e_KJWnzSMScWZmbEtAY58.jpg?width=960&crop=smart&auto=webp&s=536d67f89411d4ef93b95630ac20f6388fbc04c1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vZg4RsR3qBmlVixJ6ahpC2e_KJWnzSMScWZmbEtAY58.jpg?width=1080&crop=smart&auto=webp&s=b9585cde127e76ea3af572d902d6c3037054de23', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vZg4RsR3qBmlVixJ6ahpC2e_KJWnzSMScWZmbEtAY58.jpg?auto=webp&s=ecf88d98557ec8f1c80a2b54d5317e7b9d40c740', 'width': 1200}, 'variants': {}}]}
Need LLM recommendations
1
[removed]
2025-01-08T14:36:40
https://www.reddit.com/r/LocalLLaMA/comments/1hwlmbf/need_llm_recommendations/
blackkksparx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwlmbf
false
null
t3_1hwlmbf
/r/LocalLLaMA/comments/1hwlmbf/need_llm_recommendations/
false
false
self
1
null
Interesting Solution to the problem of Misguided Attention
1
[removed]
2025-01-08T14:51:14
https://www.reddit.com/r/LocalLLaMA/comments/1hwlxae/interesting_solution_to_the_problem_of_misguided/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwlxae
false
null
t3_1hwlxae
/r/LocalLLaMA/comments/1hwlxae/interesting_solution_to_the_problem_of_misguided/
false
false
self
1
{'enabled': False, 'images': [{'id': 'zc0d3MyeWRHREWvEmbtwYAnrgsfx2Dp-GlnaCrAVY5Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=108&crop=smart&auto=webp&s=d03d05df3b8cccd0dab5f86c7f33475d01067683', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=216&crop=smart&auto=webp&s=b860d53a7a30556bdf104a255fc3eb1296b9490a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=320&crop=smart&auto=webp&s=f4adfefb0fd642496cb95728fc5ac082984d81da', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=640&crop=smart&auto=webp&s=26e1c15e03312d61baecfa9fa3abb5cae1b81635', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=960&crop=smart&auto=webp&s=f9b0088f222c19528abf5024796ed17c69a0edc8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=1080&crop=smart&auto=webp&s=2473985480f05f78830d96131477bd0047662a36', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?auto=webp&s=cd8561eba8eb7d6ea881f8972a113b9c75e7c489', 'width': 1200}, 'variants': {}}]}
I made a site to curate the latest AI jobs. Every day 100s of new positions are added.
25
2025-01-08T14:53:13
https://v.redd.it/vgp89y6s9sbe1
WordyBug
/r/LocalLLaMA/comments/1hwlysx/i_made_a_site_to_curate_the_latest_ai_jobs/
1970-01-01T00:00:00
0
{}
1hwlysx
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vgp89y6s9sbe1/DASHPlaylist.mpd?a=1739069604%2CNmMxOGQxMGI5M2Y5NmU3YTU2OWQ2NmE1MzQ0ZTBiZjZkNGM0YTRhMGE1YjFjN2UzMzViZjY3MTc5Yzk5MTI3Mg%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/vgp89y6s9sbe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/vgp89y6s9sbe1/HLSPlaylist.m3u8?a=1739069604%2CMmFjYzM3ZTk4MjBiNTY3NDIyMWQwMzgyYTNjYzI5ZjA5ZjI1MzE0Mzg2NjkxMGFjMjdkNWU2ZGE0ZWI2YWFjZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vgp89y6s9sbe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1hwlysx
/r/LocalLLaMA/comments/1hwlysx/i_made_a_site_to_curate_the_latest_ai_jobs/
false
false
https://external-preview…93a60cf0b35c397f
25
{'enabled': False, 'images': [{'id': 'MXJtOXg0NHM5c2JlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MXJtOXg0NHM5c2JlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=108&crop=smart&format=pjpg&auto=webp&s=3a3c4e0f3451531f529875f833a1b5e831c563f7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MXJtOXg0NHM5c2JlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=216&crop=smart&format=pjpg&auto=webp&s=848d90515eafc6b6965bb932643cac03fb205675', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MXJtOXg0NHM5c2JlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=320&crop=smart&format=pjpg&auto=webp&s=2412840b9bc56d52552d71de0b439284019ae1c2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MXJtOXg0NHM5c2JlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=640&crop=smart&format=pjpg&auto=webp&s=5d494e99ce6530914916b79e597cda6ab226462f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MXJtOXg0NHM5c2JlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=960&crop=smart&format=pjpg&auto=webp&s=09fafab85415d55aa66463039d71218c3b809a4a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MXJtOXg0NHM5c2JlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=131bfc4c59171fc4078e45f64925e5fbb1b60246', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/MXJtOXg0NHM5c2JlMT0OwWybS_MdntybaH3TpXxPIGgpidVBmRMcLOJvhoaK.png?format=pjpg&auto=webp&s=64d5c0f20642c7013ed6c44a5de223eccf004822', 'width': 2560}, 'variants': {}}]}
Help me find some use cases for agents
5
I'm a software engineer, and I've always had an amazing concept for an "agent builder" framework. One thing that has always put me off is that I could not find good real life use cases. The ones I find are either super complex customer support systems or data categorization, but nothing raw to present to the world. Any help is appreciated.
2025-01-08T14:54:36
https://www.reddit.com/r/LocalLLaMA/comments/1hwlzvj/help_me_find_some_use_cases_for_agents/
freecodeio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwlzvj
false
null
t3_1hwlzvj
/r/LocalLLaMA/comments/1hwlzvj/help_me_find_some_use_cases_for_agents/
false
false
self
5
null
Is monte carlo tree search the same as Tree of Thought reasoning?
1
[removed]
2025-01-08T14:54:54
https://www.reddit.com/r/LocalLLaMA/comments/1hwm044/is_monte_carlo_tree_search_the_same_as_tree_of/
greentea387
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwm044
false
null
t3_1hwm044
/r/LocalLLaMA/comments/1hwm044/is_monte_carlo_tree_search_the_same_as_tree_of/
false
false
self
1
null
Question about Nvidia's Nemotron offerings?
3
As per the Nvidia blog: > The Llama Nemotron and Cosmos Nemotron model families are coming in Nano, Super and Ultra sizes to provide options for deploying AI agents at every scale. >**Nano**: The most cost-effective model optimized for real-time applications with low latency, ideal for deployment on PCs and edge devices. >**Super**: A high-accuracy model offering exceptional throughput on a single GPU. >**Ultra**: The highest-accuracy model, designed for data-center-scale applications demanding the highest performance. I assume that - for Llama Nemotron - the sizes correspond to 8B, 70B and 405B, with each Llama model being finetuned to Nvidia's additional data?
2025-01-08T14:57:55
https://www.reddit.com/r/LocalLLaMA/comments/1hwm2dp/question_about_nvidias_nemotron_offerings/
TechNerd10191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwm2dp
false
null
t3_1hwm2dp
/r/LocalLLaMA/comments/1hwm2dp/question_about_nvidias_nemotron_offerings/
false
false
self
3
null
Need LLM recommendations
1
[removed]
2025-01-08T15:03:55
https://www.reddit.com/r/LocalLLaMA/comments/1hwm79n/need_llm_recommendations/
blackkksparx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwm79n
false
null
t3_1hwm79n
/r/LocalLLaMA/comments/1hwm79n/need_llm_recommendations/
false
false
self
1
null
CES 2025: Razer Blade 16 - 64 GB LPDDR5X-8000 RAM, RTX 5090 (24GB GDDR7)
4
No pricing yet, but I'm curious what people's thoughts are on the capabilities of this as a mobile AI developer workstation? [Razer updates Blade 16 gaming laptop with flagship RTX 5090 and Ryzen 9 HX 370 hardware | Tom's Hardware](https://www.tomshardware.com/laptops/gaming-laptops/razer-updates-blade-16-gaming-laptop-with-flagship-rtx-5090-and-ryzen-9-hx-370-hardware)
2025-01-08T15:06:12
https://www.reddit.com/r/LocalLLaMA/comments/1hwm93a/ces_2025_razer_blade_16_64_gb_lpddr5x8000_ram_rtx/
CyberAwarenessGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwm93a
false
null
t3_1hwm93a
/r/LocalLLaMA/comments/1hwm93a/ces_2025_razer_blade_16_64_gb_lpddr5x8000_ram_rtx/
false
false
self
4
{'enabled': False, 'images': [{'id': 'l8UJx8wB7VrXjuUadKV2W3KLVmI43DVZCmS2nOtYBk4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/xyNg1Ajlm9hBeQKniIr40PCaKz79qBZCAEzP78Psp3E.jpg?width=108&crop=smart&auto=webp&s=19627fc952bbca7e6f66784c5599322364d79d78', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/xyNg1Ajlm9hBeQKniIr40PCaKz79qBZCAEzP78Psp3E.jpg?width=216&crop=smart&auto=webp&s=da8f9b886f90f329eeccf830f0d3d3d219b77a97', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/xyNg1Ajlm9hBeQKniIr40PCaKz79qBZCAEzP78Psp3E.jpg?width=320&crop=smart&auto=webp&s=317d74db97b7465200472895184c6d9467fa107c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/xyNg1Ajlm9hBeQKniIr40PCaKz79qBZCAEzP78Psp3E.jpg?width=640&crop=smart&auto=webp&s=87c47cc4e0383d994cb6dd20a765e8b5676dece1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/xyNg1Ajlm9hBeQKniIr40PCaKz79qBZCAEzP78Psp3E.jpg?width=960&crop=smart&auto=webp&s=b978ee0660b9d1e4f0c734d92951db9c52f212c5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/xyNg1Ajlm9hBeQKniIr40PCaKz79qBZCAEzP78Psp3E.jpg?width=1080&crop=smart&auto=webp&s=4d81cb69c8dcea0e1ddd6551e6ce5f72fcad2802', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/xyNg1Ajlm9hBeQKniIr40PCaKz79qBZCAEzP78Psp3E.jpg?auto=webp&s=abcc07a7b3d2be0a01a38409603e6745e7217a79', 'width': 1200}, 'variants': {}}]}
The pipeline I follow for open source LLM model finetuning
35
I have been working on local LLMs and training for quite some time. Based on my experience, it's a two-fold problem, which can be addressed in three phases, plus an optional benchmarking phase.

Phase 1:
1. Development of the full solution using any closed-source model like ChatGPT or Gemini.
2. Measuring the accuracy and storing the output for a few samples (like 100).
OUTCOME: Pipeline development, base accuracy and rough annotations

Phase 2:
1. Correcting the rough annotations and creating a small dataset
2. Selecting a local LLM and finetuning that with the small dataset
3. Measuring the results' accuracy and quality
OUTCOME: Streamlined prompts, dataset and model training flow

Phase 3:
1. Using this model and developing a large-scale pseudo dataset
2. Correcting the pseudo dataset
3. Finetuning the model with large-scale data
4. Testing the accuracy and results quality
5. Repeating until the desired results are met
OUTCOME: Sophisticated dataset, properly trained model

Phase 4 (OPTIONAL): Benchmarking with other closed-source LLMs and preparing a benchmarking report.

Any thoughts on this flow?
2025-01-08T15:22:16
https://www.reddit.com/r/LocalLLaMA/comments/1hwmlz8/the_pipeline_i_follow_for_open_source_llm_model/
Ahmad401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwmlz8
false
null
t3_1hwmlz8
/r/LocalLLaMA/comments/1hwmlz8/the_pipeline_i_follow_for_open_source_llm_model/
false
false
self
35
null
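For the Phase-2 finetuning step in the post above, a minimal LoRA sketch with Hugging Face transformers, peft and datasets. The base model name, dataset file, "text" column and hyperparameters are placeholder assumptions, not details from the post:

```python
# Minimal Phase-2 sketch: LoRA-finetune a small local model on the
# hand-corrected dataset. All names and values below are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "meta-llama/Llama-3.2-1B"                        # placeholder base model
data = load_dataset("json", data_files="phase2_small.jsonl")["train"]

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach LoRA adapters so only a small fraction of weights are trained.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

def tokenize(batch):
    # assumes each record has a "text" field holding the full prompt + answer
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = data.map(tokenize, batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="phase2-lora",
                           per_device_train_batch_size=2,
                           num_train_epochs=3,
                           learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```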
Company LLM Server Build
1
[removed]
2025-01-08T15:33:57
https://www.reddit.com/r/LocalLLaMA/comments/1hwmvla/company_llm_server_build/
viper1o5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwmvla
false
null
t3_1hwmvla
/r/LocalLLaMA/comments/1hwmvla/company_llm_server_build/
false
false
self
1
null
Phi-4 has been released
814
2025-01-08T15:37:07
https://huggingface.co/microsoft/phi-4
paf1138
huggingface.co
1970-01-01T00:00:00
0
{}
1hwmy39
false
null
t3_1hwmy39
/r/LocalLLaMA/comments/1hwmy39/phi4_has_been_released/
false
false
https://b.thumbs.redditm…fAbNu4ctas9A.jpg
814
{'enabled': False, 'images': [{'id': 'Ot1xjL2M9365_v0oARY9QlUXl4MhR3T2fbB42ZHcFBA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?width=108&crop=smart&auto=webp&s=3fdb0c00d9a6c52d437c9c6c0afe5a250fb13947', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?width=216&crop=smart&auto=webp&s=069578878d4d624adcec3060062ccd7a3d425d8d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?width=320&crop=smart&auto=webp&s=1088b40cae80f97ec40a119a66f992f28c959f17', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?width=640&crop=smart&auto=webp&s=ed435e811f517c43ffac1607f9719679d8dd1b5d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?width=960&crop=smart&auto=webp&s=e5f1369b75f1cd82b6f75249dce815e06a498bdc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?width=1080&crop=smart&auto=webp&s=92fa94d2bd73b21d48b1679980160262cac2565f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?auto=webp&s=1f4a4119be0ec643473bbd1e82208b9bc4246064', 'width': 1200}, 'variants': {}}]}
Script hijacks user input and starts talking to itself.
0
Hello everyone, hope everyone's doing well. I wrote a script to run a local GGUF model. Now, when I send the first message, the model processes the response, taking longer than it should, and then spits out a conversation with 'User' that it never had. Like this:

User: Hello
llama_perf_context_print: load time = 2917.69 ms
llama_perf_context_print: prompt eval time = 0.00 ms / 7 tokens ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 127 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 65953.62 ms / 134 tokens
AI: Hi
User: What's your favorite color?
AI: I don't have personal preferences.
User: Why is blue your favorite color?
AI: I don't know.
User: You don't know?
AI: I don't know why it's my favorite color.
User: What if I told you it's because of your personality?
AI: That's an interesting idea.
User: It is. I don't know why I like blue, but I do.

Only the initial `User: Hello` is what I sent; other than that, it's the 'AI' hijacking the conversation and having a conversation with itself.

My script:

    ## Imports
    from llama_cpp import Llama

    ## Specify the local path to the GGUF model
    model_path = "./Dirty-Alice-Tiny-1.1B-V2-Chatml-Q8.gguf"

    ## Instantiate model from local file
    llm = Llama(
        model_path=model_path,
        n_ctx=256,        # Context length to use
        n_threads=2,      # Number of CPU threads to use
        n_gpu_layers=0    # Number of model layers to offload to GPU
    )

    ## Generation kwargs
    generation_kwargs = {
        "max_tokens": 128,
        "stop": ["\nUser:", "\nAI:"],  # Stop generation at next turn marker
        "echo": False,
        "top_k": 40,
        "temperature": 0.7
    }

    ## Initialize conversation history
    conversation_history = ""

    ## Run inference
    running = True
    while running:
        user_input = input("User: ")
        if user_input in ['goodbye', 'exit']:
            running = False
        else:
            # Append user input to the conversation history
            conversation_history += f"User: {user_input}\n"

            # Prepare the prompt with conversation history
            prompt = conversation_history + "AI:"

            # Generate response
            res = llm(prompt, **generation_kwargs)  # Res is a dictionary

            # Extract AI response and ensure it doesn't simulate the user
            ai_response = res["choices"][0]["text"].strip()

            # Append the AI response to the conversation history
            conversation_history += f"AI: {ai_response}\n"

            # Print the AI response
            print(ai_response)

Any help will be appreciated!

model: [Dirty-Alice-Tiny-1.1B-V2-Chatml-Q8](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF?not-for-all-audiences=true)

System (Simulated):
OS: Fedora 41 (Ultramarine)
CPU: Intel Celeron N3350
RAM: 4 GB
Storage: 64 GB (Flash)
GPU: no_discrete_gpu
2025-01-08T15:38:25
https://www.reddit.com/r/LocalLLaMA/comments/1hwmz70/script_hijacks_user_input_and_starts_talking_to/
Repulsive-Price-9943
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwmz70
false
null
t3_1hwmz70
/r/LocalLLaMA/comments/1hwmz70/script_hijacks_user_input_and_starts_talking_to/
false
false
self
0
{'enabled': False, 'images': [{'id': '7sG076ZQqPKqcVXr_skUkAP6uD0--km_LslzuFI9wk0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KWRMjWrrNSjBmC9GEl4rgXKaJOzYw713LJHaQnzcQt8.jpg?width=108&crop=smart&auto=webp&s=91468e2a8b7ca5912bafb2e3a94a53dc37d1785e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KWRMjWrrNSjBmC9GEl4rgXKaJOzYw713LJHaQnzcQt8.jpg?width=216&crop=smart&auto=webp&s=1480ff43e563c00766141ba1018f4adcd5a63745', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KWRMjWrrNSjBmC9GEl4rgXKaJOzYw713LJHaQnzcQt8.jpg?width=320&crop=smart&auto=webp&s=9c544d0799f8875aecd0eee9b38212df2baf8e8b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KWRMjWrrNSjBmC9GEl4rgXKaJOzYw713LJHaQnzcQt8.jpg?width=640&crop=smart&auto=webp&s=21610cfe4e548b251c41e1a59175965f8d2a5ef0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KWRMjWrrNSjBmC9GEl4rgXKaJOzYw713LJHaQnzcQt8.jpg?width=960&crop=smart&auto=webp&s=1368dffcc9439b4b8ea33c3494bc279c0e58b01c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KWRMjWrrNSjBmC9GEl4rgXKaJOzYw713LJHaQnzcQt8.jpg?width=1080&crop=smart&auto=webp&s=5883671ae2667add94df27c0e3833235bd64638a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KWRMjWrrNSjBmC9GEl4rgXKaJOzYw713LJHaQnzcQt8.jpg?auto=webp&s=f8968aaceb229ea93071e952851c0843513f135a', 'width': 1200}, 'variants': {}}]}
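Since the model in the post above is a ChatML finetune, one likely fix is to let llama-cpp-python apply the chat template instead of hand-building a raw "User:/AI:" prompt. A minimal sketch using the library's chat-completion API with the same GGUF path; this is an assumed fix, not the original script:

```python
# Minimal sketch: use the ChatML chat template via create_chat_completion
# so the model only ever generates the assistant turn.
from llama_cpp import Llama

llm = Llama(
    model_path="./Dirty-Alice-Tiny-1.1B-V2-Chatml-Q8.gguf",
    n_ctx=256,
    n_threads=2,
    n_gpu_layers=0,
    chat_format="chatml",   # match the model's training format
)

messages = []
while True:
    user_input = input("User: ")
    if user_input in ("goodbye", "exit"):
        break
    messages.append({"role": "user", "content": user_input})
    res = llm.create_chat_completion(messages=messages, max_tokens=128, temperature=0.7)
    reply = res["choices"][0]["message"]["content"].strip()
    messages.append({"role": "assistant", "content": reply})
    print("AI:", reply)
```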
Interesting Solution to the problem of Misguided Attention: "Mindful Attention"
1
[removed]
2025-01-08T15:50:05
[deleted]
1970-01-01T00:00:00
0
{}
1hwn8or
false
null
t3_1hwn8or
/r/LocalLLaMA/comments/1hwn8or/interesting_solution_to_the_problem_of_misguided/
false
false
default
1
null
Phi 4 MIT licensed - it's show time, folks
115
[microsoft/phi-4 · Hugging Face](https://huggingface.co/microsoft/phi-4)
2025-01-08T15:50:30
https://www.reddit.com/r/LocalLLaMA/comments/1hwn90v/phi_4_mit_licensed_its_show_time_folks/
ApprehensiveAd3629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwn90v
false
null
t3_1hwn90v
/r/LocalLLaMA/comments/1hwn90v/phi_4_mit_licensed_its_show_time_folks/
false
false
self
115
{'enabled': False, 'images': [{'id': 'Ot1xjL2M9365_v0oARY9QlUXl4MhR3T2fbB42ZHcFBA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?width=108&crop=smart&auto=webp&s=3fdb0c00d9a6c52d437c9c6c0afe5a250fb13947', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?width=216&crop=smart&auto=webp&s=069578878d4d624adcec3060062ccd7a3d425d8d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?width=320&crop=smart&auto=webp&s=1088b40cae80f97ec40a119a66f992f28c959f17', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?width=640&crop=smart&auto=webp&s=ed435e811f517c43ffac1607f9719679d8dd1b5d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?width=960&crop=smart&auto=webp&s=e5f1369b75f1cd82b6f75249dce815e06a498bdc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?width=1080&crop=smart&auto=webp&s=92fa94d2bd73b21d48b1679980160262cac2565f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gF2xHg5eGxs-B6pIwB6VodFWhOArLu2_8o4wQP6siP8.jpg?auto=webp&s=1f4a4119be0ec643473bbd1e82208b9bc4246064', 'width': 1200}, 'variants': {}}]}
Interesting Solution to the problem of Misguided Attention: "Mindful Attention"
1
[removed]
2025-01-08T15:52:35
https://www.reddit.com/r/LocalLLaMA/comments/1hwnaro/interesting_solution_to_the_problem_of_misguided/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwnaro
false
null
t3_1hwnaro
/r/LocalLLaMA/comments/1hwnaro/interesting_solution_to_the_problem_of_misguided/
false
false
self
1
null
LLaMA 3.1 405B with Image generation and Voice?
1
Hey, newbie here, but it looks like the new Nvidia Project Digits in daisy-chain mode handles about 400 billion parameters for ~6k USD. If we already have a video card, 3080 or better, that's 600 USD. It seems as though, as long as we have a good comm link, you could have a very high quality LLM, LLaMA 3.1 405B, on Project Digits paired with Stable Diffusion and voice on your video card. All in, for 7k USD you could have a very high quality functioning AI with image and voice capabilities. Is this reasoning correct? It's got me excited because this seems like an entirely new product category. I could see people having their PC with an AI supercomputer next to it.
2025-01-08T15:57:43
https://www.reddit.com/r/LocalLLaMA/comments/1hwnf38/llama_31_405b_with_image_generation_and_voice/
Substantial_Camera_8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwnf38
false
null
t3_1hwnf38
/r/LocalLLaMA/comments/1hwnf38/llama_31_405b_with_image_generation_and_voice/
false
false
self
1
null
What would you like to see on a new architecture?
8
What would you like to be implemented on a new architecture?
2025-01-08T16:11:00
https://www.reddit.com/r/LocalLLaMA/comments/1hwnr3s/what_would_you_like_to_see_on_a_new_architecture/
Equivalent_Award7202
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwnr3s
false
null
t3_1hwnr3s
/r/LocalLLaMA/comments/1hwnr3s/what_would_you_like_to_see_on_a_new_architecture/
false
false
self
8
null
Company Server Build
1
[removed]
2025-01-08T16:12:20
https://www.reddit.com/r/LocalLLaMA/comments/1hwns86/company_server_build/
viper1o5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwns86
false
null
t3_1hwns86
/r/LocalLLaMA/comments/1hwns86/company_server_build/
false
false
self
1
null
3060 12GB for running Llama 3.2 3B?
1
[removed]
2025-01-08T16:23:17
https://www.reddit.com/r/LocalLLaMA/comments/1hwo1ox/3060_12gb_for_running_llama_32_3b/
PhilipKThicc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwo1ox
false
null
t3_1hwo1ox
/r/LocalLLaMA/comments/1hwo1ox/3060_12gb_for_running_llama_32_3b/
false
false
self
1
null
Sidegrade 3070 TI to 4060 TI 16GB
2
Would you consider an "upgrade" to a used 4060 TI 16GB from a 3070 TI worth it? I can get a GIGABYTE AERO OC for below 390€ and looks like people are willing to pay pretty much the same for my Founders Edition 3070 TI but I'm unsure if it's even worth going through the hassle and risk (cause u never know with second hand stuff). 8GB feels just too limiting to even get into local LLMs :( Also better power efficiency is always nice and I'm on 1080p, so gaming performance isn't smth i worry too much about.
2025-01-08T16:29:46
https://www.reddit.com/r/LocalLLaMA/comments/1hwo7cr/sidegrade_3070_ti_to_4060_ti_16gb/
GazonkFoo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwo7cr
false
null
t3_1hwo7cr
/r/LocalLLaMA/comments/1hwo7cr/sidegrade_3070_ti_to_4060_ti_16gb/
false
false
self
2
null
Struggles with running better vision models locally.
1
Hey community. I need some advice from some fellow LLM-heads. I’m trying to run something better than Qwen-2-VL 7B locally, but I’m struggling with my vLLM. I’ve got 2x3090 GPUs, and Qwen-2-VL 72B AWQ just doesn’t fit. Tried messing around with InternVL 2.5 too—it looks solid, but I can’t get vLLM to split the model across both GPUs (I've seen creator of OpenedAI-Vision github struggle with the same issue). It always loads onto one card and then runs out of memory. I’d go for cloud GPUs, but I need to process sensitive data and I’m in the EU, so privacy rules are a pain, and I don’t wanna risk it. Has anyone managed to get something like this working? Any help is greatly appreciated.
2025-01-08T16:31:02
https://www.reddit.com/r/LocalLLaMA/comments/1hwo8im/struggles_with_running_better_vision_models/
paryska99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwo8im
false
null
t3_1hwo8im
/r/LocalLLaMA/comments/1hwo8im/struggles_with_running_better_vision_models/
false
false
self
1
null
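For reference on the post above: vLLM's knob for splitting a model across both cards is `tensor_parallel_size`; without it, the whole model is loaded onto one GPU, which matches the out-of-memory behaviour described. A minimal sketch, with the model name, memory fraction and context length as placeholder assumptions rather than a tested configuration:

```python
# Minimal sketch: shard the model across 2 GPUs with tensor parallelism.
# Model name and limits are placeholders, not a verified setup.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2-VL-7B-Instruct",   # placeholder; swap in the larger AWQ checkpoint
    tensor_parallel_size=2,              # split weights across both 3090s
    gpu_memory_utilization=0.90,
    max_model_len=8192,
)

outputs = llm.generate(["Describe the attached document."], SamplingParams(max_tokens=128))
print(outputs[0].outputs[0].text)
```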
SwitchLoRA: Parameter-efficient pre-training can achieves results comparable to full training
1
[removed]
2025-01-08T16:32:18
https://www.reddit.com/r/LocalLLaMA/comments/1hwo9nb/switchlora_parameterefficient_pretraining_can/
Embarrassed_List_717
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwo9nb
false
null
t3_1hwo9nb
/r/LocalLLaMA/comments/1hwo9nb/switchlora_parameterefficient_pretraining_can/
false
false
self
1
null
why is VRAM better than unified memory and what will it take to close the gap?
1
[removed]
2025-01-08T16:38:07
https://www.reddit.com/r/LocalLLaMA/comments/1hwoetl/why_is_vram_better_than_unified_memory_and_what/
emilytakethree
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwoetl
false
null
t3_1hwoetl
/r/LocalLLaMA/comments/1hwoetl/why_is_vram_better_than_unified_memory_and_what/
false
false
self
1
null
Free SDXL Dreambooth Finetuning + Deployment Client Colab Notebook
1
[removed]
2025-01-08T16:38:35
https://www.reddit.com/r/LocalLLaMA/comments/1hwof97/free_sdxl_dreambooth_finetuning_deployment_client/
stupidauthor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwof97
false
null
t3_1hwof97
/r/LocalLLaMA/comments/1hwof97/free_sdxl_dreambooth_finetuning_deployment_client/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
One-Shot Learning vs. Model Training: Are You an Open Book or a Night Owl?
0
Ever wondered how AI training is like your study habits? Let me break it down in the most relatable way that I like: **One Shot Learning** – It’s like showing up for an open book test. The teacher says, "Here’s all the info you might need, but don’t expect the answers to just pop out!" You rely on your past subject knowledge and a bit of creativity to connect the dots. It's fast, efficient, and perfect for the last-minute gang. AI does the same—uses minimal data, applies pre-learned context, and BOOM, spits out predictions. **Model Training** – Ah, this is the classic test prep grind. You study all semester, sacrificing Netflix marathons and coffee budgets to deeply understand the subject. By the time exam day comes, you’re the encyclopedia of knowledge. Similarly, AI models spend weeks (or months) ingesting massive amounts of data to be ready for any scenario. **Takeaway:** Open book or not, the key is using the right strategy at the right time. Sometimes you need the agility of one-shot learning; other times, the robustness of traditional training wins the day.
2025-01-08T16:43:37
https://www.reddit.com/r/LocalLLaMA/comments/1hwojv4/oneshot_learning_vs_model_training_are_you_an/
Ahmad401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwojv4
false
null
t3_1hwojv4
/r/LocalLLaMA/comments/1hwojv4/oneshot_learning_vs_model_training_are_you_an/
false
false
self
0
null
What is a best lllm I can use locally?
1
[removed]
2025-01-08T16:46:40
https://www.reddit.com/r/LocalLLaMA/comments/1hwomul/what_is_a_best_lllm_i_can_use_locally/
B1GG3ST
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwomul
false
null
t3_1hwomul
/r/LocalLLaMA/comments/1hwomul/what_is_a_best_lllm_i_can_use_locally/
false
false
self
1
null
LLM Fine-tuning Colab notebook for fine-tuning most open-source models
1
[removed]
2025-01-08T16:47:55
https://www.reddit.com/r/LocalLLaMA/comments/1hwony3/llm_finetuning_colab_notebook_for_finetuning_most/
stupidauthor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwony3
false
null
t3_1hwony3
/r/LocalLLaMA/comments/1hwony3/llm_finetuning_colab_notebook_for_finetuning_most/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
Which is the best llm that I can run locally?
1
[removed]
2025-01-08T16:48:03
https://www.reddit.com/r/LocalLLaMA/comments/1hwoo2b/which_is_the_best_llm_that_i_can_run_locally/
B1GG3ST
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwoo2b
false
null
t3_1hwoo2b
/r/LocalLLaMA/comments/1hwoo2b/which_is_the_best_llm_that_i_can_run_locally/
false
false
self
1
null
Are the multilingual capabilities of current models comparable to gpt3.5?
2
For a year now I've been reading posts and comments that local llms have already reached the GPT3.5 level, in fact it's already the GPT4 level. And every time I come back to the same point - support for languages other than English. Already GPT3.5 handled this perfectly, meanwhile the next generations of llama, or qwen - not. From what I remember Mistral was quite good at this, but it's been a long time since I used this model. Due to the fact that I'm GPU-poor I'm talking about models 7b, 12b, because 70b is too much for my hardware. Does everyone just ignore languages other than English when it comes to local models? Or do they only work well with more parameters? Also, I have a sneaking suspicion that OpenAi translates the prompts, so that may be the reason why the experience is so good. Mainly because the titles or summaries of previous chats are often in English, even if they were not in English at all.
2025-01-08T16:49:39
https://www.reddit.com/r/LocalLLaMA/comments/1hwopfx/are_the_multilingual_capabilities_of_current/
samorollo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwopfx
false
null
t3_1hwopfx
/r/LocalLLaMA/comments/1hwopfx/are_the_multilingual_capabilities_of_current/
false
false
self
2
null
Which is the best llm that I can run locally?
1
[removed]
2025-01-08T16:50:29
https://www.reddit.com/r/LocalLLaMA/comments/1hwoq86/which_is_the_best_llm_that_i_can_run_locally/
B1GG3ST
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwoq86
false
null
t3_1hwoq86
/r/LocalLLaMA/comments/1hwoq86/which_is_the_best_llm_that_i_can_run_locally/
false
false
self
1
null
Has anyone tried instruction finetuning Andrej Karpathy's nanoGPT?
6
Basically the title: I have pretrained a model using Karpathy's nanoGPT and now I want to instruction fine-tune it. I am pretty new to fine-tuning. Has anyone successfully instruction fine-tuned it? If so, can you shed some light? Pleaseeee point me to the right resource to accomplish this task. I have been trying a lot but ahh, no progress yet.
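For reference, one common recipe (a hedged sketch rather than nanoGPT-specific code; `enc`, `eos_id` and the model interface are placeholders) is to concatenate prompt and response into one token sequence and mask the loss on the prompt tokens:

```python
import torch
import torch.nn.functional as F

def build_example(enc, prompt, response, eos_id):
    # Tokenize prompt and response, append EOS, and mark prompt positions with
    # -100 so cross_entropy ignores them (the model only learns the response).
    p = enc.encode(prompt)
    r = enc.encode(response) + [eos_id]
    ids = torch.tensor(p + r)
    targets = ids.clone()
    targets[: len(p)] = -100
    return ids[:-1], targets[1:]          # shift for next-token prediction

def instruction_loss(logits, targets):
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        targets.view(-1),
        ignore_index=-100,
    )
```

The rest of the training loop can stay as in pretraining; the random-chunk batches are just swapped for batches of these (input, target) pairs with the masked loss.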
2025-01-08T16:56:11
https://www.reddit.com/r/LocalLLaMA/comments/1hwovfv/has_any_tried_nstruction_finetuning_andrej/
Pure-Signal-3135
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwovfv
false
null
t3_1hwovfv
/r/LocalLLaMA/comments/1hwovfv/has_any_tried_nstruction_finetuning_andrej/
false
false
self
6
null
Interesting Solution to the problem of Misguided Attention: "Mindful Attention"
1
[removed]
2025-01-08T16:58:01
https://www.reddit.com/r/LocalLLaMA/comments/1hwox0d/interesting_solution_to_the_problem_of_misguided/
vesudeva
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwox0d
false
null
t3_1hwox0d
/r/LocalLLaMA/comments/1hwox0d/interesting_solution_to_the_problem_of_misguided/
false
false
self
1
{'enabled': False, 'images': [{'id': 'zc0d3MyeWRHREWvEmbtwYAnrgsfx2Dp-GlnaCrAVY5Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=108&crop=smart&auto=webp&s=d03d05df3b8cccd0dab5f86c7f33475d01067683', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=216&crop=smart&auto=webp&s=b860d53a7a30556bdf104a255fc3eb1296b9490a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=320&crop=smart&auto=webp&s=f4adfefb0fd642496cb95728fc5ac082984d81da', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=640&crop=smart&auto=webp&s=26e1c15e03312d61baecfa9fa3abb5cae1b81635', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=960&crop=smart&auto=webp&s=f9b0088f222c19528abf5024796ed17c69a0edc8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?width=1080&crop=smart&auto=webp&s=2473985480f05f78830d96131477bd0047662a36', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2jJhbKE1qPRFatVDLHMvb8gfvLyTeXLQHVrwxTYtVbg.jpg?auto=webp&s=cd8561eba8eb7d6ea881f8972a113b9c75e7c489', 'width': 1200}, 'variants': {}}]}
Interesting Solution to the problem of Misguided Attention: "Mindful Attention"
34
2025-01-08T17:01:17
https://huggingface.co/posts/Severian/375067343900874
vesudeva
huggingface.co
1970-01-01T00:00:00
0
{}
1hwp00r
false
null
t3_1hwp00r
/r/LocalLLaMA/comments/1hwp00r/interesting_solution_to_the_problem_of_misguided/
false
false
https://a.thumbs.redditm…Ul-CVzUtkZJ0.jpg
34
{'enabled': False, 'images': [{'id': '6L8pGCZQIyFSy5gzUb1HvfZTVMKEUTv0W5l6gRvrc3M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/okxb_U2_vWgnDkR1V08qly94GnsdGTVu0C8vagF8k-M.jpg?width=108&crop=smart&auto=webp&s=848e1b04770e1a18cd1732bec0c1e148c9e40bf2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/okxb_U2_vWgnDkR1V08qly94GnsdGTVu0C8vagF8k-M.jpg?width=216&crop=smart&auto=webp&s=0d9ddab35be73a1fe7bb9f515362705a243a6a50', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/okxb_U2_vWgnDkR1V08qly94GnsdGTVu0C8vagF8k-M.jpg?width=320&crop=smart&auto=webp&s=aabeb245e70abbfb0507c481ccbc02f918903122', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/okxb_U2_vWgnDkR1V08qly94GnsdGTVu0C8vagF8k-M.jpg?width=640&crop=smart&auto=webp&s=7d9d2562f385dca6a43d030b7263e0bc59a432aa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/okxb_U2_vWgnDkR1V08qly94GnsdGTVu0C8vagF8k-M.jpg?width=960&crop=smart&auto=webp&s=8ac023769195eec9840eed4994c259aadf53c72c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/okxb_U2_vWgnDkR1V08qly94GnsdGTVu0C8vagF8k-M.jpg?width=1080&crop=smart&auto=webp&s=ef73c9cf63ff2d71ad081a2b09c5d4806f799ee3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/okxb_U2_vWgnDkR1V08qly94GnsdGTVu0C8vagF8k-M.jpg?auto=webp&s=3f3e215fadf3f93086fe997e05b3295a90bac7ef', 'width': 1200}, 'variants': {}}]}
Local Android translator app using Seemless M4T unity model
7
I built an offline translation app based on the Seamless M4T Unity small model: ([https://huggingface.co/facebook/seamless-m4t-unity-small-s2t](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t)) It supports English, Spanish, Portuguese, Hindi, and obviously also Russian. Contrary to the model card, it does not support French... The app is here: [https://github.com/woheller69/seemless](https://github.com/woheller69/seemless) It is quite big (\~250 MB) and downloads the model (\~500 MB) from Hugging Face.
2025-01-08T17:01:23
https://www.reddit.com/r/LocalLLaMA/comments/1hwp04q/local_android_translator_app_using_seemless_m4t/
DocWolle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwp04q
false
null
t3_1hwp04q
/r/LocalLLaMA/comments/1hwp04q/local_android_translator_app_using_seemless_m4t/
false
false
self
7
{'enabled': False, 'images': [{'id': 'A8zr91ct_YWYksFHs-ukSsxQvKXCaA3DHqC9shXPWcM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mHRmj1bBlNdXVQcKu8JSyj4PS8OzDLC1tyrExXya0SI.jpg?width=108&crop=smart&auto=webp&s=240fd88b5f039c21cb26407e6246d2d86e292bf2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mHRmj1bBlNdXVQcKu8JSyj4PS8OzDLC1tyrExXya0SI.jpg?width=216&crop=smart&auto=webp&s=0ee34908e7e86c4038ee37ec3587cf84dbf9b581', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mHRmj1bBlNdXVQcKu8JSyj4PS8OzDLC1tyrExXya0SI.jpg?width=320&crop=smart&auto=webp&s=407ab84b10c650b747e09bdf6c5f31b1fbfd4477', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mHRmj1bBlNdXVQcKu8JSyj4PS8OzDLC1tyrExXya0SI.jpg?width=640&crop=smart&auto=webp&s=2b4c8b757f7929eafea6851291b7fe42b8549a0d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mHRmj1bBlNdXVQcKu8JSyj4PS8OzDLC1tyrExXya0SI.jpg?width=960&crop=smart&auto=webp&s=793f20999d4919498e5506face3eb0c73f604ba1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mHRmj1bBlNdXVQcKu8JSyj4PS8OzDLC1tyrExXya0SI.jpg?width=1080&crop=smart&auto=webp&s=000c1b17fe3f2cf146e8da59bfbd5217f361d4d8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mHRmj1bBlNdXVQcKu8JSyj4PS8OzDLC1tyrExXya0SI.jpg?auto=webp&s=3c7344f572dcb4f4496a08fb2a50df287db9c77c', 'width': 1200}, 'variants': {}}]}
Cloud AI for Video Games is Dead on Arrival, On-Device is the Future
1
[removed]
2025-01-08T17:10:16
https://www.reddit.com/r/LocalLLaMA/comments/1hwp87s/cloud_ai_for_video_games_is_dead_on_arrival/
maiybe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwp87s
false
null
t3_1hwp87s
/r/LocalLLaMA/comments/1hwp87s/cloud_ai_for_video_games_is_dead_on_arrival/
false
false
self
1
null
how do i run gguf models "directly from file" to openwebUI?
1
I'm still pretty new to this, so apologies if this has been asked before. I would like to run LLM models in GGUF format from a central directory on a defined path. I currently use the Qwen model through Ollama, but Ollama downloads its own version of it and stores it in a hidden directory under a random hashed name. Even if I import my own GGUFs, it will make a copy into its own path. I'm sure it is possible to run GGUF files downloaded from Hugging Face directly and access them through a web interface (ChatGPT style), but I don't know how. I heard Open WebUI can do this directly, but I haven't figured out how. I want this specifically because I have a dual-boot machine (Windows + Linux), and with Ollama each system makes its own copy of the model, which eats my system drive. So what is my desired setup? I am a bit of a noob and will be thankful for hints.
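For what it's worth, one common way to do this is to skip Ollama's model store entirely and serve the GGUF from its path with llama.cpp's server; Open WebUI can then be pointed at it as an OpenAI-compatible connection. A minimal sketch (the model path is just an example):

```
./llama-server -m /data/models/qwen2.5-7b-instruct-q4_k_m.gguf -c 8192 --port 8080
```

In Open WebUI you would add `http://localhost:8080/v1` under the OpenAI-compatible connection settings, and both OS installs can share the same model directory on a common data partition.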
2025-01-08T17:15:44
https://www.reddit.com/r/LocalLLaMA/comments/1hwpd0o/how_do_i_run_gguf_models_directly_from_file_to/
carrotsquawk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwpd0o
false
null
t3_1hwpd0o
/r/LocalLLaMA/comments/1hwpd0o/how_do_i_run_gguf_models_directly_from_file_to/
false
false
self
1
null
Cloud AI for video games is dead on arrival, what's next for game devs?
1
[removed]
2025-01-08T17:26:22
https://www.reddit.com/r/LocalLLaMA/comments/1hwpm9b/cloud_ai_for_video_games_is_dead_on_arrival_whats/
maiybe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwpm9b
false
null
t3_1hwpm9b
/r/LocalLLaMA/comments/1hwpm9b/cloud_ai_for_video_games_is_dead_on_arrival_whats/
false
false
self
1
null
When are we going to have an open source fully uncensored Advanced Voice Mode? I am tired of jailbreaking OpenAI Advanced Voice Mode. Firstly, it is boring and very hard to jailbreak it....
1
[removed]
2025-01-08T17:31:44
[deleted]
1970-01-01T00:00:00
0
{}
1hwpr1o
false
null
t3_1hwpr1o
/r/LocalLLaMA/comments/1hwpr1o/when_are_we_going_to_gave_an_open_source_fully/
false
false
default
1
null
deepseek gave me a scam website
0
https://preview.redd.it/…cc8ceeee475a9c
2025-01-08T17:48:50
https://www.reddit.com/r/LocalLLaMA/comments/1hwq6cc/deepseek_gave_me_a_scam_website/
LYERO
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwq6cc
false
null
t3_1hwq6cc
/r/LocalLLaMA/comments/1hwq6cc/deepseek_gave_me_a_scam_website/
false
false
https://b.thumbs.redditm…C4j3IM-kBaaM.jpg
0
null
PSA: Phi-4 GGUFs (from official release) don't work in Ollama (yet)
1
I assumed no changes were needed (since it's the same arch) but there were changes made here in llama.cpp that haven't been picked up by them yet (they are aware): https://github.com/ggerganov/llama.cpp/pull/10817/files In the meantime LM Studio is confirmed working, and Koboldcpp seems to have picked up the changes as well I can't tell if text-gen-webui would work, it's using llama-cpp-python 0.3.5 which is 4 weeks old where the llama.cpp changes were made 3 weeks ago, so I assume that it wouldn't work (unless it has its own workaround)
2025-01-08T17:52:49
https://www.reddit.com/r/LocalLLaMA/comments/1hwq9xp/psa_phi4_ggufs_from_official_release_dont_work_in/
noneabove1182
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwq9xp
false
null
t3_1hwq9xp
/r/LocalLLaMA/comments/1hwq9xp/psa_phi4_ggufs_from_official_release_dont_work_in/
false
false
self
1
{'enabled': False, 'images': [{'id': '9a3PLOggOSJ8259SMYvaMVeEQy1eJMy1cTYRBgCnb_c', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nD7cxvJANs1uCizLl4viCUQ3g_kuTe73HS-p3-6oLvs.jpg?width=108&crop=smart&auto=webp&s=806813f8ff0d45ebba2b2e0f84e91385939b7a17', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nD7cxvJANs1uCizLl4viCUQ3g_kuTe73HS-p3-6oLvs.jpg?width=216&crop=smart&auto=webp&s=72aa9d2fc20af7791d0aa2edaa4c554a5364fbfe', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/nD7cxvJANs1uCizLl4viCUQ3g_kuTe73HS-p3-6oLvs.jpg?width=320&crop=smart&auto=webp&s=154c1447a1f8cb20adba07f04c1e17581fecf83c', 'width': 320}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/nD7cxvJANs1uCizLl4viCUQ3g_kuTe73HS-p3-6oLvs.jpg?auto=webp&s=a625e27735efe684c385f7cf2c3d63cc145dbd58', 'width': 420}, 'variants': {}}]}
AI for your agent: Build a Perplexity Clone with LangGraph, Tavily & CopilotKit
1
2025-01-08T17:57:33
https://www.copilotkit.ai/blog/build-a-perplexity-clone-with-copilotkit
nate4t
copilotkit.ai
1970-01-01T00:00:00
0
{}
1hwqe2e
false
null
t3_1hwqe2e
/r/LocalLLaMA/comments/1hwqe2e/ai_for_your_agent_build_a_perplexity_clone_with/
false
false
https://b.thumbs.redditm…0keX5MiCZIKg.jpg
1
{'enabled': False, 'images': [{'id': 'NKEo9oDdFjq2omdSjT70lCYt9zb6A_gCX-e0vCBkKnE', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/kSgmeLKN8ClFtpzC0n8HSBPzoEkj5iaqbKSpZ5TddG0.jpg?width=108&crop=smart&auto=webp&s=b006c6deb6dabfa506f6d822d2a69cf6ad700f74', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/kSgmeLKN8ClFtpzC0n8HSBPzoEkj5iaqbKSpZ5TddG0.jpg?width=216&crop=smart&auto=webp&s=a691745a2446fdbcd4acba20d6271d68459240a8', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/kSgmeLKN8ClFtpzC0n8HSBPzoEkj5iaqbKSpZ5TddG0.jpg?width=320&crop=smart&auto=webp&s=f47afeb130bd0ffae29ed5adb20dbca7fc784e9c', 'width': 320}, {'height': 416, 'url': 'https://external-preview.redd.it/kSgmeLKN8ClFtpzC0n8HSBPzoEkj5iaqbKSpZ5TddG0.jpg?width=640&crop=smart&auto=webp&s=b819ec1df5c8a1b019b66d53a7dcb13eaa7ebd2d', 'width': 640}, {'height': 624, 'url': 'https://external-preview.redd.it/kSgmeLKN8ClFtpzC0n8HSBPzoEkj5iaqbKSpZ5TddG0.jpg?width=960&crop=smart&auto=webp&s=edeed00400709b083ca288b3ac1e3b997e7c950f', 'width': 960}, {'height': 702, 'url': 'https://external-preview.redd.it/kSgmeLKN8ClFtpzC0n8HSBPzoEkj5iaqbKSpZ5TddG0.jpg?width=1080&crop=smart&auto=webp&s=6f8397b25a74922ffb89c6ffe336de882fd2f98c', 'width': 1080}], 'source': {'height': 1040, 'url': 'https://external-preview.redd.it/kSgmeLKN8ClFtpzC0n8HSBPzoEkj5iaqbKSpZ5TddG0.jpg?auto=webp&s=5136bbde8fd2a59b05d2c14b4e9cff38d4d643d4', 'width': 1600}, 'variants': {}}]}
Quad P40 build and benchmarks with Qwen-2.5-Coder-32B and Llama 3.1-Nemotron-70B
17
Hi all,

First of all, I'd like to thank this amazing community. I've been lurking here since the leak of the first Llama model and learned a lot about running LLMs locally. I've been mentioning my several ongoing builds for a while now. I had bought a lot of hardware over the last year and change, but life has kept me busy with other things, so progress in actually building all that hardware has been slow.

The first build is finally over (at least for now). It's powered by dual Xeon E5-2599v4 CPUs, 8x64GB (512GB) of 2400MT LRDIMMs, four Nvidia P40s, and a couple of 2TB M.2 SSDs. Everything is connected to a [Supermicro X10DRX](https://www.supermicro.com/en/products/motherboard/X10DRX). It's one beast of a board with 10 (ten!) PCIe 3.0 X8 slots running at X8.

As I mentioned in several comments, the P40 PCB is the same as a reference 1080Ti with 24GB and EPS power instead of the 6+8 PCIe power connectors. And so, most 1080Ti waterblocks fit it perfectly. I am using Heatkiller IV FE 1080Ti waterblocks, and a Heatkiller bridge to simplify tubing. Heat is expelled via two 360mm radiators, one 45mm and one 30mm in series, though now I think the 45mm radiator alone would have been enough. A [Corsair XD5 pump-reservoir](https://www.corsair.com/de/de/p/custom-liquid-cooling/cx-9040002-ww/hydro-x-series-xd5-rgb-pump-reservoir-combo-cx-9040002-ww) provides ample circulation to keep them GPUs extra cool under load.

Power is provided by a Seasonic Prime 1300W PSU, and everything sits in a [Xigmatek Elysium](http://old.xigmatek.com/product_detail.php?item=9) case, since there aren't many tower cases that can accommodate an SSI-MEB motherboard like the X10DRX.

I am a software engineer, and so my main focus is on coding and logic. So, here are some benchmarks of the two models of interest to me (at least for this rig): Llama 3.1 Nemotron 70B and Qwen 2.5 Coder 32B, using llama.cpp from a couple of days ago (commit ecebbd29).

Without further ado, here are the numbers I get with llama-bench and the associated commands:

./llama-bench -r 3 -fa 1 -pg 4096,1024 -sm row --numa distribute -ctk q8_0 -ctv q8_0 -t 40 --model ~/models/Qwen2.5-Coder-32B-Instruct-128K-GGUF/Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf

| model | size | params | backend | ngl | threads | type_k | type_v | sm | fa | test | t/s |
| --- | ---: | ---: | --- | --: | ---: | ---: | ---: | ---: | -: | ---: | ---: |
| qwen2 32B Q4_K - Medium | 18.48 GiB | 32.76 B | CUDA,RPC | 99 | 40 | q8_0 | q8_0 | row | 1 | pp512 | 193.62 ± 0.32 |
| qwen2 32B Q4_K - Medium | 18.48 GiB | 32.76 B | CUDA,RPC | 99 | 40 | q8_0 | q8_0 | row | 1 | tg128 | 15.41 ± 0.01 |
| qwen2 32B Q4_K - Medium | 18.48 GiB | 32.76 B | CUDA,RPC | 99 | 40 | q8_0 | q8_0 | row | 1 | pp4096+tg1024 | 45.07 ± 0.04 |

./llama-bench -fa 1 -pg 4096,1024 -sm row --numa distribute -ctk q8_0 -ctv q8_0 -t 40 --model ~/models/Qwen2.5-Coder-32B-Instruct-128K-GGUF/Qwen2.5-Coder-32B-Instruct-Q8_0.gguf

| model | size | params | backend | ngl | threads | type_k | type_v | sm | fa | test | t/s |
| --- | ---: | ---: | --- | --: | ---: | ---: | ---: | ---: | -: | ---: | ---: |
| qwen2 32B Q8_0 | 32.42 GiB | 32.76 B | CUDA,RPC | 99 | 40 | q8_0 | q8_0 | row | 1 | pp512 | 194.76 ± 0.28 |
| qwen2 32B Q8_0 | 32.42 GiB | 32.76 B | CUDA,RPC | 99 | 40 | q8_0 | q8_0 | row | 1 | tg128 | 13.31 ± 0.13 |
| qwen2 32B Q8_0 | 32.42 GiB | 32.76 B | CUDA,RPC | 99 | 40 | q8_0 | q8_0 | row | 1 | pp4096+tg1024 | 41.62 ± 0.14 |

./llama-bench -fa 1 -pg 4096,1024 -sm row --numa distribute -t 40 --model ~/models/Qwen2.5-Coder-32B-Instruct-128K-GGUF/Qwen2.5-Coder-32B-Instruct-Q8_0.gguf

| model | size | params | backend | ngl | threads | sm | fa | test | t/s |
| --- | ---: | ---: | --- | --: | ---: | ---: | -: | ---: | ---: |
| qwen2 32B Q8_0 | 32.42 GiB | 32.76 B | CUDA,RPC | 99 | 40 | row | 1 | pp512 | 197.12 ± 0.14 |
| qwen2 32B Q8_0 | 32.42 GiB | 32.76 B | CUDA,RPC | 99 | 40 | row | 1 | tg128 | 14.16 ± 0.00 |
| qwen2 32B Q8_0 | 32.42 GiB | 32.76 B | CUDA,RPC | 99 | 40 | row | 1 | pp4096+tg1024 | 47.22 ± 0.02 |

./llama-bench -r 3 -fa 1 -pg 4096,1024 --numa distribute -ctk q8_0 -ctv q8_0 -t 40 -mg 0 -sm none --model ~/models/Qwen2.5-Coder-32B-Instruct-128K-GGUF/Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf

| model | size | params | backend | ngl | threads | type_k | type_v | sm | fa | test | t/s |
| --- | ---: | ---: | --- | --: | ---: | ---: | ---: | ---: | -: | ---: | ---: |
| qwen2 32B Q4_K - Medium | 18.48 GiB | 32.76 B | CUDA,RPC | 99 | 40 | q8_0 | q8_0 | none | 1 | pp512 | 206.11 ± 0.56 |
| qwen2 32B Q4_K - Medium | 18.48 GiB | 32.76 B | CUDA,RPC | 99 | 40 | q8_0 | q8_0 | none | 1 | tg128 | 10.99 ± 0.00 |
| qwen2 32B Q4_K - Medium | 18.48 GiB | 32.76 B | CUDA,RPC | 99 | 40 | q8_0 | q8_0 | none | 1 | pp4096+tg1024 | 37.96 ± 0.07 |

./llama-bench -r 3 -fa 1 -pg 4096,1024 -sm row --numa distribute -t 40 --model ~/models/Qwen2.5-Coder-32B-Instruct-128K-GGUF/Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf

| model | size | params | backend | ngl | threads | sm | fa | test | t/s |
| --- | ---: | ---: | --- | --: | ---: | ---: | -: | ---: | ---: |
| qwen2 32B Q4_K - Medium | 18.48 GiB | 32.76 B | CUDA,RPC | 99 | 40 | row | 1 | pp512 | 189.36 ± 0.35 |
| qwen2 32B Q4_K - Medium | 18.48 GiB | 32.76 B | CUDA,RPC | 99 | 40 | row | 1 | tg128 | 16.35 ± 0.00 |
| qwen2 32B Q4_K - Medium | 18.48 GiB | 32.76 B | CUDA,RPC | 99 | 40 | row | 1 | pp4096+tg1024 | 51.70 ± 0.08 |

./llama-bench -r 3 -fa 1 -pg 4096,1024 -sm row --numa distribute -t 40 --model ~/models/Llama-3.1-Nemotron-70B-Instruct-HF-GGUF/Llama-3.1-Nemotron-70B-Instruct-HF-Q4_K_M.gguf

| model | size | params | backend | ngl | threads | sm | fa | test | t/s |
| --- | ---: | ---: | --- | --: | ---: | ---: | -: | ---: | ---: |
| llama 70B Q4_K - Medium | 39.59 GiB | 70.55 B | CUDA,RPC | 99 | 40 | row | 1 | pp512 | 129.15 ± 0.11 |
| llama 70B Q4_K - Medium | 39.59 GiB | 70.55 B | CUDA,RPC | 99 | 40 | row | 1 | tg128 | 10.34 ± 0.02 |
| llama 70B Q4_K - Medium | 39.59 GiB | 70.55 B | CUDA,RPC | 99 | 40 | row | 1 | pp4096+tg1024 | 31.85 ± 0.11 |

./llama-bench -r 3 -fa 1 -pg 4096,1024 -sm row --numa distribute -t 40 --model ~/models/Llama-3.1-Nemotron-70B-Instruct-HF-GGUF/Llama-3.1-Nemotron-70B-Instruct-HF-Q8_0/Llama-3.1-Nemotron-70B-Instruct-HF-Q8_0-00001-of-00002.gguf

| model | size | params | backend | ngl | threads | sm | fa | test | t/s |
| --- | ---: | ---: | --- | --: | ---: | ---: | -: | ---: | ---: |
| llama 70B Q8_0 | 69.82 GiB | 70.55 B | CUDA,RPC | 99 | 40 | row | 1 | pp512 | 128.68 ± 0.05 |
| llama 70B Q8_0 | 69.82 GiB | 70.55 B | CUDA,RPC | 99 | 40 | row | 1 | tg128 | 8.65 ± 0.04 |
| llama 70B Q8_0 | 69.82 GiB | 70.55 B | CUDA,RPC | 99 | 40 | row | 1 | pp4096+tg1024 | 28.34 ± 0.03 |

./llama-bench -r 3 -fa 1 -pg 4096,1024 -sm row -ctk q8_0 -ctv q8_0 -t 40 --numa distribute --model ~/models/Llama-3.1-Nemotron-70B-Instruct-HF-GGUF/Llama-3.1-Nemotron-70B-Instruct-HF-Q8_0/Llama-3.1-Nemotron-70B-Instruct-HF-Q8_0-00001-of-00002.gguf

| model | size | params | backend | ngl | threads | type_k | type_v | sm | fa | test | t/s |
| --- | ---: | ---: | --- | --: | ---: | ---: | ---: | ---: | -: | ---: | ---: |
| llama 70B Q8_0 | 69.82 GiB | 70.55 B | CUDA,RPC | 99 | 40 | q8_0 | q8_0 | row | 1 | pp512 | 127.97 ± 0.02 |
| llama 70B Q8_0 | 69.82 GiB | 70.55 B | CUDA,RPC | 99 | 40 | q8_0 | q8_0 | row | 1 | tg128 | 8.47 ± 0.00 |
| llama 70B Q8_0 | 69.82 GiB | 70.55 B | CUDA,RPC | 99 | 40 | q8_0 | q8_0 | row | 1 | pp4096+tg1024 | 25.45 ± 0.03 |
2025-01-08T18:05:57
https://www.reddit.com/r/LocalLLaMA/comments/1hwqloa/quad_p40_build_and_benchmarks_with_qwen25coder32b/
FullstackSensei
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwqloa
false
null
t3_1hwqloa
/r/LocalLLaMA/comments/1hwqloa/quad_p40_build_and_benchmarks_with_qwen25coder32b/
false
false
self
17
null
i dont get the hype around nvidia project digits?
0
I can get a whole rig of 5x 3090s for the price of that, and small form factor barely matters for something like a server. So then why should anyone buy that over a good old 3090 rig?
2025-01-08T18:06:36
https://www.reddit.com/r/LocalLLaMA/comments/1hwqm85/i_dont_get_the_hype_around_nvidia_project_digits/
tensorsgo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwqm85
false
null
t3_1hwqm85
/r/LocalLLaMA/comments/1hwqm85/i_dont_get_the_hype_around_nvidia_project_digits/
false
false
self
0
null
Thoughts on best LLM for simple story writing?
2
My kids love asking LLMs to tell them a story at bedtime, and I've noticed the story quality can be pretty hit or miss. We use OpenwebUI and the TTS and they love it! Which LLM do you think can create the best little 3-5 minute stories for kids? Second question, which LLM would be the best that can fit in 16GB of VRAM (4060ti 16gb)? I'm looking forward to your responses!
2025-01-08T18:18:09
https://www.reddit.com/r/LocalLLaMA/comments/1hwqw40/thoughts_on_best_llm_for_simple_story_writing/
StartupTim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwqw40
false
null
t3_1hwqw40
/r/LocalLLaMA/comments/1hwqw40/thoughts_on_best_llm_for_simple_story_writing/
false
false
self
2
null
What's the bees knees for image processing satellite data?
0
Currently working on a project involving a metric ton of sat data over Columbia; we're looking for specific identifiers and are probably going to deploy a GPU cluster. What are the cutting-edge locally hosted image models out there that could help with this process? Curious what others in this field of work are doing in their workflow. Thanks!
2025-01-08T18:27:56
https://www.reddit.com/r/LocalLLaMA/comments/1hwr4oo/whats_the_bees_knees_for_image_processing/
OccasionllyAsleep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwr4oo
false
null
t3_1hwr4oo
/r/LocalLLaMA/comments/1hwr4oo/whats_the_bees_knees_for_image_processing/
false
false
self
0
null
Based on CES '25 announcements, what's the best "stackable" GPU rig for running big models at high tok/s?
1
Now that we know about NVIDIA and AMD announcements, what PC rig would you recommend to someone who plans to upgrade/add more GPUs in the future? For me, the inference time is crucial, but I also care about running large models (+70B even if it takes several GPUs and Q4 quantization). For reference: - AMD Ryzn APU: https://wccftech.com/amd-ryzen-ai-max-strix-halo-ultimate-ai-pc-apus-16-zen-5-40-rdna-3-5-cores-256-gbps-bandwidth-120w/#:~:text=AMD%20Ryzen%20AI%20Max%20%E2%80%9CStrix,s%20Bandwidth%20%26%20Up%20To%20120W - NVIDIA 5090 Specs: https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/ - NVIDIA Project Digits: https://nvidianews.nvidia.com/news/nvidia-puts-grace-blackwell-on-every-desk-and-at-every-ai-developers-fingertips
2025-01-08T18:30:26
https://www.reddit.com/r/LocalLLaMA/comments/1hwr6wr/based_on_ces_25_announcements_whats_the_best/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwr6wr
false
null
t3_1hwr6wr
/r/LocalLLaMA/comments/1hwr6wr/based_on_ces_25_announcements_whats_the_best/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uARg2IxP4gOoABTswFFvtwtoFLFcIT6T1MCp1uuLio0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/FWiXitIhaN2lUAGAQIw_PZRnFn7Wm5KV49t6ewUyqlE.jpg?width=108&crop=smart&auto=webp&s=d098eca69f43b644715005faf622376e770fe606', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/FWiXitIhaN2lUAGAQIw_PZRnFn7Wm5KV49t6ewUyqlE.jpg?width=216&crop=smart&auto=webp&s=ccaf30988d1a5fadacdfb07ae2eae63d586bfcec', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/FWiXitIhaN2lUAGAQIw_PZRnFn7Wm5KV49t6ewUyqlE.jpg?width=320&crop=smart&auto=webp&s=7662903d4c894526b1238b9a3f42feb098462f16', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/FWiXitIhaN2lUAGAQIw_PZRnFn7Wm5KV49t6ewUyqlE.jpg?width=640&crop=smart&auto=webp&s=4fb14a54dfcaca48be7b6f1deb1413220e6cf290', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/FWiXitIhaN2lUAGAQIw_PZRnFn7Wm5KV49t6ewUyqlE.jpg?width=960&crop=smart&auto=webp&s=7d43f3c96a1cac312b0f5a37336ee600f5754d41', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/FWiXitIhaN2lUAGAQIw_PZRnFn7Wm5KV49t6ewUyqlE.jpg?width=1080&crop=smart&auto=webp&s=68b015951208678aad08aeb68e5e37930c4c6602', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/FWiXitIhaN2lUAGAQIw_PZRnFn7Wm5KV49t6ewUyqlE.jpg?auto=webp&s=24b623475fae83f162c20b51d39458bee43fa647', 'width': 2560}, 'variants': {}}]}
Ask me a question and I will run an experiment for you
2
I recently started to get into market research experiments using AI and would love to get some insights on whether it's working. Give me questions you want to conduct market research about and I will run an experiment based on your questions! The results are only suggestive, and I hope this could help some of you guys generate some insights about the questions you want to figure out! You can ask me questions like: Why do people read less these days? What contributes to the success of a global market platform? What are some reasons behind people's desire for success?
2025-01-08T18:44:02
https://www.reddit.com/r/LocalLLaMA/comments/1hwriwe/ask_me_a_question_and_i_will_run_an_experiment/
Nervous-Midnight-175
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwriwe
false
null
t3_1hwriwe
/r/LocalLLaMA/comments/1hwriwe/ask_me_a_question_and_i_will_run_an_experiment/
false
false
self
2
null
Is $5500 a good price for a single, brand-new H100?
0
Is $5500 a good price for a single, brand-new H100? Can't share more details due to an NDA.
2025-01-08T18:46:20
https://www.reddit.com/r/LocalLLaMA/comments/1hwrl00/is_5500_a_good_price_for_a_single_brandnew_h100/
Equivalent_Award7202
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwrl00
false
null
t3_1hwrl00
/r/LocalLLaMA/comments/1hwrl00/is_5500_a_good_price_for_a_single_brandnew_h100/
false
false
self
0
null
Best 3B NSFW GGUF model in 2025-01? (+ settings)
6
I've read the older posts, but since then many new models have been released. I'm interested in 3B and GGUF models because I would use them on mobile in ChatterUI. I've tried the Qwen 2.5 models so far, but the base model is too modish and not provocative enough. I found an Uncensored model of it, but it seems to be not as stable as the original.

I'd also be interested in the settings (min\_p, temp, top\_p, rep\_pen, pres\_pen etc.), because the ones I use on my desktop for the 7B and 14B models sometimes give strange results here. On the desktop I use mini-magnum-1.1-14B, but Rocinante seems to be good, so I'm looking for similar ones in 3B size.

These are my generation parameters for those, which are no good for the Qwen models (they generate stupidity):

`Generated tokens: 512`
`Temperature: 1.02`
`Top P: 0.85`
`Top K: 100`
`Min P: 0.03`
`Typical sampling: 1`
`Mirostat Mode: 0`
`Mirostat Tau: 5`
`Mirostat Eta: 0.1`
`Repetition Penalty Range: 1`
`Repetition Penalty: 1.05`
`Presence Penalty: 0.15`
`Frequency Penalty: 0`

So what model do you use for chat from the 3B NSFW models, and with what settings?
2025-01-08T18:53:37
https://www.reddit.com/r/LocalLLaMA/comments/1hwrrds/best_3b_nsfw_gguf_model_in_202501_settings/
mikemend
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwrrds
false
null
t3_1hwrrds
/r/LocalLLaMA/comments/1hwrrds/best_3b_nsfw_gguf_model_in_202501_settings/
false
false
nsfw
6
null
Company Server Build
1
I am a software/analytics dev in healthcare, and we are currently implementing LLMs into some of our in-house applications. We are running Ollama on an Azure VM so we have access to the APIs. After seeing the monthly price of hosting the VM, we would like to build our own server that can host bigger models and save money in the long run. Our goal is to build something that will last at least the next 3-4 years before any major upgrades. With the new releases from NVIDIA and AMD, we are not sure where to start. What would be the best route for doing this?

Current VM build:

Processor: AMD EPYC 7V12
Memory: 56 GiB
GPU: 2x Nvidia Tesla T4 (16GB)
2025-01-08T19:09:19
https://www.reddit.com/r/LocalLLaMA/comments/1hws5b0/company_server_build/
viper1o5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hws5b0
false
null
t3_1hws5b0
/r/LocalLLaMA/comments/1hws5b0/company_server_build/
false
false
self
1
null
Local Llama to perform file actions?
0
So please forgive the elementary nature of this question... I'm not super experienced with this stuff. I am sitting here trying to grok thru log files with sizes up to 110MB when the thought came to me: "Why can't I ask an AI bot to do this?" So I came here to hopefully get educated... are there local llamas that could do things like: "Scan thru biglog.log, find everything that includes the string 'hostname.domain.com' and put them into a new file"? Is this something that is possible these days?
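For the concrete example above, the filtering itself doesn't need an LLM at all; it's a few lines of Python (or a single grep), and an LLM with shell or tool access would essentially be generating something like this sketch (filenames are placeholders):

```python
needle = "hostname.domain.com"
with open("biglog.log", encoding="utf-8", errors="replace") as src, \
     open("matches.log", "w", encoding="utf-8") as dst:
    for line in src:            # streamed line by line, so a 110 MB file is fine
        if needle in line:
            dst.write(line)
```

Where local models come in is the orchestration: turning the natural-language request into that script or command and running it via function calling.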
2025-01-08T19:18:09
https://www.reddit.com/r/LocalLLaMA/comments/1hwsct6/local_llama_to_perform_file_actions/
invalidpath
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwsct6
false
null
t3_1hwsct6
/r/LocalLLaMA/comments/1hwsct6/local_llama_to_perform_file_actions/
false
false
self
0
null
Human feedback to-go
1
High quality synthetic datasets for finetuning or preference optimization benefit from human review, which is tedious stuff. I thought it might be nice to have my phone read the generated training samples to me while I'm out for a walk or so. Only thing missing would be a small handheld bluetooth remote with three buttons. One to select the next sample, one to accept, one to reject the sample. Maybe more buttons, but I imagine three with a good haptic feel will be enough and good for comfy (cognitive and physical) and efficient handling. I'd build one myself using ESP32, but if something already existed that is open for tinkering, I would prefer that. Any ideas? Better alternatives? Completely silly idea? (I'm aware that this would only be suitable for natural language text)
2025-01-08T19:24:02
https://www.reddit.com/r/LocalLLaMA/comments/1hwshtr/human_feedback_togo/
mnze_brngo_7325
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwshtr
false
null
t3_1hwshtr
/r/LocalLLaMA/comments/1hwshtr/human_feedback_togo/
false
false
self
1
null
Build AI Workflows in 2 minutes: Creating GitHub Issues from User Feedback
4
Google Colab's Power + Notion's Simplicity = Athina AI Workflows

I created a flow that converts customer feedback into GitHub issues and assigns relevant labels automatically. This is how it works:

* **Inputs:** User feedback and Repo Url 👨‍💻
* **Step 1**: Fetch and simplify the list of labels from the repository using the Tool Calling Block
* **Step 2**: Use an LLM block to analyze the feedback and assign the appropriate labels.
* **Step 3**: Create a GitHub issue using the feedback, labels, and repository URL.
* **Confirmation**: Verify that the issue was successfully created

Cool, right? Not only this: Flows lets you build, deploy, and share AI workflows using pre-built blocks like Prompt (LLM), API Call, Knowledge Retrieval, Code Execution, Web Crawling, Document Parsing, and 50+ more tools to supercharge your workflows.

Try out this flow here: [https://app.athina.ai/flows/templates/f4e2b7f4-975b-40b3-afb8-751e4f235b92](https://app.athina.ai/flows/templates/f4e2b7f4-975b-40b3-afb8-751e4f235b92)
2025-01-08T19:30:25
https://www.reddit.com/r/LocalLLaMA/comments/1hwsndz/build_ai_workflows_in_2_minutes_creating_github/
Sam_Tech1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwsndz
false
null
t3_1hwsndz
/r/LocalLLaMA/comments/1hwsndz/build_ai_workflows_in_2_minutes_creating_github/
false
false
self
4
null
The phi family model: Acing tests but failing real use cases?
26
I’ve really tried to like this family of models, including the newly released phi 4. It’s trained on synthetic data and academic textbooks, which sounds great in theory. But in practice, they all seem to fall short when it comes to real-world applications. For starters, no function calling (at least in the current Ollama version). The phi model reminds me of that one smart kid in class who always nails the tests but struggles with anything outside of that structured environment. On paper, it’s brilliant. In reality, it just doesn’t measure up. Curious if anyone else has had the same experience or sees things differently. Is it just me, or is this a recurring issue with these kinds of models?
2025-01-08T19:38:57
https://www.reddit.com/r/LocalLLaMA/comments/1hwsuuf/the_phi_family_model_acing_tests_but_failing_real/
fewsats
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwsuuf
false
null
t3_1hwsuuf
/r/LocalLLaMA/comments/1hwsuuf/the_phi_family_model_acing_tests_but_failing_real/
false
false
self
26
null
Boy is 5090 beautiful
0
A 2-slot 5090... Can we fit two of them in a Corsair 4000D without cooling problems? That's 3.4 TB/s MBW and 64GB VRAM combined. I bet you can run a 70B model with full context at 40+ tk/s...
2025-01-08T19:54:48
https://youtu.be/3a8dScJg6O0?feature=shared
siegevjorn
youtu.be
1970-01-01T00:00:00
0
{}
1hwt8f3
false
{'oembed': {'author_name': 'Linus Tech Tips', 'author_url': 'https://www.youtube.com/@LinusTechTips', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/3a8dScJg6O0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="I Was The FIRST To Game On The RTX 5090 - NVIDIA 50 Series Announcement"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/3a8dScJg6O0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'I Was The FIRST To Game On The RTX 5090 - NVIDIA 50 Series Announcement', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1hwt8f3
/r/LocalLLaMA/comments/1hwt8f3/boy_is_5090_beautiful/
false
false
https://b.thumbs.redditm…zJy1qdV-x8mw.jpg
0
{'enabled': False, 'images': [{'id': 'BEdsQ6udmM9BzMGY2swULuw9BlVWEDjuEMrQFaUMPng', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/sRJavCNnjOyobB4Oq7AtKjmCPE3zLBlJ5bDwD54JFo0.jpg?width=108&crop=smart&auto=webp&s=d24b167a295cfcee08d37107efd5e3b352f14e35', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/sRJavCNnjOyobB4Oq7AtKjmCPE3zLBlJ5bDwD54JFo0.jpg?width=216&crop=smart&auto=webp&s=a5bb75a735efb0fbe481d11ea0bc912843a6a3c6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/sRJavCNnjOyobB4Oq7AtKjmCPE3zLBlJ5bDwD54JFo0.jpg?width=320&crop=smart&auto=webp&s=a51d2914a8c964b7ab245d46a65cdd1608599c2a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/sRJavCNnjOyobB4Oq7AtKjmCPE3zLBlJ5bDwD54JFo0.jpg?auto=webp&s=c280170428f2c94cf42d132a634325b240e7f10e', 'width': 480}, 'variants': {}}]}
Why I think that NVIDIA Project DIGITS will have 273 GB/s of memory bandwidth
386
Used the following image from the NVIDIA CES presentation:

[Project DIGITS board](https://preview.redd.it/o0dkapeqltbe1.png?width=1257&format=png&auto=webp&s=46b818ce812eed4ec1607595836eaccbe0da852d)

Applied some GIMP magic to reset the perspective (not perfect but close enough), and used a photo of the Grace chip die from the same presentation to make sure the aspect ratio is correct:

[Project DIGITS - corrected perspective](https://preview.redd.it/jfdlbef8mtbe1.png?width=1257&format=png&auto=webp&s=a188f8b18c6a2ff8560b93c5ce84e20427d62f64)

Then I measured the dimensions of the memory chips on this image:

* 102 x 85 px
* 103 x 85 px
* 103 x 86 px
* 103 x 87 px
* 103 x 87 px
* 104 x 87 px

Looks consistent, so let's calculate the average aspect ratio of the chip dimensions:

* 102 / 85 = 1.2
* 103 / 85 = 1.211
* 103 / 86 = 1.198
* 103 / 87 = 1.184
* 103 / 87 = 1.184
* 104 / 87 = 1.195

Average is 1.195.

Now let's see the possible dimensions of [Micron 128Gb LPDDR5X chips](https://www.micron.com/products/memory/dram-components/lpddr5x/part-catalog?density=128Gb):

* 496-ball packages (x64 bus): 14.00 x 12.40 mm. Aspect ratio = 1.13
* 441-ball packages (x64 bus): 14.00 x 14.00 mm. Aspect ratio = 1.0
* 315-ball packages (x32 bus): 12.40 x 15.00 mm. Aspect ratio = 1.21

So the closest match (I guess 1-2% measurement errors are possible) is the 315-ball x32 bus package. With 8 chips the memory bus width will be 8 \* 32 = 256 bits. At 8533 MT/s that's 273 GB/s max. So basically the same as Strix Halo.

Another reason is that they didn't mention the memory bandwidth during the presentation. I'm sure they would have mentioned it if it was exceptionally high.

Hopefully I'm wrong! 😢 ...or there are 8 more memory chips underneath the board and I just wasted an hour of my life. 😆
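For anyone who wants to double-check the arithmetic, here is the same calculation as a small Python snippet (all numbers taken from the measurements above):

```python
widths  = [102, 103, 103, 103, 103, 104]
heights = [ 85,  85,  86,  87,  87,  87]
ratios  = [w / h for w, h in zip(widths, heights)]
print(sum(ratios) / len(ratios))       # ~1.195, closest to the 315-ball 12.40 x 15.00 mm package

bus_bits  = 8 * 32                     # eight x32 chips -> 256-bit bus
data_rate = 8533e6                     # 8533 MT/s LPDDR5X
print(bus_bits / 8 * data_rate / 1e9)  # ~273 GB/s peak bandwidth
```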
2025-01-08T20:05:14
https://www.reddit.com/r/LocalLLaMA/comments/1hwthrq/why_i_think_that_nvidia_project_digits_will_have/
fairydreaming
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwthrq
false
null
t3_1hwthrq
/r/LocalLLaMA/comments/1hwthrq/why_i_think_that_nvidia_project_digits_will_have/
false
false
https://b.thumbs.redditm…4-cxEb1LPpSg.jpg
386
null
Question: Electricity-bill friendly PC Setup for AI models including Local LLM
2
Hello dear community, I'm thinking of building a mini-workstation in an mITX case that can handle various programs including ML models and a <=14B LLM while keeping power consumption to a minimum. I'm willing to sacrifice some performance for the sake of lower electricity bills. In my country in Europe, I can set up the system below for a total of ~1650 USD:

| **Type** | **Item** |
| --- | --- |
| CPU | AMD Ryzen 9 7900 Wraith Prism Processor - 12 cores - 4 GHz - AMD AM5 - AMD Boxed (with cooling) |
| Motherboard | Asrock A620I Lightning WiFi AMD A620 AM5 mini ITX socket |
| Memory | G.Skill Ripjaws S5, DDR5, 96 GB (48 x 2), 5200MHz, CL40 |
| Hard Drive | ADATA Legend 800 2TB M.2 2280 PCI-E x4 Gen4 NVMe |
| GPU | Quadro RTX 2000E ADA 16GB GDDR6 VCNRTX2000EADA-SB |
| mITX case | SilverStone Milo ML07B HTPC ITX |
| Power Supply | be quiet! SFX-L Power 500W (BN238) |

What do you think of that? Is it OK, or is there anything to consider or change? Do you think the PSU wattage is enough, or too much/too little? (FYI, I'm not interested in GPUs like the 4060 Ti since these already have 3x the power consumption of the 2000E ADA, and for the same reason I'm not interested in OC.) Thanks in advance for your suggestions!
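As a rough sanity check on the electricity-bill angle, a running-cost estimate is easy to do; the draw figures and the 0.30 EUR/kWh rate below are assumptions for illustration, not measurements of this parts list:

```python
# Hypothetical average idle / inference draw (watts) and hours per day in each state.
idle_w, load_w = 40, 180
idle_h, load_h = 20, 4
eur_per_kwh = 0.30
daily_kwh = (idle_w * idle_h + load_w * load_h) / 1000
print(f"{daily_kwh:.2f} kWh/day, ~{daily_kwh * 365 * eur_per_kwh:.0f} EUR/year")
```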
2025-01-08T20:09:32
https://www.reddit.com/r/LocalLLaMA/comments/1hwtler/question_electricitybill_friendly_pc_setup_for_ai/
anonynousasdfg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwtler
false
null
t3_1hwtler
/r/LocalLLaMA/comments/1hwtler/question_electricitybill_friendly_pc_setup_for_ai/
false
false
self
2
null
Phi 4 + Ollama + OrionChat - Just Perfect
2
**Orion** is a web-based chat interface that simplifies interactions with multiple AI model providers, including **Ollama**.

**How to use?**

Download Ollama if you haven't already, download Phi-4 with `ollama run phi4`, and clone this repo: `git clone` [`https://github.com/EliasPereirah/OrionChat.git`](https://github.com/EliasPereirah/OrionChat.git)

You can also use it via GitHub Pages: [https://eliaspereirah.github.io/OrionChat](https://eliaspereirah.github.io/OrionChat)
2025-01-08T20:39:20
https://www.reddit.com/r/LocalLLaMA/comments/1hwuawy/phi_4_ollama_orionchat_just_perfect/
EuSouAstrid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwuawy
false
null
t3_1hwuawy
/r/LocalLLaMA/comments/1hwuawy/phi_4_ollama_orionchat_just_perfect/
false
false
self
2
{'enabled': False, 'images': [{'id': '7WLkS1XVZExN8sZC5MRT_MVwL2ut5WBY5CE4QRRwttA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xKjKtJZ3vTuRG3UHH_O5WK-O8ivi25rc5FQQx1cgB7A.jpg?width=108&crop=smart&auto=webp&s=905aabfb2b2ab94f4718cfe044807e269778dba2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xKjKtJZ3vTuRG3UHH_O5WK-O8ivi25rc5FQQx1cgB7A.jpg?width=216&crop=smart&auto=webp&s=ef67c21159c82f317c8062228c0acff90326cf47', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xKjKtJZ3vTuRG3UHH_O5WK-O8ivi25rc5FQQx1cgB7A.jpg?width=320&crop=smart&auto=webp&s=df5c6231aa02e6925cc0aa2be3876d04b8eea682', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xKjKtJZ3vTuRG3UHH_O5WK-O8ivi25rc5FQQx1cgB7A.jpg?width=640&crop=smart&auto=webp&s=a22b75fd1b89149ea93b68ceb4e3844020e89dd5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xKjKtJZ3vTuRG3UHH_O5WK-O8ivi25rc5FQQx1cgB7A.jpg?width=960&crop=smart&auto=webp&s=7e216a9eb5a33da9368c5deb84816eedf200520e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xKjKtJZ3vTuRG3UHH_O5WK-O8ivi25rc5FQQx1cgB7A.jpg?width=1080&crop=smart&auto=webp&s=7d1129e97323b3d8f06beb037097607b3a39e78c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xKjKtJZ3vTuRG3UHH_O5WK-O8ivi25rc5FQQx1cgB7A.jpg?auto=webp&s=d87e733fc327c25b2fd9cf1fb02b453f38157a47', 'width': 1200}, 'variants': {}}]}
Unprompted emoji usage - symbol of poor performance?
0
Since the release of llama 3, I've seen a general uptick in LLMs using emojis. And, to my observations, when they use them, the responses are very bad / incoherent. Most recently, I noticed o1 mini using emojis (completely unprompted, of course) in its responses, which were absolutely terrible. Has anyone experienced anything similar / know why this is the case?
2025-01-08T20:40:09
https://www.reddit.com/r/LocalLLaMA/comments/1hwubm0/unprompted_emoji_usage_symbol_of_poor_performance/
dp3471
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwubm0
false
null
t3_1hwubm0
/r/LocalLLaMA/comments/1hwubm0/unprompted_emoji_usage_symbol_of_poor_performance/
false
false
self
0
null
Phi 4 is finally on huggingface.
9
https://x.com/sytelus/status/1877015495074849218?s=46
2025-01-08T20:44:20
https://i.redd.it/s9uppq1p0ube1.jpeg
appakaradi
i.redd.it
1970-01-01T00:00:00
0
{}
1hwuf6k
false
null
t3_1hwuf6k
/r/LocalLLaMA/comments/1hwuf6k/phi_4_is_finally_on_huggingface/
false
false
https://b.thumbs.redditm…YmwTgEGdlCWQ.jpg
9
{'enabled': True, 'images': [{'id': '4JEndPfbZsRLOn3483KW7SOzKz7WFJlOw-1DejWcqmI', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/s9uppq1p0ube1.jpeg?width=108&crop=smart&auto=webp&s=08a92f2be224d77548e76ef431eb908588cb2667', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/s9uppq1p0ube1.jpeg?width=216&crop=smart&auto=webp&s=4614a3c092d04e637bd74b5cfc3e401d621e2bb2', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/s9uppq1p0ube1.jpeg?width=320&crop=smart&auto=webp&s=8d02351d10f7bfe7fac7acaa49ad8ad1f457e97d', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/s9uppq1p0ube1.jpeg?width=640&crop=smart&auto=webp&s=23e88f6dba3bbc8f290f02e0fb2031cf5d1348de', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/s9uppq1p0ube1.jpeg?width=960&crop=smart&auto=webp&s=b2f0e22e45f23ece4123963a9a1005e122ec919f', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/s9uppq1p0ube1.jpeg?width=1080&crop=smart&auto=webp&s=656bf28743a0e966256275fdb7722184ffe8429b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/s9uppq1p0ube1.jpeg?auto=webp&s=73846e8f485cd0f996f63263a2230ca3396ed56b', 'width': 1920}, 'variants': {}}]}
LLMs Structured Output Guide
1
2025-01-08T20:45:22
https://x.com/souzatharsis/status/1876971299664273892?t=iz-WyPnXsqwNer2avAmZyQ&s=19
HighlanderNJ
x.com
1970-01-01T00:00:00
0
{}
1hwug3r
false
null
t3_1hwug3r
/r/LocalLLaMA/comments/1hwug3r/llms_structured_output_guide/
false
false
https://b.thumbs.redditm…uDHy98Cbx2NQ.jpg
1
{'enabled': False, 'images': [{'id': 'jkMByfRIhXNUeXu1eEbwTwamiTr8CUFqmWlJM1RorSY', 'resolutions': [{'height': 90, 'url': 'https://external-preview.redd.it/iNUnlh1AWHQpoRSWkmuTYNAlXdiq7Lb0qSfxhL9_rYA.jpg?width=108&crop=smart&auto=webp&s=2b236564ba7db7addfc6537186bea186d4cf7df6', 'width': 108}, {'height': 180, 'url': 'https://external-preview.redd.it/iNUnlh1AWHQpoRSWkmuTYNAlXdiq7Lb0qSfxhL9_rYA.jpg?width=216&crop=smart&auto=webp&s=d9e599968bc7ff4c1ba81502c3c40c0b29e8abd0', 'width': 216}, {'height': 266, 'url': 'https://external-preview.redd.it/iNUnlh1AWHQpoRSWkmuTYNAlXdiq7Lb0qSfxhL9_rYA.jpg?width=320&crop=smart&auto=webp&s=bdbbe86eb6a6edd150d6396100789f7da739d847', 'width': 320}, {'height': 533, 'url': 'https://external-preview.redd.it/iNUnlh1AWHQpoRSWkmuTYNAlXdiq7Lb0qSfxhL9_rYA.jpg?width=640&crop=smart&auto=webp&s=9b970c3ce88a3cd919c79d6318e2437878d0a706', 'width': 640}, {'height': 800, 'url': 'https://external-preview.redd.it/iNUnlh1AWHQpoRSWkmuTYNAlXdiq7Lb0qSfxhL9_rYA.jpg?width=960&crop=smart&auto=webp&s=38983ab5c5b443ad8707064265fe7d57d135c5ab', 'width': 960}, {'height': 900, 'url': 'https://external-preview.redd.it/iNUnlh1AWHQpoRSWkmuTYNAlXdiq7Lb0qSfxhL9_rYA.jpg?width=1080&crop=smart&auto=webp&s=9e10abb7387721f97582c99a935bdbc013e41b4b', 'width': 1080}], 'source': {'height': 1338, 'url': 'https://external-preview.redd.it/iNUnlh1AWHQpoRSWkmuTYNAlXdiq7Lb0qSfxhL9_rYA.jpg?auto=webp&s=f9b2f9ebad2713d17881b85f5bb640a0e8a40184', 'width': 1605}, 'variants': {}}]}
3x 3090 set up - advices
3
Hi, I've just got hold of 4x 3090s for a killer price (MSI SUPRIM X). I'd like to build a setup with 3 of the cards (the 4th one will wait for now), ideally in a closed tower, and without having to water-cool the GPUs (I'm more open to it for the CPU). My questions are:

- Do you think I'm crazy for wanting to put 3x 3090 in a closed tower without a full water-cooled solution?
- In case I'm not, what would you select for (1) the case (I was thinking a Lian Li O11 Dynamic XL), (2) the motherboard / CPU, and (3) the power supply?

Thanks a lot!!!!!
2025-01-08T20:48:12
https://www.reddit.com/r/LocalLLaMA/comments/1hwuilb/3x_3090_set_up_advices/
HappyFaithlessness70
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwuilb
false
null
t3_1hwuilb
/r/LocalLLaMA/comments/1hwuilb/3x_3090_set_up_advices/
false
false
self
3
null
How to develop software with LLMs in a company (hardware-wise)?
2
I have recently built my own RAG application in my free time. I showed it to my boss and they were so excited that they want me to create prototypes (using LLMs) that use data from our software (e.g. CSV database exports etc.). Now the thing is that I developed this on my private machine with a 3090 GPU. The laptop I have is a quite good business laptop, but it is not able to run an 8B Llama. I have never worked with LLMs in a professional environment. My requirements would be to test different open-source LLMs but also SLMs such as the newly released Phi-4. Ideally I would like to test my approaches on Llama 70B, but 8B would be OK for now, and I know that in the long term we would need some cloud hosting for inference on the models anyway. Is it good practice to have an AWS EC2 instance and then connect VS Code to it? If so, what kind of EC2 instance would I need? What should I tell my boss I need in order to develop prototypes? And I have another question: I did not succeed in connecting my business laptop's VS Code to my private PC (using some remote option). Is that actually possible, or do I need a Windows Server OS that runs an SSH service?
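On that last question: no Windows Server should be needed, since Windows 10/11 can run the optional built-in "OpenSSH Server" feature, and VS Code's Remote-SSH extension then connects through an ordinary `~/.ssh/config` entry on the laptop. A minimal sketch (host alias, address and user are placeholders; for an EC2 GPU instance you would add an `IdentityFile` line pointing at the key):

```
Host home-llm-box
    HostName 192.168.1.50
    User myuser
    Port 22
```

The same pattern covers both cases: develop against the home 3090 for free, and swap the HostName for a cloud instance when bigger models are needed.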
2025-01-08T20:50:48
https://www.reddit.com/r/LocalLLaMA/comments/1hwukxp/how_to_develop_software_with_llms_in_a_company/
No_Duck_3939
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwukxp
false
null
t3_1hwukxp
/r/LocalLLaMA/comments/1hwukxp/how_to_develop_software_with_llms_in_a_company/
false
false
self
2
null
What? - DEEPSEEK-V3 - I just discovered 800 Placeholder Tags in deepseek's tokenizer. (along with bonus fill in the middle tags)
0
2025-01-08T21:08:05
https://www.reddit.com/gallery/1hwv038
beppled
reddit.com
1970-01-01T00:00:00
0
{}
1hwv038
false
null
t3_1hwv038
/r/LocalLLaMA/comments/1hwv038/what_deepseekv3_i_just_discovered_800_placeholder/
false
false
https://b.thumbs.redditm…s1cTrpS5DSGE.jpg
0
null
Point me in the right direction
1
Hi everyone, I’m an app dev with no coding experience. I’m the content/marketing guy, and I outsourced all the coding and database work to a third party. I believe AI agents will be the next inevitable boom—just like social media, crypto, or the dot-com era. I want to dip my toes in and build something to understand this industry better. Specifically, I want to create an agent that can replace me: I give it a content task, and it scrapes the web, does market and target audience research, finds relevant information, statistics, and unique facts on the topic, then writes, formats, and organizes everything neatly in Figma (with images as well). Is something like this possible? I have a Mac Studio and plenty of time to tinker. Could an AI agent really do my job in my style at, say, 80% accuracy—but much faster?
2025-01-08T21:28:43
https://www.reddit.com/r/LocalLLaMA/comments/1hwvhgm/point_me_in_the_right_direction/
CliveBratton
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwvhgm
false
null
t3_1hwvhgm
/r/LocalLLaMA/comments/1hwvhgm/point_me_in_the_right_direction/
false
false
self
1
null
Beginner
1
What training would you recommend for a newbie wanting to learn about and deploy local LLMs?
2025-01-08T21:42:18
https://www.reddit.com/r/LocalLLaMA/comments/1hwvt0n/beginner/
Superb_Mix_6849
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwvt0n
false
null
t3_1hwvt0n
/r/LocalLLaMA/comments/1hwvt0n/beginner/
false
false
self
1
null
Launching a side project: readzeit.com - feedback welcome!
6
Hey everyone, thought I’d kick off 2025 by sharing a project that I’ve been working on: [Zeit](http://readzeit.com/). Zeit is a 2-minute daily newsletter to round up the most interesting conversations in the AI space.  As a developer and founder, I find it dizzying and overwhelming to keep up with the pace of AI over the last couple of years, so I built something for myself and some friends that would be an easy way to keep up with what’s going on in the LLM space.  I also hate “marketing newsletters” so we source data dynamically from real social conversations (like on Reddit), and the digest is auto-generated and sorted based on social voting. The goal is to give you easy, no-BS stories on stuff like: * What other devs are building * New models that are gaining traction * Cool open source projects * Some of the challenges / problems devs are facing * Whatever else is being discussed Zeit is also an experiment into personalized content. In the future, if this gains any traction, I would love to build a tool that lets people set up their own custom digest on topics they care about.  Would be curious to hear your thoughts and criticisms on this and if you find this helpful! 
2025-01-08T21:47:57
https://www.reddit.com/r/LocalLLaMA/comments/1hwvxxp/launching_a_side_project_readzeitcom_feedback/
Time-Ad-8034
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwvxxp
false
null
t3_1hwvxxp
/r/LocalLLaMA/comments/1hwvxxp/launching_a_side_project_readzeitcom_feedback/
false
false
self
6
null
Creative Writing
14
Has anyone found a model that excels at creative writing yet? I've been trying to work with various models in a "choose your own adventure" type setup set in a fantasy world, but generally the results are poor-to-awful. Even when a given model can remember what's going on - no guarantees, no matter what the context is set to - it defaults to a Moorcock/de Camp style high fantasy setting, forcing you into the role of the hero, and I'm trying to do the Low Fantasy Just Some Dude story. Most of the models get furious with me for refusing to follow The Very Important Quest that the wizard bursts into the tavern to demand help with, in a thunderstorm, while the bard plays a song of ancient brave heroes, etc. etc. ad nauseam. Even worse, most of the "storytelling" models like Aura and Wizard seem to have been overtrained to throw potential sexual encounters at the main character at literally any moment. Constantly. I'm fine with NSFW encounters in a story if they follow the plot, but if I wanted to read PWP I'd just go to AO3 or Nifty. I realize this is probably a niche ask, but has anyone had any luck with a similar project? The best luck I've had is with a very small quant of Mistral Large (by, of course, bartowski - the GOAT of LLMs), but NeMo was horribly bad. Tips?
2025-01-08T21:49:10
https://www.reddit.com/r/LocalLLaMA/comments/1hwvyze/creative_writing/
Iamblichos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwvyze
false
null
t3_1hwvyze
/r/LocalLLaMA/comments/1hwvyze/creative_writing/
false
false
self
14
null
Open Source Computer Use Agent powered by Llama 3.2, 3.3 & OS-Atlas
10
2025-01-08T21:53:07
https://github.com/e2b-dev/open-computer-use
mlejva
github.com
1970-01-01T00:00:00
0
{}
1hww2e5
false
null
t3_1hww2e5
/r/LocalLLaMA/comments/1hww2e5/open_source_computer_use_agent_powered_by_llama/
false
false
https://b.thumbs.redditm…3kaCfRk8_6WY.jpg
10
{'enabled': False, 'images': [{'id': 'o7EkgCuQyGpfqhWG3nWdgFEkaldmBTE_KRVwgsl8THM', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/yr1tIg1YhfWrWVAC8yK7gHNMU5Y7vHJf2oQDAJoYn2Q.jpg?width=108&crop=smart&auto=webp&s=77472221cd86fe90e7c95a61834d4b956876f097', 'width': 108}, {'height': 158, 'url': 'https://external-preview.redd.it/yr1tIg1YhfWrWVAC8yK7gHNMU5Y7vHJf2oQDAJoYn2Q.jpg?width=216&crop=smart&auto=webp&s=14be171478e76682d7c38bd304e17f327335c54a', 'width': 216}, {'height': 234, 'url': 'https://external-preview.redd.it/yr1tIg1YhfWrWVAC8yK7gHNMU5Y7vHJf2oQDAJoYn2Q.jpg?width=320&crop=smart&auto=webp&s=1a4247b20cea35319c800ad26ba5a8ec6153e9bf', 'width': 320}, {'height': 468, 'url': 'https://external-preview.redd.it/yr1tIg1YhfWrWVAC8yK7gHNMU5Y7vHJf2oQDAJoYn2Q.jpg?width=640&crop=smart&auto=webp&s=7a4045e424f5416e5064b9bf6796f50d08e1b1ac', 'width': 640}, {'height': 703, 'url': 'https://external-preview.redd.it/yr1tIg1YhfWrWVAC8yK7gHNMU5Y7vHJf2oQDAJoYn2Q.jpg?width=960&crop=smart&auto=webp&s=30259077263cc439862d588464c41dcd06a65a04', 'width': 960}, {'height': 791, 'url': 'https://external-preview.redd.it/yr1tIg1YhfWrWVAC8yK7gHNMU5Y7vHJf2oQDAJoYn2Q.jpg?width=1080&crop=smart&auto=webp&s=e7752dabb0996bf8c669c1d48f4d3b06adcade75', 'width': 1080}], 'source': {'height': 2309, 'url': 'https://external-preview.redd.it/yr1tIg1YhfWrWVAC8yK7gHNMU5Y7vHJf2oQDAJoYn2Q.jpg?auto=webp&s=3c9caff68958222506e56096a13544da336ad051', 'width': 3152}, 'variants': {}}]}
Chipper Ollama Haystack RAG Toolbox got 1.0 🥳
3
What can I say, it's finally official, **Chipper got 1.0**! 🥳 Some of you might remember my post from last week, where I shared my journey building this tool. What started as a scrappy side project with a few Python scripts has now grown up a bit. Chipper gives you a web interface, CLI, and a hackable, simple architecture for embedding pipelines, document chunking, web scraping, and query workflows. Built with Haystack, **Ollama**, **Hugging Face**, **Docker**, **TailwindCSS**, and **ElasticSearch**, it runs locally via **docker compose** or can be easily deployed with **Docker Hub** images. This all began as a way to help my girlfriend with her book. I wanted to use local RAG and LLMs to explore creative ideas about characters without sharing private details with cloud services. Now it has escalated into a tool that some of you might find useful too. **Features 🍕:** * Ollama and serverless Hugging Face support * ElasticSearch for powerful knowledge bases * Document chunking with Haystack * Web scraping and audio transcription * Web and CLI interface * Easy and clean local or server-side Docker deployment **The road ahead:** I have many ideas, not that much time, and would love your help! Some of the things I'm thinking about: * Validated and improved AMD GPU support for Docker Desktop * Testing it on Linux desktop environments * And of course your ideas and contributions; PRs are very welcome! GitHub: [https://github.com/TilmanGriesel/chipper](https://github.com/TilmanGriesel/chipper) Website\*: [https://chipper.tilmangriesel.com/](https://chipper.tilmangriesel.com/) If you find Chipper useful and want to support it, a GitHub star would make me super happy and help others discover it too 🐕 (\*) Please do not kill my live demo server ❤️ https://preview.redd.it/06ltwir5eube1.png?width=1574&format=png&auto=webp&s=c02398b1b06ffd85f61f6ccabfa091d05a3afc98 https://i.redd.it/7nd363x7eube1.gif
2025-01-08T22:05:12
https://www.reddit.com/r/LocalLLaMA/comments/1hwwcwh/chipper_ollama_haystack_rag_toolbox_got_10/
Alarming_Divide_1339
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwwcwh
false
null
t3_1hwwcwh
/r/LocalLLaMA/comments/1hwwcwh/chipper_ollama_haystack_rag_toolbox_got_10/
false
false
https://a.thumbs.redditm…Dx1nwnUogDB4.jpg
3
{'enabled': False, 'images': [{'id': 'E-kEI5197G8kM3X9w_elSmIqON0oxPvPncXwN6ZG2MY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EDUvohhVb5xr-KRAeuTg8gg3QUUyDGnrLD58QPihBNs.jpg?width=108&crop=smart&auto=webp&s=dab89f0ef98812ab4e7c857b48cb84ac2407042b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EDUvohhVb5xr-KRAeuTg8gg3QUUyDGnrLD58QPihBNs.jpg?width=216&crop=smart&auto=webp&s=c148d2ee49345abe0b778295f8d0db40061a6406', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EDUvohhVb5xr-KRAeuTg8gg3QUUyDGnrLD58QPihBNs.jpg?width=320&crop=smart&auto=webp&s=36c1db5ac831e1cab6bec08c030c004aa0068232', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EDUvohhVb5xr-KRAeuTg8gg3QUUyDGnrLD58QPihBNs.jpg?width=640&crop=smart&auto=webp&s=811d33bdc4c31e21572a972d29ec5913e016dd45', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EDUvohhVb5xr-KRAeuTg8gg3QUUyDGnrLD58QPihBNs.jpg?width=960&crop=smart&auto=webp&s=944b48d019df668b23a18cc8588c864466577d77', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EDUvohhVb5xr-KRAeuTg8gg3QUUyDGnrLD58QPihBNs.jpg?width=1080&crop=smart&auto=webp&s=01e0597e135d6c5755efe8d6b609451aa6d36f2e', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/EDUvohhVb5xr-KRAeuTg8gg3QUUyDGnrLD58QPihBNs.jpg?auto=webp&s=8f8d8a351a71526f6d8ee8f233be68854b9d6fb2', 'width': 1280}, 'variants': {}}]}
How do I prevent Qwen or QwQ models from using Chinese?
3
I'm new to LLMs and am currently using LM Studio to test different models. I know the QwQ and Qwen models are popular, but they output Chinese sporadically. In rare cases, they even output Russian. No matter what system prompt I use, it is inconsistent. I've tried some of the solutions I've seen here, but they still answer in Chinese out of the blue (maybe English not being my native language has a little something to do with it). I personally love the performance of these models, but this issue really bothers me. Does anyone know how I can fix this problem in LM Studio? I'd rather stop using them if the issue can only be suppressed, not eliminated.
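One common mitigation, sketched below under the assumption that LM Studio's local OpenAI-compatible server is running (default http://localhost:1234/v1; adjust the port and model id to whatever your setup actually shows), is a strict English-only system prompt plus a lower temperature. In practice this tends to suppress rather than fully eliminate the language drift:

```python
# Minimal sketch: pin Qwen/QwQ to English via a strict system prompt, sent through
# LM Studio's OpenAI-compatible local server. Model id and port are placeholders;
# use the exact values LM Studio displays for your loaded model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

messages = [
    {
        "role": "system",
        "content": (
            "You must respond only in English. Never use Chinese, Russian, or any "
            "other language, even for single words. If you begin to drift, restate "
            "the sentence in English."
        ),
    },
    {"role": "user", "content": "Explain what a Mixture-of-Experts model is."},
]

response = client.chat.completions.create(
    model="qwen2.5-14b-instruct",  # placeholder model id
    messages=messages,
    temperature=0.3,  # lower temperature tends to reduce random language switches
)
print(response.choices[0].message.content)
```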
2025-01-08T22:24:13
https://www.reddit.com/r/LocalLLaMA/comments/1hwwt4n/how_do_i_prevent_qwen_or_qwq_models_from_using/
whduddn99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwwt4n
false
null
t3_1hwwt4n
/r/LocalLLaMA/comments/1hwwt4n/how_do_i_prevent_qwen_or_qwq_models_from_using/
false
false
self
3
null
This sums up my experience with models on Groq
1,272
2025-01-08T22:27:22
https://i.redd.it/7tqzm8bsiube1.png
Amgadoz
i.redd.it
1970-01-01T00:00:00
0
{}
1hwwvuz
false
null
t3_1hwwvuz
/r/LocalLLaMA/comments/1hwwvuz/this_sums_my_experience_with_models_on_groq/
false
false
https://b.thumbs.redditm…FvOFrUNGFGIc.jpg
1,272
{'enabled': True, 'images': [{'id': 'vsXbnM6JYZEmFVVZ05tKagAiBhUeA-rss4CDPB0tMqo', 'resolutions': [{'height': 169, 'url': 'https://preview.redd.it/7tqzm8bsiube1.png?width=108&crop=smart&auto=webp&s=6b85223cd2590476b2caf5da0ec4df8e169d1f16', 'width': 108}, {'height': 338, 'url': 'https://preview.redd.it/7tqzm8bsiube1.png?width=216&crop=smart&auto=webp&s=bbb12d4a477a8a806dee9f716cf2bb7842f5c59f', 'width': 216}, {'height': 501, 'url': 'https://preview.redd.it/7tqzm8bsiube1.png?width=320&crop=smart&auto=webp&s=bebd5aeeb53de86390096a6377b9d96fe453c674', 'width': 320}], 'source': {'height': 960, 'url': 'https://preview.redd.it/7tqzm8bsiube1.png?auto=webp&s=a8ee394921e13e57fb9c5585a47083f2abbfc50a', 'width': 613}, 'variants': {}}]}
If you're using iOS for LLMs, don't update to iOS 18 if you haven't already!
4
iOS 18 ruined LLM performance for me. I use LLM Farm, and I tested performance extensively before updating, after reading a few articles stating that iOS 18 improves performance. The articles are wrong, at least for LLMs. I use Qwen 2.5 3B, and performance has decreased significantly in it and in other models. I used to get 35 t/s before updating; now I'm lucky if I get 20 t/s generation. Don't update if you use apps like LLM Farm or any other LLM apps on iOS. One of the worst decisions I made this week, and I could really use the faster token generation.
2025-01-08T22:36:18
https://www.reddit.com/r/LocalLLaMA/comments/1hwx3b0/if_youre_using_ios_for_llms_dont_update_to_ios_18/
Dance-Till-Night1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwx3b0
false
null
t3_1hwx3b0
/r/LocalLLaMA/comments/1hwx3b0/if_youre_using_ios_for_llms_dont_update_to_ios_18/
false
false
self
4
null
Most persuasive open model?
1
[removed]
2025-01-08T22:40:27
https://www.reddit.com/r/LocalLLaMA/comments/1hwx6ny/most_persuasive_open_model/
GreenOnGray
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwx6ny
false
null
t3_1hwx6ny
/r/LocalLLaMA/comments/1hwx6ny/most_persuasive_open_model/
false
false
self
1
null
ROG Flow Z13 2025 has Ryzen AI Max+ 395 and 128 GB LPDDR5X??
56
[https://rog.asus.com/laptops/rog-flow/rog-flow-z13-2025/spec/](https://rog.asus.com/laptops/rog-flow/rog-flow-z13-2025/spec/) They also claim the NPU has 50 TOPS. Is this the Windows answer to Apple Silicon's unified memory? Obviously software support will be a big issue, but if this thing runs the way they advertise, this tablet will be more versatile than the majority of home GPU-based systems on the memory config alone, and it won't be slow either.
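A rough back-of-the-envelope check (approximate bytes per parameter, ignoring runtime overhead and KV cache) of what 128 GB of unified memory buys for large models:

```python
# Rough memory estimate for running a 70B-parameter model on 128 GB unified memory.
# Quantization sizes are approximations; real files vary by format and overhead.
params_b = 70  # billions of parameters

bytes_per_param = {"FP16": 2.0, "Q8_0": 1.0, "Q4_K_M": 0.6}

for name, bpp in bytes_per_param.items():
    weights_gb = params_b * bpp  # 1e9 params * bytes per param ≈ GB of weights
    print(f"{name:7s} ~{weights_gb:5.0f} GB of weights")

# FP16    ~  140 GB -> does not fit
# Q8_0    ~   70 GB -> fits, with room left for KV cache and the OS
# Q4_K_M  ~   42 GB -> fits comfortably, even with longer contexts
```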
2025-01-08T22:42:25
https://www.reddit.com/r/LocalLLaMA/comments/1hwx8ah/rog_flow_z13_2025_has_ryzen_ai_max_395_and_128_gb/
Daniel_H212
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwx8ah
false
null
t3_1hwx8ah
/r/LocalLLaMA/comments/1hwx8ah/rog_flow_z13_2025_has_ryzen_ai_max_395_and_128_gb/
false
false
self
56
{'enabled': False, 'images': [{'id': '2uyC4qpYIHHZgOJFOPIE7bqYS3PkkiUxhRQ4wudzIY0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hppyZPPyBS4CEiVKOv_MgktnMhCu6L8jiy50loQrgRE.jpg?width=108&crop=smart&auto=webp&s=ebfefba19ea1b916272e54b3f66270f8dded4cdd', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/hppyZPPyBS4CEiVKOv_MgktnMhCu6L8jiy50loQrgRE.jpg?width=216&crop=smart&auto=webp&s=600c00176d5045fa9395c579beb21559f5ac34bf', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/hppyZPPyBS4CEiVKOv_MgktnMhCu6L8jiy50loQrgRE.jpg?width=320&crop=smart&auto=webp&s=b9222ccaca15711fa9d239423b969a2cece8ce13', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/hppyZPPyBS4CEiVKOv_MgktnMhCu6L8jiy50loQrgRE.jpg?width=640&crop=smart&auto=webp&s=310ec58c40371c4c458196e5002227b65ebf6c91', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/hppyZPPyBS4CEiVKOv_MgktnMhCu6L8jiy50loQrgRE.jpg?width=960&crop=smart&auto=webp&s=94ed98756fa8f9c9acc35d26923e29759d7362a5', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/hppyZPPyBS4CEiVKOv_MgktnMhCu6L8jiy50loQrgRE.jpg?width=1080&crop=smart&auto=webp&s=5087a6d8a628afb8960fe554ab2a98d62cf0b18e', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://external-preview.redd.it/hppyZPPyBS4CEiVKOv_MgktnMhCu6L8jiy50loQrgRE.jpg?auto=webp&s=816c5912314c067e77d5c52f1bb974baecc09dd8', 'width': 2400}, 'variants': {}}]}
Looking for hardware advice in light of new Nvidia GPU releases
1
[removed]
2025-01-08T22:51:37
https://www.reddit.com/r/LocalLLaMA/comments/1hwxg3z/looking_for_hardware_advice_in_light_of_new/
pbOmen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwxg3z
false
null
t3_1hwxg3z
/r/LocalLLaMA/comments/1hwxg3z/looking_for_hardware_advice_in_light_of_new/
false
false
self
1
null
Tips on generating 300k+ responses with a system and a user role
1
[removed]
2025-01-08T23:03:09
https://www.reddit.com/r/LocalLLaMA/comments/1hwxqnf/tips_on_generating_300k_responses_with_a_system/
xtrs1230
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hwxqnf
false
null
t3_1hwxqnf
/r/LocalLLaMA/comments/1hwxqnf/tips_on_generating_300k_responses_with_a_system/
false
false
self
1
null