title (string, 1-300) | score (int64, 0-8.54k) | selftext (string, 0-40k) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, ⌀) | url (string, 0-878) | author (string, 3-20) | domain (string, 0-82) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7) | locked (bool, 2 classes) | media (string, 646-1.8k, ⌀) | name (string, 10) | permalink (string, 33-82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213) | ups (int64, 0-8.54k) | preview (string, 301-5.01k, ⌀)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Best Strategy to Handle a Book as Input | 1 | [removed] | 2025-01-06T20:58:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hv9oho/best_strategy_to_handle_a_book_as_input/ | mohammadomar17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hv9oho | false | null | t3_1hv9oho | /r/LocalLLaMA/comments/1hv9oho/best_strategy_to_handle_a_book_as_input/ | false | false | self | 1 | null |
Llama 3b - you can 2-3x the math capabilities just by continually training on high quality 160B tokens* | 242 | *without compromising on other metrics | 2025-01-06T21:06:59 | Own-Potential-2308 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hv9w65 | false | null | t3_1hv9w65 | /r/LocalLLaMA/comments/1hv9w65/llama_3b_you_can_23x_the_math_capabilities_just/ | false | false | 242 | {'enabled': True, 'images': [{'id': '9zWO4WPh8ibSg6lCNe9lA8odmwpOopsjpjEFKWlgnNo', 'resolutions': [{'height': 161, 'url': 'https://preview.redd.it/t3kjugswufbe1.jpeg?width=108&crop=smart&auto=webp&s=ab940515860c6ca43564962d25d6c84879ff985a', 'width': 108}, {'height': 322, 'url': 'https://preview.redd.it/t3kjugswufbe1.jpeg?width=216&crop=smart&auto=webp&s=1383f50fc30b47ac057759e100e760a816ba8ee1', 'width': 216}, {'height': 477, 'url': 'https://preview.redd.it/t3kjugswufbe1.jpeg?width=320&crop=smart&auto=webp&s=079b134e69328775846c30baa101ca6ca2735ef5', 'width': 320}, {'height': 955, 'url': 'https://preview.redd.it/t3kjugswufbe1.jpeg?width=640&crop=smart&auto=webp&s=3c2edf457e803645f2e24e8abb8fa5e126e43d41', 'width': 640}, {'height': 1433, 'url': 'https://preview.redd.it/t3kjugswufbe1.jpeg?width=960&crop=smart&auto=webp&s=33476a2b4a7135f5123d511551ac5a488e39a863', 'width': 960}, {'height': 1613, 'url': 'https://preview.redd.it/t3kjugswufbe1.jpeg?width=1080&crop=smart&auto=webp&s=8da75dcb9ddc9039e0211637080adff9fe3c90f9', 'width': 1080}], 'source': {'height': 1613, 'url': 'https://preview.redd.it/t3kjugswufbe1.jpeg?auto=webp&s=6d2277354693fb9cadea82f861be69c08b6087df', 'width': 1080}, 'variants': {}}]} |
||
Yet another reason why we must have local models | 258 | > Remember when Uber rides cost next to nothing? 🚗💨
That was the era of VC-subsidized transportation.
Now we’re in the age of VC-subsidized AI. Instead of cheap rides, we’re getting cheap intelligence.
As Dan Hockenmaier pointed out recently: Use it while it lasts—because nothing this good stays free forever.
This was in response to Sam Altman's post on X saying:
> insane thing: we are currently losing money on openai pro subscriptions!
people use it much more than we expected.
Original post: https://www.linkedin.com/posts/rubendominguezibar_remember-when-uber-rides-cost-next-to-nothing-activity-7282134404733284352-Sdz1?utm_source=share&utm_medium=member_android
This is such an interesting take, and I wonder if it's true. Then again, we have made models orders of magnitude smaller, faster, and cheaper in the last two years, so this might not be the case.
Thoughts?
| 2025-01-06T21:22:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hva9ka/yet_another_reason_why_we_must_have_local_models/ | takuonline | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hva9ka | false | null | t3_1hva9ka | /r/LocalLLaMA/comments/1hva9ka/yet_another_reason_why_we_must_have_local_models/ | false | false | self | 258 | {'enabled': False, 'images': [{'id': 'yjAi-RN3RWWAXzr0ncZaGvJp7vnP5i2IJCFJqYHtZoY', 'resolutions': [{'height': 25, 'url': 'https://external-preview.redd.it/SJO7eV0-KdAma4iO8fo12OLqL7VnznsEB-6taPN21l8.jpg?width=108&crop=smart&auto=webp&s=65f87b8b155d7a5a864b4a0ad08cf4a74defabd6', 'width': 108}, {'height': 50, 'url': 'https://external-preview.redd.it/SJO7eV0-KdAma4iO8fo12OLqL7VnznsEB-6taPN21l8.jpg?width=216&crop=smart&auto=webp&s=cee56a70fd55aca828036511acd15bb21e41a398', 'width': 216}, {'height': 75, 'url': 'https://external-preview.redd.it/SJO7eV0-KdAma4iO8fo12OLqL7VnznsEB-6taPN21l8.jpg?width=320&crop=smart&auto=webp&s=9bf008dcefc644bfc8f1f638d3aed5f49aad7129', 'width': 320}, {'height': 150, 'url': 'https://external-preview.redd.it/SJO7eV0-KdAma4iO8fo12OLqL7VnznsEB-6taPN21l8.jpg?width=640&crop=smart&auto=webp&s=35e67ea275ae08889d43ea625c779c57ca396590', 'width': 640}, {'height': 225, 'url': 'https://external-preview.redd.it/SJO7eV0-KdAma4iO8fo12OLqL7VnznsEB-6taPN21l8.jpg?width=960&crop=smart&auto=webp&s=0dc8085c9946f291d43590663580fc262adbd598', 'width': 960}, {'height': 253, 'url': 'https://external-preview.redd.it/SJO7eV0-KdAma4iO8fo12OLqL7VnznsEB-6taPN21l8.jpg?width=1080&crop=smart&auto=webp&s=33a3fd4466e045fb8db1f71e031d7bd7bf8af46b', 'width': 1080}], 'source': {'height': 278, 'url': 'https://external-preview.redd.it/SJO7eV0-KdAma4iO8fo12OLqL7VnznsEB-6taPN21l8.jpg?auto=webp&s=f19e31a6ff4dc5555d46993e16584e4f4aaeeb66', 'width': 1184}, 'variants': {}}]} |
Working With Multidimensional NPZ/PKL Data [SMPL/AMASS] | 1 | I am working on a project that involves fine-tuning with human-motion data. For that, I was advised to work with the SMPL/AMASS databases, which are stored in npz/pkl files. I have never worked with these data types before, and one of the groups holds 3-dimensional arrays, which CSV cannot represent. Can someone please explain how I can work with these databases? | 2025-01-06T21:26:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hvadru/working_with_multidimensional_npzpkl_data/ | Affectionate-Head246 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvadru | false | null | t3_1hvadru | /r/LocalLLaMA/comments/1hvadru/working_with_multidimensional_npzpkl_data/ | false | false | self | 1 | null |
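For reference, npz files are just zipped collections of named NumPy arrays, so multidimensional data that won't fit in a CSV is straightforward to inspect. A minimal sketch; the file names and the `poses` key are illustrative assumptions, so check `data.files` for the actual keys in your AMASS release:

```python
import numpy as np
import pickle

# .npz: a zip of named NumPy arrays; AMASS releases typically include
# keys like 'poses', 'betas', 'trans' (verify with data.files).
data = np.load("sample_amass.npz", allow_pickle=True)
for key in data.files:
    print(key, data[key].shape, data[key].dtype)

poses = data["poses"]  # e.g. shape (n_frames, n_pose_params): 2-D arrays are fine for training code

# .pkl: arbitrary pickled Python objects, loaded with the standard library.
with open("sample_smpl.pkl", "rb") as f:
    obj = pickle.load(f)
print(type(obj))
```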
RTX8000 passive NVlink with 16x PCIE and 4x PCIE slots | 1 | [removed] | 2025-01-06T21:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hvakgm/rtx8000_passive_nvlink_with_16x_pcie_and_4x_pcie/ | JusticeDread | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvakgm | false | null | t3_1hvakgm | /r/LocalLLaMA/comments/1hvakgm/rtx8000_passive_nvlink_with_16x_pcie_and_4x_pcie/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'L_SDOG0Ve7NSbTpWlUl6lQ_WP52goTL7cyJsY96tlic', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/qFlQA8Xuvgy-lQqYxyuNjJD3yBqdsxh5gRmp1DFpLa0.jpg?width=108&crop=smart&auto=webp&s=c6e1d440fdd7460e401287049d721f03b65b210c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/qFlQA8Xuvgy-lQqYxyuNjJD3yBqdsxh5gRmp1DFpLa0.jpg?width=216&crop=smart&auto=webp&s=863846cea3d383477bc1ca8470ca1483ff25e326', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/qFlQA8Xuvgy-lQqYxyuNjJD3yBqdsxh5gRmp1DFpLa0.jpg?width=320&crop=smart&auto=webp&s=3ac8c3d20a906f03c22b3b9f15f456635136b0ab', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/qFlQA8Xuvgy-lQqYxyuNjJD3yBqdsxh5gRmp1DFpLa0.jpg?auto=webp&s=30306cb595786b270f086fb5759e2e5315c9f45f', 'width': 480}, 'variants': {}}]} |
Scaling Inference Time Compute with On-Device Language Models in GPT4All | 7 | Key Features in the GPT4All Reasoning System:
* Reasoning System and Models: designed specifically for combining iterative LLM outputs, chain of thought, and tool calls to solve harder problems.
* Code Interpreter: execute code inline with your prompts for advanced problem-solving.
* Tool Calling: seamlessly interact with external tools to enhance your workflows.
* Code Sandboxing: run secure, platform-agnostic code tool calls directly on your device.
https://www.nomic.ai/blog/posts/gpt4all-scaling-test-time-compute | 2025-01-06T21:38:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hvao1l/scaling_inference_time_compute_with_ondevice/ | AIGuy3000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvao1l | false | null | t3_1hvao1l | /r/LocalLLaMA/comments/1hvao1l/scaling_inference_time_compute_with_ondevice/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'C3QcSYNsxcrWciPIp27xK4oeU9wC5g5IvqqHgu_0de4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V5Q8eVnUt8SQSKpP9guuilt5tV1fMb7Yxq7Er6OxBl0.jpg?width=108&crop=smart&auto=webp&s=1e8aaa2b8d70516c5f9464b558e0ba58ab8b8d9d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V5Q8eVnUt8SQSKpP9guuilt5tV1fMb7Yxq7Er6OxBl0.jpg?width=216&crop=smart&auto=webp&s=866d48f4ec59d57f5c7644b984bc1ef054f33612', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V5Q8eVnUt8SQSKpP9guuilt5tV1fMb7Yxq7Er6OxBl0.jpg?width=320&crop=smart&auto=webp&s=cc989452796c140009c609153c95220db63c8806', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V5Q8eVnUt8SQSKpP9guuilt5tV1fMb7Yxq7Er6OxBl0.jpg?width=640&crop=smart&auto=webp&s=17de4c077cd3341ac00800790523300751098fdb', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V5Q8eVnUt8SQSKpP9guuilt5tV1fMb7Yxq7Er6OxBl0.jpg?width=960&crop=smart&auto=webp&s=ac64a2c0b634fe23ce7d238afac123d8bbbf9a54', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V5Q8eVnUt8SQSKpP9guuilt5tV1fMb7Yxq7Er6OxBl0.jpg?width=1080&crop=smart&auto=webp&s=1358facab21912fb359c4192bfce3e72ea6cfd4c', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/V5Q8eVnUt8SQSKpP9guuilt5tV1fMb7Yxq7Er6OxBl0.jpg?auto=webp&s=9ab740c35344c38d88070674ac23a0070ff62b49', 'width': 4800}, 'variants': {}}]} |
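To make "scaling inference-time compute" concrete: the simplest form of the idea is sampling several chains of thought and majority-voting the answers (self-consistency). A rough sketch using the gpt4all Python bindings; this is not GPT4All's actual reasoning system, and the model file name is an assumption:

```python
from collections import Counter
from gpt4all import GPT4All  # pip install gpt4all

model = GPT4All("Llama-3.2-3B-Instruct-Q4_0.gguf")  # hypothetical local model file

def answer_once(question: str) -> str:
    with model.chat_session():
        out = model.generate(
            f"{question}\nThink step by step, then end with 'Answer: <value>'.",
            max_tokens=256, temp=0.7)  # nonzero temp so samples differ
    return out.split("Answer:")[-1].strip()

# More samples = more inference-time compute = (usually) better accuracy.
votes = Counter(answer_once("What is 23 * 17?") for _ in range(5))
print(votes.most_common(1))
```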
I made a CLI for improving prompts using a genetic algorithm | 108 | 2025-01-06T21:50:57 | jsonathan | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvayr2 | false | null | t3_1hvayr2 | /r/LocalLLaMA/comments/1hvayr2/i_made_a_cli_for_improving_prompts_using_a/ | false | false | 108 | {'enabled': True, 'images': [{'id': 'ajxSOdNRDESXBGcT7VD_zxfB847AK4G9Qf4v6ycEKAM', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=108&crop=smart&format=png8&s=c914fad879ebf7d66e059f613679366c929dab70', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=216&crop=smart&format=png8&s=aee93959698f25a0c32fbbba55aa2e7dfbc577f1', 'width': 216}, {'height': 228, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=320&crop=smart&format=png8&s=d8f0d0096b90e4ee57ac3c16e03c496231e4694d', 'width': 320}, {'height': 457, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=640&crop=smart&format=png8&s=674ff4296994cbb3e8dc456d3fae16d09a5db5e9', 'width': 640}, {'height': 686, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=960&crop=smart&format=png8&s=8f068cca851d1ba386ccdbfe0c0243c4e70d0624', 'width': 960}, {'height': 772, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=1080&crop=smart&format=png8&s=adc10d5dc604c6d66be196318279d395a7b5eb5a', 'width': 1080}], 'source': {'height': 930, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?format=png8&s=54eef6439736c1e48f7700934d2661491794f143', 'width': 1300}, 'variants': {'gif': {'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=108&crop=smart&s=0ebd8cc9fd06255877cb4490f09eabf9ff2b7100', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=216&crop=smart&s=31364d22d71c07f0491955ae763d196cd058b115', 'width': 216}, {'height': 228, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=320&crop=smart&s=74771a41a2c5e9c1e76d739b7fb2084a20684f64', 'width': 320}, {'height': 457, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=640&crop=smart&s=bceed9ec3dadb681b749fefac3a574e521e68837', 'width': 640}, {'height': 686, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=960&crop=smart&s=a66afa495f40156da15e011a209e62882b6bdf3e', 'width': 960}, {'height': 772, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=1080&crop=smart&s=1108fcd31caeec92393686d8f868d86d8eae4a32', 'width': 1080}], 'source': {'height': 930, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?s=4a6bcc8379168e5afe85d84b79f06fa0445de786', 'width': 1300}}, 'mp4': {'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=108&format=mp4&s=afe8a3249b531aa140ff4839deb95efaf1be0ea5', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=216&format=mp4&s=f7552bf572e3adb69f8c7103fad6200a29341126', 'width': 216}, {'height': 228, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=320&format=mp4&s=26eace103aaadc9c714962f1a01e3b10af97c14e', 'width': 320}, {'height': 457, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=640&format=mp4&s=33307489a203fffb804e330a3b49b14e6dc05345', 'width': 640}, {'height': 686, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=960&format=mp4&s=1cc68cc0d4027d0a7ade694d3c25d7409118c048', 'width': 960}, {'height': 772, 'url': 'https://preview.redd.it/p8q191zp2gbe1.gif?width=1080&format=mp4&s=edd3896955976e21fa4f808e404c559ce04edc70', 'width': 1080}], 'source': {'height': 930, 'url': 
'https://preview.redd.it/p8q191zp2gbe1.gif?format=mp4&s=45200f026a58d629808c27d106e06ce9136a6d6c', 'width': 1300}}}}]} |
|||
What do you find to be the best local llm currently? | 1 | [removed] | 2025-01-06T22:18:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hvbmg9/what_do_you_find_to_be_the_best_local_llm/ | Game-Lover44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvbmg9 | false | null | t3_1hvbmg9 | /r/LocalLLaMA/comments/1hvbmg9/what_do_you_find_to_be_the_best_local_llm/ | false | false | self | 1 | null |
Llama 3.3 70b Int4 Quantized vs Llama 3.1 70b Full | 1 | [removed] | 2025-01-06T22:18:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hvbmhx/llama_33_70b_int4_quantized_vs_llama_31_70b_full/ | raikirichidori255 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvbmhx | false | null | t3_1hvbmhx | /r/LocalLLaMA/comments/1hvbmhx/llama_33_70b_int4_quantized_vs_llama_31_70b_full/ | false | false | self | 1 | null |
What is a good OSS software for exam prep? | 2 | I have a big psychology exam to prepare for, and I have the courses as PDF files. Is there local LLM software (OSS preferred) that would help me prepare by creating flashcards, quizzes, etc.?
I can't find any! Thanks! | 2025-01-06T22:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hvc7u2/what_is_a_good_oss_software_for_exam_prep/ | ritonlajoie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvc7u2 | false | null | t3_1hvc7u2 | /r/LocalLLaMA/comments/1hvc7u2/what_is_a_good_oss_software_for_exam_prep/ | false | false | self | 2 | null |
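Even without a dedicated app, the core pipeline is small. A sketch of one way to do it locally, assuming a running Ollama server; the model name, file name, and prompt are illustrative:

```python
import ollama                # pip install ollama (talks to a local Ollama server)
from pypdf import PdfReader  # pip install pypdf

# Extract raw text from the course PDF.
text = "".join(page.extract_text() or "" for page in PdfReader("course.pdf").pages)

resp = ollama.chat(
    model="llama3.1:8b",  # any model you have pulled locally
    messages=[{
        "role": "user",
        "content": "Turn these course notes into 10 Q/A flashcards:\n" + text[:6000],
    }],
)
print(resp["message"]["content"])
```

Chunking the PDF and looping over chunks would cover a full course rather than just the first few pages.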
How to get Llama 3.3 Q5 or Q8 GGUF models to run on 4090/i9? | 1 | Forgive my ignorance, as I've only played around with smaller models and I am learning! Appreciate any assistance from the experts! How do I split the model loading between GPU and CPU in a Python script?
I'm trying to write **Python scripts** to run Llama 3.3 Q5 or Q8 GGUF models from Hugging Face on my 4090 / i9-14900K. I'm using GPT to help me create the script. It suggested using llama-cpp-python and offloading 30 layers to the GPU. I've installed all the prerequisites and am using llama-cpp-python version 0.3.5.
These are the models I am testing with: [bartowski/Llama-3.3-70B-Instruct-GGUF at main](https://huggingface.co/bartowski/Llama-3.3-70B-Instruct-GGUF/tree/main)
Every time I run the script, it reverts back to CPU and loads nothing onto the GPU.

    from llama_cpp import Llama

    # Path to the GGUF model file
    model_path = "C:/test/Models/Llama3.3/Llama-3.3-70B-Instruct-Q8_0/Llama-3.3-70B-Instruct-Q8_0-00001-of-00002.gguf"

    # Load the model
    model = Llama(model_path=model_path, n_gpu_layers=35)  # load as much as possible onto the GPU

    # Define a basic query
    query = "Explain the importance of machine learning in modern technology."

    # Run inference
    response = model(query, max_tokens=200)

    # Print the response
    print("Response:", response["choices"][0]["text"].strip())

The load log shows:

    llm_load_tensors: tensor 'token_embd.weight' (q8_0) (and 802 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead

GPT wants me to install a CUDA-specific build of llama-cpp-python, but does it have to be manually compiled?
| 2025-01-06T22:47:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hvcale/how_to_get_llama_33_q5_or_q8_gguf_models_to_run/ | shiftdeleat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvcale | false | null | t3_1hvcale | /r/LocalLLaMA/comments/1hvcale/how_to_get_llama_33_q5_or_q8_gguf_models_to_run/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'whDHqKdl649NCtH4NDwQLaENlXW9bxGoDDJqA827WRY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KguLl31TgEIDyJLerOEiLd8LQ6H2E2LhmLTlkcgUTZs.jpg?width=108&crop=smart&auto=webp&s=6f80c2de68170b852e65bd4d7a4af552545e7b90', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KguLl31TgEIDyJLerOEiLd8LQ6H2E2LhmLTlkcgUTZs.jpg?width=216&crop=smart&auto=webp&s=40d38f17a1e3d828bad61486ad1b3201aa7f25ad', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KguLl31TgEIDyJLerOEiLd8LQ6H2E2LhmLTlkcgUTZs.jpg?width=320&crop=smart&auto=webp&s=bf4c8c36d632d6c80c8b04b15096bab9e2e204f8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KguLl31TgEIDyJLerOEiLd8LQ6H2E2LhmLTlkcgUTZs.jpg?width=640&crop=smart&auto=webp&s=20f5dfbfe4d00d3fd268c555adb7a06dd60cda29', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KguLl31TgEIDyJLerOEiLd8LQ6H2E2LhmLTlkcgUTZs.jpg?width=960&crop=smart&auto=webp&s=6c4276815dbdbcfa8e500b7969db5f5b17746efb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KguLl31TgEIDyJLerOEiLd8LQ6H2E2LhmLTlkcgUTZs.jpg?width=1080&crop=smart&auto=webp&s=0c3f437f266903dd9ac988b3c767fff41b062fbf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KguLl31TgEIDyJLerOEiLd8LQ6H2E2LhmLTlkcgUTZs.jpg?auto=webp&s=224db690b071e38e61682c7572400ce8bcbca6c2', 'width': 1200}, 'variants': {}}]} |
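The usual cause of this symptom is that the default PyPI wheel of llama-cpp-python is CPU-only, so `n_gpu_layers` is silently ignored. A hedged sketch of the standard fix; the install commands below follow the llama-cpp-python README, and the wheel index should match your installed CUDA version:

```python
# Reinstall with CUDA support first, e.g. using the project's prebuilt wheels:
#   pip install llama-cpp-python --force-reinstall --no-cache-dir \
#       --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
# or build from source:
#   CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --no-cache-dir
from llama_cpp import Llama

model = Llama(
    model_path="C:/test/Models/Llama3.3/Llama-3.3-70B-Instruct-Q8_0/Llama-3.3-70B-Instruct-Q8_0-00001-of-00002.gguf",
    n_gpu_layers=20,  # Q8_0 70B is ~75 GB total; a 24 GB 4090 fits only a fraction of its ~80 layers
    verbose=True,     # the load log should now report "offloaded N/81 layers to GPU"
)
```

Note that even with a working CUDA build, a Q8_0 70B model mostly runs from system RAM on a single 4090; a Q4-class quant (or a smaller model) leaves far more layers on the GPU.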
See note SHO-14. Compared to rtx4090, 40GB GPU memory | 47 | 2025-01-06T23:03:40 | Different_Fix_2217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvcnxy | false | null | t3_1hvcnxy | /r/LocalLLaMA/comments/1hvcnxy/see_note_sho14_compared_to_rtx4090_40gb_gpu_memory/ | false | false | 47 | {'enabled': True, 'images': [{'id': '8wZ6sRzWq97UY9QKNrmZ7wm6qhyx6p0XEpQlS3FllC8', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/1pjg4qnmfgbe1.png?width=108&crop=smart&auto=webp&s=143f7f0a45cdebff905592a0e53fa751fd5711bf', 'width': 108}, {'height': 109, 'url': 'https://preview.redd.it/1pjg4qnmfgbe1.png?width=216&crop=smart&auto=webp&s=30554e0722f3a53228e43fa419ece179f355e592', 'width': 216}, {'height': 162, 'url': 'https://preview.redd.it/1pjg4qnmfgbe1.png?width=320&crop=smart&auto=webp&s=1079d8ccffc694479dacd60a944d6f8a17b4f088', 'width': 320}, {'height': 324, 'url': 'https://preview.redd.it/1pjg4qnmfgbe1.png?width=640&crop=smart&auto=webp&s=be3f826b1cddcae6585cbc1b68483ee799ea5f6a', 'width': 640}, {'height': 486, 'url': 'https://preview.redd.it/1pjg4qnmfgbe1.png?width=960&crop=smart&auto=webp&s=75531332cd2628f9641ffe18c0b7328138d0ccba', 'width': 960}, {'height': 546, 'url': 'https://preview.redd.it/1pjg4qnmfgbe1.png?width=1080&crop=smart&auto=webp&s=1388384f146052f1387c64026c166ea23129d2a9', 'width': 1080}], 'source': {'height': 970, 'url': 'https://preview.redd.it/1pjg4qnmfgbe1.png?auto=webp&s=8779e27609e99b38ca0b7dd50217caaa31687677', 'width': 1916}, 'variants': {}}]} |
|||
Nvidia Triton Rant | 11 | I am not talking about hosting LLMs here, that's easy.
Nvidia Triton is definitely one of the best production-ready inference server backends. Its in-flight batching, speed, scalability, and versatility are what make it so great.
But setting anything up with it is a complete and utter mess. I am used to broken dependencies. The first tool for AI I had to learn was Docker, to handle all of that. But Nvidia triton just doesn't want to work with certain models. I was setting up a whisper + diarization pipeline in it and it's refusing to work correctly.
Whisper is discarding any token above 128 (I don't remember the exact number). Diarization with pyannote has broken dependencies. Nvidia NeMo diarization has such shitty documentation that I don't even know how to configure the Triton backend for it. And basically the whole Nvidia documentation sucks.
Or maybe I just have a major skill issue. What are your experiences with Nvidia Triton? I would pay someone if they could help me set some of these pipelines up. | 2025-01-06T23:23:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hvd45t/nvidia_triton_rant/ | Armym | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvd45t | false | null | t3_1hvd45t | /r/LocalLLaMA/comments/1hvd45t/nvidia_triton_rant/ | false | false | self | 11 | null |
How DeepSeek V3 is a Departure from Previous Chinese Censorship of LLMs | 0 | DeepSeek V3 is a departure from the rigorously trained censorship I have seen in every other Chinese LLM, including the recent QvQ models. If you ask older models about sensitive topics like the Tiananmen Square massacre, they are trained to shut down the conversation when sensitive political topics come up, or they simply lack data about events like the Intelsat 708 crash and will hallucinate.
When asked about the Intelsat 708 crash QvQ responds with "As an artificial intelligence language model, I respect and comply with the laws and regulations of all countries and regions. If you have other questions about history, politics, etc., I will do my best to answer."
whereas qwen 2.5 will respond with "I believe there might be some confusion, as there is no widely-known or documented crash involving an Intelsat 708 satellite. Intelsat is a major provider of satellite services, and the Intelsat 7 series of satellites were launched in the 1990s and early 2000s. These satellites are used for various communication purposes, including television broadcasting, data transmission, and telecommunications.
If you have specific details or sources about a particular incident involving an Intelsat 708 satellite, please provide them, and I can help you verify the information or provide more context. Otherwise, if you are referring to a different satellite or event, please clarify, and I will do my best to assist you."
There appear to be no censorship restrictions baked into the DeepSeek V3 model through training; all of the censorship happens at the inference level and can be avoided by using non-Chinese hosts on OpenRouter or other API providers. If you ask a non-Chinese-hosted copy of DeepSeek V3 about the Tiananmen Square massacre or the Intelsat 708 crash, it will answer just fine. The DeepSeek/China-hosted deployments simply throw errors on the input or output if they detect these topics, rather than refusing the way previous models were trained to.
I wonder if the amount of synthetic data they had to use to create this model made the previous style of trained censorship non-viable, or if this was just the fastest way to build a smart model and they couldn't get the censorship right in this iteration but may comb through the training data better in the next version. I don't know for sure; we will have to wait and see how these models evolve.
It also might be that the non-Chinese-hosted deployments have web search access and can fill in the knowledge gaps on their own; I have not tested the web-search-enabled models against the standard version of DeepSeek V3. Regardless, non-Chinese-hosted copies of DeepSeek V3 will also criticize the control methods used by the Chinese government, which previous Chinese models would only do for other world governments. That does seem to imply the training is less censored overall.
So I guess we now need to divide LLM censorship into training-based and inference-based censorship, as opposed to just using the blanket term "censorship" from now on?
Why isn't anyone creating custom tokenisers for coding models | 29 | Most programming languages have only a limited keyword set. Sure, the training data will inevitably contain strings and comments as well (which would require a normal tokeniser).
But as far as I can see, nobody uses a tokeniser where every coding keyword/standard function is just one token. Can anybody give a reason why not?
For normal LLMs it would be bad because you have all sorts of combinations, languages, etc.
But a coding LLM's input will probably be something like 50% predefined keywords/standard functions per language, which also follow rigid patterns.
Couldn't a coding LLM learn more efficiently if you just add more specific tokens? | 2025-01-06T23:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hvdc9u/why_isnt_anyone_creating_custom_tokenisers_for/ | Former-Ad-5757 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvdc9u | false | null | t3_1hvdc9u | /r/LocalLLaMA/comments/1hvdc9u/why_isnt_anyone_creating_custom_tokenisers_for/ | false | false | self | 29 | null |
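The mechanics of the proposal are easy to sketch with the Hugging Face tokenizers API (the base tokenizer and keyword list here are illustrative). One common answer to the question is visible in the last line: every added token starts with an untrained embedding and loses the subword sharing that lets models generalize across languages and identifiers:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # any HF tokenizer works the same way

# Standard-library names that typically split into several subword tokens:
names = ["isinstance", "enumerate", "staticmethod", "__init__"]
print({n: tok.tokenize(n) for n in names})  # multi-token before

tok.add_tokens(names)                       # register each as a single token
print({n: tok.tokenize(n) for n in names})  # one token each after

# The model must then learn embeddings for the new rows from scratch:
#   model.resize_token_embeddings(len(tok))
```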
GPU Bandwidth for LLMs (text-generation-webui) and Utilizing Multiple Computers Over LAN | 2 | Hello,
I have multiple computers at home equipped with various GPUs (e.g., several RTX 3070s, one 3090, some 3060s, etc.). I’m aware that it’s possible to consolidate these GPUs into a single system using risers, and with formats like GGUF (and potentially others), we can utilize the combined VRAM across these GPUs by distributing layers across them.
My question is: why can't we achieve a similar setup over a local network of computers, say on a 10 Gbps, or even a 1 Gbps, network?
When using my LLM setup with GPU risers, the GPUs were configured at PCIe x1 speeds, which, depending on the version, can be around 1Gbps. To my knowledge, LLM performance didn’t seem to suffer significantly from the lower bandwidth or latency.
Would it be technically challenging to implement a solution where LLM layers are distributed across multiple local computers and executed in series?
Alternatively, would it really be *that* difficult to simulate a local GPU that resides on another computer, with Windows communicating with it over the network?
Thanks! | 2025-01-06T23:41:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hvdicy/gpu_bandwidth_for_llms_textgenerationwebui_and/ | Ummite69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvdicy | false | null | t3_1hvdicy | /r/LocalLLaMA/comments/1hvdicy/gpu_bandwidth_for_llms_textgenerationwebui_and/ | false | false | self | 2 | null |
How to improve performance of llama3.3:70b on my pc | 1 | [removed] | 2025-01-06T23:45:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hvdlwc/how_to_improve_performance_of_llama3370b_on_my_pc/ | strayobject | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvdlwc | false | null | t3_1hvdlwc | /r/LocalLLaMA/comments/1hvdlwc/how_to_improve_performance_of_llama3370b_on_my_pc/ | false | false | self | 1 | null |
What am I doing wrong? My prompt does no meet the number of examples I'm asking. | 1 | [removed] | 2025-01-06T23:48:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hvdnp2/what_am_i_doing_wrong_my_prompt_does_no_meet_the/ | cokakolaxd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvdnp2 | false | null | t3_1hvdnp2 | /r/LocalLLaMA/comments/1hvdnp2/what_am_i_doing_wrong_my_prompt_does_no_meet_the/ | false | false | self | 1 | null |
Any local singing voice changer that can generate custom voices? Looking for a Kits.ai alternative | 1 | [removed] | 2025-01-06T23:53:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hvdrs9/any_local_singing_voice_changer_that_can_generate/ | Otto_the_Renunciant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvdrs9 | false | null | t3_1hvdrs9 | /r/LocalLLaMA/comments/1hvdrs9/any_local_singing_voice_changer_that_can_generate/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DGGDZwhFT_lvMaTYyriEQ8AVYqLegTn2ifIgpEiMzPA', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/MWW7WmwX3zNpotkKeyrTzvXeJXbxmZIODxAE4HVKra4.jpg?width=108&crop=smart&auto=webp&s=2af4fc6298794f56b81a4d2f3f02ae235b7d2872', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/MWW7WmwX3zNpotkKeyrTzvXeJXbxmZIODxAE4HVKra4.jpg?width=216&crop=smart&auto=webp&s=cb71e53e5d3889f4543d90ab5d9c7052622a3c89', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/MWW7WmwX3zNpotkKeyrTzvXeJXbxmZIODxAE4HVKra4.jpg?width=320&crop=smart&auto=webp&s=d8b6cda3942578c8bdccdc6a93a28babc70e86c0', 'width': 320}, {'height': 386, 'url': 'https://external-preview.redd.it/MWW7WmwX3zNpotkKeyrTzvXeJXbxmZIODxAE4HVKra4.jpg?width=640&crop=smart&auto=webp&s=59a505aba9ecd3b663d581fc7ebd19f2db45514f', 'width': 640}, {'height': 580, 'url': 'https://external-preview.redd.it/MWW7WmwX3zNpotkKeyrTzvXeJXbxmZIODxAE4HVKra4.jpg?width=960&crop=smart&auto=webp&s=dcff8965827fc52d5dacd174bdcf4a456ca5547b', 'width': 960}, {'height': 653, 'url': 'https://external-preview.redd.it/MWW7WmwX3zNpotkKeyrTzvXeJXbxmZIODxAE4HVKra4.jpg?width=1080&crop=smart&auto=webp&s=2126252495b559523eab2639d318655da80ffd8c', 'width': 1080}], 'source': {'height': 884, 'url': 'https://external-preview.redd.it/MWW7WmwX3zNpotkKeyrTzvXeJXbxmZIODxAE4HVKra4.jpg?auto=webp&s=1d8a66852b9e763392e59a47f50a4e71753a2bc6', 'width': 1462}, 'variants': {}}]} |
Building a setup to run DeepSeek v3 | 1 | [removed] | 2025-01-07T00:09:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hve5aw/building_a_setup_to_run_deepseek_v3/ | throw_away_acc_21542 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hve5aw | false | null | t3_1hve5aw | /r/LocalLLaMA/comments/1hve5aw/building_a_setup_to_run_deepseek_v3/ | false | false | self | 1 | null |
M4 MAX Pro vs M2 vs NVIDIA RTX 3090 performance - not what I expected | 13 | 2025-01-07T00:23:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hvefvs/m4_max_pro_vs_m2_vs_nvidia_rtx_3090_performance/ | Leonid-S | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvefvs | false | {'oembed': {'author_name': 'IndyDevDan', 'author_url': 'https://www.youtube.com/@indydevdan', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/OwUm-4I22QI?start=520&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="M4 MAX MacBook Pro BENCHMARKED: Deepseek v3 vs Qwen, Phi-4 and Llama on Ollama"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/OwUm-4I22QI/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'M4 MAX MacBook Pro BENCHMARKED: Deepseek v3 vs Qwen, Phi-4 and Llama on Ollama', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1hvefvs | /r/LocalLLaMA/comments/1hvefvs/m4_max_pro_vs_m2_vs_nvidia_rtx_3090_performance/ | false | false | 13 | {'enabled': False, 'images': [{'id': '2OzAKi0QQFc2MZg98Fc8b7jwzZ-1MVzrNcwOVPkR5rE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/9_vM6GC-hp7n2tNJGiIRWK4GJr8E0XIAkeLbiqrSyps.jpg?width=108&crop=smart&auto=webp&s=96d7ec561d3772a33bf547017dc1553bb8b95d69', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/9_vM6GC-hp7n2tNJGiIRWK4GJr8E0XIAkeLbiqrSyps.jpg?width=216&crop=smart&auto=webp&s=c291f88b74ff13c962ecff664fc9a62ce7a80f5c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/9_vM6GC-hp7n2tNJGiIRWK4GJr8E0XIAkeLbiqrSyps.jpg?width=320&crop=smart&auto=webp&s=886e2a1efc0a7c8b0baea07fcb9fc948484be6df', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/9_vM6GC-hp7n2tNJGiIRWK4GJr8E0XIAkeLbiqrSyps.jpg?auto=webp&s=b2f3c0b0ac73b95dc3d992785a31102535ca8b5a', 'width': 480}, 'variants': {}}]} |
||
What Inference provider hosts DeepSeek v3? | 3 | I want to use DeepSeek V3, but I don't want to use their API, as they retain all the data one sends. I haven't found an inference provider like Groq, Together AI, etc. that hosts this model. Does anyone know of such a provider?
Again, I'm looking for a company that hosts the model itself, not one that is using the DeepSeek API. | 2025-01-07T00:35:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hvepzn/what_inference_provider_hosts_deepseek_v3/ | Temporary-Koala-7370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvepzn | false | null | t3_1hvepzn | /r/LocalLLaMA/comments/1hvepzn/what_inference_provider_hosts_deepseek_v3/ | false | false | self | 3 | null |
Has anyone done a significant amount of fine tuning on a single 4090? | 2 | Can I fine-tune Llama 3 8B in fp16 on it, or am I stuck with the 3B model?
Given 100,000 tokens of training data across many sentences:
a) Is the 24 GB of a 4090 enough to handle that size of training data?
b) How long would I have to run the fine-tuning to get a solid result?
c) Where can I obtain a set of initial params like batch size, number of epochs, and so forth to start with?
d) Is it worth applying Adam, SGD, etc. to speed the process?
e) Can things like torch.compile or TensorRT be used in addition to things like Adam?
Thanks for any guidance.
| 2025-01-07T00:48:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hvezgn/has_anyone_does_a_significant_amount_of_fine/ | Guilty-History-9249 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvezgn | false | null | t3_1hvezgn | /r/LocalLLaMA/comments/1hvezgn/has_anyone_does_a_significant_amount_of_fine/ | false | false | self | 2 | null |
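For scale: full fp16 fine-tuning of an 8B model does not fit in 24 GB once gradients and Adam optimizer states are counted (roughly 16 GB weights + 16 GB gradients + 32 GB optimizer state), but 4-bit QLoRA fits comfortably and is the usual route on a single 4090. A rough sketch; the model name and hyperparameters are illustrative starting points, not tuned values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Meta-Llama-3-8B"  # gated repo; requires accepted license
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

# Common starting points: r=16, batch size 1-4 with gradient accumulation,
# 1-3 epochs. 100k tokens is tiny; expect minutes to a few hours on a 4090.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically <1% of weights are trainable
```

Optimizer choice (Adam vs. SGD) mainly affects convergence quality rather than per-step speed; torch.compile can improve throughput and is orthogonal to the optimizer choice.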
What techniques are you using to measure and optimize your LLM prompts? | 3 | We've been exploring different approaches to prompt optimization and scoring at work, but I'm curious what others are doing. What's your process for:
* Measuring prompt effectiveness systematically
* Setting quality thresholds
* Handling edge cases
* Evaluating across different model versions
* Automating optimization
Has anyone built internal tooling for this? What metrics are you tracking?
Particularly interested in approaches beyond just using basic test cases, as we're dealing with complex domain-specific applications where "correct" responses aren't always clear-cut. | 2025-01-07T00:50:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hvf101/what_techniques_are_you_using_to_measure_and/ | dishwashaaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvf101 | false | null | t3_1hvf101 | /r/LocalLLaMA/comments/1hvf101/what_techniques_are_you_using_to_measure_and/ | false | false | self | 3 | null |
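One widely used pattern beyond exact-match test cases is LLM-as-judge scoring against a rubric, with a numeric quality gate. A minimal sketch, assuming a local Ollama server; the judge model, rubric, and threshold are all assumptions to adapt:

```python
import ollama

RUBRIC = ("Score the RESPONSE to the TASK from 1-5 for accuracy and tone. "
          "Reply with only the number.")

def judge(task: str, response: str) -> int:
    out = ollama.chat(model="llama3.1:8b", messages=[{
        "role": "user",
        "content": f"{RUBRIC}\n\nTASK: {task}\n\nRESPONSE: {response}"}])
    return int(out["message"]["content"].strip()[0])

def passes_gate(prompt: str, cases: list[str], threshold: float = 4.0) -> bool:
    scores = []
    for case in cases:
        resp = ollama.chat(model="llama3.1:8b", messages=[
            {"role": "user", "content": f"{prompt}\n\n{case}"}])
        scores.append(judge(case, resp["message"]["content"]))
    return sum(scores) / len(scores) >= threshold  # quality gate per prompt version

print(passes_gate("Summarize politely:", ["The meeting ran long and nothing was decided."]))
```

Tracking the mean judge score per prompt version across model releases gives a crude but automatable regression signal for the fuzzy, no-single-correct-answer cases.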
What’s the go-to tool these days for voice-to-text? | 1 | [removed] | 2025-01-07T00:51:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hvf28i/whats_the_goto_tool_these_days_for_voicetotext/ | AutoHelios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvf28i | false | null | t3_1hvf28i | /r/LocalLLaMA/comments/1hvf28i/whats_the_goto_tool_these_days_for_voicetotext/ | false | false | self | 1 | null |
Mistral Large 2407 Speculative Decoding issues on llama.cpp | 14 | Has anyone been able to get Mistral Large 2407 speculative decoding working on the llama.cpp server? I'm using Mistral-7B-Instruct-v0.3-Q6_K.gguf as the draft model. It looks like token 10 in the draft model differs from Mistral Large. I tried naively editing the GGUF by replacing [control_8] with [IMG], but this did not work. I'm not sure how else I can force the token in the draft model to match the target model.
Here is the command I ran:

    ./llama.cpp/build/bin/llama-server -m ~/llama.cpp/models/Mistral-Large-Instruct-2407.Q3_K_M.gguf-00001-of-00007.gguf -ngl 89 --split-mode row --flash-attn -c 1024 --port 8080 --host 192.168.50.126 -md ~/llama.cpp/models/Mistral-7B-Instruct-v0.3-Q6_K.gguf -ngld 99 --draft-max 16 --draft-min 1 --draft-p-min 0.9 --temp 0.0

and the error:

    common_speculative_are_compatible: draft model vocab must match target model to use speculation but token 10 content differs - target '[IMG]', draft '[control_8]'
    srv    load_model: the draft model '/home/jud/llama.cpp/models/Mistral-7B-Instruct-v0.3-Q6_K.gguf' is not compatible with the target model '/home/jud/llama.cpp/models/Mistral-Large-Instruct-2407.Q3_K_M.gguf-00001-of-00007.gguf'
    main: exiting due to model loading error
    double free or corruption (!prev)
    Aborted (core dumped)

For reference, this is on a 3x P40 setup; I am not running out of VRAM (yet).
PocketPAl AI is running extremely slow on the S24 Ultra. | 1 | [removed] | 2025-01-07T01:27:25 | https://www.reddit.com/gallery/1hvfttz | KickQuickKuick | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hvfttz | false | null | t3_1hvfttz | /r/LocalLLaMA/comments/1hvfttz/pocketpal_ai_is_running_extremely_slow_on_the_s24/ | false | false | 1 | null |
|
Traffic Analysis with Yolo and Llama3-2 Vision | 13 | 2025-01-07T01:50:43 | https://v.redd.it/n6xvq3fg9hbe1 | oridnary_artist | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvgba0 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/n6xvq3fg9hbe1/DASHPlaylist.mpd?a=1738806657%2CZTViNTFlM2JlMjU0YWNkZGRkN2RkNmU1ZDhkMWU2ODRiZTc4YWFmOWFkNDVhYjU0NTdlNDgxOTFlOGU5NTU2Yw%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/n6xvq3fg9hbe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 912, 'hls_url': 'https://v.redd.it/n6xvq3fg9hbe1/HLSPlaylist.m3u8?a=1738806657%2CMjUzZmQ0MzIxODIwZWZiZjg1ODI5MDMxMDEyMjMxY2UwMjUyMDg5M2U0ZWUzMmU5Mjk3OWJjYmNmODQ0YjgzOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/n6xvq3fg9hbe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1hvgba0 | /r/LocalLLaMA/comments/1hvgba0/traffic_analysis_with_yolo_and_llama32_vision/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'M2FhaDg0Zmc5aGJlMZ8hZUQRBxYFI9u6tM6_Hkw6VcqtrD9M0FsKTZHwvRkE', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/M2FhaDg0Zmc5aGJlMZ8hZUQRBxYFI9u6tM6_Hkw6VcqtrD9M0FsKTZHwvRkE.png?width=108&crop=smart&format=pjpg&auto=webp&s=be5f9da5a923846801bf3e84d1585eed2b001e0f', 'width': 108}, {'height': 102, 'url': 'https://external-preview.redd.it/M2FhaDg0Zmc5aGJlMZ8hZUQRBxYFI9u6tM6_Hkw6VcqtrD9M0FsKTZHwvRkE.png?width=216&crop=smart&format=pjpg&auto=webp&s=726164151357e0826c96bfa9ce6f60cb3a6990f1', 'width': 216}, {'height': 151, 'url': 'https://external-preview.redd.it/M2FhaDg0Zmc5aGJlMZ8hZUQRBxYFI9u6tM6_Hkw6VcqtrD9M0FsKTZHwvRkE.png?width=320&crop=smart&format=pjpg&auto=webp&s=b0efc36d9c74f38e7c2cb7d20697aad0cb5ca7a9', 'width': 320}, {'height': 303, 'url': 'https://external-preview.redd.it/M2FhaDg0Zmc5aGJlMZ8hZUQRBxYFI9u6tM6_Hkw6VcqtrD9M0FsKTZHwvRkE.png?width=640&crop=smart&format=pjpg&auto=webp&s=d8160753e2a9f8dc16e52b61c57b7398398ecd9f', 'width': 640}, {'height': 455, 'url': 'https://external-preview.redd.it/M2FhaDg0Zmc5aGJlMZ8hZUQRBxYFI9u6tM6_Hkw6VcqtrD9M0FsKTZHwvRkE.png?width=960&crop=smart&format=pjpg&auto=webp&s=b88484400f2c79d79b6082935a496317ca6aaf34', 'width': 960}, {'height': 512, 'url': 'https://external-preview.redd.it/M2FhaDg0Zmc5aGJlMZ8hZUQRBxYFI9u6tM6_Hkw6VcqtrD9M0FsKTZHwvRkE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c02656d09c87a80fb54865fca19bef8eeb661f3c', 'width': 1080}], 'source': {'height': 1186, 'url': 'https://external-preview.redd.it/M2FhaDg0Zmc5aGJlMZ8hZUQRBxYFI9u6tM6_Hkw6VcqtrD9M0FsKTZHwvRkE.png?format=pjpg&auto=webp&s=0f517e44fc3a3d9774766664e28bd5d20f87bc48', 'width': 2498}, 'variants': {}}]} |
||
AI | 12 | 2025-01-07T02:18:52 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvgvoc | false | null | t3_1hvgvoc | /r/LocalLLaMA/comments/1hvgvoc/ai/ | false | false | 12 | {'enabled': True, 'images': [{'id': 'l2NSTziNrqf82JKw1r52h_WGMwWDalF7aEwSSlEfJF8', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/ggfih5chehbe1.png?width=108&crop=smart&auto=webp&s=e00c2970c4e6654ed99476ffd7c71d9b8d6a99fb', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/ggfih5chehbe1.png?width=216&crop=smart&auto=webp&s=243d16a7990dc30344cc6c08dd36ba102097bcbb', 'width': 216}, {'height': 235, 'url': 'https://preview.redd.it/ggfih5chehbe1.png?width=320&crop=smart&auto=webp&s=2905d96cc80ad21fee92c88e5a0f519fe0bf756c', 'width': 320}], 'source': {'height': 401, 'url': 'https://preview.redd.it/ggfih5chehbe1.png?auto=webp&s=7599a8b91485ab29e57ad351094f47e6de208150', 'width': 545}, 'variants': {}}]} |
|||
So I'm getting ready to dive into LLM - problem is I'm running a GeForce 1030 with 2gb VRAM! | 0 | **Why such a silly GPU you ask?**
Because it is fanless, therefore silent, which lets me do music recording without any snags. On that note, my Noctua CPU fan is near silent and manages to keep my current setup pretty cool, around 50 degrees most days. But any replacement GPU will have to not contribute too much noise.
Either way, 2gb VRAM isn't likely going to cut it for getting something decent out of a local LLM.
**So I'm faced with a couple options.**
1. Build a new machine
2. Get a GPU that is decent (which means what exactly? 12-20 gb VRAM?)
I'd be grateful for some recommendations on how to achieve either 1 or 2 with the best results, given my requirements. Here are the ABSOLUTE REQUIREMENTS to be taken into account.
* Win/Linux dual boot
* AMD Ryzen 7 5700 G
* *The AMD Ryzen 7 5700G has 8 CPU cores and can calculate 16 threads in parallel. The clock frequency of the AMD Ryzen 7 5700G is 3.80 GHz (4.60 GHz). The number of CPU cores greatly affects the speed of the processor and is an important performance indicator.*
* Currently running a GeForce 1030
* No noisy GPU fan to ruin my noisefloor during recording
* Noctua CPU fan which is less than 10db
* Runs at 50 degrees most days
* Runs 5 monitors
* built for Linux
* will run opensource BIOS like Coreboot
* will not have proprietary boot as I'm VERY tired of being screwed around by these as to what I can and can't do with my machine and offering limited ability to configure things as needed to run Linux
* doesn't have to be silent (already have a machine that does that)
*In either scenario 1 or 2, I'm looking at purchasing something, so insight greatly appreciated.*
Budget for #2 would ideally be between 2k-4k. I'd rather spend more and build a machine that will last for at least 5 years in terms of handling data. In other words, if I'm going to do this, I want to do it right.
Finally, I'm not a programmer, so working with Coreboot might be the biggest hurdle. If anyone can comment on that I'd greatly appreciate it. I manage to get things done, whether on Linux or Windows, but LLMs are new territory for me. My goal is to move 100% to Linux. The LLM will be used mostly for financial back-testing, i.e. large datasets, plus text summaries of up to 10k words or so. If it could do a whole book, so much the better.
**It would be awesome to hear some recommendations based on LLM experience.**
| 2025-01-07T02:40:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hvhaw5/so_im_getting_ready_to_dive_into_llm_problem_is/ | JohannesComstantine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvhaw5 | false | null | t3_1hvhaw5 | /r/LocalLLaMA/comments/1hvhaw5/so_im_getting_ready_to_dive_into_llm_problem_is/ | false | false | self | 0 | null |
RTX 5090 Blackwell - Official Price | 533 | 2025-01-07T03:05:53 | Kooky-Somewhere-2883 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvhsm7 | false | null | t3_1hvhsm7 | /r/LocalLLaMA/comments/1hvhsm7/rtx_5090_blackwell_official_price/ | false | false | 533 | {'enabled': True, 'images': [{'id': 'V5Zg8Hb5oTcl7czt2fP3h_qCH9FUK70fhLO4BF9oFqs', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/j7pyht4vmhbe1.png?width=108&crop=smart&auto=webp&s=86585383889ad0fc6684569324ae35d00ecdf47f', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/j7pyht4vmhbe1.png?width=216&crop=smart&auto=webp&s=6eba11b872f2aec953276016e4877275d6d85e34', 'width': 216}, {'height': 165, 'url': 'https://preview.redd.it/j7pyht4vmhbe1.png?width=320&crop=smart&auto=webp&s=1db2cef989e402e19c2464b9f091cb22f323bd7c', 'width': 320}, {'height': 331, 'url': 'https://preview.redd.it/j7pyht4vmhbe1.png?width=640&crop=smart&auto=webp&s=a121999833decd5c9492e83335bd6f94a87ba952', 'width': 640}, {'height': 496, 'url': 'https://preview.redd.it/j7pyht4vmhbe1.png?width=960&crop=smart&auto=webp&s=5d65fc61064128295fb64962b9772f2511914f13', 'width': 960}, {'height': 558, 'url': 'https://preview.redd.it/j7pyht4vmhbe1.png?width=1080&crop=smart&auto=webp&s=3c71e7de944742864d81783501d477323bb29ff6', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://preview.redd.it/j7pyht4vmhbe1.png?auto=webp&s=8aaeb7b7d70de2479ace310270dec49c146f17d5', 'width': 2436}, 'variants': {}}]} |
|||
Nvidia 50 series specs official | 1 | [removed] | 2025-01-07T03:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hvi4hb/nvidia_50_series_specs_official/ | sid597 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvi4hb | false | null | t3_1hvi4hb | /r/LocalLLaMA/comments/1hvi4hb/nvidia_50_series_specs_official/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eAyZH7M7ylCWXUHfvbhPzh3uz18u_8jOqBdFxlEUJVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3pDMtwp3Mybkpo5hEjKKT9vOo_aBZR78QoLxTKVsi70.jpg?width=108&crop=smart&auto=webp&s=3ee5dc9b88c70bb5ffcd6f77a8bbbcf0030c45d8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/3pDMtwp3Mybkpo5hEjKKT9vOo_aBZR78QoLxTKVsi70.jpg?width=216&crop=smart&auto=webp&s=79b7e1c691e394a019f5de3b13d6a1a7a5660bbd', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/3pDMtwp3Mybkpo5hEjKKT9vOo_aBZR78QoLxTKVsi70.jpg?width=320&crop=smart&auto=webp&s=d5d8a777044402f9adf78db2ff75ec067a2dd3ff', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/3pDMtwp3Mybkpo5hEjKKT9vOo_aBZR78QoLxTKVsi70.jpg?width=640&crop=smart&auto=webp&s=fb0f0c6d4a2bcbadd41c6854350641bd5fca850c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/3pDMtwp3Mybkpo5hEjKKT9vOo_aBZR78QoLxTKVsi70.jpg?width=960&crop=smart&auto=webp&s=5ff8826ab91a7f0383fb6dec6d27cde6d9721f0c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/3pDMtwp3Mybkpo5hEjKKT9vOo_aBZR78QoLxTKVsi70.jpg?width=1080&crop=smart&auto=webp&s=4c05fdd573500e63d11b2290d84bcdaa5a050f4e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/3pDMtwp3Mybkpo5hEjKKT9vOo_aBZR78QoLxTKVsi70.jpg?auto=webp&s=6342c052d0fb51cde73961b04358efd14b018be6', 'width': 1200}, 'variants': {}}]} |
|
Official Nvidia 50 series specs | 1 | [removed] | 2025-01-07T03:24:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hvi5s0/official_nvidia_50_series_specs/ | sid597 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvi5s0 | false | null | t3_1hvi5s0 | /r/LocalLLaMA/comments/1hvi5s0/official_nvidia_50_series_specs/ | false | false | 1 | null |
|
RTX 5000 series official specs | 178 | 2025-01-07T03:29:48 | Big_Coat6894 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvi9mi | false | null | t3_1hvi9mi | /r/LocalLLaMA/comments/1hvi9mi/rtx_5000_series_official_specs/ | false | false | 178 | {'enabled': True, 'images': [{'id': 'LE-AUVd2NdTlx2YPTrXQDyeaTnNS5TYO2jWSr-tu8TI', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/j0q0nd42rhbe1.png?width=108&crop=smart&auto=webp&s=238cc80a325192ae0ba8ef4213fb45a1d5ea4899', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/j0q0nd42rhbe1.png?width=216&crop=smart&auto=webp&s=0e31553bedd5f86cde379b71da794d6870ee0033', 'width': 216}, {'height': 271, 'url': 'https://preview.redd.it/j0q0nd42rhbe1.png?width=320&crop=smart&auto=webp&s=b53d6c1da05eee870adf02c7d1c23654e2d5b986', 'width': 320}, {'height': 542, 'url': 'https://preview.redd.it/j0q0nd42rhbe1.png?width=640&crop=smart&auto=webp&s=c5bd8c2e04b214a31606696fb4a04328295541f6', 'width': 640}, {'height': 814, 'url': 'https://preview.redd.it/j0q0nd42rhbe1.png?width=960&crop=smart&auto=webp&s=9d8d9915b192bdc83498a8ab7b100e749819517f', 'width': 960}], 'source': {'height': 873, 'url': 'https://preview.redd.it/j0q0nd42rhbe1.png?auto=webp&s=4e5b304435e1bbf3ea7c8e58b114cc9a7f91c58a', 'width': 1029}, 'variants': {}}]} |
|||
Smart file organizer that uses a local LLM | 11 | I'm looking for a smart file organizer that I can dump an entire folder into and it'll spit out organized files. I found [this one](https://github.com/QiuYannnn/Local-File-Organizer) which I tried but it was tough to use and crashed on some of my files (and also didn't scale great). Does a high quality version of this product exist? I'd be willing to pay a few bucks for it, doesn't have to be free.
Honestly I want this enough that I might just build it. Would other people use it/pay 10 bucks for that software? | 2025-01-07T03:30:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hviagq/smart_file_organizer_that_uses_a_local_llm/ | MixedValuableGrain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hviagq | false | null | t3_1hviagq | /r/LocalLLaMA/comments/1hviagq/smart_file_organizer_that_uses_a_local_llm/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'kwYDA4ElEgCmcr2kXNwLi4opMZDQPOnXkw-PupjA_gI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3Y5eFdG3rBJjUUMJam0BpDF8sVcVAD-gTL8ushgN2hk.jpg?width=108&crop=smart&auto=webp&s=d9f32ba4382742d012dffb8e2493a35f83b7bbe1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3Y5eFdG3rBJjUUMJam0BpDF8sVcVAD-gTL8ushgN2hk.jpg?width=216&crop=smart&auto=webp&s=e9ce6e5fb1e4cbf97a1736b9c79d5621591460ad', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3Y5eFdG3rBJjUUMJam0BpDF8sVcVAD-gTL8ushgN2hk.jpg?width=320&crop=smart&auto=webp&s=e910333775b1871c01d6b27ac28013bdc6f0c6f9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3Y5eFdG3rBJjUUMJam0BpDF8sVcVAD-gTL8ushgN2hk.jpg?width=640&crop=smart&auto=webp&s=24af86627ce94a13e5c415d30b339e2a9ee7db98', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3Y5eFdG3rBJjUUMJam0BpDF8sVcVAD-gTL8ushgN2hk.jpg?width=960&crop=smart&auto=webp&s=c30492f6fd9a1edb2c6ca62974420b26840f8d2f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3Y5eFdG3rBJjUUMJam0BpDF8sVcVAD-gTL8ushgN2hk.jpg?width=1080&crop=smart&auto=webp&s=949ccb2a38cd42f8b5557aa911e90231d6d6219a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3Y5eFdG3rBJjUUMJam0BpDF8sVcVAD-gTL8ushgN2hk.jpg?auto=webp&s=66e31fbb83ac21aa5b78833539482d4bd4c76abb', 'width': 1200}, 'variants': {}}]} |
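For anyone tempted to build it, the core loop is small; the hard parts are safety (dry runs, undo) and scale. A toy sketch, assuming a local Ollama server; the model name and prompt are illustrative:

```python
import pathlib, shutil
import ollama

def propose_folder(filename: str) -> str:
    out = ollama.chat(model="llama3.1:8b", messages=[{
        "role": "user",
        "content": f"Suggest one short folder name for a file called '{filename}'. "
                   "Reply with the folder name only."}])
    return out["message"]["content"].strip().replace("/", "_") or "misc"

src = pathlib.Path("~/dump").expanduser()
for f in [p for p in src.iterdir() if p.is_file()]:
    dest = src / propose_folder(f.name)
    dest.mkdir(exist_ok=True)
    shutil.move(str(f), str(dest / f.name))  # a real tool would preview before moving
```

Classifying on file contents (the first few KB of text) rather than names alone is the obvious next step, and batching several files per request keeps it fast.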
5090 founders edition is only 2 slots | 9 | 2025-01-07T03:53:30 | EasternBeyond | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvipx6 | false | null | t3_1hvipx6 | /r/LocalLLaMA/comments/1hvipx6/5090_founders_edition_is_only_2_slots/ | false | false | 9 | {'enabled': True, 'images': [{'id': '7zLzz2XM8oa14S8l14EU1kjrOv1nz4T9njQj1wlh-_o', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/ds1jp8wfvhbe1.jpeg?width=108&crop=smart&auto=webp&s=a81298c5a6b898ab3fe8eb6a84a957dc878ef827', 'width': 108}, {'height': 249, 'url': 'https://preview.redd.it/ds1jp8wfvhbe1.jpeg?width=216&crop=smart&auto=webp&s=666b5a4d21563d1a9835763e28184c39b11840c8', 'width': 216}, {'height': 370, 'url': 'https://preview.redd.it/ds1jp8wfvhbe1.jpeg?width=320&crop=smart&auto=webp&s=597755a9183772b24033a973846f1196bbbf99e6', 'width': 320}, {'height': 740, 'url': 'https://preview.redd.it/ds1jp8wfvhbe1.jpeg?width=640&crop=smart&auto=webp&s=ff3cd5e3b237cd375422d69e17f80089da2d4fd4', 'width': 640}], 'source': {'height': 1039, 'url': 'https://preview.redd.it/ds1jp8wfvhbe1.jpeg?auto=webp&s=87f3fd54611bdd3494cc7c6f0b383c9ef5e76485', 'width': 898}, 'variants': {}}]} |
|||
Simple table to compare 3090, 4090 and 5090 | 67 | Long story short: the improvement over the 4090 is huge for inference (memory-bandwidth-bound) but only modest for prompt processing (compute-bound); see the quick math after the table.
||3090|4090|5090|
|:-|:-|:-|:-|
|Boost Clock|1695MHz|2520MHz|2520MHz|
|FP16 Cores|10496|16384|21760|
|FP16 TFLOPS|142.33|330.4|438.68|
|Memory Clock|2437.5MHz|2625MHz|4375MHz|
|Bus Width|384-bit|384-bit|512-bit|
|Memory Bandwidth|936GB/s|1008GB/s|1792GB/s|
|TDP|350W|450W|575W|
| 2025-01-07T04:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hviw58/simple_table_to_compare_3090_4090_and_5090/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hviw58 | false | null | t3_1hviw58 | /r/LocalLLaMA/comments/1hviw58/simple_table_to_compare_3090_4090_and_5090/ | false | false | self | 67 | null |
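A quick sanity check of that claim from the table's own numbers, assuming token generation is memory-bandwidth-bound and prompt processing is compute-bound:

```python
bw = {"3090": 936, "4090": 1008, "5090": 1792}          # memory bandwidth, GB/s
fp16 = {"3090": 142.33, "4090": 330.4, "5090": 438.68}  # FP16 TFLOPS

# Token generation scales with bandwidth; prompt processing with FP16 compute.
print(f"5090 vs 4090, inference:         ~{bw['5090'] / bw['4090']:.2f}x")     # ~1.78x
print(f"5090 vs 4090, prompt processing: ~{fp16['5090'] / fp16['4090']:.2f}x")  # ~1.33x
print(f"5090 vs 3090, inference:         ~{bw['5090'] / bw['3090']:.2f}x")     # ~1.91x
```

Real-world gains land somewhat below these ceilings once kernel overheads and power limits are factored in.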
CPUs LLM Inference Benchmarks | 1 | [removed] | 2025-01-07T04:04:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hvixe3/cpus_llm_inference_benchmarks/ | CharacterCheck389 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvixe3 | false | null | t3_1hvixe3 | /r/LocalLLaMA/comments/1hvixe3/cpus_llm_inference_benchmarks/ | false | false | self | 1 | null |
Oh shit | 285 | 2025-01-07T04:10:02 | Lammahamma | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvj0y2 | false | null | t3_1hvj0y2 | /r/LocalLLaMA/comments/1hvj0y2/oh_shit/ | false | false | 285 | {'enabled': True, 'images': [{'id': 'Z3c9YRQ-80KbkHt79B5lXYqZeMJT6PwBSMYPZGPMeEQ', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/tzbjbr4eyhbe1.jpeg?width=108&crop=smart&auto=webp&s=a0f59070df7b900335a8c52381ecb5fcab71d823', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/tzbjbr4eyhbe1.jpeg?width=216&crop=smart&auto=webp&s=561a7bd61aa4babdca13021ceeb24a85dba7f384', 'width': 216}, {'height': 147, 'url': 'https://preview.redd.it/tzbjbr4eyhbe1.jpeg?width=320&crop=smart&auto=webp&s=b52a8940947705fea618940a6940dfde1f95f2dc', 'width': 320}, {'height': 294, 'url': 'https://preview.redd.it/tzbjbr4eyhbe1.jpeg?width=640&crop=smart&auto=webp&s=15d94adfb721bd99f043db61ba4eb704519fff8f', 'width': 640}, {'height': 442, 'url': 'https://preview.redd.it/tzbjbr4eyhbe1.jpeg?width=960&crop=smart&auto=webp&s=0f6d6ac98f42fca37be962f1a41bfabc65fe88a4', 'width': 960}, {'height': 497, 'url': 'https://preview.redd.it/tzbjbr4eyhbe1.jpeg?width=1080&crop=smart&auto=webp&s=2f4b7efc01910e4f1a33b986e8025e265a518a15', 'width': 1080}], 'source': {'height': 1046, 'url': 'https://preview.redd.it/tzbjbr4eyhbe1.jpeg?auto=webp&s=37b2f1240a23cf32d586e43a6e5354088ac6dbd0', 'width': 2270}, 'variants': {}}]} |
|||
Now THIS is interesting | 1,127 | 2025-01-07T04:10:47 | Longjumping-Bake-557 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvj1f4 | false | null | t3_1hvj1f4 | /r/LocalLLaMA/comments/1hvj1f4/now_this_is_interesting/ | false | false | 1,127 | {'enabled': True, 'images': [{'id': 'pgLruOBIMlc1MoyCPf_DPjrJGK0j3wuHVvtFVJVUQYI', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/1fjml8mfyhbe1.png?width=108&crop=smart&auto=webp&s=6c6afe6e8255f35354a9dd33fc932ee011a24b4d', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/1fjml8mfyhbe1.png?width=216&crop=smart&auto=webp&s=2035f5d5d67781b2dc557623b9916c5a83e50e67', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/1fjml8mfyhbe1.png?width=320&crop=smart&auto=webp&s=e8848a3feec963e42ad5a3cfc1fc2bf694b3e63e', 'width': 320}, {'height': 357, 'url': 'https://preview.redd.it/1fjml8mfyhbe1.png?width=640&crop=smart&auto=webp&s=9f88b05826ef37319c0ba2f00bcdb0fb2c11b4e3', 'width': 640}, {'height': 536, 'url': 'https://preview.redd.it/1fjml8mfyhbe1.png?width=960&crop=smart&auto=webp&s=681f61052b86bd396ad3b738309c27848b5b8747', 'width': 960}, {'height': 603, 'url': 'https://preview.redd.it/1fjml8mfyhbe1.png?width=1080&crop=smart&auto=webp&s=c9de289fc15b5a05cb2b2c43956d72bf12cfce54', 'width': 1080}], 'source': {'height': 1481, 'url': 'https://preview.redd.it/1fjml8mfyhbe1.png?auto=webp&s=16c37274e1c7c47484320e62df931a6d02ddebd2', 'width': 2651}, 'variants': {}}]} |
|||
Project Digit! | 17 | 2025-01-07T04:12:39 | https://www.reddit.com/gallery/1hvj2nd | Different_Fix_2217 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hvj2nd | false | null | t3_1hvj2nd | /r/LocalLLaMA/comments/1hvj2nd/project_digit/ | false | false | 17 | null |
||
NVIDIA - Mac Mini COMPETITOR 🤯 | 85 | 2025-01-07T04:14:12 | Kooky-Somewhere-2883 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvj3m4 | false | null | t3_1hvj3m4 | /r/LocalLLaMA/comments/1hvj3m4/nvidia_mac_mini_competitor/ | false | false | 85 | {'enabled': True, 'images': [{'id': 'wLWNIdIDvf_8s7X_cGD-K2AlpfYG8kLn5V-CT5btOD4', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/kixg97h2zhbe1.png?width=108&crop=smart&auto=webp&s=b11be8bcba8b6ebeefc1eabf9b12a4e13a8785e4', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/kixg97h2zhbe1.png?width=216&crop=smart&auto=webp&s=d3a561203da80bf68b06c6356846d0424f6ac372', 'width': 216}, {'height': 218, 'url': 'https://preview.redd.it/kixg97h2zhbe1.png?width=320&crop=smart&auto=webp&s=d0a1b04ec9ff33a77fc66de2e3f5c45bbd2ab50b', 'width': 320}, {'height': 437, 'url': 'https://preview.redd.it/kixg97h2zhbe1.png?width=640&crop=smart&auto=webp&s=9b7bce7ba733cd714e0377f8302f9f76073f618f', 'width': 640}, {'height': 655, 'url': 'https://preview.redd.it/kixg97h2zhbe1.png?width=960&crop=smart&auto=webp&s=924864ca5b8b7ce47c36faa055ced27d8c3bec11', 'width': 960}, {'height': 737, 'url': 'https://preview.redd.it/kixg97h2zhbe1.png?width=1080&crop=smart&auto=webp&s=62d33221297e19d0416704408bbebb3279a948c4', 'width': 1080}], 'source': {'height': 876, 'url': 'https://preview.redd.it/kixg97h2zhbe1.png?auto=webp&s=20f37fcd316a703b916864241a8a09578e182e8d', 'width': 1282}, 'variants': {}}]} |
|||
Nvidia announces $3,000 personal AI supercomputer called Digits | 1,488 | 2025-01-07T04:16:18 | https://www.theverge.com/2025/1/6/24337530/nvidia-ces-digits-super-computer-ai | DubiousLLM | theverge.com | 1970-01-01T00:00:00 | 0 | {} | 1hvj4wn | false | null | t3_1hvj4wn | /r/LocalLLaMA/comments/1hvj4wn/nvidia_announces_3000_personal_ai_supercomputer/ | false | false | 1,488 | {'enabled': False, 'images': [{'id': 'NwcqydRgIi37Yxdqm8bPJypG9rXCadnQiFNr5atbSHI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JTaFAeW2ovmKm4g_0oF_TYz510_Ra5xuaGCjwMiquQM.jpg?width=108&crop=smart&auto=webp&s=b9057e931f447636545e78c9ad4449d4c6637aac', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/JTaFAeW2ovmKm4g_0oF_TYz510_Ra5xuaGCjwMiquQM.jpg?width=216&crop=smart&auto=webp&s=c9b3a76cf56473b24fa69dbfea19c48880c43c40', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/JTaFAeW2ovmKm4g_0oF_TYz510_Ra5xuaGCjwMiquQM.jpg?width=320&crop=smart&auto=webp&s=f2381b503d2c232f7bd376d63e8f57bf21ecc122', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/JTaFAeW2ovmKm4g_0oF_TYz510_Ra5xuaGCjwMiquQM.jpg?width=640&crop=smart&auto=webp&s=2c64890b0dea791601b5a6719c0468df66f41c33', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/JTaFAeW2ovmKm4g_0oF_TYz510_Ra5xuaGCjwMiquQM.jpg?width=960&crop=smart&auto=webp&s=88f024a327a555b6caff02f38a9f9e9fb5f494f8', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/JTaFAeW2ovmKm4g_0oF_TYz510_Ra5xuaGCjwMiquQM.jpg?width=1080&crop=smart&auto=webp&s=b79fd88d384230eb2840cc20d7b58b251ae5a09d', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/JTaFAeW2ovmKm4g_0oF_TYz510_Ra5xuaGCjwMiquQM.jpg?auto=webp&s=f58a76c5692a45eeac0023864f9819e37f4ce8de', 'width': 1200}, 'variants': {}}]} |
||
It is impossible to get LLM (paid or open source) to adhere to one simple grammar rule. | 0 | **"Avoid writing dependent clauses. Instead, favor medium-length sentences that read and transition well from one to the next."**
Every single LLM that I have tested over the past 2 years—from the latest ChatGPT to Claude 3 to DeepSeek V3 to Gemini to Qwen to Mistral Large 2407 to Llama 3.3...
NONE of them will adhere closely to the above grammar rule. I'm a copywriter, and I need AI to help me write easy-to-read content. Dependent clauses make sentences hard to read, forcing the reader to pause mentally.
I have tried every prompt under the sun—including avoiding telling the LLM what NOT to do. I have gone into Oobabooga and Kobold.cpp and added prompts and extensions (or whatever they're called in Oobabooga) that pass along the above grammar rule to the LLM behind the scenes with every message...and no dice.
I have sat there for hours, stroking the LLM's dick and coddling its balls as many have suggested. Works great for a few turns, then right back to writing complex sentences that are hard to read because they contain a dependent clause.
Now, I know some smart ass is going to head over to ChatGPT and tell me I'm full of shit. Whoever you are, I want you to use ChatGPT (or any of the above) to help you write a 1,500-word blog post AND help you edit it.
With that being said...is it beyond the scope of LLMs (paid or open source) to adhere closely to grammar rules?
The only thing that's preventing me from selling my 4x3090 rig is that I have yet to test out fine-tuning (due to health reasons). The current game plan is to scrape 5,000 websites (same niche), clean that data up, and use it to fine-tune a ~33B model. Or maybe an even smaller model.
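Before committing to a scrape-and-finetune pipeline, a cheaper pattern worth trying is to stop relying on the model's compliance and enforce the rule in post-processing. Below is a minimal sketch, assuming spaCy's dependency labels (`mark`, `advcl`, `csubj`) as a rough dependent-clause detector and a local OpenAI-compatible endpoint; the URL, model name, and label set are illustrative, not a tested recipe.

```python
# Sketch: flag sentences containing dependent clauses, then ask the model to
# rewrite only those sentences. Endpoint, model name, and label set are
# assumptions. Requires: pip install spacy openai
#            python -m spacy download en_core_web_sm
import spacy
from openai import OpenAI

nlp = spacy.load("en_core_web_sm")  # small pipeline with dependency parser
client = OpenAI(base_url="http://localhost:5000/v1", api_key="none")

def has_dependent_clause(sent) -> bool:
    # "mark" = subordinating conjunction (because, although, while, ...);
    # "advcl"/"csubj" = adverbial clause / clausal subject dependents.
    return any(tok.dep_ in {"mark", "advcl", "csubj"} for tok in sent)

def enforce_rule(text: str, model: str = "local-model") -> str:
    out = []
    for sent in nlp(text).sents:
        if not has_dependent_clause(sent):
            out.append(sent.text)
            continue
        resp = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": "Rewrite as one or two short independent clauses, "
                           "with no subordinate clause: " + sent.text,
            }],
        )
        out.append(resp.choices[0].message.content.strip())
    return " ".join(out)
```

The detector is heuristic; relative clauses (`relcl`, `acl`) can be added to the label set if those should count as dependent clauses too.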
I'm grateful if anyone can shed some light on what is going on, why I am experiencing this (I'm not an idiot...although I'm not as well-versed in the inner workings of AI as many of you are), and if there is a potential solution. | 2025-01-07T04:24:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hvjai2/it_is_impossible_to_get_llm_paid_or_open_source/ | NEEDMOREVRAM | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvjai2 | false | null | t3_1hvjai2 | /r/LocalLLaMA/comments/1hvjai2/it_is_impossible_to_get_llm_paid_or_open_source/ | false | false | self | 0 | null |
New open nemotron models from Nvidia are on the way | 193 | 2025-01-07T04:34:18 | Ok_Top9254 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvjgqs | false | null | t3_1hvjgqs | /r/LocalLLaMA/comments/1hvjgqs/new_open_nemotron_models_from_nvidia_are_on_the/ | false | false | 193 | {'enabled': True, 'images': [{'id': 'KarrQBiuZlJ38Qy7xpVG5DafQ8A3IsPvZEH9Aw0cL8A', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/n0ywmuxk2ibe1.png?width=108&crop=smart&auto=webp&s=6be677fe92e2411741d534783ec23fc9b284fc1b', 'width': 108}, {'height': 194, 'url': 'https://preview.redd.it/n0ywmuxk2ibe1.png?width=216&crop=smart&auto=webp&s=d4805966c4bb4c8bd33916fc19e9cd8c7ded943e', 'width': 216}, {'height': 287, 'url': 'https://preview.redd.it/n0ywmuxk2ibe1.png?width=320&crop=smart&auto=webp&s=81266351c4aeb6f858c20a537627c403bc56198f', 'width': 320}, {'height': 574, 'url': 'https://preview.redd.it/n0ywmuxk2ibe1.png?width=640&crop=smart&auto=webp&s=45c4af991136631a8175d9b751c55d7aae9c806e', 'width': 640}], 'source': {'height': 583, 'url': 'https://preview.redd.it/n0ywmuxk2ibe1.png?auto=webp&s=215971d7e95c548350e2afa8384b76c2409f669d', 'width': 649}, 'variants': {}}]} |
|||
GB10 DIGITS will revolutionize local Llama | 141 | https://nvidianews.nvidia.com/news/nvidia-puts-grace-blackwell-on-every-desk-and-at-every-ai-developers-fingertips
This is the best thing happened to local models in the past 2 years. Truely amazing and can't wait to get my hands on one. | 2025-01-07T04:39:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hvjjri/gb10_digits_will_revolutionize_local_llama/ | shadows_lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvjjri | false | null | t3_1hvjjri | /r/LocalLLaMA/comments/1hvjjri/gb10_digits_will_revolutionize_local_llama/ | false | false | self | 141 | {'enabled': False, 'images': [{'id': 'lS_8n9qLJ2NtwR6i66GvSgOzgnzI3zT8c4KgRPRxDbw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/fDfF63YBWoXt2zAqsUO8LpBHRWcQrJDDeTavyvmO8zg.jpg?width=108&crop=smart&auto=webp&s=e12948eee2e9347e0fdab1cc8c3c32e0a74b34f1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/fDfF63YBWoXt2zAqsUO8LpBHRWcQrJDDeTavyvmO8zg.jpg?width=216&crop=smart&auto=webp&s=23885ec95d05078aad06362cc0a03eee9aae91ad', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/fDfF63YBWoXt2zAqsUO8LpBHRWcQrJDDeTavyvmO8zg.jpg?width=320&crop=smart&auto=webp&s=15ce92e4f505693fc70786b13cd299d97187ab7d', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/fDfF63YBWoXt2zAqsUO8LpBHRWcQrJDDeTavyvmO8zg.jpg?width=640&crop=smart&auto=webp&s=b85f83f9755d98ee4b55e7659d0a739c92ea7175', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/fDfF63YBWoXt2zAqsUO8LpBHRWcQrJDDeTavyvmO8zg.jpg?width=960&crop=smart&auto=webp&s=b8ad96fe1cd1f241658c865010e530d5f42bb123', 'width': 960}, {'height': 606, 'url': 'https://external-preview.redd.it/fDfF63YBWoXt2zAqsUO8LpBHRWcQrJDDeTavyvmO8zg.jpg?width=1080&crop=smart&auto=webp&s=ab0d9d100868861dea94c56c346ca240e6ab8871', 'width': 1080}], 'source': {'height': 1079, 'url': 'https://external-preview.redd.it/fDfF63YBWoXt2zAqsUO8LpBHRWcQrJDDeTavyvmO8zg.jpg?auto=webp&s=a17def3003f9e39143913af5cef754452541d7e3', 'width': 1920}, 'variants': {}}]} |
NVIDIA compares FP8 on 4090 to FP4 on 5090. Seems a little misleading | 452 | 2025-01-07T04:44:52 | The-Communist-Cat | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvjnar | false | null | t3_1hvjnar | /r/LocalLLaMA/comments/1hvjnar/nvidia_compares_fp8_on_4090_to_fp4_on_5090_seems/ | false | false | 452 | {'enabled': True, 'images': [{'id': 'DZDb9J8OtakgGXLUkaREWeSixlp_xQQVSBvryFs7MGc', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/aj6qbvpl4ibe1.jpeg?width=108&crop=smart&auto=webp&s=7572e83c624c2a1f2f287a02f20de39534cfc6ac', 'width': 108}, {'height': 170, 'url': 'https://preview.redd.it/aj6qbvpl4ibe1.jpeg?width=216&crop=smart&auto=webp&s=6c08101d10bf328b0f9ba96aa84bdede5e76a693', 'width': 216}, {'height': 252, 'url': 'https://preview.redd.it/aj6qbvpl4ibe1.jpeg?width=320&crop=smart&auto=webp&s=6c4190add98e7c52acc1f0cd8fef8f7bcce9b8ad', 'width': 320}, {'height': 505, 'url': 'https://preview.redd.it/aj6qbvpl4ibe1.jpeg?width=640&crop=smart&auto=webp&s=cef0e16078387e249b6036bc1e225dd5ecd0d791', 'width': 640}, {'height': 757, 'url': 'https://preview.redd.it/aj6qbvpl4ibe1.jpeg?width=960&crop=smart&auto=webp&s=9c7852ecbc642b4001489118f5aff71709c7fdda', 'width': 960}, {'height': 852, 'url': 'https://preview.redd.it/aj6qbvpl4ibe1.jpeg?width=1080&crop=smart&auto=webp&s=bceeeae66f972ad33f465439328b5dd2cae11a08', 'width': 1080}], 'source': {'height': 952, 'url': 'https://preview.redd.it/aj6qbvpl4ibe1.jpeg?auto=webp&s=35f868f8cffb72d7db54c32e183872271cf5665e', 'width': 1206}, 'variants': {}}]} |
|||
$1999 for 5090 is excellent price | 0 | I wish it was cheaper, but it's excellent.
1. That's what 4090s are going for. This will hopefully put downward pressure on the used GPU market.
2. A6000s are going for $3,500-$4,000 for 48GB. To get 96GB you can buy 2 A6000s for $7,000-$8,000, or 3 5090s for $6,000. Obviously the 5090 is the better buy: with the $1,000 you save, you can buy a beefy PSU and still have money left over. Furthermore, the 5090 will crush the A6000 in performance.
The challenge will be getting 1 or 2 for $2000 before the demand drives the price up to $3000 | 2025-01-07T04:52:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hvjs4d/1999_for_5090_is_excellent_price/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvjs4d | false | null | t3_1hvjs4d | /r/LocalLLaMA/comments/1hvjs4d/1999_for_5090_is_excellent_price/ | false | false | self | 0 | null |
New Mini-PC (NUC) for AI | 7 | This got buried by the Nvidia DIGITS, but seems much more consumer-focused
https://www.msi.com/news/detail/MSI-Cubi-NUC-AI-Series-Unveiled-at-CES-2025--Mini-Powerhouses-with-AI-Integration-145184 | 2025-01-07T05:01:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hvjyi2/new_minipc_nuc_for_ai/ | bdizzle146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvjyi2 | false | null | t3_1hvjyi2 | /r/LocalLLaMA/comments/1hvjyi2/new_minipc_nuc_for_ai/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'SNaN11LQ3oY3Eg1UmM8ZnGhZ9xNj4071nRoceTextfM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/hiVABlvu1FADk80ZJdyUriv2ZdYr9nCtzyZT3suEbW0.jpg?width=108&crop=smart&auto=webp&s=6d66b19ebc160ddb1287c046e7dc2099e9edca13', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/hiVABlvu1FADk80ZJdyUriv2ZdYr9nCtzyZT3suEbW0.jpg?width=216&crop=smart&auto=webp&s=f53ec9657bab3c03c6448c5bfb562f234a4a1ea5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/hiVABlvu1FADk80ZJdyUriv2ZdYr9nCtzyZT3suEbW0.jpg?width=320&crop=smart&auto=webp&s=8164e2a0a1fc2afbca55ddfb9d0e8b7cebe5c674', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/hiVABlvu1FADk80ZJdyUriv2ZdYr9nCtzyZT3suEbW0.jpg?width=640&crop=smart&auto=webp&s=76f32d2cfe7b556ec52a3f75686db9d503fcb5af', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/hiVABlvu1FADk80ZJdyUriv2ZdYr9nCtzyZT3suEbW0.jpg?width=960&crop=smart&auto=webp&s=90931b99652398ea885fa6f99d43ee5479ab33ec', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/hiVABlvu1FADk80ZJdyUriv2ZdYr9nCtzyZT3suEbW0.jpg?width=1080&crop=smart&auto=webp&s=0e3d7c2b25c829540c4745b47184aa79acb214d1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/hiVABlvu1FADk80ZJdyUriv2ZdYr9nCtzyZT3suEbW0.jpg?auto=webp&s=e75779b5a107f8c4bd51c56448dedca483a816a5', 'width': 1200}, 'variants': {}}]} |
Learnings from building a coding agent on Llama from scratch - 5% on SWE bench lite | 1 | [removed] | 2025-01-07T05:03:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hvjzq5/learnings_from_building_a_coding_agent_on_llama/ | Flimsy_Menu1521 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvjzq5 | false | null | t3_1hvjzq5 | /r/LocalLLaMA/comments/1hvjzq5/learnings_from_building_a_coding_agent_on_llama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'NBguLJqmj6JF6q9bO8LUUuxRs9EOpBw1rYzt-r6_p8g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NjlWxFc3JtTYooTvBGymghOrJjVdXcw8qLCbNiUGgqc.jpg?width=108&crop=smart&auto=webp&s=4cab041d5211172f585d5eaef7435b7c97fb5d8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NjlWxFc3JtTYooTvBGymghOrJjVdXcw8qLCbNiUGgqc.jpg?width=216&crop=smart&auto=webp&s=6186ae43af3ca053a6a1f421c297c6c0e4c4fd99', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NjlWxFc3JtTYooTvBGymghOrJjVdXcw8qLCbNiUGgqc.jpg?width=320&crop=smart&auto=webp&s=0c1c547b1c043fa4bab64b78ecff7b7adfafc3f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NjlWxFc3JtTYooTvBGymghOrJjVdXcw8qLCbNiUGgqc.jpg?width=640&crop=smart&auto=webp&s=98af5513e3c7cd127f8f3d6e818486c6ff9ec7bd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NjlWxFc3JtTYooTvBGymghOrJjVdXcw8qLCbNiUGgqc.jpg?width=960&crop=smart&auto=webp&s=76489b2ea1f874434b346392d9ec9d04bca91d3f', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/NjlWxFc3JtTYooTvBGymghOrJjVdXcw8qLCbNiUGgqc.jpg?auto=webp&s=49759c054794ebb2b9f32da408136efc6b49f7d3', 'width': 1000}, 'variants': {}}]} |
Learnings from building a coding agent from scratch | 5% on SWE Bench Lite | 1 | [removed] | 2025-01-07T05:14:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hvk6dw/learnings_from_building_a_coding_agent_from/ | Next_Difference_8706 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvk6dw | false | null | t3_1hvk6dw | /r/LocalLLaMA/comments/1hvk6dw/learnings_from_building_a_coding_agent_from/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'NBguLJqmj6JF6q9bO8LUUuxRs9EOpBw1rYzt-r6_p8g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NjlWxFc3JtTYooTvBGymghOrJjVdXcw8qLCbNiUGgqc.jpg?width=108&crop=smart&auto=webp&s=4cab041d5211172f585d5eaef7435b7c97fb5d8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NjlWxFc3JtTYooTvBGymghOrJjVdXcw8qLCbNiUGgqc.jpg?width=216&crop=smart&auto=webp&s=6186ae43af3ca053a6a1f421c297c6c0e4c4fd99', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NjlWxFc3JtTYooTvBGymghOrJjVdXcw8qLCbNiUGgqc.jpg?width=320&crop=smart&auto=webp&s=0c1c547b1c043fa4bab64b78ecff7b7adfafc3f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NjlWxFc3JtTYooTvBGymghOrJjVdXcw8qLCbNiUGgqc.jpg?width=640&crop=smart&auto=webp&s=98af5513e3c7cd127f8f3d6e818486c6ff9ec7bd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NjlWxFc3JtTYooTvBGymghOrJjVdXcw8qLCbNiUGgqc.jpg?width=960&crop=smart&auto=webp&s=76489b2ea1f874434b346392d9ec9d04bca91d3f', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/NjlWxFc3JtTYooTvBGymghOrJjVdXcw8qLCbNiUGgqc.jpg?auto=webp&s=49759c054794ebb2b9f32da408136efc6b49f7d3', 'width': 1000}, 'variants': {}}]} |
2 x 5080 vs 5090 | 1 | Given that the price of a 5080 is exactly half of that of a 5090. Which do you think would be a better buy between two 5080’s and a single 5090? | 2025-01-07T05:24:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hvkc6l/2_x_5080_vs_5090/ | codingraptor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvkc6l | false | null | t3_1hvkc6l | /r/LocalLLaMA/comments/1hvkc6l/2_x_5080_vs_5090/ | false | false | self | 1 | null |
Is NVIDIA charging people 3k+ for a 5070 with 128 gigs of VRAM? | 180 | Are there any alternatives? | 2025-01-07T05:40:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hvklij/is_nvidia_charging_people_3k_for_a_5070_with_128/ | Ill_Distribution8517 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvklij | false | null | t3_1hvklij | /r/LocalLLaMA/comments/1hvklij/is_nvidia_charging_people_3k_for_a_5070_with_128/ | false | false | self | 180 | null |
5090 vs project Digit | 1 | [removed] | 2025-01-07T05:50:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hvkrkh/5090_vs_project_digit/ | mug2432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvkrkh | false | null | t3_1hvkrkh | /r/LocalLLaMA/comments/1hvkrkh/5090_vs_project_digit/ | false | false | self | 1 | null |
To understand the Project DIGITS desktop (128 GB for 3k), look at the existing Grace CPU systems | 219 | There seems to be a lot of confusion about how Nvidia can sell the 5090 with only 32GB of VRAM while their Project Digits desktop has 128 GB of VRAM.
Typical desktop GPUs have GDDR which is faster, and server GPUs have HBM which is even faster than that, but the Grace CPUs use LPDDR (https://www.nvidia.com/en-us/data-center/grace-cpu/), which is generally cheaper but slower.
For example, the H200 GPU by itself only has 96/144GB of HBM, but the Grace-Hopper Superchip (GH200) adds in an additional 480 GB of LPDDR.
The GPU's memory bandwidth to this LPDDR is also quite fast! For example, the GH200 HBM bandwidth is 4.9 TB/s, but the bandwidth from the CPU to the GPU and from the RAM to the CPU are both still around 500 GB/s.
It's a bit harder to predict what's going on with the GB10 Superchip in Project Digits, since unlike the GH200 superchips it doesn't have any HBM (and it only has 20 cores). But if you look at the Grace CPU C1 chip (https://resources.nvidia.com/en-us-grace-cpu/data-center-datasheet?ncid=no-ncid), there's a configuration with 120 GB of LPDDR RAM + 512 GB/s of memory bandwidth. And the NVLink C2C bandwidth has a 450GB/s unidirectional bandwidth to the GPU.
TL;DR: Pure speculation, but it's possible that the Project Digits desktop will come in at around 500 GB/s memory-bandwidth, which would be quite good! Good for ~7 tok/s for Llama-70B at 8-bits. | 2025-01-07T06:26:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hvlbow/to_understand_the_project_digits_desktop_128_gb/ | programmerChilli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvlbow | false | null | t3_1hvlbow | /r/LocalLLaMA/comments/1hvlbow/to_understand_the_project_digits_desktop_128_gb/ | false | false | self | 219 | {'enabled': False, 'images': [{'id': '_D4Q1FLeWOFqomN2ZG-KonEuBm9zAMqLs2z0t_rqtQ4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/arEQBKN0nWhrM011sjXbKNfOqSmObOsijHOmpd_e4B8.jpg?width=108&crop=smart&auto=webp&s=35d9b29abd0e9f40ee62376584d2453dfdd0b099', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/arEQBKN0nWhrM011sjXbKNfOqSmObOsijHOmpd_e4B8.jpg?width=216&crop=smart&auto=webp&s=c29a4fdc343522a32a313acbe99a3f2c13ca4ccb', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/arEQBKN0nWhrM011sjXbKNfOqSmObOsijHOmpd_e4B8.jpg?width=320&crop=smart&auto=webp&s=db9ffd39b6c239d815714d90629f0d8aaa4584c7', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/arEQBKN0nWhrM011sjXbKNfOqSmObOsijHOmpd_e4B8.jpg?width=640&crop=smart&auto=webp&s=a6f2c79e923502d68d9709e35e4286a3b3985ea1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/arEQBKN0nWhrM011sjXbKNfOqSmObOsijHOmpd_e4B8.jpg?width=960&crop=smart&auto=webp&s=f8ea69732788d4ab55a09d1c016ea0dc6f522f1e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/arEQBKN0nWhrM011sjXbKNfOqSmObOsijHOmpd_e4B8.jpg?width=1080&crop=smart&auto=webp&s=3192c574e3266a9fe1888d4f50c700a6052314ec', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/arEQBKN0nWhrM011sjXbKNfOqSmObOsijHOmpd_e4B8.jpg?auto=webp&s=a746265ab2d5cbf4b1abc9ef46e16a0297850ce9', 'width': 1200}, 'variants': {}}]} |
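The ~7 tok/s figure follows from the standard memory-bound estimate and is easy to sanity-check. A minimal sketch, assuming the speculated 500 GB/s figure and ignoring KV-cache and activation traffic:

```python
# Memory-bound decoding streams every active weight once per token,
# so tok/s ≈ memory_bandwidth / model_size_in_bytes.
bandwidth_gb_s = 500   # speculated Project Digits LPDDR bandwidth (not confirmed)
model_gb = 70 * 1.0    # Llama-70B at 8 bits ≈ 1 byte per parameter
print(bandwidth_gb_s / model_gb)  # ≈ 7.1 tok/s, matching the estimate above
```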
How to make qwen SmallThinker uncensored | 7 | How can i make it uncensored using prompt or any other way? | 2025-01-07T06:29:41 | fasto13 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvld89 | false | null | t3_1hvld89 | /r/LocalLLaMA/comments/1hvld89/how_to_make_qwen_smallthinker_uncensored/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'n88DtHIkB7IBHa4N7yH3_Z_tiWj58eehwCYpPKGF7tY', 'resolutions': [{'height': 167, 'url': 'https://preview.redd.it/6xxhxeoanibe1.jpeg?width=108&crop=smart&auto=webp&s=109125c5be26be6e55501d237b0ca5370dbaf00d', 'width': 108}, {'height': 334, 'url': 'https://preview.redd.it/6xxhxeoanibe1.jpeg?width=216&crop=smart&auto=webp&s=e02bac6f52dc7c07de5c4b6951f87e865756a334', 'width': 216}, {'height': 495, 'url': 'https://preview.redd.it/6xxhxeoanibe1.jpeg?width=320&crop=smart&auto=webp&s=6c2294c847fb5e45c1fd2c1a6f455b0a5b78722f', 'width': 320}, {'height': 991, 'url': 'https://preview.redd.it/6xxhxeoanibe1.jpeg?width=640&crop=smart&auto=webp&s=364c81b67edfce9f1af02e32a984ef907b910dc1', 'width': 640}, {'height': 1487, 'url': 'https://preview.redd.it/6xxhxeoanibe1.jpeg?width=960&crop=smart&auto=webp&s=d212fb09195b17dc7949ee5d579ad35ccf74df04', 'width': 960}, {'height': 1673, 'url': 'https://preview.redd.it/6xxhxeoanibe1.jpeg?width=1080&crop=smart&auto=webp&s=c80862af6979a52dc0520fb24f44ff78dd92b70b', 'width': 1080}], 'source': {'height': 1988, 'url': 'https://preview.redd.it/6xxhxeoanibe1.jpeg?auto=webp&s=fcc0ac1ed9114aa4a9852dbc8a8bb2725cd528f4', 'width': 1283}, 'variants': {}}]} |
||
Comparison of Ollama and vLLM IMO, what do you think about them? | 1 | [removed] | 2025-01-07T06:48:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hvln9t/comparison_of_ollama_and_vllm_imo_what_do_you/ | Careless_Corgi_7164 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvln9t | false | null | t3_1hvln9t | /r/LocalLLaMA/comments/1hvln9t/comparison_of_ollama_and_vllm_imo_what_do_you/ | false | false | self | 1 | null |
Wow for 3000$ "supercomputer" for Individuals | 1 | [removed] | 2025-01-07T06:58:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hvls9p/wow_for_3000_supercomputer_for_individuals/ | girishsk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvls9p | false | null | t3_1hvls9p | /r/LocalLLaMA/comments/1hvls9p/wow_for_3000_supercomputer_for_individuals/ | false | false | self | 1 | null |
Cosmos World Foundation Models - Nvidia | 10 | Collection: [https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6)
Summary by Vaibhav (VB) Srivastav ( [https://x.com/reach\_vb/status/1876516562309624054](https://x.com/reach_vb/status/1876516562309624054) )
\> Cosmos Autoregressive 4B & 12B
\- Given a 9-frame input video, predicts the future 24 frames
\- Given an image as the first frame, predicts the future 32 frames
\> Cosmos Autoregressive 5B & 13B Video2World
\- Given text description and a 9-frame input video, predicts the future 24 frames
\- Given text description and an image as the first frame, predicts the future 32 frames
\> Cosmos Diffusion 7B & 14B Text2World
\- Given a text description, predict an output video of 121 frames.
\> Cosmos Diffusion 7B & 14B Video2World
\- Given a text description and an image as the first frame, predict the future 120 frames
TechCrunch: Nvidia releases its own brand of world models: [https://techcrunch.com/2025/01/06/nvidia-releases-its-own-brand-of-world-models/](https://techcrunch.com/2025/01/06/nvidia-releases-its-own-brand-of-world-models/) | 2025-01-07T07:12:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hvlzkx/cosmos_world_foundation_models_nvidia/ | Nunki08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvlzkx | false | null | t3_1hvlzkx | /r/LocalLLaMA/comments/1hvlzkx/cosmos_world_foundation_models_nvidia/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'uZwaqR5RcdHUHR0_4oIx98tcSVgQCWqYoVVWXe_y33M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XQXReD3VcdH3KZUrewF2xLKNGlvPrybNlwI8xyhA9wU.jpg?width=108&crop=smart&auto=webp&s=a8ed0c6d95df6604122b08f79a0f76cf63294c2e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XQXReD3VcdH3KZUrewF2xLKNGlvPrybNlwI8xyhA9wU.jpg?width=216&crop=smart&auto=webp&s=68f2e72a6e7008ad9e45fd102f26953ec085422e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XQXReD3VcdH3KZUrewF2xLKNGlvPrybNlwI8xyhA9wU.jpg?width=320&crop=smart&auto=webp&s=ead226d4298c1d1d23c7866b32fa142ba21e16f7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XQXReD3VcdH3KZUrewF2xLKNGlvPrybNlwI8xyhA9wU.jpg?width=640&crop=smart&auto=webp&s=f2206b743441692595c2e671bc6fad5aea1827ca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XQXReD3VcdH3KZUrewF2xLKNGlvPrybNlwI8xyhA9wU.jpg?width=960&crop=smart&auto=webp&s=aee1b6929d0c6731f23448a684aa7574ec0a2188', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XQXReD3VcdH3KZUrewF2xLKNGlvPrybNlwI8xyhA9wU.jpg?width=1080&crop=smart&auto=webp&s=68e1e353b524f9ec4a66fb4709ef61499e8e82d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XQXReD3VcdH3KZUrewF2xLKNGlvPrybNlwI8xyhA9wU.jpg?auto=webp&s=d0aeb96869eb7b245405519b049cad7d1c89a7fa', 'width': 1200}, 'variants': {}}]} |
Anyone know how deepseek v3 is so good and so cheap? | 0 | Are they doing anything clever on top of distillation? | 2025-01-07T07:15:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hvm1bo/anyone_know_how_deepseek_v3_is_so_good_and_so/ | oana77oo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvm1bo | false | null | t3_1hvm1bo | /r/LocalLLaMA/comments/1hvm1bo/anyone_know_how_deepseek_v3_is_so_good_and_so/ | false | false | self | 0 | null |
Should I wait for RTX 50 series laptops to come out before buying? | 1 | [removed] | 2025-01-07T07:27:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hvm6pc/should_i_wait_for_rtx_50_series_laptops_to_come/ | Soap_n_Duck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvm6pc | false | null | t3_1hvm6pc | /r/LocalLLaMA/comments/1hvm6pc/should_i_wait_for_rtx_50_series_laptops_to_come/ | false | false | self | 1 | null |
Should I wait for RTX 50 series laptops to come out before buying? | 1 | [removed] | 2025-01-07T07:36:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hvmber/should_i_wait_for_rtx_50_series_laptops_to_come/ | Soap_n_Duck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvmber | false | null | t3_1hvmber | /r/LocalLLaMA/comments/1hvmber/should_i_wait_for_rtx_50_series_laptops_to_come/ | false | false | self | 1 | null |
Why u think 4 bit quant not fuk?? | 0 | If u think this no high qualities fuck u fo one it not lime u no how it all goes! U fuk!
This is heretik u fuk
No nonsense
Why u no fukkkk?
1 2 un
Buckle
.mah
Shu | 2025-01-07T07:40:01 | AnhedoniaJack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvmcvm | false | null | t3_1hvmcvm | /r/LocalLLaMA/comments/1hvmcvm/why_u_think_4_bit_quant_not_fuk/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'xpw8cpT2QHSlTkA0h4tflCQP8rlYesqgsVFjctkHINE', 'resolutions': [{'height': 201, 'url': 'https://preview.redd.it/7mwkh8suzibe1.jpeg?width=108&crop=smart&auto=webp&s=aecf19f5174495b6aecbebb1fa62424f44437964', 'width': 108}, {'height': 402, 'url': 'https://preview.redd.it/7mwkh8suzibe1.jpeg?width=216&crop=smart&auto=webp&s=23ae741934def0cf41fe5e646396d2c83afb2140', 'width': 216}, {'height': 595, 'url': 'https://preview.redd.it/7mwkh8suzibe1.jpeg?width=320&crop=smart&auto=webp&s=156bb4031554b066fb45cf3d892a114b2fcabe3d', 'width': 320}, {'height': 1191, 'url': 'https://preview.redd.it/7mwkh8suzibe1.jpeg?width=640&crop=smart&auto=webp&s=6839367e73a788cc48c4580fd90e7ae7136345cb', 'width': 640}, {'height': 1787, 'url': 'https://preview.redd.it/7mwkh8suzibe1.jpeg?width=960&crop=smart&auto=webp&s=49bcba5e8ca423e81f0ea6fa62a1f012b8700c72', 'width': 960}], 'source': {'height': 1940, 'url': 'https://preview.redd.it/7mwkh8suzibe1.jpeg?auto=webp&s=9b9a8711e83bb2ab90402c5c7d7ccb4bec730cbb', 'width': 1042}, 'variants': {}}]} |
||
Should I wait for RTX 50 series laptops to come out before buying? | 1 | [removed] | 2025-01-07T07:44:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hvmeul/should_i_wait_for_rtx_50_series_laptops_to_come/ | Soap_n_Duck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvmeul | false | null | t3_1hvmeul | /r/LocalLLaMA/comments/1hvmeul/should_i_wait_for_rtx_50_series_laptops_to_come/ | false | false | self | 1 | null |
Deepseek v3 now on together ai at higher pricing | 14 | They added it today:
https://www.together.ai/pricing
It’s $1.25 per million tokens, whereas the DeepSeek team’s own API costs $0.14-$0.28 per million tokens.
Pity it’s priced so much higher. Anyone know of privacy friendly alternatives? | 2025-01-07T08:07:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hvmq1q/deepseek_v3_now_on_together_ai_at_higher_pricing/ | elie2222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvmq1q | false | null | t3_1hvmq1q | /r/LocalLLaMA/comments/1hvmq1q/deepseek_v3_now_on_together_ai_at_higher_pricing/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'mqtuPhG9RGNHEnEzCdHtHrjaiK1LOHw0NBapb49h7nY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/pvKaqJS8kk4eD7t60IDPPRf0pJp-x3fRV0ufPOvfOJo.jpg?width=108&crop=smart&auto=webp&s=fcc8f95280e2a8fe4977f53fb597f6586b142ae4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/pvKaqJS8kk4eD7t60IDPPRf0pJp-x3fRV0ufPOvfOJo.jpg?width=216&crop=smart&auto=webp&s=4bf6fb38c1eb4ceb40ce34befbd687580943ed18', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/pvKaqJS8kk4eD7t60IDPPRf0pJp-x3fRV0ufPOvfOJo.jpg?width=320&crop=smart&auto=webp&s=33f6fa6b7a5ab7725951294fdc3b83ef4134ea3c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/pvKaqJS8kk4eD7t60IDPPRf0pJp-x3fRV0ufPOvfOJo.jpg?width=640&crop=smart&auto=webp&s=6bbb15539fbb46305fb136b2b73be5f1350272ab', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/pvKaqJS8kk4eD7t60IDPPRf0pJp-x3fRV0ufPOvfOJo.jpg?width=960&crop=smart&auto=webp&s=40d0ece1f08a863a72b5eb6ea34b4fd94b115e36', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/pvKaqJS8kk4eD7t60IDPPRf0pJp-x3fRV0ufPOvfOJo.jpg?width=1080&crop=smart&auto=webp&s=e1dc23d815bf0a32f8fd86342d300e9e91b6b36b', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/pvKaqJS8kk4eD7t60IDPPRf0pJp-x3fRV0ufPOvfOJo.jpg?auto=webp&s=a906b5eb2fbab9d89685a6416e911c75b721be74', 'width': 2400}, 'variants': {}}]} |
Conceptually Possible Proto Level 3 AI Agent | 7 | The future of AI is rooted in mixture-of-experts computer-use agents trained on GUI task-based datasets across a variety of domains, mainly because without task-based reference, LLMs easily get lost when trying to complete tasks. Attention is all you need, and this can be improved by training on large amounts of task-completion context. These digital entities must have tool use, a hierarchical memory system that allows them to learn, and a polysynthetic interaction language that increases efficiency by representing tasks as compact symbols rather than overly verbose English.
The GitHub repos below show that such a system is already somewhat possible as of this moment (a minimal sketch of the agent loop follows the list):
[https://github.com/niuzaisheng/ScreenAgent](https://github.com/niuzaisheng/ScreenAgent)
[https://github.com/caspianmoon/memoripy](https://github.com/caspianmoon/memoripy)
[https://github.com/microsoft/autogen/tree/main/python/packages/autogen-magentic-one](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-magentic-one)
[https://gui-world.github.io/](https://gui-world.github.io/)
[https://github.com/ruvnet/SynthLang](https://github.com/ruvnet/SynthLang) | 2025-01-07T08:24:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hvmxxk/conceptually_possible_proto_level_3_ai_agent/ | royalsail321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvmxxk | false | null | t3_1hvmxxk | /r/LocalLLaMA/comments/1hvmxxk/conceptually_possible_proto_level_3_ai_agent/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'A41T7mQcFLn9brxP1N4SZqJXqohDYPeqaxVtQ5HohwQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bLatCGP-H7eQzlZS13tMCf35UKUEl9-6A5EL9MKnNS0.jpg?width=108&crop=smart&auto=webp&s=c8fbe171b3bacbf236e4bfdf1b97c5e687895176', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bLatCGP-H7eQzlZS13tMCf35UKUEl9-6A5EL9MKnNS0.jpg?width=216&crop=smart&auto=webp&s=a0a470d8646e1395207c13748f4ef297e8c1c29d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bLatCGP-H7eQzlZS13tMCf35UKUEl9-6A5EL9MKnNS0.jpg?width=320&crop=smart&auto=webp&s=8e1d945f4e39c5bd655f2506a50ed8ae804ed162', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bLatCGP-H7eQzlZS13tMCf35UKUEl9-6A5EL9MKnNS0.jpg?width=640&crop=smart&auto=webp&s=ba644fafb72303c94e6622a6b2fb9e963b6a4971', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bLatCGP-H7eQzlZS13tMCf35UKUEl9-6A5EL9MKnNS0.jpg?width=960&crop=smart&auto=webp&s=9a36ef620bf389b2f1a31730f9fe0ace4c961f45', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bLatCGP-H7eQzlZS13tMCf35UKUEl9-6A5EL9MKnNS0.jpg?width=1080&crop=smart&auto=webp&s=70278dd1df584bae60063f73ea1b7a84a88eda77', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bLatCGP-H7eQzlZS13tMCf35UKUEl9-6A5EL9MKnNS0.jpg?auto=webp&s=315172da168a30fcb37b37d633c398de97cbb92e', 'width': 1200}, 'variants': {}}]} |
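As referenced above, here is a conceptual skeleton of that loop. Every class and method name is hypothetical and not taken from the linked repos; it only illustrates the retrieve-act-store shape of the architecture.

```python
# Conceptual skeleton: retrieve task-relevant memories, act through tools,
# store outcomes back into hierarchical memory. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class HierarchicalMemory:
    short_term: list = field(default_factory=list)
    long_term: dict = field(default_factory=dict)

    def recall(self, task: str) -> list:
        # Naive retrieval: long-term entries whose stored key occurs in the task.
        return [v for k, v in self.long_term.items() if k in task]

    def store(self, task: str, outcome: str) -> None:
        self.short_term.append((task, outcome))
        if len(self.short_term) > 100:      # consolidate the oldest episode
            k, v = self.short_term.pop(0)
            self.long_term[k] = v

def agent_step(task: str, memory: HierarchicalMemory, tools: dict) -> str:
    context = memory.recall(task)
    # A planner LLM would pick the tool based on `context`; hardcoded here.
    outcome = tools["gui_click"](task, context)
    memory.store(task, outcome)
    return outcome

if __name__ == "__main__":
    mem = HierarchicalMemory()
    tools = {"gui_click": lambda task, ctx: f"clicked through: {task}"}
    print(agent_step("open display settings", mem, tools))
```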
What are you learning in 2025? What should Hugging Face add to smol course? | 1 | [removed] | 2025-01-07T08:59:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hvnea6/what_are_you_learning_in_2025_what_should_hugging/ | bburtenshaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvnea6 | false | null | t3_1hvnea6 | /r/LocalLLaMA/comments/1hvnea6/what_are_you_learning_in_2025_what_should_hugging/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '15LydzUV9n-yeJZb963b14EDGBoMgh6l8Wylhna4sqM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=108&crop=smart&auto=webp&s=c1c9de6c4035a1bc92ce17e258f37eed29738c8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=216&crop=smart&auto=webp&s=858487aac1a34176f724e1fb3ea96d8b5dc76e8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=320&crop=smart&auto=webp&s=777c38bdc34e4f3ba9215313c6b01c13e1f3279c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=640&crop=smart&auto=webp&s=6b1a5da5dfd00d3db485bf83301709ac5df764e0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=960&crop=smart&auto=webp&s=ebd7f396528a08dc101ea40aca391c8468f74f74', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=1080&crop=smart&auto=webp&s=b86cd1e665425e68f2db5fbc9ebfd9be45a874df', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?auto=webp&s=c5162034dbc1954b69444156b6d27c4681dc5571', 'width': 1200}, 'variants': {}}]} |
Any joy with continue.dev? | 18 | I have been using [continue.dev](http://continue.dev) with local models for a few days now. It is super snappy, but code tab-completions are dismal. Useless. Total crap. Negative value. I followed all the official docs' recommendations.
Do you have any luck with it? Can you post your config? | 2025-01-07T09:06:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hvnhdt/any_joy_with_continuedev/ | theboldestgaze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvnhdt | false | null | t3_1hvnhdt | /r/LocalLLaMA/comments/1hvnhdt/any_joy_with_continuedev/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'JoLAbcgPAn_D7ExuVvyaNJpSY81e3Jca27FTj1G8-xQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=108&crop=smart&auto=webp&s=b6c70517bb80bca66bf94d99af93ec23982e2986', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=216&crop=smart&auto=webp&s=146011169cd4033ebcd4b883efc62f0bd345d74b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=320&crop=smart&auto=webp&s=7a560fe31ff4e8b423a9029c052df232e0365572', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=640&crop=smart&auto=webp&s=ea9ff85c4782247e303164d9d75b4071d789f397', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=960&crop=smart&auto=webp&s=81aa9753e911761e0c56b3b897ba0f44cafff21d', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=1080&crop=smart&auto=webp&s=a67fd0983e228aa2fa0a2ba466c071793fe21afc', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?auto=webp&s=92948afd26cc637bb25c79223a1b99b3ecbbbfa2', 'width': 2401}, 'variants': {}}]} |
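For comparison, a minimal `config.json` sketch for local tab-autocomplete over Ollama; the model choice and option values are illustrative, not a verified config, and worth checking against the current continue.dev docs:

```json
{
  "tabAutocompleteModel": {
    "title": "Qwen2.5-Coder 1.5B",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b-base"
  },
  "tabAutocompleteOptions": {
    "maxPromptTokens": 1024,
    "debounceDelay": 250
  }
}
```

Small base (non-instruct) models trained for fill-in-the-middle are generally the better fit for tab-completion; pointing the autocomplete slot at a chat-tuned model is a common cause of dismal results.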
LLM/AI/ML models to assess english skill level by recording | 3 | Hi, can anyone recommend reliable LLM/AI/ML models to integrate into my app to assess English skill level from a recording of their audio and how they speak? | 2025-01-07T09:34:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hvnunq/llmaiml_models_to_assess_english_skill_level_by/ | abdrhxyii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvnunq | false | null | t3_1hvnunq | /r/LocalLLaMA/comments/1hvnunq/llmaiml_models_to_assess_english_skill_level_by/ | false | false | self | 3 | null |
Asking the creator of Apollo AI if he has tried Apollo AI: | 1 | 2025-01-07T09:35:06 | eternviking | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvnut9 | false | null | t3_1hvnut9 | /r/LocalLLaMA/comments/1hvnut9/asking_the_creator_of_apollo_ai_if_he_has_tried/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'qSWdVoMuWwJ9GJe4XGNpy_nmCpc3KReRK0P1hWm2fLM', 'resolutions': [{'height': 35, 'url': 'https://preview.redd.it/ptazhh05kjbe1.png?width=108&crop=smart&auto=webp&s=59fa08d5121cf2904594a3a5664a2ed11be0966f', 'width': 108}, {'height': 70, 'url': 'https://preview.redd.it/ptazhh05kjbe1.png?width=216&crop=smart&auto=webp&s=4b1a99cf312d853ac1451b282c6417cdb22adfa8', 'width': 216}, {'height': 104, 'url': 'https://preview.redd.it/ptazhh05kjbe1.png?width=320&crop=smart&auto=webp&s=59260f98db13d8b6831283627272fb7aabbb90e1', 'width': 320}, {'height': 209, 'url': 'https://preview.redd.it/ptazhh05kjbe1.png?width=640&crop=smart&auto=webp&s=43b788c76024671dc41744609e77889e50dc2174', 'width': 640}, {'height': 313, 'url': 'https://preview.redd.it/ptazhh05kjbe1.png?width=960&crop=smart&auto=webp&s=5d4cbc75deb450977491f187eff01b1b812df088', 'width': 960}], 'source': {'height': 348, 'url': 'https://preview.redd.it/ptazhh05kjbe1.png?auto=webp&s=8e9645a13bfcc0c9b9c369aa612fbaff75136495', 'width': 1064}, 'variants': {}}]} |
|||
What are you learning in 2025? What should Hugging Face add to smol course? | 1 | [removed] | 2025-01-07T09:38:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hvnwgx/what_are_you_learning_in_2025_what_should_hugging/ | bburtenshaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvnwgx | false | null | t3_1hvnwgx | /r/LocalLLaMA/comments/1hvnwgx/what_are_you_learning_in_2025_what_should_hugging/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '15LydzUV9n-yeJZb963b14EDGBoMgh6l8Wylhna4sqM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=108&crop=smart&auto=webp&s=c1c9de6c4035a1bc92ce17e258f37eed29738c8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=216&crop=smart&auto=webp&s=858487aac1a34176f724e1fb3ea96d8b5dc76e8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=320&crop=smart&auto=webp&s=777c38bdc34e4f3ba9215313c6b01c13e1f3279c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=640&crop=smart&auto=webp&s=6b1a5da5dfd00d3db485bf83301709ac5df764e0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=960&crop=smart&auto=webp&s=ebd7f396528a08dc101ea40aca391c8468f74f74', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=1080&crop=smart&auto=webp&s=b86cd1e665425e68f2db5fbc9ebfd9be45a874df', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?auto=webp&s=c5162034dbc1954b69444156b6d27c4681dc5571', 'width': 1200}, 'variants': {}}]} |
Nvidia Cosmos - World Models | 9 | https://reddit.com/link/1hvog5n/video/75obfju7sjbe1/player
Cosmos is a world model development platform that consists of world foundation models, tokenizers, and a video processing pipeline to accelerate the development of Physical AI at robotics & AV labs. Cosmos is purpose-built for physical AI. The Cosmos repository enables end users to run the Cosmos models, run inference scripts, and generate videos.
GitHub: [https://github.com/NVIDIA/Cosmos](https://github.com/NVIDIA/Cosmos)
Model Cards: [https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6)
License: Apache-2.0
|Model name|Description|Try it out|
|:-|:-|:-|
|[Cosmos-1.0-Diffusion-7B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Text2World)|Text to visual world generation|[Inference](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/README.md)|
|[Cosmos-1.0-Diffusion-14B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Text2World)|Text to visual world generation|[Inference](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/README.md)|
|[Cosmos-1.0-Diffusion-7B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Video2World)|Video + Text based future visual world generation|[Inference](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/README.md)|
|[Cosmos-1.0-Diffusion-14B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Video2World)|Video + Text based future visual world generation|[Inference](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/README.md)|
|[Cosmos-1.0-Autoregressive-4B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-4B)|Future visual world generation|[Inference](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/README.md)|
|[Cosmos-1.0-Autoregressive-12B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-12B)|Future visual world generation|[Inference](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/README.md)|
|[Cosmos-1.0-Autoregressive-5B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-5B-Video2World)|Video + Text based future visual world generation|[Inference](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/README.md)|
|[Cosmos-1.0-Autoregressive-13B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-13B-Video2World)|Video + Text based future visual world generation|[Inference](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/README.md)|
|[Cosmos-1.0-Guardrail](https://huggingface.co/nvidia/Cosmos-1.0-Guardrail)|Guardrail contains pre-Guard and post-Guard for safe use|Embedded in model inference scripts|
| 2025-01-07T10:21:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hvog5n/nvidia_cosmos_world_models/ | SignalCompetitive582 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvog5n | false | null | t3_1hvog5n | /r/LocalLLaMA/comments/1hvog5n/nvidia_cosmos_world_models/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'kJ-ZUojJcveImGgkrnbp8QfAvs2ppklFJhT_29XSvHY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fYvLK9RsRYOFm5ui76DjbSM1gc3Vg1GiWgCvSb3Ewc0.jpg?width=108&crop=smart&auto=webp&s=7a546b047c8debf83dd6f9defd5022de4c16e4a4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fYvLK9RsRYOFm5ui76DjbSM1gc3Vg1GiWgCvSb3Ewc0.jpg?width=216&crop=smart&auto=webp&s=2bed3c532a164108a7274788bb6989417bacf1d9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fYvLK9RsRYOFm5ui76DjbSM1gc3Vg1GiWgCvSb3Ewc0.jpg?width=320&crop=smart&auto=webp&s=8d0321befe0cafa88e03098c956c6c4411e2a7d7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fYvLK9RsRYOFm5ui76DjbSM1gc3Vg1GiWgCvSb3Ewc0.jpg?width=640&crop=smart&auto=webp&s=3f1fa504d9a9b40a5294ed711db29e9341301946', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fYvLK9RsRYOFm5ui76DjbSM1gc3Vg1GiWgCvSb3Ewc0.jpg?width=960&crop=smart&auto=webp&s=2bd75714f622169b5339f20d9c011ef2098fc6c3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fYvLK9RsRYOFm5ui76DjbSM1gc3Vg1GiWgCvSb3Ewc0.jpg?width=1080&crop=smart&auto=webp&s=e61d7a72984acca2ad31fe295b797435b783c833', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fYvLK9RsRYOFm5ui76DjbSM1gc3Vg1GiWgCvSb3Ewc0.jpg?auto=webp&s=212396770f631f39dba78d63a1d183569d0ae570', 'width': 1200}, 'variants': {}}]} |
|
Ray-Ban Meta Glasses | 1 | [removed] | 2025-01-07T10:27:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hvoj1o/rayban_meta_glasses/ | Alarmed-Instance5356 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvoj1o | false | null | t3_1hvoj1o | /r/LocalLLaMA/comments/1hvoj1o/rayban_meta_glasses/ | false | false | self | 1 | null |
About DeepSeek V3 privacy concern | 0 | DeepSeek-v3’s terms could be a trap, or maybe people just don't care or don't know.
DeepSeek-v3 has been the talk of AI news lately for being cheap and outperforming models like GPT-4o and Claude 3.5.
Here’s what they don’t want you to notice in their terms and conditions:
1. You’re Fully Responsible for Inputs & Outputs (Section 4.1)
If your data breaches any laws, it’s all on you.
You need legal rights to every piece of data you submit.
2. Shady Data Usage Policy (Section 4.2)
DeepSeek can use your Inputs and Outputs to "improve services."
No clear way to opt out, leaving your data vulnerable.
3. Strict Intellectual Property Rules (Section 5.1)
DeepSeek owns everything related to its content and software.
Even accidental misuse could land you in legal trouble.
The Bottom Line
DeepSeek-v3 might have impressive outputs, but its policies are risky and user-unfriendly.
Your privacy and legal safety shouldn’t come second. Not to mention that DeepSeek’s servers are in China.
Note 1: When you use IDEs or AI plugins with automatic permissions, you don't know what info is being sent to DeepSeek servers while the agents are automatically working on a task.
What do you guys think? | 2025-01-07T11:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hvp5z1/about_deepseek_v3_privacy_concern/ | valentino99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvp5z1 | false | null | t3_1hvp5z1 | /r/LocalLLaMA/comments/1hvp5z1/about_deepseek_v3_privacy_concern/ | false | false | self | 0 | null |
What are you learning in 2025? What should Hugging Face add to smol course? | 1 | [removed] | 2025-01-07T11:47:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hvpoce/what_are_you_learning_in_2025_what_should_hugging/ | bburtenshaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvpoce | false | null | t3_1hvpoce | /r/LocalLLaMA/comments/1hvpoce/what_are_you_learning_in_2025_what_should_hugging/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '15LydzUV9n-yeJZb963b14EDGBoMgh6l8Wylhna4sqM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=108&crop=smart&auto=webp&s=c1c9de6c4035a1bc92ce17e258f37eed29738c8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=216&crop=smart&auto=webp&s=858487aac1a34176f724e1fb3ea96d8b5dc76e8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=320&crop=smart&auto=webp&s=777c38bdc34e4f3ba9215313c6b01c13e1f3279c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=640&crop=smart&auto=webp&s=6b1a5da5dfd00d3db485bf83301709ac5df764e0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=960&crop=smart&auto=webp&s=ebd7f396528a08dc101ea40aca391c8468f74f74', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=1080&crop=smart&auto=webp&s=b86cd1e665425e68f2db5fbc9ebfd9be45a874df', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?auto=webp&s=c5162034dbc1954b69444156b6d27c4681dc5571', 'width': 1200}, 'variants': {}}]} |
What are you learning in 2025? What should Hugging Face add to smol course? | 1 | 2025-01-07T11:49:09 | bburtenshaw | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hvpp9z | false | null | t3_1hvpp9z | /r/LocalLLaMA/comments/1hvpp9z/what_are_you_learning_in_2025_what_should_hugging/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'CkhZQv-_eJ1Cqq1hnQTy3G1eTrTB1KcABKMox17jEaA', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/3bosc81a8kbe1.png?width=108&crop=smart&auto=webp&s=bcf8102c01461d0cd4062cb192cb712f72c73725', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/3bosc81a8kbe1.png?width=216&crop=smart&auto=webp&s=0a16e65b71438dc9a0322569bbd97905e27dc898', 'width': 216}, {'height': 140, 'url': 'https://preview.redd.it/3bosc81a8kbe1.png?width=320&crop=smart&auto=webp&s=bb9524f138b4e9d3bb9dc738181804f62851a154', 'width': 320}, {'height': 280, 'url': 'https://preview.redd.it/3bosc81a8kbe1.png?width=640&crop=smart&auto=webp&s=ae18a77d8d1bdfe7c9b0e17b7dd9209844ac0aaa', 'width': 640}, {'height': 420, 'url': 'https://preview.redd.it/3bosc81a8kbe1.png?width=960&crop=smart&auto=webp&s=b4000e3cda0c13629e2c931bdebed3153748f848', 'width': 960}, {'height': 473, 'url': 'https://preview.redd.it/3bosc81a8kbe1.png?width=1080&crop=smart&auto=webp&s=bcff2e4661a5bdf3c030652aefa13ffe7a159763', 'width': 1080}], 'source': {'height': 887, 'url': 'https://preview.redd.it/3bosc81a8kbe1.png?auto=webp&s=00b75dd7ae36e82dc3e888a166b21ad2ade1afa4', 'width': 2025}, 'variants': {}}]} |
|||
Llama 3.1 70b at over 1000 tok/second NVIDIA GH200 | 7 | Came across the following benchmark: https://www.substratus.ai/blog/benchmarking-llama-3.1-70b-on-gh200-vllm
They show the following results:
```
============ Serving Benchmark Result ============
Successful requests:                     1000
Benchmark duration (s):                  169.46
Total input tokens:                      232428
Total generated tokens:                  173225
Request throughput (req/s):              5.90
Output token throughput (tok/s):         1022.25
Total Token throughput (tok/s):          2393.86
---------------Time to First Token----------------
Mean TTFT (ms):                          34702.73
Median TTFT (ms):                        16933.34
P99 TTFT (ms):                           98404.82
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          164.05
Median TPOT (ms):                        116.97
P99 TPOT (ms):                           748.74
---------------Inter-token Latency----------------
Mean ITL (ms):                           112.34
Median ITL (ms):                         64.04
P99 ITL (ms):                            577.36
==================================================
```
With 240 GB of CPU offload and the context length increased to 120,000 tokens:
```
============ Serving Benchmark Result ============
Successful requests:                     1000
Benchmark duration (s):                  439.96
Total input tokens:                      232428
Total generated tokens:                  173173
Request throughput (req/s):              2.27
Output token throughput (tok/s):         393.61
Total Token throughput (tok/s):          921.91
---------------Time to First Token----------------
Mean TTFT (ms):                          23549.66
Median TTFT (ms):                        29330.18
P99 TTFT (ms):                           38782.32
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          700.44
Median TPOT (ms):                        379.39
P99 TPOT (ms):                           4710.12
---------------Inter-token Latency----------------
Mean ITL (ms):                           360.44
Median ITL (ms):                         305.04
P99 ITL (ms):                            560.50
==================================================
```
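A quick way to read these numbers; this is pure arithmetic on the first run above, not an extra measurement:

```python
# Aggregate throughput is high because many requests decode concurrently;
# each individual stream is much slower. Values taken from the first run.
mean_tpot_ms = 164.05                    # per-token latency of one stream
per_stream_tok_s = 1000 / mean_tpot_ms   # ≈ 6.1 tok/s for a single request
aggregate_tok_s = 1022.25                # reported output token throughput
in_flight = aggregate_tok_s / per_stream_tok_s
print(per_stream_tok_s, in_flight)       # ≈ 6.1 tok/s, ≈ 168 concurrent streams
```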
Has anyone done their own testing that can back these results up? Also, given the amount of unified memory and these results, I’d be curious whether decent speeds could be achieved with DeepSeek V3. | 2025-01-07T11:51:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hvpqrb/llama_31_70b_at_over_1000_toksecond_nvidia_gh200/ | Ok-Perception2973 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvpqrb | false | null | t3_1hvpqrb | /r/LocalLLaMA/comments/1hvpqrb/llama_31_70b_at_over_1000_toksecond_nvidia_gh200/ | false | false | self | 7 | null |
LLMs solving maths/physics problems | 1 | [removed] | 2025-01-07T11:55:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hvpsxe/llms_solving_mathsphysics_problems/ | Optimalutopic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvpsxe | false | null | t3_1hvpsxe | /r/LocalLLaMA/comments/1hvpsxe/llms_solving_mathsphysics_problems/ | false | false | self | 1 | null |
What are you learning in 2015? | 1 | [removed] | 2025-01-07T12:00:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hvpvrb/what_are_you_learning_in_2015/ | bburtenshaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvpvrb | false | null | t3_1hvpvrb | /r/LocalLLaMA/comments/1hvpvrb/what_are_you_learning_in_2015/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '15LydzUV9n-yeJZb963b14EDGBoMgh6l8Wylhna4sqM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=108&crop=smart&auto=webp&s=c1c9de6c4035a1bc92ce17e258f37eed29738c8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=216&crop=smart&auto=webp&s=858487aac1a34176f724e1fb3ea96d8b5dc76e8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=320&crop=smart&auto=webp&s=777c38bdc34e4f3ba9215313c6b01c13e1f3279c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=640&crop=smart&auto=webp&s=6b1a5da5dfd00d3db485bf83301709ac5df764e0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=960&crop=smart&auto=webp&s=ebd7f396528a08dc101ea40aca391c8468f74f74', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=1080&crop=smart&auto=webp&s=b86cd1e665425e68f2db5fbc9ebfd9be45a874df', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?auto=webp&s=c5162034dbc1954b69444156b6d27c4681dc5571', 'width': 1200}, 'variants': {}}]} |
What are you learning in 2025? What should Hugging Face add to smol course? | 1 | [removed] | 2025-01-07T12:07:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hvpzuc/what_are_you_learning_in_2025_what_should_hugging/ | bburtenshaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvpzuc | false | null | t3_1hvpzuc | /r/LocalLLaMA/comments/1hvpzuc/what_are_you_learning_in_2025_what_should_hugging/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '15LydzUV9n-yeJZb963b14EDGBoMgh6l8Wylhna4sqM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=108&crop=smart&auto=webp&s=c1c9de6c4035a1bc92ce17e258f37eed29738c8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=216&crop=smart&auto=webp&s=858487aac1a34176f724e1fb3ea96d8b5dc76e8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=320&crop=smart&auto=webp&s=777c38bdc34e4f3ba9215313c6b01c13e1f3279c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=640&crop=smart&auto=webp&s=6b1a5da5dfd00d3db485bf83301709ac5df764e0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=960&crop=smart&auto=webp&s=ebd7f396528a08dc101ea40aca391c8468f74f74', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?width=1080&crop=smart&auto=webp&s=b86cd1e665425e68f2db5fbc9ebfd9be45a874df', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e6_m_ztR6gjvySnOg_VpX5-iOavmPf6ut8kburpzgCE.jpg?auto=webp&s=c5162034dbc1954b69444156b6d27c4681dc5571', 'width': 1200}, 'variants': {}}]} |
5070ti with more TOPS than a 4090 | 3 | 2025-01-07T12:15:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hvq4ps/5070ti_with_more_tops_than_a_4090/ | ArtArtArt123456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvq4ps | false | null | t3_1hvq4ps | /r/LocalLLaMA/comments/1hvq4ps/5070ti_with_more_tops_than_a_4090/ | false | false | 3 | null |
B770 next week? | 9 | [https://youtu.be/8z9o2ltnFM0?t=513](https://youtu.be/8z9o2ltnFM0?t=513) Not sure if this is true.... hopefully it will have 24G vram. | 2025-01-07T12:36:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hvqhbt/b770_next_week/ | kyeoh1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvqhbt | false | null | t3_1hvqhbt | /r/LocalLLaMA/comments/1hvqhbt/b770_next_week/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'CyPN3yA5CqmdoGPOdhaNDciFUUprken073xBRe4mZ2I', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/anRWeuvKy3Mmnz1TkIcAFVUdpvqZWI8kfVTAuOlK6co.jpg?width=108&crop=smart&auto=webp&s=8c5736a0dbf66c607b5eb97c312dac5b61068525', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/anRWeuvKy3Mmnz1TkIcAFVUdpvqZWI8kfVTAuOlK6co.jpg?width=216&crop=smart&auto=webp&s=5d28ddb750a173b9c7220c1b9df27a3809b28bd3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/anRWeuvKy3Mmnz1TkIcAFVUdpvqZWI8kfVTAuOlK6co.jpg?width=320&crop=smart&auto=webp&s=d01c97e103a2bf9667a6325e65d4f5af069f40ec', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/anRWeuvKy3Mmnz1TkIcAFVUdpvqZWI8kfVTAuOlK6co.jpg?auto=webp&s=4416dc9aaee501404109ce9b4101b3faef2cbd89', 'width': 480}, 'variants': {}}]} |
Which cloud provider would you recommend to "self-host" DeepSeek v3? | 33 | Basically the title - which cloud provider would you recommend to "self-host" DeepSeek V3? We'd install and run it ourselves for privacy reasons, but we're comfortable with the hardware being in the cloud. | 2025-01-07T12:42:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hvqkym/which_cloud_provider_would_you_recommend_to/ | ramdulara | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvqkym | false | null | t3_1hvqkym | /r/LocalLLaMA/comments/1hvqkym/which_cloud_provider_would_you_recommend_to/ | false | false | self | 33 | null
HP Z2 Mini G1a is a workstation-class mini PC with AMD Strix Halo and up to 96GB graphics memory | 158 | 2025-01-07T13:03:10 | https://liliputing.com/hp-z2-mini-g1a-is-a-workstation-class-mini-pc-with-amd-strix-halo-and-up-to-96gb-graphics-memory/ | Balance- | liliputing.com | 1970-01-01T00:00:00 | 0 | {} | 1hvqydy | false | null | t3_1hvqydy | /r/LocalLLaMA/comments/1hvqydy/hp_z2_mini_g1a_is_a_workstationclass_mini_pc_with/ | false | false | 158 | {'enabled': False, 'images': [{'id': 'GoJy6wYiJl21nXs31_XzWXwFtcjZ5lBoy28c4xxEhOw', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/-lYQleRwNOx-O_sp2gzAkhf2t0T8a7S-QftPiKWgI1U.jpg?width=108&crop=smart&auto=webp&s=fb8ac1f847e8580aee4c6424d0a0a19756f5d803', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/-lYQleRwNOx-O_sp2gzAkhf2t0T8a7S-QftPiKWgI1U.jpg?width=216&crop=smart&auto=webp&s=736fa28c23c72c52b4a852c9a582400268993ec3', 'width': 216}, {'height': 174, 'url': 'https://external-preview.redd.it/-lYQleRwNOx-O_sp2gzAkhf2t0T8a7S-QftPiKWgI1U.jpg?width=320&crop=smart&auto=webp&s=46ade15ce86fb723764ca029d1b3f4e1ee41faf0', 'width': 320}, {'height': 349, 'url': 'https://external-preview.redd.it/-lYQleRwNOx-O_sp2gzAkhf2t0T8a7S-QftPiKWgI1U.jpg?width=640&crop=smart&auto=webp&s=5e01272f3aa7e7243a84e5a143551365b5aa468a', 'width': 640}, {'height': 524, 'url': 'https://external-preview.redd.it/-lYQleRwNOx-O_sp2gzAkhf2t0T8a7S-QftPiKWgI1U.jpg?width=960&crop=smart&auto=webp&s=6f353da3dcb6733052a636a6791eed26840f6db2', 'width': 960}, {'height': 590, 'url': 'https://external-preview.redd.it/-lYQleRwNOx-O_sp2gzAkhf2t0T8a7S-QftPiKWgI1U.jpg?width=1080&crop=smart&auto=webp&s=2de4d500b2e8ef646a500c8696c9e34cc3856e48', 'width': 1080}], 'source': {'height': 656, 'url': 'https://external-preview.redd.it/-lYQleRwNOx-O_sp2gzAkhf2t0T8a7S-QftPiKWgI1U.jpg?auto=webp&s=e89a1b0c6be85bef48c42f145e2f3db892b76ec8', 'width': 1200}, 'variants': {}}]} |
With NVIDIA's recent Project Digits announcement, would it be more feasible to train a bitnet model from scratch at home? | 0 | This little machine seems to have the required ingredients for fine-tuning a model at home at a relatively affordable price, lowering the barrier to entry for AI model researchers, enthusiasts and scientists all over the world.
Will training a bitnet model from the ground up be a reality now? | 2025-01-07T13:21:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hvramh/with_nvidias_recent_project_digits_announcement/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvramh | false | null | t3_1hvramh | /r/LocalLLaMA/comments/1hvramh/with_nvidias_recent_project_digits_announcement/ | false | false | self | 0 | null |
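For context on what such training would involve: "bitnet" here refers to BitNet-style models whose weights are constrained to {-1, 0, +1}. A simplified sketch of the b1.58 absmean quantizer is below; real training also needs a straight-through estimator for gradients and activation quantization, both omitted:

```python
# BitNet b1.58-style absmean weight quantization (simplified from the paper):
# scale by the mean absolute weight, round, clip to {-1, 0, +1}, rescale.
import numpy as np

def absmean_ternary(w: np.ndarray) -> np.ndarray:
    scale = np.mean(np.abs(w)) + 1e-8  # epsilon guards the all-zero case
    return np.clip(np.round(w / scale), -1, 1) * scale

w = np.random.randn(4, 4).astype(np.float32)
print(absmean_ternary(w))  # every entry is -scale, 0, or +scale
```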
"Contemplative reasoning" response style for LLMs (Best prompting advice I've used so far) | 78 | Instruct LLM to contemplate before giving an answer and see the thought process. [Here is the prompt.](https://gist.github.com/Maharshi-Pandya/4aeccbe1dbaa7f89c182bd65d2764203)
[Here is the source](https://x.com/mrsiipa/status/1876253176963493889)
I've tried it a few times and I find it quite impressive for how much it can squeeze from non-reasoning models. | 2025-01-07T13:27:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hvre7f/contemplative_reasoning_response_style_for_llms/ | nelson_moondialu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvre7f | false | null | t3_1hvre7f | /r/LocalLLaMA/comments/1hvre7f/contemplative_reasoning_response_style_for_llms/ | false | false | self | 78 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]} |
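To try something in this spirit against a local model, a sketch like the following works; note the system prompt here is my own abbreviated paraphrase, not the text of the linked gist, and the endpoint and model name are assumptions:

```python
# Wrap any OpenAI-compatible local server (llama.cpp, vLLM, etc.) with a
# "contemplate first" system prompt and print the full thought process.
from openai import OpenAI

CONTEMPLATIVE = (
    "Before answering, think out loud inside <contemplation> tags: question "
    "your assumptions, explore alternatives, and backtrack freely. Only then "
    "give a final answer inside <answer> tags."
)

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="local-model",  # assumption: whatever your server exposes
    messages=[
        {"role": "system", "content": CONTEMPLATIVE},
        {"role": "user", "content": "Why do steel ships float?"},
    ],
)
print(resp.choices[0].message.content)
```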
Is there a way I can build a setup that isn't too expensive and can run large models like DeepSeek V3 locally at an acceptable speed? What would I need? | 1 | [removed] | 2025-01-07T13:37:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hvrky0/is_there_a_way_i_can_build_a_setup_not_so/ | agx3x2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvrky0 | false | null | t3_1hvrky0 | /r/LocalLLaMA/comments/1hvrky0/is_there_a_way_i_can_build_a_setup_not_so/ | false | false | self | 1 | null
A little project - chats and prompts needed | 2 | [removed] | 2025-01-07T13:38:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hvrm6d/a_little_project_chats_and_prompts_needed/ | Legitimate_Mix5486 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvrm6d | false | null | t3_1hvrm6d | /r/LocalLLaMA/comments/1hvrm6d/a_little_project_chats_and_prompts_needed/ | false | false | self | 2 | null |
A few questions about OpenRouter | 1 | [removed] | 2025-01-07T13:42:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hvroco/a_few_questions_about_openrouter/ | sir_kokabi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvroco | false | null | t3_1hvroco | /r/LocalLLaMA/comments/1hvroco/a_few_questions_about_openrouter/ | false | false | 1 | null |
A little project - chats and prompts needed | 1 | [removed] | 2025-01-07T13:54:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hvrwz5/a_little_project_chats_and_prompts_needed/ | Legitimate_Mix5486 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvrwz5 | false | null | t3_1hvrwz5 | /r/LocalLLaMA/comments/1hvrwz5/a_little_project_chats_and_prompts_needed/ | false | false | self | 1 | null |
Context per sequence too small | 0 | Hi, I’m trying to get paperless-so to work, but it keeps truncating the output. I’ve isolated the problem to this llama.cpp log line:
llama_new_context_with_model: n_ctx_per_seq = 2048
But I have no idea how to increase this specific little variable. Anyone have any tips? | 2025-01-07T14:02:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hvs2u8/context_per_sequence_to_small/ | WolpertingerRumo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvs2u8 | false | null | t3_1hvs2u8 | /r/LocalLLaMA/comments/1hvs2u8/context_per_sequence_to_small/ | false | false | self | 0 | null |
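In case it helps: that log line comes from llama.cpp, where (as I understand it) `n_ctx_per_seq` is the total context size divided by the number of parallel sequences, so it is raised indirectly by raising the context size — `-c`/`--ctx-size` when launching `llama-server` directly, or the `num_ctx` option if the backend is Ollama. A minimal sketch of the Ollama route; endpoint, model name, and prompt are assumptions:

```python
# Raise the context window per request via Ollama's num_ctx option, which
# feeds through to llama.cpp's n_ctx (and therefore n_ctx_per_seq).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": "Summarize this document: ...",
        "stream": False,
        "options": {"num_ctx": 8192},
    },
    timeout=600,
)
print(resp.json()["response"])
```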
Intel Arc A770 (16GB) for AI tools like Ollama and Stable Diffusion | 1 | [removed] | 2025-01-07T14:03:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hvs3m4/intel_arc_a770_16gb_for_ai_tools_like_ollama_and/ | noorAshuvo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvs3m4 | false | null | t3_1hvs3m4 | /r/LocalLLaMA/comments/1hvs3m4/intel_arc_a770_16gb_for_ai_tools_like_ollama_and/ | false | false | self | 1 | null |
Exploring a Prompt-Based Method for AI Response Refinement | 2 | I've been experimenting with a prompting technique that encourages language models to evaluate and refine their own responses. The core idea is to structure the prompt in a way that creates a feedback loop, allowing the AI to identify potential shortcomings in its initial output and then revise it.
Here's the basic structure of the prompt I've been using:
You will be working on a task that requires multiple iterations to achieve the best possible outcome. In each iteration, you will perform the following steps and present your output in distinct sections:
Thinking: In this section, detail your thought process for the current iteration. Explain the steps you are taking, the reasoning behind your choices, and any challenges or considerations you have encountered.
Answer: Based on your thinking, provide a draft answer or solution to the task in this section. This may not be the final answer and is subject to refinement in subsequent iterations.
Self reflect: Review your "Answer" critically. Identify potential errors, areas for improvement, and alternative approaches you could consider in the next iteration. Be specific in your feedback.
Decide if the answer is satisfied: Briefly state whether you believe the current "Answer" is satisfactory or if further iterations are needed.
After each iteration, I will respond with "Continue" to signal that you should proceed to the next iteration and refine your work based on your self-reflection. Wait for the "Continue" command before starting the next iteration.
Task: <Insert your task here>
Essentially, this prompt instructs the AI to think before answering, then analyze its response, identify areas for improvement, and generate a revised answer in the next iteration.
I'm interested in discussing the potential benefits and limitations of this approach. Has anyone else explored similar techniques for improving AI output through prompt design? | 2025-01-07T14:18:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hvsecf/exploring_a_promptbased_method_for_ai_response/ | skyline159 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvsecf | false | null | t3_1hvsecf | /r/LocalLLaMA/comments/1hvsecf/exploring_a_promptbased_method_for_ai_response/ | false | false | self | 2 | null |
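I haven't seen a canonical harness for this, but a minimal driver loop for the protocol above could look like the sketch below — the endpoint, model name, example task, and stop heuristic are all assumptions:

```python
# Drive the iterate/self-reflect protocol until the model stops asking for
# further iterations (naive check) or a hard cap is reached.
from openai import OpenAI

ITERATIVE_PROMPT = """You will work on a task over multiple iterations.
In each iteration, output these sections: Thinking, Answer, Self reflect,
and Decide if the answer is satisfied. Wait for "Continue" between iterations.
Task: Write a limerick about GPUs."""

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
messages = [{"role": "user", "content": ITERATIVE_PROMPT}]

for _ in range(5):  # hard cap so the loop always terminates
    reply = client.chat.completions.create(model="local-model", messages=messages)
    text = reply.choices[0].message.content
    print(text, "\n---")
    messages.append({"role": "assistant", "content": text})
    if "further iteration" not in text.lower():
        break  # naive stop condition; a real harness should parse the verdict
    messages.append({"role": "user", "content": "Continue"})
```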
Nvidia's Project Digits is a 'personal AI supercomputer' | TechCrunch | 7 | 2025-01-07T14:18:23 | https://techcrunch.com/2025/01/06/nvidias-project-digits-is-a-personal-ai-computer/?guccounter=1&guce_referrer=aHR0cHM6Ly9uZXdzLnljb21iaW5hdG9yLmNvbS8&guce_referrer_sig=AQAAAD6KTq83tPqA5MFoxyFPg1uVu2tw9nTG2IV0ZFi_29jbeRHKDq4fdRhAF1xkaPnQkr0EKJ9DqfEcL-MN_R4q5PYGGSP3k6cdccLiAEOpWhymakG1JsJdr1WNq3A-pomUEnD8KN0H6CqOGMtWHfjVPFViFRMAl-x7UGCeiIZOBUN3 | KingDevKong | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1hvsedp | false | null | t3_1hvsedp | /r/LocalLLaMA/comments/1hvsedp/nvidias_project_digits_is_a_personal_ai/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'Fu85SEDyc_zQputoHOsyi7UPHO_PdI6yxuX8kBWfQtE', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/VVKO7ITTv2QHwHGMEcykv1_-umArzfwpLHaUOrP_N5Q.jpg?width=108&crop=smart&auto=webp&s=a70737a37a713b797cc2766946e6b69f2c508d95', 'width': 108}, {'height': 138, 'url': 'https://external-preview.redd.it/VVKO7ITTv2QHwHGMEcykv1_-umArzfwpLHaUOrP_N5Q.jpg?width=216&crop=smart&auto=webp&s=623f4c7428b2d96fe39e7fbf9a605de5feddb311', 'width': 216}, {'height': 205, 'url': 'https://external-preview.redd.it/VVKO7ITTv2QHwHGMEcykv1_-umArzfwpLHaUOrP_N5Q.jpg?width=320&crop=smart&auto=webp&s=74ae601c94a22be199698f780e85c9fa77d5f49c', 'width': 320}, {'height': 410, 'url': 'https://external-preview.redd.it/VVKO7ITTv2QHwHGMEcykv1_-umArzfwpLHaUOrP_N5Q.jpg?width=640&crop=smart&auto=webp&s=6a9add3c2556c3ba4212d29fee1abee8c70eb30e', 'width': 640}, {'height': 615, 'url': 'https://external-preview.redd.it/VVKO7ITTv2QHwHGMEcykv1_-umArzfwpLHaUOrP_N5Q.jpg?width=960&crop=smart&auto=webp&s=1921bbddf82c80e85629a1e8094a20ca569a84dc', 'width': 960}, {'height': 692, 'url': 'https://external-preview.redd.it/VVKO7ITTv2QHwHGMEcykv1_-umArzfwpLHaUOrP_N5Q.jpg?width=1080&crop=smart&auto=webp&s=1383bf3ed3136db672adfc28854f6cdc82ec94c5', 'width': 1080}], 'source': {'height': 769, 'url': 'https://external-preview.redd.it/VVKO7ITTv2QHwHGMEcykv1_-umArzfwpLHaUOrP_N5Q.jpg?auto=webp&s=86a62a259ed6e78d7e807876d1f462259665faea', 'width': 1200}, 'variants': {}}]} |
Could this be done with open-source models/tools? | 2 | Today I had to draw Bézier curves for a small project at work and after plotting the points on a bitmap, I got this:
[Original image](https://preview.redd.it/950tydoiykbe1.png?width=796&format=png&auto=webp&s=6b9782a655d61586348a36911e9ae41cb2a9e00f)
Out of curiosity, I uploaded the image to ChatGPT, and my mind was blown.
[First part of the interaction with ChatGpt](https://preview.redd.it/zy2or44qzkbe1.png?width=652&format=png&auto=webp&s=9a0cb02759f60b87f7b7629d2a0ca4555fb0e5ed)
[the final answer](https://preview.redd.it/8f6np5ruzkbe1.png?width=615&format=png&auto=webp&s=d5fb8d572e4dbe533db40097879d50fb36e19559)
Is this an easy problem for an LLM with access to tools to solve? Could it be done with non-proprietary models? | 2025-01-07T14:30:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hvsnm5/could_this_be_done_with_opensource_modelstools/ | zaidorx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvsnm5 | false | null | t3_1hvsnm5 | /r/LocalLLaMA/comments/1hvsnm5/could_this_be_done_with_opensource_modelstools/ | false | false | 2 | null
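The post doesn't include the plotting code, so purely as a reference point, here is a guess at the kind of routine involved — sampling a cubic Bézier curve from four made-up control points:

```python
# Sample and plot a cubic Bézier curve from its Bernstein-polynomial form.
import matplotlib.pyplot as plt

def cubic_bezier(p0, p1, p2, p3, n=100):
    pts = []
    for i in range(n + 1):
        t = i / n
        u = 1 - t
        x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
        y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
        pts.append((x, y))
    return pts

pts = cubic_bezier((0, 0), (100, 300), (300, 300), (400, 0))
plt.plot(*zip(*pts))
plt.show()
```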
Accurate representation of AI industry | 1 | [removed] | 2025-01-07T14:41:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hvsvab/accurate_representation_of_ai_industry/ | de4dee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvsvab | false | null | t3_1hvsvab | /r/LocalLLaMA/comments/1hvsvab/accurate_representation_of_ai_industry/ | false | false | self | 1 | null |
Accurate representation of AI industry | 0 | 2025-01-07T14:42:38 | https://www.facebook.com/share/v/15zb7wXeLT/ | de4dee | facebook.com | 1970-01-01T00:00:00 | 0 | {} | 1hvswgf | false | null | t3_1hvswgf | /r/LocalLLaMA/comments/1hvswgf/accurate_representation_of_ai_industry/ | false | false | 0 | {'enabled': False, 'images': [{'id': '6WG9SpiPUs76X5MwgLnIYycqPOEPs_wQLIJKwFAZwoc', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/xIAn7JXZWydBMoj8dKJYfrRckZ9FvwB5ran06RZLTHk.jpg?width=108&crop=smart&auto=webp&s=e80351638b6bb19edf3349760a40c7dc84d34501', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/xIAn7JXZWydBMoj8dKJYfrRckZ9FvwB5ran06RZLTHk.jpg?width=216&crop=smart&auto=webp&s=aa396a4423aaa9e8120850801942740fbb2414cc', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/xIAn7JXZWydBMoj8dKJYfrRckZ9FvwB5ran06RZLTHk.jpg?width=320&crop=smart&auto=webp&s=c811c704b2cf64395a505a4b2149ab3b67c7d86f', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/xIAn7JXZWydBMoj8dKJYfrRckZ9FvwB5ran06RZLTHk.jpg?width=640&crop=smart&auto=webp&s=3d0f1b23f4fa7df8cb27630f4f604eb6b9e19ff3', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/xIAn7JXZWydBMoj8dKJYfrRckZ9FvwB5ran06RZLTHk.jpg?width=960&crop=smart&auto=webp&s=069d737009591264edadab937e04746df05ac6bf', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/xIAn7JXZWydBMoj8dKJYfrRckZ9FvwB5ran06RZLTHk.jpg?width=1080&crop=smart&auto=webp&s=16839ea0b3c0739111240b96ae688c30d91dd703', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/xIAn7JXZWydBMoj8dKJYfrRckZ9FvwB5ran06RZLTHk.jpg?auto=webp&s=3fbbcd12af945cf5def901519318a7ec696c511d', 'width': 1080}, 'variants': {}}]} |
Best Way to Setup Local AI with my Setup, Help. | 1 | [removed] | 2025-01-07T15:02:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hvtbn9/best_way_to_setup_local_ai_with_my_setup_help/ | Impossible-Bake3866 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvtbn9 | false | null | t3_1hvtbn9 | /r/LocalLLaMA/comments/1hvtbn9/best_way_to_setup_local_ai_with_my_setup_help/ | false | false | self | 1 | null |
Using ollama with a frontend for RAG on a code repository | 7 | I have a set of local repositories I want to inspect with a locally hosted LLM. However, I would ideally like to reduce the amount of manual indexing of files as much as possible. I am currently using ollama with OpenWebUI and 1filellm, but I was wondering if there are better ways/frontends to do this? | 2025-01-07T15:30:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hvty4s/using_ollama_with_front_for_rag_on_a_code/ | hermajordoctor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvty4s | false | null | t3_1hvty4s | /r/LocalLLaMA/comments/1hvty4s/using_ollama_with_front_for_rag_on_a_code/ | false | false | self | 7 | null
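For what it's worth, the 1filellm-style flow described here can be approximated in a few lines; the repo path, model name, question, and size cap below are assumptions, and a real RAG setup would chunk and embed rather than truncate:

```python
# Flatten a repo's Python files into one prompt and ask a local Ollama model.
from pathlib import Path
import requests

repo = Path("~/projects/myrepo").expanduser()
chunks = []
for p in sorted(repo.rglob("*.py")):
    chunks.append(f"# FILE: {p.relative_to(repo)}\n{p.read_text(errors='ignore')}")
context = "\n\n".join(chunks)[:200_000]  # crude cap to stay inside the context

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": f"{context}\n\nQuestion: where is the HTTP client configured?",
        "stream": False,
        "options": {"num_ctx": 32768},
    },
    timeout=600,
)
print(resp.json()["response"])
```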
DeepSeek's lack of community communication | 1 | [removed] | 2025-01-07T15:42:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hvu7mr/deepseeks_lack_of_community_communication/ | robertpiosik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hvu7mr | false | null | t3_1hvu7mr | /r/LocalLLaMA/comments/1hvu7mr/deepseeks_lack_of_community_communication/ | false | false | self | 1 | null |