**Is it possible to run quantized Llama3 70B on 250gb RAM** (u/United_Demand, 2024-12-29, score 0)

I have a server with 250 GB of RAM (64 cores), but no GPU. I want to run a quantized Llama 3 70B model. I have tried [Llama-3.3-70B-Instruct-Q5_K_M.gguf](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF/blob/main/Llama-3.3-70B-Instruct-Q5_K_M.gguf) and [Llama-3.3-70B-Instruct-Q3_K_M.gguf](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF/blob/main/Llama-3.3-70B-Instruct-Q3_K_M.gguf), but they fail during the model-load phase due to RAM limitations.

I am curious how much RAM is required to run these models, and I'm also looking for an LLM that can run within my 250 GB RAM setup. My tasks involve basic question answering, such as analyzing a call transcript (in JSON format) between an agent and a customer to determine:

* whether the agent introduced themselves,
* whether the agent resolved the issue, or
* whether the issue was escalated to a supervisor.

Could you please suggest any suitable models for these requirements? Thanks in advance!
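For a rough sanity check: the size of a GGUF file approximates the RAM needed for the weights, and the KV cache comes on top and grows with context. A back-of-the-envelope sketch in Python, where the bits-per-weight figures are approximations for the common quant types and the KV numbers assume Llama 3 70B's architecture (80 layers, 8 KV heads, head dim 128):

```python
# Rough RAM estimate for a GGUF-quantized 70B model (approximation only).
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    # 1B parameters at 8 bits is ~1 GB
    return params_billion * bits_per_weight / 8

for name, bpw in [("Q3_K_M", 3.9), ("Q5_K_M", 5.7), ("Q8_0", 8.5)]:
    print(f"70B {name}: ~{weights_gb(70, bpw):.0f} GB for weights")

# fp16 KV cache per token: 2 (K+V) * layers * kv_heads * head_dim * 2 bytes
kv_gb_per_1k = 2 * 80 * 8 * 128 * 2 * 1000 / 1e9
print(f"KV cache: ~{kv_gb_per_1k:.2f} GB per 1K tokens of context")
```

By this math even the Q5_K_M file (~50 GB on disk) should load comfortably into 250 GB, so a failure at load time points at something else, e.g. a per-process memory limit or an oversized context setting.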
**Best practices for summarising text** (u/ihatebeinganonymous, 2024-12-29, score 6)

Hi. What are some best practices in prompting when we want to summarise a possibly long piece of text without hallucinating or losing details?

What instructions do you provide that are proven to work? Are examples useful? Do you specifically ask it to go paragraph by paragraph, specify the output size, etc.?

Assume lighter open models, e.g. Gemma 9B or Llama 8B.

Thanks.
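One widely used pattern with small models is map-reduce summarization: summarize each chunk with a grounding instruction, then merge the partial summaries. A minimal sketch against a local OpenAI-compatible server; the base URL and model name are placeholders for whatever backend is running:

```python
# Map-reduce summarization sketch for small local models (e.g. Gemma 9B).
from openai import OpenAI

# Assumes a local OpenAI-compatible server (llama.cpp server, Ollama, etc.).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="local-model",  # placeholder; many local servers ignore this
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,      # low temperature discourages embellishment
    )
    return resp.choices[0].message.content

def chunks(text: str, size: int = 6000) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize(text: str) -> str:
    partials = [
        chat("Summarize the passage below in 3-5 bullet points. Use only "
             "facts stated in the passage; do not add anything.\n\n" + part)
        for part in chunks(text)
    ]
    return chat("Merge these partial summaries into one coherent summary, "
                "keeping all names and numbers:\n\n" + "\n\n".join(partials))
```

Constraining the model to facts stated in the passage and keeping the temperature low are the usual first defenses against hallucinated details.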
**Intel preparing 1-slot Arc Battlemage GPU with 24GB memory** (u/brown2green, 2024-12-29, score 1)

Link: https://videocardz.com/newz/intel-preparing-arc-pro-battlemage-gpu-with-24gb-memory
**Intel preparing Arc (PRO) "Battlemage" GPU with 24GB memory - VideoCardz.com** (u/brown2green, 2024-12-29, score 542)

Link: https://videocardz.com/newz/intel-preparing-arc-pro-battlemage-gpu-with-24gb-memory
**Are AMD and Intel Graphic cards viable for running AI models?** (u/sebo3d, 2024-12-29, score 14)

Title, basically. I've been considering upgrading my PC for both gaming and AI, and I've been checking the prices of various GPUs. My current RTX 3060, while okay for 1080p gaming and running low-parameter models, is starting to show its age, so I was thinking about a small upgrade to keep running the latest games at 1080p and maybe expand my AI options to ~20B models (this isn't meant to be a major upgrade, as I'm not much of a gamer anymore, and for all my high-parameter AI needs I just use APIs).

The problem is, NVIDIA's GPUs are obviously expensive (the cheapest, I believe, is the RTX 4060 Ti 16GB, which costs around £450), but AMD and Intel seem to have 16GB VRAM options available fairly cheap compared to NVIDIA's offerings (for example, the Sparkle Intel Arc A770 or AMD's Radeon RX 7600 XT, which both give 16GB of VRAM and cost around £300 on Amazon).

So, going back to the question in the title: can I purchase a 16GB card from manufacturers other than NVIDIA and expect the same speeds and no unexpected incompatibilities with AI backends/frontends (I mainly use Stable Diffusion for image gen and KoboldCpp for text gen)?
**How far are we from having true personal assistants for voice-controlled computers?** (u/martinerous, 2024-12-29, score 24)

My sister's husband has multiple sclerosis. He is completely paralyzed. He can speak, albeit in a quiet voice and with pauses. He uses his computer a lot. His native language is not English, but he can speak it well enough to use Windows' built-in speech recognition (now Voice Access) to control his computer. Here's the list of commands, in case someone is interested: https://support.microsoft.com/en-us/windows/windows-speech-recognition-commands-9d25ef36-994d-f367-a81a-a326160128c7

As you can see, the commands are quite basic, and completing a task sometimes requires annoying micromanagement.

So, I was thinking how great it would be if there were an AI agent that could replace this voice-command solution.

Imagine that instead of having to check his favorite news pages one by one, he could teach an AI assistant to browse his favorite news sites and collect the summaries and headlines with links, or even read the news to him. Seems simple, right? But I already see a few problems with the current state of the technology:

- Current voice control and agent solutions seem quite experimental and fragile. He would not be able to debug issues if something happens.
- AI voice modes usually seem too eager to react, as we've seen with GPT. He sometimes needs to pause to inhale deeply. Also, his English is not that fluent and he might mess things up when speaking longer sentences. So the AI assistant should be "more patient", and should also repeat the command as it understood it and proceed only when the user has said "Yes" to approve. It should also be possible to abort the ongoing chain of tasks midway immediately by saying "Stop!" (a minimal sketch of such a loop follows this post).
- Language. His native language is Latvian, which is supported well only by the largest LLMs, and in text mode only. While he can speak commands in English, the content on the websites he visits is in Latvian. So that excludes fully local small multimodal models, unless they could easily be fine-tuned for a specific language (at least to understand the text and to use Latvian text-to-speech).
- Memory. Current LLMs don't have it; every session is a blank slate. As a workaround, we could invent some kind of automation prompt, like "when I say 'get me news', you should collect summaries of the following websites (the list follows)". But the solution should be user-friendly, so that he could dictate new commands himself to add to the prompt for later reuse.

So, what might be our best hope of achieving something like this? Could it become possible next year? Is there something reliable we could try even now?
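A minimal sketch of the confirm-and-abort loop from the second bullet, with typed input standing in for speech-to-text and print standing in for text-to-speech; `run_command` is a stub for whatever automation backend would execute the task:

```python
# Confirm-before-acting loop: repeat the command back, act only on "yes",
# and treat "stop"/"quit" as an immediate abort. Typed I/O stands in for
# speech recognition and synthesis.
def listen() -> str:
    return input("user> ").strip().lower()   # swap for speech-to-text (e.g. Whisper)

def speak(text: str) -> None:
    print(f"assistant> {text}")              # swap for text-to-speech

def run_command(command: str) -> None:
    speak(f"Running: {command}")             # stub: dispatch to browser automation etc.

while True:
    command = listen()
    if not command:
        continue                             # stay patient through pauses
    if command in ("stop", "quit"):
        break
    speak(f"I understood: '{command}'. Say yes to proceed.")
    if listen().startswith("yes"):
        run_command(command)                 # a real agent should also poll for "stop" mid-task
    else:
        speak("Cancelled.")
```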
**How can I set up whisper.cpp on a machine that has no microphone or speaker but is accessed via a machine that does have both?** (u/nonredditaccount, 2024-12-29, score 9)

I have a home server running open-webui with Whisper. The home server has neither a microphone nor a speaker. When I use the voice mode, I get an error due to the fact that the system does not have a microphone.

While still running the model and open-webui on the server, is it possible to access this functionality from another machine? The other machine can access via screen share, open port, or whatever is reasonable.
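One way to split the roles: let the headless server only do transcription over HTTP, and record on whichever machine has the microphone. whisper.cpp ships a server example for this; the sketch below assumes its default port and `/inference` endpoint, which may differ by version, and the hostname is a placeholder:

```python
# Client-side sketch: record on the machine with a microphone, transcribe on
# the headless server running whisper.cpp's HTTP server example, e.g.:
#   ./server -m models/ggml-base.en.bin --host 0.0.0.0 --port 8080
# Requires the sounddevice, soundfile, and requests packages.
import requests
import sounddevice as sd
import soundfile as sf

SERVER = "http://my-home-server:8080/inference"  # hostname is a placeholder

seconds, rate = 5, 16000
audio = sd.rec(int(seconds * rate), samplerate=rate, channels=1)
sd.wait()                                        # block until recording finishes
sf.write("clip.wav", audio, rate)                # 16 kHz PCM WAV

with open("clip.wav", "rb") as f:
    resp = requests.post(SERVER, files={"file": f}, data={"response_format": "json"})
print(resp.json()["text"])
```

Separately, Open WebUI's voice input is normally captured by the browser, so opening the UI from the machine that has the microphone may already be enough.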
**Converting and de-quantizing GGUF tensor doubt** (u/United_Demand, 2024-12-29, score 1)

[Image post]
**DeepSeek-R1-Lite-Preview seems to beat DeepSeek V3 on multiple benchmarks, so why is V3 getting so much more hype?** (u/30299578815310, 2024-12-29, score 68)

Disclaimer: I'm having trouble finding direct comparisons between these models, which is weird because they are from the same company. I got my numbers from https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf and https://api-docs.deepseek.com/news/news1120

It looks like a mixed bag. For example, R1-Lite-Preview scores 52.5 on AIME while V3 scores 39.2.

On the other hand, V3 beats R1-Lite-Preview on GPQA Diamond, 59.5 to 58.5.

On Codeforces, the two are listed in different units, but I think R1-Lite-Preview wins: it is listed at a 1450 rating, while V3 gets the 51.6th percentile. I'm pretty sure 1450 beats the 51.6th percentile looking at the Codeforces website, but I could be wrong.

I understand that R1-Lite-Preview is being marketed as a reasoning model, but if you read the V3 paper, they say it is also trained as a reasoner via distillation of R1 (I think it is the full R1 they are using for distillation, not the preview, but I couldn't tell from the paper).

Anyway, this is not an attack on DeepSeek; they've made two amazing models.
**Is it possible to run 70B models on single GPU?** (u/JuCaDemon, 2024-12-29, score 23)

The question goes like this:

If I have a system with something like 256 GB of DDR4 RAM and an EPYC 7003 CPU, is it possible to load a model with the GPU as the processing unit while the memory used is the system's RAM instead of the GPU's? Something similar to how an iGPU works: since it doesn't have its own memory, it uses system RAM.

Do you think someone has done anything like that?
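Something close to this already exists as partial offloading in llama.cpp: the GGUF weights are memory-mapped in system RAM and `n_gpu_layers` controls how many layers actually run on the GPU. A minimal sketch with the llama-cpp-python bindings, where the model path is a placeholder:

```python
# Partial GPU offload: weights live in system RAM (memory-mapped GGUF);
# n_gpu_layers decides how many transformer layers run on the GPU.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.3-70B-Instruct-Q4_K_M.gguf",  # placeholder path
    n_ctx=8192,
    n_gpu_layers=20,   # 0 = pure CPU; raise until VRAM is full
    n_threads=32,      # an EPYC 7003 has cores to spare for the CPU layers
)
out = llm("Q: Why is the sky blue? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

The caveat is speed: layers left in system RAM run at RAM bandwidth, not VRAM bandwidth, so token generation slows roughly in proportion to how much of the model stays on the CPU.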
**DeepSeek V3 GGUF** (u/Totalkiller4, 2024-12-29, score 0)

I'm sure I read on here that someone said there is a GGUF of DeepSeek V3, but I can't seem to find the post anymore. Can someone point me to where this model is, if it is real? `ollama run hf.co/deepseek-ai/DeepSeek-V3` is not pulling, haha.
**Does DeepSeek have contextual memory?** (u/Glad-Communication60, 2024-12-29, score 0)

Just that question.
**Light Prompter - Accelerate test-time-compute with batching!** (u/Good-Assumption5582, 2024-12-29, score 6)

https://github.com/Green0-0/light_prompter
**DeepSeek. Is this true?** (u/jursla, 2024-12-29, score 1)

[Image post]
**24GB GPU that fits has adequate fans and is not the P40 or P40 w/ Radial that can be Quiet?** (u/s0n1cm0nk3y, 2024-12-29, score 9)

Hey folks,

Still trying to dive into the local LLM space with more RAM, but sadly my T420 + P40 cooling situation has been a battle. I've tried single and dual 40mm maglev fans at 28mm depth, 92mm x 38/25mm fans, and more. Sadly, the better the airflow, the louder it gets. I'm contemplating either selling the P40 and going toward something like a P6000 with its built-in blower, or trying a radial approach. Below is a picture of the interior of my chassis, though not with the current 92mm approach I'm trying. Please advise on approaches to cool this GPU quietly, at 45 dB or preferably less.

[Image: chassis interior](https://preview.redd.it/i2x8fnjoct9e1.jpg?width=1024&format=pjpg&auto=webp&s=b2691e2d58153a0749f2fd5d9da39b0ceb9b37d2)
**Find the holding system what should I get?** (u/TheArchivist314, 2024-12-29, score 2)

I've got $4,000 burning a hole in my pocket and I'm looking to build one of these sleek computers that can run LLMs, Stable Diffusion, video generation, and more. What type of system should I be looking at, or what parts should I be buying?
**Deepseek v3 is really bad in WebDev Arena** (u/notnone, 2024-12-29, score 125)

This is an example of it trying to make a calendar webpage.

Every test I run gives something completely irrelevant; it seems to be broken.

I tested it ten times against different models and it failed them all.

So what could the problem be: a bad API or bad system instructions?

[Image post]
**Run DeepSeek V3 in the Cloud** (u/haemse, 2024-12-29, score 2)

Is it comfortably possible to run DeepSeek V3 in the cloud at a reasonable price, for private use?
**How do I convert tables in PDF to csv while also adding formatting rules before csv conversion?** (u/ShippersAreIdiots, 2024-12-29, score 1)

I have a PDF with tables as shown in the image.

Not only do I want this converted into Excel, but I also want to add some rules.

For example, anything having "...." is a child of anything having "..." above it.

So "....Horses" should be replaced with "Other Horses", provided "...Other" was its parent.

I tried sending the PDF page images to GPT-4o and wrote a prompt describing my requirements, but it just fails to extract the text from the image.

Does anyone know what I can do here? I have been researching for a day and a half and can't find success :(

[Image post]
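For a rule this mechanical, a deterministic extractor plus a small post-processing pass may work better than a vision model. A sketch using pdfplumber as one possible table extractor; the filename is a placeholder, and the hierarchy logic assumes a child row sits exactly one dot deeper than its parent:

```python
# Sketch: extract PDF tables and resolve the dot-prefix hierarchy, so that
# "....Horses" under "...Other" becomes "Other Horses".
import csv
import re
import pdfplumber

def depth_and_label(cell: str) -> tuple[int, str]:
    m = re.match(r"^(\.*)\s*(.*)$", cell or "")
    return len(m.group(1)), m.group(2).strip()

rows = []
with pdfplumber.open("report.pdf") as pdf:        # placeholder filename
    for page in pdf.pages:
        for table in page.extract_tables():
            rows.extend(table)

labels: dict[int, str] = {}                       # last label seen at each depth
out = []
for row in rows:
    depth, label = depth_and_label(row[0])
    labels[depth] = label
    # Qualify a child with the most recent label one level shallower.
    if depth > 0 and (depth - 1) in labels:
        label = f"{labels[depth - 1]} {label}"
    out.append([label] + row[1:])

with open("report.csv", "w", newline="") as f:
    csv.writer(f).writerows(out)
```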
**Recommendation of Model + RAG for MacBook Pro M4 Max** (u/Upstairs-Eye-7497, 2024-12-29, score 1)

Hi everyone,

I'm looking for suggestions on a model that works efficiently for thesis writing, specifically focusing on text editing and restructuring, and also serves as a reliable RAG (Retrieval-Augmented Generation) model. I am currently using a MacBook Pro M4 Max with 64 GB RAM, a 16-core CPU and a 40-core GPU, and would like to transition to a setup that is completely local, moving away from relying on the OpenAI or Claude APIs.

Does anyone have experience with local models that perform well in these areas? Any advice on installation or configuration would also be greatly appreciated!

Thanks in advance!
**If I had 8 h100's to serve Deepseek V3 how many TPM would I get and how many parallel requests could I make?** (u/this-is-test, 2024-12-29, score 1)

I'm trying to learn how you would calculate these things to size your hardware requirements. Does anyone have pointers on where to start?
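A common starting point is memory-bandwidth math: during decoding, each generated token must stream the active weights from HBM once, so aggregate bandwidth divided by active bytes per token gives a per-sequence ceiling. A rough sketch where every figure is an approximation for illustration, not a measurement:

```python
# Back-of-the-envelope decode throughput for DeepSeek V3 on 8x H100.
hbm_bw_gb_s = 3350        # approx. HBM3 bandwidth of one H100 SXM, GB/s
gpus = 8
active_params = 37e9      # DeepSeek V3 activates ~37B of its 671B params per token
bytes_per_param = 1       # FP8 weights

bytes_per_token = active_params * bytes_per_param
ceiling_tok_s = gpus * hbm_bw_gb_s * 1e9 / bytes_per_token
print(f"single-sequence ceiling: ~{ceiling_tok_s:.0f} tok/s")

# Batched requests share the same weight reads, so total tokens per minute
# grows with batch size until compute or KV-cache memory (which also bounds
# how many parallel requests fit) becomes the limit.
```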
**Is the M3 Pro, M3 Max the best $/performance for running local LLM on a laptop?** (u/moldyjellybean, 2024-12-29, score 0)

I'm doing this just for fun/learning, but I also want some speed, and could use the speed for some editing too.

https://youtu.be/KFL9YG3jg20?si=sXvPDssM9CMvxhcG&t=424

The M3 Pro/Max seems to have abnormally good GPU numbers compared to the M2 Pro/Max or M4.
**Summary: The big AI events of 2024** (u/nh_local, 2024-12-29, score 33)

# January
* nothing...
# February
* Stability AI announces **Stable Diffusion 3** (gradually released to waiting list).
* Google upgrades the artificial intelligence chat in Bard, basing it on the new **Gemini Pro** model, in all available languages. Google replaces "Bard" with "Gemini".
* Google announces the **Gemini Pro 1.5** multimodal language model capable of parsing up to a million tokens, as well as parsing video and images. The model is gradually released to developers on a waiting list.
* OpenAI announces the **Sora** model that produces videos up to a minute long. The model is not released to the public at this time.
# March
* X Corporation announces the upcoming release of the **Grok 1.5** open source model.
* Anthropic announces **Claude 3**, a new version of their large language model. The version is deployed in 3 different sizes, with the largest model performing better than GPT-4.
* Suno AI, which develops a model for creating music, releases **Suno v3** to the general public.
# April
* Stability AI releases a new update to the music creation model - **Stable Audio 2.0**.
* X Corporation releases an upgrade to its language model, **Grok-1.5V**, which integrates high-level image recognition. In the test presented by the company, the model is the best in identifying and analyzing images compared to other models.
* The Mistral company releases its new model **Mixtral 8x22B** as open source. This is the most powerful model among the open source models and it contains 141 billion parameters but uses a method that allows more economical use.
* Meta releases the **LLaMA 3** model as open source in sizes 8B and 70B parameters. The large model shows better performance than Claude 3 Sonnet and Gemini Pro 1.5 in several measures. Meta is expected to later release larger models with 400 billion parameters and more.
* Microsoft releases the **Phi-3-mini** model in open source. The model comes in a reduced version of 3.8B parameters, which allows it to run on mobile devices as well, and it presents capabilities similar to GPT-3.5.
* Adobe announces its new image creation model **Firefly 3**.
* The startup **Reka AI** presents a series of multimodal language models in 3 sizes. The models are capable of processing video, audio and images. The large model featured similar capabilities to GPT-4.
* Apple releases as full open source a series of small language models under the name **OpenELM**. The models are available in four weights between 270 million and 3 billion parameters.
# May
* OpenAI announces the **GPT-4o model** that presents full multimodal capabilities, including receiving and creating text, images, and audio. The model presents an impressive ability to speak with a high response speed and in natural language. The model is 2 times more efficient than the GPT-4 Turbo model, and has better capabilities for languages other than English.
* Google announces a large number of AI features in its products. The main ones: increasing the token limit to 2 million for Gemini 1.5 to waiting list, releasing a smaller and faster **Gemini Flash 1.5 model**. Revealing the latest image creation model **Imagen 3**, music creation model **Music AI** and video creation model **Veo**. And the announcement of the **Astra model** with multimodal capabilities for realtime audio and video reception.
* Microsoft announces **Copilot+** for dedicated computers, which will allow a full search of the user's history through screenshots of the user's activity. The company also released as open source the SLMs that display impressive capabilities in a minimal size: **Phi-3 Small**, **Phi-3 Medium**, and **Phi-3 Vision** which includes image recognition capability.
* Meta introduces **Chameleon**, a new multimodal model that seamlessly renders text and images.
* Mistral AI releases a new open source version of its language model **Mistral-7B-Instruct-v0.3**.
* Google announces **AI Overviews** intended to give a summary of the relevant information in Google search.
* Suno AI releases an updated music creation model **Suno v3.5**.
* Mistral AI releases a new language model designed for coding **Codestral** in size 22B.
# June
* Stability AI releases its updated image creation model **Stable Diffusion 3** in a medium version in size 2B parameters.
* Apple announces **Apple Intelligence**, an AI system that will be integrated into the company's devices and will combine AI models of different sizes for different tasks.
* DeepSeekAI publishes the **DeepSeekCoderV2** open source language model which presents similar coding capabilities to models such as GPT-4, Claude 3 Opus and more.
* **Runway** introduces **Gen3 Alpha**, a new AI model for video generation.
* Anthropic releases the **Claude Sonnet 3.5** model, which presents better capabilities than other models with low resource usage.
* Microsoft releases in open source a series of image recognition models called **Florence 2**.
* Google announces **Gemma 2** open source language models with 9B and 27B parameter sizes. Also, the company opens the context window capabilities to developers for up to 2 million tokens.
# July
* OpenAI has released a miniaturized model called **GPT-4o mini** that presents high capabilities at a low cost
* Meta releases as open source the **Llama 3.1** model in sizes 8B, 70B and 405B. The large model features the same capabilities as the best closed-source models
* Mistral AI releases three new models: **Codestral Mamba**, **Mistral NeMo** and **Mathstral**, which is designed for mathematics
* Google DeepMind has unveiled two new AI systems that won silver medals at this year's International Mathematical Olympiad (IMO), **AlphaProof** and **AlphaGeometry 2**.
* OpenAI launched **SearchGPT**, an integrated web search
* Startup Udio has released **Udio v1.5**, an updated version of its music creation model
* Mistral AI has released a large language model **Mistral Large 2** in size 123B, which presents capabilities close to the closed SOTA models.
* **Midjourney v6.1** is released
* Google releases the **Gemma 2 2B** model as open source. The model demonstrates better capabilities than much larger models.
# August
* "Black Forest Labs" releases weights for an image creation model named **Flux**, which shows better performance than similar closedsource models.
* OpenAI released a new version of its model, **GPT-4o 0806**, achieving 100% success in generating valid JSON output.
* Google's image generation model, **Imagen 3**, has been released.
* xAI Corporation has launched the models **Grok 2** and **Grok 2 mini**, which demonstrate performance on par with leading SOTA models in the market.
* Microsoft has introduced its small language models, **Phi 3.5**, in three versions, each showcasing impressive performance relative to their size.
* Google has introduced three new experimental AI models: **Gemini 1.5 Flash-8B**, **Gemini 1.5 Pro Enhanced**, and **Gemini 1.5 Flash Updated**.
* **Ideogram 2.0** has been released, offering image generation capabilities that surpass those of other leading models.
* Luma has unveiled the **Dream Machine 1.5** model for video creation.
# September
* The French AI company Mistral has introduced **Pixtral 12B**, its first multimodal model capable of processing both images and text.
* OpenAI has released two next-generation AI models to its subscribers: **o1-preview** and **o1-mini**. These models show a significant improvement in performance, particularly in tasks requiring reasoning, including coding, mathematics, GPQA, and more.
* Chinese company Alibaba releases the **Qwen 2.5** model in various sizes, ranging from 0.5B to 72B. The models demonstrate capabilities comparable to much larger models.
* The video generation model **KLING 1.5** has been released.
* **OpenAI** launches the **advanced voice mode** of GPT-4o for all subscribers.
* **Meta** releases **Llama 3.2** in sizes 1B, 3B, 11B and 90B, featuring image recognition capabilities for the first time.
* **Google** has rolled out new model updates ready for deployment, **Gemini Pro 1.5 002** and **Gemini Flash 1.5 002**, showcasing significantly improved long-context processing.
* **Kyutai** releases two open-source versions of its voice-to-voice model, **Moshi**.
* Google releases an update to its AI tool **NotebookLM** that enables users to create podcasts based on their own content.
* Mistral AI launches a 22B model named **Mistral Small**.
# October
* **Flux 1.1 Pro** is released, showcasing advanced capabilities for image creation.
* Meta unveils **Movie Gen**, a new AI model that generates videos, images, and audio from text input.
* Pika introduces **Video Model 1.5** along with "Pika Effects."
* Adobe announces its video creation model, **Firefly Video**.
* Startup Rhymes AI releases **Aria**, an open-source, multimodal model exhibiting capabilities similar to comparably sized proprietary models.
* Meta releases an open-source speech-to-speech language model named **Meta Spirit LM**.
* Mistral AI introduces **Ministral**, a new model available in 3B and 8B parameter sizes.
* **Janus AI**, a multimodal language model capable of recognizing and generating both text and images, is released as open source by DeepSeekAI.
* Google DeepMind and MIT unveil **Fluid**, a text-to-image generation model with industry-leading performance at a scale of 10.5B parameters.
* **Stable Diffusion 3.5** is released in three sizes as open source.
* Anthropic launches **Claude 3.5 Sonnet New**, demonstrating significant advancements in specific areas over its previous version, and announces **Claude 3.5 Haiku**.
* Anthropic announces an experimental feature for computer use with a public beta API.
* The text-to-image model **Recraft v3** has been released to the public, ranking first in benchmarks compared to similar models.
* OpenAI has launched **Search GPT**, allowing users to perform web searches directly within the platform.
# November
* Alibaba released its new model, **QwQ 32B Preview**, which integrates reasoning capabilities before responding. The model competes with, and sometimes surpasses, OpenAI's o1-preview model.
* Alibaba open-sourced the model **Qwen2.5 Coder 32B**, which offers capabilities comparable to leading proprietary language models in the coding domain.
* DeepSeek unveiled its new AI model, **DeepSeek-R1-Lite-Preview**, which incorporates reasoning capabilities and delivers impressive performance on the AIME and MATH benchmarks, matching the level of OpenAI's o1-preview.
* **Suno** upgraded its AI-powered music generator to **v4**, introducing new features and performance improvements.
* Mistral AI launched the **Pixtral Large** model, a multimodal language model excelling in image recognition and advanced performance metrics, along with an update to Mistral Large (version 2411).
* Google introduced two experimental models, **gemini-exp-1114** and **gemini-exp-1121**, currently leading the Chatbot Arena with enhanced performance.
* Anthropic launches **Claude 3.5 Haiku** and Visual PDF Analysis in Claude.
# December
* Amazon introduced a new series of models called **NOVA**, designed for text, image, and video processing.
* OpenAI released **SORA**, a video generation model, along with the full version of **O1** and **O1 Pro** for advanced subscribers. Additionally, the company launched a live video mode for **GPT-4o**.
* Google unveiled the experimental model **Gemini-Exp-1206**, which ranked first in the chatbot leaderboard.
* Google launched **Gemini 2.0 Flash** in beta. This model leads benchmarks and outperforms the previous version, **Gemini Pro 1.5**. Additionally, Google introduced live speech and video mode and announced built-in image generation capabilities within the model.
* Google revealed **Gemini-2.0-Flash-Thinking**, a thinking model based on **Gemini 2.0 Flash**, which secured second place in the chatbot leaderboard.
* Google introduced **Veo 2**, a beta version video generation model capable of producing 4K videos up to two minutes long. The model outperformed **SORA** in human evaluations. Additionally, Google updated **Imagen 3**, offering enhanced image quality and realism.
* xAI integrated **Aurora**, a new model for generating high-quality and realistic images.
* Microsoft open-sourced the **Phi4** model, sized at 14B, showcasing impressive capabilities for its size.
* Meta released **Llama 3.3 70B**, a model offering performance comparable to **Llama 3.1 405B**.
* Google launched a multi-modal open-source model called **PaliGemma 2**, integrated with existing **Gemma** models.
* Pika Labs released **2.0**, the latest version of its AI-powered video generator.
* Meta introduced **Apollo**, a video generation model available in three different sizes.
* Deepseek open-sourced **Deepseek V3**, a model with 671B parameters that surpasses closed-source SOTA models across several benchmarks.
* Alibaba unveiled **QVQ-72B-Preview**, a cutting-edge thinking model capable of analyzing images, featuring SOTA-level performance.
* OpenAI announced **O3**, a groundbreaking AI model achieving 87.5% in the **ARC-AGI** benchmark, 25.2% in the **Frontier Math Benchmark** (compared to under 2% in previous models), and 87.7% in Ph.D.-level science questions. A cost-effective version, **O3 Mini**, is expected in January 2025, with performance similar to **O1**, alongside improved speed and efficiency.
* The video generation model **Kling 1.6** was released, offering significant performance enhancements.
Source: https://nhlocal.github.io/AiTimeline
**New paper noise_step: TRAINING IN 1.58B WITH NO GRADIENT MEMORY** (u/shing3232, 2024-12-29, score 61)

https://github.com/wbrickner/noise_step

https://x.com/_brickner/status/1871348156786704657

https://colab.research.google.com/drive/1hXzf5xB4INzMUNTlAB8CI1V10-JV7zyg?usp=sharing#scrollTo=JXCvlm8Rol8C

It could be a big deal, but I am not sure if it would work on full transformer training.
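For intuition, this sits in the family of perturbation-based optimizers: probe the loss along random directions that can be regenerated from a seed, so no gradient tensor is ever stored, only seeds and scalar slopes. A toy sketch of that family, not the paper's exact algorithm:

```python
# Toy perturbation-based training step (SPSA-style): no gradient tensor is
# ever stored; each step needs two loss evaluations and a reproducible seed.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)                          # tiny "model": linear weights
X = rng.normal(size=(64, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0])

def loss(w: np.ndarray) -> float:
    return float(np.mean((X @ w - y) ** 2))

lr, eps = 0.05, 1e-3
for step in range(300):
    # The probe direction is fully determined by the seed, so it never has
    # to be stored: (seed, slope) pairs are enough to replay the update.
    d = np.sign(np.random.default_rng(step).normal(size=w.shape))  # +-1 probe
    slope = (loss(w + eps * d) - loss(w - eps * d)) / (2 * eps)
    w -= lr * slope * d
print(f"final loss: {loss(w):.4f}")
```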
**How ElevenLabs created text to speech in another language?** (u/eamag, 2024-12-29, score 3)

https://x.com/elevenlabsio/status/1858997640513679396

For a Lex Fridman interview with Javier Milei, how did they create the overdubbing in another language? I'm trying with [fish.speech](https://github.com/fishaudio/fish-speech) and [F5 TTS](https://github.com/SWivid/F5-TTS), but they both fail. Will just fine-tuning on a German dataset help, or are there some other tricks? Are there other, better models, or is some special fine-tuning needed? Thanks!
My llm sandwich beta pc ;) | 5 | Asrock b550 pro gaming
Ryzen 5600
4x 16GB DDR4 dr
Thermaltake Madrid 850W
3090 24GB in pcie16x via 8x4x4 splitter, [email protected] mode
2x 3060 12GB in pcie16x via 8x4x4 splitter, m2toPcie4 risers, [email protected] mode
1x 3060 12GB in [email protected] from NB.
Still got free:
2x pcie1x @3.0
M2 pcie [email protected]
Hunted for a server platform for so long that I finally just assembled it all from used consumer parts.
| 2024-12-29T18:50:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hp2rx2/my_llm_sandwich_beta_pc/ | Kenavru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp2rx2 | false | null | t3_1hp2rx2 | /r/LocalLLaMA/comments/1hp2rx2/my_llm_sandwich_beta_pc/ | false | false | self | 5 | null |
Creating a synthetic general knowledge dataset using Gemini - licence question | 2 | Hi everyone! I'm using Gemini Flash 2.0 to generate a synthetic Q&A instruct dataset in my language in order to fine-tune a 3B or 7B Llama model.
However, in the Gemini API terms of service there is this:
> You may not use the Services to develop models that compete with the Services (e.g., Gemini API or Google AI Studio). You also may not attempt to reverse engineer, extract or replicate any component of the Services, including the underlying data or models (e.g., parameter weights).
I want to respect these terms, so I am planning on licensing the model as CC-BY-NC-SA so it can't be used for production purposes (and it will be at most a 7B model, so it won't be in the same category as Gemini, I guess). Is that enough to respect these model-related terms?
And regarding the "underlying data", does Gemini's general knowledge fall into that category, since I'm creating the Q&A dataset in a pretty systematic way? Also, how many Q&A pairs do you think would be allowed?
I am also thinking about publishing the dataset on Huggingface under the same CC-BY-NC-SA license, but that would maybe put me under Google's scrutiny too much? | 2024-12-29T18:59:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hp2zem/creating_a_synthetic_general_knowledge_dataset/ | JanBibijan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp2zem | false | null | t3_1hp2zem | /r/LocalLLaMA/comments/1hp2zem/creating_a_synthetic_general_knowledge_dataset/ | false | false | self | 2 | null |
Together has started hosting Deepseek V3 - Finally a privacy friendly way to use DeepSeek V3 | 284 | Deepseek V3 is now available on [together.ai](http://together.ai). Predictably, their prices are not as competitive as DeepSeek's official API: they charge $0.88 per million tokens for both input and output. But on the plus side, they allow the full 128K context of the model, as opposed to the official API, which is limited to 64K. And they don't train on your data, which is one of the biggest issues with the official API.
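For anyone who wants to try it, Together exposes an OpenAI-compatible endpoint, so a minimal call looks roughly like the sketch below (the exact model id is my assumption, check their model list):

    # Sketch of calling DeepSeek V3 on Together's OpenAI-compatible API.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.together.xyz/v1",
        api_key="YOUR_TOGETHER_API_KEY",  # placeholder
    )
    resp = client.chat.completions.create(
        model="deepseek-ai/DeepSeek-V3",  # assumed model id
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(resp.choices[0].message.content)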
This also means that Deepseek V3 can now be used in Openrouter without enabling the option to use providers which train on data. | 2024-12-29T19:11:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hp39cv/together_has_started_hosting_deepseek_v3_finally/ | mikael110 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp39cv | false | null | t3_1hp39cv | /r/LocalLLaMA/comments/1hp39cv/together_has_started_hosting_deepseek_v3_finally/ | false | false | self | 284 | {'enabled': False, 'images': [{'id': 'KEKqZLXaDuojO8066WvfNm2knPNQpREJOqDRQbP0jOE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=108&crop=smart&auto=webp&s=c4356a09ff651d99050d2e2f7c625136bd5cc50d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=216&crop=smart&auto=webp&s=2efb5516e5e9493aedbb8874a4346aea1e2fdfe3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=320&crop=smart&auto=webp&s=5760f28068be8d1404c060058ca5dc7138a3921c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=640&crop=smart&auto=webp&s=5040e75d875b032b45e4cafad1ca6eed231c2aa5', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=960&crop=smart&auto=webp&s=678233eb228e31658cc7dc6f24ff3c4c199255ec', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=1080&crop=smart&auto=webp&s=e9407e720f5a5c73c6566e3b787afc17181bbb3f', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?auto=webp&s=610ce8e238d743540ebac62332adfbc058d7c11d', 'width': 2400}, 'variants': {}}]} |
chat completion ⇋ markdown | 20 | 2024-12-29T19:53:26 | S1M0N38 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hp47ai | false | null | t3_1hp47ai | /r/LocalLLaMA/comments/1hp47ai/chat_completion_markdown/ | false | false | 20 | {'enabled': True, 'images': [{'id': 'h8MU9LWY4hPAXD_OkUh9_EDgbmdC040tvpGAA6dR3h0', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/4z381ogfeu9e1.png?width=108&crop=smart&auto=webp&s=4719f0be97daaf59f3b16c0fac5873ef85b9aa80', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/4z381ogfeu9e1.png?width=216&crop=smart&auto=webp&s=2cc1ac75262673390bb560a7c12c4b7243dbd64c', 'width': 216}, {'height': 189, 'url': 'https://preview.redd.it/4z381ogfeu9e1.png?width=320&crop=smart&auto=webp&s=9f18814524ed450ffe3342bb41ef962e24eef6de', 'width': 320}, {'height': 379, 'url': 'https://preview.redd.it/4z381ogfeu9e1.png?width=640&crop=smart&auto=webp&s=517a1cd2e30085adcf170b859ff2b9ad742d735e', 'width': 640}, {'height': 568, 'url': 'https://preview.redd.it/4z381ogfeu9e1.png?width=960&crop=smart&auto=webp&s=2cd6b786fd22a4d7ac35c56979ff8737886be629', 'width': 960}, {'height': 639, 'url': 'https://preview.redd.it/4z381ogfeu9e1.png?width=1080&crop=smart&auto=webp&s=70cf4282ca499d4379d161ca65382c339fb529c2', 'width': 1080}], 'source': {'height': 1964, 'url': 'https://preview.redd.it/4z381ogfeu9e1.png?auto=webp&s=035f88da8d308c67639ae7e985f62f8afa43496c', 'width': 3316}, 'variants': {}}]} |
25k budget for a workstation. | 1 | [removed] | 2024-12-29T19:53:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hp47jh/25k_budget_for_a_workstation/ | SolarNexxus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp47jh | false | null | t3_1hp47jh | /r/LocalLLaMA/comments/1hp47jh/25k_budget_for_a_workstation/ | false | false | self | 1 | null |
How Did This Morrowind Mod Achieve Such High-Quality TTS Voice Lines? | 1 | [removed] | 2024-12-29T19:56:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hp49p5/how_did_this_morrowind_mod_achieve_such/ | ReflectionRecipes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp49p5 | false | null | t3_1hp49p5 | /r/LocalLLaMA/comments/1hp49p5/how_did_this_morrowind_mod_achieve_such/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6_eQxenMSjSAHMENvPnA5nCqwp5iY3P2OjxCNAak5wY', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/dNlmBklu-Z5iTQGIsO3VO3Z9mW32lZGP7TOftqZZYhM.jpg?width=108&crop=smart&auto=webp&s=bdef853f8538a6c2b0ad40188b9664889c4c41da', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/dNlmBklu-Z5iTQGIsO3VO3Z9mW32lZGP7TOftqZZYhM.jpg?width=216&crop=smart&auto=webp&s=4501b8241756f5d474788c904df1a1abcd916eaf', 'width': 216}, {'height': 159, 'url': 'https://external-preview.redd.it/dNlmBklu-Z5iTQGIsO3VO3Z9mW32lZGP7TOftqZZYhM.jpg?width=320&crop=smart&auto=webp&s=b687fc12d40fa645fd48e26430f05b58545ac01b', 'width': 320}], 'source': {'height': 192, 'url': 'https://external-preview.redd.it/dNlmBklu-Z5iTQGIsO3VO3Z9mW32lZGP7TOftqZZYhM.jpg?auto=webp&s=79e630f2382dc8aae8b4d189cac72956d7461ca9', 'width': 385}, 'variants': {}}]} |
Is it possible to run any LLM with 8GB VRAM (5600 XT) + 16GB of RAM? | 0 | I've been very interested in this topic lately, but my PC is not the best at the moment (currently building a new one, but it may take a while). So I was wondering if I could do anything with these specs. Any help would be greatly appreciated. | 2024-12-29T19:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hp49vx/is_it_possible_to_run_any_llm_with_8gb_vram/ | danielsuperxxx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp49vx | false | null | t3_1hp49vx | /r/LocalLLaMA/comments/1hp49vx/is_it_possible_to_run_any_llm_with_8gb_vram/ | false | false | self | 0 | null
SoC system for running a local LLaMA chatbot | 4 | I want to create a portable system (could be handheld) to which I feed all of my notes and documents, and which I could query at any time. Could this be done with a modern SoC system like the Raspberry Pi / ODROID (w/ small LED screen, BlackBerry-like keyboard, battery, etc.)?
I am not up-to-date with the capacity of the latest models, but it would be a very fun project, and I am just looking for a good starting point, in case someone has recently done something like this.
I would prefer if the model ran locally for privacy reasons. It would also be nice to fine-tune the system on my documents and notes. | 2024-12-29T20:25:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hp4x0z/soc_system_for_running_local_llama_chatbot/ | grepmew | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp4x0z | false | null | t3_1hp4x0z | /r/LocalLLaMA/comments/1hp4x0z/soc_system_for_running_local_llama_chatbot/ | false | false | self | 4 | null
How Fast Does the Jetson Nano Really Run Large Language Models? | 44 | 2024-12-29T20:48:10 | https://www.jeremymorgan.com/blog/tech/nvidia-jetson-orin-nano-speed-test/ | dulldata | jeremymorgan.com | 1970-01-01T00:00:00 | 0 | {} | 1hp5fda | false | null | t3_1hp5fda | /r/LocalLLaMA/comments/1hp5fda/how_fast_does_the_jetson_nano_really_run_large/ | false | false | 44 | {'enabled': False, 'images': [{'id': 'LeNlQrf1NqeiBthoXtX6z6T_IHE9MW7Fc-GuID6HbKc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/d9Q6GkTt4D20qL1SIMI06B1H_uodngCkhiwoEwLoC4c.jpg?width=108&crop=smart&auto=webp&s=ceef22e02be967d537a7a11f64e0c231573b4d5d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/d9Q6GkTt4D20qL1SIMI06B1H_uodngCkhiwoEwLoC4c.jpg?width=216&crop=smart&auto=webp&s=8d0b3184ea1531646bf27275b3f28cd94c58debd', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/d9Q6GkTt4D20qL1SIMI06B1H_uodngCkhiwoEwLoC4c.jpg?width=320&crop=smart&auto=webp&s=f9e9f6070be0dcb44d33967b92b1f4da7d40f8f8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/d9Q6GkTt4D20qL1SIMI06B1H_uodngCkhiwoEwLoC4c.jpg?width=640&crop=smart&auto=webp&s=67a35b0c2afd40ce65e1b841fd78d431f402c035', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/d9Q6GkTt4D20qL1SIMI06B1H_uodngCkhiwoEwLoC4c.jpg?width=960&crop=smart&auto=webp&s=0905c43c4c54ae1ef4cd304c2806d203805ad4a4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/d9Q6GkTt4D20qL1SIMI06B1H_uodngCkhiwoEwLoC4c.jpg?width=1080&crop=smart&auto=webp&s=94cc94c13cfa42cd65859bb4487a89d254145caf', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/d9Q6GkTt4D20qL1SIMI06B1H_uodngCkhiwoEwLoC4c.jpg?auto=webp&s=f55921432716bb0b5745636891826d0ea60cde3f', 'width': 1200}, 'variants': {}}]} |
LLM Visualization Tool | 79 | 2024-12-29T20:58:52 | https://bbycroft.net/llm | CortaCircuit | bbycroft.net | 1970-01-01T00:00:00 | 0 | {} | 1hp5nue | false | null | t3_1hp5nue | /r/LocalLLaMA/comments/1hp5nue/llm_visualization_tool/ | false | false | default | 79 | null |
Best practices on model evaluation during fine-tuning | 2 | Hello everyone!
Recently I've encountered a certain issue with my fine-tuning workflow. It seems like iteration speed with my current training loop is rather slow: train the model -> run evaluations -> repeat if it turned out bad.
One would assume that it is possible to at least partially approximate how the model is holding up via some simplified evaluations during the fine-tuning stage, using some sort of evaluation callback, for example a `TrainerCallback`-based callback in the Hugging Face ecosystem.
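A minimal sketch of what I have in mind, assuming a standard `transformers` `Trainer` setup (the prompts and decoding settings below are just placeholders):

    import torch
    from transformers import TrainerCallback

    class QuickEvalCallback(TrainerCallback):
        """Print generations for a tiny fixed prompt suite at every evaluation event."""

        def __init__(self, tokenizer, prompts):
            self.tokenizer = tokenizer
            self.prompts = prompts

        def on_evaluate(self, args, state, control, model=None, **kwargs):
            model.eval()
            with torch.no_grad():
                for prompt in self.prompts:
                    inputs = self.tokenizer(prompt, return_tensors="pt").to(model.device)
                    out = model.generate(**inputs, max_new_tokens=64)
                    print(f"[step {state.global_step}]",
                          self.tokenizer.decode(out[0], skip_special_tokens=True))

    # trainer = Trainer(..., callbacks=[QuickEvalCallback(tokenizer, ["Explain RAG in one sentence."])])

This only lets me eyeball generations, of course; the harder part is deciding which small benchmark slices are worth scoring automatically.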
The question is: in case I want to evaluate my model during general SFT on instructions, which smaller evaluation sets might be useful to consider?
In the case of a domain-specific setup it is more straightforward IMO: you just create your evaluation suite, run inference, and compute metrics.
But what about general-purpose instructions? Evaluating each time on full MMLU, MMLU-Pro, IFEval, etc. takes a massive amount of time.
Thanks in advance and happy holidays! | 2024-12-29T21:19:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hp65g5/best_practices_on_model_evaluation_during/ | oposteriori | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp65g5 | false | null | t3_1hp65g5 | /r/LocalLLaMA/comments/1hp65g5/best_practices_on_model_evaluation_during/ | false | false | self | 2 | null |
Deepseek V3 will be more expensive in February | 58 | I hadn’t known about that before; now I’m less hyped about the model
| 2024-12-29T21:24:18 | felipejfc | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hp69da | false | null | t3_1hp69da | /r/LocalLLaMA/comments/1hp69da/deepseek_v3_will_be_more_expensive_in_february/ | false | false | 58 | {'enabled': True, 'images': [{'id': 'Y2-TsGgVqfQgx9vqx0XYVyM9_S9yVEohuMn5dnDhckE', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/6d95ssepuu9e1.jpeg?width=108&crop=smart&auto=webp&s=5be0838ca8a63c25a4c539661296141ccefe1c8b', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/6d95ssepuu9e1.jpeg?width=216&crop=smart&auto=webp&s=b5f4977a2bee4dc0f0925a5b0a7cb2fd806bb4f6', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/6d95ssepuu9e1.jpeg?width=320&crop=smart&auto=webp&s=e3176a5d811e993c5fad8397230a94808e0219c6', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/6d95ssepuu9e1.jpeg?width=640&crop=smart&auto=webp&s=7ed332a9375b334dfe248def4bd54a965b052d5a', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/6d95ssepuu9e1.jpeg?width=960&crop=smart&auto=webp&s=448fdea635b4196ad75f8bf493215819baee6439', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/6d95ssepuu9e1.jpeg?width=1080&crop=smart&auto=webp&s=4d221e22ccfa0f837f4bcfbdc89d18eb572a8bfb', 'width': 1080}], 'source': {'height': 2622, 'url': 'https://preview.redd.it/6d95ssepuu9e1.jpeg?auto=webp&s=7cc8e053341698b73f9bd3f3cab5b4c84a9bf64d', 'width': 1206}, 'variants': {}}]} |
Seeking Advice on Cost-Effective Hardware for Hosting DeepSeek V3 Locally | 7 | Hi everyone,
I'm intrigued by the idea of running DeepSeek V3 locally and aim to set up a cost-effective system using pre-owned hardware. Given that DeepSeek V3 is a Mixture-of-Experts (MoE) language model with 671 billion parameters, activating 37 billion per token, I understand that it demands substantial computational resources.
Here's my current understanding:
GPU: Considering the model's scale, I believe a high-end GPU with at least 24GB of VRAM is necessary. Would a used NVIDIA RTX 3090 or RTX A6000 be sufficient, or should I consider multiple GPUs to handle the load effectively?
CPU: While the GPU handles most of the heavy lifting, I assume a robust CPU is still important for overall system performance. Would a used AMD Ryzen 9 or Intel Core i9 be appropriate, or is there a specific model that balances performance and cost better for this application?
RAM: Given the model's size and the need for efficient data handling, I estimate that 64GB of RAM would be a safe starting point. Is this sufficient, or would increasing to 128GB provide significant benefits?
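For context, here is the kind of back-of-the-envelope estimate I've been doing for the weight footprint alone (it ignores KV cache, activations, and runtime overhead), which already suggests the numbers above may be on the low side:

    # Rough weight-memory estimate for DeepSeek V3 (671B total parameters).
    params = 671e9
    for name, bytes_per_param in [("FP8", 1.0), ("~4-bit quant", 0.5)]:
        gb = params * bytes_per_param / 1e9
        print(f"{name}: ~{gb:.0f} GB for weights alone")
    # FP8: ~671 GB; ~4-bit quant: ~336 GB, far beyond one 24GB GPU or 128GB of RAM.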
Additionally, I'm interested in any recommendations for sourcing reliable used hardware, such as reputable marketplaces or specific brands/models known for durability in AI workloads.
If anyone has experience running large-scale models like DeepSeek V3 on local setups, your insights would be invaluable. I'm particularly interested in balancing performance with budget constraints.
Thanks in advance for your guidance! | 2024-12-29T21:30:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hp6ejz/seeking_advice_on_costeffective_hardware_for/ | Funny_Acanthaceae285 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp6ejz | false | null | t3_1hp6ejz | /r/LocalLLaMA/comments/1hp6ejz/seeking_advice_on_costeffective_hardware_for/ | false | false | self | 7 | null |
[Rant] Deepseek v3 context window is a frustrating contrast to how awesome it is ! | 208 | 2024-12-29T21:39:01 | Americase | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hp6lr0 | false | null | t3_1hp6lr0 | /r/LocalLLaMA/comments/1hp6lr0/rant_deepseek_v3_context_window_is_a_frustrating/ | false | false | 208 | {'enabled': True, 'images': [{'id': 'YvNAmErwrFyT_0TYJRFis-OPXpg_dyWbddBbaelZLiY', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/zvcdydg8xu9e1.jpeg?width=108&crop=smart&auto=webp&s=69ff060ca54ae16ea086035478e9c0c34635bc6d', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/zvcdydg8xu9e1.jpeg?width=216&crop=smart&auto=webp&s=7a08487acaa234adab56e1609fae383c6db51454', 'width': 216}, {'height': 254, 'url': 'https://preview.redd.it/zvcdydg8xu9e1.jpeg?width=320&crop=smart&auto=webp&s=05efe3b20d1c2c8f0478273208bd2260fad84de4', 'width': 320}], 'source': {'height': 500, 'url': 'https://preview.redd.it/zvcdydg8xu9e1.jpeg?auto=webp&s=a9b91ccc56dbeeab60cf039f7df6e0e11cf0aa9a', 'width': 628}, 'variants': {}}]} |
One of the OGs of RAG - 🧬⚕️🔬 paperai v2.3 released | 28 | 2024-12-29T21:49:08 | https://github.com/neuml/paperai | davidmezzetti | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hp6u0x | false | null | t3_1hp6u0x | /r/LocalLLaMA/comments/1hp6u0x/one_of_the_ogs_of_rag_paperai_v23_released/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'jFFOISVwZ0xOnSbHfPKnByFrB3Oh03youqwfTpLVPd0', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/lLKsYS-yIe9JTnnJX5d9weqZHeT9VamWoXsQB2SQX50.jpg?width=108&crop=smart&auto=webp&s=d5664bdf7209c94c154f558741296b86e0a220e8', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/lLKsYS-yIe9JTnnJX5d9weqZHeT9VamWoXsQB2SQX50.jpg?width=216&crop=smart&auto=webp&s=205637b2c6c664b552be821bc63f1a20c089e1ff', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/lLKsYS-yIe9JTnnJX5d9weqZHeT9VamWoXsQB2SQX50.jpg?width=320&crop=smart&auto=webp&s=dc015bb1b383d1891d725abc69de86a5488a492b', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/lLKsYS-yIe9JTnnJX5d9weqZHeT9VamWoXsQB2SQX50.jpg?width=640&crop=smart&auto=webp&s=6d8af87bf8109fe2860729546b05eb2ab29bdeba', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/lLKsYS-yIe9JTnnJX5d9weqZHeT9VamWoXsQB2SQX50.jpg?width=960&crop=smart&auto=webp&s=d23733a7542d1af69abba092ddaa71650a1d2b43', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/lLKsYS-yIe9JTnnJX5d9weqZHeT9VamWoXsQB2SQX50.jpg?width=1080&crop=smart&auto=webp&s=a49782d1f35d1f380757cbb283044b6006cf966d', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/lLKsYS-yIe9JTnnJX5d9weqZHeT9VamWoXsQB2SQX50.jpg?auto=webp&s=ae361932043c18942549d9896efbb6e136c65145', 'width': 1920}, 'variants': {}}]} |
Quadro T400 Ollama Performance | 1 | I've been downsizing my homelab and did a lot of hemming and hawing about what SFF GPU to replace my 6700 XT with.
The two contenders were the 4060 and the T400... the biggest difference obviously being the 8GB vs 4GB of VRAM. As I was considering them, I did quite a bit of research, but could not find anything on the T400's inference performance. So I'll share my experience.
The following responses were generated with the latest Ollama as of posting, through Open WebUI. Prompt settings were kept at their defaults.
Formatting on mobile, I'll clean it up when I'm at my PC.
#########
### USER
Why is the sky blue
### ASSISTANT
The sky appears blue to us because of a phenomenon called Rayleigh scattering, named after the British physicist Lord Rayleigh. He first described it in the late 19th century.
Here's what happens:
1. **Sunlight enters Earth's atmosphere**: When sunlight enters our atmosphere, it contains all the colors of the visible spectrum (red, orange, yellow, green, blue, indigo, and violet).
2. **Shorter wavelengths scatter more**: The shorter wavelengths of light, like blue and violet, are scattered in all directions by the tiny molecules of gases in the atmosphere, such as nitrogen and oxygen.
3. **Blue light dominates**: Because blue light is scattered so much more than other colors, our eyes perceive it as the dominant color when looking at the sky on a clear day.
4. **Our eyes perceive the scattered light**: As we look up at the sky, our eyes receive the scattered blue light from all directions, giving us the illusion of a blue sky.
There are a few factors that influence the perceived color of the sky:
* **Time of day**: During sunrise and sunset, the sun's light has to travel through more of the atmosphere to reach our eyes, which scatters longer wavelengths like red and orange. This is why we often see oranges and reds during these times.
* **Cloud cover**: When clouds are present, they can reflect and scatter sunlight in various ways, creating a range of colors and effects.
* **Atmospheric conditions**: Pollution, dust, and water vapor in the atmosphere can also affect the color of the sky.
In summary, the blue color of the sky is a result of the scattering of sunlight by tiny molecules in our atmosphere, with blue light being scattered more than other colors due to its shorter wavelength.
##########
3.75 tokens per second, llama3.1 8b. Quite slow.
##########
### USER
Why is the sky blue
### ASSISTANT
The sky appears blue because of a phenomenon called Rayleigh scattering, named after the British physicist Lord Rayleigh. He discovered that when sunlight enters Earth's atmosphere, it encounters tiny molecules of gases such as nitrogen (N2) and oxygen (O2).
Here's what happens:
1. Sunlight consists of a spectrum of colors, with shorter wavelengths (like blue and violet) being scattered more than longer wavelengths (like red and orange).
2. When sunlight enters the Earth's atmosphere, it encounters tiny molecules of gases. These molecules scatter the light in all directions.
3. The smaller molecules are most effective at scattering the shorter wavelengths, like blue and violet. This is known as Rayleigh scattering.
4. As a result of this scattering, the blue light is distributed throughout the atmosphere, making the sky appear blue to our eyes.
The color of the sky can change depending on various factors such as:
* Time of day: During sunrise and sunset, the sun's rays have to travel through more of the Earth's atmosphere, which scatters the shorter wavelengths even more. This is why the sky often appears red or orange during these times.
* Atmospheric conditions: Dust, pollution, and water vapor in the air can scatter light in different ways, changing the color of the sky.
* Altitude: At higher altitudes, the atmosphere is thinner, which reduces the amount of scattering that occurs. This is why the sky often appears more intense blue at high elevations.
In summary, the sky appears blue due to Rayleigh scattering, where shorter wavelengths of sunlight are scattered by tiny molecules in the Earth's atmosphere, making them visible to our eyes.
#########
23.71 tokens per second, llama3.2 3b. Very performant.
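If anyone wants to script this instead of eyeballing the `--verbose` output, here is a rough sketch with the `ollama` Python client (assuming `pip install ollama` and a running Ollama server; durations are reported in nanoseconds):

    import ollama

    resp = ollama.generate(model="llama3.2:3b", prompt="Why is the sky blue")
    tps = resp["eval_count"] / (resp["eval_duration"] / 1e9)
    print(f"{tps:.2f} tokens/s over {resp['eval_count']} tokens")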
I'm mostly just putting this here so it shows up when someone googles Ollama and T400. I'd be happy to do any extra testing if anyone is curious. | 2024-12-29T21:54:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hp6ycq/quadro_t400_ollama_performance/ | Uhhhhh55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp6ycq | false | null | t3_1hp6ycq | /r/LocalLLaMA/comments/1hp6ycq/quadro_t400_ollama_performance/ | false | false | self | 1 | null |
DeepSeek the Web Developer | 1 | 2024-12-29T22:03:48 | Billy462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hp75xn | false | null | t3_1hp75xn | /r/LocalLLaMA/comments/1hp75xn/deepseek_the_web_developer/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'GCw0QPbqtVkPMOFCf9OiRcOhHpzK9jOlBfLbRMwjrAc', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/b1m95qdk0v9e1.png?width=108&crop=smart&auto=webp&s=f6257240d776361a45e8a32dbf4051ba6c2889e6', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/b1m95qdk0v9e1.png?width=216&crop=smart&auto=webp&s=c07a5b7b282390ad91e108e9d64c8647bd351858', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/b1m95qdk0v9e1.png?width=320&crop=smart&auto=webp&s=ccb1f05c0c7a5728162f6c2f231c513552df1b36', 'width': 320}, {'height': 400, 'url': 'https://preview.redd.it/b1m95qdk0v9e1.png?width=640&crop=smart&auto=webp&s=175edb1f9ed5f3a116245b6d2565fe1c7087bbdb', 'width': 640}, {'height': 601, 'url': 'https://preview.redd.it/b1m95qdk0v9e1.png?width=960&crop=smart&auto=webp&s=e1a211fe81d925685deca0248ea8ad02e2f7d721', 'width': 960}, {'height': 676, 'url': 'https://preview.redd.it/b1m95qdk0v9e1.png?width=1080&crop=smart&auto=webp&s=87bdc3020eac8764d2ce478c9c7f6794ffd902d1', 'width': 1080}], 'source': {'height': 1037, 'url': 'https://preview.redd.it/b1m95qdk0v9e1.png?auto=webp&s=f930480cd73d863e1a2247cbca7403b2945a011a', 'width': 1656}, 'variants': {}}]} |
When "YouTube Summarizer" extensions wanted my credit card, I chose violence | 1 | [removed] | 2024-12-29T22:25:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hp7n98/when_youtube_summarizer_extensions_wanted_my/ | aitookmyj0b | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp7n98 | false | null | t3_1hp7n98 | /r/LocalLLaMA/comments/1hp7n98/when_youtube_summarizer_extensions_wanted_my/ | false | false | 1 | null |
|
When "YouTube Summarizer" extensions wanted my creditcard, I chose violence | 1 | [removed] | 2024-12-29T22:30:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hp7rf3/when_youtube_summarizer_extensions_wanted_my/ | aitookmyj0b | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp7rf3 | false | null | t3_1hp7rf3 | /r/LocalLLaMA/comments/1hp7rf3/when_youtube_summarizer_extensions_wanted_my/ | false | false | 1 | null |
|
When "YouTube Summarizer" extensions wanted my credit card, I chose violence | 1 | [removed] | 2024-12-29T22:32:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hp7t5g/when_youtube_summarizer_extensions_wanted_my/ | aitookmyj0b | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp7t5g | false | null | t3_1hp7t5g | /r/LocalLLaMA/comments/1hp7t5g/when_youtube_summarizer_extensions_wanted_my/ | false | false | 1 | null |
|
When "YouTube Summarizer" extensions wanted my credit card, I chose violence | 1 | [removed] | 2024-12-29T22:33:24 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hp7tso | false | null | t3_1hp7tso | /r/LocalLLaMA/comments/1hp7tso/when_youtube_summarizer_extensions_wanted_my/ | false | false | default | 1 | null |
||
Youtube Summarizer Extension, free, local and open-source! | 1 | [removed] | 2024-12-29T22:35:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hp7vle/youtube_summarizer_extension_free_local_and/ | aitookmyj0b | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp7vle | false | null | t3_1hp7vle | /r/LocalLLaMA/comments/1hp7vle/youtube_summarizer_extension_free_local_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'W4XJ858a5JkSYE1CPax9eOyRgkA47_9oY45DfninpsM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uUG_B-K2sANsnGD0ez-8I5bQ8_7skfFZbl4M-R2AGd4.jpg?width=108&crop=smart&auto=webp&s=979a484da3cce6347b4ed544918cdb54137dd957', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uUG_B-K2sANsnGD0ez-8I5bQ8_7skfFZbl4M-R2AGd4.jpg?width=216&crop=smart&auto=webp&s=4da3202776d7027094746faa6ae32bfea582a705', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uUG_B-K2sANsnGD0ez-8I5bQ8_7skfFZbl4M-R2AGd4.jpg?width=320&crop=smart&auto=webp&s=9f27cb3454089e26471dfbd17cbaf7c6ec5f8c6b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uUG_B-K2sANsnGD0ez-8I5bQ8_7skfFZbl4M-R2AGd4.jpg?width=640&crop=smart&auto=webp&s=65cbbef7ca11e7cd9f75ff92cf412d86c1012370', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uUG_B-K2sANsnGD0ez-8I5bQ8_7skfFZbl4M-R2AGd4.jpg?width=960&crop=smart&auto=webp&s=55703c6bc83a338cb04f3d47760452dc90d6d0d6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uUG_B-K2sANsnGD0ez-8I5bQ8_7skfFZbl4M-R2AGd4.jpg?width=1080&crop=smart&auto=webp&s=35a861fb9d56766a958e283c56681a13d8349062', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uUG_B-K2sANsnGD0ez-8I5bQ8_7skfFZbl4M-R2AGd4.jpg?auto=webp&s=ac358b11d3c0009b4f7614bbf2f959fcc69f6c14', 'width': 1200}, 'variants': {}}]} |
YouTube Summarizer Extension, free, local and open source! | 78 | 2024-12-29T22:38:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hp7xjc/you_tube_summarizer_extension_free_local_and_open/ | aitookmyj0b | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp7xjc | false | null | t3_1hp7xjc | /r/LocalLLaMA/comments/1hp7xjc/you_tube_summarizer_extension_free_local_and_open/ | false | false | self | 78 | null
LLMs (including o1) are still subpar at compositional and spatial reasoning. See AI explained video linked (Timestamp: 9:19) yet o1 is scoring more than 90 on livebench reasoning. Livebench should make reasoning more challenging as this is a flaw. At the same time, reasoning progress is incredible. | 1 | [removed] | 2024-12-29T22:38:09 | Personal-Dot-380 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hp7xkw | false | null | t3_1hp7xkw | /r/LocalLLaMA/comments/1hp7xkw/llms_including_o1_are_still_subpar_at/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'mdUo-7D-eMIdc_985-5A8bRMw5QTSqpjsVjyblHgqNo', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/n8kojurv7v9e1.jpeg?width=108&crop=smart&auto=webp&s=04279cbcddfc839a008097cc1de884a1133d7f49', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/n8kojurv7v9e1.jpeg?width=216&crop=smart&auto=webp&s=2f94d23d3044ff0f4b0ce1ebe79498facbc0aaaa', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/n8kojurv7v9e1.jpeg?width=320&crop=smart&auto=webp&s=467473aad0ae27ad987fba9a30cf9d2036a5481b', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/n8kojurv7v9e1.jpeg?width=640&crop=smart&auto=webp&s=00d0819fca154cc663764d5fcfb3aa5b92983a2b', 'width': 640}], 'source': {'height': 2005, 'url': 'https://preview.redd.it/n8kojurv7v9e1.jpeg?auto=webp&s=011b85067ea8b92aead4104a83ecc3a4e057e634', 'width': 705}, 'variants': {}}]} |
GPU poor's dilemma: 3060 12GB vs. 4060 Ti 16GB | 35 | Hi LocalLLaMa community!
I'd like to share some of the numbers I got comparing the 3060 12GB vs the 4060 Ti 16GB. Hope this helps to solve the dilemma for other GPU poors like myself.
# software: ollama
# method: ollama run --verbose [model_name]
# Prompt:
Write a code for logistic regression from scratch using numpy with SGD
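For reference, here is a minimal version of what that prompt asks for (my own sketch, not one of the model outputs), just to show the level of code being benchmarked:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_logreg_sgd(X, y, lr=0.1, epochs=100, seed=0):
        """Binary logistic regression trained with plain SGD on log loss."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w, b = np.zeros(d), 0.0
        for _ in range(epochs):
            for i in rng.permutation(n):
                p = sigmoid(X[i] @ w + b)
                grad = p - y[i]        # dL/dz for the log loss
                w -= lr * grad * X[i]
                b -= lr * grad
        return w, b

    # tiny smoke test on a linearly separable (AND-like) dataset
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([0, 0, 0, 1])
    w, b = train_logreg_sgd(X, y, lr=0.5, epochs=500)
    print((sigmoid(X @ w + b) > 0.5).astype(int))  # expect [0 0 0 1]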
# 1. falcon3:10b-instruct-q8_0
# 1.1 RTX 3060
NAME ID SIZE PROCESSOR UNTIL
falcon3:10b-instruct-q8_0 d56712f1783f 12 GB 6%/94% CPU/GPU 4 minutes from now
total duration: 55.5286745s
load duration: 25.6338ms
prompt eval count: 46 token(s)
prompt eval duration: 447ms
prompt eval rate: 102.91 tokens/s
eval count: 679 token(s)
eval duration: 54.698s
eval rate: 12.41 tokens/s
# 1.2 RTX 4060 ti 16GB
NAME ID SIZE PROCESSOR UNTIL
falcon3:10b-instruct-q8_0 d56712f1783f 12 GB 100% GPU 3 minutes from now
total duration: 43.761345s
load duration: 17.6185ms
prompt eval count: 1471 token(s)
prompt eval duration: 839ms
prompt eval rate: 1753.28 tokens/s
eval count: 1003 token(s)
eval duration: 42.779s
eval rate: 23.45 tokens/s
# 2. mistral-nemo:12b
# 2.1. RTX 3060 12GB
NAME ID SIZE PROCESSOR UNTIL
mistral-nemo:12b 994f3b8b7801 9.3 GB 100% GPU 4 minutes from now
total duration: 20.3631907s
load duration: 22.6684ms
prompt eval count: 1032 token(s)
prompt eval duration: 758ms
prompt eval rate: 1361.48 tokens/s
eval count: 758 token(s)
eval duration: 19.556s
eval rate: 38.76 tokens/s
# 2.2. RTX 4060 ti 16gb
total duration: 16.0498557s
load duration: 22.0506ms
prompt eval count: 16 token(s)
prompt eval duration: 575ms
prompt eval rate: 27.83 tokens/s
eval count: 541 token(s)
eval duration: 15.45s
eval rate: 35.02 tokens/s
TL;DR: The RTX 3060 is faster (10–15%) when VRAM is not limiting. Memory bandwidth is quite an accurate predictor of token generation speed. The larger L2 cache of the 4060 Ti 16GB doesn't appear to impact inference speed much.
| 2024-12-29T22:39:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hp7yft/gpu_poors_dilemma_3060_12gb_vs_4060_ti_16gb/ | siegevjorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp7yft | false | null | t3_1hp7yft | /r/LocalLLaMA/comments/1hp7yft/gpu_poors_dilemma_3060_12gb_vs_4060_ti_16gb/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]} |
I am trying to generate English subtitles/CC for a Russian-language video clip | 4 | This is not a low-effort post. I did this once before, I think using freesubtitles.ai. This time around it tells me my mp4 file is corrupted. It is a YouTube video that I trimmed to a shorter length. The site also takes my money but does not recognize any credit; it seems not to be maintained.
I uploaded to YouTube, which generated Russian closed captions. YouTube Studio shows me a transcript, but I do not see any srt file.
I tried [otter.ai](http://otter.ai), which recognized the Russian as horrible English, with no apparent options to specify Russian. ChatGPT recommended this.
I would prefer an online tool or super easy to install PC software, as I do not want to invest much time in this. I do not currently have Python installed.
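That said, if I do end up installing Python, I gather the simplest offline route is a short Whisper script, something like this (untested on my end; the model size is a guess, and it needs `pip install openai-whisper` plus ffmpeg on the PATH):

    import whisper

    model = whisper.load_model("medium")  # "small" is faster but less accurate
    result = model.transcribe("clip.mp4", language="ru", task="translate")

    def fmt(t):
        h, rem = divmod(int(t), 3600)
        m, s = divmod(rem, 60)
        return f"{h:02}:{m:02}:{s:02},{int((t - int(t)) * 1000):03}"

    with open("clip.en.srt", "w", encoding="utf-8") as f:
        for i, seg in enumerate(result["segments"], start=1):
            f.write(f"{i}\n{fmt(seg['start'])} --> {fmt(seg['end'])}\n{seg['text'].strip()}\n\n")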
| 2024-12-29T23:36:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hp97kk/i_am_trying_to_generate_english_subtitlescc_f_for/ | silver_chief2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp97kk | false | null | t3_1hp97kk | /r/LocalLLaMA/comments/1hp97kk/i_am_trying_to_generate_english_subtitlescc_f_for/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'b-c8SLg5NqjPTVRdvAAcTCEzZmQIomP2Zb_1iH6QPtg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8jJE3U22__Pf8BzPTwArgYz3MkSn6_4Sn5_JAKF4bZ0.jpg?width=108&crop=smart&auto=webp&s=7bfb6bf69d951a5f3c96a54233671004dc248405', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/8jJE3U22__Pf8BzPTwArgYz3MkSn6_4Sn5_JAKF4bZ0.jpg?width=216&crop=smart&auto=webp&s=876627d229c61bdd56c835c95dc7edbb85c3851c', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/8jJE3U22__Pf8BzPTwArgYz3MkSn6_4Sn5_JAKF4bZ0.jpg?width=320&crop=smart&auto=webp&s=dbe42ef605bb6417e1e35582a72c5ecc295b080c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/8jJE3U22__Pf8BzPTwArgYz3MkSn6_4Sn5_JAKF4bZ0.jpg?width=640&crop=smart&auto=webp&s=6fd614f8bc9277ad4005dd9e8d89035c2a1a4bab', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/8jJE3U22__Pf8BzPTwArgYz3MkSn6_4Sn5_JAKF4bZ0.jpg?width=960&crop=smart&auto=webp&s=53f936df6f4544f4297c6f929cccdb82abb0e434', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/8jJE3U22__Pf8BzPTwArgYz3MkSn6_4Sn5_JAKF4bZ0.jpg?width=1080&crop=smart&auto=webp&s=ea79c846c6c02bef471ccf0e46b5c5de9e2db642', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/8jJE3U22__Pf8BzPTwArgYz3MkSn6_4Sn5_JAKF4bZ0.jpg?auto=webp&s=de298eec5e45697fbb1ea9924f1f3d8bd740b882', 'width': 1200}, 'variants': {}}]} |
I'm a returning user of ChatGPT and Claude, and I don't know much about other AIs. Can you tell me a little about the most powerful ones and their advantages? | 0 | I started using ChatGPT to translate texts. Then I met Claude, and I think it is the best model for this use, but as always, companies censor absurd things (fictional or minimally offensive content). So, I want to explore new alternatives. I would be grateful if someone could give me context on other models and what their strengths are. Thank you, and happy holidays! | 2024-12-30T00:12:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hp9ykx/im_a_returning_user_of_chatgpt_and_claude_i_dont/ | Deshidia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hp9ykx | false | null | t3_1hp9ykx | /r/LocalLLaMA/comments/1hp9ykx/im_a_returning_user_of_chatgpt_and_claude_i_dont/ | false | false | self | 0 | null
Custom Spotlight-style LLM Prompt Launcher on GNOME | 17 | 2024-12-30T00:12:51 | https://v.redd.it/81ewsokbov9e1 | Practical_Cover5846 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hp9z76 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/81ewsokbov9e1/DASHPlaylist.mpd?a=1738109585%2COTg0MzEyNzQ0MmRiNjU4YzA5ZDVhMDc5ZGFlMjJkMDBiNjM1ZTUxMmFkNzQyNjQ4ZjY1ZmRlM2I3ZDg1NDViMQ%3D%3D&v=1&f=sd', 'duration': 49, 'fallback_url': 'https://v.redd.it/81ewsokbov9e1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/81ewsokbov9e1/HLSPlaylist.m3u8?a=1738109585%2CZWIxMDI4NmY2YzYwMDA5MTUzOTExM2RkNjk1Y2QyZGNlNTExODBhNGMwMjYwOTdhYTAzOGNjYjVjOWQzM2FiZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/81ewsokbov9e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1hp9z76 | /r/LocalLLaMA/comments/1hp9z76/custom_spotlightstyle_llm_prompt_launcher_on_gnome/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'ZGdvamlwcXFvdjllMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZGdvamlwcXFvdjllMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=108&crop=smart&format=pjpg&auto=webp&s=4ea5c342c6e92c72c62b7fa7ac770477b9ecf513', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZGdvamlwcXFvdjllMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=216&crop=smart&format=pjpg&auto=webp&s=83df7b049480a16f61c8755102307786f26da08b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZGdvamlwcXFvdjllMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=320&crop=smart&format=pjpg&auto=webp&s=b3334a6264dcae4a5f492d45a833a60bfa934058', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZGdvamlwcXFvdjllMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=640&crop=smart&format=pjpg&auto=webp&s=8cf39ca079f01132d86c796e273c5891a0e0174e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZGdvamlwcXFvdjllMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=960&crop=smart&format=pjpg&auto=webp&s=2ab9738f447901bc7049ce410520b2478dc41ef0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZGdvamlwcXFvdjllMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=592f203c2274b193c933fe0fdfeb6c27073f6441', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZGdvamlwcXFvdjllMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?format=pjpg&auto=webp&s=f4ddff3750f2f91d53b6fdb53e1416a6e81d166d', 'width': 1920}, 'variants': {}}]} |
Fixed Llama-3_1-Nemotron-51B ggufs generate gibberish when prompt is longer than 4K tokens | 33 | Thanks to treebugg3 at HF reporting this bug.
[https://huggingface.co/ymcki/Llama-3\_1-Nemotron-51B-Instruct-GGUF/discussions/2](https://huggingface.co/ymcki/Llama-3_1-Nemotron-51B-Instruct-GGUF/discussions/2)
I spent the weekend fixing the bug. According to treebugg3, the updated Q4\_K\_M can now work with at least 20K tokens. I have made IQ3\_XXS, IQ3\_S, IQ3\_M, IQ4\_XS, Q4\_K\_M, Q5\_K\_M, and Q6\_K available. Please let me know if you want other quants.
Presumably, the 51B is a derivative of 3.1 70B, which has an effective context length of 64K according to Nvidia's RULER results. So I presume it can also work up to 64K, but I don't have the resources to test that limit. Please report back if you find otherwise.
This bug only affects convert\_hf\_to\_gguf.py, so after re-downloading there is no need to recompile your llama.cpp: any build from b4380 and up will load the new GGUFs. That means downstream software like koboldcpp and llama-cpp-python will just work with these new GGUFs when they pick up the llama.cpp b4380 code in the near future.
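Once your llama-cpp-python build includes the b4380 code, a quick long-context sanity check could look like this (the model path and context size are placeholders):

    from llama_cpp import Llama

    llm = Llama(model_path="Llama-3_1-Nemotron-51B-Instruct.Q4_K_M.gguf", n_ctx=20480)
    prompt = "Summarize the following in one sentence:\n" + "lorem ipsum " * 3000
    out = llm(prompt, max_tokens=64)
    print(out["choices"][0]["text"])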
I have since filed a PR for the fix:
[https://github.com/ggerganov/llama.cpp/pull/11008](https://github.com/ggerganov/llama.cpp/pull/11008)
If you want to generate your own quants without waiting for the PR to be finished, you can download convert\_hf\_to\_gguf.py from
[https://huggingface.co/ymcki/Llama-3\_1-Nemotron-51B-Instruct-GGUF/blob/main/convert\_hf\_to\_gguf.py](https://huggingface.co/ymcki/Llama-3_1-Nemotron-51B-Instruct-GGUF/blob/main/convert_hf_to_gguf.py)
Sorry for the inconvenience caused. But I am glad that this model is now finally able to achieve its full glory in the GGUF format.
| 2024-12-30T00:53:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hpatsr/fixed_llama3_1nemotron51b_ggufs_generate/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpatsr | false | null | t3_1hpatsr | /r/LocalLLaMA/comments/1hpatsr/fixed_llama3_1nemotron51b_ggufs_generate/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'AJ7NOWff6TP_Reo-x5PIkhLm5EG9YyiXpj0qO_lrhYI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/lTbAB_cdtx3Ygws0kwkaWneuDTkCc46cpbpSm8lhHPA.jpg?width=108&crop=smart&auto=webp&s=49da631c2333cf27a8da3a9199f0a54fad93d5ae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/lTbAB_cdtx3Ygws0kwkaWneuDTkCc46cpbpSm8lhHPA.jpg?width=216&crop=smart&auto=webp&s=c867142b8e39c162df4747a925a6874559ae29d0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/lTbAB_cdtx3Ygws0kwkaWneuDTkCc46cpbpSm8lhHPA.jpg?width=320&crop=smart&auto=webp&s=87dc9820c8e5ce2f59696e256ed505a93862a5c5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/lTbAB_cdtx3Ygws0kwkaWneuDTkCc46cpbpSm8lhHPA.jpg?width=640&crop=smart&auto=webp&s=b106c2fb1f53f1df0afdf43eab4a51e8857d4cb9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/lTbAB_cdtx3Ygws0kwkaWneuDTkCc46cpbpSm8lhHPA.jpg?width=960&crop=smart&auto=webp&s=ca1e8d4f53a0bc2d42b033f04dfc5baacd7617b3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/lTbAB_cdtx3Ygws0kwkaWneuDTkCc46cpbpSm8lhHPA.jpg?width=1080&crop=smart&auto=webp&s=5773053d37ec1cafeadfe81c990114cb4687eb09', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/lTbAB_cdtx3Ygws0kwkaWneuDTkCc46cpbpSm8lhHPA.jpg?auto=webp&s=e089ef6403d5b85fffd3a8e8215f49649d6640b4', 'width': 1200}, 'variants': {}}]} |
QWQ preview 32b has performance anxiety | 1 | [removed] | 2024-12-30T00:59:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hpay6t/qwq_preview_32b_has_performance_anxiety/ | ThinkExtension2328 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpay6t | false | null | t3_1hpay6t | /r/LocalLLaMA/comments/1hpay6t/qwq_preview_32b_has_performance_anxiety/ | false | false | self | 1 | null |
Amurex - An open source, self hostable, AI meeting Copilot | 11 | 2024-12-30T01:02:09 | https://github.com/thepersonalaicompany/amurex | stealthanthrax | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hpb049 | false | null | t3_1hpb049 | /r/LocalLLaMA/comments/1hpb049/amurex_an_open_source_self_hostable_ai_meeting/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'yzlaTN016dtcMjrTa5yQZYD9sQZX9el82ikfRYktAYk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n_emVt08PnnTD7HGq0xwoGP1lszeVEEjxCGhJsZdYx4.jpg?width=108&crop=smart&auto=webp&s=55ee831fff11859c7518f565f3cf004cc6b76d03', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/n_emVt08PnnTD7HGq0xwoGP1lszeVEEjxCGhJsZdYx4.jpg?width=216&crop=smart&auto=webp&s=42ca39f621c77bb5a7db3187856af15c9bb27a12', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/n_emVt08PnnTD7HGq0xwoGP1lszeVEEjxCGhJsZdYx4.jpg?width=320&crop=smart&auto=webp&s=b970268c9904b1f9ccb332927d7544ff16500bf2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/n_emVt08PnnTD7HGq0xwoGP1lszeVEEjxCGhJsZdYx4.jpg?width=640&crop=smart&auto=webp&s=9d2e14e102cf279109eec77e094e1df2e215b67c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/n_emVt08PnnTD7HGq0xwoGP1lszeVEEjxCGhJsZdYx4.jpg?width=960&crop=smart&auto=webp&s=ce8ac65a3603a212aca61e4fa130ae44ee1c7fec', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/n_emVt08PnnTD7HGq0xwoGP1lszeVEEjxCGhJsZdYx4.jpg?width=1080&crop=smart&auto=webp&s=ee6ef5dc6912edc9b6dc26b9ec6eaaff0454565d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/n_emVt08PnnTD7HGq0xwoGP1lszeVEEjxCGhJsZdYx4.jpg?auto=webp&s=4bc5544433ae7bcda5410cf14154837a6c4525f9', 'width': 1200}, 'variants': {}}]} |
Which personal assistant LLM are you using? | 1 | [removed] | 2024-12-30T01:09:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hpb5fg/which_personal_assistant_llm_are_you_using/ | _iordanis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpb5fg | false | null | t3_1hpb5fg | /r/LocalLLaMA/comments/1hpb5fg/which_personal_assistant_llm_are_you_using/ | false | false | self | 1 | null |
Recommendations for a complete noob. | 1 | [removed] | 2024-12-30T01:16:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hpbaep/recomendations_for_a_complete_noob/ | skeet_beet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpbaep | false | null | t3_1hpbaep | /r/LocalLLaMA/comments/1hpbaep/recomendations_for_a_complete_noob/ | false | false | self | 1 | null
No LM Runtime found for model format 'gguf'! | 1 | [removed] | 2024-12-30T01:22:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hpbesq/no_lm_runtime_found_for_model_format_gguf/ | klovys_santos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpbesq | false | null | t3_1hpbesq | /r/LocalLLaMA/comments/1hpbesq/no_lm_runtime_found_for_model_format_gguf/ | false | false | self | 1 | null |
What will be a good size of a dataset for fine-tuning a 7B model? | 1 | [removed] | 2024-12-30T01:25:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hpbh5q/what_will_be_a_good_size_of_a_dataset_for/ | Feeling_Constant_242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpbh5q | false | null | t3_1hpbh5q | /r/LocalLLaMA/comments/1hpbh5q/what_will_be_a_good_size_of_a_dataset_for/ | false | false | self | 1 | null |
What is your favorite chat or role play model? | 16 | I'm at the point of expanding storage because of the number of models I've downloaded to try. I'm having a great time between learning and chatting, but I'm curious: what are you guys' favorite models for just general chat, or role play? I don't mean NSFW in particular (not into that myself), but it goes along those lines. Once shipping finally comes through I'll have the fans I need to get my other GPUs back online to do training and customize characters more myself. I find building them quite fun. | 2024-12-30T01:36:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hpbp8o/what_is_your_favorite_chat_or_role_play_model/ | ziggo0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpbp8o | false | null | t3_1hpbp8o | /r/LocalLLaMA/comments/1hpbp8o/what_is_your_favorite_chat_or_role_play_model/ | false | false | self | 16 | null
What kind of models can I use with these specs, how fast would they be, and what would the context length be? | 1 | [removed] | 2024-12-30T01:52:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hpc15i/what_kind_of_models_can_i_use_with_these_specs/ | Hopeful_Client_3268 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpc15i | false | null | t3_1hpc15i | /r/LocalLLaMA/comments/1hpc15i/what_kind_of_models_can_i_use_with_these_specs/ | false | false | self | 1 | null |
Best PC build money can buy | 0 | Hi all, I am looking into building a local AI beast lab. Here are my requirements (from most important to least important):
- it should be able to run local llm and models
- it will have server-like uptime (24/7/365)
- it should support 10gbit connectivity
- it should allow me to run casual games at max performance
Given that this is my first build, my budget is ideally 5-10k (could be more if justified).
Do you have any recommendations? Is there anything specific I should really pay attention to?
Thank you in advance for any suggestion. | 2024-12-30T02:56:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hpd9rr/best_pc_build_money_can_buy/ | SnooSuggestions7655 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpd9rr | false | null | t3_1hpd9rr | /r/LocalLLaMA/comments/1hpd9rr/best_pc_build_money_can_buy/ | false | false | self | 0 | null |
LLM Book Generator - (With way too much effort for realism) | 0 | 2024-12-30T03:26:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hpduhw/llm_book_generator_with_way_too_much_effort_for/ | Toasty_Toms | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpduhw | false | null | t3_1hpduhw | /r/LocalLLaMA/comments/1hpduhw/llm_book_generator_with_way_too_much_effort_for/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Caw7QM83xDzHLkxTajHJj_r-tVSylCLb_5MI9S82jOc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PyxnNX55_MbwniGJnDxGAthwHwvubRRG7ECJi5yJos0.jpg?width=108&crop=smart&auto=webp&s=dfaeab8ad069331cf84f83fbdb8c6b4b71cc3201', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PyxnNX55_MbwniGJnDxGAthwHwvubRRG7ECJi5yJos0.jpg?width=216&crop=smart&auto=webp&s=5844d9208491180fcb161797ef03d30b8570a7c1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PyxnNX55_MbwniGJnDxGAthwHwvubRRG7ECJi5yJos0.jpg?width=320&crop=smart&auto=webp&s=f0fd3208220bed422abe592eff275a32e0c89966', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PyxnNX55_MbwniGJnDxGAthwHwvubRRG7ECJi5yJos0.jpg?width=640&crop=smart&auto=webp&s=529730e5ec84b3ddc9d087aa7d08c45b125dd421', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PyxnNX55_MbwniGJnDxGAthwHwvubRRG7ECJi5yJos0.jpg?width=960&crop=smart&auto=webp&s=96517d0babab4e0968ba05b6c2efbcfa2e22a701', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PyxnNX55_MbwniGJnDxGAthwHwvubRRG7ECJi5yJos0.jpg?width=1080&crop=smart&auto=webp&s=f2aafa567561e774e967417a1059ce574b590437', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PyxnNX55_MbwniGJnDxGAthwHwvubRRG7ECJi5yJos0.jpg?auto=webp&s=04d73e982e63489edf7779d74113daa08ed09d8e', 'width': 1200}, 'variants': {}}]} |
VidTok: A Family of Versatile and State-Of-The-Art Video Tokenizers | 20 |
VidTok is a cutting-edge family of video tokenizers that delivers state-of-the-art performance in both continuous and discrete tokenizations with various compression rates. VidTok incorporates several key advancements over existing approaches:
- ⚡️ **Efficient Architecture**. Separate spatial and temporal sampling reduces computational complexity without sacrificing quality.
- 🔥 **Advanced Quantization**. Finite Scalar Quantization (FSQ) addresses training instability and codebook collapse in discrete tokenization (see the sketch below).
- 💥 **Enhanced Training**. A two-stage strategy—pre-training on low-res videos and fine-tuning on high-res—boosts efficiency. Reduced frame rates improve motion dynamics representation.
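For context, FSQ replaces a learned codebook with per-channel rounding to a small fixed grid, so there are no codebook entries that can go unused. A minimal sketch of the idea (an illustration only, not VidTok's actual implementation; the per-channel level counts here are hypothetical):

```python
import torch

def fsq(z: torch.Tensor, levels=(8, 8, 8, 5, 5, 5)) -> torch.Tensor:
    """Finite Scalar Quantization: bound each latent channel, snap it to a
    fixed integer grid, and pass gradients straight through."""
    L = torch.tensor(levels, dtype=z.dtype, device=z.device)
    half = (L - 1) / 2
    bounded = torch.tanh(z) * half        # channel i lies in [-half_i, half_i]
    quantized = torch.round(bounded)      # snap to the grid; no codebook to collapse
    # forward pass uses `quantized`; backward pass sees the identity
    return bounded + (quantized - bounded).detach()
```

Because the "codebook" is just the implicit product grid of the per-channel levels, quantization stays stable without the auxiliary losses a learned codebook needs.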
VidTok, trained on a large-scale video dataset, outperforms previous models across all metrics, including PSNR, SSIM, LPIPS, and FVD.
Resources and technical documentation:
- [GitHub](https://github.com/microsoft/VidTok)
- [arXiv](https://arxiv.org/pdf/2412.13061) | 2024-12-30T03:35:13 | https://huggingface.co/microsoft/VidTok | Balance- | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hpe0ov | false | null | t3_1hpe0ov | /r/LocalLLaMA/comments/1hpe0ov/vidtok_a_family_of_versatile_and_stateoftheart/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'JiaRi7Sih-d5YpdxgrlR8GOGrOfRunninGvR64cl6zY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ExRB74s5xHDbNTPMIS83nNCVKk9YV-wekXM7hXyVi7s.jpg?width=108&crop=smart&auto=webp&s=5819ba947097069a9be3b43945e986a164357540', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ExRB74s5xHDbNTPMIS83nNCVKk9YV-wekXM7hXyVi7s.jpg?width=216&crop=smart&auto=webp&s=dfe0c35ce2215e8fc1ea4ad27f60ec930858b147', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ExRB74s5xHDbNTPMIS83nNCVKk9YV-wekXM7hXyVi7s.jpg?width=320&crop=smart&auto=webp&s=97ae59315428b3c9a73b4dcda155a28564c1893d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ExRB74s5xHDbNTPMIS83nNCVKk9YV-wekXM7hXyVi7s.jpg?width=640&crop=smart&auto=webp&s=4e7091d43825228e10097ba2ee744bc447cf9cf3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ExRB74s5xHDbNTPMIS83nNCVKk9YV-wekXM7hXyVi7s.jpg?width=960&crop=smart&auto=webp&s=91379301c46f62cbcd020d82dd5a9124fda01ce6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ExRB74s5xHDbNTPMIS83nNCVKk9YV-wekXM7hXyVi7s.jpg?width=1080&crop=smart&auto=webp&s=187087bf85e8737105b0600843aae9bb31e37382', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ExRB74s5xHDbNTPMIS83nNCVKk9YV-wekXM7hXyVi7s.jpg?auto=webp&s=77b0dc89222f52fb9d00eb5f099be9b0b7387daf', 'width': 1200}, 'variants': {}}]} |
i have concern using gui agent | 1 | [removed] | 2024-12-30T03:35:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hpe12h/i_have_concern_using_gui_agent/ | Glum_Ad7895 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpe12h | false | null | t3_1hpe12h | /r/LocalLLaMA/comments/1hpe12h/i_have_concern_using_gui_agent/ | false | false | self | 1 | null |
thought on gui agent? | 1 | [removed] | 2024-12-30T03:36:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hpe1ok/thought_on_gui_agent/ | Glum_Ad7895 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpe1ok | false | null | t3_1hpe1ok | /r/LocalLLaMA/comments/1hpe1ok/thought_on_gui_agent/ | false | false | self | 1 | null |
How to train LLM to write in in this format? | 1 | [removed] | 2024-12-30T03:42:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hpe5ey/how_to_train_llm_to_write_in_in_this_format/ | Lendoran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpe5ey | false | null | t3_1hpe5ey | /r/LocalLLaMA/comments/1hpe5ey/how_to_train_llm_to_write_in_in_this_format/ | false | false | self | 1 | null |
How to get llm to write properly | 1 | [removed] | 2024-12-30T03:43:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hpe68o/how_to_get_llm_to_write_properly/ | Lendoran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpe68o | false | null | t3_1hpe68o | /r/LocalLLaMA/comments/1hpe68o/how_to_get_llm_to_write_properly/ | false | false | self | 1 | null |
Built a local AI image tagger using local Llama 3.2 Vision + ChromaDB | 1 | [removed] | 2024-12-30T03:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hpegd2/built_a_local_ai_image_tagger_using_local_llama/ | bafil596 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpegd2 | false | null | t3_1hpegd2 | /r/LocalLLaMA/comments/1hpegd2/built_a_local_ai_image_tagger_using_local_llama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '7XtepANKWCEGolsNLkDtmJ6GC1OiNvbIJi2ZQlAw7gc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?width=108&crop=smart&auto=webp&s=d751b1e7ff574ba8cc731d752a29834defccb3e5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?width=216&crop=smart&auto=webp&s=1628d51438c7e206b4f53243ed1a3dae871904e0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?width=320&crop=smart&auto=webp&s=cae31134d558d3d442048e4a5bedfb37435c9f0a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?width=640&crop=smart&auto=webp&s=789cc7f2cf962d1b84a272bf6b4b48fef567737c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?width=960&crop=smart&auto=webp&s=61555da1b9ce180d0d4dab4efacac0cb629fbef0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?width=1080&crop=smart&auto=webp&s=58e1f2131b86ba436a9ea6818078821508ec6f25', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?auto=webp&s=46e70d09fa8876dca543827cfc450820d2829e3a', 'width': 1200}, 'variants': {}}]} |
Benchmark for iterative code improvement? Problems with deepseekv3 getting stuck in infinite loop. | 3 | I was hopeful about deepseek v3 given its benchmarks. However, my hope was to use it for agents, e.g. for the GAIA benchmark: [https://huggingface.co/spaces/gaia-benchmark/leaderboard](https://huggingface.co/spaces/gaia-benchmark/leaderboard)
Even on my very first try with deepseek v3, on this prompt: "Based upon today's date, write python code to plot TESLA and META stock price gains YTD vs. time per week, and save the plot to a file named 'stock\_gains.png'"
It got stuck in a repeated loop where, every time I fed it back the error, it just gave me back the same old code. Mind you, if this were just a cherry-picked case or my 100th try, I'd be concerned but not super worried, but this was my very first try. It reproduces both via the API and in their UI.
Sometimes, rarely, it does not get stuck in a loop, but most of the time it does. It also shouldn't take so many iterations to fix the problem; sonnet or gpt-4o or even llama 3.1 or 3.3 do not require so many iterations.
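For reference, a correct answer to that prompt is only a dozen lines; here is a sketch of what I'd expect back (assuming the `yfinance` and `matplotlib` packages, which is one way to solve it, not necessarily the model's):

```python
# Plot TSLA and META YTD gains per week and save the figure to stock_gains.png.
import datetime
import matplotlib.pyplot as plt
import yfinance as yf

today = datetime.date.today()
start = datetime.date(today.year, 1, 1)

# Daily closes YTD, resampled to weekly
closes = yf.download(["TSLA", "META"], start=start, end=today)["Close"]
weekly = closes.resample("W").last()

# Percent gain relative to the first trading day of the year
gains = (weekly / closes.iloc[0] - 1.0) * 100.0

gains.plot(title="TSLA vs META YTD gains", xlabel="Week", ylabel="YTD gain (%)")
plt.savefig("stock_gains.png")
```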
So my question is, apart from SWE bench that is kind of indirect, are there any benchmarks that test how good a mode is at responding to feedback as errors or human feedback? | 2024-12-30T04:01:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hpeian/benchmark_for_iterative_code_improvement_problems/ | pseudotensor1234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpeian | false | null | t3_1hpeian | /r/LocalLLaMA/comments/1hpeian/benchmark_for_iterative_code_improvement_problems/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'S8GTUAXvA7GMyPfRJ_oXAJk7fejzVRtbmebcjxM5XFA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?width=108&crop=smart&auto=webp&s=2e86c35f2a2b32268456ff03a96e23db22024c71', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?width=216&crop=smart&auto=webp&s=29114b746949fbb08b32a34adbd150ed79a88932', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?width=320&crop=smart&auto=webp&s=33d173bc8055708db5e1f1e3427b45579d000f58', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?width=640&crop=smart&auto=webp&s=a4e5dfced582c1799d647c0054b9ce4848ee6843', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?width=960&crop=smart&auto=webp&s=9e54747e70e84a68d407d1b70f4c2b6b24b5d2d9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?width=1080&crop=smart&auto=webp&s=e198b3d967bef01e84273e73e157051a03039aa4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GLJwzw5I8e9YdLYKroUiyPxlSlFf2vSMLNG843fIb3s.jpg?auto=webp&s=7f9d64c6f1908298014d785447c4d99b70d6472b', 'width': 1200}, 'variants': {}}]} |
10th December 2024: Cerebras Systems + US Energy Sandia National Labs have CLAIMED to demonstrate training of a 1 trillion parameter model on a single CS-3 system (!) This is ~1% the footprint & power of an equivalent GPU cluster. | 391 | https://x.com/CerebrasSystems/status/1867296161750536442?t=wU_lBuMzYLClIb7ja4sjvw&s=19
https://x.com/CerebrasSystems/status/1866911933196275994?t=wU_lBuMzYLClIb7ja4sjvw&s=19
https://cerebras.ai/press-release/cerebras-demonstrates-trillion-parameter-model-training-on-a-single-cs-3-system
https://x.com/SandiaLabs?t=7yRTp8-c5zXhEN23qEhXwA&s=09
https://x.com/ENERGY?t=wKYSIpl1fbfRFT6V_1kq6w&s=09 | 2024-12-30T04:03:40 | https://www.reddit.com/gallery/1hpejko | Personal-Dot-380 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hpejko | false | null | t3_1hpejko | /r/LocalLLaMA/comments/1hpejko/10th_december_2024_cerebras_systems_us_energy/ | false | false | 391 | null |
Use llama vision for image tagging and semantic querying | 1 | [removed] | 2024-12-30T04:21:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hpev3v/use_llama_vision_for_image_tagging_and_semantic/ | bafil596 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpev3v | false | null | t3_1hpev3v | /r/LocalLLaMA/comments/1hpev3v/use_llama_vision_for_image_tagging_and_semantic/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '7XtepANKWCEGolsNLkDtmJ6GC1OiNvbIJi2ZQlAw7gc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?width=108&crop=smart&auto=webp&s=d751b1e7ff574ba8cc731d752a29834defccb3e5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?width=216&crop=smart&auto=webp&s=1628d51438c7e206b4f53243ed1a3dae871904e0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?width=320&crop=smart&auto=webp&s=cae31134d558d3d442048e4a5bedfb37435c9f0a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?width=640&crop=smart&auto=webp&s=789cc7f2cf962d1b84a272bf6b4b48fef567737c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?width=960&crop=smart&auto=webp&s=61555da1b9ce180d0d4dab4efacac0cb629fbef0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?width=1080&crop=smart&auto=webp&s=58e1f2131b86ba436a9ea6818078821508ec6f25', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Zaq3_lbbSE0xX4pAzUU02tljVF70bcq5MDZaif6QQYs.jpg?auto=webp&s=46e70d09fa8876dca543827cfc450820d2829e3a', 'width': 1200}, 'variants': {}}]} |
My personal summary of Open Weight Foundation Models in 2024 | 1 | [removed] | 2024-12-30T04:59:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hpfjcr/my_personal_summary_of_open_weight_foundation/ | lly0571 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpfjcr | false | null | t3_1hpfjcr | /r/LocalLLaMA/comments/1hpfjcr/my_personal_summary_of_open_weight_foundation/ | false | false | self | 1 | null |
LLM for code summarization | 1 | [removed] | 2024-12-30T05:00:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hpfk73/llm_for_code_summarization/ | LessofaJudge-30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpfk73 | false | null | t3_1hpfk73 | /r/LocalLLaMA/comments/1hpfk73/llm_for_code_summarization/ | false | false | self | 1 | null |
Budget AKA poor man Local LLM. | 420 | I was looking to set up a local LLM, and when I looked at the prices of some of these Nvidia cards I almost lost my mind. So I decided to build a floating turd.
The build:
An ad on Marketplace for a CROSSHAIR V FORMULA-Z from Asus from many eons ago, with 4X Ballistix Sport 8GB Single DDR3 1600 MT/s (PC3-12800) (32GB total) and an AMD FX(tm)-8350 Eight-Core Processor, for 50 bucks. The only reason I considered this board was the 4 PCIe slots. I already had a case, PSU and a 1TB SSD.
On eBay, I found 2X P102-100 for 80 bucks. Why did I pick this card? Simple: memory bandwidth is king for LLM performance.
The memory bandwidth of the NVIDIA GeForce RTX 3060 depends on the memory interface and the amount of memory on the card:

- 8 GB card: 128-bit memory interface, peak memory bandwidth of 240 GB/s
- 12 GB card: 192-bit memory interface, peak memory bandwidth of 360 GB/s
- RTX 3060 Ti: 256-bit bus, memory bandwidth of 448 GB/s

4000 series cards:

- 4060 Ti: 128-bit, 288 GB/s bandwidth
- 4070: 192-bit, 480 GB/s bandwidth (or 504 GB/s if you get the good one)
The P102-100 has 10GB of RAM with a 320-bit memory bus and a memory bandwidth of 440.3 GB/s --> this is very important.
Prices range from 350 per card to 600 per card for the 4070, so roughly 700 to 1200 for two cards. So if all I need is memory bandwidth and cores to run my local LLM, why would I spend 1200, or even 700, when 80 bucks will do? Each P102-100 has 3200 cores and 440 GB/s of bandwidth. I figured why not, let's test it, and if I lose, it is only 80 bucks, as I would only need to buy better video cards. I am not writing novels and I don't need the precision of larger models; this is just my playground and this should be enough.
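If you want a quick sanity check on what bandwidth buys you, here is a rough back-of-envelope sketch (the bytes-per-parameter figure for q4\_K\_M is my approximation, and the result is an upper bound; real throughput lands well below it):

```python
# Decode speed is memory-bound: every generated token streams the whole
# quantized weight set from VRAM once, so tokens/s <= bandwidth / model size.

def max_tokens_per_sec(bandwidth_gb_s: float, params_billion: float,
                       bytes_per_param: float = 0.57) -> float:
    """q4_K_M averages roughly 4.5 bits per parameter, i.e. ~0.57 bytes."""
    model_gb = params_billion * bytes_per_param
    return bandwidth_gb_s / model_gb

for size_b in (7, 14, 27):
    print(f"{size_b}B model: <= {max_tokens_per_sec(440, size_b):.0f} TK/s theoretical")
```

Against the measured numbers below, the P102-100s land at roughly 35-45% of that bound, which is about what I'd expect from Pascal-era kernels plus the overhead of splitting layers across two cards.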
Total cost for the floating turd was 130 dollars. It runs Home Assistant, a faster-whisper model on GPU, Phi-4 for Assist, and llama3.2-3b for Music Assistant so I can say "play this song" in any room of my house. All this with response times of under 1 second, no OpenAI, and no additional cost to run, not even electricity, since it runs off my solar inverter.
The tests. All numbers have been rounded to the nearest whole number.
| Model | Tokens/s | Size |
|---|---|---|
| llama3.2:1b-instruct-q4\_K\_M | 112 TK/s | 1B |
| phi3.5:3.8b-mini-instruct-q4\_K\_M | 62 TK/s | 3.8B |
| mistral:7b-instruct-q4\_K\_M | 39 TK/s | 7B |
| llama3.1:8b-instruct-q4\_K\_M | 37 TK/s | 8B |
| mistral-nemo:12b-instruct-2407-q4\_K\_M | 26 TK/s | 12B |
| nexusraven:13b-q4\_K\_M | 24 TK/s | 13B |
| qwen2.5:14b-instruct-q4\_K\_M | 20 TK/s | 14B |
| vanilj/Phi-4:latest | 20 TK/s | 14.7B |
| phi3:14b-medium-4k-instruct-q4\_K\_M | 22 TK/s | 14B |
| mistral-small:22b-instruct-2409-q4\_K\_M | 14 TK/s | 22B |
| gemma2:27b-instruct-q4\_K\_M | 12 TK/s | 27B |
https://preview.redd.it/zao18b562x9e1.png?width=1236&format=png&auto=webp&s=f4c2248708662a13036a0295ed20c33e6534411d
https://preview.redd.it/f44113272x9e1.png?width=734&format=png&auto=webp&s=2a838068e1f03f65aabd4044a03679e18ee949c3
All I can say is, not bad for 130 bucks total, and the fact that I can run a 27B model at 12 TK/s is just the icing on the cake for me. Also, I forgot to mention that the cards are power limited to 150W via nvidia-smi (`nvidia-smi -pl 150`), so there is a little more performance on the table since these are 250W cards, but I like to run them cool and save on power.
Cons...
These cards suck for image generation; ComfyUI takes over 2 minutes to generate 1024x768. I mean, they don't suck, they are just slow for image generation. But how can anyone complain about image generation taking 2 minutes on 80 bucks of hardware? The fact that it works at all blows my mind. Obviously using FP8.
https://preview.redd.it/iutr1cnd8x9e1.png?width=1024&format=png&auto=webp&s=1e8aad7cee44c1965426164fb97900c472fba583
So if you are broke, it can be done for cheap. No need to spend thousands of dollars if you are just playing with it. $130 bucks, now that is a budget build. | 2024-12-30T05:28:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hpg2e6/budget_aka_poor_man_local_llm/ | Boricua-vet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpg2e6 | false | null | t3_1hpg2e6 | /r/LocalLLaMA/comments/1hpg2e6/budget_aka_poor_man_local_llm/ | false | false | 420 | null |
Has anyone tried to Quantize DeepSeek v3 with Low-Bit Quants like VPQ or HQQ? | 6 | If not I'm going to try and do it to see if it provides an option for those of us who want to run it locally | 2024-12-30T05:31:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hpg47x/has_anyone_tried_to_quantize_deepseek_v3_with/ | nite2k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpg47x | false | null | t3_1hpg47x | /r/LocalLLaMA/comments/1hpg47x/has_anyone_tried_to_quantize_deepseek_v3_with/ | false | false | self | 6 | null |
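For anyone wanting to try the HQQ route, recent transformers releases expose it through `HqqConfig`, so a first attempt could look like the sketch below. To be clear, this is an assumption-laden sketch: whether DeepSeek-V3's MoE architecture is actually supported by HQQ is exactly what would need testing, and the model id, bit width, and group size here are placeholders.

```python
# Sketch: on-the-fly HQQ quantization via transformers (assumes a recent
# transformers build with HQQ support and the `hqq` package installed;
# DeepSeek-V3 support is unverified and loading it needs enormous memory).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, HqqConfig

model_id = "deepseek-ai/DeepSeek-V3"  # hypothetical target

quant_config = HqqConfig(nbits=2, group_size=64)  # low-bit to shrink the footprint

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
    quantization_config=quant_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```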
Using llama3.2:1B | 1 | [removed] | 2024-12-30T05:54:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hpghwf/using_llama321b/ | SilverBoi01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpghwf | false | null | t3_1hpghwf | /r/LocalLLaMA/comments/1hpghwf/using_llama321b/ | false | false | self | 1 | null |
Curious what it would take to run Deepseek v3 locally | 1 | [removed] | 2024-12-30T06:06:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hpgp84/curious_what_it_would_take_to_run_deepseek_v3/ | robkkni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpgp84 | false | null | t3_1hpgp84 | /r/LocalLLaMA/comments/1hpgp84/curious_what_it_would_take_to_run_deepseek_v3/ | false | false | self | 1 | null |
Help me identify this 3090 PCB for waterblock choice | 1 | [removed] | 2024-12-30T06:20:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hpgx21/help_me_identify_this_3090_pcb_for_waterblock/ | 330d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpgx21 | false | null | t3_1hpgx21 | /r/LocalLLaMA/comments/1hpgx21/help_me_identify_this_3090_pcb_for_waterblock/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ymZ3uYw9RG_nzdcdoho4wWI7xUkPbdl7uIpL4SyuwHI', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/nlL47dsiZZDSelp92-hDNRMnEB-RNQReUFjNmt2IZjA.jpg?width=108&crop=smart&auto=webp&s=2dcb607e6ad45bb2a7006b4b57bdcc8b3b7a3883', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/nlL47dsiZZDSelp92-hDNRMnEB-RNQReUFjNmt2IZjA.jpg?width=216&crop=smart&auto=webp&s=1f8c36269f7954735237176f22fb39b298dbc174', 'width': 216}, {'height': 158, 'url': 'https://external-preview.redd.it/nlL47dsiZZDSelp92-hDNRMnEB-RNQReUFjNmt2IZjA.jpg?width=320&crop=smart&auto=webp&s=18b822be39f3a8afde16b1df4b06127c5d08924b', 'width': 320}, {'height': 317, 'url': 'https://external-preview.redd.it/nlL47dsiZZDSelp92-hDNRMnEB-RNQReUFjNmt2IZjA.jpg?width=640&crop=smart&auto=webp&s=b20ed26618befc89e261c3cf6c4cf422aa18d770', 'width': 640}, {'height': 475, 'url': 'https://external-preview.redd.it/nlL47dsiZZDSelp92-hDNRMnEB-RNQReUFjNmt2IZjA.jpg?width=960&crop=smart&auto=webp&s=0f9d050b2e6356b3f02a36848d3fd6ae663c8e16', 'width': 960}, {'height': 535, 'url': 'https://external-preview.redd.it/nlL47dsiZZDSelp92-hDNRMnEB-RNQReUFjNmt2IZjA.jpg?width=1080&crop=smart&auto=webp&s=2da574e58b5cd6420b0e4f6f32909c9c733a9b83', 'width': 1080}], 'source': {'height': 892, 'url': 'https://external-preview.redd.it/nlL47dsiZZDSelp92-hDNRMnEB-RNQReUFjNmt2IZjA.jpg?auto=webp&s=491535fa106b9c21bb5a210b2c0f0e4ef50b418c', 'width': 1800}, 'variants': {}}]} |
What's Your Favorite Open Source Model for Story Writing and Coding? | 1 | [removed] | 2024-12-30T06:25:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hpgzg4/whats_your_favorite_open_source_model_for_story/ | krigeta1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpgzg4 | false | null | t3_1hpgzg4 | /r/LocalLLaMA/comments/1hpgzg4/whats_your_favorite_open_source_model_for_story/ | false | false | self | 1 | null |
Story writing prompt which allows you to control various elements of the prompt. You can also use it without control and let the LLM control the elements. Also easy to add or remove the elements you want to control or not. | 1 | [removed] | 2024-12-30T06:41:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hph8id/story_writing_prompt_which_allows_you_to_control/ | Personal-Dot-380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hph8id | false | null | t3_1hph8id | /r/LocalLLaMA/comments/1hph8id/story_writing_prompt_which_allows_you_to_control/ | false | false | self | 1 | null |
[image processing failed] | 1 | [deleted] | 2024-12-30T06:55:13 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hphfs3 | false | null | t3_1hphfs3 | /r/LocalLLaMA/comments/1hphfs3/image_processing_failed/ | false | false | default | 1 | null |
DeepSeek V3 API No Free Tokens Granted | 1 | [removed] | 2024-12-30T06:59:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hphhvm/deepseek_v3_api_no_free_tokens_granted/ | Tawsyf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hphhvm | false | null | t3_1hphhvm | /r/LocalLLaMA/comments/1hphhvm/deepseek_v3_api_no_free_tokens_granted/ | false | false | self | 1 | null |
Sam Altman is taking veiled shots at DeepSeek and Qwen. He mad. | 1,722 | https://x.com/sama/status/1872664379608727589?t=T-p_FReVLZWdi_Jia0dZfg&s=19 | 2024-12-30T07:06:41 | Personal-Dot-380 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hphlz7 | false | null | t3_1hphlz7 | /r/LocalLLaMA/comments/1hphlz7/sam_altman_is_taking_veiled_shots_at_deepseek_and/ | false | false | 1,722 | {'enabled': True, 'images': [{'id': '6LTemv5217EjkcDibKyP-HNfGuXPLkO9enzwMyc2pSk', 'resolutions': [{'height': 128, 'url': 'https://preview.redd.it/lba9xu2mqx9e1.jpeg?width=108&crop=smart&auto=webp&s=92f6ddb2c6b6c779532e38fed3128562aad8c83b', 'width': 108}, {'height': 256, 'url': 'https://preview.redd.it/lba9xu2mqx9e1.jpeg?width=216&crop=smart&auto=webp&s=745f70ebc2ad102a920b346afb4a9b2bc1010af6', 'width': 216}, {'height': 379, 'url': 'https://preview.redd.it/lba9xu2mqx9e1.jpeg?width=320&crop=smart&auto=webp&s=79474b1e9420918f906e71f1a8718173501e73b6', 'width': 320}, {'height': 759, 'url': 'https://preview.redd.it/lba9xu2mqx9e1.jpeg?width=640&crop=smart&auto=webp&s=f00e6c10892514bbdcfe97ce880b0ffbc344e3db', 'width': 640}, {'height': 1139, 'url': 'https://preview.redd.it/lba9xu2mqx9e1.jpeg?width=960&crop=smart&auto=webp&s=07005b3b3bd4ea16d9d3ca6693875282b68bd108', 'width': 960}, {'height': 1282, 'url': 'https://preview.redd.it/lba9xu2mqx9e1.jpeg?width=1080&crop=smart&auto=webp&s=f372aaf4dfb98a25948ab273e583e2fc666ec8a9', 'width': 1080}], 'source': {'height': 1282, 'url': 'https://preview.redd.it/lba9xu2mqx9e1.jpeg?auto=webp&s=18c52580c65fc4a0f05e72b82386b265c83896e2', 'width': 1080}, 'variants': {}}]} |
running VLLM docker without mandatory model in POST request | 1 | How can I make vLLM not require the model name in every request?
I am running a vLLM container on a VM using the following command:
`docker run --runtime nvidia --gpus all -v /home/myname/_projects/models/:/mnt/model/ -p 8000:8000 --env "TRANSFORMERS_OFFLINE=1" --env "HF_DATASET_OFFLINE=1" --ipc=host vllm/vllm-openai:latest --model="/mnt/model/hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4" --max_model_len=50000 --gpu_memory_utilization=0.85`
The model is up and running, but whenever I try to do a POST request like this:
`curl -X POST "http://0.0.0.0:8000/v1/completions" -H "Content-Type: application/json" -d '{"prompt": "What is AI?", "max_tokens": 200}'`
it pops up an error that the model name is not provided. I know a solution is to also pass the model name in the `model` param, but why do I need to pass it if it is already being served in the container?
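In case it helps anyone else: the server won't infer the model for you, but since the endpoint is OpenAI-compatible you can fetch the served model name once from `/v1/models` and reuse it. A sketch using `requests` (vLLM also has a `--served-model-name` flag if you want a shorter alias than the path):

```python
# Sketch: look up the served model name once, then include it in every request.
import requests

base = "http://0.0.0.0:8000/v1"
model = requests.get(f"{base}/models").json()["data"][0]["id"]

resp = requests.post(
    f"{base}/completions",
    json={"model": model, "prompt": "What is AI?", "max_tokens": 200},
)
print(resp.json()["choices"][0]["text"])
```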
Many thanks, | 2024-12-30T07:10:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hphnt7/running_vllm_docker_without_mandatory_model_in/ | Puzzleheaded-Ad8442 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hphnt7 | false | null | t3_1hphnt7 | /r/LocalLLaMA/comments/1hphnt7/running_vllm_docker_without_mandatory_model_in/ | false | false | self | 1 | null |
How to chat with documents on a per user or per session level? Temporary data | 1 | [removed] | 2024-12-30T07:10:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hpho39/how_to_chat_with_documents_on_a_per_user_or_per/ | THenrich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hpho39 | false | null | t3_1hpho39 | /r/LocalLLaMA/comments/1hpho39/how_to_chat_with_documents_on_a_per_user_or_per/ | false | false | self | 1 | null |