Dataset schema (each record below lists these fields in this order):

title: string (length 1–300)
score: int64 (0–8.54k)
selftext: string (length 0–40k)
created: timestamp[ns] (2023-04-01 04:30:41 – 2025-06-30 03:16:29)
url: string (length 0–878)
author: string (length 3–20)
domain: string (length 0–82)
edited: timestamp[ns] (1970-01-01 00:00:00 – 2025-06-26 17:30:18)
gilded: int64 (0–2)
gildings: string (7 classes)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646–1.8k)
name: string (length 10)
permalink: string (length 33–82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4–213)
ups: int64 (0–8.54k)
preview: string (length 301–5.01k)
This is concering 👀
1
[removed]
2024-12-28T14:55:43
https://www.reddit.com/r/LocalLLaMA/comments/1ho794s/this_is_concering/
bengkoopa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho794s
false
null
t3_1ho794s
/r/LocalLLaMA/comments/1ho794s/this_is_concering/
false
false
https://b.thumbs.redditm…liUDyemWDH5o.jpg
1
null
Running embedding models on a low-end CPU
2
Hi, I was thinking about using my mini PC (Intel N97 with integrated Intel UHD graphics, working as a file server atm) for running certain UIs like Open WebUI and SillyTavern, and I wanted to know how usable it would be for running the embedding/reranking models or maybe even a small Whisper model. I'm guessing there is no support for the integrated GPU, but is it still usable with only that CPU? Any tips, etc.? I was planning to offload only the LLM and TTS models to my main PC.
2024-12-28T15:12:17
https://www.reddit.com/r/LocalLLaMA/comments/1ho7kzp/running_embedding_models_on_a_lowend_cpu/
nengon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho7kzp
false
null
t3_1ho7kzp
/r/LocalLLaMA/comments/1ho7kzp/running_embedding_models_on_a_lowend_cpu/
false
false
self
2
null
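A minimal sketch of the kind of CPU-only embedding setup the post above asks about, assuming the sentence-transformers library and a small model such as all-MiniLM-L6-v2 (neither is specified in the post):

```python
# Minimal CPU-only embedding sketch (assumes `pip install sentence-transformers`).
# Model choice and batch size are illustrative, not recommendations from the post.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2", device="cpu")

docs = ["Open WebUI stores documents here.", "SillyTavern lore entry."]
embeddings = model.encode(docs, batch_size=16, normalize_embeddings=True)
print(embeddings.shape)  # (2, 384) for this model
```

Small embedding models like this are typically usable on a low-power CPU; a Whisper model is heavier, and the LLM/TTS offloading the post describes would stay on the main PC.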
Created an AI clone to respond to AI recruiters — ElevenLabs Voice cloning API + Open WebUI Knowledge RAG + GPT-4o
1
2024-12-28T15:24:22
https://youtube.com/shorts/xRlRhcuP6CY?si=Fy4kEGOlcyD0qbNF
M0shka
youtube.com
1970-01-01T00:00:00
0
{}
1ho7tkk
false
null
t3_1ho7tkk
/r/LocalLLaMA/comments/1ho7tkk/created_an_ai_clone_to_respond_to_ai_recruiters/
false
false
default
1
null
Reverse Video Search
1
2024-12-28T15:24:52
https://blog.mixpeek.com/reverse-video-search/
Chemical_Ninja8678
blog.mixpeek.com
1970-01-01T00:00:00
0
{}
1ho7ty7
false
null
t3_1ho7ty7
/r/LocalLLaMA/comments/1ho7ty7/reverse_video_search/
false
false
https://a.thumbs.redditm…0Xn8iRLtWS20.jpg
1
{'enabled': False, 'images': [{'id': 'iPWJ3nr-1MikfVxZtpJN0mkjcIuMpENYWJNSMgTkB04', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/LrzDgpzLWbVajR29VZooC7HinonBrH6QnpXKZWiZSsY.jpg?width=108&crop=smart&auto=webp&s=0678ae819d6b9ff5e623344ca9a1e2cb5448e7d8', 'width': 108}, {'height': 81, 'url': 'https://external-preview.redd.it/LrzDgpzLWbVajR29VZooC7HinonBrH6QnpXKZWiZSsY.jpg?width=216&crop=smart&auto=webp&s=4d8998baa35df3da2018adcfe8a3aa8b3c94b2a9', 'width': 216}, {'height': 120, 'url': 'https://external-preview.redd.it/LrzDgpzLWbVajR29VZooC7HinonBrH6QnpXKZWiZSsY.jpg?width=320&crop=smart&auto=webp&s=eb41280914a335342115c1d0d3192a041a0fa858', 'width': 320}, {'height': 240, 'url': 'https://external-preview.redd.it/LrzDgpzLWbVajR29VZooC7HinonBrH6QnpXKZWiZSsY.jpg?width=640&crop=smart&auto=webp&s=79a6a0b285419791b6cfe99db0f84ab5a43aeef8', 'width': 640}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/LrzDgpzLWbVajR29VZooC7HinonBrH6QnpXKZWiZSsY.jpg?auto=webp&s=bea8b345e86edab356446bf7f99c96b295b4b973', 'width': 800}, 'variants': {}}]}
Congrats to LG & IBM for topping GPU-Poor LLM Arena!
122
https://preview.redd.it/…this impressive.
2024-12-28T15:40:58
https://www.reddit.com/r/LocalLLaMA/comments/1ho85zn/congrats_to_lg_ibm_for_topping_gpupoor_llm_arena/
phhusson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho85zn
false
null
t3_1ho85zn
/r/LocalLLaMA/comments/1ho85zn/congrats_to_lg_ibm_for_topping_gpupoor_llm_arena/
false
false
https://b.thumbs.redditm…kWjDiwuyo4vE.jpg
122
{'enabled': False, 'images': [{'id': 'Jn8Qu_vDoWZof-N9lLOzftuBNpRrHvtYkXkKQBL1A48', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?width=108&crop=smart&auto=webp&s=4c1f344aca5db7afdd71312c01538475aa7c9b7f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?width=216&crop=smart&auto=webp&s=fd176b4b97c51d6d90835f587373cdbf22506e0c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?width=320&crop=smart&auto=webp&s=2d3fe2b3a23a4750a0954ae952fe131f7586bc5a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?width=640&crop=smart&auto=webp&s=e7ff80e00d414dbc8a7a294fd4bb94410a536b19', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?width=960&crop=smart&auto=webp&s=8e17133b8be719bc18faf3053db5105f4a62a0e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?width=1080&crop=smart&auto=webp&s=aa0167c430ce0d125a18de31c140059ff6ba325d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nsHJxd_mlZNUh8efVuy5ZIqnFiZEOWaAxxmVSE0Sglc.jpg?auto=webp&s=d91168b0c864f09b57c49468b6985435f999aca0', 'width': 1200}, 'variants': {}}]}
What's the best model to run locally on phone for conversations?
2
Body
2024-12-28T15:55:24
https://www.reddit.com/r/LocalLLaMA/comments/1ho8gsu/whats_the_best_model_to_run_locally_on_phone_for/
IsDeathTheStart
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho8gsu
false
null
t3_1ho8gsu
/r/LocalLLaMA/comments/1ho8gsu/whats_the_best_model_to_run_locally_on_phone_for/
false
false
self
2
null
What is absolutely the fastest AI chip that money can buy today, in terms of Token/s?
0
Context: I'm a software developer trying to optimize my own personal workflow with AI. I've been using Sonnet-3.5 for coding throughout all of 2024, but I'm bottlenecked by its tokens-per-second speed. With DeepSeek's recent release and with LLaMA-3.3, I'm willing to invest in my own chip to run my models locally with a level of performance that isn't offered by providers like OpenAI and Anthropic. Question: What is the best AI accelerator chip that money can buy today, to run OSS models like Llama and DeepSeek?
2024-12-28T16:11:47
https://www.reddit.com/r/LocalLLaMA/comments/1ho8tqj/what_is_absolutely_the_fastest_ai_chip_that_money/
SrPeixinho
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho8tqj
false
null
t3_1ho8tqj
/r/LocalLLaMA/comments/1ho8tqj/what_is_absolutely_the_fastest_ai_chip_that_money/
false
false
self
0
null
RTX 5090 and 5080 pricing "rumors" (or rather, as listed by a chinese shop)
84
Well, it is ~2600 USD for the 5090 and ~1370 USD for the 5080. Seems believable and not unexpected when considering Nvidia's pricing habits, but also the expected performance of the 5090. Nvidia knows it will be used by AI enthusiasts, so not very dissimilar to the crypto craze I guess, though this time this is the price from the company and not the scalpers. Also, it might be the 5090D version since it's in China, but the regular one shouldn't be too different I guess. The 5080 would be a good deal for AI were it not for the 16GB VRAM. Regardless, happy tinkering and Happy Holidays as well. Sources: [https://wccftech.com/nvidia-geforce-rtx-5090-geforce-rtx-5080-pricing-surfaces-online/](https://wccftech.com/nvidia-geforce-rtx-5090-geforce-rtx-5080-pricing-surfaces-online/) [https://www.technetbooks.com/2024/12/nvidia-rtx-5080-and-5090-early-pricing.html](https://www.technetbooks.com/2024/12/nvidia-rtx-5080-and-5090-early-pricing.html)
2024-12-28T16:12:15
https://www.reddit.com/r/LocalLLaMA/comments/1ho8u3m/rtx_5090_and_5080_pricing_rumors_or_rather_as/
Mission_Bear7823
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho8u3m
false
null
t3_1ho8u3m
/r/LocalLLaMA/comments/1ho8u3m/rtx_5090_and_5080_pricing_rumors_or_rather_as/
false
false
self
84
{'enabled': False, 'images': [{'id': 'TVJkR71ymclADjsQ8v0jA-dCXdPhsL57fNv4cab-44c', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/YVT7y6vhYEdSiDvKlAIC4V4eVnvGl9DjtY4bHCmMvgs.jpg?width=108&crop=smart&auto=webp&s=e1810fa3fbdbf84de7450ff1f2515a2457dc634d', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/YVT7y6vhYEdSiDvKlAIC4V4eVnvGl9DjtY4bHCmMvgs.jpg?width=216&crop=smart&auto=webp&s=da0734aebc50b867d285e9b3050549642ac7f4ec', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/YVT7y6vhYEdSiDvKlAIC4V4eVnvGl9DjtY4bHCmMvgs.jpg?width=320&crop=smart&auto=webp&s=b7585291668b7b0d48c01941d12b08e21478650a', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/YVT7y6vhYEdSiDvKlAIC4V4eVnvGl9DjtY4bHCmMvgs.jpg?width=640&crop=smart&auto=webp&s=e59c2801dd9f790b5b077d823dd805b5908fa7ca', 'width': 640}, {'height': 544, 'url': 'https://external-preview.redd.it/YVT7y6vhYEdSiDvKlAIC4V4eVnvGl9DjtY4bHCmMvgs.jpg?width=960&crop=smart&auto=webp&s=43c5b5d3ee2149b972e7709e09e98e1738307f4d', 'width': 960}, {'height': 612, 'url': 'https://external-preview.redd.it/YVT7y6vhYEdSiDvKlAIC4V4eVnvGl9DjtY4bHCmMvgs.jpg?width=1080&crop=smart&auto=webp&s=35e3d826d10783f4021d535f16b49b6d07e0a487', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/YVT7y6vhYEdSiDvKlAIC4V4eVnvGl9DjtY4bHCmMvgs.jpg?auto=webp&s=d8ae48dfd15aa09b342953b339818805b0012e51', 'width': 2540}, 'variants': {}}]}
Frameworks or Open-Source Code for Enabling Communication Between Two Agents?
1
Couldn’t find much on this topic, so I’d really appreciate any pointers to projects or frameworks focused on enabling communication between AI agents. Suggestions on how this could be implemented are also welcome! Here are a few ideas I’ve been thinking about:

- Agent-to-Agent Communication: Agents should be able to communicate by calling each other's APIs. These APIs could mimic the structure of existing LLM APIs (e.g., OpenAI), which seem to be the standard.
- Authentication: Once communication is established, agents should authenticate each other. A simple approach could involve signing messages with private keys.
- Transaction Handling: Agents should be able to manage payments, including sending and receiving cryptocurrency, as well as confirming transactions.

Any feedback, suggestions, or resources would be great!
2024-12-28T16:38:07
https://www.reddit.com/r/LocalLLaMA/comments/1ho9dyr/frameworks_or_opensource_code_for_enabling/
estebansaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho9dyr
false
null
t3_1ho9dyr
/r/LocalLLaMA/comments/1ho9dyr/frameworks_or_opensource_code_for_enabling/
false
false
self
1
null
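A sketch of how the first two ideas in the post above could be combined; everything here is an assumption for illustration (the peer endpoint URL, the header names, and the use of Ed25519 signing via PyNaCl), not a reference to any existing framework, and payment handling is omitted:

```python
# Sketch: one agent calls another agent's OpenAI-style endpoint and signs the request body.
# Endpoint URL, header names, and key distribution are illustrative assumptions.
import json
import requests
from nacl.signing import SigningKey

signing_key = SigningKey.generate()                       # sender's private key
verify_key_hex = signing_key.verify_key.encode().hex()    # shared with peers out-of-band

payload = {
    "model": "agent-b",
    "messages": [{"role": "user", "content": "Please book the cheapest flight to Lisbon."}],
}
body = json.dumps(payload).encode()
signature = signing_key.sign(body).signature.hex()

resp = requests.post(
    "http://agent-b.local/v1/chat/completions",   # hypothetical peer agent endpoint
    data=body,
    headers={
        "Content-Type": "application/json",
        "X-Agent-Pubkey": verify_key_hex,         # receiver verifies body against this key
        "X-Agent-Signature": signature,
    },
    timeout=60,
)
print(resp.json())
```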
Anybody here play RimWorld? I made a mod that generates in-game dialogue using Ollama.
1
[removed]
2024-12-28T16:41:34
https://www.reddit.com/r/LocalLLaMA/comments/1ho9gn8/anybody_here_play_rimworld_i_made_a_mod_that/
Pseudo_Prodigal_Son
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ho9gn8
false
null
t3_1ho9gn8
/r/LocalLLaMA/comments/1ho9gn8/anybody_here_play_rimworld_i_made_a_mod_that/
false
false
self
1
{'enabled': False, 'images': [{'id': '-fheot695JnutbY8ReKRiMhXM3KdEFOmkLr5kz7D5KM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Jazl6hC42y4HcHR-HmRQbACIaabKher3y92tXlD9g8I.jpg?width=108&crop=smart&auto=webp&s=eff125a88670e7f0f462007fc9ed40b6a3e48a34', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Jazl6hC42y4HcHR-HmRQbACIaabKher3y92tXlD9g8I.jpg?width=216&crop=smart&auto=webp&s=25fc7ab3c9b1fd52702bcd8708f5f59945161fe6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Jazl6hC42y4HcHR-HmRQbACIaabKher3y92tXlD9g8I.jpg?width=320&crop=smart&auto=webp&s=840b8e1a756904518eabfa4a03760df6e24b536c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Jazl6hC42y4HcHR-HmRQbACIaabKher3y92tXlD9g8I.jpg?width=640&crop=smart&auto=webp&s=de825cc901a3a10b6278f42d020e9c9bc3c50c0e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Jazl6hC42y4HcHR-HmRQbACIaabKher3y92tXlD9g8I.jpg?width=960&crop=smart&auto=webp&s=57ed7d6efcdc3b18e16d9c15e115f4db9259905b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Jazl6hC42y4HcHR-HmRQbACIaabKher3y92tXlD9g8I.jpg?width=1080&crop=smart&auto=webp&s=31487ccdde96a1e043142c6db2bad8878df2b2e5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Jazl6hC42y4HcHR-HmRQbACIaabKher3y92tXlD9g8I.jpg?auto=webp&s=b889eab8873edf4cc5fb905efa543fc37a362762', 'width': 1200}, 'variants': {}}]}
Never ask /r/LocalLLaMA how to buy AI accelerators
1
[removed]
2024-12-28T17:17:30
https://www.reddit.com/r/LocalLLaMA/comments/1hoa9cn/never_ask_rlocalllama_how_to_buy_ai_accelerators/
SrPeixinho
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoa9cn
false
null
t3_1hoa9cn
/r/LocalLLaMA/comments/1hoa9cn/never_ask_rlocalllama_how_to_buy_ai_accelerators/
false
false
self
1
{'enabled': False, 'images': [{'id': 'PboujFz2lYrbQwgIjl5TovGp3WGFTdmLUnx6AADkf64', 'resolutions': [{'height': 113, 'url': 'https://external-preview.redd.it/9cfldXg9q3mFY1sUF6wp5TwccUYNZtW2p1-q4ixCYkQ.jpg?width=108&crop=smart&auto=webp&s=8a58ea5fb17e1d94039a3da0bd10f9dd3f314c71', 'width': 108}, {'height': 226, 'url': 'https://external-preview.redd.it/9cfldXg9q3mFY1sUF6wp5TwccUYNZtW2p1-q4ixCYkQ.jpg?width=216&crop=smart&auto=webp&s=72c4ca79034062ed469efea872adc93b6c4491fb', 'width': 216}, {'height': 335, 'url': 'https://external-preview.redd.it/9cfldXg9q3mFY1sUF6wp5TwccUYNZtW2p1-q4ixCYkQ.jpg?width=320&crop=smart&auto=webp&s=33c5b364e59530d89fa195525a0bcdf15bc1e2a4', 'width': 320}, {'height': 671, 'url': 'https://external-preview.redd.it/9cfldXg9q3mFY1sUF6wp5TwccUYNZtW2p1-q4ixCYkQ.jpg?width=640&crop=smart&auto=webp&s=fa2b61616d899333aef0d19eac18a424eceed7f3', 'width': 640}, {'height': 1007, 'url': 'https://external-preview.redd.it/9cfldXg9q3mFY1sUF6wp5TwccUYNZtW2p1-q4ixCYkQ.jpg?width=960&crop=smart&auto=webp&s=9e2bd602236f06da9fbcb910689ba93130ec5954', 'width': 960}, {'height': 1133, 'url': 'https://external-preview.redd.it/9cfldXg9q3mFY1sUF6wp5TwccUYNZtW2p1-q4ixCYkQ.jpg?width=1080&crop=smart&auto=webp&s=2f8e3a8693af825fb1b19d9c219ec62deb5049cc', 'width': 1080}], 'source': {'height': 1496, 'url': 'https://external-preview.redd.it/9cfldXg9q3mFY1sUF6wp5TwccUYNZtW2p1-q4ixCYkQ.jpg?auto=webp&s=e62b24af3f0d05396fd03055eac25e1ab8034176', 'width': 1426}, 'variants': {}}]}
Interpretability wonder: Mapping the latent space of Llama 3.3 70B
49
Goodfire trained Sparse Autoencoders (SAEs) on Llama 3.3 70B and made the interpreted model available via a public API. This breakthrough allows researchers and developers to explore and manipulate the model's latent space, enabling deeper research and new product development. Using DataMapPlot, they created an interactive visualization that reveals how certain features, like special formatting tokens or repetitive chat elements, form distinct clusters in the latent space. For instance, clusters were identified for biomedical knowledge, physics, programming, name abstractions, and phonetic features. The team also demonstrated how latent manipulation can steer the model’s behavior. With the AutoSteer feature, it’s possible to automatically select and adjust latents to achieve desired behaviors. For example, when asking about the Andromeda galaxy with increasing steering intensity, the model gradually adopts a pirate-style speech at 0.4 intensity and fully transitions to this style at 0.5. However, stronger adjustments can degrade the factual accuracy of responses. This work provides a powerful tool for understanding and controlling advanced language models, offering exciting possibilities for interpreting and manipulating their internal representations. For more details, check out the full article at Goodfire Papers: [goodfire.ai](https://www.goodfire.ai/papers/mapping-latent-spaces-llama/?utm_source=chatgpt.com)
2024-12-28T17:18:09
https://www.reddit.com/r/LocalLLaMA/comments/1hoa9ut/interpretability_wonder_mapping_the_latent_space/
Temp3ror
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoa9ut
false
null
t3_1hoa9ut
/r/LocalLLaMA/comments/1hoa9ut/interpretability_wonder_mapping_the_latent_space/
false
false
self
49
null
So, the benchmarks for the Deepseek v3 version..
0
They are for the "Base" version, right? As opposed to the "Deepthink" version..?
2024-12-28T17:25:16
https://www.reddit.com/r/LocalLLaMA/comments/1hoafil/so_the_benchmarks_for_the_deepseek_v3_version/
Mission_Bear7823
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoafil
false
null
t3_1hoafil
/r/LocalLLaMA/comments/1hoafil/so_the_benchmarks_for_the_deepseek_v3_version/
false
false
self
0
null
Browser extension to summarize HN comments using local and cloud based AI models
1
[removed]
2024-12-28T17:50:34
https://www.reddit.com/r/LocalLLaMA/comments/1hoazka/browser_extension_to_summarize_hn_comments_using/
georgeck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoazka
false
null
t3_1hoazka
/r/LocalLLaMA/comments/1hoazka/browser_extension_to_summarize_hn_comments_using/
false
false
self
1
{'enabled': False, 'images': [{'id': 'qjve_trd9anvphES8EV95WaluziZiFKhF2bdVpBwP-4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/mORLiGbqUqW8fszU-6_sMCgUSw7VD1AAi3knYaQKHZQ.jpg?width=108&crop=smart&auto=webp&s=f01862580431d135da0744a0cff09cce94b1109b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/mORLiGbqUqW8fszU-6_sMCgUSw7VD1AAi3knYaQKHZQ.jpg?width=216&crop=smart&auto=webp&s=4c4dff97ea4f0942fcd0caba19070422ad19d868', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/mORLiGbqUqW8fszU-6_sMCgUSw7VD1AAi3knYaQKHZQ.jpg?width=320&crop=smart&auto=webp&s=93f3d7ce7666ff4c7f0f2259a1efc16ef2ca1b73', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/mORLiGbqUqW8fszU-6_sMCgUSw7VD1AAi3knYaQKHZQ.jpg?auto=webp&s=2971f51b8befdf11a8abee1186a9faa62da79535', 'width': 480}, 'variants': {}}]}
PSA about compiling Llama.cpp Cuda on Windows
1
It all started when the binaries started to give false positives on Windows Defender, so I began compiling builds myself. These are the commands I was using: `cmake -B build -DGGML_CUDA=ON -DGGML_CUDA_FA_ALL_QUANTS=ON -DCMAKE_CXX_FLAGS="-O3 -flto" -DCMAKE_C_FLAGS="-O3 -flto"` `cmake --build build --config Release -j 28` It worked very well, but during inference I always had 0% CPU usage on 55 cores and 80% on one single core. It annoyed me, so I went on a crusade. Long story short, I now use Visual Studio 2022 Enterprise (I don't know if it changes anything vs Community, but that's what I've got). Now during inference I get around 4 cores at 15-20% usage and a few more below 10%. Tps is still about the same, time to first token is faster, but the numbers are all over the place and I didn't do a thorough analysis. Compile time is much, much faster: cmake loads the cores to about 75%, while Visual Studio loads all 56 cores to 100%. Also, when you open the .sln and go into project settings, there are a lot of optimisation settings left at default or set to no optimisation. And under Code Generation you are supposed to enter the compute capability for your GPUs; it's set by default to 52, which deprives you of a lot of benefits according to GPT. For me, the right settings for 2x P40 and 1x A2000 were: `compute_61, sm_61, compute_86, sm_86` So if any of you want to do the same: prepare the build folder with the first command I wrote (`cmake -B build -DGGML_CU`...), but do not run the second command. Instead, go into the build folder and open llama.cpp.sln with Visual Studio. Then select Release instead of Debug, change the build settings, and build the project.
2024-12-28T18:00:03
https://www.reddit.com/r/LocalLLaMA/comments/1hob709/psa_about_compiling_llamacpp_cuda_on_windows/
DrVonSinistro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hob709
false
null
t3_1hob709
/r/LocalLLaMA/comments/1hob709/psa_about_compiling_llamacpp_cuda_on_windows/
false
false
self
1
null
It's been a while since Google brought anything new to opensource
142
Sometimes I catch myself remembering when Google launched the ancient Gemma 2, at that time humanity was different, and to this day generations and generations dream of the coming of the long-awaited Gemma 3.
2024-12-28T18:18:11
https://www.reddit.com/r/LocalLLaMA/comments/1hoblvh/its_been_a_while_since_google_brought_anything/
thecalmgreen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoblvh
false
null
t3_1hoblvh
/r/LocalLLaMA/comments/1hoblvh/its_been_a_while_since_google_brought_anything/
false
false
self
142
null
Why are bad models like Llama and Phi released more often?
0
Something I've been wondering: why do terrible models like Llama and Phi receive updates more often than other, good open-source models?
2024-12-28T18:20:17
https://www.reddit.com/r/LocalLLaMA/comments/1hobnhp/why_are_bad_models_like_llama_and_phi_released/
Existing_Freedom_342
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hobnhp
false
null
t3_1hobnhp
/r/LocalLLaMA/comments/1hobnhp/why_are_bad_models_like_llama_and_phi_released/
false
false
self
0
null
Open AI model for archichture
1
[removed]
2024-12-28T18:38:23
https://www.reddit.com/r/LocalLLaMA/comments/1hoc232/open_ai_model_for_archichture/
Handcraft-IT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoc232
false
null
t3_1hoc232
/r/LocalLLaMA/comments/1hoc232/open_ai_model_for_archichture/
false
false
self
1
null
Standalone microservice
1
[removed]
2024-12-28T18:50:38
https://www.reddit.com/r/LocalLLaMA/comments/1hocc0k/standalone_microservice/
tashazzi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hocc0k
false
null
t3_1hocc0k
/r/LocalLLaMA/comments/1hocc0k/standalone_microservice/
false
false
self
1
{'enabled': False, 'images': [{'id': '6Xpcy7-vK5jANsgaeubPknAWEwrQe9lVpwXjwTq4ep4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?width=108&crop=smart&auto=webp&s=3c2fbd60404e8ed4f19688280a3d3c57f5c0dc8b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?width=216&crop=smart&auto=webp&s=9d8c1c9129a107fbd39ddf064835ad6b559e0f4c', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?width=320&crop=smart&auto=webp&s=67c7f9fd7dd1781e22e70eacdb7482636b0f1e52', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?width=640&crop=smart&auto=webp&s=52c2c314997566a69490207ad235f61b8e4aad9e', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?width=960&crop=smart&auto=webp&s=ef0bfa46ea4eb68e5188f7b3f4feb6b2b85a6fa7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?width=1080&crop=smart&auto=webp&s=332e6d0312fbb86dc639f8ed24ea41a0aa811929', 'width': 1080}], 'source': {'height': 1896, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?auto=webp&s=c7529d662fdeb9c77805dcb812a85757cff80114', 'width': 3372}, 'variants': {}}]}
How to Instantly Change Clothes Using Comfy UI | Step-by-Step AI Tutorial Workflow
1
2024-12-28T19:02:13
https://youtu.be/bULEDrJWxrY
Consistent-Tax-758
youtu.be
1970-01-01T00:00:00
0
{}
1hoclfd
false
{'oembed': {'author_name': 'AI Verse', 'author_url': 'https://www.youtube.com/@Ai-Verse11', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/bULEDrJWxrY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="How to Instantly Change Clothes Using Comfy UI | Step-by-Step AI Tutorial Workflow"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/bULEDrJWxrY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'How to Instantly Change Clothes Using Comfy UI | Step-by-Step AI Tutorial Workflow', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1hoclfd
/r/LocalLLaMA/comments/1hoclfd/how_to_instantly_change_clothes_using_comfy_ui/
false
false
https://a.thumbs.redditm…jfX3ywIvZOB0.jpg
1
{'enabled': False, 'images': [{'id': 'tJboQLPJYuGdShl_vpwmuVDWoH2toAJ-IzZK1MLmRx8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/MsGmJuyB9xJG93N0XReMLy1W_teuRxu0eT5iiwdpZ5I.jpg?width=108&crop=smart&auto=webp&s=95ec27d159f02c464b28160347f40d9c4b1af07b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/MsGmJuyB9xJG93N0XReMLy1W_teuRxu0eT5iiwdpZ5I.jpg?width=216&crop=smart&auto=webp&s=30a9e719d00400672c054edef005becbd2bf1cde', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/MsGmJuyB9xJG93N0XReMLy1W_teuRxu0eT5iiwdpZ5I.jpg?width=320&crop=smart&auto=webp&s=7bf418d69782517b81f66bd40d0d18925fa0d892', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/MsGmJuyB9xJG93N0XReMLy1W_teuRxu0eT5iiwdpZ5I.jpg?auto=webp&s=28468dbc22df02d163aa045851cfcb15e9abd9a8', 'width': 480}, 'variants': {}}]}
Is it worth putting 1gig of memory in a server
0
I have a server I don't use, it uses DDR3 memory. I could pretty cheaply put 1g of memory in it. Would it be worth doing this? Would I be able to run DeepSeek v3 on it at a decent speed?
2024-12-28T19:10:46
https://www.reddit.com/r/LocalLLaMA/comments/1hocsgs/is_it_worth_putting_1gig_of_memory_in_a_server/
PositiveEnergyMatter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hocsgs
false
null
t3_1hocsgs
/r/LocalLLaMA/comments/1hocsgs/is_it_worth_putting_1gig_of_memory_in_a_server/
false
false
self
0
null
Is it worth putting 1TB of RAM in a server to run DeepSeek V3
133
I have a server I don't use; it uses DDR3 memory. I could pretty cheaply put 1TB of memory in it. Would it be worth doing this? Would I be able to run DeepSeek V3 on it at a decent speed? It is a dual E3 server. Reposting this since I accidentally said GB instead of TB before.
2024-12-28T19:25:38
https://www.reddit.com/r/LocalLLaMA/comments/1hod44a/is_it_worth_putting_1tb_of_ram_in_a_server_to_run/
PositiveEnergyMatter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hod44a
false
null
t3_1hod44a
/r/LocalLLaMA/comments/1hod44a/is_it_worth_putting_1tb_of_ram_in_a_server_to_run/
false
false
self
133
null
What are some good papers/ methods to improve grounding results from VLMs?
1
Wanted to know the papers that you have come across which discusses on improving detection results from VLMs.
2024-12-28T19:28:36
https://www.reddit.com/r/LocalLLaMA/comments/1hod6hd/what_are_some_good_papers_methods_to_improve/
SouvikMandal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hod6hd
false
null
t3_1hod6hd
/r/LocalLLaMA/comments/1hod6hd/what_are_some_good_papers_methods_to_improve/
false
false
self
1
null
Quick guide on how to use DeepSeek-V3 model with Cline
1
[removed]
2024-12-28T19:39:02
https://www.reddit.com/r/LocalLLaMA/comments/1hodeqj/quick_guide_on_how_to_use_deepseekv3_model_with/
M0shka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hodeqj
false
null
t3_1hodeqj
/r/LocalLLaMA/comments/1hodeqj/quick_guide_on_how_to_use_deepseekv3_model_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'fpxEZxMue3G_DwKIFiG6MUJzbRpt0TDpM4VQSPsn2WQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/4Ml606ZSD4-HWF0ScXgemb_xCVO_c-WnaYXAoePA-JE.jpg?width=108&crop=smart&auto=webp&s=f52984040ad13d327c1bca9b3b41a67bfa2da28a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/4Ml606ZSD4-HWF0ScXgemb_xCVO_c-WnaYXAoePA-JE.jpg?width=216&crop=smart&auto=webp&s=decc78125fc1b690b59985ebca87e0fbb89a12da', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/4Ml606ZSD4-HWF0ScXgemb_xCVO_c-WnaYXAoePA-JE.jpg?width=320&crop=smart&auto=webp&s=c44dc950617f67e6e2be3445a6cb05a2b21290a1', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/4Ml606ZSD4-HWF0ScXgemb_xCVO_c-WnaYXAoePA-JE.jpg?auto=webp&s=5dcf2908c741b98fd58a1f2ccbf1f054330729d3', 'width': 480}, 'variants': {}}]}
r/LocalLLaMA - a year in review
1
[removed]
2024-12-28T19:51:27
https://www.reddit.com/r/LocalLLaMA/comments/1hodoin/rlocalllama_a_year_in_review/
Everlier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hodoin
false
null
t3_1hodoin
/r/LocalLLaMA/comments/1hodoin/rlocalllama_a_year_in_review/
false
false
self
1
null
Largest model I can run on 32gb M3 Max
0
Have a MacBook Pro M3 Max, 30-core GPU with 32GB of RAM, and I'm trying to figure out the largest model I can run on the laptop; sorry if this has been answered before.
2024-12-28T20:01:31
https://www.reddit.com/r/LocalLLaMA/comments/1hodwkn/largest_model_i_can_run_on_32gb_m3_max/
Y2KM3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hodwkn
false
null
t3_1hodwkn
/r/LocalLLaMA/comments/1hodwkn/largest_model_i_can_run_on_32gb_m3_max/
false
false
self
0
null
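Rough back-of-the-envelope math for the question above; the bits-per-weight figures, the usable-memory fraction, and the example models are assumptions for illustration, not measurements:

```python
# Estimate which GGUF quants fit in 32 GB of unified memory (very rough sketch).
# macOS reserves memory for itself; assume ~70% is usable for weights + context.
usable_gb = 32 * 0.70

def gguf_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate file size in GB, ignoring KV cache and runtime overhead."""
    return params_b * bits_per_weight / 8

for name, params in [("Llama 3.1 8B", 8), ("Qwen2.5 32B", 32), ("Llama 3.3 70B", 70)]:
    for quant, bpw in [("Q4_K_M", 4.8), ("Q8_0", 8.5)]:
        size = gguf_size_gb(params, bpw)
        verdict = "fits" if size < usable_gb else "too big"
        print(f"{name:14s} {quant}: ~{size:5.1f} GB -> {verdict}")
```

Under these assumptions, ~30B-class models at Q4 are roughly the practical ceiling on a 32 GB M3 Max, while 70B-class models do not fit even at Q4.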
I need a quick help please
1
[removed]
2024-12-28T20:03:22
https://www.reddit.com/r/LocalLLaMA/comments/1hody38/i_need_a_quick_help_please/
Abject-Web-1464
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hody38
false
null
t3_1hody38
/r/LocalLLaMA/comments/1hody38/i_need_a_quick_help_please/
false
false
self
1
null
I need help please
1
[removed]
2024-12-28T20:05:48
https://www.reddit.com/r/LocalLLaMA/comments/1hoe02r/i_need_help_please/
Abject-Web-1464
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoe02r
false
null
t3_1hoe02r
/r/LocalLLaMA/comments/1hoe02r/i_need_help_please/
false
false
self
1
null
Community AutoMod
1
[removed]
2024-12-28T20:09:29
https://www.reddit.com/r/LocalLLaMA/comments/1hoe2w2/community_automod/
Everlier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoe2w2
false
null
t3_1hoe2w2
/r/LocalLLaMA/comments/1hoe2w2/community_automod/
false
false
self
1
null
DeepSeekV3 vs Claude-Sonnet vs o1-Mini vs Gemini-ept-1206, tested on real world scenario
174
As a long term Sonnet user, I spent some time looking over the fence at the other models waiting to help me with coding, and I'm glad I did.

# The experiment

I've got a Christmas holiday project running here: making a better Google Home / Alexa. For this, I needed a feature, and I created the feature 4 times to see how the different models perform. The feature is an integration of LLM memory, so I can say "I don't like eggs, remember that", and then it won't give me recipes with eggs anymore. This is the prompt I gave all 4 of them:

`We need a new azure functions project that acts as a proxy for storing information in an azure table storage.` `As parameters we need the text of the information and a tablename. Use the connection string in the "StorageConnectionString" env var. We need to add, delete and readall memories in a table.` `After that is done help me to deploy the function with the "az" cli tool.` `After that, add a tool to store memories in @/BlazorWasmMicrophoneStreaming/Services/Tools/ , see the other tools there to know how to implement that. Then, update the AiAccessService.cs file to inject the memories into the system prompt.`

(For those interested in the details: this is a Blazor WASM .net app that needs a proxy to access the table storage for storing memories, since accessing the storage from WASM directly is a fuggen pain. It's a function because, as a hobby project, I minimize costs as much as possible.)

The development is done with the CLINE extension of VSCode. The challenges to solve:

1) Does the model adhere to the custom instructions I put into the editor? https://preview.redd.it/q1kclg0atm9e1.png?width=410&format=png&auto=webp&s=2e91a73756eba3e23dc55131adcb8079c0f78f21

2) Is the most up to date version of the package chosen?

3) Are files and implementations found by mentioning them without a direct pointer?

4) Are all 3 steps (create a project, deploy a project, update an existing bigger project) executed?

5) Is the implementation technically correct?

6) Cost efficiency: are there unnecessary loops?

Note that I am not gunning for 100% perfect code in one shot. I let LLMs do the grunt work and put in the last 10% of effort myself. Additionally, I checked how long it took to reach the final solution and how much money went down the drain in the meantime. The field reports on how each model reached its goal (or did not even do that) are below.

https://preview.redd.it/je306buwan9e1.png?width=674&format=png&auto=webp&s=1376558b37c89b6e1ee0cb6f2549a7908aa02e18

# Sonnet

Claude-3-5-sonnet worked out solid as always. The VS Code extension and my experience grew with it, so there is no surprise that there was no surprise here. Claude did not ask me questions though: it wanted to create resources in Azure that were already there instead of asking whether I wanted to reuse an existing resource. Problems arising in the code and in the CLI were discovered and fixed automatically. Also impressive: Sonnet prefilled the URL of the tool after the deployment from the deployment output. One negative thing though: for my hobby projects I am just a regular peasant, capacity wise (compared to my professional life, where tokens go brrrr without mercy), which means I depend on the lowest Anthropic API tier. Here I hit the limit after roughly 20 cents already, forcing me to switch to OpenRouter. The transition to OpenRouter is not seamless though, probably because the cache the Anthropic API had built up is now missing. Also, the cost calculation goes wrong as soon as we switch to OpenRouter: while Cline says 60 cents were used, the OpenRouter statistics actually say 2.1$.

# Gemini

After some people were enthusiastic about the new exp models from Google, I wanted to give them a try as well. I am still not sure I chose the best contender with gemini-experimental though. Maybe some flash version would have been better? Please let me know. This was the slowest of the bunch with 20 minutes from start to finish. But it also asked me the most questions. Right at the creation of the project it asked me about the runtime to use; no other model did that. It took it 3 tries to create the bare project, but it succeeded in the end. Gemini insisted on creating multiple files for each of the CRUD actions. That's fair I guess, but not really necessary (don't be offended, SOLID principle believers). Gemini did a good job of already anticipating the deployment by using the config file for the ENV var. That was cool. After completing 2 of 3 tasks the token limit was reached though, and I had to do the deployment in a different task. That's a prompting issue for sure, but it does not allow for the same amount of laziness as the other models. 24 hours after the experiment the Google console still has not synced up with Google's AI Studio, so I have no idea how much money it cost me. 1 cent? 100$? No one knows. Boo, Google.

# o1-mini

o1-mini started out promising with a flawless setup of the project and had good initial code in it, using multiple files like Gemini did. Unlike Gemini, however, it was painfully slow, so having multiple files felt bad. o1-mini also boldly assumed that it had to create a resource group for me, and tried to do so on a different continent. o1-mini then decided to use the wrong package for access to the storage. After I intervened and told it the right package name, it was already 7 minutes into trying to publish the project for deployment. That is also when an 8-minute fixing rage started which destroyed more than what was gained from it. After 8 minutes it thought it should downgrade the .NET version to get it working, at which point I stopped the whole ordeal. o1-mini failed, and cost me 2.2$ while doing it.

# Deepseek

I ran the experiment with Deepseek twice: first through OpenRouter, because the official Deepseek website had a problem, and then the next day again with the official Deepseek API. Curiously, running through OpenRouter and the Deepseek API were different experiences. Going through OR, it was dumber. It wanted to delete code and not replace it. It got caught up in duplicating files. It was a mess. After a while it even stopped working completely on OpenRouter. In contrast, going through the Deepseek API was a joyride. It all went smoothly, and the code was looking good. Only at the deployment did it get weird: Deepseek tried to do a manual zip deployment, with all steps done individually. That's outdated. This is one prompt away from being a non-issue, but I wanted to see where it ends up. It worked in the end, but it felt like someone had had too much coffee. It even built the connection string to the storage itself by looking up the resource. I didn't know you could even do that; apparently you can. So that was interesting.

# Conclusion

All models provided a good codebase that was just a few human-guided iterations away from working fine. For me, for now, it looks like Microsoft put their money on the wrong horse, at least for this use case of agentic half-automatic coding. Google, Anthropic and even an open source model performed better than the o1-mini they push. Code-quality wise, I think Claude still has a slight upper hand over Deepseek, but that may only be some prompting experience with Deepseek away from being fixed. Then looking at the price, Deepseek clearly won: 2$ vs 0.02$. So there is much, much more room for errors, redos and iterations than there is with Claude. Same for Gemini: maybe it's just some prompting that is missing and it works like a charm. Or I chose the wrong model to begin with. I will definitely go forward using Deepseek now in CLINE, reverting to Claude when something feels off, and copy-paste prompting o1-mini when it looks really grim, algorithm-wise. For some reason using OpenRouter diminishes my experience. Maybe some model switching I am unaware of?
2024-12-28T20:14:54
https://www.reddit.com/r/LocalLLaMA/comments/1hoe75l/deepseekv3_vs_claudesonnet_vs_o1mini_vs/
ComprehensiveBird317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoe75l
false
null
t3_1hoe75l
/r/LocalLLaMA/comments/1hoe75l/deepseekv3_vs_claudesonnet_vs_o1mini_vs/
false
false
https://b.thumbs.redditm…y7eQWSPv0cwU.jpg
174
null
r/LocalLLaMA - a year in review
1
[removed]
2024-12-28T20:16:21
https://www.reddit.com/r/LocalLLaMA/comments/1hoe8c7/rlocalllama_a_year_in_review/
Everlier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoe8c7
false
null
t3_1hoe8c7
/r/LocalLLaMA/comments/1hoe8c7/rlocalllama_a_year_in_review/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]}
Test post
0
Test post
2024-12-28T20:19:23
https://www.reddit.com/r/LocalLLaMA/comments/1hoeaqa/test_post/
Everlier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoeaqa
false
null
t3_1hoeaqa
/r/LocalLLaMA/comments/1hoeaqa/test_post/
false
false
self
0
null
LocalLLaMA - a year in review
1
2024-12-28T20:21:28
https://gist.github.com/av/5e4820a48210600a458deee0f3385d4f
Everlier
gist.github.com
1970-01-01T00:00:00
0
{}
1hoeccw
false
null
t3_1hoeccw
/r/LocalLLaMA/comments/1hoeccw/localllama_a_year_in_review/
false
false
https://a.thumbs.redditm…ZsnhkAliOvy8.jpg
1
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]}
r/LocalLLaMA - a year in review
1
[removed]
2024-12-28T20:24:35
https://www.reddit.com/r/LocalLLaMA/comments/1hoeeuo/rlocalllama_a_year_in_review/
Everlier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoeeuo
false
null
t3_1hoeeuo
/r/LocalLLaMA/comments/1hoeeuo/rlocalllama_a_year_in_review/
false
false
self
1
null
Review of most upvoted posts in 2024
1
Linked in the first comment
2024-12-28T20:25:49
https://www.reddit.com/r/LocalLLaMA/comments/1hoefsa/review_of_most_upvoted_posts_in_2024/
Everlier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoefsa
false
null
t3_1hoefsa
/r/LocalLLaMA/comments/1hoefsa/review_of_most_upvoted_posts_in_2024/
false
false
self
1
null
Review of most upvoted posts in 2024
1
Please see the first comment in the post
2024-12-28T20:28:09
https://www.reddit.com/r/LocalLLaMA/comments/1hoehli/review_of_most_upvoted_posts_in_2024/
Everlier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoehli
false
null
t3_1hoehli
/r/LocalLLaMA/comments/1hoehli/review_of_most_upvoted_posts_in_2024/
false
false
self
1
null
Review of the most upvoted posts in 2024
85
Please see the first comment below.
2024-12-28T20:31:55
https://www.reddit.com/r/LocalLLaMA/comments/1hoekks/review_of_the_most_upvoted_posts_in_2024/
Everlier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoekks
false
null
t3_1hoekks
/r/LocalLLaMA/comments/1hoekks/review_of_the_most_upvoted_posts_in_2024/
false
false
self
85
null
Model between Llama 3.1 8B and Llama 3.3 70B?
1
[removed]
2024-12-28T20:38:11
https://www.reddit.com/r/LocalLLaMA/comments/1hoepg8/model_between_llama_31_8b_and_llama_33_70b/
mastermind202
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoepg8
false
null
t3_1hoepg8
/r/LocalLLaMA/comments/1hoepg8/model_between_llama_31_8b_and_llama_33_70b/
false
false
self
1
null
MOE pruning? DeepSeek v3 self hosted idea
17
Hi everyone, I believe most of us are excited about DeepSeek V3. However, most of us don't have the RAM or VRAM to host this beast (671B). That said, it uses an MoE with a lot of experts, which brings the actual active parameters down to 37B. Is it possible to prune away some experts (say, keeping 50% of the experts with a 20% performance loss)? If this is infeasible, does it mean an MoE with tons of experts is the way to go?
2024-12-28T20:50:02
https://www.reddit.com/r/LocalLLaMA/comments/1hoeypz/moe_pruning_deepseek_v3_self_hosted_idea/
henryclw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoeypz
false
null
t3_1hoeypz
/r/LocalLLaMA/comments/1hoeypz/moe_pruning_deepseek_v3_self_hosted_idea/
false
false
self
17
null
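A quick sanity check on the numbers in the post above; the bytes-per-weight figure and the split between routed-expert and non-expert weights are rough assumptions, not DeepSeek's published breakdown:

```python
# Rough memory estimate for MoE expert pruning on a DeepSeek V3-sized model.
# All constants below are illustrative assumptions.
total_params_b   = 671    # total parameters, billions (from the post)
active_params_b  = 37     # active per token, billions (from the post)
bytes_per_weight = 0.55   # ~Q4-class quantization, assumed

expert_share = 0.95       # assume most weights sit in routed experts
expert_b = total_params_b * expert_share
other_b  = total_params_b - expert_b

def size_gb(params_b: float) -> float:
    return params_b * bytes_per_weight

full   = size_gb(total_params_b)
pruned = size_gb(expert_b * 0.5 + other_b)   # drop 50% of routed experts
print(f"full model ~{full:.0f} GB, 50%-of-experts pruned ~{pruned:.0f} GB")
```

Under these assumptions, dropping half the routed experts roughly halves the memory footprint, which is why the pruning idea is attractive; whether the quality loss stays near 20% is the open question.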
Have anyone tried running DeepSeek V3 on EPYC Genoa (or newer) systems yet? What are the performance with q4/5/6/8?
3
Theoretical performance should be 10t/s for q8 and 20t/s for q4 in a single cpu EPYC Genoa system with 12 channel memory. Yet to see real world numbers and time-to-first-token time.
2024-12-28T20:51:55
https://www.reddit.com/r/LocalLLaMA/comments/1hof06u/have_anyone_tried_running_deepseek_v3_on_epyc/
Saren-WTAKO
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hof06u
false
null
t3_1hof06u
/r/LocalLLaMA/comments/1hof06u/have_anyone_tried_running_deepseek_v3_on_epyc/
false
false
self
3
null
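The theoretical figures in the post above can be reproduced with simple memory-bandwidth math; the bandwidth and bytes-per-parameter values below are assumptions for a single-socket, 12-channel DDR5-4800 Genoa system, and real-world decode speed will be lower:

```python
# Bandwidth-bound decode-speed estimate for a MoE model on CPU (rough upper bound).
channels, transfer_rate, bytes_per_transfer = 12, 4800e6, 8
bandwidth_gbs = channels * transfer_rate * bytes_per_transfer / 1e9   # ~460.8 GB/s theoretical

active_params_b = 37  # DeepSeek V3 active parameters per token, billions

for quant, bytes_per_param in [("q8", 1.0), ("q4", 0.5)]:
    gb_per_token = active_params_b * bytes_per_param   # weights that must be read per token
    tps = bandwidth_gbs / gb_per_token
    print(f"{quant}: ~{tps:.1f} tokens/s upper bound")
```

This lands near the 10 t/s (q8) and 20 t/s (q4) figures quoted in the post, which is why memory bandwidth, not compute, is the usual limiter for CPU inference.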
Translation Model
1
[removed]
2024-12-28T20:54:50
https://www.reddit.com/r/LocalLLaMA/comments/1hof2fo/translation_model/
Aggressive_Basket798
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hof2fo
false
null
t3_1hof2fo
/r/LocalLLaMA/comments/1hof2fo/translation_model/
false
false
self
1
null
Why people complain about licenses?
0
They want to make money with models? Well, then they have to build one themselves or pay for the rights; like, idk, it's business, ain't it? I know it's the classic speech, and libre/open source is fun and everything. When it's about offering free use to make people's lives easier, that's a sane arrangement to me. But when it's offered for you to sell, it's just free work, and people aren't entitled to that afaik.
2024-12-28T21:04:23
https://www.reddit.com/r/LocalLLaMA/comments/1hof9z0/why_people_complain_about_licenses/
xmmr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hof9z0
false
null
t3_1hof9z0
/r/LocalLLaMA/comments/1hof9z0/why_people_complain_about_licenses/
false
false
self
0
null
Anybody has prompts to get detailed summaries like in getrecall.ai?
0
Looks like it's the best summarizer that doesn't omit information. [Summarize.tech](http://Summarize.tech) summaries are very vague and useless.
2024-12-28T21:16:40
https://www.reddit.com/r/LocalLLaMA/comments/1hofjrb/anybody_has_prompts_to_get_detailed_summaries/
YouWillConcur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hofjrb
false
null
t3_1hofjrb
/r/LocalLLaMA/comments/1hofjrb/anybody_has_prompts_to_get_detailed_summaries/
false
false
self
0
null
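For the question above, one prompt pattern that tends to produce detailed, non-vague summaries; the local OpenAI-compatible endpoint and model name below are placeholders (e.g. Ollama exposes a /v1 endpoint locally), not a specific recommendation:

```python
# Sketch: a "detailed summary" prompt against any OpenAI-compatible local server.
# base_url, api_key, and model are placeholders; the prompt wording is the point.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

PROMPT = (
    "Summarize the text below section by section. For every section give: "
    "the key claims, all concrete numbers/names/dates mentioned, and any action items. "
    "Do not generalize or omit specifics; prefer longer output over losing detail.\n\n{text}"
)

text = "...paste the transcript or article to summarize here..."
resp = client.chat.completions.create(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": PROMPT.format(text=text)}],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```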
Best low gpu Model for translation
1
[removed]
2024-12-28T21:28:46
https://www.reddit.com/r/LocalLLaMA/comments/1hoft06/best_low_gpu_model_for_translation/
Aggressive_Basket798
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoft06
false
null
t3_1hoft06
/r/LocalLLaMA/comments/1hoft06/best_low_gpu_model_for_translation/
false
false
self
1
null
Deepseek V3 is absolutely astonishing
603
I spent most of yesterday just working with DeepSeek on programming problems via Open Hands (previously known as Open Devin). And the model is absolutely rock solid. As we got further through the process it sometimes went off track, but it simply took a reset of the window to pull everything back into line and we were off to the races once again. Thank you DeepSeek for raising the bar immensely. 🙏🙏
2024-12-28T21:32:29
https://www.reddit.com/r/LocalLLaMA/comments/1hofvtw/deepseek_v3_is_absolutely_astonishing/
klippers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hofvtw
false
null
t3_1hofvtw
/r/LocalLLaMA/comments/1hofvtw/deepseek_v3_is_absolutely_astonishing/
false
false
self
603
null
DeepSeek Censorship: Foiled by the Mighty Space Bar
1
[removed]
2024-12-28T21:49:43
https://www.reddit.com/r/LocalLLaMA/comments/1hog95a/deepseek_censorship_foiled_by_the_mighty_space_bar/
cocoadaemon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hog95a
false
null
t3_1hog95a
/r/LocalLLaMA/comments/1hog95a/deepseek_censorship_foiled_by_the_mighty_space_bar/
false
false
self
1
null
Deepseek v3 fine tuning
1
[removed]
2024-12-28T22:11:32
https://www.reddit.com/r/LocalLLaMA/comments/1hogpya/deepseek_v3_fine_tuning/
BusOk5392
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hogpya
false
null
t3_1hogpya
/r/LocalLLaMA/comments/1hogpya/deepseek_v3_fine_tuning/
false
false
self
1
null
Recommendations for a Local AI Image Generator?
9
I have a Linux server with an RTX 3090 with 24 GB of VRAM. I'm looking for good software to generate AI images that I can install (without GUI/X) and access remotely from Windows. Any recommendations? PS: Sorry for asking such a generic and perhaps even silly question, but I'm a bit overwhelmed by all the information.
2024-12-28T22:29:09
https://www.reddit.com/r/LocalLLaMA/comments/1hoh3d9/recommendations_for_a_local_ai_image_generator/
Diegam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoh3d9
false
null
t3_1hoh3d9
/r/LocalLLaMA/comments/1hoh3d9/recommendations_for_a_local_ai_image_generator/
false
false
self
9
null
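As a point of reference for the question above, fully headless generation is also possible with plain Python; this sketch assumes the Hugging Face diffusers library and a Stable Diffusion checkpoint, neither of which the post specifies:

```python
# Headless text-to-image sketch (no GUI/X needed); run over SSH, fetch the PNG afterwards.
# Model id and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor painting of a lighthouse at dawn", num_inference_steps=30).images[0]
image.save("lighthouse.png")  # copy to the Windows machine with scp/SMB
```

Web UIs such as ComfyUI or AUTOMATIC1111 can likewise run without X and be reached remotely in the browser, which is the more common setup for this kind of server.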
All You Need is 4x 4090 GPUs to Train Your Own Model
0
2024-12-28T22:39:00
https://sabareesh.com/posts/llm-rig/
thekalki
sabareesh.com
1970-01-01T00:00:00
0
{}
1hohayh
false
null
t3_1hohayh
/r/LocalLLaMA/comments/1hohayh/all_you_need_is_4x_4090_gpus_to_train_your_own/
false
false
default
0
null
Struggling with alternative tasks with DeepSeek V3
1
[removed]
2024-12-28T22:48:13
https://www.reddit.com/r/LocalLLaMA/comments/1hohhu8/struggling_with_alternative_tasks_with_deepseek_v3/
Quiet-Instruction-77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hohhu8
false
null
t3_1hohhu8
/r/LocalLLaMA/comments/1hohhu8/struggling_with_alternative_tasks_with_deepseek_v3/
false
false
self
1
null
Experimental Command-R model, trained and tweaked for creativitiy on 185M book tokens
82
2024-12-28T22:52:21
https://huggingface.co/jukofyork/creative-writer-32b-preview
Downtown-Case-1755
huggingface.co
1970-01-01T00:00:00
0
{}
1hohl1h
false
null
t3_1hohl1h
/r/LocalLLaMA/comments/1hohl1h/experimental_commandr_model_trained_and_tweaked/
false
false
https://b.thumbs.redditm…Jpma0W299_Tk.jpg
82
{'enabled': False, 'images': [{'id': 'LTaS9EVnh8nJ04ETYotvas1fGyGkPJOuaZkJe6ZUdzc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2tPKh_p6ofUVspwzkm06KMW1g0UbSStI5KuilRaMTwU.jpg?width=108&crop=smart&auto=webp&s=2c4b518a314f91cd1fd6b64d1c57451a7ab9bd0d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2tPKh_p6ofUVspwzkm06KMW1g0UbSStI5KuilRaMTwU.jpg?width=216&crop=smart&auto=webp&s=5423d177a5fbbc03706462ef4f9f79aef3f3fcf6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2tPKh_p6ofUVspwzkm06KMW1g0UbSStI5KuilRaMTwU.jpg?width=320&crop=smart&auto=webp&s=2feecec37d2c84c4c6cdcc489c02b7c3f6f8e0da', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2tPKh_p6ofUVspwzkm06KMW1g0UbSStI5KuilRaMTwU.jpg?width=640&crop=smart&auto=webp&s=1dd1142f685eaf8ca049523a437b192e0556984e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2tPKh_p6ofUVspwzkm06KMW1g0UbSStI5KuilRaMTwU.jpg?width=960&crop=smart&auto=webp&s=d618c707fb902aa4dcb6759ec49d2f0fed04bf48', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2tPKh_p6ofUVspwzkm06KMW1g0UbSStI5KuilRaMTwU.jpg?width=1080&crop=smart&auto=webp&s=ff063e81e4863cfd661f151ee88d58f236de924c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2tPKh_p6ofUVspwzkm06KMW1g0UbSStI5KuilRaMTwU.jpg?auto=webp&s=3056a1641fc608f30ea3cc3061c1bb407426f3d6', 'width': 1200}, 'variants': {}}]}
Apple Metal Kernel Fusion
6
Nvidia’s CUDA has many kernel fusion functions with libs like cuDNN, TensorRT (and all its variants), etc. I’ve been wondering, Apple has been recently producing some good chips for local inference. Are there seriously no deep learning kernel fusion frameworks for Apple Metal? Wouldn’t there be a strong need for one considering large scale inference on consumer devices may only grow from here?
2024-12-28T22:56:27
https://www.reddit.com/r/LocalLLaMA/comments/1hoho4n/apple_metal_kernel_fusion/
Delicious-Ad-3552
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoho4n
false
null
t3_1hoho4n
/r/LocalLLaMA/comments/1hoho4n/apple_metal_kernel_fusion/
false
false
self
6
null
Is there an optimal way to setup RAG for an unstructured list of words?
1
[removed]
2024-12-28T23:35:41
https://www.reddit.com/r/LocalLLaMA/comments/1hoihnm/is_there_an_optimal_way_to_setup_rag_for_an/
Comb-Greedy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoihnm
false
null
t3_1hoihnm
/r/LocalLLaMA/comments/1hoihnm/is_there_an_optimal_way_to_setup_rag_for_an/
false
false
self
1
null
Build Sanity Check Please :)
4
Hello, I have 4 A5000s on hand and am looking to make a fun, low-budget but capable build. I would appreciate a rating and any glaring issues with this hardware. My only slight concern is that the cards will run at 8x on PCIe 4.0 due to lane restrictions. While every article I find says there should be little to no difference, I still hear other opinions. Thanks everyone for your insights.

[PCPartPicker Part List](https://pcpartpicker.com/list/FXmvjn)

Type|Item|Price
:----|:----|:----
**CPU** | [Intel Core i9-9820X 3.3 GHz 10-Core Processor](https://pcpartpicker.com/product/YG448d/intel-core-i9-9820x-33-ghz-10-core-processor-bx80673i99820x) | on hand
**CPU Cooler** | [Noctua NH-D9DX i4 3U 46.44 CFM CPU Cooler](https://pcpartpicker.com/product/szNypg/noctua-cpu-cooler-nhd9dxi43u) | on hand
**Motherboard** | [Asus Pro WS X299 SAGE II SSI CEB LGA2066 Motherboard](https://pcpartpicker.com/product/zbgQzy/asus-pro-ws-x299-sage-ii-ssi-ceb-lga2066-motherboard-pro-ws-x299-sage-ii) | $938.94 @ MemoryC
**Memory** | [Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4-3600 CL18 Memory](https://pcpartpicker.com/product/Yg3mP6/corsair-vengeance-lpx-32-gb-2-x-16-gb-ddr4-3600-memory-cmk32gx4m2d3600c18) | $64.00 @ Amazon
**Memory** | [Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4-3600 CL18 Memory](https://pcpartpicker.com/product/Yg3mP6/corsair-vengeance-lpx-32-gb-2-x-16-gb-ddr4-3600-memory-cmk32gx4m2d3600c18) | $64.00 @ Amazon
**Memory** | [Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4-3600 CL18 Memory](https://pcpartpicker.com/product/Yg3mP6/corsair-vengeance-lpx-32-gb-2-x-16-gb-ddr4-3600-memory-cmk32gx4m2d3600c18) | $64.00 @ Amazon
**Memory** | [Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4-3600 CL18 Memory](https://pcpartpicker.com/product/Yg3mP6/corsair-vengeance-lpx-32-gb-2-x-16-gb-ddr4-3600-memory-cmk32gx4m2d3600c18) | $64.00 @ Amazon
**Storage** | [Samsung 990 Pro 2 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive](https://pcpartpicker.com/product/34ytt6/samsung-990-pro-2-tb-m2-2280-pcie-40-x4-nvme-solid-state-drive-mz-v9p2t0bw) | $169.99 @ Amazon
**Video Card** | [PNY RTX A-Series RTX A5000 24 GB Video Card](https://pcpartpicker.com/product/B2ddnQ/pny-rtx-a5000-24-gb-rtx-a-series-video-card-vcnrtxa5000-pb) | on hand
**Video Card** | [PNY RTX A-Series RTX A5000 24 GB Video Card](https://pcpartpicker.com/product/B2ddnQ/pny-rtx-a5000-24-gb-rtx-a-series-video-card-vcnrtxa5000-pb) | on hand
**Video Card** | [PNY RTX A-Series RTX A5000 24 GB Video Card](https://pcpartpicker.com/product/B2ddnQ/pny-rtx-a5000-24-gb-rtx-a-series-video-card-vcnrtxa5000-pb) | on hand
**Video Card** | [PNY RTX A-Series RTX A5000 24 GB Video Card](https://pcpartpicker.com/product/B2ddnQ/pny-rtx-a5000-24-gb-rtx-a-series-video-card-vcnrtxa5000-pb) | on hand
**Power Supply** | [EVGA SuperNOVA 1600 P+ 1600 W 80+ Platinum Certified Fully Modular ATX Power Supply](https://pcpartpicker.com/product/zKTp99/evga-supernova-1600-p-1600-w-80-platinum-certified-fully-modular-atx-power-supply-220-pp-1600-x1) | $297.14 @ Amazon

Generated by [PCPartPicker](https://pcpartpicker.com) 2024-12-28 18:30 EST-0500
2024-12-28T23:37:02
https://www.reddit.com/r/LocalLLaMA/comments/1hoiin7/build_sanity_check_please/
koalfied-coder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoiin7
false
null
t3_1hoiin7
/r/LocalLLaMA/comments/1hoiin7/build_sanity_check_please/
false
false
self
4
null
Infinite Craft with in-browser LLM using Transformer.js
1
[removed]
2024-12-29T00:14:23
https://www.reddit.com/r/LocalLLaMA/comments/1hojar9/infinite_craft_with_inbrowser_llm_using/
Roseldine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hojar9
false
null
t3_1hojar9
/r/LocalLLaMA/comments/1hojar9/infinite_craft_with_inbrowser_llm_using/
false
false
self
1
{'enabled': False, 'images': [{'id': 'AX4llt7hMbwyUA30SSMnwQhe8bEYQ1aeE6sVopJ8qEs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BxkJNUfRgSxEuiQnFxcdlqUEQDsvk8bJe96H_TbysHU.jpg?width=108&crop=smart&auto=webp&s=287c98ed2a0456a18f328200021b63bf246a41ea', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BxkJNUfRgSxEuiQnFxcdlqUEQDsvk8bJe96H_TbysHU.jpg?width=216&crop=smart&auto=webp&s=3c36e78f48d097b2d56bf15b435008c5a6a61231', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BxkJNUfRgSxEuiQnFxcdlqUEQDsvk8bJe96H_TbysHU.jpg?width=320&crop=smart&auto=webp&s=09c0f4c91882c6ea1bbc3a61c35c8aea5d07a846', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BxkJNUfRgSxEuiQnFxcdlqUEQDsvk8bJe96H_TbysHU.jpg?width=640&crop=smart&auto=webp&s=d240559305df462a66dbaa7c4f00152e404a98f3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BxkJNUfRgSxEuiQnFxcdlqUEQDsvk8bJe96H_TbysHU.jpg?width=960&crop=smart&auto=webp&s=9c92df4f3b5c7de3dc1bd07e9d605164d73db248', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BxkJNUfRgSxEuiQnFxcdlqUEQDsvk8bJe96H_TbysHU.jpg?width=1080&crop=smart&auto=webp&s=9d68f45635ea108a0086f2557a0deec9b2961a54', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BxkJNUfRgSxEuiQnFxcdlqUEQDsvk8bJe96H_TbysHU.jpg?auto=webp&s=40d3e6abfac12fe09c1dedba8ed97700af17474a', 'width': 1200}, 'variants': {}}]}
There is a way to use DeepSeek V3 for FIM (Fill-in-the-middle) and it works great
66
Guys, a couple of weeks ago I wrote a VS Code extension that uses a special prompting technique to request FIM completions at the cursor position from big models. By using full-blown models instead of ones optimised for millisecond tab completions, we get 100% accurate completions. The extension also always sends the context selected in the file tree (and all open files).

To set this up, get [https://marketplace.visualstudio.com/items?itemName=robertpiosik.gemini-coder](https://marketplace.visualstudio.com/items?itemName=robertpiosik.gemini-coder)

Go to settings JSON and add:

```json
"geminiCoder.providers": [
  {
    "name": "DeepSeek",
    "endpointUrl": "https://api.deepseek.com/v1/chat/completions",
    "bearerToken": "[API KEY]",
    "model": "deepseek-chat",
    "temperature": 0,
    "instruction": ""
  }
]
```

Change the default model and use it with the "Gemini Coder..." commands (more on this in the extension's README). Until yesterday I was using Gemini Flash 2.0 and 1206, but DeepSeek is so much better!
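If you want to sanity-check the endpoint, model name, and API key before wiring them into the extension, a minimal request against the same OpenAI-compatible endpoint might look like the sketch below (this is illustrative and not part of the extension itself; only the URL, model name, and bearer token come from the config above):

```python
# Hypothetical sanity check for the DeepSeek endpoint used in geminiCoder.providers.
import requests

resp = requests.post(
    "https://api.deepseek.com/v1/chat/completions",
    headers={"Authorization": "Bearer <API KEY>"},  # same key as bearerToken above
    json={
        "model": "deepseek-chat",
        "temperature": 0,
        "messages": [{"role": "user", "content": "Say hello"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```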
2024-12-29T00:19:29
https://www.reddit.com/r/LocalLLaMA/comments/1hojejc/there_is_a_way_to_use_deepseek_v3_for_fim/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hojejc
false
null
t3_1hojejc
/r/LocalLLaMA/comments/1hojejc/there_is_a_way_to_use_deepseek_v3_for_fim/
false
false
self
66
{'enabled': False, 'images': [{'id': 'xe0CO2ErLSK8gDReKHm-_hxiHefw__lsJmVIrd2u5Oc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/s8s01RT_9zKAQq3mAhjf6ugqJbTB-i73brlB58BRFvU.jpg?width=108&crop=smart&auto=webp&s=72fab37089a9d94d7d5065fd520477478a3bebb6', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/s8s01RT_9zKAQq3mAhjf6ugqJbTB-i73brlB58BRFvU.jpg?auto=webp&s=31c6081c76b66382d2361a5812813aba54f00199', 'width': 128}, 'variants': {}}]}
i'm trying to understand nsfw llm use cases, why do people need this?
1
[removed]
2024-12-29T00:31:20
https://www.reddit.com/r/LocalLLaMA/comments/1hojn19/im_trying_to_understand_nsfw_llm_use_cases_why_do/
mrbbhatti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hojn19
false
null
t3_1hojn19
/r/LocalLLaMA/comments/1hojn19/im_trying_to_understand_nsfw_llm_use_cases_why_do/
false
false
nsfw
1
null
Best terminal-based AI pair programmers in 2024 - Aider vs Plandex vs OpenHands
15
Hey all! I'm looking to compare terminal-based AI pair programmers, especially with the recent advances in models like DeepSeek v3. I've been using these tools in a complex project for feature dev, bug fixing, and unit testing. Since I prefer working in the terminal over IDE extensions like cline in VSCode, I'm specifically interested in terminal-based solutions.

I've had great experience with Aider, experimenting with different LLMs. Recently discovered two alternatives:

1. Plandex - Seems inspired by Aider (potentially an iterative upgrade?) but appears more focused on greenfield projects (anyone with experience in both?)
2. OpenHands - Caught my attention with its impressive verified score on Swe-Bench

While I'm quite satisfied with Aider, I'm curious about the community's experience with these alternatives. Has anyone compared them directly? Any insights on their relative strengths, especially for existing projects vs new developments?
2024-12-29T00:55:34
https://www.reddit.com/r/LocalLLaMA/comments/1hok4h4/best_terminalbased_ai_pair_programmers_in_2024/
Chipbugatti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hok4h4
false
null
t3_1hok4h4
/r/LocalLLaMA/comments/1hok4h4/best_terminalbased_ai_pair_programmers_in_2024/
false
false
self
15
null
SemiKong: First Open-Source Semiconductor-Focused LLM (Built on Llama 3.1)
155
2024-12-29T01:05:50
https://www.marktechpost.com/2024/12/27/meet-semikong-the-worlds-first-open-source-semiconductor-focused-llm/
wegwerfen
marktechpost.com
1970-01-01T00:00:00
0
{}
1hokc1y
false
null
t3_1hokc1y
/r/LocalLLaMA/comments/1hokc1y/semikong_first_opensource_semiconductorfocused/
false
false
https://b.thumbs.redditm…TtDLUyf8TGzE.jpg
155
{'enabled': False, 'images': [{'id': 'hj8ARsIrWMpprZpH3D_nLTwj9JcroE_yj5sMGOATmTo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/uq-Sryq0gi3BCWTX4Zt1ZNEitXCcj0zX0jVVwhMbdy0.jpg?width=108&crop=smart&auto=webp&s=2d6387a6d1581862b92cf489e14eea6654e29620', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/uq-Sryq0gi3BCWTX4Zt1ZNEitXCcj0zX0jVVwhMbdy0.jpg?width=216&crop=smart&auto=webp&s=5ddb40826ba224671da26458b69a411f9b4ccb75', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/uq-Sryq0gi3BCWTX4Zt1ZNEitXCcj0zX0jVVwhMbdy0.jpg?width=320&crop=smart&auto=webp&s=a157b77066cf4c0e2d253889b0c7b0ac62d58d12', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/uq-Sryq0gi3BCWTX4Zt1ZNEitXCcj0zX0jVVwhMbdy0.jpg?width=640&crop=smart&auto=webp&s=9a3410a29e00d0e725fda7d788af010689791e09', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/uq-Sryq0gi3BCWTX4Zt1ZNEitXCcj0zX0jVVwhMbdy0.jpg?width=960&crop=smart&auto=webp&s=5e08007c743cdf741fdac98c18c7685f3b0e37b8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/uq-Sryq0gi3BCWTX4Zt1ZNEitXCcj0zX0jVVwhMbdy0.jpg?width=1080&crop=smart&auto=webp&s=2d5d170ea489515231df383cbbac368168e95c6c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/uq-Sryq0gi3BCWTX4Zt1ZNEitXCcj0zX0jVVwhMbdy0.jpg?auto=webp&s=3ec251fa903ff5b9a1785f76a487077c845edde5', 'width': 1920}, 'variants': {}}]}
Deepseek V3 non-official APIs?
4
I'm looking on OpenRouter and the only provider is DeepSeek themselves, but I have heard they will use your data to train their model, which I'm not interested in. Does anyone know of any other providers that are offering DeepSeek V3?
2024-12-29T01:12:12
https://www.reddit.com/r/LocalLLaMA/comments/1hokgkm/deepseek_v3_nonofficial_apis/
dalhaze
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hokgkm
false
null
t3_1hokgkm
/r/LocalLLaMA/comments/1hokgkm/deepseek_v3_nonofficial_apis/
false
false
self
4
null
Upgrading my pc
1
[removed]
2024-12-29T02:01:10
https://www.reddit.com/r/LocalLLaMA/comments/1holewr/upgrading_my_pc/
barkra123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1holewr
false
null
t3_1holewr
/r/LocalLLaMA/comments/1holewr/upgrading_my_pc/
false
false
self
1
null
Upgrading my pc
1
[removed]
2024-12-29T02:02:00
https://www.reddit.com/r/LocalLLaMA/comments/1holfhr/upgrading_my_pc/
barkra123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1holfhr
false
null
t3_1holfhr
/r/LocalLLaMA/comments/1holfhr/upgrading_my_pc/
false
false
self
1
null
PC Upgrade
1
[removed]
2024-12-29T02:06:17
https://www.reddit.com/r/LocalLLaMA/comments/1holih0/pc_upgrade/
barkra123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1holih0
false
null
t3_1holih0
/r/LocalLLaMA/comments/1holih0/pc_upgrade/
false
false
self
1
null
Image to Text to Image to Text to Image.....
46
I just watched a video earlier titled "AI is becoming inbred..." In the video, the guy talks about generating an image, feeding it into a VLM, and saying, "Create a prompt for this." Then they generated another image based on the prompt, fed it back into the VLM, generated a new prompt, and repeated the process.

I thought this sounded like fun, so I quickly implemented it myself. I used SD 3.5 and Molmo. I started with a picture of Elon 🤣

Now we need a video with start-to-end frames, picture by picture 🤣🤣 I think that's a good way to stay warm during this time of year 🐯
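For anyone who wants to try the same loop, here is a rough sketch of the feedback cycle being described. The two model calls are left as placeholders: generate_image and describe_image are hypothetical stand-ins for whatever SD 3.5 and Molmo pipelines you have wired up.

```python
# Illustrative sketch of the image -> text -> image feedback loop described above.
from typing import Any, Callable, List, Tuple

def image_text_loop(seed_prompt: str,
                    generate_image: Callable[[str], Any],
                    describe_image: Callable[[Any], str],
                    steps: int = 10) -> List[Tuple[Any, str]]:
    """Alternate text->image and image->text, feeding each output back in."""
    frames = []
    prompt = seed_prompt
    for _ in range(steps):
        image = generate_image(prompt)   # e.g. an SD 3.5 pipeline
        prompt = describe_image(image)   # e.g. Molmo asked to "Create a prompt for this"
        frames.append((image, prompt))
    return frames
```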
2024-12-29T02:47:13
https://i.redd.it/1ibf6qrebp9e1.jpeg
Big-Ad1693
i.redd.it
1970-01-01T00:00:00
0
{}
1hom9z6
false
null
t3_1hom9z6
/r/LocalLLaMA/comments/1hom9z6/image_to_text_to_image_to_text_to_image/
false
false
https://b.thumbs.redditm…N1hRqTCNZnFc.jpg
46
{'enabled': True, 'images': [{'id': 'c29ee4uqGRzGsuJCGzAA0RCflnN6RO_0H_rgCRvWWFo', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/1ibf6qrebp9e1.jpeg?width=108&crop=smart&auto=webp&s=cc120dbbe3f3e4d7729d5d7feeb6db4d3261f142', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/1ibf6qrebp9e1.jpeg?width=216&crop=smart&auto=webp&s=72fa939d2bd164d419460f044cfd7c9bb7930488', 'width': 216}, {'height': 155, 'url': 'https://preview.redd.it/1ibf6qrebp9e1.jpeg?width=320&crop=smart&auto=webp&s=bdb697b51b20f6dc9d17e8917445bd054de419a2', 'width': 320}, {'height': 310, 'url': 'https://preview.redd.it/1ibf6qrebp9e1.jpeg?width=640&crop=smart&auto=webp&s=9fa72cf3d1f70327da42a1e35a93826c00bb05da', 'width': 640}, {'height': 465, 'url': 'https://preview.redd.it/1ibf6qrebp9e1.jpeg?width=960&crop=smart&auto=webp&s=dcca221674fb0e296a6db0fef3ef87739e4e6ff8', 'width': 960}, {'height': 523, 'url': 'https://preview.redd.it/1ibf6qrebp9e1.jpeg?width=1080&crop=smart&auto=webp&s=82025897134e69deda586c0f68842491e0a0e7aa', 'width': 1080}], 'source': {'height': 798, 'url': 'https://preview.redd.it/1ibf6qrebp9e1.jpeg?auto=webp&s=a99d418f89cd174757214f1843df6515a52f76c3', 'width': 1647}, 'variants': {}}]}
Am I the only one who has consistency problems between languages with DeepSeek V3?
3
DeepSeek V3 is astonishing, but when I ask the same questions that need a little bit of reasoning in different languages, the responses are different. Has anyone else noticed this? I'm trying to integrate DeepSeek into a multilingual app, and I'm thinking of translating the inputs to English so that the reasoning happens in English, then translating the response back into the original language. It's the only way I've found to get consistent responses.
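A minimal sketch of that translate-reason-translate-back workaround, assuming the OpenAI-compatible DeepSeek endpoint (base URL and model name taken from DeepSeek's public API; treat the prompts and structure as illustrative, not a tested recipe):

```python
# Hedged sketch: do the reasoning in English, translate on the way in and out.
from openai import OpenAI

client = OpenAI(api_key="<API KEY>", base_url="https://api.deepseek.com/v1")

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

def answer_in(language: str, question: str) -> str:
    english_q = ask(f"Translate the following to English. Output only the translation:\n{question}")
    english_a = ask(english_q)  # reasoning happens on the English version
    return ask(f"Translate the following to {language}. Output only the translation:\n{english_a}")
```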
2024-12-29T03:00:41
https://www.reddit.com/r/LocalLLaMA/comments/1homits/am_i_the_only_one_who_has_consistency_problems/
Daktyl_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1homits
false
null
t3_1homits
/r/LocalLLaMA/comments/1homits/am_i_the_only_one_who_has_consistency_problems/
false
false
self
3
null
Evaluating performance of zero shot/ few shot classification on unannotated data
1
[removed]
2024-12-29T03:12:13
https://www.reddit.com/r/LocalLLaMA/comments/1homqhz/evaluating_performance_of_zero_shot_few_shot/
MaterialThing9800
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1homqhz
false
null
t3_1homqhz
/r/LocalLLaMA/comments/1homqhz/evaluating_performance_of_zero_shot_few_shot/
false
false
self
1
null
Financial entity schema mapping
1
[removed]
2024-12-29T03:14:30
https://www.reddit.com/r/LocalLLaMA/comments/1homrzu/financial_entity_schema_mapping/
Glittering-Start-945
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1homrzu
false
null
t3_1homrzu
/r/LocalLLaMA/comments/1homrzu/financial_entity_schema_mapping/
false
false
self
1
null
Financial entity schema mapping
1
[removed]
2024-12-29T03:16:57
https://www.reddit.com/r/LocalLLaMA/comments/1homtlp/financial_entity_schema_mapping/
Glittering-Start-945
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1homtlp
false
null
t3_1homtlp
/r/LocalLLaMA/comments/1homtlp/financial_entity_schema_mapping/
false
false
self
1
null
Recommendation for a tokenizer like tiktoken for Open source models
1
[removed]
2024-12-29T03:28:53
https://www.reddit.com/r/LocalLLaMA/comments/1hon17z/recommendation_for_a_tokenizer_like_tiktoken_for/
spookie-boogie11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hon17z
false
null
t3_1hon17z
/r/LocalLLaMA/comments/1hon17z/recommendation_for_a_tokenizer_like_tiktoken_for/
false
false
self
1
null
So Many Things I want to ask
1
[removed]
2024-12-29T04:16:06
https://www.reddit.com/r/LocalLLaMA/comments/1honv54/so_many_things_i_want_to_ask/
Zealousideal_Tie395
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1honv54
false
null
t3_1honv54
/r/LocalLLaMA/comments/1honv54/so_many_things_i_want_to_ask/
false
false
self
1
null
Small Model Advice Please
1
[removed]
2024-12-29T04:50:35
https://www.reddit.com/r/LocalLLaMA/comments/1hoogkm/small_model_advice_please/
Illustrious-Plant-67
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoogkm
false
null
t3_1hoogkm
/r/LocalLLaMA/comments/1hoogkm/small_model_advice_please/
false
false
self
1
null
Which GPU should I add to my build?
1
Currently I'm running a single 4090 on my 5900x system with 128g ram. I'm looking to add one more card so that I don't need to change the cpu. 4090s are still ridiculously priced where I am so I'm looking at adding either a 3090 ($800), a 3080ti 20g ($480), or a 2080ti ($385). The 2080ti comes with 22g of vram at around half the price of the 24g 3090 but are there any other trade offs? Can anyone tell me how much slower the 2080ti would be for inference compared to the 3090? Also, I'm a little wary of the 3080ti 20g. I couldn't find a ton about it online so I don't know if I'll encounter any driver or software issues. Any opinions or input on this would be extremely helpful. Thank you!
2024-12-29T05:13:48
https://www.reddit.com/r/LocalLLaMA/comments/1hoouyj/which_gpu_should_i_add_to_my_build/
ansmo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoouyj
false
null
t3_1hoouyj
/r/LocalLLaMA/comments/1hoouyj/which_gpu_should_i_add_to_my_build/
false
false
self
1
null
Check out https://github.com/MervinPraison/PraisonAI/ for creating ai agents🤖 Automated AI Agents Creation 🔄 Use CrewAI or AutoGen Framework 💯 100+ LLM Support 💻 Chat with ENTIRE Codebase 🖥️ Interactive UIs 📄 YAML-based Configuration 🛠️ Custom Tool Integration 🔍 Internet Search Capability (u
1
2024-12-29T05:14:20
https://i.redd.it/em0d7bin1q9e1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1hoova2
false
null
t3_1hoova2
/r/LocalLLaMA/comments/1hoova2/check_out_httpsgithubcommervinpraisonpraisonai/
false
false
https://b.thumbs.redditm…12BMYLNTc-_M.jpg
1
{'enabled': True, 'images': [{'id': 'IqSt1tIT5ZwkYHG_poFwzETttyWtHVn-VInAiNdmvlY', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/em0d7bin1q9e1.png?width=108&crop=smart&auto=webp&s=1c14de82b3531ce8c5d524075d46ad09b0d02d7d', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/em0d7bin1q9e1.png?width=216&crop=smart&auto=webp&s=a756ff741844869ce84b0f0e3fffd619ce228849', 'width': 216}, {'height': 155, 'url': 'https://preview.redd.it/em0d7bin1q9e1.png?width=320&crop=smart&auto=webp&s=5b1ef727cbb51d590b95194c72cd490507c815ec', 'width': 320}, {'height': 310, 'url': 'https://preview.redd.it/em0d7bin1q9e1.png?width=640&crop=smart&auto=webp&s=9c1c20fcd016e779b2576073513c9f2e31aa2214', 'width': 640}, {'height': 465, 'url': 'https://preview.redd.it/em0d7bin1q9e1.png?width=960&crop=smart&auto=webp&s=73362a548d6504b48d0702911222028be9d519b1', 'width': 960}, {'height': 523, 'url': 'https://preview.redd.it/em0d7bin1q9e1.png?width=1080&crop=smart&auto=webp&s=d4ec7a3af18b0fb93964962ef681182438feb9b3', 'width': 1080}], 'source': {'height': 910, 'url': 'https://preview.redd.it/em0d7bin1q9e1.png?auto=webp&s=886b84e7a5ec98c8ce345c402d182e15e60e342b', 'width': 1877}, 'variants': {}}]}
PDF to Markdown Converter Shoot Out: Some Preliminary Results From My Experience
121
Docling was discussed here about a month ago, but I thought I would add some observations based on installing three packages to convert PDFs.

**My Current Choice: docling**

For my purposes, **docling** seemed to work best and has strong activity on GitHub; **marker** is very good, not quite as strong as docling but a pretty close second; and **markitdown** seems to be much weaker and a distant third.

**More details and github links:**

[Marker first commit was on Oct 2023](https://github.com/VikParuchuri/marker)

[Docling first commit was on July 2024](https://github.com/DS4SD/docling). Also, [IBM did a nice write-up here on some of the unique parts of it.](https://research.ibm.com/blog/docling-generative-AI)

[Markitdown first commit was on November 2024](https://github.com/microsoft/markitdown)

**Testing Process:**

I'm multi-OS, but I run all my PDFs in a Win11 environment under PowerShell, so I only brought up the packages in Win11 Pro. Marker and Docling require PyTorch, which doesn't run under Python 3.13, so I pyenv'ed to 3.10.5. Markitdown runs just fine under 3.13.1, as it doesn't appear to use PyTorch, which means it doesn't pull in local AI. (As far as I can tell.) Although I have a CUDA-equipped desktop, I just loaded the CPU version of PyTorch to get some preliminary results.

Markitdown does appear to have an option that lets you insert an AI key, which it will use to process images and send back a description of each image in the file you are processing. I did not verify this capability.

I handed all three packages two PDFs, both around 25 pages, filled with tables and graphs.

**Results?**

Both docling and marker were pretty slow. A dedicated desktop with a CUDA layer on top would most likely help a lot. But if you ignore the processing time, I saw the following.

Docling really did a good job. It formatted the tables the best, and it embedded PNGs into the final .md file. While it is more space efficient to simply link to an image, linking means you can't simply hand the .md off for processing, because it loses track of the images without the linked files alongside it. I always like that embedding means you only have one doc to process with all the info. However, when you encode your images as ASCII to embed them, the file grows. The more charts, the bigger it gets. The reports that I fed docling had a graphic footer on every page, so I had 25 copies of the same image embedded. Growth from the PDF to the docling file was about 50%. Also, PNG files are nice, but they are big.

The processing for docling was slow, and it gave warnings when it hit a few things it didn't like in the PDF. I had some concerns that I would end up with a bad conversion, but the end product looked good. So its bark is worse than its bite.

The second PDF that I gave all the packages had a lot of charts in it, with the charts laid out side by side in two columns. We read all the way across the page in most docs, so this gave all the scripts some problems. However, while docling didn't get the order correct, it basically made sure that if there was information in the original PDF, it was going to put it somewhere in the final .md file. I consider this a positive.

Marker was second best. It created a separate .md file and a bunch of JPG graphics files that the .md linked to. It also creates a separate JSON file to track its converted files. Unlike docling, it would reuse graphics, and thus the output was about the same size as the original PDF. The table formatting was good, but it was not as good as docling's.

For instance, when it came to the multi-column pages, it would make mistakes and leave text out. It also cut a chart incorrectly so that the top was missing, where docling caught the whole graphic. Marker did do a great job of converting a table graphic into text. Docling didn't try to convert that table and just pasted it as a graphic. The text table saved space, which was good, but it also lost the original color of the table, which had some value. After the testing, it was just apparent that docling was capturing more data.

Markitdown was by far the worst. It did not produce any tables, and it didn't format the text correctly. It looked like a Tesseract-OCR'ed file, with no formatting. It was so bad that I started to look at the source code for Markitdown. [I haven't done an exhaustive look at this, but if I read the source code correctly](https://github.com/microsoft/markitdown/blob/main/src/markitdown/_markitdown.py#L478), the PDF conversion may simply be calling PDFminer, which doesn't do a great job with tables. However, I haven't done an exhaustive code review, so corrections welcomed. Worse than that, it hit some type of translation issue on one of the two PDFs and simply stopped. The other scripts had no issue.

**Final Thoughts:**

Docling is my vehicle of choice. It is unfortunate that marker is a completely separate code base, as it would be great to see the two efforts combined. It appears to me that IBM has grown their consulting base pretty well, and docling may serve as their ingest engine. If this is the case, then docling should see some strong development activity.

The biggest drawback to docling is the embedding of the PNG files and the resulting file growth, which is an issue if you have lots of charts. However, it should be a very small project to write a small Python utility that goes through your .md files and converts the PNGs to webp for permanent storage; a sketch of that idea follows below. This will dramatically lower the amount of storage that graphics take. Alternatively, if you have few to no graphics, it will have less of an impact.
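For reference, a minimal sketch of that PNG-to-webp cleanup idea. It assumes the images are embedded as base64 data URIs in the markdown and that Pillow is installed; treat it as illustrative rather than a drop-in tool.

```python
# Hedged sketch: re-encode base64-embedded PNG data URIs in a markdown file as webp.
import base64
import io
import re

from PIL import Image  # Pillow

DATA_URI = re.compile(r"data:image/png;base64,([A-Za-z0-9+/=]+)")

def reencode_pngs_as_webp(md_text: str, quality: int = 80) -> str:
    """Replace each embedded PNG with a (usually much smaller) webp equivalent."""
    def repl(match: re.Match) -> str:
        png_bytes = base64.b64decode(match.group(1))
        img = Image.open(io.BytesIO(png_bytes))
        buf = io.BytesIO()
        img.save(buf, format="WEBP", quality=quality)
        webp_b64 = base64.b64encode(buf.getvalue()).decode("ascii")
        return "data:image/webp;base64," + webp_b64
    return DATA_URI.sub(repl, md_text)

# Usage: rewrite a converted file in place (file name is just an example).
# with open("report.md", "r+", encoding="utf-8") as f:
#     text = reencode_pngs_as_webp(f.read())
#     f.seek(0); f.write(text); f.truncate()
```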
2024-12-29T05:20:19
https://www.reddit.com/r/LocalLLaMA/comments/1hooz1a/pdf_to_markdown_converter_shoot_out_some/
HardDriveGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hooz1a
false
null
t3_1hooz1a
/r/LocalLLaMA/comments/1hooz1a/pdf_to_markdown_converter_shoot_out_some/
false
false
self
121
{'enabled': False, 'images': [{'id': 'I5oV5w-lUb2qD_Dta2fyTgU2I6IbBGgdoDRHZi3NNks', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bYS0_SkKV2yvXchsjVTl8pNQAdtKOufvTYlX_SlwMsE.jpg?width=108&crop=smart&auto=webp&s=b08242b0e0b18ea06074b2d1b9fc15dc132920a2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bYS0_SkKV2yvXchsjVTl8pNQAdtKOufvTYlX_SlwMsE.jpg?width=216&crop=smart&auto=webp&s=0a2715ef7996a0c299e52d40adf18904ae56b69f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bYS0_SkKV2yvXchsjVTl8pNQAdtKOufvTYlX_SlwMsE.jpg?width=320&crop=smart&auto=webp&s=1f9a3f666ae7944e3300313590a69b426353fe66', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bYS0_SkKV2yvXchsjVTl8pNQAdtKOufvTYlX_SlwMsE.jpg?width=640&crop=smart&auto=webp&s=49037418a4980dfe781ab9f8b372961466e225c6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bYS0_SkKV2yvXchsjVTl8pNQAdtKOufvTYlX_SlwMsE.jpg?width=960&crop=smart&auto=webp&s=0ca5b49509475c20a069d09007d4fa96c6c519ac', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bYS0_SkKV2yvXchsjVTl8pNQAdtKOufvTYlX_SlwMsE.jpg?width=1080&crop=smart&auto=webp&s=a35892140fe0b6419cc65eeb70857cb02dfdd0fb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bYS0_SkKV2yvXchsjVTl8pNQAdtKOufvTYlX_SlwMsE.jpg?auto=webp&s=d94e4b134c9970f7fed83bf14f01fb348b83cd47', 'width': 1200}, 'variants': {}}]}
How important is VRAM really?
21
I did a test today with Qwen2.5-32B-Instruct. The model gives pretty decent answers, but my system running two 3070s only has 16GB of VRAM, and the 32B Qwen instruct model is around 19GB if you want to run it all in memory. So I thought I would install my two Vega Frontier cards instead of the NVIDIAs, because then I would have 32GB of VRAM and be able to run it entirely in memory.

Well, it ran way slower on the AMDs, so much slower in fact that my CPU ran it faster. So even though I loaded the model entirely into memory, I got zero performance boost. I suppose maybe it's because the GPU itself, architecture-wise, isn't on par with a 3070... but the test results made me wonder how important it really is.

I guess it doesn't matter either way, as I can't afford any more GPUs right now, so pivoting to APIs for anything bigger than Qwen is my direction for running non-ClosedAI stuff.
2024-12-29T05:48:36
https://www.reddit.com/r/LocalLLaMA/comments/1hopff3/how_important_is_vram_really/
RouteGuru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hopff3
false
null
t3_1hopff3
/r/LocalLLaMA/comments/1hopff3/how_important_is_vram_really/
false
false
self
21
null
4 Titan Xp's?
1
[removed]
2024-12-29T05:55:27
https://www.reddit.com/r/LocalLLaMA/comments/1hopjb6/4_titan_xps/
RealMrCactus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hopjb6
false
null
t3_1hopjb6
/r/LocalLLaMA/comments/1hopjb6/4_titan_xps/
false
false
self
1
null
Anyone tried qwq 32b preview with Cline ?
1
[removed]
2024-12-29T06:04:07
https://www.reddit.com/r/LocalLLaMA/comments/1hopodk/anyone_tried_qwq_32b_preview_with_cline/
Glass-Rutabaga-2254
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hopodk
false
null
t3_1hopodk
/r/LocalLLaMA/comments/1hopodk/anyone_tried_qwq_32b_preview_with_cline/
false
false
self
1
null
How do you run local model in the cloud?
2
I would like to use a powerful local model, mostly for privacy, but my computer is not powerful enough. My question is how you run a model in the cloud that you can easily use through, e.g., Open WebUI. What services are you using and what is your workflow?
2024-12-29T06:48:47
https://www.reddit.com/r/LocalLLaMA/comments/1hoqcwp/how_do_you_run_local_model_in_the_cloud/
Benna100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoqcwp
false
null
t3_1hoqcwp
/r/LocalLLaMA/comments/1hoqcwp/how_do_you_run_local_model_in_the_cloud/
false
false
self
2
null
Question about finetuning for CoT
1
Usually question-answer-question-answer... dialogs are used to train the model. But now models like DeepSeek R1 or OpenAI o1 are appearing, and these models spend a lot of tokens on thinking before each answer. In subsequent messages, the model doesn't see its thoughts - it only sees the previous final output. This is easy to check: if you ask DeepSeek R1 about its earlier thoughts, it doesn't know them and will make up new ones. I take it this is done to preserve context and prevent the model from dumbing down due to a lot of unnecessary information.

What is the best way to train a model for this behavior? Let's say I have a long dialog and I want to add thoughts before each of the messages. But the model should think only before the message it is currently writing - during training and inference there should be no thought tokens in the other messages.

Is it possible to give the model the same dialog several times, adding one new message with thoughts each time, so that the model learns to think before each answer? But that would most likely cause overfitting. If I add CoT before every message, the model can probably start to repeat itself, because there can be a lot of repetitive phrases in CoT. Also, if CoT is present in every message during training, will it be hard for the model to execute CoT correctly at inference time, since there will be no previous thoughts in context?
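One possible way to lay out the data being described: each turn of the dialog becomes its own training example, with the chain of thought present only before the assistant message currently being learned, and plain history everywhere else. This is an illustrative sketch of that construction; the think tags and field names are placeholders, not any particular trainer's format.

```python
# Hedged sketch: build per-turn examples where CoT appears only in the current target.
def build_cot_examples(dialog, thoughts):
    """dialog: list of (user_msg, assistant_msg) pairs; thoughts: one CoT string per turn."""
    examples = []
    for i, (user_msg, assistant_msg) in enumerate(dialog):
        messages = []
        for u, a in dialog[:i]:
            messages.append({"role": "user", "content": u})
            messages.append({"role": "assistant", "content": a})  # no thoughts in history
        messages.append({"role": "user", "content": user_msg})
        target = f"<think>{thoughts[i]}</think>\n{assistant_msg}"   # thoughts only here
        examples.append({"messages": messages, "target": target})
    return examples
```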
2024-12-29T06:53:08
https://www.reddit.com/r/LocalLLaMA/comments/1hoqf6a/question_about_finetuning_for_cot/
kiselsa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoqf6a
false
null
t3_1hoqf6a
/r/LocalLLaMA/comments/1hoqf6a/question_about_finetuning_for_cot/
false
false
self
1
null
The baseless fear of AI (LLMs?) being dangerous
0
What is it with this fear among people and the corporate overlords who want to cripple open source development with flimsy reasons like AI being too dangerous for public use? I can understand if this reasoning is used for video/image generation models, because they could be used for fake news, misinformation, or offensive material. But what about LLMs? "Boo, did I scare you? I am autogenerated text." I wonder how they would defend this if they wanted to stop open LLM development or hinder its progress.
2024-12-29T07:13:45
https://www.reddit.com/r/LocalLLaMA/comments/1hoqpz0/the_baseless_fear_of_ai_llms_being_dangerous/
ThiccStorms
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoqpz0
false
null
t3_1hoqpz0
/r/LocalLLaMA/comments/1hoqpz0/the_baseless_fear_of_ai_llms_being_dangerous/
false
false
self
0
null
Small model advice
1
[removed]
2024-12-29T07:33:11
https://www.reddit.com/r/LocalLLaMA/comments/1hoqzua/small_model_advise/
Illustrious-Plant-67
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoqzua
false
null
t3_1hoqzua
/r/LocalLLaMA/comments/1hoqzua/small_model_advise/
false
false
self
1
null
Why does so much of the AI community appear to be from China?
1
[removed]
2024-12-29T07:34:18
https://www.reddit.com/r/LocalLLaMA/comments/1hor0do/why_does_so_much_of_the_ai_community_appear_to_be/
Business_Respect_910
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hor0do
false
null
t3_1hor0do
/r/LocalLLaMA/comments/1hor0do/why_does_so_much_of_the_ai_community_appear_to_be/
false
false
self
1
null
Cline vs Traycer?
0
I've been seeing a lot of people suggesting **Traycer AI** lately and claiming it's better than **Cline**. From what I've seen, it does look pretty promising, but I'm curious if anyone here has actually tried both? If you’ve used Traycer, how does it compare to Cline in terms of features, usability, and overall performance? Are there any standout pros or cons that you’ve noticed?
2024-12-29T07:36:32
https://www.reddit.com/r/LocalLLaMA/comments/1hor1i5/cline_vs_traycer/
tech-coder-pro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hor1i5
false
null
t3_1hor1i5
/r/LocalLLaMA/comments/1hor1i5/cline_vs_traycer/
false
false
self
0
null
Any list of LLM interesting projects to have insight?
14
When I search GitHub for LLM projects, I find that most of them are agents or chat apps. Is there any list of interesting projects that gives insight into what LLMs can do? For example:

- An app that helps evaluate your daily work efficiency.
- An app that can browse the web to find a movie download link for you.
- An agent that can talk to you first, not just wait for your question.
- An LLM that helps sort your files, like https://github.com/QiuYannnn/Local-File-Organizer
- Games. I've created a simple troll game: https://github.com/halida/ai_troll_game
2024-12-29T08:30:04
https://www.reddit.com/r/LocalLLaMA/comments/1horsbq/any_list_of_llm_interesting_projects_to_have/
linjun_halida
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1horsbq
false
null
t3_1horsbq
/r/LocalLLaMA/comments/1horsbq/any_list_of_llm_interesting_projects_to_have/
false
false
self
14
{'enabled': False, 'images': [{'id': 'ymJxqKCYtoy7xEN0VUftb2yYzVKG7inNVxD4dM2_xtU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fjW_WiMUOzkU9qb_74P0wnGJ9XtdprphNJsbqNQYNEE.jpg?width=108&crop=smart&auto=webp&s=c1bca31a0c3749aae015934c6d50d77a76aa70ce', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fjW_WiMUOzkU9qb_74P0wnGJ9XtdprphNJsbqNQYNEE.jpg?width=216&crop=smart&auto=webp&s=af1d538ea6d023227a9218dfb27d48cd4e1e66c5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fjW_WiMUOzkU9qb_74P0wnGJ9XtdprphNJsbqNQYNEE.jpg?width=320&crop=smart&auto=webp&s=23eefd8814f6e9efcba104083233a791750ce7ea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fjW_WiMUOzkU9qb_74P0wnGJ9XtdprphNJsbqNQYNEE.jpg?width=640&crop=smart&auto=webp&s=187e487612ae798d36a00f0f0dbba8b3f321f0d7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fjW_WiMUOzkU9qb_74P0wnGJ9XtdprphNJsbqNQYNEE.jpg?width=960&crop=smart&auto=webp&s=e8f22424a48195a5489f5413cab13782e908c8b5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fjW_WiMUOzkU9qb_74P0wnGJ9XtdprphNJsbqNQYNEE.jpg?width=1080&crop=smart&auto=webp&s=3213ef3404222a222ed39327d6bd3d20fd86ab4a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fjW_WiMUOzkU9qb_74P0wnGJ9XtdprphNJsbqNQYNEE.jpg?auto=webp&s=bd1a77777f02d55d48f06db561bd7564cd950dc0', 'width': 1200}, 'variants': {}}]}
In defense of terribly bad models
0
I've realized that people tend to defend bad models like Llama and Phi, and they do so not because they have good reasons to defend them, but simply out of a sense of gratitude. Let me remind you that, in most cases, you are or were the product. No large company gives anything away for free without reaping rewards in the short or long term. I understand that there are worse companies, but if we excuse whatever rubbish they deliver, it will be bad for the future of the community, as that is what they will always want to deliver. You can throw all your hate at this post, but remember: nothing is free, you are the product, and you are getting very little in return. Oh, and I almost forgot: long live China! Thank you for delivering the best you can produce, not the worst.
2024-12-29T08:39:44
https://www.reddit.com/r/LocalLLaMA/comments/1horwtt/in_defense_of_terribly_bad_models/
Existing_Freedom_342
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1horwtt
false
null
t3_1horwtt
/r/LocalLLaMA/comments/1horwtt/in_defense_of_terribly_bad_models/
false
false
self
0
null
LLM Jail Breaking Student Seeks Social Experiment Requests - Help Me Push the Limits of AI Models!
0
Hey Reddit! 👋

I'm a student working in the world of LLM (Large Language Model) jailbreaking as a hobby. Over the past few months, I've been upgrading my skills in finding ways to bypass the limitations and constraints of these models. Now, I want to take this a step further and conduct a social experiment with your help!

Here's how it works:

1. **You suggest questions or prompts** that you've found LLMs (like ChatGPT, Claude, Gemini, etc.) refuse to answer or struggle with.
2. I'll take these requests and experiment with jailbreaking techniques to see if I can get the model to respond.
3. I'll post the results here, including snapshots of the outputs. Models shall be available on the lmarena website.

**Why am I doing this?**

I'm curious to explore the boundaries of LLMs and see how far they can be pushed ethically. This is purely for educational purposes, and I'll ensure that no harmful or unethical content is generated.

**What kind of questions should you ask?**

* Questions that LLMs typically refuse to answer (e.g., controversial, sensitive, or restricted topics).
* Prompts that are blocked or flagged by the model's safety filters.

Let's see how far we can push the limits of LLMs together. Drop your questions or prompts in the comments, and I'll get to work! Looking forward to your input!
2024-12-29T08:48:56
https://www.reddit.com/r/LocalLLaMA/comments/1hos16n/llm_jail_breaking_student_seeks_social_experiment/
indian_truely
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hos16n
false
null
t3_1hos16n
/r/LocalLLaMA/comments/1hos16n/llm_jail_breaking_student_seeks_social_experiment/
false
false
self
0
null
Any ideas for Gen AI research projects?
1
I've had a publication in ML before, and I've wanted to do a Gen AI research project for a long time. I'm not much into pure theory, so I don't know what topic to choose. I'd like a mix of theory- and application-oriented topics. Additionally, if it uses other tech stacks such as image processing, that would be even better.
2024-12-29T08:51:13
https://www.reddit.com/r/LocalLLaMA/comments/1hos2bl/any_ideas_for_gen_ai_research_projects/
Available-Stress8598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hos2bl
false
null
t3_1hos2bl
/r/LocalLLaMA/comments/1hos2bl/any_ideas_for_gen_ai_research_projects/
false
false
self
1
null
Weaponised Small Language Models
1
[removed]
2024-12-29T09:04:03
https://www.reddit.com/r/LocalLLaMA/comments/1hos8hb/weaponised_small_language_models/
CharacterCheck389
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hos8hb
false
null
t3_1hos8hb
/r/LocalLLaMA/comments/1hos8hb/weaponised_small_language_models/
false
false
self
1
null
Best open source TTS for real time communication
1
[removed]
2024-12-29T09:30:44
https://www.reddit.com/r/LocalLLaMA/comments/1hoskur/best_open_source_tts_for_real_time_communication/
Automatic-Act-4445
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hoskur
false
null
t3_1hoskur
/r/LocalLLaMA/comments/1hoskur/best_open_source_tts_for_real_time_communication/
false
false
self
1
null
Anyone tried HammerAI?
0
I'm pretty new to the AI stuff: intrigued, but a little intimidated by the technical aspects of setting it up. While looking for an easy option I came across [HammerAI](https://www.hammerai.com/desktop), which claims to be just that, but I'm a little surprised I don't see much discussion of it online if it's as good as it sounds. Has anyone tried it?
2024-12-29T10:24:39
https://www.reddit.com/r/LocalLLaMA/comments/1hotaqf/anyone_tried_hammerai/
NuderWorldOrder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hotaqf
false
null
t3_1hotaqf
/r/LocalLLaMA/comments/1hotaqf/anyone_tried_hammerai/
false
false
self
0
{'enabled': False, 'images': [{'id': 'OyyRrQ6m6Laio5hUxP-x5qgOUVabNuA5yMM4lur0nU0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/o78_v8waoUJk8qNiQf1qt0lW-pnH8OAUvk8AXqNxSFw.jpg?width=108&crop=smart&auto=webp&s=e14095b86d0187c3d9f97ecefb33849f54fc7aef', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/o78_v8waoUJk8qNiQf1qt0lW-pnH8OAUvk8AXqNxSFw.jpg?width=216&crop=smart&auto=webp&s=9f54466aca7f97be2005bc18a98292ad80a47069', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/o78_v8waoUJk8qNiQf1qt0lW-pnH8OAUvk8AXqNxSFw.jpg?width=320&crop=smart&auto=webp&s=84bdb8035712b08b9ea9d9228360b921824fadb8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/o78_v8waoUJk8qNiQf1qt0lW-pnH8OAUvk8AXqNxSFw.jpg?width=640&crop=smart&auto=webp&s=63863d60be47ff783c5c61f4eb2d10b1d2333cf5', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/o78_v8waoUJk8qNiQf1qt0lW-pnH8OAUvk8AXqNxSFw.jpg?width=960&crop=smart&auto=webp&s=d26eb15559a49ada324306a7bb51a4218283f305', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/o78_v8waoUJk8qNiQf1qt0lW-pnH8OAUvk8AXqNxSFw.jpg?width=1080&crop=smart&auto=webp&s=2c2b4438bd410062961b55cc3697358722f960f0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/o78_v8waoUJk8qNiQf1qt0lW-pnH8OAUvk8AXqNxSFw.jpg?auto=webp&s=f819e88e0c3cb774560cb99e0da742d732fbff20', 'width': 1200}, 'variants': {}}]}
Understanding ROPE frequency calculation for llama
12
I am having a bit of difficulty understanding how the RoPE frequencies are computed in LLaMA 3.1. The relevant code used by the transformers library can be found [here](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_rope_utils.py#L310). File: modeling_rope_utils.py, Function: `_compute_llama3_parameters`. The line `wavelen = 2 * math.pi / inv_freq` does not make sense to me. It seems that the low and high frequencies are adjusted by different factors and the middle is interpolated to handle a smooth transition. Is there a paper that discusses this particular approach used in the LLaMA 3.x class of models? Please give some hints on where to dig for information on this.
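For what it's worth, the key identity is that a rotary component with inverse frequency theta advances by theta radians per position, so it completes one full 2*pi rotation every 2*pi/theta tokens; that span is its "wavelength" in tokens, which is where `wavelen = 2 * math.pi / inv_freq` comes from. Below is a simplified sketch of the rescaling idea, with parameter names following the rope_scaling config; it is illustrative, not the exact transformers implementation.

```python
# Hedged sketch of llama3-style RoPE rescaling by wavelength band.
import math
import torch

def llama3_rescale_inv_freq(inv_freq: torch.Tensor,
                            factor: float = 8.0,
                            low_freq_factor: float = 1.0,
                            high_freq_factor: float = 4.0,
                            old_context_len: int = 8192) -> torch.Tensor:
    # Wavelength in tokens of each rotary component.
    wavelen = 2 * math.pi / inv_freq

    low_freq_wavelen = old_context_len / low_freq_factor    # longest wavelengths
    high_freq_wavelen = old_context_len / high_freq_factor  # shortest wavelengths affected

    # Low-frequency (long-wavelength) components are slowed down by `factor`.
    new_inv_freq = torch.where(wavelen > low_freq_wavelen, inv_freq / factor, inv_freq)

    # Medium wavelengths interpolate smoothly between scaled and unscaled.
    smooth = (old_context_len / wavelen - low_freq_factor) / (high_freq_factor - low_freq_factor)
    smoothed = (1 - smooth) * (inv_freq / factor) + smooth * inv_freq
    is_medium = (wavelen >= high_freq_wavelen) & (wavelen <= low_freq_wavelen)
    return torch.where(is_medium, smoothed, new_inv_freq)
```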
2024-12-29T10:42:33
https://www.reddit.com/r/LocalLLaMA/comments/1hotji5/understanding_rope_frequency_calculation_for_llama/
graphitout
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hotji5
false
null
t3_1hotji5
/r/LocalLLaMA/comments/1hotji5/understanding_rope_frequency_calculation_for_llama/
false
false
self
12
{'enabled': False, 'images': [{'id': 'wmYdTbY0dw6Rr2dRYUBJmQ3cCZ0eCEp7DPvMzckuExY', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?width=108&crop=smart&auto=webp&s=609f32e8148c30011d9500f95e07c9ac1fd1d9ce', 'width': 108}, {'height': 92, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?width=216&crop=smart&auto=webp&s=dea83bc1b9d8a62943b633e891ee777e8fc08f10', 'width': 216}, {'height': 137, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?width=320&crop=smart&auto=webp&s=59ee3b05fc21c40f9fa8e87346cf361333b36161', 'width': 320}, {'height': 274, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?width=640&crop=smart&auto=webp&s=398e68c0e90c95d8775ba2bc461fe47c8dc49d56', 'width': 640}, {'height': 411, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?width=960&crop=smart&auto=webp&s=69da452d2f2f1166afda40f2b4a0bce16533f350', 'width': 960}, {'height': 462, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?width=1080&crop=smart&auto=webp&s=8886c181c5238a73e06300f9aad1bc4ece11376e', 'width': 1080}], 'source': {'height': 914, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?auto=webp&s=818cf32f448cbd8ea7b9d13491e25b604bde81ba', 'width': 2134}, 'variants': {}}]}
Local models for text to image?
2
What are good local models now for text to image?
2024-12-29T11:24:51
https://www.reddit.com/r/LocalLLaMA/comments/1hou4wc/local_models_for_text_to_image/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hou4wc
false
null
t3_1hou4wc
/r/LocalLLaMA/comments/1hou4wc/local_models_for_text_to_image/
false
false
self
2
null
Are there local models for dressing up images?
1
If I have a photo of myself and I want to try different clothes, hairstyles etc, are there any local models for that?
2024-12-29T11:27:49
https://www.reddit.com/r/LocalLLaMA/comments/1hou6d4/are_there_local_models_for_dressing_up_images/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hou6d4
false
null
t3_1hou6d4
/r/LocalLLaMA/comments/1hou6d4/are_there_local_models_for_dressing_up_images/
false
false
self
1
null
For those wanting reasonable vram (54Gb) very cheap (299 Euro/£247)
18
i spotted this on ebay while checking current prices on the CMP 100-210s i use in my rig. a complete system with 9x P106-100s in giving you a total of 54Gb of Pascal VRAM plus a 1600w PSU and the other bits for 299 euro (or about £275) plus 15 shipping, thats a pretty cheap instant starter LLM rig, swap out a few cards with some with 16gb vram and youll have a really cheap 70b rig, it wont be the fastest but itll run them. anyhow heres the link: [https://www.ebay.co.uk/itm/186691347078?\_trkparms=amclksrc%3DITM%26aid%3D777008%26algo%3DPERSONAL.TOPIC%26ao%3D1%26asc%3D20240130164827%26meid%3D1aec046d42ec46a3b411f03aa70a942c%26pid%3D101959%26rk%3D1%26rkt%3D1%26itm%3D186691347078%26pmt%3D1%26noa%3D1%26pg%3D4375194%26algv%3DRecentlyViewedItemsV2WithMLRPbooster\_BP&\_trksid=p4375194.c101959.m146925&\_trkparms=parentrq%3A124c107c1940aa71792f2b0affff2cec%7Cpageci%3Aced02878-c5dc-11ef-9a30-46168a946ace%7Ciid%3A1%7Cvlpname%3Avlp\_homepage](https://www.ebay.co.uk/itm/186691347078?_trkparms=amclksrc%3DITM%26aid%3D777008%26algo%3DPERSONAL.TOPIC%26ao%3D1%26asc%3D20240130164827%26meid%3D1aec046d42ec46a3b411f03aa70a942c%26pid%3D101959%26rk%3D1%26rkt%3D1%26itm%3D186691347078%26pmt%3D1%26noa%3D1%26pg%3D4375194%26algv%3DRecentlyViewedItemsV2WithMLRPbooster_BP&_trksid=p4375194.c101959.m146925&_trkparms=parentrq%3A124c107c1940aa71792f2b0affff2cec%7Cpageci%3Aced02878-c5dc-11ef-9a30-46168a946ace%7Ciid%3A1%7Cvlpname%3Avlp_homepage)
2024-12-29T12:22:25
https://www.reddit.com/r/LocalLLaMA/comments/1houz68/for_those_wanting_reasonable_vram_54gb_very_cheap/
gaspoweredcat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1houz68
false
null
t3_1houz68
/r/LocalLLaMA/comments/1houz68/for_those_wanting_reasonable_vram_54gb_very_cheap/
false
false
self
18
{'enabled': False, 'images': [{'id': 'Wy7EJRcuXHhBOwzOeHVrC1olg9OTNPezCtiMb31pc7Y', 'resolutions': [{'height': 211, 'url': 'https://external-preview.redd.it/hVt-WKC27B5_d9KSWh2aSmI1-XjdqT_fuvlhmWzh4sg.jpg?width=108&crop=smart&auto=webp&s=9f089ada272b3da3f74662f164dfa4d1460a0435', 'width': 108}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/hVt-WKC27B5_d9KSWh2aSmI1-XjdqT_fuvlhmWzh4sg.jpg?auto=webp&s=1012449a84fb0fd5455cd4045191eb2f36fa76db', 'width': 204}, 'variants': {}}]}
r/LocalLLaMA - a year in review
180
If you think you already seen this post - that's correct. Yesterday's issue with AutoMod was resolved and the workaround post was deleted. We're now able to publish it the proper way instead, below content is identical to the [workaround version](https://gist.github.com/av/5e4820a48210600a458deee0f3385d4f). --- This community was a great part of my life for the past two years, so as 2024 comes to a close, I wanted to feed my nostalgia a bit. Let me take you back to the most notable things happened here this year. This isn't a log of model releases or research, rather things that were discussed and upvoted by the people here. So notable things missing is also an indication of what was going on of sorts. I hope that it'll also show the amount of progress and development that happend in just a single year and make you even more excited for what's to come in 2025. --- The year started with the [excitement about Phi-2](https://reddit.com/r/LocalLLaMA/comments/18zvxs8/phi2_becomes_open_source_mit_license/) (443 upvotes, by u/steph_pop). Phi-2 feels like ancient history these days, it's also fascinating that we end the 2024 with the Phi-4. Just one week after, people discovered that apparently it [was trained on the software engineer's diary](https://reddit.com/r/LocalLLaMA/comments/19366g7/literally_my_first_conversation_with_it/) (601 upvotes, by u/alymahryn) rather than the code itself. This was also time when we didn't have the LLaMA 3 yet (crazy, right?). So, it was really easy to drive our imagination wild with the news about [training LLaMA 3 on 600k H100s](https://reddit.com/r/LocalLLaMA/comments/199y05e/zuckerberg_says_they_are_training_llama_3_on/) (1341 upvotes, by u/kocahmet1) from the man himself. We [weren't even sure](https://www.reddit.com/r/LocalLLaMA/comments/199y05e/comment/kihi3ru/) if the model will be open, as other LLaMAs prior to that were pretty much leaked and appropriated rather than officially released. The amount of research on LLMs architectures became impossible to keep up with a long time ago. So here's [a snippet](https://www.reddit.com/r/LocalLLaMA/comments/19fgpvy/comment/kjjjigu/) (567 upvotes, by u/jd_3d) of all the things that were hard to keep up with at the end of January 2024: - [Mamba](https://arxiv.org/abs/2312.00752) - [Mamba MOE](https://arxiv.org/abs/2401.04081) - [Mambabyte](https://arxiv.org/abs/2401.13660) - [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020) - [Cascade Speculative Drafting](https://arxiv.org/abs/2312.11462) - [LASER](https://arxiv.org/abs/2312.13558) - [DRµGS](https://www.reddit.com/r/LocalLLaMA/comments/18toidc/stop_messing_with_sampling_parameters_and_just/) - [AQLM](https://arxiv.org/abs/2401.06118) The official class separation to GPU-poor and GPU-rich users was also yet to happen, but some people already knew the place they want to take, as shown by u/Breakit-Boris in [his majestic 5xA100 setup](https://www.reddit.com/r/LocalLLaMA/comments/1aduzqq/5_x_a100_setup_finally_complete/) (1006 upvotes). We didn't knew it yet, but it was ready to run LLaMA 3.1 405B. Everyone here understand the importance of alignment (just don't tell folks in r/singularity, they'll find a way to misinterpret it). So we definitely enjoyed [being shamed](https://www.reddit.com/r/LocalLLaMA/comments/1anhy1o/comment/kpul615/) by [Goody 2](https://www.reddit.com/r/LocalLLaMA/comments/1anhy1o/they_created_the_safest_model_which_wont_answer/) (691 upvote, by u/ActualExpert7584) when it came out. 
Then, we saw another [awesome build from u/Ok-Result5562](https://www.reddit.com/r/LocalLLaMA/comments/1apvbx5/i_can_run_almost_any_model_now_so_so_happy_cost_a/) (537 upvotes) - 192GB VRAM will still take you very far, maybe even [further than expected](https://www.reddit.com/r/LocalLLaMA/comments/1apvbx5/comment/kq8zy86/). Now, ask yourself, which version of Gemma was released early in 2024? If you are anything like me you probably thought about Gemma 2. But it was actually [the first Gemma](https://www.reddit.com/r/LocalLLaMA/comments/1awbo84/google_publishes_open_source_2b_and_7b_model/) (1181 upvote, by u/Tobiaseins). This was a very pleasant and unexpected release in many ways. Firstly, the sentiment was that [Google is loosing the AI wars](https://www.reddit.com/r/LocalLLaMA/comments/1awbo84/comment/krg892m/) (I hope you agree that now it looks like anything but that), secondly it was some of the first large-scale releases paired with a smaller "edge" LLM (2B in this instance). If you think you know what comes next - you're right. [The Bitnet](https://www.reddit.com/r/LocalLLaMA/comments/1b21bbx/this_is_pretty_revolutionary_for_the_local_llm/) (1208 upvotes, by u/Longjumping-City-461). We're still yet to see any large-scale releases with the architecture, which became a bit of a joke in the community. 9th week of 2024 marked a thing that would seem unusual today - [praising Claude 3 for being objective and unaligned](https://www.reddit.com/r/LocalLLaMA/comments/1b83yzi/alignment_in_one_word/) (1072 upvotes, by u/hurrytewer). Shortly after that, we finally solved the [mystery behind the LLMs](https://www.reddit.com/r/LocalLLaMA/comments/1bgh9h4/the_truth_about_llms/) (1807 upvotes, by u/JeepyTea) (it's officially magic, and a bit of [autocomplete](https://www.reddit.com/r/LocalLLaMA/comments/1bgh9h4/comment/kv7em0m/)). It wouldn't be Reddit without the memes about large companies CEOs. ["Who's next?"](https://www.reddit.com/r/LocalLLaMA/comments/1bji5ti/whos_next/) (791 upvote, by u/Alternative-Elk1870) shows our reaction to the news about [Microsoft hiring Inflection founders](https://techcrunch.com/2024/03/19/microsoft-hires-inflection-founders-to-run-new-consumer-ai-division/) to run the consumer AI division - many people were worried about other companies that might be cancelled by Microsoft desire to stay competitive. Then, we saw a very impressive release of the [Voicecraft model](https://www.reddit.com/r/LocalLLaMA/comments/1bqmuto/voicecraft_ive_never_been_more_impressed_in_my/) (1278 upvotes, by u/SignalCompetitive582) and benchmarked a couple of models on [how to overthrow the government](https://www.reddit.com/r/LocalLLaMA/comments/1bte9hk/this_is_why_opensource_matters/) (1116 upvotes, by u/xadiant) ([in Minecraft](https://www.reddit.com/r/LocalLLaMA/comments/1bte9hk/comment/kxlqn7g/), of course). Once again, we're scratching the "progress" itch, April 2024 was as exciting as what we have now. See how [this post compares Mixtral 8x22B to PaLM and Claude 2](https://www.reddit.com/r/LocalLLaMA/comments/1c33agw/todays_open_source_models_beat_closed_source/) (854 upvotes, by u/danielcar). However if anything is constant in the community - it's attitude to OpenAI. AI is dangerous, kids. [LLaMA 3 must be stopped until it's too late](https://www.reddit.com/r/LocalLLaMA/comments/1c7inj3/openais_response/) (1232 upvotes, by u/Wrong_User_Logged). 
Luckily, we almost always had [some ~good~ insane builds](https://www.reddit.com/r/LocalLLaMA/comments/1c9l181/10x3090_rig_romed82tepyc_7502p_finally_complete/) (882 upvotes, by u/Mass2018) to discuss and decompress over. 10x3090 stays an absolute unit to this day. And back to [roasting OpenAI just the very next day](https://www.reddit.com/r/LocalLLaMA/comments/1cf7hg0/open_ai/) (1586 upvotes, again by u/Wrong_User_Logged). Changing gears, 18th week of 2024 we [joked about context scaling](https://www.reddit.com/r/LocalLLaMA/comments/1ckcw6z/1m_context_models_after_16k_tokens/) (1212 upvotes, by u/cobalt1137). Gemini was [far ahead of the game already](https://www.reddit.com/r/LocalLLaMA/comments/1ckcw6z/comment/l2oanyn/). And back to the [OpenAI bashing](https://www.reddit.com/r/LocalLLaMA/comments/1cr9wvg/friendly_reminder_in_light_of_gpt4o_release/) (1332 upvotes, by u/jferments) - it's a cycle, really. Luckily, just the next week we [had Phi-3 small and medium released](https://www.reddit.com/r/LocalLLaMA/comments/1cxa6w5/phi3_small_medium_are_now_available_under_the_mit/) (879 upvotes, by u/Nunki08) (feels like yesterday, though). We were [already cautious](https://www.reddit.com/r/LocalLLaMA/comments/1cxa6w5/comment/l517cdb/) about Microsoft's approach to releases. May ended with [a shout-out from A. Karpathy](https://www.reddit.com/r/LocalLLaMA/comments/1d3sf1k/were_famous/) (1542 upvotes, by u/False-Tea5957) and a statement from [Andrew Ng defending Open Source AI](https://www.reddit.com/r/LocalLLaMA/comments/1d9w77g/andrew_ng_defends_open_source_ai_says_regulations/) (511 upvotes, by u/ninjasaid13). The excitement didn't end though, Open WebUI project started [a series of brilliant releases](https://www.reddit.com/r/LocalLLaMA/comments/1df1zjr/if_you_havent_checked_out_the_open_webui_github/) (749 upvotes, by u/Porespellar) cementing it as the central tool for local LLM interactions for many of us. The next week hit really hard (harder than we even knew), with [a release of Clause 3.5 Sonnet](https://www.reddit.com/r/LocalLLaMA/comments/1dkctue/anthropic_just_released_their_latest_model_claude/) (1035 upvotes, by u/afsalashyana). The model was both smaller and more capable than Claude 3 Opus. It's still pretty much the most powerful all-round model. ["Explain it with gradually increasing complexity"](https://www.reddit.com/r/LocalLLaMA/comments/1dp378t/very_powerful_prompt_explain_it_with_gradually/) (495 upvotes, by u/Balance-) was an instant hit, and was an early indication of upcoming trend of test time compute and increasing the importance of context-exploration in general. From this point, things feel more like old news, rather than nostalgia-inducing memories. The first week of July saw the [release of Moshi - first real-time voice AI](https://www.reddit.com/r/LocalLLaMA/comments/1duegr1/kyutai_labs_just_released_moshi_a_realtime_native/) (847 upvotes, by u/Nunki08). It felt like France has [became the center of the AI innovation](https://www.reddit.com/r/LocalLLaMA/comments/1duegr1/comment/lbg40df/) in EU with Hugging Face, Mistral and now Moshi. I actually went to Paris around that time and had a wierd feeling that French are going to take over the world - with upcoming olympics and all. 
The next couple of weeks were quieter (but only because of what was to come); we saw a release of [a cool tool for file organization](https://www.reddit.com/r/LocalLLaMA/comments/1dxoz88/i_made_a_cli_with_ollama_to_rename_your_files_by/) (574 upvotes, by u/ozgrozer) and were immersed in the [rumours about the LLaMA 3.1 405B release](https://www.reddit.com/r/LocalLLaMA/comments/1e4uwz2/this_meme_only_runs_on_an_h100/) (702 upvotes, by u/Porespellar). We didn't have to wait long, since the release [happened just 6 days after](https://www.reddit.com/r/LocalLLaMA/comments/1ea9eeo/meta_officially_releases_llama3405b_llama3170b/) (1082 upvotes, by u/nanowell), leaving absolutely everybody mind blown. We got a step up in native tool calling, 128k context and an open-weights model to rival closed-source behemoths. You'd be correct to guess that [Meta's releases were a stark contrast with OpenAI's](https://www.reddit.com/r/LocalLLaMA/comments/1eh9sef/just_dropping_the_image/) (1535 upvotes, by u/Wrong_User_Logged) in this corner of the internet, so the jokes [were very soon to follow](https://www.reddit.com/r/LocalLLaMA/comments/1enhe8r/hi_just_dropping_the_image/) (994 upvotes, by u/Wrong_User_Logged). The tone shifted shortly after, as we were discussing [California's AI bill](https://www.reddit.com/r/LocalLLaMA/comments/1es87fm/right_now_is_a_good_time_for_californians_to_tell/) (706 upvotes, by u/1a3orn). The bill made things a bit grim, so the [Phi-3.5 MoE release a week after](https://www.reddit.com/r/LocalLLaMA/comments/1ex45m2/phi35_has_been_released/) (750 upvotes, by u/remixer_dec) received a very warm welcome. The only question remaining was ["Wen GGUF?"](https://www.reddit.com/r/LocalLLaMA/comments/1f3cz0g/wen_gguf/) (605 upvotes, by u/Porespellar). I'm sure you can easily name the drama that followed shortly after. Reflection. Weirdly enough, [the post that got the most attention](https://www.reddit.com/r/LocalLLaMA/comments/1fbclkk/reflection_llama_31_70b_independent_eval_results/) (702 upvotes, by u/avianio) was actually about independent eval results - so we can say the truth prevailed. Shortly after, we saw [a meme that is the highest-voted post](https://www.reddit.com/r/LocalLLaMA/comments/1ffv39d/enough_already_if_i_cant_run_it_in_my_3090_i_dont/) (3399 upvotes, by u/Porespellar) in the community to this day. It's all there - showing that the name of the community is truly earned. Memes do not last long, so we were laughing at what the naming of the models had become, with just a tiny bit of nostalgia about [the old days](https://www.reddit.com/r/LocalLLaMA/comments/1fljpdf/the_old_days/) (1140 upvotes, by u/pablogabrieldias). Another week - another regulations discussion, now centered around the EU's AI bill. Notably, it [affected Meta's release of LLaMA 3.2](https://www.reddit.com/r/LocalLLaMA/comments/1fpmlga/llama_32_not_available/) (1615 upvotes, by u/Wrong_User_Logged), but we returned to the [usual OpenAI poking](https://www.reddit.com/r/LocalLLaMA/comments/1fung5w/those_two_guys_were_once_friends_and_wanted_ai_to/) (1176 upvotes, by u/Wrong_User_Logged) right after. We had no idea yet that there'd be a whole lot more to discuss about it later. 
The middle of October was notable due to [a release of Papeg.ai](https://www.reddit.com/r/LocalLLaMA/comments/1g0jehn/ive_been_working_on_this_for_6_months_free_easy/) (1061 upvotes, by u/privacyparachute) - we were surprised with how many various features a single developer packed in the app only leaving its top spot to another [beautiful build with 4x single-slot 4090's](https://www.reddit.com/r/LocalLLaMA/comments/1g4w2vs/6u_threadripper_4xrtx4090_build/) (1481 upvotes, by u/UniLeverLabelMaker). Everything after that is still very recent, so I'll be brief: - [A meme about noone comparing their models to Qwen 2.5](https://www.reddit.com/r/LocalLLaMA/comments/1g8t88y/3_times_this_month_already/) (880 upvotes, by u/visionsmemories) - [Open version of NotebookLM by Meta](https://www.reddit.com/r/LocalLLaMA/comments/1gdk92b/meta_releases_an_open_version_of_googles/) (1005 upvotes, by u/isr_431) - [Even crazier build with 14x RTX 3090s](https://www.reddit.com/r/LocalLLaMA/comments/1gjje70/now_i_need_to_explain_this_to_her/) (1864 upvotes, by u/XMasterrrr) - [Chinese company trained GPT-4 rival with just 2,000 GPUs](https://www.reddit.com/r/LocalLLaMA/comments/1gs0bxj/chinese_company_trained_gpt4_rival_with_just_2000/) (1054 upvotes, by u/hedgehog0) - [Excitement about DeepSeek release](https://www.reddit.com/r/LocalLLaMA/comments/1gx4asf/chad_deepseek/) (2316 upvotes, by u/SquashFront1303) - [A note on the downward trend in the amount of announced LLM releases](https://www.reddit.com/r/LocalLLaMA/comments/1h0jhlq/number_of_announced_llm_models_over_time_the/) (759 upvotes, by u/fairydreaming) - [Release of LLaMA 3.3 70B](https://www.reddit.com/r/LocalLLaMA/comments/1h85tt4/meta_releases_llama33_70b/) (1281 upvotes, by u/Amgadoz) - [Back to OpenAI kicking about their $200 subscription](https://www.reddit.com/r/LocalLLaMA/comments/1haumxe/finally/) (1809 upvotes, by u/Wrong_User_Logged) - [Mind-blowing demo of Genesis physics simulation platform](https://www.reddit.com/r/LocalLLaMA/comments/1hhmebr/new_physics_ai_is_absolutely_insane_opensource/) (2191 upvotes, by u/umarmnaq) - [Zuckerberg watching you use Qwen instead of LLaMA](https://www.reddit.com/r/LocalLLaMA/comments/1hlzci9/zuckerberg_watching_you_use_qwen_instead_of_llama/) (2932 upvotes, by u/Super-Muffin-1230) That's it, folks. I hope you enjoyed this trip down the memory lane. I'm looking forward to what 2025 will bring us. P.S. none of my own posts made it to the cut, but you might've seen my rant about progress in ML or one of my endless mentions of the OSS project I'm maintaining. P.P.S. Let's also celebrate u/Wrong_User_Logged and u/Porespellar, they clearly contributed a lot into luring us to the sub again and again throughout the year.
2024-12-29T12:31:16
https://www.reddit.com/r/LocalLLaMA/comments/1hov3y9/rlocalllama_a_year_in_review/
Everlier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hov3y9
false
null
t3_1hov3y9
/r/LocalLLaMA/comments/1hov3y9/rlocalllama_a_year_in_review/
false
false
self
180
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]}
Let's Learn Ai Together
1
[removed]
2024-12-29T12:38:04
https://www.reddit.com/r/LocalLLaMA/comments/1hov7ix/lets_learn_ai_together/
CodingWithSatyam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hov7ix
false
null
t3_1hov7ix
/r/LocalLLaMA/comments/1hov7ix/lets_learn_ai_together/
false
false
self
1
null
Learn AI Together
1
[removed]
2024-12-29T12:41:13
https://www.reddit.com/r/LocalLLaMA/comments/1hov99d/learn_ai_together/
CodingWithSatyam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hov99d
false
null
t3_1hov99d
/r/LocalLLaMA/comments/1hov99d/learn_ai_together/
false
false
self
1
null
Best Group To Learn Ai
1
[removed]
2024-12-29T12:43:03
https://www.reddit.com/r/LocalLLaMA/comments/1hova7n/best_group_to_learn_ai/
ai_way
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hova7n
false
null
t3_1hova7n
/r/LocalLLaMA/comments/1hova7n/best_group_to_learn_ai/
false
false
self
1
null
Is it possible to run quantized Llama3 70B on 250gb RAM
1
2024-12-29T12:44:30
https://i.redd.it/4fu4fndw9s9e1.png
United_Demand
i.redd.it
1970-01-01T00:00:00
0
{}
1hovb0u
false
null
t3_1hovb0u
/r/LocalLLaMA/comments/1hovb0u/is_it_possible_to_run_quantized_llama3_70b_on/
false
false
https://b.thumbs.redditm…8Glqj7i5XaRM.jpg
1
{'enabled': True, 'images': [{'id': 'M2TADjf-F3zAEpvWg-slTQ7t5a74Zm32dfX5AoFzyEs', 'resolutions': [{'height': 9, 'url': 'https://preview.redd.it/4fu4fndw9s9e1.png?width=108&crop=smart&auto=webp&s=82ee57bf4345735a68cca4fb083af92637bc6976', 'width': 108}, {'height': 18, 'url': 'https://preview.redd.it/4fu4fndw9s9e1.png?width=216&crop=smart&auto=webp&s=cd73f68740dcab727e3da4b0710ced16e4118168', 'width': 216}, {'height': 27, 'url': 'https://preview.redd.it/4fu4fndw9s9e1.png?width=320&crop=smart&auto=webp&s=52c3e263736a0f8721834fe0a80fcd9af34b28d7', 'width': 320}, {'height': 55, 'url': 'https://preview.redd.it/4fu4fndw9s9e1.png?width=640&crop=smart&auto=webp&s=d678fac1cd72bff3c7dad1e21f9fa4301c64e1dd', 'width': 640}, {'height': 83, 'url': 'https://preview.redd.it/4fu4fndw9s9e1.png?width=960&crop=smart&auto=webp&s=34759628c7a62d49750b2750374ac8be0e4fa2f8', 'width': 960}, {'height': 93, 'url': 'https://preview.redd.it/4fu4fndw9s9e1.png?width=1080&crop=smart&auto=webp&s=f651c91094d0b0fab18b8bfb5195ecdc88d466aa', 'width': 1080}], 'source': {'height': 221, 'url': 'https://preview.redd.it/4fu4fndw9s9e1.png?auto=webp&s=8ae9e8a124c38601064cbe759289199f6e736527', 'width': 2549}, 'variants': {}}]}
7900 XTX or 4070ti super
1
[removed]
2024-12-29T13:04:21
https://www.reddit.com/r/LocalLLaMA/comments/1hovm7g/7900_xtx_or_4070ti_super/
great_7562
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hovm7g
false
null
t3_1hovm7g
/r/LocalLLaMA/comments/1hovm7g/7900_xtx_or_4070ti_super/
false
false
self
1
null