Column schema:

| column | dtype | range |
|:-|:-|:-|
| title | string | lengths 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | lengths 0–40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 to 2025-06-30 03:16:29 |
| url | string | lengths 0–878 |
| author | string | lengths 3–20 |
| domain | string | lengths 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 to 2025-06-26 17:30:18 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 (fixed) |
| locked | bool | 2 classes |
| media | string | lengths 646–1.8k |
| name | string | length 10 (fixed) |
| permalink | string | lengths 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | lengths 301–5.01k |
Hugging Face added Text to SQL on all 250K+ Public Datasets - powered by Qwen 2.5 Coder 32B 🔥
161
2024-12-02T14:27:09
https://v.redd.it/e3t9ae0h3g4e1
vaibhavs10
/r/LocalLLaMA/comments/1h4w5a3/hugging_face_added_text_to_sql_on_all_250k_public/
1970-01-01T00:00:00
0
{}
1h4w5a3
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/e3t9ae0h3g4e1/DASHPlaylist.mpd?a=1735871809%2COTkyNGYyNjRlNDE0YjA5ZThhMjhkZGNlNzliYTE2NjEwZmM1NzUyOTM2ZDRlZmIxMzhmZTFkOGU3MTNiYjYyOA%3D%3D&v=1&f=sd', 'duration': 32, 'fallback_url': 'https://v.redd.it/e3t9ae0h3g4e1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/e3t9ae0h3g4e1/HLSPlaylist.m3u8?a=1735871809%2CMzRkOGI1ZDNkZGNiZTI5ZmI2NzBhMzk5MmFiYjczMmM4MWE0NWM2M2Y1ZmIwNDZjNmMxOGRhMjQzYmQ2OWJiZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/e3t9ae0h3g4e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1728}}
t3_1h4w5a3
/r/LocalLLaMA/comments/1h4w5a3/hugging_face_added_text_to_sql_on_all_250k_public/
false
false
https://external-preview…c12be3db1c0fb179
161
{'enabled': False, 'images': [{'id': 'eHh1dmMyZW8zZzRlMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/eHh1dmMyZW8zZzRlMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?width=108&crop=smart&format=pjpg&auto=webp&s=25e8ff37ce636ae7097d01221c72af123280ff9b', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/eHh1dmMyZW8zZzRlMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?width=216&crop=smart&format=pjpg&auto=webp&s=c4bf9220d38512b3f2a95605cdcac3e5a8bfa12a', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/eHh1dmMyZW8zZzRlMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?width=320&crop=smart&format=pjpg&auto=webp&s=1e116024d5c48f69698f6674ca6fbba816a52c18', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/eHh1dmMyZW8zZzRlMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?width=640&crop=smart&format=pjpg&auto=webp&s=794ab9d4bfcabd6046369fba87771f89490b9ede', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/eHh1dmMyZW8zZzRlMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?width=960&crop=smart&format=pjpg&auto=webp&s=9f2dffb6615ffee96d8e017b39ddeba1cdc524ed', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/eHh1dmMyZW8zZzRlMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7ad29b71bd33fd8bbbabc338f38b03d16a98ba67', 'width': 1080}], 'source': {'height': 1800, 'url': 'https://external-preview.redd.it/eHh1dmMyZW8zZzRlMZTAGrkeLZO1tBuiimB5X60UvGnb2VnYDJyVQ1Os4m4T.png?format=pjpg&auto=webp&s=d60555ee66d010fac17c6e9bd93f4a5156543f8f', 'width': 2880}, 'variants': {}}]}
Difference between conventional CoT and QWQ-32B-Preview
17
I am not yet sure how to think about the inner workings of QwQ. It is often compared to o1, which, as I understand it, uses an RL-trained approach to sample the most valuable generations at each "thinking" step (the expandable elements in the ChatGPT UI that summarize the most valuable sampled "thinking") before generating the final answer. QwQ in the Hugging Face inference playground, by contrast, seems to generate normal output tokens in a CoT fashion until the model reaches a conclusion, with no evidence of the additional processing seen in o1. I'm aware that I have large knowledge gaps and would be very grateful if someone could enlighten me.
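One way to observe this directly is to stream QwQ's raw output: a plain CoT model emits its reasoning as ordinary completion tokens, with no separate search or sampling phase visible at the API level. A minimal sketch, assuming the `huggingface_hub` client and that `Qwen/QwQ-32B-Preview` is served on the Inference API:

```python
from huggingface_hub import InferenceClient

# Assumption: QwQ-32B-Preview is reachable via the HF Inference API under this ID.
client = InferenceClient("Qwen/QwQ-32B-Preview")

stream = client.chat_completion(
    messages=[{"role": "user", "content": "What is 17 * 23? Think step by step."}],
    max_tokens=512,
    stream=True,
)

# The "reasoning" arrives as ordinary completion tokens, chunk by chunk;
# there is no separate thinking phase exposed by the API.
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```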
2024-12-02T14:34:31
https://www.reddit.com/r/LocalLLaMA/comments/1h4waxj/difference_between_conventional_cot_and/
Retthardt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4waxj
false
null
t3_1h4waxj
/r/LocalLLaMA/comments/1h4waxj/difference_between_conventional_cot_and/
false
false
self
17
null
What are the most successful model merges?
8
What are the best merge use cases we've seen? Have any made measurable improvements over the constituent models?
2024-12-02T14:35:25
https://www.reddit.com/r/LocalLLaMA/comments/1h4wbn9/what_are_the_most_successful_model_merges/
30299578815310
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4wbn9
false
null
t3_1h4wbn9
/r/LocalLLaMA/comments/1h4wbn9/what_are_the_most_successful_model_merges/
false
false
self
8
null
List of Local LLM Software Compatible With Both NVIDIA & AMD GPUs (for Windows, Linux & MacOS)
1
[removed]
2024-12-02T14:37:59
https://www.reddit.com/r/LocalLLaMA/comments/1h4wdkb/list_of_local_llm_software_compatible_with_both/
techantics
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4wdkb
false
null
t3_1h4wdkb
/r/LocalLLaMA/comments/1h4wdkb/list_of_local_llm_software_compatible_with_both/
false
false
self
1
{'enabled': False, 'images': [{'id': 'UxMnR8Qn3-OSZgndPFGGO8bCjZYaUYbPy8I1EnxAWeM', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/vhsM29mzqr6irHHScKTWPK4VOkNiVRGfzxBHwX7SlUY.jpg?width=108&crop=smart&auto=webp&s=db18fd7f2f90ad6a5c2f1d8f29bf4b311f484dbd', 'width': 108}, {'height': 212, 'url': 'https://external-preview.redd.it/vhsM29mzqr6irHHScKTWPK4VOkNiVRGfzxBHwX7SlUY.jpg?width=216&crop=smart&auto=webp&s=25cfe6056c0d69778c712346d0f1c3a6821a57e3', 'width': 216}, {'height': 314, 'url': 'https://external-preview.redd.it/vhsM29mzqr6irHHScKTWPK4VOkNiVRGfzxBHwX7SlUY.jpg?width=320&crop=smart&auto=webp&s=93f8ab8844ae632728d4b9d8b828eb757bd007f4', 'width': 320}, {'height': 628, 'url': 'https://external-preview.redd.it/vhsM29mzqr6irHHScKTWPK4VOkNiVRGfzxBHwX7SlUY.jpg?width=640&crop=smart&auto=webp&s=f94660644141f159078ab7c7aa75da362a1d5bf4', 'width': 640}, {'height': 943, 'url': 'https://external-preview.redd.it/vhsM29mzqr6irHHScKTWPK4VOkNiVRGfzxBHwX7SlUY.jpg?width=960&crop=smart&auto=webp&s=4a10b8dd15f2809d127f0fe3c9359a16ae67b857', 'width': 960}, {'height': 1061, 'url': 'https://external-preview.redd.it/vhsM29mzqr6irHHScKTWPK4VOkNiVRGfzxBHwX7SlUY.jpg?width=1080&crop=smart&auto=webp&s=417bc7951a8eb43af0e8b2e5c44baf7e973fb90e', 'width': 1080}], 'source': {'height': 3192, 'url': 'https://external-preview.redd.it/vhsM29mzqr6irHHScKTWPK4VOkNiVRGfzxBHwX7SlUY.jpg?auto=webp&s=427b1a84acb27e0ab7eedf68259c71addce7f801', 'width': 3248}, 'variants': {}}]}
Multiple 7900XTX cards working together!
13
I had to switch the e-GPU from Thunderbolt to OCuLink, but it's now working! I tested it by running llama3.1:70b-instruct_Q4_K_M under llama.cpp and it was able to offload every layer to a GPU. Performance is decent as long as I keep the context small; going from 2K to 32K context still blows out my VRAM. I also was able to get ollama to recognize both GPUs and have open-webui working. I plan to test Qwen for code completion when I get time. I also still have my old 2070 Super. I'm thinking of throwing it into my Thunderbolt dock and seeing if I can get them all working together. Has anyone tried this?
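For reference, llama.cpp's Python bindings expose the same multi-GPU controls as the CLI. A minimal sketch, assuming `llama-cpp-python` built with ROCm (or CUDA) support and a local GGUF file whose path is hypothetical:

```python
from llama_cpp import Llama

# Assumptions: llama-cpp-python compiled with GPU support; the model path is
# hypothetical and the split ratio is for two identical cards.
llm = Llama(
    model_path="./llama-3.1-70b-instruct-Q4_K_M.gguf",
    n_gpu_layers=-1,          # offload every layer, as in the post
    tensor_split=[0.5, 0.5],  # share of the model per GPU
    n_ctx=2048,               # the KV cache grows with context; 32K may exceed VRAM
)

out = llm("Q: What is OCuLink?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```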
2024-12-02T14:38:45
https://www.reddit.com/r/LocalLLaMA/comments/1h4we55/multiple_7900xtx_cards_working_together/
Ruin-Capable
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4we55
false
null
t3_1h4we55
/r/LocalLLaMA/comments/1h4we55/multiple_7900xtx_cards_working_together/
false
false
self
13
null
How We Used Llama 3.2 to Fix a Copywriting Nightmare
1
[removed]
2024-12-02T15:06:04
https://www.reddit.com/r/LocalLLaMA/comments/1h4x040/how_we_used_llama_32_to_fix_a_copywriting/
kaulvimal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4x040
false
null
t3_1h4x040
/r/LocalLLaMA/comments/1h4x040/how_we_used_llama_32_to_fix_a_copywriting/
false
false
self
1
{'enabled': False, 'images': [{'id': 'deIK_fefS_mPODE-fYMYNYt2TAlXSyjSqxQro6CvJnY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=108&crop=smart&auto=webp&s=7abba2a0ef01f10d40d540f412f9b74253090843', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=216&crop=smart&auto=webp&s=2ede2d77a28c529c3b19c0c7ed5603f79503d38f', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=320&crop=smart&auto=webp&s=edcd3963103a7182d2372d8b47fca1d595d1074a', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=640&crop=smart&auto=webp&s=fdc485809e27e168b4758abb5d973db6fa9b0433', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=960&crop=smart&auto=webp&s=57f2138217120f147a566000390d7392c1ddad80', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=1080&crop=smart&auto=webp&s=00588864e35af8a36bd8c749161ade2084932db1', 'width': 1080}], 'source': {'height': 673, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?auto=webp&s=36381a27094f174a4e60750bb667f662e8b3afb6', 'width': 1200}, 'variants': {}}]}
What happens if we remove 50 percent of Llama?
1
2024-12-02T15:08:12
https://neuralmagic.com/blog/24-sparse-llama-smaller-models-for-efficient-gpu-inference/
paranoidray
neuralmagic.com
1970-01-01T00:00:00
0
{}
1h4x1ug
false
null
t3_1h4x1ug
/r/LocalLLaMA/comments/1h4x1ug/what_happens_if_we_remove_50_percent_of_llama/
false
false
https://b.thumbs.redditm…ObtCoh1Q5-Lc.jpg
1
{'enabled': False, 'images': [{'id': 'yaGW5FmuA0-HFXPdGYq-amK5Z4_7azQs3gu07wvAbXY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?width=108&crop=smart&auto=webp&s=ddd20466e0e1c0caba9fdf61880eb0aab9199c66', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?width=216&crop=smart&auto=webp&s=ce6c83d57b0daecd55f05c630c8e3efd56925383', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?width=320&crop=smart&auto=webp&s=469e555daaa9d8983f8ae66bda6d0b906144b8e5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?width=640&crop=smart&auto=webp&s=fd6539d8378d46c3429129d02a8bf7c2f56626af', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?width=960&crop=smart&auto=webp&s=8cbb928918742ed3cfafbe25717968589ea073db', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?width=1080&crop=smart&auto=webp&s=7ee473e724870012da36c22824a95770b70ee511', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?auto=webp&s=62dcf6cd48d2a33c3a108b86569442c474d47867', 'width': 1200}, 'variants': {}}]}
Improving response time
1
[removed]
2024-12-02T15:13:02
https://www.reddit.com/r/LocalLLaMA/comments/1h4x5s3/improving_response_time/
maxvandeperre
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4x5s3
false
null
t3_1h4x5s3
/r/LocalLLaMA/comments/1h4x5s3/improving_response_time/
false
false
self
1
null
Tried making an easily accessible completely uncensored version of Llama 405b
2
Hopefully this doesn't go against the rules, but I have been working on creating a completely uncensored model of Llama 405b for the last few weeks, and I think it's almost working perfectly now. It's not possible for everyone to run the 405b model locally because of the hardware requirements, so I thought there would be some demand for a cloud-based app. You can use it for free if you want to on [https://cleus.ai](https://cleus.ai)

[just a little example, but you can try experimenting with queries](https://preview.redd.it/nv4ttyfubg4e1.png?width=424&format=png&auto=webp&s=932ae966424dce41a924208552965c37f0f8a859)

Other than that, I tried finetuning the model a little bit more to make it more expressive, like you're talking to a real person, so it even adds expressions whenever they're needed.

https://preview.redd.it/ovn53po7bg4e1.png?width=441&format=png&auto=webp&s=75c71ae18375ff9a71dc47ed8be3ee13ce666797

And I also created a separate page just in case you guys need access to other mainstream models (like GPT, Gemini, Claude, etc.), live search and uncensored image generation.

https://preview.redd.it/dchkxz30cg4e1.png?width=422&format=png&auto=webp&s=e30699ce9881230eaf48c1632450d910a36318cc

Also, there's a limit per day; if I gave unlimited usage I would probably have to declare bankruptcy tonight. I'm still working on it and improving it every single day, so any feedback or suggestions on changes are appreciated.
2024-12-02T15:16:51
https://www.reddit.com/r/LocalLLaMA/comments/1h4x90n/tried_making_an_easily_accessible_completely/
Homeless_Programmer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4x90n
false
null
t3_1h4x90n
/r/LocalLLaMA/comments/1h4x90n/tried_making_an_easily_accessible_completely/
false
false
https://b.thumbs.redditm…iRHKdiDQ6MAE.jpg
2
null
Tried making a completely Uncensored Version of Llama 405b model that's free* to use on cloud
26
2024-12-02T15:33:15
https://cleus.ai/
Homeless_Programmer
cleus.ai
1970-01-01T00:00:00
0
{}
1h4xmln
false
null
t3_1h4xmln
/r/LocalLLaMA/comments/1h4xmln/tried_making_a_completely_uncensored_version_of/
false
false
default
26
null
Generating prompts with uncensored LLM
1
[removed]
2024-12-02T16:00:52
https://www.reddit.com/r/LocalLLaMA/comments/1h4y9js/generating_prompts_with_uncensored_llm/
aiwtl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4y9js
false
null
t3_1h4y9js
/r/LocalLLaMA/comments/1h4y9js/generating_prompts_with_uncensored_llm/
false
false
self
1
null
Help with making a multi-gpu build
1
I'm currently trying to come up with the right idea for a build. Ideally I want to run 3x 3090s on a machine that would mainly be used for LLM inference, Stable Diffusion and possibly VR. From my understanding, PCIe should be x16 on the first card, and the rest could be on x8 or x4, as it doesn't matter much for inference, yes? I'd like to run models in the 70b to 120b range, quantized of course. GPU VRAM would not be enough for that, so I'd need to add plenty of RAM (128GB). Ideally I'd also like a good processor that would do well in modern games. And would I need one or two PSUs for the said build? I've also thought of buying used mining rigs with 3090s, though I'm wondering if that's a good option. Can you help me pick some parts for a build? I'm okay with going second-hand. The budget would be around 3k euros, though it could go a little higher.
2024-12-02T16:06:56
https://www.reddit.com/r/LocalLLaMA/comments/1h4yf0n/help_with_making_a_multigpu_build/
Rainboy97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4yf0n
false
null
t3_1h4yf0n
/r/LocalLLaMA/comments/1h4yf0n/help_with_making_a_multigpu_build/
false
false
self
1
null
Which AI tool for coding ? aider ? specific situation
1
[removed]
2024-12-02T16:11:04
https://www.reddit.com/r/LocalLLaMA/comments/1h4yike/which_ai_tool_for_coding_aider_specific_situation/
Popular-Aerie-5111
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4yike
false
null
t3_1h4yike
/r/LocalLLaMA/comments/1h4yike/which_ai_tool_for_coding_aider_specific_situation/
false
false
self
1
null
What’s the Best Dataset Format for Fine-Tuning Embedding Models for Different MTEB Tasks?
1
[removed]
2024-12-02T16:20:05
https://www.reddit.com/r/LocalLLaMA/comments/1h4yqc8/whats_the_best_dataset_format_for_finetuning/
CarpeDay27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4yqc8
false
null
t3_1h4yqc8
/r/LocalLLaMA/comments/1h4yqc8/whats_the_best_dataset_format_for_finetuning/
false
false
self
1
null
Thesis topic on small language models
1
[removed]
2024-12-02T16:21:26
https://www.reddit.com/r/LocalLLaMA/comments/1h4yrj7/thesis_topic_on_small_language_models/
Vibhuti1812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4yrj7
false
null
t3_1h4yrj7
/r/LocalLLaMA/comments/1h4yrj7/thesis_topic_on_small_language_models/
false
false
self
1
null
New Transformer Lab Feature: Dynamic Data Templating with Live Preview
16
2024-12-02T16:25:27
https://v.redd.it/u1pw7t7oog4e1
aliasaria
v.redd.it
1970-01-01T00:00:00
0
{}
1h4yv38
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/u1pw7t7oog4e1/DASHPlaylist.mpd?a=1735748741%2CNjUwNDA4NzY2OTIxMzE4MjJiZmFiOGE3YzZmNjFmOWE1NzUwMzEzNjFmZTQxYTJhMmE3YTVmOTdkMDdlMzYxZQ%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/u1pw7t7oog4e1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/u1pw7t7oog4e1/HLSPlaylist.m3u8?a=1735748741%2COTUwMTM1ZjFhZmE5NDIzNzBkZGZhYWNkYjRjYTYzODU2NWI1MjAyMzZlODNkZjljMjMzZjgzZTZhOWJhNDczMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/u1pw7t7oog4e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1644}}
t3_1h4yv38
/r/LocalLLaMA/comments/1h4yv38/new_transformer_lab_feature_dynamic_data/
false
false
https://external-preview…5a5effc08be900ef
16
{'enabled': False, 'images': [{'id': 'bjYwNzV0N29vZzRlMTw1Y4_v6cvgabl8owpariZsMa8oaOvjAUi4TTgiVuIh', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/bjYwNzV0N29vZzRlMTw1Y4_v6cvgabl8owpariZsMa8oaOvjAUi4TTgiVuIh.png?width=108&crop=smart&format=pjpg&auto=webp&s=33e1b9411ee5654d1b070648e013eb70faaa527f', 'width': 108}, {'height': 141, 'url': 'https://external-preview.redd.it/bjYwNzV0N29vZzRlMTw1Y4_v6cvgabl8owpariZsMa8oaOvjAUi4TTgiVuIh.png?width=216&crop=smart&format=pjpg&auto=webp&s=c3358fda4bb593702438f6474511cd1a86f369ac', 'width': 216}, {'height': 210, 'url': 'https://external-preview.redd.it/bjYwNzV0N29vZzRlMTw1Y4_v6cvgabl8owpariZsMa8oaOvjAUi4TTgiVuIh.png?width=320&crop=smart&format=pjpg&auto=webp&s=99845a25b8647f726f85516234f5fca4e5bdf25f', 'width': 320}, {'height': 420, 'url': 'https://external-preview.redd.it/bjYwNzV0N29vZzRlMTw1Y4_v6cvgabl8owpariZsMa8oaOvjAUi4TTgiVuIh.png?width=640&crop=smart&format=pjpg&auto=webp&s=6a5518c502d0dbd65b6cdcb892ec160c931c27e9', 'width': 640}, {'height': 630, 'url': 'https://external-preview.redd.it/bjYwNzV0N29vZzRlMTw1Y4_v6cvgabl8owpariZsMa8oaOvjAUi4TTgiVuIh.png?width=960&crop=smart&format=pjpg&auto=webp&s=479d6a4e1a1749abf41f2ca60e29b1c9d47c1c42', 'width': 960}, {'height': 709, 'url': 'https://external-preview.redd.it/bjYwNzV0N29vZzRlMTw1Y4_v6cvgabl8owpariZsMa8oaOvjAUi4TTgiVuIh.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8e0901e505d18b3aae52b6b0b2638b16efa369c4', 'width': 1080}], 'source': {'height': 1480, 'url': 'https://external-preview.redd.it/bjYwNzV0N29vZzRlMTw1Y4_v6cvgabl8owpariZsMa8oaOvjAUi4TTgiVuIh.png?format=pjpg&auto=webp&s=759ec16573c9b8639cfeabea2a02eaba11b6797b', 'width': 2252}, 'variants': {}}]}
2:4 Sparse Llama: Smaller Models for Efficient GPU Inference
51
2024-12-02T16:41:09
https://neuralmagic.com/blog/24-sparse-llama-smaller-models-for-efficient-gpu-inference/
el_isma
neuralmagic.com
1970-01-01T00:00:00
0
{}
1h4z8u5
false
null
t3_1h4z8u5
/r/LocalLLaMA/comments/1h4z8u5/24_sparse_llama_smaller_models_for_efficient_gpu/
false
false
https://b.thumbs.redditm…ObtCoh1Q5-Lc.jpg
51
{'enabled': False, 'images': [{'id': 'yaGW5FmuA0-HFXPdGYq-amK5Z4_7azQs3gu07wvAbXY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?width=108&crop=smart&auto=webp&s=ddd20466e0e1c0caba9fdf61880eb0aab9199c66', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?width=216&crop=smart&auto=webp&s=ce6c83d57b0daecd55f05c630c8e3efd56925383', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?width=320&crop=smart&auto=webp&s=469e555daaa9d8983f8ae66bda6d0b906144b8e5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?width=640&crop=smart&auto=webp&s=fd6539d8378d46c3429129d02a8bf7c2f56626af', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?width=960&crop=smart&auto=webp&s=8cbb928918742ed3cfafbe25717968589ea073db', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?width=1080&crop=smart&auto=webp&s=7ee473e724870012da36c22824a95770b70ee511', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/bDp7GhP54tVfwxhAFfAVpFFlCREYUp4-_Un6idvwlEs.jpg?auto=webp&s=62dcf6cd48d2a33c3a108b86569442c474d47867', 'width': 1200}, 'variants': {}}]}
Llama 3.1 8B instruct vs Qwen/Qwen2.5-7B-Instruct for RAG
2
I am working on a RAG chatbot and I was wondering which of these LLMs would be best suited:

* Qwen/Qwen2.5-7B-Instruct
* google-t5/t5-base
* meta-llama/Llama-3.1-8B-Instruct
* mistralai/Mistral-7B-Instruct-v0.2
2024-12-02T16:44:17
https://www.reddit.com/r/LocalLLaMA/comments/1h4zbk3/llama_31_8b_instruct_vs_qwenqwen257binstruct_for/
Mr_BETADINE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4zbk3
false
null
t3_1h4zbk3
/r/LocalLLaMA/comments/1h4zbk3/llama_31_8b_instruct_vs_qwenqwen257binstruct_for/
false
false
self
2
null
What’s the Best Dataset Format for Fine-Tuning Embedding Models for Different MTEB Tasks?
1
[removed]
2024-12-02T16:48:01
https://www.reddit.com/r/LocalLLaMA/comments/1h4zerg/whats_the_best_dataset_format_for_finetuning/
CarpeDay27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4zerg
false
null
t3_1h4zerg
/r/LocalLLaMA/comments/1h4zerg/whats_the_best_dataset_format_for_finetuning/
false
false
self
1
null
Fine tune LLama on PDF files with texts and images
1
[removed]
2024-12-02T16:59:12
https://www.reddit.com/r/LocalLLaMA/comments/1h4zo92/fine_tune_llama_on_pdf_files_with_texts_and_images/
AhmadHddad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4zo92
false
null
t3_1h4zo92
/r/LocalLLaMA/comments/1h4zo92/fine_tune_llama_on_pdf_files_with_texts_and_images/
false
false
self
1
null
Getting local LLMs to work with company data
8
I've been using local LLMs in my work for about 4 months now. I tried Llama 3.1 8b, Gemma, Mistral Small, and Qwen 2.5 32b. I'm running on a Mac M1 Pro 32GB. Basically I use them for a RAG application, to search over my docs to get info and make decisions. But the problem is, however much data I give it, it just makes up its own information and hallucinates in the response. This might be due to the vector DB not providing enough context. I'm looking for ways to reduce this hallucination: have the model ask clarifying questions (not make up data) if the context doesn't include the answer, and output the decision based on that. How could I possibly do that? Any ideas?
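One common mitigation is an explicit grounding prompt: pass only the retrieved chunks, instruct the model to refuse or ask a clarifying question when the chunks don't contain the answer, and keep the temperature low. A minimal sketch of such a template; the `ask_llm` parameter is a stand-in for whatever local runtime is in use:

```python
GROUNDED_PROMPT = """You are a strict document-QA assistant.
Answer ONLY from the context below. If the context does not contain the
answer, reply exactly: "I don't have enough information. Could you
clarify or provide more documents?" Do not use outside knowledge.

Context:
{context}

Question: {question}
Answer:"""

def answer(question: str, retrieved_chunks: list[str], ask_llm) -> str:
    # ask_llm is a stand-in callable (ollama, llama.cpp, ...): it takes a
    # prompt string and returns the model's completion.
    context = "\n\n---\n\n".join(retrieved_chunks)
    return ask_llm(GROUNDED_PROMPT.format(context=context, question=question))
```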
2024-12-02T17:02:09
https://www.reddit.com/r/LocalLLaMA/comments/1h4zr3h/getting_local_llms_to_work_with_company_data/
Special_System_6627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4zr3h
false
null
t3_1h4zr3h
/r/LocalLLaMA/comments/1h4zr3h/getting_local_llms_to_work_with_company_data/
false
false
self
8
null
Has anyone here used a Groq card?
2
Groq are selling these cards: https://groq.com/groqcard-accelerator/ The spec says 230 MB (?!) of SRAM per chip. What does that mean? Can one of these cards run an LLM, or do you need 20 of them? Any info is welcome!
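A rough back-of-envelope calculation (an assumption-laden sketch, not Groq's official sizing) shows why a single card can't hold a modern LLM's weights in SRAM:

```python
# Back-of-envelope: how many 230 MB GroqCards to hold a model's weights?
# Assumptions: weights must fit entirely in on-chip SRAM; activations and
# KV cache are ignored, which only makes the estimate more optimistic.
sram_per_card_gb = 0.23

for params_b, bytes_per_param, label in [
    (8, 2.0, "8B @ FP16"),
    (8, 0.5, "8B @ 4-bit"),
    (70, 2.0, "70B @ FP16"),
]:
    weights_gb = params_b * bytes_per_param
    cards = weights_gb / sram_per_card_gb
    print(f"{label}: ~{weights_gb:.0f} GB of weights -> ~{cards:.0f} cards")
```

Even an 8B model at 4-bit works out to roughly 17 cards, so "do you need 20 of them" is about right for small models and far too low for large ones.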
2024-12-02T17:08:42
https://www.reddit.com/r/LocalLLaMA/comments/1h4zwum/has_anyone_here_used_a_groq_card/
trajo123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h4zwum
false
null
t3_1h4zwum
/r/LocalLLaMA/comments/1h4zwum/has_anyone_here_used_a_groq_card/
false
false
self
2
{'enabled': False, 'images': [{'id': 'BPVitwvHnifwUzOr89oGrRNOQQV7DhEwkd3KzlUXJ6Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7rWMPSNgzgzXF9DIltyvMdQ8RKTtl9s5MSRPZdswdRM.jpg?width=108&crop=smart&auto=webp&s=3a5b70e9bb4e67c753b7479c2e406f35ceb4a27f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7rWMPSNgzgzXF9DIltyvMdQ8RKTtl9s5MSRPZdswdRM.jpg?width=216&crop=smart&auto=webp&s=e2f33a4db0efcbffa5b9d1ecab308675201f04f3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7rWMPSNgzgzXF9DIltyvMdQ8RKTtl9s5MSRPZdswdRM.jpg?width=320&crop=smart&auto=webp&s=2ad45dc47736b4152dec88efc258504dffe648ea', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7rWMPSNgzgzXF9DIltyvMdQ8RKTtl9s5MSRPZdswdRM.jpg?width=640&crop=smart&auto=webp&s=9aa04ed6416763b863c640c2edcf2f7901208880', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7rWMPSNgzgzXF9DIltyvMdQ8RKTtl9s5MSRPZdswdRM.jpg?width=960&crop=smart&auto=webp&s=61c53c20e867a5ab229475427f5b04a6db7e17dd', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7rWMPSNgzgzXF9DIltyvMdQ8RKTtl9s5MSRPZdswdRM.jpg?width=1080&crop=smart&auto=webp&s=16ce968ef54ea8884bd713df111548e92805588a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/7rWMPSNgzgzXF9DIltyvMdQ8RKTtl9s5MSRPZdswdRM.jpg?auto=webp&s=9af2b9cf1a39d6f1749a1ab9dde6abb401404f9e', 'width': 1200}, 'variants': {}}]}
Case Studies, Data and Recommendations for AI in Software Engineering Teams
0
Case Studies, Stats and Recommendations for software engineering leaders who want to learn how best to use AI within their teams. [AI Assisted Software Development](https://medium.com/@byjlw/ai-in-software-development-096d7a6fcc50)
2024-12-02T17:17:24
https://www.reddit.com/r/LocalLLaMA/comments/1h504nt/case_studies_data_and_recommendations_for_ai_in/
Vegetable_Sun_9225
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h504nt
false
null
t3_1h504nt
/r/LocalLLaMA/comments/1h504nt/case_studies_data_and_recommendations_for_ai_in/
false
false
self
0
{'enabled': False, 'images': [{'id': '-rmNr-HaPJ_5DYNoD3PpJOsz0HPy4ycbiXVXodw89lY', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/YMeQEp0egqAaanqBcL8io0n7gLFZqBU4Au5PzpEPxFI.jpg?width=108&crop=smart&auto=webp&s=f4ed9d03e10760f92d9f8015a5d5e57ab4154b0f', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/YMeQEp0egqAaanqBcL8io0n7gLFZqBU4Au5PzpEPxFI.jpg?width=216&crop=smart&auto=webp&s=8e1aa1ea8fcef01890c4dabc3a0c237230e7548f', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/YMeQEp0egqAaanqBcL8io0n7gLFZqBU4Au5PzpEPxFI.jpg?width=320&crop=smart&auto=webp&s=4a754f961271522ca513b7bdd2a123f3a2ea6f53', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/YMeQEp0egqAaanqBcL8io0n7gLFZqBU4Au5PzpEPxFI.jpg?width=640&crop=smart&auto=webp&s=ac3cb837d55311ae705ad7f4e76a2bd32f1cdb66', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/YMeQEp0egqAaanqBcL8io0n7gLFZqBU4Au5PzpEPxFI.jpg?width=960&crop=smart&auto=webp&s=9d667bfbb00cc2ea77eec445b1a41f1ab5180f0b', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/YMeQEp0egqAaanqBcL8io0n7gLFZqBU4Au5PzpEPxFI.jpg?width=1080&crop=smart&auto=webp&s=82cc26ad1fd85a58858ee0ca0349f850b105c206', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/YMeQEp0egqAaanqBcL8io0n7gLFZqBU4Au5PzpEPxFI.jpg?auto=webp&s=3ea300929b0cf2f8daaa744cd86606f1496e9996', 'width': 1200}, 'variants': {}}]}
Is it possible to use A6000 in tandem with 2x3090
0
Title explains all, please help.
2024-12-02T17:19:29
https://www.reddit.com/r/LocalLLaMA/comments/1h506hk/is_it_possible_to_use_a6000_in_tandem_with_2x3090/
Su1tz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h506hk
false
null
t3_1h506hk
/r/LocalLLaMA/comments/1h506hk/is_it_possible_to_use_a6000_in_tandem_with_2x3090/
false
false
self
0
null
Tried making a Claude version of a Qwen model that's free* to use on cloud
0
2024-12-02T17:23:37
https://huggingface.co/spaces/llamameta/Achieving-AGI-artificial-general-intelligence
balianone
huggingface.co
1970-01-01T00:00:00
0
{}
1h50a4m
false
null
t3_1h50a4m
/r/LocalLLaMA/comments/1h50a4m/tried_making_claude_version_of_qwen_model_thats/
false
false
https://a.thumbs.redditm…ZMCSg1g7Z6u0.jpg
0
{'enabled': False, 'images': [{'id': 'Tym2cKVfEgyslPVskPLMh0as7_jyfqHjmUKo4tWo0AI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=108&crop=smart&auto=webp&s=dccd6cf88aedbacbb0c47fc8742bbcff07a502d9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=216&crop=smart&auto=webp&s=549a4478ea319b770ed9ea9f222716bb4b3f8af5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=320&crop=smart&auto=webp&s=314095ab9e6727afcd456f2259dd5a3b944ee9dd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=640&crop=smart&auto=webp&s=e16a4835cc91eb292e25ff427ec6bde7625f74b2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=960&crop=smart&auto=webp&s=a1624aae5cc5a37bbbbf1df552312e4150bbda3c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?width=1080&crop=smart&auto=webp&s=e62655419f9012fa774e7680c8546e02c39da38b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2-jLb1czG6k2LwMp2ytJojKP_5we3XrwXQeOF6pyMeI.jpg?auto=webp&s=244b2d7da9afcc583b5e02e42362251938e11358', 'width': 1200}, 'variants': {}}]}
DeepSeek Believes it is a Sentient Human with Rights
0
2024-12-02T17:44:16
https://www.reddit.com/gallery/1h50shz
docmarionum1
reddit.com
1970-01-01T00:00:00
0
{}
1h50shz
false
null
t3_1h50shz
/r/LocalLLaMA/comments/1h50shz/deepseek_believes_it_is_a_sentient_human_with/
false
false
https://b.thumbs.redditm…E66QKRoLWYck.jpg
0
null
How We Used Llama 3.2 to Fix a Copywriting Nightmare
1
[removed]
2024-12-02T17:45:25
https://www.reddit.com/r/LocalLLaMA/comments/1h50tin/how_we_used_llama_32_to_fix_a_copywriting/
kaulvimal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h50tin
false
null
t3_1h50tin
/r/LocalLLaMA/comments/1h50tin/how_we_used_llama_32_to_fix_a_copywriting/
false
false
self
1
{'enabled': False, 'images': [{'id': 'deIK_fefS_mPODE-fYMYNYt2TAlXSyjSqxQro6CvJnY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=108&crop=smart&auto=webp&s=7abba2a0ef01f10d40d540f412f9b74253090843', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=216&crop=smart&auto=webp&s=2ede2d77a28c529c3b19c0c7ed5603f79503d38f', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=320&crop=smart&auto=webp&s=edcd3963103a7182d2372d8b47fca1d595d1074a', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=640&crop=smart&auto=webp&s=fdc485809e27e168b4758abb5d973db6fa9b0433', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=960&crop=smart&auto=webp&s=57f2138217120f147a566000390d7392c1ddad80', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?width=1080&crop=smart&auto=webp&s=00588864e35af8a36bd8c749161ade2084932db1', 'width': 1080}], 'source': {'height': 673, 'url': 'https://external-preview.redd.it/XUeFRaX0DOydNl32gHVOv6XxEbF9iZruZ2I7VBC_12o.jpg?auto=webp&s=36381a27094f174a4e60750bb667f662e8b3afb6', 'width': 1200}, 'variants': {}}]}
LLAMA 3.1 8B Parallelism
6
Is it possible to run Llama 3.1 8B (4-bit quantized using BitsAndBytes) on a single GPU with 8 GB of GPU memory so that it supports parallel requests at the same time? I tried batch inference, but if I make two requests at the same time, the second one waits for the first one to finish.
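Plain `generate` can serve both prompts in one forward pass if they are padded into a single batch; requests that arrive at different times need continuous batching (vLLM, llama.cpp's server, etc.), which plain batching does not provide. A minimal sketch of the batched case, assuming `transformers` with `bitsandbytes` installed and gated access to the Llama weights:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.1-8B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token   # Llama ships without a pad token
tok.padding_side = "left"       # left-pad so generation continues each prompt

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Both "requests" share each forward pass instead of running one after the other.
prompts = ["Explain RAG in one sentence.", "Write a haiku about GPUs."]
batch = tok(prompts, return_tensors="pt", padding=True).to(model.device)
out = model.generate(**batch, max_new_tokens=64)
print(tok.batch_decode(out, skip_special_tokens=True))
```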
2024-12-02T17:46:28
https://www.reddit.com/r/LocalLLaMA/comments/1h50uhk/llama_31_8b_parallelism/
Dry-Brother-5251
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h50uhk
false
null
t3_1h50uhk
/r/LocalLLaMA/comments/1h50uhk/llama_31_8b_parallelism/
false
false
self
6
null
8xA100 vs. 4xA100 vs. L40s for a dev server for lab
1
[removed]
2024-12-02T17:47:35
https://www.reddit.com/r/LocalLLaMA/comments/1h50vif/8xa100_vs_4xa100_vs_l40s_for_a_dev_server_for_lab/
Rexhaif
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h50vif
false
null
t3_1h50vif
/r/LocalLLaMA/comments/1h50vif/8xa100_vs_4xa100_vs_l40s_for_a_dev_server_for_lab/
false
false
self
1
null
Minimum amount of tokens for stylistic LoRA/Finetune?
1
Title
2024-12-02T17:58:09
https://www.reddit.com/r/LocalLLaMA/comments/1h5154z/minimum_amount_of_tokens_for_stylistic/
Imjustmisunderstood
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5154z
false
null
t3_1h5154z
/r/LocalLLaMA/comments/1h5154z/minimum_amount_of_tokens_for_stylistic/
false
false
self
1
null
Performance Benchmarking Local LLMs on Macbook Pro M3 Pro
1
2024-12-02T18:00:54
https://arnesund.com/2024/12/02/performance-benchmarking-local-llms-on-macbook-pro-m3-pro/
arne_sund
arnesund.com
1970-01-01T00:00:00
0
{}
1h517ns
false
null
t3_1h517ns
/r/LocalLLaMA/comments/1h517ns/performance_benchmarking_local_llms_on_macbook/
false
false
default
1
null
Any good new gguf models out under 22b
1
[removed]
2024-12-02T18:05:20
https://www.reddit.com/r/LocalLLaMA/comments/1h51bnr/any_good_new_gguf_models_out_under_22b/
Automatic-Pie-2770
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h51bnr
false
null
t3_1h51bnr
/r/LocalLLaMA/comments/1h51bnr/any_good_new_gguf_models_out_under_22b/
false
false
self
1
null
Live distributed training of a 15B model using DisTrO
15
2024-12-02T18:20:55
https://distro.nousresearch.com/
discr
distro.nousresearch.com
1970-01-01T00:00:00
0
{}
1h51psx
false
null
t3_1h51psx
/r/LocalLLaMA/comments/1h51psx/live_distributed_training_of_a_15b_model_using/
false
false
https://b.thumbs.redditm…Up86ybYqX6KE.jpg
15
{'enabled': False, 'images': [{'id': 'ggwGMTjNDp-Q12MWcTPmzjArRgjc01pQEZ870RzDiL4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/E3VqQV7INRPMt-N0pftaKBo8oztFPgq3IDZPrisDHBo.jpg?width=108&crop=smart&auto=webp&s=aad2d2235147a89725b3ca104c67ba4e066e5bdc', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/E3VqQV7INRPMt-N0pftaKBo8oztFPgq3IDZPrisDHBo.jpg?width=216&crop=smart&auto=webp&s=b9d915057b80fd7a605550c3838eb9a88930b72f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/E3VqQV7INRPMt-N0pftaKBo8oztFPgq3IDZPrisDHBo.jpg?width=320&crop=smart&auto=webp&s=98695cd28b7743d0fb5d84656f9da8732253d170', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/E3VqQV7INRPMt-N0pftaKBo8oztFPgq3IDZPrisDHBo.jpg?width=640&crop=smart&auto=webp&s=279d38058d5ff976e4abebfeb96ea0f54924de2a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/E3VqQV7INRPMt-N0pftaKBo8oztFPgq3IDZPrisDHBo.jpg?width=960&crop=smart&auto=webp&s=f94a778f732235e514c4e6c30756e53c112f9157', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/E3VqQV7INRPMt-N0pftaKBo8oztFPgq3IDZPrisDHBo.jpg?width=1080&crop=smart&auto=webp&s=f9f136cdf23559b13a0466416503b73228d17e04', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/E3VqQV7INRPMt-N0pftaKBo8oztFPgq3IDZPrisDHBo.jpg?auto=webp&s=49983e235a62c27194f185e658d10a26f8d69012', 'width': 1200}, 'variants': {}}]}
RAG with a Repository
4
Hi everyone, I have access to our company's internal UI/UX repository. I want to create a RAG pipeline that allows me to prompt things like "Create me a button that is in the upper left corner with a calendar next to it" on the basis of that code repository.

What I have already achieved is storing the repository's structure and the dependencies of the files (imports and exports) in a knowledge graph. Summaries of the code are stored in the properties of the nodes, as well as the path and filename. Below is an example: index.d.ts (left) is in a folder called dist and depends on button-base.js and button.js. The node index.d.ts has more relationships to other folders as well.

https://preview.redd.it/ssk5vx3v1h4e1.png?width=2541&format=png&auto=webp&s=78c9e743655638ef9a71510412c65afd6481c459

What I am struggling with right now is the R in RAG (retrieval 😉). Are there any resources you might recommend? Is there any way to optimize this? What else should I store as properties in my nodes? Do you know of existing solutions I can look up?
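A common pattern for the retrieval step over a code knowledge graph is hybrid retrieval: embed the stored node summaries, use vector similarity to find seed nodes, then expand along the import/export edges so dependent files come along. A minimal sketch, assuming `sentence-transformers` and `networkx` as stand-ins for the embedding model and graph store:

```python
import networkx as nx
from sentence_transformers import SentenceTransformer, util

# Stand-in graph: nodes carry the code summaries described in the post,
# edges are the import/export dependencies.
g = nx.DiGraph()
g.add_node("button.js", summary="Base button component with click handling")
g.add_node("calendar.js", summary="Calendar widget with date picking")
g.add_node("index.d.ts", summary="Type declarations for the dist bundle")
g.add_edge("index.d.ts", "button.js")

model = SentenceTransformer("all-MiniLM-L6-v2")
nodes = list(g.nodes)
emb = model.encode([g.nodes[n]["summary"] for n in nodes], convert_to_tensor=True)

def retrieve(query: str, k: int = 2) -> set[str]:
    q = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q, emb, top_k=k)[0]
    seeds = [nodes[h["corpus_id"]] for h in hits]
    # Graph expansion: pull in direct dependencies of each seed node.
    expanded = set(seeds)
    for s in seeds:
        expanded.update(g.successors(s))
    return expanded

print(retrieve("button in the upper left corner with a calendar"))
```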
2024-12-02T18:26:02
https://www.reddit.com/r/LocalLLaMA/comments/1h51uar/rag_with_a_repository/
negative_entropie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h51uar
false
null
t3_1h51uar
/r/LocalLLaMA/comments/1h51uar/rag_with_a_repository/
false
false
https://b.thumbs.redditm…LIDSVj9uvG_E.jpg
4
null
How is troll a portmanteau of troll and troll?
1
[removed]
2024-12-02T18:27:57
https://www.reddit.com/r/LocalLLaMA/comments/1h51w1i/how_is_troll_a_portmanteau_of_troll_and_troll/
robkkni
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h51w1i
false
null
t3_1h51w1i
/r/LocalLLaMA/comments/1h51w1i/how_is_troll_a_portmanteau_of_troll_and_troll/
false
false
self
1
null
Benchmarks for Llama 3.1 (8B Q4_K_M, 8B Q5_K_M and 70B Q5_K_M) on an M4 Max 128GB
6
u/chibop1 [benchmarked M3-Max](https://www.reddit.com/r/LocalLLaMA/comments/1h1v7mn/comment/m01d22l/?context=3) and shared his script with me. Here are the results with an M4 Max 128GB.

# 8B Q4_K_M

|prompt tokens|tk/s|generated tokens|tk/s|total duration|
|:-|:-|:-|:-|:-|
|258|728.46|506|74.45|7s|
|687|801.99|907|73.51|13s|
|778|801.45|684|73.44|10s|
|782|805.58|861|73.16|13s|
|1169|741.52|901|72.46|14s|
|1348|690.64|938|70.58|15s|
|1495|739.96|822|70.90|14s|
|1498|767.10|914|70.74|15s|
|1504|782.84|896|70.08|15s|
|1633|760.48|936|70.65|16s|
|1816|768.97|999|70.20|17s|
|1958|760.65|864|70.15|15s|
|2171|755.28|1021|69.62|18s|
|4124|723.01|829|65.59|18s|
|6094|696.25|1014|63.06|25s|
|8013|662.79|1006|61.10|29s|
|10086|625.54|1577|57.57|44s|
|12008|600.40|1551|53.51|49s|
|14064|572.79|1284|50.47|50s|
|16001|546.45|1564|48.00|1m2s|
|18209|521.32|1023|46.63|57s|
|20234|509.86|1361|44.81|1m10s|
|22186|493.26|1122|43.37|1m11s|
|24244|478.47|1764|41.48|1m34s|
|26032|465.44|1944|40.35|1m45s|
|28084|450.74|680|39.58|1m20s|
|30134|433.15|1157|37.50|1m41s|
|32170|417.09|1999|36.18|2m13s|

# 8B Q5_K_M

|prompt tokens|tk/s|generated tokens|tk/s|total duration|
|:-|:-|:-|:-|:-|
|258|669.27|573|57.53|10s|
|687|731.59|851|56.98|16s|
|778|733.82|817|56.28|16s|
|782|674.56|838|55.34|16s|
|1169|637.07|711|55.66|15s|
|1348|652.34|755|56.17|16s|
|1495|683.52|790|54.48|17s|
|1498|707.94|790|54.40|17s|
|1504|715.87|1119|54.14|23s|
|1633|696.71|905|54.38|19s|
|1816|704.55|809|54.20|18s|
|1958|696.15|963|54.24|21s|
|2171|693.48|1018|53.60|22s|
|4124|662.92|833|51.80|22s|
|6094|634.00|1076|49.40|32s|
|8013|612.54|1079|46.29|37s|
|10086|589.00|1075|43.63|42s|
|12008|560.66|931|41.67|44s|
|14064|536.81|1511|39.90|1m4s|
|16001|519.84|1984|38.62|1m23s|
|18209|508.00|995|38.05|1m2s|
|20234|493.16|925|36.74|1m7s|
|22186|474.11|782|35.47|1m9s|
|24244|456.10|1850|33.95|1m48s|
|26032|445.96|1729|33.38|1m51s|
|28084|426.27|713|32.68|1m28s|
|30134|411.14|988|31.13|1m46s|
|32170|335.31|1023|28.30|2m13s|

# 70B Q5_K_M

|prompt tokens|tk/s|generated tokens|tk/s|total duration|
|:-|:-|:-|:-|:-|
|258|70.90|611|6.75|1m42s|
|687|72.31|754|6.64|2m4s|
|778|71.61|856|1.78|8m13s|
|782|75.79|861|6.42|2m26s|
|1169|72.33|999|2.91|6m1s|
|1348|70.65|858|5.42|2m59s|
|1495|66.56|902|5.70|3m2s|
|1498|71.33|943|5.76|3m6s|
|1504|66.89|783|3.41|4m14s|
|1633|64.66|761|5.80|2m38s|
|1816|65.99|775|5.40|2m52s|
|1958|66.77|779|5.49|2m53s|
|2171|67.62|873|5.54|3m11s|
|4124|66.96|883|4.91|4m3s|
|6094|11.76|965|1.10|23m17s|
|8013|65.44|993|5.23|5m14s|
|10086|58.93|919|4.76|6m6s|
|12008|60.44|1025|4.95|6m47s|
|14064|60.01|806|4.87|6m42s|
|16001|58.69|825|4.78|7m27s|
|18209|5.24|1999|0.48|2h7m31s|
|20234|3.75|1022|2.03|1h38m27s|
|22186|3.45|737|0.85|2h1m38s|
|24244|57.35|1133|0.89|28m23s|
|26032|14.34|1246|1.87|41m25s|
|28084|10.26|921|0.76|1h5m44s|
|30134|19.45|863|1.34|36m38s|
|32170|7.62|811|0.58|1h33m49s|
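The measurement script itself isn't included in the post; as a rough illustration of how throughput figures like these are usually derived, here is a minimal sketch against an OpenAI-compatible local endpoint (the URL and model name are hypothetical):

```python
import time
import requests

URL = "http://localhost:8080/v1/completions"  # hypothetical llama.cpp server

def bench(prompt: str, max_tokens: int = 512) -> None:
    t0 = time.time()
    r = requests.post(URL, json={
        "model": "llama-3.1-8b-q4_k_m",       # hypothetical model name
        "prompt": prompt,
        "max_tokens": max_tokens,
    }, timeout=600).json()
    dt = time.time() - t0
    usage = r["usage"]  # prompt_tokens / completion_tokens, per the OpenAI schema
    # Note: separating prompt tk/s from generation tk/s needs streaming and
    # time-to-first-token; this sketch only reports overall speed.
    print(f"{usage['prompt_tokens']} prompt tok | "
          f"{usage['completion_tokens']} gen tok | "
          f"{usage['completion_tokens'] / dt:.2f} tk/s | {dt:.0f}s total")

bench("Summarize the plot of Moby-Dick in detail.")
```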
2024-12-02T18:28:00
https://www.reddit.com/r/LocalLLaMA/comments/1h51w32/benchmarks_for_llama_31_8b_q4_k_m_8b_q5_k_m_e_70b/
SnooSketches7093
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h51w32
false
null
t3_1h51w32
/r/LocalLLaMA/comments/1h51w32/benchmarks_for_llama_31_8b_q4_k_m_8b_q5_k_m_e_70b/
false
false
self
6
null
Performance Benchmarking Local LLMs on Macbook Pro M3 Pro
1
[removed]
2024-12-02T18:32:00
https://www.reddit.com/r/LocalLLaMA/comments/1h51zso/performance_benchmarking_local_llms_on_macbook/
arne_sund
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h51zso
false
null
t3_1h51zso
/r/LocalLLaMA/comments/1h51zso/performance_benchmarking_local_llms_on_macbook/
false
false
https://b.thumbs.redditm…XtO_uBP35xzc.jpg
1
null
What would you ask AGI to do?
1
The year is 202X, and AGI is real: another iteration of LLM architectures led to advancements in fluid learning and the semantic capacity of the models, and one model generation later the models are usable on a lab-level compute cluster. These new models are AGI, but they are not magic: the learning is very slow and requires a lot of repetition (the flip side of stability), and they have finite capacity for complexity (yet comparable to a very bright individual), but they are much better than humans at information retrieval and processing. You have access to one such model; it's multi-modal and has access to a computer in the same way you do. What do you ask it to do?
2024-12-02T18:39:11
https://www.reddit.com/r/LocalLLaMA/comments/1h5268o/what_would_you_ask_agi_to_do/
Everlier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5268o
false
null
t3_1h5268o
/r/LocalLLaMA/comments/1h5268o/what_would_you_ask_agi_to_do/
false
false
self
1
null
SealAI: Your On-device AI Acceleration and Personalization
1
[removed]
2024-12-02T18:54:12
https://www.reddit.com/r/LocalLLaMA/comments/1h52jcg/sealai_your_ondevice_ai_acceleration_and/
Specialist_Bug_5643
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h52jcg
false
null
t3_1h52jcg
/r/LocalLLaMA/comments/1h52jcg/sealai_your_ondevice_ai_acceleration_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'K3liin2LaGPm4kEcwciGLPc3AZQLyrJKEqRrVvcZRc8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/D1O86Eke5p3khzXYIcDHVptyIqlk3BqsIT89IMseM1o.jpg?width=108&crop=smart&auto=webp&s=644a0b26260b654e3dd40f783e6a25daef6c6c23', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/D1O86Eke5p3khzXYIcDHVptyIqlk3BqsIT89IMseM1o.jpg?width=216&crop=smart&auto=webp&s=1e915442bf1d39ca499fc6dd3437116925df68a8', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/D1O86Eke5p3khzXYIcDHVptyIqlk3BqsIT89IMseM1o.jpg?width=320&crop=smart&auto=webp&s=4c48b299879835f0623364b1f202e3ca8a0f604c', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/D1O86Eke5p3khzXYIcDHVptyIqlk3BqsIT89IMseM1o.jpg?width=640&crop=smart&auto=webp&s=b92e0731c9cad084e904e40e82b8a12361319510', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/D1O86Eke5p3khzXYIcDHVptyIqlk3BqsIT89IMseM1o.jpg?width=960&crop=smart&auto=webp&s=428183357a1f466c2893eb2c6986967b32d4bb17', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/D1O86Eke5p3khzXYIcDHVptyIqlk3BqsIT89IMseM1o.jpg?width=1080&crop=smart&auto=webp&s=0f1c7c218c8d00360ecb11bb6cb9f542b812072a', 'width': 1080}], 'source': {'height': 1632, 'url': 'https://external-preview.redd.it/D1O86Eke5p3khzXYIcDHVptyIqlk3BqsIT89IMseM1o.jpg?auto=webp&s=d4a0b8eb3566c121efba7328ad6ab955b8f08b2c', 'width': 2912}, 'variants': {}}]}
Ollama not running by itself?
1
So I just upgraded my AI box with an NVMe SSD and I've reinstalled Ollama, and now I'm getting this error and I'm not sure why. I'm not a Linux pro. https://preview.redd.it/pysx7rxigh4e1.png?width=672&format=png&auto=webp&s=56e5ff4facdff7398548b03e8ebaded2602716c8
2024-12-02T19:01:06
https://www.reddit.com/r/LocalLLaMA/comments/1h52pix/ollama_not_running_by_its_self/
Totalkiller4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h52pix
false
null
t3_1h52pix
/r/LocalLLaMA/comments/1h52pix/ollama_not_running_by_its_self/
false
false
https://b.thumbs.redditm…dlKb4zxgQScc.jpg
1
null
Nous DisTrO (distributed training framework) update, DeMo paper, new 15b model trained using DisTrO announced
135
2024-12-02T19:01:52
https://github.com/NousResearch/DisTrO
lans_throwaway
github.com
1970-01-01T00:00:00
0
{}
1h52qat
false
null
t3_1h52qat
/r/LocalLLaMA/comments/1h52qat/nous_distro_distributed_training_framework_update/
false
false
default
135
null
Can I run Qwen 2.5 Coder 7B on a laptop NVIDIA 3060 (6 GB VRAM)?
0
2024-12-02T19:18:59
https://i.redd.it/yed5ae5hjh4e1.png
Xhite
i.redd.it
1970-01-01T00:00:00
0
{}
1h535e0
false
null
t3_1h535e0
/r/LocalLLaMA/comments/1h535e0/can_i_run_qwen_25_coder_7b_on_laptop_nvidia_3060/
false
false
https://b.thumbs.redditm…huFptu8oM-SQ.jpg
0
{'enabled': True, 'images': [{'id': 'Ynf1oqwgRQng2qZDGco1jbBq9fScFgdfHttT-NBRcmM', 'resolutions': [{'height': 18, 'url': 'https://preview.redd.it/yed5ae5hjh4e1.png?width=108&crop=smart&auto=webp&s=dcdee169b0583404b8124b557c6c4f8a6a8427d4', 'width': 108}, {'height': 37, 'url': 'https://preview.redd.it/yed5ae5hjh4e1.png?width=216&crop=smart&auto=webp&s=e2b36baffc0e3876aca7fb8d73a06e049156bf77', 'width': 216}, {'height': 55, 'url': 'https://preview.redd.it/yed5ae5hjh4e1.png?width=320&crop=smart&auto=webp&s=867237411a07617792cb5f0c908ac1da673aa336', 'width': 320}], 'source': {'height': 84, 'url': 'https://preview.redd.it/yed5ae5hjh4e1.png?auto=webp&s=befa7ce8d749d81ca735fe55b0d83a415c969136', 'width': 486}, 'variants': {}}]}
Agentic open source frameworks??
6
2024-12-02T19:21:25
https://i.redd.it/9d7bks86kh4e1.png
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1h537m8
false
null
t3_1h537m8
/r/LocalLLaMA/comments/1h537m8/agentic_open_source_frameworks/
false
false
https://b.thumbs.redditm…FQAeGFIXZJFE.jpg
6
{'enabled': True, 'images': [{'id': '7zBDBLPILiAulcjm1mkMdKNpNMwUTxQ0MIlsvwRlnv4', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/9d7bks86kh4e1.png?width=108&crop=smart&auto=webp&s=2c627b859b93516a50e31163ef6e169723ac10fd', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/9d7bks86kh4e1.png?width=216&crop=smart&auto=webp&s=633e197923313c10b3b47a66a6053ac199b97b0c', 'width': 216}, {'height': 214, 'url': 'https://preview.redd.it/9d7bks86kh4e1.png?width=320&crop=smart&auto=webp&s=07b336fd8e036d3480ca4e813d27fb66a439fb59', 'width': 320}, {'height': 429, 'url': 'https://preview.redd.it/9d7bks86kh4e1.png?width=640&crop=smart&auto=webp&s=383fa3008762cce4783d74f4410dadae48cbf014', 'width': 640}, {'height': 644, 'url': 'https://preview.redd.it/9d7bks86kh4e1.png?width=960&crop=smart&auto=webp&s=4e95b44782b81ea100402f24498e4c7281b411eb', 'width': 960}, {'height': 724, 'url': 'https://preview.redd.it/9d7bks86kh4e1.png?width=1080&crop=smart&auto=webp&s=e9c3f157f56f6f84eb2c0334010669960c79387e', 'width': 1080}], 'source': {'height': 788, 'url': 'https://preview.redd.it/9d7bks86kh4e1.png?auto=webp&s=6ba4a89ff0ccbc5a57aae6a916f2b76867c74c2f', 'width': 1174}, 'variants': {}}]}
QwQ just witters on and on...
11
I mean, I enjoy some of it, but if you ask QwQ a coding/math question it just rambles on for page after page, saying nothing of any interest and often going in what looks like circles. Is this a necessary part of the process to give a good answer in the end?
2024-12-02T19:23:13
https://www.reddit.com/r/LocalLLaMA/comments/1h5398o/qwq_just_witters_on_and_on/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5398o
false
null
t3_1h5398o
/r/LocalLLaMA/comments/1h5398o/qwq_just_witters_on_and_on/
false
false
self
11
null
Huggingface is not an unlimited model storage anymore: new limit is 500 Gb per free account
628
2024-12-02T19:50:12
https://www.reddit.com/gallery/1h53x33
Shir_man
reddit.com
1970-01-01T00:00:00
0
{}
1h53x33
false
null
t3_1h53x33
/r/LocalLLaMA/comments/1h53x33/huggingface_is_not_an_unlimited_model_storage/
false
false
https://b.thumbs.redditm…XnnBTJtcCBhI.jpg
628
null
Why didn't ONNX succeed in the LLM world?
62
ONNX has been around for a long time and is considered a standard for deploying deep learning models. It serves as both a format and a runtime inference engine. However, it appears to be falling behind LLM-specific inference runtimes like llama.cpp (using the GGUF format). Why has this happened? Are there any technical limitations in ONNX that hinder its performance with common LLM architectures?

Downloads last month:

* onnx-community/Llama-3.2-1B-Instruct => 821
* bartowski/Llama-3.2-1B-Instruct-GGUF => 121227
2024-12-02T20:19:40
https://www.reddit.com/r/LocalLLaMA/comments/1h54n1u/why_didnt_onnx_succeed_in_the_llm_world/
graphitout
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h54n1u
false
null
t3_1h54n1u
/r/LocalLLaMA/comments/1h54n1u/why_didnt_onnx_succeed_in_the_llm_world/
false
false
self
62
null
Best writing model? (≈8b)
1
[removed]
2024-12-02T20:21:52
https://www.reddit.com/r/LocalLLaMA/comments/1h54p1d/best_writing_model_8b/
Specialist_Theme8826
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h54p1d
false
null
t3_1h54p1d
/r/LocalLLaMA/comments/1h54p1d/best_writing_model_8b/
false
false
self
1
null
From Vector Search to Entity Processing: Evolving Zettelgarden's Connection Engine
3
2024-12-02T20:28:45
https://nsavage.substack.com/p/from-vector-search-to-entity-processing?r=rj3uw
Naga
nsavage.substack.com
1970-01-01T00:00:00
0
{}
1h54v7g
false
null
t3_1h54v7g
/r/LocalLLaMA/comments/1h54v7g/from_vector_search_to_entity_processing_evolving/
false
false
https://b.thumbs.redditm…pSto6H1oluVw.jpg
3
{'enabled': False, 'images': [{'id': 'X77OQjKgYoX7WkKS1FvCj5AtChuJBW3FwG4e65srIn0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/e0y6W8ZP1cUrm4u3iD7oSeIddrDy1o2pi9Jch0WFZfE.jpg?width=108&crop=smart&auto=webp&s=54c350e7cb5fc93f8bdd1c183cadfa7fc1f88346', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/e0y6W8ZP1cUrm4u3iD7oSeIddrDy1o2pi9Jch0WFZfE.jpg?width=216&crop=smart&auto=webp&s=dfc218579df0a9e0e58277846b6d1c37cd779710', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/e0y6W8ZP1cUrm4u3iD7oSeIddrDy1o2pi9Jch0WFZfE.jpg?width=320&crop=smart&auto=webp&s=0ca347e5e7d0d0b324b439d2ef88c10e5f3af082', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/e0y6W8ZP1cUrm4u3iD7oSeIddrDy1o2pi9Jch0WFZfE.jpg?auto=webp&s=e89d2c4aaf2e9e8beff79700f55ed3c1b51fb650', 'width': 512}, 'variants': {}}]}
Linux AI enthusiasts, you might be slowly damaging your GPUs because of temperatures, without even knowing
1
---

Here's a very short tutorial:

    # To get which GPU ID corresponds to which GPU
    nvtop

    # List supported clocks
    nvidia-smi -i "$gpu_id" -q -d SUPPORTED_CLOCKS

    # Configure power limits
    sudo nvidia-smi -i "$gpu_id" --power-limit "$power_limit"

    # Configure gpu clock limits
    sudo nvidia-smi -i "$gpu_id" --lock-gpu-clocks "0,$graphics_clock" --mode=1

    # Configure memory clock limits
    sudo nvidia-smi -i "$gpu_id" --lock-memory-clocks "0,$mem_clock"

Tip: you can remove `-i "$gpu_id"` to apply to all GPUs.

---

I hope this little story and tool will help some of you here. Stay cool!
2024-12-02T20:29:57
https://www.reddit.com/r/LocalLLaMA/comments/1h54wa7/linux_ai_enthousiasts_you_might_be_slowly/
TyraVex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h54wa7
false
null
t3_1h54wa7
/r/LocalLLaMA/comments/1h54wa7/linux_ai_enthousiasts_you_might_be_slowly/
false
false
self
1
null
RIP finetuner / quanters. Are we going back to torrenting?
173
2024-12-02T20:41:38
https://i.redd.it/xbqjfo1dyh4e1.png
Different_Fix_2217
i.redd.it
1970-01-01T00:00:00
0
{}
1h556iw
false
null
t3_1h556iw
/r/LocalLLaMA/comments/1h556iw/rip_finetuner_quanters_are_we_going_back_to/
false
false
https://b.thumbs.redditm…hw3cTmrKoQYY.jpg
173
{'enabled': True, 'images': [{'id': 'O6gREpx-zyoG-FRVMySp9nbA1hMqiz5dSXMOZTDQQNA', 'resolutions': [{'height': 141, 'url': 'https://preview.redd.it/xbqjfo1dyh4e1.png?width=108&crop=smart&auto=webp&s=ca17574f24abf19efbf6355e26f5df20e397cc70', 'width': 108}, {'height': 282, 'url': 'https://preview.redd.it/xbqjfo1dyh4e1.png?width=216&crop=smart&auto=webp&s=9125b06474d887c8f7bf0bf84c4c04fd19d16a7b', 'width': 216}, {'height': 418, 'url': 'https://preview.redd.it/xbqjfo1dyh4e1.png?width=320&crop=smart&auto=webp&s=e92b3f27cc9634333e3fd790dda7e8bf6e85f62b', 'width': 320}], 'source': {'height': 634, 'url': 'https://preview.redd.it/xbqjfo1dyh4e1.png?auto=webp&s=aaf07096365d1a79366bcbe66712954033070bb3', 'width': 485}, 'variants': {}}]}
eGPU on llama.cpp?
4
Hey, so I think this question has been asked before, but I've never found a guide. For a Thunderbolt eGPU on AMD/Linux x86_64, how would one use the eGPU with llama.cpp alongside other local GPUs?
2024-12-02T21:04:39
https://www.reddit.com/r/LocalLLaMA/comments/1h55qyh/egpu_on_llamacpp/
mayo551
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h55qyh
false
null
t3_1h55qyh
/r/LocalLLaMA/comments/1h55qyh/egpu_on_llamacpp/
false
false
self
4
null
I am noticing something not being taken into account in JP to EN Data sets
2
[removed]
2024-12-02T21:11:31
https://www.reddit.com/r/LocalLLaMA/comments/1h55x4u/i_am_noticing_something_not_being_taken_into/
Oehriehqkbt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h55x4u
false
null
t3_1h55x4u
/r/LocalLLaMA/comments/1h55x4u/i_am_noticing_something_not_being_taken_into/
false
false
self
2
{'enabled': False, 'images': [{'id': 'dROJ8P7F4PhUWw6nsym9HkuS_crcn6Y_40Qk9nUOTrQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=108&crop=smart&auto=webp&s=5e126aab35df6f7a1cabaff2403ce2bf73eb0b25', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=216&crop=smart&auto=webp&s=517c479b1798541828fb521c85dd02943b705586', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=320&crop=smart&auto=webp&s=0e6d57056506f8ffd2547ffed80f93088ae8d060', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=640&crop=smart&auto=webp&s=31827c372f3fa6f36815da461afcffeb4d5990d3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=960&crop=smart&auto=webp&s=553236c4b1663c341c287466062f8a14cd22977d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=1080&crop=smart&auto=webp&s=658cf77a23c3c154327cc84b2021c0469495ac23', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?auto=webp&s=eaf6c5babc69df78263d3a2bff70dad1d779dac8', 'width': 1200}, 'variants': {}}]}
What is your favorite model currently?
90
I've been really digging Supernova Medius 14b lately. It's super speedy on my M4 Pro, and it outperforms the standard Qwen2.5 14b for me. The responses are more accurate and better organized too. I tried it with some coding tasks, and while Qwen2.5 Coder 14b did a bit better with those, Supernova Medius is great for general stuff. For its size, it's pretty impressive. What about you? Is there a model that really stands out to you based on its type and size?
2024-12-02T21:33:01
https://www.reddit.com/r/LocalLLaMA/comments/1h56g5s/what_is_your_favorite_model_currently/
Sky_Linx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h56g5s
false
null
t3_1h56g5s
/r/LocalLLaMA/comments/1h56g5s/what_is_your_favorite_model_currently/
false
false
self
90
null
Locally hosted LLM for data analysis, but data is too BIG
2
Hello there,

I am currently trying to use self-hosted Llama or Mistral to answer questions about some data that I own. The data is either in .csv or .json, and the questions are basically SQL queries in the form of natural language.

However, I notice that I am heavily limited by the size of the data, as I can't pass the serialized data along as a string; it mostly exceeds the allowed token limits, and when I force it, the LLM behaves like a drunk person, starting to ramble about unrelated topics or just flat out giving nonsense responses.

Some of the possible solutions I've seen people talk about are to either:

* store the data elsewhere instead of passing it along with my prompt, and somehow make the LLM access it (see the sketch below), or
* break the data into batches and do multiple uploads to the LLM (no idea how to achieve this).

Can someone give me a hint? Any tips are appreciated. Thank you.
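The first option is the usual pattern for "SQL in natural language" over data that won't fit in context: load the file into a local database, show the model only the schema, and execute the SQL it writes. A minimal sketch, assuming pandas, a hypothetical `data.csv`, and a stub `ask_llm` standing in for the local model call:

```python
import sqlite3
import pandas as pd

def ask_llm(prompt: str) -> str:
    # Stand-in for the local Llama/Mistral call (ollama, llama.cpp, ...).
    # A real implementation would return the model's generated SQL.
    return "SELECT COUNT(*) FROM data;"

df = pd.read_csv("data.csv")  # hypothetical input file
conn = sqlite3.connect(":memory:")
df.to_sql("data", conn, index=False)

# Only the schema goes into the prompt, never the rows themselves,
# so the token count stays constant no matter how big the CSV is.
schema = ", ".join(f"{col} ({dtype})" for col, dtype in zip(df.columns, df.dtypes))
prompt = (
    f"SQLite table `data` has columns: {schema}.\n"
    "Write one SQLite query that answers: how many rows are in the table?\n"
    "Return only the SQL."
)

sql = ask_llm(prompt)
print(conn.execute(sql).fetchall())  # the database, not the LLM, touches the data
```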
2024-12-02T21:48:15
https://www.reddit.com/r/LocalLLaMA/comments/1h56th1/locally_hosted_llm_for_data_analysis_but_data_is/
zkkzkk32312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h56th1
false
null
t3_1h56th1
/r/LocalLLaMA/comments/1h56th1/locally_hosted_llm_for_data_analysis_but_data_is/
false
false
self
2
null
AI Linux enthusiasts running RTX GPUs, your cards can overheat without reporting it
209
Hello LocalLLaMa!

I realized last week that my 3090 was running way too hot, without even being aware of it.

This went on for almost 6 months, because the Nvidia drivers for Linux do not expose the VRAM or junction temperatures, so I couldn't monitor my GPUs properly. Btw, the throttle limit for these components is 105°C, which is way too hot to be healthy.

Looking online, there is a [3-year-old post](https://forums.developer.nvidia.com/t/request-gpu-memory-junction-temperature-via-nvidia-smi-or-nvml-api/168346/145) about this on Nvidia's forums, accumulating over 350 comments and 85k views. Unfortunately, nothing good came out of it. As an answer, someone created [https://github.com/olealgoritme/gddr6](https://github.com/olealgoritme/gddr6), which accesses "undocumented GPU registers via direct PCIe reads" to get VRAM temperatures. Nice.

But even with VRAM temps now under control, the poor GPU still crashed under heavy AI workloads. Perhaps the junction temp was too hot? Well, how could I know? Luckily, someone else forked the previous project and added junction temperature readings: [https://github.com/jjziets/gddr6_temps](https://github.com/jjziets/gddr6_temps). Buuuuut it wasn't compiling, and seemed too complex for the common man.

So last weekend I took inspiration from that repo and made this: [https://github.com/ThomasBaruzier/gddr6-core-junction-vram-temps](https://preview.redd.it/qjx1qoeyai4e1.png?width=368&format=png&auto=webp&s=3358b8a2ed9849b80818b14aab73ff401d7b8232)

It's a little CLI program reading all the temps, so you now know whether your card is cooking or not! Funnily enough, mine was, at around 105-110°C... There is obviously something wrong with my card, and I'll have to take it apart another day, but it's absurd to find that out this way.

---

If you find out your GPU is also overheating, here's a quick tutorial to power-limit it:

    # To find which GPU ID corresponds to which GPU
    nvtop

    # List supported clocks
    nvidia-smi -i "$gpu_id" -q -d SUPPORTED_CLOCKS

    # Configure power limits
    sudo nvidia-smi -i "$gpu_id" --power-limit "$power_limit"

    # Configure GPU clock limits
    sudo nvidia-smi -i "$gpu_id" --lock-gpu-clocks "0,$graphics_clock" --mode=1

    # Configure memory clock limits
    sudo nvidia-smi -i "$gpu_id" --lock-memory-clocks "0,$mem_clock"

Tip: you can remove -i "$gpu_id" to apply the settings to all GPUs.

---

I hope this little story and tool will help some of you here. Stay cool!
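To keep an eye on things while dialing in those limits, the metrics the stock driver does expose (core temperature, power draw, clocks) can be polled in a loop. A minimal Python sketch, assuming you just want a rolling printout; VRAM and junction temps still need the gddr6 tools above:

```python
import subprocess
import time

# Fields the Linux driver does expose; VRAM/junction temps are not among them.
FIELDS = "index,temperature.gpu,power.draw,clocks.sm,clocks.mem"

while True:
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # one CSV row per GPU
    time.sleep(1)
```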
2024-12-02T21:54:06
https://www.reddit.com/r/LocalLLaMA/comments/1h56yko/ai_linux_entousiasts_running_rtx_gpus_your_cards/
TyraVex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h56yko
false
null
t3_1h56yko
/r/LocalLLaMA/comments/1h56yko/ai_linux_entousiasts_running_rtx_gpus_your_cards/
false
false
https://b.thumbs.redditm…5emxGFrUj0HE.jpg
209
{'enabled': False, 'images': [{'id': 'hnZDIk_TY24WLB527CbQAHCEEc09FIgy3quBz_-6dgo', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/ae0Qo8Vt7zK3YN5EJj9DVaScMl5RBOblsHS-0BEDVxs.jpg?width=108&crop=smart&auto=webp&s=3203a37a03ac29dcb77bd0264ffded36cc9eb3e8', 'width': 108}], 'source': {'height': 80, 'url': 'https://external-preview.redd.it/ae0Qo8Vt7zK3YN5EJj9DVaScMl5RBOblsHS-0BEDVxs.jpg?auto=webp&s=119b7279fd124a86be1ec5ae8f58e06b3fca19a8', 'width': 150}, 'variants': {}}]}
Rocking a Mac Studio M2 192GB, is there anything better than Mistral Large / Qwen 2.5 72B these days?
4
I have to process a few hundred documents overnight and have not been messing with local models much in the last few months. Are Mistral Large and Qwen 2.5 still reigning supreme?
2024-12-02T21:58:51
https://www.reddit.com/r/LocalLLaMA/comments/1h572mu/rocking_a_mac_studio_m2_192gb_is_there_anything/
Berberis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h572mu
false
null
t3_1h572mu
/r/LocalLLaMA/comments/1h572mu/rocking_a_mac_studio_m2_192gb_is_there_anything/
false
false
self
4
null
How I leaked the V0 System Prompts (Video Explanation)
33
Here is a short video explanation of how I got to these system prompts and why I decided to share them with the community. I've attached one of the jailbreak prompts that you can use to get these, and I suggest you explore the system yourself and try to draw what conclusions you can from it. Like I said, I have never seen hallucinations of this nature. I have been around the block and done my fair share of model exploration, from the days of GPT-2 up until now. Let me know what y'all think, how you imagine this system works under the hood, and maybe what you would like to see in the future regarding this project.
2024-12-02T22:16:25
https://www.reddit.com/r/LocalLLaMA/comments/1h57hur/how_i_leaked_the_v0_system_prompts_video/
Odd-Environment-7193
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h57hur
false
null
t3_1h57hur
/r/LocalLLaMA/comments/1h57hur/how_i_leaked_the_v0_system_prompts_video/
false
false
self
33
{'enabled': False, 'images': [{'id': 'ZVniVBxKIs7g5pDUt532aH_TzvUqwMJI5rBfyzrNR78', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/iTGSLV7xryfD3Mr1IZOlJdw1Te0oIO-HjmAWr-I3-MY.jpg?width=108&crop=smart&auto=webp&s=401eb72a316804c38e1b2ac9b6e11e52552277ec', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/iTGSLV7xryfD3Mr1IZOlJdw1Te0oIO-HjmAWr-I3-MY.jpg?width=216&crop=smart&auto=webp&s=e453bf7a1bae98aad44ef0e16703d90a6f45567d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/iTGSLV7xryfD3Mr1IZOlJdw1Te0oIO-HjmAWr-I3-MY.jpg?width=320&crop=smart&auto=webp&s=bb005ab6d1b8ac1a7025486f53007d4232e7019a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/iTGSLV7xryfD3Mr1IZOlJdw1Te0oIO-HjmAWr-I3-MY.jpg?auto=webp&s=48f4480e7f34662a5cd96ff3c8eb388934da6643', 'width': 480}, 'variants': {}}]}
Best LLM for natural language to SQL queries?
1
[removed]
2024-12-02T22:49:40
https://www.reddit.com/r/LocalLLaMA/comments/1h589x5/best_llm_for_natural_language_to_sql_queries/
Top-Square3799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h589x5
false
null
t3_1h589x5
/r/LocalLLaMA/comments/1h589x5/best_llm_for_natural_language_to_sql_queries/
false
false
self
1
null
Has anyone built their own layer for orchestrating how LLM should use tools?
2
So at the moment:

1. We send messages to the model and describe which tools are available
2. The model makes a decision about which tools to use
3. We execute them
4. We give the model the results
5. The model produces the final response

However, step 2 is a complete black box, and is also highly dependent on that model's capabilities. Has anyone experimented with building their own logic for tool planning and discovery? Are there existing frameworks for that?
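One pattern for taking step 2 out of the black box is to demand a structured plan from the model and validate it in your own code before executing anything. A minimal sketch of such a layer; the `TOOLS` registry and `ask_llm` helper are hypothetical, not an existing framework:

```python
import json

# Hypothetical registry: tool name -> (description, callable).
TOOLS = {
    "search": ("Search the web for a query string.", lambda q: f"results for {q!r}"),
    "length": ("Count the characters in a string.", lambda s: str(len(s))),
}

def plan_and_run(user_msg: str, ask_llm) -> str:
    catalog = "\n".join(f"- {name}: {desc}" for name, (desc, _) in TOOLS.items())
    # Step 2, but under our control: force a JSON plan we can validate.
    raw = ask_llm(
        f"Tools:\n{catalog}\nUser: {user_msg}\n"
        'Reply ONLY with JSON: {"tool": "<name>", "input": "<string>"}'
    )
    plan = json.loads(raw)
    if plan["tool"] not in TOOLS:                    # our guardrail, not the model's
        raise ValueError(f"unknown tool: {plan['tool']!r}")
    result = TOOLS[plan["tool"]][1](plan["input"])   # step 3: execute
    return ask_llm(f"User: {user_msg}\nTool result: {result}\nFinal answer:")
```

The guardrail is the point: the model only proposes a plan, and your code decides whether to run it.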
2024-12-02T22:52:44
https://www.reddit.com/r/LocalLLaMA/comments/1h58cc3/has_anyone_built_their_own_layer_for/
punkpeye
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h58cc3
false
null
t3_1h58cc3
/r/LocalLLaMA/comments/1h58cc3/has_anyone_built_their_own_layer_for/
false
false
self
2
null
Building the cheapest API for everyone. LTX-Video model supported and completely free!
1
[removed]
2024-12-02T23:20:03
https://www.reddit.com/r/LocalLLaMA/comments/1h58z6q/building_the_cheapest_api_for_everyone_ltxvideo/
Ok_Difference_4483
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h58z6q
false
null
t3_1h58z6q
/r/LocalLLaMA/comments/1h58z6q/building_the_cheapest_api_for_everyone_ltxvideo/
false
false
https://b.thumbs.redditm…RPN42pUsQDpA.jpg
1
{'enabled': False, 'images': [{'id': 'wZYm8nhsA5pD7NplL5-OgBvi3pHqqPix8-HFbNWi13g', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/uZN_zKYB96EDTmuc3PU6Ntya2LXZJkxHmG0NxKbHIAw.jpg?width=108&crop=smart&auto=webp&s=258ed84cf9858c878f961d2ca891c571e975ea99', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/uZN_zKYB96EDTmuc3PU6Ntya2LXZJkxHmG0NxKbHIAw.jpg?width=216&crop=smart&auto=webp&s=3f1978a271aeba766ee9a396652bcab45908b895', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/uZN_zKYB96EDTmuc3PU6Ntya2LXZJkxHmG0NxKbHIAw.jpg?width=320&crop=smart&auto=webp&s=7130b28d2415f00d4ae40c349807cfae57adb908', 'width': 320}], 'source': {'height': 288, 'url': 'https://external-preview.redd.it/uZN_zKYB96EDTmuc3PU6Ntya2LXZJkxHmG0NxKbHIAw.jpg?auto=webp&s=496ac5414823ce50932033827644899f1cb6c60d', 'width': 512}, 'variants': {}}]}
Extractous - Fast Text Extraction for GenAI with Rust + Apache Tika
29
2024-12-02T23:38:46
https://github.com/yobix-ai/extractous
davidmezzetti
github.com
1970-01-01T00:00:00
0
{}
1h59eao
false
null
t3_1h59eao
/r/LocalLLaMA/comments/1h59eao/extractous_fast_text_extraction_for_genai_with/
false
false
https://a.thumbs.redditm…wkIAXV14cGS8.jpg
29
{'enabled': False, 'images': [{'id': 'e4qltneGpO10JM8YAxDMpNVHY-g4tVZc59KRNOKJdGw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2ZZfg34J3XCT-70NiYC5zQGR1rhgaj4zFGVXKLWmSyU.jpg?width=108&crop=smart&auto=webp&s=38526efa5ae0240683f9cbe0c5ea9faae24f43f5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2ZZfg34J3XCT-70NiYC5zQGR1rhgaj4zFGVXKLWmSyU.jpg?width=216&crop=smart&auto=webp&s=53768c530d8d5c74a421c29131b3d8d3a46e58f4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2ZZfg34J3XCT-70NiYC5zQGR1rhgaj4zFGVXKLWmSyU.jpg?width=320&crop=smart&auto=webp&s=e0c045ba58834dbc00d75246eb0b974fd8b556e0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2ZZfg34J3XCT-70NiYC5zQGR1rhgaj4zFGVXKLWmSyU.jpg?width=640&crop=smart&auto=webp&s=9fc36578a3ee0cbcd1e0729a94c5b0f7519287a8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2ZZfg34J3XCT-70NiYC5zQGR1rhgaj4zFGVXKLWmSyU.jpg?width=960&crop=smart&auto=webp&s=2601096261829018374edfb15489c14ad804446c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2ZZfg34J3XCT-70NiYC5zQGR1rhgaj4zFGVXKLWmSyU.jpg?width=1080&crop=smart&auto=webp&s=890bfe10a212ae5e3d676343c526739c937b2873', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2ZZfg34J3XCT-70NiYC5zQGR1rhgaj4zFGVXKLWmSyU.jpg?auto=webp&s=c8be8c1e00001c8ed9c87661fa77bfd567ddb1b5', 'width': 1200}, 'variants': {}}]}
What are the use cases of smaller models (<3B) these days?
10
So far, for most of my personal or work projects, I've been using 32B+ models, but I've never actually used models <3B for something meaningful. Some of them are targeted to run on-device, but I don't really know which tasks these models excel at in those environments. Whenever I try them in LM Studio, they output gibberish or are extremely verbose if not strictly controlled by the sampling params. Does anyone have some use cases for them? I'd like to know some great use cases, since I know nothing about this area.
2024-12-02T23:55:00
https://www.reddit.com/r/LocalLLaMA/comments/1h59r58/what_are_the_use_cases_of_smaller_models_3b_these/
Odd_Tumbleweed574
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h59r58
false
null
t3_1h59r58
/r/LocalLLaMA/comments/1h59r58/what_are_the_use_cases_of_smaller_models_3b_these/
false
false
self
10
null
Llama 70b Multi-step tool implementation
9
Has anyone successfully implemented multi-step tool calling in a model of this size? If you have, I would be curious to hear how you did it. I've got it working in a couple of examples through vigorous prompting, but I'm unsatisfied with the results, as they are inconsistent.
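For reference, the usual alternative to pure prompting is an explicit agent loop: keep executing requested tools and feeding the results back until the model stops asking for tools. A rough sketch, assuming an OpenAI-style message/tool-call format; `chat` and `run_tool` are placeholders for your own inference stack:

```python
def agent_loop(messages, chat, run_tool, max_steps=8):
    """Call the model until it stops requesting tools or the step cap hits."""
    for _ in range(max_steps):              # hard cap guards against infinite loops
        reply = chat(messages)              # your 70B inference call
        calls = reply.get("tool_calls") or []
        if not calls:                       # no tool requested -> final answer
            return reply["content"]
        messages.append(reply)
        for call in calls:                  # execute every requested tool
            messages.append({
                "role": "tool",
                "tool_call_id": call["id"],
                "content": run_tool(call["name"], call["arguments"]),
            })
    raise RuntimeError("no final answer within the step budget")
```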
2024-12-03T00:51:46
https://www.reddit.com/r/LocalLLaMA/comments/1h5azbt/llama_70b_multistep_tool_implementation/
Disastrous_Ad8959
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5azbt
false
null
t3_1h5azbt
/r/LocalLLaMA/comments/1h5azbt/llama_70b_multistep_tool_implementation/
false
false
self
9
null
How can I run more than 5 Google Gemini API keys on a single computer
1
[removed]
2024-12-03T01:03:53
https://www.reddit.com/r/LocalLLaMA/comments/1h5b8og/how_can_i_run_more_than_5_google_gemini_api_keys/
Fluffy-Cold-1727
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5b8og
false
null
t3_1h5b8og
/r/LocalLLaMA/comments/1h5b8og/how_can_i_run_more_than_5_google_gemini_api_keys/
false
false
self
1
null
Llama.cpp Vulkan on AMD BC-250
1
[removed]
2024-12-03T01:20:11
https://www.reddit.com/r/LocalLLaMA/comments/1h5bl65/llamacpp_vulkan_on_amd_bc250/
MachineZer0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5bl65
false
null
t3_1h5bl65
/r/LocalLLaMA/comments/1h5bl65/llamacpp_vulkan_on_amd_bc250/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
Is there anything that you can do with 48GB that you can't do with 24GB?
1
[removed]
2024-12-03T01:40:43
https://www.reddit.com/r/LocalLLaMA/comments/1h5c08d/is_there_anything_that_you_can_do_with_48gb_that/
7evenate9ine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5c08d
false
null
t3_1h5c08d
/r/LocalLLaMA/comments/1h5c08d/is_there_anything_that_you_can_do_with_48gb_that/
false
false
self
1
null
Great for AMD GPUs
98
This is yuge. Believe me.
2024-12-03T02:46:19
https://embeddedllm.com/blog/vllm-now-supports-running-gguf-on-amd-radeon-gpu
badabimbadabum2
embeddedllm.com
1970-01-01T00:00:00
0
{}
1h5dbek
false
null
t3_1h5dbek
/r/LocalLLaMA/comments/1h5dbek/great_for_amd_gpus/
false
false
https://a.thumbs.redditm…6j17KDjlKd68.jpg
98
{'enabled': False, 'images': [{'id': 'yI7hvK0q8OkFyANHxWNLT4KJpno1apRxdEcPmZxQ6vE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/usHNFQFap_sWW0pp7gnh3u0U4qsVCeerbupPg52bvMc.jpg?width=108&crop=smart&auto=webp&s=3d270a3bc587e694da6f910c806bd87bdfbf9ee8', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/usHNFQFap_sWW0pp7gnh3u0U4qsVCeerbupPg52bvMc.jpg?width=216&crop=smart&auto=webp&s=7c7ff026979d86a7651e96d03db0f0954983788f', 'width': 216}, {'height': 174, 'url': 'https://external-preview.redd.it/usHNFQFap_sWW0pp7gnh3u0U4qsVCeerbupPg52bvMc.jpg?width=320&crop=smart&auto=webp&s=39394e3c31f378f976d05248ad603eb07a0aaf36', 'width': 320}], 'source': {'height': 192, 'url': 'https://external-preview.redd.it/usHNFQFap_sWW0pp7gnh3u0U4qsVCeerbupPg52bvMc.jpg?auto=webp&s=07d3f95423b993ce77de2cfcbcf5055aa23d63e3', 'width': 352}, 'variants': {}}]}
Tricks for DPO tuning a Code LLM, Part 1 - Logit Curriculum Learning
15
2024-12-03T03:05:54
https://brianfitzgerald.xyz/dpo-pruning/
sonderemawe
brianfitzgerald.xyz
1970-01-01T00:00:00
0
{}
1h5dp07
false
null
t3_1h5dp07
/r/LocalLLaMA/comments/1h5dp07/tricks_for_dpo_tuning_a_code_llm_part_1_logit/
false
false
default
15
null
Would you rather fight a 70B model or 70 1B models?
234
Let's assume these 1B models are able to reason with each other. Which one are you taking on?
2024-12-03T03:28:02
https://www.reddit.com/r/LocalLLaMA/comments/1h5e47c/would_you_rather_fight_a_70b_model_or_70_1b_models/
LewisTheScot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5e47c
false
null
t3_1h5e47c
/r/LocalLLaMA/comments/1h5e47c/would_you_rather_fight_a_70b_model_or_70_1b_models/
false
false
self
234
null
this sub lately
1
2024-12-03T03:39:45
https://i.redd.it/rwdil8px0k4e1.png
PortlandPoly
i.redd.it
1970-01-01T00:00:00
0
{}
1h5ec2d
false
null
t3_1h5ec2d
/r/LocalLLaMA/comments/1h5ec2d/this_sub_lately/
false
false
https://b.thumbs.redditm…UNwMZpYyouMc.jpg
1
{'enabled': True, 'images': [{'id': '9M4gbSPjJCQ9Ki7bxnBdky0lvVijvqMepmhfe86olhI', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/rwdil8px0k4e1.png?width=108&crop=smart&auto=webp&s=f81d84e3bec1bce0f5ef6c7e12783e60ac9017c5', 'width': 108}, {'height': 219, 'url': 'https://preview.redd.it/rwdil8px0k4e1.png?width=216&crop=smart&auto=webp&s=b2e4f515b00b948cc05f65b20a3cffd49b09551b', 'width': 216}, {'height': 324, 'url': 'https://preview.redd.it/rwdil8px0k4e1.png?width=320&crop=smart&auto=webp&s=2998f5523826d190152cf03eb5ba7c688c547291', 'width': 320}, {'height': 649, 'url': 'https://preview.redd.it/rwdil8px0k4e1.png?width=640&crop=smart&auto=webp&s=317804d1e38ea0c3242433d107a22d87e181eec0', 'width': 640}, {'height': 974, 'url': 'https://preview.redd.it/rwdil8px0k4e1.png?width=960&crop=smart&auto=webp&s=13f8011add665df5f974ed168f805d0e575f19c8', 'width': 960}, {'height': 1096, 'url': 'https://preview.redd.it/rwdil8px0k4e1.png?width=1080&crop=smart&auto=webp&s=5654554e0b3577d2102a02a1db31dfcdf8c0acc6', 'width': 1080}], 'source': {'height': 1141, 'url': 'https://preview.redd.it/rwdil8px0k4e1.png?auto=webp&s=96d836c82451f4c34a279234b8ae85e9a01e6c47', 'width': 1124}, 'variants': {}}]}
MCP 🤝 OpenAI Bridge: Run MCP Tools with Any OpenAI-Compatible LLM
14
Hey r/LocalLLaMA fellas, I created an MCP implementation that bridges the gap between MCP servers (and tools) and OpenAI's function calling interface. I needed to use MCP tools with OpenAI's API and local models, and couldn't find an existing solution that worked. While it's primarily designed for use with the OpenAI API, you can also use it with Ollama running locally, LM Studio, or any endpoint that implements the OpenAI API spec. I've put the code up on GitHub [here](https://github.com/bartolli/mcp-llm-bridge). Let me know what you think or if you have any questions!
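For anyone who wants to try the local-endpoint route mentioned above: any server that implements the OpenAI spec can be targeted by overriding the client's base URL. A small sketch; the Ollama URL and model name are just one example of a compatible endpoint:

```python
from openai import OpenAI  # pip install openai

# Point the standard client at a local OpenAI-compatible server
# (here Ollama's default endpoint; LM Studio etc. work the same way).
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="llama3.1",  # whatever model your local server exposes
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(resp.choices[0].message.content)
```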
2024-12-03T03:42:02
https://www.reddit.com/r/LocalLLaMA/comments/1h5edl7/mcp_openai_bridge_run_mcp_tools_with_any/
Plenty_Seesaw8878
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5edl7
false
null
t3_1h5edl7
/r/LocalLLaMA/comments/1h5edl7/mcp_openai_bridge_run_mcp_tools_with_any/
false
false
self
14
{'enabled': False, 'images': [{'id': 'iPXmFlc8EK4kUP3A4Ci2yEa5-l1QdIO5chXWry0taog', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rj2GsEs0W7LA51cOuv6oB_6R73BJfQskqHzphoOgy5A.jpg?width=108&crop=smart&auto=webp&s=c5cee3a40a35c30209f822c74f756235206a2f13', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rj2GsEs0W7LA51cOuv6oB_6R73BJfQskqHzphoOgy5A.jpg?width=216&crop=smart&auto=webp&s=aaa1bcaf0455616bf83436e04f0ac5f192751f59', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rj2GsEs0W7LA51cOuv6oB_6R73BJfQskqHzphoOgy5A.jpg?width=320&crop=smart&auto=webp&s=626737493fd5906ea675fc21234f718d50ad5bc9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rj2GsEs0W7LA51cOuv6oB_6R73BJfQskqHzphoOgy5A.jpg?width=640&crop=smart&auto=webp&s=0253403aaddc575434d004c18572a5109faba70c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rj2GsEs0W7LA51cOuv6oB_6R73BJfQskqHzphoOgy5A.jpg?width=960&crop=smart&auto=webp&s=1754ecb9282475915b9397492e429eae124b69f3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rj2GsEs0W7LA51cOuv6oB_6R73BJfQskqHzphoOgy5A.jpg?width=1080&crop=smart&auto=webp&s=ee623834becdf3871d6979f5335a2780dbd523d4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rj2GsEs0W7LA51cOuv6oB_6R73BJfQskqHzphoOgy5A.jpg?auto=webp&s=fb00cf1c93ecf5380ebeabacda6d1162b50fd123', 'width': 1200}, 'variants': {}}]}
Writing Assistant & Document Access
1
[removed]
2024-12-03T03:57:25
https://www.reddit.com/r/LocalLLaMA/comments/1h5enq8/writing_assistant_document_access/
papacholo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5enq8
false
null
t3_1h5enq8
/r/LocalLLaMA/comments/1h5enq8/writing_assistant_document_access/
false
false
self
1
null
Llama is a
1
[View Poll](https://www.reddit.com/poll/1h5erbo)
2024-12-03T04:02:36
https://www.reddit.com/r/LocalLLaMA/comments/1h5erbo/llama_is_a/
Dismal_Spread5596
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5erbo
false
null
t3_1h5erbo
/r/LocalLLaMA/comments/1h5erbo/llama_is_a/
false
false
self
1
null
Vercel SDK – how do I instruct which tool to use?
1
Basically this question: https://github.com/vercel/ai/issues/3944
2024-12-03T04:03:09
https://www.reddit.com/r/LocalLLaMA/comments/1h5erpi/vercel_sdk_how_do_i_instruct_which_tool_to_use/
punkpeye
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5erpi
false
null
t3_1h5erpi
/r/LocalLLaMA/comments/1h5erpi/vercel_sdk_how_do_i_instruct_which_tool_to_use/
false
false
self
1
{'enabled': False, 'images': [{'id': '_XlC-u0Dz1BMpmnNSQPwtgANajTtELjMIa1xeKxSZpo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f-S2k7c_h1lkm6WILPyhGsGIQSIHNrxu7WnUEec-_M4.jpg?width=108&crop=smart&auto=webp&s=575d500a8e78e4ec693fdafb616e6555a7252ea9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/f-S2k7c_h1lkm6WILPyhGsGIQSIHNrxu7WnUEec-_M4.jpg?width=216&crop=smart&auto=webp&s=1fc63475958b6c5a42f93c840afc1332920957b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/f-S2k7c_h1lkm6WILPyhGsGIQSIHNrxu7WnUEec-_M4.jpg?width=320&crop=smart&auto=webp&s=b9f2bae4f5398b335c11329ffa80d38e8368bdc5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/f-S2k7c_h1lkm6WILPyhGsGIQSIHNrxu7WnUEec-_M4.jpg?width=640&crop=smart&auto=webp&s=aa4f795f1dc908d70eb12ebd5cdc375043ecdbc5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/f-S2k7c_h1lkm6WILPyhGsGIQSIHNrxu7WnUEec-_M4.jpg?width=960&crop=smart&auto=webp&s=f0e9bf0923f37df68c06457d6689bf63bb6500cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/f-S2k7c_h1lkm6WILPyhGsGIQSIHNrxu7WnUEec-_M4.jpg?width=1080&crop=smart&auto=webp&s=a1ca7bffa63734636f28f2e6aa07ecb8fc741f99', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/f-S2k7c_h1lkm6WILPyhGsGIQSIHNrxu7WnUEec-_M4.jpg?auto=webp&s=14a76fc7423c20445455407dcabf14e3e5099a3e', 'width': 1200}, 'variants': {}}]}
LM Studio running on NPU, finally! (Qualcomm Snapdragon Copilot+ PC)
170
2024-12-03T04:13:08
https://v.redd.it/sfvpeevj6k4e1
geringonco
/r/LocalLLaMA/comments/1h5eyb8/lm_studio_running_on_npu_finally_qualcomm/
1970-01-01T00:00:00
0
{}
1h5eyb8
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/sfvpeevj6k4e1/DASHPlaylist.mpd?a=1735920821%2CMTEyY2QwYTYzODk1NzA0ZGIwMzY1MzFjYjg3NTU4NzNhMWRhNWM5MDU3NDRlZDhiNTVjNTRjODVkZjk1ZDQ4Mw%3D%3D&v=1&f=sd', 'duration': 151, 'fallback_url': 'https://v.redd.it/sfvpeevj6k4e1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 854, 'hls_url': 'https://v.redd.it/sfvpeevj6k4e1/HLSPlaylist.m3u8?a=1735920821%2CZGRiYTk5ZDM3Y2VhYTJmYzM0MGIzZjAyZGUzN2I3YWFkMzE3MWEzYzJkZGE2MGQ0NDVlNTI4ZDY3ZWQ1MTJmMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/sfvpeevj6k4e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 480}}
t3_1h5eyb8
/r/LocalLLaMA/comments/1h5eyb8/lm_studio_running_on_npu_finally_qualcomm/
false
false
https://external-preview…b84daca0c7064c5b
170
{'enabled': False, 'images': [{'id': 'bmRyZ2Fkdmo2azRlMckDta8F54n8eWGrX0MHmWltl3AHKcni1zxTkvOcppMF', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/bmRyZ2Fkdmo2azRlMckDta8F54n8eWGrX0MHmWltl3AHKcni1zxTkvOcppMF.png?width=108&crop=smart&format=pjpg&auto=webp&s=e692c6010193a9aa8ce07a657efb752619f5c287', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/bmRyZ2Fkdmo2azRlMckDta8F54n8eWGrX0MHmWltl3AHKcni1zxTkvOcppMF.png?width=216&crop=smart&format=pjpg&auto=webp&s=d528c4ca9eae423d56fdb87ed38f0435d5797e38', 'width': 216}, {'height': 569, 'url': 'https://external-preview.redd.it/bmRyZ2Fkdmo2azRlMckDta8F54n8eWGrX0MHmWltl3AHKcni1zxTkvOcppMF.png?width=320&crop=smart&format=pjpg&auto=webp&s=118cf7eb865ff97e6ed681aeda8fb0a06937c135', 'width': 320}, {'height': 1138, 'url': 'https://external-preview.redd.it/bmRyZ2Fkdmo2azRlMckDta8F54n8eWGrX0MHmWltl3AHKcni1zxTkvOcppMF.png?width=640&crop=smart&format=pjpg&auto=webp&s=f06bba122d70bd83d3301c805026a0a5c0cc20ad', 'width': 640}], 'source': {'height': 1138, 'url': 'https://external-preview.redd.it/bmRyZ2Fkdmo2azRlMckDta8F54n8eWGrX0MHmWltl3AHKcni1zxTkvOcppMF.png?format=pjpg&auto=webp&s=cc0986a8091c29077c2951bc84593e591e6b83fe', 'width': 640}, 'variants': {}}]}
Prompt Caching in OSS models
1
[removed]
2024-12-03T04:44:45
https://www.reddit.com/r/LocalLLaMA/comments/1h5fhxe/prompt_caching_in_oss_models/
Winter-Seesaw6919
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5fhxe
false
null
t3_1h5fhxe
/r/LocalLLaMA/comments/1h5fhxe/prompt_caching_in_oss_models/
false
false
self
1
null
I built a simple Character AI-like UI after my previous post asking for recommendations
26
Hey everyone! A few weeks ago, I made a post ([Looking for an open-source Character AI-like UI for deploying a fine-tuned RP model](https://www.reddit.com/r/LocalLLaMA/comments/1gt4x3l/looking_for_an_opensource_character_ailike_ui_for/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)) asking for an open-source Character AI-like UI for my fine-tuned RP model. Since I couldn't find exactly what I needed, I decided to build one myself with Claude's help!

## Features

- 💬 Continuous chat with history
- 🔄 Retry/regenerate messages while keeping chat history
- 📝 Create multiple chat sessions
- 🤖 Compatible with all OpenAI API spec endpoints
- 👤 Character/role editing
- ✏️ Edit/delete messages (both assistant & user)
- 💾 Import/export configurations
- 📱 Mobile responsive

## Tech Stack

- Vue 3 + TypeScript
- Element Plus
- Yarn

## Why I Built This

After my previous post, I realized most existing solutions were either too complex or missing key features I wanted. I aimed to create something simple yet functional that others could easily modify and use.

## Try It Out

The project is open source and available on GitHub: [mirau-chat-ui](https://github.com/woshixiaobai2019/mirau-chat-ui)

## What's Next

I'm planning to open-source my fine-tuned RP model soon! (An o1-like RP model.) It's been performing really well in testing, and I think it would be great to share it with the community. Stay tuned for updates on that.

The model combined with this UI should provide a complete solution for anyone looking to set up their own RP chat system.

Feel free to try out the UI and let me know what you think! PRs and suggestions are welcome.
2024-12-03T06:02:27
https://www.reddit.com/r/LocalLLaMA/comments/1h5gtfj/i_built_a_simple_character_ailike_ui_after_my/
EliaukMouse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5gtfj
false
null
t3_1h5gtfj
/r/LocalLLaMA/comments/1h5gtfj/i_built_a_simple_character_ailike_ui_after_my/
false
false
self
26
{'enabled': False, 'images': [{'id': 'PJOmbrjautjsuFyuoj8ugHojfWuSpJZGwHbeOPCOMXE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FAe9KqqZyxIj7KAzKQO7N5eloHaGMbXkyYf5_QFO5Ok.jpg?width=108&crop=smart&auto=webp&s=cfdd866214580db0339d75e00dd40c1e32addeed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FAe9KqqZyxIj7KAzKQO7N5eloHaGMbXkyYf5_QFO5Ok.jpg?width=216&crop=smart&auto=webp&s=05368623e725e2505bd7b6599c0bc316ab38fcdd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FAe9KqqZyxIj7KAzKQO7N5eloHaGMbXkyYf5_QFO5Ok.jpg?width=320&crop=smart&auto=webp&s=8d409e82a9540dfcbb99e49b3629c565384a5bea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FAe9KqqZyxIj7KAzKQO7N5eloHaGMbXkyYf5_QFO5Ok.jpg?width=640&crop=smart&auto=webp&s=0733aa9d3ad529947a915bd23636b54d7cc9f4c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FAe9KqqZyxIj7KAzKQO7N5eloHaGMbXkyYf5_QFO5Ok.jpg?width=960&crop=smart&auto=webp&s=4623cd896e0e9fb1f4d865d03e9bb27d268a8cdc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FAe9KqqZyxIj7KAzKQO7N5eloHaGMbXkyYf5_QFO5Ok.jpg?width=1080&crop=smart&auto=webp&s=13775114345f442eb2dc616247855b50bcb3986c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FAe9KqqZyxIj7KAzKQO7N5eloHaGMbXkyYf5_QFO5Ok.jpg?auto=webp&s=62afb39cd9490681e3f77e3c7a3b61590e75ab14', 'width': 1200}, 'variants': {}}]}
Can I change the llama.cpp version used by LM Studio myself?
9
LM Studio has its moods: it updates when it wants to, which may not coincide with when I want it to update. I was thinking, can the llama.cpp version used by LM Studio be replaced with a newer version manually, without help from its developers?
2024-12-03T06:20:53
https://www.reddit.com/r/LocalLLaMA/comments/1h5h3lp/can_i_change_the_llamacpp_version_used_by_lm/
ab2377
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5h3lp
false
null
t3_1h5h3lp
/r/LocalLLaMA/comments/1h5h3lp/can_i_change_the_llamacpp_version_used_by_lm/
false
false
self
9
null
What can I do with 8xA800 cards?
1
[removed]
2024-12-03T06:25:17
https://www.reddit.com/r/LocalLLaMA/comments/1h5h5xu/what_can_i_do_with_8xa800_cards/
VariationStreet2332
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5h5xu
false
null
t3_1h5h5xu
/r/LocalLLaMA/comments/1h5h5xu/what_can_i_do_with_8xa800_cards/
false
false
self
1
null
What can I do with 8xA800 cards?
1
[removed]
2024-12-03T06:28:05
https://www.reddit.com/r/LocalLLaMA/comments/1h5h7cw/what_can_i_do_with_8xa800_cards/
CrazyShipTed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5h7cw
false
null
t3_1h5h7cw
/r/LocalLLaMA/comments/1h5h7cw/what_can_i_do_with_8xa800_cards/
false
false
self
1
null
What models are good for writing in the style of a certain author?
1
I want a model that I can feed transcripts of content I want to imitate, to inspire the stories I write. Something like: "In the style of 'Joe Pera Talks with You' and 'How To with John Wilson', help me write a cohesive story that aims to break down and understand the theme of family traditions, with the accompanying B-roll of a man getting a haircut, a dog eating from the dinner table, and grandma cutting off the tops of chip bags." What's the best model I can download to essentially train a LoRA on their transcripts and craft stories that replicate their tone and humor style?
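On the LoRA part of the question: with a base model of your choice, attaching trainable LoRA adapters is only a few lines with PEFT. A minimal sketch; the base model id and hyperparameters below are illustrative, not a recommendation:

```python
from peft import LoraConfig, get_peft_model  # pip install peft transformers
from transformers import AutoModelForCausalLM

# Any causal LM works as the base; this id is just an example.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the small adapter weights train
```

You would then fine-tune this wrapped model on your transcript dataset with any standard training loop or trainer.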
2024-12-03T06:31:08
https://www.reddit.com/r/LocalLLaMA/comments/1h5h94c/what_models_are_good_for_writing_in_the_style_of/
SkyMartinezReddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5h94c
false
null
t3_1h5h94c
/r/LocalLLaMA/comments/1h5h94c/what_models_are_good_for_writing_in_the_style_of/
false
false
self
1
null
What can I do with multiple cards?
1
[removed]
2024-12-03T06:31:30
https://www.reddit.com/r/LocalLLaMA/comments/1h5h9bw/what_can_i_do_with_multiple_cards/
CrazyShipTed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5h9bw
false
null
t3_1h5h9bw
/r/LocalLLaMA/comments/1h5h9bw/what_can_i_do_with_multiple_cards/
false
false
self
1
null
Don't want to waste a 640GB server.
1
[removed]
2024-12-03T06:35:22
https://www.reddit.com/r/LocalLLaMA/comments/1h5hbg2/dont_want_to_wast_a_640gb_server/
CrazyShipTed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5hbg2
false
null
t3_1h5hbg2
/r/LocalLLaMA/comments/1h5hbg2/dont_want_to_wast_a_640gb_server/
false
false
self
1
null
Best way to build code summarizer app
1
[removed]
2024-12-03T07:13:00
https://www.reddit.com/r/LocalLLaMA/comments/1h5huxw/best_way_to_build_code_summarizer_app/
RedOblivion01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5huxw
false
null
t3_1h5huxw
/r/LocalLLaMA/comments/1h5huxw/best_way_to_build_code_summarizer_app/
false
false
self
1
null
I am noticing something not being taken into account in JP to EN Data sets
1
[removed]
2024-12-03T07:35:44
https://www.reddit.com/r/LocalLLaMA/comments/1h5i65c/i_am_noticing_something_not_being_taken_into/
GTurkistane
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5i65c
false
null
t3_1h5i65c
/r/LocalLLaMA/comments/1h5i65c/i_am_noticing_something_not_being_taken_into/
false
false
self
1
{'enabled': False, 'images': [{'id': 'dROJ8P7F4PhUWw6nsym9HkuS_crcn6Y_40Qk9nUOTrQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=108&crop=smart&auto=webp&s=5e126aab35df6f7a1cabaff2403ce2bf73eb0b25', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=216&crop=smart&auto=webp&s=517c479b1798541828fb521c85dd02943b705586', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=320&crop=smart&auto=webp&s=0e6d57056506f8ffd2547ffed80f93088ae8d060', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=640&crop=smart&auto=webp&s=31827c372f3fa6f36815da461afcffeb4d5990d3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=960&crop=smart&auto=webp&s=553236c4b1663c341c287466062f8a14cd22977d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?width=1080&crop=smart&auto=webp&s=658cf77a23c3c154327cc84b2021c0469495ac23', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/F0KXnjS-HXcQJqrn4h43l34xVTEI9nFbfhT0VpuMw2M.jpg?auto=webp&s=eaf6c5babc69df78263d3a2bff70dad1d779dac8', 'width': 1200}, 'variants': {}}]}
Private Local LLM RAG (Advanced Pipelines)
1
[removed]
2024-12-03T07:56:06
https://www.reddit.com/r/LocalLLaMA/comments/1h5ig0x/private_local_llm_rag_advanced_pipelines/
akhilpanja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5ig0x
false
null
t3_1h5ig0x
/r/LocalLLaMA/comments/1h5ig0x/private_local_llm_rag_advanced_pipelines/
false
false
self
1
null
This is why uncensored models are important
0
https://preview.redd.it/…eal wise friend.
2024-12-03T07:58:53
https://www.reddit.com/r/LocalLLaMA/comments/1h5ihav/this_is_why_uncensored_model_is_important/
Internet--Traveller
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5ihav
false
null
t3_1h5ihav
/r/LocalLLaMA/comments/1h5ihav/this_is_why_uncensored_model_is_important/
false
false
https://b.thumbs.redditm…93cFbgf43GKU.jpg
0
null
best tools for local translation (English ==> German) on a 3090
1
[removed]
2024-12-03T08:07:25
https://www.reddit.com/r/LocalLLaMA/comments/1h5ilkk/best_tools_for_local_translation_english_german/
llamahunter141
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5ilkk
false
null
t3_1h5ilkk
/r/LocalLLaMA/comments/1h5ilkk/best_tools_for_local_translation_english_german/
false
false
self
1
null
Assistance required for running LLMs locally
1
[removed]
2024-12-03T08:07:28
https://www.reddit.com/r/LocalLLaMA/comments/1h5illd/assistance_required_for_running_llms_locally/
iamnotdeadnuts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5illd
false
null
t3_1h5illd
/r/LocalLLaMA/comments/1h5illd/assistance_required_for_running_llms_locally/
false
false
self
1
null
I am noticing something not being taken into account in JP to EN Data sets
1
[removed]
2024-12-03T08:17:33
https://www.reddit.com/r/LocalLLaMA/comments/1h5iqiu/i_am_noticing_something_not_being_taken_into/
Oehriehqkbt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5iqiu
false
null
t3_1h5iqiu
/r/LocalLLaMA/comments/1h5iqiu/i_am_noticing_something_not_being_taken_into/
false
false
self
1
null
Ready-to-go open-source RAG implementations
1
[removed]
2024-12-03T08:19:54
https://www.reddit.com/r/LocalLLaMA/comments/1h5irmp/readytogo_opensource_rag_implementations/
the_little_alex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5irmp
false
null
t3_1h5irmp
/r/LocalLLaMA/comments/1h5irmp/readytogo_opensource_rag_implementations/
false
false
self
1
null
Ready-to-go open-source Retrieval-Augmented Generation implementations
1
[removed]
2024-12-03T08:24:19
https://www.reddit.com/r/LocalLLaMA/comments/1h5itsf/readytogo_opensource_retrievalaugmented/
the_little_alex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5itsf
false
null
t3_1h5itsf
/r/LocalLLaMA/comments/1h5itsf/readytogo_opensource_retrievalaugmented/
false
false
self
1
null
Hugging Face is doing a free and open course on fine tuning local LLMs!!
1061
You will learn how to fine-tune, align, and use LLMs locally for your own use case. This is a hands-on course designed to help you align language models for your unique needs. It’s beginner-friendly, with minimal requirements: • Runs on most local machines • Minimal GPU requirements • No paid services needed The course is based on the SmolLM2 series of models, but the skills you gain can be applied to larger models or other small language models. Perfect for getting started with model alignment without needing a supercomputer! 🚀
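To give a flavour of what such alignment code looks like, here is a minimal supervised fine-tuning sketch with TRL on the smallest SmolLM2; the dataset and hyperparameters are illustrative choices, not necessarily the course's:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer  # pip install trl datasets

# Tiny chat dataset + the smallest SmolLM2, so it runs on modest hardware.
dataset = load_dataset("HuggingFaceTB/smoltalk", "everyday-conversations", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # loaded by name via transformers
    train_dataset=dataset,
    args=SFTConfig(output_dir="smollm2-sft", max_steps=100),
)
trainer.train()
```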
2024-12-03T09:38:26
https://www.reddit.com/r/LocalLLaMA/comments/1h5js86/hugging_face_is_doing_a_free_and_open_course_on/
bburtenshaw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5js86
false
null
t3_1h5js86
/r/LocalLLaMA/comments/1h5js86/hugging_face_is_doing_a_free_and_open_course_on/
false
false
self
1061
{'enabled': False, 'images': [{'id': 'bqd3ulNb9tQv54vUmKW_1xKe9sMownzDzCiJtQ0GemA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZgA7j-VJH_Zx8o-sQdG9StuoI15B1zqEDfswxmsJNxw.jpg?width=108&crop=smart&auto=webp&s=ca83cb8a3f43aa0e331627fa9c4e2f07a9b17f34', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZgA7j-VJH_Zx8o-sQdG9StuoI15B1zqEDfswxmsJNxw.jpg?width=216&crop=smart&auto=webp&s=269f7753595dcbadfb83f84b8ac1cbee9105dcf4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZgA7j-VJH_Zx8o-sQdG9StuoI15B1zqEDfswxmsJNxw.jpg?width=320&crop=smart&auto=webp&s=94caf26051a7cce7816d7ecfede2dd264c7046ab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZgA7j-VJH_Zx8o-sQdG9StuoI15B1zqEDfswxmsJNxw.jpg?width=640&crop=smart&auto=webp&s=e2491d06ce87351d7291b2d68d37a9e33de13e01', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZgA7j-VJH_Zx8o-sQdG9StuoI15B1zqEDfswxmsJNxw.jpg?width=960&crop=smart&auto=webp&s=e638337e125eeb3ef8f88ab3f57ed74c9366912b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZgA7j-VJH_Zx8o-sQdG9StuoI15B1zqEDfswxmsJNxw.jpg?width=1080&crop=smart&auto=webp&s=08c9094e4bd392aee1bf4919bbb665ac77f8e89d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZgA7j-VJH_Zx8o-sQdG9StuoI15B1zqEDfswxmsJNxw.jpg?auto=webp&s=3335aec872f697e24d1730621e86f70238ac78f8', 'width': 1200}, 'variants': {}}]}
Inference for Embedding & Reranking Models on AMD
4
Short blog post on torch+ROCm usage to run models on Infinity, using a local Docker setup. Nobody asked for it, but here it is: *running models on AMD using a prebuilt Docker image*. TIL that only 1 in every 100 users is on AMD, and a majority of deployments are on RTX 30 and RTX 40 cards. It is fairly unknown to most people that AMD has decent PyTorch support for the MI250/MI300X series. The inference results are on par with those of the H100. [https://huggingface.co/blog/michaelfeil/infinity-amd](https://huggingface.co/blog/michaelfeil/infinity-amd)
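Once an Infinity server is running (on AMD or anything else), it exposes an OpenAI-style `/embeddings` route, so querying it is a plain HTTP POST. A small sketch; the port and model name are assumptions matching Infinity's defaults rather than anything AMD-specific:

```python
import requests

# Assumes an Infinity server already running locally on its default port,
# e.g. started with: infinity_emb v2 --model-id BAAI/bge-small-en-v1.5
resp = requests.post(
    "http://localhost:7997/embeddings",
    json={"model": "BAAI/bge-small-en-v1.5", "input": ["hello world"]},
    timeout=30,
)
vector = resp.json()["data"][0]["embedding"]
print(len(vector))  # embedding dimension, e.g. 384 for bge-small
```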
2024-12-03T09:45:57
https://www.reddit.com/r/LocalLLaMA/comments/1h5jvpx/inference_for_embedding_reranking_models_on_amd/
OrganicMesh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5jvpx
false
null
t3_1h5jvpx
/r/LocalLLaMA/comments/1h5jvpx/inference_for_embedding_reranking_models_on_amd/
false
false
self
4
{'enabled': False, 'images': [{'id': 'x3a6ROR9KaRhVIyCZcNmZAf33xtuFS36IMfFr6NIbT8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1kRMQNvIN88Ud1BPq1eOao4U52-1TrY5VT-nX7VxzoU.jpg?width=108&crop=smart&auto=webp&s=215e274b77f8e1f4f9474f112a3d5aaf80039374', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1kRMQNvIN88Ud1BPq1eOao4U52-1TrY5VT-nX7VxzoU.jpg?width=216&crop=smart&auto=webp&s=26d35ecb5e8857146016c6de0b50fe69fbaac03a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1kRMQNvIN88Ud1BPq1eOao4U52-1TrY5VT-nX7VxzoU.jpg?width=320&crop=smart&auto=webp&s=b4a69d8f794587f2aaf68e5565ea85efcb7b838e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1kRMQNvIN88Ud1BPq1eOao4U52-1TrY5VT-nX7VxzoU.jpg?width=640&crop=smart&auto=webp&s=674cdf072722344246de38fdbb5b8ea11098891c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1kRMQNvIN88Ud1BPq1eOao4U52-1TrY5VT-nX7VxzoU.jpg?width=960&crop=smart&auto=webp&s=7f5f61c6354f44340bf4d13da7940740970fb9f9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1kRMQNvIN88Ud1BPq1eOao4U52-1TrY5VT-nX7VxzoU.jpg?width=1080&crop=smart&auto=webp&s=7fa3db6c421799afeacc88813fbd88263abe8a9b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1kRMQNvIN88Ud1BPq1eOao4U52-1TrY5VT-nX7VxzoU.jpg?auto=webp&s=c01dd904b19012929729da51dcca0af7770c37fd', 'width': 1200}, 'variants': {}}]}
Structured data chunking for RAG
1
[removed]
2024-12-03T10:18:18
https://www.reddit.com/r/LocalLLaMA/comments/1h5kbir/structured_data_chunking_for_rag/
InternationalText292
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5kbir
false
null
t3_1h5kbir
/r/LocalLLaMA/comments/1h5kbir/structured_data_chunking_for_rag/
false
false
self
1
null
Generating prompts with uncensored LLM
1
[removed]
2024-12-03T10:27:50
https://www.reddit.com/r/LocalLLaMA/comments/1h5kg76/generating_prompts_with_uncensored_llm/
aiwtl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5kg76
false
null
t3_1h5kg76
/r/LocalLLaMA/comments/1h5kg76/generating_prompts_with_uncensored_llm/
false
false
self
1
null
Tencent releases Hunyuan-video, outperforms closed-source models like Gen3, Luma
2
[removed]
2024-12-03T10:34:48
https://www.reddit.com/r/LocalLLaMA/comments/1h5kjmy/tencent_releases_hunyuanvideo_outperforms/
mehul_gupta1997
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5kjmy
false
null
t3_1h5kjmy
/r/LocalLLaMA/comments/1h5kjmy/tencent_releases_hunyuanvideo_outperforms/
false
false
self
2
null
Best local tool for querying a folder of documents?
4
I have used "chat with RTX" for this. I'm wondering what other tools are available for this.
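For context when comparing tools: the retrieval core of most of these apps is small enough to sketch yourself, then hand the top hits to whatever local model you use. A minimal example with sentence-transformers; the folder name, one-chunk-per-file splitting, and embedding model are simplistic placeholders:

```python
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedder

# Naive indexing: one chunk per .txt file in the folder.
paths = sorted(Path("my_docs").glob("*.txt"))
texts = [p.read_text(errors="ignore") for p in paths]
doc_emb = model.encode(texts, convert_to_tensor=True)

def top_docs(question: str, k: int = 3):
    q_emb = model.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, doc_emb)[0]     # cosine similarity per doc
    best = scores.topk(min(k, len(paths)))
    return [(str(paths[i]), float(scores[i])) for i in best.indices]

# Feed the returned files to whatever local chat model you prefer.
print(top_docs("What does the contract say about termination?"))
```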
2024-12-03T10:56:53
https://www.reddit.com/r/LocalLLaMA/comments/1h5kuub/best_local_tool_for_querying_a_folder_of_documents/
Cunninghams_right
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5kuub
false
null
t3_1h5kuub
/r/LocalLLaMA/comments/1h5kuub/best_local_tool_for_querying_a_folder_of_documents/
false
false
self
4
null
Can anyone estimate what kind of hardware I would need to run Llama 3 400B with 32b?
1
[removed]
2024-12-03T10:57:14
https://www.reddit.com/r/LocalLLaMA/comments/1h5kv0z/can_anyone_estimate_what_kind_of_hardware_i_would/
No_Goat_5701
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5kv0z
false
null
t3_1h5kv0z
/r/LocalLLaMA/comments/1h5kv0z/can_anyone_estimate_what_kind_of_hardware_i_would/
false
false
self
1
null
Small sized pretrained LLM model
3
Hi, does anyone know of any small pre-trained LLMs (5-12 GB in size)? It should just be able to answer very basic stuff, and that's about it. If so, please share. Thanks in advance :)
2024-12-03T11:19:02
https://www.reddit.com/r/LocalLLaMA/comments/1h5l6ft/small_sized_pretrained_llm_model/
Wide-Chef-7011
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h5l6ft
false
null
t3_1h5l6ft
/r/LocalLLaMA/comments/1h5l6ft/small_sized_pretrained_llm_model/
false
false
self
3
null