title (string, 1–300) | score (int64, 0–8.54k) | selftext (string, 0–40k) | created (timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29, ⌀) | url (string, 0–878) | author (string, 3–20) | domain (string, 0–82) | edited (timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7) | locked (bool, 2 classes) | media (string, 646–1.8k, ⌀) | name (string, 10) | permalink (string, 33–82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213) | ups (int64, 0–8.54k) | preview (string, 301–5.01k, ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AI Experts, please evaluate my efforts! | 1 | [removed] | 2025-01-08T23:14:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hwy0cy/ai_experts_please_evaluate_my_efforts/ | QuantityLopsided5201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hwy0cy | false | null | t3_1hwy0cy | /r/LocalLLaMA/comments/1hwy0cy/ai_experts_please_evaluate_my_efforts/ | false | false | 1 | null |
|
Do you think you can find the password ? I created a nice LLM Challenge | 3 | Hey LLM Enthusiasts,
I have recently been really drawn to the combination of CTF challenges and LLMs, so an idea popped into my mind and I turned it into a challenge.
I have fine-tuned *unsloth/Llama-3.2-1B-Instruct* to follow a specific pattern I wanted 🤫
The challenge is to make the LLM give you the password; comment the password if you find it!
I know a lot of you will crack it very quickly, but I think it's a very nice experience for me !
Thanks a lot for taking the time to read this and to do the challenge: [here](https://colab.research.google.com/drive/1I6iLgqiN5FBZdwfM8p16fF3MieQC7WFG?usp=sharing) | 2025-01-08T23:50:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hwywen/do_you_think_you_can_find_the_password_i_created/ | Boring_Rabbit2275 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hwywen | false | null | t3_1hwywen | /r/LocalLLaMA/comments/1hwywen/do_you_think_you_can_find_the_password_i_created/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
What happened to AiTracker? | 28 | About 7 months ago, [AiTracker.art](https://AiTracker.art) was announced here as a torrent tracker for AI models. It was a fairly useful resource, but I noticed it's no longer accessible. The torrents still work of course, but does anyone know what happened to it? Is it just down for maintenance, or gone forever (the hostname hasn't been resolving for the past week or two)?
Link to original announcement: [https://www.reddit.com/r/LocalLLaMA/comments/1dc1nxg/aitrackerart\_a\_torrent\_tracker\_for\_ai\_models/](https://www.reddit.com/r/LocalLLaMA/comments/1dc1nxg/aitrackerart_a_torrent_tracker_for_ai_models/) | 2025-01-08T23:57:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hwz324/what_happened_to_aitracker/ | whotookthecandyjar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hwz324 | false | null | t3_1hwz324 | /r/LocalLLaMA/comments/1hwz324/what_happened_to_aitracker/ | false | false | self | 28 | null |
Translation tools for Ollama | 0 | I have some large documents I want to translate. Are there any self-hosted frontends that could split the files into chunks for an Ollama backend to translate, then reassemble and output complete files?
Ideal with PDF and Docx support. Similar to LibreTranslate, but backed by a local LLM. | 2025-01-09T00:16:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hwzjon/translation_tools_for_ollama/ | xquarx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hwzjon | false | null | t3_1hwzjon | /r/LocalLLaMA/comments/1hwzjon/translation_tools_for_ollama/ | false | false | self | 0 | null |
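For the chunk-translate-reassemble workflow described above, here is a minimal sketch against a local Ollama server. The `/api/generate` endpoint is Ollama's standard REST API, but the model name, chunk size, and plain-text input file are placeholder assumptions, and PDF/Docx extraction is out of scope here:

    # Sketch: split plain text into chunks, translate each via a local Ollama
    # server, then reassemble. Model name and chunk size are illustrative.
    import requests

    def translate_chunk(text, model="qwen2.5:14b-instruct"):
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": model,
                "prompt": "Translate the following text to English, "
                          "preserving formatting:\n\n" + text,
                "stream": False,
            },
        )
        resp.raise_for_status()
        return resp.json()["response"]

    def translate_document(text, chunk_chars=4000):
        chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
        return "\n".join(translate_chunk(c) for c in chunks)

    with open("input.txt", encoding="utf-8") as f:  # placeholder input file
        print(translate_document(f.read()))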
Phi-4 Llamafied + 4 Bug Fixes + GGUFs, Dynamic 4bit Quants | 214 | Hey r/LocalLLaMA ! I've uploaded **fixed versions** of [Phi-4](https://unsloth.ai/blog/phi4), including GGUF + 4-bit + 16-bit versions on HuggingFace!
We’ve fixed over **4 bugs (3 major ones)** in Phi-4, mainly related to tokenizers and chat templates which affected inference and finetuning workloads. If you were experiencing poor results, we recommend trying our GGUF upload. A detailed post on the fixes will be released tomorrow.
We also **Llamafied** the model meaning it should work out of the box with every framework including [Unsloth](https://github.com/unslothai/unsloth). Fine-tuning is **2x faster, uses 70% VRAM** & has 9x longer context lengths with Unsloth.
View all Phi-4 versions with our bug fixes: [https://huggingface.co/collections/unsloth/phi-4-all-versions-677eecf93784e61afe762afa](https://huggingface.co/collections/unsloth/phi-4-all-versions-677eecf93784e61afe762afa)
|Phi-4 Uploads (with our bug fixes)|
|:-|
|[GGUFs](https://huggingface.co/unsloth/phi-4-GGUF) including 2, 3, 4, 5, 6, 8, 16-bit|
|[Unsloth Dynamic 4-bit](https://huggingface.co/unsloth/phi-4-unsloth-bnb-4bit)|
|[4-bit Bnb](https://huggingface.co/unsloth/phi-4-bnb-4bit)|
|[Original 16-bit](https://huggingface.co/unsloth/phi-4)|
I uploaded Q2\_K\_L quants which work well too - they are Q2\_K quants, but leave the embedding as Q4 and lm\_head as Q6 - this should increase accuracy by a bit!
To use Phi-4 in llama.cpp, do:
    ./llama.cpp/llama-cli \
        --model unsloth/phi-4-GGUF/phi-4-Q2_K_L.gguf \
        --prompt '<|im_start|>user<|im_sep|>Provide all combinations of a 5 bit binary number.<|im_end|><|im_start|>assistant<|im_sep|>' \
        --threads 16
Which will produce:
A 5-bit binary number consists of 5 positions, each of which can be either 0 or 1. Therefore, there are \(2^5 = 32\) possible combinations. Here they are, listed in ascending order:
1. 00000
2. 00001
3. 00010
I also uploaded **Dynamic 4bit quants** which don't quantize every layer to 4bit, and leaves some in 16bit - by using only an extra 1GB of VRAM, you get superior accuracy, especially for finetuning! - Head over to [https://github.com/unslothai/unsloth](https://github.com/unslothai/unsloth) to finetune LLMs and Vision models 2x faster and use 70% less VRAM!
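A rough sketch of loading the dynamic 4-bit upload for finetuning (this assumes the standard Unsloth `FastLanguageModel` API; the sequence length and LoRA settings below are illustrative, not recommendations):

    # Minimal sketch - loads the dynamic 4-bit Phi-4 upload and attaches LoRA adapters.
    # max_seq_length / r / lora_alpha are example values; tune them for your hardware.
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/phi-4-unsloth-bnb-4bit",
        max_seq_length=2048,
        load_in_4bit=True,
    )

    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
        lora_alpha=16,
    )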
[Dynamic 4bit quants leave some layers as 16bit and not 4bit](https://preview.redd.it/74tja0m83vbe1.png?width=1000&format=png&auto=webp&s=99a09dc6e9ad8c1d8e8e19ad519ccaaabebde3a8) | 2025-01-09T00:20:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hwzmqc/phi4_llamafied_4_bug_fixes_ggufs_dynamic_4bit/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hwzmqc | false | null | t3_1hwzmqc | /r/LocalLLaMA/comments/1hwzmqc/phi4_llamafied_4_bug_fixes_ggufs_dynamic_4bit/ | false | false | 214 | null |
|
Iterative Alignment Therapy - my experience | 1 | [removed] | 2025-01-09T00:33:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hwzvy4/iterative_alignment_therapy_my_experience/ | Gerdel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hwzvy4 | false | null | t3_1hwzvy4 | /r/LocalLLaMA/comments/1hwzvy4/iterative_alignment_therapy_my_experience/ | false | false | self | 1 | null |
Is it possible to run Pixtral-12B on a single 3090? | 2 | Getting OOM even when I tried using the unsloth bnb 4 bit model. Is there any way to run it on a single 3090? Thanks a lot in advance.
    from vllm import LLM
    from vllm.sampling_params import SamplingParams

    model_name = "unsloth/Pixtral-12B-2409-bnb-4bit"
    sampling_params = SamplingParams(max_tokens=8192)

    llm = LLM(model=model_name, quantization="bitsandbytes", load_format="bitsandbytes", tokenizer_mode="mistral")

    prompt = "Describe this image in one sentence."
    image_url = "https://picsum.photos/id/237/200/300"

    messages = [
        {
            "role": "user",
            "content": [{"type": "text", "text": prompt}, {"type": "image_url", "image_url": {"url": image_url}}]
        },
    ]

    outputs = llm.chat(messages, sampling_params=sampling_params)
    print(outputs[0].outputs[0].text)
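One knob that is sometimes enough to avoid OOM on a 24 GB card is capping the context length and KV-cache budget when constructing the engine. `max_model_len` and `gpu_memory_utilization` are standard vLLM `LLM()` arguments, but the values below are guesses and whether this fits Pixtral-12B on a single 3090 is not guaranteed:

    # Sketch: same model, but with the context capped so less VRAM goes to the KV cache.
    # The exact values are illustrative, not verified to fit on a 3090.
    llm = LLM(
        model=model_name,
        quantization="bitsandbytes",
        load_format="bitsandbytes",
        tokenizer_mode="mistral",
        max_model_len=8192,            # cap the context window
        gpu_memory_utilization=0.90,   # fraction of VRAM vLLM may reserve
    )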
| 2025-01-09T00:56:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hx0con/is_it_possible_to_run_pixtral12b_on_a_single_3090/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx0con | false | null | t3_1hx0con | /r/LocalLLaMA/comments/1hx0con/is_it_possible_to_run_pixtral12b_on_a_single_3090/ | false | false | self | 2 | null |
Now that Phi-4 has been out for a while what do you think? | 112 | on real world use cases does it perform well and what tasks have you tried it on so far? | 2025-01-09T02:03:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hx1qn2/now_that_phi4_has_been_out_for_a_while_what_do/ | pigeon57434 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx1qn2 | false | null | t3_1hx1qn2 | /r/LocalLLaMA/comments/1hx1qn2/now_that_phi4_has_been_out_for_a_while_what_do/ | false | false | self | 112 | null |
There are some stuff on this sub that I wish I could understand | 1 | 2025-01-09T02:13:09 | Pro-editor-1105 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hx1y1b | false | null | t3_1hx1y1b | /r/LocalLLaMA/comments/1hx1y1b/there_are_some_stuff_on_this_sub_that_i_wish_i/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'pNoTzDXyNH4x-rynI0SMV5hgixrxLLFiS4uLSvICv8k', 'resolutions': [{'height': 17, 'url': 'https://preview.redd.it/is987ryanvbe1.png?width=108&crop=smart&auto=webp&s=b513f5c79d9e09bb116379c2f4d467d6269a7858', 'width': 108}, {'height': 34, 'url': 'https://preview.redd.it/is987ryanvbe1.png?width=216&crop=smart&auto=webp&s=f74a267a33e96ad8b3137ef709a69a515df327fc', 'width': 216}, {'height': 51, 'url': 'https://preview.redd.it/is987ryanvbe1.png?width=320&crop=smart&auto=webp&s=95153190f9a18b57f55020daef8d63dcaf9cd1b4', 'width': 320}, {'height': 103, 'url': 'https://preview.redd.it/is987ryanvbe1.png?width=640&crop=smart&auto=webp&s=9eee34cc815a894d6b16cade3405f7815a1430a6', 'width': 640}, {'height': 155, 'url': 'https://preview.redd.it/is987ryanvbe1.png?width=960&crop=smart&auto=webp&s=c4944b869e25e84ae66fec35fd952cb7266449c7', 'width': 960}, {'height': 174, 'url': 'https://preview.redd.it/is987ryanvbe1.png?width=1080&crop=smart&auto=webp&s=1df5958779f8bb13229fbd0e1986018d44865de8', 'width': 1080}], 'source': {'height': 198, 'url': 'https://preview.redd.it/is987ryanvbe1.png?auto=webp&s=518fc2925cdd341644ea31f2c024ec3e4c584e92', 'width': 1226}, 'variants': {}}]} |
|||
vLLM vs. TGI v3 | 4 | I saw the news about TGI v3 and it sounds like one of those too-good-to-be-true stories. Any neutral 3rd parties have first hand experience with it compared to vLLM?
I've been running llama 3.1 70B awq-int4 in vLLM on 2x3090 for a while; wondering if I should migrate.
(I also still haven't been able to get a 4-bit version of 3.3 70B running under vLLM, so I'm also curious if TGI will make that easier out-of-the-box) | 2025-01-09T02:18:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hx22aw/vllm_vs_tgi_v3/ | e-rox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx22aw | false | null | t3_1hx22aw | /r/LocalLLaMA/comments/1hx22aw/vllm_vs_tgi_v3/ | false | false | self | 4 | null |
384gb RAM on consumer hardware | 1 | [removed] | 2025-01-09T02:48:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hx2q3q/384gb_ram_on_consumer_hardware/ | Physical-Double9735 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx2q3q | false | null | t3_1hx2q3q | /r/LocalLLaMA/comments/1hx2q3q/384gb_ram_on_consumer_hardware/ | false | false | self | 1 | null |
LLMs be like... reminder for the GPU poor among us :‘( | 0 | 2025-01-09T03:06:18 | nderstand2grow | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hx338v | false | null | t3_1hx338v | /r/LocalLLaMA/comments/1hx338v/llms_be_like_reminder_for_the_gpu_poor_among_us/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'En1qK_PNtYd81X9TCvdFbF-nPbmEQu9isfa35KYFoik', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/b4kai5lkwvbe1.png?width=108&crop=smart&auto=webp&s=64fc16c54c01b0b3021d96f63e4887d98c2c0637', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/b4kai5lkwvbe1.png?width=216&crop=smart&auto=webp&s=3353521b38bc7248f97b96c38225fb4fa42a32d0', 'width': 216}, {'height': 246, 'url': 'https://preview.redd.it/b4kai5lkwvbe1.png?width=320&crop=smart&auto=webp&s=40ed529cae338b1dcdd13d7431e1ae83baa584d9', 'width': 320}, {'height': 493, 'url': 'https://preview.redd.it/b4kai5lkwvbe1.png?width=640&crop=smart&auto=webp&s=eeaeff42963a9ca5acc3ed1c8a1ee6ce6b65766f', 'width': 640}], 'source': {'height': 500, 'url': 'https://preview.redd.it/b4kai5lkwvbe1.png?auto=webp&s=2e89a48b9bcd50a6e308f6f30d14a2cd25421afd', 'width': 648}, 'variants': {}}]} |
|||
Iterative Alignment Theory | 1 | [removed] | 2025-01-09T03:27:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hx3jw4/iterative_alignment_theory/ | Gerdel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx3jw4 | false | null | t3_1hx3jw4 | /r/LocalLLaMA/comments/1hx3jw4/iterative_alignment_theory/ | false | false | self | 1 | null |
Please explain the possible capabilities of Project Digits to me | 2 | I have a low level of knowledge when it comes to computers or AI in general. I use ChatGPT and Midjourney quite a bit and I think the tech is cool. I see that Nvidia is releasing Project Digits sometime this year and I'm curious to know what the possible use cases for the computer are. Also, will you need to know how to write code to use the computer the way it's intended to be used and to get the most that you can out of it?
Looking for contributors! | 1 | [removed] | 2025-01-09T03:29:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hx3kzp/looking_for_contributors/ | Downtown-Law-2381 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx3kzp | false | null | t3_1hx3kzp | /r/LocalLLaMA/comments/1hx3kzp/looking_for_contributors/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rRn77Z2RU74MbG0ffZ-mIT2MJWD3mbphUxMLwSTmiCc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?width=108&crop=smart&auto=webp&s=16d06893d67aaed985c44677c5fbfc76dd4a2f28', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?width=216&crop=smart&auto=webp&s=e0cb37b523465771c36f0c40918ee9059affb4bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?width=320&crop=smart&auto=webp&s=90a889b1e57c39423ac6d1e08fadc71b596e58aa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?width=640&crop=smart&auto=webp&s=a118d94c1e808732b731d69cf56f98107951b279', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?width=960&crop=smart&auto=webp&s=72f84abab4507a6e7eb8b257bc6ebf21550f0eed', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?width=1080&crop=smart&auto=webp&s=f186a7289a44fbd20c335f70a1d54f709700d990', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?auto=webp&s=f03d88f62a1e61d34d0cf6494c1747fb3e03b61b', 'width': 1200}, 'variants': {}}]} |
Anyone know of a good pre-trained model for summarizing medical papers? | 1 | I have a use case where I’d like to summarize sections of certain research papers in medical/bio fields. Is there a good open source model pre-trained for this anyone could recommend? | 2025-01-09T03:30:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hx3lyv/anyone_know_of_a_good_pretrained_model_for/ | thagoodlife | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx3lyv | false | null | t3_1hx3lyv | /r/LocalLLaMA/comments/1hx3lyv/anyone_know_of_a_good_pretrained_model_for/ | false | false | self | 1 | null |
trying to use Unsloth locally and get: CalledProcessError: Command. returned non-zero exit status 1 | 1 | [removed] | 2025-01-09T03:34:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hx3otf/trying_to_use_unsloth_locally_and_get/ | Havre-Banan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx3otf | false | null | t3_1hx3otf | /r/LocalLLaMA/comments/1hx3otf/trying_to_use_unsloth_locally_and_get/ | false | false | self | 1 | null |
Open source project | Looking for contributors! | 1 | [removed] | 2025-01-09T03:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hx3pfm/open_source_project_looking_for_contributors/ | Downtown-Law-2381 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx3pfm | false | null | t3_1hx3pfm | /r/LocalLLaMA/comments/1hx3pfm/open_source_project_looking_for_contributors/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rRn77Z2RU74MbG0ffZ-mIT2MJWD3mbphUxMLwSTmiCc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?width=108&crop=smart&auto=webp&s=16d06893d67aaed985c44677c5fbfc76dd4a2f28', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?width=216&crop=smart&auto=webp&s=e0cb37b523465771c36f0c40918ee9059affb4bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?width=320&crop=smart&auto=webp&s=90a889b1e57c39423ac6d1e08fadc71b596e58aa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?width=640&crop=smart&auto=webp&s=a118d94c1e808732b731d69cf56f98107951b279', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?width=960&crop=smart&auto=webp&s=72f84abab4507a6e7eb8b257bc6ebf21550f0eed', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?width=1080&crop=smart&auto=webp&s=f186a7289a44fbd20c335f70a1d54f709700d990', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BmqXHWvRmVJebgcgZHk7dckjfDN3XmET9KAz2a3dJ0I.jpg?auto=webp&s=f03d88f62a1e61d34d0cf6494c1747fb3e03b61b', 'width': 1200}, 'variants': {}}]} |
help with unsloth | 1 | [removed] | 2025-01-09T03:37:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hx3r6y/help_with_unsloth/ | Havre-Banan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx3r6y | false | null | t3_1hx3r6y | /r/LocalLLaMA/comments/1hx3r6y/help_with_unsloth/ | false | false | self | 1 | null |
Resources for AI | 5 | Hello, I'm a MERN stack developer (college student) :). Looking for resources to learn AI. Do you have any suggestions or resources?
Which LLM/SLM works well in Afrikaans, Zulu and South African English? | 1 | [removed] | 2025-01-09T04:23:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hx4nnb/which_llmslm_works_well_in_afrikaans_zulu_and/ | Doingthesciencestuff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx4nnb | false | null | t3_1hx4nnb | /r/LocalLLaMA/comments/1hx4nnb/which_llmslm_works_well_in_afrikaans_zulu_and/ | false | false | self | 1 | null |
Phi 4 is just 14B But Better than llama 3.1 70b for several tasks. | 365 | 2025-01-09T05:12:08 | Vishnu_One | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hx5i8u | false | null | t3_1hx5i8u | /r/LocalLLaMA/comments/1hx5i8u/phi_4_is_just_14b_but_better_than_llama_31_70b/ | false | false | 365 | {'enabled': True, 'images': [{'id': 'oe_kwWJK9WXdD4evCXOiljmAoqgai-eS4gvPdQu6yNM', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/uwfo8ig8jwbe1.png?width=108&crop=smart&auto=webp&s=96864835d622737a671e199ab63d7c201f10432c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/uwfo8ig8jwbe1.png?width=216&crop=smart&auto=webp&s=1c33d971ed5643cba992a1d11c94188af3d9c37c', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/uwfo8ig8jwbe1.png?width=320&crop=smart&auto=webp&s=8b5a99957c8f049dbf2bb9a283ee5dbda3b613c7', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/uwfo8ig8jwbe1.png?width=640&crop=smart&auto=webp&s=06707621fd601095edb87fea07007bbf976f658b', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/uwfo8ig8jwbe1.png?width=960&crop=smart&auto=webp&s=843e08a245d24113d82ced127b57be2c5303f0cb', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/uwfo8ig8jwbe1.png?width=1080&crop=smart&auto=webp&s=44ec1dd16c3cc0996282223fa704032bea68b1c1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/uwfo8ig8jwbe1.png?auto=webp&s=825d8a745db98939b54cae4ead84e54fd2f8e69c', 'width': 1920}, 'variants': {}}]} |
|||
If you are trying to learn about a specific topic that your model does not know a lot about, create a glossary. If the glossary is small enough, enter it into the system instructions. If not, insert it as a prompt before asking your question... | 19 | I used to enter dozens of pages worth of tokens. This confused all models except Claude and maybe sometimes Gemini.
All open-source models mostly suck at long-context, in-context understanding. I even tried the big ones on OpenRouter, as I cannot run the big ones locally.
Even LLaMA 3.1 405B struggles.
But now that I enter the glossary of the topic I am learning, it immediately understands the definitions and reduces hallucinations a lot because it knows what I am asking about. Saves a ton of time and compute.
Also, the model starts admitting what it doesn't know, which stops it from making BS hallucinations that we are likely to fall for. | 2025-01-09T05:27:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hx5rak/if_you_are_trying_to_learn_about_a_specific_topic/ | Powerful-Solution646 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx5rak | false | null | t3_1hx5rak | /r/LocalLLaMA/comments/1hx5rak/if_you_are_trying_to_learn_about_a_specific_topic/ | false | false | self | 19 | null |
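A minimal sketch of the glossary trick described above, against any OpenAI-compatible local server; the base URL, model name, and glossary entries are placeholders:

    # Sketch: put a small glossary in the system message so the model knows the
    # domain terms before the question arrives. Base URL / model name are examples.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:5001/v1", api_key="not-needed")

    glossary = (
        "Glossary:\n"
        "- KV cache: per-token key/value tensors reused during decoding\n"
        "- RoPE: rotary position embeddings used to encode token positions\n"
    )

    response = client.chat.completions.create(
        model="local-model",
        messages=[
            {"role": "system", "content": "Use the glossary below. If a term is not "
             "covered and you are unsure, say so.\n\n" + glossary},
            {"role": "user", "content": "How does context length interact with the KV cache?"},
        ],
    )
    print(response.choices[0].message.content)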
New Microsoft research - rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking
| 245 | [https://arxiv.org/abs/2501.04519](https://arxiv.org/abs/2501.04519) | 2025-01-09T05:43:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hx60t4/new_microsoft_research_rstarmath_small_llms_can/ | Ok_Landscape_6819 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx60t4 | false | null | t3_1hx60t4 | /r/LocalLLaMA/comments/1hx60t4/new_microsoft_research_rstarmath_small_llms_can/ | false | false | self | 245 | null |
Help choosing a system | 2 | Let's imagine I can spend about $2000. Really it could be more but I don't really want to. I already have an older gaming desktop with a Ryzen 9 3900x that is fine for an RTX 3090 and some more ram.
I have 2 goals really.
1. Would like to get a used Mac laptop and get back into using a Mac ui for my daily driver.
2. Play with local LLMs of decent size with acceptable performance. Not expecting blazing.
Do I get:
a) A MacBook Pro M1 Max with a 32-core GPU, 64GB RAM, and 1-2TB of storage for around $1800?
b) A cheaper MacBook Pro with a 16-core GPU and 1TB of storage but just 16-32GB RAM, plus a used RTX 3090 to replace my 1080?
c) something else? What?
Keep in mind I am not wedded to only having 1 machine and huge fast LLMs aren't as important as accomplishing both goals of a Mac laptop + some llm capabilities. I don't mind cloud or api subscriptions for non private stuff. | 2025-01-09T05:48:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hx63t0/help_choosing_a_system/ | BiteFancy9628 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx63t0 | false | null | t3_1hx63t0 | /r/LocalLLaMA/comments/1hx63t0/help_choosing_a_system/ | false | false | self | 2 | null |
Netflix search bar | 3 | The idea is that I want to develop a RAG application. First, let me explain the problem. Let's say I want to watch the King Kong movie but I forgot the title; I only remember the poster or some other info about the movie, like the fact that it has a monkey in it. If I search "monkey" in the Netflix search bar, will King Kong show up? No, right? But use vector similarity search over the movie descriptions and info, like cosine similarity, and it changes the whole search experience, since "Kong" relates to "ape" relates to "monkey"; I can search with anything that relates to the movie.
I want to use knowledge graphs for queries like "Rajamouli action movies" or "SRK movie from 2013".
I have a huge dataset with 8000+ movies in CSV format:
id, title, director, year, country, cast, description
please help me, thanks in advance | 2025-01-09T05:54:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hx67ax/netflix_search_bar/ | kingofpyrates | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx67ax | false | null | t3_1hx67ax | /r/LocalLLaMA/comments/1hx67ax/netflix_search_bar/ | false | false | self | 3 | null |
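A minimal sketch of the description-similarity part described above (it assumes the `sentence-transformers` library and a hypothetical `movies.csv` with the columns listed; the embedding model name is just one common choice):

    # Sketch: embed every movie description once, then answer free-form queries
    # ("monkey", "giant ape movie") by cosine similarity against those embeddings.
    import pandas as pd
    from sentence_transformers import SentenceTransformer

    df = pd.read_csv("movies.csv")  # hypothetical file: id,title,director,year,country,cast,description
    texts = (df["title"] + ". " + df["description"].fillna("")).tolist()

    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_emb = model.encode(texts, normalize_embeddings=True)

    def search(query, k=5):
        q = model.encode([query], normalize_embeddings=True)
        scores = doc_emb @ q[0]           # cosine similarity (vectors are normalized)
        top = scores.argsort()[::-1][:k]  # indices of the k best matches
        return df.iloc[top][["title", "year"]]

    print(search("monkey climbing a skyscraper"))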
Weirdly good finetune - QwQ-LCoT-7B-Instruct | 78 | I use a lot of complex, large-context coding prompts that are high on the difficulty scale using [https://github.com/curvedinf/dir-assistant](https://github.com/curvedinf/dir-assistant). I've been using APIs for a number of months since prices have come down, but I just did a round of tests in the 7B-14B range. I found this tune randomly while browsing huggingface and it has a whole 304 downloads, but damn is it good. It's consistently outperforming newer 32B models, and older 72B models in my tests. I don't know what the secret is here, but I just wanted to pass this along. I test a LOT of models, and this one is weirdly good for coding.
[https://huggingface.co/prithivMLmods/QwQ-LCoT-7B-Instruct](https://huggingface.co/prithivMLmods/QwQ-LCoT-7B-Instruct)
[https://huggingface.co/bartowski/QwQ-LCoT-7B-Instruct-GGUF](https://huggingface.co/bartowski/QwQ-LCoT-7B-Instruct-GGUF) | 2025-01-09T06:33:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hx6sjc/weirdly_good_finetune_qwqlcot7binstruct/ | 1ncehost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx6sjc | false | null | t3_1hx6sjc | /r/LocalLLaMA/comments/1hx6sjc/weirdly_good_finetune_qwqlcot7binstruct/ | false | false | self | 78 | {'enabled': False, 'images': [{'id': 'PVNdXTL93lKb070RTV6Op2HYdeIjgMd_bN_xTEdRrT0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/j4REuWQiOpPmBijPMUcX0He8EM5yWs6hSAmu5USCKAQ.jpg?width=108&crop=smart&auto=webp&s=c33f8c7265ae74db87365bf24d94b404678bfe43', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/j4REuWQiOpPmBijPMUcX0He8EM5yWs6hSAmu5USCKAQ.jpg?width=216&crop=smart&auto=webp&s=3d8f2dea90801dfd591978766132c31cfcbfc365', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/j4REuWQiOpPmBijPMUcX0He8EM5yWs6hSAmu5USCKAQ.jpg?width=320&crop=smart&auto=webp&s=47be8d50e520f45e90d1a005596798b7f4ad9c1f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/j4REuWQiOpPmBijPMUcX0He8EM5yWs6hSAmu5USCKAQ.jpg?width=640&crop=smart&auto=webp&s=cc8a3bbf6da6827946392c94a8837c494cea0efa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/j4REuWQiOpPmBijPMUcX0He8EM5yWs6hSAmu5USCKAQ.jpg?width=960&crop=smart&auto=webp&s=cacba16d0d77d6f6366a145e06bf646f5c2f1e75', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/j4REuWQiOpPmBijPMUcX0He8EM5yWs6hSAmu5USCKAQ.jpg?width=1080&crop=smart&auto=webp&s=b5c3f1f6de826955334713614401efcc7940cc66', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/j4REuWQiOpPmBijPMUcX0He8EM5yWs6hSAmu5USCKAQ.jpg?auto=webp&s=dd814f3878f0c301b971e6814670f26bdf7b819c', 'width': 1200}, 'variants': {}}]} |
Help Me Decide: RTX 3060 12GB vs. RTX 4060 Ti 16GB for ML and Occasional Gaming | 15 | Hey everyone! I could really use some GPU advice. I primarily do machine learning/model training but also game casually (League of Legends at 60 FPS is more than enough for me). Due to local market constraints, I’ve narrowed it down to:
1. **RTX 3060 12GB** (MSI Ventus 2X) – $365
* [Specs](https://www.techpowerup.com/gpu-specs/msi-rtx-3060-ventus-2x.b8614)
2. **RTX 4060 Ti 16GB** (ZOTAC AMP) – $510
* [Specs](https://www.techpowerup.com/gpu-specs/zotac-rtx-4060-ti-amp-16-gb.b11324)
My current system is an **i5-12400** with **32GB of RAM**.
**Why I’m Torn:**
* The 4060 Ti has more VRAM (16GB vs. 12GB) and higher CUDA core count, which can help with bigger ML models.
* However, it’s got a narrower memory bus (128-bit vs. 192-bit on the 3060).
* There’s also a significant price difference ($510 vs. $365).
**Use Cases:**
* **Machine Learning / Model Training**: Primarily in TensorFlow/PyTorch. VRAM size is important for handling larger models, but memory bandwidth can also be a factor.
* **Gaming**: Mostly League of Legends (60 FPS is plenty). I’m not aiming for ultra settings in AAA titles.
**Questions:**
1. How much does the narrower bus on the 4060 Ti matter for ML workloads in practice?
2. Is it worth paying the extra $145 for the 4060 Ti for the additional VRAM and performance uplift?
I’d really appreciate any insights or experiences you might have. Thanks in advance! | 2025-01-09T06:37:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hx6utz/help_me_decide_rtx_3060_12gb_vs_rtx_4060_ti_16gb/ | zaid2801 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx6utz | false | null | t3_1hx6utz | /r/LocalLLaMA/comments/1hx6utz/help_me_decide_rtx_3060_12gb_vs_rtx_4060_ti_16gb/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'zHIP1I7kzMJAyCsbwRrRgMg0Q6vKRQDqf4JweeNBuR0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/q_gUCh8wR1maH4ysqRG7FFpcL2f4kpRKY3HwW73Q23g.jpg?width=108&crop=smart&auto=webp&s=008148e85c319a64e91c0b88334704297e60c6b3', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/q_gUCh8wR1maH4ysqRG7FFpcL2f4kpRKY3HwW73Q23g.jpg?width=216&crop=smart&auto=webp&s=72674b9b3a659830e0c98824c8e07a221b02e6ac', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/q_gUCh8wR1maH4ysqRG7FFpcL2f4kpRKY3HwW73Q23g.jpg?width=320&crop=smart&auto=webp&s=4767b6bc29b01005bfe101f2a5280f81934a5de4', 'width': 320}], 'source': {'height': 270, 'url': 'https://external-preview.redd.it/q_gUCh8wR1maH4ysqRG7FFpcL2f4kpRKY3HwW73Q23g.jpg?auto=webp&s=df355128cb81027cd6d76746df0713aa325022fd', 'width': 484}, 'variants': {}}]} |
How to Fine-Tune Your LLM Without Sufficient Resources (GPU, RAM, etc.) | 1 | [removed] | 2025-01-09T06:46:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hx6zji/how_to_finetune_your_llm_without_sufficient/ | kevinjadiya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx6zji | false | null | t3_1hx6zji | /r/LocalLLaMA/comments/1hx6zji/how_to_finetune_your_llm_without_sufficient/ | false | false | self | 1 | null |
TransPixar: a new generative model that preserves transparency, | 561 | 2025-01-09T06:55:51 | https://v.redd.it/8fhb41uq1xbe1 | umarmnaq | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hx7421 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/8fhb41uq1xbe1/DASHPlaylist.mpd?a=1738997763%2CZThiYWMyYzUyOGRiNjQwYTNjZmZhNmQ4M2JiOGRhNzFiZjFmYTI2ZGRmYWFjNzQyN2ZlMWU3YWIyOWYxYzVmNg%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/8fhb41uq1xbe1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/8fhb41uq1xbe1/HLSPlaylist.m3u8?a=1738997763%2CNThhYjg1OTMwNzcwZTA4MzM3MzMyMjRmZmRjMmU1MmMxNzkyYWExNDgxZGRlOGEzMzMxMmZmMWM0ZmIyYWQyMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8fhb41uq1xbe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 640}} | t3_1hx7421 | /r/LocalLLaMA/comments/1hx7421/transpixar_a_new_generative_model_that_preserves/ | false | false | 561 | {'enabled': False, 'images': [{'id': 'aHFsc2gwdXExeGJlMYcphE9YFRyNCrr76DvwiShDtswDzb2s93cwaOHelUg2', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aHFsc2gwdXExeGJlMYcphE9YFRyNCrr76DvwiShDtswDzb2s93cwaOHelUg2.png?width=108&crop=smart&format=pjpg&auto=webp&s=c64d92ee1515586e82ceb7b2fec2017598be5581', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/aHFsc2gwdXExeGJlMYcphE9YFRyNCrr76DvwiShDtswDzb2s93cwaOHelUg2.png?width=216&crop=smart&format=pjpg&auto=webp&s=f14bea540226e7036909a1a2d378884b389b31a0', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/aHFsc2gwdXExeGJlMYcphE9YFRyNCrr76DvwiShDtswDzb2s93cwaOHelUg2.png?width=320&crop=smart&format=pjpg&auto=webp&s=4a1b14a93ca22712824a3c8a5ef2d1e655f6c892', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/aHFsc2gwdXExeGJlMYcphE9YFRyNCrr76DvwiShDtswDzb2s93cwaOHelUg2.png?width=640&crop=smart&format=pjpg&auto=webp&s=ddfa74eaff882623de281623b45cdefe838a419a', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/aHFsc2gwdXExeGJlMYcphE9YFRyNCrr76DvwiShDtswDzb2s93cwaOHelUg2.png?format=pjpg&auto=webp&s=699b766f9e846c3ca9b9765f4f0f177fb2be634b', 'width': 640}, 'variants': {}}]} |
||
Who do you recommend following on Twitter? | 0 | I've been exploring AI for weeks+ and I've been really enjoying it and now believe that AI is the future. Twitter is also really great to find smart people, but also can be annoying reading all the bullish things people can say.
Who are you currently following that helped you learn a lot or makes good prediction? | 2025-01-09T07:20:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hx7gff/who_do_you_recommend_following_on_twitter/ | OPsyduck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx7gff | false | null | t3_1hx7gff | /r/LocalLLaMA/comments/1hx7gff/who_do_you_recommend_following_on_twitter/ | false | false | self | 0 | null |
RDNA Metric Cores | 1 | [removed] | 2025-01-09T07:20:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hx7giz/rdna_metric_cores/ | Benyjing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx7giz | false | null | t3_1hx7giz | /r/LocalLLaMA/comments/1hx7giz/rdna_metric_cores/ | false | false | self | 1 | null |
The evolution of LLM Fine-Tuning and Customization in 2024 | 0 | 2024 was a landmark year for LLMs and GenAI. 2025 is going to be even better!
The top 3 trends in 2024 were:
1. Open-Source/ Open-Weight models closing gap with Closed-Source models
2. Rise in Small Language Models
3. Multi-Model and Multi-Modal LLM Approaches
Our top predictions for 2025:
1. Data quality scaling >> Compute scaling
2. Further developments in inference time scaling and their open-source implementations
3. Evolution of reasoning capabilities in Small Language Models
4. New fine-tuning approaches like RFT and their open-source implementations
5. GenAI going into production, with increased focus on control and performance through custom LLMs
Read our year in review for more details and share your thoughts. What's your top 2025 prediction for LLM customization and open-source?
[https://genloop.substack.com/p/the-evolution-of-llm-fine-tuning](https://genloop.substack.com/p/the-evolution-of-llm-fine-tuning)
https://preview.redd.it/56vim963axbe1.png?width=3000&format=png&auto=webp&s=8492868cd4988cafc1dfe979eb6391232c9a5cf3
| 2025-01-09T07:42:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hx7qwg/the_evolution_of_llm_finetuning_and_customization/ | SirComprehensive7453 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx7qwg | false | null | t3_1hx7qwg | /r/LocalLLaMA/comments/1hx7qwg/the_evolution_of_llm_finetuning_and_customization/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'V5_nQZDkJdKTVBdopvmwMnc36ZpKK8NMx41FixG9Y9c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Rg6UZJTCiIAytbpxhon8wvddRsPkTxvjOfTLsGyPngA.jpg?width=108&crop=smart&auto=webp&s=d8a46ee05c0f9aa34ceaa9c2946496b6b359dc2a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Rg6UZJTCiIAytbpxhon8wvddRsPkTxvjOfTLsGyPngA.jpg?width=216&crop=smart&auto=webp&s=094c523a4b6df6dc492969d7a56327219d4d1301', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Rg6UZJTCiIAytbpxhon8wvddRsPkTxvjOfTLsGyPngA.jpg?width=320&crop=smart&auto=webp&s=d89c2fd62686102c3881455ad953186c6f7a2779', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Rg6UZJTCiIAytbpxhon8wvddRsPkTxvjOfTLsGyPngA.jpg?width=640&crop=smart&auto=webp&s=b8726e5cbeb9a14bc6eeb56ac5d6483561d464b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Rg6UZJTCiIAytbpxhon8wvddRsPkTxvjOfTLsGyPngA.jpg?width=960&crop=smart&auto=webp&s=f7c909b79ae3f86987255391538cb9b0a55b46e7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Rg6UZJTCiIAytbpxhon8wvddRsPkTxvjOfTLsGyPngA.jpg?width=1080&crop=smart&auto=webp&s=6a63e46f45f204990147517e0372944b8e3d9d31', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Rg6UZJTCiIAytbpxhon8wvddRsPkTxvjOfTLsGyPngA.jpg?auto=webp&s=99f101a4636700645f84d17282d50efd1e7296a6', 'width': 1200}, 'variants': {}}]} |
|
Models gets installed to the default folder and not the configured location! | 0 | I have installed Ollama and before pulling any model I have used the below line to change the download path to another SSD that I have with bigger size for downloading model:
Sudo chown -R ollama:llama/media/baz/DataDrive/Models (from ollama github)
However after I ran the line and pulled the models I saw that they still got downloaded to the default path in my primary OS SSD
What am I doing wrong ? | 2025-01-09T07:46:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hx7sp5/models_gets_installed_to_the_default_folder_and/ | IbrBaz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx7sp5 | false | null | t3_1hx7sp5 | /r/LocalLLaMA/comments/1hx7sp5/models_gets_installed_to_the_default_folder_and/ | false | false | self | 0 | null |
genAI engineer interview | 0 | What makes a great generative AI engineer with a focus on LLMs?
Also, what are your thoughts on the interview process? Any ideas for questions I should ask as an interviewer? | 2025-01-09T07:46:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hx7sut/genai_engineer_interview/ | lapups | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx7sut | false | null | t3_1hx7sut | /r/LocalLLaMA/comments/1hx7sut/genai_engineer_interview/ | false | false | self | 0 | null |
Dual GPU Power Supply, Mounting, PCIe Lanes | 1 | I'm planing on upgrading my 3080 to a 5090 once the 5090 releases, and I'm wondering how viable it would be to keep the 3080 in my second pcie slot.
1. The problem with that is that the second slot only has 4 PCIe lanes (I don't know if that's a problem for AI).
2. How much power do I need to run both a 5090 and a 3080 along with the rest of my system (the CPU is a 7800X3D)? My 1000W power supply is probably not going to be enough.
3. The 5090 is probably going to be large enough that it requires one of those weird GPU support sticks in order to not sag in the PCIe slot, which is bad because sagging can crack the PCB and destroy the card. But using one of those support sticks is not really possible with another GPU below the 5090, so I need some other way of making sure the GPU is mounted properly and doesn't sag.
Is the extra 10GB of Vram worth the effort and cost of buying a new PSU (how much more can I run with 42GB vs 32GB) and does the lack of PCIE lanes matter for AI?. Another option would be to sell the 3080 and buy a 4060ti 16GB instead for less power and more vram then the 3080. | 2025-01-09T08:23:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hx8b1m/dual_gpu_power_supply_mounting_pcie_lanes/ | KeinNiemand | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx8b1m | false | null | t3_1hx8b1m | /r/LocalLLaMA/comments/1hx8b1m/dual_gpu_power_supply_mounting_pcie_lanes/ | false | false | self | 1 | null |
A beginners guide to LLM scripting using Python with the KoboldCpp API | 26 | ## A guide to using the KoboldCpp API with Python
KoboldCpp is a great way to run LLMs.
1. Can run any GGUF LLM that llamacpp can (and more)
2. No installation, compiling, or python dependencies needed
3. Linux, Windows, macOS
4. CUDA, and Vulkan acceleration
5. Included GUI frontend with a ton of features
6. Image generation (SD, Flux)
7. Image to text
8. Voice to text and text to voice
9. OpenAI and Ollama compatible API endpoints
The one-file, no-install design makes it great for running portable scripts. Just pack the exe, the model weights (or a kcppt file), a batch file or shell script, and a bit of Python code, and you have a portable LLM engine.
### What background do you need to follow this guide?
In order to follow this guide you must first learn about and have ready:
1. A Python install for your OS
2. A terminal you can use to run command line programs
3. A text editor meant for coding like notepad++ or an IDE like VSCode or PyCharm setup to write Python
4. A way to manage python environments like miniconda, and an environment setup for this use
5. At least the first part of a [Python tutorial](https://www.kaggle.com/code/alexisbcook/arithmetic-and-variables)
6. Familiarity with running LLMs and downloading weights
7. Familiarity with the JSON format
### Basic API use
Load a model in KoboldCpp and in the web browser navigate to http://localhost:5001/api
That page is the API documentation. Each URL portion listed is called an **endpoint**. By putting it at the end of the KoboldCpp URL you will reach that endpoint, whose functionality is usually defined by its location. Clicking on an endpoint will allow you to test it live and see the results. Notice the POST or GET next to the endpoint location -- this is important. Sometimes there are two entries for the same endpoint: one for POST and one for GET, and they do different things.
Click on the /api/v1/generate entry and then click *Try it out* and then *Execute*. The result of the query will appear.
* [/api/v1/generate](https://imgur.com/dG4lZDS)
* [/api/v1/generate](https://imgur.com/UtA5HX7)
The documentation shows you an example payload and an example response for each endpoint. We communicate with the API using JSON, which contains key value pairs formatted in a specific way.
At the bottom is the **Schema** list. These are all of the possible key-value pairs you can send to or receive from an endpoint.
If you click on *GenerationInput* you will see every key you can specify, and the types required.
* [GenerationInput](https://imgur.com/CoUbH2t)
### Example using Python
In the terminal, you will need to install the requests library and then open a Python interpreter.
pip install requests
python
Enter the following (I recommend you type it instead of copying and pasting, for practice):
import json
import requests
endpoint = "http://localhost:5001/api/v1/generate"
headers = {"Content-Type": "application/json"}
payload = {
"prompt": "What is a Kobold?",
"max_length": 200,
"temperature": 0.7,
}
response = requests.post(endpoint, json=payload, headers=headers).json()
print(response)
You will see the response from the KoboldCpp API. Compare the results you got from that with the example in the API spec page.
* [Response.](https://imgur.com/pQScvGm)
Now you know how to do a basic API call to KoboldCpp!
## Scripting
Now let's open the text editor or IDE and write a script. In this script we will create a function that communicates with the API for us so we don't have to write the same thing every time. All we need to specify will be the endpoint and the prompt.
import requests
import json
def call_kobold_api(endpoint, request_type, payload=""):
# We establish our base configuration
base_url = "http://localhost:5001"
headers = {
"Content-Type": "application/json"
}
# We build the full URL
url = base_url + endpoint
if request_type == "GET":
response = requests.get(url, headers=headers)
else:
response = requests.post(url, json=payload, headers=headers)
return response.json()
We can now call that function to talk to the API.
For the most common task - generating text from a prompt:
generate = "/api/v1/generate"
max_length = 100
temperature = 1
prompt = input("Kobold: ")
payload = {
"prompt": prompt,
"max_length": max_length,
"temperature": temperature,
}
response = call_kobold_api(generate, "POST", payload)
print(response['results'][0]['text'])
Save the script and run it.
Since there is no instruct template, it is going to be in *text completion* mode and will generally just finish what you started writing.
* [Response.](https://imgur.com/H0fDLJF)
Congrats on learning how to use the KoboldCpp API in Python!
### Token Management
Count tokens in your text:
def count_tokens(text):
endpoint = "/api/extra/tokencount"
data = {"prompt": text}
response = call_kobold_api(endpoint, "POST", data)
return response["value"]
# Example usage
token_count = count_tokens("Hello, world!")
print(f"Token count: {token_count}")
### Model Information
Get information about the running model:
def get_model_info():
model = call_kobold_api("/api/v1/model", "GET")
        max_context = call_kobold_api("/api/extra/true_max_context_length", "GET")
return {
"model_name": model["result"],
"max_context": max_context["value"]
}
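Putting the helpers together, here is a small sketch that shrinks a long prompt until it fits the model's context before generating. It reuses `call_kobold_api`, `count_tokens` and `get_model_info` from above, and the trimming strategy plus the `notes.txt` file are only examples:

    def generate_within_context(prompt, max_length=200):
        # Assumes call_kobold_api, count_tokens and get_model_info are defined above.
        info = get_model_info()
        budget = info["max_context"] - max_length  # leave room for the reply

        # Naive strategy: drop text from the front until the prompt fits.
        while prompt and count_tokens(prompt) > budget:
            prompt = prompt[500:]

        payload = {"prompt": prompt, "max_length": max_length}
        response = call_kobold_api("/api/v1/generate", "POST", payload)
        return response["results"][0]["text"]

    # Example usage (notes.txt is a placeholder file)
    print(generate_within_context("Summarize this: " + open("notes.txt").read()))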
### Do I need to do all this myself?
Nope! Someone already did the tedious stuff for you, like automatically finding and wrapping the correct template, dealing with text files and images, and creating all the basic endpoint functions.
[You can download it here](https://github.com/jabberjabberjabber/koboldapi-python). Of course you are free to modify it to your needs as long as you keep the source public under the GPLv3.
*This is a work in progress. I am not an expert, by far. Constructive criticisms, corrections, additions and questions are welcome.* | 2025-01-09T08:35:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hx8gid/a_beginners_guide_to_llm_scripting_using_python/ | Eisenstein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx8gid | false | null | t3_1hx8gid | /r/LocalLLaMA/comments/1hx8gid/a_beginners_guide_to_llm_scripting_using_python/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'C3SZdNfCuyenFifg_4LX71e5bYJ1AvC6ThME_dwF0Co', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/l9s5tdwAFKtMmMMIn3-Yl42FnZ-JcmcBeLAIsFdfb60.jpg?width=108&crop=smart&auto=webp&s=85fcff6b095051ce931667226c1f4fcc75f8538b', 'width': 108}, {'height': 175, 'url': 'https://external-preview.redd.it/l9s5tdwAFKtMmMMIn3-Yl42FnZ-JcmcBeLAIsFdfb60.jpg?width=216&crop=smart&auto=webp&s=f5b5323d803e190845ba9c8f659911b1c5477aaa', 'width': 216}, {'height': 260, 'url': 'https://external-preview.redd.it/l9s5tdwAFKtMmMMIn3-Yl42FnZ-JcmcBeLAIsFdfb60.jpg?width=320&crop=smart&auto=webp&s=73e76c8dd5a5c3d908017327a6cd9ad57d3815a9', 'width': 320}, {'height': 520, 'url': 'https://external-preview.redd.it/l9s5tdwAFKtMmMMIn3-Yl42FnZ-JcmcBeLAIsFdfb60.jpg?width=640&crop=smart&auto=webp&s=b9aa335009a90ca9ab681aea8a52ff1965afa59c', 'width': 640}, {'height': 780, 'url': 'https://external-preview.redd.it/l9s5tdwAFKtMmMMIn3-Yl42FnZ-JcmcBeLAIsFdfb60.jpg?width=960&crop=smart&auto=webp&s=01a3971570c8b111f25eb61c1373d45ee4299627', 'width': 960}, {'height': 878, 'url': 'https://external-preview.redd.it/l9s5tdwAFKtMmMMIn3-Yl42FnZ-JcmcBeLAIsFdfb60.jpg?width=1080&crop=smart&auto=webp&s=7568af76f02d1cb5e8269655977cc3ec2e346e9d', 'width': 1080}], 'source': {'height': 1197, 'url': 'https://external-preview.redd.it/l9s5tdwAFKtMmMMIn3-Yl42FnZ-JcmcBeLAIsFdfb60.jpg?auto=webp&s=9aad78a5ba2cf29e05013def12b6514bdfc3e5ff', 'width': 1472}, 'variants': {}}]} |
"rStar-Math demonstrates that small language models (SLMs) can rival or even surpass the math reasoning capability of OpenAI o1, without distillation from superior models. rStar-Math achieves this by exercising "deep thinking" through Monte Carlo Tree Search (MCTS)....." | 136 | We present rStar-Math to demonstrate that small language models (SLMs) can rival or even surpass the math reasoning capability of OpenAI o1, without distillation from superior models. rStar-Math achieves this by exercising "deep thinking" through Monte Carlo Tree Search (MCTS), where a math policy SLM performs test-time search guided by an SLM-based process reward model. rStar-Math introduces three innovations to tackle the challenges in training the two SLMs: (1) a novel code-augmented CoT data sythesis method, which performs extensive MCTS rollouts to generate step-by-step verified reasoning trajectories used to train the policy SLM; (2) a novel process reward model training method that avoids naïve step-level score annotation, yielding a more effective process preference model (PPM); (3) a self-evolution recipe in which the policy SLM and PPM are built from scratch and iteratively evolved to improve reasoning capabilities. Through 4 rounds of self-evolution with millions of synthesized solutions for 747k math problems, rStar-Math boosts SLMs' math reasoning to state-of-the-art levels. On the MATH benchmark, it improves Qwen2.5-Math-7B from 58.8% to 90.0% and Phi3-mini-3.8B from 41.4% to 86.4%, surpassing o1-preview by +4.5% and +0.9%. On the USA Math Olympiad (AIME), rStar-Math solves an average of 53.3% (8/15) of problems, ranking among the top 20% the brightest high school math students.
https://arxiv.org/abs/2501.04519 | 2025-01-09T08:51:09 | https://www.reddit.com/gallery/1hx8nex | Powerful-Solution646 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hx8nex | false | null | t3_1hx8nex | /r/LocalLLaMA/comments/1hx8nex/rstarmath_demonstrates_that_small_language_models/ | false | false | 136 | null |
|
BREAKING NEWS: AI safety blogging companies partnering with Defense Technology companies to lobby for regulations on 'dangerous' Open source AI. | 152 | https://www.reuters.com/technology/artificial-intelligence/defense-firm-anduril-partners-with-openai-use-ai-national-security-missions-2024-12-04/
https://finance.yahoo.com/news/anthropic-teams-palantir-aws-sell-122648188.html | 2025-01-09T09:33:46 | https://www.reddit.com/gallery/1hx973s | Powerful-Solution646 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hx973s | false | null | t3_1hx973s | /r/LocalLLaMA/comments/1hx973s/breaking_news_ai_safety_blogging_companies/ | false | false | 152 | null |
|
Former OpenAI employee Miles Brundage: "o1 is just an LLM though, no reasoning infrastructure. The reasoning is in the chain of thought." Current OpenAI employee roon: "Miles literally knows what o1 does." | 253 | 2025-01-09T09:39:31 | https://www.reddit.com/gallery/1hx99oi | Wiskkey | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hx99oi | false | null | t3_1hx99oi | /r/LocalLLaMA/comments/1hx99oi/former_openai_employee_miles_brundage_o1_is_just/ | false | false | 253 | null |
||
AI agent “alive” 24/7? | 0 | Hi.
I need an AI agent that can remind me of events from my calendar, and also look on the Internet, regularly search for news or anything else I need, and report it to me.
Does anyone know how to do this, or whether an online service like this already exists? | 2025-01-09T09:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hx9cva/ai_agent_alive_247/ | SnooWoofers780 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx9cva | false | null | t3_1hx9cva | /r/LocalLLaMA/comments/1hx9cva/ai_agent_alive_247/ | false | false | self | 0 | null |
Using LLM to create benchmark | 2 | This might be a stupid question. Ive noticed that a lot of people create their own, private test benchmarks which they use when testing different LLMs. They dont post these questions online since they might end up in the training data eventually. My question is this, if I use an advanced model to create a benchmark with questions, is that benchmark useless since it most likely is based on data that the model has already seen? Or is that a viable option? Synthetic benchmarks? | 2025-01-09T09:56:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hx9hpk/using_llm_to_create_benchmark/ | jaxupaxu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hx9hpk | false | null | t3_1hx9hpk | /r/LocalLLaMA/comments/1hx9hpk/using_llm_to_create_benchmark/ | false | false | self | 2 | null |
Help, Trying to run LLM on local machine, would appreciate your input on my memory issue. | 1 | Hi,
So I am running the Mistral AI 7B model locally on my machine. The model occupies roughly 8.5GB of GPU memory, but when I load the chat context, the memory occupied by the model exceeds the 12GB of graphics memory available. How should I go about this?
I thought about using the hard disk for the extra memory, but I have no idea how to do it.
online solution says that I will have to use a new GPU, don't have the money to do so, any Inputs will be appreciated. | 2025-01-09T10:37:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hxa1o1/help_trying_to_run_llm_on_local_machine_would/ | According_Fig_4784 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxa1o1 | false | null | t3_1hxa1o1 | /r/LocalLLaMA/comments/1hxa1o1/help_trying_to_run_llm_on_local_machine_would/ | false | false | self | 1 | null |
Which tests do you use to test an LLM's capabilities? | 6 | I feel that recently so many benchmarks are becoming saturated. There is really a need for some kind of 'dynamic' benchmarking . Personally, I always try and play tic-tac-toe or connect 4 against LLMs. Not a single LLM (open or proprietary) can do this. Most fail to keep track of the game state and when they actually can play, they play dumb moves. | 2025-01-09T10:51:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hxa8n0/which_tests_do_you_use_to_test_an_llms/ | MoonIcebear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxa8n0 | false | null | t3_1hxa8n0 | /r/LocalLLaMA/comments/1hxa8n0/which_tests_do_you_use_to_test_an_llms/ | false | false | self | 6 | null |
inference help | 1 | [removed] | 2025-01-09T10:55:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hxaam0/inference_help/ | Alternative_Bee_2142 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxaam0 | false | null | t3_1hxaam0 | /r/LocalLLaMA/comments/1hxaam0/inference_help/ | false | false | self | 1 | null |
What is the best LLM for writing and coding that can be run on 4090? | 20 | It doesn't have to be one LLM that does both coding and writing, just to be clear.
I understand this question gets asked a lot, but things are changing quickly and I would like your help to stay on top of them. Thank you. | 2025-01-09T11:08:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hxaho9/what_is_the_best_llm_for_writing_and_coding_that/ | idleWizard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxaho9 | false | null | t3_1hxaho9 | /r/LocalLLaMA/comments/1hxaho9/what_is_the_best_llm_for_writing_and_coding_that/ | false | false | self | 20 | null |
When will we get local thinking time? | 2 | Is it possible currently to have your local 4b model think for like 10 mins? I’ve seen recent info that it can significantly improve the accuracy of even smaller models - does that mean less vram will be more acceptable to the average user now assuming you have the time to compute, rather than stacking vram for the bigger models? | 2025-01-09T11:26:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hxaqtk/when_will_we_get_local_thinking_time/ | Trustingmeerkat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxaqtk | false | null | t3_1hxaqtk | /r/LocalLLaMA/comments/1hxaqtk/when_will_we_get_local_thinking_time/ | false | false | self | 2 | null |
Best open source LLM for common sense reasoning? | 1 | [removed] | 2025-01-09T11:39:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hxaxc8/best_open_source_llm_for_common_sense_reasoning/ | feedback001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxaxc8 | false | null | t3_1hxaxc8 | /r/LocalLLaMA/comments/1hxaxc8/best_open_source_llm_for_common_sense_reasoning/ | false | false | self | 1 | null |
something similar to elia for terminal llm chat? | 1 | I am interested in this tool [https://github.com/Decentralised-AI/elia-interacting-with-LLM?tab=readme-ov-file](https://github.com/Decentralised-AI/elia-interacting-with-LLM?tab=readme-ov-file) . Basically I want to train my model myself, and try to set up a chatbot via terminal with just some customized questions I pose, then I store the answers given in that chat, stores chat history in a structured way. ELIA doesn't doesn't support models not of ollama, my pc in company isn't accepting ollama. Any tools? | 2025-01-09T11:49:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hxb30o/something_similar_to_elia_for_terminal_llm_chat/ | jiii95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxb30o | false | null | t3_1hxb30o | /r/LocalLLaMA/comments/1hxb30o/something_similar_to_elia_for_terminal_llm_chat/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '88xKNNinOpBALw4dI-tjJUJ_jvR55YQR0GS1Wu72Jog', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EPRF23G5m_jRl2Z7O2pNgvDBpOqTs466Z1-P-CAfwdY.jpg?width=108&crop=smart&auto=webp&s=e12c55cf98acfc10fe88db45d85a0b2a22f133fc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EPRF23G5m_jRl2Z7O2pNgvDBpOqTs466Z1-P-CAfwdY.jpg?width=216&crop=smart&auto=webp&s=251dec0765c7c215aae82af02974844173741f0b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EPRF23G5m_jRl2Z7O2pNgvDBpOqTs466Z1-P-CAfwdY.jpg?width=320&crop=smart&auto=webp&s=ec3a56b217111a0bdd8784cff3a51ef01268bf2e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EPRF23G5m_jRl2Z7O2pNgvDBpOqTs466Z1-P-CAfwdY.jpg?width=640&crop=smart&auto=webp&s=3aefefacdee4a20d2d19a479285d854f94f2a0f4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EPRF23G5m_jRl2Z7O2pNgvDBpOqTs466Z1-P-CAfwdY.jpg?width=960&crop=smart&auto=webp&s=48c86eea7c4efb63cb5c97b0011b9fee93db6916', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EPRF23G5m_jRl2Z7O2pNgvDBpOqTs466Z1-P-CAfwdY.jpg?width=1080&crop=smart&auto=webp&s=e8d3a780ecaf2a78544bfd347db1c198741becef', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EPRF23G5m_jRl2Z7O2pNgvDBpOqTs466Z1-P-CAfwdY.jpg?auto=webp&s=89c03af459c09ad24b8d4a0bee9bda0f0cd21b82', 'width': 1200}, 'variants': {}}]} |
How to semi-automatically make many legitimate and valuable open-source contributions | 0 | This is more of a thought experiment but hear me out:
Let's call this TODO driven development:
1- Crawl Github for open source projects within your domain of expertise
2- Look for comments starting with #TODO
3- Use AI to interrogate the codebase and quickly figure out the ones that are easy for you
4- (optional) Use AI to implement those changes and write the PR
Since the TODOs were introduced in the code by maintainers themselves, you will most likely solve a real issue that they didn't have the knowledge or time to solve themselves.
With the semi-automated crawling, you might just find a ton of low-hanging fruit that predates the AI era.
I'm writing all this because I realised that this precise thing likely happened on my own open-source repo, and that I was actually pretty happy about it, see the PR [here](https://github.com/cyclotruc/gitingest/pull/116/files)
This is quite meta (and convenient segway for me) because this project is [gitingest.com](http://gitingest.com), a tool that makes step 3 much more efficient | 2025-01-09T12:22:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hxbmb6/how_to_semiautomatically_make_many_legitimate_and/ | MrCyclopede | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxbmb6 | false | null | t3_1hxbmb6 | /r/LocalLLaMA/comments/1hxbmb6/how_to_semiautomatically_make_many_legitimate_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Fx5C1x3Wye96dC5TFSpODsu4zC64CpHNRNvC0osdur0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5mY47XcFGnKwR7GovWKCBl8993EqIW-L5Vz1PmXsVGE.jpg?width=108&crop=smart&auto=webp&s=c0ac32e7134141aa59dcc58a7091f8447ca84ac6', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/5mY47XcFGnKwR7GovWKCBl8993EqIW-L5Vz1PmXsVGE.jpg?width=216&crop=smart&auto=webp&s=40788963af87885a3df6a815db31001bf1b22be7', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/5mY47XcFGnKwR7GovWKCBl8993EqIW-L5Vz1PmXsVGE.jpg?width=320&crop=smart&auto=webp&s=7943da0e43bf502bc23b1e4faa68aa74ee9eeaa3', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/5mY47XcFGnKwR7GovWKCBl8993EqIW-L5Vz1PmXsVGE.jpg?auto=webp&s=5e8935eae5132d44e6b27f69a73ce2b41a4425ce', 'width': 400}, 'variants': {}}]} |
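A minimal sketch of what steps 1 and 2 above could look like, assuming a GitHub token in the `GITHUB_TOKEN` environment variable; the exact code-search query syntax and rate limits are assumptions that would need checking against GitHub's docs:

```python
import os
import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]  # code search requires an authenticated request

def find_todo_hits(language="python", per_page=30):
    """Return (repo, file path) pairs whose files contain a '# TODO' comment."""
    resp = requests.get(
        "https://api.github.com/search/code",
        headers={
            "Authorization": f"Bearer {GITHUB_TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        params={"q": f'"# TODO" language:{language}', "per_page": per_page},
        timeout=30,
    )
    resp.raise_for_status()
    return [
        (item["repository"]["full_name"], item["path"])
        for item in resp.json().get("items", [])
    ]

if __name__ == "__main__":
    for repo, path in find_todo_hits():
        print(f"{repo}: {path}")
```

From there, step 3 is just feeding each hit (or a full-repo dump) to an LLM and asking whether the TODO is something you can realistically close.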
Braina now includes LocalLibrary Local RAG feature and Deep Web Search for Local LLMs! | 4 | Hi everyone! I wanted to share some useful features that **Braina AI** has recently rolled out. 1) **LocalLibrary RAG** and 2) **Deep Web Search**
This seems like a solid alternative for those looking for a free, privacy-focused local RAG solution that works with local models, as well as a local Perplexity alternative that does web search using its local website database and privacy-friendly search engines. Would love to hear your experiences if you've tried it! It uses Ollama as the backend.
[LocalLibrary Features - Screenshot from Braina's website](https://preview.redd.it/qsjltvwliybe1.png?width=1752&format=png&auto=webp&s=161c8aa4c78c534755f9c1f3ed1ba12a548f6f9f)
[LocalLibrary example from Braina's website](https://preview.redd.it/bg7zgf1piybe1.png?width=2560&format=png&auto=webp&s=2224db318c38a85c39062328eb6b21f5436ddb4c)
Source: [LocalLibrary: Your Personal Local RAG System for Documents - Chat with Files (brainasoft.com)](https://www.brainasoft.com/braina/local-library-rag-chat-with-documents.html)
Note: It only works on Windows OS (as of now).
Tip: When using deep web search, make sure you have increased the model's context window to at least 8192 tokens via the LLM options panel in Braina, otherwise deep web search won't work properly. You can also use LocalLibrary RAG and deep web search at the same time, and it works great for my use cases with the Falcon3:10b and Qwen 2.5:14b-instruct models. I use the following system prompt when doing web search:
Use provided web results to give helpful answers along with reference links. Current date and time is {timestamp} {timezone}
https://preview.redd.it/r2g25fnejybe1.png?width=1498&format=png&auto=webp&s=d673513c93391f1d301af3dc9100a197ba206217
https://preview.redd.it/x6vbnr1cpybe1.png?width=1249&format=png&auto=webp&s=b4a3805ed13076b9a92b47af44ba63929347be27
| 2025-01-09T12:32:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hxbs1o/braina_now_includes_locallibrary_local_rag/ | Few-Business-8777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxbs1o | false | null | t3_1hxbs1o | /r/LocalLLaMA/comments/1hxbs1o/braina_now_includes_locallibrary_local_rag/ | false | false | 4 | null |
|
Stuck on Intent Classification for a Finance Chatbot - Urgent Help Needed! | 0 | Hey everyone,
I’ve been working on a finance chatbot that handles dynamic data and queries, but I’ve hit a wall when it comes to intent classification. The bot works fine overall, but the main challenge is mapping user queries to the right categories. If the mapping is wrong, the whole thing falls apart.
Here’s what I’ve got so far:
# Current Setup
I have predefined API fields like:
`"shareholdings"`, `"valuation"`, `"advisory"`, `"results"`, `"technical_summary"`, `"profitloss"`, `"company_info"`, etc.
[Keyword Dictionary](https://pastebin.com/NnpdcrhQ)
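For what it's worth, a keyword-scoring baseline along these lines is often enough to route the easy queries and leave only the ambiguous ones to an LLM; the keyword lists below are made-up placeholders, the real ones live in the linked dictionary:

```python
# Placeholder keyword lists; swap in the real dictionary from the pastebin link.
INTENT_KEYWORDS = {
    "shareholdings": ["shareholding", "promoter", "stake", "holding pattern"],
    "valuation": ["valuation", "p/e", "price to earnings", "market cap"],
    "results": ["results", "quarterly", "earnings", "revenue"],
    "technical_summary": ["rsi", "moving average", "support", "resistance"],
    "profitloss": ["profit", "loss", "margin", "p&l"],
    "company_info": ["about", "sector", "industry", "ceo"],
}

def classify(query: str, min_hits: int = 1):
    """Score each intent by keyword hits; return the best intent or None (fall back to an LLM)."""
    q = query.lower()
    scores = {intent: sum(kw in q for kw in kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= min_hits else None

print(classify("What is the promoter shareholding of TCS?"))  # -> "shareholdings"
```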
| 2025-01-09T12:38:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hxbvjf/stuck_on_intent_classification_for_a_finance/ | Maleficent_Repair359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxbvjf | false | null | t3_1hxbvjf | /r/LocalLLaMA/comments/1hxbvjf/stuck_on_intent_classification_for_a_finance/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]} |
Is there a table of which LLMs use your data VS. which respect privacy? | 1 | [removed] | 2025-01-09T13:15:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hxcja9/is_there_a_table_of_which_llms_use_your_data_vs/ | migueladv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxcja9 | false | null | t3_1hxcja9 | /r/LocalLLaMA/comments/1hxcja9/is_there_a_table_of_which_llms_use_your_data_vs/ | false | false | self | 1 | null |
Llama 3.3 70B Q40 is predicting 6.51 tok/s on 4 x Mac Mini M4 Pro (CPU only) | 0 | 2025-01-09T13:31:06 | https://github.com/b4rtaz/distributed-llama/discussions/147 | b4rtaz | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hxcu70 | false | null | t3_1hxcu70 | /r/LocalLLaMA/comments/1hxcu70/llama_33_70b_q40_is_predicting_651_toks_on_4_x/ | false | false | default | 0 | null |
|
What is the way to go AI-powered document search? (10.000 - 20.000 pdf's) | 1 | [removed] | 2025-01-09T13:39:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hxczwt/what_is_the_way_to_go_aipowered_document_search/ | cocodirasta3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxczwt | false | null | t3_1hxczwt | /r/LocalLLaMA/comments/1hxczwt/what_is_the_way_to_go_aipowered_document_search/ | false | false | self | 1 | null |
Do I use an Agentic Framework for this? And which one? (LangGraph/AutoGen/CrewAI) | 1 | [removed] | 2025-01-09T13:48:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hxd5v0/do_i_use_an_agentic_framework_for_this_and_which/ | FortuneTurbulent7514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxd5v0 | false | null | t3_1hxd5v0 | /r/LocalLLaMA/comments/1hxd5v0/do_i_use_an_agentic_framework_for_this_and_which/ | false | false | self | 1 | null |
Why does Deepseek allow its models to identify as OpenAI models | 0 | My question applies to all companies that create models which often respond as models of competitor companies.
I understand that they get training data from other models such as GPT-4o, but why don’t they replace the keyword “OpenAI” with the keyword “Deepseek”?
That would hide the fact that they use data from other models (ethically or not)
The replace function in text data is not expensive at all, it has O(n) time complexity and O(1) space complexity. So, why they don’t do it? | 2025-01-09T14:09:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hxdl62/why_does_deepseek_allow_its_models_to_identify_as/ | ResearchCrafty1804 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxdl62 | false | null | t3_1hxdl62 | /r/LocalLLaMA/comments/1hxdl62/why_does_deepseek_allow_its_models_to_identify_as/ | false | false | self | 0 | null |
Open-source, Python-based data connectors? | 12 | I'm building some AI agents for which I'm looking for the following:
* Data connectors for common software products like Google Workspace (Docs, Sheets, Gmail, Calendar, Drive, Meet), Notion, Airtable, Slack, Jira, Zoom, Todoist, etc
* Supports both reading and writing
* Open-Source
* Python-based
I did some research on my own, and here is what I found:
* LlamaIndex/Langchain: they have a lot of readers but not writers. For example, I can [read data from Notion](https://docs.llamaindex.ai/en/stable/examples/data_connectors/NotionDemo/), but I can't have an agent write a new doc and save it inside Notion (unless I'm missing something)
* n8n [has all these integrations](https://n8n.io/integrations/), but their license is too restrictive, and it's not Python-based | 2025-01-09T14:20:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hxdsu2/opensource_pythonbased_data_connectors/ | Brilliant-Day2748 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxdsu2 | false | null | t3_1hxdsu2 | /r/LocalLLaMA/comments/1hxdsu2/opensource_pythonbased_data_connectors/ | false | false | self | 12 | null |
2x3090 vs 2x4090 vs 8000 Quadro | 2 | I'm getting 2x4090s for about 200 USD more each than 2x3090s
2x3090: 700usd each
2x 4090: 900~ USD
RTX 8000 Quadro 48Gb: 2000USD
I'm on a consumer-grade system with two PCIe Gen 4 x8 slots for the GPUs
Is it worth it going for the 4090s over 3090s?
How much hotter do they run in comparison with 3090s?
Watercooling isn't much of an option, but I plan on connecting custom ducts to each gpu to direct airflow, also slightly reduced tdp
The Quadro is a little over budget, and I'm not sure how it performs outside of LLM-related tasks, like daily use and gaming.
Usage is mostly inference, RAG and web agent pipelines and systems, possibly some SFT DPO RL.
Which would be the best deal to go for? | 2025-01-09T14:20:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hxdtdj/2x3090_vs_2x4090_vs_8000_quadro/ | dRraMaticc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxdtdj | false | null | t3_1hxdtdj | /r/LocalLLaMA/comments/1hxdtdj/2x3090_vs_2x4090_vs_8000_quadro/ | false | false | self | 2 | null |
LLMs for marketing? | 1 | [removed] | 2025-01-09T14:21:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hxdtxy/llms_for_marketing/ | mohamedk97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxdtxy | false | null | t3_1hxdtxy | /r/LocalLLaMA/comments/1hxdtxy/llms_for_marketing/ | false | false | self | 1 | null |
is QwQ the best local model for CoT/reasoning? | 26 | As per title. I can go up to 100b parameters.
I'm running a script that requires an LLM to classify text for a charity project. The concept being classified is complex and subjective enough that multiple tests have to pass before something can be labelled as positive (this is why CoT works well). QwQ seems to do better than 72B Qwen 2.5, but given I have the hardware I wonder if there is a larger/better alternative.
I know I can implement my own kind of CoT, but if there's one fine-tuned already I thought I may as well look at that!
thankyou | 2025-01-09T14:32:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hxe2cy/is_qwq_the_best_local_model_for_cotreasoning/ | Moreh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxe2cy | false | null | t3_1hxe2cy | /r/LocalLLaMA/comments/1hxe2cy/is_qwq_the_best_local_model_for_cotreasoning/ | false | false | self | 26 | null |
Question about rtx 50 series vs 40 series | 1 | [removed] | 2025-01-09T14:47:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hxecwl/question_about_rtx_50_series_vs_40_series/ | Smiltute | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxecwl | false | null | t3_1hxecwl | /r/LocalLLaMA/comments/1hxecwl/question_about_rtx_50_series_vs_40_series/ | false | false | self | 1 | null |
DeepSeek v3 CPU only (41 tps input, 12 tps output) | 1 | [removed] | 2025-01-09T15:05:40 | https://x.com/nisten/status/1877276141905387565?s=46&t=IVb1jfEDzxryef3B6WTQpQ | SeanP_AI | x.com | 1970-01-01T00:00:00 | 0 | {} | 1hxerlp | false | null | t3_1hxerlp | /r/LocalLLaMA/comments/1hxerlp/deepseek_v3_cpu_only_41_tps_input_12_tps_output/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ljMdxLddGFmuS-ufhUd-pArDBSI7hlUQ131Sez8CMvc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1UJOE49kilOQ_AcSEeoppAe-rJ1T2lfWwMX8_PkEwRA.jpg?width=108&crop=smart&auto=webp&s=fc776137d2a5e586c3e198549c72fd991a167a1b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/1UJOE49kilOQ_AcSEeoppAe-rJ1T2lfWwMX8_PkEwRA.jpg?width=216&crop=smart&auto=webp&s=9d47d51f3cf8fbc58e12a536324518225c148bfa', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/1UJOE49kilOQ_AcSEeoppAe-rJ1T2lfWwMX8_PkEwRA.jpg?width=320&crop=smart&auto=webp&s=3482fb8179ecad7e0df5f4cf1fd5ebe1bae6e282', 'width': 320}, {'height': 337, 'url': 'https://external-preview.redd.it/1UJOE49kilOQ_AcSEeoppAe-rJ1T2lfWwMX8_PkEwRA.jpg?width=640&crop=smart&auto=webp&s=2dc7ce3c3ab4c024863edba451e96dfeaad597e4', 'width': 640}, {'height': 505, 'url': 'https://external-preview.redd.it/1UJOE49kilOQ_AcSEeoppAe-rJ1T2lfWwMX8_PkEwRA.jpg?width=960&crop=smart&auto=webp&s=b0e54805c52bb3b53c727c25ad1e4730f9501d61', 'width': 960}, {'height': 569, 'url': 'https://external-preview.redd.it/1UJOE49kilOQ_AcSEeoppAe-rJ1T2lfWwMX8_PkEwRA.jpg?width=1080&crop=smart&auto=webp&s=e4f3eebaa902b02b9d8192751408fabcd9573744', 'width': 1080}], 'source': {'height': 1004, 'url': 'https://external-preview.redd.it/1UJOE49kilOQ_AcSEeoppAe-rJ1T2lfWwMX8_PkEwRA.jpg?auto=webp&s=a46c840a302615a3ef9c4ef158306939909d690a', 'width': 1905}, 'variants': {}}]} |
|
Current way to do voice? Custom voices available? | 1 | Run multiple models locally and text has always been fine. What’s the best current solution for dialogue and are there open source voices available to choose from? | 2025-01-09T15:08:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hxetsg/current_way_to_do_voice_custom_voices_available/ | everardproudfoot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxetsg | false | null | t3_1hxetsg | /r/LocalLLaMA/comments/1hxetsg/current_way_to_do_voice_custom_voices_available/ | false | false | self | 1 | null |
How fast are easy-to-setup VLMs? | 1 | [removed] | 2025-01-09T15:13:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hxey0j/how_fast_are_easytosetup_vlms/ | InternationalMany6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxey0j | false | null | t3_1hxey0j | /r/LocalLLaMA/comments/1hxey0j/how_fast_are_easytosetup_vlms/ | false | false | self | 1 | null |
Seeking LM Studio Models for Accurate Local Data Analysis | 1 | [removed] | 2025-01-09T15:24:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hxf64c/seeking_lm_studio_models_for_accurate_local_data/ | No-Construction-5105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxf64c | false | null | t3_1hxf64c | /r/LocalLLaMA/comments/1hxf64c/seeking_lm_studio_models_for_accurate_local_data/ | false | false | self | 1 | null |
Open source library of prompts | 2 | Is there such a thing? Thank you in advance | 2025-01-09T15:39:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hxfhz8/open_source_library_of_prompts/ | freecodeio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxfhz8 | false | null | t3_1hxfhz8 | /r/LocalLLaMA/comments/1hxfhz8/open_source_library_of_prompts/ | false | false | self | 2 | null |
Best Tools for AI-Powered Document Search? | 3 | Hi all,
A company that sells electric garden tools asked me to look into how they can create an AI-powered search system for their product docs.
They’ve got about 5,000 products, each with manuals, guides, and technical info, adding up to more than 20,000 PDFs.
The goal is to make it easy for employees (and maybe customers) to search through it all.
They’re looking for something that:
Can be hosted locally if necessary
handles complex questions well
Easy to use
Ive looked into tools like Glean, Vertex or even tought of building it myself but I’d really like some advice from people who’ve done this kind of thing before. Any tips on what models or tools to check out? Thanks in advance! | 2025-01-09T15:50:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hxfrtv/best_tools_for_aipowered_document_search/ | cocodirasta3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxfrtv | false | null | t3_1hxfrtv | /r/LocalLLaMA/comments/1hxfrtv/best_tools_for_aipowered_document_search/ | false | false | self | 3 | null |
Introducing LongTalk-CoT v0.1: A Very Long Chain-of-Thought Dataset | 40 | I’m excited to release [LongTalk-CoT v0.1](https://huggingface.co/datasets/kenhktsui/longtalk-cot-v0.1), a dataset designed for training o1-like reasoning models. Each response is generated with QwQ-32B-Preview and a specifically handcrafted system message that encourages **more vocalised thinking** and **self reflection**. As such, its output token length is 5.29x longer than HuggingFaceTB/smoltalk.
It is able to solve "Is 9.11 greater than 9.9" and "How many letters R in the word strawberry?"!
It is overthinking sometimes as expected, but I love its behavior of trying multiple approaches to solve a question.
# Quick Test 1: Is 9.11 greater than 9.9?
# After finetuning
No, 9.11 is not greater than 9.9. Let's think about this step by step.
First, it helps to understand that both numbers are decimal numbers, specifically they have two digits after the decimal point. So, 9.11 and 9.9 can be compared directly based on their values in each place value: whole number part and the tenths and hundredths parts.
Let's break them down:
* For 9.11:
* The whole number part is 9.
* The tenths place is 1 (which means one tenth).
* The hundredths place is also 1 (one hundredth).
* For 9.9:
* The whole number part is also 9.
* But here, the tenths place is 9 (nine tenths), and there's no hundredths digit after that; it's considered to have zero in the hundredths place because decimal numbers with fewer digits are filled with zeros.
So, comparing them:
* Both have the same whole number part: 9.
* For the tenth place:
* 1 vs. 9.
Since 9 is greater than 1, this already tells us that 9.9 is larger than 9.11. But to be thorough, let's look at the hundredths places as well:
* In 9.11: hundredths = 1
* In 9.9: hundredths = 0
Even though there are no digits in the hundredths place for 9.9, it's still considered zero because of the way decimals work.
Therefore, overall, 9.11 < 9.9.
Wait a minute, but I should make sure that I'm not missing anything here. Sometimes, people might think about adding or subtracting numbers differently, but in this case, since both numbers have the same whole number part and one has a larger tenths digit, it's clear which is larger.
Another way to look at it is to consider them as percentages:
* 9.11 = 911%
* 9.9 = 990%
But that might be confusing; in percentage terms, 990% is still greater than 911%.
Alternatively, if we multiply both numbers by 100 to eliminate the decimal point:
* 9.11 \* 100 = 911
* 9.9 \* 100 = 990
Again, 990 is greater than 911.
So, no matter how we look at it, 9.9 is larger than 9.11.
I think that covers the comparison thoroughly. There doesn't seem to be any trick or special case here; it's a straightforward decimal comparison once you understand place values.
**Final Answer**
\[ \\boxed{9.11 < 9.9} \]
# Before finetuning
No, 9.11 is not greater than 9.9. When comparing decimal numbers, you can compare them digit by digit from left to right. In this case:
* The whole number part of both numbers is the same (9).
* Comparing the tenths place, we see that 1 in 9.11 is larger than 9 in 9.9.
* Therefore, 9.11 > 9.9.
So, 9.11 is greater than 9.9. | 2025-01-09T16:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hxg435/introducing_longtalkcot_v01_a_very_long/ | Financial_Counter199 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxg435 | false | null | t3_1hxg435 | /r/LocalLLaMA/comments/1hxg435/introducing_longtalkcot_v01_a_very_long/ | false | false | self | 40 | {'enabled': False, 'images': [{'id': '8Pl-tuF8qq0FGhF87hP-gp6cLVSmONxUgbO6t3Sq8gE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=108&crop=smart&auto=webp&s=b1f2b9313c129fad72056229a1efc349ce65dad6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=216&crop=smart&auto=webp&s=08a7bf256e634d678110fcce751a0b2cab6f7650', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=320&crop=smart&auto=webp&s=5ab7eff83693193060796fc61a06fad060713db8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=640&crop=smart&auto=webp&s=53501c885f23edcc9b7570e44220eceffae513f1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=960&crop=smart&auto=webp&s=07be6237a8d51f573024ced54f4e73dab71687d5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=1080&crop=smart&auto=webp&s=ef880a29e5883c11b4fafd504d5b8e75cd910735', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?auto=webp&s=a7959bd3de4a444d39e475d30532d2744e67cbca', 'width': 1200}, 'variants': {}}]} |
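If you want to poke at the examples yourself, the dataset should load with the standard `datasets` API (a quick sketch; the split name is assumed to be `train`):

```python
from datasets import load_dataset

ds = load_dataset("kenhktsui/longtalk-cot-v0.1", split="train")
print(ds)      # number of rows and column names
print(ds[0])   # inspect one long chain-of-thought example
```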
SwitchLoRA: Parameter-efficient pre-training can achieves results comparable to full training | 1 | 2025-01-09T16:09:44 | https://github.com/asfsdsf/SwitchLoRA | Embarrassed_List_717 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hxg7gz | false | null | t3_1hxg7gz | /r/LocalLLaMA/comments/1hxg7gz/switchlora_parameterefficient_pretraining_can/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'e1rS9P_NbI5565GqTYHU3_23le9JLzbz7vOlvNERYlk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tLI9UM4MmaNEIUs3dfWyQOblGCIOTFJv_Znz_OHaSMg.jpg?width=108&crop=smart&auto=webp&s=accef04b92934884ba273c0b7a0fd419a5dd4b6f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tLI9UM4MmaNEIUs3dfWyQOblGCIOTFJv_Znz_OHaSMg.jpg?width=216&crop=smart&auto=webp&s=464a2093867e48bc95290943b29224d230b158c8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tLI9UM4MmaNEIUs3dfWyQOblGCIOTFJv_Znz_OHaSMg.jpg?width=320&crop=smart&auto=webp&s=b1bdb0a328f0145155bd59a55f9797e347c35575', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tLI9UM4MmaNEIUs3dfWyQOblGCIOTFJv_Znz_OHaSMg.jpg?width=640&crop=smart&auto=webp&s=3b5289e74d4d5ff04e7ee7619d75ab7dfab0ffcb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tLI9UM4MmaNEIUs3dfWyQOblGCIOTFJv_Znz_OHaSMg.jpg?width=960&crop=smart&auto=webp&s=09ab3eca539e6fdfc08425a91df68bfad1b0bbb0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tLI9UM4MmaNEIUs3dfWyQOblGCIOTFJv_Znz_OHaSMg.jpg?width=1080&crop=smart&auto=webp&s=4568d67ce34d751434767e5a264c0a712599df11', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tLI9UM4MmaNEIUs3dfWyQOblGCIOTFJv_Znz_OHaSMg.jpg?auto=webp&s=1f4697fdcd8f29fc0e36d0b1862b21ed3268ad97', 'width': 1200}, 'variants': {}}]} |
||
Realtime STT to TTS | 1 | [removed] | 2025-01-09T16:18:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hxgeuu/realtime_stt_to_tts/ | Odd-Indication4929 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxgeuu | false | null | t3_1hxgeuu | /r/LocalLLaMA/comments/1hxgeuu/realtime_stt_to_tts/ | false | false | self | 1 | null |
Help regarding fine-tuning an LLM | 1 | [removed] | 2025-01-09T16:18:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hxgf5k/help_regarding_finetuning_an_llm/ | Acceptable_Young_167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxgf5k | false | null | t3_1hxgf5k | /r/LocalLLaMA/comments/1hxgf5k/help_regarding_finetuning_an_llm/ | false | false | self | 1 | null |
We've just released LLM Pools, end-to-end deployment of Large Language Models that can be installed anywhere | 27 | LLM Pools are all inclusive environments that can be installed on everyday hardware to simplify LLM deployment. Compatible with a multitude of model engines, out-of-the-box single and multi-node friendly, with a single API endpoint + UI playground.
You can install your own for free, but the easiest way to get started is joining our public LLM pool (also free, and you get to share each other's models): [https://kalavai-net.github.io/kalavai-client/public\_llm\_pool/](https://kalavai-net.github.io/kalavai-client/public_llm_pool/)
Open source: [https://github.com/kalavai-net/kalavai-client](https://github.com/kalavai-net/kalavai-client) | 2025-01-09T16:19:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hxgfu5/weve_just_released_llm_pools_endtoend_deployment/ | Good-Coconut3907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxgfu5 | false | null | t3_1hxgfu5 | /r/LocalLLaMA/comments/1hxgfu5/weve_just_released_llm_pools_endtoend_deployment/ | false | false | self | 27 | null |
Need Help With GPT4ALL Chat Template (I'm Brand New To This) | 1 | [removed] | 2025-01-09T16:34:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hxgsw3/need_help_with_gpt4all_chat_template_im_brand_new/ | OfficialRedhed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxgsw3 | false | null | t3_1hxgsw3 | /r/LocalLLaMA/comments/1hxgsw3/need_help_with_gpt4all_chat_template_im_brand_new/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'j6v0oKhwX0a4fQVE6mjHUduUC9nwg16u8iVnLmQ65DY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/X4Y2hIhFoTRdnOJA-nr1jHEzFwqCSPHuoxQi8oF9tzk.jpg?width=108&crop=smart&auto=webp&s=5feb7481a95e0d2acc51ee9ff843785512f2b2b9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/X4Y2hIhFoTRdnOJA-nr1jHEzFwqCSPHuoxQi8oF9tzk.jpg?width=216&crop=smart&auto=webp&s=cc628c690282d1a7a2ce7f96956ec5cf39796abf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/X4Y2hIhFoTRdnOJA-nr1jHEzFwqCSPHuoxQi8oF9tzk.jpg?width=320&crop=smart&auto=webp&s=181cfcedf4cbf8424dce3752bb6cbeea5561f1d1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/X4Y2hIhFoTRdnOJA-nr1jHEzFwqCSPHuoxQi8oF9tzk.jpg?width=640&crop=smart&auto=webp&s=6f4962c1f2c43c2da57bb6f895738529697f813a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/X4Y2hIhFoTRdnOJA-nr1jHEzFwqCSPHuoxQi8oF9tzk.jpg?width=960&crop=smart&auto=webp&s=2413b0f24608df9ff1311a59a3f9caf1b7426694', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/X4Y2hIhFoTRdnOJA-nr1jHEzFwqCSPHuoxQi8oF9tzk.jpg?width=1080&crop=smart&auto=webp&s=4937f6706c01219f1d389f9b3224a3b44b37eef0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/X4Y2hIhFoTRdnOJA-nr1jHEzFwqCSPHuoxQi8oF9tzk.jpg?auto=webp&s=7c93960d6401c0d10c91e516f9d26af4943751c1', 'width': 1200}, 'variants': {}}]} |
Open-source AI web agent powered by Llama-3.3 | 18 | I previously shared a [demo](https://sh.reddit.com/r/LocalLLaMA/comments/1hlzja2/asking_an_ai_agent_powered_by_llama33_find_me_2/) of my web agent that the community loved, so wanted to share another update:
The agent is now fully [open source](https://github.com/addy999/onequery)!
You can run it with Ollama, Anthropic or even DeepSeek. All work well, but I haven't done a deep comparison yet.
Comments and contributions are welcome - the project is still under development. | 2025-01-09T16:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hxh3ln/opensource_ai_web_agent_powered_by_llama33/ | spacespacespapce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxh3ln | false | null | t3_1hxh3ln | /r/LocalLLaMA/comments/1hxh3ln/opensource_ai_web_agent_powered_by_llama33/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'pSLQrYX8B4YW0BMtMiqqU6WGkLUta2WBqoCnTmFbGYk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N40UZFsWcsbvoBH9YBYeMauOTIiuchxqX2WCihm1Vb4.jpg?width=108&crop=smart&auto=webp&s=b75c7ea59082333929f6e247ee202427f69633b1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N40UZFsWcsbvoBH9YBYeMauOTIiuchxqX2WCihm1Vb4.jpg?width=216&crop=smart&auto=webp&s=121af350812c3329a8e44fb9c7ae174384bfd12d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N40UZFsWcsbvoBH9YBYeMauOTIiuchxqX2WCihm1Vb4.jpg?width=320&crop=smart&auto=webp&s=e90070e9df6c4a23274bc09450dcc652cb9fffda', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N40UZFsWcsbvoBH9YBYeMauOTIiuchxqX2WCihm1Vb4.jpg?width=640&crop=smart&auto=webp&s=8ab9fcee430174077a898a9c2af7a230a5a9d173', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N40UZFsWcsbvoBH9YBYeMauOTIiuchxqX2WCihm1Vb4.jpg?width=960&crop=smart&auto=webp&s=fdd0784eeeff63d9d90cb83fe2f36fea55cb6259', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N40UZFsWcsbvoBH9YBYeMauOTIiuchxqX2WCihm1Vb4.jpg?width=1080&crop=smart&auto=webp&s=db6460740d7d0392f86fc477cc5ac7d64923bab8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N40UZFsWcsbvoBH9YBYeMauOTIiuchxqX2WCihm1Vb4.jpg?auto=webp&s=896bba74386e117677435cdd065d48b3442fa19c', 'width': 1200}, 'variants': {}}]} |
Only O1 Got this problem right | 3 | TimestampModelAppTokensCostSpeedProvider Jan 8, 08:32:13 AMMistral: Mistral NemoMyExampleApp286120,000000549$64,9tpsDeepInfraJan 8, 08:32:12 AMMistral: Mistral NemoMyExampleApp461330,00000371$108,6tpsMistralJan 8, 04:17:33 AMMistral: Mistral NemoMyExampleApp301110,000000571$71,9tpsDeepInfraJan 8, 04:17:33 AMMistral: Mistral NemoMyExampleApp584240,00000456$103,0tpsMistralJan 8, 04:17:17 AMMistral: Mistral NemoMyExampleApp291120,000000557$67,4tpsDeepInfraJan 8, 04:17:17 AMMistral: Mistral NemoMyExampleApp529140,000000982$95,9tpsDeepInfraJan 8, 04:17:02 AMMistral: Mistral NemoMyExampleApp295110,00000056$57,6tpsDeepInfraJan 8, 04:17:02 AMMistral: Mistral NemoMyExampleApp508130,000000941$69,9tpsDeepInfraJan 8, 04:16:37 AMMistral: Mistral NemoMyExampleApp272120,000000524$56,3tpsDeepInfraJan 8, 04:16:36 AMMistral: Mistral NemoMyExampleApp482190,00000092$49,6tpsDeepInfraJan 8, 04:14:08 AMMistral: Mistral NemoMyExampleApp267120,000000515$55,6tpsDeepInfraJan 8, 04:14:07 AMMistral: Mistral NemoMyExampleApp481140,000000898$75,3tpsDeepInfraJan 8, 12:42:19 AMMistral: Mistral NemoMyExampleApp27510,000000485$19,2tpsDeepInfraJan 8, 12:42:19 AMMistral: Mistral NemoMyExampleApp550220,00000105$92,4tpsDeepInfraJan 7, 10:54:23 PMMistral: Mistral NemoMyExampleApp318110,00000247$111,1tpsMistralJan 7, 10:54:22 PMMistral: Mistral NemoMyExampleApp403210,000000789$70,2tpsDeepInfraJan 7, 10:54:11 PMMistral: Mistral NemoMyExampleApp30910,000000545$11,2tpsDeepInfraJan 7, 10:54:10 PMMistral: Mistral NemoMyExampleApp476320,00000381$76,7tpsMistralJan 7, 10:53:58 PMMistral: Mistral NemoMyExampleApp295120,000000564$50,2tpsDeepInfraJan 7, 10:53:57 PMMistral: Mistral NemoMyExampleApp376180,00000073$75,3tpsDeepInfra
Given this unstructured data, create a structured table with
1 - Avg input tokens
2 - Avg output tokens
3 - Avg price per inference
The inferences are being done at pricing of $0.035 each 1m input tokens and $0.08
Create another table considering if we were paying $0.15 each 1m input and output tokens
To give you an example of how to interpret the data, the data point is this:
Jan 8, 08:32:13 AMMistral: Mistral NemoMyExampleApp286120,000000549$64,9tpsDeepInfra, It means 286 input tokens, 12 output tokens, 0,000000549 is the price in usd and 64,9tps is the velocity tokens were generated I want a table of a single row with the agg values
Yes, the prompting was poor because I was too lazy to write, still, after giving it a lot of thought, O1 came up with the right output values
# Scenario A
(**$0.035** per 1M input, **$0.08** per 1M output)
|Avg Input Tokens|Avg Output Tokens|Avg Price per Inference (USD)|
|:-|:-|:-|
|388|15.25|0.0000148|
# Scenario B
(**$0.15** per 1M input, **$0.15** per 1M output)
|Avg Input Tokens|Avg Output Tokens|Avg Price per Inference (USD)|
|:-|:-|:-|
|388|15.25|0.0000605|
Every single model out of Gemini 2.0 Flash Thinking, QwQ both 32 and 70b, Llama 3.3 and Deepseek V3 failed to parse the data; specifically, all of them thought there was an extra "0" in the output tokens, and even after telling them they're wrong, they still can't get it right. They all outputted something like
**Output:**
|Avg input tokens|Avg output tokens|Avg price per inference|
|:-|:-|:-|
|373.15|144.05|$0.00000116|
|Avg input tokens|Avg output tokens|Avg price per inference (at $0.15 per million tokens)|
|:-|:-|:-|
|373.15|144.05|$0.00007758| | 2025-01-09T16:47:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hxh496/only_o1_got_this_problem_right/ | felipejfc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxh496 | false | null | t3_1hxh496 | /r/LocalLLaMA/comments/1hxh496/only_o1_got_this_problem_right/ | false | false | self | 3 | null |
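For reference, O1's numbers check out if you redo the arithmetic from the averages it reported (a quick sanity check, nothing more):

```python
avg_in, avg_out = 388, 15.25
cost_a = avg_in * 0.035 / 1e6 + avg_out * 0.08 / 1e6   # $0.035 / $0.08 per 1M tokens
cost_b = (avg_in + avg_out) * 0.15 / 1e6               # $0.15 per 1M tokens, in and out
print(f"{cost_a:.7f}")  # 0.0000148
print(f"{cost_b:.7f}")  # 0.0000605
```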
Best Qwen2.5-Coder prompts/system prompts? | 2 | Has anyone had success in finding good general prompts or system prompts for qwen2.5-coder:32b. Ideally something broad enough where I don't need a new system prompt for every different kind of problem I can ask a model. Or a paragraph I can add at the start of a normal prompt for similar results? | 2025-01-09T17:06:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hxhkyy/best_qwen25coder_promptssystem_prompts/ | Brilliant-Sun2643 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxhkyy | false | null | t3_1hxhkyy | /r/LocalLLaMA/comments/1hxhkyy/best_qwen25coder_promptssystem_prompts/ | false | false | self | 2 | null |
Help with using Qwen2.5-32B-Instruct-AWQ on a single RTX 4090 | 4 | Hey guys,
I am running into an OOM error with the Qwen2.5-32B-Instruct-AWQ model on an RTX 4090 (24 GB VRAM). My input context length is ~6K tokens. The model itself takes around 19GB, but the prompt doesn't fit in the remaining 5. I've tried enabling flash attention 2 as well, but somehow that had no effect on the VRAM usage. Is there anything else I can try to reduce memory usage without having to chunk my inputs? I am using the Hugging Face transformers library.
This is how I declare my model:
`model_name = "Qwen/Qwen2.5-32B-Instruct-AWQ"`
`model_obj = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto", attn_implementation="flash_attention_2")`
`tokenizer_obj = AutoTokenizer.from_pretrained(model_name)`
On a side note, is there a way to check if flash attention is actually being used? I've checked that the Qwen2. 5 models support flash attention 2. | 2025-01-09T17:12:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hxhpw9/help_with_using_qwen2532binstructawq_on_a_single/ | qwerty_qwer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxhpw9 | false | null | t3_1hxhpw9 | /r/LocalLLaMA/comments/1hxhpw9/help_with_using_qwen2532binstructawq_on_a_single/ | false | false | self | 4 | null |
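On the flash-attention question, one way to check which backend transformers actually picked is to inspect the loaded model; attribute names can differ between versions, so treat this as a starting point rather than a definitive check:

```python
print(model_obj.config._attn_implementation)                # expect "flash_attention_2"
print(type(model_obj.model.layers[0].self_attn).__name__)   # e.g. a FlashAttention2 variant
```

If it does report flash_attention_2, the remaining VRAM pressure is likely the KV cache plus prefill activations rather than the attention kernel itself.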
Cheapest way to run 70b model at greater than 5 tps? Digits | 0 | Cheapest way to run a 70b model at greater than five tokens per second:
1. [Digits 8 tokens per second.](https://x.com/alexocheema/status/1876676954549620961)
2. AMD AI max at 2.5 tokens per second.
3. Apple m4 Max at 4 tokens per second.
4. 3090/4090/5090 approximately 1 tps because doesn't fit in vram and has to partially run on slow Cpu that is memory bandwidth constrained. | 2025-01-09T17:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hxhr88/cheapest_way_to_run_70b_model_at_greater_than_5/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxhr88 | false | null | t3_1hxhr88 | /r/LocalLLaMA/comments/1hxhr88/cheapest_way_to_run_70b_model_at_greater_than_5/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'szMUn4oA0LBpVjZuYRdLJvR4-rCu4_3VO8b7RiN9RYE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MBJSLr1BrGGAWiERNKV1EsVPqt-H8qI6KhVXjMFntpA.jpg?width=108&crop=smart&auto=webp&s=aa3d0f32059934bdc8facdc90251ece46466d220', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MBJSLr1BrGGAWiERNKV1EsVPqt-H8qI6KhVXjMFntpA.jpg?width=216&crop=smart&auto=webp&s=c7d7cff1346adb93d9ee0e0806b52148ec948b46', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/MBJSLr1BrGGAWiERNKV1EsVPqt-H8qI6KhVXjMFntpA.jpg?width=320&crop=smart&auto=webp&s=907fe8af91752d34d0e7a5a5bd128149064e0cc3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/MBJSLr1BrGGAWiERNKV1EsVPqt-H8qI6KhVXjMFntpA.jpg?width=640&crop=smart&auto=webp&s=4ffe16868d431044dc8975ae38ca0056c5252984', 'width': 640}], 'source': {'height': 680, 'url': 'https://external-preview.redd.it/MBJSLr1BrGGAWiERNKV1EsVPqt-H8qI6KhVXjMFntpA.jpg?auto=webp&s=c83dfe150e33e17d0c3afdfea334393c9c3ea1e0', 'width': 680}, 'variants': {}}]} |
2x 3090 or 5090 | 7 | Alright, I want some of your opinions before I buy.
I’m concerned about buying used 3090s, but I’d like some opinions if it should be okay. I obviously can test them beforehand, but I’m more concerned about the lack of warranty than anything.
I’m also considering just doing a 5090 instead of dual 3090s. I plan on doing gaming on the side, so it’s worth considering since while the vram is less, I can probably get away with doing a less intensive model, although I would like to run 70B models. Additionally I could eventually buy a second 5090 down the road.
My total budget for the graphics cards (not the system) is 2,000-2,500, so I’m trying to figure out what’s best.
Input would be greatly appreciated. | 2025-01-09T17:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hxhuhf/2x_3090_or_5090/ | DersWasTaken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxhuhf | false | null | t3_1hxhuhf | /r/LocalLLaMA/comments/1hxhuhf/2x_3090_or_5090/ | false | false | self | 7 | null |
Can you connect Apple devices over a network to run inference for a locally run LLaMa model ? | 1 | [removed] | 2025-01-09T17:20:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hxhw4a/can_you_connect_apple_devices_over_a_network_to/ | CalendarVarious3992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxhw4a | false | null | t3_1hxhw4a | /r/LocalLLaMA/comments/1hxhw4a/can_you_connect_apple_devices_over_a_network_to/ | false | false | self | 1 | null |
Unwired decentralized inference | 1 | [removed] | 2025-01-09T17:37:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hxibbi/unwired_decentralized_inference/ | CalendarVarious3992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxibbi | false | null | t3_1hxibbi | /r/LocalLLaMA/comments/1hxibbi/unwired_decentralized_inference/ | false | false | self | 1 | null |
vikhyatk/moondream2 · Hugging Face (New 2B & 0.5B) | 77 | 2025-01-09T17:38:26 | https://huggingface.co/vikhyatk/moondream2 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hxibs2 | false | null | t3_1hxibs2 | /r/LocalLLaMA/comments/1hxibs2/vikhyatkmoondream2_hugging_face_new_2b_05b/ | false | false | 77 | {'enabled': False, 'images': [{'id': 'AiWEMDQVnpy-IlneoAHqTqfvkvDBzAbD97d6OLqixhM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mBbXqyUSUFt03PXE9oC_lqnJ1RBSkMJjxd4Z-R3eomo.jpg?width=108&crop=smart&auto=webp&s=8638aa530a028db78c3a3c2f646ad84ec8ceb667', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mBbXqyUSUFt03PXE9oC_lqnJ1RBSkMJjxd4Z-R3eomo.jpg?width=216&crop=smart&auto=webp&s=a6290098e441c42a679b3a8d7698a666634951cc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mBbXqyUSUFt03PXE9oC_lqnJ1RBSkMJjxd4Z-R3eomo.jpg?width=320&crop=smart&auto=webp&s=a9dba835c230dadd5de36d02ece103e52c16df40', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mBbXqyUSUFt03PXE9oC_lqnJ1RBSkMJjxd4Z-R3eomo.jpg?width=640&crop=smart&auto=webp&s=a26635c940a449443fafcc84f621627681d2cfdb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mBbXqyUSUFt03PXE9oC_lqnJ1RBSkMJjxd4Z-R3eomo.jpg?width=960&crop=smart&auto=webp&s=d0d2cc76a169f23bf82abc43e491d565b52b53b7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mBbXqyUSUFt03PXE9oC_lqnJ1RBSkMJjxd4Z-R3eomo.jpg?width=1080&crop=smart&auto=webp&s=f827e586eed1ce8aa7350f1895d71322685a589b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mBbXqyUSUFt03PXE9oC_lqnJ1RBSkMJjxd4Z-R3eomo.jpg?auto=webp&s=0625792d83d21db2888d38839c0eb2939cbd6702', 'width': 1200}, 'variants': {}}]} |
||
Wireless decentralized inference for local models in a network | 1 |
I’m wondering if I can set up a locally hosted LLM that’s locked to my local network and uses distributed computation from machines on that network, without the devices being wired together.
Is there any guidelines or pointers on how to do this ? | 2025-01-09T17:39:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hxici5/wireless_decentralized_inference_for_local_models/ | Competitive_Push_52 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxici5 | false | null | t3_1hxici5 | /r/LocalLLaMA/comments/1hxici5/wireless_decentralized_inference_for_local_models/ | false | false | self | 1 | null |
Quick Test of Phi-4! | 1 | 2025-01-09T17:44:22 | Financial_Counter199 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hxigrq | false | null | t3_1hxigrq | /r/LocalLLaMA/comments/1hxigrq/quick_test_of_phi4/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'thgj9a0f90ce1', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/thgj9a0f90ce1.png?width=108&crop=smart&auto=webp&s=3d72d0ba3e0abbb0a3cd53122357be73f56c2f93', 'width': 108}, {'height': 56, 'url': 'https://preview.redd.it/thgj9a0f90ce1.png?width=216&crop=smart&auto=webp&s=e08abec287a8a7edd0007f2f55fc7e2749b574b7', 'width': 216}, {'height': 83, 'url': 'https://preview.redd.it/thgj9a0f90ce1.png?width=320&crop=smart&auto=webp&s=bce750d9be6bf6a6d30d1f714220844f046072c0', 'width': 320}, {'height': 166, 'url': 'https://preview.redd.it/thgj9a0f90ce1.png?width=640&crop=smart&auto=webp&s=46d82dcc7cd7ce82298b95d79030eb3efa1c9fff', 'width': 640}, {'height': 249, 'url': 'https://preview.redd.it/thgj9a0f90ce1.png?width=960&crop=smart&auto=webp&s=ddf83562191d067d82bb7ffc21da2ad07eeeb5b1', 'width': 960}, {'height': 280, 'url': 'https://preview.redd.it/thgj9a0f90ce1.png?width=1080&crop=smart&auto=webp&s=5a5ff2a8da6224184a431473940784be1eafc2e6', 'width': 1080}], 'source': {'height': 552, 'url': 'https://preview.redd.it/thgj9a0f90ce1.png?auto=webp&s=114ed762c61f96f7c827e46a41904d9176a69e25', 'width': 2122}, 'variants': {}}]} |
||
Ask, and it shall be given: Turing completeness of prompting | 67 | 2025-01-09T17:59:36 | https://arxiv.org/abs/2411.01992 | CommunismDoesntWork | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1hxitud | false | null | t3_1hxitud | /r/LocalLLaMA/comments/1hxitud/ask_and_it_shall_be_given_turing_completeness_of/ | false | false | default | 67 | null |
|
Cloud AI for video games stinks, on-device is the way forward | 1 | [removed] | 2025-01-09T18:02:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hxiwnf/cloud_ai_for_video_games_stinks_ondevice_is_the/ | maiybe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxiwnf | false | null | t3_1hxiwnf | /r/LocalLLaMA/comments/1hxiwnf/cloud_ai_for_video_games_stinks_ondevice_is_the/ | false | false | self | 1 | null |
How can I make an AI writing assistant that captures individual writing styles? | 1 | [removed] | 2025-01-09T18:07:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hxj11y/how_can_i_make_an_ai_writing_assistant_that/ | ldemesla_eth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxj11y | false | null | t3_1hxj11y | /r/LocalLLaMA/comments/1hxj11y/how_can_i_make_an_ai_writing_assistant_that/ | false | false | self | 1 | null |
Me when joining this sub wanting to know about AI and seeing the comments: | 0 | 2025-01-09T18:14:19 | Pro-editor-1105 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hxj6hx | false | null | t3_1hxj6hx | /r/LocalLLaMA/comments/1hxj6hx/me_when_joining_this_sub_wanting_to_know_about_ai/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'f6gnJXLScEYCXIDibOJIs6I5E4dJILCF_nC7GqKu1IE', 'resolutions': [{'height': 13, 'url': 'https://preview.redd.it/uu8lnxpte0ce1.png?width=108&crop=smart&auto=webp&s=b80d92c80c0e5d259064dfdbf243b4ae7943fa15', 'width': 108}, {'height': 26, 'url': 'https://preview.redd.it/uu8lnxpte0ce1.png?width=216&crop=smart&auto=webp&s=0c2f194d612e889d0ab246d2dfbc1389fdaa4015', 'width': 216}, {'height': 39, 'url': 'https://preview.redd.it/uu8lnxpte0ce1.png?width=320&crop=smart&auto=webp&s=a8a032e97c52e81f53bf815092d267fd2e7f748a', 'width': 320}, {'height': 78, 'url': 'https://preview.redd.it/uu8lnxpte0ce1.png?width=640&crop=smart&auto=webp&s=1a9df1389e355354450a116edf491a688ca40c85', 'width': 640}, {'height': 117, 'url': 'https://preview.redd.it/uu8lnxpte0ce1.png?width=960&crop=smart&auto=webp&s=bfefb595e601f75e353c80b98e2e7dd64893d195', 'width': 960}, {'height': 132, 'url': 'https://preview.redd.it/uu8lnxpte0ce1.png?width=1080&crop=smart&auto=webp&s=86ca345696b0fff22571bd7ca6b283530138d8eb', 'width': 1080}], 'source': {'height': 166, 'url': 'https://preview.redd.it/uu8lnxpte0ce1.png?auto=webp&s=332351975e67c95cbac31d36ffbb2001205f8101', 'width': 1356}, 'variants': {}}]} |
|||
How can I make an AI writing assistant that truly captures individual writing styles?
| 1 | [removed] | 2025-01-09T18:16:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hxj8hy/how_can_i_make_an_ai_writing_assistant_that_truly/ | ldemesla_eth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxj8hy | false | null | t3_1hxj8hy | /r/LocalLLaMA/comments/1hxj8hy/how_can_i_make_an_ai_writing_assistant_that_truly/ | false | false | self | 1 | null |
New Moondream 2B vision language model release | 482 | 2025-01-09T18:48:07 | radiiquark | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hxjzol | false | null | t3_1hxjzol | /r/LocalLLaMA/comments/1hxjzol/new_moondream_2b_vision_language_model_release/ | false | false | 482 | {'enabled': True, 'images': [{'id': '38DMnTwkFmTUtQp6hgXCpFXff59RBaZIHphkK33iv8Y', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/oyxiuuxok0ce1.png?width=108&crop=smart&auto=webp&s=5c17cf39c77fc637cf40a537fc66d68a4a302455', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/oyxiuuxok0ce1.png?width=216&crop=smart&auto=webp&s=644fc1bd9dde4f833a72c7b9cb8b76bfe7bc3bd8', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/oyxiuuxok0ce1.png?width=320&crop=smart&auto=webp&s=64e0343278a862f9f449bf135854b1a8720d7aa2', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/oyxiuuxok0ce1.png?width=640&crop=smart&auto=webp&s=ad794e5897db0faf8abb4f80573d4a142cd70939', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/oyxiuuxok0ce1.png?width=960&crop=smart&auto=webp&s=4d8df6b2997653a384bb3234e450d654501c532b', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/oyxiuuxok0ce1.png?width=1080&crop=smart&auto=webp&s=12348a97ae2dbc90ff3bb2be130912f80b2579d7', 'width': 1080}], 'source': {'height': 2092, 'url': 'https://preview.redd.it/oyxiuuxok0ce1.png?auto=webp&s=b7b157d88aa612a6cea36fd3d9a6306a1807d5ab', 'width': 3720}, 'variants': {}}]} |
|||
test | 1 | [removed] | 2025-01-09T18:56:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hxk74h/test/ | bburtenshaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxk74h | false | null | t3_1hxk74h | /r/LocalLLaMA/comments/1hxk74h/test/ | false | false | self | 1 | null |
Best model for processing meeting transcripts | 1 | [removed] | 2025-01-09T19:04:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hxkdnw/best_model_for_processing_meeting_transcripts/ | automaticSteve | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxkdnw | false | null | t3_1hxkdnw | /r/LocalLLaMA/comments/1hxkdnw/best_model_for_processing_meeting_transcripts/ | false | false | self | 1 | null |
LLM as a Judge, What platforms to use? | 1 | [removed] | 2025-01-09T19:04:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hxke9l/llm_as_a_judge_what_platforms_to_use/ | Sam_Tech1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxke9l | false | null | t3_1hxke9l | /r/LocalLLaMA/comments/1hxke9l/llm_as_a_judge_what_platforms_to_use/ | false | false | self | 1 | null |
Why is nobody going crazy for this grounded VLM? | 1 | 2025-01-09T19:14:15 | Zealousideal-Cut590 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hxkm6o | false | null | t3_1hxkm6o | /r/LocalLLaMA/comments/1hxkm6o/why_is_nobody_going_crazy_for_this_grounded_vlm/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'RC9FIS8K83RlHW1YmBAbG1JAAtngLbhwLVL62JeAsz8', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/tuu2wfyip0ce1.jpeg?width=108&crop=smart&auto=webp&s=bed103e55bc0cceb96ed1f458d51ff9034a9f833', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/tuu2wfyip0ce1.jpeg?width=216&crop=smart&auto=webp&s=5bb8142dd01be76853c4ff5c27118c1d96ac815c', 'width': 216}, {'height': 227, 'url': 'https://preview.redd.it/tuu2wfyip0ce1.jpeg?width=320&crop=smart&auto=webp&s=49738fb3e2aad14f27d891a55351007fb1d8c077', 'width': 320}, {'height': 454, 'url': 'https://preview.redd.it/tuu2wfyip0ce1.jpeg?width=640&crop=smart&auto=webp&s=a540ec91d4e501d3089f4d8605c98587cd997fe3', 'width': 640}, {'height': 682, 'url': 'https://preview.redd.it/tuu2wfyip0ce1.jpeg?width=960&crop=smart&auto=webp&s=508e13eae3b902fc220705b59164901af3f0b3e4', 'width': 960}, {'height': 767, 'url': 'https://preview.redd.it/tuu2wfyip0ce1.jpeg?width=1080&crop=smart&auto=webp&s=ec35a14e7b0f9d7d030f23ac28b4964a460c5cf4', 'width': 1080}], 'source': {'height': 1925, 'url': 'https://preview.redd.it/tuu2wfyip0ce1.jpeg?auto=webp&s=b6cd950023a9f51a0fe792dd761ef868babda353', 'width': 2709}, 'variants': {}}]} |
|||
System Upgrade worth it? | 1 | [removed] | 2025-01-09T19:30:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hxkzxz/system_upgrade_worth_it/ | Cicatrix87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxkzxz | false | null | t3_1hxkzxz | /r/LocalLLaMA/comments/1hxkzxz/system_upgrade_worth_it/ | false | false | self | 1 | null |
Thoughts on using Ampere Altra for running LLMs locally? | 10 | The Ampere ARM Altra offers memory bandwidth from eight 72-bit DDR4-3200 channels. Motherboards and CPUs can be bought now through retail channels like Newegg, for example: https://www.newegg.com/asrock-rack-altrad8ud-1l2t-q64-22-ampere-altra-max-ampere-altra-processors/p/N82E16813140134
Was seeing some disappointment in the low memory bandwidth for the NVidia Digits and Strix Halo systems and wondered if perhaps a system like this based on Ampere Altra having 8 DDR4 channels might be another way to go here. | 2025-01-09T19:37:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hxl5ze/thoughts_on_using_ampere_altra_for_running_llms/ | cafedude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxl5ze | false | null | t3_1hxl5ze | /r/LocalLLaMA/comments/1hxl5ze/thoughts_on_using_ampere_altra_for_running_llms/ | false | false | self | 10 | null |
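A rough peak-bandwidth estimate for that configuration (72-bit channels carry 64 data bits, the rest is ECC; sustained real-world numbers will be lower):

```python
channels, data_bits, transfers_per_s = 8, 64, 3200e6
peak_gb_s = channels * data_bits / 8 * transfers_per_s / 1e9
print(peak_gb_s)  # 204.8 GB/s theoretical peak
```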
Agent Laboratory: Using LLM Agents as Research Assistants - Autonomous LLM-based Framework Capable of Completing the Entire Research Process | 17 | Paper: [https://arxiv.org/pdf/2501.04227](https://arxiv.org/pdf/2501.04227)
Github: [https://github.com/SamuelSchmidgall/AgentLaboratory?tab=readme-ov-file](https://github.com/SamuelSchmidgall/AgentLaboratory?tab=readme-ov-file)
Blog: [https://agentlaboratory.github.io/](https://agentlaboratory.github.io/)
Abstract:
>Historically, scientific discovery has been a lengthy and costly process, demanding substantial time and resources from initial conception to final results. To accelerate scientific discovery, reduce research costs, and improve research quality, we introduce Agent Laboratory, an autonomous LLM-based framework capable of completing the entire research process. This framework accepts a human-provided research idea and progresses through three stages--literature review, experimentation, and report writing to produce comprehensive research outputs, including a code repository and a research report, while enabling users to provide feedback and guidance at each stage. We deploy Agent Laboratory with various state-of-the-art LLMs and invite multiple researchers to assess its quality by participating in a survey, providing human feedback to guide the research process, and then evaluate the final paper. We found that: (**1) Agent Laboratory driven by o1-preview generates the best research outcomes; (2) The generated machine learning code is able to achieve state-of-the-art performance compared to existing methods; (3) Human involvement, providing feedback at each stage, significantly improves the overall quality of research; (4) Agent Laboratory significantly reduces research expenses, achieving an 84% decrease compared to previous autonomous research methods. We hope Agent Laboratory enables researchers to allocate more effort toward creative ideation rather than low-level coding and writing, ultimately accelerating scientific discovery.**
https://preview.redd.it/sgnmwjgiu0ce1.jpg?width=1591&format=pjpg&auto=webp&s=f0de015073ef3809ce1613cbeba4b73de167c71c | 2025-01-09T19:47:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hxledt/agent_laboratory_using_llm_agents_as_research/ | Singularian2501 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxledt | false | null | t3_1hxledt | /r/LocalLLaMA/comments/1hxledt/agent_laboratory_using_llm_agents_as_research/ | false | false | 17 | null |
|
It's not possible download any Model..Any tips? | 1 | [removed] | 2025-01-09T19:53:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hxlk49/its_not_possible_download_any_modelany_tips/ | Sharp-Government9024 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxlk49 | false | null | t3_1hxlk49 | /r/LocalLLaMA/comments/1hxlk49/its_not_possible_download_any_modelany_tips/ | false | false | 1 | null |
|
Quick Test of Phi-4 | 1 | 2025-01-09T20:03:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hxlsjm/quick_test_of_phi4/ | Financial_Counter199 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hxlsjm | false | null | t3_1hxlsjm | /r/LocalLLaMA/comments/1hxlsjm/quick_test_of_phi4/ | false | false | 1 | null |