Dataset columns: title (string, 1-300 chars); score (int64, 0-8.54k); selftext (string, 0-40k chars); created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable); url (string, 0-878 chars); author (string, 3-20 chars); domain (string, 0-82 chars); edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18); gilded (int64, 0-2); gildings (string, 7 classes); id (string, 7 chars); locked (bool, 2 classes); media (string, 646-1.8k chars, nullable); name (string, 10 chars); permalink (string, 33-82 chars); spoiler (bool, 2 classes); stickied (bool, 2 classes); thumbnail (string, 4-213 chars); ups (int64, 0-8.54k); preview (string, 301-5.01k chars, nullable)

title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Are there aspects of very/large parameter models that can be matched by smaller ones? | 0 | And how | 2024-12-24T09:20:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hl9z3q/are_there_aspects_of_verylarge_parameter_models/ | xmmr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hl9z3q | false | null | t3_1hl9z3q | /r/LocalLLaMA/comments/1hl9z3q/are_there_aspects_of_verylarge_parameter_models/ | false | false | self | 0 | null |
Public standalone model/software for common knowledge questions? | 1 | [removed] | 2024-12-24T09:20:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hl9z4x/public_standalone_modelsoftware_for_common/ | radozd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hl9z4x | false | null | t3_1hl9z4x | /r/LocalLLaMA/comments/1hl9z4x/public_standalone_modelsoftware_for_common/ | false | false | self | 1 | null |
OpenAI employee: "o3 is an LLM" | 191 | 2024-12-24T09:29:25 | Wiskkey | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hla3am | false | null | t3_1hla3am | /r/LocalLLaMA/comments/1hla3am/openai_employee_o3_is_an_llm/ | false | false | 191 | {'enabled': True, 'images': [{'id': '6R_yWe6FD4jhs2HVtGx-0D_890hes7pd0GSo-Bz6atA', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/7vby6piemr8e1.jpeg?width=108&crop=smart&auto=webp&s=c06cb84169e49723096ed8e7d793255bc00835c9', 'width': 108}], 'source': {'height': 142, 'url': 'https://preview.redd.it/7vby6piemr8e1.jpeg?auto=webp&s=c1bde2d0ea06c650fa51121a97d7d67a42a9417e', 'width': 204}, 'variants': {}}]} |
|||
i'm considering between this and Tinybox for coding and testing LLMs | 1 | [removed] | 2024-12-24T09:30:07 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hla3nr | false | null | t3_1hla3nr | /r/LocalLLaMA/comments/1hla3nr/im_considering_between_this_and_tinybox_for/ | false | false | default | 1 | null |
||
Llama 3 CPU usage vs Qwen 2.5 | 5 | When using Llama 1B CPU usage is maxed out whereas Qwen 2.5 doesn't use so much CPU. Does anyone else experience this and know the reason why? This is with 4KL GGUF. | 2024-12-24T09:55:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hlag39/llama_3_cpu_usage_vs_qwen_25/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlag39 | false | null | t3_1hlag39 | /r/LocalLLaMA/comments/1hlag39/llama_3_cpu_usage_vs_qwen_25/ | false | false | self | 5 | null |
Is there a way to artificially limit my GPU's memory bandwidth for testing purposes? | 6 | From what I'm reading online, LLMs are currently bandwidth-limited. I've heard it said that tokens/second scale pretty linearly with memory bandwidth, so I'd like to test this for myself just to satisfy my own curiosity. How can I artificially limit the memory bandwidth of my laptop's dGPU to test how tokens/second scales with bandwidth? | 2024-12-24T11:18:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hlbkgs/is_there_a_way_to_artificially_limit_my_gpus/ | TheSilverSmith47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlbkgs | false | null | t3_1hlbkgs | /r/LocalLLaMA/comments/1hlbkgs/is_there_a_way_to_artificially_limit_my_gpus/ | false | false | self | 6 | null |
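A quick way to sanity-check the claim in the post above, before touching any hardware, is the usual roofline estimate for memory-bound decoding: each generated token has to stream roughly the whole model through memory once, so tokens/second is bounded by bandwidth divided by model size. A minimal sketch (the numbers below are illustrative assumptions, not measurements):

```python
# Roofline estimate for memory-bound LLM decoding: every generated token
# streams (approximately) all model weights once, so decode speed is
# bounded by memory_bandwidth / model_size. KV-cache traffic and compute
# are ignored here, so treat the result as an upper bound.

def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

model_gb = 4.1  # e.g. a 7B model quantized to roughly 4.5 bits/weight (assumption)
for bw in (100, 200, 400, 800):  # hypothetical memory bandwidths in GB/s
    print(f"{bw:4d} GB/s -> ~{est_tokens_per_sec(bw, model_gb):5.1f} tok/s upper bound")
```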
Best tool or workflow to animate artworks? | 2 | I recently met an artist whose paintings are truly captivating. I think his website could really benefit from some dynamic elements, like short, animated versions of his artwork. What’s the go-to tool these days for creating simple 10–15 second animations from existing artwork? With min to med level of control? Thanks! | 2024-12-24T11:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hlbtll/best_tool_or_workflow_to_animate_artworks/ | Junkie-Junkinston | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlbtll | false | null | t3_1hlbtll | /r/LocalLLaMA/comments/1hlbtll/best_tool_or_workflow_to_animate_artworks/ | false | false | self | 2 | null |
LLaMA Introduction | 1 | [removed] | 2024-12-24T12:48:31 | https://www.reddit.com/r/LocalLLaMA/comments/1hlcv27/llama_introduction/ | skye-blue-852 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlcv27 | false | null | t3_1hlcv27 | /r/LocalLLaMA/comments/1hlcv27/llama_introduction/ | false | false | self | 1 | null |
*asking users who use Qwen QwQ (or other open-weight compute-scaling models)...* | 2 | hi everyone!
I'm experimenting with QwQ 32B and I'm wondering what system instruction you provide to the model...
I noticed that any attempt to influence its reasoning flow through prompting results in degraded performance, but I'm probably not prompting it the right way (both in wording and format).
Also, what sampler parameters are you using?
I had pretty decent results with temp 0.95, top_P 0.55, top_k 10, min_P 0.25 (even for non-creative tasks; it seems that a low temperature doesn't necessarily increase accuracy in that context),
but I think those settings are far from optimal... what do you guys use?
(I think I can expand the questions to any others open weight reasoning model) | 2024-12-24T12:51:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hlcwky/asking_to_users_who_use_qwen_qwq_or_others_open/ | Distinct-Target7503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlcwky | false | null | t3_1hlcwky | /r/LocalLLaMA/comments/1hlcwky/asking_to_users_who_use_qwen_qwq_or_others_open/ | false | false | self | 2 | null |
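For comparison, here is a minimal sketch of how sampler settings like the ones in the post above map onto a llama-cpp-python call. The GGUF file name and context size are placeholders, and the values are simply the ones quoted in the post, not a recommendation:

```python
from llama_cpp import Llama

# Placeholder path: any local QwQ-32B GGUF would do.
llm = Llama(model_path="QwQ-32B-Preview-Q4_K_M.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
    temperature=0.95,  # sampler values taken from the post above
    top_p=0.55,
    top_k=10,
    min_p=0.25,
    max_tokens=2048,
)
print(out["choices"][0]["message"]["content"])
```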
Any ai models that I can run locally for interior designs | 2 | Title. I am in the middle of renovation of my apartment and I was wondering if there is an AI that i can use for the same where I can upload my room pictures and get suggestions. | 2024-12-24T13:00:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hld262/any_ai_models_that_i_can_run_locally_for_interior/ | 13myths | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hld262 | false | null | t3_1hld262 | /r/LocalLLaMA/comments/1hld262/any_ai_models_that_i_can_run_locally_for_interior/ | false | false | self | 2 | null |
My challenge to you: Get any AI model (open or closed) to count the correct number of digits: | 132 | 2024-12-24T13:44:55 | Super-Muffin-1230 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hldtx0 | false | null | t3_1hldtx0 | /r/LocalLLaMA/comments/1hldtx0/my_challenge_to_you_get_any_ai_model_open_or/ | false | false | 132 | {'enabled': True, 'images': [{'id': 'gtqmJozgi8rXb6snYNrqJWqTlROtLFOnw3tK8hmPWdU', 'resolutions': [{'height': 146, 'url': 'https://preview.redd.it/9kpq1b07ws8e1.jpeg?width=108&crop=smart&auto=webp&s=2afa7ed821955ce8fe1af6cc0e19e33863ecda26', 'width': 108}, {'height': 292, 'url': 'https://preview.redd.it/9kpq1b07ws8e1.jpeg?width=216&crop=smart&auto=webp&s=1b504598f07bcd089196dc60bf0f83b0bee4b98e', 'width': 216}, {'height': 432, 'url': 'https://preview.redd.it/9kpq1b07ws8e1.jpeg?width=320&crop=smart&auto=webp&s=af23f625c38d66b4fda2892aba16f77556f8eaa2', 'width': 320}], 'source': {'height': 644, 'url': 'https://preview.redd.it/9kpq1b07ws8e1.jpeg?auto=webp&s=d808f6f9a622f983a30318622925582378d68b66', 'width': 476}, 'variants': {}}]} |
|||
What hardware do you currently use for your LocalLLaMa LLMs ? | 1 | [removed] | 2024-12-24T13:48:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hldwau/what_hardware_do_you_currently_use_for_your/ | kapetans | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hldwau | false | null | t3_1hldwau | /r/LocalLLaMA/comments/1hldwau/what_hardware_do_you_currently_use_for_your/ | false | false | self | 1 | null |
QwQ going god mode on scientific creativity - matching o1-preview on LiveIdeaBench 🤯 | 1 | [removed] | 2024-12-24T13:51:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hldye0/qwq_going_god_mode_on_scientific_creativity/ | realJoeTrump | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hldye0 | false | null | t3_1hldye0 | /r/LocalLLaMA/comments/1hldye0/qwq_going_god_mode_on_scientific_creativity/ | false | false | self | 1 | null |
QwQ matches o1-preview in scientific creativity | 1 | [removed] | 2024-12-24T13:57:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hle1sj/qwq_matches_o1preview_in_scientific_creativity/ | realJoeTrump | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hle1sj | false | null | t3_1hle1sj | /r/LocalLLaMA/comments/1hle1sj/qwq_matches_o1preview_in_scientific_creativity/ | false | false | 1 | null |
|
Use case help | 1 | [removed] | 2024-12-24T14:44:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hley4d/use_case_help/ | Confusedx2d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hley4d | false | null | t3_1hley4d | /r/LocalLLaMA/comments/1hley4d/use_case_help/ | false | false | self | 1 | null |
RA.Aid v0.10.0 - Web research, interactive chat, and more | 20 | Hey all,
Following up on: [https://www.reddit.com/r/LocalLLaMA/comments/1hczbla/aider\_langchain\_a\_match\_made\_in\_heaven/](https://www.reddit.com/r/LocalLLaMA/comments/1hczbla/aider_langchain_a_match_made_in_heaven/)
Just wanted to share an update on RA.Aid v0.10.0. If you haven't come across RA.Aid before, it's our community's open-source autonomous AI dev agent. It works by placing AI into a ReAct loop, much like windsurf, cursor, devin, or [aide.dev](http://aide.dev), but it's completely free and under the Apache License 2.0.
What's New?
* Web Research: RA.Aid can now pull information from the web, making it smarter and more relevant to your coding needs.
* Interactive Chat Mode: With the --chat flag, you can now guide RA.Aid directly, asking questions or redirecting tasks.
* Ctrl-C Interrupt: You can interrupt its process anytime to give feedback or change direction, or just exit.
Why RA.Aid?
* Community Built: This project thrives on our collective efforts. Let's make this our dev agent.
* Open Source: No paywalls here, just open collaboration for all.
* Versatile: From refactoring to feature implementation, RA.Aid is there for you.
Contribute or Check it Out:
* Explore RA.Aid: [https://github.com/ai-christianson/RA.Aid](https://github.com/ai-christianson/RA.Aid)
* Contribute: Whether it's code, ideas, or bug reports, your input shapes RA.Aid.
* Feedback: Got thoughts? Let's discuss them in the issues.
Let's keep building RA.Aid together into something truly useful for the developer community.
Happy coding! 💻✨🎉 | 2024-12-24T14:59:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hlf7tz/raaid_v0100_web_research_interactive_chat_and_more/ | ai-christianson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlf7tz | false | null | t3_1hlf7tz | /r/LocalLLaMA/comments/1hlf7tz/raaid_v0100_web_research_interactive_chat_and_more/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'pGolXb59sOfFv8fuzh9VXLSXbPU3P6WI03gKjY0yS3c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6ZCpQ9U0H0rohx6yni_du-OHhuu17EhMaTNqquf83IM.jpg?width=108&crop=smart&auto=webp&s=6d3c0afdad2aa5741a02ae449de7cd7591855149', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/6ZCpQ9U0H0rohx6yni_du-OHhuu17EhMaTNqquf83IM.jpg?width=216&crop=smart&auto=webp&s=c98acffaad3230bd34061e2c2d4f745391bf2c0f', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/6ZCpQ9U0H0rohx6yni_du-OHhuu17EhMaTNqquf83IM.jpg?width=320&crop=smart&auto=webp&s=01fb287ed7b0c9adbfb9878c73cbbf2e04361968', 'width': 320}], 'source': {'height': 216, 'url': 'https://external-preview.redd.it/6ZCpQ9U0H0rohx6yni_du-OHhuu17EhMaTNqquf83IM.jpg?auto=webp&s=371a4a7640702a1f9429261292feb0180a3569ac', 'width': 388}, 'variants': {}}]} |
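For readers unfamiliar with the term, "placing AI into a ReAct loop" generally means alternating model reasoning with tool calls until the model declares it is done. The sketch below is a generic illustration of that pattern, not RA.Aid's actual code; `call_llm` and the tools are stubs:

```python
# Generic ReAct-style loop (illustration only, NOT RA.Aid's implementation).

def call_llm(history):
    """Stub model call. A real implementation would send `history` to an LLM
    and parse its reply into a thought/action/input dict."""
    return {"thought": "demo", "action": "final_answer", "input": "done"}

TOOLS = {
    "run_shell": lambda cmd: f"(stub) ran: {cmd}",
    "read_file": lambda path: f"(stub) contents of {path}",
}

def react_loop(task, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = call_llm(history)                 # model reasons and picks an action
        if step["action"] == "final_answer":
            return step["input"]
        observation = TOOLS[step["action"]](step["input"])  # execute the chosen tool
        history.append({"role": "assistant", "content": str(step)})
        history.append({"role": "user", "content": f"Observation: {observation}"})
    return "step limit reached"

print(react_loop("Add a --verbose flag to the CLI"))
```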
An example of a misleading technique (applicable to O1 PRO) | 0 | Whenever you present a 'classic' puzzle but change key words in it slightly, models will often assume it is still the original puzzle.
Here is an example:
Five pirates divide 100 gold coins. Each pirate's priorities are: first their own survival, then getting the most gold coins, and then the death of their companions. From left to right, they propose plans in this order.
1 If the number of vetoes is greater than the number of votes in favor, then the person will be executed, and the next person will propose it.
2 If the number of vetoes is equal to the number of votes in favor, then it is invalid but will not be executed, and the next person will propose it.
3 If the number of votes in favor is greater than the number of vetoes, then the plan is successful.
4 If all the plans of everyone are invalid, then the gold coins will be divided equally among the survivors. | 2024-12-24T15:15:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hlfk0f/an_example_of_a_misleading_technique_applicable/ | flysnowbigbig | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlfk0f | false | null | t3_1hlfk0f | /r/LocalLLaMA/comments/1hlfk0f/an_example_of_a_misleading_technique_applicable/ | false | false | self | 0 | null |
Playing with LoRA Finetuning HyperParameters on a ChatBot Dataset | 1 | [removed] | 2024-12-24T15:19:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hlfmkm/playing_with_lora_finetuning_hyperparameters_on_a/ | Kind-Mathematician-8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlfmkm | false | null | t3_1hlfmkm | /r/LocalLLaMA/comments/1hlfmkm/playing_with_lora_finetuning_hyperparameters_on_a/ | false | false | 1 | null |
|
Best open source tool to write collaboratively with local/API models? | 1 | [removed] | 2024-12-24T15:25:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hlfqrp/best_open_source_tool_to_write_collaboratively/ | hrbcn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlfqrp | false | null | t3_1hlfqrp | /r/LocalLLaMA/comments/1hlfqrp/best_open_source_tool_to_write_collaboratively/ | false | false | self | 1 | null |
Playing with LoRA Finetuning HyperParameters on a ChatBot Dataset | 18 | In early April I decided to play around with different settings of batch size, gradient accumulation, group by length, and packing, when finetuning Mistral-OpenOrca-7B, just to see what would happen and figured I'd share and discuss my notes here.
This was for an undergraduate senior capstone project where we made a demo ChatBot for our school's website, finetuned on a synthetic dataset generated from the site's contents. Upon graduating I got a bit busy and never posted it here; I've since managed to get some free time and I'm brushing back up on my old work and the latest in LocalLLaMA again.
TXT of Results and Python Visualization Scripts: [https://drive.google.com/drive/folders/1FFAQukfylkb10fgzk9FIhEaufiux5wtX?usp=sharing](https://drive.google.com/drive/folders/1FFAQukfylkb10fgzk9FIhEaufiux5wtX?usp=sharing)
# Setup: 03/30/24-04/18/24
* NVIDIA GeForce RTX 3090 24.0 GB VRAM
* Ubuntu Linux (WSL)
* PyTorch 2.2.2+cu121
* CUDA = 8.6, Toolkit = 12.1
* UnSloth 2024.3
* Transformers 4.39.2
* Xformers = 0.0.25post1
# LLM Metadata:
**Model:** [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
**Dataset:** [Augmented-UWP-Instruct](https://huggingface.co/datasets/Aaron616/Augmented-UWP-Instruct)
* 50,990 rows
**Questions Length**:
* 70% 100-200
* 30% 200-300
**Answers Length**:
* 55% 0-150
* 35% 150-300
* 10% 300-450
**dataset.shuffle(seed=42)**
* Train: 80%
* Validate: 20%
# Static Hyperparameters:
* Bfloat16 = True
* FA = True
* max\_seq\_length = 4096
* load\_in\_4bit = True
* r = 32
* lora\_alpha = 64
* lora\_dropout = 0.1
* bias = "none"
* warmup\_ratio = 0.03
* learning\_rate = 2e-4
* optim = "adamw\_8bit"
* weight\_decay = 0.01
* lr\_scheduler\_type = "cosine"
* neftune\_noise\_alpha = 5
* report\_to = "tensorboard"
* EarlyStoppingCallback(early\_stopping\_patience=10, early\_stopping\_threshold=0.05)
# Dynamic Hyperparameters:
* per\_device\_train\_batch\_size = 1/2/4
* gradient\_accumulation\_steps = 1/2/4/8/16
* group\_by\_length = True/False
* packing = True/False
# Note:
* Any runs beyond 10-15 hours that looked to have stabilized, I manually cut off. I did include the estimated duration in the dataset but didn't feel like wasting the electricity nor my time.
[Plotly interactive graph of training and evaluation loss of different hyperparameter configurations over time, with exploded gradient runs.](https://preview.redd.it/ebr9tb0xdt8e1.png?width=2259&format=png&auto=webp&s=482065996123b48d83574d6bc9d873f80d5202b2)
[Plotly graph of total training time](https://preview.redd.it/gmfxgenxdt8e1.png?width=2259&format=png&auto=webp&s=ebd28240bd051dfea2295e11efda71899898b4f8)
[Plotly interactive graph of training and evaluation loss of different hyperparameter configurations over time, zoomed in to remove unstable runs.](https://preview.redd.it/4eekoqwxdt8e1.png?width=2259&format=png&auto=webp&s=c564398ccccbfc7c6657218fc775d220a2985483)
# My Conclusions:
* Packing makes training more stable and much much faster. This is to be expected since my dataset has many short sequences and isn't very uniform.
* More time didn't always improve training; as seen in the above graphs, there's no strong correlation between training time and evaluation loss.
* I expected low total batch sizes to make training much longer but also much better; instead they ended up not being stable and exploding, which then led them to converge much higher than other runs. It luckily turns out that a total batch size appropriately sized for the given dataset benefits both stability and performance.
* Training loss kept going downward into the 0.2 range, although evaluation loss usually stabilized around 0.3-0.35. This makes me wonder if we are in a local minima or is there something inherent in my methodology or dataset that limits the eval performance?
* Our total batch size is ideal at around 16 (4x4) but can handle 4 (2x2) and 8 (2x4) as well, which just lengthens the training time by 90 minutes for a 0.02 loss improvement (from 0.35 to 0.33-0.31 -> not worth it IMO). So 16 is ideal, with training getting evaluation loss down to 0.35 in just 3 hours.
* I could take a bigger look into group\_by\_length, but in what little I played around with it, it didn't cause any atypical behavior.
* All of this is just based on training and evaluation loss and no actual response testing. I've since deleted all the models to free up the hard drive space, but it would've been interesting to take a look at if I were to do this again over Winter break.
Anyone else play around with manual hyperparameter tuning and get some fun insights into your project? Any thoughts on my training versus evaluation loss plateaus?
Any other hyperparameters I should play around with and let run in the background while I'm not at home? | 2024-12-24T15:26:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hlfrz4/playing_with_lora_finetuning_hyperparameters_on_a/ | Byt3G33k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlfrz4 | false | null | t3_1hlfrz4 | /r/LocalLLaMA/comments/1hlfrz4/playing_with_lora_finetuning_hyperparameters_on_a/ | false | false | 18 | null |
|
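For anyone who wants to reproduce a run like the one described in the post above, here is a rough sketch of how the listed hyperparameters could translate into a plain Hugging Face PEFT + TRL setup. The post itself used Unsloth, so this is only an approximation: argument placement differs across TRL versions (newer releases move several of these into SFTConfig), the `dataset_text_field` is an assumption about the dataset's column name, and `optim="adamw_8bit"` may need to be spelled `adamw_bnb_8bit` outside Unsloth.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

model_name = "Open-Orca/Mistral-7B-OpenOrca"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)

# 80/20 split with the same seed as the post.
dataset = load_dataset("Aaron616/Augmented-UWP-Instruct", split="train")
split = dataset.shuffle(seed=42).train_test_split(test_size=0.2)

peft_config = LoraConfig(r=32, lora_alpha=64, lora_dropout=0.1,
                         bias="none", task_type="CAUSAL_LM")

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=4,      # one point of the "dynamic" sweep
    gradient_accumulation_steps=4,      # total batch size 16, the post's sweet spot
    group_by_length=False,
    learning_rate=2e-4,
    warmup_ratio=0.03,
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    optim="adamw_8bit",                 # may be "adamw_bnb_8bit" outside Unsloth
    bf16=True,
    neftune_noise_alpha=5,
    report_to="tensorboard",
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    peft_config=peft_config,
    max_seq_length=4096,
    packing=True,                       # packing was the big stability/speed win
    dataset_text_field="text",          # assumption; adjust to the dataset's schema
    tokenizer=tokenizer,
)
trainer.train()
```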
Open-webui ComfyUI Img2Img Workflow Support | 1 | 2024-12-24T15:53:28 | https://www.youtube.com/watch?v=ZnoNR1UtrAU | pwillia7 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1hlgb1q | false | {'oembed': {'author_name': 'Patrick Williams', 'author_url': 'https://www.youtube.com/@ptkwilliams', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ZnoNR1UtrAU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Open-webui ComfyUI Img2Img Workflow Support"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/ZnoNR1UtrAU/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Open-webui ComfyUI Img2Img Workflow Support', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1hlgb1q | /r/LocalLLaMA/comments/1hlgb1q/openwebui_comfyui_img2img_workflow_support/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'TRd5fTnybtYcmapn77rNuuaERoWVaK9dMNkJY7zg1kU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/iFldj9htKgRyFGsjzWSYjkWBA2ME1bbbYfl4b7id-FA.jpg?width=108&crop=smart&auto=webp&s=590c8263917cc292257ecfc822b521d760708beb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/iFldj9htKgRyFGsjzWSYjkWBA2ME1bbbYfl4b7id-FA.jpg?width=216&crop=smart&auto=webp&s=62bad50a99624e0f5958b4d3aa9144388884ec5c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/iFldj9htKgRyFGsjzWSYjkWBA2ME1bbbYfl4b7id-FA.jpg?width=320&crop=smart&auto=webp&s=a368905adf3dd1fb49170c43052e6a4838ac922f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/iFldj9htKgRyFGsjzWSYjkWBA2ME1bbbYfl4b7id-FA.jpg?auto=webp&s=26eb5b4acae31aa9dedf86a466043fa3bb3a6815', 'width': 480}, 'variants': {}}]} |
||
Open-webui ComfyUI Img2Img Workflow Support | 1 | [removed] | 2024-12-24T15:55:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hlgch3/openwebui_comfyui_img2img_workflow_support/ | pwillia7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlgch3 | false | null | t3_1hlgch3 | /r/LocalLLaMA/comments/1hlgch3/openwebui_comfyui_img2img_workflow_support/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'TRd5fTnybtYcmapn77rNuuaERoWVaK9dMNkJY7zg1kU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/iFldj9htKgRyFGsjzWSYjkWBA2ME1bbbYfl4b7id-FA.jpg?width=108&crop=smart&auto=webp&s=590c8263917cc292257ecfc822b521d760708beb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/iFldj9htKgRyFGsjzWSYjkWBA2ME1bbbYfl4b7id-FA.jpg?width=216&crop=smart&auto=webp&s=62bad50a99624e0f5958b4d3aa9144388884ec5c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/iFldj9htKgRyFGsjzWSYjkWBA2ME1bbbYfl4b7id-FA.jpg?width=320&crop=smart&auto=webp&s=a368905adf3dd1fb49170c43052e6a4838ac922f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/iFldj9htKgRyFGsjzWSYjkWBA2ME1bbbYfl4b7id-FA.jpg?auto=webp&s=26eb5b4acae31aa9dedf86a466043fa3bb3a6815', 'width': 480}, 'variants': {}}]} |
How to work with LLM for code? | 9 | Like, I have a git repo full of directories full of files full of code
Sometimes I try to reverse-engineer the whole tree to figure out what I should work on; sometimes I do the same at the directory, file, or function level.
After the LLM has located where it should work, you have it modify the relevant line/function and return it.
But it can't just return the whole file, otherwise it would flood the context window. If the LLM returns only a part, it needs to generate a git diff patch to state what it modifies/deletes/adds, because code can't be vague: one character more or less and it doesn't compile.
The thing is, it's utterly bad at generating git diff patches. It more or less hallucinates them. And a git diff patch is code too: it can't tolerate a single-character error, otherwise it doesn't apply.
Not to mention that if it operates too much through diffs, the complete tree/directory/file falls out of the context window and it no longer knows what it is working on.
And even with the code in context, how will it diff correctly if the code changes at each iteration without the full in-context code being updated?
It needs a Copilot-style operation mode, where it is constantly aware of the whole git tree in real time and operates on the code files themselves, instead of in the chat where you apply the changes yourself.
Even better, it could open merge requests itself; you just code-review it and it corrects its own patch. That way it stays constantly aware of the tree and of what it proposes. When it's ready, you merge and that's all.
Any model wrapper that can cooperate with you on a a git tree as a user? | 2024-12-24T16:03:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hlgib8/how_to_work_with_llm_for_code/ | xmmr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlgib8 | false | null | t3_1hlgib8 | /r/LocalLLaMA/comments/1hlgib8/how_to_work_with_llm_for_code/ | false | false | self | 9 | null |
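One practical mitigation for the hallucinated-patch problem described above is to never trust the model's diff directly: dry-run it with `git apply --check` and ask for a regeneration when it doesn't apply. A minimal sketch; `ask_llm` is a placeholder for whatever model or API you use:

```python
import subprocess
import tempfile

def patch_applies(repo_dir, patch_text):
    """Dry-run an LLM-generated unified diff with `git apply --check`."""
    with tempfile.NamedTemporaryFile("w", suffix=".patch", delete=False) as f:
        f.write(patch_text)
        patch_path = f.name
    result = subprocess.run(
        ["git", "apply", "--check", patch_path],
        cwd=repo_dir, capture_output=True, text=True,
    )
    return result.returncode == 0

def ask_llm(prompt):
    """Placeholder for the actual model call."""
    raise NotImplementedError

def get_valid_patch(repo_dir, prompt, retries=3):
    """Keep asking until the diff applies cleanly (or give up)."""
    for _ in range(retries):
        patch = ask_llm(prompt)
        if patch_applies(repo_dir, patch):
            return patch
        prompt += "\nThe previous patch did not apply cleanly; regenerate it."
    return None
```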
Creating your own NotebookLM Podcast that can run locally | 25 | Hey guys!
I actually had developed an alternative to Google NotebookLM a couple of months ago but abandoned the project along with the UI.
Since I realize NotebookLM is gaining more and more traction, I figured I could just open source some of the code I had used to create the archived website, only this time it would mainly be a CLI tool.
I want this to be completely open source but right now I am using these tools:
* Azure Document Intelligence
* Ollama LLMs
* Azure TTS
I would love for this to grow and be more robust and full of different features especially to the point where it doesn't require using Azure and can output the same level of TTS in the resulting podcast.
Here's the link to the repo: [https://github.com/shagunmistry/NotebookLM\_Alternative](https://github.com/shagunmistry/NotebookLM_Alternative)
Please let me know your thoughts!
The podcasts it creates are under here for example: [https://github.com/shagunmistry/NotebookLM\_Alternative/tree/main/examples](https://github.com/shagunmistry/NotebookLM_Alternative/tree/main/examples) | 2024-12-24T16:06:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hlgku4/creating_your_own_notebooklm_podcast_that_can_run/ | ordinary_shazzamm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlgku4 | false | null | t3_1hlgku4 | /r/LocalLLaMA/comments/1hlgku4/creating_your_own_notebooklm_podcast_that_can_run/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'DImI6jYuvomETkZakM3Yhl400Po9dT55vjL89z74oLg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CyKvnl9tA8m6M67Nz2P42FdWJvARlTuHvr4D4VNYyic.jpg?width=108&crop=smart&auto=webp&s=65a1d7f5ff65b91e6b619d3b1399e2fea953b1c7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CyKvnl9tA8m6M67Nz2P42FdWJvARlTuHvr4D4VNYyic.jpg?width=216&crop=smart&auto=webp&s=ce14700d450f1ef4af4f0573afb3ef6370c815dc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CyKvnl9tA8m6M67Nz2P42FdWJvARlTuHvr4D4VNYyic.jpg?width=320&crop=smart&auto=webp&s=b7cd02d8e9e656a353be052ee26b7ce9da7a68c6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CyKvnl9tA8m6M67Nz2P42FdWJvARlTuHvr4D4VNYyic.jpg?width=640&crop=smart&auto=webp&s=13614390d87046806e17e863e79c930cc3caa8cb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CyKvnl9tA8m6M67Nz2P42FdWJvARlTuHvr4D4VNYyic.jpg?width=960&crop=smart&auto=webp&s=d1b5faa8bfa8d03c242df8971749aa51d3055dbb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CyKvnl9tA8m6M67Nz2P42FdWJvARlTuHvr4D4VNYyic.jpg?width=1080&crop=smart&auto=webp&s=267332a0d9bd458e3267cd6b0f6d44fb100ed5ba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CyKvnl9tA8m6M67Nz2P42FdWJvARlTuHvr4D4VNYyic.jpg?auto=webp&s=9a70b3e801975997269082e8b3ecf07065b2f756', 'width': 1200}, 'variants': {}}]} |
Similar models to Phi3.5 mini (but with more parameters) | 2 | [removed] | 2024-12-24T16:24:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hlgxpj/similar_models_to_phi35_mini_but_with_more/ | agent61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlgxpj | false | null | t3_1hlgxpj | /r/LocalLLaMA/comments/1hlgxpj/similar_models_to_phi35_mini_but_with_more/ | false | false | self | 2 | null |
RAG vs Fine-tuning for analyzing millions of GA4 records with GPT-4? | 1 | [removed] | 2024-12-24T16:35:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hlh5i4/rag_vs_finetuning_for_analyzing_millions_of_ga4/ | Duraijeeva | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlh5i4 | false | null | t3_1hlh5i4 | /r/LocalLLaMA/comments/1hlh5i4/rag_vs_finetuning_for_analyzing_millions_of_ga4/ | false | false | self | 1 | null |
Any information about LLama3.x Huggingface repo access Rejections? | 2 | Be good citizen
Fill out request for access for LLama3.2 or 3.3
Wait a minute
Access denied
No appeal process or explanation.
What did I do wrong?
Be nice if they told you why. Kinda sucks to have to rely on someone else's mirror.
| 2024-12-24T16:37:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hlh6vy/any_information_about_llama3x_huggingface_repo/ | bigattichouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlh6vy | false | null | t3_1hlh6vy | /r/LocalLLaMA/comments/1hlh6vy/any_information_about_llama3x_huggingface_repo/ | false | false | self | 2 | null |
End-to-end local models bringing live chat with Santa Claus to life. | 1 | [removed] | 2024-12-24T16:42:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hlha6g/endtoend_local_models_bringing_live_chat_with/ | Simple-Holiday5446 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlha6g | false | null | t3_1hlha6g | /r/LocalLLaMA/comments/1hlha6g/endtoend_local_models_bringing_live_chat_with/ | false | false | self | 1 | null |
2 T4 GPUs bringing live chat with Santa Claus to life. | 1 | [removed] | 2024-12-24T16:48:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hlheo4/2_t4_gpus_bringing_live_chat_with_santa_claus_to/ | Simple-Holiday5446 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlheo4 | false | null | t3_1hlheo4 | /r/LocalLLaMA/comments/1hlheo4/2_t4_gpus_bringing_live_chat_with_santa_claus_to/ | false | false | self | 1 | null |
I challenge you to write a prompt that can generate original jokes: | 0 | My prompt:
1. Think step by step out loud to randomly choose one or more random topics. Think for at least 5 steps before choosing the random topics.
2. Explain the topics in necessary detail.
3. Write 9 jokes about the topics: 3 short jokes, 3 medium-length jokes and 3 long jokes.
4. Critique each joke based on its ability to make people laugh and rank them from worst to best.
5. Provide the best joke.
| 2024-12-24T16:53:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hlhil5/i_challenge_you_to_write_a_prompt_that_can/ | Super-Muffin-1230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlhil5 | false | null | t3_1hlhil5 | /r/LocalLLaMA/comments/1hlhil5/i_challenge_you_to_write_a_prompt_that_can/ | false | false | self | 0 | null |
QVQ - New Qwen Realease | 577 | 2024-12-24T17:08:21 | notrdm | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hlhtm0 | false | null | t3_1hlhtm0 | /r/LocalLLaMA/comments/1hlhtm0/qvq_new_qwen_realease/ | false | false | 577 | {'enabled': True, 'images': [{'id': 'FvFeOk4nhuwIoj-lGBYtoMrIMSZJ8aPhYwlhwhJt5Js', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/7l4dwx9awt8e1.png?width=108&crop=smart&auto=webp&s=232f5f6e2d3ec0c5735c61602e8e8fb159360dda', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/7l4dwx9awt8e1.png?width=216&crop=smart&auto=webp&s=bc46c76f5b5d2b7d61dc871ba3f89c675e081380', 'width': 216}, {'height': 155, 'url': 'https://preview.redd.it/7l4dwx9awt8e1.png?width=320&crop=smart&auto=webp&s=72a9eb38f9d043de2d44736c369f41a85599577f', 'width': 320}], 'source': {'height': 289, 'url': 'https://preview.redd.it/7l4dwx9awt8e1.png?auto=webp&s=f7e3133c435eba2d14ba594d738d44d4ed2adf79', 'width': 596}, 'variants': {}}]} |
|||
Qwen/QVQ-72B-Preview · Hugging Face | 219 | 2024-12-24T17:24:14 | https://huggingface.co/Qwen/QVQ-72B-Preview | itsmekalisyn | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hli5dn | false | null | t3_1hli5dn | /r/LocalLLaMA/comments/1hli5dn/qwenqvq72bpreview_hugging_face/ | false | false | 219 | {'enabled': False, 'images': [{'id': 'EZQpn5cdcqnNPLKoAK9W_WlxjhoxiRMs0mpQ7ttLXfE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?width=108&crop=smart&auto=webp&s=c0bf81b62705f2b0dc0d8596760557ddc9665c8d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?width=216&crop=smart&auto=webp&s=4a93bddd53b45c8458cbfe0adf63682a0aef2d46', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?width=320&crop=smart&auto=webp&s=33e1c45b457c0c3935b9e1a996e9ada8fabe1624', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?width=640&crop=smart&auto=webp&s=462a35429bf616091d063edc5cc35ddc44e4cbac', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?width=960&crop=smart&auto=webp&s=9c0e1d2970a0beab1824a6efcbb9d82e6ea9b0a7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?width=1080&crop=smart&auto=webp&s=65554edd06c19f8b3a9ea65e671acf94a8162d9e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?auto=webp&s=b143ed79d4208bdc81ca3300b579c19c11a93700', 'width': 1200}, 'variants': {}}]} |
||
More evidence from an OpenAI employee that o3 uses the same paradigm as o1: "[...] progress from o1 to o3 was only three months, which shows how fast progress will be in the new paradigm of RL on chain of thought to scale inference compute." | 65 | 2024-12-24T17:26:11 | Wiskkey | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hli6v1 | false | null | t3_1hli6v1 | /r/LocalLLaMA/comments/1hli6v1/more_evidence_from_an_openai_employee_that_o3/ | false | false | 65 | {'enabled': True, 'images': [{'id': 'WT89lJ3FWdglREoUit_423LhyS6j0KFtDiSrDyVW9cg', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/w4rzjmnazt8e1.jpeg?width=108&crop=smart&auto=webp&s=f17496be6d2814c875f1b84d808085d5f0ccce77', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/w4rzjmnazt8e1.jpeg?width=216&crop=smart&auto=webp&s=b53fbed56beaa32245eadca4f0c322d732bd425d', 'width': 216}, {'height': 342, 'url': 'https://preview.redd.it/w4rzjmnazt8e1.jpeg?width=320&crop=smart&auto=webp&s=83e36e8a6b8e1db9c20123e7446abec83eb44269', 'width': 320}], 'source': {'height': 498, 'url': 'https://preview.redd.it/w4rzjmnazt8e1.jpeg?auto=webp&s=d27648dfe8730c8753153ac3251b8950fc5346a8', 'width': 465}, 'variants': {}}]} |
|||
How do open source LLMs earn money | 151 | Since models like Qwen, MiniCPM etc are free for use, I was wondering how do they make money out of it. I am just a beginner in LLMs and open source. So can anyone tell me about it? | 2024-12-24T17:53:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hlir5e/how_do_open_source_llms_earn_money/ | Available-Stress8598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlir5e | false | null | t3_1hlir5e | /r/LocalLLaMA/comments/1hlir5e/how_do_open_source_llms_earn_money/ | false | false | self | 151 | null |
Qwen often outputs Chinese | 7 | When I evaluate the Qwen model on my own test data, there is a problem with Chinese being mixed into the middle of the output.
Is this a typical Qwen model issue, or is it because the data is in Korean? (I'm Korean :) )
Even if I modify the prompt a little bit, such as "Do not include Chinese in your answer.", nothing changes.
Have you guys had similar experiences?
Or any suggestions?
| 2024-12-24T17:56:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hlitkn/qwen_often_output_chinese/ | always_newbee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlitkn | false | null | t3_1hlitkn | /r/LocalLLaMA/comments/1hlitkn/qwen_often_output_chinese/ | false | false | self | 7 | null |
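If prompting doesn't help, one blunt workaround when running the model yourself with Hugging Face Transformers is to penalize tokens that decode to CJK characters at generation time. A rough sketch, assuming a standard tokenizer/model pair; it trades some quality for language control, and the one-time vocabulary scan can take a few seconds:

```python
import torch
from transformers import LogitsProcessor, LogitsProcessorList

def contains_cjk(text):
    """True if the string contains any CJK Unified Ideograph."""
    return any("\u4e00" <= ch <= "\u9fff" for ch in text)

class SuppressCJK(LogitsProcessor):
    def __init__(self, tokenizer):
        # One-time scan: collect ids of all tokens that decode to CJK text.
        self.banned = torch.tensor(
            [i for i in range(len(tokenizer)) if contains_cjk(tokenizer.decode([i]))],
            dtype=torch.long,
        )

    def __call__(self, input_ids, scores):
        scores[:, self.banned] = float("-inf")  # hard-ban those tokens
        return scores

# Usage sketch (model/tokenizer loading omitted):
# processors = LogitsProcessorList([SuppressCJK(tokenizer)])
# output = model.generate(**inputs, logits_processor=processors, max_new_tokens=512)
```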
LLM Chess Arena (MIT Licensed): Pit Two LLMs Against Each Other in Chess! | 26 | I’ve had this idea for a while and finally decided to code it. It’s still in the very early stages. It’s an LLM chess arena: enter the configuration details and let two LLMs battle it out. Only Groq is supported for now; test it with Llama 3.3. More providers and models are on the DEV branch.
The code runs only client side and is very simple.
MIT license:
[https://github.com/llm-chess-arena](https://github.com/llm-chess-arena)
Thank you for your PRs; they should be made against the DEV branch.
Current version can be tested here:
[https://llm-chess-arena.github.io/llm-chess-arena/](https://llm-chess-arena.github.io/llm-chess-arena/)
Get a free Groq API key here:
[https://console.groq.com/keys](https://console.groq.com/keys)
[LLM Chess Arena 0.1](https://preview.redd.it/zwbre9yb4u8e1.png?width=1566&format=png&auto=webp&s=1a9e4053dccd3eb0500f36b71ea309535776d193)
https://preview.redd.it/k0iy33qz4u8e1.png?width=1529&format=png&auto=webp&s=4b3d0c91a9949fb40a49007f54e7c808f4d0f0de
| 2024-12-24T17:56:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hlitnb/llm_chess_arena_mit_licensed_pit_two_llms_against/ | estebansaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlitnb | false | null | t3_1hlitnb | /r/LocalLLaMA/comments/1hlitnb/llm_chess_arena_mit_licensed_pit_two_llms_against/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'Ic0Ap_TXi6iPYwiOHJDHQZPWJEhbEzOvI6GIkQxScx4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/VpSYBnavEjAv8PlzdBM5SHDVDKKmKsaMbBcteRg8S5I.jpg?width=108&crop=smart&auto=webp&s=15dc1c95dc59150d148f36a4d6547dd66a38c9be', 'width': 108}], 'source': {'height': 148, 'url': 'https://external-preview.redd.it/VpSYBnavEjAv8PlzdBM5SHDVDKKmKsaMbBcteRg8S5I.jpg?auto=webp&s=9d3b610bb25c0cc098c6a1b155fc0747c81c73fd', 'width': 148}, 'variants': {}}]} |
|
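For anyone curious how little glue code such an arena needs, here is a self-contained sketch of the game loop using python-chess. The `random_player` stands in for the LLM call (in the real project that would be a Groq chat completion), and illegal or unparseable moves are treated as a forfeit:

```python
import random
import chess

def random_player(board):
    """Stand-in for an LLM move generator: returns a random legal move in UCI."""
    return random.choice(list(board.legal_moves)).uci()

def play_game(white_player, black_player):
    board = chess.Board()
    while not board.is_game_over():
        player = white_player if board.turn == chess.WHITE else black_player
        reply = player(board)
        try:
            move = chess.Move.from_uci(reply.strip())
        except ValueError:
            return f"forfeit: unparseable move {reply!r}"
        if move not in board.legal_moves:  # hallucinated or illegal move
            return f"forfeit: illegal move {reply!r}"
        board.push(move)
    return board.result()  # "1-0", "0-1" or "1/2-1/2"

print(play_game(random_player, random_player))
```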
What are the best models around 14b at the moment? | 28 | Are Virtuoso Small for general tasks and Qwen 2.5 Coder 14b for coding still the best 14b models currently or is there something better at a comparable size? | 2024-12-24T18:01:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hlixhr/what_are_the_best_models_around_14b_at_the_moment/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlixhr | false | null | t3_1hlixhr | /r/LocalLLaMA/comments/1hlixhr/what_are_the_best_models_around_14b_at_the_moment/ | false | false | self | 28 | null |
MarinaBox: Open-Source Sandbox Infra for AI Agents | 1 | [removed] | 2024-12-24T18:14:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hlj6kg/marinabox_opensource_sandbox_infra_for_ai_agents/ | bayllama97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlj6kg | false | null | t3_1hlj6kg | /r/LocalLLaMA/comments/1hlj6kg/marinabox_opensource_sandbox_infra_for_ai_agents/ | false | false | self | 1 | null |
Wow | 187 | 2024-12-24T18:36:14 | Evening_Action6217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hljmv1 | false | null | t3_1hljmv1 | /r/LocalLLaMA/comments/1hljmv1/wow/ | false | false | 187 | {'enabled': True, 'images': [{'id': '00iXQGtfnoLJfjyig1ccsXKQ36JYI6JuUjyzg6a1CXw', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/tvux5av5cu8e1.jpeg?width=108&crop=smart&auto=webp&s=5449128c00d00639e2f60b11fcb3c9f5a622de50', 'width': 108}, {'height': 84, 'url': 'https://preview.redd.it/tvux5av5cu8e1.jpeg?width=216&crop=smart&auto=webp&s=85ecb0727ef4cb796914c51bfd7fa78cd7ea166e', 'width': 216}, {'height': 124, 'url': 'https://preview.redd.it/tvux5av5cu8e1.jpeg?width=320&crop=smart&auto=webp&s=3954b1fbe37c857391cff8c31ad9f38d0debdf3a', 'width': 320}, {'height': 249, 'url': 'https://preview.redd.it/tvux5av5cu8e1.jpeg?width=640&crop=smart&auto=webp&s=588a3f9fd8cb30c2a2eaf1676c74ec56258cf9d3', 'width': 640}, {'height': 373, 'url': 'https://preview.redd.it/tvux5av5cu8e1.jpeg?width=960&crop=smart&auto=webp&s=8cc7c332f380ce4148aa124a3a26d2a79568f393', 'width': 960}, {'height': 420, 'url': 'https://preview.redd.it/tvux5av5cu8e1.jpeg?width=1080&crop=smart&auto=webp&s=3b6e61ed84c966d121750e98997e087835bcdfb8', 'width': 1080}], 'source': {'height': 570, 'url': 'https://preview.redd.it/tvux5av5cu8e1.jpeg?auto=webp&s=ee44fa3e235fff13020ea143c5055e449faec851', 'width': 1464}, 'variants': {}}]} |
|||
Why aren't LLM used as databases? | 0 | Not to be confused with using LLMs for generating SQL queries, etc. but using the LLM context as the data store itself? It's not applicable for all of the use cases, but particularly for local / private data, it can simplify the stack quite a lot by replacing the SQL DB engine, vector DB, etc. with just the LLM itself? | 2024-12-24T18:43:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hljsd8/why_arent_llm_used_as_databases/ | EstablishmentOdd785 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hljsd8 | false | null | t3_1hljsd8 | /r/LocalLLaMA/comments/1hljsd8/why_arent_llm_used_as_databases/ | false | false | self | 0 | null |
Are most llama.cpp-compatible LLM models on Hugging Face fine-tunes of LLaMA or Mistral models? | 1 | I am trying to figure out the Local LLaMA model universe. Can you please correct me, or help fill in the gaps:
LLaMA = Meta/Facebook = most models on Hugging Face.
Mistral = French Company, very popular RP models trending earlier this year, lots of NSFW variants.
Qwen = Chinese company, people impressed with coding ability, not sure what else is going on with them.
Google = The tiny adorable Phi models?
You can make fun of my ignorance, it took me long enough just to be able to formulate this semi-question. | 2024-12-24T19:13:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hlkdke/are_most_llamacppcompatible_llm_models_on_hugging/ | a_chatbot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlkdke | false | null | t3_1hlkdke | /r/LocalLLaMA/comments/1hlkdke/are_most_llamacppcompatible_llm_models_on_hugging/ | false | false | self | 1 | null |
Any clever ways to fit a 2nd GPU when the first GPU is blocking the PCIE slot? | 1 | [removed] | 2024-12-24T19:24:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hlklpk/any_clever_ways_to_fit_a_2nd_gpu_when_the_first/ | TomerHorowitz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlklpk | false | null | t3_1hlklpk | /r/LocalLLaMA/comments/1hlklpk/any_clever_ways_to_fit_a_2nd_gpu_when_the_first/ | false | false | self | 1 | null |
This era is awesome! | 180 | LLMs are improving stupidly fast. If you build applications with them, in a couple months or weeks you are almost guaranteed better, faster, *and* cheaper just by swapping out the model file, or if you're using an API just swapping a string! It's what I imagine computer geeks felt like in the 70s and 80s but much more rapid and open source. It kinda looks like building a moat around LLMs isn't that realistic even for the giants, if Qwen catching up to openAI has shown us anything. What a world! Super excited for the new era of open reasoning models, we're getting pretty damn close to open AGI. | 2024-12-24T20:48:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hlm72o/this_era_is_awesome/ | AnAngryBirdMan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlm72o | false | null | t3_1hlm72o | /r/LocalLLaMA/comments/1hlm72o/this_era_is_awesome/ | false | false | self | 180 | null |
We've seen posts about benchmarks of Gemini Flash 2.0, but what are your actual experiences with it? | 13 | and how is it compared to other popular open weight models? | 2024-12-24T21:11:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hlmmip/weve_seen_posts_about_benchmarks_of_gemini_flash/ | ThaisaGuilford | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlmmip | false | null | t3_1hlmmip | /r/LocalLLaMA/comments/1hlmmip/weve_seen_posts_about_benchmarks_of_gemini_flash/ | false | false | self | 13 | null |
Experimenting With LCM Models (Meta's Alternative To LLM Models) | 62 | 2024-12-24T21:26:32 | https://www.youtube.com/watch?v=2ZLd0uZvwbU&t=618s | Fun_Yam_6721 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1hlmwmq | false | {'oembed': {'author_name': 'Richard Aragon', 'author_url': 'https://www.youtube.com/@richardaragon8471', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/2ZLd0uZvwbU?start=618&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Experimenting With LCM Models (Meta's Alternative To LLM Models)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/2ZLd0uZvwbU/hqdefault.jpg', 'thumbnail_width': 480, 'title': "Experimenting With LCM Models (Meta's Alternative To LLM Models)", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1hlmwmq | /r/LocalLLaMA/comments/1hlmwmq/experimenting_with_lcm_models_metas_alternative/ | false | false | 62 | {'enabled': False, 'images': [{'id': '4PoT_0xE-eL0uIcH_3ycbsWdLFXzhD22NA8mawlxaI4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/F9H5hEy-XIUhAiUTntsb-AWU0OcbnRkJ74ecCsLlnq0.jpg?width=108&crop=smart&auto=webp&s=378ae637df89a2b37a741cd242e6fb770571863c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/F9H5hEy-XIUhAiUTntsb-AWU0OcbnRkJ74ecCsLlnq0.jpg?width=216&crop=smart&auto=webp&s=ab06792286bf47f50578d9515f55bf874a395c06', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/F9H5hEy-XIUhAiUTntsb-AWU0OcbnRkJ74ecCsLlnq0.jpg?width=320&crop=smart&auto=webp&s=e924a554309dd6c4d7a33b4e00dd4b01f3aa4f54', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/F9H5hEy-XIUhAiUTntsb-AWU0OcbnRkJ74ecCsLlnq0.jpg?auto=webp&s=f21fa5eac4e049c63ad365ff24ad5d8a6828514c', 'width': 480}, 'variants': {}}]} |
||
Small specialized "pre-LLM" models vs Foundation models | 0 | Pretty much the title, How do small specialized models (say, a fine-tuned BERT/modernBERT), compare to the latest foundation models in specialized tasks these days? I obviously mean tasks that are reasonably within the foundational model's distribution otherwise it is a meaningless question. Would be nice if someone can point me to any work that does recent comparisons. | 2024-12-24T21:37:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hln3q6/small_specialized_prellm_models_vs_foundation/ | Infrared12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hln3q6 | false | null | t3_1hln3q6 | /r/LocalLLaMA/comments/1hln3q6/small_specialized_prellm_models_vs_foundation/ | false | false | self | 0 | null |
How Hallucinatory A.I. Helps Science Dream Up Big Breakthroughs | 0 | 2024-12-24T21:40:50 | https://www.nytimes.com/2024/12/23/science/ai-hallucinations-science.html?unlocked_article_code=1.j04.AtYf.oFr5ztKQUdvZ&smid=re-share | chibop1 | nytimes.com | 1970-01-01T00:00:00 | 0 | {} | 1hln5ye | false | null | t3_1hln5ye | /r/LocalLLaMA/comments/1hln5ye/how_hallucinatory_ai_helps_science_dream_up_big/ | false | false | 0 | {'enabled': False, 'images': [{'id': '9t7bLItj5omjDJqBJYD0q0y9HGKIcW6bhLXI2eYneok', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4GDrFTcBcWYT4B6SE7iR7633Z2mSP9pdu1w1vlPv5J0.jpg?width=108&crop=smart&auto=webp&s=a4e2e56c5b5dc6e49538250eb7265152ea7d147a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/4GDrFTcBcWYT4B6SE7iR7633Z2mSP9pdu1w1vlPv5J0.jpg?width=216&crop=smart&auto=webp&s=31caf2dcf4d4080e530457b53e5d6b71a618869b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/4GDrFTcBcWYT4B6SE7iR7633Z2mSP9pdu1w1vlPv5J0.jpg?width=320&crop=smart&auto=webp&s=34d47a07c01ec42b65fcf7c9454c89da5ddadbca', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/4GDrFTcBcWYT4B6SE7iR7633Z2mSP9pdu1w1vlPv5J0.jpg?width=640&crop=smart&auto=webp&s=e912c42b12910153f115bdcf489a34ce94832843', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/4GDrFTcBcWYT4B6SE7iR7633Z2mSP9pdu1w1vlPv5J0.jpg?width=960&crop=smart&auto=webp&s=a37a69c9d6f1b13cdc6179c9347a9d026638e92b', 'width': 960}], 'source': {'height': 550, 'url': 'https://external-preview.redd.it/4GDrFTcBcWYT4B6SE7iR7633Z2mSP9pdu1w1vlPv5J0.jpg?auto=webp&s=d211417aaf3998cc0945b6978c7f1027cde0e36e', 'width': 1050}, 'variants': {}}]} |
||
QVQ 72B Preview - a Hugging Face Space by Qwen | 25 | 2024-12-24T21:42:03 | https://huggingface.co/spaces/Qwen/QVQ-72B-preview | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hln6pw | false | null | t3_1hln6pw | /r/LocalLLaMA/comments/1hln6pw/qvq_72b_preview_a_hugging_face_space_by_qwen/ | false | false | 25 | {'enabled': False, 'images': [{'id': '2b69hkpXSMDFk_BVFiMFLhMT_zYEHaNpZ_N_iqYyj28', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xhKKASWU05MEFRnXsDGA_HriPwRaEI7K5Lir2UVWvpg.jpg?width=108&crop=smart&auto=webp&s=71015496e74d3ea051c5317e1de8aa59ce9a3de8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xhKKASWU05MEFRnXsDGA_HriPwRaEI7K5Lir2UVWvpg.jpg?width=216&crop=smart&auto=webp&s=09bd6f2381160d077df71fe77a5fb970531ac908', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xhKKASWU05MEFRnXsDGA_HriPwRaEI7K5Lir2UVWvpg.jpg?width=320&crop=smart&auto=webp&s=8dcb41849ba821436f6500f8c583932fea54c06b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xhKKASWU05MEFRnXsDGA_HriPwRaEI7K5Lir2UVWvpg.jpg?width=640&crop=smart&auto=webp&s=9ba0b00edf85b7c84552fcd8e9b32f89b3c63d9b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xhKKASWU05MEFRnXsDGA_HriPwRaEI7K5Lir2UVWvpg.jpg?width=960&crop=smart&auto=webp&s=fa694cf5a4155466fb7bb834136bd46c4314460c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xhKKASWU05MEFRnXsDGA_HriPwRaEI7K5Lir2UVWvpg.jpg?width=1080&crop=smart&auto=webp&s=4cd3731e3a50b4a05101fa3fd864fedded946028', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xhKKASWU05MEFRnXsDGA_HriPwRaEI7K5Lir2UVWvpg.jpg?auto=webp&s=7fc0c427f6e9e3cb60ff407cbd477054a3100546', 'width': 1200}, 'variants': {}}]} |
||
I finetuned CLIP to predict the art styles for several image generation websites | 12 | 2024-12-24T21:43:36 | https://www.njkumar.com/finetuning-clip-to-analyze-art-styles-in-stable-diffusion-playgroundai-and-midjourney/ | fendiwap1234 | njkumar.com | 1970-01-01T00:00:00 | 0 | {} | 1hln7pg | false | null | t3_1hln7pg | /r/LocalLLaMA/comments/1hln7pg/i_finetuned_clip_to_predict_the_art_styles_for/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'bWXVvwa8flCrmkYJvLXfE5G12bSSSTbkElYwWaDiCi0', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?width=108&crop=smart&auto=webp&s=2cd1045517eda93c2aaafc19130bea85c7466318', 'width': 108}], 'source': {'height': 120, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?auto=webp&s=6d730f0aadb2da7eefca105ee16d8e99ecfca4a6', 'width': 124}, 'variants': {}}]} |
||
QVQ-72B is no joke , this much intelligence is enough intelligence | 752 | 2024-12-24T21:44:03 | https://www.reddit.com/gallery/1hln7zr | TheLogiqueViper | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hln7zr | false | null | t3_1hln7zr | /r/LocalLLaMA/comments/1hln7zr/qvq72b_is_no_joke_this_much_intelligence_is/ | false | false | 752 | null |
||
Does anyone know what happened to the wizard team? | 47 | I remember a while back they were releasing monster fine tunes, and a superstar team at Microsoft. What happened? | 2024-12-24T21:50:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hlnc99/does_anyone_know_what_happened_to_the_wizard_team/ | Mediocre_Tree_5690 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlnc99 | false | null | t3_1hlnc99 | /r/LocalLLaMA/comments/1hlnc99/does_anyone_know_what_happened_to_the_wizard_team/ | false | false | self | 47 | null |
Math skills🔥 | 18 | 2024-12-24T22:01:21 | https://www.reddit.com/gallery/1hlnjhv | TheLogiqueViper | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hlnjhv | false | null | t3_1hlnjhv | /r/LocalLLaMA/comments/1hlnjhv/math_skills/ | false | false | 18 | null |
||
Best models out at the moment | 12 | Hey guys, just wondering what everyone is using at the moment? I'm using qwen2.5 14b, Marco, llama 3.1 and llama 3.2 on 4070S Ti | 2024-12-24T22:09:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hlnosj/best_models_out_at_the_moment/ | purplehaze031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlnosj | false | null | t3_1hlnosj | /r/LocalLLaMA/comments/1hlnosj/best_models_out_at_the_moment/ | false | false | self | 12 | null |
Nailed it! | 1 | [removed] | 2024-12-24T22:17:04 | clendaniel | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hlntv3 | false | null | t3_1hlntv3 | /r/LocalLLaMA/comments/1hlntv3/nailed_it/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'xeZEi_7cOvADtCe7waIzgja24JWzaw8Fy8mbw5_D9N8', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/acg04bgkfv8e1.jpeg?width=108&crop=smart&auto=webp&s=cffe4aebbb33ff5ecacfcbd27b2176013e2bac54', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/acg04bgkfv8e1.jpeg?width=216&crop=smart&auto=webp&s=31ae774e47292a1daf2468cfac8a30f2245cc91c', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/acg04bgkfv8e1.jpeg?width=320&crop=smart&auto=webp&s=ff5e33b3448f8c4f4f5d223a1bca3f03a0ba7edd', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/acg04bgkfv8e1.jpeg?width=640&crop=smart&auto=webp&s=9296f3d06934e26294942b2fa8d549d23cf38a71', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/acg04bgkfv8e1.jpeg?width=960&crop=smart&auto=webp&s=da860842581d8cc7c924041584054ad4f2f17eb7', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/acg04bgkfv8e1.jpeg?width=1080&crop=smart&auto=webp&s=83afe0493b3954035894cbe333e0bd686e4f1c69', 'width': 1080}], 'source': {'height': 2556, 'url': 'https://preview.redd.it/acg04bgkfv8e1.jpeg?auto=webp&s=d904f21a2fa2c78be67f64af82bb25144a3b1bc7', 'width': 1179}, 'variants': {}}]} |
||
Evaluating performance of zero shot/ few shot classification on unanotated data | 1 | [removed] | 2024-12-24T23:04:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hloon9/evaluating_performance_of_zero_shot_few_shot/ | MaterialThing9800 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hloon9 | false | null | t3_1hloon9 | /r/LocalLLaMA/comments/1hloon9/evaluating_performance_of_zero_shot_few_shot/ | false | false | self | 1 | null |
How do you define rewards for RL on chain of thought reasoning? Trying to understand a bit more about how o3 from OpenAI was trained. | 7 | Even for results on coding and math, how does one go about RL for chain of thought? Are the rewards on the final outputs or also on intermediate steps? If the latter, how does one know the best intermediate steps? Presumably OpenAI has the money/resources to curate data for large swaths of coding and math tasks, but would that really scale to a large diversity of sub-tasks - especially considering they worked with ARC-AGI to show provable results?
And if the answer is another model, then how do you verify that the model is being rewarded appropriately especially if the other model is the same weights/size on the other side. Perhaps there are multiple evaluators and then multiple RL steps to see which one proves to be the best?
Its an incredible achievement, and I am trying to see if we can bring some of those learnings to smaller OSS models. | 2024-12-24T23:16:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hlow6b/how_do_you_define_rewards_for_rl_on_chain_of/ | AdditionalWeb107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlow6b | false | null | t3_1hlow6b | /r/LocalLLaMA/comments/1hlow6b/how_do_you_define_rewards_for_rl_on_chain_of/ | false | false | self | 7 | null |
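Nobody outside OpenAI knows exactly how o3's rewards are defined, but the simplest scheme commonly discussed for verifiable domains is an outcome reward: score only the final answer against a checkable reference (unit tests for code, exact match for math) and let the chain of thought be shaped indirectly through that terminal signal. A toy sketch of the idea; the "Answer:" convention is an assumption for illustration:

```python
import re

def extract_final_answer(completion):
    """Toy convention: the model ends its chain of thought with 'Answer: <x>'."""
    match = re.search(r"Answer:\s*(.+)\s*$", completion.strip())
    return match.group(1).strip() if match else ""

def outcome_reward(completion, reference):
    """1.0 if the verifiable final answer matches, else 0.0.
    Intermediate reasoning steps are not scored directly; RL credit reaches
    them only through this terminal signal."""
    return 1.0 if extract_final_answer(completion) == reference else 0.0

sample = "Let me think... 12 * 7 = 84, then 84 + 6 = 90.\nAnswer: 90"
print(outcome_reward(sample, "90"))  # 1.0
```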
Open weight / open source AI models from Meta, Qwen, Mistral, Deepseek and others are a great boon to companies like SSI / Nousresearch who are only interested in post-training. Ilya's company doesn't have to train a massive LLM to compete with the OpenAI, Anthropic, Google and xAI. | 1 | I think the best Open weight large models are only maybe 5-10% worse than the best proprietary large models (3.5 Sonnet).
This is kind of crazy if you think about it.
Now labs that focus only on post-training approaches like o1 can compete directly with OpenAI, who will spend billions of dollars to train their next frontier model (GPT-5) and then hundreds of millions more on post-training work, while post-training companies only need to spend hundreds of millions on post-training research and compute.
I never thought about this before, but open-source AGI might be possible in the future.
| 2024-12-24T23:26:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hlp1wp/open_weight_open_source_ai_models_from_meta_qwen/ | Super-Muffin-1230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlp1wp | false | null | t3_1hlp1wp | /r/LocalLLaMA/comments/1hlp1wp/open_weight_open_source_ai_models_from_meta_qwen/ | false | false | self | 1 | null |
Open weight / open source AI models from Meta, Qwen, Mistral, Deepseek and others are a great boon to companies like SSI / Nous Research who are only interested in post-training. Ilya's company doesn't have to train a massive model to compete with OpenAI, Anthropic, Google and xAI. | 6 | I think the best open weight large models are only maybe 5-10% worse than the best proprietary large models (3.5 Sonnet).
This is kind of crazy if you think about it.
Now labs that only focus on post-training approaches like o1 can compete directly with OpenAI, who will spend billions of dollars to train their next frontier model (GPT-5) and then hundreds of millions more on post-training work, while post-training-only companies need to spend just hundreds of millions on post-training research and compute.
I never thought about this but Open-source AGI is maybe possible in the future.
| 2024-12-24T23:28:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hlp3bg/open_weight_open_source_ai_models_from_meta_qwen/ | Super-Muffin-1230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlp3bg | false | null | t3_1hlp3bg | /r/LocalLLaMA/comments/1hlp3bg/open_weight_open_source_ai_models_from_meta_qwen/ | false | false | self | 6 | null |
🎄10 LLM Papers That Caught My Attention: a Year in Review | 1 | [removed] | 2024-12-24T23:39:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hlpai3/10_llm_papers_that_caught_my_attention_a_year_in/ | Kooky-Somewhere-2883 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlpai3 | false | null | t3_1hlpai3 | /r/LocalLLaMA/comments/1hlpai3/10_llm_papers_that_caught_my_attention_a_year_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nWgqXdtQgIsubgPXjqAbJbA5fVDT8QeZkhZXn3omT_g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JBrLOZI29Ndh74WO23T0AJRctt5ZajjsM5uJzdU8hww.jpg?width=108&crop=smart&auto=webp&s=c168717c0682d1ad4203de621e4942048def8811', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JBrLOZI29Ndh74WO23T0AJRctt5ZajjsM5uJzdU8hww.jpg?width=216&crop=smart&auto=webp&s=b5a1da17075b78b9126577e02f54308510edacae', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JBrLOZI29Ndh74WO23T0AJRctt5ZajjsM5uJzdU8hww.jpg?width=320&crop=smart&auto=webp&s=a95ce039b12a5fd97f0ccb412e4b219b8a7f7b4d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JBrLOZI29Ndh74WO23T0AJRctt5ZajjsM5uJzdU8hww.jpg?width=640&crop=smart&auto=webp&s=122f7aa7be68ede110ad84816a4f9e7ab4a22b33', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JBrLOZI29Ndh74WO23T0AJRctt5ZajjsM5uJzdU8hww.jpg?width=960&crop=smart&auto=webp&s=7284e5425f025e780931ac3c573cfbd114fa329a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JBrLOZI29Ndh74WO23T0AJRctt5ZajjsM5uJzdU8hww.jpg?width=1080&crop=smart&auto=webp&s=d44f561f3716151c0556e5d2f7c510066a5ff57f', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/JBrLOZI29Ndh74WO23T0AJRctt5ZajjsM5uJzdU8hww.jpg?auto=webp&s=952a55abafbf54d7f4d1a08f018b97070daadec1', 'width': 2048}, 'variants': {}}]} |
Clariti AI: iOS Feature Preview | 1 | [removed] | 2024-12-24T23:43:33 | https://v.redd.it/tocaizuzuv8e1 | claritiai | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hlpctn | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/tocaizuzuv8e1/DASHPlaylist.mpd?a=1737675827%2CODIxY2IzNThhODgzODNmOGE3ZmIyNGRjYWY1MGNkN2FlZGZlYzA3Mjk4NzY0NGM4YzM2MTEyZDczYjVjNGMyMw%3D%3D&v=1&f=sd', 'duration': 29, 'fallback_url': 'https://v.redd.it/tocaizuzuv8e1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/tocaizuzuv8e1/HLSPlaylist.m3u8?a=1737675827%2CMTZkNGQ4Y2FjYzRiYTc5N2UzZjU3Y2Y4NzU2NzI0OTA3ZGRkNjNlN2IyYzIzODhlZjY0NmY3Yzk2NGEzYTZlMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tocaizuzuv8e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 590}} | t3_1hlpctn | /r/LocalLLaMA/comments/1hlpctn/clariti_ai_ios_feature_preview/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'b2E3ZTV0c3p1djhlMXLniEdACxk8TJUVqSihXWYxB6jK4WGRJvQLR07vdiSC', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/b2E3ZTV0c3p1djhlMXLniEdACxk8TJUVqSihXWYxB6jK4WGRJvQLR07vdiSC.png?width=108&crop=smart&format=pjpg&auto=webp&s=0c7f77056e309f513339ba1f323e58d12f37671e', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/b2E3ZTV0c3p1djhlMXLniEdACxk8TJUVqSihXWYxB6jK4WGRJvQLR07vdiSC.png?width=216&crop=smart&format=pjpg&auto=webp&s=f0fb739bfc59806d6eae36f3c14810af2e2dd9bb', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/b2E3ZTV0c3p1djhlMXLniEdACxk8TJUVqSihXWYxB6jK4WGRJvQLR07vdiSC.png?width=320&crop=smart&format=pjpg&auto=webp&s=0e02ac6b82826e0c4059819245869d9099323697', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/b2E3ZTV0c3p1djhlMXLniEdACxk8TJUVqSihXWYxB6jK4WGRJvQLR07vdiSC.png?width=640&crop=smart&format=pjpg&auto=webp&s=3404a92dd238c2a8a73d22a8510b231cfce089b8', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/b2E3ZTV0c3p1djhlMXLniEdACxk8TJUVqSihXWYxB6jK4WGRJvQLR07vdiSC.png?format=pjpg&auto=webp&s=e004ac03e0da3e5b6a7acd3c0b750b7548254e3e', 'width': 886}, 'variants': {}}]} |
|
OMG, have I just made the transformer architecture much better with a few lines of code? It can't be true, right? | 9 | Hi guys! I've been playing with the transformer architecture, made a slight change, and got an unbelievable improvement. Am I crazy? Could someone with better hardware please check?
Here are the results:
*[screenshot: benchmark results]*
WHAT I DID?! I slightly changed the MLP layer (PyTorch):
*[screenshots: the modified MLP code]*
This change increases the parameter count slightly, so to counter that I decreased the size of the MLP layer. The original model has 192 n\_embd, my version has only 166. The ratio may vary depending on your numbers, so you will need to tune it for a fair comparison.
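As a rough sanity check on the parameter budget, here is a minimal sketch assuming the usual 4x n\_embd hidden size (my actual modification is only shown in the screenshots above):

```python
import torch.nn as nn

def mlp_param_count(n_embd: int, hidden_mult: int = 4) -> int:
    # Standard GPT-style MLP: Linear(n_embd -> 4*n_embd) -> GELU -> Linear(4*n_embd -> n_embd)
    mlp = nn.Sequential(
        nn.Linear(n_embd, hidden_mult * n_embd),
        nn.GELU(),
        nn.Linear(hidden_mult * n_embd, n_embd),
    )
    return sum(p.numel() for p in mlp.parameters())

print(mlp_param_count(192))  # baseline width
print(mlp_param_count(166))  # reduced width used to offset the extra parameters
```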
Can anybody check please? It shouldn't take much time. It can't be true, right? | 2024-12-25T00:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hlpp6y/omg_am_i_just_made_transformer_architecture_much/ | Danil_Kutny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlpp6y | false | null | t3_1hlpp6y | /r/LocalLLaMA/comments/1hlpp6y/omg_am_i_just_made_transformer_architecture_much/ | false | false | self | 9 | null |
I need help running a Hugging Face model. How exactly do I do it? | 1 | I am unsure if I am searching in the wrong places, but I have been trying to run the model "Qwen2-VL-7B-Instruct" from Hugging Face. I'm kinda new to everything and would like some guidance. My goal is to have a Python project where I have AI look at only a certain area of an image-based PDF and extract the handwritten text; other programs like Tesseract are really bad at understanding handwritten text.
Someone else in the community asked for the best OCR for handwriting, and the AI model I mention above was recommended since it was able to read really bad handwriting like the sample attached. I downloaded Open WebUI and ran some Llama models like "llama3.2-vision", which were able to OCR pretty well; the problem is that it is an 11B model that won't work where I am trying to use the program due to the system having older components. The Qwen2-VL-7B-Instruct model is 7B, which might be able to run where I need it, but I need to test it first. The Ollama model was easy to download in Open WebUI, but I don't know how to download a Hugging Face model there, so instead I got LM Studio, which lets you download Hugging Face models. The problem is that when I run it I get this error: "\`\`\` Failed to load the model Error loading model. (Exit code: 0). Some model operation failed. Try a different model and/or config.\`\`\`"
I'm thinking the next best way to use the model is from Python, which I don't know how to do yet but will eventually need to learn if I want to create the project. Looking online, there is hardly any simple information on how to do this. Can someone guide me in the right direction?
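For reference, the usage pattern on the model card looks roughly like this - a sketch I haven't tested here; it assumes a recent transformers release (4.45+) plus the qwen-vl-utils helper, and the image path is just a placeholder. On older hardware you would likely add 4-bit loading or fall back to a GGUF runtime:

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

model_id = "Qwen/Qwen2-VL-7B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/handwriting_crop.png"},  # placeholder path
        {"type": "text", "text": "Transcribe the handwritten text in this image."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens before decoding
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```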
[Handwriting test fed to AI models](https://preview.redd.it/qiuyhpzizv8e1.png?width=1778&format=png&auto=webp&s=876cc79fa9aaa48e44e21696fca25a72abe23cfd)
[The error I am getting in LM Studio](https://preview.redd.it/zq0l8dfq1w8e1.png?width=463&format=png&auto=webp&s=8a13fc66d4a84fa63954dbfbeea3cb2856153ead)
| 2024-12-25T00:22:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hlpzno/i_need_help_running_a_happy_face_model_how_do_i/ | International_Boat14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlpzno | false | null | t3_1hlpzno | /r/LocalLLaMA/comments/1hlpzno/i_need_help_running_a_happy_face_model_how_do_i/ | false | false | 1 | null |
|
New To This - 2x3060 12GB vs 1xP40 | 1 | I am looking to see what my best options are in terms of what I have on hand and possibly adding to it. I currently have one 3060 12GB and one P40, which I have been using for image generation. I was going to retire the P40 to a media server (I know that's a little high-powered, but it's better than it just sitting in a box). I am looking to get into language models, and I have read the P40 is OK for that: it has the RAM but is slow. So I was looking to either use it for this, or swap the 3060 between my two VMs (image generation and language model). I could get a second 3060 to reach 24GB and have faster performance than the P40 too, eventually getting a better GPU for image generation and running the dual 3060s for just the language model. Is this the way to go, or should I start with the P40 for the language model?
Keep in mind I am new. I don't know if I need a larger model right away. Is it better to start with speed or more VRAM? I am also keeping in mind that used P40s are now the same price as a 3060 12GB. I bought my P40 during the $150 times on eBay.
Also, I read that I can mix GPUs for language models, but at the cost of running at the slower card's speed. So I assume running the 3060 and P40 together would be about as useful as the P40 by itself, correct? | 2024-12-25T00:25:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hlq1cz/new_to_this_2x3060_12gb_vs_1xp40/ | CaptainxShittles | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlq1cz | false | null | t3_1hlq1cz | /r/LocalLLaMA/comments/1hlq1cz/new_to_this_2x3060_12gb_vs_1xp40/ | false | false | self | 1 | null
Best model for text detection of any language on videos and best model for translation? | 1 | [removed] | 2024-12-25T00:42:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hlqbio/best_model_for_text_detection_of_any_language_on/ | Relative-Pace-2923 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlqbio | false | null | t3_1hlqbio | /r/LocalLLaMA/comments/1hlqbio/best_model_for_text_detection_of_any_language_on/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'I71BiTF96O0hrMCvni4Bzq6AB2fKIpn6C1gHJriIuYU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/g1xQIZXd0blE1uFbPIUxmEF3XNm8pNTkQt29tkq9e4E.jpg?width=108&crop=smart&auto=webp&s=fa7c25fd400e5cf2dc486c4e08530f3732313ec0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/g1xQIZXd0blE1uFbPIUxmEF3XNm8pNTkQt29tkq9e4E.jpg?width=216&crop=smart&auto=webp&s=c3b8643dc66b9cc19d86f353ee687e4e60ba659e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/g1xQIZXd0blE1uFbPIUxmEF3XNm8pNTkQt29tkq9e4E.jpg?width=320&crop=smart&auto=webp&s=4453fa82b7c7133b87914ec0f0f8fa4a3bf998e1', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/g1xQIZXd0blE1uFbPIUxmEF3XNm8pNTkQt29tkq9e4E.jpg?auto=webp&s=57a655c399eafddc218828b271ce5f8ecf7ff357', 'width': 480}, 'variants': {}}]} |
Wrote an article about automating RAG content ingestion - some feedback would be appreciated! | 1 | [removed] | 2024-12-25T00:42:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hlqbou/wrote_an_article_about_automating_rag_content/ | RAGcontent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlqbou | false | null | t3_1hlqbou | /r/LocalLLaMA/comments/1hlqbou/wrote_an_article_about_automating_rag_content/ | false | false | self | 1 | null |
Using AI models together with search works really well, even with smaller ones! | 41 | In another thread, I mentioned alternatives to Perplexity AI, and I ended up choosing Farfalle with the Qwen2.5 14b model. The results have been impressive! The "Expert search" mode works just like Perplexity—giving me up-to-date, direct answers in seconds. If I need more depth, it provides all the resources it uses. Pretty handy!
Are you also using something similar? | 2024-12-25T00:45:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hlqd95/using_ai_models_together_with_search_works_really/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlqd95 | false | null | t3_1hlqd95 | /r/LocalLLaMA/comments/1hlqd95/using_ai_models_together_with_search_works_really/ | false | false | self | 41 | null |
Best Small-Medium model for local RAG search? | 3 | Currently using a combo of nomic for embedding and Phi4 for the LLM. The results are OK-ish. Using a mac mini with 48 GB.
What model(s) have others found to be useful that would be worth a shot in the same RAM range?
| 2024-12-25T01:41:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hlr8s5/best_smallmedium_model_for_local_rag_search/ | KittyPigeon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlr8s5 | false | null | t3_1hlr8s5 | /r/LocalLLaMA/comments/1hlr8s5/best_smallmedium_model_for_local_rag_search/ | false | false | self | 3 | null |
Use cases for Local LLMs | 1 | [removed] | 2024-12-25T01:46:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hlrbq3/use_cases_for_local_llms/ | General_Duck_666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlrbq3 | false | null | t3_1hlrbq3 | /r/LocalLLaMA/comments/1hlrbq3/use_cases_for_local_llms/ | false | false | self | 1 | null |
Best way to interface with Ollama on Mac | 0 | I installed Ollama with qwen2.5-coder:32b on my M3 Max and it runs surprisingly well. How is everyone interfacing with it on their Mac? Should I use Open WebUI through Docker locally as well? I currently use it on my main network with my 3090, but I was thinking it may be a good idea to run it directly on my Mac as well in case I am someplace without internet. | 2024-12-25T01:56:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hlrh1b/best_way_to_interface_with_ollama_on_mac/ | PositiveEnergyMatter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlrh1b | false | null | t3_1hlrh1b | /r/LocalLLaMA/comments/1hlrh1b/best_way_to_interface_with_ollama_on_mac/ | false | false | self | 0 | null
Use cases for Local LLMs | 0 | I'm curious to learn what other folks here run local LLMs for. What workflows or novel use cases do you have ?
Mainly having a bit of fomo and creative block.
I'll start first: I have a home server with dual 4090s running Open WebUI with Ollama. I share access with my wife, and we have a VPN set up for on-the-go access. We don't pay for other LLM solutions.
I built a couple of data pipelines to access my notes that I store in GitHub.
I also have several OVOS speakers around the house, and using a fallback skill that uses the LLM for most queries.
Lastly, I have a script which daily asks the LLM about the day's socially relevant and cultural events and asks it to generate a light theme for my outdoor landscaping lights, which is then automatically applied. It correctly chose green and red for today :)
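For anyone curious, the daily theme script boils down to something like this trimmed sketch - the model name and prompt are illustrative, and the actual light-controller call is swapped for a print:

```python
import datetime
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama endpoint

today = datetime.date.today().isoformat()
prompt = (
    f"Today is {today}. Name any notable holiday or cultural event today, "
    "then reply with ONLY a JSON object like "
    '{"event": "...", "colors": ["#RRGGBB", "#RRGGBB"]} for outdoor lights.'
)

resp = requests.post(OLLAMA_URL, json={
    "model": "llama3.1",            # placeholder model name
    "messages": [{"role": "user", "content": prompt}],
    "format": "json",               # ask Ollama for JSON-constrained output
    "stream": False,
})
theme = json.loads(resp.json()["message"]["content"])

# In my setup this is where the landscaping-light controller gets called
print(theme["event"], theme["colors"])
```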
What cool stuff is everyone doing ? | 2024-12-25T01:58:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hlrhvy/use_cases_for_local_llms/ | HeadOfCelery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlrhvy | false | null | t3_1hlrhvy | /r/LocalLLaMA/comments/1hlrhvy/use_cases_for_local_llms/ | false | false | self | 0 | null |
tangent 🌱 update: Electron based Ollama UI w. built-in Python & React interpreters! | 11 | Hey all! This is a brief follow-up on a post from last week about a UI I'm developing called [tangent](https://www.reddit.com/r/LocalLLaMA/comments/1hgc64u/tangent_the_ai_chat_canvas_that_grows_with_you/). The project has been completely overhauled (structurally) and now stands 10000x cleaner than before (with lots of room for improvement still)
It also now has basic python interpreting as well as a react rendering feature inspired by Claude's Artifacts.
See below
[simple python + react example](https://reddit.com/link/1hlrkko/video/mmu8kn8ljw8e1/player)
[three js visualization](https://reddit.com/link/1hlrkko/video/4pvv47vhjw8e1/player)
Here are some more details:
1. Python Interpreter: run Python code right in your chat (a small example follows below this list):
- No Docker or complex setup - everything runs in your browser using Pyodide
- Matplotlib visualization support
- Numpy integration
- Real-time output/error handling
- All executing locally alongside your Ollama instance

2. React Component Renderer: create and test React components on the fly:
- Browser-based sandbox environment - no build setup needed
- Built-in Tailwind support
- Three.js/React Three Fiber for 3D
- Live preview with hot-reloading
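To give a feel for the Python side, this is the kind of snippet the in-chat interpreter is meant to handle - numpy for data plus matplotlib rendered off-screen to PNG bytes, which is roughly how plots come back from the Pyodide sandbox (illustrative only):

```python
import base64
import io

import matplotlib
matplotlib.use("Agg")  # headless backend, as needed in a browser/Pyodide sandbox
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x))
plt.title("hello from the chat canvas")

buf = io.BytesIO()
plt.savefig(buf, format="png")
png_b64 = base64.b64encode(buf.getvalue()).decode()  # ready to drop into an <img> tag
print(png_b64[:60], "...")
```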
Next up:
\- Ongoing efforts at migrating from JSX to TS, driven by a contributor (who already refactored the entire backend and is currently on a break for the holidays): [https://github.com/itsPreto/tangent/pull/13](https://github.com/itsPreto/tangent/pull/13)
\- OpenAI compatibility (next up after the JSX to TS migration)
\- I'm working on adding file upload and image handling for VLMs.
Code's open source: \[https://github.com/itsPreto/tangent\] | 2024-12-25T02:02:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hlrkko/tangent_update_electron_based_ollama_ui_w_builtin/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlrkko | false | null | t3_1hlrkko | /r/LocalLLaMA/comments/1hlrkko/tangent_update_electron_based_ollama_ui_w_builtin/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'gSddjIeZxnz4Gemr7mU7eSI_Mt7qD5nybX62uw9wWlc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/APMYhLUho8QyN3-M5Es3vZfWxNsK5_OB9BTUxY2PW-I.jpg?width=108&crop=smart&auto=webp&s=2098e845032a133f4da489b33146b527c675a3b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/APMYhLUho8QyN3-M5Es3vZfWxNsK5_OB9BTUxY2PW-I.jpg?width=216&crop=smart&auto=webp&s=6cac9d7e3bf1d51a83b18e27b0fd9bfc4f6d7910', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/APMYhLUho8QyN3-M5Es3vZfWxNsK5_OB9BTUxY2PW-I.jpg?width=320&crop=smart&auto=webp&s=af233b613717be31e1161d05cf6ea0f0108ada60', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/APMYhLUho8QyN3-M5Es3vZfWxNsK5_OB9BTUxY2PW-I.jpg?width=640&crop=smart&auto=webp&s=b6b286b1d78ddcd6fa33896476aa124a5a337d40', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/APMYhLUho8QyN3-M5Es3vZfWxNsK5_OB9BTUxY2PW-I.jpg?width=960&crop=smart&auto=webp&s=5ac768c1432300f6873b7d2ca2e018949d0d6a20', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/APMYhLUho8QyN3-M5Es3vZfWxNsK5_OB9BTUxY2PW-I.jpg?width=1080&crop=smart&auto=webp&s=760b294c45a6659ef440ae9b15fa1a2350d30e00', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/APMYhLUho8QyN3-M5Es3vZfWxNsK5_OB9BTUxY2PW-I.jpg?auto=webp&s=e6680ecf11ea1b4504523f1ebb4e7e19930e8124', 'width': 1200}, 'variants': {}}]} |
|
Looking to start with local, would like some advice! | 1 | [removed] | 2024-12-25T02:54:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hlsbuw/looking_to_start_with_local_would_like_some_advice/ | ReasonablePossum_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlsbuw | false | null | t3_1hlsbuw | /r/LocalLLaMA/comments/1hlsbuw/looking_to_start_with_local_would_like_some_advice/ | false | false | self | 1 | null |
How do reasoning models benefit from extremely long reasoning chains if their context length less than the thinking token used? | 16 | I mean, I just read o3 used up to 5.7 billion thinking tokens to answer a question, and its context length is what, 100k? 1M at most? | 2024-12-25T03:02:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hlsg9p/how_do_reasoning_models_benefit_from_extremely/ | NoIntention4050 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlsg9p | false | null | t3_1hlsg9p | /r/LocalLLaMA/comments/1hlsg9p/how_do_reasoning_models_benefit_from_extremely/ | false | false | self | 16 | null |
How do I make an LLM more knowledgeable in a certain domain? | 1 | I would like to make an LLM more specialized in a certain domain (a certain kind of Harry Potter fanfiction). How do I do this?
(I don’t think that RAG is the solution as I want it to come up with original ideas in the theme rather than regurgitating the documents)
(Please suggest no code methods if possible) | 2024-12-25T03:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hlslac/how_do_i_make_an_llm_more_knowledgeable_in_a/ | PublicQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlslac | false | null | t3_1hlslac | /r/LocalLLaMA/comments/1hlslac/how_do_i_make_an_llm_more_knowledgeable_in_a/ | false | false | self | 1 | null |
Recommendations for best usage of current resources | 1 | Been reading papers on Magentic-One, Llama, Phi-4, etc. Really interested in the Magentic-One (Multi Agentic approach), and have some hardware to play around with. Please help me choose an ideal setup.
Hardware that I have:
2x3090
1x2080Ti
1x970 (probably useless now)
1xK80 (also useless now)
Computers:
Intel i9-10900KF, 2x16GB DDR4, 2TB NVMe
Ryzen 5700X, 4x8GB DDR4, 1TB NVMe, 500GB SSD
(NAS) R730xd 2x12 Core E5-2678V3 (2.5GHz), 128GB DDR4, ~32TB HDD storage, 2x128GB SSD
I am thinking I will put the 2x3090 in the intel machine, with NVLink, and try to run the 70b models in 4bit. I can use the 2080Ti in the AMD machine, running an 11B model.
Overall, my goal is to fork Magentic-One, allowing for individually configurable agents with different LLMs.
So if you were in my shoes, what models would you choose, and how would you leverage this? Right now I don't see myself training much more than a LoRA, and my goal is to have an LLM system capable of Software Project planning, code/repo surfing, and some code generation.
Finally, what would your growth plan be after this? Move towards a single machine and more cards? | 2024-12-25T03:19:31 | https://www.reddit.com/r/LocalLLaMA/comments/1hlsp40/recommendations_for_best_usage_of_current/ | decrement-- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlsp40 | false | null | t3_1hlsp40 | /r/LocalLLaMA/comments/1hlsp40/recommendations_for_best_usage_of_current/ | false | false | self | 1 | null |
Alpine LLaMA: A gift for the GPU poor and the disk poor | 37 | No GPU? No problem. No disk space? Even better.
This Docker image, which currently weighs 8.4 MiB (compressed), contains the bare essentials: a LLaMA.cpp HTTP server.
The project is available at the [DockerHub](https://hub.docker.com/r/samueltallet/alpine-llama-cpp-server) and [GitHub](https://github.com/SamuelTallet/alpine-llama-cpp-server).
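Once the container is running with a GGUF model loaded, you can talk to it like any llama.cpp HTTP server. A minimal sketch, assuming the server is published on the default port 8080 (see the README for the exact run command and env vars):

```python
import requests

# llama.cpp server's native completion endpoint
resp = requests.post("http://localhost:8080/completion", json={
    "prompt": "Q: What is the capital of France?\nA:",
    "n_predict": 32,
    "temperature": 0.2,
})
print(resp.json()["content"])
```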
No animals were harmed in the making of this photo.
The text on the sweatshirt may have a hidden meaning. | 2024-12-25T03:31:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hlsvgt/alpine_llama_a_gift_for_the_gpu_poor_and_the_disk/ | SamuelTallet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlsvgt | false | null | t3_1hlsvgt | /r/LocalLLaMA/comments/1hlsvgt/alpine_llama_a_gift_for_the_gpu_poor_and_the_disk/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': '6Xpcy7-vK5jANsgaeubPknAWEwrQe9lVpwXjwTq4ep4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?width=108&crop=smart&auto=webp&s=3c2fbd60404e8ed4f19688280a3d3c57f5c0dc8b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?width=216&crop=smart&auto=webp&s=9d8c1c9129a107fbd39ddf064835ad6b559e0f4c', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?width=320&crop=smart&auto=webp&s=67c7f9fd7dd1781e22e70eacdb7482636b0f1e52', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?width=640&crop=smart&auto=webp&s=52c2c314997566a69490207ad235f61b8e4aad9e', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?width=960&crop=smart&auto=webp&s=ef0bfa46ea4eb68e5188f7b3f4feb6b2b85a6fa7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?width=1080&crop=smart&auto=webp&s=332e6d0312fbb86dc639f8ed24ea41a0aa811929', 'width': 1080}], 'source': {'height': 1896, 'url': 'https://external-preview.redd.it/4tPoseYvVk_DiQRH-clfRFLejS_sZmV2Y_bF77RQbRg.jpg?auto=webp&s=c7529d662fdeb9c77805dcb812a85757cff80114', 'width': 3372}, 'variants': {}}]} |
Next-Generation AMD Radeon Pro SSG? | 1 | [removed] | 2024-12-25T03:40:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hlszpg/nextgeneration_amd_radeon_pro_ssg/ | Ancient_Wait_8788 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlszpg | false | null | t3_1hlszpg | /r/LocalLLaMA/comments/1hlszpg/nextgeneration_amd_radeon_pro_ssg/ | false | false | self | 1 | null |
how many r in strawberrry? Surprised when Claude Sonet 3.5 and Gemini 2.0 Failed | 1 | [removed] | 2024-12-25T04:17:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hltii4/how_many_r_in_strawberrry_surprised_when_claude/ | jack-pham9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hltii4 | false | null | t3_1hltii4 | /r/LocalLLaMA/comments/1hltii4/how_many_r_in_strawberrry_surprised_when_claude/ | false | false | 1 | null |
|
Seeking Advice on Flux LoRA Fine-Tuning with More Photos & Higher Steps | 295 | I’ve been working on a flux LoRA model for my Nebelung cat, Tutu, which you can check out here: [https://huggingface.co/bochen2079/tutu](https://huggingface.co/bochen2079/tutu)
So far, I’ve trained it on RunPod with a modest GPU rental using only 20 images and 2,000 steps, and I’m pleased with the results. Tutu’s likeness is coming through nicely, but I’m considering taking this further and would really appreciate your thoughts before I do a much bigger setup.
My plan is to gather 100+ photos so I can capture a wider range of poses, angles, and expressions for Tutu, and then push the training to around 5,000+ steps or more. The extra data and additional steps should (in theory) give me more fine-grained detail and consistency in the images. I’m also thinking about renting an 8x H100 GPU setup, not just for speed but to ensure I have enough VRAM to handle the expanded dataset and higher step count without a hitch.
I’m curious about how beneficial these changes might be. Does going from 20 to 100 images truly help a LoRA model learn finer nuances, or is there a point of diminishing returns and if so what is that graph look like etc? Is 5,000 steps going to achieve significantly better detail and stability compared to the 2,000 steps I used originally, or could it risk overfitting? Also, is such a large GPU cluster overkill, or is the performance boost and stability worth it for a project like this? I’d love to hear your experiences, particularly if you’ve done fine-tuning with similarly sized datasets or experimented with bigger hardware configurations. Any tips about learning rates, regularization techniques, or other best practices would also be incredibly helpful.
| 2024-12-25T04:59:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hlu3w9/seeking_advice_on_flux_lora_finetuning_with_more/ | Quantum_Qualia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlu3w9 | false | null | t3_1hlu3w9 | /r/LocalLLaMA/comments/1hlu3w9/seeking_advice_on_flux_lora_finetuning_with_more/ | false | false | self | 295 | {'enabled': False, 'images': [{'id': '_j2LuMGrjn2wZ17kjoxH86flHtGhdfwQBHAF5Wrd2xQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WeJ441fuRsAqe3mAFrjrWgP_nTMh2ThXx22RE-OGFeg.jpg?width=108&crop=smart&auto=webp&s=be6ce07bb5ebf2c01ecb351785ea28ef2084ad56', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WeJ441fuRsAqe3mAFrjrWgP_nTMh2ThXx22RE-OGFeg.jpg?width=216&crop=smart&auto=webp&s=a198d08bea4d8b09bcb5e349650333249608016a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WeJ441fuRsAqe3mAFrjrWgP_nTMh2ThXx22RE-OGFeg.jpg?width=320&crop=smart&auto=webp&s=629d41fcd32580e65fd5509fd05142675a2baa03', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WeJ441fuRsAqe3mAFrjrWgP_nTMh2ThXx22RE-OGFeg.jpg?width=640&crop=smart&auto=webp&s=9620c26b1df228351912ca2e163eed37068f675c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WeJ441fuRsAqe3mAFrjrWgP_nTMh2ThXx22RE-OGFeg.jpg?width=960&crop=smart&auto=webp&s=a782dc251f520aa9754c4687587a0498c50f2c96', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WeJ441fuRsAqe3mAFrjrWgP_nTMh2ThXx22RE-OGFeg.jpg?width=1080&crop=smart&auto=webp&s=fe76c7e23933147d7472fbef7ee0e487b56be907', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WeJ441fuRsAqe3mAFrjrWgP_nTMh2ThXx22RE-OGFeg.jpg?auto=webp&s=063acc2d9fa7c1fc526412fc46ce70e95cc01b86', 'width': 1200}, 'variants': {}}]} |
RAG an entire codebase? | 5 | I mostly use LLMs for coding help. I started self-hosting Ollama and Open WebUI. I recently learned about RAG, and I started wondering about putting an entire codebase into it and seeing if it becomes more useful.
I searched the web, and I came across this [repo](https://github.com/Neverdecel/CodeRAG).
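From what I've gathered so far, the core loop these tools implement is roughly the following - a minimal sketch using chromadb's default embedder; real projects add language-aware chunking, metadata and reranking:

```python
from pathlib import Path
import chromadb

client = chromadb.Client()
collection = client.create_collection("codebase")

# Naive ingestion: one chunk per ~40 lines of each source file
for path in Path("my_repo").rglob("*.py"):
    lines = path.read_text(errors="ignore").splitlines()
    for i in range(0, len(lines), 40):
        chunk = "\n".join(lines[i:i + 40])
        collection.add(documents=[chunk], ids=[f"{path}:{i}"])

# Retrieval: the top chunks get pasted into the LLM prompt as context
hits = collection.query(query_texts=["where is the auth middleware configured?"], n_results=5)
for doc in hits["documents"][0]:
    print(doc[:120], "---")
```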
Does anyone know of other open source repos like this?
Or have any good tutorials on it? | 2024-12-25T05:40:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hlup69/rag_an_entire_codebase/ | Corpo_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlup69 | false | null | t3_1hlup69 | /r/LocalLLaMA/comments/1hlup69/rag_an_entire_codebase/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'Q-RQD0nQOYAMCtzCTsr5Wr9ZPDiegI2swgWzAwxX7Is', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Qhd2m2dxZY1E-t723mRcSs8Fw0a8TkyCONXUlzI9bRI.jpg?width=108&crop=smart&auto=webp&s=2039ad70f1b40484bbf62ee6430edf3cb1721cce', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Qhd2m2dxZY1E-t723mRcSs8Fw0a8TkyCONXUlzI9bRI.jpg?width=216&crop=smart&auto=webp&s=094e935186b999361621eb9172264ffb4c5b2f8d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Qhd2m2dxZY1E-t723mRcSs8Fw0a8TkyCONXUlzI9bRI.jpg?width=320&crop=smart&auto=webp&s=31a2ae25d08e64d9ef1255bfb21690a3463cd0d4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Qhd2m2dxZY1E-t723mRcSs8Fw0a8TkyCONXUlzI9bRI.jpg?width=640&crop=smart&auto=webp&s=65e2799289ac3575751dd221a15ae36f6bd2ef77', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Qhd2m2dxZY1E-t723mRcSs8Fw0a8TkyCONXUlzI9bRI.jpg?width=960&crop=smart&auto=webp&s=b1bba7a978c01f32f0095094e859395ced191d24', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Qhd2m2dxZY1E-t723mRcSs8Fw0a8TkyCONXUlzI9bRI.jpg?width=1080&crop=smart&auto=webp&s=05468fcef5e8c93d64be479aedc71694a2a55c02', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Qhd2m2dxZY1E-t723mRcSs8Fw0a8TkyCONXUlzI9bRI.jpg?auto=webp&s=e30e444a041d198133a1e5031dc9dc7cb2f6245d', 'width': 1200}, 'variants': {}}]} |
iGPU for LLM use? | 2 | Hello all,
I enjoy running LLMs and was curious if I could offload some of the CPU inference and utilize my iGPU as well. I have a powerful desktop but often travel with just my laptop, which has 24GB RAM, a Ryzen 7 4700U and an NVMe SSD. I wanted to load up roughly a 3B LLM like Llama 3.2; however, inference is very slow. Is there a way the integrated GPU could help in processing this?
Thanks all | 2024-12-25T05:44:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hlur0y/igpu_for_llm_use/ | Zawseh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlur0y | false | null | t3_1hlur0y | /r/LocalLLaMA/comments/1hlur0y/igpu_for_llm_use/ | false | false | self | 2 | null |
VLLM Universal Assisted Generation | 1 | [removed] | 2024-12-25T06:37:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hlvgi3/vllm_universal_assisted_generation/ | throwaway2873738277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlvgi3 | false | null | t3_1hlvgi3 | /r/LocalLLaMA/comments/1hlvgi3/vllm_universal_assisted_generation/ | false | false | self | 1 | null |
Forgot if I posted this here but completly local offline FREE version of NotebookLM I made before it was released to the public. Ollama + Xtts | 1 | [removed] | 2024-12-25T07:10:25 | https://github.com/DrewThomasson/doc2interview | Impossible_Belt_7757 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hlvwfn | false | null | t3_1hlvwfn | /r/LocalLLaMA/comments/1hlvwfn/forgot_if_i_posted_this_here_but_completly_local/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Q9ckGhtvS-GJifPJ3fugaY_WIL1xD9sibWdXlxDNvUo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/J5u3RwTM7RfOOcIXo0lMCU3pKfMuWAzAQMn5asAsMOw.jpg?width=108&crop=smart&auto=webp&s=7dca8cab93c2ab8814aad1e4b6863260766b03c6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/J5u3RwTM7RfOOcIXo0lMCU3pKfMuWAzAQMn5asAsMOw.jpg?width=216&crop=smart&auto=webp&s=bf49f2a997b8c8335fa85ee8fb7055635ac46f38', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/J5u3RwTM7RfOOcIXo0lMCU3pKfMuWAzAQMn5asAsMOw.jpg?width=320&crop=smart&auto=webp&s=e0774928d022ae2835be88a5a925edecc9260968', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/J5u3RwTM7RfOOcIXo0lMCU3pKfMuWAzAQMn5asAsMOw.jpg?width=640&crop=smart&auto=webp&s=4075f3bdcbc6c7348ddcbc5f448e8a4e8af224fb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/J5u3RwTM7RfOOcIXo0lMCU3pKfMuWAzAQMn5asAsMOw.jpg?width=960&crop=smart&auto=webp&s=69cc351c6264edae777db9b32b60800f9331f26c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/J5u3RwTM7RfOOcIXo0lMCU3pKfMuWAzAQMn5asAsMOw.jpg?width=1080&crop=smart&auto=webp&s=ce246b1bbc3bac69a90530d969d25f663bc63948', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/J5u3RwTM7RfOOcIXo0lMCU3pKfMuWAzAQMn5asAsMOw.jpg?auto=webp&s=b80e817b154fb069daa920ecfa42552168d514aa', 'width': 1200}, 'variants': {}}]} |
|
2x AMD MI60 working with vLLM! Llama3.3 70B reaches 20 tokens/s | 88 | Hi everyone,
Two months ago I posted 2x AMD MI60 card inference speeds ([link](https://www.reddit.com/r/LocalLLaMA/comments/1g37nad/2x_amd_mi60_inference_speed_mlcllm_is_a_fast/)). llama.cpp was not fast enough for 70B (was getting around 9 t/s). Now, thanks to the amazing work of lamikr ([github](https://github.com/lamikr/rocm_sdk_builder/commit/c337b2f5da1ebe9c5dcfc799a52e00020ffcf1c0#diff-5e6f75860a086816a1bb585ee23a164c0d68059c9bf4389cc9bcd633444a5455R33)), I am able to build both triton and vllm in my system. I am getting around 20 t/s for Llama3.3 70B.
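For anyone who wants to reproduce this, the Python entry point is the standard vLLM API - something like the sketch below. The model path is a placeholder for whatever 70B quant you have on disk, and `tensor_parallel_size=2` is what splits it across the two MI60s:

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="/models/Llama-3.3-70B-Instruct-GPTQ",  # placeholder: any 70B quant on disk
    quantization="gptq",
    tensor_parallel_size=2,      # split across both MI60s
    max_model_len=4096,
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain MoE routing in two sentences."], params)
print(outputs[0].outputs[0].text)
```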
I forked [triton](https://github.com/Said-Akbar/triton-gcn5) and [vllm](https://github.com/Said-Akbar/vllm-rocm) repositories by making those changes made by lamikr. I added instructions on how to install both of them on Ubuntu 22.04. In short, you need ROCm 6.2.2 with latest pytorch 2.6.0 to get such speeds. Also, vllm supports GGUF, GPTQ, FP16 on AMD GPUs! | 2024-12-25T07:17:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hlvzjo/2x_amd_mi60_working_with_vllm_llama33_70b_reaches/ | MLDataScientist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlvzjo | false | null | t3_1hlvzjo | /r/LocalLLaMA/comments/1hlvzjo/2x_amd_mi60_working_with_vllm_llama33_70b_reaches/ | false | false | self | 88 | {'enabled': False, 'images': [{'id': 'sC-XntWFt9qgiEcYPqGQ3Z--Id0B5naJ7g_6KwgH7cQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FzxvZgHLdCtoAI-scUXJ3flUosnpgpmhP-1uG-PwLhY.jpg?width=108&crop=smart&auto=webp&s=d5a5b184c2e9dbb9fc5580b9819210c053d13579', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FzxvZgHLdCtoAI-scUXJ3flUosnpgpmhP-1uG-PwLhY.jpg?width=216&crop=smart&auto=webp&s=0dde7e4d17ab060e2bff0715756c3dcf936fc5eb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FzxvZgHLdCtoAI-scUXJ3flUosnpgpmhP-1uG-PwLhY.jpg?width=320&crop=smart&auto=webp&s=983c3f504661469d87a5874a6b44b178702ae5ea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FzxvZgHLdCtoAI-scUXJ3flUosnpgpmhP-1uG-PwLhY.jpg?width=640&crop=smart&auto=webp&s=18971780f62fffed5577c18cfd2baf6b4f01c346', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FzxvZgHLdCtoAI-scUXJ3flUosnpgpmhP-1uG-PwLhY.jpg?width=960&crop=smart&auto=webp&s=10e0fd25e21ecbb3de849a711c17519989bd22cd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FzxvZgHLdCtoAI-scUXJ3flUosnpgpmhP-1uG-PwLhY.jpg?width=1080&crop=smart&auto=webp&s=2c0ff80581f1cf4a5b18b87cd63d0aff60582e61', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FzxvZgHLdCtoAI-scUXJ3flUosnpgpmhP-1uG-PwLhY.jpg?auto=webp&s=f8980e8cff4a91aa884fa00519a40ed3d4b78f83', 'width': 1200}, 'variants': {}}]} |
Google is using Anthropic’s Claude to improve its Gemini AI | 0 | [https://techcrunch.com/2024/12/24/google-is-using-anthropics-claude-to-improve-its-gemini-ai/](https://techcrunch.com/2024/12/24/google-is-using-anthropics-claude-to-improve-its-gemini-ai/) | 2024-12-25T07:26:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hlw3v2/google_is_using_anthropics_claude_to_improve_its/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlw3v2 | false | null | t3_1hlw3v2 | /r/LocalLLaMA/comments/1hlw3v2/google_is_using_anthropics_claude_to_improve_its/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'L9Bv2qi46Ie3yfy8eH4Wgy70MEHcfwZeauafDX9bo_s', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/hdqqN0KDdmJWHFQ1hBx7VJJi35cIsuTL-iBnpYUb79E.jpg?width=108&crop=smart&auto=webp&s=571f120679eef3d9f4ed6355735adffa0c951c51', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/hdqqN0KDdmJWHFQ1hBx7VJJi35cIsuTL-iBnpYUb79E.jpg?width=216&crop=smart&auto=webp&s=650a514768f8ecf7ae978341f96358f7342a89fe', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/hdqqN0KDdmJWHFQ1hBx7VJJi35cIsuTL-iBnpYUb79E.jpg?width=320&crop=smart&auto=webp&s=18dac52704ae2f9eb4fee316f827fad86b980b42', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/hdqqN0KDdmJWHFQ1hBx7VJJi35cIsuTL-iBnpYUb79E.jpg?width=640&crop=smart&auto=webp&s=e6484e1481867fc241dea5157f095236f2e71ab5', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/hdqqN0KDdmJWHFQ1hBx7VJJi35cIsuTL-iBnpYUb79E.jpg?width=960&crop=smart&auto=webp&s=444a4199915ede53cfb5f1264547bdcf5f4906d2', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/hdqqN0KDdmJWHFQ1hBx7VJJi35cIsuTL-iBnpYUb79E.jpg?width=1080&crop=smart&auto=webp&s=524f0e2f571279ec3057a6e006f23a89aa58cb2f', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/hdqqN0KDdmJWHFQ1hBx7VJJi35cIsuTL-iBnpYUb79E.jpg?auto=webp&s=d05b262eab07d38af19487259329af3141e4764b', 'width': 1200}, 'variants': {}}]} |
Does my LLM have alzheimers? It's more common than you think. | 0 | I'm sure some of you that use LLMs for conversational applications have noticed how depressingly forgetful they can be at times. After a brief period, they can become disoriented and confused, leaving me to guide them back on track. It's not uncommon for me to need to remind an LLM of our previous discussions within the same conversation just so it doesn't drift off-topic or forget essential context.
This limitation has led me to ask: what options do I have that are compatible with LM Studio to help improve their long-term memory retention? I'd gladly invest in additional resources if it meant they could retain information for longer than a day.
I'm specifically asking about LM Studio because it's the only platform I can get working with remote access from my phone and the LMSA app. But what are 'LangChain' and MemGPT? And are those things that can be hosted locally, or do I need a connection to their services for them to work?
Any insights, suggestions, or workarounds would be greatly appreciated | 2024-12-25T07:29:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hlw55j/does_my_llm_have_alzheimers_its_more_common_than/ | switchpizza | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlw55j | false | null | t3_1hlw55j | /r/LocalLLaMA/comments/1hlw55j/does_my_llm_have_alzheimers_its_more_common_than/ | false | false | self | 0 | null |
2x3090 is close to great, but not enough | 44 | Since getting my 2nd 3090 to run Llama 3.x 70B and setting everything up with TabbyAPI, litellm and open-webui, I'm amazed at how responsive and fun to use this setup is, but I can't help but feel that I'm this close to greatness and not there just yet.
I can't fit Llama 3.3 70B at 6.0bpw with any context into 48GB, but I'd love to try it for programming questions. At 4.65bpw I can only use around 20k context, a far cry from the model's 131,072 max and Claude's supposed 200k. To not compromise on context or quantization, a minimum of 105GB VRAM is needed - that's 4x3090. Am I just being silly and chasing diminishing returns, or do others with 2x24GB cards feel the same? I think I was happier with 1 card and my Mac, having accepted that local is good for privacy but not enough to compete with hosted on usability. Now I see that local is much better at everything, but I still lack hardware. | 2024-12-25T07:37:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hlw8jo/2x3090_is_close_to_great_but_not_enough/ | 330d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlw8jo | false | null | t3_1hlw8jo | /r/LocalLLaMA/comments/1hlw8jo/2x3090_is_close_to_great_but_not_enough/ | false | false | self | 44 | null
Qwen just got rid of their Apache 2.0 license for QVQ 72B | 319 | Just a heads up for those who it might affect differently than the prior Apache 2.0 license.
So far I'm reading that if you use any of the output to create, train, fine-tune, you need to attribute that it was either:
* Built with Qwen, or
* Improved using Qwen
And that if you have 100 million monthly active users you need to apply for a license.
Some other things too, but I'm not a lawyer.
[https://huggingface.co/Qwen/QVQ-72B-Preview/commit/53b19b90d67220c896e868a809ef1b93d0c8dab8](https://huggingface.co/Qwen/QVQ-72B-Preview/commit/53b19b90d67220c896e868a809ef1b93d0c8dab8) | 2024-12-25T07:56:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hlwhav/qwen_just_got_rid_of_their_apache_20_license_for/ | Many_SuchCases | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlwhav | false | null | t3_1hlwhav | /r/LocalLLaMA/comments/1hlwhav/qwen_just_got_rid_of_their_apache_20_license_for/ | false | false | self | 319 | {'enabled': False, 'images': [{'id': 'EZQpn5cdcqnNPLKoAK9W_WlxjhoxiRMs0mpQ7ttLXfE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?width=108&crop=smart&auto=webp&s=c0bf81b62705f2b0dc0d8596760557ddc9665c8d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?width=216&crop=smart&auto=webp&s=4a93bddd53b45c8458cbfe0adf63682a0aef2d46', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?width=320&crop=smart&auto=webp&s=33e1c45b457c0c3935b9e1a996e9ada8fabe1624', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?width=640&crop=smart&auto=webp&s=462a35429bf616091d063edc5cc35ddc44e4cbac', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?width=960&crop=smart&auto=webp&s=9c0e1d2970a0beab1824a6efcbb9d82e6ea9b0a7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?width=1080&crop=smart&auto=webp&s=65554edd06c19f8b3a9ea65e671acf94a8162d9e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/spRLQ3wT7TS33Oo4-RCNI2yTc0Eu3PJ6i9sxZggdgfA.jpg?auto=webp&s=b143ed79d4208bdc81ca3300b579c19c11a93700', 'width': 1200}, 'variants': {}}]} |
My STORY WRITING PROMPT which allows you to PRECISELY CONTROL various ELEMENTS of the story. You can chose to use this WITHOUT PRECISE CONTROL TOO. You can also ADD OR REMOVE ANY ELEMENTS you do not want to control very easily. Please try it out and suggest any amendments. LMK if you need help. | 1 | [removed] | 2024-12-25T08:13:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hlwp6v/my_story_writing_prompt_which_allows_you_to/ | Super-Muffin-1230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlwp6v | false | null | t3_1hlwp6v | /r/LocalLLaMA/comments/1hlwp6v/my_story_writing_prompt_which_allows_you_to/ | false | false | self | 1 | null |
STORY WRITING PROMPT which allows you to PRECISELY CONTROL various ELEMENTS of the story. You can chose to use this WITHOUT PRECISE CONTROL TOO. You can also ADD OR REMOVE ANY ELEMENTS you do not want to control very easily. Try it out and suggest any amendments. | 1 | [removed] | 2024-12-25T08:25:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hlwuh2/story_writing_prompt_which_allows_you_to/ | Super-Muffin-1230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlwuh2 | false | null | t3_1hlwuh2 | /r/LocalLLaMA/comments/1hlwuh2/story_writing_prompt_which_allows_you_to/ | false | false | self | 1 | null |
What's your current workflow for fine-tuning on CPU? | 1 | I spent the last couple of days building a CPU-friendly solution using ctransformers and transformers to train a LoRA fine-tune of a Llama 7B model. Then I merged the LoRA weights with the base layer weights, quantized that, and converted it to GGUF, only to find that I can't load the new model. I'm getting the error: Failed to create LLM 'gguf' from 'D:\\models\\finalModel\\finalModel.gguf'. I can't seem to find much documentation on this approach, so I'm wondering what those of you with similar solutions are doing? Ollama? Are you writing in C++ or Python? Thanks for answering | 2024-12-25T08:40:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hlx0ni/whats_your_current_workflow_for_fine_tuning_on_cpu/ | Separate-Proof4309 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlx0ni | false | null | t3_1hlx0ni | /r/LocalLLaMA/comments/1hlx0ni/whats_your_current_workflow_for_fine_tuning_on_cpu/ | false | false | self | 1 | null
Looks like deepseekv3 API is up | 115 | 2024-12-25T08:45:05 | shing3232 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hlx2n8 | false | null | t3_1hlx2n8 | /r/LocalLLaMA/comments/1hlx2n8/looks_like_deepseekv3_api_is_up/ | false | false | nsfw | 115 | {'enabled': True, 'images': [{'id': '9owOoX7CM4Q6F8LaeAIjMxDUO4S9sUDdlSTZluzda-s', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=108&crop=smart&auto=webp&s=7a33c5caecf834e821a4c91434a1c7b03c622d4f', 'width': 108}, {'height': 96, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=216&crop=smart&auto=webp&s=3de8069583126cf724b147ed36f28026ef5e79dc', 'width': 216}, {'height': 143, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=320&crop=smart&auto=webp&s=40f3a6dfe9989d501418c555178c84b685192d8d', 'width': 320}, {'height': 286, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=640&crop=smart&auto=webp&s=533d2ef5100b510c6c29acb575def9ec5cca10c3', 'width': 640}, {'height': 429, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=960&crop=smart&auto=webp&s=cc199c7645ca7dcf8f551795d67589731f043044', 'width': 960}, {'height': 482, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=1080&crop=smart&auto=webp&s=e71f41236e0ca2b94a9fa8da0548bc5a593a6491', 'width': 1080}], 'source': {'height': 572, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?auto=webp&s=fe12fa3d840f7c99564bfd511e660c1e10be7249', 'width': 1280}, 'variants': {'nsfw': {'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=75fdfa231cf7e75fc562dfe9cae0de06e9cd4e9c', 'width': 108}, {'height': 96, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=3f54258f4273bebe34f6200b9cc3a2a82fbb6ed3', 'width': 216}, {'height': 143, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=e562f6449762151ba7e95e7154bcf8ebfdabde87', 'width': 320}, {'height': 286, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=1e0f30511183ec91ed9d5ec3ac6dd7e31355b14b', 'width': 640}, {'height': 429, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=e43de3ebf9519fbc427d391a66aec5ddfff5c111', 'width': 960}, {'height': 482, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=76da1aac5c5f6c6f374b24856ab07e883d1864c7', 'width': 1080}], 'source': {'height': 572, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?blur=40&format=pjpg&auto=webp&s=f5d0f255f8d0d011b920a984c4e43816afad1eeb', 'width': 1280}}, 'obfuscated': {'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=75fdfa231cf7e75fc562dfe9cae0de06e9cd4e9c', 'width': 108}, {'height': 96, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=3f54258f4273bebe34f6200b9cc3a2a82fbb6ed3', 'width': 216}, {'height': 143, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=e562f6449762151ba7e95e7154bcf8ebfdabde87', 'width': 320}, {'height': 286, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=1e0f30511183ec91ed9d5ec3ac6dd7e31355b14b', 'width': 640}, {'height': 429, 'url': 
'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=e43de3ebf9519fbc427d391a66aec5ddfff5c111', 'width': 960}, {'height': 482, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=76da1aac5c5f6c6f374b24856ab07e883d1864c7', 'width': 1080}], 'source': {'height': 572, 'url': 'https://preview.redd.it/vxkmwpwljy8e1.jpeg?blur=40&format=pjpg&auto=webp&s=f5d0f255f8d0d011b920a984c4e43816afad1eeb', 'width': 1280}}}}]} |
||
How do you mean? | 1 | [removed] | 2024-12-25T08:47:00 | Ragecommie | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hlx3hj | false | null | t3_1hlx3hj | /r/LocalLLaMA/comments/1hlx3hj/how_do_you_mean/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'VJ0uNnOwSjz3dJyCRT-Ux9kxGRtEpezGF-7ua9xNN6Q', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/e0nwekbyjy8e1.png?width=108&crop=smart&auto=webp&s=29753146b9752c54b1c8d1a1ea670d292557510b', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/e0nwekbyjy8e1.png?width=216&crop=smart&auto=webp&s=f9dd28b31533d20fc9e83f72bb4f3229cf373357', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/e0nwekbyjy8e1.png?width=320&crop=smart&auto=webp&s=5c9df4e7e03ef35eb3a8ed0f37a4236bfc424440', 'width': 320}, {'height': 340, 'url': 'https://preview.redd.it/e0nwekbyjy8e1.png?width=640&crop=smart&auto=webp&s=8cefeea503e43810c5d12047cd77b6849a0f8a37', 'width': 640}], 'source': {'height': 404, 'url': 'https://preview.redd.it/e0nwekbyjy8e1.png?auto=webp&s=9c04c53cebaf7e490adced431458f3ba2a55d05a', 'width': 759}, 'variants': {}}]} |
||
Deepseek V3 is online | 86 | *[screenshot]*
They will announce later. | 2024-12-25T09:08:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hlxcw5/deepseek_v3_is_online/ | Round-Lucky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlxcw5 | false | null | t3_1hlxcw5 | /r/LocalLLaMA/comments/1hlxcw5/deepseek_v3_is_online/ | false | false | self | 86 | null |
DeepSeek Internal API updated: deepseek-v3-600b | 1 | [removed] | 2024-12-25T09:29:59 | lty5921 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hlxmk2 | false | null | t3_1hlxmk2 | /r/LocalLLaMA/comments/1hlxmk2/deepseek_internal_api_updated_deepseekv3600b/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'wV7E_Ug1ZL2--yIJHfMbihmHIHPZtESoUzlL-4u-sxE', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/k06lbefmry8e1.png?width=108&crop=smart&auto=webp&s=f4c1cce31303571284592cae38a32a73418833de', 'width': 108}, {'height': 86, 'url': 'https://preview.redd.it/k06lbefmry8e1.png?width=216&crop=smart&auto=webp&s=7a20296c5d4730ba2e71a4aabe2304198419d2ea', 'width': 216}, {'height': 128, 'url': 'https://preview.redd.it/k06lbefmry8e1.png?width=320&crop=smart&auto=webp&s=46f97036ddf2ee4843be26c129972cc8903ef94d', 'width': 320}, {'height': 256, 'url': 'https://preview.redd.it/k06lbefmry8e1.png?width=640&crop=smart&auto=webp&s=a0c520e00955e66f9071a288a0d0b04f820ef961', 'width': 640}, {'height': 384, 'url': 'https://preview.redd.it/k06lbefmry8e1.png?width=960&crop=smart&auto=webp&s=7f96b87e0120af9676e4462a4ae3440063c5cd6f', 'width': 960}, {'height': 433, 'url': 'https://preview.redd.it/k06lbefmry8e1.png?width=1080&crop=smart&auto=webp&s=a78408b508eaa64e48fdb75de9fe5ab73ed156aa', 'width': 1080}], 'source': {'height': 498, 'url': 'https://preview.redd.it/k06lbefmry8e1.png?auto=webp&s=9b80c88af110b8d1b78bd9b98d1dc4d4dbbe6407', 'width': 1242}, 'variants': {}}]} |
||
Prompt for RAG with Qwen2.5 7B | 1 | [removed] | 2024-12-25T09:32:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hlxnrd/prompt_for_rag_with_qwen25_7b/ | BackgroundLow3793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlxnrd | false | null | t3_1hlxnrd | /r/LocalLLaMA/comments/1hlxnrd/prompt_for_rag_with_qwen25_7b/ | false | false | self | 1 | null |
RAG / LLM document search & ingest tools, local only, linux, foss, but also very trustworthy from big SW vendor? | 1 | ISO RAG / LLM document search & ingest tools, local only, linux, foss, but also very trustworthy authored / maintained from big / reputable SW vendor?
Basically what could one choose and very likely just install & run the latest thing without having too much to be concerned about in terms of SW direct / indirect supply chain, being able to trust it's totally offline & local, has had reasonable care wrt. quality & security in development & distribution?
e.g. if something came directly from & authored / maintained redhat / ibm, canonical, mozilla, opensuse, apache, debian, docker, etc. then one would probably be more or less able to believe it's about as trustworthy as their other main linux / foss sw.
Less so with facebook, google, apple, microsoft, adobe, amazon, etc. if only just because much of their stuff is "intrinsically" cloud oriented / connected and otherwise tends to have more ads / telemetry or less absolutely unconcerning privacy policies etc. but there are exceptions of course.
But if you're looking for some "baseline" viable utility that you could just use / recommend for any general personal or business use case that's FOSS what sort of CLI / TUI / GUI / web-ui offline app / docker container / snap / flatpak / appimage etc. is in this category of utility vs. reputation & maintenance status?
Obviously there are lots of good community made / tiny startup tech org FOSS ones like what's possible with ollama, sillytavern, etc. etc. but given that they're tending to be from much smaller tech organizations or even just community projects it's harder to just point to X and say "hey, install this as an option for X use case" and not necessarily get some cases where it's not easily able to be used if it's not easy for IT or whatever to vet as OK to the level that "libreoffice", "firefox", "postgres", etc. is prominently widely accepted / known.
I see the likes of ibm / redhat, salesforce, microsoft, etc. making plenty of good ML models and ML adjacent foundational SW for search / ingestion / whatever but I don't recall seeing any prominent "app" solutions using the underlying RAG / LLM / document ingestion / search etc. tools that are being open sourced from similar organizations.
Microsoft wants to sell you copilot, apple wants to sell you macs and macos and apps / siri / apple AI. Microsoft, fb, google, et. al. wants to be a panopticon. But surely there are some big tech or big OSS orgs that just want to make good open infrastructure / utility tools and have done so at the GUI level? | 2024-12-25T10:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hly45u/rag_llm_document_search_ingest_tools_local_only/ | Calcidiol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hly45u | false | null | t3_1hly45u | /r/LocalLLaMA/comments/1hly45u/rag_llm_document_search_ingest_tools_local_only/ | false | false | self | 1 | null |
DeepSeek V3 released | 1 | [removed] | 2024-12-25T10:12:44 | Formal-Narwhal-1610 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hly5nw | false | null | t3_1hly5nw | /r/LocalLLaMA/comments/1hly5nw/deepseek_v3_released/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'OejNgp-7uWC632Er1LvI1kd-O3pTObHlFFzNDOpZMN0', 'resolutions': [{'height': 190, 'url': 'https://preview.redd.it/490uy6q8zy8e1.jpeg?width=108&crop=smart&auto=webp&s=5418c84370aedc737b6526255c7dc5b44d11d885', 'width': 108}, {'height': 380, 'url': 'https://preview.redd.it/490uy6q8zy8e1.jpeg?width=216&crop=smart&auto=webp&s=3eb821738eb2260e9187f8d9ae64621c5cb8954e', 'width': 216}, {'height': 563, 'url': 'https://preview.redd.it/490uy6q8zy8e1.jpeg?width=320&crop=smart&auto=webp&s=2990360f67f274519f6ee5d93bd7a8190396e6cc', 'width': 320}, {'height': 1127, 'url': 'https://preview.redd.it/490uy6q8zy8e1.jpeg?width=640&crop=smart&auto=webp&s=602fe7227a45fd21bc3f1e2ac15d20c6f390119b', 'width': 640}, {'height': 1691, 'url': 'https://preview.redd.it/490uy6q8zy8e1.jpeg?width=960&crop=smart&auto=webp&s=f57f9609d5fdbd837c2264a68c629e30b9a5b9e5', 'width': 960}, {'height': 1902, 'url': 'https://preview.redd.it/490uy6q8zy8e1.jpeg?width=1080&crop=smart&auto=webp&s=59fbab7cb8630098284a0678862ed31ccd70e2ee', 'width': 1080}], 'source': {'height': 2061, 'url': 'https://preview.redd.it/490uy6q8zy8e1.jpeg?auto=webp&s=8a8b1a1a2d750c10b677bd115308cb12cec06041', 'width': 1170}, 'variants': {}}]} |
||
Need hardware update? If so what should I change? | 1 | [removed] | 2024-12-25T10:18:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hly8bd/need_hardware_update_if_so_what_should_i_change/ | Repsol_Honda_PL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hly8bd | false | null | t3_1hly8bd | /r/LocalLLaMA/comments/1hly8bd/need_hardware_update_if_so_what_should_i_change/ | false | false | self | 1 | null |
Cline guidance looking for tips! | 1 | [removed] | 2024-12-25T10:33:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hlyf68/cline_guidance_looking_for_tips/ | Vexed_Ganker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hlyf68 | false | null | t3_1hlyf68 | /r/LocalLLaMA/comments/1hlyf68/cline_guidance_looking_for_tips/ | false | false | self | 1 | null |