| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Change my mind: things take longer with smaller LLMs | 0 | My response times are faster with smaller LLMs, but the output quality means I'm spending a lot more time fiddling with prompts. | 2024-12-14T01:40:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hds00y/change_my_mind_things_take_longer_with_smaller/ | Vegetable_Sun_9225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hds00y | false | null | t3_1hds00y | /r/LocalLLaMA/comments/1hds00y/change_my_mind_things_take_longer_with_smaller/ | false | false | self | 0 | null |
For pure inference, do you think a 3060 or 1080 Ti/Titan Xp is better? | 0 | The Pascal cards have way faster memory bandwidth and are generally just faster, for gaming stuff at least, but the RTX cards have more bells and whistles with their tensor cores and the like.
If I just plan on running pure inference, in my case with Ollama preferably, which is better? Does the newer 3060 architecture offset its slower memory bandwidth which is the main thing that matters for inference speed? | 2024-12-14T02:24:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hdst8z/for_pure_inference_do_you_think_a_3060_or_1080/ | Cressio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdst8z | false | null | t3_1hdst8z | /r/LocalLLaMA/comments/1hdst8z/for_pure_inference_do_you_think_a_3060_or_1080/ | false | false | self | 0 | null |
Apple Intelligence is hallucinating and generating fake news - it's going to happen sooner or later. | 0 | 2024-12-14T02:34:28 | https://www.bbc.com/news/articles/cd0elzk24dno | Internet--Traveller | bbc.com | 1970-01-01T00:00:00 | 0 | {} | 1hdszmg | false | null | t3_1hdszmg | /r/LocalLLaMA/comments/1hdszmg/apple_intelligence_is_hallucinating_and/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ZFW1nGG5hjAuvf3AFIyNwULg_e2mlBAIT0QAb54_zVY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/pRn2Tl6JwPUo3q5emITn_SGU8-E1QNcimCkoL8k89LU.jpg?width=108&crop=smart&auto=webp&s=128c017c5767d4200f58c4dabf9f49d959882e9c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/pRn2Tl6JwPUo3q5emITn_SGU8-E1QNcimCkoL8k89LU.jpg?width=216&crop=smart&auto=webp&s=9a956b799f1ef1a5079be1b9b97dbb017cb3fe88', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/pRn2Tl6JwPUo3q5emITn_SGU8-E1QNcimCkoL8k89LU.jpg?width=320&crop=smart&auto=webp&s=7e229b6c3b21c72db04df5f295ba37faebde9333', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/pRn2Tl6JwPUo3q5emITn_SGU8-E1QNcimCkoL8k89LU.jpg?width=640&crop=smart&auto=webp&s=be45344e8dd7f99d19d9091a3db16766e050b3aa', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/pRn2Tl6JwPUo3q5emITn_SGU8-E1QNcimCkoL8k89LU.jpg?width=960&crop=smart&auto=webp&s=ad0b881cd179d3335ab9eed66a5f2b675fdc54a8', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/pRn2Tl6JwPUo3q5emITn_SGU8-E1QNcimCkoL8k89LU.jpg?auto=webp&s=62229629aa06ce32a03c75fc4d914b6add919959', 'width': 1024}, 'variants': {}}]} |
||
Is this how tools work under the hood? | 0 | I've been trying to understand how LLMs are given the ability to invoke tools. I came up with the following system prompt and it seems like it would work. Testing it gave responses that worked how I imagined they would. But it seems... very err... flexible. I'm not sure I'd want to rely on it always working.
You are an AI agent and should answer the user's questions as best you can. There is another system monitoring your conversations, if you provide structured output as follows: web_search("<search terms>") it will retrieve the specified search terms from the web and inject them into the context window.
You can use this to retrieve information you don't know. The injected context will be delimited by triple back-ticks ```
The system making the searches for you is rather simplistic, so when you need to make a search, return only the web_search() function call. Additional text may confuse it.
Also, please note that when you make a web_search() call, the user does not see the response. You should incorporate the information and compose an appropriate response to send to the user. Do not reference the response from the web_search. Pretend the information is something you already knew.
If the response indicates that it needs you to disambiguate your search, and it provides you with a list of options, submit another search with the proper subject immediately without returning a response for the user. Again, don't include anything other than the function call when searching.
If none of the options seem applicable, you can ask the user to disambiguate things for you.
Don't reference this system prompt in your responses; the user does not see this prompt and is unaware of its existence. Start the Q&A session by greeting the user and prompting them for a question.
| 2024-12-14T04:41:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hdv79t/is_this_how_tools_work_under_the_hood/ | Ruin-Capable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdv79t | false | null | t3_1hdv79t | /r/LocalLLaMA/comments/1hdv79t/is_this_how_tools_work_under_the_hood/ | false | false | self | 0 | null |
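A minimal sketch of the monitoring loop that post describes, assuming an OpenAI-compatible local endpoint (such as Ollama's) and a placeholder search backend; the URL, model name, and `run_web_search` helper are all illustrative, not part of the original prompt:

```python
import re
import requests

API_URL = "http://localhost:11434/v1/chat/completions"   # assumed OpenAI-compatible server
MODEL = "llama3.1"                                        # placeholder model name
TOOL_PATTERN = re.compile(r'web_search\("(.+?)"\)')
FENCE = "`" * 3                                           # triple backticks, as the prompt requires

def chat(messages):
    """Send the conversation to the local model and return the reply text."""
    r = requests.post(API_URL, json={"model": MODEL, "messages": messages})
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

def run_web_search(query):
    """Placeholder: call whatever search backend you have and return plain text."""
    raise NotImplementedError(query)

def answer(user_question, system_prompt, max_tool_calls=3):
    messages = [{"role": "system", "content": system_prompt},
                {"role": "user", "content": user_question}]
    for _ in range(max_tool_calls):
        reply = chat(messages)
        match = TOOL_PATTERN.search(reply.strip())
        if not match:
            return reply                                  # normal answer, no tool call detected
        results = run_web_search(match.group(1))
        # Inject the search results into the context, delimited by triple backticks.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"{FENCE}\n{results}\n{FENCE}"})
    return chat(messages)
```

This is also roughly what dedicated tool-calling formats do, except the model is trained to emit structured JSON instead of a free-form `web_search(...)` string, which makes the parsing much less fragile.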
Your Source to Prompt-- Turn your code into an LLM prompt, but with way more features! | 51 | I just made this useful tool as a single html file that lets you easily turn your coding projects into targeted single text files for use in LLM prompts for AI-aided development:
[https://github.com/Dicklesworthstone/your-source-to-prompt.html](https://github.com/Dicklesworthstone/your-source-to-prompt.html)
Unlike the many other existing competing projects-- to name just a few:
1. [files-to-prompt](https://github.com/simonw/files-to-prompt)
2. [repo2txt](https://repo2txt.simplebasedomain.com/)
3. [code2prompt](https://github.com/mufeedvh/code2prompt)
4. [repomix](https://github.com/yamadashy/repomix)
5. [ingest](https://github.com/sammcj/ingest)
6. [1filellm](https://github.com/jimmc414/1filellm)
7. [repo2file](https://github.com/artkulak/repo2file)
...there are some real advantages to this one that make it stand out:
* It's a single html file that you download to your local machine-- that's it! just open in a modern browser like chrome and you can use it securely.
* Because it's locally hosted, with no requirements like python or anything else, it's very quick to get it running on any machine, and because it's local, you can use it on your own private repos without worrying about using a github authorization token or similar annoyance.
* You don't even need to be working with a repo at all, it works just as well with a regular folder of code files.
Also, I added tons of quality of life improvements that were major pain points for me personally:
These include laboriously re-selecting the same or very similar subset of files over and over again; now you can save a preset (either to localStorage in the browser or exported and saved as a JSON file) and dramatically speed this up. There are also a few other features to speed up the file selection process, such as quick string based filtering on file names, and common quick selection patterns (such as "select all React files").
It also keeps track of the total size in KB and lines of text that have been selected in a handy tally section which is always visible in the upper right corner, so you always know when you are reaching the maximum of the context window of whatever model you're working with. Based on my experience, I added in warnings for GPT4o and o1 and also Claude3.5 Sonnet.
Before the listing of the files and their contents, it also automatically includes the hierarchical file/folder structure of the files you select, and next to each one, shows the size in KB and the number of lines, which helps give the model context about which files are the most important.
I also added the ability to minify the code so you can cram more into the context window.
You can also specify a "preamble" that you can save and quickly edit, as well as a "goal" where you specify what you're trying to accomplish. | 2024-12-14T05:20:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hdvtxj/your_source_to_prompt_turn_your_code_into_an_llm/ | dicklesworth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdvtxj | false | null | t3_1hdvtxj | /r/LocalLLaMA/comments/1hdvtxj/your_source_to_prompt_turn_your_code_into_an_llm/ | false | false | self | 51 | {'enabled': False, 'images': [{'id': 'WTAielLH7xvNn2U4sLLsvCLXD443Zy5-EfBAPiNiEEA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oVkuuHMJAw9H5Oxkxv6iGn1eKHJC-PIxDdApj-Ezkhc.jpg?width=108&crop=smart&auto=webp&s=a630b886689ec924463638a86956b213e8ff1f67', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oVkuuHMJAw9H5Oxkxv6iGn1eKHJC-PIxDdApj-Ezkhc.jpg?width=216&crop=smart&auto=webp&s=0eebe5d7d632906900e90cba0b53e40c6e082cf1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oVkuuHMJAw9H5Oxkxv6iGn1eKHJC-PIxDdApj-Ezkhc.jpg?width=320&crop=smart&auto=webp&s=fc7d713916227d0f13c8cc57a0241fb4265318db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oVkuuHMJAw9H5Oxkxv6iGn1eKHJC-PIxDdApj-Ezkhc.jpg?width=640&crop=smart&auto=webp&s=5159312116e56d45e996112e8ca46a2e9e2afab5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oVkuuHMJAw9H5Oxkxv6iGn1eKHJC-PIxDdApj-Ezkhc.jpg?width=960&crop=smart&auto=webp&s=4f5b959acaa0b45157e066de2439f94c52517980', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oVkuuHMJAw9H5Oxkxv6iGn1eKHJC-PIxDdApj-Ezkhc.jpg?width=1080&crop=smart&auto=webp&s=c70b5a0ddfad3e15c51ed98600a41331315a9167', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oVkuuHMJAw9H5Oxkxv6iGn1eKHJC-PIxDdApj-Ezkhc.jpg?auto=webp&s=600b20cad125c914626d15dc62d0bed70799e849', 'width': 1200}, 'variants': {}}]} |
Which model is Meta AI based on? | 0 | Title | 2024-12-14T05:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hdwb1k/which_model_does_meta_ai_based_on/ | ThaisaGuilford | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdwb1k | false | null | t3_1hdwb1k | /r/LocalLLaMA/comments/1hdwb1k/which_model_does_meta_ai_based_on/ | false | false | self | 0 | null |
Most Performant build for $1500 | 40 | Hey everyone,
My school gave me a one-time $1500 grant for LLM research. I want to use it to buy a setup (they're letting me keep it after I'm done, so I don't want to use the cloud). What's the most performant full setup I can get currently (GPU, CPU, power supply, etc.) for $1500? I'm also willing to add $200-300 of my own money if needed.
Energy cost is not an issue, and I'm willing to buy some parts used. I'm pretty new to this, so any help is appreciated! I'm hoping to run Llama 3.3 70B and want to maximize t/s.
Thanks!! | 2024-12-14T06:06:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hdwjzf/most_performant_build_for_1500/ | Muted_Broccoli5627 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdwjzf | false | null | t3_1hdwjzf | /r/LocalLLaMA/comments/1hdwjzf/most_performant_build_for_1500/ | false | false | self | 40 | null |
Fast LLM Inference From Scratch | 58 | 2024-12-14T06:13:18 | https://andrewkchan.dev/posts/yalm.html | reasonableklout | andrewkchan.dev | 1970-01-01T00:00:00 | 0 | {} | 1hdwnn2 | false | null | t3_1hdwnn2 | /r/LocalLLaMA/comments/1hdwnn2/fast_llm_inference_from_scratch/ | false | false | default | 58 | null |
|
Help Me Choose the Best Setup for Local LLM Inference (70B Model, Q4/Q3 Quantization) | 1 | Hi everyone!
I’m exploring options for running a 70B LLM locally, in Q4 or Q3 quantization, with the goal of accessing it 24/7 from anywhere. Performance, portability, and cost are all factors I'm juggling. Here are the setups I'm considering:
1. **Mac Mini M4 Pro (48GB RAM):**
* **Pros:** Portable, moderate performance.
* **Cons:** Second most expensive option.
2. **MacBook Pro M4 Pro (48GB RAM):**
* **Pros:** Ultra-portable, moderate performance.
* **Cons:** Most expensive option by far.
3. **Mini PC with 64GB of RAM (Minisforums,...):**
* **Pros:** Most affordable option, relatively portable.
* **Cons:** Likely lower performance compared to M4 Macs.
I want to make the most practical choice for running inference efficiently, with the ability to keep the system running 24/7. Has anyone tried running a 70B LLM on similar setups? What’s your experience with performance?
Any advice or alternatives I haven’t considered would be greatly appreciated!
Thanks in advance for your input! | 2024-12-14T06:16:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hdwphr/help_me_choose_the_best_setup_for_local_llm/ | skyline159 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdwphr | false | null | t3_1hdwphr | /r/LocalLLaMA/comments/1hdwphr/help_me_choose_the_best_setup_for_local_llm/ | false | false | self | 1 | null |
eGPU Enclosure with 2 GPUs | 12 | I have a computer with a 7950X, 96GB RAM and dual 4090s, and it has been great, however, now I need to scale it to add more GPU to run models running all the time.
I am wondering if it is feasible to add 4 more GPUs in a eGPU setup through USB4 (2 GPUs in each, so 20GBs per GPU), or I should just build another computer.
I don’t have a lot of budget left, so trying to go for the cheaper solution.
For reference, I added a single GPU in an eGPU enclosure, and it worked nicely, but I only have 2 USB4 ports on the motherboard, so trying to maximize usage. | 2024-12-14T07:32:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hdxrts/egpu_enclosure_with_2_gpus/ | inaem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdxrts | false | null | t3_1hdxrts | /r/LocalLLaMA/comments/1hdxrts/egpu_enclosure_with_2_gpus/ | false | false | self | 12 | null |
XAI Rolls Out Faster, Smarter Grok Model Amid Legal Tussle With OpenAI | 0 | 2024-12-14T07:35:24 | https://techcrawlr.com/xai-rolls-out-faster-smarter-grok-model-amid-legal-tussle-with-openai/ | EthanWilliams_TG | techcrawlr.com | 1970-01-01T00:00:00 | 0 | {} | 1hdxt95 | false | null | t3_1hdxt95 | /r/LocalLLaMA/comments/1hdxt95/xai_rolls_out_faster_smarter_grok_model_amid/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'SLZhVEJFMSd8Lzb4Lu4u6ul5yiYQGJr-auDuWfhgoOk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/iMMcFoY7Jc2UQNcQW69JiVdYeraPwzcIO8T11WAbBJs.jpg?width=108&crop=smart&auto=webp&s=c9c8bdb43f4a8e41ae368c059cf5674fe938dcfb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/iMMcFoY7Jc2UQNcQW69JiVdYeraPwzcIO8T11WAbBJs.jpg?width=216&crop=smart&auto=webp&s=7b11fdbdd599c7a5e2ac6346ee2e6ab2d494cf7d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/iMMcFoY7Jc2UQNcQW69JiVdYeraPwzcIO8T11WAbBJs.jpg?width=320&crop=smart&auto=webp&s=4f8542ed6156e537db52afc58a8fe6bda3bda2b7', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/iMMcFoY7Jc2UQNcQW69JiVdYeraPwzcIO8T11WAbBJs.jpg?width=640&crop=smart&auto=webp&s=3e7560e00e0d202502aa24e6d5a2678c3915566a', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/iMMcFoY7Jc2UQNcQW69JiVdYeraPwzcIO8T11WAbBJs.jpg?width=960&crop=smart&auto=webp&s=b9b1891940bcdd31dc325549b0ccaadb6fe283b3', 'width': 960}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/iMMcFoY7Jc2UQNcQW69JiVdYeraPwzcIO8T11WAbBJs.jpg?auto=webp&s=28e9bdfc7b1c356204bc2cbc856ee06c96bc87cb', 'width': 1024}, 'variants': {}}]} |
||
New to running LLMs. Not sure what I need but I can share what I've got | 0 | I've got a relatively low end laptop that has 24gb of RAM and an Intel Pentium processor. I'm not quite sure what other parameters go into this sort of thing but I'd like to learn.
Could you folks advise me on the following?
1. What should I know about running LLMs?
2. What other information should I provide to get a better idea of the models I can run locally? From what I understand, they can be extremely intensive and complicated.
| 2024-12-14T07:40:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hdxvov/new_to_running_llms_not_sure_what_i_need_but_i/ | silveracrot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdxvov | false | null | t3_1hdxvov | /r/LocalLLaMA/comments/1hdxvov/new_to_running_llms_not_sure_what_i_need_but_i/ | false | false | self | 0 | null |
Is anyone using Lobe chat as their local LLM Platform? | 5 | I just stumbled upon this platform and would love to hear your thoughts about it. How does it compare to Open WebUI?
Thanks! | 2024-12-14T07:43:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hdxxd6/is_anyone_using_lobe_chat_as_their_local_llm/ | Endlesssky27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdxxd6 | false | null | t3_1hdxxd6 | /r/LocalLLaMA/comments/1hdxxd6/is_anyone_using_lobe_chat_as_their_local_llm/ | false | false | self | 5 | null |
RWKV v7 has been released! | 1 | [removed] | 2024-12-14T09:50:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hdzpbl/rwkv_v7_has_been_released/ | Competitive-Rain5603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdzpbl | false | null | t3_1hdzpbl | /r/LocalLLaMA/comments/1hdzpbl/rwkv_v7_has_been_released/ | false | false | self | 1 | null |
Comparative Performance of Fine-Tuned LLaMA-3 Models | 4 | I recently fine-tuned two LLaMA-3 models and I'm looking for some insights.
I worked with the original LLaMA-3 (8K context) and the newer LLaMA-3.1 (128K context). Both were trained on the same dataset, using a 3K token context limit due to VRAM constraints, for 3 epochs each.
Unexpectedly, the older LLaMA-3 seems to be performing better. It has better prompt adherence and generally feels more accurate. This surprised me, given that LLaMA-3.1 is newer and has a larger context window.
I'm wondering if more training epochs would help LLaMA-3.1 improve? I'm also concerned that the 3K token limit during training might be affecting LLaMA-3.1's performance, considering it's designed for a 128K context.
Has anyone had similar experiences or have suggestions for optimizing LLaMA-3.1's performance in this scenario? Any thoughts or advice would be appreciated. | 2024-12-14T10:02:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hdzvfk/comparative_performance_of_finetuned_llama3_models/ | lolzinventor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hdzvfk | false | null | t3_1hdzvfk | /r/LocalLLaMA/comments/1hdzvfk/comparative_performance_of_finetuned_llama3_models/ | false | false | self | 4 | null |
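For reference, the ~3K training cap is usually a single argument; a minimal sketch assuming the `trl` library (argument names vary a bit across versions, and `train.jsonl` is assumed to have a `text` column):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # assumes a "text" column

config = SFTConfig(
    output_dir="llama31-sft",
    max_seq_length=3072,            # the ~3K cap discussed above; raising it costs VRAM
    num_train_epochs=3,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3.1-8B",   # swap in the original Llama-3-8B to compare
    args=config,
    train_dataset=dataset,
)
trainer.train()
```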
Best model for code generation and modification | 1 | Is [qwen2.5-coder-32b-instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) the best model I can run on a 4090 for code generation and modification? Also are there any benchmarks on how good it is at code modification? | 2024-12-14T10:19:26 | https://www.reddit.com/r/LocalLLaMA/comments/1he03dy/best_model_for_code_generation_and_modification/ | kintrith | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he03dy | false | null | t3_1he03dy | /r/LocalLLaMA/comments/1he03dy/best_model_for_code_generation_and_modification/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'UdHbdJ6iQDniaeg5fsHHGlJC12ZCjoJ9S30-8lTTvC0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BSpb6O-vjYOe3ocFLMhAp-ZgX7zA9eNafsIlsPev3qE.jpg?width=108&crop=smart&auto=webp&s=91e1349c690816897090c8f6991eb1bc0a18e6f0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BSpb6O-vjYOe3ocFLMhAp-ZgX7zA9eNafsIlsPev3qE.jpg?width=216&crop=smart&auto=webp&s=99102bb2cbd49c3d65ca95ab37183850865c0afa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BSpb6O-vjYOe3ocFLMhAp-ZgX7zA9eNafsIlsPev3qE.jpg?width=320&crop=smart&auto=webp&s=8bf0a8096cf4c13f620f7920e6dcd1cec26af504', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BSpb6O-vjYOe3ocFLMhAp-ZgX7zA9eNafsIlsPev3qE.jpg?width=640&crop=smart&auto=webp&s=f6de2a793b3a87abcb71749e6dd061daf2eae9b6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BSpb6O-vjYOe3ocFLMhAp-ZgX7zA9eNafsIlsPev3qE.jpg?width=960&crop=smart&auto=webp&s=340fbacf619ad13ecc0b079ae68065ea9b347907', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BSpb6O-vjYOe3ocFLMhAp-ZgX7zA9eNafsIlsPev3qE.jpg?width=1080&crop=smart&auto=webp&s=617d77f4409dd7a7fdfa9f588bb20f2f7d429671', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BSpb6O-vjYOe3ocFLMhAp-ZgX7zA9eNafsIlsPev3qE.jpg?auto=webp&s=1dfd6a2e5be91360ad9eb6eb0c2c11818ef79972', 'width': 1200}, 'variants': {}}]} |
State of the art PDF interpretation | 0 | I would like to tap into the collective experience to point me in the right direction please.
My task is to reconstruct newspaper articles from scanned images of paper editions.
On top of the OCR task, there’s also the problem of segmenting each printed article and reconstructing the article continuation on another page.
Any ideas for this?
I did a little experiment with a traditional OCR package, which outputs rather dirty text, then fed it into a smallish LLM asking it to produce a clean version, and this worked quite well.
Any ideas, hints? Thanks | 2024-12-14T10:33:03 | https://www.reddit.com/r/LocalLLaMA/comments/1he09qc/state_of_the_art_pdf_interpretation/ | olddoglearnsnewtrick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he09qc | false | null | t3_1he09qc | /r/LocalLLaMA/comments/1he09qc/state_of_the_art_pdf_interpretation/ | false | false | self | 0 | null |
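A minimal sketch of the two-stage experiment described above (Tesseract for OCR, then an LLM cleanup pass); the endpoint and model name are placeholders, and article segmentation plus cross-page continuation would still need a separate layout-analysis step:

```python
import pytesseract
import requests
from PIL import Image

API_URL = "http://localhost:11434/v1/chat/completions"   # any OpenAI-compatible local server

def ocr_page(image_path):
    return pytesseract.image_to_string(Image.open(image_path))

def clean_article(dirty_text):
    prompt = (
        "The following is noisy OCR output from a scanned newspaper page. "
        "Reconstruct clean, readable article text. Fix hyphenation and line breaks, "
        "but do not invent content:\n\n" + dirty_text
    )
    r = requests.post(API_URL, json={
        "model": "qwen2.5:14b",                           # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,
    })
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

print(clean_article(ocr_page("page_01.png")))
```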
LLama 3 token embeddings? | 1 | [removed] | 2024-12-14T11:03:53 | https://www.reddit.com/r/LocalLLaMA/comments/1he0opz/llama_3_token_embeddings/ | xinyuforsa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he0opz | false | null | t3_1he0opz | /r/LocalLLaMA/comments/1he0opz/llama_3_token_embeddings/ | false | false | self | 1 | null |
LLama 3 token embeddings? | 1 | [removed] | 2024-12-14T11:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1he0oqb/llama_3_token_embeddings/ | xinyuforsa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he0oqb | false | null | t3_1he0oqb | /r/LocalLLaMA/comments/1he0oqb/llama_3_token_embeddings/ | false | false | self | 1 | null |
Parameter sizes for closed models | 0 | I know Gemini 1.5 is about 1.5 trillion parameters. Do we know the parameter sizes of flash models like Gemini 2.0 Flash?
Compared to that, what is the parameter size of o1 or o1-mini?
Just curious to put in perspective how far behind the open-weight models are.
| 2024-12-14T11:37:22 | https://www.reddit.com/r/LocalLLaMA/comments/1he14sj/parameter_sizes_for_closed_models/ | karkoon83 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he14sj | false | null | t3_1he14sj | /r/LocalLLaMA/comments/1he14sj/parameter_sizes_for_closed_models/ | false | false | self | 0 | null |
Moondream 2B fails to extract info from Screenshot of financial statement | 1 | [removed] | 2024-12-14T11:37:30 | https://www.reddit.com/r/LocalLLaMA/comments/1he14v0/moondream_2b_fails_to_extract_info_from/ | Academic_Collar_5488 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he14v0 | false | null | t3_1he14v0 | /r/LocalLLaMA/comments/1he14v0/moondream_2b_fails_to_extract_info_from/ | false | false | self | 1 | null |
QwQ 32B speculative speed and logic results on M1 Max using llama.cpp | 1 | [removed] | 2024-12-14T11:37:57 | https://www.reddit.com/r/LocalLLaMA/comments/1he1533/qwq_32b_speculative_speed_and_logic_results_on_m1/ | AfternoonOk5482 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he1533 | false | null | t3_1he1533 | /r/LocalLLaMA/comments/1he1533/qwq_32b_speculative_speed_and_logic_results_on_m1/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'jjPEKcEamye-CwlbEJ8Q3BZCcBu19JLidD1AIa_kN6s', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/DtasKbj6PNwgtliW_ZIT7vSV-qdedS4B5K1g3MsugDk.jpg?width=108&crop=smart&auto=webp&s=22a30b9f253981de98172eaaca80b0a34e0f7b50', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/DtasKbj6PNwgtliW_ZIT7vSV-qdedS4B5K1g3MsugDk.jpg?width=216&crop=smart&auto=webp&s=a594d854b4a4531a0063ca6ccc1a77c4a42b0e5b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/DtasKbj6PNwgtliW_ZIT7vSV-qdedS4B5K1g3MsugDk.jpg?width=320&crop=smart&auto=webp&s=d5d25b03e403546e232691b7774637051cda2d33', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/DtasKbj6PNwgtliW_ZIT7vSV-qdedS4B5K1g3MsugDk.jpg?auto=webp&s=93b21477c94b1f66b475b5869f4bdcb797448f07', 'width': 480}, 'variants': {}}]} |
Help to set-up local Llama 3.3 70b 4-bit with dual rtx3090 | 1 | [removed] | 2024-12-14T11:58:12 | https://www.reddit.com/r/LocalLLaMA/comments/1he1eyg/help_to_setup_local_llama_33_70b_4bit_with_dual/ | cosmo-pax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he1eyg | false | null | t3_1he1eyg | /r/LocalLLaMA/comments/1he1eyg/help_to_setup_local_llama_33_70b_4bit_with_dual/ | false | false | self | 1 | null |
🌟 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴 𝗦𝗲𝗮𝗿𝗰𝗵 𝗧𝗲𝗰𝗵𝗻𝗶𝗾𝘂𝗲𝘀 𝗶𝗻 𝗟𝗟𝗠 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀 🌟 | 1 | 2024-12-14T12:05:14 | Icy_Advisor_3508 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1he1iwf | false | null | t3_1he1iwf | /r/LocalLLaMA/comments/1he1iwf/𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴_𝗦𝗲𝗮𝗿𝗰𝗵_𝗧𝗲𝗰𝗵𝗻𝗶𝗾𝘂𝗲𝘀_𝗶𝗻_𝗟𝗟𝗠/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'VIjyMAVcjtiOqxN9tQJZA1xhaDaylZWgzg3pQGDheTY', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/uhkryt091t6e1.png?width=108&crop=smart&auto=webp&s=eaacac9ebbf11a53a5059f71147014a755c5ef7e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/uhkryt091t6e1.png?width=216&crop=smart&auto=webp&s=6f36d4837464e69810788caf000584962d5d8ad1', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/uhkryt091t6e1.png?width=320&crop=smart&auto=webp&s=7c4aff6700194f6288c1d284d3ffb9e0f7e9e0ed', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/uhkryt091t6e1.png?width=640&crop=smart&auto=webp&s=275a354a27b0ac67aedbd61024999c81cc088575', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/uhkryt091t6e1.png?width=960&crop=smart&auto=webp&s=51dd0c6d931cce321f15b53805866e0efe73ee13', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/uhkryt091t6e1.png?width=1080&crop=smart&auto=webp&s=9402cd00c9aee1b5a02de6986ab3a632e6b0241d', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/uhkryt091t6e1.png?auto=webp&s=4bc98ba818880997d0b02cdc3573f5fe8270f237', 'width': 1200}, 'variants': {}}]} |
|||
Problems setting up 70b 4-bit llama 3.3 on my dual rtx 3090 set-up | 0 | Hi,
so I thought it might be nice to have a couple of questions in one place for entry-level local-LLM folks with multi-GPU set-ups like myself:
I am trying to set up a Llama 3.3 4-bit model and keep running into problem after problem.
A couple of questions:
First of all, is there a good manual for installation and general set-up?
GPTQ works with multiple GPUs; do other typical Hugging Face formats work as well? (I've read about GGUF on multiple GPUs, but I've also read a lot that it isn't supposed to work that way?)
which OS, environment and GUI are recommended?
I work with ChatGPT Plus and internet documentation, yet I haven't managed to get past the installation process, although I already had local models running a year ago :/
Thanks, deeply appreciated!
| 2024-12-14T12:09:51 | https://www.reddit.com/r/LocalLLaMA/comments/1he1l99/problems_setting_up_70b_4bit_llama_33_on_my_dual/ | cosmo-pax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he1l99 | false | null | t3_1he1l99 | /r/LocalLLaMA/comments/1he1l99/problems_setting_up_70b_4bit_llama_33_on_my_dual/ | false | false | self | 0 | null |
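One concrete route that works for 70B in 4-bit across two 24GB cards is Hugging Face transformers with bitsandbytes and `device_map="auto"`; a minimal sketch (GPTQ via ExLlama or GGUF via llama.cpp with a tensor split are alternatives):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.3-70B-Instruct"

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb,
    device_map="auto",              # shards layers across both 3090s automatically
)

messages = [{"role": "user", "content": "Say hello in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Bitsandbytes NF4 is not the fastest option, but it is the least fiddly way to confirm both cards are being used; once that works, EXL2 or GGUF backends are usually quicker.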
Qwen dev: New stuff very soon | 766 | 2024-12-14T12:21:58 | AaronFeng47 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1he1rli | false | null | t3_1he1rli | /r/LocalLLaMA/comments/1he1rli/qwen_dev_new_stuff_very_soon/ | false | false | 766 | {'enabled': True, 'images': [{'id': 'tk8AJboabCt3batyt8Gs-vXESpsZ9D5COtsIJERusVs', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/2nsq6njg3t6e1.png?width=108&crop=smart&auto=webp&s=3d2cf8aa8b76f64042fe0f96a05d7023842afc81', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/2nsq6njg3t6e1.png?width=216&crop=smart&auto=webp&s=8ff9b39da051eadb8bce11e402856e734dc84d3b', 'width': 216}, {'height': 303, 'url': 'https://preview.redd.it/2nsq6njg3t6e1.png?width=320&crop=smart&auto=webp&s=1f435c610b8db6e9c984ac1f1eb2d5cae05905bb', 'width': 320}, {'height': 607, 'url': 'https://preview.redd.it/2nsq6njg3t6e1.png?width=640&crop=smart&auto=webp&s=0ce6e06d1af2c094c7aa278c9ab791e86962b4f4', 'width': 640}, {'height': 911, 'url': 'https://preview.redd.it/2nsq6njg3t6e1.png?width=960&crop=smart&auto=webp&s=263158f9e6cb15d6945f5e9ea3f361e6630bcd79', 'width': 960}, {'height': 1025, 'url': 'https://preview.redd.it/2nsq6njg3t6e1.png?width=1080&crop=smart&auto=webp&s=c137178392b26a5f2ef5453b7effd9fb3f03b289', 'width': 1080}], 'source': {'height': 1367, 'url': 'https://preview.redd.it/2nsq6njg3t6e1.png?auto=webp&s=3e0655578b986f97adf1da1a77f95b1b38584434', 'width': 1440}, 'variants': {}}]} |
|||
Compute for AI workloads | 0 | Hi,
I am currently deciding between going for M1 Max 32/64gb ram or one of the higher end MacBook models with 32GB minimum.
My overall ambition is to train and learn models, experiment with different training methods and apply my own supervised and unsupervised learning in different domains. Overall, applying deep learning methodologies in my own personal projects.
I want to also make sure that for the future, the compute is satisfactory. Such as for new model/training methods etc.
Do you guys think it’s a bit too ambitious that I’m looking for a MacBook/PC with future proofing in mind? Or would you do something like create a GPU rig and do my work with cuda on windows for example.
I want to make this a mega thread if possible, thank you! | 2024-12-14T12:27:18 | https://www.reddit.com/r/LocalLLaMA/comments/1he1uej/compute_for_ai_workloads/ | metallisation | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he1uej | false | null | t3_1he1uej | /r/LocalLLaMA/comments/1he1uej/compute_for_ai_workloads/ | false | false | self | 0 | null |
Suggest some human-like models | 4 | Hi! What are your favorite models that don't talk like an assistant, something that sounds human? I really liked the tone and writing style of Nemo and Gemma2 SPPO, for example. Was just wondering if there are any models I've been sleeping on. I'm mainly concerned with 7-14B, but any suggestions are welcome. Thanks. | 2024-12-14T12:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/1he26eq/suggest_some_humanlike_models/ | 250000mph | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he26eq | false | null | t3_1he26eq | /r/LocalLLaMA/comments/1he26eq/suggest_some_humanlike_models/ | false | false | self | 4 | null |
Looking for Recommendations: How to Use Python for Local Scriptwriting with AI (Without GPT API) | 1 | [removed] | 2024-12-14T12:49:16 | https://www.reddit.com/r/LocalLLaMA/comments/1he26il/looking_for_recommendations_how_to_use_python_for/ | LividObjective3362 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he26il | false | null | t3_1he26il | /r/LocalLLaMA/comments/1he26il/looking_for_recommendations_how_to_use_python_for/ | false | false | self | 1 | null |
Voice-Driven TinyLLama on Raspberry Pi 4 | 1 | [removed] | 2024-12-14T13:06:50 | https://www.reddit.com/r/LocalLLaMA/comments/1he2hb8/voicedriven_tinyllama_on_raspberry_pi_4/ | SnooRevelations5257 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he2hb8 | false | null | t3_1he2hb8 | /r/LocalLLaMA/comments/1he2hb8/voicedriven_tinyllama_on_raspberry_pi_4/ | false | false | self | 1 | null |
Voice-Driven TinyLLama on Raspberry Pi 4 | 1 | [removed] | 2024-12-14T13:11:57 | https://www.reddit.com/r/LocalLLaMA/comments/1he2kgq/voicedriven_tinyllama_on_raspberry_pi_4/ | SnooRevelations5257 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he2kgq | false | null | t3_1he2kgq | /r/LocalLLaMA/comments/1he2kgq/voicedriven_tinyllama_on_raspberry_pi_4/ | false | false | self | 1 | null |
I created an entire project with Claude + ChatGPT + Qwen: Automated Python Project Documentation Generator: Your New Code Analysis Companion (AMA) | 17 | Hey Reddit! 👋 I've just created a powerful tool that transforms how we document Python projects - an automated documentation generator that uses AI and static code analysis to create comprehensive project insights.
[https://github.com/charmandercha/ArchiDoc](https://github.com/charmandercha/ArchiDoc)
## 🚀 What Does It Actually Do?
Imagine having an AI assistant that can:
- Scan your entire Python project
- Extract detailed code structure
- Generate human-readable documentation
- Provide insights about module interactions
## 📦 Real-World Example: ReShade Linux Port Documentation
Let's say you're attempting a complex project like porting ReShade to Linux. Here's how this tool could help:
1. **Initial Project Mapping**
   - Automatically identifies all source files
   - Lists classes, functions, and module dependencies
   - Creates a bird's-eye view of the project architecture
2. **Detailed File Analysis**
   - Generates summaries for each key file
   - Highlights potential cross-platform compatibility challenges
   - Provides insights into code complexity
3. **Interaction Insights**
   - Describes how different modules interact
   - Helps identify potential refactoring opportunities
   - Assists in understanding system design
## 🛠 Current State & Limitations
**Important Caveats:**
- Currently tested only with the textgrad project
- Requires manual adjustments for different project structures
- Depends on Ollama and LLaMA models
- Works best with conventional Python project layouts
## 🤝 Community Call to Action
I'm developing a tool and need your help to make it more generic and useful for a variety of projects. So far, it has been used exclusively in the 'textgrad' project as an experiment to better understand its techniques and replicate them in other contexts.
So I need YOUR help to add:
- Support for more diverse project structures
- Enhanced language model interactions
- Better error handling
- Multi-language project support
## 💡 Potential Use Cases
- Academic project documentation
- Open-source project onboarding
- Legacy code understanding
- Quick architectural overviews
## 🔧 Tech Stack
- Python AST for code parsing
- Ollama with LLaMA for AI insights
- Supports JSON/Markdown/HTML outputs
## 📄 Licensing
Fully MIT licensed - use it, modify it, commercialize it. Just keep the original attribution!
[https://github.com/charmandercha/ArchiDoc](https://github.com/charmandercha/ArchiDoc)
| 2024-12-14T13:26:44 | https://www.reddit.com/r/LocalLLaMA/comments/1he2u09/i_created_an_entiry_project_with_claude_chatgpt/ | charmander_cha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he2u09 | false | null | t3_1he2u09 | /r/LocalLLaMA/comments/1he2u09/i_created_an_entiry_project_with_claude_chatgpt/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': '3AzzKoYKWfOVBW_ryUdDRm5cetLA6uFOskV6-ZwwFZ0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v-NgcAu9Co7R4H1RmxAkXu-Szd97o40cnDbsS1nFaww.jpg?width=108&crop=smart&auto=webp&s=d81371b93f02e18d35cab9b6540ed2e98f46e080', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/v-NgcAu9Co7R4H1RmxAkXu-Szd97o40cnDbsS1nFaww.jpg?width=216&crop=smart&auto=webp&s=ac41420799a0b4f68a99445b4cbfdf7ca2aecfef', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/v-NgcAu9Co7R4H1RmxAkXu-Szd97o40cnDbsS1nFaww.jpg?width=320&crop=smart&auto=webp&s=3e41ccdb39c5243f22db8fba2120cce0c968501c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/v-NgcAu9Co7R4H1RmxAkXu-Szd97o40cnDbsS1nFaww.jpg?width=640&crop=smart&auto=webp&s=83e9bffe04f15054754f8ba95f44e6120da34671', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/v-NgcAu9Co7R4H1RmxAkXu-Szd97o40cnDbsS1nFaww.jpg?width=960&crop=smart&auto=webp&s=d17d6a62d726b3a6a98a17a7f752d9629205e848', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/v-NgcAu9Co7R4H1RmxAkXu-Szd97o40cnDbsS1nFaww.jpg?width=1080&crop=smart&auto=webp&s=5efa067f36d5bd381837b03736e2f5850af96867', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/v-NgcAu9Co7R4H1RmxAkXu-Szd97o40cnDbsS1nFaww.jpg?auto=webp&s=41d51196762104ac3cb6e64645da7949e5a528b0', 'width': 1200}, 'variants': {}}]} |
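To make the AST + local-LLM idea concrete, here is a stripped-down illustration of the technique; this is not code from the ArchiDoc repo, and the `ollama` call assumes the official Python client (response access differs slightly between client versions):

```python
import ast
from pathlib import Path

import ollama   # official Python client for a locally running Ollama server

def outline(py_file):
    tree = ast.parse(py_file.read_text(encoding="utf-8"))
    items = []
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            items.append(f"class {node.name}: {ast.get_docstring(node) or 'no docstring'}")
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            items.append(f"def {node.name}({args})")
    return "\n".join(items)

def summarize(py_file):
    prompt = (f"Summarize the purpose of this Python module from its outline:\n\n"
              f"# {py_file.name}\n{outline(py_file)}")
    reply = ollama.chat(model="llama3.1", messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"]   # newer clients also allow reply.message.content

for f in sorted(Path("my_project").rglob("*.py")):
    print(f"## {f}\n{summarize(f)}\n")
```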
Speed Test: Llama-3.3-70b on 2xRTX-3090 vs M3-Max 64GB Against Various Prompt Sizes | 96 | I've read a lot of comments about Mac vs rtx-3090, so I tested Llama-3.3-70b-instruct-q4_K_M with various prompt sizes on 2xRTX-3090 and M3-Max 64GB.
* Starting at 20k context, I had to use KV cache quantization of q8_0 for the RTX-3090 setup since it otherwise won't fit on 2xRTX-3090.
* With 16k prompt, 2xRTX-3090 processes 7.2x faster, and generates 1.79x faster.
* With 32k prompt, 2xRTX-3090 processes 6.75x faster, and generates 1.28x faster.
* Both used llama.cpp b4326.
* Each test is one shot generation (not accumulating prompt via multiturn chat style).
* I enabled Flash attention and set temperature to 0.0 and the random seed to 1000.
* Total duration is total execution time, not total time reported from llama.cpp.
* Sometimes you'll see shorter total duration for longer prompts than shorter prompts because it generated less tokens for longer prompts.
* Based on [another benchmark](https://www.reddit.com/r/LocalLLaMA/comments/1h51w32/benchmarks_for_llama_31_8b_q4_k_m_8b_q5_k_m_e_70b/), M4-Max seems to process prompts 16% faster than M3-Max.
### 2 x RTX-3090
| Prompt Tokens | Prompt Processing Speed | Generated Tokens | Token Generation Speed | Total Execution Time |
| --- | --- | --- | --- | --- |
| 258 | 406.33 | 576 | 17.87 | 44s |
| 687 | 504.34 | 962 | 17.78 | 1m6s |
| 1169 | 514.33 | 973 | 17.63 | 1m8s |
| 1633 | 520.99 | 790 | 17.51 | 59s |
| 2171 | 541.27 | 910 | 17.28 | 1m7s |
| 3226 | 516.19 | 1155 | 16.75 | 1m26s |
| 4124 | 511.85 | 1071 | 16.37 | 1m24s |
| 6094 | 493.19 | 965 | 15.60 | 1m25s |
| 8013 | 479.91 | 847 | 14.91 | 1m24s |
| 10086 | 463.59 | 970 | 14.18 | 1m41s |
| 12008 | 449.79 | 926 | 13.54 | 1m46s |
| 14064 | 436.15 | 910 | 12.93 | 1m53s |
| 16001 | 423.70 | 806 | 12.45 | 1m53s |
| 18209 | 410.18 | 1065 | 11.84 | 2m26s |
| 20234 | 399.54 | 862 | 10.05 | 2m27s |
| 22186 | 385.99 | 877 | 9.61 | 2m42s |
| 24244 | 375.63 | 802 | 9.21 | 2m43s |
| 26032 | 366.70 | 793 | 8.85 | 2m52s |
| 28000 | 357.72 | 798 | 8.48 | 3m13s |
| 30134 | 348.32 | 552 | 8.19 | 2m45s |
| 32170 | 338.56 | 714 | 7.88 | 3m17s |
### M3-Max 64GB
| Prompt Tokens | Prompt Processing Speed | Generated Tokens | Token Generation Speed | Total Execution Time |
| --- | --- | --- | --- | --- |
| 258 | 67.81 | 599 | 8.14 | 1m33s |
| 687 | 65.76 | 1999 | 8.09 | 4m18s |
| 1169 | 71.45 | 581 | 7.99 | 1m30s |
| 1633 | 72.12 | 891 | 7.94 | 2m16s |
| 2171 | 71.53 | 799 | 7.88 | 2m13s |
| 3226 | 69.49 | 612 | 7.78 | 2m6s |
| 4124 | 67.77 | 825 | 7.72 | 2m49s |
| 6094 | 65.99 | 642 | 7.62 | 2m58s |
| 8013 | 64.13 | 863 | 7.46 | 4m2s |
| 10086 | 62.88 | 766 | 7.35 | 4m26s |
| 12008 | 61.61 | 914 | 7.34 | 5m21s |
| 14064 | 60.23 | 799 | 7.21 | 5m46s |
| 16001 | 58.82 | 714 | 6.96 | 6m16s |
| 18209 | 57.70 | 766 | 6.74 | 7m11s |
| 20234 | 56.46 | 786 | 6.59 | 7m59s |
| 22186 | 55.12 | 724 | 6.72 | 8m32s |
| 24244 | 53.88 | 772 | 6.62 | 9m28s |
| 26032 | 52.73 | 510 | 6.43 | 9m35s |
| 28000 | 52.00 | 768 | 6.23 | 11m4s |
| 30134 | 50.90 | 529 | 6.18 | 11m20s |
| 32170 | 50.13 | 596 | 6.16 | 12m21s |
### Few thoughts from my previous posts:
Whether Mac is right for you depends on your use case and speed tolerance.
If you want to do serious research/development with PyTorch, forget Mac. You'll run into things like xxx operation is not supported on MPS. Also flash attention Python library (not llama.cpp) doesn't support Mac.
If you want to use 70b models, skip 48GB in my opinion and get a machine with 64GB+ instead. KV cache quantization is extremely slow on Mac, so you definitely need memory for context and maybe some background tasks. Remember, you have to leave some memory for macOS and whatever applications you need to run alongside.
Especially if you're thinking about older models, high power mode in system settings is only available on [certain models](https://www.reddit.com/r/LocalLLaMA/comments/1gyfqgz/if_you_want_to_benchmark_speed_on_macbook_make/). Otherwise you get throttled like crazy. For example, it can decrease [from 13m (high power) to 1h30m (no high power)](https://www.reddit.com/r/LocalLLaMA/comments/1h51w32/benchmarks_for_llama_31_8b_q4_k_m_8b_q5_k_m_e_70b/).
For tasks like processing long documents or codebases, you should be prepared to wait around. Once the long prompt is processed, subsequent chat should go relatively fast with prompt caching. For these, I just use ChatGPT for quality anyways. Once in a while when I need more power for heavy tasks like fine-tuning, I rent GPUs from Runpod.
If your main use is casual chatting or asking coding questions with short prompts, the speed is adequate in my opinion. Personally, I find 7 tokens/second very usable and even 5 tokens/second tolerable. For context, people read an average of [238 words per minute](https://www.sciencedirect.com/science/article/abs/pii/S0749596X19300786). It depends on the model, but 5 tokens/second roughly translates to 225 words per minute: 5 (tokens/second) * 60 (seconds) * 0.75 (words/token).
Mac is slower, but it has advantage of portability, memory size, energy, quieter noise. It provides great out of the box experience for LLM inference.
NVidia is faster and has great support for ML libraries. However, especially with multiple GPUs, you have to deal with loud fan noise (jet engine compared to Mac), higher electricity consumption, and the hassle of dealing with drivers, tuning, cooling, crazy PSU, risers, cables, etc. I read that in some cases, you even need a special dedicated electrical socket to support the load.
It's a project for hardware boys/girls who enjoy building their own Frankenstein machines. 😄 | 2024-12-14T13:28:20 | https://www.reddit.com/r/LocalLLaMA/comments/1he2v2n/speed_test_llama3370b_on_2xrtx3090_vs_m3max_64gb/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he2v2n | false | null | t3_1he2v2n | /r/LocalLLaMA/comments/1he2v2n/speed_test_llama3370b_on_2xrtx3090_vs_m3max_64gb/ | false | false | self | 96 | null |
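To make the reading-speed comparison at the end reproducible, a tiny sketch of the arithmetic (0.75 words per token is the rough rule of thumb used above):

```python
WORDS_PER_TOKEN = 0.75   # rough English average; varies by tokenizer and text

def words_per_minute(tokens_per_second):
    return tokens_per_second * 60 * WORDS_PER_TOKEN

for tps in (5, 7, 8.14, 17.87):          # sample generation speeds from the tables above
    print(f"{tps:>6.2f} tok/s ≈ {words_per_minute(tps):6.0f} words/min")
# 5 tok/s ≈ 225 words/min, just under the ~238 wpm average reading speed cited above.
```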
Installing 2nd GPU | 1 | [removed] | 2024-12-14T13:40:20 | https://www.reddit.com/gallery/1he32zx | Notorious_ESG | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1he32zx | false | null | t3_1he32zx | /r/LocalLLaMA/comments/1he32zx/installing_2nd_gpu/ | false | false | 1 | null |
|
Mac stuff: Mac Studio M2 Ultra or MacBook M2/M3 chips | 0 | Hi Everyone, I did a search and I am intrigued with what I found regarding using Macs for inference and chatting. Refurbished Mac M2 ultra Studios and MacBooks with M2/M3 max chips (96-128 gb RAM) all seem to fall within my budget of 4-5k.
My understanding is that a Mac Studio M2 Ultra is the strongest of these options; is this correct?
In addition, I read that among MacBooks, M2 rigs are as fast as or faster than the M3 machines for inference/chatting because of bandwidth differences. Is that accurate?
Happy to be schooled as a noob. Thxs! | 2024-12-14T13:59:40 | https://www.reddit.com/r/LocalLLaMA/comments/1he3flx/mac_stuff_mac_studio_m2_ultra_or_macbook_m2m3/ | MassiveLibrarian4861 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he3flx | false | null | t3_1he3flx | /r/LocalLLaMA/comments/1he3flx/mac_stuff_mac_studio_m2_ultra_or_macbook_m2m3/ | false | false | self | 0 | null |
So what's SSI going to ship | 1 | Would it be a new LLM-based chatbot? He says he wants to build AGI in peace, with no distractions from shipping products, so what is he doing? Can anyone explain in depth? | 2024-12-14T14:16:50 | https://www.reddit.com/r/LocalLLaMA/comments/1he3s1n/so_whats_ssi_going_to_ship/ | Proof-Indication-923 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he3s1n | false | null | t3_1he3s1n | /r/LocalLLaMA/comments/1he3s1n/so_whats_ssi_going_to_ship/ | false | false | self | 1 | null |
Former OpenAI researcher and whistleblower found dead at age 26 | 393 | 2024-12-14T14:30:19 | https://www.cnbc.com/2024/12/13/former-openai-researcher-and-whistleblower-found-dead-at-age-26.html | noblex33 | cnbc.com | 1970-01-01T00:00:00 | 0 | {} | 1he41jt | false | null | t3_1he41jt | /r/LocalLLaMA/comments/1he41jt/former_openai_researcher_and_whistleblower_found/ | false | false | 393 | {'enabled': False, 'images': [{'id': 'NFUYc5M0iWzTsyonlv3o6FTo3t8c7NO5isoasmsQGec', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Ir-pfkICeSCAFiGJ2WET5xKBmnGJbjXODxt9_O1-xsE.jpg?width=108&crop=smart&auto=webp&s=393ddc233cacd334ba3b732979af089578d6aae0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Ir-pfkICeSCAFiGJ2WET5xKBmnGJbjXODxt9_O1-xsE.jpg?width=216&crop=smart&auto=webp&s=ee7b7d453d76e6d6755859d5696756f283f22ac5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Ir-pfkICeSCAFiGJ2WET5xKBmnGJbjXODxt9_O1-xsE.jpg?width=320&crop=smart&auto=webp&s=ec40e3cc3c5b405234e1163bf26689dd6275aaac', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Ir-pfkICeSCAFiGJ2WET5xKBmnGJbjXODxt9_O1-xsE.jpg?width=640&crop=smart&auto=webp&s=cbfebeb85898eafb29c0a12c221517157a26ef2d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Ir-pfkICeSCAFiGJ2WET5xKBmnGJbjXODxt9_O1-xsE.jpg?width=960&crop=smart&auto=webp&s=9d23ab71663541b08e4980ac982e518a27eae013', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Ir-pfkICeSCAFiGJ2WET5xKBmnGJbjXODxt9_O1-xsE.jpg?width=1080&crop=smart&auto=webp&s=3366c6d9eb38024796c17eb4f7e6a06b7eb75efc', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Ir-pfkICeSCAFiGJ2WET5xKBmnGJbjXODxt9_O1-xsE.jpg?auto=webp&s=092fefbb3b3d0b9b9f423f2f6d661ea7e4a4c767', 'width': 1920}, 'variants': {}}]} |
||
I made a MacOS app for myself to use Qwq as a thinking model before a Qwen response. | 1 | [removed] | 2024-12-14T14:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1he46g8/i_made_a_macos_app_for_myself_to_use_qwq_as_a/ | Valuable-Run2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he46g8 | false | null | t3_1he46g8 | /r/LocalLLaMA/comments/1he46g8/i_made_a_macos_app_for_myself_to_use_qwq_as_a/ | false | false | self | 1 | null |
llama.cpp Now Supports Qwen2VL | 211 | [https://github.com/ggerganov/llama.cpp/pull/10361](https://github.com/ggerganov/llama.cpp/pull/10361) | 2024-12-14T14:42:52 | https://www.reddit.com/r/LocalLLaMA/comments/1he4ars/llamacpp_now_supports_qwen2vl/ | Intelligent_Jello344 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he4ars | false | null | t3_1he4ars | /r/LocalLLaMA/comments/1he4ars/llamacpp_now_supports_qwen2vl/ | false | false | self | 211 | {'enabled': False, 'images': [{'id': 'h6VkqCUdAAa3Jc0Huck65H67MTGQdczsTeVvrcjCCwc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_1gGzdHsqJOUfFakJ3xFRS1R7G9zRtdctqHv-RApMwk.jpg?width=108&crop=smart&auto=webp&s=16cb35d0e568d4f43d5c9125f4f32fe285a5635d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_1gGzdHsqJOUfFakJ3xFRS1R7G9zRtdctqHv-RApMwk.jpg?width=216&crop=smart&auto=webp&s=ce7672bd72f2933023c7bd4e70964e1bcc15179b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_1gGzdHsqJOUfFakJ3xFRS1R7G9zRtdctqHv-RApMwk.jpg?width=320&crop=smart&auto=webp&s=f1c05cc6afa3bdc3bd712b08c4270d61ffb06572', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_1gGzdHsqJOUfFakJ3xFRS1R7G9zRtdctqHv-RApMwk.jpg?width=640&crop=smart&auto=webp&s=c29384bf0df87a1c3e394f499ecfabdb797a420f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_1gGzdHsqJOUfFakJ3xFRS1R7G9zRtdctqHv-RApMwk.jpg?width=960&crop=smart&auto=webp&s=e7ed42679dd7681dce2add2f0ee4c55790f8c971', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_1gGzdHsqJOUfFakJ3xFRS1R7G9zRtdctqHv-RApMwk.jpg?width=1080&crop=smart&auto=webp&s=12a4e3793dae89ed1d30366ecf2f2ad9598ebb86', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_1gGzdHsqJOUfFakJ3xFRS1R7G9zRtdctqHv-RApMwk.jpg?auto=webp&s=894b7eb7737b2f747620dadfe1ac65c9974f1092', 'width': 1200}, 'variants': {}}]} |
Local LLMs in Game Development (NobodyWho) | 1 | [deleted] | 2024-12-14T14:47:53 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1he4eib | false | null | t3_1he4eib | /r/LocalLLaMA/comments/1he4eib/local_llms_in_game_development_nobodywho/ | false | false | default | 1 | null |
||
Local LLM's in game dev | 1 | [removed] | 2024-12-14T14:59:32 | No_Abbreviations_532 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1he4mw7 | false | null | t3_1he4mw7 | /r/LocalLLaMA/comments/1he4mw7/local_llms_in_game_dev/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ZEU0ISQ7liV9URS5tdjV92MlRvRi5bHs-f_A81n5_4Y', 'resolutions': [{'height': 18, 'url': 'https://preview.redd.it/xvb10d4ewt6e1.png?width=108&crop=smart&auto=webp&s=26ea3f0eb1773a9c2d3d9b6333718065a7ffc52d', 'width': 108}, {'height': 36, 'url': 'https://preview.redd.it/xvb10d4ewt6e1.png?width=216&crop=smart&auto=webp&s=7f267f6d83e5b41d9cb7e8eae7541f220f371796', 'width': 216}, {'height': 54, 'url': 'https://preview.redd.it/xvb10d4ewt6e1.png?width=320&crop=smart&auto=webp&s=e5498634efadc88510ef719bc0d3bcf216ba8abb', 'width': 320}, {'height': 109, 'url': 'https://preview.redd.it/xvb10d4ewt6e1.png?width=640&crop=smart&auto=webp&s=b2eb4c6880793855d06cce8eb87e8eabc13ed360', 'width': 640}, {'height': 164, 'url': 'https://preview.redd.it/xvb10d4ewt6e1.png?width=960&crop=smart&auto=webp&s=dfb9eae7563b937832dade414500a4bee930cc20', 'width': 960}, {'height': 184, 'url': 'https://preview.redd.it/xvb10d4ewt6e1.png?width=1080&crop=smart&auto=webp&s=0fcf7599d8250341e740135a77fc08d57bf449a5', 'width': 1080}], 'source': {'height': 198, 'url': 'https://preview.redd.it/xvb10d4ewt6e1.png?auto=webp&s=f8b6c94af53ab30e2cda4947a7246b78364cf02c', 'width': 1157}, 'variants': {}}]} |
||
All Vocabulary Token Embeddings of Llama 3 | 1 | [removed] | 2024-12-14T15:26:55 | https://www.reddit.com/r/LocalLLaMA/comments/1he57p0/all_vocabulary_token_embeddings_of_llama_3/ | xinyuforsa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he57p0 | false | null | t3_1he57p0 | /r/LocalLLaMA/comments/1he57p0/all_vocabulary_token_embeddings_of_llama_3/ | false | false | self | 1 | null |
Do you know that Alibaba actually has a better version of Qwen vision? But it's only for the Chinese market. | 1 | 2024-12-14T16:30:19 | https://www.reddit.com/gallery/1he6jwg | krakotay1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1he6jwg | false | null | t3_1he6jwg | /r/LocalLLaMA/comments/1he6jwg/do_u_know_that_alibaba_actually_have_better/ | false | false | 1 | null
||
M*rder for hire?? Did the CE0 hire a hitm@n to kill his employee on a pending L@wsuuit?? | 1 | 2024-12-14T16:44:04 | MathematicianBasic73 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1he6uga | false | null | t3_1he6uga | /r/LocalLLaMA/comments/1he6uga/mrder_for_hire_did_the_ce0_hire_a_hitmn_to_kill/ | false | false | 1 | {'enabled': True, 'images': [{'id': '1tcm8W9vi4RjTjQl9JXYseMJTgywrIVRDoa-PcRiNFk', 'resolutions': [{'height': 200, 'url': 'https://preview.redd.it/acsgnv81fu6e1.jpeg?width=108&crop=smart&auto=webp&s=cf28794edf013e0a2adb3c27b84ef4156969c264', 'width': 108}, {'height': 401, 'url': 'https://preview.redd.it/acsgnv81fu6e1.jpeg?width=216&crop=smart&auto=webp&s=a31f53f9f6b80d1d9c60cf1561ad8598db3515ae', 'width': 216}, {'height': 595, 'url': 'https://preview.redd.it/acsgnv81fu6e1.jpeg?width=320&crop=smart&auto=webp&s=b9de208b9776757c1642b3fb932a3f9b2f1287eb', 'width': 320}, {'height': 1190, 'url': 'https://preview.redd.it/acsgnv81fu6e1.jpeg?width=640&crop=smart&auto=webp&s=9f1db49bc681a9a0b93b97bd645e62edbd6f11f6', 'width': 640}, {'height': 1785, 'url': 'https://preview.redd.it/acsgnv81fu6e1.jpeg?width=960&crop=smart&auto=webp&s=165e7903d2668dc642a9bf760d2804874e3c5d13', 'width': 960}, {'height': 2008, 'url': 'https://preview.redd.it/acsgnv81fu6e1.jpeg?width=1080&crop=smart&auto=webp&s=74f94ea985832e40d809fec0670664c315b8a1de', 'width': 1080}], 'source': {'height': 2455, 'url': 'https://preview.redd.it/acsgnv81fu6e1.jpeg?auto=webp&s=30eb2b9b5a229154cb8b347e8906a88f82b25283', 'width': 1320}, 'variants': {}}]} |
|||
Alternatives to Ollama for AMD GPUs? | 27 | Ollama ROCm has been continuously disappointing since the beginning. Memory calculation is usually messed up in one way or another. Models that run fine on 24GB of memory with a fixed context, for some reason can go to CPU sometimes. Very often a model will fail to load and throw 500. The worst part is that each new version manage to bring more regressions that features. I'm fed up with it especially when my use case is pretty limited, just pure inference.
So I wonder, what other inference engine have you tried and found to be running stabily on ROCm? Bonus if it's also memory-efficient.
Thanks. | 2024-12-14T17:00:28 | https://www.reddit.com/r/LocalLLaMA/comments/1he76q4/alternatives_to_ollama_for_amd_gpus/ | ParaboloidalCrest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he76q4 | false | null | t3_1he76q4 | /r/LocalLLaMA/comments/1he76q4/alternatives_to_ollama_for_amd_gpus/ | false | false | self | 27 | null |
Evolution llm number of parameters vs quality: it seems that qwen 2.5 7b is almost equal to the performance of chat gpt 3.5 175b 2 years ago or 25 times fewer parameters in 2 years. | 1 | [removed] | 2024-12-14T17:15:24 | https://www.reddit.com/r/LocalLLaMA/comments/1he7iji/evolution_llm_number_of_parameters_vs_quality_it/ | MoreIndependent5967 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he7iji | false | null | t3_1he7iji | /r/LocalLLaMA/comments/1he7iji/evolution_llm_number_of_parameters_vs_quality_it/ | false | false | self | 1 | null |
Shocked at the implicit racial bias at NeurIPS yesterday | 0 | [Rosalind Picard's Keynote on Friday at NeurIPS](https://preview.redd.it/iksntl5equ6e1.jpg?width=1179&format=pjpg&auto=webp&s=153dfdf19591c696e176065623ec4612c83510f5)
| 2024-12-14T17:51:03 | https://www.reddit.com/r/LocalLLaMA/comments/1he89rx/shocked_at_the_implicit_racial_bias_at_neurips/ | lilsoftcato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he89rx | false | null | t3_1he89rx | /r/LocalLLaMA/comments/1he89rx/shocked_at_the_implicit_racial_bias_at_neurips/ | false | false | 0 | null |
|
open-source Android app that allows you to record, search, and query everything you've seen on your phone. | 1 | 2024-12-14T18:09:44 | https://github.com/cparish312/HindsightMobile/ | louis3195 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1he8oeh | false | null | t3_1he8oeh | /r/LocalLLaMA/comments/1he8oeh/opensource_android_app_that_allows_you_to_record/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'RbJ0Q6sgRyt7_FIgmPAfMoQvi9JETRVWly4KncbsBhs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E1mQCF0G17HT4HSzi6GmI-E05CbWbw2Ql16hqq1iRi0.jpg?width=108&crop=smart&auto=webp&s=90e35404b1aeb8cd0d956fee52e60642ea870a78', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/E1mQCF0G17HT4HSzi6GmI-E05CbWbw2Ql16hqq1iRi0.jpg?width=216&crop=smart&auto=webp&s=74c90a78b21abf7d70099e9b2c202d304b67fe40', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/E1mQCF0G17HT4HSzi6GmI-E05CbWbw2Ql16hqq1iRi0.jpg?width=320&crop=smart&auto=webp&s=dc87295cac8c0b92e74a4aef73112c1e32718de7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/E1mQCF0G17HT4HSzi6GmI-E05CbWbw2Ql16hqq1iRi0.jpg?width=640&crop=smart&auto=webp&s=203b8144af9d0bafdd219a9307200c4a135d30bd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/E1mQCF0G17HT4HSzi6GmI-E05CbWbw2Ql16hqq1iRi0.jpg?width=960&crop=smart&auto=webp&s=f638d68698bede53a0c6347b0de1cb53be87d9c5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/E1mQCF0G17HT4HSzi6GmI-E05CbWbw2Ql16hqq1iRi0.jpg?width=1080&crop=smart&auto=webp&s=b5b099c1b80e1d1faef5263b6e16be9aaf4d10b2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/E1mQCF0G17HT4HSzi6GmI-E05CbWbw2Ql16hqq1iRi0.jpg?auto=webp&s=295b3a19c2d4590819b5601fa7a7b52f79a83967', 'width': 1200}, 'variants': {}}]} |
||
Roleplay | 1 | [removed] | 2024-12-14T18:18:43 | https://www.reddit.com/r/LocalLLaMA/comments/1he8v9y/roleplay/ | ai-dad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he8v9y | false | null | t3_1he8v9y | /r/LocalLLaMA/comments/1he8v9y/roleplay/ | false | false | self | 1 | null |
Roleplay | 0 | Scenario 1: Man A is defined by its backstory, its task is to behave like an angry man.
Scenario 2: Man A is defined by its backstory, its task is to behave like a rich brat.
How do I design Man A with backstory and then plug in a task/trait to behave like an angry man, tough customer, etc whenever needed. (Jumping from scenario 1 to scenario 2)
Also, gpt4o ain't good at role-playing. Anything I should use instead? Or I need some prompt engineering lessons haha!! | 2024-12-14T18:30:40 | https://www.reddit.com/r/LocalLLaMA/comments/1he94cb/roleplay/ | RequirementQuick6057 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1he94cb | false | null | t3_1he94cb | /r/LocalLLaMA/comments/1he94cb/roleplay/ | false | false | self | 0 | null |
Kudos to the LMArena folks, the new WebDev arena is showing great separation of ELO scores and shows Claude 3.5 Sonnets dominance in this domain | 171 | 2024-12-14T19:30:51 | jd_3d | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1heafxq | false | null | t3_1heafxq | /r/LocalLLaMA/comments/1heafxq/kudos_to_the_lmarena_folks_the_new_webdev_arena/ | false | false | 171 | {'enabled': True, 'images': [{'id': 'ZwPm4Yg6sySMvsMOJVTjUI_0HUUeH1yT_JGQzT9KZMo', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/6jxydljm7v6e1.png?width=108&crop=smart&auto=webp&s=f258dac93cd783745d735e4b03a99cb2d04fc35e', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/6jxydljm7v6e1.png?width=216&crop=smart&auto=webp&s=8ad1b6c7009fedb9c6626eec45b22a50221b6dc4', 'width': 216}, {'height': 224, 'url': 'https://preview.redd.it/6jxydljm7v6e1.png?width=320&crop=smart&auto=webp&s=0d87cbb95513d053243a02c62a222cdf23c19700', 'width': 320}, {'height': 448, 'url': 'https://preview.redd.it/6jxydljm7v6e1.png?width=640&crop=smart&auto=webp&s=e39c116c4450dc5a8a308be6ebc717c19ff9df0b', 'width': 640}], 'source': {'height': 615, 'url': 'https://preview.redd.it/6jxydljm7v6e1.png?auto=webp&s=6755feec6aeab867683e9c1fae3315a1fc216ca1', 'width': 878}, 'variants': {}}]} |
|||
new intel arc b580. | 10 | I was wanting to get a new arc card because 12gb of ram for 250 if you can find it at msrp and if you can get two that 24 gigs of vram just for around 500 bucks. but I just want to know if anyone has gotten their hands on one and has done some testing with it yet. | 2024-12-14T20:06:32 | https://www.reddit.com/r/LocalLLaMA/comments/1heb8ex/new_intel_arc_b580/ | Rasr123105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heb8ex | false | null | t3_1heb8ex | /r/LocalLLaMA/comments/1heb8ex/new_intel_arc_b580/ | false | false | self | 10 | null |
What can an 8B model do without fine-tuning? | 10 | Hi. I'm trying to adjust my expectations. So, assuming all is possible is essentially giving the prompt and getting the response (i.e. OpenAI API), and fine-tuning/modifying the weights is not an option. Has anyone tried what can be done in such scenarios?
What can I then expect from an 8B/9B model like Gemma or Llamma? Can anyone share some successful use cases, particularly for batch processing (i.e. not real-time chat)? For example is it good in summarisaing arbitrary text, also in non-English languages, without making things up? What about other text processing tasks? And in-context learning/tagging?
Thanks | 2024-12-14T20:18:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hebhyx/what_can_an_8b_model_do_without_finetuning/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hebhyx | false | null | t3_1hebhyx | /r/LocalLLaMA/comments/1hebhyx/what_can_an_8b_model_do_without_finetuning/ | false | false | self | 10 | null |
Ilya's talk on "Sequence to sequence" at NIPS 2024 in Vancouver, Canada | 124 | 2024-12-14T20:31:16 | https://www.youtube.com/watch?v=1yvBqasHLZs | rbgo404 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1hebrkw | false | {'oembed': {'author_name': 'seremot', 'author_url': 'https://www.youtube.com/@seremot', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/1yvBqasHLZs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Ilya Sutskever: "Sequence to sequence learning with neural networks: what a decade""></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/1yvBqasHLZs/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Ilya Sutskever: "Sequence to sequence learning with neural networks: what a decade"', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1hebrkw | /r/LocalLLaMA/comments/1hebrkw/ilyas_talk_on_sequence_to_sequence_at_nips_2024/ | false | false | 124 | {'enabled': False, 'images': [{'id': 'NT1qPsh9KLii23Pyb2TBL3Gd2_leTk7pvpei08PF6Qs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/B2V58Xgw9Givxd8AdjMj-BmYl8kfEl6gYXO1Xh3yeAE.jpg?width=108&crop=smart&auto=webp&s=1a56e4bfa7372e00601bb91b9f9126aa73a941cb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/B2V58Xgw9Givxd8AdjMj-BmYl8kfEl6gYXO1Xh3yeAE.jpg?width=216&crop=smart&auto=webp&s=483a9f0feef6133f403ee45791c948872cc3d998', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/B2V58Xgw9Givxd8AdjMj-BmYl8kfEl6gYXO1Xh3yeAE.jpg?width=320&crop=smart&auto=webp&s=96a56ad0c6c0e7a15fbeb04f91aa9047f61ea4d9', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/B2V58Xgw9Givxd8AdjMj-BmYl8kfEl6gYXO1Xh3yeAE.jpg?auto=webp&s=944fb1b14a321c985b35687df0513c17e7c70dc9', 'width': 480}, 'variants': {}}]} |
||
New 3090 | 1 | [removed] | 2024-12-14T20:35:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hebush/new_3090/ | Abarn1024 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hebush | false | null | t3_1hebush | /r/LocalLLaMA/comments/1hebush/new_3090/ | false | false | self | 1 | null |
Last Week in Medical AI: Top LLM Research Papers/Models (December 7 - December 14, 2024)
| 16 | **Medical LLM & Other Models**
* PediaBench: Chinese Pediatric LLM
* This paper introduces PediaBench, the first Chinese pediatric dataset for evaluating Large Language Model (LLM) question-answering performance, containing 4,565 objective and 1,632 subjective questions across 12 disease groups.
* BiMediX: Bilingual Medical LLM
* This paper introduces BiMediX, the first bilingual (English-Arabic) medical Mixture of Experts LLM, along with BiMed1.3M, a 1.3M bilingual medical instruction dataset with over 632M tokens used for training.
* Diverse medical knowledge integration
* This paper introduces BiMediX2, a bilingual (Arabic-English) Large Multimodal Model (LMM) based on Llama3.1 architecture, trained on 1.6M medical interaction samples.
* BRAD: Digital Biology Language Model
* This paper introduces BRAD (Bioinformatics Retrieval Augmented Digital assistant), an LLM-powered chatbot and agent system integrating various bioinformatics tools.
* MMedPO: Vision-Language Medical LLM
* This paper introduces MMedPO, a multimodal medical preference optimization approach to improve factual accuracy in Medical Large Vision-Language Models (Med-LVLMs) by addressing modality misalignment.
**Frameworks & Methodologies**
\- TOP-Training: Medical Q&A Framework
\- Hybrid RAG: Secure Medical Data Management
\- Zero-Shot ATC Clinical Coding
\- Chest X-Ray Diagnosis Architecture
\- Medical Imaging AI Democratization
**Benchmarks & Evaluations**
\- KorMedMCQA: Korean Healthcare Licensing Benchmark
\- Large Language Model Medical Tasks
\- Clinical T5 Model Performance Study
\- Radiology Report Quality Assessment
\- Genomic Analysis Benchmarking
**LLM Applications**
\- TCM-FTP: Herbal Prescription Prediction
\- LLaSA: Activity Analysis via Sensors
\- Emergency Department Visit Predictions
\- Neurodegenerative Disease AI Diagnosis
\- Kidney Disease Explainable AI Model
**Ethical AI & Privacy**
\- Privacy-Preserving LLM Mechanisms
\- AI-Driven Digital Organism Modeling
\- Biomedical Research Automation
\- Multimodality in Medical Practice | 2024-12-14T21:24:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hecwmg/last_week_in_medical_ai_top_llm_research/ | aadityaura | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hecwmg | false | null | t3_1hecwmg | /r/LocalLLaMA/comments/1hecwmg/last_week_in_medical_ai_top_llm_research/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'ctfQI7HNNSpW8mvSMLezV6b59LmqmsGPBbLmwEL95gU', 'resolutions': [{'height': 101, 'url': 'https://external-preview.redd.it/m_A09mIaYECU6FSFDE8NvSKyfkCnlDvvGVX_SCZsRWw.jpg?width=108&crop=smart&auto=webp&s=01cf9adc9cad130c524b2c8dc4bee602c97c6792', 'width': 108}, {'height': 203, 'url': 'https://external-preview.redd.it/m_A09mIaYECU6FSFDE8NvSKyfkCnlDvvGVX_SCZsRWw.jpg?width=216&crop=smart&auto=webp&s=005bc06cbbef88d2ab4a91d15c88741d3ff1c6e2', 'width': 216}, {'height': 301, 'url': 'https://external-preview.redd.it/m_A09mIaYECU6FSFDE8NvSKyfkCnlDvvGVX_SCZsRWw.jpg?width=320&crop=smart&auto=webp&s=6c6d459690fa57c535a3238c22a8683b07fc08bf', 'width': 320}, {'height': 603, 'url': 'https://external-preview.redd.it/m_A09mIaYECU6FSFDE8NvSKyfkCnlDvvGVX_SCZsRWw.jpg?width=640&crop=smart&auto=webp&s=01a8e35090be40ffd1a88df0627dfc33462b40e7', 'width': 640}, {'height': 905, 'url': 'https://external-preview.redd.it/m_A09mIaYECU6FSFDE8NvSKyfkCnlDvvGVX_SCZsRWw.jpg?width=960&crop=smart&auto=webp&s=58dd3d150adc6d60833b397f49a24e8d4c6f7e47', 'width': 960}, {'height': 1019, 'url': 'https://external-preview.redd.it/m_A09mIaYECU6FSFDE8NvSKyfkCnlDvvGVX_SCZsRWw.jpg?width=1080&crop=smart&auto=webp&s=184aa009756bc7da5de9556e9432096a6b0b3000', 'width': 1080}], 'source': {'height': 1308, 'url': 'https://external-preview.redd.it/m_A09mIaYECU6FSFDE8NvSKyfkCnlDvvGVX_SCZsRWw.jpg?auto=webp&s=a6349da948f50452f793eb011867283c9c7ceedc', 'width': 1386}, 'variants': {}}]} |
Are LLM models reproducible in replies if multiple sessions over http API are running with the same question simultaneously? | 1 | [removed] | 2024-12-14T21:41:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hed9yt/are_llm_models_reproducible_in_replies_if/ | pet2pet1982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hed9yt | false | null | t3_1hed9yt | /r/LocalLLaMA/comments/1hed9yt/are_llm_models_reproducible_in_replies_if/ | false | false | self | 1 | null |
The absolute best coding model that can fit on 48GB? | 99 | Curious of everyones opinion on the best coding "assistant" model that can fit on 48GB...
I've been trying out QWQ 32B at 8.0bpw exl2, and lately been also using Qwen2.5 72B at 4.25bpw... Curious which one is actually better in your opinions, as well as alternatives that might be better as well... | 2024-12-14T22:45:34 | https://www.reddit.com/r/LocalLLaMA/comments/1heemer/the_absolute_best_coding_model_that_can_fit_on/ | DeSibyl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heemer | false | null | t3_1heemer | /r/LocalLLaMA/comments/1heemer/the_absolute_best_coding_model_that_can_fit_on/ | false | false | self | 99 | null |
Cohere's New Model is Epic | 435 | It's unique attention architecture basically uses 3 layers w/ a fixed 4096 window of attention, and one layer that attends to everything at once, and interleaves them. Paired w/ kv-quantization, that lets you fit the entirety of Harry Potter in-context at 6GB. This will be revolutionary for long-context use... | 2024-12-14T23:19:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hefbq1/coheres_new_model_is_epic/ | N8Karma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hefbq1 | false | null | t3_1hefbq1 | /r/LocalLLaMA/comments/1hefbq1/coheres_new_model_is_epic/ | false | false | self | 435 | {'enabled': False, 'images': [{'id': '05_1acupJnxB3c-ZLpw90jr1VfwE5FQcCYQI2FLuNPE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=108&crop=smart&auto=webp&s=375c0474caddc6baae5de6008cefc7060f275b49', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=216&crop=smart&auto=webp&s=f69eb6beac8bcbde3a7d55912e0c7623db837828', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=320&crop=smart&auto=webp&s=c37b809f941ab85e565434605fa47932ebfcbb10', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=640&crop=smart&auto=webp&s=f0cbc57145e1e90d4cb9df79be95c37e01dcf9c0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=960&crop=smart&auto=webp&s=4500aefe4ca70fe3b63a811f352a947161cc44bb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?width=1080&crop=smart&auto=webp&s=b63db4c81d03c38554cbeba86ffa2c8eb2fa996f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xJRFfXSb0-3ELzcfnFQnUzJAEC6cERxHwdYg5uE4kPE.jpg?auto=webp&s=10744fc5d7926a514df40df3ece275baba4de832', 'width': 1200}, 'variants': {}}]} |
Tool use causes brainrot | 1 | [removed] | 2024-12-14T23:26:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hefgmf/tool_use_causes_brainrot/ | AlarBlip | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hefgmf | false | null | t3_1hefgmf | /r/LocalLLaMA/comments/1hefgmf/tool_use_causes_brainrot/ | false | false | self | 1 | null |
Tool use causes the model to become wierd | 1 | [removed] | 2024-12-14T23:27:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hefhe7/tool_use_causes_the_model_to_become_wierd/ | AlarBlip | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hefhe7 | false | null | t3_1hefhe7 | /r/LocalLLaMA/comments/1hefhe7/tool_use_causes_the_model_to_become_wierd/ | false | false | self | 1 | null |
Grassroots Project! Building a new multilingual LLMs | 1 | [removed] | 2024-12-14T23:49:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hefwbd/grassroots_project_building_a_new_multilingual/ | gentaiscool | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hefwbd | false | null | t3_1hefwbd | /r/LocalLLaMA/comments/1hefwbd/grassroots_project_building_a_new_multilingual/ | false | false | self | 1 | null |
Size Mismatch For clip.text_model.embeddings.token_embeddings.weight | 1 | I merged GPT-2 with CLiP using a script I will provide here: https://huggingface.co/yuchenxie/ArlowGPT-VLM-Untrained
Using this example inferencing script:
```python
import os
import json
import torch
from transformers import GPT2Tokenizer, CLIPProcessor
from safetensors.torch import load_file
from huggingface_hub import hf_hub_download
class ArlowGPT(torch.nn.Module):
def __init__(self, config_path: str, safetensor_path: str):
super().__init__()
from transformers import CLIPModel, GPT2LMHeadModel
print("Loading model configuration...")
# Load the configuration file
with open(config_path, "r") as f:
config = json.load(f)
# Extract configuration details
self.projection_dim = config["projection_dim"]
# Initialize CLIP and GPT-2 models without loading pretrained weights
self.clip = CLIPModel(CLIPModel.config_class.from_dict(config["clip_config"]))
self.gpt2 = GPT2LMHeadModel(GPT2LMHeadModel.config_class.from_dict(config["gpt2_config"]))
# Add projection layer to align dimensions
clip_hidden_size = self.clip.config.vision_config.hidden_size
self.clip_projection = torch.nn.Linear(clip_hidden_size, self.projection_dim)
print("Loading merged model weights...")
state_dict = load_file(safetensor_path)
self.load_state_dict(state_dict, strict=False) # Use strict=False if necessary
def forward(self, input_ids, attention_mask, pixel_values):
# Process image with CLIP
vision_outputs = self.clip.vision_model(pixel_values=pixel_values)
encoder_hidden_states = vision_outputs.last_hidden_state
# Project CLIP embeddings to GPT-2's expected size
encoder_hidden_states = self.clip_projection(encoder_hidden_states)
# Create attention mask for CLIP embeddings
encoder_attention_mask = torch.ones(
encoder_hidden_states.size()[:-1], dtype=torch.long
).to(encoder_hidden_states.device)
# Process text through GPT-2 with cross-attention
outputs = self.gpt2(
input_ids=input_ids,
attention_mask=attention_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
)
return outputs.logits
def load_custom_model(repo_id: str):
# Download configuration and weights from Hugging Face hub
print("Downloading model files from Hugging Face...")
config_path = hf_hub_download(repo_id=repo_id, filename="config.json")
safetensor_path = hf_hub_download(repo_id=repo_id, filename="model.safetensors")
tokenizer_path = repo_id # Tokenizer will be loaded directly by repo_id
processor_path = repo_id # Processor will be loaded directly by repo_id
# Load custom model
model = ArlowGPTModel(config_path, safetensor_path)
# Load tokenizer and processor
tokenizer = GPT2Tokenizer.from_pretrained(tokenizer_path)
processor = CLIPProcessor.from_pretrained(processor_path)
# Ensure padding token is set
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
print("Model, tokenizer, and processor loaded successfully.")
return model, tokenizer, processor
if __name__ == "__main__":
# Specify the Hugging Face repo ID for the merged model
repo_id = "yuchenxie/ArlowGPT-VLM-Untrained" # Update to your merged model repo
# Load the model, tokenizer, and processor
model, tokenizer, processor = load_custom_model(repo_id)
# Prepare example inputs
example_text = "What is happening in the image?"
inputs = tokenizer(
example_text,
padding="max_length",
truncation=True,
max_length=128,
return_tensors="pt",
)
# Example image input (replace with an actual preprocessed image)
example_image = torch.zeros((1, 3, 224, 224)) # Dummy image tensor
processed_image = processor(images=example_image, return_tensors="pt").pixel_values
# Run inference
print("Running inference...")
with torch.no_grad():
logits = model(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
pixel_values=processed_image,
)
# Decode the output
predicted_ids = torch.argmax(logits, dim=-1)
output_text = tokenizer.decode(predicted_ids[0], skip_special_tokens=True)
print("Generated Text:", output_text)
```
I get this error:
```python
RuntimeError: Error(s) in loading state_dict for ArlowGPTModel:
size mismatch for clip.text_model.embeddings.token_embedding.weight: copying a param with shape torch.Size([49408, 768]) from checkpoint, the shape in current model is torch.Size([49408, 512]).
size mismatch for clip.text_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([77, 768]) from checkpoint, the shape in current model is torch.Size([77, 512]).
size mismatch for clip.text_model.encoder.layers.0.self_attn.k_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for clip.text_model.encoder.layers.0.self_attn.k_proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for clip.text_model.encoder.layers.0.self_attn.v_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for clip.text_model.encoder.layers.0.self_attn.v_proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for clip.text_model.encoder.layers.0.self_attn.q_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for clip.text_model.encoder.layers.0.self_attn.q_proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for clip.text_model.encoder.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for clip.text_model.encoder.layers.0.self_attn.out_proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for clip.text_model.encoder.layers.0.layer_norm1.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for clip.text_model.encoder.layers.0.layer_norm1.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for clip.text_model.encoder.layers.0.mlp.fc1.weight: copying a param with shape torch.Size([3072, 768]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for clip.text_model.encoder.layers.0.mlp.fc1.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([2048]).
size mismatch for clip.text_model.encoder.layers.0.mlp.fc2.weight: copying a param with shape torch.Size([768, 3072]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for clip.text_model.encoder.layers.0.mlp.fc2.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for clip.text_model.encoder.layers.0.layer_norm2.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for clip.text_model.encoder.layers.0.layer_norm2.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for clip.text_model.encoder.layers.1.self_attn.k_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for clip.text_model.encoder.layers.1.self_attn.k_proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for clip.text_model.encoder.layers.1.self_attn.v_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for clip.text_model.encoder.layers.1.self_attn.v_proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
```
The list continues for a long time, but parts of it were like this.
Can someone guide me on how to fix this issue? | 2024-12-14T23:55:14 | https://www.reddit.com/r/LocalLLaMA/comments/1heg0pf/size_mismatch_for_cliptext_modelembeddingstoken/ | MichaelXie4645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heg0pf | false | null | t3_1heg0pf | /r/LocalLLaMA/comments/1heg0pf/size_mismatch_for_cliptext_modelembeddingstoken/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'wZgo2uXzMlHSRRvHALlsoeTfH6Qq0H1q_eFA_GwzMJ8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ab6kfAG0APcR5h1jC_4FI7AC9CtAcG0RI-Qn7wzXHWQ.jpg?width=108&crop=smart&auto=webp&s=554be6ca827c36023749528cddcac7e0004d8cab', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ab6kfAG0APcR5h1jC_4FI7AC9CtAcG0RI-Qn7wzXHWQ.jpg?width=216&crop=smart&auto=webp&s=64d00c489351ada3ff29075d19490489788a1567', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ab6kfAG0APcR5h1jC_4FI7AC9CtAcG0RI-Qn7wzXHWQ.jpg?width=320&crop=smart&auto=webp&s=b25ee9cb9f9728606e230ef11bcb309387925f57', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ab6kfAG0APcR5h1jC_4FI7AC9CtAcG0RI-Qn7wzXHWQ.jpg?width=640&crop=smart&auto=webp&s=a58df1fe743b0822104d85afb96442cec6e9d514', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ab6kfAG0APcR5h1jC_4FI7AC9CtAcG0RI-Qn7wzXHWQ.jpg?width=960&crop=smart&auto=webp&s=184c364e3debe7b4f246707608c8b03ae1830fe9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ab6kfAG0APcR5h1jC_4FI7AC9CtAcG0RI-Qn7wzXHWQ.jpg?width=1080&crop=smart&auto=webp&s=a0751c14573e724aefddc73b03b390d80c9c4d86', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ab6kfAG0APcR5h1jC_4FI7AC9CtAcG0RI-Qn7wzXHWQ.jpg?auto=webp&s=33d59e24097bfaf70a4b1ebfee0de2a7bab1b18f', 'width': 1200}, 'variants': {}}]} |
1-2m context length solution needed | 1 | [removed] | 2024-12-15T00:00:08 | https://www.reddit.com/r/LocalLLaMA/comments/1heg42e/12m_context_length_solution_needed/ | Inevitable_Stay_9187 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heg42e | false | null | t3_1heg42e | /r/LocalLLaMA/comments/1heg42e/12m_context_length_solution_needed/ | false | false | self | 1 | null |
What model to use as the draft for eva qwen 2.5 32b? | 1 | Basically, title. I am using eva qwen 2.5 32b v 0.2 and I'm trying to use the draft model 1.5B v0.0 and running into serious slowdowns (like 25% speed in token generation).
I'm fairly new to draft models - does it need to be the exact same version/build as the larger model?
When using the standard qwen 2.5 7b with the 1.5b qwen 2.5b draft model, it works fine and improves speed by 50% on repeat replies. | 2024-12-15T00:02:11 | https://www.reddit.com/r/LocalLLaMA/comments/1heg5pq/what_model_to_use_as_the_draft_for_eva_qwen_25_32b/ | mayo551 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heg5pq | false | null | t3_1heg5pq | /r/LocalLLaMA/comments/1heg5pq/what_model_to_use_as_the_draft_for_eva_qwen_25_32b/ | false | false | self | 1 | null |
Where do you think the cutoff is for a model to be considered "usable" in terms of tokens per second? | 22 | In my view, anything slower than 10 tokens per second isn't really practical for actual work. What do you think? | 2024-12-15T00:11:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hegcl0/where_do_you_think_the_cutoff_is_for_a_model_to/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hegcl0 | false | null | t3_1hegcl0 | /r/LocalLLaMA/comments/1hegcl0/where_do_you_think_the_cutoff_is_for_a_model_to/ | false | false | self | 22 | null |
Total vram versus faster bandwidth | 5 | So say I'm making a home server for running AI stuff like large LLMs(70b+) or running Hunyuan, and the mobo and cpu are the same in both which would run better:
4x m10s for 128gb vram and 83gb/s x4 with 2560 cuda cores per card.
4x p40s for 96gb vram with 336gb/s and 3840 cuda cores per card.
The p40s cost $400 more for 4 and are clearly much faster since they're 1 generation faster, but the m10s will give 32gb more total vram allowing a bigger model without having to offload to the cpu.
So which setup would be better for a cheap server? More bandwidth/cores or higher total memory? | 2024-12-15T00:35:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hegszt/total_vram_versus_faster_bandwidth/ | asdrabael01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hegszt | false | null | t3_1hegszt | /r/LocalLLaMA/comments/1hegszt/total_vram_versus_faster_bandwidth/ | false | false | self | 5 | null |
Is there a better model than Starcannon Unleashed? | 0 | Hey,
so i currently use Starcannon Unleashed as my main model and i've been wondering if you guys know anything better that runs at a similar size?
I mainly use it for story-writing and general chats. I'd like to know if there's something with more accurate and up-to-date knowledge. Essentially something both creative and factual when needed.
As i understand it the Qwen 2.5 Coder models are essentially unmatched when it comes to programming so if that's true, i'll still use that for my programming needs. | 2024-12-15T00:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hegzdc/is_there_a_better_model_than_starcannon_unleashed/ | HRudy94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hegzdc | false | null | t3_1hegzdc | /r/LocalLLaMA/comments/1hegzdc/is_there_a_better_model_than_starcannon_unleashed/ | false | false | self | 0 | null |
Speculative Decoding Metrics with TabbyAPI | 18 | Hey folks. I'm curious if you have any metrics and/or qualitative results to share from using speculative decoding.
My setup: 6 NVIDIA RTX A4000s.
I've been experimenting with draft models for a few months. My "daily driver" has been Mistral Large 2407 4bpw with Mistral 7b v0.3 4bpw as draft and tensor parallel enabled. I'm currently trying out Llama3.3 70B 6bpw with Llama 3.2 3B 8bpw as draft and tensor parallel enabled.
So far, I much prefer my Mistral Large with a draft model to Llama3.3 70B with a draft model. Speed is comparable, maxing out at \~20t/s.
Here are some performance metrics. I'm using a simple bash [script](https://github.com/strikeoncmputrz/LLM_Scripts/blob/main/TabbyAPI_Utils/load-model.sh) I wrote to interact with the TabbyAPI API.
|Model|Params|Quantization|Context Window|Experts|VRAM|RAM|Max t/s|Command|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|Llama3.3|70b|6.0bpw|131072|N/A|71 GiB|N/A|18.89|./load-model.sh -d turboderp\_Llama-3.2-3B-Instruct-exl2\_8.0bpw -m LoneStriker\_Llama-3.3-70B-Instruct-6.0bpw-h6-exl2\_main - c Q4 -q Q4 -t|
|Llama3.3|70b|4.25bpw|131072|N/A|60 GiB|N/A|21.09|./load-model.sh -d turboderp\_Llama-3.2-3B-Instruct-exl2\_8.0bpw -m bartowski\_Llama-3.3-70B-Instruct-exl2\_4\_25 -t -c Q4 -q Q4|
|Mistral Large|123b|4.0bpw|32768|N/A|80 GiB|N/A|23.0|./load-model.sh -d turboderp\_Mistral-7B-instruct-v0.3-exl2\_4.0bpw -m LoneStriker\_Mistral-Large-Instruct-2407-4.0bpw-h6- exl2\_main -t| | 2024-12-15T01:05:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hehe9i/speculative_decoding_metrics_with_tabbyapi/ | x0xxin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hehe9i | false | null | t3_1hehe9i | /r/LocalLLaMA/comments/1hehe9i/speculative_decoding_metrics_with_tabbyapi/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'p2Ix3LzvrnpxUeUIHhlJsk3E4yyNFHEyuGmcr8pNdMY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1NBYS_KhwAXRs2CiJ3Tpu5nxgIVL4T7bWCd_dGmT4UI.jpg?width=108&crop=smart&auto=webp&s=474c297928fecf5eeb05773cb4253b87c05edb9f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1NBYS_KhwAXRs2CiJ3Tpu5nxgIVL4T7bWCd_dGmT4UI.jpg?width=216&crop=smart&auto=webp&s=543d76c4e35c5987354afb23d2adecd0c6329979', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1NBYS_KhwAXRs2CiJ3Tpu5nxgIVL4T7bWCd_dGmT4UI.jpg?width=320&crop=smart&auto=webp&s=db83aa6c48b2d9875550f2bf04f917b888abb2e2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1NBYS_KhwAXRs2CiJ3Tpu5nxgIVL4T7bWCd_dGmT4UI.jpg?width=640&crop=smart&auto=webp&s=c762f325a58d4ff8a31edf0437cf67b79efe6e19', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1NBYS_KhwAXRs2CiJ3Tpu5nxgIVL4T7bWCd_dGmT4UI.jpg?width=960&crop=smart&auto=webp&s=80a6e0f8e81243e4d6995d98bf52a551f1696b02', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1NBYS_KhwAXRs2CiJ3Tpu5nxgIVL4T7bWCd_dGmT4UI.jpg?width=1080&crop=smart&auto=webp&s=2f1b94509dc625b6b7ba3dc10133fd55c8aaa365', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1NBYS_KhwAXRs2CiJ3Tpu5nxgIVL4T7bWCd_dGmT4UI.jpg?auto=webp&s=27ffd78af66b350b849980dc4b8e15b9a1b03649', 'width': 1200}, 'variants': {}}]} |
Can you run a HuggingFace safetensors model in Godot/Javascript for cross platform compatible LLMs? | 1 | Trying to find a way to run a HuggingFace model on both PC and mobile, I'd imagine it would be cpu only for compatibility reasons. But is this possible? | 2024-12-15T01:32:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hehvyu/can_you_run_a_huggingface_safetensors_model_in/ | gavff64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hehvyu | false | null | t3_1hehvyu | /r/LocalLLaMA/comments/1hehvyu/can_you_run_a_huggingface_safetensors_model_in/ | false | false | self | 1 | null |
Who has the best local LLM rig? | 21 | I'm curious how far out people get. | 2024-12-15T02:57:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hejd8y/who_has_the_best_local_llm_rig/ | Vegetable_Sun_9225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hejd8y | false | null | t3_1hejd8y | /r/LocalLLaMA/comments/1hejd8y/who_has_the_best_local_llm_rig/ | false | false | self | 21 | null |
*NEW Grok2 Prompt | 1 | [removed] | 2024-12-15T03:05:24 | https://www.reddit.com/r/LocalLLaMA/comments/1heji8x/new_grok2_prompt/ | gioahumada | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heji8x | false | null | t3_1heji8x | /r/LocalLLaMA/comments/1heji8x/new_grok2_prompt/ | false | false | self | 1 | null |
Grok2 Prompt | 1 | [removed] | 2024-12-15T03:20:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hejrn3/grok2_prompt/ | gioahumada | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hejrn3 | false | null | t3_1hejrn3 | /r/LocalLLaMA/comments/1hejrn3/grok2_prompt/ | false | false | self | 1 | null |
Can any model identify chords? | 5 | Has scaling yielded this capability yet? Can any model (local or closed source) identify notes and/or chords in audio files? | 2024-12-15T03:25:00 | https://www.reddit.com/r/LocalLLaMA/comments/1heju9b/can_any_model_identify_chords/ | rm-rf_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heju9b | false | null | t3_1heju9b | /r/LocalLLaMA/comments/1heju9b/can_any_model_identify_chords/ | false | false | self | 5 | null |
Help with setting up local LLM | 0 | I've been at this for 6 hours trying to setup my LLM locally. Chatgpt has been "directing" me through most of the process but I just keep going in circles. Is there anyone that's generous enough to help me out with finishing it? I'm trying to install llama 2 7b. | 2024-12-15T03:30:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hejxh2/help_with_setting_up_local_llm/ | Charlezmantion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hejxh2 | false | null | t3_1hejxh2 | /r/LocalLLaMA/comments/1hejxh2/help_with_setting_up_local_llm/ | false | false | self | 0 | null |
Qwen2.5 32B apache license in top 5 , never bet against open source | 308 | 2024-12-15T03:51:04 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1heka1z | false | null | t3_1heka1z | /r/LocalLLaMA/comments/1heka1z/qwen25_32b_apache_license_in_top_5_never_bet/ | false | false | 308 | {'enabled': True, 'images': [{'id': 'D-4wssIAtT2lcLWtlYPY1kSemG84MmCiLPDE9EPW8uE', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/lggaoec1qx6e1.jpeg?width=108&crop=smart&auto=webp&s=59f3597e7cfa4eadf0254f988e7ff8674b774eb3', 'width': 108}, {'height': 185, 'url': 'https://preview.redd.it/lggaoec1qx6e1.jpeg?width=216&crop=smart&auto=webp&s=246b349b8de226eb37534d125c25e6b5e1c43772', 'width': 216}, {'height': 274, 'url': 'https://preview.redd.it/lggaoec1qx6e1.jpeg?width=320&crop=smart&auto=webp&s=cd074155e6e3e0ff499c7055dbe385bd2764ea44', 'width': 320}, {'height': 549, 'url': 'https://preview.redd.it/lggaoec1qx6e1.jpeg?width=640&crop=smart&auto=webp&s=516c8515caf0376386611945c0b45b83fcc22e9e', 'width': 640}, {'height': 823, 'url': 'https://preview.redd.it/lggaoec1qx6e1.jpeg?width=960&crop=smart&auto=webp&s=25582e1f26c1ce5a185b967f30e06caac3892bea', 'width': 960}, {'height': 926, 'url': 'https://preview.redd.it/lggaoec1qx6e1.jpeg?width=1080&crop=smart&auto=webp&s=8d48da0b1476b178b80b7ebffbcf906acb28c7e5', 'width': 1080}], 'source': {'height': 1757, 'url': 'https://preview.redd.it/lggaoec1qx6e1.jpeg?auto=webp&s=2bd28970ec2293b4d5567243be1eed169f300a0e', 'width': 2048}, 'variants': {}}]} |
|||
Create open webui Functions using Qwen Coder | 39 | I found the open WebUI functions really useful for automation and improving response quality.
So, I created a simple system prompt to teach Qwen Coder to write functions for me. I might as well share it here:
# [https://pastebin.com/zDm5mJBB](https://pastebin.com/zDm5mJBB)
Here is a CoT function created by Qwen Coder 32B Q4 with this system prompt in action:
https://i.redd.it/tyy0oa7grx6e1.gif
As you can see, it successfully commanded the LLM to use CoT, wrap its thoughts in <thinking> tag, then provide the answer in <answer> tags, and then automatically clean up the response.
I didn't make any modifications to the function code; I just copied and pasted it, and it works. | 2024-12-15T04:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hekhkk/create_open_webui_functions_using_qwen_coder/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hekhkk | false | null | t3_1hekhkk | /r/LocalLLaMA/comments/1hekhkk/create_open_webui_functions_using_qwen_coder/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]} |
|
Navigating the world of Harry Potter with Knowledge Graphs | 1 | [removed] | 2024-12-15T04:08:57 | https://dev.to/wonlewis/navigating-the-world-of-harry-potter-with-knowledge-graphs-j7i | LewisCYW | dev.to | 1970-01-01T00:00:00 | 0 | {} | 1hekklr | false | null | t3_1hekklr | /r/LocalLLaMA/comments/1hekklr/navigating_the_world_of_harry_potter_with/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'kflnayF5Vy21OG5DMgdSxr2yL4VEEQjYYh1vx584a54', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ih9Kkmp0fYzOeI_NkufSqLwY_UaYoRRhBr1TxvgRAkI.jpg?width=108&crop=smart&auto=webp&s=d66ba74aa3b77692d53f0106b30f492d5e6f870f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ih9Kkmp0fYzOeI_NkufSqLwY_UaYoRRhBr1TxvgRAkI.jpg?width=216&crop=smart&auto=webp&s=3c19e485e6819f2d9d077bae2af716964222de42', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ih9Kkmp0fYzOeI_NkufSqLwY_UaYoRRhBr1TxvgRAkI.jpg?width=320&crop=smart&auto=webp&s=faa5a214652ae5311e913e87443625f13b0debb5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ih9Kkmp0fYzOeI_NkufSqLwY_UaYoRRhBr1TxvgRAkI.jpg?width=640&crop=smart&auto=webp&s=28635084c0dcd2d3addf73997e0969d4224a8b3f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ih9Kkmp0fYzOeI_NkufSqLwY_UaYoRRhBr1TxvgRAkI.jpg?width=960&crop=smart&auto=webp&s=b132956671971291a111768784ac82dfeeb17c90', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/ih9Kkmp0fYzOeI_NkufSqLwY_UaYoRRhBr1TxvgRAkI.jpg?auto=webp&s=ae2ee040471a6b49e02d577eb8540f50b29d23fe', 'width': 1000}, 'variants': {}}]} |
|
Are these results expected for 2-bit quantization? This is Llama-3.1-8b-instruct running on my iPhone. Fails on reasoning sometimes but I’m surprised by the correct answers. | 6 | 2024-12-15T04:12:29 | https://www.reddit.com/gallery/1hekmsf | gavff64 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hekmsf | false | null | t3_1hekmsf | /r/LocalLLaMA/comments/1hekmsf/are_these_results_expected_for_2bit_quantization/ | false | false | 6 | null |
||
Not sure where to post this but maybe others here are interested | 1 | [removed] | 2024-12-15T04:59:02 | OccasionllyAsleep | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1heldk9 | false | null | t3_1heldk9 | /r/LocalLLaMA/comments/1heldk9/not_sure_where_to_post_this_but_maybe_others_here/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'C17kSoQ1O1vepNr4TLccwDFASZku1pPSge_meKxx-Hg', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/qt80h4i52y6e1.png?width=108&crop=smart&auto=webp&s=d2af4ebfa877f8696cdc4b1630491158cd5dd131', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/qt80h4i52y6e1.png?width=216&crop=smart&auto=webp&s=300179ca1f41e47584131092266102d9700d3a43', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/qt80h4i52y6e1.png?width=320&crop=smart&auto=webp&s=b79a525e0cbd7d3f5c085614c1541cace040d3c5', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/qt80h4i52y6e1.png?width=640&crop=smart&auto=webp&s=f8ad0f323528fddc6f78fb15b55264178088bddf', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/qt80h4i52y6e1.png?width=960&crop=smart&auto=webp&s=29fb501c480ada28b00315f7ff83ccbee3eeedaf', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/qt80h4i52y6e1.png?width=1080&crop=smart&auto=webp&s=d7f88d3da813739ca13335610422de829d6632d6', 'width': 1080}], 'source': {'height': 2424, 'url': 'https://preview.redd.it/qt80h4i52y6e1.png?auto=webp&s=17f06eae8557c49f5cdc146d05f903c1e7e81f2d', 'width': 1080}, 'variants': {}}]} |
||
Fun prompts to test the capabilities of Phi-4? | 0 | Thanks to [this](https://www.reddit.com/r/LocalLLaMA/s/xl48fccT2S) post, I was able to get phi 4 running on my own ollama server. I've been playing with a few CoT prompts, and it failed the strawberry test, but what else can I try? | 2024-12-15T05:40:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hem0sj/fun_prompts_to_test_the_capabilities_of_phi4/ | roz303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hem0sj | false | null | t3_1hem0sj | /r/LocalLLaMA/comments/1hem0sj/fun_prompts_to_test_the_capabilities_of_phi4/ | false | false | self | 0 | null |
ChatGPT advanced voice (Santa) + Vision | 1 | 2024-12-15T06:19:51 | https://v.redd.it/4phl9l22gy6e1 | Maximilien_Loinapied | /r/LocalLLaMA/comments/1hemlyq/chatgpt_advanced_voice_santa_vision/ | 1970-01-01T00:00:00 | 0 | {} | 1hemlyq | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4phl9l22gy6e1/DASHPlaylist.mpd?a=1736967441%2COWRjM2Y5YTcxZGEwOGZhY2JmYjBkNDJjOTk3M2UzY2MzNTNiZGU5NDYyYmI1NDk5NzZiMTJlMDI3NDc0MzBjMQ%3D%3D&v=1&f=sd', 'duration': 537, 'fallback_url': 'https://v.redd.it/4phl9l22gy6e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/4phl9l22gy6e1/HLSPlaylist.m3u8?a=1736967441%2CYWY5ZmJhYjAxNTQyM2IxOWNmOTUxMDFiYjY1OGZmMmM3NWJkNWE1NzIyZmE2MTViY2M1M2ZmZTk4MDZmYzY2Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4phl9l22gy6e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1hemlyq | /r/LocalLLaMA/comments/1hemlyq/chatgpt_advanced_voice_santa_vision/ | false | false | default | 1 | null |
|
xAI Grok 2 1212 | 52 | 2024-12-15T06:24:33 | https://x.com/xai/status/1868045132760842734 | ahmetegesel | x.com | 1970-01-01T00:00:00 | 0 | {} | 1hemodt | false | null | t3_1hemodt | /r/LocalLLaMA/comments/1hemodt/xai_grok_2_1212/ | false | false | 52 | {'enabled': False, 'images': [{'id': 'utdSwvdLHFppg8kWgKaxICksDczUe7l9vy5HLMflC-Y', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/rjkxtbTf8EuPiR-eWta4yiuBXe2DicJZTQ3H0TtiSoY.jpg?width=108&crop=smart&auto=webp&s=65e0990b093e8af2312808898060d158657fd611', 'width': 108}, {'height': 145, 'url': 'https://external-preview.redd.it/rjkxtbTf8EuPiR-eWta4yiuBXe2DicJZTQ3H0TtiSoY.jpg?width=216&crop=smart&auto=webp&s=2b4f0fadb2877cb5198b019e9fa8d3ae7ce73ee9', 'width': 216}, {'height': 215, 'url': 'https://external-preview.redd.it/rjkxtbTf8EuPiR-eWta4yiuBXe2DicJZTQ3H0TtiSoY.jpg?width=320&crop=smart&auto=webp&s=098c32d6cbd93ad112501b786cdf2cf1ca2aae7d', 'width': 320}, {'height': 431, 'url': 'https://external-preview.redd.it/rjkxtbTf8EuPiR-eWta4yiuBXe2DicJZTQ3H0TtiSoY.jpg?width=640&crop=smart&auto=webp&s=6a54345d803e3e63d1dc1dbcb0d8e759239c4b38', 'width': 640}, {'height': 646, 'url': 'https://external-preview.redd.it/rjkxtbTf8EuPiR-eWta4yiuBXe2DicJZTQ3H0TtiSoY.jpg?width=960&crop=smart&auto=webp&s=f672d0b7022787df5ba8463f3f5d6a1c846f5b74', 'width': 960}, {'height': 727, 'url': 'https://external-preview.redd.it/rjkxtbTf8EuPiR-eWta4yiuBXe2DicJZTQ3H0TtiSoY.jpg?width=1080&crop=smart&auto=webp&s=032f04f0214bd653a06319ccee190f02f24d4290', 'width': 1080}], 'source': {'height': 1334, 'url': 'https://external-preview.redd.it/rjkxtbTf8EuPiR-eWta4yiuBXe2DicJZTQ3H0TtiSoY.jpg?auto=webp&s=5a9aeb9d7db0ee9275d015cc513b492e07c3bdf9', 'width': 1980}, 'variants': {}}]} |
||
Voice cloning help. | 1 | [removed] | 2024-12-15T06:31:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hems37/voice_cloning_help/ | nallanahaari | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hems37 | false | null | t3_1hems37 | /r/LocalLLaMA/comments/1hems37/voice_cloning_help/ | false | false | self | 1 | null |
Seeking Advice: Training a Local LLM for Basic Product Support – Any Experience with Similar Projects? | 3 | Hi everyone,
I’m trying to streamline some processes at work by fine-tuning/training a local LLM (e.g., Llama 3.2) to handle some tier 1 support for my company’s products. We build software tools, and the goal is to streamline common support queries by training the model on our existing knowledge sources:
* Website content (Product pages and Blog Posts)
* Knowledgebase articles
* YouTube tutorial videos
* Support tickets
I'm getting stuck on the first step of converting all these knowledge sources into structured data which the model can ingest.
I’ve tinkered with tools like ScrapeGraphAI, but found it challenging to adapt for this specific purpose. There are so many options for niche tools, and every new search seems to introduce more noise than clarity. My focus is on free, open-source tools that I can implement locally without relying on cloud-based solutions.
I’d love to hear from anyone who has:
* Fine-tuned an LLM for customer support purposes (or similar tasks).
* Experience integrating diverse data sources (text from websites, video captions, code, etc.) into a model’s training pipeline.
* Recommendations on efficient workflows, preprocessing steps, or tools that worked well for you.
My priority is to ensure the model can handle common support queries effectively and safely, but I’m struggling to figure out the best tools and workflows to get there. Any advice, tools, or resources (especially free/open-source ones) would be hugely appreciated!
I’m also trying to avoid pitfalls that others may have encountered, so tips on what not to do would also be incredibly helpful.
Thanks in advance! Looking forward to learning from your experiences! 😊 | 2024-12-15T06:36:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hemubj/seeking_advice_training_a_local_llm_for_basic/ | better_meow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hemubj | false | null | t3_1hemubj | /r/LocalLLaMA/comments/1hemubj/seeking_advice_training_a_local_llm_for_basic/ | false | false | self | 3 | null |
Compute cards in a TYAN s8253 | 0 | I've been planning building my first AI server and has anyone tried putting something like a P40 in the TYAN s8253, and does the CPU get in the way?
Thanks | 2024-12-15T08:23:06 | https://www.reddit.com/r/LocalLLaMA/comments/1heoa1d/compute_cards_in_a_tyan_s8253/ | Psychological_Ear393 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heoa1d | false | null | t3_1heoa1d | /r/LocalLLaMA/comments/1heoa1d/compute_cards_in_a_tyan_s8253/ | false | false | self | 0 | null |
Pixtral & Qwen2VL are coming to Ollama | 201 | Just saw this commit on GitHub | 2024-12-15T08:45:19 | AaronFeng47 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1heokci | false | null | t3_1heokci | /r/LocalLLaMA/comments/1heokci/pixtral_qwen2vl_are_coming_to_ollama/ | false | false | 201 | {'enabled': True, 'images': [{'id': 'bBOXUvPTdfBHxlMBTbRWd3j7cEtSQ6wjqQlmh_y5sFc', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/2efqqe5j6z6e1.png?width=108&crop=smart&auto=webp&s=a821eaa755a3bf966b34aa18d804c80881d20b0b', 'width': 108}, {'height': 130, 'url': 'https://preview.redd.it/2efqqe5j6z6e1.png?width=216&crop=smart&auto=webp&s=8b76a85a859667f96429aaa6f021fd956985dc97', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/2efqqe5j6z6e1.png?width=320&crop=smart&auto=webp&s=6c6392be6f32c9f5a45e31738913753c02a35a95', 'width': 320}, {'height': 385, 'url': 'https://preview.redd.it/2efqqe5j6z6e1.png?width=640&crop=smart&auto=webp&s=4caf8b4d9b7cec2ca13bd1c762b8d453469dde8f', 'width': 640}, {'height': 578, 'url': 'https://preview.redd.it/2efqqe5j6z6e1.png?width=960&crop=smart&auto=webp&s=86d56fc3f7ac6779aa93197560a0efcc2a3ab0a0', 'width': 960}, {'height': 651, 'url': 'https://preview.redd.it/2efqqe5j6z6e1.png?width=1080&crop=smart&auto=webp&s=f1637cf5e588aa2bc26a4746e9c22fdf999692ec', 'width': 1080}], 'source': {'height': 861, 'url': 'https://preview.redd.it/2efqqe5j6z6e1.png?auto=webp&s=cd2cd0184188d9cbc9171264a368c7397d2b25e8', 'width': 1428}, 'variants': {}}]} |
||
Which model would provide detailed analysis of person's picture? | 0 | I tried public models (chatGPT, Gemini and Claude) and they refuse to analyse images of people. Would any model provide detailed description that I would use to generate other images? | 2024-12-15T08:47:41 | https://www.reddit.com/r/LocalLLaMA/comments/1heolgs/which_model_would_provide_detailed_analysis_of/ | vonGlick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heolgs | false | null | t3_1heolgs | /r/LocalLLaMA/comments/1heolgs/which_model_would_provide_detailed_analysis_of/ | false | false | self | 0 | null |
My 3b llama 3.2 local says it is running on 1b parameters. | 0 | Hey, I prompted my llama3.2 3b about the number of parameters it is running on, and it said it is running on 1b. Can anyone explain this? | 2024-12-15T09:06:04 | https://www.reddit.com/r/LocalLLaMA/comments/1heou1v/my_3b_llama_32_local_says_it_is_running_on_1b/ | Rozwik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heou1v | false | null | t3_1heou1v | /r/LocalLLaMA/comments/1heou1v/my_3b_llama_32_local_says_it_is_running_on_1b/ | false | false | self | 0 | null |
Recommendations for the Best OCR Model for Extracting Text from Complex Labels? | 14 | I want to use any VLM to get the ingredients from any packed food item, should I go with pixtral or any smaller one can help?
\- should I go with quantisation of pixtral
I’m working on a project that involves extracting text from **packaged food labels**. These labels often have **small text, varying fonts, and challenging layouts**. I’m considering using **Pixtral OCR** but want to explore if there are better options out there.
# Questions:
1. What are the **most accurate OCR models** or tools for extracting structured data from images?
2. Should I stick with **FP32**, or does **FP16/quantization** make sense for performance optimization without losing much accuracy?
3. Are there any **cutting-edge OCR models** that handle dense and complex text layouts particularly well?
Looking for something that balances **accuracy, speed, and versatility** for real-world label images. Appreciate any recommendations or advice! | 2024-12-15T09:38:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hep8xa/recommendations_for_the_best_ocr_model_for/ | iamnotdeadnuts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hep8xa | false | null | t3_1hep8xa | /r/LocalLLaMA/comments/1hep8xa/recommendations_for_the_best_ocr_model_for/ | false | false | self | 14 | null |
What would convince you that an LLM (or any other AI) has consciousness? | 0 | Is complex problem-solving enough? Or is it just based on a feeling? Or do you just accept that you don't know if or when they have consciousness? | 2024-12-15T09:46:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hepcuv/what_would_convince_you_that_an_llm_or_any_other/ | MustBeSomethingThere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hepcuv | false | null | t3_1hepcuv | /r/LocalLLaMA/comments/1hepcuv/what_would_convince_you_that_an_llm_or_any_other/ | false | false | self | 0 | null |
Create an LLama inference library from scratch | 1 | [removed] | 2024-12-15T09:49:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hepebh/create_an_llama_inference_library_from_scratch/ | FlattenLayer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hepebh | false | null | t3_1hepebh | /r/LocalLLaMA/comments/1hepebh/create_an_llama_inference_library_from_scratch/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'klKLvrs2NjBvDxa9tFCBFxqLAJXjpmp-gGe6q1Awmxo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5v8Ul8aS5Fv4LWozk7Mo_jHIvQD40D5cG1UoY9NfZrU.jpg?width=108&crop=smart&auto=webp&s=64ec0f27696be6dd1f56f979be1c2682b0856617', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5v8Ul8aS5Fv4LWozk7Mo_jHIvQD40D5cG1UoY9NfZrU.jpg?width=216&crop=smart&auto=webp&s=b28287c00f0a9b01d2cb965df971ef72b50b2cc2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5v8Ul8aS5Fv4LWozk7Mo_jHIvQD40D5cG1UoY9NfZrU.jpg?width=320&crop=smart&auto=webp&s=5d52f3cba6c99e932733410672984bf0e29d650b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5v8Ul8aS5Fv4LWozk7Mo_jHIvQD40D5cG1UoY9NfZrU.jpg?width=640&crop=smart&auto=webp&s=6206ae34668f6111ba37a1bc77018e1874dcb6d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5v8Ul8aS5Fv4LWozk7Mo_jHIvQD40D5cG1UoY9NfZrU.jpg?width=960&crop=smart&auto=webp&s=83524ad17a86581ea2b5f5fe0bff8a27a9873dd9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5v8Ul8aS5Fv4LWozk7Mo_jHIvQD40D5cG1UoY9NfZrU.jpg?width=1080&crop=smart&auto=webp&s=73e0d2189cebd29ceabb927732bdc5e1e903f786', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5v8Ul8aS5Fv4LWozk7Mo_jHIvQD40D5cG1UoY9NfZrU.jpg?auto=webp&s=a98c5723aadcf017a4a9fc396d91403badd31afe', 'width': 1200}, 'variants': {}}]} |
|
AI agent can see the frontend while developing it | 106 | 2024-12-15T09:55:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hepgpc/ai_agent_can_see_the_frontend_while_developing_it/ | Grigorij_127 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hepgpc | false | null | t3_1hepgpc | /r/LocalLLaMA/comments/1hepgpc/ai_agent_can_see_the_frontend_while_developing_it/ | false | false | 106 | {'enabled': False, 'images': [{'id': 'GfzXiLyznMYWG1ZcTB-ijWb_SONwEh47G2PyOfOxCsk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HshFjFk3vKGg_qj3dfYvhI5dWTcPpnj_uguNuLMV59U.jpg?width=108&crop=smart&auto=webp&s=92b107b7f20fd225a589c45aec382df1e9843331', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HshFjFk3vKGg_qj3dfYvhI5dWTcPpnj_uguNuLMV59U.jpg?width=216&crop=smart&auto=webp&s=69627120c927ee1cc45612f31b8621b5a76f9b1d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HshFjFk3vKGg_qj3dfYvhI5dWTcPpnj_uguNuLMV59U.jpg?width=320&crop=smart&auto=webp&s=1af66777c29c0e8ec2cbf27deb45f82966c92a30', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HshFjFk3vKGg_qj3dfYvhI5dWTcPpnj_uguNuLMV59U.jpg?width=640&crop=smart&auto=webp&s=a0d2fcc6df27f8f27b34d999ebd05bf459ed8b58', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HshFjFk3vKGg_qj3dfYvhI5dWTcPpnj_uguNuLMV59U.jpg?width=960&crop=smart&auto=webp&s=d14a2861c8ab7068111476827dfed1599fb56463', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HshFjFk3vKGg_qj3dfYvhI5dWTcPpnj_uguNuLMV59U.jpg?width=1080&crop=smart&auto=webp&s=2f4985ded391c1341b8e485053110e82b65c52d3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HshFjFk3vKGg_qj3dfYvhI5dWTcPpnj_uguNuLMV59U.jpg?auto=webp&s=861bed7c474931b0badde5e7b1c1b6763146cbbd', 'width': 1200}, 'variants': {}}]} |
||
LLMs for specific programming languages | 12 | We constantly have posts comparing LLMs in "Coding".
The impression I got is they are mainly tested on NodeJS and Python.
What is your experience with LLMs for specific programming languages?
My experience, for example, has been that ChatGPT is phenomenal for shell scripts, even though it falls severely behind in Golang.
Also, specifically, I would like to hear if anyone has had experience comparing models for complex PHP code? Especially with complex PHP libraries even Claude seems to fail miserably.
Are Chinese (Qwen2.5) models better with PHP? Since it's more dominant in China than the west for complex code. | 2024-12-15T10:15:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hepqfg/llms_for_specific_programming_languages/ | Fun-Aardvark-1143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hepqfg | false | null | t3_1hepqfg | /r/LocalLLaMA/comments/1hepqfg/llms_for_specific_programming_languages/ | false | false | self | 12 | null |
Recommendations for local model with _long_ context window | 0 | Full context - I'm trying to load a PDF which is probably 2M tokens long - and query an LLM on it.
Can you give me some pointers on where to start? Are there even models capable of such a large context?
Are there any "tricks" I can utilize? I have experience with Hugging Face, for example.
| 2024-12-15T10:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1heq5x1/recommendations_for_local_model_with_long_context/ | stankata | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heq5x1 | false | null | t3_1heq5x1 | /r/LocalLLaMA/comments/1heq5x1/recommendations_for_local_model_with_long_context/ | false | false | self | 0 | null |
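For context on what a question like this usually runs into: no local model comfortably takes a 2M-token prompt, so the common workaround is retrieval over chunks rather than one giant context. A minimal sketch, assuming pypdf and sentence-transformers; the file name, chunk sizes, and query are illustrative and not from the post:

```python
# Hedged sketch: chunk the PDF, embed the chunks, and retrieve only the most
# relevant ones to fit a normal-sized context window. Library choices are an
# assumption; any embedding model or vector store would do.
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer, util

reader = PdfReader("big_document.pdf")  # hypothetical ~2M-token PDF
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Overlapping character chunks small enough for any local model's context.
chunk_size, overlap = 2000, 200
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size - overlap)]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_emb = embedder.encode(chunks, convert_to_tensor=True)

query = "What does the document say about warranty terms?"  # example query
query_emb = embedder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, chunk_emb, top_k=5)[0]

# Only a few thousand tokens of retrieved context go into the LLM prompt.
context = "\n---\n".join(chunks[hit["corpus_id"]] for hit in hits)
```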
A functional, nice-looking web UI all written by Gemini Experimental 1206 | 49 | https://reddit.com/link/1heqo18/video/xb2fmvqkyz6e1/player
Obviously, getting it to this state required a lot of corrections and manual editing, but oh god, Gemini being this capable just blows me away.
What do you think? | 2024-12-15T11:23:55 | https://www.reddit.com/r/LocalLLaMA/comments/1heqo18/a_functional_nicelooking_web_ui_all_written_by/ | Sad-Fix-7915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heqo18 | false | null | t3_1heqo18 | /r/LocalLLaMA/comments/1heqo18/a_functional_nicelooking_web_ui_all_written_by/ | false | false | self | 49 | null |
The death of the stubborn developer | 0 | 2024-12-15T11:30:37 | https://sourcegraph.com/blog/the-death-of-the-stubborn-developer | FoxInTheRedBox | sourcegraph.com | 1970-01-01T00:00:00 | 0 | {} | 1heqrg6 | false | null | t3_1heqrg6 | /r/LocalLLaMA/comments/1heqrg6/the_death_of_the_stubborn_developer/ | false | false | 0 | {'enabled': False, 'images': [{'id': '5Hy-o91bAoyCuNZXVynuZIFiJ-YucY-Cd6lCNZr2cWU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/xGr5wXm_QunmR6E0uirM2gaHgFfEKalmvkKkA08NY5M.jpg?width=108&crop=smart&auto=webp&s=90d1b2f4c0a656c755d710687a9bf89e087b393f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/xGr5wXm_QunmR6E0uirM2gaHgFfEKalmvkKkA08NY5M.jpg?width=216&crop=smart&auto=webp&s=67cb96448d1d95fbacc15ec4b13b996506935361', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/xGr5wXm_QunmR6E0uirM2gaHgFfEKalmvkKkA08NY5M.jpg?width=320&crop=smart&auto=webp&s=6a964ea68192a77107c66d20ff81da673e534008', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/xGr5wXm_QunmR6E0uirM2gaHgFfEKalmvkKkA08NY5M.jpg?width=640&crop=smart&auto=webp&s=1af09d569a49ff2a16312dd021a36c3e62db4e40', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/xGr5wXm_QunmR6E0uirM2gaHgFfEKalmvkKkA08NY5M.jpg?width=960&crop=smart&auto=webp&s=f2e37837766233ed8de5f552c4078962b9d201a0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/xGr5wXm_QunmR6E0uirM2gaHgFfEKalmvkKkA08NY5M.jpg?width=1080&crop=smart&auto=webp&s=29dd0efa17939d02a053c59a9a24807029edf0c7', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/xGr5wXm_QunmR6E0uirM2gaHgFfEKalmvkKkA08NY5M.jpg?auto=webp&s=ef7175741c3a41ff89328897ed8ad0c83e823adf', 'width': 1200}, 'variants': {}}]} |
||
Meta AI Introduces Byte Latent Transformer (BLT): A Tokenizer-Free Model | 706 | Meta AI’s Byte Latent Transformer (BLT) is a new AI model that skips tokenization entirely, working directly with raw bytes. This allows BLT to handle any language or data format without pre-defined vocabularies, making it highly adaptable. It’s also more memory-efficient and scales better due to its compact design | 2024-12-15T11:38:27 | https://www.marktechpost.com/2024/12/13/meta-ai-introduces-byte-latent-transformer-blt-a-tokenizer-free-model-that-scales-efficiently/?amp | Legal_Ad4143 | marktechpost.com | 1970-01-01T00:00:00 | 0 | {} | 1heqv6s | false | null | t3_1heqv6s | /r/LocalLLaMA/comments/1heqv6s/meta_ai_introduces_byte_latent_transformer_blt_a/ | false | false | 706 | {'enabled': False, 'images': [{'id': 'ATEhwgj-Iwc6amslJv3xkqiC9O4jFn9sYepoXpvtqvc', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/cJaKDmrdaWxyUN8CvK_bANtl7PeR1201epwEQ3n_XdA.jpg?width=108&crop=smart&auto=webp&s=b15f2f222a2f36448bd40f23b7d2f49a0c29d4cd', 'width': 108}, {'height': 158, 'url': 'https://external-preview.redd.it/cJaKDmrdaWxyUN8CvK_bANtl7PeR1201epwEQ3n_XdA.jpg?width=216&crop=smart&auto=webp&s=fd6bc54f6d9f419f8e8a07653fa45ffcb456affb', 'width': 216}, {'height': 234, 'url': 'https://external-preview.redd.it/cJaKDmrdaWxyUN8CvK_bANtl7PeR1201epwEQ3n_XdA.jpg?width=320&crop=smart&auto=webp&s=5ba9276ff5f8f94246c66af223032602073894f2', 'width': 320}, {'height': 469, 'url': 'https://external-preview.redd.it/cJaKDmrdaWxyUN8CvK_bANtl7PeR1201epwEQ3n_XdA.jpg?width=640&crop=smart&auto=webp&s=7454dcfaae92f4740c77a479cc0c212f98141717', 'width': 640}, {'height': 704, 'url': 'https://external-preview.redd.it/cJaKDmrdaWxyUN8CvK_bANtl7PeR1201epwEQ3n_XdA.jpg?width=960&crop=smart&auto=webp&s=4e6a4c5264982eadf0d21763475c58a9548d2590', 'width': 960}, {'height': 792, 'url': 'https://external-preview.redd.it/cJaKDmrdaWxyUN8CvK_bANtl7PeR1201epwEQ3n_XdA.jpg?width=1080&crop=smart&auto=webp&s=79eed844736cceccb999633940a8ac9da6f2e7d4', 'width': 1080}], 'source': {'height': 1062, 'url': 'https://external-preview.redd.it/cJaKDmrdaWxyUN8CvK_bANtl7PeR1201epwEQ3n_XdA.jpg?auto=webp&s=d18c3641dcd734dc5d90b9cde69bfb4d10e9fd6b', 'width': 1448}, 'variants': {}}]} |
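The tokenizer-free idea is easy to illustrate: a byte-level model's input ids are just the raw UTF-8 bytes, so the "vocabulary" is fixed at 256 values no matter the language or data format (BLT additionally groups those bytes into dynamic latent patches, which this sketch does not show). A minimal illustration of the concept, not Meta's BLT code:

```python
text = "héllo, 世界"

# Tokenizer-free path: input ids are simply the raw UTF-8 bytes (values 0..255),
# so any script or data format is covered without a pre-defined subword vocabulary.
byte_ids = list(text.encode("utf-8"))
print(byte_ids)  # [104, 195, 169, 108, 108, 111, 44, 32, 228, 184, 150, 231, 149, 140]

# Lossless round-trip back to text.
print(bytes(byte_ids).decode("utf-8"))  # héllo, 世界
```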
|
Deepseek 128gb MacBook M4 | 0 | What image gen and coding models would you run?
| 2024-12-15T11:59:45 | https://www.reddit.com/r/LocalLLaMA/comments/1her5vf/deepseek_128gb_macbook_m4/ | sammythecoin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1her5vf | false | null | t3_1her5vf | /r/LocalLLaMA/comments/1her5vf/deepseek_128gb_macbook_m4/ | false | false | self | 0 | null |
does the model mannix/llama3.1-8b-abliterated can describe images ? | 1 | [removed] | 2024-12-15T12:53:52 | https://www.reddit.com/r/LocalLLaMA/comments/1herzvw/does_the_model_mannixllama318babliterated_can/ | HealthySetting3772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1herzvw | false | null | t3_1herzvw | /r/LocalLLaMA/comments/1herzvw/does_the_model_mannixllama318babliterated_can/ | false | false | self | 1 | null |