title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable) | url (string, 0–878 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646–1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars, nullable)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
YES! I don't have to get rid of my rig after all! | 0 | I've built myself a nice chunky little 5-card rack server with 3x 1600W PSUs, and I use it without a care because for the last year or so I've had unlimited electricity. It's not free as such: I pay a bit over £200 a month and that covers all my gas, electric and water. It's really meant for people renting and splitting bills, but not needing to care about power usage allowed me to build my budget beast.
The only problem is that I'm just going through the final steps of buying a house, and the fear that I'd have to actually pay the full energy costs meant I was facing having to sell the big rig. I almost considered a Mac for a second (then I felt dirty and disappointed in myself, lol; I have a bit of a hatred of Macs).
But I then had a look at the site, and it turns out they've expanded and now also cover owned homes! (You used to need a tenancy agreement, but they now accept a sale agreement.) So I get to keep my unlimited power and my rig is safe! I may even treat it to an extra GPU to celebrate | 2024-12-15T12:55:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hes0xi/yes_i_dont_have_to_get_rid_of_my_rig_after_all/ | gaspoweredcat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hes0xi | false | null | t3_1hes0xi | /r/LocalLLaMA/comments/1hes0xi/yes_i_dont_have_to_get_rid_of_my_rig_after_all/ | false | false | self | 0 | null |
Dual 3090 on 850W | 4 | Hi, I finally decided to add another 3090 to my setup. I will try not to upgrade the PSU, but if it turns out unstable then I guess I will have no choice. The question I have is: which CPU for socket AM4 do you recommend? This setup is just for LLMs and deep learning stuff. | 2024-12-15T12:59:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hes2wo/dual_3090_on_850w/ | martinmazur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hes2wo | false | null | t3_1hes2wo | /r/LocalLLaMA/comments/1hes2wo/dual_3090_on_850w/ | false | false | self | 4 | null |
Speed Test #2: Llama.CPP vs MLX with Llama-3.3-70B and Various Prompt Sizes | 42 | Following up on my [test of 2xRTX-3090 vs M3-Max](https://www.reddit.com/r/LocalLLaMA/comments/1he2v2n/speed_test_llama3370b_on_2xrtx3090_vs_m3max_64gb/), I ran the same test to compare Llama.cpp and MLX on my M3-Max 64GB.
### Setup
* Both used the temperature 0.0, top_p 0.9, seed 1000.
* MLX-LM: 0.20.4
* MLX: 0.21.1
* Model: Llama-3.3-70B-Instruct-4bit
* Llama.cpp: b4326
* Model: llama-3.3-70b-instruct-q4_K_M
* Flash attention enabled
### Notes
* MLX seems to be consistently faster than Llama.cpp now.
* On average, MLX processes tokens 1.14x faster and generates tokens 1.12x faster.
* MLX increased fused attention speed in MLX 0.19.0.
* MLX-LM fixed the slow performance bug with long context in 0.20.1.
* Each test is one shot generation (not accumulating prompt via multiturn chat style).
* Speed is in tokens per second.
* Total duration is total execution time, not total time reported from llama.cpp.
* Sometimes you'll see a shorter total duration for a longer prompt than for a shorter one, simply because fewer tokens were generated for the longer prompt (see the sanity-check sketch below).
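As a rough sanity check on the tables below, total execution time is approximately prompt-processing time plus generation time; the leftover seconds are model load and other overhead, since total duration is wall-clock time. A small sketch of that arithmetic, using the numbers from the first MLX row:

```
# Rough reconstruction of total execution time from the reported speeds.
# Numbers are taken from the first MLX row below; the gap to the reported
# 48s total is model loading and other overhead, since "total duration"
# is wall-clock time rather than llama.cpp's reported timing.
prompt_tokens = 260
prompt_speed = 75.871        # prompt processing, tokens/s
generated_tokens = 309
generation_speed = 9.351     # token generation, tokens/s

prompt_time = prompt_tokens / prompt_speed              # ~3.4 s
generation_time = generated_tokens / generation_speed   # ~33.0 s
print(f"estimated: {prompt_time + generation_time:.1f}s vs reported 48s")
```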
### MLX
| Prompt Tokens | Prompt Processing Speed | Generated Tokens | Token Generation Speed | Total Execution Time |
| --- | --- | --- | --- | --- |
| 260 | 75.871 | 309 | 9.351 | 48s |
| 689 | 83.567 | 760 | 9.366 | 1m42s |
| 1171 | 83.843 | 744 | 9.287 | 1m46s |
| 1635 | 83.239 | 754 | 9.222 | 1m53s |
| 2173 | 83.092 | 776 | 9.123 | 2m3s |
| 3228 | 81.068 | 744 | 8.970 | 2m15s |
| 4126 | 79.410 | 724 | 8.917 | 2m25s |
| 6096 | 76.796 | 752 | 8.724 | 2m57s |
| 8015 | 74.840 | 786 | 8.520 | 3m31s |
| 10088 | 72.363 | 887 | 8.328 | 4m18s |
| 12010 | 71.017 | 1139 | 8.152 | 5m20s |
| 14066 | 68.943 | 634 | 7.907 | 4m55s |
| 16003 | 67.948 | 459 | 7.779 | 5m5s |
| 18211 | 66.105 | 568 | 7.604 | 6m1s |
| 20236 | 64.452 | 625 | 7.423 | 6m49s |
| 22188 | 63.332 | 508 | 7.277 | 7m10s |
| 24246 | 61.424 | 462 | 7.121 | 7m50s |
| 26034 | 60.375 | 1178 | 7.019 | 10m9s |
| 28002 | 59.009 | 27 | 6.808 | 8m9s |
| 30136 | 58.080 | 27 | 6.784 | 8m53s |
| 32172 | 56.502 | 27 | 6.482 | 9m44s |
### Llama.CPP
| Prompt Tokens | Prompt Processing Speed | Generated Tokens | Token Generation Speed | Total Execution Time |
| --- | --- | --- | --- | --- |
| 258 | 67.86 | 599 | 8.15 | 1m32s |
| 687 | 66.65 | 1999 | 8.09 | 4m18s |
| 1169 | 72.12 | 581 | 7.99 | 1m30s |
| 1633 | 72.57 | 891 | 7.93 | 2m16s |
| 2171 | 71.87 | 799 | 7.87 | 2m13s |
| 3226 | 69.86 | 612 | 7.78 | 2m6s |
| 4124 | 68.39 | 825 | 7.72 | 2m48s |
| 6094 | 66.62 | 642 | 7.64 | 2m57s |
| 8013 | 65.17 | 863 | 7.48 | 4m |
| 10086 | 63.28 | 766 | 7.34 | 4m25s |
| 12008 | 62.07 | 914 | 7.34 | 5m19s |
| 14064 | 60.80 | 799 | 7.23 | 5m43s |
| 16001 | 59.50 | 714 | 7.00 | 6m13s |
| 18209 | 58.14 | 766 | 6.74 | 7m9s |
| 20234 | 56.88 | 786 | 6.60 | 7m57s |
| 22186 | 55.91 | 724 | 6.69 | 8m27s |
| 24244 | 55.04 | 772 | 6.60 | 9m19s |
| 26032 | 53.74 | 510 | 6.41 | 9m26s |
| 28000 | 52.68 | 768 | 6.23 | 10m57s |
| 30134 | 51.39 | 529 | 6.29 | 11m13s |
| 32170 | 50.32 | 596 | 6.13 | 12m19s | | 2024-12-15T13:07:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hes7wm/speed_test_2_llamacpp_vs_mlx_with_llama3370b_and/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hes7wm | false | null | t3_1hes7wm | /r/LocalLLaMA/comments/1hes7wm/speed_test_2_llamacpp_vs_mlx_with_llama3370b_and/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'JWExjKp3DKDFgnVDTytNslqLL_tbuyJ5_QM4UmdycLI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/C8PeRz7MRm6MQ-k0kqVLyUAbaVTOWuvFCIPzMIO6Lzw.jpg?width=108&crop=smart&auto=webp&s=d1b7bf5f5351e479a2aa89b8c12fb00a3fd6c876', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/C8PeRz7MRm6MQ-k0kqVLyUAbaVTOWuvFCIPzMIO6Lzw.jpg?width=216&crop=smart&auto=webp&s=cec4e877820e85caefb13d20fd16f35eba23b9f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/C8PeRz7MRm6MQ-k0kqVLyUAbaVTOWuvFCIPzMIO6Lzw.jpg?width=320&crop=smart&auto=webp&s=e322e446cbbcb5b4b35b90871a7c60beb20aea56', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/C8PeRz7MRm6MQ-k0kqVLyUAbaVTOWuvFCIPzMIO6Lzw.jpg?width=640&crop=smart&auto=webp&s=2b4f6b1553097f10cb7872b01688f97c9a174b89', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/C8PeRz7MRm6MQ-k0kqVLyUAbaVTOWuvFCIPzMIO6Lzw.jpg?width=960&crop=smart&auto=webp&s=fe82c69f444e879dcde42a9048f38e0fd815af64', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/C8PeRz7MRm6MQ-k0kqVLyUAbaVTOWuvFCIPzMIO6Lzw.jpg?width=1080&crop=smart&auto=webp&s=6d24637373818bc5036e8e953d8cd2b53fbe954c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/C8PeRz7MRm6MQ-k0kqVLyUAbaVTOWuvFCIPzMIO6Lzw.jpg?auto=webp&s=280bb01e742b64628dd9f9524dbe18c237bcb522', 'width': 1200}, 'variants': {}}]} |
This is How Speculative Decoding Speeds the Model up | 63 | I tried to derive the formula for speculative decoding speeds and made this plot to guide the search for speculative decoding parameters:
https://preview.redd.it/prpbobebf07e1.png?width=1536&format=png&auto=webp&s=7823a78b78109fc27fec6b16f51320bc1838fa48
**Parameters:**
* Acceptance Probability: How likely the speculated tokens are correct and accepted by the main model (efficiency measured in exllamav2)
* Ts/Tv ratio: Time cost ratio between draft model speculation and main model verification
* N: Number of tokens to speculate ahead in each cycle
The red line shows where speculative decoding starts to speed up.
Optimal N is found for every point through direct search.
**Quick takeaways:**
1. The draft model should strike a balance between model size (Ts) and acceptance rate to get high speedups
2. Optimal N stays small unless you have both high acceptance rate and low Ts/Tv
These are just theoretical results; for practical use, you still need to test different configurations to see which is fastest.
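For anyone who wants to play with the numbers, here is a hedged sketch of the usual way this expected speedup is written (expected accepted tokens per cycle divided by cycle cost); the exact formula used for the plot may differ:

```
# Theoretical speculative-decoding speedup, assuming a constant per-token
# acceptance probability p, a draft/verify time ratio r = Ts/Tv, and N
# drafted tokens per cycle. This is the standard textbook derivation, not
# necessarily the exact formula used for the plot above.
def expected_speedup(p, r, n):
    # Expected tokens produced per cycle: accepted draft tokens plus the
    # one token the main model always contributes during verification.
    expected_tokens = (1 - p ** (n + 1)) / (1 - p)
    # Cycle cost in units of one main-model forward pass: n draft steps at
    # relative cost r, plus the single verification pass.
    cycle_cost = n * r + 1
    return expected_tokens / cycle_cost

def best_n(p, r, n_max=16):
    # Direct search over N, mirroring how the optimal N is found for the plot.
    return max(range(1, n_max + 1), key=lambda n: expected_speedup(p, r, n))

n = best_n(p=0.8, r=0.2)
print(n, round(expected_speedup(0.8, 0.2, n), 2))
```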
Those who are interested in the derivation and plotting details can visit the repo https://github.com/v2rockets/sd_optimization. | 2024-12-15T13:21:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hesft1/this_is_how_speculative_decoding_speeds_the_model/ | Fluid_Intern5048 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hesft1 | false | null | t3_1hesft1 | /r/LocalLLaMA/comments/1hesft1/this_is_how_speculative_decoding_speeds_the_model/ | false | false | 63 | null |
|
FEDS Notes: Using Generative AI Models to Understand FOMC Monetary Policy Discussions | 17 | Researchers at the Federal Reserve compared several LLMs on context-understanding and classification tasks. It looks interesting and provides some very useful information; for me it reads like a guide to models for summarization. GPT-4o, Gemini-1.5-Pro, and Llama-3.1-70B are pretty good. I hope they include Llama 3.3 in an update.
|Model|Average scores|Average Labor Market + Inflation|Real Activity|Labor Market|Inflation|Financial Stability|Fed funds rate|Balance Sheet|Financial Developments|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|Anthropic.claude-3-5-sonnet-20240620-v1|0.89|0.91|0.84|0.90|0.91|0.86|0.95|0.97|0.82|
|Cohere.command-r-v1|0.85|0.90|0.87|0.89|0.90|0.69|0.92|0.95|0.74|
|Cohere.command-r-plus-v1|0.89|0.90|0.83|0.91|0.89|0.81|0.95|0.96|0.85|
|Meta.llama3-1-70b-instruct-v1|0.91|0.91|0.90|0.90|0.92|0.88|0.95|0.96|0.85|
|Meta.llama3-1-405b-instruct-v1|0.91|0.90|0.91|0.90|0.90|0.87|0.95|0.96|0.84|
|Mistral.mistral-7b-instruct-v0.2|0.88|0.84|0.86|0.81|0.87|0.93|0.94|0.95|0.80|
|Mistral.mixtral-8x7b-instruct-v0.1|0.85|0.84|0.79|0.83|0.85|0.78|0.95|0.96|0.81|
|Meta.llama2-70b-chat-v1|0.80|0.88|0.47|0.85|0.90|0.67|0.94|0.93|0.80|
|Gemini-1.5-pro|0.93|0.92|0.92|0.92|0.92|0.95|0.96|0.96|0.90|
|Gpt-4o-mini|0.82|0.80|0.78|0.81|0.79|0.85|0.90|0.93|0.71|
|Gpt-4o|0.92|0.93|0.90|0.94|0.93|0.94|0.96|0.96|0.83|
|Gpt-4o (chunking)|0.95|0.96|0.94|0.96|0.96|0.97|0.96|0.97|0.89|
|Anthropic.claude-3-5-sonnet-20240620-v1 (chunking)|0.94|0.96|0.92|0.96|0.96|0.97|0.96|0.97|0.87| | 2024-12-15T13:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hesk7z/feds_notes_using_generative_ai_models_to/ | first2wood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hesk7z | false | null | t3_1hesk7z | /r/LocalLLaMA/comments/1hesk7z/feds_notes_using_generative_ai_models_to/ | false | false | self | 17 | null |
git diff patch | 0 | [removed] | 2024-12-15T13:41:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hessjc/git_diff_patch/ | xmmr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hessjc | false | null | t3_1hessjc | /r/LocalLLaMA/comments/1hessjc/git_diff_patch/ | false | false | self | 0 | null |
Anyone figured out how to limit qwq-32b's overthinking? | 1 | I'm experimenting with a voice framework I've been developing that works really well, so I wanted to expand its capabilities with an "analysis mode" the user would enable by speaking. The problem is that this switches the model to qwq-32b, and after several attempts at prompting and modifying parameters (temperature, top_p), qwq-32b continues overthinking despite being instructed to keep its reasoning steps short.
I still think there is a way to get it to think less, but I'm not sure yet where the issue lies. This is such a weird model. It's so good for planning, strategizing and analysis, but it really goes down a deep rabbit hole. It takes ~2 minutes to finish generating a text response on my GPU. Imagine having every sentence generated with that kind of overhead.
Don't get me wrong, its conclusions are spot on and it's so smart in so many ways, which is why I'm trying to wrangle this model into limiting its reasoning steps, but honestly I don't think I have a lot of options here.
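For what it's worth, the bluntest workaround is a hard output cap on the request itself; it truncates the reasoning rather than shortening it, but it at least bounds latency. A rough sketch against a local OpenAI-compatible endpoint (the URL, API key, and model name are placeholders for whatever the framework uses):

```
# Crude bound on qwq-32b's output: cap max_tokens on the request. This only
# truncates long reasoning, it does not make the model think less.
# base_url, api_key and the model name are assumptions about a local
# OpenAI-compatible server, not anything specific to qwq.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="qwq-32b",
    messages=[
        {"role": "system", "content": "Keep your reasoning under 100 words."},
        {"role": "user", "content": "Give a two-sentence analysis of this plan."},
    ],
    max_tokens=512,      # hard ceiling on reasoning plus answer
    temperature=0.7,
)
print(resp.choices[0].message.content)
```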
:/
| 2024-12-15T13:43:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hestzk/anyone_figured_out_how_to_limit_qwq32bs/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hestzk | false | null | t3_1hestzk | /r/LocalLLaMA/comments/1hestzk/anyone_figured_out_how_to_limit_qwq32bs/ | false | false | self | 1 | null |
Anyone figured out how to limit qwq-32b's overthinking? | 1 | I'm experimenting with a voice framework I've been developing that works really well, so I wanted to expand its capabilities with an "analysis mode" the user would enable by speaking. The problem is that this switches the model to qwq-32b, and after several attempts at prompting and modifying parameters (temperature, top_p), qwq-32b continues overthinking despite being instructed to keep its reasoning steps short.
I still think there is a way to get it to think less, but I'm not sure yet where the issue lies. This is such a weird model. It's so good for planning, strategizing and analysis, but it really goes down a deep rabbit hole. It takes ~2 minutes to finish generating a text response on my GPU. Imagine having every sentence generated with that kind of overhead.
Don't get me wrong, its conclusions are spot on and it's so smart in so many ways, which is why I'm trying to wrangle this model into limiting its reasoning steps, but honestly I don't think I have a lot of options here.
:/
| 2024-12-15T13:43:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hesu02/anyone_figured_out_how_to_limit_qwq32bs/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hesu02 | false | null | t3_1hesu02 | /r/LocalLLaMA/comments/1hesu02/anyone_figured_out_how_to_limit_qwq32bs/ | false | false | self | 1 | null |
Google Colab Pro Worth for LoRA Training? | 2 | Or do yall use something else to train your LoRAs? | 2024-12-15T13:45:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hesv34/google_colab_pro_worth_for_lora_training/ | stevelon_mobs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hesv34 | false | null | t3_1hesv34 | /r/LocalLLaMA/comments/1hesv34/google_colab_pro_worth_for_lora_training/ | false | false | self | 2 | null |
About to board a 12h flight with a M4 Max 128GB. What's the best local coding model, as of December 2024, should I download? | 29 | \- to not feel too handicapped while being offiline (compared to SOTA models like Claude 3.5 Sonnet) | 2024-12-15T13:57:15 | https://www.reddit.com/r/LocalLLaMA/comments/1het2sg/about_to_board_a_12h_flight_with_a_m4_max_128gb/ | alphageek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1het2sg | false | null | t3_1het2sg | /r/LocalLLaMA/comments/1het2sg/about_to_board_a_12h_flight_with_a_m4_max_128gb/ | false | false | self | 29 | null |
How to local llm as per openai conventions? | 1 | [removed] | 2024-12-15T14:09:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hetb9b/how_to_local_llm_as_per_openai_conventions/ | anupk11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hetb9b | false | null | t3_1hetb9b | /r/LocalLLaMA/comments/1hetb9b/how_to_local_llm_as_per_openai_conventions/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Ygb1kDMHZQ9xmJdegz58_-ogPJoq09-p4gi0XQTqiyA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/S_luPWJSyKf2Ipf8-JxAusVN_mn4PeLEiGSOUo-JYgU.jpg?width=108&crop=smart&auto=webp&s=26c24b74bbfaafbac40b646c187029101c2dee70', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/S_luPWJSyKf2Ipf8-JxAusVN_mn4PeLEiGSOUo-JYgU.jpg?width=216&crop=smart&auto=webp&s=c5e5af7b8b92c840ad4c008da71fcd5bf40ab44d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/S_luPWJSyKf2Ipf8-JxAusVN_mn4PeLEiGSOUo-JYgU.jpg?width=320&crop=smart&auto=webp&s=0c440e04eae62e567de46b71fbfa1b556e7b47e0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/S_luPWJSyKf2Ipf8-JxAusVN_mn4PeLEiGSOUo-JYgU.jpg?width=640&crop=smart&auto=webp&s=6f69fde00dcc4996d8fe3db3e9b7043b25a7a7c0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/S_luPWJSyKf2Ipf8-JxAusVN_mn4PeLEiGSOUo-JYgU.jpg?width=960&crop=smart&auto=webp&s=9f4880d28a5d33bf33cdc587e472395ff7845400', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/S_luPWJSyKf2Ipf8-JxAusVN_mn4PeLEiGSOUo-JYgU.jpg?width=1080&crop=smart&auto=webp&s=4710f6d6e4705715ab486840d31529f052030448', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/S_luPWJSyKf2Ipf8-JxAusVN_mn4PeLEiGSOUo-JYgU.jpg?auto=webp&s=b7f14a08c08aff3330af9e3c8ab571af65b040b7', 'width': 1200}, 'variants': {}}]} |
Yet another proof why open source local ai is the way | 633 | 2024-12-15T14:30:11 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hetp9s | false | null | t3_1hetp9s | /r/LocalLLaMA/comments/1hetp9s/yet_another_proof_why_open_source_local_ai_is_the/ | false | false | 633 | {'enabled': True, 'images': [{'id': 'XvpnQthE0lbTfF15I53cZGKvOXxrrxKBSoZIIJQXlFg', 'resolutions': [{'height': 160, 'url': 'https://preview.redd.it/z522pk62w07e1.png?width=108&crop=smart&auto=webp&s=5c757872a676fa3f78c4d15a9aa8d569f1d4530d', 'width': 108}, {'height': 321, 'url': 'https://preview.redd.it/z522pk62w07e1.png?width=216&crop=smart&auto=webp&s=6a9e5630baf99d9aa85e7e6563d2d955821f43ac', 'width': 216}, {'height': 475, 'url': 'https://preview.redd.it/z522pk62w07e1.png?width=320&crop=smart&auto=webp&s=34281ffaf2cd796a49f5bb22dfd739de788dda96', 'width': 320}, {'height': 951, 'url': 'https://preview.redd.it/z522pk62w07e1.png?width=640&crop=smart&auto=webp&s=3a158096f741240a6ba58ac3ee9405ef7dcf4a3a', 'width': 640}, {'height': 1427, 'url': 'https://preview.redd.it/z522pk62w07e1.png?width=960&crop=smart&auto=webp&s=a60957afbcf2264cf9638524273a581148ff9a7d', 'width': 960}, {'height': 1606, 'url': 'https://preview.redd.it/z522pk62w07e1.png?width=1080&crop=smart&auto=webp&s=b8b34c974f861343f94191b2feb5645156e33cc0', 'width': 1080}], 'source': {'height': 1606, 'url': 'https://preview.redd.it/z522pk62w07e1.png?auto=webp&s=0f3ae8c0f6d1acd81f7549d92d5a744f1d0ebba0', 'width': 1080}, 'variants': {}}]} |
|||
Ollama not using gpu to the max performance | 1 | [removed] | 2024-12-15T15:30:43 | https://www.reddit.com/r/LocalLLaMA/comments/1heuxje/ollama_not_using_gpu_to_the_max_performance/ | East-Ad6713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heuxje | false | null | t3_1heuxje | /r/LocalLLaMA/comments/1heuxje/ollama_not_using_gpu_to_the_max_performance/ | false | false | 1 | null |
|
What's the highest you'd pay for a proprietary AGI model access | 0 | Let's say hypothetically AGI has been achieved, and has a paid ChatGPT-type interface. What's the highest you're willing to pay to access it?
[View Poll](https://www.reddit.com/poll/1hevd26) | 2024-12-15T15:51:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hevd26/whats_the_highest_youd_pay_for_a_proprietary_agi/ | aitookmyj0b | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hevd26 | false | null | t3_1hevd26 | /r/LocalLLaMA/comments/1hevd26/whats_the_highest_youd_pay_for_a_proprietary_agi/ | false | false | self | 0 | null |
Beginner: questions on how to design a RAG for huge data context and how reliable is it? | 6 | I'm fairly new to this topic and I found different posts with different quality claims here regarding local RAG and LLMs hallucinating.
So I'm not sure whether what I'm thinking of makes any sense.
So let's say I have a bunch of books who may or may not relate to each other and I want to give a reasonable rating of the appearance of [Hobbits / Halflings](https://en.wikipedia.org/wiki/Hobbit).
The result should look somehow like this:
> - **Height**: Hobbits are much shorter than humans, typically standing between 2.5 and 4 feet tall.
> - **Build**: They are generally stout and stocky, with a round and solid build, though not overly muscular.
> - **Feet**: Hobbits have large, tough, and hairy feet with leathery soles. They often go barefoot, and their feet are one of their most distinctive features.
> - **Face and Hair**: They have round faces with often rosy cheeks and bright, friendly expressions. Their hair is usually brown or black and is thick and curly, growing on their heads and sometimes on their feet and legs.
> - **Ears**: Hobbits have slightly pointed ears, but they are not as sharp as elves' ears.
> - **Clothing**: They typically wear simple, practical clothing, such as waistcoats, breeches, and shirts, often made from natural materials like wool and linen. Their clothing is usually earth-toned, blending well with their rural environment.
>
> **Summary:** Overall, hobbits have a cozy, earthbound look, reflecting their peaceful, pastoral lifestyle.
>
> **Rating:** Hobbits do not typically fit the physical mold of Western beauty standards, which emphasize height, symmetry, sharp features, and polished grooming. However, their warmth, kindness, and "earthy" charm are valued in different ways, especially in contexts that appreciate simplicity, cuteness, or natural beauty. In essence, their appeal lies more in their personality and lifestyle than in their physical traits according to traditional Western standards.
Of course I, as a human, _know_ that I'll find the best information about them in J. R. R. Tolkien's books but lets assume I wouldn't know that.
But I have a bunch of books who describe Hobbits (_J. R. R. Tolkien's books are amongst them_) and a bunch of books who aren't related (_i.e. Hitchhiker's Guide to the Galaxy_).
Now at first I'd like to have the summary. Ideally with a reference to the book and page.
I assume that a RAG would be able to do that, right?
And whenever Frodo is described, the RAG would also be able to tell that Frodo's features also apply to Hobbits, since Frodo is a Hobbit.
Is this assumption correct, too?
And once I have the general appearance facts (as long as there's no hallucination involved), I want to be able to get answers, summaries, or ratings based on them.
Now, my questions are:
1. Can I expect reasonable output?
2. I probably have to process/index the ebooks first, right? Would that indexing step be slow?
3. I've read a few times that the context handed to the model should stay as small as possible, since things get weird above 32k tokens or so. Would you split something like Lord of the Rings into its chapters (see the sketch after these questions)? And even if you do, would the system still be able to combine things from different chapters? Or is there a better way to make sure it's not doing strange things?
4. Would a regular notebook be okay to do this?
5. What would be the best way to optimize this if I also want to get a similar answer later about "Arthur Dent"? | 2024-12-15T16:34:20 | https://www.reddit.com/r/LocalLLaMA/comments/1hewb4i/beginner_questions_on_how_to_design_a_rag_for/ | rrrmmmrrrmmm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hewb4i | false | null | t3_1hewb4i | /r/LocalLLaMA/comments/1hewb4i/beginner_questions_on_how_to_design_a_rag_for/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'kaqL8Le0Uyxoswt0m_TI-_JNK7DASAtuWV_2oNiyG20', 'resolutions': [{'height': 84, 'url': 'https://external-preview.redd.it/vlH_uqlPvtlDaFWy6x1K9nIunUW-rEcsjq43S2WDGmY.jpg?width=108&crop=smart&auto=webp&s=6149e3833a27d1bd974d95316314ebdd6df86ddf', 'width': 108}, {'height': 168, 'url': 'https://external-preview.redd.it/vlH_uqlPvtlDaFWy6x1K9nIunUW-rEcsjq43S2WDGmY.jpg?width=216&crop=smart&auto=webp&s=18e8dcff4b5ccd16d19b0314d68c9d9a233c924b', 'width': 216}, {'height': 249, 'url': 'https://external-preview.redd.it/vlH_uqlPvtlDaFWy6x1K9nIunUW-rEcsjq43S2WDGmY.jpg?width=320&crop=smart&auto=webp&s=0f5eed9bd0abffc5ff1351fb60b21c1f41af87f8', 'width': 320}], 'source': {'height': 374, 'url': 'https://external-preview.redd.it/vlH_uqlPvtlDaFWy6x1K9nIunUW-rEcsjq43S2WDGmY.jpg?auto=webp&s=1159148db3f4f99e2883d634483b1364f48aa30d', 'width': 480}, 'variants': {}}]} |
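For questions 2 and 3 above, here is a minimal sketch of the index-then-retrieve idea, assuming chapter-or-smaller chunks and a small local embedding model (the model name, chunk size, and book titles are just illustrative choices):

```
# Minimal book indexing + retrieval sketch: split each book into chunks,
# embed them once (the slow, one-off indexing step), then answer a question
# by embedding it and pulling the closest chunks together with their source
# book, so the LLM prompt can cite where each detail came from.
# Model name and chunk size are illustrative, not recommendations.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def chunks(text, size=1500):
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_index(books):          # books: {"The Fellowship of the Ring": "..."}
    entries = [(title, c) for title, text in books.items() for c in chunks(text)]
    vectors = model.encode([c for _, c in entries], normalize_embeddings=True)
    return entries, np.asarray(vectors)

def retrieve(query, entries, vectors, k=5):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = vectors @ q                       # cosine similarity (normalized)
    best = np.argsort(-scores)[:k]
    return [(entries[i][0], entries[i][1], float(scores[i])) for i in best]

# The retrieved chunks plus their book titles then go into the LLM prompt,
# which keeps the context small and lets the model reference its sources.
```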
Nvidia GeForce RTX 5070 Ti gets 16 GB GDDR7 memory | 290 | [Source: https://wccftech.com/nvidia-geforce-rtx-5070-ti-16-gb-gddr7-gb203-300-gpu-350w-tbp/](https://preview.redd.it/myvnl6n6k17e1.png?width=837&format=png&auto=webp&s=f2c026c7205f0a4c7fc985367a4bc530391cf91e)
| 2024-12-15T16:46:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hewkg3/nvidia_geforce_rtx_5070_ti_gets_16_gb_gddr7_memory/ | AdamDhahabi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hewkg3 | false | null | t3_1hewkg3 | /r/LocalLLaMA/comments/1hewkg3/nvidia_geforce_rtx_5070_ti_gets_16_gb_gddr7_memory/ | false | false | 290 | {'enabled': False, 'images': [{'id': '_FyW82MJtJznMshWfrcoCIoKwEvesTFExTYEGaUlrFA', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/ohKlIZPOm7LUvIJY4I3vLB77M-0wboGmu6GODIaCDyg.jpg?width=108&crop=smart&auto=webp&s=597ca3772ab9470c0dda666a9e85dd82e6df931d', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/ohKlIZPOm7LUvIJY4I3vLB77M-0wboGmu6GODIaCDyg.jpg?width=216&crop=smart&auto=webp&s=569d5a207eef177006bc82ae1bd676878f76a2e6', 'width': 216}, {'height': 188, 'url': 'https://external-preview.redd.it/ohKlIZPOm7LUvIJY4I3vLB77M-0wboGmu6GODIaCDyg.jpg?width=320&crop=smart&auto=webp&s=d32c46771c60c5685835d533916c3e90ce917b7e', 'width': 320}, {'height': 376, 'url': 'https://external-preview.redd.it/ohKlIZPOm7LUvIJY4I3vLB77M-0wboGmu6GODIaCDyg.jpg?width=640&crop=smart&auto=webp&s=f3b993629d45c5c9f1ad872fc1b537833ff0475b', 'width': 640}, {'height': 564, 'url': 'https://external-preview.redd.it/ohKlIZPOm7LUvIJY4I3vLB77M-0wboGmu6GODIaCDyg.jpg?width=960&crop=smart&auto=webp&s=498b13738b5b3f3b55e3d0edc4f098c40d1fb8cd', 'width': 960}, {'height': 634, 'url': 'https://external-preview.redd.it/ohKlIZPOm7LUvIJY4I3vLB77M-0wboGmu6GODIaCDyg.jpg?width=1080&crop=smart&auto=webp&s=eaa512d1ee1225d07e2bb68af8f7ad33989ec753', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/ohKlIZPOm7LUvIJY4I3vLB77M-0wboGmu6GODIaCDyg.jpg?auto=webp&s=f323a777462d52c6e44e1b4c72f50a22ab9c3c10', 'width': 2451}, 'variants': {}}]} |
|
where to run Goliath 120b gguf locally? | 5 | I'm new to local AI.
I have 80gb ram, ryzen 5 5600x, RTX 3070 (8GB)
Which web UI (is that what they call it?) should I use, with what settings, and which version of the model? I'm just so confused...
I want to use this AI both for role play and for help writing articles for college. I heard it's way more helpful than ChatGPT in that field!
sorry for my bad English and also thanks in advance for your help! | 2024-12-15T16:46:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hewkje/where_to_run_goliath_120b_gguf_locally/ | pooria_hmd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hewkje | false | null | t3_1hewkje | /r/LocalLLaMA/comments/1hewkje/where_to_run_goliath_120b_gguf_locally/ | false | false | self | 5 | null |
How Can RAG Systems Be Enhanced for Numerical/statistical Analysis? | 1 | [removed] | 2024-12-15T16:50:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hewnwj/how_can_rag_systems_be_enhanced_for/ | SaltyAd6001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hewnwj | false | null | t3_1hewnwj | /r/LocalLLaMA/comments/1hewnwj/how_can_rag_systems_be_enhanced_for/ | false | false | self | 1 | null |
Marc Andreesen being interviewed by Bari Weiss about Government regulation of AI | 0 | 2024-12-15T17:02:27 | https://x.com/WallStreetApes/status/1868138972200739148 | Useful44723 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1hewxn3 | false | null | t3_1hewxn3 | /r/LocalLLaMA/comments/1hewxn3/marc_andreesen_being_interviewed_by_bari_weiss/ | false | false | 0 | {'enabled': False, 'images': [{'id': '1saYKh1GeDX_6cmgYtgZD88r1aPXb7flhPvuC8YGgCo', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/yD0K3sqOkclQ9p70eURdi2B8y3xLSC3Ion3EZQ7Mc4Y.jpg?width=108&crop=smart&auto=webp&s=eb684b90de46086ce24198f2c5e2a74b7c3776d5', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/yD0K3sqOkclQ9p70eURdi2B8y3xLSC3Ion3EZQ7Mc4Y.jpg?width=216&crop=smart&auto=webp&s=adc8ccd9a91d8fd8bf31001dcf6a0b8ff3163d90', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/yD0K3sqOkclQ9p70eURdi2B8y3xLSC3Ion3EZQ7Mc4Y.jpg?width=320&crop=smart&auto=webp&s=8902032201978857d63d967b0e2a5cd8723bfe7a', 'width': 320}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/yD0K3sqOkclQ9p70eURdi2B8y3xLSC3Ion3EZQ7Mc4Y.jpg?auto=webp&s=96a5256040d8dd842b8a5456ef26e2efb47c9c15', 'width': 576}, 'variants': {}}]} |
||
Do gemini-exp-1206 have rate limits? | 1 | [removed] | 2024-12-15T17:02:31 | https://www.reddit.com/r/LocalLLaMA/comments/1hewxpk/do_geminiexp1206_have_rate_limits/ | kkatiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hewxpk | false | null | t3_1hewxpk | /r/LocalLLaMA/comments/1hewxpk/do_geminiexp1206_have_rate_limits/ | false | false | 1 | null |
|
OPENAI O1: The failure of science and the future of AI | 0 | At present, genuine thinking capabilities are arguably no more developed than those of a third-grade elementary school student, with the fundamental issue lying in the lack of basic logical reasoning. This manifests as an inability to handle even simple variations of elementary school word problems.
We can conceptualize intellectual tasks as corresponding to different abstract mathematical structures/objects and the operations performed on these structures. These operations (thinking strategies) can be viewed as a form of search algorithm. Fully mastering a simple structure and its related strategies can be imagined as covering the potential solution space for corresponding problems.
Depending on the questioning approach:
\- Replacing numbers in a specific problem represents a single point
\- Changing order or adding irrelevant conditions creates a line
\- Composite, nested structures of similar types might represent a plane
True comprehension requires covering the entire three-dimensional space of problem-solution mappings.
From a computational perspective, this translates to minimal error rates. The system should not increase error probability with structural complexity or problem scale. Mastering fundamental constraints and elimination techniques, combined with underlying computational architectures, would render problems like the zebra puzzle trivially solvable—regardless of scale.
Currently, O1 at best covers many points, many lines, a small number of 2D planes, and sporadic 3D fragments across many mathematical structures (including relatively advanced mathematics). But for very basic mathematical structures there is no complete coverage, so the scores on those so-called high-difficulty problem sets can only be a fit to a specific form of problem: a vanity built on sand. | 2024-12-15T17:11:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hex4f8/openai_o1_the_failure_of_science_and_the_future/ | flysnowbigbig | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hex4f8 | false | null | t3_1hex4f8 | /r/LocalLLaMA/comments/1hex4f8/openai_o1_the_failure_of_science_and_the_future/ | false | false | self | 0 | null |
When is Llama 3.3 coming to Meta.AI? | 1 | [removed] | 2024-12-15T17:11:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hex4he/when_is_llama_33_coming_to_metaai/ | lIlI1lII1Il1Il | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hex4he | false | null | t3_1hex4he | /r/LocalLLaMA/comments/1hex4he/when_is_llama_33_coming_to_metaai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KxNuHjxAJMgSDWPMJ_eZ6DTB7eL06jO_mHhqCsDPsmU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/MVa6z5D_XWJpP-0hgjdIFR2m1qqWKU2bQA773WOduMM.jpg?width=108&crop=smart&auto=webp&s=ae32debcf89521c7753603039fe425e962add3bc', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/MVa6z5D_XWJpP-0hgjdIFR2m1qqWKU2bQA773WOduMM.jpg?width=216&crop=smart&auto=webp&s=d5f1b0845991441c2bfa648dc9d56f24e09b975c', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/MVa6z5D_XWJpP-0hgjdIFR2m1qqWKU2bQA773WOduMM.jpg?width=320&crop=smart&auto=webp&s=6b55794e67ef9084319679c611cba2e71c3ec23c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/MVa6z5D_XWJpP-0hgjdIFR2m1qqWKU2bQA773WOduMM.jpg?width=640&crop=smart&auto=webp&s=338e776d4e7823565d28e1259e404ce63666d149', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/MVa6z5D_XWJpP-0hgjdIFR2m1qqWKU2bQA773WOduMM.jpg?width=960&crop=smart&auto=webp&s=2d06ca828f9ad5734283388af30f2b8c7efbe098', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/MVa6z5D_XWJpP-0hgjdIFR2m1qqWKU2bQA773WOduMM.jpg?width=1080&crop=smart&auto=webp&s=04e22ec303d09ce9f24441817a9101aae3fefbf6', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/MVa6z5D_XWJpP-0hgjdIFR2m1qqWKU2bQA773WOduMM.jpg?auto=webp&s=86581e27e2095316616ca2edb727549f2e252491', 'width': 1200}, 'variants': {}}]} |
Any recommendations for a smallish model that is great for function calling? | 1 | [removed] | 2024-12-15T17:37:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hexqd0/any_recommendations_for_a_smallish_model_that_is/ | alimmmmmmm69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hexqd0 | false | null | t3_1hexqd0 | /r/LocalLLaMA/comments/1hexqd0/any_recommendations_for_a_smallish_model_that_is/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dKGdjQ3Gi5gqa2OBz2L0jmxdB0adyYtU4AtiDBXWPr4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CUQ8f9oxZVyeK0241oFddSKatVF5KvBZ00jbXN7SRJI.jpg?width=108&crop=smart&auto=webp&s=0645bb3d3d8a6b5612d912f964d9a7f0f52c1ef1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CUQ8f9oxZVyeK0241oFddSKatVF5KvBZ00jbXN7SRJI.jpg?width=216&crop=smart&auto=webp&s=525b362ab4696dacba6d89dcb9aa4322a08a6021', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CUQ8f9oxZVyeK0241oFddSKatVF5KvBZ00jbXN7SRJI.jpg?width=320&crop=smart&auto=webp&s=66ddbb8aae596245791ce56e432cdf8d177d6310', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CUQ8f9oxZVyeK0241oFddSKatVF5KvBZ00jbXN7SRJI.jpg?width=640&crop=smart&auto=webp&s=9dee6f1ce02b6be33d03c514684f0826a6928ab4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CUQ8f9oxZVyeK0241oFddSKatVF5KvBZ00jbXN7SRJI.jpg?width=960&crop=smart&auto=webp&s=c08ce73d891672dcc1090fb274ac8342796c0b6a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CUQ8f9oxZVyeK0241oFddSKatVF5KvBZ00jbXN7SRJI.jpg?width=1080&crop=smart&auto=webp&s=21e4245a875c328901fa681533f47c471e1ba664', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CUQ8f9oxZVyeK0241oFddSKatVF5KvBZ00jbXN7SRJI.jpg?auto=webp&s=9d47a7c4040fff07978712594f4812f9892e4fad', 'width': 1200}, 'variants': {}}]} |
New MLM - InternLM-X-Composer2.5-OmniLive | 46 | 2024-12-15T17:38:48 | https://github.com/InternLM/InternLM-XComposer/tree/main/InternLM-XComposer-2.5-OmniLive | ekaj | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hexr8c | false | null | t3_1hexr8c | /r/LocalLLaMA/comments/1hexr8c/new_mlm_internlmxcomposer25omnilive/ | false | false | 46 | {'enabled': False, 'images': [{'id': 'O4LU3OWOHmSi0c8aJPELbIzI1qzPCk5L-VrL3j3DI9o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v0aPXOqHCIdTJKf9ZwstrK5jB0R5GXNMlbinKdnIqXw.jpg?width=108&crop=smart&auto=webp&s=dec6fd4c52357df2434d53c685280c376e7e9066', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/v0aPXOqHCIdTJKf9ZwstrK5jB0R5GXNMlbinKdnIqXw.jpg?width=216&crop=smart&auto=webp&s=3b6429d7292295c76c8e29131a75b2353f30ab88', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/v0aPXOqHCIdTJKf9ZwstrK5jB0R5GXNMlbinKdnIqXw.jpg?width=320&crop=smart&auto=webp&s=37e9f9ed29d4cae76ff3d13ff43d38bd32223922', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/v0aPXOqHCIdTJKf9ZwstrK5jB0R5GXNMlbinKdnIqXw.jpg?width=640&crop=smart&auto=webp&s=89073794d106ba879df118d9dce4964f8f38a5b2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/v0aPXOqHCIdTJKf9ZwstrK5jB0R5GXNMlbinKdnIqXw.jpg?width=960&crop=smart&auto=webp&s=24ea418c46037bcdb0be6e9cf8fe67e068bbf236', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/v0aPXOqHCIdTJKf9ZwstrK5jB0R5GXNMlbinKdnIqXw.jpg?width=1080&crop=smart&auto=webp&s=bd7080b18d91380a1c25f9f631891e8a9656ab39', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/v0aPXOqHCIdTJKf9ZwstrK5jB0R5GXNMlbinKdnIqXw.jpg?auto=webp&s=f775d02f5c4262cbc09b81c01c2b7f47b7888f7b', 'width': 1200}, 'variants': {}}]} |
||
koboldcpp with speculative decoding on macOS | 5 | Hi,
I am using koboldcpp on my MacBook Air M3 24GB to load LLMs. I am now interested in speculative decoding, especially running a Qwen2.5 14B model with a 0.5B or 1.5B model as the draft. But how do I do this?
koboldcpp says: [--draftmodel DRAFTMODEL] [--draftamount [tokens]]
For draftamount the help says: How many tokens to draft per chunk before verifying results.
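To make the flags concrete, here is a hypothetical launch for the setup described above; the model file names are placeholders, and --draftmodel/--draftamount are simply the options quoted from koboldcpp's help text:

```
# Hypothetical koboldcpp launch with a small draft model for speculative
# decoding. File names are placeholders; the flags are the ones from the
# help text above. --draftamount is the number of tokens drafted per chunk
# before the main model verifies them.
import subprocess

cmd = [
    "koboldcpp",
    "--model", "Qwen2.5-14B-Instruct-Q4_K_M.gguf",         # main model
    "--draftmodel", "Qwen2.5-0.5B-Instruct-Q4_K_M.gguf",   # draft model
    "--draftamount", "8",
]
subprocess.run(cmd)
```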
So, what is a reasonable amount here? Is someone using koboldcpp on mac with speculative decoding and can help me out? Thanks. | 2024-12-15T18:22:46 | https://www.reddit.com/r/LocalLLaMA/comments/1heyqjf/koboldcpp_with_speculative_decoding_on_macos/ | doc-acula | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heyqjf | false | null | t3_1heyqjf | /r/LocalLLaMA/comments/1heyqjf/koboldcpp_with_speculative_decoding_on_macos/ | false | false | self | 5 | null |
Fully Fine-tuned and LoRA LLM Production Use-Cases | 0 | Ik lot's of prod image model LoRAs and fine-tunes but how much for LLMS? | 2024-12-15T18:23:20 | https://www.reddit.com/r/LocalLLaMA/comments/1heyqzy/fully_finetuned_and_lora_llm_production_usecases/ | stevelon_mobs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1heyqzy | false | null | t3_1heyqzy | /r/LocalLLaMA/comments/1heyqzy/fully_finetuned_and_lora_llm_production_usecases/ | false | false | self | 0 | null |
Any local live-vision alternative to ChatGPT/Gemini? | 0 | Or any in-progress projects? | 2024-12-15T18:41:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hez5sm/any_local_livevision_alternative_to_chatgptgemini/ | OceanRadioGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hez5sm | false | null | t3_1hez5sm | /r/LocalLLaMA/comments/1hez5sm/any_local_livevision_alternative_to_chatgptgemini/ | false | false | self | 0 | null |
Llama coding question | 1 | [removed] | 2024-12-15T18:56:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hezhpl/llama_coding_question/ | No-Barber1679 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hezhpl | false | null | t3_1hezhpl | /r/LocalLLaMA/comments/1hezhpl/llama_coding_question/ | false | false | self | 1 | null |
Opensource 8B parameter test time compute scaling(reasoning) model | 211 | 2024-12-15T19:02:00 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hezmas | false | null | t3_1hezmas | /r/LocalLLaMA/comments/1hezmas/opensource_8b_parameter_test_time_compute/ | false | false | 211 | {'enabled': True, 'images': [{'id': 'EWcOvswaeglx476VRCiF525L8gkgdw8zkHNe-HuETBQ', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/2tpfe1uf827e1.png?width=108&crop=smart&auto=webp&s=8d11da4b221a187e53d74f3bab16af275b7ae143', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/2tpfe1uf827e1.png?width=216&crop=smart&auto=webp&s=e7a7434a1d4937304cf47b831aabc91cf57a462d', 'width': 216}, {'height': 143, 'url': 'https://preview.redd.it/2tpfe1uf827e1.png?width=320&crop=smart&auto=webp&s=ea2c375491d60ca1fc569342c725c6becc5def06', 'width': 320}, {'height': 287, 'url': 'https://preview.redd.it/2tpfe1uf827e1.png?width=640&crop=smart&auto=webp&s=1608683644c52cc94bd6dfeb4c7539a42db0043c', 'width': 640}, {'height': 431, 'url': 'https://preview.redd.it/2tpfe1uf827e1.png?width=960&crop=smart&auto=webp&s=95cec076a8a54a2ac0e0e18cbe8fac11426214d0', 'width': 960}, {'height': 485, 'url': 'https://preview.redd.it/2tpfe1uf827e1.png?width=1080&crop=smart&auto=webp&s=c997fe2ba670a695ac190daf9579fb81770e4977', 'width': 1080}], 'source': {'height': 522, 'url': 'https://preview.redd.it/2tpfe1uf827e1.png?auto=webp&s=d68871c82146bf6a48dff6fb8bc69f695fb09654', 'width': 1162}, 'variants': {}}]} |
|||
Open source framework for building synthetic datasets from AI feedback. | 7 | Hello u/LocalLLaMA folks!
I'm excited to share with the community: OpenPO, an open source framework for building synthetic datasets for preference tuning: [https://github.com/dannylee1020/openpo](https://github.com/dannylee1020/openpo)
- multiple providers to collect a diverse set of responses from 200+ LLMs.
- various evaluation methods for data synthesis, including state-of-the-art evaluation models.
here is a notebook demonstrating how to build a dataset using OpenPO and PairRM: [https://colab.research.google.com/drive/1G1T-vOTXjIXuRX3h9OlqgnE04-6IpwIf?usp=sharing](https://colab.research.google.com/drive/1G1T-vOTXjIXuRX3h9OlqgnE04-6IpwIf?usp=sharing)
and one for building a dataset using Prometheus2: [https://colab.research.google.com/drive/1dro0jX1MOfSg0srfjA_DZyeWIWKOuJn2?usp=sharing](https://colab.research.google.com/drive/1dro0jX1MOfSg0srfjA_DZyeWIWKOuJn2?usp=sharing)
IMO, synthetic data generation has a lot of potential to make an impact on the open source community without requiring a lot of resources.
It's still in the early development phase, so any feedback or contribution would be super valuable! Let me know how you all think! | 2024-12-15T19:06:31 | https://www.reddit.com/r/LocalLLaMA/comments/1hezpxm/open_source_framework_for_building_synthetic/ | dphntm1020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hezpxm | false | null | t3_1hezpxm | /r/LocalLLaMA/comments/1hezpxm/open_source_framework_for_building_synthetic/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'jsRAQ5lhQtU1APDUHJPy7PTUIHqOp7pNGF_-nQsTrWw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AVfQWWOpq1NHp9hU1Bhtwnxfrc61ukHmrBThSMB1iJ0.jpg?width=108&crop=smart&auto=webp&s=afdf5f0cfcd50c5052359cbc69e9eaaa37604aee', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AVfQWWOpq1NHp9hU1Bhtwnxfrc61ukHmrBThSMB1iJ0.jpg?width=216&crop=smart&auto=webp&s=9622109bc4fadb7d513cf79b500c80ceadaf1a87', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AVfQWWOpq1NHp9hU1Bhtwnxfrc61ukHmrBThSMB1iJ0.jpg?width=320&crop=smart&auto=webp&s=2dbef8dfb6c66bfe82fca800ffbc057afd7996cc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AVfQWWOpq1NHp9hU1Bhtwnxfrc61ukHmrBThSMB1iJ0.jpg?width=640&crop=smart&auto=webp&s=dceddcffb006c4f750fc0cea082e4edad3bce59f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AVfQWWOpq1NHp9hU1Bhtwnxfrc61ukHmrBThSMB1iJ0.jpg?width=960&crop=smart&auto=webp&s=1ae4bc3175a0b1c992a1231697a8c2f8757f16f4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AVfQWWOpq1NHp9hU1Bhtwnxfrc61ukHmrBThSMB1iJ0.jpg?width=1080&crop=smart&auto=webp&s=0b3e5461dce76997f8a9d9cfc1733551a15b19e4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AVfQWWOpq1NHp9hU1Bhtwnxfrc61ukHmrBThSMB1iJ0.jpg?auto=webp&s=f9a20eecaa1d990a04396fe520d9791917f89bbf', 'width': 1200}, 'variants': {}}]} |
Opensource 8B parameter test time compute scaling(reasoning) model performance comparison Ruliad_AI | 50 | 2024-12-15T19:10:40 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hezt5m | false | null | t3_1hezt5m | /r/LocalLLaMA/comments/1hezt5m/opensource_8b_parameter_test_time_compute/ | false | false | 50 | {'enabled': True, 'images': [{'id': '1EUJ_55FXdaGiuWWMkRShxanB8LLWU6Sfb_dWaGxdp0', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/uvvct2k1a27e1.png?width=108&crop=smart&auto=webp&s=b7264c34db8e7ec938d82dca663f088b8a9ad15a', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/uvvct2k1a27e1.png?width=216&crop=smart&auto=webp&s=385113a4c20fb01baafc6c1366c65bb1f1bb220f', 'width': 216}, {'height': 245, 'url': 'https://preview.redd.it/uvvct2k1a27e1.png?width=320&crop=smart&auto=webp&s=828d0cadf1ad5ae95fbfb14f8f035706ddff412d', 'width': 320}, {'height': 491, 'url': 'https://preview.redd.it/uvvct2k1a27e1.png?width=640&crop=smart&auto=webp&s=39afd8327e3d0a47f88328c8aa2b27a14591df9c', 'width': 640}, {'height': 737, 'url': 'https://preview.redd.it/uvvct2k1a27e1.png?width=960&crop=smart&auto=webp&s=0c5cae2f4fdf9e4bb966e89ac1ba8f8b73999946', 'width': 960}, {'height': 829, 'url': 'https://preview.redd.it/uvvct2k1a27e1.png?width=1080&crop=smart&auto=webp&s=55d78888ee8ebfdf689ed1bc1b5247bcacfdcc1f', 'width': 1080}], 'source': {'height': 850, 'url': 'https://preview.redd.it/uvvct2k1a27e1.png?auto=webp&s=f400a1ae3a7671dc7ba9c59488855c6152e82f4a', 'width': 1107}, 'variants': {}}]} |
|||
Running LLMs on Raspberry Pi 5 | 1 | [removed] | 2024-12-15T19:42:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hf0j26/running_llms_on_raspberry_pi_5/ | Same-Listen-2646 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf0j26 | false | null | t3_1hf0j26 | /r/LocalLLaMA/comments/1hf0j26/running_llms_on_raspberry_pi_5/ | false | false | self | 1 | null |
3090 or 4080? | 0 | I’m starting to dabble more and more into local models but my little 2070 isn’t holding up well.
There is a local deal on a 3090 computer (5700X, 32GB) for $800, or do I go for a better PC with a 9950X and a 4080 Super (or 4070 Ti Super)? My current PC is a 2600X with 48GB in 4-channel (I know, I know!)
I’m looking to break into vision and multi modal models.
Any help appreciated. | 2024-12-15T20:05:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hf1105/3090_or_4080/ | MrWiseOwl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf1105 | false | null | t3_1hf1105 | /r/LocalLLaMA/comments/1hf1105/3090_or_4080/ | false | false | self | 0 | null |
In-Context Learning: Looking for Practical Examples | 4 | Hi. I'm trying to optimise an in-context learning scenario. Most of the examples I have seen in this regard have had prompts like this:
```
Text: ** Label: A
Text: ** Label: B
...
```
But what if I can provide more information about the target label, its probability, etc..? How do I fit them in the prompt? Does providing examples actually improve anything over "explaining the label", or the other way round? Are there some practical examples of prompts, ideally on models like Llama 8B/Gemma 9B, that I can try? | 2024-12-15T20:34:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hf1o89/incontext_learning_looking_for_practical_examples/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf1o89 | false | null | t3_1hf1o89 | /r/LocalLLaMA/comments/1hf1o89/incontext_learning_looking_for_practical_examples/ | false | false | self | 4 | null |
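As one concrete shape for the "more information about the label" idea being asked about, the label descriptions can simply be prepended before the demonstrations; a sketch with placeholder labels and texts (not a tested recipe for Llama 8B or Gemma 9B):

```
# Build an in-context-learning prompt that combines short label descriptions
# with a few demonstrations. All labels, descriptions and texts here are
# placeholders to show the prompt shape, not tuned content.
label_descriptions = {
    "A": "the text expresses a positive opinion",
    "B": "the text expresses a negative opinion",
}
examples = [
    ("The battery lasts all day, love it.", "A"),
    ("It broke after two uses.", "B"),
]

def build_prompt(query):
    lines = ["Classify each text with one label.", "", "Labels:"]
    lines += [f"- {label}: {desc}" for label, desc in label_descriptions.items()]
    lines.append("")
    lines += [f"Text: {text}\nLabel: {label}\n" for text, label in examples]
    lines.append(f"Text: {query}\nLabel:")
    return "\n".join(lines)

print(build_prompt("Shipping was fast but the box arrived crushed."))
```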
Hosting My Own LLM for Personal Use - Looking for Recommendations! | 1 | [removed] | 2024-12-15T20:39:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hf1rpl/hosting_my_own_llm_for_personal_use_looking_for/ | AbbreviationsOdd7728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf1rpl | false | null | t3_1hf1rpl | /r/LocalLLaMA/comments/1hf1rpl/hosting_my_own_llm_for_personal_use_looking_for/ | false | false | self | 1 | null |
Cheapest way to run larger models? (Even at slower speeds) | 3 | I'm very new to running LLMs locally and have been playing around with it the last week or so testing things out.
I was wondering, cause I have an old i9 9900k system which is currently just a game server without a GPU. If I put in 128GB of RAM would that be enough to run larger models? I don't really need quick responses, just better more coherent responses. Them taking a long time isn't really an issue for me right now.
I know having a couple of GPUs is probably the best/fastest way to run LLMs but I don't really have the money for that right now and my current system only has a 2080ti in it (planning on upgrading when 50 series launches)
I'm open to all suggestions thanks! | 2024-12-15T20:45:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hf1xd2/cheapest_way_to_run_larger_models_even_at_slower/ | Jathulioh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf1xd2 | false | null | t3_1hf1xd2 | /r/LocalLLaMA/comments/1hf1xd2/cheapest_way_to_run_larger_models_even_at_slower/ | false | false | self | 3 | null |
Chatgpt vs Claude vs LLama for agent orchestration and conversational responses | 0 | As mentioned, I’m currently working on a startup [kallro.com](http://kallro.com) and we’re trying to figure out which LLM would give us the best bang for our buck. We need something that can handle conversational TTS, detect emotion and intent, and adapt responses accordingly. We’re also looking for a model (or maybe two, one for each) that can handle backend orchestration with something like n8n. Any suggestions on which LLM would be the best fit here, while still being cost-effective? | 2024-12-15T20:51:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hf2227/chatgpt_vs_claude_vs_llama_for_agent/ | Masony817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf2227 | false | null | t3_1hf2227 | /r/LocalLLaMA/comments/1hf2227/chatgpt_vs_claude_vs_llama_for_agent/ | false | false | self | 0 | null |
Train llama 3 8B out of memory in rtx4060 | 1 | [removed] | 2024-12-15T20:56:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hf25g9/train_llama_3_8b_out_of_memory_in_rtx4060/ | mrleles | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf25g9 | false | null | t3_1hf25g9 | /r/LocalLLaMA/comments/1hf25g9/train_llama_3_8b_out_of_memory_in_rtx4060/ | false | false | self | 1 | null |
Unified speech-to-speech model with translation (dubbing) | 1 | [removed] | 2024-12-15T21:35:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hf3158/unified_speechtospeech_model_with_translation/ | antiworkprotwerk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf3158 | false | null | t3_1hf3158 | /r/LocalLLaMA/comments/1hf3158/unified_speechtospeech_model_with_translation/ | false | false | self | 1 | null |
Advice on Running Local LLM on a Laptop (MacBook Pro?) | 0 | I’m looking to run a local LLM on a laptop, as I deal with privacy-sensitive data and can’t rely on cloud-based solutions.
I’ll be writing keywords, and the LLM needs to produce sensible and coherent text based on those. It doesn’t need to perform heavy search tasks, but speed is critical since I’ll be generating a new report every 30-60 minutes.
I’m contemplating a MacBook Pro but unsure about the best configuration. Specifically:
1. RAM: How much do I actually need to ensure smooth and fast performance for running local models like LLaMA or similar? Would 32GB/48GB be enough, or should I go for 64GB or higher?
2. Chip: Does the difference between the M4 Pro and M4 Max really matter for this use case?
If anyone has experience running local models on a laptop (MacBook or otherwise), I’d love to hear your insights! Suggestions for alternative setups or additional considerations are also welcome. I will be working in different locations, so it needs to be a laptop. | 2024-12-15T22:18:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hf3yv0/advice_on_running_local_llm_on_a_laptop_macbook/ | mopf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf3yv0 | false | null | t3_1hf3yv0 | /r/LocalLLaMA/comments/1hf3yv0/advice_on_running_local_llm_on_a_laptop_macbook/ | false | false | self | 0 | null |
Recs for a model to run for my purposes? | 1 | [removed] | 2024-12-15T22:27:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hf45wz/recs_for_a_model_to_run_for_my_purposes/ | OrganizationAny4570 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf45wz | false | null | t3_1hf45wz | /r/LocalLLaMA/comments/1hf45wz/recs_for_a_model_to_run_for_my_purposes/ | false | false | self | 1 | null |
NotebookLM Fir Android Offline? | 1 | I'm a huge fan of Google NotebookLM. It was able to answer questions about my websites and books, but I'd like something like this offline, either for Android or Windows. Any options? | 2024-12-15T22:32:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hf49z5/notebooklm_fir_android_offline/ | Lucky-Royal-6156 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf49z5 | false | null | t3_1hf49z5 | /r/LocalLLaMA/comments/1hf49z5/notebooklm_fir_android_offline/ | false | false | self | 1 | null |
[Release] SmolTulu-1.7b-Instruct - My first model release! New SOTA for small models on IFEval & GSM8K | 1 | [removed] | 2024-12-15T22:57:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hf4sk9/release_smoltulu17binstruct_my_first_model/ | SulRash | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf4sk9 | false | null | t3_1hf4sk9 | /r/LocalLLaMA/comments/1hf4sk9/release_smoltulu17binstruct_my_first_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'GDX8ULtA7vVPSsdDK0iTzb4GthqQONyDlYxob7VoLj8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=108&crop=smart&auto=webp&s=ac979d327f3058d12a097660a20f0fb44d3204af', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=216&crop=smart&auto=webp&s=f784de779f8c89395baa6f0a68079793aa2c91f3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=320&crop=smart&auto=webp&s=2085811d7faf3a8c337cee13284d374a897c1e80', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=640&crop=smart&auto=webp&s=dfbd753fcaaaf6cdefd89706b018b8b275876032', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=960&crop=smart&auto=webp&s=393c0033e17a645a17c3f23e23488be928117128', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=1080&crop=smart&auto=webp&s=041cebfbd54ef48cd60e416be1e53837f494be2d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?auto=webp&s=40048700d631830a9b2239fcce393e01253cd7e9', 'width': 1200}, 'variants': {}}]} |
Automatic Flux LoRA Switching | 24 | I created an Open WebUI tool that combines Llama 3.3 and Flux in a unique way - and figured I should share it with the community.
The tool can be found [here](https://openwebui.com/t/johnthenerd/image_autolora). It currently only works with ComfyUI and requires a bit of manual configuration as it's not fully polished. However, once set up, it's quite nice to work with!
The way it works is that the LLM is allowed to pick from a number of LoRAs, which are then used to edit the ComfyUI workflow and add the necessary prompt trigger on the fly. This lets you simply "ask the AI for a picture" just like with ChatGPT, but you also get far better results than you'd otherwise expect.
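The core idea is small enough to sketch: map each LoRA to a trigger phrase, have the LLM pick one by name, then patch both the trigger and the LoRA file into the workflow before queueing it. The node names and keys below are invented for illustration and are not the tool's actual code:

```
# Toy version of the auto-LoRA idea: the LLM returns a LoRA key, we look up
# its trigger phrase, inject the trigger into the prompt and the LoRA file
# into a ComfyUI-style workflow dict, then submit it. All keys and node names
# here are made up for illustration.
LORAS = {
    "yarn_art_flux": {"file": "yarn_art_Flux_LoRA.safetensors",
                      "trigger": "yarn art style"},
    "none": {"file": None, "trigger": ""},
}

def patch_workflow(workflow, prompt, lora_key):
    lora = LORAS.get(lora_key, LORAS["none"])
    patched = dict(workflow)
    patched["prompt_node"] = {"text": f"{lora['trigger']}, {prompt}".strip(", ")}
    if lora["file"]:
        patched["lora_loader"] = {"lora_name": lora["file"], "strength": 1.0}
    return patched

# chosen = llm_pick_lora(user_request, options=list(LORAS))  # LLM picks a key
patched = patch_workflow({}, "a grandma sitting on a couch with her cat", "yarn_art_flux")
print(patched)
```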
Here's an example!
https://preview.redd.it/dfjko9zzg37e1.png?width=1138&format=png&auto=webp&s=f24f8fcaa6b0a41da94972b4e7a0dfa615dfd05d
It automatically decided to use the [Yarn Art Flux LoRA](https://huggingface.co/linoyts/yarn_art_Flux_LoRA) and created this image:
https://preview.redd.it/pgd0ymwjh37e1.png?width=1088&format=png&auto=webp&s=8ed925449fcaa29c07ab0bc969b7f94c7f4d8120
| 2024-12-15T23:16:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hf56zt/automatic_flux_lora_switching/ | JohnTheNerd3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf56zt | false | null | t3_1hf56zt | /r/LocalLLaMA/comments/1hf56zt/automatic_flux_lora_switching/ | false | false | 24 | null |
|
Local/remote chat app like Msty or LM Studio on iOS? | 1 | I use a lot of api models, so I don't really need to run a local model, but I can't find a good mobile app that's like Msty. Anybody got a recommendation? | 2024-12-15T23:19:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hf591u/localremote_chat_app_like_msty_or_lm_studio_on_ios/ | realityexperiencer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf591u | false | null | t3_1hf591u | /r/LocalLLaMA/comments/1hf591u/localremote_chat_app_like_msty_or_lm_studio_on_ios/ | false | false | self | 1 | null |
What token generation speed can I expect from Llama-3.3-70B-Instruct-Q8_0.gguf on an H100 80GB? | 1 | [removed] | 2024-12-15T23:55:12 | https://www.reddit.com/r/LocalLLaMA/comments/1hf5yz6/what_token_generation_speed_can_i_expect_from/ | all_is_okay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf5yz6 | false | null | t3_1hf5yz6 | /r/LocalLLaMA/comments/1hf5yz6/what_token_generation_speed_can_i_expect_from/ | false | false | self | 1 | null |
What token generation speed can I expect from Llama-3.3-70B-Instruct-Q8_0.gguf on an H100 80GB? | 1 | [removed] | 2024-12-16T00:01:10 | https://www.reddit.com/r/LocalLLaMA/comments/1hf6389/what_token_generation_speed_can_i_expect_from/ | all_is_okay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf6389 | false | null | t3_1hf6389 | /r/LocalLLaMA/comments/1hf6389/what_token_generation_speed_can_i_expect_from/ | false | false | self | 1 | null |
Is it possible to suspend the Nvidia 3090 (e.g. using ASPM)? | 8 | Currently it idles at 23w (effectively 30w at the watt meter), but it seems to sometimes get stuck idling at 40w or more, despite nvidia-smi reading that it's in P8 state. Resetting with `nvidia-smi -i 0 -r` brings it down to 23w again (after 125w for 10s).
But I'm curious if it can be brought to zero, since the entire PC can suspend to 1w.
I've tried removing the PCI device using
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove
but it freezes. I've also tried
modprobe -r nvidia_drm
modprobe -r nvidia_modeset
modprobe -r nvidia_uvm
modprobe -r nvidia
but it refuses:
modprobe: FATAL: Module nvidia_modeset is in use.
modprobe: FATAL: Module nvidia is in use.
I've tried blacklisting it, but it is still loaded.
rm -f /etc/modprobe.d/nvidia-modeset.conf
cat > /etc/modprobe.d/blacklist-nvidia-modeset.conf <<EOF
blacklist nvidia_modeset
blacklist nvidia
EOF
update-initramfs -u
reboot
and
lsmod | grep nvidia_modeset
returns
`nvidia_modeset 1404928 2 nvidia_drm`
`nvidia 70623232 6 nvidia_modeset`
`video 65536 3 <redacted>,i915,nvidia_modeset`
I'm wondering if it would help to use passthrough/IOMMU to a VM, but it seems like overkill, and I'm not sure it would even work.
I've also tried "drain" but that caused it to stay in P0 state.
# doesn't work
nvidia-smi drain -p 0000:01:00.0 -m 1
nvidia-smi drain -p 0000:01:00.0 -m 0
and forced removal also fails
rmmod --force nvidia_modeset
Any experiences that you can share? | 2024-12-16T00:56:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hf787p/is_it_possible_to_suspend_the_nvidia_3090_eg/ | GeniusPengiun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf787p | false | null | t3_1hf787p | /r/LocalLLaMA/comments/1hf787p/is_it_possible_to_suspend_the_nvidia_3090_eg/ | false | false | self | 8 | null |
Everyone share their favorite chain of thought prompts! | 270 | Here’s my favorite one, I DID NOT MAKE IT:
Begin by enclosing all thoughts within <thinking> tags, exploring multiple angles and approaches. Break down the solution into clear steps within <step> tags. Start with a 20-step budget, requesting more for complex problems if needed. Use <count> tags after each step to show the remaining budget. Stop when reaching 0. Continuously adjust your reasoning based on intermediate results and reflections, adapting your strategy as you progress. Regularly evaluate progress using <reflection> tags. Be critical and honest about your reasoning process. Assign a quality score between 0.0 and 1.0 using <reward> tags after each reflection. Use this to guide your approach: 0.8+: Continue current approach 0.5-0.7: Consider minor adjustments Below 0.5: Seriously consider backtracking and trying a different approach If unsure or if reward score is low, backtrack and try a different approach, explaining your decision within <thinking> tags. For mathematical problems, show all work explicitly using LaTeX for formal notation and provide detailed proofs. Explore multiple solutions individually if possible, comparing approaches in reflections. Use thoughts as a scratchpad, writing out all calculations and reasoning explicitly. Synthesize the final answer within <answer> tags, providing a clear, concise summary. Conclude with a final reflection on the overall solution, discussing effectiveness, challenges, and solutions. Assign a final reward score. | 2024-12-16T01:12:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hf7jd2/everyone_share_their_favorite_chain_of_thought/ | Mr-Barack-Obama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf7jd2 | false | null | t3_1hf7jd2 | /r/LocalLLaMA/comments/1hf7jd2/everyone_share_their_favorite_chain_of_thought/ | false | false | self | 270 | null |
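If you want to try a prompt like this against a local OpenAI-compatible server, a minimal sketch looks like the following; the base URL, model name, and the variable holding the prompt text are assumptions you would adapt to your own setup.

```python
from openai import OpenAI

# Any OpenAI-compatible local server works here (llama.cpp, vLLM, Ollama, ...).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

cot_system_prompt = "..."  # paste the chain-of-thought prompt quoted above

response = client.chat.completions.create(
    model="llama-3.3-70b-instruct",  # whatever model your server exposes
    messages=[
        {"role": "system", "content": cot_system_prompt},
        {"role": "user", "content": "A farmer has 17 sheep; all but 9 run away. How many are left?"},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```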
Running LLMs on Dual Xeon E5-2699 v4 (22T/44C) (no GPU, yet) | 7 | Hi all,
I recently bought an HP DL360 G9 with 2x Xeon E5-2699 v4 -> that is a total of 44 cores / 88 threads. Together with 512GB of 2400MHz DDR4 RAM, I am wondering what kind of speeds I would be looking at for self-hosting a decent LLM for code generation / general-purpose use? Does anyone have experience with these CPUs?
I expect it to be very slow without any graphics card.
On that note, what kind of card could I add that may improve performance and, most importantly, fit in this 1U chassis?
Any thoughts/recommendations are highly appreciated. Thank you in advance.
PS. This is for my personal use only. The server will be used for selfhosting some other stuff. The use is minimal. | 2024-12-16T01:36:34 | https://www.reddit.com/r/LocalLLaMA/comments/1hf80e4/running_llms_on_dual_xeon_e52699_v4_22t44c_no_gpu/ | nodonaldplease | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf80e4 | false | null | t3_1hf80e4 | /r/LocalLLaMA/comments/1hf80e4/running_llms_on_dual_xeon_e52699_v4_22t44c_no_gpu/ | false | false | self | 7 | null |
Language Model Optimized for Language? | 0 | Do you guys know of any language model that's optimized for language? What I mean is an LLM with a tokenizer scheme, or simply a training mix, that is best for language; for example, many LLMs have a lot of tokens for coding tasks or maths, but for my use case that would be a waste. | 2024-12-16T02:24:41 | https://www.reddit.com/r/LocalLLaMA/comments/1hf8xd9/language_model_optimized_for_language/ | ImpressiveHead69420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf8xd9 | false | null | t3_1hf8xd9 | /r/LocalLLaMA/comments/1hf8xd9/language_model_optimized_for_language/ | false | false | self | 0 | null
Someone posted some numbers for LLM on the Intel B580. It's fast. | 1 | [removed] | 2024-12-16T02:27:28 | https://www.reddit.com/r/LocalLLaMA/comments/1hf8zb3/someone_posted_some_numbers_for_llm_on_the_intel/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf8zb3 | false | null | t3_1hf8zb3 | /r/LocalLLaMA/comments/1hf8zb3/someone_posted_some_numbers_for_llm_on_the_intel/ | false | false | self | 1 | null |
Someone posted some numbers for LLM on the Intel B580. It's fast. | 107 | I asked someone to post some LLM numbers on their B580. It's fast. I posted the same benchmark on my A770. It's slow. They are running Windows and I'm running Linux. I'll switch to Windows, update to the new driver, and see if that makes a difference.
I tried making a post with the link to the Reddit post, but for some reason whenever I put a link to Reddit in a post, my post is shadowed. It's invisible. Look for the thread I started in the intelarc sub.
Here's a copy and paste from there.
From user phiw's B580:
> | model | size | params | backend | ngl | test | t/s |
> | --- | --- | --- | --- | --- | --- | --- |
> | qwen2 7B Q8_0 | 7.54 GiB | 7.62 B | Vulkan,RPC | 99 | tg128 | 35.89 ± 0.11 |
> | qwen2 7B Q8_0 | 7.54 GiB | 7.62 B | Vulkan,RPC | 99 | tg256 | 35.75 ± 0.12 |
> | qwen2 7B Q8_0 | 7.54 GiB | 7.62 B | Vulkan,RPC | 99 | tg512 | 35.45 ± 0.14 |
From my A770:
> | model | size | params | backend | ngl | test | t/s |
> | --- | --- | --- | --- | --- | --- | --- |
> | qwen2 7B Q8_0 | 7.54 GiB | 7.62 B | Vulkan,RPC | 99 | tg128 | 10.87 ± 0.04 |
> | qwen2 7B Q8_0 | 7.54 GiB | 7.62 B | Vulkan,RPC | 99 | tg256 | 6.94 ± 0.00 |
> | qwen2 7B Q8_0 | 7.54 GiB | 7.62 B | Vulkan,RPC | 99 | tg512 | 10.62 ± 0.01 | | 2024-12-16T02:41:14 | https://www.reddit.com/r/LocalLLaMA/comments/1hf98oy/someone_posted_some_numbers_for_llm_on_the_intel/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf98oy | false | null | t3_1hf98oy | /r/LocalLLaMA/comments/1hf98oy/someone_posted_some_numbers_for_llm_on_the_intel/ | false | false | self | 107 | null
What's the difference between a bot and an agent? | 8 | It feels to me like "agents" are the jargon invented for this AI hype cycle, and an agent is little more than a more capable bot by virtue of LLMs. | 2024-12-16T02:59:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hf9l72/whats_the_difference_between_a_bot_and_an_agent/ | rm-rf-rm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hf9l72 | false | null | t3_1hf9l72 | /r/LocalLLaMA/comments/1hf9l72/whats_the_difference_between_a_bot_and_an_agent/ | false | false | self | 8 | null
AI Studio Realtime Feature doesnt work (or im missing something?) | 7 | It's literally hallucinating. It's been like this since they released this feature in AI Studio. I don't know why, but lol, it creeps me out; the first time I used it, I thought it was seeing things that I can't see.
My Realtime Input, which is in there was a still video with my dog and my guitar on the ground, with a TV above them with messy wirings and a white wall background. | 2024-12-16T03:00:00 | nojukuramu | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hf9l9g | false | null | t3_1hf9l9g | /r/LocalLLaMA/comments/1hf9l9g/ai_studio_realtime_feature_doesnt_work_or_im/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'RLQmWijNstKfl2_Ws6ZkLxIidjdd0Nh6pgDBT4mzJcc', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/1yz92y9ul47e1.jpeg?width=108&crop=smart&auto=webp&s=08b52c0e392570642126e45e2ec162be38535ef2', 'width': 108}, {'height': 277, 'url': 'https://preview.redd.it/1yz92y9ul47e1.jpeg?width=216&crop=smart&auto=webp&s=16e63f53a22a2d6258d99dd6719347d03142cfc1', 'width': 216}, {'height': 411, 'url': 'https://preview.redd.it/1yz92y9ul47e1.jpeg?width=320&crop=smart&auto=webp&s=d2c170b2e28a2e9a9a283da610a0d616966a9627', 'width': 320}, {'height': 822, 'url': 'https://preview.redd.it/1yz92y9ul47e1.jpeg?width=640&crop=smart&auto=webp&s=6f6fef7f58154c0c166058052dd6c18e689e39d3', 'width': 640}, {'height': 1233, 'url': 'https://preview.redd.it/1yz92y9ul47e1.jpeg?width=960&crop=smart&auto=webp&s=e7d834c07aff1bfba5edbeff42e03a15530d7baf', 'width': 960}, {'height': 1388, 'url': 'https://preview.redd.it/1yz92y9ul47e1.jpeg?width=1080&crop=smart&auto=webp&s=94c7b3e063918dbba0c1c4e1a84c3ba3f97cf303', 'width': 1080}], 'source': {'height': 1388, 'url': 'https://preview.redd.it/1yz92y9ul47e1.jpeg?auto=webp&s=5b0cb977adb6b3b12206cb23c3acf431b4c3dff6', 'width': 1080}, 'variants': {}}]} |
||
Building commercial product with open source project | 2 | For context, I don't have a degree in CS and I am new to programming. Basically, I'm trying to build an AI assistant using RAG. Can I just fork an open source project for the pipeline and add a UI? Is there a legal consequence for such a thing? What should I watch out for? | 2024-12-16T03:34:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hfa7cv/building_commercial_product_with_open_source/ | Easy-Mix8745 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfa7cv | false | null | t3_1hfa7cv | /r/LocalLLaMA/comments/1hfa7cv/building_commercial_product_with_open_source/ | false | false | self | 2 | null
How do I use ollama for getting insights? | 0 | What is the process to get insights from an Excel sheet using an OSS model like Llama 3.3, or whichever is best suited to provide insights on the data in the sheet? Are there specific prompts that need to be followed? What would the workflow be to ingest the data in vectorized form? Looking for guidance. Is this something that can be implemented as a workflow, say using n8n or Langflow? | 2024-12-16T03:49:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hfagnp/how_do_i_use_ollama_for_getting_insights/ | Fine-Degree431 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfagnp | false | null | t3_1hfagnp | /r/LocalLLaMA/comments/1hfagnp/how_do_i_use_ollama_for_getting_insights/ | false | false | self | 0 | null
3B chain of thought model with 128 context window. Based on Llama 3.2 3B. Performance on par with Llama 3.0 8B model, but fits into 8GB VRAM, so it can be run on a medium spec laptop for document summary etc. | 1 | 2024-12-16T03:53:39 | https://huggingface.co/chrisrutherford/Llama-3.2-3B-SingleShotCotV1 | lolzinventor | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hfajl0 | false | null | t3_1hfajl0 | /r/LocalLLaMA/comments/1hfajl0/3b_chain_of_thought_model_with_128_context_window/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'sbh9cFRM3ZXN4x9CXXr3nCbDfvcvA4W4a51m09r32UY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?width=108&crop=smart&auto=webp&s=f99e636a5b0b45351cf767062e5f55d93c3b145a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?width=216&crop=smart&auto=webp&s=c098342f4d6d6dca2043d30c6c34b0599a130b6f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?width=320&crop=smart&auto=webp&s=7df67fdb157882596a30b29a7029942d4168dd72', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?width=640&crop=smart&auto=webp&s=2cecaa84e4fc466b039897d957718ef8cc39a5dc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?width=960&crop=smart&auto=webp&s=4297090767da59dc42937cc046614073542f3af9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?width=1080&crop=smart&auto=webp&s=8a6b14cfdf2a29709c5ebe7bb648cdb918be5388', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?auto=webp&s=c640e6fe1590b56c1f2d3185591fca6a957348eb', 'width': 1200}, 'variants': {}}]} |
||
Help understanding performance needs for my use case. | 0 | Team, I've been reading for a while and I'm still no clearer on this, so here we go.
I'm writing a book and I have about 2000 articles and research papers I'll be basing this off of.
Let's just toss out the number 5 million words, give or take, of total information.
I don't need to fine tune a model with all this information and ask penetrating subject matter questions, I'm not deploying this anywhere, and I don't need it to be fast per se.
Rather I want to do a few more basic tasks.
1. feed in a paper at a time, maybe 3000 words, and ask for summaries.
2. feed in a group of papers based on subject, say 30k words and ask questions like "show me everywhere 'mitochondria' are mentioned"
3. feed in chapters of the book for writing and editing assistance which would be several thousand words give or take.
All that said, is my post/question too ignorant for a coherent response? Like is this question nonsensical on its face? Or can anyone guide me to a little more understanding?
Thank you! | 2024-12-16T04:00:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hfaobv/help_understanding_performance_needs_for_my_use/ | 1800-5-PP-DOO-DOO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfaobv | false | null | t3_1hfaobv | /r/LocalLLaMA/comments/1hfaobv/help_understanding_performance_needs_for_my_use/ | false | false | self | 0 | null |
3B chain of thought model with 128K context window. Based on Llama 3.2 3B. Performance on par with Llama 3.0 8B model, but fits into 8GB VRAM, so it can be run on a medium spec laptop for document summary etc.
| 75 | 2024-12-16T04:03:24 | https://huggingface.co/chrisrutherford/Llama-3.2-3B-SingleShotCotV1 | lolzinventor | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hfapyx | false | null | t3_1hfapyx | /r/LocalLLaMA/comments/1hfapyx/3b_chain_of_thought_model_with_128k_context/ | false | false | 75 | {'enabled': False, 'images': [{'id': 'sbh9cFRM3ZXN4x9CXXr3nCbDfvcvA4W4a51m09r32UY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?width=108&crop=smart&auto=webp&s=f99e636a5b0b45351cf767062e5f55d93c3b145a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?width=216&crop=smart&auto=webp&s=c098342f4d6d6dca2043d30c6c34b0599a130b6f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?width=320&crop=smart&auto=webp&s=7df67fdb157882596a30b29a7029942d4168dd72', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?width=640&crop=smart&auto=webp&s=2cecaa84e4fc466b039897d957718ef8cc39a5dc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?width=960&crop=smart&auto=webp&s=4297090767da59dc42937cc046614073542f3af9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?width=1080&crop=smart&auto=webp&s=8a6b14cfdf2a29709c5ebe7bb648cdb918be5388', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x-qKxvAimj6DmQxYNaG1KUg0ve12ZISaiD7vwA8GXRI.jpg?auto=webp&s=c640e6fe1590b56c1f2d3185591fca6a957348eb', 'width': 1200}, 'variants': {}}]} |
||
Any advice on FIM (fill in the middle) models and datasets that AREN'T code? | 8 | For a research project I'm looking into FIM models and datasets for natural language, i.e. not code. Anyone who has worked on this, any tips? Any models you found particularly powerful?
Is it reasonable to fine-tune a really strong code model for natural language, or is the code too baked in and I should look for a less powerful, but natural language, model? | 2024-12-16T04:04:18 | https://www.reddit.com/r/LocalLLaMA/comments/1hfaqi7/any_advice_on_fim_fill_in_the_middle_models_and/ | hemphock | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfaqi7 | false | null | t3_1hfaqi7 | /r/LocalLLaMA/comments/1hfaqi7/any_advice_on_fim_fill_in_the_middle_models_and/ | false | false | self | 8 | null |
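For anyone unfamiliar with how FIM is usually prompted at inference time, here is a small sketch of my own; the special token strings are the ones used by several code models (e.g. the Qwen2.5-Coder family), and whether a given model honours them for natural-language text is exactly the open question the post raises.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt; the model is expected to
    generate the missing span after the <|fim_middle|> marker."""
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

prompt = build_fim_prompt(
    prefix="The study enrolled 120 participants, ",
    suffix=" which limits how far the results generalise.",
)
```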
Teuken-7B - 24 European languages, part of the OpenGPT-X project, aimed at providing multilingual AI solutions | 55 | 2024-12-16T04:26:49 | https://www.handelsblatt.com/technik/ki/kuenstliche-intelligenz-warum-die-telekom-auf-eine-ki-von-fraunhofer-setzt/100065528.html | skuddeliwoo | handelsblatt.com | 1970-01-01T00:00:00 | 0 | {} | 1hfb4c8 | false | null | t3_1hfb4c8 | /r/LocalLLaMA/comments/1hfb4c8/teuken7b_24_european_languages_part_of_the/ | false | false | 55 | {'enabled': False, 'images': [{'id': 'Xn2R9qTdGS4IkyW1LiiDh7pQ6N_V-44HHpNet1btnzQ', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/yE8T4RxQV3Ie0bClC2GMJk1ZzEfY6x3WKF9XUYXJnZw.jpg?width=108&crop=smart&auto=webp&s=b94a8de2752a2e73b8f6dad4d13dbd22ed7149ac', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/yE8T4RxQV3Ie0bClC2GMJk1ZzEfY6x3WKF9XUYXJnZw.jpg?width=216&crop=smart&auto=webp&s=5fd2bcf8b89fa900873d950abb5aaecfc54266c5', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/yE8T4RxQV3Ie0bClC2GMJk1ZzEfY6x3WKF9XUYXJnZw.jpg?width=320&crop=smart&auto=webp&s=ab70929e6897702404e1449b13eb83255c66b9d1', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/yE8T4RxQV3Ie0bClC2GMJk1ZzEfY6x3WKF9XUYXJnZw.jpg?width=640&crop=smart&auto=webp&s=079926d30898c183b2cab15a06e47164205df518', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/yE8T4RxQV3Ie0bClC2GMJk1ZzEfY6x3WKF9XUYXJnZw.jpg?width=960&crop=smart&auto=webp&s=d504360b2a8c1a9d3208e193032fed4760d07cfd', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/yE8T4RxQV3Ie0bClC2GMJk1ZzEfY6x3WKF9XUYXJnZw.jpg?width=1080&crop=smart&auto=webp&s=76f6fb2ec66c1e7590d3dbd0f85f350ef9f83ad2', 'width': 1080}], 'source': {'height': 934, 'url': 'https://external-preview.redd.it/yE8T4RxQV3Ie0bClC2GMJk1ZzEfY6x3WKF9XUYXJnZw.jpg?auto=webp&s=cd237b217e62b355a20cc8cf9a802d7064981b45', 'width': 1400}, 'variants': {}}]} |
||
Logit Bias Whitelisting | 1 | Hi, does anyone know how to only allow certain tokens to be generated, through either self-hosting or an API, preferably in a scalable way? I'm aware of `logit_bias`, however that only allows 1024 tokens, and I want to basically only allow the model to generate from a few thousand tokens. Basically soft whitelisting, but on a larger scale: 1000 - 5000 tokens. | 2024-12-16T04:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/1hfbd7w/logit_bias_whitelisting/ | ImpressiveHead69420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfbd7w | false | null | t3_1hfbd7w | /r/LocalLLaMA/comments/1hfbd7w/logit_bias_whitelisting/ | false | false | self | 1 | null
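One way past the 1024-entry `logit_bias` limit is to self-host and mask the logits directly. Below is a minimal sketch with Hugging Face transformers; the model name is a placeholder and `allowed_token_ids` stands in for whatever few-thousand-token whitelist you build.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessor, LogitsProcessorList

class AllowListProcessor(LogitsProcessor):
    """Set every token outside the allow-list to -inf so it can never be sampled."""
    def __init__(self, allowed_token_ids, vocab_size):
        self.mask = torch.full((vocab_size,), float("-inf"))
        self.mask[list(allowed_token_ids)] = 0.0

    def __call__(self, input_ids, scores):
        return scores + self.mask.to(scores.device)

name = "meta-llama/Llama-3.2-1B-Instruct"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

allowed_token_ids = set(range(5000))  # stand-in for your real whitelist
inputs = tokenizer("The colour of the sky is", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=20,
    logits_processor=LogitsProcessorList(
        [AllowListProcessor(allowed_token_ids, model.config.vocab_size)]
    ),
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```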
Perplexity AI Pro 1-YEAR Coupon - Only $25 (€23) | Subscribe then Pay!
| 1 | [removed] | 2024-12-16T04:42:55 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hfbe5t | false | null | t3_1hfbe5t | /r/LocalLLaMA/comments/1hfbe5t/perplexity_ai_pro_1year_coupon_only_25_23/ | false | false | default | 1 | null |
||
What AI things would you do with $1500 in compute credit? | 1 | [removed] | 2024-12-16T05:16:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hfbys7/what_ai_things_would_you_do_with_1500_in_compute/ | shepbryan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfbys7 | false | null | t3_1hfbys7 | /r/LocalLLaMA/comments/1hfbys7/what_ai_things_would_you_do_with_1500_in_compute/ | false | false | self | 1 | null |
How do I chat with hundreds of thousands of files? | 2 | So, I've got this backup of an old website. It's got hundreds of thousands of files from the mid-90s to 2017. The files have many different extensions and have no consistent format. I would like to chat with the files in the directory that contain text. Is there a no-code way of doing this? I am running a 4060, but it doesn't have to be local.
Thank you! | 2024-12-16T05:29:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hfc5xc/how_do_i_chat_with_hundreds_of_thousands_of_files/ | PublicQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfc5xc | false | null | t3_1hfc5xc | /r/LocalLLaMA/comments/1hfc5xc/how_do_i_chat_with_hundreds_of_thousands_of_files/ | false | false | self | 2 | null |
Convert Ollama models back to GGUF | 1 | 2024-12-16T05:43:16 | https://github.com/mattjamo/OllamaToGGUF | BlindedByTheWiFi | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hfce8j | false | null | t3_1hfce8j | /r/LocalLLaMA/comments/1hfce8j/convert_ollama_models_back_to_gguf/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'A4r6GyJltux0HMp1m9njOMBdX-A_CwlDYnJRQSUlSHM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jl3X49AAmrzTnULyAz7ChT51J-lGstozOhpv5IjPb74.jpg?width=108&crop=smart&auto=webp&s=d7301967a08c2c7d9bfc6fc17cd6cdd02aa5679b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jl3X49AAmrzTnULyAz7ChT51J-lGstozOhpv5IjPb74.jpg?width=216&crop=smart&auto=webp&s=89aedc2c742f40024f9f8539b97df97beb79122c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jl3X49AAmrzTnULyAz7ChT51J-lGstozOhpv5IjPb74.jpg?width=320&crop=smart&auto=webp&s=58a6d133f8ba39b7aba3806b65cad553702ff08a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jl3X49AAmrzTnULyAz7ChT51J-lGstozOhpv5IjPb74.jpg?width=640&crop=smart&auto=webp&s=7a7ada04804bea9bb17caeb2bade79c446ff676a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jl3X49AAmrzTnULyAz7ChT51J-lGstozOhpv5IjPb74.jpg?width=960&crop=smart&auto=webp&s=847300b6cd8ba34d51d3cd537df9dc2f1044b1f2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jl3X49AAmrzTnULyAz7ChT51J-lGstozOhpv5IjPb74.jpg?width=1080&crop=smart&auto=webp&s=f7d57c044907b4a970a62dca10fac33af4862473', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jl3X49AAmrzTnULyAz7ChT51J-lGstozOhpv5IjPb74.jpg?auto=webp&s=9b927caae25d27668d5f80f008088a04c56a07e9', 'width': 1200}, 'variants': {}}]} |
||
What exactly is a system Prompt? How different is it from user prompt? | 2 | For my projects I pass every instruction and all the few-shot examples in the system prompt, but is it even necessary to put all of this in the system prompt? | 2024-12-16T05:47:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hfcgol/what_exactly_is_a_system_prompt_how_different_is/ | ShippersAreIdiots | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfcgol | false | null | t3_1hfcgol | /r/LocalLLaMA/comments/1hfcgol/what_exactly_is_a_system_prompt_how_different_is/ | false | false | self | 2 | null
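For what it's worth, the mechanical difference is just the role field on each message. A minimal sketch of splitting standing instructions (system) from few-shot examples and the actual request (user/assistant turns) is below; the endpoint and model name are placeholders for a local Ollama-style server.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # e.g. a local Ollama server

messages = [
    # System: standing instructions about role, tone, and output format.
    {"role": "system", "content": "You are a terse assistant. Answer with a single JSON object."},
    # Few-shot example as a user/assistant pair rather than in the system prompt.
    {"role": "user", "content": "Extract the city: 'I flew to Paris last week.'"},
    {"role": "assistant", "content": '{"city": "Paris"}'},
    # The real request.
    {"role": "user", "content": "Extract the city: 'We are moving to Osaka in May.'"},
]

resp = client.chat.completions.create(model="llama3.2", messages=messages)
print(resp.choices[0].message.content)
```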
How does advanced speech and Gemini’s new model know when to start speaking? | 1 | [removed] | 2024-12-16T06:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hfd5vt/how_does_advanced_speech_and_geminis_new_model/ | Old-Calligrapher1950 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfd5vt | false | null | t3_1hfd5vt | /r/LocalLLaMA/comments/1hfd5vt/how_does_advanced_speech_and_geminis_new_model/ | false | false | self | 1 | null |
When OpenAI predicted outputed input content is large, the effect is average? | 1 | [removed] | 2024-12-16T06:48:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hfddde/when_openai_predicted_outputed_input_content_is/ | zionfly1996 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfddde | false | null | t3_1hfddde | /r/LocalLLaMA/comments/1hfddde/when_openai_predicted_outputed_input_content_is/ | false | false | self | 1 | null |
Need endorsement on arXiv (cs.AI) for LLM research paper | 1 | [removed] | 2024-12-16T06:48:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hfddfz/need_endorsement_on_arxiv_csai_for_llm_research/ | NefariousnessSad2208 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfddfz | false | null | t3_1hfddfz | /r/LocalLLaMA/comments/1hfddfz/need_endorsement_on_arxiv_csai_for_llm_research/ | false | false | self | 1 | null |
Deploying OpenBioLLM 8B on EC2 with Reliable API Performance | 1 | I’ve been experimenting with the **OpenBioLLM 8B** 8-Bit **quantized version** using **LLM Studio**, and the performance has been solid during testing. However, when I attempt inference locally on my **M1 Mac Pro** via **FastAPI**, the results are disappointing — it generates arbitrary responses and performs poorly.
I’ve even replicated the same configurations from LLM Studio, but the local inference still doesn’t work as expected.
Now, I’m looking to **deploy the base 8B model on an EC2 instance** (not using SageMaker) and serve it as an API. Unfortunately, I haven’t found any resources or guides for this specific setup.
Does anyone have experience with:
1. **Deploying OpenBioLLM on EC2** for stable inference?
2. Optimizing FastAPI with such models to handle inference efficiently?
3. Setting up the right environment (frameworks, libraries, etc.) for EC2 deployment? | 2024-12-16T07:09:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hfdnkt/deploying_openbiollm_8b_on_ec2_with_reliable_api/ | SnooTigers4634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfdnkt | false | null | t3_1hfdnkt | /r/LocalLLaMA/comments/1hfdnkt/deploying_openbiollm_8b_on_ec2_with_reliable_api/ | false | false | self | 1 | null |
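Not an answer to the EC2 specifics, but for reference, one common self-hosting pattern is to load the model once at startup and expose a single completion route. This is a minimal sketch using vLLM behind FastAPI; the model ID and generation settings are placeholders rather than anything the post confirms about OpenBioLLM.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from vllm import LLM, SamplingParams

app = FastAPI()
llm = LLM(model="aaditya/Llama3-OpenBioLLM-8B")  # placeholder HF model id; loaded once at startup
params = SamplingParams(temperature=0.2, max_tokens=256)

class Prompt(BaseModel):
    text: str

@app.post("/generate")
def generate(prompt: Prompt):
    # vLLM batches requests internally; one prompt in, one completion out here.
    outputs = llm.generate([prompt.text], params)
    return {"completion": outputs[0].outputs[0].text}
```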
The Cheapest service to train local AI? | 0 | I'm looking for something really cheap, the hourly price should be as low as possible. In terms of price, what is offered in the USA and Europe does not work for me. Any kind of service in China or India? Something like Salad.com but cheaper. | 2024-12-16T07:14:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hfdq3e/the_cheapest_service_to_train_local_ai/ | DarKresnik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfdq3e | false | null | t3_1hfdq3e | /r/LocalLLaMA/comments/1hfdq3e/the_cheapest_service_to_train_local_ai/ | false | false | self | 0 | null |
GitHub - microsoft/markitdown: Python tool for converting files and office documents to Markdown. | 294 | 2024-12-16T07:29:49 | https://github.com/microsoft/markitdown | LinkSea8324 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hfdx7k | false | null | t3_1hfdx7k | /r/LocalLLaMA/comments/1hfdx7k/github_microsoftmarkitdown_python_tool_for/ | false | false | 294 | {'enabled': False, 'images': [{'id': 'pmff19DWYOclvaNbXpZwhLs67hY4jJV_vTD9L_xXXzw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dTchCfiYP2ai6E-2JIdoYyzYfYz7ewKguYSQ2aHYkUM.jpg?width=108&crop=smart&auto=webp&s=48c057c3ae84f3828d9bdc6ecf4a991626ab11c0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dTchCfiYP2ai6E-2JIdoYyzYfYz7ewKguYSQ2aHYkUM.jpg?width=216&crop=smart&auto=webp&s=a06d37b725323e72796b76670443e84a68520bb4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dTchCfiYP2ai6E-2JIdoYyzYfYz7ewKguYSQ2aHYkUM.jpg?width=320&crop=smart&auto=webp&s=13bb44aae8ec90b9ff4045d0e7292eca66c7ada5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dTchCfiYP2ai6E-2JIdoYyzYfYz7ewKguYSQ2aHYkUM.jpg?width=640&crop=smart&auto=webp&s=fb15d2feb5d50fc27ebd4e91d256e182ebebb954', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dTchCfiYP2ai6E-2JIdoYyzYfYz7ewKguYSQ2aHYkUM.jpg?width=960&crop=smart&auto=webp&s=d9cc5a348808f298e552792ad33e1fb948566fb2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dTchCfiYP2ai6E-2JIdoYyzYfYz7ewKguYSQ2aHYkUM.jpg?width=1080&crop=smart&auto=webp&s=da29f0ea357f824bd97e600daca0c5c703c234f6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dTchCfiYP2ai6E-2JIdoYyzYfYz7ewKguYSQ2aHYkUM.jpg?auto=webp&s=6c2a28f10cb07ff42b96831f9b85ddf7ab6fbfc3', 'width': 1200}, 'variants': {}}]} |
||
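The linked repo's basic usage is a few lines of Python; the snippet below follows the README's pattern as I understand it, so treat the exact API as something to verify against the current version.

```python
from markitdown import MarkItDown

md = MarkItDown()
result = md.convert("quarterly_report.xlsx")  # also handles .docx, .pptx, .pdf, .html, ...
print(result.text_content)
```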
SmolTulu-1.7b-Instruct - My first model release! New SOTA for small models on IFEval & GSM8K | 1 | [removed] | 2024-12-16T07:49:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hfe6q6/smoltulu17binstruct_my_first_model_release_new/ | SulRash | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfe6q6 | false | null | t3_1hfe6q6 | /r/LocalLLaMA/comments/1hfe6q6/smoltulu17binstruct_my_first_model_release_new/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'GDX8ULtA7vVPSsdDK0iTzb4GthqQONyDlYxob7VoLj8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=108&crop=smart&auto=webp&s=ac979d327f3058d12a097660a20f0fb44d3204af', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=216&crop=smart&auto=webp&s=f784de779f8c89395baa6f0a68079793aa2c91f3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=320&crop=smart&auto=webp&s=2085811d7faf3a8c337cee13284d374a897c1e80', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=640&crop=smart&auto=webp&s=dfbd753fcaaaf6cdefd89706b018b8b275876032', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=960&crop=smart&auto=webp&s=393c0033e17a645a17c3f23e23488be928117128', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?width=1080&crop=smart&auto=webp&s=041cebfbd54ef48cd60e416be1e53837f494be2d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7zNyD8DMqhmgSvzOAQK-uo_84KbhNZ2hqoSIkpPM5NQ.jpg?auto=webp&s=40048700d631830a9b2239fcce393e01253cd7e9', 'width': 1200}, 'variants': {}}]} |
Model Convergence and What it Says About Humanity | 0 | So, I’ve fine-tuned a few models already and am going to be posting about them at some point, along with my datasets and my website. However, the main thing I wanted to ask about has to do with models and how they converge during a fine-tune or pretrain.
Essentially, the training loss minimizes during the training run if you’ve written your axolotl config correctly. I’m wondering if this is what we’re doing as human beings here on Reddit.
I’ve been everywhere on this platform, places of all opinions and ideas. I see people constantly using ad hominem against each other or fighting over what I personally find to be pointless nonsense. I see that over time, as people continue to interact with each other and work together, society itself changes, almost as if we’re converging on each other every day.
That brings me to the topic of this discussion: is this way that we interact and fight with each other online and in real life …our human way of training and converging? Our ability to take input data in multiple forms and then learn from it in real-time as we continue to interact with our environment. Furthermore, wouldn’t a system like ours be better overall in order to supplement human labor and increase our productivity as a species?
Instead of creating AI just to automate a task or to make money, shouldn’t we be digitizing all aspects and components of human intelligence in order to create a more holistic system that can benefit the world and all living species which inhabit it?
Just as a side-note, I believe the original goal of AI was to digitize all aspects of human intelligence, rather than to make a workforce that we can make infinite profit off of. I believe this was declared at the Dartmouth Conference which is considered to be the birth of AI. AGI is just a side quest on this adventure. | 2024-12-16T08:04:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hfeds1/model_convergence_and_what_it_says_about_humanity/ | Helpful-Desk-8334 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfeds1 | false | null | t3_1hfeds1 | /r/LocalLLaMA/comments/1hfeds1/model_convergence_and_what_it_says_about_humanity/ | false | false | self | 0 | null |
spoof a GPU so it appears as another in linux? | 0 | I have a Gigabyte G431-MM0 for my LLM rig. It's a generally decent shell for my CMP 100-210s, with one slight problem: the BIOS/BC doesn't recognize the CMP. It lists a load of other supported cards, including the V100, just not the CMPs.
Now, technically you wouldn't think this would be a big issue, but for some reason the system disables the GPU temp sensors unless it finds a known GPU, which means I currently have to manually set the fan speed via the BMC. I tried using setpci but it seemed to make no difference. Does anyone know of any other tweaks I may be able to do to fix this? A modded BIOS or something like that? | 2024-12-16T08:07:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hfef57/spoof_a_gpu_so_it_appears_as_another_in_linux/ | gaspoweredcat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfef57 | false | null | t3_1hfef57 | /r/LocalLLaMA/comments/1hfef57/spoof_a_gpu_so_it_appears_as_another_in_linux/ | false | false | self | 0 | null
Smaller models for coding and research that work well on a RTX 3070? | 2 | Has anyone had any luck finding a good model to run locally with an RTX 3070? I tried Nvidia's Mistral Nemo 12B Q4, but it still only uses 30% of the GPU; my guess is that it doesn't fit in the limited 8GB VRAM (such a scam from Nvidia). | 2024-12-16T08:19:55 | https://www.reddit.com/r/LocalLLaMA/comments/1hfektk/smaller_models_for_coding_and_research_that_work/ | Theboyscampus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfektk | false | null | t3_1hfektk | /r/LocalLLaMA/comments/1hfektk/smaller_models_for_coding_and_research_that_work/ | false | false | self | 2 | null
Better to pay a subscription or build a local system | 19 | Cost aside, I love how AI enhances my learning capabilities. Would it be better to continue to pay for monthly subscriptions (currently just Claude Pro and ChatGPT Teams, but I canceled ChatGPT; I'm not paying $200 a month)? My thought in building a locally hosted system is that it is, in itself, the best learning experience. Even if it's a waste of money, I'll have insight into products and services in a more nuanced way than ever before. What are your opinions? | 2024-12-16T08:22:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hfem6r/better_to_pay_a_subscription_or_build_a_local/ | DragonfruitBright932 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfem6r | false | null | t3_1hfem6r | /r/LocalLLaMA/comments/1hfem6r/better_to_pay_a_subscription_or_build_a_local/ | false | false | self | 19 | null
Any Local Llama Webscraper? | 1 | Anyone know a product that can do this?
| 2024-12-16T08:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hfepdz/any_local_llama_webscraper/ | mrsimple162 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfepdz | false | null | t3_1hfepdz | /r/LocalLLaMA/comments/1hfepdz/any_local_llama_webscraper/ | false | false | self | 1 | null |
Is there a way to remotely access my self-hosted LM Studio from my phone or another device? | 2 | I've been trying to find a way to do this but I keep hitting dead ends. I tried using [LMSA](https://github.com/techcow2/lmsa) but it never actually connects. I set up Tailscale but I don't know how to connect the two programs. Is there a straightforward and easy way to do this? Like a service (LM Studio, SillyTavern, etc) that has an Android app/Windows app bridge? | 2024-12-16T08:30:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hfeptq/is_there_a_way_to_remotely_access_my_selfhosted/ | switchpizza | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfeptq | false | null | t3_1hfeptq | /r/LocalLLaMA/comments/1hfeptq/is_there_a_way_to_remotely_access_my_selfhosted/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'BvkuxjQlUROBoKEP7fu722-KQ8o6t2OxUEgqo3yjfAc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/p0ryHdmeJgSyfMY0Bzxf_bJux95y6wutW9BY6p58j6I.jpg?width=108&crop=smart&auto=webp&s=681391280f576dd0ad7131f2dbbebe34514156da', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/p0ryHdmeJgSyfMY0Bzxf_bJux95y6wutW9BY6p58j6I.jpg?width=216&crop=smart&auto=webp&s=4cccb478d5cdba634d15cdcc3551055aa906d0fb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/p0ryHdmeJgSyfMY0Bzxf_bJux95y6wutW9BY6p58j6I.jpg?width=320&crop=smart&auto=webp&s=3076730a021cbdde7a38ee108e940755aa946041', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/p0ryHdmeJgSyfMY0Bzxf_bJux95y6wutW9BY6p58j6I.jpg?width=640&crop=smart&auto=webp&s=39f9adce838b6415a498a57f768f8883c9171621', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/p0ryHdmeJgSyfMY0Bzxf_bJux95y6wutW9BY6p58j6I.jpg?width=960&crop=smart&auto=webp&s=2699a72bc73c7c8c7a52d65ac5a8d86e3af55c97', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/p0ryHdmeJgSyfMY0Bzxf_bJux95y6wutW9BY6p58j6I.jpg?width=1080&crop=smart&auto=webp&s=c161f341e03ed31473a20f1c5352d5e4e361580a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/p0ryHdmeJgSyfMY0Bzxf_bJux95y6wutW9BY6p58j6I.jpg?auto=webp&s=f57fea6e5ddbff97e45cd59fdd8988ce5f6032d7', 'width': 1200}, 'variants': {}}]} |
On-Device AI Apps, Safe to use at work | 1 | [removed] | 2024-12-16T08:31:52 | https://www.reddit.com/r/LocalLLaMA/comments/1hfeqcg/ondevice_ai_apps_safe_to_use_at_work/ | shopliftingisfun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfeqcg | false | null | t3_1hfeqcg | /r/LocalLLaMA/comments/1hfeqcg/ondevice_ai_apps_safe_to_use_at_work/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'M_85tOxsTSiSI9O1PmAoYfoB2_94QKefixD4MHYh5YE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/AncRjynleNFChxbiAc9fSvmdcNmK0Fhi2b7FY0_aYrw.jpg?width=108&crop=smart&auto=webp&s=ab436f2ecbe185174875f1578bd2b0848b288f76', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/AncRjynleNFChxbiAc9fSvmdcNmK0Fhi2b7FY0_aYrw.jpg?width=216&crop=smart&auto=webp&s=b820a0816151526a63b189363c43c215f62f5d7a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/AncRjynleNFChxbiAc9fSvmdcNmK0Fhi2b7FY0_aYrw.jpg?width=320&crop=smart&auto=webp&s=6be03e1b4e7894fe4e36aed2558c40d88e423719', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/AncRjynleNFChxbiAc9fSvmdcNmK0Fhi2b7FY0_aYrw.jpg?width=640&crop=smart&auto=webp&s=b89b0c6c3e189a4d9cc64be771b301f1072f2e43', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/AncRjynleNFChxbiAc9fSvmdcNmK0Fhi2b7FY0_aYrw.jpg?width=960&crop=smart&auto=webp&s=cb51ba75a3cdb96e7a2ebea4fb32b9f059dfedb8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/AncRjynleNFChxbiAc9fSvmdcNmK0Fhi2b7FY0_aYrw.jpg?width=1080&crop=smart&auto=webp&s=e7238f75d903f85f9bd731031be79a471b4fcce6', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/AncRjynleNFChxbiAc9fSvmdcNmK0Fhi2b7FY0_aYrw.jpg?auto=webp&s=c67a12c2ee41631442dcfb6f075afae816762c03', 'width': 3840}, 'variants': {}}]} |
Ottomator open source ai agents platform pre release | 0 | 2024-12-16T08:32:34 | https://www.reddit.com/gallery/1hfeqmo | TheLogiqueViper | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hfeqmo | false | null | t3_1hfeqmo | /r/LocalLLaMA/comments/1hfeqmo/ottomator_open_source_ai_agents_platform_pre/ | false | false | 0 | null |
||
Llama 3.2 1B surprisingly good | 99 | I had a basic text processing pipeline to build and tried Llama 3.2 1B for the first time, and I was pleasantly surprised by how good it was! I even preferred it to the 3B version (sometimes, being a bit dumber and not over-complicating things can be useful).
Intrigued, I tried asking a few general knowledge questions and was pleasantly surprised that a lot of information is there. I wonder how much you can really store in a 1B model quantized at 4-5bits? | 2024-12-16T08:55:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hff0wj/llama_32_1b_surprisingly_good/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hff0wj | false | null | t3_1hff0wj | /r/LocalLLaMA/comments/1hff0wj/llama_32_1b_surprisingly_good/ | false | false | self | 99 | null |
Meta releases the Apollo family of Large Multimodal Models. The 7B is SOTA and can comprehend a 1 hour long video. You can run this locally. | 892 | 2024-12-16T09:30:35 | https://huggingface.co/papers/2412.10360 | jd_3d | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hffh35 | false | null | t3_1hffh35 | /r/LocalLLaMA/comments/1hffh35/meta_releases_the_apollo_family_of_large/ | false | false | 892 | {'enabled': False, 'images': [{'id': 'ZpERwteSpIgSMAPNiugnJ6z1Rwcy7XAACAYMyJ2p8Ro', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KoaRN74Bezmpeg0ErAeyYxDxp-W_mnLW0ECSPTgDXuA.jpg?width=108&crop=smart&auto=webp&s=1ddab6ee31f8c585f888429920e18afb6d86f510', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KoaRN74Bezmpeg0ErAeyYxDxp-W_mnLW0ECSPTgDXuA.jpg?width=216&crop=smart&auto=webp&s=58d9c3cad63d7fcd44ccde8cf09a16798f41c26f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KoaRN74Bezmpeg0ErAeyYxDxp-W_mnLW0ECSPTgDXuA.jpg?width=320&crop=smart&auto=webp&s=d03d2bf722da8c611fa88f16af63bbd646f2ba31', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KoaRN74Bezmpeg0ErAeyYxDxp-W_mnLW0ECSPTgDXuA.jpg?width=640&crop=smart&auto=webp&s=4adebfd6539209dabc63dceaca80ae55c21c5941', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KoaRN74Bezmpeg0ErAeyYxDxp-W_mnLW0ECSPTgDXuA.jpg?width=960&crop=smart&auto=webp&s=f873d22534c712eb1611230e495c02f69e864ea8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KoaRN74Bezmpeg0ErAeyYxDxp-W_mnLW0ECSPTgDXuA.jpg?width=1080&crop=smart&auto=webp&s=a72168392a97ffb9a9d90c9c434cb05c5a006707', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KoaRN74Bezmpeg0ErAeyYxDxp-W_mnLW0ECSPTgDXuA.jpg?auto=webp&s=16f044bff1bddda1ede3e2ba3df43a814ea6fda9', 'width': 1200}, 'variants': {}}]} |
||
Open Sourcing PlugOvr.ai | 6 | We just released PlugOvr to the open source community under the MIT license.
[https://github.com/PlugOvr-ai/PlugOvr](https://github.com/PlugOvr-ai/PlugOvr)
PlugOvr is an AI Assistant that lets you directly interact with your favorite applications.
You can define templates for your own use cases and select individual LLMs per template.
Choose any LLM from Ollama's complete offerings.
Feel free to reach out if you'd like to contribute. | 2024-12-16T09:34:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hffiqv/open_sourcing_plugovrai/ | cwefelscheid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hffiqv | false | null | t3_1hffiqv | /r/LocalLLaMA/comments/1hffiqv/open_sourcing_plugovrai/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'ZXQg6GQl0P5nLlxmuoAl9DBJs1E4F9fPrgXBQv7yJZc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cMYlS3oypB349UwmTkb5aWoJNyFEZe7P0CFMFAek54o.jpg?width=108&crop=smart&auto=webp&s=10cf510010fe356ec05abbd8f7c5c3d740c8f71c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cMYlS3oypB349UwmTkb5aWoJNyFEZe7P0CFMFAek54o.jpg?width=216&crop=smart&auto=webp&s=af652ce6311f7edaabdd58959d1a0d51039e7816', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cMYlS3oypB349UwmTkb5aWoJNyFEZe7P0CFMFAek54o.jpg?width=320&crop=smart&auto=webp&s=e042cf96cace814f6ee2477700213b14737b45a1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cMYlS3oypB349UwmTkb5aWoJNyFEZe7P0CFMFAek54o.jpg?width=640&crop=smart&auto=webp&s=5faac93402d2d6ac1f05d16d1df20b1629b1f304', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cMYlS3oypB349UwmTkb5aWoJNyFEZe7P0CFMFAek54o.jpg?width=960&crop=smart&auto=webp&s=3042a275d58150863f80595fdcb28004eb56106c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cMYlS3oypB349UwmTkb5aWoJNyFEZe7P0CFMFAek54o.jpg?width=1080&crop=smart&auto=webp&s=65054352de06a6f34daeda019cf70e6c4c514804', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cMYlS3oypB349UwmTkb5aWoJNyFEZe7P0CFMFAek54o.jpg?auto=webp&s=0333d128a3c86c4bbdc772968dd4338f0de86abb', 'width': 1200}, 'variants': {}}]} |
Help Me Navigate the Maze of Local AI Text Processing Models! (12GB VRAM) | 4 | Hey fellow tech enthusiasts!
I'm on a quest to set up a local AI model on my Windows 11 PC that can process text files and extract/present data intelligently. My setup involves an RTX 4070 Ti with 12GB VRAM, and I'm determined to leverage that GPU power without getting bogged down by system memory limitations.
The struggle has been real. I've spent countless hours googling, feeling like I'm drowning in technical jargon that seems more like an alien language than helpful guidance. Every forum and tutorial I've encountered has left me more confused than enlightened, with conflicting advice and overwhelming technical details.
What I'm seeking is a straightforward solution: an AI model capable of reading local text files, intelligently extracting meaningful data, and presenting that information in a customizable format. I'm hoping to find a GPU-accelerated option that doesn't require a PhD in computer science to set up.
I would be incredibly grateful for a hero willing to share some wisdom and help me navigate this complex landscape. Specifically, I'm looking for a beginner-friendly recommendation, some step-by-step installation guidance, and maybe a few tips to avoid the common pitfalls that seem to trap newcomers like myself.
Any guidance would be immensely appreciated. You'd essentially be rescuing a fellow tech adventurer from the depths of confusion! 🙏 | 2024-12-16T09:45:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hffnhq/help_me_navigate_the_maze_of_local_ai_text/ | thebeeq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hffnhq | false | null | t3_1hffnhq | /r/LocalLLaMA/comments/1hffnhq/help_me_navigate_the_maze_of_local_ai_text/ | false | false | self | 4 | null |
yawu web UI is here! | 25 | If you've seen my previous post about a web UI written mostly by Gemini, it's now released after some more polishing!
You can now get it from [GitHub](https://github.com/FishiaT/yawullm).
What's changed since that post (literally just yesterday):
* Animations/transitions effects
* More color palettes for you to play with
* Parameter configuration
* More polished than before
* Bigger HTML file size I guess...?
Tell me what you guys think about this!
And here's another video showcasing it.
https://reddit.com/link/1hffzje/video/262cmx8cq67e1/player
| 2024-12-16T10:10:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hffzje/yawu_web_ui_is_here/ | Sad-Fix-7915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hffzje | false | null | t3_1hffzje | /r/LocalLLaMA/comments/1hffzje/yawu_web_ui_is_here/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'NceP5enAwoIiLBwUGPeVOHabuw53p2OyHyiX-J2WoR4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3I1Ut7FOQElOzF93f29iBEfTg_IjI5Biv6zX0zmF7Y4.jpg?width=108&crop=smart&auto=webp&s=1bc6d87153a4547cc7b51ebc4087e3e19ebc08a0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3I1Ut7FOQElOzF93f29iBEfTg_IjI5Biv6zX0zmF7Y4.jpg?width=216&crop=smart&auto=webp&s=ed00dac5ed327a7c2d2bccfcb474fcbbf577c278', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3I1Ut7FOQElOzF93f29iBEfTg_IjI5Biv6zX0zmF7Y4.jpg?width=320&crop=smart&auto=webp&s=1923480e4711c356a9b2a4f3198b449cc3716211', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3I1Ut7FOQElOzF93f29iBEfTg_IjI5Biv6zX0zmF7Y4.jpg?width=640&crop=smart&auto=webp&s=842cc2da35df5b677dc874c46054ab264d86d8a4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3I1Ut7FOQElOzF93f29iBEfTg_IjI5Biv6zX0zmF7Y4.jpg?width=960&crop=smart&auto=webp&s=7bee5ba5dc7a82f983fcb7ff54b07a87713744e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3I1Ut7FOQElOzF93f29iBEfTg_IjI5Biv6zX0zmF7Y4.jpg?width=1080&crop=smart&auto=webp&s=65ba63e01780c5cf03a4f7e7691eae42d198e57a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3I1Ut7FOQElOzF93f29iBEfTg_IjI5Biv6zX0zmF7Y4.jpg?auto=webp&s=6ca9f5ba89292bc05257990fe433eeead37014ac', 'width': 1200}, 'variants': {}}]} |
|
Any actual game based on LLM? | 38 | Hey, I wish there was a game that's similar to normal roleplay chat with LLM (text based game is sufficient), but it would also include some backend software that controls pre-made quests or an actual storyline, and some underlying system controlling inventory, stats, skills, you know, like a game. :)
Have you heard of anything like this existing?
I'm getting bored with being an omnipotent gamemaster in every RP chat, and with the fact that I have to push the story forward or, best case scenario, let it be totally random. And that any 'rules' in the game are made up by me, and only I have to guard myself to stick to those rules. In one RP I was bored and said to the NPC 'I look down and find a million dollars on the street' and the LLM was like 'Sure, alright boss'. I hate that. A real human gamemaster would reach for a long wooden ruler and smack me right in the head for acting like an idiot, and would simply say 'No'! ;) | 2024-12-16T10:14:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hfg1ro/any_actual_game_based_on_llm/ | filszyp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfg1ro | false | null | t3_1hfg1ro | /r/LocalLLaMA/comments/1hfg1ro/any_actual_game_based_on_llm/ | false | false | self | 38 | null
Open Source - Ollama LLM client MyOllama has been revised to v1.1.0 | 0 | This version supports iPad and Mac Desktop
If you can build Flutter, you can download the source from the link.
Android users can download the binary from this link. It's 1.0.7, but I'll post the update soon.
iOS users, please update or build from source.
GitHub:
[https://github.com/bipark/my_ollama_app](https://github.com/bipark/my_ollama_app)
#MyOllama
https://preview.redd.it/upmjztxe277e1.png?width=2752&format=png&auto=webp&s=4f3a3ede7e9a2588fcaa5bc3edf2d7966700d2f3
| 2024-12-16T11:18:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hfgx5r/open_source_ollama_llm_client_myollama_has_been/ | billythepark | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfgx5r | false | null | t3_1hfgx5r | /r/LocalLLaMA/comments/1hfgx5r/open_source_ollama_llm_client_myollama_has_been/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'AIvjY-js2d9C7SYYIGHv4Mfq3NIHm63c5_Vv3FN-Jtc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/m_QJ5zMc01UXG6y3VNcZmw2jn1XTxacxxD2UkjG1Gg0.jpg?width=108&crop=smart&auto=webp&s=794eb45b6b1b9a0405bb71ccb8ff4b922987faa5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/m_QJ5zMc01UXG6y3VNcZmw2jn1XTxacxxD2UkjG1Gg0.jpg?width=216&crop=smart&auto=webp&s=79ae0a0485720c3d6bffdb4c7fc1cc757f8abab8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/m_QJ5zMc01UXG6y3VNcZmw2jn1XTxacxxD2UkjG1Gg0.jpg?width=320&crop=smart&auto=webp&s=409cc0bab9951a40e8314bafab8beadc2e76cf03', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/m_QJ5zMc01UXG6y3VNcZmw2jn1XTxacxxD2UkjG1Gg0.jpg?width=640&crop=smart&auto=webp&s=a556c91165d24a4a03ad9c1f0d4a1bf40ffbf4e7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/m_QJ5zMc01UXG6y3VNcZmw2jn1XTxacxxD2UkjG1Gg0.jpg?width=960&crop=smart&auto=webp&s=b84d6b4879533fbe8946fbddad28895dafa4492c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/m_QJ5zMc01UXG6y3VNcZmw2jn1XTxacxxD2UkjG1Gg0.jpg?width=1080&crop=smart&auto=webp&s=aecb24f154738df3d02d0bea85e1085541afe2f2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/m_QJ5zMc01UXG6y3VNcZmw2jn1XTxacxxD2UkjG1Gg0.jpg?auto=webp&s=34cf55406193f4a6ce46aa0ce7f0f7ebde516c6f', 'width': 1200}, 'variants': {}}]} |
|
P510 Thinkstation (already own) with Tesla T4 or new AM5/DDR5 with RTX 4060/4070Ti | 1 | [removed] | 2024-12-16T11:58:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hfhiq0/p510_thinkstation_already_own_with_tesla_t4_or/ | garbo77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfhiq0 | false | null | t3_1hfhiq0 | /r/LocalLLaMA/comments/1hfhiq0/p510_thinkstation_already_own_with_tesla_t4_or/ | false | false | self | 1 | null |
Mapping footnotes | 0 | Hey all. I'm a developer by trade but have dived head first into this world to create a RAG pipeline and local LLMs on mobile devices, based on a collection of copyright-free books.
My issue is finding a tool that will parse the PDFs and leave me with as little guesswork as possible. I've tested several tools and gotten basically perfect output except for one thing: footnotes.
I just tried and bounced off nougat because it seems unmaintained and hallucinates too much, and I'm going to try marker next, but I just wanted to ask... are there any good tools for this application?
Ultimate goals are to get main PDF text with no front matter before an intro/preface and no back matter and, after getting a perfect page parse, to separate the footnotes and in a perfect world, be able to tie them back to the text chunk they are referenced in.
Any help would be appreciated and thanks in advance!
I've tried:
- Simple parsers like PyMuPDF, PDFplumber, etc. Way too much guesswork.
- layout-parser - better but still too much guesswork
- Google Document AI Layout Parser - perfect output, have to guess on the footnotes.
- Google Document AI OCR - clustering based on y position was okay but text heights were unreliable and it was too hard to parse out the footnotes.
- nougat - as described above, not maintained and though output is good and footnotes are marked, there's to many pages where it entirely hallucinates and fails to read the content.
- marker - my next attempt since I've already got a script to setup a VM with a GPU and it looks like footnotes are somewhat consistent I hope... | 2024-12-16T13:04:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hfimwz/mapping_footnotes/ | aDamnCommunist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfimwz | false | null | t3_1hfimwz | /r/LocalLLaMA/comments/1hfimwz/mapping_footnotes/ | false | false | self | 0 | null |
Inference on 165U iGpu with lpddr5x 6400 slower than expected | 1 | [removed] | 2024-12-16T13:12:45 | https://www.reddit.com/r/LocalLLaMA/comments/1hfis1a/inference_on_165u_igpu_with_lpddr5x_6400_slower/ | aquanapse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfis1a | false | null | t3_1hfis1a | /r/LocalLLaMA/comments/1hfis1a/inference_on_165u_igpu_with_lpddr5x_6400_slower/ | false | false | self | 1 | null |
Has anyone got Apollo working locally? | 1 | I can't even figure out what dependencies are needed :C
They say `pip install -e .` but I can't even find requirements.txt or anything
So with the command above I get:
`Apollo does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found`
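The furthest I got was guessing at a minimal `setup.py` so the command has something to chew on; everything in it is an assumption on my part (package name, version, dependencies), not anything the Apollo repo actually declares:

```python
# Guessed setup.py dropped in the repo root just to satisfy `pip install -e .`
# The name and install_requires below are placeholders, not real Apollo metadata.
from setuptools import setup, find_packages

setup(
    name="apollo",
    version="0.0.1",
    packages=find_packages(),
    install_requires=["torch", "transformers"],  # assumed and almost certainly incomplete
)
```

Even if that installs, I have no idea whether the dependency list is anywhere near right.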
My tech stack is far from Python and I'd appreciate it if someone could guide me a little. | 2024-12-16T13:21:06 | https://www.reddit.com/r/LocalLLaMA/comments/1hfixlq/has_anyone_got_apollo_working_locally/ | No_Pilot_1974 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfixlq | false | null | t3_1hfixlq | /r/LocalLLaMA/comments/1hfixlq/has_anyone_got_apollo_working_locally/ | false | false | self | 1 | null
Llama with internet access? | 1 | [removed] | 2024-12-16T13:30:26 | https://www.reddit.com/r/LocalLLaMA/comments/1hfj3yf/lamma_with_internet_access/ | Ok_Patient8412 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfj3yf | false | null | t3_1hfj3yf | /r/LocalLLaMA/comments/1hfj3yf/lamma_with_internet_access/ | false | false | self | 1 | null
Extracting Embedding from an LLM | 4 | Hi. I see that most providers have separate APIs and different models for embedding extraction versus chat completion. Is that just for convenience? Can't I directly use e.g. Llama 8B only for its embedding extraction part?
If not, how do we decide on the embedding-completion pair in a RAG (or other similar) pipeline? Are there pairs that work better together than others? Are there considerations to make? What library do people normally use for computing embeddings alongside a local or cloud LLM? LlamaIndex?
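For concreteness, this is the kind of thing I'm wondering whether people actually do, i.e. pulling embeddings straight out of a chat model (a sketch with llama-cpp-python; the model path is just a placeholder):

```python
# Sketch: use a chat-tuned GGUF model purely as an embedding extractor.
from llama_cpp import Llama

llm = Llama(
    model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # placeholder path
    embedding=True,  # load the model in embedding mode
)

result = llm.create_embedding("What is retrieval-augmented generation?")
vector = result["data"][0]["embedding"]
print(len(vector))  # hidden-state dimensionality, e.g. 4096 for an 8B Llama
```

If this works but is simply a bad idea compared to a dedicated embedding model, I'd love to understand why.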
Many thanks | 2024-12-16T13:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hfjewf/extracting_embedding_from_an_llm/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfjewf | false | null | t3_1hfjewf | /r/LocalLLaMA/comments/1hfjewf/extracting_embedding_from_an_llm/ | false | false | self | 4 | null |
multilingual local model into a Nuxt 3 TypeScript application | 1 | [removed] | 2024-12-16T13:58:48 | https://www.reddit.com/r/LocalLLaMA/comments/1hfjo78/multilingual_local_model_into_a_nuxt_3_typescript/ | DEGOYA_digital | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfjo78 | false | null | t3_1hfjo78 | /r/LocalLLaMA/comments/1hfjo78/multilingual_local_model_into_a_nuxt_3_typescript/ | false | false | self | 1 | null |
Better looking CoT prompt with <details> & <summary> tags | 5 | I don't know why those CoT prompts aren't using this, but you can use <details> & <summary> tags to make the LLM hide its thinking process inside a collapsible section.
Here is an example in Open WebUI: I use my CoT system prompt to tell Qwen 32B to use CoT within these tags, plus a function written by Qwen Coder to reinforce the CoT process.
In my opinion, this looks much better than simply wrapping the CoT in two <thinking> tags.
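As a minimal sketch of the idea (the helper below is just an illustration, not the actual Open WebUI filter I'm running):

```python
# Wrap the model's chain of thought in a <details> block so chat UIs that
# render HTML/markdown show it as a collapsible section.
def wrap_cot(thinking: str, answer: str) -> str:
    return (
        "<details>\n"
        "<summary>Thinking...</summary>\n\n"
        f"{thinking}\n\n"
        "</details>\n\n"
        f"{answer}"
    )

print(wrap_cot("Step 1: restate the problem. Step 2: ...", "Final answer: 42"))
```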
https://i.redd.it/0mutu5aww77e1.gif
| 2024-12-16T14:11:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hfjxl0/better_looking_cot_prompt_with_details_summary/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfjxl0 | false | null | t3_1hfjxl0 | /r/LocalLLaMA/comments/1hfjxl0/better_looking_cot_prompt_with_details_summary/ | false | false | 5 | null |
Where can I find which quantization of Llama 3.3 performs best? | 30 | I'm new to running local LLMs, so apologies if my question is naive, but I'm running Ollama and trying to figure out which of the following llama3.3 models performs best, or rather, what exactly their performance tradeoffs are:
- `70b-instruct-fp16` (too slow on my system)
- `70b-instruct-q2_K`
- `70b-instruct-q3_K_M`
- `70b-instruct-q3_K_S`
- `70b-instruct-q4_0`
- `70b-instruct-q4_1`
- `70b-instruct-q4_K_M`
- `70b-instruct-q4_K_S`
- `70b-instruct-q5_0`
- `70b-instruct-q5_1`
- `70b-instruct-q5_K_M`
- `70b-instruct-q6_K`
- `70b-instruct-q8_0`
From what I've gathered, the number `X` in `qX` denotes the bit width, but what exactly do `K`, `K_M`, and `K_S` signify?
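For the raw speed half I assume I can just measure it myself against the local Ollama server, something like the sketch below (assuming the default localhost endpoint and that the tags are already pulled), but quality is the part I can't really eyeball:

```python
# Quick tokens/sec check against a local Ollama server.
# eval_count = generated tokens, eval_duration = generation time in nanoseconds.
import requests

def tokens_per_second(model: str, prompt: str = "Summarize the plot of Hamlet.") -> float:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    data = r.json()
    return data["eval_count"] / (data["eval_duration"] / 1e9)

for tag in ("llama3.3:70b-instruct-q4_K_M", "llama3.3:70b-instruct-q5_K_M"):
    print(tag, round(tokens_per_second(tag), 1), "tok/s")
```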
And where can I find performance comparisons (speed and quality) of these variants? | 2024-12-16T14:29:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hfkbkr/where_can_i_find_which_quantization_of_llama_33/ | Mandelmus100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfkbkr | false | null | t3_1hfkbkr | /r/LocalLLaMA/comments/1hfkbkr/where_can_i_find_which_quantization_of_llama_33/ | false | false | self | 30 | null |
Answering my own question, I got Apollo working locally with a 3090 | 201 | [Here](https://github.com/efogdev/apollo) is a repo with the fixes for local environment. Tested with python 3.11 on Linux.
[~190 MB video, ~40 sec to first token](https://preview.redd.it/oncpqk16687e1.png?width=1290&format=png&auto=webp&s=713056b215a9e26b32bafa0f81fd9698d8d95f00)
| 2024-12-16T15:00:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hfkytk/answering_my_own_question_i_got_apollo_working/ | No_Pilot_1974 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hfkytk | false | null | t3_1hfkytk | /r/LocalLLaMA/comments/1hfkytk/answering_my_own_question_i_got_apollo_working/ | false | false | 201 | {'enabled': False, 'images': [{'id': 'zenbEv4sBAywAAWfkiM1C1CntxiAuGS1vziQW9_cD_U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/llPSz8hJrlQlagNnOITDefy4-1ju8N20QAuchMXFQBo.jpg?width=108&crop=smart&auto=webp&s=ed39a02773f09575f8ca588859dfe54a3aa40cf8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/llPSz8hJrlQlagNnOITDefy4-1ju8N20QAuchMXFQBo.jpg?width=216&crop=smart&auto=webp&s=2ef95b7cfb11b00706b7c4cac058cd003b5468b1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/llPSz8hJrlQlagNnOITDefy4-1ju8N20QAuchMXFQBo.jpg?width=320&crop=smart&auto=webp&s=a36539a537adee691e7b4d998827f30f7b5cbede', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/llPSz8hJrlQlagNnOITDefy4-1ju8N20QAuchMXFQBo.jpg?width=640&crop=smart&auto=webp&s=c906248403061bdabda499b892c32132250a710c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/llPSz8hJrlQlagNnOITDefy4-1ju8N20QAuchMXFQBo.jpg?width=960&crop=smart&auto=webp&s=d8363da52e59e1e812f48fe5443e0c711c89e8aa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/llPSz8hJrlQlagNnOITDefy4-1ju8N20QAuchMXFQBo.jpg?width=1080&crop=smart&auto=webp&s=520d0a628b11d18b24437e8e681c935542627901', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/llPSz8hJrlQlagNnOITDefy4-1ju8N20QAuchMXFQBo.jpg?auto=webp&s=a5fa6bf5506a1924ec296390f7c54e05b8da2780', 'width': 1200}, 'variants': {}}]} |