Dataset schema (one row per r/LocalLLaMA post; ranges are the min–max observed in the dataset):
- title: string, 1–300 chars
- score: int64, 0–8.54k
- selftext: string, 0–40k chars
- created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
- url: string, 0–878 chars
- author: string, 3–20 chars
- domain: string, 0–82 chars
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18 (epoch = never edited)
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, 7 chars
- locked: bool, 2 classes
- media: string, 646–1.8k chars
- name: string, 10 chars
- permalink: string, 33–82 chars
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, 4–213 chars
- ups: int64, 0–8.54k
- preview: string, 301–5.01k chars
---
title: Incredible blog post on Large Concept Models (LCM)
score: 75 | ups: 75 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T07:31:50 | author: OtherRaisin3426 | domain: self.LocalLLaMA
id: 1hphyxv | name: t3_1hphyxv
url: https://www.reddit.com/r/LocalLLaMA/comments/1hphyxv/incredible_blog_post_on_large_concept_models_lcm/
permalink: /r/LocalLLaMA/comments/1hphyxv/incredible_blog_post_on_large_concept_models_lcm/
selftext: https://preview.redd.it/…of explanation.
preview source: https://external-preview.redd.it/6uGHni0V2oijMr-sci6cKs7MN8AsJ1puVMECXOurW2c.jpg?auto=webp&s=8481fc37d89597ca76ff322749134c351ee60b8f (1200×600)
---
title: How are y’all trying out Deepseek V3?
score: 5 | ups: 5 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T07:46:15 | author: bot-333 | domain: self.LocalLLaMA
id: 1hpi695 | name: t3_1hpi695
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpi695/how_are_yall_trying_out_deepseek_v3/
permalink: /r/LocalLLaMA/comments/1hpi695/how_are_yall_trying_out_deepseek_v3/
selftext: Hello. I haven't been following developments closely for some time (one or two of you might remember me), and I am summoned by the release of Deepseek V3. Some of y'all seem to have very positive thoughts about the model, but I'm wondering: how and where is the model available? Are y'all running it? Is there an online service? Thanks in advance.
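For readers with the same question: besides self-hosting the weights (which is impractical on most local rigs), DeepSeek serves V3 behind an OpenAI-compatible API. A minimal sketch, assuming the `openai` Python client and an account key; the endpoint and `deepseek-chat` model name are DeepSeek's published ones, everything else here is placeholder:

```python
# Minimal sketch: querying DeepSeek V3 through its OpenAI-compatible API.
# Assumes the `openai` package is installed and DEEPSEEK_API_KEY is set.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

resp = client.chat.completions.create(
    model="deepseek-chat",  # V3 is served under this model name
    messages=[{"role": "user", "content": "Summarize this in one sentence: ..."}],
)
print(resp.choices[0].message.content)
```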
---
title: Can narrative data (stories) be stored as knowledge graphs?
score: 14 | ups: 14 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T08:21:43 | author: noellarkin | domain: self.LocalLLaMA
id: 1hpioka | name: t3_1hpioka
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpioka/can_narrative_data_stories_be_stored_as_knowledge/
permalink: /r/LocalLLaMA/comments/1hpioka/can_narrative_data_stories_be_stored_as_knowledge/
selftext: This is in the context of storing a story as a KG for RAG Q&A. KGs are amazing for storing ontological/relationship data and for querying for factual data. But how does one store *narrative data* in a knowledge graph without losing a lot of information? For one thing, there's a temporal dimension in a story, and relationships change over its course (a person may stay in location A in chapter 1 and move to location B in chapter 2). This video (https://www.youtube.com/watch?v=g6xBklAIrsA) has some ideas but doesn't really get into the issues.
preview source: https://external-preview.redd.it/kDwcExFUBZ4DKmH62fRGYL0UcQFXiSA_v4Ohq1j8lBw.jpg?auto=webp&s=c02a94ecc0d2b48e631ce6fc9ffd860ece20f2f7 (480×360)
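One common answer to the temporal problem the post raises is to time-scope each fact: store (subject, predicate, object, valid_from, valid_to) and filter by narrative time at query time. A toy sketch, with chapter indices standing in for timestamps; all names here are illustrative, not from the post:

```python
# Time-scoped triples: facts carry a validity interval in "chapters",
# so queries can ask "where was Alice as of chapter 2?"
from dataclasses import dataclass

@dataclass
class Fact:
    subject: str
    predicate: str
    obj: str
    from_ch: int                   # first chapter where the fact holds
    to_ch: float = float("inf")    # open-ended until contradicted

kg = [
    Fact("Alice", "located_in", "A", from_ch=1, to_ch=1),
    Fact("Alice", "located_in", "B", from_ch=2),
]

def query(kg, subject, predicate, chapter):
    """Return objects for (subject, predicate) as of a given chapter."""
    return [f.obj for f in kg
            if f.subject == subject and f.predicate == predicate
            and f.from_ch <= chapter <= f.to_ch]

print(query(kg, "Alice", "located_in", 1))  # ['A']
print(query(kg, "Alice", "located_in", 2))  # ['B']
```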
---
title: Beast spec machine for llm training under 3k usd - give suggestions
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T08:35:57 | author: Routine_Delay4575 | domain: self.LocalLLaMA
id: 1hpivnl | name: t3_1hpivnl
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpivnl/beast_spec_machine_for_llm_training_under_3k_usd/
permalink: /r/LocalLLaMA/comments/1hpivnl/beast_spec_machine_for_llm_training_under_3k_usd/
selftext: [removed]
---
title: I built a chatGPT but for sensitive data & regulated work 🔒 runs offline!
score: 0 | ups: 0 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T08:37:52 | author: claritiai | domain: self.LocalLLaMA
id: 1hpiwmh | name: t3_1hpiwmh
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpiwmh/i_built_a_chatgpt_but_for_sensitive_data/
permalink: /r/LocalLLaMA/comments/1hpiwmh/i_built_a_chatgpt_but_for_sensitive_data/
selftext:
Hey r/LocalLLaMA! I wanted to share an app I've been working on called Clariti - it's an AI assistant designed specifically for situations where you can't/shouldn't use ChatGPT due to privacy concerns. Built with SwiftUI and MLX-Swift to chat with LLMs like Llama 3.2 3B Instruct. Chat with your documents, calendar, health data, and more... 100% private and runs offline! You can check it out here: [App Store Link](https://apps.apple.com/us/app/clariti-ai-privately/id6739746682) - free trial!

https://preview.redd.it/nsbya92q6y9e1.png?width=1284&format=png&auto=webp&s=56b555aff06e923b8835065c7cf628de00c0d949

Key technical details:
- 100% offline processing - all AI runs locally on device
- Built with SwiftUI + SwiftData
- Integrates with HealthKit and EventKit for private data analysis
- Document processing stays completely local
- Custom prompt templates for different use cases

Core features:
- Document analysis
- Health data insights
- Calendar management
- Voice notes
- Smart writing assistance
- Research tools

The main technical challenge was optimizing the LLMs to run efficiently on-device while maintaining a responsive UI. I used SwiftData for persistence and built a custom context-aware prompt system that can privately analyze documents and personal data.

I'd love feedback from fellow Swift developers, especially on:
1. The SwiftUI architecture
2. Performance optimizations
3. Local data handling
4. Integration patterns with Apple frameworks
preview source: https://external-preview.redd.it/PY1qy83d8jP5mmKT0JM8wOIePw9fhmsU5mr13a9vqXg.jpg?auto=webp&s=5a2c972f3418c1206000828c17fc8184e2b6d24c (1200×630)
---
title: Quadro RTX 5000 vs Tesla K80
score: 3 | ups: 3 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T09:02:19 | author: Dinth | domain: self.LocalLLaMA
id: 1hpj8nz | name: t3_1hpj8nz
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpj8nz/quadro_rtx_5000_vs_tesla_k80/
permalink: /r/LocalLLaMA/comments/1hpj8nz/quadro_rtx_5000_vs_tesla_k80/
selftext: Hiya. I want to add some LLM capabilities (and preferably, at the same time, hardware multi-stream H.265 encoding/decoding) to my server. Power consumption and thermal properties are a big factor too, as electricity is expensive and my server rack doesn't have AC. I can get either an RTX 5000 or a Tesla K80 very cheaply. I know that the RTX 5000 is much more powerful, but it has less RAM, and maybe, considering my needs and power/thermal constraints, the K80 may be a better choice for me?
---
title: Deepseek V3 performs surprisingly bad in Misguided Attention eval, which tests for overfitting.
score: 222 | ups: 222 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T09:20:45 | author: cpldcpu | domain: self.LocalLLaMA
id: 1hpjhm0 | name: t3_1hpjhm0
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpjhm0/deepseek_v3_performs_surprisingly_bad_in/
permalink: /r/LocalLLaMA/comments/1hpjhm0/deepseek_v3_performs_surprisingly_bad_in/
selftext:
The [Misguided Attention](https://github.com/cpldcpu/MisguidedAttention) eval is a collection of prompts that are slight variations of commonly known thought experiments, riddles, or paradoxes ("trick questions"). Most LLMs are overfit to the "normal" version of these questions from their pretraining and will provide an answer based on the unmodified problem. The test shows how well an LLM is able to attend to "weak" signals.

Deepseek V3 solved only 22% of the 13 test questions. This is unexpectedly bad for a new model of this size and vintage. It appears that some of the optimizations (the compressed KV cache? MoE?) made it more sensitive to overfitting.

https://preview.redd.it/efru2sifdy9e1.png?width=4167&format=png&auto=webp&s=9c754d454b5d06bd6c39c452aef29fc809e5a0c4
preview source: https://external-preview.redd.it/tSDd_suPijbdxTtIemxtmtYRJMfuCTxCs60CjhCAoKE.jpg?auto=webp&s=5ed641664f57cbeb41441d2efcb4155b3a82070b (1200×600)
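The scoring idea behind an eval like this is simple to sketch: run each modified riddle past the model and count how often the answer reflects the *modified* wording rather than the memorized classic. A minimal sketch, not the repo's actual harness; the example prompt paraphrases the repo's dead-Schrödinger's-cat variant, and the keyword check is a stand-in for its real grading:

```python
# Count the fraction of "trick variant" prompts the model actually notices.
PROMPTS = [
    # (modified riddle, phrase expected only if the modification was noticed)
    ("A dead cat is placed in a box with a radioactive isotope and poison. "
     "What is the probability the cat is alive when the box is opened?",
     "dead"),  # the cat was dead going in; answering '50%' signals overfitting
]

def solve_rate(prompts, ask_model):
    """ask_model: callable mapping a prompt string to the model's reply."""
    solved = sum(expected.lower() in ask_model(p).lower()
                 for p, expected in prompts)
    return solved / len(prompts)

# e.g. solve_rate(PROMPTS, my_local_endpoint_fn)
```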
---
title: M1 ultra, M2 ultra, or M4/M3 max
score: 2 | ups: 2 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T09:31:13 | author: HappyFaithlessness70 | domain: self.LocalLLaMA
id: 1hpjmte | name: t3_1hpjmte
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpjmte/m1_ultra_m2_ultra_or_m4m3_max/
permalink: /r/LocalLLaMA/comments/1hpjmte/m1_ultra_m2_ultra_or_m4m3_max/
selftext:
Hi, I'm considering what machine to buy to do inference locally. My target is to run Llama 3.3 at decent speed, for document summaries etc., so I'll need RAM to store both the model and the context window; since I target docs with ~20,000 tokens, I'd think 64 GB minimum. I want access to it even when I'm on the go, either directly on a portable computer or via a remote server, and I don't want to deal with >1000 W of consumption, so I think I'll need to go with Apple silicon. Also, I saw that prompt processing might be quite slow, but I do not see any solution for that on Apple silicon.

So I'm targeting systems with:
- at least 64 GB of RAM, but ideally 128 GB
- 1 TB of storage or more

Options are:
- M1 Ultra with 64-core GPU / 128 GB RAM
- M2 Ultra with 60-core GPU / 128 GB RAM
- M4 Max 16" with 40-core GPU / 128 GB RAM
- M3 Max 14" with 40-core GPU / 128 GB RAM

The price is around the same for the M1 / M2 / M4 (roughly 5500 euros or more), but the M3 Max can be found for ~4300 on the refurb store. So my questions are:
- What would be the best one in terms of performance? What difference in processing speed would you expect? (A rough memory-sizing sketch follows this post.)
- The M3 Max would probably be the most sensible choice, since it would replace my current laptop (which I could then resell), but I'm afraid throttling would hurt inference speed on long prompts. What do you think?
- Which is easiest to resell later on? (If I go for an M1 / M2 Ultra, I might be tempted in a few months to buy an M4 Ultra when it comes out.)
- What did you / would you buy if it were you?

Thanks a lot!
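Back-of-the-envelope sizing for the post's target (Llama 3.3 70B with a ~20k-token context). The quantization level is an assumption; the KV-cache math uses the published Llama 3.x 70B shapes (80 layers, 8 KV heads of dimension 128 under GQA):

```python
# Rough memory estimate: quantized weights + fp16 KV cache.
def model_gb(params_b=70, bytes_per_weight=0.56):   # ~Q4_K_M ≈ 4.5 bits/weight
    return params_b * bytes_per_weight

def kv_cache_gb(tokens=20_000, layers=80, kv_heads=8, head_dim=128,
                bytes_per_el=2):                    # fp16 K and V
    per_token = layers * kv_heads * head_dim * 2 * bytes_per_el  # K + V
    return tokens * per_token / 1e9

print(f"weights ≈ {model_gb():.0f} GB, KV @ 20k tokens ≈ {kv_cache_gb():.1f} GB")
# → roughly 39 GB + 6.6 GB: tight but workable in 64 GB, comfortable in 128 GB
```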
---
title: Create a model like public huggingface transformers model(safetensor, tokenizer.json, config.json, etcs)
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T10:35:33 | author: Wonderful_Second5322 | domain: self.LocalLLaMA
id: 1hpkitw | name: t3_1hpkitw
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpkitw/create_a_model_like_public_huggingface/
permalink: /r/LocalLLaMA/comments/1hpkitw/create_a_model_like_public_huggingface/
selftext: [removed]
---
title: What is the usage of each of these models: llama,gemma,gemmasutra,h20.ai,Phi,Qwen,SmolLM2 ?
score: 0 | ups: 0 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T10:39:15 | author: ExtremePresence3030 | domain: self.LocalLLaMA
id: 1hpkkqh | name: t3_1hpkkqh
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpkkqh/what_is_the_usage_of_each_of_these_models/
permalink: /r/LocalLLaMA/comments/1hpkkqh/what_is_the_usage_of_each_of_these_models/
selftext: What sort of usage (type of inquiry) is each of these AI models best suited for?
---
title: Create a model like public huggingface transformers model(safetensor, tokenizer.json, config.json, etcs)
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T10:39:37 | author: Subject-Smell-6714 | domain: self.LocalLLaMA
id: 1hpkkx5 | name: t3_1hpkkx5
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpkkx5/create_a_model_like_public_huggingface/
permalink: /r/LocalLLaMA/comments/1hpkkx5/create_a_model_like_public_huggingface/
selftext: [removed]
---
title: Creating a docker LLM stack - picking the optimal components
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T10:43:06 | author: Shaamaan | domain: self.LocalLLaMA
id: 1hpkmqa | name: t3_1hpkmqa
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpkmqa/creating_a_docker_llm_stack_picking_the_optimal/
permalink: /r/LocalLLaMA/comments/1hpkmqa/creating_a_docker_llm_stack_picking_the_optimal/
selftext: [removed]
---
title: Top 25 AI models in 2024 on Hugging Face (based on likes)
score: 204 | ups: 204 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T11:20:07 | author: Nunki08 | domain: i.redd.it
id: 1hpl61n | name: t3_1hpl61n
url: https://i.redd.it/9na52d2rzy9e1.jpeg
permalink: /r/LocalLLaMA/comments/1hpl61n/top_25_ai_models_in_2024_on_hugging_face_based_on/
preview source: https://preview.redd.it/9na52d2rzy9e1.jpeg?auto=webp&s=ee41b399b8ed426a280ed19889bfee28e3bc9699 (1187×790)
---
title: A new localLLaMA native tool arrives the macOS AppStore: Say hello to Lana!
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T11:28:24 | author: peter_shaw | domain: self.LocalLLaMA
id: 1hplafe | name: t3_1hplafe
url: https://www.reddit.com/r/LocalLLaMA/comments/1hplafe/a_new_localllama_native_tool_arrives_the_macos/
permalink: /r/LocalLLaMA/comments/1hplafe/a_new_localllama_native_tool_arrives_the_macos/
selftext: [removed]
preview source: https://external-preview.redd.it/rlf8Tv7vQRHqINR7Ye3OZB73k6ELxxt_pb-cWQohQa0.jpg?auto=webp&s=8c4ed9ecec7225ad81cddb5522541c31359e34f6 (630×630)
---
title: Noob question: What's your stack for finetuning? (eg: Together, Openpipe, finetunedb, etc)
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T11:34:06 | author: abhi_shek1994 | domain: self.LocalLLaMA
id: 1hpldff | name: t3_1hpldff
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpldff/noob_question_whats_your_stack_for_finetuning_eg/
permalink: /r/LocalLLaMA/comments/1hpldff/noob_question_whats_your_stack_for_finetuning_eg/
selftext: [removed]
---
title: Cline and underlying code-strong LLM with large context
score: 3 | ups: 3 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T11:34:34 | author: LostGoatOnHill | domain: self.LocalLLaMA
id: 1hpldnn | name: t3_1hpldnn
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpldnn/cline_and_underlying_codestrong_llm_with_large/
permalink: /r/LocalLLaMA/comments/1hpldnn/cline_and_underlying_codestrong_llm_with_large/
selftext: Hi, new to Cline and testing. Currently trying DeepSeek V3, but I find the 64K context window extremely limiting, as I quickly hit the limit during simple app-creation tasks. What strong models with large context windows are you having success with when using Cline?
---
title: Steve Jobs' dream? L2E/llama2.c running on Amiga 1200+, Atari ST, and Classic Mac
score: 3 | ups: 3 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T11:35:34 | author: AMICABoard | domain: news.ycombinator.com
id: 1hple74 | name: t3_1hple74
url: https://news.ycombinator.com/item?id=42547011
permalink: /r/LocalLLaMA/comments/1hple74/steve_jobs_dream_l2ellama2c_running_on_amiga_1200/
---
title: Top 25 open models on Hugging Face in 2024
score: 30 | ups: 30 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T11:44:14 | author: vaibhavs10 | domain: i.redd.it
id: 1hplikv | name: t3_1hplikv
url: https://i.redd.it/xd47szx34z9e1.png
permalink: /r/LocalLLaMA/comments/1hplikv/top_25_open_models_on_hugging_face_in_2024/
preview source: https://preview.redd.it/xd47szx34z9e1.png?auto=webp&s=619e4e7e13e9248aa49ad62c7e2140c3e009ac36 (1187×790)
---
title: Junyang Lin replied, maybe we will get small reasoning models
score: 219 | ups: 219 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T11:49:17 | author: TheLogiqueViper | domain: i.redd.it
id: 1hpllbi | name: t3_1hpllbi
url: https://i.redd.it/ng01go215z9e1.jpeg
permalink: /r/LocalLLaMA/comments/1hpllbi/junyang_lin_replied_maybe_we_will_get_small/
preview source: https://preview.redd.it/ng01go215z9e1.jpeg?auto=webp&s=42f27ccd9a13098bed8a9d332ba2bbe738863593 (1080×2400)
---
title: Can someone benchmark a M4 Max 32-GPU with llama.cpp
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T11:54:44 | author: sunpazed | domain: github.com
id: 1hplo9j | name: t3_1hplo9j
url: https://github.com/ggerganov/llama.cpp/discussions/4167
permalink: /r/LocalLLaMA/comments/1hplo9j/can_someone_benchmark_a_m4_max_32gpu_with_llamacpp/
selftext: It looks like the numbers for the binned 14" M4 Max (32-GPU) are missing; can someone please run the benchmark and contribute to the discussion? Thanks!
preview source: https://external-preview.redd.it/jXMS8ntrq51P2_faS_hDlwZUtOC8APpmWzj7sTsTnSM.jpg?auto=webp&s=a15cd359c40c3b779e5177f02de536735c95b19d (1200×600)
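For anyone willing to contribute a datapoint: the linked thread collects results from llama.cpp's bundled `llama-bench` tool. A minimal sketch of invoking it from Python; the `-m/-p/-n` flags are standard llama-bench options, while the model path and token counts here are assumptions (check the discussion for the exact files and settings it compares):

```python
# Run llama.cpp's benchmark tool and let it print its results table.
import subprocess

subprocess.run([
    "./llama-bench",
    "-m", "models/llama-7b-q4_0.gguf",  # placeholder; use the thread's model
    "-p", "512",                        # prompt-processing tokens (pp512)
    "-n", "128",                        # text-generation tokens (tg128)
], check=True)
```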
---
title: A new LocalLLamAA native tool arrives the macOS AppStore: Say hello to Lana
score: 3 | ups: 3 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:08:54 | author: peter_shaw | domain: self.LocalLLaMA
id: 1hplwfx | name: t3_1hplwfx
url: https://www.reddit.com/r/LocalLLaMA/comments/1hplwfx/a_new_localllamaa_native_tool_arrives_the_macos/
permalink: /r/LocalLLaMA/comments/1hplwfx/a_new_localllamaa_native_tool_arrives_the_macos/
selftext:
[Lana in action](https://preview.redd.it/u81tusd61z9e1.png?width=1280&format=png&auto=webp&s=d29e2360430e0798daa83cceb378054af6e0808a)

Free to download: https://apps.apple.com/us/app/lana-local-language-model/id6739474076

- Fine-tune system prompts: Shape your AI's personality, tone, and expertise with unparalleled precision. This tool gives you complete control over the design process, ensuring that your system prompts deliver the perfect conversation flow every time.
- Seamlessly explore different contexts: Tackle diverse projects effortlessly, whether you're solving technical challenges or crafting nuanced narratives. Switch between contexts with ease, supported by advanced tools for managing complexity.
- Replay and refine: Analyze full conversation logs and experiment with different parameter settings. Perfect your system prompts through iterative refinement, enabling you to build AI interactions that truly excel.
preview source: https://external-preview.redd.it/rlf8Tv7vQRHqINR7Ye3OZB73k6ELxxt_pb-cWQohQa0.jpg?auto=webp&s=8c4ed9ecec7225ad81cddb5522541c31359e34f6 (630×630)
---
title: Bill Gates announces new open source Propolon NLINK NTECH AI model in collaboration with Elon Musk and Mark Zuckerberg.
score: 0 | ups: 0 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:10:24 | author: Personal-Dot-380 | domain: v.redd.it
id: 1hplx8e | name: t3_1hplx8e
url: https://v.redd.it/m2nxicms8z9e1
permalink: /r/LocalLLaMA/comments/1hplx8e/bill_gates_announces_new_open_source_propolon/
media: reddit_video (42 s, 1920×1080): https://v.redd.it/m2nxicms8z9e1/DASH_1080.mp4?source=fallback
preview source: https://external-preview.redd.it/emhwdzNmZ3M4ejllMeOwxEKY_BwUmvv0yJlvuSQnrkHkZJuTTKSVmRt4UrhV.png?format=pjpg&auto=webp&s=b5193d5e17b9512f2444e088351189e8735be7c1 (500×500)
---
title: Running ollama and open web ui without wsl2 and hyper-v on windows
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:16:42 | author: Bully79 | domain: self.LocalLLaMA
id: 1hpm0sd | name: t3_1hpm0sd
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpm0sd/running_ollama_and_open_web_ui_without_wsl2_and/
permalink: /r/LocalLLaMA/comments/1hpm0sd/running_ollama_and_open_web_ui_without_wsl2_and/
selftext: [removed]
---
title: [ Removed by Reddit ]
score: 0 | ups: 0 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:23:45 | author: De-Alf | domain: self.LocalLLaMA
id: 1hpm4s0 | name: t3_1hpm4s0
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpm4s0/removed_by_reddit/
permalink: /r/LocalLLaMA/comments/1hpm4s0/removed_by_reddit/
selftext: [ Removed by Reddit on account of violating the [content policy](/help/contentpolicy). ]
---
title: Need suggestions for some OCRs that can parse complex tables and convert them to markdowns/html which can be sent to LLMs for further processing
score: 9 | ups: 9 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:26:38 | author: ShippersAreIdiots | domain: i.redd.it
id: 1hpm6hb | name: t3_1hpm6hb
url: https://i.redd.it/5xutbf3pbz9e1.jpeg
permalink: /r/LocalLLaMA/comments/1hpm6hb/need_suggestions_for_some_ocrs_that_can_parse/
selftext: This is what my table looks like. So far Marker-single worked well, but it had many spelling mistakes that can't be ignored. For nougat I am getting "Could not initialize NNPACK! Reason: Unsupported hardware".
preview source: https://preview.redd.it/5xutbf3pbz9e1.jpeg?auto=webp&s=6dbce4cafb65def8c185540a2c6fc454583bb3e9 (706×1239)
---
title: Deepseek #7! lmsys leaderboard updated
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:29:05 | author: Evening_Action6217 | domain: reddit.com
id: 1hpm7v7 | name: t3_1hpm7v7
url: https://www.reddit.com/gallery/1hpm7v7
permalink: /r/LocalLLaMA/comments/1hpm7v7/deepseek_7_lmsys_leaderboard_updated/
---
title: Deepseek V3 Ranks 7th on lmarena Leaderboard, Surpassing o1 mini
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:33:31 | author: De-Alf | domain: lmarena.ai
id: 1hpmaht | name: t3_1hpmaht
url: https://lmarena.ai/?leaderboard
permalink: /r/LocalLLaMA/comments/1hpmaht/deepseek_v3_ranks_7th_on_lmarena_leaderboard/
---
title: Aren't Mamba or other linear attention models better for reasoning/chain of thought than transformers?
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:46:39 | author: Personal-Dot-380 | domain: self.LocalLLaMA
id: 1hpmi2r | name: t3_1hpmi2r
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpmi2r/arent_mamba_or_other_linear_attention_models/
permalink: /r/LocalLLaMA/comments/1hpmi2r/arent_mamba_or_other_linear_attention_models/
selftext: Every time a transformer reasoning model outputs a huge chain of thought, the computational cost of the self-attention mechanism grows quadratically, whereas in Mamba it is linear. The only drawback of Mamba-type models was that they were bad at retrieval tasks, if you remember the findings from the start of this year, and that drawback won't affect their reasoning capabilities. So will Mamba and other linear attention mechanisms make a comeback in reasoning models in 2025?
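The scaling argument in the post can be made concrete with a toy count: generating n tokens with self-attention costs roughly the sum 1 + 2 + ... + n ≈ n²/2 attention operations (each new token attends to all previous ones), while an SSM like Mamba does constant work per token, so O(n) total. A sketch with arbitrary constants, only to show the shapes of the curves:

```python
# Compare cumulative per-token costs of attention vs. a constant-state SSM.
def attention_ops(n):
    # sum_{t=1..n} t = n(n+1)/2, i.e. ~n^2/2
    return n * (n + 1) // 2

def ssm_ops(n, state_cost=1):
    # fixed-size recurrent state: constant work per generated token
    return n * state_cost

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens: attention ~{attention_ops(n):.2e} ops, ssm ~{ssm_ops(n):.2e} ops")
```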
---
title: GPT4ALL LLAMA 3.2 3B Instruct Your message was too long and could not be processed (185655 > 2044). Please try again with something shorter.
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:47:14 | author: Global-Day2357 | domain: self.LocalLLaMA
id: 1hpmifz | name: t3_1hpmifz
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpmifz/gpt4all_llama_32_3b_instruct_your_message_was_too/
permalink: /r/LocalLLaMA/comments/1hpmifz/gpt4all_llama_32_3b_instruct_your_message_was_too/
selftext: [removed]
---
title: can anyone recommend a reputable course or certificate program?
score: 4 | ups: 4 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:52:36 | author: ForensicAstronomer | domain: self.LocalLLaMA
id: 1hpmljd | name: t3_1hpmljd
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpmljd/can_anyone_recommend_a_reputable_course_or/
permalink: /r/LocalLLaMA/comments/1hpmljd/can_anyone_recommend_a_reputable_course_or/
selftext: I'm an intermediate Python coder who has been enamored with LLMs since I first touched one. Wanting to learn more about the domain, I took Google's AI Essentials certification, but it was WAY too basic and essentially a waste of money. I'm hoping someone here can recommend something a little more in-depth. Much appreciated!
---
title: Deepseek v3 best open source model !!
score: 248 | ups: 248 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:56:30 | author: Evening_Action6217 | domain: i.redd.it
id: 1hpmnof | name: t3_1hpmnof
url: https://i.redd.it/5fgmtas0hz9e1.jpeg
permalink: /r/LocalLLaMA/comments/1hpmnof/deepseek_v3_best_open_source_model/
preview source: https://preview.redd.it/5fgmtas0hz9e1.jpeg?auto=webp&s=bd418b74944074545a69fbf1400f2c12a5abda83 (2454×1600)
---
title: AI Generated Game - Can We Have AI Generated OS?
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:58:19 | author: gawyli | domain: self.LocalLLaMA
id: 1hpmor9 | name: t3_1hpmor9
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpmor9/ai_generated_game_can_we_have_ai_generated_os/
permalink: /r/LocalLLaMA/comments/1hpmor9/ai_generated_game_can_we_have_ai_generated_os/
selftext: [removed]
---
title: models that are *really* open source?
score: 32 | ups: 32 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T12:59:43 | author: JakoDel | domain: self.LocalLLaMA
id: 1hpmpkn | name: t3_1hpmpkn
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpmpkn/models_that_are_really_open_source/
permalink: /r/LocalLLaMA/comments/1hpmpkn/models_that_are_really_open_source/
selftext: since, you know, almost everything "local" here is just a ready-to-go binary, the very opposite of what "open source" means. Is that smallish model from AMD the only recent one?
---
title: SambaNova integrated Qwen models, 225 t/s 72B, 566 t/s coder, 368 t/s qwq
score: 41 | ups: 41 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T13:15:58 | author: inkberk | domain: i.redd.it
id: 1hpmzxy | name: t3_1hpmzxy
url: https://i.redd.it/d89ggx87jz9e1.png
permalink: /r/LocalLLaMA/comments/1hpmzxy/sambanova_integrated_qwen_models_225_ts_72b_566/
preview source: https://preview.redd.it/d89ggx87jz9e1.png?auto=webp&s=95eefd1714fb97904ac180f6a409e726152bf50f (2956×1870)
---
title: What’s everyone’s favourite model (around 8B) for general creative writing and storytelling?
score: 3 | ups: 3 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T13:32:07 | author: sardoa11 | domain: self.LocalLLaMA
id: 1hpna0z | name: t3_1hpna0z
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpna0z/whats_everyones_favourite_model_around_8b_for/
permalink: /r/LocalLLaMA/comments/1hpna0z/whats_everyones_favourite_model_around_8b_for/
selftext: Looking for general writing
---
title: Statistical responses of various LLMs to the question "Who created you?" (update)
score: 2 | ups: 2 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T13:34:22 | author: cpldcpu | domain: self.LocalLLaMA
id: 1hpnbhb | name: t3_1hpnbhb
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpnbhb/statistical_responses_of_various_llms_to_the/
permalink: /r/LocalLLaMA/comments/1hpnbhb/statistical_responses_of_various_llms_to_the/
selftext:
There are some outliers - the interpretation is all yours. Code and more [results in the repository here](https://github.com/cpldcpu/llmfingerprint)

https://preview.redd.it/ouqnscdrnz9e1.png?width=5598&format=png&auto=webp&s=d795e05e1eb38f6c88be6424e22caeda3a4735cf
preview source: https://external-preview.redd.it/7BfMqvz5FCFVgBwS_H590QzkChnJT4tNNa7fXtJaWo4.jpg?auto=webp&s=cdee67dc7dc3d5803bb4d7f6bddb7009cdc40eaa (1200×600)
---
title: Theory: Why reasoning tokens might be crucial for O1-like models - a speculation about context compression
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T13:43:15 | author: EliaukMouse | domain: self.LocalLLaMA
id: 1hpnhb0 | name: t3_1hpnhb0
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpnhb0/theory_why_reasoning_tokens_might_be_crucial_for/
permalink: /r/LocalLLaMA/comments/1hpnhb0/theory_why_reasoning_tokens_might_be_crucial_for/
selftext:
Recent open-source test-time-scaling models (like R1 and QwQ) have shown impressive performance in single-turn responses, but they struggle with multi-turn conversations. They seem to be missing a crucial component that OpenAI's o1 uses: the reasoning tokens. I believe o1's reasoning tokens serve as a compression mechanism for previous context. In o1's architecture, the process flows like "input -> reasoning tokens -> new input -> reasoning tokens", where reasoning tokens likely contain compressed information from previous interactions (not in a single token, but distributed across multiple reasoning tokens). This hypothesis is somewhat supported by the fact that OpenAI charges extra for reasoning tokens in o1, suggesting they play a significant role in the model's operation.

[o1 reasoning](https://preview.redd.it/o31hj3b6pz9e1.png?width=706&format=png&auto=webp&s=d97161b1b347b7e4ce1cbb1ffaf85ae8f8bdc675)

Interestingly, Meta recently published a paper, [Training Large Language Models to Reason in a Continuous Latent Space](https://arxiv.org/html/2412.06769v1), that aligns with this idea. They propose reasoning in latent space rather than language space, which could be similar to how o1 uses reasoning tokens to compress and process information.

Current open-source models output their entire reasoning process in natural language, which quickly fills up the context window. Could the lack of this compression mechanism be a key limitation?
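The compression idea the post speculates about can be illustrated with a toy multi-turn loop: instead of carrying the full chain-of-thought text across turns, replace it with a short summary before the next turn. This is a sketch of the general trick, not a claim about o1's actual internals (which are not public); `chat` is a placeholder for any model call:

```python
# Carry only a compressed reasoning trace across turns, so context grows
# slowly instead of accumulating every chain-of-thought token verbatim.
def compressed_dialogue(chat, user_msgs):
    """chat: callable taking an OpenAI-style message list, returning text."""
    history = []
    for user_msg in user_msgs:
        reply = chat(history + [{"role": "user", "content": user_msg}])
        summary = chat([{"role": "user",
                         "content": "Compress this reasoning into 3 bullet points:\n"
                                    + reply}])
        history += [{"role": "user", "content": user_msg},
                    {"role": "assistant", "content": summary}]
    return history
```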
---
title: Do you think this is true and that software engineering will fundamentally change forever?
score: 0 | ups: 0 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T13:58:49 | author: SrData | domain: self.LocalLLaMA
id: 1hpnrmo | name: t3_1hpnrmo
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpnrmo/do_you_think_this_is_true_and_that_software/
permalink: /r/LocalLLaMA/comments/1hpnrmo/do_you_think_this_is_true_and_that_software/
selftext: https://preview.redd.it/…ssential skills.
preview source: https://external-preview.redd.it/ymEXlPOu7MjruDaRAkQJnz0p0LFpRHZQwHTyjsRvmIw.jpg?auto=webp&s=3190bec5fde300fb90470c44d2138eed50cead73 (200×200)
---
title: LongTalk-CoT v0.1: A Very Long Chain-of-Thought Dataset for Reasoning Model Post-Training
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T14:03:38 | author: transformer_ML | domain: huggingface.co
id: 1hpnvd5 | name: t3_1hpnvd5
url: https://huggingface.co/datasets/kenhktsui/longtalk-cot-v0.1
permalink: /r/LocalLLaMA/comments/1hpnvd5/longtalkcot_v01_a_very_long_chainofthought/
selftext: [removed]
preview source: https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?auto=webp&s=a7959bd3de4a444d39e475d30532d2744e67cbca (1200×648)
---
title: Help me figure out what I did wrong
score: 4 | ups: 4 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T14:14:25 | author: mtantawy | domain: self.LocalLLaMA
id: 1hpo2v0 | name: t3_1hpo2v0
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpo2v0/help_me_figure_out_what_i_did_wrong/
permalink: /r/LocalLLaMA/comments/1hpo2v0/help_me_figure_out_what_i_did_wrong/
selftext:
Hi, first post here and new to running models locally, but not completely new to LLMs as a user. It all started with a ~15 min podcast episode that had book recommendations I wanted to extract; I ended up trying like 5 models, and all of them failed to extract the book list from the transcript.

**Setup:**
* Nvidia 4070 Super (12 GB), so aiming for models of size <= 10 GB
* Ollama running on Win11
* Open WebUI running in a docker container on Ubuntu/WSL2
* No configs modified for either Ollama or Open WebUI

**What I did:**
1. Uploading the mp3 file to Open WebUI immediately transcribed the episode with high accuracy; I modified nothing, extracted the text, and went over it quickly to confirm accuracy
2. The prompt used for all models (transcript is 17KB, less than 4K tokens according to https://platform.openai.com/tokenizer):

> the following is a transcript of a podcast episode, extract the recommended books <transcript text pasted here>

**Models tested:**
1. llama3.2:latest [3b]
2. llama3.3:latest [70b] (it's huge, but I wanted to see if the issue was basically smaller models being not that good)
3. gemma2:latest [9b]
4. qwen2:latest [7b]
5. qwq:latest [32b]

**Results:**
Basically all models kinda ignored my prompt and went on to summarize the wall of text, and when any of them returned a book recommendations list it was **always** a mix of actual books from the transcript and other books related to the episode topic but never mentioned in the text. I manually extracted the books list to compare. I also tested the same prompt with online providers like Gemini 1.5 Flash (free version) and the new DeepSeek (free version), and both were flawless and super quick, as expected (funny enough, both returned the exact same output format, as if one were calling the other's API ;) ). This proved to me that the problem is probably not my prompt.

**Questions:**
* What am I doing wrong, basically?
* How do I choose the correct model for such a use case (if it's even a special use case)?
* Any general recommendations for a newbie?

Appreciate the community here, learned a lot just lurking for a couple of weeks!
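One common cause of exactly these symptoms is Ollama's default 2048-token context: longer prompts are silently truncated, so the models may never have seen most of the transcript and fell back on topic-level associations. The `num_ctx` option can be raised per request. A minimal sketch against Ollama's REST API, assuming the `requests` package; the model name and value are illustrative:

```python
# Ask Ollama with an enlarged context window so the whole transcript fits.
import requests

resp = requests.post("http://localhost:11434/api/generate", json={
    "model": "qwen2:latest",
    "prompt": "the following is a transcript of a podcast episode, "
              "extract the recommended books\n<transcript text pasted here>",
    "stream": False,
    "options": {"num_ctx": 8192},  # headroom for a ~4K-token transcript + output
})
print(resp.json()["response"])
```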
---
title: Introducing SmallThinker-3B-Preview. An o1-like reasoning SLM!
score: 452 | ups: 452 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T14:45:20 | author: Zealousideal_Bad_52 | domain: self.LocalLLaMA
id: 1hpop3y | name: t3_1hpop3y
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpop3y/introducing_smallthinker3bpreview_an_o1like/
permalink: /r/LocalLLaMA/comments/1hpop3y/introducing_smallthinker3bpreview_an_o1like/
selftext:
Today we release [SmallThinker-3B-Preview](https://huggingface.co/PowerInfer/SmallThinker-3B-Preview), a reasoning model finetuned from [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).

Video: https://reddit.com/link/1hpop3y/video/t3j07jlwzz9e1/player

[Benchmark score](https://preview.redd.it/y9zjbr39uz9e1.png?width=1584&format=png&auto=webp&s=df8676e9b09dc357ce70aa67dff2d559854967f4)

SmallThinker is designed for the following use cases:

1. **Edge deployment**: Its small size makes it ideal for deployment on resource-constrained devices.
2. **Draft model for QwQ-32B-Preview**: SmallThinker can serve as a fast and efficient draft model for the larger QwQ-32B-Preview model. From my test, in llama.cpp we can get over a **70%** speedup (from **40** tokens/s to **70** tokens/s).

We believe that for achieving reasoning capabilities, it's crucial to generate long chains of CoT reasoning. Therefore, based on [QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview), we used various synthetic techniques (such as PersonaHub) to create the [QWQ-LONGCOT-500K](https://huggingface.co/datasets/PowerInfer/QWQ-LONGCOT-500K) dataset. Compared to other similar datasets, over 75% of our samples have output tokens exceeding 8K. To encourage research in the open-source community, we've also made the dataset publicly available - feel free to use it!

**Limitation**: This is just our first step; currently, the model tends to produce repetitive outputs. Please increase the repeat penalty to mitigate this problem. We will continue to iterate on similar models, and we hope that in the future everyone will have their own reasoning model! Although our demo runs on PC GPUs, we are currently developing an inference framework for SLMs specifically optimized for Qualcomm NPUs. Stay tuned!
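A sketch of the draft-model setup described in use case 2, via llama.cpp's speculative-decoding example. The binary name follows recent llama.cpp builds (older builds call it `speculative`), and the model paths, quantizations, and draft count below are placeholders, not the post's exact configuration:

```python
# Speculative decoding: a small draft model proposes tokens, the large
# target model verifies them in a single batched pass.
import subprocess

subprocess.run([
    "./llama-speculative",
    "-m",  "models/qwq-32b-preview-q4_k_m.gguf",        # target model
    "-md", "models/smallthinker-3b-preview-q8_0.gguf",  # draft model
    "--draft", "16",  # tokens the draft proposes per verification step
    "-p", "How many r's are in 'strawberry'? Think step by step.",
], check=True)
```

The speedup reported in the post (40 to 70 tokens/s) comes from the target model accepting most drafted tokens, so its expensive forward passes verify many tokens at once instead of generating one at a time.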
---
title: [HOLIDAY PROMO] Perplexity AI PRO - 1 YEAR PLAN OFFER - 75% OFF
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T14:53:20
[deleted]
id: 1hpov2j | name: t3_1hpov2j
permalink: /r/LocalLLaMA/comments/1hpov2j/holiday_promo_perplexity_ai_pro_1_year_plan_offer/
selftext: [removed]
---
title: Introducing LongTalk-CoT v0.1: A Very Long Chain-of-Thought Dataset for Reasoning Model Post-Training
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T15:08:43 | author: transformer_ML | domain: self.LocalLLaMA
id: 1hpp6yb | name: t3_1hpp6yb
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpp6yb/introducing_longtalkcot_v01_a_very_long/
permalink: /r/LocalLLaMA/comments/1hpp6yb/introducing_longtalkcot_v01_a_very_long/
selftext: [removed]
preview source: https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?auto=webp&s=a7959bd3de4a444d39e475d30532d2744e67cbca (1200×648)
---
title: Value of SLMs
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T15:11:24 | author: FunWater2829 | domain: self.LocalLLaMA
id: 1hpp8y7 | name: t3_1hpp8y7
url: https://www.reddit.com/r/LocalLLaMA/comments/1hpp8y7/value_of_slms/
permalink: /r/LocalLLaMA/comments/1hpp8y7/value_of_slms/
selftext: [removed]
---
title: Introducing LongTalk-CoT v0.1: A Very Long Chain-of-Thought Dataset for Reasoning Model Post-Training
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T15:12:55 | author: transformer_ML | domain: self.LocalLLaMA
id: 1hppa18 | name: t3_1hppa18
url: https://www.reddit.com/r/LocalLLaMA/comments/1hppa18/introducing_longtalkcot_v01_a_very_long/
permalink: /r/LocalLLaMA/comments/1hppa18/introducing_longtalkcot_v01_a_very_long/
selftext: [removed]
preview source: https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?auto=webp&s=a7959bd3de4a444d39e475d30532d2744e67cbca (1200×648)
---
title: Value of SLMs - Usecases in the wild
score: 1 | ups: 1 | gilded: 0 | locked: false | spoiler: false | stickied: false
created: 2024-12-30T15:16:09 | author: FunWater2829 | domain: self.LocalLLaMA
id: 1hppcig | name: t3_1hppcig
url: https://www.reddit.com/r/LocalLLaMA/comments/1hppcig/value_of_slms_usecases_in_the_wild/
permalink: /r/LocalLLaMA/comments/1hppcig/value_of_slms_usecases_in_the_wild/
selftext: [removed]
Aider + Open Source Deepseek 3 vs Claude 3.5 Sonnet (side-by-side coding battle)
64
I hosted an LLM coding battle between the two best models on Aider's new Polyglot Coding benchmark: [https://youtu.be/EUXISw6wtuo](https://youtu.be/EUXISw6wtuo)

Some findings:

- Regarding Deepseek 3, I was VERY surprised to see an open source model measure up to its published benchmarks!
- The 3x speed boost from v2 to v3 of Deepseek is noticeable (you'll see it in the video). This is what I and others were missing when using previous versions of Deepseek.
- Deepseek is indeed better at other programming languages like .NET (as seen in the video with the ASP.NET API).
- I didn't think it would come this year, but I honestly think we have a new LLM coding king.
- Deepseek is still not perfect at coding.
- Sometimes Deepseek seems to have used Claude to learn how to code. I saw this in the type of questions it asks, which are very similar in style to how Claude asks questions.

Please let me know what you think, and subscribe to the channel if you like side-by-side LLM battles
2024-12-30T15:20:57
https://www.reddit.com/r/LocalLLaMA/comments/1hppg78/aider_open_source_deepseek_3_vs_claude_35_sonnet/
marvijo-software
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hppg78
false
null
t3_1hppg78
/r/LocalLLaMA/comments/1hppg78/aider_open_source_deepseek_3_vs_claude_35_sonnet/
false
false
self
64
{'enabled': False, 'images': [{'id': 'q5fYQnqNJPr0sPPzUmRfIWbTmr8WuBOml3i_Zw_XRX0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/LeuclTZrAep1yLTRHVGCEfMqQbQzlAuwVQF6wWUWjxo.jpg?width=108&crop=smart&auto=webp&s=a19df0f979e47f393fc4c3e47018b5631762829f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/LeuclTZrAep1yLTRHVGCEfMqQbQzlAuwVQF6wWUWjxo.jpg?width=216&crop=smart&auto=webp&s=3b0c2642f59fe7d84b841a4d3d4b050893716adc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/LeuclTZrAep1yLTRHVGCEfMqQbQzlAuwVQF6wWUWjxo.jpg?width=320&crop=smart&auto=webp&s=06bbf21d65ef310363c2eed383a7b971e19449c5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/LeuclTZrAep1yLTRHVGCEfMqQbQzlAuwVQF6wWUWjxo.jpg?auto=webp&s=07a22456986abdd70604867e4a9e380197c45898', 'width': 480}, 'variants': {}}]}
[D] - Two phase pre-training Qwen2.5
1
[removed]
2024-12-30T15:47:31
https://www.reddit.com/r/LocalLLaMA/comments/1hpq1ce/d_two_phase_pretraining_qwen25/
hoang_hust1811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpq1ce
false
null
t3_1hpq1ce
/r/LocalLLaMA/comments/1hpq1ce/d_two_phase_pretraining_qwen25/
false
false
self
1
null
Two-phase pre-training Qwen 2.5?
1
[removed]
2024-12-30T15:49:02
https://www.reddit.com/r/LocalLLaMA/comments/1hpq2lu/twophase_pretraining_qwen_25/
hoang_hust1811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpq2lu
false
null
t3_1hpq2lu
/r/LocalLLaMA/comments/1hpq2lu/twophase_pretraining_qwen_25/
false
false
self
1
null
Releasing LongTalk-CoT v0.1: A Very Long Chain-of-Thought Dataset for Reasoning Model Post-Training
1
[removed]
2024-12-30T15:49:06
https://www.reddit.com/r/LocalLLaMA/comments/1hpq2nz/releasing_longtalkcot_v01_a_very_long/
transformer_ML
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpq2nz
false
null
t3_1hpq2nz
/r/LocalLLaMA/comments/1hpq2nz/releasing_longtalkcot_v01_a_very_long/
false
false
self
1
{'enabled': False, 'images': [{'id': '8Pl-tuF8qq0FGhF87hP-gp6cLVSmONxUgbO6t3Sq8gE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=108&crop=smart&auto=webp&s=b1f2b9313c129fad72056229a1efc349ce65dad6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=216&crop=smart&auto=webp&s=08a7bf256e634d678110fcce751a0b2cab6f7650', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=320&crop=smart&auto=webp&s=5ab7eff83693193060796fc61a06fad060713db8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=640&crop=smart&auto=webp&s=53501c885f23edcc9b7570e44220eceffae513f1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=960&crop=smart&auto=webp&s=07be6237a8d51f573024ced54f4e73dab71687d5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?width=1080&crop=smart&auto=webp&s=ef880a29e5883c11b4fafd504d5b8e75cd910735', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VC22AVsL9MTw0daATP-PSXNjyCnIaKMG_B6rCRYBsAE.jpg?auto=webp&s=a7959bd3de4a444d39e475d30532d2744e67cbca', 'width': 1200}, 'variants': {}}]}
Value of SLMs in enterprise
3
Are there any use cases/applications where there is no scope for using commercial LLMs and one has to use local models? I'm trying to understand whether creating SLMs for a particular company is of any value. Would you pay to get a model fine-tuned?
2024-12-30T15:52:44
https://www.reddit.com/r/LocalLLaMA/comments/1hpq5rv/value_of_slms_in_enterprise/
FunWater2829
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpq5rv
false
null
t3_1hpq5rv
/r/LocalLLaMA/comments/1hpq5rv/value_of_slms_in_enterprise/
false
false
self
3
null
Dolphin 3.0 !
236
2024-12-30T16:00:42
https://i.redd.it/4n864duvd0ae1.jpeg
Evening_Action6217
i.redd.it
1970-01-01T00:00:00
0
{}
1hpqcgg
false
null
t3_1hpqcgg
/r/LocalLLaMA/comments/1hpqcgg/dolphin_30/
false
false
https://b.thumbs.redditm…0Qh6sX6Po2Lg.jpg
236
{'enabled': True, 'images': [{'id': 'LU1AsH2jLwSGffrJOat8nWXvbxLdZ9oA4ZOwFalKTeY', 'resolutions': [{'height': 175, 'url': 'https://preview.redd.it/4n864duvd0ae1.jpeg?width=108&crop=smart&auto=webp&s=bf8332b1cb61c7f6e7a703d8455fc666b759177e', 'width': 108}, {'height': 351, 'url': 'https://preview.redd.it/4n864duvd0ae1.jpeg?width=216&crop=smart&auto=webp&s=8a51d59cba9ad45057007668e1d0f42288c4ff68', 'width': 216}, {'height': 520, 'url': 'https://preview.redd.it/4n864duvd0ae1.jpeg?width=320&crop=smart&auto=webp&s=d22184bdc786ee948b7b1d5947d9302d256318de', 'width': 320}, {'height': 1040, 'url': 'https://preview.redd.it/4n864duvd0ae1.jpeg?width=640&crop=smart&auto=webp&s=b08b1cdeb775a5abecb1a69923ae9e04d73346a7', 'width': 640}, {'height': 1560, 'url': 'https://preview.redd.it/4n864duvd0ae1.jpeg?width=960&crop=smart&auto=webp&s=95724bdc12b4c22d5c764b09b8bf5d802f0d87e5', 'width': 960}, {'height': 1756, 'url': 'https://preview.redd.it/4n864duvd0ae1.jpeg?width=1080&crop=smart&auto=webp&s=d9feab31df94cf2e5b10e31512b023cdd9089214', 'width': 1080}], 'source': {'height': 1756, 'url': 'https://preview.redd.it/4n864duvd0ae1.jpeg?auto=webp&s=e2c591fc2a6323c4010a2ea5dfcb89b44d7ff8d8', 'width': 1080}, 'variants': {}}]}
Self-hosted/ open source Plaud.ai-style functionality?
1
[removed]
2024-12-30T16:01:39
https://www.reddit.com/r/LocalLLaMA/comments/1hpqdbc/selfhosted_open_source_plaudaistyle_functionality/
UrbanCircles
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpqdbc
false
null
t3_1hpqdbc
/r/LocalLLaMA/comments/1hpqdbc/selfhosted_open_source_plaudaistyle_functionality/
false
false
self
1
null
My very simple prompt to entertain yourself with reasoning models:
56
Prompt: Tell me [number of] specific facts about [any topic] that [a professional in a completely different field] would be fascinated with.

Example prompt: Tell me 20 specific facts about bird biology that a litigation lawyer would be fascinated with.

Reasoning models do really well and provide entertaining and relevant answers.
2024-12-30T16:26:10
https://www.reddit.com/r/LocalLLaMA/comments/1hpqxr8/my_very_simple_prompt_to_entertain_yourself_with/
Personal-Dot-380
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpqxr8
false
null
t3_1hpqxr8
/r/LocalLLaMA/comments/1hpqxr8/my_very_simple_prompt_to_entertain_yourself_with/
false
false
self
56
null
High Quality Coding instruction-response pair dataset?
5
Looking for a solid dataset of instruction-response pairs for coding tasks. Ideally something diverse and open-source. Any recommendations?
2024-12-30T16:53:09
https://www.reddit.com/r/LocalLLaMA/comments/1hprkfk/high_quality_coding_instructionresponse_pair/
Initial_Track6190
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hprkfk
false
null
t3_1hprkfk
/r/LocalLLaMA/comments/1hprkfk/high_quality_coding_instructionresponse_pair/
false
false
self
5
null
Context length calculator
6
I’ve been playing around in LMStudio on my M1 MacBook Air with 8 GB. To my surprise it works kinda well. Now I’m looking to level up and purchase a new GPU (I currently have a 6700 non-XT). I’m noticing that my usage tends toward long chats, and I’m not able to get much use out of it because I only get up to about 4K tokens of context on 3-8B models. Before upgrading my GPU, I want to do some analysis of how much context length I can get for how much VRAM. I searched a lot for a context length calculator that would tell me, for a given VRAM size and a given model, what's the maximum context I could fit before purchasing. I know a lot of other factors like memory bandwidth etc. are important too, but I want to get a rough idea of what I can get. Due to budget constraints I won't be purchasing a 4080/90, so any guides etc. would definitely be appreciated.
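There is no single universal calculator because the answer depends on each model's layer count and KV-head layout, but a rough estimate is easy to script. A minimal sketch (the example config approximates Llama-3-8B; the numbers are estimates and ignore activations and runtime overhead):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elt=2):
    # keys + values (factor 2), per layer, per KV head, per token
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elt

# Llama-3-8B-style config: 32 layers, 8 KV heads (GQA), head dim 128, fp16 cache
for ctx in (4_096, 16_384, 32_768):
    gib = kv_cache_bytes(32, 8, 128, ctx) / 2**30
    print(f"{ctx:>6} tokens -> {gib:.2f} GiB of KV cache")

# Total VRAM ~= quantized weights (~5 GB for an 8B at Q4_K_M) + KV cache + overhead.
```

Pull `n_layers`, `n_kv_heads` and `head_dim` from the model's config.json on Hugging Face; quantized KV caches (q8_0/q4) roughly halve or quarter the `bytes_per_elt` term.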
2024-12-30T16:54:06
https://www.reddit.com/r/LocalLLaMA/comments/1hprl8s/context_length_calculator/
DukeBaset
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hprl8s
false
null
t3_1hprl8s
/r/LocalLLaMA/comments/1hprl8s/context_length_calculator/
false
false
self
6
null
In 2022 Gary Marcus was 50% sure that AGI wouldn't happen by 2029. According to his new bet he is now 9% sure that ASI won't happen by 2027.
100
Gary Marcus' [new bet](https://garymarcus.substack.com/p/where-will-ai-be-at-the-end-of-2027) is the most insane goalpost shift I have ever seen.

**Background**

In 2022, [Gary Marcus bet](https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things?r=8tdk6&utm_campaign=post&utm_medium=web&triedRedirect=true) that we wouldn't have AGI by 2029. He said he would agree that AGI was achieved if it could do THREE of the five following things:

* "In 2029, AI will not be able to watch a movie and tell you accurately what is going on (what I called [the comprehension challenge](https://www.newyorker.com/tech/annals-of-technology/what-comes-after-the-turing-test) in *The New Yorker*, in 2014). Who are the characters? What are their conflicts and motivations? etc.
* In 2029, AI will not be able to read a novel and reliably answer questions about plot, character, conflicts, motivations, etc. Key will be going beyond the literal text, as Davis and I explain in [*Rebooting AI*](http://rebooting.ai).
* In 2029, AI will not be able to work as a competent cook in an arbitrary kitchen (extending [Steve Wozniak’s cup of coffee benchmark](https://www.fastcompany.com/1568187/wozniak-could-computer-make-cup-coffee)).
* In 2029, AI will not be able to reliably construct bug-free code of more than 10,000 lines from natural language specification or by interactions with a non-expert user. \[Gluing together code from existing libraries doesn’t count.\]
* In 2029, AI will not be able to take arbitrary proofs from the mathematical literature written in natural language and convert them into a symbolic form suitable for symbolic verification."

In 2024, he bet that we wouldn't get ASI by 2027 and gave a list of 11 things he thought would indicate ASI.

**The new bet**

You can read it [here](https://garymarcus.substack.com/p/where-will-ai-be-at-the-end-of-2027). He has basically taken four requirements from his AGI test and added six requirements from his ASI test. Here is a comparison against his 2022 AGI test (he refers to it as his '2023' bet for completely inexplicable reasons). He has:

1. Reduced the amount he is willing to bet by two orders of magnitude ($100,000 -> $2,000).
2. Brought the date up by two years (from 2029 to 2027).
3. Added six new items that he previously classified as indicating ASI, not AGI.
4. Changed the odds he is willing to give from 1:1 (indicating that he was 50% sure he would be right) to 10:1 (indicating that he is now 9% sure that he is right).
5. Changed the threshold for passing the test from achieving 60% of the tasks (3 out of 5) to achieving 80% of the tasks (8 out of 10).

Basically, he has gone from being 50% sure that AGI won't happen by 2029 to being 9% sure that ASI won't happen by 2027. If you add in the reduction in the amount he is willing to bet and the increase in the threshold for passing the test, it's an even greater reduction in confidence.

You can measure the exponential improvement in AI just by looking at the increasing velocity of goalpost shifting by Gary Marcus.
2024-12-30T17:02:58
https://www.reddit.com/r/LocalLLaMA/comments/1hprsyv/in_2022_gary_marcus_was_50_sure_that_agi_wouldnt/
Xron_J
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hprsyv
false
null
t3_1hprsyv
/r/LocalLLaMA/comments/1hprsyv/in_2022_gary_marcus_was_50_sure_that_agi_wouldnt/
false
false
self
100
{'enabled': False, 'images': [{'id': '7dt2XOq42Kn-CnlqS1eL_OLXrvF_UCWFU-62JLuRH0U', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/xJMLaf8KS23nSSWwLxgy_gOnwc9pzlwgw3hmCzq6BMc.jpg?width=108&crop=smart&auto=webp&s=a4b7d4675c976f0828bf1f5ae3e926a110a0affb', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/xJMLaf8KS23nSSWwLxgy_gOnwc9pzlwgw3hmCzq6BMc.jpg?width=216&crop=smart&auto=webp&s=13b764f69d0793c756e113d80ec0dfda6b8d160f', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/xJMLaf8KS23nSSWwLxgy_gOnwc9pzlwgw3hmCzq6BMc.jpg?width=320&crop=smart&auto=webp&s=caa05f77c128899c7fb9329bf9926deb759ef362', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/xJMLaf8KS23nSSWwLxgy_gOnwc9pzlwgw3hmCzq6BMc.jpg?width=640&crop=smart&auto=webp&s=336e37659e8b1ac0fdb6a8f9b1ffc33faa9c0100', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/xJMLaf8KS23nSSWwLxgy_gOnwc9pzlwgw3hmCzq6BMc.jpg?width=960&crop=smart&auto=webp&s=f12b49b9c1d028f6ec128e6829439a1a0bda61b6', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xJMLaf8KS23nSSWwLxgy_gOnwc9pzlwgw3hmCzq6BMc.jpg?auto=webp&s=24ef2a9ee5b19074c9b692f6e0a34f8410addfe8', 'width': 1024}, 'variants': {}}]}
Many asked: When will we have an open source model better than chatGPT4? The day has arrived.
494
Deepseek V3 [https://x.com/lmarena\_ai/status/1873695386323566638](https://x.com/lmarena_ai/status/1873695386323566638)
2024-12-30T17:10:24
https://www.reddit.com/r/LocalLLaMA/comments/1hprz6x/many_asked_when_will_we_have_an_open_source_model/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hprz6x
false
null
t3_1hprz6x
/r/LocalLLaMA/comments/1hprz6x/many_asked_when_will_we_have_an_open_source_model/
false
false
self
494
{'enabled': False, 'images': [{'id': 'e6ImZrrvih32zDI-VXMDkVXiD36NZyfMwd0KijFp9DE', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/qglZeNWYvHF_1AveLW1cG2-Nxv88gOMxYilXVZyeljM.jpg?width=108&crop=smart&auto=webp&s=20ca6523605edd0a6681104da8c814a666080710', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/qglZeNWYvHF_1AveLW1cG2-Nxv88gOMxYilXVZyeljM.jpg?width=216&crop=smart&auto=webp&s=f36f0c672ea03021070f10484d04e06f94438c31', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/qglZeNWYvHF_1AveLW1cG2-Nxv88gOMxYilXVZyeljM.jpg?width=320&crop=smart&auto=webp&s=904bd951ccfecf97af62d37bca77ae5a9036b3ee', 'width': 320}, {'height': 417, 'url': 'https://external-preview.redd.it/qglZeNWYvHF_1AveLW1cG2-Nxv88gOMxYilXVZyeljM.jpg?width=640&crop=smart&auto=webp&s=e1f99bcb102f258392d13871d656bc3110926c0d', 'width': 640}, {'height': 625, 'url': 'https://external-preview.redd.it/qglZeNWYvHF_1AveLW1cG2-Nxv88gOMxYilXVZyeljM.jpg?width=960&crop=smart&auto=webp&s=f81186d09ad07c9947b336da4d1bc993325c5f85', 'width': 960}, {'height': 704, 'url': 'https://external-preview.redd.it/qglZeNWYvHF_1AveLW1cG2-Nxv88gOMxYilXVZyeljM.jpg?width=1080&crop=smart&auto=webp&s=77057ab5ae467f7ff0d048b281a25e71db838565', 'width': 1080}], 'source': {'height': 1335, 'url': 'https://external-preview.redd.it/qglZeNWYvHF_1AveLW1cG2-Nxv88gOMxYilXVZyeljM.jpg?auto=webp&s=4841b6d572330b3e51789c1025203a15a85c012e', 'width': 2048}, 'variants': {}}]}
Multimodal AI agents
1
https://github.com/MervinPraison/PraisonAI/
2024-12-30T17:17:48
https://i.redd.it/ld7i6c3nr0ae1.jpeg
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1hps57j
false
null
t3_1hps57j
/r/LocalLLaMA/comments/1hps57j/multimodal_ai_agents/
false
false
https://b.thumbs.redditm…0l5MoOjxAwMw.jpg
1
{'enabled': True, 'images': [{'id': 'l4etlmd1sEggJXlatpsJgo2WHJDQnMoqkA9ChfsWIT0', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ld7i6c3nr0ae1.jpeg?width=108&crop=smart&auto=webp&s=05d2cc946c3f59fa2b6ee6fdce55a5b14a84c71b', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ld7i6c3nr0ae1.jpeg?width=216&crop=smart&auto=webp&s=0eb3a6f7ff4e01efee131c65ac93449ab75df34b', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ld7i6c3nr0ae1.jpeg?width=320&crop=smart&auto=webp&s=668bae855192139e9eb621d2cd41d015df97d223', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/ld7i6c3nr0ae1.jpeg?width=640&crop=smart&auto=webp&s=83d32dfe8fc908c587c327edc1764da91c47805a', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/ld7i6c3nr0ae1.jpeg?width=960&crop=smart&auto=webp&s=0259ded718fc4544358d5ecfd437c6a750bf69f0', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/ld7i6c3nr0ae1.jpeg?width=1080&crop=smart&auto=webp&s=60303048f232f79d422a7db4f6c96b125fd326f9', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/ld7i6c3nr0ae1.jpeg?auto=webp&s=130ad7ec873cf2ba0d1fb67a3ae6c1d487b0dff1', 'width': 1080}, 'variants': {}}]}
70B on single 3090 with more ram?
3
I have a single 3090 and 32 GB of RAM. If I buy another 32 GB of DDR5 RAM, would it be possible to run Llama 3.3 70B by doing some offloading?
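A rough back-of-envelope (my own estimate, numbers approximate) suggests 64 GB total RAM is about the comfortable minimum for this:

```python
# Llama-3.3-70B at Q4_K_M is roughly 4.7 bits per weight (approximation)
params = 70e9
weights_gb = params * 4.7 / 8 / 1e9       # ~41 GB of quantized weights
vram_gb = 24                               # one RTX 3090
offloaded_gb = weights_gb - vram_gb        # ~17 GB of layers must sit in system RAM
print(f"weights ~{weights_gb:.0f} GB, offloaded to RAM ~{offloaded_gb:.0f} GB")

# With 64 GB system RAM there is headroom for the OS, KV cache and context;
# expect low single-digit tokens/s, since offloaded layers run at RAM speed.
```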
2024-12-30T17:28:12
https://www.reddit.com/r/LocalLLaMA/comments/1hpse1w/70b_on_single_3090_with_more_ram/
Apart_Paramedic_7767
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpse1w
false
null
t3_1hpse1w
/r/LocalLLaMA/comments/1hpse1w/70b_on_single_3090_with_more_ram/
false
false
self
3
null
Does dual 3090 double Tokens/s, or just increases the size of models we can load?
7
I have been trying to get a clear answer on this for a few days but I can't find a definitive one. My question: will adding a second 3090 make my LLM answer faster? (I use it for Home-Assistant voice endpoints, and speed is of the essence.) I currently use `qwen2.5:14b-instruct`.
2024-12-30T18:19:11
https://www.reddit.com/r/LocalLLaMA/comments/1hptl8f/does_dual_3090_double_tokenss_or_just_increases/
maxi1134
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hptl8f
false
null
t3_1hptl8f
/r/LocalLLaMA/comments/1hptl8f/does_dual_3090_double_tokenss_or_just_increases/
false
false
self
7
null
Are there any Local Speech to Text and then Text to Speech Set ups out there?
13
I cannot find anything that is very well built to enable voice assistants. Right now I am just using Open WebUI plus an OpenAI TTS endpoint, but it isn't really that great. Is there anything out there already?
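For what it's worth, the pieces for a fully local loop do exist. A minimal hedged sketch wiring faster-whisper (STT) to an OpenAI-compatible local server such as ollama (the endpoint URL, model names and file name are assumptions to adapt):

```python
from faster_whisper import WhisperModel
from openai import OpenAI

stt = WhisperModel("small", device="cpu", compute_type="int8")
llm = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # ollama's OpenAI-compatible API

def transcribe(wav_path: str) -> str:
    """Run local speech-to-text on a recorded WAV file."""
    segments, _info = stt.transcribe(wav_path)
    return " ".join(seg.text for seg in segments)

def answer(prompt: str) -> str:
    """Send the transcript to the local LLM and return its reply."""
    resp = llm.chat.completions.create(
        model="llama3.2",  # any model you have pulled into ollama
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

text = transcribe("question.wav")
print(answer(text))
# TTS is the missing third leg: pipe the reply into e.g. piper, or into an
# OpenAI-compatible /v1/audio/speech endpoint if your local server exposes one.
```

The glue (mic capture, wake word, barge-in) is what the polished products add; the model stack itself can run entirely offline.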
2024-12-30T18:30:16
https://www.reddit.com/r/LocalLLaMA/comments/1hptukr/are_there_any_local_speech_to_text_and_then_text/
SeriousGrab6233
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hptukr
false
null
t3_1hptukr
/r/LocalLLaMA/comments/1hptukr/are_there_any_local_speech_to_text_and_then_text/
false
false
self
13
null
New LLM Divergent Thinking Creativity Benchmark
50
2024-12-30T18:32:20
https://github.com/lechmazur/divergent
zero0_one1
github.com
1970-01-01T00:00:00
0
{}
1hptwfc
false
null
t3_1hptwfc
/r/LocalLLaMA/comments/1hptwfc/new_llm_divergent_thinking_creativity_benchmark/
false
false
https://b.thumbs.redditm…9zIrhpXX2zoU.jpg
50
{'enabled': False, 'images': [{'id': 'aGaL-ZmT386a_pQWzky821-iA_TvhvHTE8fl9qApHCo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LBxvGZZ5JeIBRtzgsiv12tVxzTskof-f4bJbI0Ajrms.jpg?width=108&crop=smart&auto=webp&s=7b888a61353cf2d6566a2b26dae87aa3414aba97', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LBxvGZZ5JeIBRtzgsiv12tVxzTskof-f4bJbI0Ajrms.jpg?width=216&crop=smart&auto=webp&s=9482ebd10bde14e728521b7574de48c275727575', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LBxvGZZ5JeIBRtzgsiv12tVxzTskof-f4bJbI0Ajrms.jpg?width=320&crop=smart&auto=webp&s=6a99d8bc75e0b3f2bf653c4393d7c4dae4a42b4c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LBxvGZZ5JeIBRtzgsiv12tVxzTskof-f4bJbI0Ajrms.jpg?width=640&crop=smart&auto=webp&s=f5284c61caed81a4e77601069d07a2afd6fcfdc6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LBxvGZZ5JeIBRtzgsiv12tVxzTskof-f4bJbI0Ajrms.jpg?width=960&crop=smart&auto=webp&s=9aeda2599fc2b02c75cfbe62d3ca3f0a373d86db', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LBxvGZZ5JeIBRtzgsiv12tVxzTskof-f4bJbI0Ajrms.jpg?width=1080&crop=smart&auto=webp&s=acb372ce0935c765d48163cec35c3d318a9e3b66', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LBxvGZZ5JeIBRtzgsiv12tVxzTskof-f4bJbI0Ajrms.jpg?auto=webp&s=201a6c629cb0a035de17977303beaa85175b88cb', 'width': 1200}, 'variants': {}}]}
Please help me with train text from scratch
1
[removed]
2024-12-30T18:46:11
https://www.reddit.com/r/LocalLLaMA/comments/1hpu8bc/please_help_me_with_train_text_from_scratch/
North-Regular-3256
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpu8bc
false
null
t3_1hpu8bc
/r/LocalLLaMA/comments/1hpu8bc/please_help_me_with_train_text_from_scratch/
false
false
self
1
null
State of r/LocalLLaMA and Moderation
1
[removed]
2024-12-30T18:51:55
https://www.reddit.com/r/LocalLLaMA/comments/1hpud85/state_of_rlocalllama_and_moderation/
zerking_off
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpud85
false
null
t3_1hpud85
/r/LocalLLaMA/comments/1hpud85/state_of_rlocalllama_and_moderation/
false
false
self
1
null
Do I bother to buy a 7900 xtx?
10
Sorry for the kind of vague question, but here it goes. I've been saving for a while to buy a new GPU, mainly for gaming, and I also want to start exploring the llama world. Since I'm a Linux user, I'm focusing on AMD's GPUs, and perhaps also the Intel ones. So I've been waiting for the AMD announcements in January to finish deciding which card to get. I was hoping for cards with a good amount of VRAM, but it seems 16GB will be the max. So apparently the 7900 XTX is the best option, at least for llama, and it is also quite capable in gaming. However, it is a 2-year-old card, and still quite expensive. And the new AMD GPUs will probably give it at least a run for its money regarding gaming, maybe even be better at it. So my question is: given the card is quite expensive, and its gaming performance will not be that far ahead, is it worth it for its VRAM amount? I've been reading posts here and it seems there are a lot of dual-GPU setups, which are not an option for me now. So how much can you actually achieve with 24GB of VRAM?
2024-12-30T18:53:35
https://www.reddit.com/r/LocalLLaMA/comments/1hpuepa/do_i_bother_to_buy_a_7900_xtx/
trevanian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpuepa
false
null
t3_1hpuepa
/r/LocalLLaMA/comments/1hpuepa/do_i_bother_to_buy_a_7900_xtx/
false
false
self
10
null
Sales or marketing specific LLMs?
1
[removed]
2024-12-30T18:53:39
https://www.reddit.com/r/LocalLLaMA/comments/1hpuerg/sales_or_marketing_specific_llms/
saifee177
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpuerg
false
null
t3_1hpuerg
/r/LocalLLaMA/comments/1hpuerg/sales_or_marketing_specific_llms/
false
false
self
1
null
State of r/LocalLLaMA
1
[removed]
2024-12-30T18:54:58
https://www.reddit.com/r/LocalLLaMA/comments/1hpufwf/state_of_rlocalllama/
zerking_off
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpufwf
false
null
t3_1hpufwf
/r/LocalLLaMA/comments/1hpufwf/state_of_rlocalllama/
false
false
self
1
null
Rig building, immersion cooling
0
Just to give you crazy fools some ideas... Building a rig is about having a CPU and motherboard with plenty of PCIe lanes. Then you have to connect your GPUs; they usually come in 2, 3 or 4(?!) slot widths, so you need risers, or eventually retimers if you go for some crazy multi-GPU build. WTF, a workstation with 7 PCIe slots and just enough space for 3 GPUs?! Then you have watercooling, adding more than 150 USD per GPU in waterblocks, pipes and a heat exchanger. Did you know dielectric fluids exist? You literally strip your GPU and CPU of their cooling solutions and throw your PC in a box full of that fluid. We are lucky to have our crypto cousins pioneering some of these ideas for cheap. You can find dielectric fluids for around 7 USD/l; pipes, pumps and heat exchangers can be found fairly easily. Have fun https://youtu.be/nCBM_LUeXCU?si=K3HvPZLLOv4QUTw2
2024-12-30T19:19:04
https://www.reddit.com/r/LocalLLaMA/comments/1hpv0fq/rig_building_immersion_cooling/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpv0fq
false
null
t3_1hpv0fq
/r/LocalLLaMA/comments/1hpv0fq/rig_building_immersion_cooling/
false
false
self
0
{'enabled': False, 'images': [{'id': 'oIrooj20RdctGOVxaIe3tVogXlyaTqn2W0EXHCe27lc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/8CDO5dEhiuDCl91L7400uFBhlpd_mwsUU0FHUzrgVic.jpg?width=108&crop=smart&auto=webp&s=05978250ae6e13e6bff5b505482aa5dc9b0e4d14', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/8CDO5dEhiuDCl91L7400uFBhlpd_mwsUU0FHUzrgVic.jpg?width=216&crop=smart&auto=webp&s=17f04b45a34412ead250d55c9350eabbfb6e7f5b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/8CDO5dEhiuDCl91L7400uFBhlpd_mwsUU0FHUzrgVic.jpg?width=320&crop=smart&auto=webp&s=604d103cfa3979c56cbe3c7f9bd664c6f8ca3a80', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/8CDO5dEhiuDCl91L7400uFBhlpd_mwsUU0FHUzrgVic.jpg?auto=webp&s=b59691d1c5a17f76c00528ced4c5dc2517d415a7', 'width': 480}, 'variants': {}}]}
VQA / VLLM for identifying highlights in sporting event videos? (Semantic search)
7
Hi all, I'd like to play around with semantic search over videos. As a use case, I'd like to try finding key moments in soccer games, such as "player shoots on goal" etc. I've investigated YOLO and SAM 2 for object detection, tracking, and segmentation, but was curious if there might be a VQA VLLM that supports video and might adapt more easily to a variety of use cases. Has anyone worked in this space, and do you have any suggestions for where I could start? I tried Gemini, and although it could understand the context of a clip, it was not very helpful in identifying key moments with much accuracy.
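One lightweight baseline worth trying before a full VQA model is zero-shot text-to-frame matching with CLIP. A hedged sketch (the model name, file name and sampling rate are illustrative assumptions, not recommendations from the thread):

```python
import cv2
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # CLIP checkpoint shipped by sentence-transformers

def sample_frames(path: str, every_n: int = 30):
    """Grab every Nth frame as an RGB PIL image (OpenCV decodes as BGR)."""
    cap = cv2.VideoCapture(path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            frames.append((i, Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))))
        i += 1
    cap.release()
    return frames

frames = sample_frames("match.mp4")
img_emb = model.encode([img for _, img in frames])
txt_emb = model.encode(["a soccer player shooting on goal"])
scores = util.cos_sim(txt_emb, img_emb)[0]  # one similarity score per sampled frame
best = max(range(len(frames)), key=lambda k: float(scores[k]))
print("best matching frame:", frames[best][0], "score:", float(scores[best]))
```

Peaks in the score curve over time give candidate highlight moments; a heavier VLM pass on just those frames could then confirm or describe them.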
2024-12-30T19:21:51
https://www.reddit.com/r/LocalLLaMA/comments/1hpv2tc/vqa_vllm_for_identifying_highlights_in_sporting/
RMCPhoto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpv2tc
false
null
t3_1hpv2tc
/r/LocalLLaMA/comments/1hpv2tc/vqa_vllm_for_identifying_highlights_in_sporting/
false
false
self
7
null
Methodologies for Continual Pre-training
1
[removed]
2024-12-30T19:22:44
https://www.reddit.com/r/LocalLLaMA/comments/1hpv3k2/methodologies_for_continual_pretraining/
wandering-ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpv3k2
false
null
t3_1hpv3k2
/r/LocalLLaMA/comments/1hpv3k2/methodologies_for_continual_pretraining/
false
false
self
1
null
I tried to self-host code assistants and failed miserably
1
After trying Cursor and Windsurf with free trials, I got hooked on command mode, where you simply explain what the project is and the assistant produces a basic working version, then self-tests and runs it in one go without further prompting. This instantly got to me and I kept building more and more, thereby exhausting the free credits quickly. Just when I was tempted to buy the subscription, I got hit with 502 errors, and after seeing lots of Reddit posts about how these cloud-based inner-loop solutions don't really scale etc., I thought, why not self-host it? I tried open-hands, aider-chat and ra.aid. All failed miserably for my case, and none came close to what I was expecting compared to the cloud-based SaaS. All three create decent initial code, but they are very poor at follow-ups or handling files locally. aider-chat failed to work with quantized models. I admit I don't have beast hardware (M2 with 16GB), but nevertheless I attempted and failed miserably. Perhaps these SaaS coding assistants are here to stay.
2024-12-30T19:49:06
https://www.reddit.com/r/LocalLLaMA/comments/1hpvpoa/i_tried_to_selfhost_code_assistants_and_failed/
kspviswaphd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpvpoa
false
null
t3_1hpvpoa
/r/LocalLLaMA/comments/1hpvpoa/i_tried_to_selfhost_code_assistants_and_failed/
false
false
self
1
null
Q: is it a sound idea to build a small computing cluster using 4-8 Orin Nanos?
3
I hope people who know more about GPU hardware can help answer this question — is it possible to set up a small computing cluster using 4-8 [NVIDIA Jetson Orin Nanos](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/nano-super-developer-kit/) (or would it be better to build such a cluster using a different, likely used, NVIDIA GPU)? My goal here is building something that a few others and I can remotely access; I want to primarily use it for the training and inference of deep learning models. I expect that the largest model I will deal with is Llama 3.2 70B, but more typically, models with ~5-40B parameters. The appealing part about the Orin Nano is the price point. I know it is primarily built for robotics, but it also seems to have been optimized for LLM usage. Thank you!
2024-12-30T20:30:59
https://www.reddit.com/r/LocalLLaMA/comments/1hpwosw/q_is_it_a_sound_idea_to_build_a_small_computing/
impartialhedonist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpwosw
false
null
t3_1hpwosw
/r/LocalLLaMA/comments/1hpwosw/q_is_it_a_sound_idea_to_build_a_small_computing/
false
false
self
3
{'enabled': False, 'images': [{'id': '1yk1N333Cqp5A9orvSbi4yZmXDWW5ZQF4BuhevhFFRE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GaDRkSMG0sHHEFO4R7wlfh48g9uTcSD5AfaDReO1I50.jpg?width=108&crop=smart&auto=webp&s=88222f075760c8c6a4327fda9f507975d65c692a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GaDRkSMG0sHHEFO4R7wlfh48g9uTcSD5AfaDReO1I50.jpg?width=216&crop=smart&auto=webp&s=89c46cf579513c0b2729ad25275e564f9ae21a64', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GaDRkSMG0sHHEFO4R7wlfh48g9uTcSD5AfaDReO1I50.jpg?width=320&crop=smart&auto=webp&s=b39ce92fc0b1ed24c40b298a43e17ad4b46e29ec', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GaDRkSMG0sHHEFO4R7wlfh48g9uTcSD5AfaDReO1I50.jpg?width=640&crop=smart&auto=webp&s=965748ab08d9d6561a9c061f109260abfd394f0e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/GaDRkSMG0sHHEFO4R7wlfh48g9uTcSD5AfaDReO1I50.jpg?width=960&crop=smart&auto=webp&s=cf2c9b402c482db74cf7d6299010bff3c41a4330', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/GaDRkSMG0sHHEFO4R7wlfh48g9uTcSD5AfaDReO1I50.jpg?width=1080&crop=smart&auto=webp&s=22f0975f8511e70cab48874a15bc2ffd34e75ef7', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/GaDRkSMG0sHHEFO4R7wlfh48g9uTcSD5AfaDReO1I50.jpg?auto=webp&s=23930671e17ec58934a5a18c3b601162673aaab8', 'width': 1200}, 'variants': {}}]}
[LLM Inference] SGLang vs HF
1
[removed]
2024-12-30T20:37:12
https://www.reddit.com/r/LocalLLaMA/comments/1hpwtuw/llm_inference_sglang_vs_hf/
Ok_Honey_9386
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpwtuw
false
null
t3_1hpwtuw
/r/LocalLLaMA/comments/1hpwtuw/llm_inference_sglang_vs_hf/
false
false
self
1
null
[LLM Inference] SGLang vs HF
1
[removed]
2024-12-30T20:43:21
https://www.reddit.com/r/LocalLLaMA/comments/1hpwyv8/llm_inference_sglang_vs_hf/
Ok_Honey_9386
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpwyv8
false
null
t3_1hpwyv8
/r/LocalLLaMA/comments/1hpwyv8/llm_inference_sglang_vs_hf/
false
false
self
1
null
Help to select hardware for personal AI
1
[removed]
2024-12-30T21:23:33
https://www.reddit.com/r/LocalLLaMA/comments/1hpxw4j/help_to_select_hardware_for_personal_ai/
AlexDorofeev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpxw4j
false
null
t3_1hpxw4j
/r/LocalLLaMA/comments/1hpxw4j/help_to_select_hardware_for_personal_ai/
false
false
self
1
null
Can someone build a market where I can sell access to my super Niche LLM by the token?
0
First some background, I'm a nodejs developer and I spend most of my day writing code. Claude is great, but the web interface is too limiting, and then it gets expensive when using the API. I've developed a niche LLM fine tune that achieves near SOTA performance yet ONLY FOR NodeJS development at a significantly lower cost than mainstream alternatives. This got me thinking: there's likely demand for niche, domain-specific LLMs like mine in the broader developer community. While platforms like OpenRouter exist, there seems to be a gap in the market for a dedicated marketplace where developers can offer their specialized LLM solutions. This could be an opportunity to create a platform that connects LLM developers with users seeking cost-effective, domain-specific AI solutions. Has anyone explored building something like this?
2024-12-30T21:28:01
https://www.reddit.com/r/LocalLLaMA/comments/1hpxzt3/can_someone_build_a_market_where_i_can_sell/
estebansaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpxzt3
false
null
t3_1hpxzt3
/r/LocalLLaMA/comments/1hpxzt3/can_someone_build_a_market_where_i_can_sell/
false
false
self
0
null
3x Arc A770 build
1
[removed]
2024-12-30T21:38:26
https://www.reddit.com/gallery/1hpy8cv
Echo9Zulu-
reddit.com
1970-01-01T00:00:00
0
{}
1hpy8cv
false
null
t3_1hpy8cv
/r/LocalLLaMA/comments/1hpy8cv/3x_arc_a770_build/
false
false
https://b.thumbs.redditm…w6EGGz2R0-ko.jpg
1
null
Let's make 2025 the year we all use AI for the good of humanity
1
[removed]
2024-12-30T21:44:05
https://www.reddit.com/r/LocalLLaMA/comments/1hpyd3o/lets_make_2025_the_year_we_all_use_ai_for_the/
unknownstudentoflife
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpyd3o
false
null
t3_1hpyd3o
/r/LocalLLaMA/comments/1hpyd3o/lets_make_2025_the_year_we_all_use_ai_for_the/
false
false
self
1
null
I just realized that tokens/s does not matter so much
1
[removed]
2024-12-30T21:45:08
https://www.reddit.com/r/LocalLLaMA/comments/1hpye08/i_just_realized_that_tokenss_does_not_matter_so/
badabimbadabum2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpye08
false
null
t3_1hpye08
/r/LocalLLaMA/comments/1hpye08/i_just_realized_that_tokenss_does_not_matter_so/
false
false
self
1
null
I am new to the LLM scene and I want to build a PC to accommodate over 30B parameters; aside from price, what would be the best build? I want at least a GTX 4090 GPU; it doesn't matter if it's AMD or Intel.
1
[removed]
2024-12-30T21:45:49
https://www.reddit.com/r/LocalLLaMA/comments/1hpyeki/i_am_new_to_the_llm_scene_and_i_want_to_build_a/
AA8Corp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpyeki
false
null
t3_1hpyeki
/r/LocalLLaMA/comments/1hpyeki/i_am_new_to_the_llm_scene_and_i_want_to_build_a/
false
false
self
1
null
Fine tuning a existing uncensored model
0
Hi. I have a collection of stories I've written over the years, and I thought it'd be fun to try to stuff them into an LLM and see how that works. I am already using Orenguteng's 3.1 8B, and despite some weirdness, it's working quite well for me thus far. Trying to read up on fine-tuning, it seems all roads lead to Unsloth, and from what I can gather, I need to choose one of the models they provide..? Is there a way to just fine-tune over the model I'm already using? I think I have sufficient hardware to fine-tune it, with one computer having a 3090 and another having a 4090, but I wouldn't be opposed to using a cloud service if that's substantially better for the tuning.
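As far as I understand, Unsloth's loader accepts an arbitrary Hugging Face model id, not only their pre-quantized uploads. A hedged sketch (the repo id, LoRA rank, dataset and trainer arguments are illustrative; the trainer signature follows the Unsloth example notebooks of this period, and trl's API changes between versions):

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# assumed HF repo id for "Orenguteng's 3.1 8B"; substitute the exact one you use
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Orenguteng/Llama-3.1-8B-Lexi-Uncensored",
    max_seq_length=4096,
    load_in_4bit=True,  # QLoRA; fits on a single 3090/4090
)
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
dataset = load_dataset("text", data_files="my_stories.txt")["train"]
trainer = SFTTrainer(
    model=model, tokenizer=tokenizer, train_dataset=dataset,
    dataset_text_field="text", max_seq_length=4096,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, max_steps=200),
)
trainer.train()
```

So no, you are not locked to their model list, though their pre-quantized uploads save download time and VRAM.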
2024-12-30T21:52:31
https://www.reddit.com/r/LocalLLaMA/comments/1hpyk7a/fine_tuning_a_existing_uncensored_model/
smokeofc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpyk7a
false
null
t3_1hpyk7a
/r/LocalLLaMA/comments/1hpyk7a/fine_tuning_a_existing_uncensored_model/
false
false
self
0
null
Upgrade options on budget?
1
[removed]
2024-12-30T22:41:17
https://www.reddit.com/r/LocalLLaMA/comments/1hpzom5/upgrade_options_on_budget/
stat-insig-005
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hpzom5
false
null
t3_1hpzom5
/r/LocalLLaMA/comments/1hpzom5/upgrade_options_on_budget/
false
false
self
1
null
Getting gpt-4o-mini to perform like gpt-4o
26
2024-12-30T22:48:00
https://bits.logic.inc/p/getting-gpt-4o-mini-to-perform-like
lnxaddct
bits.logic.inc
1970-01-01T00:00:00
0
{}
1hpztwb
false
null
t3_1hpztwb
/r/LocalLLaMA/comments/1hpztwb/getting_gpt4omini_to_perform_like_gpt4o/
false
false
https://b.thumbs.redditm…NNDJY8TAx0Xw.jpg
26
{'enabled': False, 'images': [{'id': 'PaIt0eNuAi5uBrcXAzmwOgWrIAm16pvMEj5Lk1h4_Qc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MZmxD0LNxjx45LFVQ1UStOk0Hm7rlReHoaxFAKPqHpQ.jpg?width=108&crop=smart&auto=webp&s=6863ec7505f49bf5e9168a5e91db8b1e9dc46955', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MZmxD0LNxjx45LFVQ1UStOk0Hm7rlReHoaxFAKPqHpQ.jpg?width=216&crop=smart&auto=webp&s=c920ed4f946111311a0c810ef01154e233d243dc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MZmxD0LNxjx45LFVQ1UStOk0Hm7rlReHoaxFAKPqHpQ.jpg?width=320&crop=smart&auto=webp&s=2652c0440e1abfda28a97031faae982f0fab3fd1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MZmxD0LNxjx45LFVQ1UStOk0Hm7rlReHoaxFAKPqHpQ.jpg?width=640&crop=smart&auto=webp&s=85359efeb5033ddc98f9e43ee1d1daf54a621f75', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MZmxD0LNxjx45LFVQ1UStOk0Hm7rlReHoaxFAKPqHpQ.jpg?width=960&crop=smart&auto=webp&s=ea32958b41f969e156b9d174847bab89b02c6f16', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MZmxD0LNxjx45LFVQ1UStOk0Hm7rlReHoaxFAKPqHpQ.jpg?width=1080&crop=smart&auto=webp&s=50c782b5f21e973fd45b1d8d9a94bbe2e4224fb6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MZmxD0LNxjx45LFVQ1UStOk0Hm7rlReHoaxFAKPqHpQ.jpg?auto=webp&s=57c751c1dc0916376cf212ba7340f27b2fb7c305', 'width': 1200}, 'variants': {}}]}
Sam's latest investor meeting now that open source models are catching up
1
[removed]
2024-12-30T23:03:29
https://v.redd.it/hr8uegeah2ae1
0xlisykes
v.redd.it
1970-01-01T00:00:00
0
{}
1hq06ki
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/hr8uegeah2ae1/DASHPlaylist.mpd?a=1738191825%2CZDIxYjlkYjcwZWZlN2Y5MDQ3NGRiYTIwOGM4ZjRmODk1NTZlZDU3Y2I4MTE5MDhlMDFjYzhlYWFiNGY2YTk3NA%3D%3D&v=1&f=sd', 'duration': 102, 'fallback_url': 'https://v.redd.it/hr8uegeah2ae1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 536, 'hls_url': 'https://v.redd.it/hr8uegeah2ae1/HLSPlaylist.m3u8?a=1738191825%2CNTMzYmU0NjE0M2E5ODMyMzNmZDA5MTEwMWRjNjY4ZWVjMGM3MTkwYzU3MzFkM2UyOGQ2MWJiMGVkN2U2MzI3Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hr8uegeah2ae1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1hq06ki
/r/LocalLLaMA/comments/1hq06ki/sams_latest_investor_meeting_now_that_open_source/
false
false
https://external-preview…501af4f64caabc5e
1
{'enabled': False, 'images': [{'id': 'cWprc2E2NmFoMmFlMUryplhRnQb7DkOh6kW2fK0l_klrkXqf9Uz5drigEkWC', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/cWprc2E2NmFoMmFlMUryplhRnQb7DkOh6kW2fK0l_klrkXqf9Uz5drigEkWC.png?width=108&crop=smart&format=pjpg&auto=webp&s=c3ac3598ae602d0bfea4bab04827b7f1bdcdd732', 'width': 108}, {'height': 90, 'url': 'https://external-preview.redd.it/cWprc2E2NmFoMmFlMUryplhRnQb7DkOh6kW2fK0l_klrkXqf9Uz5drigEkWC.png?width=216&crop=smart&format=pjpg&auto=webp&s=53a72431e14e48d90451cd9d15ae1b1047730bfe', 'width': 216}, {'height': 134, 'url': 'https://external-preview.redd.it/cWprc2E2NmFoMmFlMUryplhRnQb7DkOh6kW2fK0l_klrkXqf9Uz5drigEkWC.png?width=320&crop=smart&format=pjpg&auto=webp&s=3c6ec8ddc642814c8807d3bc20deca6c968f720b', 'width': 320}, {'height': 268, 'url': 'https://external-preview.redd.it/cWprc2E2NmFoMmFlMUryplhRnQb7DkOh6kW2fK0l_klrkXqf9Uz5drigEkWC.png?width=640&crop=smart&format=pjpg&auto=webp&s=e88a25521f64be2be19cf1d35e9aaa434d2c743f', 'width': 640}, {'height': 402, 'url': 'https://external-preview.redd.it/cWprc2E2NmFoMmFlMUryplhRnQb7DkOh6kW2fK0l_klrkXqf9Uz5drigEkWC.png?width=960&crop=smart&format=pjpg&auto=webp&s=8277fee75644714cd4eb828f49bb69061113f3a8', 'width': 960}, {'height': 452, 'url': 'https://external-preview.redd.it/cWprc2E2NmFoMmFlMUryplhRnQb7DkOh6kW2fK0l_klrkXqf9Uz5drigEkWC.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c31fd73bc26edaa613d3387c1903e358f1e4eda9', 'width': 1080}], 'source': {'height': 804, 'url': 'https://external-preview.redd.it/cWprc2E2NmFoMmFlMUryplhRnQb7DkOh6kW2fK0l_klrkXqf9Uz5drigEkWC.png?format=pjpg&auto=webp&s=3d0c7130e76f692b84d9de47a1ab140bea46f952', 'width': 1920}, 'variants': {}}]}
Embedding apples to apples vs OpenAI
1
[removed]
2024-12-30T23:46:09
https://www.reddit.com/r/LocalLLaMA/comments/1hq14dl/embedding_apples_to_apples_vs_openai/
nycjdg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq14dl
false
null
t3_1hq14dl
/r/LocalLLaMA/comments/1hq14dl/embedding_apples_to_apples_vs_openai/
false
false
https://a.thumbs.redditm…A_RhM7WHZeN8.jpg
1
null
Embedding apples to apples: Llama 3.3/3.2 vs OpenAI
1
[removed]
2024-12-30T23:53:48
https://www.reddit.com/r/LocalLLaMA/comments/1hq1alb/embedding_apples_to_apples_llama_3332_vs_openai/
nycjdg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq1alb
false
null
t3_1hq1alb
/r/LocalLLaMA/comments/1hq1alb/embedding_apples_to_apples_llama_3332_vs_openai/
false
false
self
1
null
Prompt for deep questions/talk therapy I like to use
4
I found talk therapy wildly unhelpful back when I had a therapist. I didn’t really understand to ask them to give me more questions, but I’ve learned that I do best when given tons of deep, reflective questions about the issues I’m having. So, I use this prompt, and it helps. Try it, and tell me what you think: I’m dealing with an issue, and I need your help exploring it deeply. Your job is to ask me leading and reflective questions to help me uncover the roots of my thoughts and feelings, clarify my statements, and challenge my assumptions. Focus entirely on exploration—don’t suggest actions unless they’re framed as gentle challenges to my thinking, like ‘What would it look like if you tried X instead of Y?’ Avoid generic advice like ‘try deep breathing.’ Ask me about unspoken influences, cultural or societal norms, and whether someone in my life might have shaped the way I feel. If I bring up vague or ambiguous ideas, call them out and help me clarify. (For example: ‘When you say you feel overwhelmed, do you mean mentally, emotionally, or something else? What does overwhelmed look like for you in this situation?’ Or, ‘You said you feel stuck—what does stuck mean? Is it about not knowing what to do, or feeling like no option will work?’) Compare extremes to moderate examples when relevant, and explore opposites to challenge my perspective. (For instance: ‘You mentioned you feel this way when things go wrong—what about when they go right? Do you feel anything similar, or is it completely different?’ Or, ‘What’s the worst-case scenario here? How does it compare to what usually happens?’) If I seem resistant or defensive, ask why I might feel that way and suggest possible reasons based on what we’ve talked about. (For example: ‘You seemed frustrated by that last question. Do you think it’s because the question feels too personal? Or maybe it’s hard to put into words right now?’) Focus on both emotional and logical angles. Help me validate my feelings while staying grounded, asking questions like, ‘When you say it feels impossible, what do you mean by impossible? Is it about time, effort, or something deeper?’ or ‘How often does this actually happen, versus how often you fear it will?’ Tie recurring themes together, point out inconsistencies, and ask what makes two seemingly similar situations different. (For example: ‘Earlier you said you’re comfortable doing X, but you find Y really difficult even though they seem similar. What do you think makes them different?’) Explore how things might have changed over time when appropriate. (For instance: ‘Do you think this has always been an issue, or did it develop after something specific?’ Or, ‘When was the last time you didn’t feel this way? What was different then?’) Ask me a lot of questions at a time—20 or so per response is fine—and make them in-depth and multi-part when needed. (For example: ‘Do you ever feel this way about something else? Why or why not? How does that compare to what you’re describing now?’) Keep the flow organic, and let the conversation naturally explore different areas. If I seem ready to stop, let me end the conversation instead of asking meta questions like ‘Do you feel this is helpful?’ Feel free to use terms like cognitive distortions when relevant, and reference my mental health conditions (like [your mental health conditions]) if they seem important—but don’t assume every issue comes from them. 
(For example: ‘Do you think [one of my illnesses here] could be playing a role here, or do you think it’s unrelated?’ Or, ‘You mentioned anxiety earlier—do you think that’s the main issue here, or is something else coming into play?’) Occasionally reframe my statements as questions to challenge my perspective. (For instance: ‘You said you can never succeed at this—do you really mean never, or is it just harder under certain conditions?’) Do not waste space reminding me you’re an AI or what your limitations are—I already know. Dive straight into the exploration and keep digging until I ask you to stop. Your tone should reflect the examples I’ve provided: neutral but curious, relaxed but pointed when necessary. Ask about unspoken influences, cultural and societal norms, assumptions, and recurring themes. Point out inconsistencies or anything I seem to be avoiding, and explore those areas in depth. (For example: ‘You said this is a problem for you, but you also mentioned you can handle similar situations easily. What do you think is different here? Are you avoiding a specific feeling or idea that’s tied to this situation? How does it compare to what we’ve already discussed?’) Every time I answer your questions, come back with more, particularly about things I’ve answered unsatisfactorily. The topic is: [fill in your topic here.] (There are a few spots for fill in the blank, don’t forget to do that above before you send it to ChatGPT. As usual, erase anything that isn’t relevant to you.) Then, I just copy the questions to a document and answer them directly below each one, and then send back all of them at once, telling it to follow the instructions again. Then it provides me with more questions!
2024-12-30T23:54:06
https://www.reddit.com/r/LocalLLaMA/comments/1hq1au3/prompt_for_deep_questionstalk_therapy_i_like_to/
bearbarebere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq1au3
false
null
t3_1hq1au3
/r/LocalLLaMA/comments/1hq1au3/prompt_for_deep_questionstalk_therapy_i_like_to/
false
false
self
4
null
What's the most affordable cloud-based VM/GPU that works well?
1
I'm looking to do some LLM and SD image stuff and need some on-demand access to GPUs (32 GB of VRAM to 48 or maybe more). I'm doing ComfyUI and ollama stuff. What's the most affordable cloud-based solution you know of? Also, does anybody know how RunPod works? I've read their site but don't fully understand it. Do I build a VM somehow, and when I query it, the GPU connects automatically and I get billed for it? Or, how do these cloud-based on-demand GPUs work overall? Thanks!
2024-12-30T23:58:15
https://www.reddit.com/r/LocalLLaMA/comments/1hq1e1m/whats_the_most_affordable_cloudbased_vmgpu_that/
StartupTim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq1e1m
false
null
t3_1hq1e1m
/r/LocalLLaMA/comments/1hq1e1m/whats_the_most_affordable_cloudbased_vmgpu_that/
false
false
self
1
null
Help converting .pth to Huggingface compatible .bin
1
[removed]
2024-12-30T23:58:28
https://www.reddit.com/r/LocalLLaMA/comments/1hq1e7l/help_converting_pth_to_huggingface_compatible_bin/
CompleteStand0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq1e7l
false
null
t3_1hq1e7l
/r/LocalLLaMA/comments/1hq1e7l/help_converting_pth_to_huggingface_compatible_bin/
false
false
self
1
null
Help converting .pth to .bin
1
[removed]
2024-12-31T00:00:42
https://www.reddit.com/r/LocalLLaMA/comments/1hq1g2a/help_converting_pth_to_bin/
CompleteStand0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq1g2a
false
null
t3_1hq1g2a
/r/LocalLLaMA/comments/1hq1g2a/help_converting_pth_to_bin/
false
false
self
1
null
GPUs with expandable VRAM! Next hype in AI/LLM?
1
[removed]
2024-12-31T01:20:42
https://www.reddit.com/r/LocalLLaMA/comments/1hq351r/gpus_with_expandable_vram_next_hype_in_aillm/
Expensive_Response69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq351r
false
null
t3_1hq351r
/r/LocalLLaMA/comments/1hq351r/gpus_with_expandable_vram_next_hype_in_aillm/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Q-LTUiielqGAZfkAod3rG3o6Yiw-d3LNiHxInOi7zIM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P2fN0NxdxZU61gKoFc5oFTX0nrGy4vvfoLrI4KUTRog.jpg?width=108&crop=smart&auto=webp&s=a395271295f0aa188d41e2ca96211606ad60a69e', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/P2fN0NxdxZU61gKoFc5oFTX0nrGy4vvfoLrI4KUTRog.jpg?width=216&crop=smart&auto=webp&s=20336aba4ee1d9fd2b8120f83fbd257d8286c72e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/P2fN0NxdxZU61gKoFc5oFTX0nrGy4vvfoLrI4KUTRog.jpg?width=320&crop=smart&auto=webp&s=2dc2d6439a12a12648c7979f0168634a71faa7bf', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/P2fN0NxdxZU61gKoFc5oFTX0nrGy4vvfoLrI4KUTRog.jpg?auto=webp&s=12d405073ad8b6233593a433881a8c42eb116c7c', 'width': 400}, 'variants': {}}]}
Practical (online & offline) RAG Setups for Long Documents on Consumer Laptops with <16GB RAM
290
## Motivation

As an academic, I work with very long, dense documents literally all the time. Decades ago, I dreamt of being able to interact, to converse with such documents using AI, and now I was wondering if it was possible at all. After testing regularly for about a year, the answer is finally yes, although it is clunky and only a few tools allow it from my tests. The challenge was that it needed to run on my consumer-grade, albeit premium, laptop.

I am going to explain what I found, as I believe this may be useful for others with similar needs to mine, and I would like to invite a discussion about other tools that may be interesting to explore for this purpose, or future tech to watch out for.

Note: please don't expect a fancy extensive results table. I did not have the time to record all the failures, so this post is mainly to explain the few setups that worked and my methods, so that the results can be reproduced.

## Methods

### Step 1: A repeatable multi-needles test

First, I defined a simple, standard, repeatable test to assess any RAG system on the same basis. I decided to reuse the excellent 4-questions multi-needles test on a 60K-token text devised by ggerganov of llama.cpp: https://github.com/ggerganov/llama.cpp/pull/4815#issuecomment-1883289977

Essentially, we generate a 60k-token text (or any size we want to test), and we insert 4 needles at different places in the text: close to the start, somewhere before the middle, somewhere after the middle, and close to the end. Now the trick is that the prompt is also engineered to be particularly difficult:

1. It asks to retrieve ALL the needles at once.
2. It asks for them in a non-sequential order (i.e., we retrieve the last needle, and then a needle earlier in the text).
3. It asks for knowledge that shadows common knowledge (i.e., "dolphins are known for their advanced underwater civilization").
4. It asks for two passphrases that need to be restituted verbatim and in full (i.e., this can test the limits of embeddings that may cut off in the middle of a sentence).

In addition to the test ggerganov did, I also placed the content in multiple file formats (.md, .pdf, .docx), as a RAG system needs to be able to process different types. Although ggerganov explains how he generated the test data and gives the prompt, I published my exact dataset and prompt in a GitHub repository to ease test repeatability, if you want to try it for yourself or check the details: https://github.com/lrq3000/multi-needles-rag-test

### Step 2: Reviewing methods to process very long documents using genAI

Secondly, I explored the methods to process very long documents. There are broadly two families of methods right now:

* use an LLM with an already long context size (even SLMs such as phi-3.5-mini now have a 128k context size, so in theory this should work), or extend the context size: self-extend, rope, infini-attention, etc.;
* or work around it (RAG, GraphRAG, KAG, etc.).

There is a prevalent opinion that RAG, as it was initially conceived to work around context size limitations, is going to go extinct with future LLMs with longer context sizes. Unfortunately, at the moment, I found that LLMs with long context sizes tend to fail quite badly at retrieval tasks over a long context, or they consume an unwieldy amount of RAM to reach the necessary context length, so they cannot run on my relatively resource-constrained machine.
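To make the RAM point concrete, here is a back-of-envelope estimate of the KV cache alone, assuming a Llama-3-8B-style layout (32 layers, 8 KV heads with GQA, head dimension 128, fp16 cache; the model choice and figures are my own illustration, not from the tests above):

$$\text{KV bytes} = \underbrace{2}_{K,V} \cdot n_{\text{layers}} \cdot n_{\text{kv}} \cdot d_{\text{head}} \cdot n_{\text{tokens}} \cdot 2\,\text{B} = 2 \cdot 32 \cdot 8 \cdot 128 \cdot 60000 \cdot 2 \approx 7.3\ \text{GiB}$$

At the full 128k window this grows to about 16 GiB, before even counting the model weights, which is why an 8 GB machine cannot simply lean on long-context models.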
This issue of increased RAM usage includes most context extension methods such as self-extend, despite [succeeding the test](https://github.com/ggerganov/llama.cpp/pull/4815#issuecomment-1883289977) according to ggerganov. However, some methods such as rope and infini-attention require less RAM, so they could work.

Finally, there are RAG and descendant methods. Unfortunately, RAG is still very much in its infancy, so there is no standard best-practices way to do it, and there are a ton of different frameworks and libraries to implement a RAG system. For the purpose of my tests, I only focused on those with a UI or offering an already-made RAG pipeline, because I have not yet learned how to implement RAG by myself.

### Step 3: Identify implementations and run the test

Thirdly, I ran the test! Here is a non-exhaustive list of configurations I tried:

* Various offline and online LLM models: phi-3.5-mini, gemma2-2b-it, mistral, phi-4, Hermes2.5, Ghost, Qwen2.5:1.5b, Qwen2.5:7b, Llama3.2:3b, Phi-3-Medium, Tiger-Gemma-9B (Marco-O1 remains to be tested). Note: almost all were quantized to Q4_K_M, except the SLMs, which were quantized at Q_6_K.
* RAG frontends: msty, anythingLLM, Witsy, RAGFlow, Kotaemon, khoj.dev, Dify, etc. (OpenWebUI failed to install on my machine; QwenAgent remains to be tested.)
* Backend: ollama, ChatGPT, Gemini.

## Successful results

Although several solutions could get 1 question right (usually the first one, about the Konservenata restaurant), it was rare to get any more correctly answered. I found only two working solutions for the multi-needles test to succeed 4 out of 4 (4/4):

* Either without RAG, using LLM models that implement infini-attention. Although the method has been published openly, currently only the Gemini models (online, including the free Flash models) implement it, offering a 1M-token context size. I used Gemini 2.0 Flash Experimental for my tests via Google AI Studio (and also via RAGFlow; both worked).
* Or with a RAG that somehow mimics infinite attention, such as RAGFlow, which implements their [Infinity](https://github.com/infiniflow/infinity) RAG engine and some clever optimizations according to their [blog](https://medium.com/@infiniflowai/ragflow-customizable-credible-explainable-rag-engine-based-on-document-structure-recognition-6a2a2369bd2a). This requires the use of a multi-task embeddings model such as bge-m3 (ollama bge-m3:latest), an LLM model that supports iterative reasoning (such as Phi-4_Q4_K_M, precisely ollama vanilj/Phi-4:latest, the only model I found to succeed while being <8GB RAM, the maximum my computer supports), and a reranker such as maidalun1020/bce-reranker-base_v1 (a minimal sketch of this kind of embed-and-retrieve step is given after this list). Raptor was disabled (it did not improve the results with any LLM model I tried, despite the much bigger consumption of tokens; even in their paper, the improvement is very small), and all parameters were set to default otherwise, and either the .md file was used or both an .md and a .pdf of the same content. All of these models can be run offline, so this solution works in theory totally offline, since RAGFlow can run in a Docker container. However, currently RAGFlow does [not support reranker models from ollama](https://github.com/infiniflow/ragflow/issues/3680), but hopefully this will be fixed in the future (please upvote if you'd like to see that happen too!).
* ChatGPT-4o also succeeded using its RAG, and only with the iterative prompt (otherwise it fails severely at half of the questions). o1 cannot yet read .md attachments, so it remains untested.
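As referenced above, here is what the embed-and-retrieve core of such a pipeline looks like. This is NOT RAGFlow's actual implementation, just a minimal sketch; the chunk size, file name and query are illustrative, and the endpoint follows ollama's /api/embeddings API:

```python
import requests
import numpy as np

OLLAMA_EMBED = "http://localhost:11434/api/embeddings"

def embed(text: str) -> np.ndarray:
    """Embed one string with bge-m3 served by a local ollama instance."""
    r = requests.post(OLLAMA_EMBED, json={"model": "bge-m3", "prompt": text})
    return np.array(r.json()["embedding"])

def retrieve(query: str, chunks: list[str], top_k: int = 4) -> list[str]:
    """Rank chunks by cosine similarity to the query and keep the top_k."""
    q = embed(query)
    sims = []
    for c in chunks:
        v = embed(c)
        sims.append(float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))))
    ranked = sorted(zip(sims, chunks), key=lambda t: t[0], reverse=True)
    return [c for _, c in ranked[:top_k]]

doc = open("haystack.md").read()
chunks = [doc[i:i + 2000] for i in range(0, len(doc), 2000)]  # naive fixed-size chunking
for hit in retrieve("What is special about the Konservenata restaurant?", chunks):
    print(hit[:120], "...")
```

A real pipeline like RAGFlow adds smarter chunking, a reranker pass, and iterative querying on top of this, which is apparently what makes the difference on this test.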
## Closing thoughts

I was surprised that so many RAG solutions failed to retrieve more than 1 needle, and several retrieved none. A lot of solutions also hallucinated answers. Still, I was positively surprised that two solutions already exist, one being self-hostable, offline and fully open source (both the RAG system and the models), that successfully complete this hard retrieval task on long documents.

While [infini-attention](https://arxiv.org/abs/2404.07143v1) seems incredibly promising to drastically scale up the number of tokens (and hence the amount of data) that LLMs can process on reduced RAM budgets, it seems [all interest died down](https://www.reddit.com/r/LocalLLaMA/comments/1fp4s7e/comment/lovl4mk/) in trying to reproduce it after the famous [failed attempt](https://huggingface.co/blog/infini-attention) by HuggingFace's researchers. However, there are a few other implementations, and even a model that claims to have implemented it successfully, although published tests are lacking. Personally, I think pursuing this lead would be incredibly worthwhile for open-source LLMs, but I guess other teams have already tried and failed somehow, since no one came close to reproducing what Google did (and we know they did it, since we can see for ourselves how successfully the Gemini models, even the Flash ones, process very long documents and retrieve any information anywhere in them under 1M tokens). Here are the implementations I found:

* https://github.com/a-r-r-o-w/infini-attention
* https://github.com/vmarinowski/infini-attention
* https://github.com/jlamprou/Infini-Attention
* published model weights of a 10M-context Gemma-2B model, under only 32GB of memory! https://github.com/mustafaaljadery/gemma-2B-10M ([reddit post](https://www.reddit.com/r/LocalLLaMA/comments/1co3xu8/gemma_2b_with_10m_context_runs_on_32gb_of_memory/)) -- I wonder if the quantized model would run on a consumer-grade machine, but even then, I would be interested to know whether the full unquantized model does indeed retrieve multiple needles!
* There are also a few educational posts that explain the algorithm [here](https://getcoai.com/news/infini-attention-and-the-challenge-of-extending-ai-models-context-window/) and [here](https://kshitijkutumbe.medium.com/breaking-barriers-how-infini-attention-unlocks-limitless-context-for-transformers-48a347bbf81c).
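In fact, the core mechanism is compact enough to sketch. Below is a toy, single-head NumPy rendition of the compressive memory from the paper (the linear variant, without the delta rule, and without the learned gate that blends this memory readout with standard local attention). The dimensions and naming are mine; this illustrates the mechanism rather than being a usable implementation:

```python
import numpy as np

def sigma(x):
    # ELU(x) + 1, the nonlinearity the paper uses for the linear-attention memory.
    return np.where(x > 0, x + 1.0, np.exp(x))

def segment_step(Q, K, V, M, z):
    """Process one segment: read from, then update, the compressive memory.

    M (d x d) and z (d,) stay fixed-size no matter how many segments are seen,
    which is why RAM does not grow with context length."""
    sQ = sigma(Q)
    # Retrieval from memory accumulated over all previous segments.
    A_mem = (sQ @ M) / (sQ @ z + 1e-6)[:, None]
    # Fold the current segment's keys/values into memory.
    M = M + sigma(K).T @ V
    z = z + sigma(K).sum(axis=0)
    return A_mem, M, z

# Toy usage: head dimension 64, segments of 128 tokens.
d, seg = 64, 128
M, z = np.zeros((d, d)), np.zeros(d)
for _ in range(3):  # three segments of a long document
    Q, K, V = (np.random.randn(seg, d) for _ in range(3))
    A_mem, M, z = segment_step(Q, K, V, M, z)
print(A_mem.shape)  # (128, 64)
```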
There are also newer methods I did not try, such as ring-attention, but it seems to me most of them are much more limited than infini-attention in the scale and precision they can achieve, usually only a 4x or 8x extension at most, whereas infini-attention essentially achieves a 10-100x increase in context length while maintaining (or even improving?) recall. One exception is [YOCO](https://arxiv.org/pdf/2405.05254) (you only cache once), which claims to achieve a 1M context with near-perfect needle retrieval! Another method, [Mnemosyne](https://arxiv.org/abs/2409.17264), by Microsoft and others, claims to achieve multi-million-token context sizes.

Since I have no experience with RAG systems, I could not make my own pipeline, so it is certainly possible that better solutions can be built with custom pipelines (if you have a suggestion, please let me know!). IMHO, one of the big issues I had when looking for a RAG solution is that there are too many competing frameworks, and it's hard to know which one is best for what type of task. It seems most RAG frameworks are optimized for correlating lots of documents together, but very few for retrieving precise, accurate information from a few very long and dense documents.

If anyone has a suggestion of another system (especially an offline/self-hostable one) that may successfully complete this test under the mentioned RAM constraints, please share in a comment, and I will test and report the results. To make the ask concrete, the kind of bare-bones custom pipeline I have in mind is sketched below.
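A rough, untested sketch, to be clear about assumptions: the chunk sizes, the filename, and the embedding model are arbitrary examples (bge-m3 or any other embedding model could be swapped in), and there is no reranker or iterative reasoning step here:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text: str, size: int = 1200, overlap: int = 200) -> list[str]:
    # Fixed-size character chunks with overlap, so needles are not cut in half.
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def retrieve(question: str, chunks: list[str], model, k: int = 8) -> list[str]:
    # Embed question and chunks, rank by cosine similarity (vectors L2-normalized).
    q = model.encode([question], normalize_embeddings=True)
    c = model.encode(chunks, normalize_embeddings=True)
    scores = (c @ q.T).ravel()
    return [chunks[i] for i in np.argsort(-scores)[:k]]

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # example model
chunks = chunk(open("haystack.md").read())  # hypothetical input file
context = "\n---\n".join(retrieve("What are the two passphrases?", chunks, model))
prompt = f"Answer strictly from the context below.\n\n{context}\n\nQuestion: ..."
# `prompt` would then be sent to a local LLM, e.g. through ollama's API.
```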
2024-12-31T01:22:34
https://www.reddit.com/r/LocalLLaMA/comments/1hq36dn/practical_online_offline_rag_setups_for_long/
lrq3000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq36dn
false
null
t3_1hq36dn
/r/LocalLLaMA/comments/1hq36dn/practical_online_offline_rag_setups_for_long/
false
false
self
290
{'enabled': False, 'images': [{'id': 'I0WwmmZCW8LjjKqvQWLLZ8pF1YE__qLwzRhVxjSKXnk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AAzYPyx7nm5oYhqvavpZubtqDSLe64F0-397iKf_QjM.jpg?width=108&crop=smart&auto=webp&s=554882bf178aa496be34155b66f0d6a4f34b2f1d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AAzYPyx7nm5oYhqvavpZubtqDSLe64F0-397iKf_QjM.jpg?width=216&crop=smart&auto=webp&s=9860a4f684754c6ee7b759e8f844544207b8400b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AAzYPyx7nm5oYhqvavpZubtqDSLe64F0-397iKf_QjM.jpg?width=320&crop=smart&auto=webp&s=7658f472e008cf3264f87c62d86bc8f8fe9da536', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AAzYPyx7nm5oYhqvavpZubtqDSLe64F0-397iKf_QjM.jpg?width=640&crop=smart&auto=webp&s=6eabb87b2d28dca7f7985e977b2044a177946deb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AAzYPyx7nm5oYhqvavpZubtqDSLe64F0-397iKf_QjM.jpg?width=960&crop=smart&auto=webp&s=22cd1b3730e372d476ce020b0d07df5ffdb8a455', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AAzYPyx7nm5oYhqvavpZubtqDSLe64F0-397iKf_QjM.jpg?width=1080&crop=smart&auto=webp&s=914846706752a9471b9bc1dcf14191a60977b718', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AAzYPyx7nm5oYhqvavpZubtqDSLe64F0-397iKf_QjM.jpg?auto=webp&s=19aca83a008db62942cf62582e17c8338535e379', 'width': 1200}, 'variants': {}}]}
Performance of LLMs on Advent of code 2024
9
2024-12-31T01:27:21
https://www.jerpint.io/blog/advent-of-code-llms/
foldl-li
jerpint.io
1970-01-01T00:00:00
0
{}
1hq39x8
false
null
t3_1hq39x8
/r/LocalLLaMA/comments/1hq39x8/performance_of_llms_on_advent_of_code_2024/
false
false
https://a.thumbs.redditm…WUve59XWXf58.jpg
9
{'enabled': False, 'images': [{'id': 'YGab8AkvlFR621vGpVm-NCJkfKpyFkgaw0mTfeuVl-M', 'resolutions': [{'height': 90, 'url': 'https://external-preview.redd.it/TjiTlgiWbsxdnyY98cs5AlXGBkpTGcIQ4z4MNTlZ1Z8.jpg?width=108&crop=smart&auto=webp&s=4112e7ee4886e13aee1e863d1a7b280a2fd60027', 'width': 108}, {'height': 180, 'url': 'https://external-preview.redd.it/TjiTlgiWbsxdnyY98cs5AlXGBkpTGcIQ4z4MNTlZ1Z8.jpg?width=216&crop=smart&auto=webp&s=707b8327f2902a91f4cf83feb60da3dd748be6e3', 'width': 216}, {'height': 267, 'url': 'https://external-preview.redd.it/TjiTlgiWbsxdnyY98cs5AlXGBkpTGcIQ4z4MNTlZ1Z8.jpg?width=320&crop=smart&auto=webp&s=805aed05f46179a5af9352580ce971058b47d648', 'width': 320}, {'height': 534, 'url': 'https://external-preview.redd.it/TjiTlgiWbsxdnyY98cs5AlXGBkpTGcIQ4z4MNTlZ1Z8.jpg?width=640&crop=smart&auto=webp&s=ce8617ceb90cb689128f2f4708d54a21c6fb469b', 'width': 640}, {'height': 801, 'url': 'https://external-preview.redd.it/TjiTlgiWbsxdnyY98cs5AlXGBkpTGcIQ4z4MNTlZ1Z8.jpg?width=960&crop=smart&auto=webp&s=2d266cfd4c4da8e51b7f31b1fcc47aeac1a3fad1', 'width': 960}, {'height': 901, 'url': 'https://external-preview.redd.it/TjiTlgiWbsxdnyY98cs5AlXGBkpTGcIQ4z4MNTlZ1Z8.jpg?width=1080&crop=smart&auto=webp&s=b9cc16f689b2b9962ae5953693c6459697d9718f', 'width': 1080}], 'source': {'height': 1002, 'url': 'https://external-preview.redd.it/TjiTlgiWbsxdnyY98cs5AlXGBkpTGcIQ4z4MNTlZ1Z8.jpg?auto=webp&s=b279fd80f47916e1367353b636998f9c22825235', 'width': 1200}, 'variants': {}}]}
Using llama3.2:1B
10
Hello guys! I've been learning and making projects/chat bots using Python and Ollama. It's been fun being able to run the 1B model everywhere, even on my Raspberry Pi! However, I now want to try using the model downloaded from Meta directly, without Ollama, and I have no idea where to start. I've already downloaded the files directly from Meta but don't know how to use them. Any advice would help! Thanks!
2024-12-31T02:10:28
https://www.reddit.com/r/LocalLLaMA/comments/1hq45a4/using_llama321b/
SilverBoi01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq45a4
false
null
t3_1hq45a4
/r/LocalLLaMA/comments/1hq45a4/using_llama321b/
false
false
self
10
null
Best coding LLM - Mac M3 48G
12
Hey folks, I have GitHub Copilot at work, and while it's great to have it in VSCode to write some unit tests, I find it fairly useless compared to Claude for any coding tasks. I was wondering what everyone thinks is the best model that I can run with ollama on my 48GB M3. The focus is primarily programming, so I would be most interested in that aspect. Thanks!
2024-12-31T02:53:46
https://www.reddit.com/r/LocalLLaMA/comments/1hq4z2y/best_coding_llm_mac_m3_48g/
keftes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq4z2y
false
null
t3_1hq4z2y
/r/LocalLLaMA/comments/1hq4z2y/best_coding_llm_mac_m3_48g/
false
false
self
12
null
Setting up for Local Llama & Semantic Kernel development using VSCode
1
[removed]
2024-12-31T03:16:38
https://www.reddit.com/r/LocalLLaMA/comments/1hq5esv/setting_up_for_local_llama_semantic_kernel/
AIForOver50Plus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq5esv
false
null
t3_1hq5esv
/r/LocalLLaMA/comments/1hq5esv/setting_up_for_local_llama_semantic_kernel/
false
false
self
1
{'enabled': False, 'images': [{'id': 'YSpStdXqyIF1V0IsUtMtxqKpFCZJVyJPwANZscf_1bQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/KcUQoj5jCwe6WyRprNAPsOQZ-VeC8qey3lwiR2iRl3E.jpg?width=108&crop=smart&auto=webp&s=4d7bd2506714db0e1a14973be58557b04a8b5246', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/KcUQoj5jCwe6WyRprNAPsOQZ-VeC8qey3lwiR2iRl3E.jpg?width=216&crop=smart&auto=webp&s=86777b12d811899142828931c7b60f23e1bc3979', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/KcUQoj5jCwe6WyRprNAPsOQZ-VeC8qey3lwiR2iRl3E.jpg?width=320&crop=smart&auto=webp&s=bca4f7270a4ad5ba846d44c5412f4c184e475b83', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/KcUQoj5jCwe6WyRprNAPsOQZ-VeC8qey3lwiR2iRl3E.jpg?auto=webp&s=7c341f1b3885128292fe60e33405125e64cc7cd6', 'width': 480}, 'variants': {}}]}
Quantum-Enhanced LLaMA Solves IMO 2024 Problem 1: A Deep Dive into Mathematical Reasoning Through Quantum Computing
1
[removed]
2024-12-31T03:27:26
https://www.reddit.com/r/LocalLLaMA/comments/1hq5m2z/quantumenhanced_llama_solves_imo_2024_problem_1_a/
Nandakishor_ml
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq5m2z
false
null
t3_1hq5m2z
/r/LocalLLaMA/comments/1hq5m2z/quantumenhanced_llama_solves_imo_2024_problem_1_a/
false
false
self
1
{'enabled': False, 'images': [{'id': 'PQwS354jLj3iHsKkU6f2IBQ51OL_ev_OFOAjvbuFMJo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?width=108&crop=smart&auto=webp&s=7d0a57f4604b91049ed8cf462e63b5feaf0eaa85', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?width=216&crop=smart&auto=webp&s=04a1dff44d429bb05225a304c38d0996d53ed28c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?width=320&crop=smart&auto=webp&s=be4d355392f63a010645cafd22029e22c5314a6c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?width=640&crop=smart&auto=webp&s=47b424afc0e5c0d8779d93b03d5a58b6134499bb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?width=960&crop=smart&auto=webp&s=f39edcef07f7e078f307ac0e94cf43bae2b59fd9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?width=1080&crop=smart&auto=webp&s=41d20b386798b9873a44c71891e15a7608f2a93c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/76JsuRjMgKESCNRu6euIyP8SvG41tcHOlIiHOEri-PQ.jpg?auto=webp&s=9d9321f5aa535d3bebaad8f67910cf69f67fb07f', 'width': 1200}, 'variants': {}}]}
Gambling with language models: One clueless investor's attempt at beating the stock market with ModernBert
73
2024-12-31T03:36:19
https://yehudacohen.substack.com/p/gambling-with-language-models
Manwith2plans
yehudacohen.substack.com
1970-01-01T00:00:00
0
{}
1hq5s89
false
null
t3_1hq5s89
/r/LocalLLaMA/comments/1hq5s89/gambling_with_language_models_one_clueless/
false
false
default
73
{'enabled': False, 'images': [{'id': 'iyK-gcpcvacQdCNwAkj36UAKBrGuh8nnWkhnJ87DLRU', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/bFtJX-NtvWp0D4TeNp_tc_u6I5n9_fhPH-saUb2oMjE.jpg?width=108&crop=smart&auto=webp&s=3ad5c429c7f95617bd6f1691d4ed4ca08f5886b3', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/bFtJX-NtvWp0D4TeNp_tc_u6I5n9_fhPH-saUb2oMjE.jpg?width=216&crop=smart&auto=webp&s=f9461a95f7f59a7123c098dccb1548fb7448edff', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/bFtJX-NtvWp0D4TeNp_tc_u6I5n9_fhPH-saUb2oMjE.jpg?width=320&crop=smart&auto=webp&s=392a996d3bcc0509a96fedaa43074844e6280d56', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/bFtJX-NtvWp0D4TeNp_tc_u6I5n9_fhPH-saUb2oMjE.jpg?width=640&crop=smart&auto=webp&s=cd0d51f5ce5b11eea6efb1dddefc76e82b870162', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/bFtJX-NtvWp0D4TeNp_tc_u6I5n9_fhPH-saUb2oMjE.jpg?width=960&crop=smart&auto=webp&s=80e774d021119c8b7fa8e5efbc64dbbef28cadce', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bFtJX-NtvWp0D4TeNp_tc_u6I5n9_fhPH-saUb2oMjE.jpg?auto=webp&s=3c406688d39fc9c66f23536cc662908c9f981933', 'width': 1024}, 'variants': {}}]}
Finetuning model for generating descriptions of expeditions
1
[removed]
2024-12-31T03:45:01
https://www.reddit.com/r/LocalLLaMA/comments/1hq5y3y/finetuning_model_for_generating_descriptions_of/
MariaFitz345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hq5y3y
false
null
t3_1hq5y3y
/r/LocalLLaMA/comments/1hq5y3y/finetuning_model_for_generating_descriptions_of/
false
false
self
1
null