Dataset schema (column: type, observed range):
title: string (length 1–300)
score: int64 (0–8.54k)
selftext: string (length 0–40k)
created: timestamp[ns] (2023-04-01 04:30:41 – 2025-06-30 03:16:29)
url: string (length 0–878)
author: string (length 3–20)
domain: string (length 0–82)
edited: timestamp[ns] (1970-01-01 00:00:00 – 2025-06-26 17:30:18)
gilded: int64 (0–2)
gildings: string (7 classes)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646–1.8k)
name: string (length 10)
permalink: string (length 33–82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4–213)
ups: int64 (0–8.54k)
preview: string (length 301–5.01k)
Deepseek V3 via Hyperbolic is $0.25/1M even though inputs/outputs are not stored.
0
Hi, today I saw that Deepseek V3 is available on Hyperbolic at a price of $0.25/1M tokens. I got acceptable performance in a few trials. Moreover, when I checked the ToS, I read that inputs and outputs are not stored. If this is true, the pricing seems too good to be true. 131k context is great. And since the model was already trained in 8-bit, FP8 quantization shouldn't be a problem either. So what is the catch? Am I missing something? https://preview.redd.it/yxhaqhgjucde1.png?width=550&format=png&auto=webp&s=8d7018eb4670748facd553268b6bc090e7762926
2025-01-16T13:11:31
https://www.reddit.com/r/LocalLLaMA/comments/1i2okih/deepseek_v3_via_hyperbolic_is_0251m_despite_of/
gkon7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2okih
false
null
t3_1i2okih
/r/LocalLLaMA/comments/1i2okih/deepseek_v3_via_hyperbolic_is_0251m_despite_of/
false
false
https://b.thumbs.redditm…mGzBkFHkaqtc.jpg
0
null
Need a SaaS idea for my final year project?
1
[removed]
2025-01-16T13:17:17
https://www.reddit.com/r/LocalLLaMA/comments/1i2ooit/need_any_saas_idea_for_my_final_year_project/
Own-Signature-2439
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2ooit
false
null
t3_1i2ooit
/r/LocalLLaMA/comments/1i2ooit/need_any_saas_idea_for_my_final_year_project/
false
false
self
1
null
What are your go-to OS models for building fun side projects?
1
[removed]
2025-01-16T13:40:01
https://www.reddit.com/r/LocalLLaMA/comments/1i2p4ki/whats_your_goto_os_models_to_build_side_fun/
Acceptable-Hotel-680
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2p4ki
false
null
t3_1i2p4ki
/r/LocalLLaMA/comments/1i2p4ki/whats_your_goto_os_models_to_build_side_fun/
false
false
self
1
null
Why can't GPUs have removable memory like PC RAM?
188
I was thinking: why doesn't Intel, Nvidia, or AMD come up with the idea of expandable memory? I get that GDDR6 is pricey, but if one of them were to create modules and sell them, wouldn't they be able to profit? Imagine if Intel came out with this first; I bet most of us would max out the VRAM, and the whole community would push away from Nvidia and create better or comparable frameworks other than CUDA. Thoughts?
2025-01-16T13:43:00
https://www.reddit.com/r/LocalLLaMA/comments/1i2p6n3/why_cant_gpus_have_removable_memory_like_pc_ram/
Delicious-Farmer-234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2p6n3
false
null
t3_1i2p6n3
/r/LocalLLaMA/comments/1i2p6n3/why_cant_gpus_have_removable_memory_like_pc_ram/
false
false
self
188
null
How to use / implement agentic AI frameworks for pipelines / task-based processes?
0
Hi, I'm looking into optimizing existing business processes in marketing, sales, etc. Usually, processes look a bit like a process diagram. The closest thing I can think of to partly automate things and interact with all the required software systems would be a workflow automation tool (e.g. n8n): work with status values, retrieve data, put the data into an AI and ask it to do the task, enrich the data (or similar), and update it in the source system. That, in turn, triggers step two of the process. Agentic frameworks seem to be more creative and not just part of a process? CrewAI seems to be the closest one with tasks, compared to others? For a concrete example: there is a new lead with an email address. The process would be: 1. Is there a website for this email? 2. Is there any info about people on the website? 3. What org structure does the company have? 4. In which of the following X industries is the lead? And then write the information, if retrieved, into the CRM. Ideally as low code. What would be a good approach in a case like this?
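A minimal sketch of this kind of sequential lead-enrichment pipeline in plain Python. All helper names (ask_llm, update_crm) and the Lead fields are placeholders introduced only to illustrate the step-by-step flow described above, not the API of any particular framework:
```
# Hypothetical lead-enrichment pipeline: each step is one "task" in the process diagram.
from dataclasses import dataclass, field

@dataclass
class Lead:
    email: str
    website: str | None = None
    people: list[str] = field(default_factory=list)
    org_structure: str | None = None
    industry: str | None = None

def ask_llm(prompt: str) -> str:
    # Placeholder for a call to whatever local or hosted LLM is in use.
    return f"[LLM answer to: {prompt[:40]}...]"

def enrich_lead(lead: Lead, industries: list[str]) -> Lead:
    # Step 1: derive a candidate website from the email domain.
    lead.website = "https://" + lead.email.split("@")[-1]
    # Step 2: ask the LLM for people info found on the site (actual fetching/scraping omitted).
    lead.people = ask_llm(f"List people mentioned on {lead.website}").splitlines()
    # Step 3: summarise the org structure.
    lead.org_structure = ask_llm(f"Describe the org structure of the company behind {lead.website}")
    # Step 4: classify the lead into one of the given industries.
    lead.industry = ask_llm(f"Which industry fits {lead.website}? Options: {', '.join(industries)}")
    return lead

def update_crm(lead: Lead) -> None:
    # Placeholder: write the enriched fields back to the CRM, which triggers the next process step.
    print(lead)

if __name__ == "__main__":
    update_crm(enrich_lead(Lead(email="jane@example.com"), ["SaaS", "Retail", "Manufacturing"]))
```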
2025-01-16T13:45:18
https://www.reddit.com/r/LocalLLaMA/comments/1i2p8ah/how_to_use_implement_agentic_ai_frameworks_for/
Chris8080
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2p8ah
false
null
t3_1i2p8ah
/r/LocalLLaMA/comments/1i2p8ah/how_to_use_implement_agentic_ai_frameworks_for/
false
false
self
0
null
Seems like used 3090 price is up near $850/$900?
78
I'm looking for a bit of a sanity check here; it seems like used 3090's on eBay are up from around $650-$700 two weeks ago to $850-$1000 depending on the model after the disappointing 5090 announcement. Is this still a decent value proposition for an inference box? I'm about to pull the trigger on an H12SSL-i, but am on the fence about whether to wait for a potentially non-existent price drop on 3090 after 5090's are actually available and people try to flip their current cards. Short term goal is 70b Q4 inference server and NVLink for training non-language models. Any thoughts from secondhand GPU purchasing veterans?
2025-01-16T13:50:36
https://www.reddit.com/r/LocalLLaMA/comments/1i2pbyp/seems_like_used_3090_price_is_up_near_850900/
Synaps3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2pbyp
false
null
t3_1i2pbyp
/r/LocalLLaMA/comments/1i2pbyp/seems_like_used_3090_price_is_up_near_850900/
false
false
self
78
null
What are your go-to OS models for building fun side projects?
1
Hey everyone, I'm looking for some recommendations on choosing a good OS LLM for a side project. I recently got some free API credits to experiment with OS LLMs and was thinking of trying out models like Llama 3.3, Qwen 2.5, or DeepSeek-V2 for a small side project. Based on your experience, which models are great for fun/creative projects, around: - code-related outputs - writing-related outputs - creativity?
2025-01-16T13:54:35
https://www.reddit.com/r/LocalLLaMA/comments/1i2pen9/whats_your_goto_os_models_to_build_side_fun/
codes_astro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2pen9
false
null
t3_1i2pen9
/r/LocalLLaMA/comments/1i2pen9/whats_your_goto_os_models_to_build_side_fun/
false
false
self
1
null
Now you can run InternLM3 8B on a Qualcomm NPU with PowerServe!
32
We introduce PowerServe, a serving framework designed specifically for the Qualcomm NPU. We already support Qwen, Llama, and InternLM3 8B. GitHub: powerserve-project/PowerServe: High-speed and easy-to-use LLM serving framework for local deployment (github.com) Current open-source serving frameworks perform poorly in prefill speed on mobile devices, mainly due to limited CPU computing power. So we designed PowerServe specifically for the Qualcomm NPU; it achieves a prefill speed of 1000 tokens per second for 3B models. This represents a 100x speedup compared to llama.cpp's 15 tokens per second. InternLM3 8B runs at 250 tokens/s, significantly accelerating prefill. [Running InternLM3 8B with Qualcomm 8Gen3 NPU](https://reddit.com/link/1i2pvpc/video/y93hx9ss6dde1/player) [Performance comparison between Llama.cpp and PowerServe.](https://preview.redd.it/pzxssjtw6dde1.png?width=2056&format=png&auto=webp&s=2918387653f5940983a6718fd10bf9659d458e52)
2025-01-16T14:17:23
https://www.reddit.com/r/LocalLLaMA/comments/1i2pvpc/now_you_can_running_internlm3_8b_using_qualcomm/
Zealousideal_Bad_52
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2pvpc
false
null
t3_1i2pvpc
/r/LocalLLaMA/comments/1i2pvpc/now_you_can_running_internlm3_8b_using_qualcomm/
false
false
https://b.thumbs.redditm…wWpx6Nr_BBOk.jpg
32
{'enabled': False, 'images': [{'id': 'OCr2jJ8K6HIPoei8HiA8bfJG9ukTY_y-Cryax_JXUuE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vqjX5SwLNlGMS1SSiHAT804_sRvPBHuMxyMU2GJpb1U.jpg?width=108&crop=smart&auto=webp&s=b5ea33c7adf2d08ea7ab08de0c63d3eafe1f7bd5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vqjX5SwLNlGMS1SSiHAT804_sRvPBHuMxyMU2GJpb1U.jpg?width=216&crop=smart&auto=webp&s=7c3465b2b7a7460e77161aeb16fd82e22270284f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vqjX5SwLNlGMS1SSiHAT804_sRvPBHuMxyMU2GJpb1U.jpg?width=320&crop=smart&auto=webp&s=7ec908691b1ae9e1dff6c39143211a2a77e80f43', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vqjX5SwLNlGMS1SSiHAT804_sRvPBHuMxyMU2GJpb1U.jpg?width=640&crop=smart&auto=webp&s=6fe0aaff34a61b4f8f3fa0a386d0f9d14fa68f78', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vqjX5SwLNlGMS1SSiHAT804_sRvPBHuMxyMU2GJpb1U.jpg?width=960&crop=smart&auto=webp&s=8eadcfd5369bbf387ffe249bb01fe92d24669b5e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vqjX5SwLNlGMS1SSiHAT804_sRvPBHuMxyMU2GJpb1U.jpg?width=1080&crop=smart&auto=webp&s=2c3422807d685e984595d1379248dc1f65f675ba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vqjX5SwLNlGMS1SSiHAT804_sRvPBHuMxyMU2GJpb1U.jpg?auto=webp&s=c56f704bf80e0ee425c72c16c0557f39ec16458a', 'width': 1200}, 'variants': {}}]}
O1 instruction-following dataset ?
0
I feel like I'm searching very, very hard, but I can't find any at all. Any help would be greatly appreciated.
2025-01-16T14:18:50
https://www.reddit.com/r/LocalLLaMA/comments/1i2pwsg/o1_instructionfollowing_dataset/
Wonderful-Excuse4922
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2pwsg
false
null
t3_1i2pwsg
/r/LocalLLaMA/comments/1i2pwsg/o1_instructionfollowing_dataset/
false
false
self
0
null
🚀 Launching OpenLIT: Open source dashboard for AI engineering & LLM data
1
[removed]
2025-01-16T14:55:18
https://www.reddit.com/r/LocalLLaMA/comments/1i2qodf/launching_openlit_open_source_dashboard_for_ai/
patcher99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2qodf
false
null
t3_1i2qodf
/r/LocalLLaMA/comments/1i2qodf/launching_openlit_open_source_dashboard_for_ai/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uHsAav9RKiWd3kTLwtLa5v_12D_QwDFSGh0CSnfUQ8I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NjK61tuTEC6gpI61YcC-wDZdOVCnPWBCfptioT_wqSc.jpg?width=108&crop=smart&auto=webp&s=d4e022fd9037174bb743344d190673a76d665920', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NjK61tuTEC6gpI61YcC-wDZdOVCnPWBCfptioT_wqSc.jpg?width=216&crop=smart&auto=webp&s=f755039e0dfe39a4163e3b21b136f0ca931320dc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NjK61tuTEC6gpI61YcC-wDZdOVCnPWBCfptioT_wqSc.jpg?width=320&crop=smart&auto=webp&s=766e6388b5fb49548514b5ba1614ffd4a5e9a3c5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NjK61tuTEC6gpI61YcC-wDZdOVCnPWBCfptioT_wqSc.jpg?width=640&crop=smart&auto=webp&s=a4070d1d9c3e5b48718e3296e081f5aaaa4b54e6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NjK61tuTEC6gpI61YcC-wDZdOVCnPWBCfptioT_wqSc.jpg?width=960&crop=smart&auto=webp&s=ef533f5f662bf7fc4347195bbe4cabef5ea84390', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/NjK61tuTEC6gpI61YcC-wDZdOVCnPWBCfptioT_wqSc.jpg?auto=webp&s=1e7df54523c4b5507738936b142566c2315b5d90', 'width': 1024}, 'variants': {}}]}
Introducing Kokoro.js: a new JavaScript library for running Kokoro TTS (82M) locally in the browser w/ WASM.
331
2025-01-16T14:55:36
https://v.redd.it/uv6trvpgddde1
xenovatech
v.redd.it
1970-01-01T00:00:00
0
{}
1i2qokt
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/uv6trvpgddde1/DASHPlaylist.mpd?a=1739631352%2CZjk1OTBkYzU4MmIwNGQyN2JmYTFjOTdhZjc1YTNkMjg5ODMyZjc1OGNiZmEyODVkYWVmODkwZjI2MTFiMGZkNQ%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/uv6trvpgddde1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/uv6trvpgddde1/HLSPlaylist.m3u8?a=1739631352%2COGJhZGI3YzVmYmYxOTIyZTU2MjNmMzVmNTFhNjE1NjIxMGI5NWY1MTk1ODkzOTZlMjI1YzEyZDQ0ZDNkZmVmYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/uv6trvpgddde1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1i2qokt
/r/LocalLLaMA/comments/1i2qokt/introducing_kokorojs_a_new_javascript_library_for/
false
false
https://external-preview…dcbb9d8c1d7d16fb
331
{'enabled': False, 'images': [{'id': 'c2Y2dHB4cGdkZGRlMblxftDnj1ubBLQxBS031TPNonm7GOuytqVIBIDUD3XU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/c2Y2dHB4cGdkZGRlMblxftDnj1ubBLQxBS031TPNonm7GOuytqVIBIDUD3XU.png?width=108&crop=smart&format=pjpg&auto=webp&s=4244eff69a553137faffb80c8abbd284705748f5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/c2Y2dHB4cGdkZGRlMblxftDnj1ubBLQxBS031TPNonm7GOuytqVIBIDUD3XU.png?width=216&crop=smart&format=pjpg&auto=webp&s=19748fca404e56e33c7528151e973b8a68d1cb7b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/c2Y2dHB4cGdkZGRlMblxftDnj1ubBLQxBS031TPNonm7GOuytqVIBIDUD3XU.png?width=320&crop=smart&format=pjpg&auto=webp&s=33707a132e5931f83ba3f21d1d7a005fc83e756f', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/c2Y2dHB4cGdkZGRlMblxftDnj1ubBLQxBS031TPNonm7GOuytqVIBIDUD3XU.png?width=640&crop=smart&format=pjpg&auto=webp&s=130f4796673724f5f243adf611b2136519fa9715', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/c2Y2dHB4cGdkZGRlMblxftDnj1ubBLQxBS031TPNonm7GOuytqVIBIDUD3XU.png?width=960&crop=smart&format=pjpg&auto=webp&s=701ce3a7207fed0b68c0f61194951b8bb91745bf', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/c2Y2dHB4cGdkZGRlMblxftDnj1ubBLQxBS031TPNonm7GOuytqVIBIDUD3XU.png?width=1080&crop=smart&format=pjpg&auto=webp&s=dfb2455f114a1354a16c9a720e354e9613c04756', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/c2Y2dHB4cGdkZGRlMblxftDnj1ubBLQxBS031TPNonm7GOuytqVIBIDUD3XU.png?format=pjpg&auto=webp&s=6cc871624d8e57b6070d03652bdaf9fae69c4db5', 'width': 1080}, 'variants': {}}]}
Help with transformers pipeline and Nemotron usage
0
Hello everyone, I have been exploring Nemotron with the transformers library alone, and I am looking forward to fine-tuning it with a specific question-answering dataset. Still, I have been struggling to get coherent text out of it. Here is a snippet of the code I have been trying:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, pipeline

model_name = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"
# final_model_path = "model/test_model-3"

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'left'

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    cache_dir='/nvme0n1-disk/user'
)

# model.load_adapter(final_model_path, adapter_name="adapter")
# model.enable_adapters()

model.config.pad_token_id = tokenizer.pad_token_id
model.config.use_cache = False

model_pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
response = model_pipe('Tell me a joke', max_new_tokens=32, do_sample=False)
print(response)
```
And here is the text I get out of the pipeline: [{'generated_text': 'Tell me a joke!DER!DERDERDER!!!!!!!!!!!!!!!!!!!!!!!!!!'}] If I try the question 'What colour is the sky?' the output is the same kind of thing: [{'generated_text': 'What colour is the sky?!!DER!!!!!!!!!!!!!!!!!!!!!!!!!!!!!'}] Here is another example: [{'generated_text': 'What year was the first iPhone launched?!!!!DER!!!!!!!!!!!!!!!!!!!!!!!!!!!'}] I am new to this task and do not know what's happening. Sometimes the output is just a spam of '!!!!!!', other times it is a run of backslashes. I managed to get a coherent answer once or twice, but I could not find any pattern. I tried to clear the cache, but it did not solve the problem. Can you give me some ideas?
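For reference, instruct-tuned checkpoints like this one generally expect their chat template rather than a raw string prompt; a minimal sketch of that pattern with the same pipeline (assuming `model_pipe` and `tokenizer` from the snippet above are already loaded, and that the tokenizer ships a chat template, as Llama-3.1-based models do):
```
# Sketch: wrap the prompt in the model's chat template before generation.
# Assumes `model_pipe` and `tokenizer` from the snippet above are already in scope.
messages = [{"role": "user", "content": "Tell me a joke"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = model_pipe(prompt, max_new_tokens=64, do_sample=False, return_full_text=False)
print(response[0]["generated_text"])
```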
2025-01-16T14:57:28
https://www.reddit.com/r/LocalLLaMA/comments/1i2qq0v/help_with_transformers_pipeline_and_nemotron_usage/
Fondurex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2qq0v
false
null
t3_1i2qq0v
/r/LocalLLaMA/comments/1i2qq0v/help_with_transformers_pipeline_and_nemotron_usage/
false
false
self
0
null
Haystack vs LangChain: Which Framework Should You Choose for Your AI Projects?
1
[removed]
2025-01-16T15:40:28
https://www.reddit.com/r/LocalLLaMA/comments/1i2ror2/haystack_vs_langchain_which_framework_should_you/
Direct_Examination_2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2ror2
false
null
t3_1i2ror2
/r/LocalLLaMA/comments/1i2ror2/haystack_vs_langchain_which_framework_should_you/
false
false
self
1
null
Fine-tune Wizard Vicuna
1
[removed]
2025-01-16T15:49:47
https://www.reddit.com/r/LocalLLaMA/comments/1i2rwbd/finetune_wizard_vicuna/
DullAd8434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2rwbd
false
null
t3_1i2rwbd
/r/LocalLLaMA/comments/1i2rwbd/finetune_wizard_vicuna/
false
false
self
1
null
New Open Source Model by AI Dungeon Trained to Let You Fail and Die.
1
One frustration we’ve heard from many AI Dungeon players is that AI models are too nice, never letting them fail or die. So we decided to fix that. We trained a model we call Wayfarer where adventures are much more challenging with failure and death happening frequently. We released it on AI Dungeon several weeks ago and players loved it, so we’ve decided to open source the model for anyone to experience unforgivingly brutal AI adventures! Would love to hear your feedback as we plan to continue to improve and open source similar models. [https://huggingface.co/LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
2025-01-16T16:44:25
https://www.reddit.com/r/LocalLLaMA/comments/1i2t6gq/new_open_source_model_by_ai_dungeon_trained_let/
Nick_AIDungeon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2t6gq
false
null
t3_1i2t6gq
/r/LocalLLaMA/comments/1i2t6gq/new_open_source_model_by_ai_dungeon_trained_let/
false
false
self
1
{'enabled': False, 'images': [{'id': 'dfXFVex1RZb0J4Tzjl4qlMOOe8rgEhD7ZT2Osnr_R3s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?width=108&crop=smart&auto=webp&s=1752a52b5ae71b6f45f220a73253d31d8140e5cb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?width=216&crop=smart&auto=webp&s=59fcdaeacbe15210726bc9d203b5c407e9b2a7ab', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?width=320&crop=smart&auto=webp&s=bd3cbc2166e77778bcd83842b7a8006afc72a1e8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?width=640&crop=smart&auto=webp&s=b4e0c7c2b4ca39f93de51f07eb093a2f49bbb220', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?width=960&crop=smart&auto=webp&s=6c7cd76736ced597d005f80bfc7b9ad3e99cd7f1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?width=1080&crop=smart&auto=webp&s=60184090f4b273e2c71876278e6dc08a6b25c673', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?auto=webp&s=2d760d98bbf4c159fb4f9fe5e153227fab4eaab7', 'width': 1200}, 'variants': {}}]}
Introducing Wayfarer: a brutally challenging roleplay model trained to let you fail and die.
467
One frustration we’ve heard from many AI Dungeon players is that AI models are too nice, never letting them fail or die. So we decided to fix that. We trained a model we call Wayfarer where adventures are much more challenging with failure and death happening frequently. We released it on AI Dungeon several weeks ago and players loved it, so we’ve decided to open source the model for anyone to experience unforgivingly brutal AI adventures! Would love to hear your feedback as we plan to continue to improve and open source similar models. [https://huggingface.co/LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
2025-01-16T16:46:20
https://www.reddit.com/r/LocalLLaMA/comments/1i2t82i/introducing_wayfarer_a_brutally_challenging/
Nick_AIDungeon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2t82i
false
null
t3_1i2t82i
/r/LocalLLaMA/comments/1i2t82i/introducing_wayfarer_a_brutally_challenging/
false
false
self
467
{'enabled': False, 'images': [{'id': 'dfXFVex1RZb0J4Tzjl4qlMOOe8rgEhD7ZT2Osnr_R3s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?width=108&crop=smart&auto=webp&s=1752a52b5ae71b6f45f220a73253d31d8140e5cb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?width=216&crop=smart&auto=webp&s=59fcdaeacbe15210726bc9d203b5c407e9b2a7ab', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?width=320&crop=smart&auto=webp&s=bd3cbc2166e77778bcd83842b7a8006afc72a1e8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?width=640&crop=smart&auto=webp&s=b4e0c7c2b4ca39f93de51f07eb093a2f49bbb220', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?width=960&crop=smart&auto=webp&s=6c7cd76736ced597d005f80bfc7b9ad3e99cd7f1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?width=1080&crop=smart&auto=webp&s=60184090f4b273e2c71876278e6dc08a6b25c673', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EbN08YejXmjCogEAl7aVDvq1yz7MPCUt8h-hzbZx1OA.jpg?auto=webp&s=2d760d98bbf4c159fb4f9fe5e153227fab4eaab7', 'width': 1200}, 'variants': {}}]}
FREE LLM API?
0
I have a mega project and I need a third LLM that is free. Can anyone recommend one?
2025-01-16T16:51:51
https://www.reddit.com/r/LocalLLaMA/comments/1i2tcrg/free_llm_api/
Lost_midia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2tcrg
false
null
t3_1i2tcrg
/r/LocalLLaMA/comments/1i2tcrg/free_llm_api/
false
false
self
0
null
Creating a random post
1
To see if I can create a random post
2025-01-16T17:06:09
https://www.reddit.com/r/LocalLLaMA/comments/1i2tozz/creating_a_random_post/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2tozz
false
null
t3_1i2tozz
/r/LocalLLaMA/comments/1i2tozz/creating_a_random_post/
false
false
self
1
null
Cohere For AI Open Science Community launches new LLM cohort focused on Multilingual Long-Context Understanding
1
[removed]
2025-01-16T17:08:29
https://www.reddit.com/r/LocalLLaMA/comments/1i2tqz1/cohere_for_ai_open_science_community_launches_new/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2tqz1
false
null
t3_1i2tqz1
/r/LocalLLaMA/comments/1i2tqz1/cohere_for_ai_open_science_community_launches_new/
false
false
self
1
null
Cohere For AI Open Science Community launches new LLM cohort focused on Multilingual Long-Context Understanding
1
[removed]
2025-01-16T17:09:21
https://www.reddit.com/r/LocalLLaMA/comments/1i2trny/cohere_for_ai_open_science_community_launches_new/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2trny
false
null
t3_1i2trny
/r/LocalLLaMA/comments/1i2trny/cohere_for_ai_open_science_community_launches_new/
false
false
self
1
null
Cohere For AI's Open Science Community announces a new LLM research cohort dedicated to advancing multilingual long-context understanding
1
[removed]
2025-01-16T17:12:33
https://www.reddit.com/r/LocalLLaMA/comments/1i2tu9w/cohere_for_ais_open_science_community_announces_a/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2tu9w
false
null
t3_1i2tu9w
/r/LocalLLaMA/comments/1i2tu9w/cohere_for_ais_open_science_community_announces_a/
false
false
self
1
null
Cohere's Open Science Community announces a new LLM research cohort dedicated to advancing multilingual long-context understanding
1
[removed]
2025-01-16T17:13:22
https://www.reddit.com/r/LocalLLaMA/comments/1i2tuyi/coheres_open_science_community_announces_a_new/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2tuyi
false
null
t3_1i2tuyi
/r/LocalLLaMA/comments/1i2tuyi/coheres_open_science_community_announces_a_new/
false
false
self
1
null
We added streaming automatic tool use to llama-cpp-python in RAGLite
1
[removed]
2025-01-16T17:14:22
https://www.reddit.com/r/LocalLLaMA/comments/1i2tvt1/we_added_streaming_automatic_tool_use_to/
lsorber
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2tvt1
false
null
t3_1i2tvt1
/r/LocalLLaMA/comments/1i2tvt1/we_added_streaming_automatic_tool_use_to/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NRftcPLqy7hGpYVqUX-vTjNEWYSTr8Hr-5gvEz4Wzww', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2gnFKMfsRmmx8L4N8HLiEYXgOOz18NAfKZvrrNL4IvY.jpg?width=108&crop=smart&auto=webp&s=e6dce4f9c6fc56c9c7abb9d230a8439518f04bba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2gnFKMfsRmmx8L4N8HLiEYXgOOz18NAfKZvrrNL4IvY.jpg?width=216&crop=smart&auto=webp&s=bde63d7f892ba504aa9944a6d0c8f4f09f6e6f4e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2gnFKMfsRmmx8L4N8HLiEYXgOOz18NAfKZvrrNL4IvY.jpg?width=320&crop=smart&auto=webp&s=ff05ff16be58ef5dbe7ffcc5ce1d772b1ab1e712', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2gnFKMfsRmmx8L4N8HLiEYXgOOz18NAfKZvrrNL4IvY.jpg?width=640&crop=smart&auto=webp&s=abd8529039d1ec09256bee1a97c0d596d976151e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2gnFKMfsRmmx8L4N8HLiEYXgOOz18NAfKZvrrNL4IvY.jpg?width=960&crop=smart&auto=webp&s=8c557e7c0b35157d8c000263b4e74a7545306aef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2gnFKMfsRmmx8L4N8HLiEYXgOOz18NAfKZvrrNL4IvY.jpg?width=1080&crop=smart&auto=webp&s=46ed101ed12c42c6d44d36a4507c792645e11383', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2gnFKMfsRmmx8L4N8HLiEYXgOOz18NAfKZvrrNL4IvY.jpg?auto=webp&s=d2e9712238bd7d24b38916e30d6354bf3fef4b43', 'width': 1200}, 'variants': {}}]}
Cohere's Open Science Community launches a new LLM research cohort dedicated to advancing multilingual long-context understanding
1
[removed]
2025-01-16T17:16:50
https://www.reddit.com/r/LocalLLaMA/comments/1i2ty0x/coheres_open_science_community_launches_a_new_llm/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2ty0x
false
null
t3_1i2ty0x
/r/LocalLLaMA/comments/1i2ty0x/coheres_open_science_community_launches_a_new_llm/
false
false
self
1
null
We added streaming automatic tool use to llama-cpp-python in RAGLite
1
[removed]
2025-01-16T17:22:15
https://www.reddit.com/r/LocalLLaMA/comments/1i2u2ru/we_added_streaming_automatic_tool_use_to/
lsorber
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2u2ru
false
null
t3_1i2u2ru
/r/LocalLLaMA/comments/1i2u2ru/we_added_streaming_automatic_tool_use_to/
false
false
self
1
{'enabled': False, 'images': [{'id': 'MSB-JFoadGRi1hNKbWRY4DwR91jxpQCtW_NR8spYoRY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?width=108&crop=smart&auto=webp&s=3d2fd8000f1cb91098de5b16c621cb0e51cefd2f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?width=216&crop=smart&auto=webp&s=d9881601217e3fb2e77d15c02b5a6397177defb6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?width=320&crop=smart&auto=webp&s=86d127335520b5164d6c281f1c892fdffd45ccdc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?width=640&crop=smart&auto=webp&s=148ec014650d85a2845f9500208d2191ab132c5d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?width=960&crop=smart&auto=webp&s=333b257845ca3e771a2b3030b3d92d778226053f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?width=1080&crop=smart&auto=webp&s=4b8d9433d3fa23214d543347270ba011387c8698', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?auto=webp&s=ee766166ed7876adeb01c60c8ec239030530da1f', 'width': 1200}, 'variants': {}}]}
Cohere's Open Science Community is advancing multilingual long-context understanding
1
[removed]
2025-01-16T17:24:04
https://www.reddit.com/r/LocalLLaMA/comments/1i2u49x/coheres_open_science_community_is_advancing/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2u49x
false
null
t3_1i2u49x
/r/LocalLLaMA/comments/1i2u49x/coheres_open_science_community_is_advancing/
false
false
self
1
null
Cohere advancing multilingual long-context understanding
1
[deleted]
2025-01-16T17:25:23
[deleted]
1970-01-01T00:00:00
0
{}
1i2u5cv
false
null
t3_1i2u5cv
/r/LocalLLaMA/comments/1i2u5cv/cohere_advancing_multilingual_longcontext/
false
false
default
1
null
Cohere For AI's Open Science Community launches a new LLM research cohort dedicated to advancing multilingual long-context understanding
1
[removed]
2025-01-16T17:26:23
https://www.reddit.com/r/LocalLLaMA/comments/1i2u684/cohere_for_ais_open_science_community_launches_a/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2u684
false
null
t3_1i2u684
/r/LocalLLaMA/comments/1i2u684/cohere_for_ais_open_science_community_launches_a/
false
false
self
1
null
We added streaming automatic tool use to llama-cpp-python in RAGLite
1
[removed]
2025-01-16T17:27:05
https://www.reddit.com/r/LocalLLaMA/comments/1i2u6w3/we_added_streaming_automatic_tool_use_to/
lsorber
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2u6w3
false
null
t3_1i2u6w3
/r/LocalLLaMA/comments/1i2u6w3/we_added_streaming_automatic_tool_use_to/
false
false
self
1
{'enabled': False, 'images': [{'id': 'MSB-JFoadGRi1hNKbWRY4DwR91jxpQCtW_NR8spYoRY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?width=108&crop=smart&auto=webp&s=3d2fd8000f1cb91098de5b16c621cb0e51cefd2f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?width=216&crop=smart&auto=webp&s=d9881601217e3fb2e77d15c02b5a6397177defb6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?width=320&crop=smart&auto=webp&s=86d127335520b5164d6c281f1c892fdffd45ccdc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?width=640&crop=smart&auto=webp&s=148ec014650d85a2845f9500208d2191ab132c5d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?width=960&crop=smart&auto=webp&s=333b257845ca3e771a2b3030b3d92d778226053f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?width=1080&crop=smart&auto=webp&s=4b8d9433d3fa23214d543347270ba011387c8698', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/euZdN60E4W2v9SxbE68nJA3UIRBFKJV08fErGEbKM_U.jpg?auto=webp&s=ee766166ed7876adeb01c60c8ec239030530da1f', 'width': 1200}, 'variants': {}}]}
Cohere Community is advancing multilingual long-context understanding
1
[removed]
2025-01-16T17:27:58
https://www.reddit.com/r/LocalLLaMA/comments/1i2u7nk/cohere_community_is_advancing_multilingual/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2u7nk
false
null
t3_1i2u7nk
/r/LocalLLaMA/comments/1i2u7nk/cohere_community_is_advancing_multilingual/
false
false
self
1
null
Cohere is advancing multilingual long-context understanding
1
To see if I can create a post
2025-01-16T17:29:35
https://www.reddit.com/r/LocalLLaMA/comments/1i2u91c/cohere_is_advancing_multilingual_longcontext/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2u91c
false
null
t3_1i2u91c
/r/LocalLLaMA/comments/1i2u91c/cohere_is_advancing_multilingual_longcontext/
false
false
self
1
null
I created a vscode extension that does inline edits using deepseek
43
2025-01-16T17:34:09
https://v.redd.it/wo2fucji5ede1
United-Rush4073
/r/LocalLLaMA/comments/1i2ucxu/i_created_a_vscode_extension_that_does_inline/
1970-01-01T00:00:00
0
{}
1i2ucxu
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wo2fucji5ede1/DASHPlaylist.mpd?a=1739770453%2CNzFjM2Y4MWQ0M2QzMzBiZGMyODUyMzc3ZTAwY2QyYTA2MWUyNWI1YmM0MWNmZmE5ZjZlZTU4ZmJlYTI3YTA2Yw%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/wo2fucji5ede1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/wo2fucji5ede1/HLSPlaylist.m3u8?a=1739770453%2CN2FjNGU5MzBkYWE5MGZlZDQ5YTkxNmVlMDAyN2YwMmExMDdjNmJhMWFhMTYzNzlkMTk2NmUxMmIwODA3NzU3Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wo2fucji5ede1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1i2ucxu
/r/LocalLLaMA/comments/1i2ucxu/i_created_a_vscode_extension_that_does_inline/
false
false
https://external-preview…9f0be36d0057f507
43
{'enabled': False, 'images': [{'id': 'c2Y1NHdiamk1ZWRlMSXLRLoBTWH7BkELeo8cMATHejXfU-O8HPWWGk2XwKZI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c2Y1NHdiamk1ZWRlMSXLRLoBTWH7BkELeo8cMATHejXfU-O8HPWWGk2XwKZI.png?width=108&crop=smart&format=pjpg&auto=webp&s=a100a5daac05b0edbc263684e0c917918b3a4300', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/c2Y1NHdiamk1ZWRlMSXLRLoBTWH7BkELeo8cMATHejXfU-O8HPWWGk2XwKZI.png?width=216&crop=smart&format=pjpg&auto=webp&s=afc1658d5116808f2da3e80759a973bb4a027018', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/c2Y1NHdiamk1ZWRlMSXLRLoBTWH7BkELeo8cMATHejXfU-O8HPWWGk2XwKZI.png?width=320&crop=smart&format=pjpg&auto=webp&s=38e690cfada62d5ac36d2284028a2cfe1edec5bb', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/c2Y1NHdiamk1ZWRlMSXLRLoBTWH7BkELeo8cMATHejXfU-O8HPWWGk2XwKZI.png?width=640&crop=smart&format=pjpg&auto=webp&s=1c478b78979b73026e8dd5d5206ae00533c36c25', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/c2Y1NHdiamk1ZWRlMSXLRLoBTWH7BkELeo8cMATHejXfU-O8HPWWGk2XwKZI.png?width=960&crop=smart&format=pjpg&auto=webp&s=92aea9145c96cd6d4643ad88da5160020fcce234', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/c2Y1NHdiamk1ZWRlMSXLRLoBTWH7BkELeo8cMATHejXfU-O8HPWWGk2XwKZI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8a25f87d05cd23b476c50ff8d77160bd3272752e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/c2Y1NHdiamk1ZWRlMSXLRLoBTWH7BkELeo8cMATHejXfU-O8HPWWGk2XwKZI.png?format=pjpg&auto=webp&s=262b7415b4903857549f365f0c8d01816aec5980', 'width': 1920}, 'variants': {}}]}
Domain search like HF chat
0
How should I approach building web search restricted to specific domains or URLs, like Hugging Face Chat does?
2025-01-16T17:35:23
https://www.reddit.com/r/LocalLLaMA/comments/1i2udz3/domain_search_like_hf_chat/
DataNebula
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2udz3
false
null
t3_1i2udz3
/r/LocalLLaMA/comments/1i2udz3/domain_search_like_hf_chat/
false
false
self
0
null
Check out my Local LLM Livestreamer and Youtuber
1
[removed]
2025-01-16T17:45:51
https://www.reddit.com/r/LocalLLaMA/comments/1i2umyh/check_out_my_local_llm_livestreamer_and_youtuber/
Ok-Investment-8941
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2umyh
false
null
t3_1i2umyh
/r/LocalLLaMA/comments/1i2umyh/check_out_my_local_llm_livestreamer_and_youtuber/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XibLbvrTTWDH_seocM7xdXikxjCk_bpMgcZEwShv9zA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/uUxpqFz_kxSrEyxH7kmLBEQgUcf3fbTElR4dXLySlgo.jpg?width=108&crop=smart&auto=webp&s=fc2642f59611bbc64dc2cdfb1e68d2af12da3612', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/uUxpqFz_kxSrEyxH7kmLBEQgUcf3fbTElR4dXLySlgo.jpg?width=216&crop=smart&auto=webp&s=825ca92d09fd8ea6fdc1145e0ecfa1d45375538a', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/uUxpqFz_kxSrEyxH7kmLBEQgUcf3fbTElR4dXLySlgo.jpg?auto=webp&s=22d027bfccf65dda4ecd17e0776c9f16b9e0c3f0', 'width': 300}, 'variants': {}}]}
Locally run AI Streamer on an RTX 3050
1
[removed]
2025-01-16T17:48:51
https://www.reddit.com/r/LocalLLaMA/comments/1i2upk6/locally_run_ai_streamer_on_an_rtx_3050/
Ok-Investment-8941
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2upk6
false
null
t3_1i2upk6
/r/LocalLLaMA/comments/1i2upk6/locally_run_ai_streamer_on_an_rtx_3050/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XibLbvrTTWDH_seocM7xdXikxjCk_bpMgcZEwShv9zA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/uUxpqFz_kxSrEyxH7kmLBEQgUcf3fbTElR4dXLySlgo.jpg?width=108&crop=smart&auto=webp&s=fc2642f59611bbc64dc2cdfb1e68d2af12da3612', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/uUxpqFz_kxSrEyxH7kmLBEQgUcf3fbTElR4dXLySlgo.jpg?width=216&crop=smart&auto=webp&s=825ca92d09fd8ea6fdc1145e0ecfa1d45375538a', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/uUxpqFz_kxSrEyxH7kmLBEQgUcf3fbTElR4dXLySlgo.jpg?auto=webp&s=22d027bfccf65dda4ecd17e0776c9f16b9e0c3f0', 'width': 300}, 'variants': {}}]}
Techniques for simulating a "group chat"?
9
I'm a bit new to this, but from what I've read it seems like there are two common techniques for generating a conversation among more than two parties: 1. Prompt a single model to write a "script" portraying the conversation between the specified characters. 2. Come up with a system to swap contexts each time a new "character" begins speaking. The first option is nice because the model ensures that the conversation flows naturally between characters, but it seems like you'd lose some of the benefits of the chat model's training because it's not necessarily going to generate that dialog using the chat template. This is a problem for my application because I'd like to be able to parse the "script" into a series of messages, each with an attached speaker (rather than dumping the whole thing into a text field). The second option seems like it'd overcome this problem, but I'm not sure how to facilitate a flow of conversation between speakers. Presumably each generation will end by reverse-prompting the user/instruction rather than another character. Can I get it to not do that just with prompting, or do I need to do something more clever? I assume to a large extent I'm just going to have to try things out and see what works, but since this is presumably a pretty common problem I'm curious how others have approached it, or if there is some standard solution I'm overlooking.
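A minimal sketch of the second approach described above (swapping contexts per speaker), assuming a generic chat-completion style client: keep one shared transcript and, on each turn, rebuild the message list from the next character's point of view so every speaker still gets the chat template it was trained on. The `complete` function and the character prompts are placeholders, not any particular backend's API:
```
# Round-robin "group chat": re-frame the shared transcript for each character in turn.
def complete(messages: list[dict]) -> str:
    # Placeholder: swap in a real chat-model call (llama-cpp-python, Ollama, an OpenAI-compatible API, ...).
    return f"[reply to: {messages[-1]['content'][:30]}...]"

characters = {
    "Alice": "You are Alice, a cautious detective. Reply in character with one short paragraph.",
    "Bob": "You are Bob, a reckless pilot. Reply in character with one short paragraph.",
}

transcript: list[tuple[str, str]] = [("Narrator", "The ship's engine has just failed.")]

for turn in range(6):
    speaker = list(characters)[turn % len(characters)]
    # Rebuild the context: this speaker's own lines become "assistant", everyone else's become "user",
    # so the model is always reverse-prompted to continue as the current character.
    messages = [{"role": "system", "content": characters[speaker]}]
    for name, text in transcript:
        role = "assistant" if name == speaker else "user"
        messages.append({"role": role, "content": f"{name}: {text}"})
    reply = complete(messages)
    transcript.append((speaker, reply))

for name, text in transcript:
    print(f"{name}: {text}")
```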
2025-01-16T17:57:24
https://www.reddit.com/r/LocalLLaMA/comments/1i2uwwo/techniques_for_simulating_a_group_chat/
StewedAngelSkins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2uwwo
false
null
t3_1i2uwwo
/r/LocalLLaMA/comments/1i2uwwo/techniques_for_simulating_a_group_chat/
false
false
self
9
null
Building a Fully Local AI-Powered News and Livestream Pipeline on an RTX 3050
1
[removed]
2025-01-16T18:03:19
https://www.reddit.com/r/LocalLLaMA/comments/1i2v22s/building_a_fully_local_aipowered_news_and/
Ok-Investment-8941
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2v22s
false
null
t3_1i2v22s
/r/LocalLLaMA/comments/1i2v22s/building_a_fully_local_aipowered_news_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XibLbvrTTWDH_seocM7xdXikxjCk_bpMgcZEwShv9zA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/uUxpqFz_kxSrEyxH7kmLBEQgUcf3fbTElR4dXLySlgo.jpg?width=108&crop=smart&auto=webp&s=fc2642f59611bbc64dc2cdfb1e68d2af12da3612', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/uUxpqFz_kxSrEyxH7kmLBEQgUcf3fbTElR4dXLySlgo.jpg?width=216&crop=smart&auto=webp&s=825ca92d09fd8ea6fdc1145e0ecfa1d45375538a', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/uUxpqFz_kxSrEyxH7kmLBEQgUcf3fbTElR4dXLySlgo.jpg?auto=webp&s=22d027bfccf65dda4ecd17e0776c9f16b9e0c3f0', 'width': 300}, 'variants': {}}]}
Best GPU(s) to run Llama3.3 locally?
0
What kind of server setup do you guys use or recommend to run Llama3.3 70B via ollama without any issues or delays?
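For rough sizing, a back-of-the-envelope VRAM estimate helps here (a sketch; the per-weight byte counts and the ~20% overhead factor are assumptions, and real usage also depends on context length and KV cache):
```
# Rough VRAM estimate for serving a dense model at a given quantization level.
def approx_vram_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    weight_gb = params_billion * 1e9 * (bits_per_weight / 8) / 1e9
    return weight_gb * overhead  # overhead covers KV cache, activations, runtime buffers (assumed ~20%)

for bits in (16, 8, 4):
    print(f"Llama 3.3 70B @ {bits}-bit: ~{approx_vram_gb(70, bits):.0f} GB")
# At 4-bit this comes out around 40 GB, which is why people commonly pair two 24 GB cards for 70B Q4.
```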
2025-01-16T18:11:05
https://www.reddit.com/r/LocalLLaMA/comments/1i2v8tq/best_gpus_to_run_llama33_locally/
Pointfit_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2v8tq
false
null
t3_1i2v8tq
/r/LocalLLaMA/comments/1i2v8tq/best_gpus_to_run_llama33_locally/
false
false
self
0
null
LangChain Alternative?
1
Hello. I've noticed that LangChain docs are pretty bad, and it is just slow. Does anyone know a good LOCAL, OPEN-SOURCE, NON-AGENT alternative for it that would work as simply as this?
```
llm = model.load("file")
document = load.file("my_novel")
chat_history = "<custom>Hi, what is this?<data>{for_file_placeholder}</data></custom><answer>"
print(llm.long_answer(document, chat_history, "{for_file_placeholder}"))
```
Thank you!
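For comparison, roughly the same flow with no framework at all, using llama-cpp-python directly (a sketch: the model path, file name, and prompt format are placeholders, and a long document would still need chunking to fit the context window):
```
from llama_cpp import Llama

# Placeholder paths; any local GGUF model works here.
llm = Llama(model_path="model.gguf", n_ctx=8192)

with open("my_novel.txt", encoding="utf-8") as f:
    document = f.read()

# Plain string formatting stands in for the "placeholder" templating in the post above.
prompt = f"<custom>Hi, what is this?<data>{document}</data></custom><answer>"
out = llm(prompt, max_tokens=512)
print(out["choices"][0]["text"])
```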
2025-01-16T18:11:52
https://www.reddit.com/r/LocalLLaMA/comments/1i2v9g9/langchain_alternative/
yukiarimo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2v9g9
false
null
t3_1i2v9g9
/r/LocalLLaMA/comments/1i2v9g9/langchain_alternative/
false
false
self
1
null
Best Open Source web access?
1
What are the best open-source repos for building something that can access the internet and retrieve live information from websites for you? So far I'm aware of Perplexica and Firecrawl. Any others?
2025-01-16T18:20:48
https://www.reddit.com/r/LocalLLaMA/comments/1i2vh21/best_open_source_web_access/
BadTacticss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2vh21
false
null
t3_1i2vh21
/r/LocalLLaMA/comments/1i2vh21/best_open_source_web_access/
false
false
self
1
null
RTX 4070 8GB VRAM - What's the highest-parameter model I can fine-tune with quantization?
1
Thinking maybe Gemma 2 9B. Any suggestions?
2025-01-16T18:26:00
https://www.reddit.com/r/LocalLLaMA/comments/1i2vlj3/rtx_4070_8gb_vram_whats_the_best_highest/
raidedclusteranimd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2vlj3
false
null
t3_1i2vlj3
/r/LocalLLaMA/comments/1i2vlj3/rtx_4070_8gb_vram_whats_the_best_highest/
false
false
self
1
null
Costs to run Llama 3.3 on cloud?
1
I'm just exploring an idea to have llama 3.3 run a vtuber streaming chat. But trying to understand the costs with hosting it on the cloud (and where?). And if llama 3.3 can be set up with special instructions in the same way a custom GPT could? Like, let's say the llama 3.3 was chatting non stop for 3 hours? How much would that cost? I understand it's cheaper than GPT4o, but I don't understand how that translates to the actual hosting price.
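For token-priced hosting, one rough way to reason about it is to estimate tokens per hour of chat and multiply by the per-million-token rate. All numbers below are assumptions for illustration only, not actual provider prices:
```
# Back-of-the-envelope cost for a 3-hour nonstop chat session via a per-token API.
# Every number here is an assumed placeholder; adjust to the provider's real pricing.
tokens_per_reply = 150        # output tokens per chat message (assumed)
context_per_reply = 1000      # prompt/context tokens sent per message (assumed)
replies_per_hour = 300        # roughly one reply every 12 seconds (assumed)
price_in_per_m = 0.30         # $/1M input tokens (assumed)
price_out_per_m = 0.80        # $/1M output tokens (assumed)

hours = 3
cost = hours * replies_per_hour * (
    context_per_reply * price_in_per_m + tokens_per_reply * price_out_per_m
) / 1_000_000
print(f"~${cost:.2f} for {hours} hours")  # ~$0.38 with these assumed numbers
```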
2025-01-16T18:27:55
https://www.reddit.com/r/LocalLLaMA/comments/1i2vn7y/costs_to_run_llama_33_on_cloud/
MosskeepForest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2vn7y
false
null
t3_1i2vn7y
/r/LocalLLaMA/comments/1i2vn7y/costs_to_run_llama_33_on_cloud/
false
false
self
1
null
PC Build for Fine-tuning and Running Local LLMs for 500 Dollars
1
[removed]
2025-01-16T19:19:18
https://www.reddit.com/r/LocalLLaMA/comments/1i2wuyo/pc_build_for_finetuning_and_runnling_local_llms/
footballminati
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2wuyo
false
null
t3_1i2wuyo
/r/LocalLLaMA/comments/1i2wuyo/pc_build_for_finetuning_and_runnling_local_llms/
false
false
self
1
null
Benchmarks and real-world comparisons: QwQ 72B vs. DeepSeek V3 vs. Claude 3.5 Sonnet vs. Llama 405B
13
I'm looking specifically at these models and want to understand how they compare in real world situations. Hoping someone has a good table and details on what model did best for a particular task.
2025-01-16T19:20:51
https://www.reddit.com/r/LocalLLaMA/comments/1i2ww9q/benchmarks_and_real_world_comparisons_qwq_72b_vs/
Vegetable_Sun_9225
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2ww9q
false
null
t3_1i2ww9q
/r/LocalLLaMA/comments/1i2ww9q/benchmarks_and_real_world_comparisons_qwq_72b_vs/
false
false
self
13
null
idea: enhanced google search with ai
0
What is actually lacking in the current web-search AI offered by the big tech companies? They just search by generating a query from the user's input with an LLM and summarizing the results, which is pretty basic. The chance of failure is high: the generated query can go wrong, or the summarization misses a main part or chunk. So, to tackle this and go one level up (without expecting the AI to fully solve it; even if it makes several rounds through Google, it won't necessarily come back with what you are expecting), and after going through papers on test-time compute, RAG, and more, my approach is: Steps: 1. Take the input query and get the top 5 URL results (utilising Google's strong ranking), i.e. 5 docs (URL + text). 2. Loop over (input query + doc) and ask an LLM to generate 5 questions per doc, so 25 questions total from 5 calls. 3. Do some clustering and take the top cluster (it contains good, varied queries). 4. Perform a search for each of those queries and collect all results. 5. These are your AI-enhanced results. Advantages: 1. The first step provides relevant content from Google. 2. Generating 5 variations per doc adds exploration. 3. The results stay raw, and you go through them yourself. This approach lets the AI explore and generate constrained results that actually answer your intent. I used WordLlama (a NumPy NN tuned from the first layer of the Llama 405B model) for embeddings, and a normal qwen2.5:0.5b model is sufficient. Please share your thoughts on this.
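A minimal sketch of that 5-step loop. The `web_search`, `generate_questions`, and `embed` helpers are placeholders for whatever search API, LLM, and embedding model are in use (WordLlama, qwen2.5:0.5b, etc.); the clustering is plain k-means from scikit-learn:
```
import numpy as np
from sklearn.cluster import KMeans

# Placeholder helpers: plug in a real search API, LLM, and embedding model.
def web_search(query: str, k: int = 5) -> list[dict]:
    return [{"url": f"https://example.com/{i}", "text": f"stub result {i} for {query}"} for i in range(k)]

def generate_questions(query: str, doc_text: str, n: int = 5) -> list[str]:
    return [f"{query} (variation {i} grounded in: {doc_text[:20]}...)" for i in range(n)]

def embed(texts: list[str]) -> np.ndarray:
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 64))  # stand-in for WordLlama or any embedding model

def enhanced_search(query: str) -> list[dict]:
    docs = web_search(query, k=5)                                                   # step 1
    questions = [q for d in docs for q in generate_questions(query, d["text"])]     # step 2: 25 questions
    vecs = embed(questions)
    labels = KMeans(n_clusters=5, n_init="auto", random_state=0).fit_predict(vecs)  # step 3
    top_label = np.bincount(labels).argmax()
    top_queries = [q for q, l in zip(questions, labels) if l == top_label]
    results = [r for q in top_queries for r in web_search(q)]                       # step 4
    return results                                                                  # step 5: raw, AI-enhanced results

print(len(enhanced_search("local llm inference on phones")))
```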
2025-01-16T19:37:29
https://www.reddit.com/r/LocalLLaMA/comments/1i2xafd/idea_enhanced_google_search_with_ai/
Diligent-Resident289
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2xafd
false
null
t3_1i2xafd
/r/LocalLLaMA/comments/1i2xafd/idea_enhanced_google_search_with_ai/
false
false
self
0
null
Thoughts on an open-source AI Agent Marketplace?
1
[removed]
2025-01-16T19:39:35
https://www.reddit.com/r/LocalLLaMA/comments/1i2xc6a/thoughts_on_an_opensource_ai_agent_marketplace/
Fluid-Eye-8872
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2xc6a
false
null
t3_1i2xc6a
/r/LocalLLaMA/comments/1i2xc6a/thoughts_on_an_opensource_ai_agent_marketplace/
false
false
self
1
null
Thoughts on an open ai agent marketplace?
1
[removed]
2025-01-16T19:41:33
https://www.reddit.com/r/LocalLLaMA/comments/1i2xdun/thoughts_on_an_open_ai_agent_marketplace/
Fluid-Eye-8872
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2xdun
false
null
t3_1i2xdun
/r/LocalLLaMA/comments/1i2xdun/thoughts_on_an_open_ai_agent_marketplace/
false
false
self
1
null
Thoughts on an open source AI Agent Marketplace?
8
I've been thinking about how scattered AI agent projects are and how expensive LLMs will be in terms of GPU costs, especially for larger projects in the future. There are two main problems I've identified. First, we have cool stuff on GitHub, but it’s tough to figure out which ones are reliable or to run them if you’re not super technical. There are emerging AI agent marketplaces for non-technical people, but it is difficult to trust an AI agent without seeing them as they still require customization. The second problem is that as LLMs become more advanced, creating AI agents that require more GPU power will be difficult. So, in the next few years, I think larger companies will completely monopolize AI agents of scale because they will be the only ones able to afford the GPU power for advanced models. In fact, if there was a way to do this, the general public could benefit more. So my idea is a website that ranks these open-source AI agents by performance (e.g., the top 5 for coding tasks, the top five for data analysis, etc.) and then provides a simple ‘Launch’ button to run them on a cloud GPU for non-technical users (with the GPU cost paid by users in a pay as you go model). Users could upload a dataset or input a prompt, and boom—the agent does the work. Meanwhile, the community can upvote or provide feedback on which agents actually work best because they are open-source. I think that for the top 5-10 agents, the website can provide efficiency ratings on different LLMs with no cost to the developers as an incentive to code open source (in the future). In line with this, for larger AI agent models that require more GPU power, the website can integrate a crowd-funding model where a certain benchmark is reached, and the agent will run. Everyone who contributes to the GPU cost can benefit from the agent once the benchmark is reached, and people can see the work of the coder/s each day. I see this option as more catered for passion projects/independent research where, otherwise, the developers or researchers will not have enough funds to test their agents. This could be a continuous funding effort for people really needing/believing in the potential of that agent, causing big models to need updating, retraining, or fine-tuning. The website can also offer closed repositories, and developers can choose the repo type they want to use. However, I think community feedback and the potential to run the agents on different LLMs for no cost to test their efficiencies is a good incentive for developers to choose open-source development. I see the open-source models as being perceived as more reliable by the community and having continuous feedback. If done well, this platform could democratize access to advanced AI agents, bridging the gap between complex open-source code and real-world users who want to leverage it without huge setup costs. It can also create an incentive to prevent larger corporations from monopolizing AI research and advanced agents due to GPU costs. Any thoughts on this? I would appreciate any comments/dms.
2025-01-16T19:43:06
https://www.reddit.com/r/LocalLLaMA/comments/1i2xf52/thoughts_on_an_open_source_ai_agent_marketplace/
StatisticianSome5986
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2xf52
false
null
t3_1i2xf52
/r/LocalLLaMA/comments/1i2xf52/thoughts_on_an_open_source_ai_agent_marketplace/
false
false
self
8
null
Context >
57
2025-01-16T19:49:05
https://i.redd.it/281mgak4uede1.png
MrCyclopede
i.redd.it
1970-01-01T00:00:00
0
{}
1i2xk8h
false
null
t3_1i2xk8h
/r/LocalLLaMA/comments/1i2xk8h/context/
false
false
https://b.thumbs.redditm…4o8C1mNQ0oNk.jpg
57
{'enabled': True, 'images': [{'id': 'yQLePChOd_ZAaf2TQQWHFmEWLX5rd-PGd8UctVFuccI', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/281mgak4uede1.png?width=108&crop=smart&auto=webp&s=0cd88afc8be8b02a90c498388d76cebc75fee277', 'width': 108}, {'height': 222, 'url': 'https://preview.redd.it/281mgak4uede1.png?width=216&crop=smart&auto=webp&s=4b64b311ba8ded33ea1708c3d839364c20d6bb7e', 'width': 216}, {'height': 330, 'url': 'https://preview.redd.it/281mgak4uede1.png?width=320&crop=smart&auto=webp&s=3c7578a0ce338fe418f6f25131b58c56d2c58866', 'width': 320}, {'height': 660, 'url': 'https://preview.redd.it/281mgak4uede1.png?width=640&crop=smart&auto=webp&s=0cb3b1f73fa5516de74f745937554917437aeb73', 'width': 640}], 'source': {'height': 702, 'url': 'https://preview.redd.it/281mgak4uede1.png?auto=webp&s=7596234f6f3be967a1685f312e813ab82389d43a', 'width': 680}, 'variants': {}}]}
Is it possible to identify or extract training data from embedding models like BERT?
3
I'm looking for methods to check for training-data leakage from BERT-like embedding models. For example, if the data used to fine-tune the embedding model contains a unique name like "John Fitzgerald Diddy Lamar Kumar", and his name is mentioned multiple times in the training data along with the crime he committed, then, given that I know his full name, can I identify the crime just from the final trained embedding model? Simply put, how do I attack the embedding model to extract or identify information from it?
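One simple probing baseline along those lines (a sketch, not a full membership-inference attack): embed the name together with each candidate fact and check whether the fine-tuned model ranks one candidate much closer to the bare name than a non-fine-tuned copy does. The model name and candidate strings below are placeholders:
```
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # placeholder: point this at the fine-tuned embedding model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

def embed(texts: list[str]) -> torch.Tensor:
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)            # mean pooling over real tokens

name = "John Fitzgerald Diddy Lamar Kumar"
candidates = [f"{name} committed {crime}" for crime in ("fraud", "arson", "burglary", "tax evasion")]

anchor = embed([name])
scores = torch.nn.functional.cosine_similarity(anchor, embed(candidates))
for text, s in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{s:.3f}  {text}")
# A candidate that scores far higher on the fine-tuned model than on the base model is
# weak evidence that the association appeared in the fine-tuning data.
```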
2025-01-16T20:04:25
https://www.reddit.com/r/LocalLLaMA/comments/1i2xxi6/is_it_possible_to_identify_or_extract_training/
Lazy_Wedding_1383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2xxi6
false
null
t3_1i2xxi6
/r/LocalLLaMA/comments/1i2xxi6/is_it_possible_to_identify_or_extract_training/
false
false
self
3
null
NVIDIA Project DIGITS: A New Milestone in AI Computing
1
[removed]
2025-01-16T20:13:29
https://www.reddit.com/r/LocalLLaMA/comments/1i2y529/nvidia_project_digits_a_new_milestone_in_ai/
jmbadu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2y529
false
null
t3_1i2y529
/r/LocalLLaMA/comments/1i2y529/nvidia_project_digits_a_new_milestone_in_ai/
false
false
self
1
null
My local ai server ( first PC build…. ever 😂)
1
[removed]
2025-01-16T20:15:32
https://v.redd.it/1gh6pueuyede1
fluffyboogasuga
v.redd.it
1970-01-01T00:00:00
0
{}
1i2y6ra
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1gh6pueuyede1/DASHPlaylist.mpd?a=1739650545%2CNDZlMGZiOWM2YjI4MWM0N2RmN2UxNDRmNWQ3NTI1M2FlOGZmZjg0YzU3MTgyNjlhNTYyYWU3MWJkNjRhMjA3Zg%3D%3D&v=1&f=sd', 'duration': 13, 'fallback_url': 'https://v.redd.it/1gh6pueuyede1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1920, 'hls_url': 'https://v.redd.it/1gh6pueuyede1/HLSPlaylist.m3u8?a=1739650545%2CMGU5MGEzYjk3MTRkMzFlY2VjMmViMjFkZWI4Y2ZjNTU3ZDNmNTc0NDcwMWQ3ZTg3M2MyMjQ2ZjA3YTdkM2Y0ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1gh6pueuyede1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1i2y6ra
/r/LocalLLaMA/comments/1i2y6ra/my_local_ai_server_first_pc_build_ever/
false
false
https://external-preview…108142a258b2cf61
1
{'enabled': False, 'images': [{'id': 'NzFrem01N3V5ZWRlMYUiIH4H211pw1pOCWeY8wAFS4gmjlhtZBwTT15VsyOz', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NzFrem01N3V5ZWRlMYUiIH4H211pw1pOCWeY8wAFS4gmjlhtZBwTT15VsyOz.png?width=108&crop=smart&format=pjpg&auto=webp&s=292012cbb713bfcf8333488b619874b3f4d96cc2', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/NzFrem01N3V5ZWRlMYUiIH4H211pw1pOCWeY8wAFS4gmjlhtZBwTT15VsyOz.png?width=216&crop=smart&format=pjpg&auto=webp&s=9d304696e868ffeaf5c36f09e0126475babd0354', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/NzFrem01N3V5ZWRlMYUiIH4H211pw1pOCWeY8wAFS4gmjlhtZBwTT15VsyOz.png?width=320&crop=smart&format=pjpg&auto=webp&s=15c54c8883606d40a400a7468375d8c31e6f27c0', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/NzFrem01N3V5ZWRlMYUiIH4H211pw1pOCWeY8wAFS4gmjlhtZBwTT15VsyOz.png?width=640&crop=smart&format=pjpg&auto=webp&s=ebd0c4448793c020880d5c8ae2eebabcb34d75cb', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/NzFrem01N3V5ZWRlMYUiIH4H211pw1pOCWeY8wAFS4gmjlhtZBwTT15VsyOz.png?width=960&crop=smart&format=pjpg&auto=webp&s=ef6b8f10ba4f9cb56a9e9c0fb1f2d07c57121a8b', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/NzFrem01N3V5ZWRlMYUiIH4H211pw1pOCWeY8wAFS4gmjlhtZBwTT15VsyOz.png?width=1080&crop=smart&format=pjpg&auto=webp&s=96b1cb9aa5634a17924f5a6fdb64edbe6dcfbd82', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/NzFrem01N3V5ZWRlMYUiIH4H211pw1pOCWeY8wAFS4gmjlhtZBwTT15VsyOz.png?format=pjpg&auto=webp&s=641c07233b523237bf675b986d6fb1cb9162fe01', 'width': 1080}, 'variants': {}}]}
Is DeepSeek V3 overhyped?
124
I have been using DeepSeek V3 for some time since it came out. Coding-wise (I work on web frontend, mostly React/Svelte), I do not find it nearly as impressive as 3.5 Sonnet. The benchmarks seem to match, but the feel is just different; sometimes DeepSeek does give interesting stuff when asked. To me personally, it feels like a base 405B that has been scaled even further, and it shows few scars of brutal human RLHF (unlike OAI, Llama, etc. models). It just doesn't have that taste of Claude 3.5 Sonnet. Curious what you guys think.
2025-01-16T20:17:06
https://www.reddit.com/r/LocalLLaMA/comments/1i2y810/is_deepseek_v3_overhyped/
YourAverageDev0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2y810
false
null
t3_1i2y810
/r/LocalLLaMA/comments/1i2y810/is_deepseek_v3_overhyped/
false
false
self
124
null
Thoughts on Langfuse?
6
And the other observability frameworks. Did you find them useful? Which one do you recommend? What about their eval features (e.g., LLM-as-a-judge)?
2025-01-16T20:22:15
https://www.reddit.com/r/LocalLLaMA/comments/1i2ycgi/thoughts_on_langfuse/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2ycgi
false
null
t3_1i2ycgi
/r/LocalLLaMA/comments/1i2ycgi/thoughts_on_langfuse/
false
false
self
6
null
What is the best VS Code AI extension?
1
[removed]
2025-01-16T20:55:20
https://www.reddit.com/r/LocalLLaMA/comments/1i2z3vn/what_is_the_best_vs_code_ai_extension/
SkylarNox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2z3vn
false
null
t3_1i2z3vn
/r/LocalLLaMA/comments/1i2z3vn/what_is_the_best_vs_code_ai_extension/
false
false
self
1
null
Could alignment be done by incorporating user feedback at scale on personal LLMs that are designed to be digital twins of the users?
1
[removed]
2025-01-16T21:04:04
https://www.reddit.com/r/LocalLLaMA/comments/1i2zbdf/could_alignment_be_done_by_incorporating_user/
Memetic1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2zbdf
false
null
t3_1i2zbdf
/r/LocalLLaMA/comments/1i2zbdf/could_alignment_be_done_by_incorporating_user/
false
false
self
1
null
What is the best VS Code AI extension?
1
[removed]
2025-01-16T21:31:48
https://www.reddit.com/r/LocalLLaMA/comments/1i2zya6/what_is_the_best_vs_code_ai_extension/
SkylarNox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i2zya6
false
null
t3_1i2zya6
/r/LocalLLaMA/comments/1i2zya6/what_is_the_best_vs_code_ai_extension/
false
false
self
1
null
I am really amazed by GPT4All v3.6, where Reasoner v1 has a built-in JavaScript code interpreter tool. Is this a function of the model file (it couldn't be, right?) or is it GPT4All's inbuilt magical power?
1
[removed]
2025-01-16T21:34:04
https://www.reddit.com/r/LocalLLaMA/comments/1i3003d/i_am_really_amazed_by_gpt4all_v36_where_reasoner/
SoulTrapPrisonPlanet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3003d
false
null
t3_1i3003d
/r/LocalLLaMA/comments/1i3003d/i_am_really_amazed_by_gpt4all_v36_where_reasoner/
false
false
self
1
null
Built a Pretty Neat Expense Tracker WebApp in 2 Nights - That works with local Ollama! (and sometimes with Gemini.)
1
2025-01-16T21:35:52
https://v.redd.it/vnl2uksocfde1
Diligent-Builder7762
/r/LocalLLaMA/comments/1i301jy/built_a_pretty_neat_expense_tracker_webapp_in_2/
1970-01-01T00:00:00
0
{}
1i301jy
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vnl2uksocfde1/DASHPlaylist.mpd?a=1739784957%2CYTFhMDMwMDNlZjM2NTQ4ZjI1ZTI3MzIzZTM4OWI3NzRjMDM1NDJmN2E5NzM3ODhiZDE0Y2U0MTU5Yzg0OTg2OA%3D%3D&v=1&f=sd', 'duration': 187, 'fallback_url': 'https://v.redd.it/vnl2uksocfde1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/vnl2uksocfde1/HLSPlaylist.m3u8?a=1739784957%2CZTlhYWRhZDg2M2FhOGFkMjU4YTMzMWYxZGRlZjk5M2MyNjllODI5Y2ZmMmZmZTlkNDc1YWIwOGIxNDZlZWU1ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vnl2uksocfde1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1i301jy
/r/LocalLLaMA/comments/1i301jy/built_a_pretty_neat_expense_tracker_webapp_in_2/
false
false
https://external-preview…553e7380d90c30b4
1
{'enabled': False, 'images': [{'id': 'ejlhMThrc29jZmRlMddcT64w_IuEiUB_mdZAWkbFhyAEQN02jtr4-4uquhpN', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ejlhMThrc29jZmRlMddcT64w_IuEiUB_mdZAWkbFhyAEQN02jtr4-4uquhpN.png?width=108&crop=smart&format=pjpg&auto=webp&s=0e277cf37eb3f0b90261fcc1eeb3717c29841fd4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ejlhMThrc29jZmRlMddcT64w_IuEiUB_mdZAWkbFhyAEQN02jtr4-4uquhpN.png?width=216&crop=smart&format=pjpg&auto=webp&s=16e417e08d6bb9c523fc4f27aae83ac456f24acc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ejlhMThrc29jZmRlMddcT64w_IuEiUB_mdZAWkbFhyAEQN02jtr4-4uquhpN.png?width=320&crop=smart&format=pjpg&auto=webp&s=b2441bc31c31bb1d5901bde8dae7b4045b9bd985', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ejlhMThrc29jZmRlMddcT64w_IuEiUB_mdZAWkbFhyAEQN02jtr4-4uquhpN.png?width=640&crop=smart&format=pjpg&auto=webp&s=80946d9418a07d6a8353e8179760ae4f50c1403a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ejlhMThrc29jZmRlMddcT64w_IuEiUB_mdZAWkbFhyAEQN02jtr4-4uquhpN.png?width=960&crop=smart&format=pjpg&auto=webp&s=33095e711ef4717d9421926272a409cbe9864569', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ejlhMThrc29jZmRlMddcT64w_IuEiUB_mdZAWkbFhyAEQN02jtr4-4uquhpN.png?width=1080&crop=smart&format=pjpg&auto=webp&s=43ee996d8a50925ed750451e8462693fed338339', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ejlhMThrc29jZmRlMddcT64w_IuEiUB_mdZAWkbFhyAEQN02jtr4-4uquhpN.png?format=pjpg&auto=webp&s=37cfac464f350fb7a0a6e2c1ccee011ace9a3d52', 'width': 1920}, 'variants': {}}]}
AI app for proctored exams
1
[removed]
2025-01-16T21:37:33
https://i.redd.it/wuig1zehdfde1.jpeg
Gunplexityyy
i.redd.it
1970-01-01T00:00:00
0
{}
1i302v7
false
null
t3_1i302v7
/r/LocalLLaMA/comments/1i302v7/ai_app_for_proctored_exams/
false
false
https://a.thumbs.redditm…EYaI5UMaDMU8.jpg
1
{'enabled': True, 'images': [{'id': 'lnbZVkmeX9SKz9ozjLw8rXPwvDTWO_skib-n9JFOKic', 'resolutions': [{'height': 196, 'url': 'https://preview.redd.it/wuig1zehdfde1.jpeg?width=108&crop=smart&auto=webp&s=72e267180bb88d38f13cf7cda48b115b3d6c931a', 'width': 108}, {'height': 393, 'url': 'https://preview.redd.it/wuig1zehdfde1.jpeg?width=216&crop=smart&auto=webp&s=4bcee13d5a09ea5026a64d3fe3614bf7debcdbea', 'width': 216}, {'height': 583, 'url': 'https://preview.redd.it/wuig1zehdfde1.jpeg?width=320&crop=smart&auto=webp&s=79283476d8167998c5c1c29f2d275d037bb81003', 'width': 320}, {'height': 1166, 'url': 'https://preview.redd.it/wuig1zehdfde1.jpeg?width=640&crop=smart&auto=webp&s=91c660d6ca86f2ffb02c1a5299990bc046c0b0ac', 'width': 640}, {'height': 1749, 'url': 'https://preview.redd.it/wuig1zehdfde1.jpeg?width=960&crop=smart&auto=webp&s=960d121540d7c457f6e95a0eef5067e0e0d0ca39', 'width': 960}, {'height': 1968, 'url': 'https://preview.redd.it/wuig1zehdfde1.jpeg?width=1080&crop=smart&auto=webp&s=f9f7386dd9a5052d6f1440c32722476bf775bcb6', 'width': 1080}], 'source': {'height': 1968, 'url': 'https://preview.redd.it/wuig1zehdfde1.jpeg?auto=webp&s=8236038a8e4261c14a5b0c7f2d2e9b00b3f0739e', 'width': 1080}, 'variants': {}}]}
What LLMs Do You Recommend For an RTX 2060 (6 GB) For Roleplay?
7
I'm interested in SFW, NSFW, and even NSFL roleplay.
2025-01-16T21:48:24
https://www.reddit.com/r/LocalLLaMA/comments/1i30bjx/what_llms_do_you_recommend_for_a_rtx_2060_6_gb/
AdvertisingOk6742
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i30bjx
false
null
t3_1i30bjx
/r/LocalLLaMA/comments/1i30bjx/what_llms_do_you_recommend_for_a_rtx_2060_6_gb/
false
false
self
7
null
Optimizing OLLAMA with LLAMA 3.3 and QWEN
1
[removed]
2025-01-16T21:58:04
https://www.reddit.com/r/LocalLLaMA/comments/1i30ja9/optimizing_ollama_with_llama_33_and_qwen/
Impossible_Jello_129
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i30ja9
false
null
t3_1i30ja9
/r/LocalLLaMA/comments/1i30ja9/optimizing_ollama_with_llama_33_and_qwen/
false
false
self
1
null
when I ship my AI app to prod without guardrails
1
2025-01-16T22:01:56
https://i.redd.it/4qlqawkthfde1.png
vellum_ai_dev
i.redd.it
1970-01-01T00:00:00
0
{}
1i30mmk
false
null
t3_1i30mmk
/r/LocalLLaMA/comments/1i30mmk/when_i_ship_my_ai_app_to_prod_without_guardrails/
false
false
default
1
{'enabled': True, 'images': [{'id': '4qlqawkthfde1', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/4qlqawkthfde1.png?width=108&crop=smart&auto=webp&s=ccd595ce8affd83ef9cf777bb3a849ac047b54ab', 'width': 108}, {'height': 269, 'url': 'https://preview.redd.it/4qlqawkthfde1.png?width=216&crop=smart&auto=webp&s=fe6b7f42912533bc91cf4f6fffa9a0e04cc98381', 'width': 216}, {'height': 399, 'url': 'https://preview.redd.it/4qlqawkthfde1.png?width=320&crop=smart&auto=webp&s=085550bd13e81e1239bf29a3567e8ee09ad078b6', 'width': 320}, {'height': 799, 'url': 'https://preview.redd.it/4qlqawkthfde1.png?width=640&crop=smart&auto=webp&s=ff96432a3bd38c13b39d100228a214dff1d09058', 'width': 640}, {'height': 1199, 'url': 'https://preview.redd.it/4qlqawkthfde1.png?width=960&crop=smart&auto=webp&s=30dac0ca09876c47c925ecb4cfbec79d00ee4720', 'width': 960}], 'source': {'height': 1209, 'url': 'https://preview.redd.it/4qlqawkthfde1.png?auto=webp&s=ec6c9ed9fa524348bb2d9aaebe707ad028204313', 'width': 968}, 'variants': {}}]}
Blogger: User Profile: God entered into my body, like a body. my same size. this is holy ghost baptism. my name is Bob Hickman
1
2025-01-16T22:02:07
https://www.blogger.com/profile/17341363441235422222
Which_Cheek_3214
blogger.com
1970-01-01T00:00:00
0
{}
1i30ms5
false
null
t3_1i30ms5
/r/LocalLLaMA/comments/1i30ms5/blogger_user_profile_god_entered_into_my_body/
false
false
default
1
null
Where do people get news about upcoming LLM releases?
44
I'm curious how people stay up to date on upcoming LLM releases, especially models that haven't come out yet. Are there specific websites, forums, newsletters, or communities you follow to learn about this kind of thing?
2025-01-16T22:17:01
https://www.reddit.com/r/LocalLLaMA/comments/1i30yy4/where_do_people_get_news_about_upcoming_llm/
gamblingapocalypse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i30yy4
false
null
t3_1i30yy4
/r/LocalLLaMA/comments/1i30yy4/where_do_people_get_news_about_upcoming_llm/
false
false
self
44
null
Looking for an alternative to cursor tab (inline code completions)
2
As mentioned in the title, I'm looking for inline code completion AI tools, ideally something I could use with my own API key. GitHub Copilot is counterproductive, Supermaven is paid, and Codeium wasn't great. I'm looking for something like Cursor Tab, though I know its speed and efficiency can't be replicated by any other product; anything comparable would be nice.
2025-01-16T22:18:15
https://www.reddit.com/r/LocalLLaMA/comments/1i30zxq/looking_for_an_alternative_to_cursor_tab_inline/
retarDEYd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i30zxq
false
null
t3_1i30zxq
/r/LocalLLaMA/comments/1i30zxq/looking_for_an_alternative_to_cursor_tab_inline/
false
false
self
2
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:19:44
https://www.reddit.com/r/LocalLLaMA/comments/1i31166/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31166
false
null
t3_1i31166
/r/LocalLLaMA/comments/1i31166/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:20:02
https://www.reddit.com/r/LocalLLaMA/comments/1i311fl/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i311fl
false
null
t3_1i311fl
/r/LocalLLaMA/comments/1i311fl/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:20:21
https://www.reddit.com/r/LocalLLaMA/comments/1i311pg/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i311pg
false
null
t3_1i311pg
/r/LocalLLaMA/comments/1i311pg/common_myths_about_private_browsing/
false
false
self
1
null
4
1
[removed]
2025-01-16T22:20:39
https://www.reddit.com/r/LocalLLaMA/comments/1i311y3/4/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i311y3
false
null
t3_1i311y3
/r/LocalLLaMA/comments/1i311y3/4/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:21:04
https://www.reddit.com/r/LocalLLaMA/comments/1i312ag/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i312ag
false
null
t3_1i312ag
/r/LocalLLaMA/comments/1i312ag/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:21:22
https://www.reddit.com/r/LocalLLaMA/comments/1i312j3/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i312j3
false
null
t3_1i312j3
/r/LocalLLaMA/comments/1i312j3/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:21:40
https://www.reddit.com/r/LocalLLaMA/comments/1i312rx/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i312rx
false
null
t3_1i312rx
/r/LocalLLaMA/comments/1i312rx/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:21:59
https://www.reddit.com/r/LocalLLaMA/comments/1i3131r/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3131r
false
null
t3_1i3131r
/r/LocalLLaMA/comments/1i3131r/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:22:17
https://www.reddit.com/r/LocalLLaMA/comments/1i313ak/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i313ak
false
null
t3_1i313ak
/r/LocalLLaMA/comments/1i313ak/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:22:35
https://www.reddit.com/r/LocalLLaMA/comments/1i313jx/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i313jx
false
null
t3_1i313jx
/r/LocalLLaMA/comments/1i313jx/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:22:54
https://www.reddit.com/r/LocalLLaMA/comments/1i313t7/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i313t7
false
null
t3_1i313t7
/r/LocalLLaMA/comments/1i313t7/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:23:12
https://www.reddit.com/r/LocalLLaMA/comments/1i31428/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31428
false
null
t3_1i31428
/r/LocalLLaMA/comments/1i31428/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:23:30
https://www.reddit.com/r/LocalLLaMA/comments/1i314aq/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i314aq
false
null
t3_1i314aq
/r/LocalLLaMA/comments/1i314aq/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:23:48
https://www.reddit.com/r/LocalLLaMA/comments/1i314jo/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i314jo
false
null
t3_1i314jo
/r/LocalLLaMA/comments/1i314jo/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:24:07
https://www.reddit.com/r/LocalLLaMA/comments/1i314sl/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i314sl
false
null
t3_1i314sl
/r/LocalLLaMA/comments/1i314sl/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:24:26
https://www.reddit.com/r/LocalLLaMA/comments/1i31521/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31521
false
null
t3_1i31521
/r/LocalLLaMA/comments/1i31521/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:24:43
https://www.reddit.com/r/LocalLLaMA/comments/1i315bf/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i315bf
false
null
t3_1i315bf
/r/LocalLLaMA/comments/1i315bf/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:25:02
https://www.reddit.com/r/LocalLLaMA/comments/1i315jq/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i315jq
false
null
t3_1i315jq
/r/LocalLLaMA/comments/1i315jq/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:25:20
https://www.reddit.com/r/LocalLLaMA/comments/1i315sh/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i315sh
false
null
t3_1i315sh
/r/LocalLLaMA/comments/1i315sh/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:25:38
https://www.reddit.com/r/LocalLLaMA/comments/1i3161o/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3161o
false
null
t3_1i3161o
/r/LocalLLaMA/comments/1i3161o/common_myths_about_private_browsing/
false
false
self
1
null
Common Myths about Private Browsing
1
[removed]
2025-01-16T22:25:56
https://www.reddit.com/r/LocalLLaMA/comments/1i3169k/common_myths_about_private_browsing/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3169k
false
null
t3_1i3169k
/r/LocalLLaMA/comments/1i3169k/common_myths_about_private_browsing/
false
false
self
1
null
Lift Yourself
1
[removed]
2025-01-16T22:28:25
https://www.reddit.com/r/LocalLLaMA/comments/1i3189z/lift_yourself/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i3189z
false
null
t3_1i3189z
/r/LocalLLaMA/comments/1i3189z/lift_yourself/
false
false
self
1
null
Lift Yourself
1
[removed]
2025-01-16T22:30:40
https://www.reddit.com/r/LocalLLaMA/comments/1i31a64/lift_yourself/
input_output_stream3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31a64
false
null
t3_1i31a64
/r/LocalLLaMA/comments/1i31a64/lift_yourself/
false
false
self
1
null
Lift Yourself
1
[removed]
2025-01-16T22:33:25
https://www.reddit.com/r/LocalLLaMA/comments/1i31cey/lift_yourself/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31cey
false
null
t3_1i31cey
/r/LocalLLaMA/comments/1i31cey/lift_yourself/
false
false
self
1
null
Lift Yourself
1
[removed]
2025-01-16T22:35:49
https://www.reddit.com/r/LocalLLaMA/comments/1i31ea8/lift_yourself/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31ea8
false
null
t3_1i31ea8
/r/LocalLLaMA/comments/1i31ea8/lift_yourself/
false
false
self
1
null
Lift Yourself
1
[removed]
2025-01-16T22:36:13
https://www.reddit.com/r/LocalLLaMA/comments/1i31el6/lift_yourself/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31el6
false
null
t3_1i31el6
/r/LocalLLaMA/comments/1i31el6/lift_yourself/
false
false
self
1
null
Lift Yourself
1
[removed]
2025-01-16T22:36:34
https://www.reddit.com/r/LocalLLaMA/comments/1i31euz/lift_yourself/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31euz
false
null
t3_1i31euz
/r/LocalLLaMA/comments/1i31euz/lift_yourself/
false
false
self
1
null
Lift Yourself
1
[removed]
2025-01-16T22:36:57
https://www.reddit.com/r/LocalLLaMA/comments/1i31f5l/lift_yourself/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31f5l
false
null
t3_1i31f5l
/r/LocalLLaMA/comments/1i31f5l/lift_yourself/
false
false
self
1
null
Lift Yourself
1
[removed]
2025-01-16T22:37:18
https://www.reddit.com/r/LocalLLaMA/comments/1i31ffx/lift_yourself/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31ffx
false
null
t3_1i31ffx
/r/LocalLLaMA/comments/1i31ffx/lift_yourself/
false
false
self
1
null
Lift Yourself
1
[removed]
2025-01-16T22:37:40
https://www.reddit.com/r/LocalLLaMA/comments/1i31fpy/lift_yourself/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31fpy
false
null
t3_1i31fpy
/r/LocalLLaMA/comments/1i31fpy/lift_yourself/
false
false
self
1
null
Lift Yourself
1
[removed]
2025-01-16T22:38:02
https://www.reddit.com/r/LocalLLaMA/comments/1i31fzd/lift_yourself/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31fzd
false
null
t3_1i31fzd
/r/LocalLLaMA/comments/1i31fzd/lift_yourself/
false
false
self
1
null
Lift Yourself
1
[removed]
2025-01-16T22:41:26
https://www.reddit.com/r/LocalLLaMA/comments/1i31ip0/lift_yourself/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31ip0
false
null
t3_1i31ip0
/r/LocalLLaMA/comments/1i31ip0/lift_yourself/
false
false
self
1
null
What is ElevenLabs doing? How is it so good?
393
Basically the title. What's their trick? On everything but voice, local models are pretty good for what they are, but ElevenLabs just blows everyone out of the water. Is it a full Transformer? Some sort of diffusion model? Do they model human anatomy to add accuracy to the model?
2025-01-16T22:42:26
https://www.reddit.com/r/LocalLLaMA/comments/1i31ji5/what_is_elevenlabs_doing_how_is_it_so_good/
Independent_Aside225
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31ji5
false
null
t3_1i31ji5
/r/LocalLLaMA/comments/1i31ji5/what_is_elevenlabs_doing_how_is_it_so_good/
false
false
self
393
null
Lift Yourself
1
[removed]
2025-01-16T22:50:51
https://www.reddit.com/r/LocalLLaMA/comments/1i31q6x/lift_yourself/
UpsetApplication9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1i31q6x
false
null
t3_1i31q6x
/r/LocalLLaMA/comments/1i31q6x/lift_yourself/
false
false
self
1
null