title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
40k
created
timestamp[ns]date
2023-04-01 04:30:41
2025-06-30 03:16:29
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2025-06-26 17:30:18
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
Quick Test of Phi-4
0
2025-01-09T20:04:48
https://i.redd.it/owf41oehy0ce1.png
Financial_Counter199
i.redd.it
1970-01-01T00:00:00
0
{}
1hxltp2
false
null
t3_1hxltp2
/r/LocalLLaMA/comments/1hxltp2/quick_test_of_phi4/
false
false
https://b.thumbs.redditm…nbC2N8F4xkWE.jpg
0
{'enabled': True, 'images': [{'id': 'pxhUM188PkoEORzI1vixQ9EA12DajK6GnjEsSSBIhVc', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/owf41oehy0ce1.png?width=108&crop=smart&auto=webp&s=e0583fb625ae9c815174d911a45108c6b65829f1', 'width': 108}, {'height': 56, 'url': 'https://preview.redd.it/owf41oehy0ce1.png?width=216&crop=smart&auto=webp&s=880e540a2e78a16ae85b36433bad482610b0e2d2', 'width': 216}, {'height': 83, 'url': 'https://preview.redd.it/owf41oehy0ce1.png?width=320&crop=smart&auto=webp&s=4ef6cd47e976c740140220e03ad1047736505d8b', 'width': 320}, {'height': 166, 'url': 'https://preview.redd.it/owf41oehy0ce1.png?width=640&crop=smart&auto=webp&s=e448fafb128b5a260307d5337d28c6d6e49ff5cc', 'width': 640}, {'height': 249, 'url': 'https://preview.redd.it/owf41oehy0ce1.png?width=960&crop=smart&auto=webp&s=4c3764bed9118f269012765800af2e9174d51a7b', 'width': 960}, {'height': 280, 'url': 'https://preview.redd.it/owf41oehy0ce1.png?width=1080&crop=smart&auto=webp&s=15f57d3c9edfc2e7e818c119987d1181d828308b', 'width': 1080}], 'source': {'height': 552, 'url': 'https://preview.redd.it/owf41oehy0ce1.png?auto=webp&s=dbbb239c894f82e3e5a486602735e3eceafcb4d7', 'width': 2122}, 'variants': {}}]}
Anyone want the script to run Moondream 2b's new gaze detection on any video?
1,250
2025-01-09T20:12:41
https://v.redd.it/n9beslavz0ce1
ParsaKhaz
v.redd.it
1970-01-01T00:00:00
0
{}
1hxm0ep
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/n9beslavz0ce1/DASHPlaylist.mpd?a=1739045576%2CZmQ4NDJmNGJiMTE0NzY4MTcyZTNjMWViZDdlMDVhZTQ3YzQ3M2RjZmFjMTVmNjVmMDY4M2EyYWEwYTcyYzgwOA%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/n9beslavz0ce1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/n9beslavz0ce1/HLSPlaylist.m3u8?a=1739045576%2COTIwNzFkYTFlNjA4MjlhMjQ3ZGVhZTZjNzIxY2JkYzJmZWUwNWUzMTc4YTc0M2U4ZTI1Y2M5Y2I0OTJjMDlhNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/n9beslavz0ce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1hxm0ep
/r/LocalLLaMA/comments/1hxm0ep/anyone_want_the_script_to_run_moondream_2bs_new/
false
false
https://external-preview…b8582b689d115c08
1,250
{'enabled': False, 'images': [{'id': 'cmk5cnZsYXZ6MGNlMeEGpTWo5MaI3KFBwDeey6o_wri3pXWzYnC4YTD3TTIr', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cmk5cnZsYXZ6MGNlMeEGpTWo5MaI3KFBwDeey6o_wri3pXWzYnC4YTD3TTIr.png?width=108&crop=smart&format=pjpg&auto=webp&s=24062d9a9724362dae1d0d3ed5fa8084f3e0e3a8', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cmk5cnZsYXZ6MGNlMeEGpTWo5MaI3KFBwDeey6o_wri3pXWzYnC4YTD3TTIr.png?width=216&crop=smart&format=pjpg&auto=webp&s=e157fe291b49b1c06f6df8402a92e8ab2be610cf', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cmk5cnZsYXZ6MGNlMeEGpTWo5MaI3KFBwDeey6o_wri3pXWzYnC4YTD3TTIr.png?width=320&crop=smart&format=pjpg&auto=webp&s=a913e379642754f698221b7a1451724f2c505e94', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cmk5cnZsYXZ6MGNlMeEGpTWo5MaI3KFBwDeey6o_wri3pXWzYnC4YTD3TTIr.png?width=640&crop=smart&format=pjpg&auto=webp&s=c20246067659cd768e4a6b1e1a03029847094d30', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cmk5cnZsYXZ6MGNlMeEGpTWo5MaI3KFBwDeey6o_wri3pXWzYnC4YTD3TTIr.png?width=960&crop=smart&format=pjpg&auto=webp&s=b328091b9d0533cb4707f948ffbd33e4530a4083', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cmk5cnZsYXZ6MGNlMeEGpTWo5MaI3KFBwDeey6o_wri3pXWzYnC4YTD3TTIr.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b6efbefb2a9c0ae59762e0380924b49f89f814e6', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/cmk5cnZsYXZ6MGNlMeEGpTWo5MaI3KFBwDeey6o_wri3pXWzYnC4YTD3TTIr.png?format=pjpg&auto=webp&s=2dc4b2949e6ea79653bb62959184ea9078bd0a94', 'width': 1280}, 'variants': {}}]}
Automated meeting transcript processing
1
[removed]
2025-01-09T20:26:50
https://www.reddit.com/r/LocalLLaMA/comments/1hxmcd0/automated_meeting_transcript_processing/
automaticSteve
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxmcd0
false
null
t3_1hxmcd0
/r/LocalLLaMA/comments/1hxmcd0/automated_meeting_transcript_processing/
false
false
self
1
null
4060 Ti 16gb vs A4000 for LLM tasks
2
Assuming I am mostly offloading layers to the GPU and running at \~Q6, is there likely to be a serious performance difference between a 4060 Ti 16gb and a A4000 16gb? I see aggregate memory bandwidth is higher in the A4000 (448 vs 288GB/s), floating point performance is mildly higher on the 4060.
2025-01-09T20:48:46
https://www.reddit.com/r/LocalLLaMA/comments/1hxmux2/4060_ti_16gb_vs_a4000_for_llm_tasks/
theslonkingdead
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxmux2
false
null
t3_1hxmux2
/r/LocalLLaMA/comments/1hxmux2/4060_ti_16gb_vs_a4000_for_llm_tasks/
false
false
self
2
null
txtai 8.2 released: Simplified LLM messages, Graph RAG attribute filters and multi-CPU/GPU vector encoding
1
[removed]
2025-01-09T21:11:43
[deleted]
1970-01-01T00:00:00
0
{}
1hxneac
false
null
t3_1hxneac
/r/LocalLLaMA/comments/1hxneac/txtai_82_released_simplified_llm_messages_graph/
false
false
default
1
null
txtai 8.2 released: Simplified LLM messages, Graph RAG attribute filters and multi-CPU/GPU vector encoding
1
2025-01-09T21:15:12
https://i.redd.it/5yls5f73b1ce1.png
davidmezzetti
i.redd.it
1970-01-01T00:00:00
0
{}
1hxnh7o
false
null
t3_1hxnh7o
/r/LocalLLaMA/comments/1hxnh7o/txtai_82_released_simplified_llm_messages_graph/
false
false
https://b.thumbs.redditm…zN7_6qEuJdvo.jpg
1
{'enabled': True, 'images': [{'id': 'LtYBIrF7I0nqVzffRwWTWQVlSWcNu73ZWgqfgGuFsAE', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/5yls5f73b1ce1.png?width=108&crop=smart&auto=webp&s=67564d8ac1ffaa8334573a44d21674a8de1fb4c1', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/5yls5f73b1ce1.png?width=216&crop=smart&auto=webp&s=559ddaab22932eeb8f753aecb2a59a4305e61bf8', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/5yls5f73b1ce1.png?width=320&crop=smart&auto=webp&s=782a0ed126e3b9278800366bb06ba9bf411cf1ba', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/5yls5f73b1ce1.png?width=640&crop=smart&auto=webp&s=8a3a0d8bbf0ea613c4414df3e9d2a87a1cf2629e', 'width': 640}, {'height': 480, 'url': 'https://preview.redd.it/5yls5f73b1ce1.png?width=960&crop=smart&auto=webp&s=df7807aebf60c57039ea1487f9379bad601c44f2', 'width': 960}, {'height': 540, 'url': 'https://preview.redd.it/5yls5f73b1ce1.png?width=1080&crop=smart&auto=webp&s=912049d6d4642f373f8f6035e9cbd3724cac42a9', 'width': 1080}], 'source': {'height': 960, 'url': 'https://preview.redd.it/5yls5f73b1ce1.png?auto=webp&s=d4042cf738301ed9d022c90577a49473c18d965d', 'width': 1920}, 'variants': {}}]}
Will local LLMs ever be able to search the Internet?
0
Sorry, I know this is a really silly question, I'm a big dumb dumb when it comes to AI. In all honesty I don't know how ChatGPT and Copilot do it as fast as they do.
2025-01-09T21:55:46
https://www.reddit.com/r/LocalLLaMA/comments/1hxoemw/will_local_llms_ever_be_able_to_search_the/
Due_Recognition_3890
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxoemw
false
null
t3_1hxoemw
/r/LocalLLaMA/comments/1hxoemw/will_local_llms_ever_be_able_to_search_the/
false
false
self
0
null
How can I make an AI writing assistant that captures individual writing styles?
1
[removed]
2025-01-09T21:58:30
https://www.reddit.com/r/LocalLLaMA/comments/1hxogw8/how_can_i_make_an_ai_writing_assistant_that/
ldemesla_eth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxogw8
false
null
t3_1hxogw8
/r/LocalLLaMA/comments/1hxogw8/how_can_i_make_an_ai_writing_assistant_that/
false
false
self
1
null
The Best Embedding Models for Information Retrieval in 2025
45
2025-01-09T21:58:49
https://www.datastax.com/blog/best-embedding-models-information-retrieval-2025
philnash
datastax.com
1970-01-01T00:00:00
0
{}
1hxoh4q
false
null
t3_1hxoh4q
/r/LocalLLaMA/comments/1hxoh4q/the_best_embedding_models_for_information/
false
false
https://b.thumbs.redditm…TiQpn15x69jk.jpg
45
{'enabled': False, 'images': [{'id': 'Ey-8Ge4FDk1YG_raJ5OqM_321bvxckBMqRLQbQgc4_g', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/VnuuZheactI7QPo2JPvHBZE9r-qgyUUp4UDj8IpsDuI.jpg?width=108&crop=smart&auto=webp&s=0bcdb89d5cfc9a490bb9599d1b9d8432166072dc', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/VnuuZheactI7QPo2JPvHBZE9r-qgyUUp4UDj8IpsDuI.jpg?width=216&crop=smart&auto=webp&s=a1f70938693b7e439ea4c42b7e4873c3e5ed9b80', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/VnuuZheactI7QPo2JPvHBZE9r-qgyUUp4UDj8IpsDuI.jpg?width=320&crop=smart&auto=webp&s=439769305405b70fbe7b29d0111079cfe35a6cc5', 'width': 320}, {'height': 424, 'url': 'https://external-preview.redd.it/VnuuZheactI7QPo2JPvHBZE9r-qgyUUp4UDj8IpsDuI.jpg?width=640&crop=smart&auto=webp&s=7b8b3d34b72d1de75ead7b836c4ed7867bf2f4eb', 'width': 640}, {'height': 636, 'url': 'https://external-preview.redd.it/VnuuZheactI7QPo2JPvHBZE9r-qgyUUp4UDj8IpsDuI.jpg?width=960&crop=smart&auto=webp&s=21d3e46dd7ebfa209c6e9980fa85e3643d1786aa', 'width': 960}, {'height': 716, 'url': 'https://external-preview.redd.it/VnuuZheactI7QPo2JPvHBZE9r-qgyUUp4UDj8IpsDuI.jpg?width=1080&crop=smart&auto=webp&s=8098eac62b084f826b215007f241226734ed79b3', 'width': 1080}], 'source': {'height': 968, 'url': 'https://external-preview.redd.it/VnuuZheactI7QPo2JPvHBZE9r-qgyUUp4UDj8IpsDuI.jpg?auto=webp&s=b1d2c20ce9fc7a3d677add38f9185c734d9c313c', 'width': 1460}, 'variants': {}}]}
LMStudio - SentinelOne Malicious
1
[removed]
2025-01-09T22:05:35
https://www.reddit.com/r/LocalLLaMA/comments/1hxomwi/lmstudio_sentinelone_malicious/
BSG_Planet_76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxomwi
false
null
t3_1hxomwi
/r/LocalLLaMA/comments/1hxomwi/lmstudio_sentinelone_malicious/
false
false
https://b.thumbs.redditm…NeupyuMqKz-c.jpg
1
null
VRAM not being fully utilized on multi-GPU setup
1
Hey so I have a 3080 and a 2080 super, with 18gb combined VRAM. I notice that each GPU's VRAM may only be up to 60-80% utilized when "ollama ps" starts showing a CPU utilization over 0% and tokens/second tanks. I am not using either GPU for any other purpose, I even have my monitor plugged directly into the motherboard to utilize the intel iGPU. Is this typical, or a limitation of my specific GPU setup? If not what are some things I can do to troubleshoot the issue. Thanks for any advice
2025-01-09T22:12:15
https://www.reddit.com/r/LocalLLaMA/comments/1hxosc8/vram_not_being_fully_utilized_on_multigpu_setup/
tengo_harambe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxosc8
false
null
t3_1hxosc8
/r/LocalLLaMA/comments/1hxosc8/vram_not_being_fully_utilized_on_multigpu_setup/
false
false
self
1
null
Dir-assistant 1.2.0 release announcement
14
**Dir-assistant: Chat with your current directory's files using a local or API LLM.** This is the first minor release I've pushed up in a few months and it is adds a major feature that I've been looking forward to for a number of months. Previously you could only use local open source models for generating the embedding vectors used by the RAG, but now I've added the ability to now use API embeddings. Compared to local embedding models, API embedding models are generally higher quality and will improve RAG lookup quality if used, which will further improve context generation and thus results. [https://github.com/curvedinf/dir-assistant/](https://github.com/curvedinf/dir-assistant/) https://i.redd.it/biml6z2on1ce1.gif
2025-01-09T22:27:36
https://www.reddit.com/r/LocalLLaMA/comments/1hxp4ts/dirassistant_120_release_announcement/
1ncehost
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxp4ts
false
null
t3_1hxp4ts
/r/LocalLLaMA/comments/1hxp4ts/dirassistant_120_release_announcement/
false
false
https://b.thumbs.redditm…fycTLeqx8lps.jpg
14
{'enabled': False, 'images': [{'id': 'rjqo6EMfPP4amnwWEQk-qMMuDX92SPaJc9wyHY1LGOs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cbCrbPfInObUjKiPPu0cM8HUQ3itne_1HWLAeqIxB-0.jpg?width=108&crop=smart&auto=webp&s=f5b966940fbe04d79aa416208fd19f9ca76d0df9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cbCrbPfInObUjKiPPu0cM8HUQ3itne_1HWLAeqIxB-0.jpg?width=216&crop=smart&auto=webp&s=686b1b01acc0ab743697e193977cdcb6ab91bcc4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cbCrbPfInObUjKiPPu0cM8HUQ3itne_1HWLAeqIxB-0.jpg?width=320&crop=smart&auto=webp&s=5dc1f5ff598c72be5b0aa5d05a3ca1e57164a387', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cbCrbPfInObUjKiPPu0cM8HUQ3itne_1HWLAeqIxB-0.jpg?width=640&crop=smart&auto=webp&s=6ed377ba64d4878334b6daad2e0ddda96ed951b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cbCrbPfInObUjKiPPu0cM8HUQ3itne_1HWLAeqIxB-0.jpg?width=960&crop=smart&auto=webp&s=357c948dc1446641745ed780deefc71a03f730e2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cbCrbPfInObUjKiPPu0cM8HUQ3itne_1HWLAeqIxB-0.jpg?width=1080&crop=smart&auto=webp&s=77cbc04ea2ca7cb0e2b8ae229f42d7e6d7de845e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cbCrbPfInObUjKiPPu0cM8HUQ3itne_1HWLAeqIxB-0.jpg?auto=webp&s=6b5f488151f7d78356af3fc0b2585e5d1ddc8354', 'width': 1200}, 'variants': {}}]}
Phi-3.5-MoE support merged into llama.cpp
105
2025-01-09T22:45:29
https://github.com/ggerganov/llama.cpp/pull/11003
skeeto
github.com
1970-01-01T00:00:00
0
{}
1hxpjey
false
null
t3_1hxpjey
/r/LocalLLaMA/comments/1hxpjey/phi35moe_support_merged_into_llamacpp/
false
false
https://b.thumbs.redditm…ZIUoW2LGirPQ.jpg
105
{'enabled': False, 'images': [{'id': '9MBIO2jkd4Kml_7x09Cw6UAPks4j4rJWGEHCOljsgOQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/m74SPCYv3b2-Etc8gMsq03I8DcmHrV56tfhuw4oGpHM.jpg?width=108&crop=smart&auto=webp&s=5aea1b5f41c056bc494e3761459382451c596fb4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/m74SPCYv3b2-Etc8gMsq03I8DcmHrV56tfhuw4oGpHM.jpg?width=216&crop=smart&auto=webp&s=f85d6b416d4610c28ca0326a07d7d6e8a9251bdf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/m74SPCYv3b2-Etc8gMsq03I8DcmHrV56tfhuw4oGpHM.jpg?width=320&crop=smart&auto=webp&s=992167142886d196eaf5ee116f24b3f60acb51ab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/m74SPCYv3b2-Etc8gMsq03I8DcmHrV56tfhuw4oGpHM.jpg?width=640&crop=smart&auto=webp&s=e5aef7783c2f074e3447066229f618ac46fcef97', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/m74SPCYv3b2-Etc8gMsq03I8DcmHrV56tfhuw4oGpHM.jpg?width=960&crop=smart&auto=webp&s=3eeac423d7c07e87a89ac09b74a6c8ddd57a93be', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/m74SPCYv3b2-Etc8gMsq03I8DcmHrV56tfhuw4oGpHM.jpg?width=1080&crop=smart&auto=webp&s=36633980fbee1abb00eca179ed14d39492893610', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/m74SPCYv3b2-Etc8gMsq03I8DcmHrV56tfhuw4oGpHM.jpg?auto=webp&s=3d77222c6883927f046ca29c91840aca2f7263f2', 'width': 1200}, 'variants': {}}]}
Transcribing audio to LaTeX?
3
I teach university mathematics courses and will be recording live classes to post online for students. I have an idea and am wondering how feasible it is. I'm currently running Ollama and WebUI on an M1 Ultra Mac Studio with 128GB RAM, but have never done anything other than text with it. Could I strip the audio from my lectures (each video will be 10-30 mins) and have a local model transcribe the audio into LaTeX? That is, generate a separate LaTeX document for, say, each video/audio file with appropriate use of math mode, formula formatting, etc.? In addition, sometimes I ramble, use filler words (uh, um), and tell personal anecdotes. Is it possible to have the model (or more likely a second model focused on this task) clean up the transcribed speech so it reads more like a textbook and less like a lunatic's stream of consciousness? I teach theory heavy courses, but also Calculus, so in order to not dox myself I'll use a Calc example. I may explain the definition of the derivative, ask the class what "h" stands for, confirm they're correct or correct them if not, then give an example of how to find the derivative of a function using the definition, walking through each step and maybe stop to tell a joke about the dean to add some levity. I'd like the transcribed notes to concisely explain the definition, what h stands for, and give a worked out example, while keeping the dean out of it. I'd like the same for theorems, lemmas, proofs, etc. Is this possible, or am I expecting too much?
2025-01-09T22:56:10
https://www.reddit.com/r/LocalLLaMA/comments/1hxps2y/transcribing_audio_to_latex/
AerosolHubris
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxps2y
false
null
t3_1hxps2y
/r/LocalLLaMA/comments/1hxps2y/transcribing_audio_to_latex/
false
false
self
3
null
Has anyone tried SillyTavern on m3 pro?
1
[removed]
2025-01-09T22:58:33
https://www.reddit.com/r/LocalLLaMA/comments/1hxpu24/has_anyone_tried_sillytavern_on_m3_pro/
depresso-developer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxpu24
false
null
t3_1hxpu24
/r/LocalLLaMA/comments/1hxpu24/has_anyone_tried_sillytavern_on_m3_pro/
false
false
self
1
null
RTX 4090 48GB - $4700 on eBay. Is it legit?
94
I just came across this listing on eBay: [https://www.ebay.com/itm/226494741895](https://www.ebay.com/itm/226494741895) It is listing dual slot RTX 4090 48GB for $4700. I thought 48GB were not manufactured. Is it legit? Screenshot here if it gets lost. [RTX 4090 48GB for $4700!](https://preview.redd.it/0ggim0y1v1ce1.png?width=3302&format=png&auto=webp&s=51e939b07a3a8b3bf9e6cebd0e5fe4822664f985) I found out in this post (https://github.com/ggerganov/llama.cpp/discussions/9193) that one could buy it for \~$3500. I think RTX 4090 48GB would sell instantly if it was $3k.
2025-01-09T23:09:20
https://www.reddit.com/r/LocalLLaMA/comments/1hxq364/rtx_4090_48gb_4700_on_ebay_is_it_legit/
MLDataScientist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxq364
false
null
t3_1hxq364
/r/LocalLLaMA/comments/1hxq364/rtx_4090_48gb_4700_on_ebay_is_it_legit/
false
false
https://b.thumbs.redditm…pWX7jVeUmLPM.jpg
94
{'enabled': False, 'images': [{'id': 'JvmFkzF7IKVY0vV7SlowH8LK0S5pW-ot65zkULM_JW0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/bMoZdiDXs9kYJMXSF2LPqzveI23P9l8lgEc9DnQ7Gg8.jpg?width=108&crop=smart&auto=webp&s=8c4480876fc95253b5b78f6de737d37b536ce0b4', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/bMoZdiDXs9kYJMXSF2LPqzveI23P9l8lgEc9DnQ7Gg8.jpg?width=216&crop=smart&auto=webp&s=b72fc6575c8e39e7aa6cfcc9326518600c3571cd', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/bMoZdiDXs9kYJMXSF2LPqzveI23P9l8lgEc9DnQ7Gg8.jpg?width=320&crop=smart&auto=webp&s=9b4491fc39b4da2e3a9466072f17bde0fad92791', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/bMoZdiDXs9kYJMXSF2LPqzveI23P9l8lgEc9DnQ7Gg8.jpg?auto=webp&s=e0141c7c000374257c3fc4f92fbe2a27b6219a90', 'width': 400}, 'variants': {}}]}
Do LocalLLMs Do (Roo)Cline? Or whats the cooking recipe and what's your mileage?
1
Subj. I tried with small 3B models, and the whole experience was strange and did not produce much of anything. What was your experience or what alternatives do you suggest?
2025-01-09T23:10:53
https://www.reddit.com/r/LocalLLaMA/comments/1hxq4gl/do_localllms_do_roocline_or_whats_the_cooking/
ChubbyChubakka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxq4gl
false
null
t3_1hxq4gl
/r/LocalLLaMA/comments/1hxq4gl/do_localllms_do_roocline_or_whats_the_cooking/
false
false
self
1
null
Need help using DeepSeek API
2
I have a huge text or PDF file that I want to translate using the DeepSeek v3 API. How can I do that? Would I need to split the file into chunks, or is there a way to upload the entire file at once? Any tips or advice would be appreciated. Thank you!
2025-01-09T23:22:19
https://www.reddit.com/r/LocalLLaMA/comments/1hxqdhr/need_help_using_deepseek_api/
murdafeelin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxqdhr
false
null
t3_1hxqdhr
/r/LocalLLaMA/comments/1hxqdhr/need_help_using_deepseek_api/
false
false
self
2
null
Do you think you can find the correct function call ? I created yet another LLM challenge !
1
I am into LLMs Red Teaming those days a lot !! And I love playing CTFs ! If you're into those things too, come test your skills and solve this small challenge that I created [here](https://colab.research.google.com/drive/1Sjq2yYgzBvmd6_1HfRWV-RY_C1wF7V8U?usp=sharing) If you missed my previous challenge, check it [here](https://colab.research.google.com/drive/1I6iLgqiN5FBZdwfM8p16fF3MieQC7WFG?usp=sharing)
2025-01-09T23:38:01
https://www.reddit.com/r/LocalLLaMA/comments/1hxqpr6/do_you_think_you_can_find_the_correct_function/
Boring_Rabbit2275
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxqpr6
false
null
t3_1hxqpr6
/r/LocalLLaMA/comments/1hxqpr6/do_you_think_you_can_find_the_correct_function/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
What is your favorite model for fine-tuning?
12
Hey everyone, question for those who did fine-tuning on current models. What is your favorite so far and why do you like it more than others? For example, I'm reading often the Phi models aren't that bad, but often a bit "sterile". I wonder if it still would be a good base model for a fine-tuning? Did someone compare results between a tuned llama and phi for example? Thanks for your input!
2025-01-09T23:39:05
https://www.reddit.com/r/LocalLLaMA/comments/1hxqql3/what_is_your_favorite_model_for_finetuning/
Suitable-Name
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxqql3
false
null
t3_1hxqql3
/r/LocalLLaMA/comments/1hxqql3/what_is_your_favorite_model_for_finetuning/
false
false
self
12
null
txtai 8.2 released: Simplified LLM messages, Graph RAG attribute filters and multi-CPU/GPU vector encoding
1
2025-01-09T23:50:33
https://i.redd.it/mspcmz0t22ce1.jpeg
davidmezzetti
i.redd.it
1970-01-01T00:00:00
0
{}
1hxqzca
false
null
t3_1hxqzca
/r/LocalLLaMA/comments/1hxqzca/txtai_82_released_simplified_llm_messages_graph/
false
false
https://b.thumbs.redditm…xEUnArOJHOek.jpg
1
{'enabled': True, 'images': [{'id': 'mnBZfh9dhSgJog7x4dR8j_SVt40G_bogbvhmJWcAyrA', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/mspcmz0t22ce1.jpeg?width=108&crop=smart&auto=webp&s=1a6774860b330b12d22e983b6ac5522235105fbc', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/mspcmz0t22ce1.jpeg?width=216&crop=smart&auto=webp&s=48896f8e41e4c1275ea439c63a2e804c3d744f68', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/mspcmz0t22ce1.jpeg?width=320&crop=smart&auto=webp&s=becad8776e55fcd0885d51cbd19dad14e103169c', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/mspcmz0t22ce1.jpeg?width=640&crop=smart&auto=webp&s=cde63a2eb94c432005ca11659d77f8f83f1b9ab4', 'width': 640}, {'height': 480, 'url': 'https://preview.redd.it/mspcmz0t22ce1.jpeg?width=960&crop=smart&auto=webp&s=aa7079b33b95bf99ac11eaf2b844e803f106a630', 'width': 960}, {'height': 540, 'url': 'https://preview.redd.it/mspcmz0t22ce1.jpeg?width=1080&crop=smart&auto=webp&s=77add5f0b120ba08b88541b7e0908c94cf5aa93d', 'width': 1080}], 'source': {'height': 540, 'url': 'https://preview.redd.it/mspcmz0t22ce1.jpeg?auto=webp&s=3899736cef8f11d41b4f0ed46394651e6e36191d', 'width': 1080}, 'variants': {}}]}
txtai 8.2 released: Simplified LLM messages, Graph RAG attribute filters and multi-CPU/GPU vector encoding
1
[removed]
2025-01-09T23:52:21
[deleted]
1970-01-01T00:00:00
0
{}
1hxr0q8
false
null
t3_1hxr0q8
/r/LocalLLaMA/comments/1hxr0q8/txtai_82_released_simplified_llm_messages_graph/
false
false
default
1
null
How do i have LLaMa read from a .csv file?
0
I give up. I tried all i could but the best i could was concise, wrong answers I made a [simple code](https://github.com/MatthewLacerda2/Jarvis) where, if i pass a second argument, it checks if it's a valid path, and if it's a .csv, it reads, puts as string and sends to the llm But, if my file is big enough, and i'm talking 10KB, it's enough for the llm to go crazy. It just can't answer my questions like "how many rows this file has" or "how many people work in XYZ company?" I am sending the prompts as http post requests. No langChain or vector database. My target is to have a service where you can send your files and the llm will use rag to read the right file to answer your prompt. Hence why i need to read .csv, .xlsx and .pdf. I am reading .txt just fine Before i move to a vector-database, i wanna get this down. Which i think i have to, since even if i read from a database, if the result is very big, imma gonna have the same problems as of now
2025-01-10T00:53:34
https://www.reddit.com/r/LocalLLaMA/comments/1hxsadw/how_do_i_have_llama_read_from_a_csv_file/
Blender-Fan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxsadw
false
null
t3_1hxsadw
/r/LocalLLaMA/comments/1hxsadw/how_do_i_have_llama_read_from_a_csv_file/
false
false
self
0
{'enabled': False, 'images': [{'id': 'u51BlV0Uk6FhiQJ0jRwa_qLu9MGLq3TottClNOmpAhg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=108&crop=smart&auto=webp&s=ede581637dcafdb3321f9ae45278a65102e9c242', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=216&crop=smart&auto=webp&s=fcb4a69bae1ef79ae135be0bd55ec9acdb11bbe0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=320&crop=smart&auto=webp&s=b3bfeb5a6aaa54465fc08f005282536bec803a95', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=640&crop=smart&auto=webp&s=bea37c6251d1ea9f60deb05faf71b16b370baf3b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=960&crop=smart&auto=webp&s=e87c08662db1316edfd0dd57005dceb0a0353b77', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?width=1080&crop=smart&auto=webp&s=3725f0effe3b9d7d270f49468d6320d56601b5af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rEbqTXDPf4FFcIe-6LgmGmZcFOKGxzSp6RvuYN2FJlE.jpg?auto=webp&s=f479c5860afa64863ace123b7eb96175f7f2acf0', 'width': 1200}, 'variants': {}}]}
AI Companion
1
[removed]
2025-01-10T01:39:11
https://www.reddit.com/r/LocalLLaMA/comments/1hxt6l8/ai_companion/
Substantial_Way8470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxt6l8
false
null
t3_1hxt6l8
/r/LocalLLaMA/comments/1hxt6l8/ai_companion/
false
false
self
1
null
LLM (especially Llama) for Named-entity recognition
1
[removed]
2025-01-10T02:13:50
https://www.reddit.com/r/LocalLLaMA/comments/1hxtv5x/llm_especially_llama_for_namedentity_recognition/
khbjane
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxtv5x
false
null
t3_1hxtv5x
/r/LocalLLaMA/comments/1hxtv5x/llm_especially_llama_for_namedentity_recognition/
false
false
self
1
null
Is there a way to calculate a theoretical TPS for different infrastructure set? Like on project DIGITS?
1
2025-01-10T02:19:02
https://i.redd.it/gz69qn97t2ce1.png
Intelligent_Access19
i.redd.it
1970-01-01T00:00:00
0
{}
1hxtytt
false
null
t3_1hxtytt
/r/LocalLLaMA/comments/1hxtytt/is_there_a_way_to_calculate_a_theoretical_tps_for/
false
false
https://b.thumbs.redditm…dRZN0ETW_pQY.jpg
1
{'enabled': True, 'images': [{'id': 'B38jbQq6LhuVfEXwhqSnKRD_SLmdXt75QaaD9AKuNlQ', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/gz69qn97t2ce1.png?width=108&crop=smart&auto=webp&s=a98f6a6fe9a1b1b0d2446ea41b7f1726633ee408', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/gz69qn97t2ce1.png?width=216&crop=smart&auto=webp&s=7821ef663d80345c76f87c206d9ba93b9b5686ef', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/gz69qn97t2ce1.png?width=320&crop=smart&auto=webp&s=7df6c05b5e9c817587d2cdfa168405b5b336adaa', 'width': 320}, {'height': 357, 'url': 'https://preview.redd.it/gz69qn97t2ce1.png?width=640&crop=smart&auto=webp&s=01aeaff37c108aab927005226ddcf04e499faad2', 'width': 640}, {'height': 536, 'url': 'https://preview.redd.it/gz69qn97t2ce1.png?width=960&crop=smart&auto=webp&s=1bca42cfd64c932043ea3594eaedfb7175e9a1d8', 'width': 960}, {'height': 603, 'url': 'https://preview.redd.it/gz69qn97t2ce1.png?width=1080&crop=smart&auto=webp&s=d9fb82409acaf30eb1be27038fd222fe14baf79c', 'width': 1080}], 'source': {'height': 603, 'url': 'https://preview.redd.it/gz69qn97t2ce1.png?auto=webp&s=3706751aac0d4c5ece87b6186e13c49e2f9859d0', 'width': 1080}, 'variants': {}}]}
Open-source web crawler with markdown output?
14
I’m looking for an open-source web crawler that can recursively crawl a website (e.g., API documentation) and convert the content into markdown files. I'd like to use the markdown files in RAG applications. I've found [Crawl4AI](https://github.com/unclecode/crawl4ai) and [Firecrawl](https://www.firecrawl.dev), but I'd prefer a free TUI or GUI application.
2025-01-10T02:21:36
https://www.reddit.com/r/LocalLLaMA/comments/1hxu0om/opensource_web_crawler_with_markdown_output/
mxdamp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxu0om
false
null
t3_1hxu0om
/r/LocalLLaMA/comments/1hxu0om/opensource_web_crawler_with_markdown_output/
false
false
self
14
{'enabled': False, 'images': [{'id': 'B8IvvWMha9j442sOZECTVQSMhH5OH3bJkuEOCSN-pT4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kuDizVKTvwfAV-cQ2GhFSYopsj2GJJLtZe81mWQwdIk.jpg?width=108&crop=smart&auto=webp&s=6e77bbad9e2c44380a4ed5e7f514c34494818d82', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kuDizVKTvwfAV-cQ2GhFSYopsj2GJJLtZe81mWQwdIk.jpg?width=216&crop=smart&auto=webp&s=e67f98e594d50e5d765f638e93d2a99583535a49', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kuDizVKTvwfAV-cQ2GhFSYopsj2GJJLtZe81mWQwdIk.jpg?width=320&crop=smart&auto=webp&s=03226900b5a2f666887641639ee63be08000115e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kuDizVKTvwfAV-cQ2GhFSYopsj2GJJLtZe81mWQwdIk.jpg?width=640&crop=smart&auto=webp&s=156f8fb0f9f36fe3ec27e24d6adec23ca9523d3e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kuDizVKTvwfAV-cQ2GhFSYopsj2GJJLtZe81mWQwdIk.jpg?width=960&crop=smart&auto=webp&s=6e219e04c5963cbc8e19e787c5a58053f4500d55', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kuDizVKTvwfAV-cQ2GhFSYopsj2GJJLtZe81mWQwdIk.jpg?width=1080&crop=smart&auto=webp&s=63a2c85df19c6d2ae5a4455d02f3e19e4f9c749c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kuDizVKTvwfAV-cQ2GhFSYopsj2GJJLtZe81mWQwdIk.jpg?auto=webp&s=480be69e941e4e6af2069f9c2aef601bfd2b615d', 'width': 1200}, 'variants': {}}]}
Beginner project ideas that might be interesting personally?
1
[removed]
2025-01-10T02:31:31
https://www.reddit.com/r/LocalLLaMA/comments/1hxu7ll/beginner_project_ideas_that_might_be_interesting/
datamonkeys
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxu7ll
false
null
t3_1hxu7ll
/r/LocalLLaMA/comments/1hxu7ll/beginner_project_ideas_that_might_be_interesting/
false
false
self
1
null
2 GPU 1 for llama 1 for games
2
Hey everyone, I need your help. I have an MSI Z790 Gaming Plus motherboard, and I'm planning to connect an RTX 4090 for gaming and an RTX 5090 for running a Crypto Llama node or mining. My question is: does it make sense to set it up this way? I’m concerned that the 5090 might be connected to a slower PCIe lane, but I don’t think this will make a huge difference. I’m aware there might be a slight performance drop, but since Llama functions rely more on VRAM than speed, I believe it should still work fine. Am I right? Would this setup allow me to both game and run mining/nodes simultaneously? Thanks for any advice!
2025-01-10T02:35:02
https://www.reddit.com/r/LocalLLaMA/comments/1hxu9zr/2_gpu_1_for_llama_1_for_games/
Apprehensive_Lake630
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxu9zr
false
null
t3_1hxu9zr
/r/LocalLLaMA/comments/1hxu9zr/2_gpu_1_for_llama_1_for_games/
false
false
self
2
null
txtai 8.2 released: Simplified LLM messages, Graph RAG filters and multi-CPU/GPU encoding
1
[removed]
2025-01-10T02:56:21
https://www.reddit.com/r/LocalLLaMA/comments/1hxuopj/txtai_82_released_simplified_llm_messages_graph/
davidmezzetti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxuopj
false
null
t3_1hxuopj
/r/LocalLLaMA/comments/1hxuopj/txtai_82_released_simplified_llm_messages_graph/
false
false
self
1
{'enabled': False, 'images': [{'id': 'n9J0xq5L9qvyGceYTzVsNDInw-xXMeWfV410fQqDrKI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?width=108&crop=smart&auto=webp&s=343638f21b76189750f8aaecc8402ef2bdbbbbb7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?width=216&crop=smart&auto=webp&s=8983cffe04f3248c0d99ba77036d56b50a43d7be', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?width=320&crop=smart&auto=webp&s=922a42cca4b3f7c3de26496b528895caf34f6bfd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?width=640&crop=smart&auto=webp&s=d092484879fd96ca55986cb7114e9a64a030f5f4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?width=960&crop=smart&auto=webp&s=134acedcc4cb71c738510cb351bbd5be38f2045c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?width=1080&crop=smart&auto=webp&s=a2b82faa24f3b9662ac81e544266b9c71ccf55c2', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/CqDkXdnG2OjLHk0D3ZP_RlVxgdemH6EaU9_8E-iqMnE.jpg?auto=webp&s=a0baeeeee72e980d3bd150286f26c1477bb20e4f', 'width': 1920}, 'variants': {}}]}
Project Digits: How NVIDIA's $3,000 AI Supercomputer Could Democratize Local AI Development | Caveman Press
124
2025-01-10T02:57:55
https://www.caveman.press/article/project-digits-nvidia-3000-ai-supercomputer-democratize-development
Far_Monk
caveman.press
1970-01-01T00:00:00
0
{}
1hxuprn
false
null
t3_1hxuprn
/r/LocalLLaMA/comments/1hxuprn/project_digits_how_nvidias_3000_ai_supercomputer/
false
false
https://b.thumbs.redditm…MDSDx_CczxIs.jpg
124
{'enabled': False, 'images': [{'id': 'PRS8LFi6oWLAXaoTCtCf2KdYwAMD45FIAgQwZulHSdA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HgKLjMbpNpthwaWMjENNhdcaMCpcPUXsoT1-E8Xzlu0.jpg?width=108&crop=smart&auto=webp&s=1a2f5f3712c1ba97fb04a0fab8b8492a3b756441', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HgKLjMbpNpthwaWMjENNhdcaMCpcPUXsoT1-E8Xzlu0.jpg?width=216&crop=smart&auto=webp&s=9a06e64946bf33380a5f8b89350fa16e92f63d17', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HgKLjMbpNpthwaWMjENNhdcaMCpcPUXsoT1-E8Xzlu0.jpg?width=320&crop=smart&auto=webp&s=b76d30003bd1b4f83751e931d62bee245d25717b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HgKLjMbpNpthwaWMjENNhdcaMCpcPUXsoT1-E8Xzlu0.jpg?width=640&crop=smart&auto=webp&s=277871f1227bb080da6275c3cfd987301cfc058a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HgKLjMbpNpthwaWMjENNhdcaMCpcPUXsoT1-E8Xzlu0.jpg?width=960&crop=smart&auto=webp&s=4ed6368823d41e34321e5f1dad0051fac74d5a13', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HgKLjMbpNpthwaWMjENNhdcaMCpcPUXsoT1-E8Xzlu0.jpg?width=1080&crop=smart&auto=webp&s=a678ee95eab49e7f1a08cc9ed3e921ac518538f5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HgKLjMbpNpthwaWMjENNhdcaMCpcPUXsoT1-E8Xzlu0.jpg?auto=webp&s=4933be6c9e8a7158ba6ac047b01e3469d074be68', 'width': 1200}, 'variants': {}}]}
What is the best embedding model for OpenWebUI?
4
I am currently using Alibaba-NLP/gte-base-en-v1.5 but it's not very good. And as I understand it embedding models are used to retrieve parts of data from pdf's, text documents etc. according to the user's prompt. So, I imported some harry potter books (.txt files) and asked the AI (Qwen2.5 32b) "can you recall the first paragraph of chapter 10?" but it says "The provided context does not contain the text from Chapter 10, so I cannot recall the first paragraph. Could you provide more details or clarify your request based on available information?" and when I checked the retrievals its completely different from what I want. https://preview.redd.it/8uu0bztn33ce1.png?width=1909&format=png&auto=webp&s=19fa9a7980794f404452ea2746b2bb2d3aa16188 https://preview.redd.it/bywttxtn33ce1.png?width=1918&format=png&auto=webp&s=74c6be33fb3ccc798370c9b860caf91b2d88f338 And the settings I used for the "Top K" value and the "RAG Template" are from this [article](https://medium.com/@kelvincampelo/how-ive-optimized-document-interactions-with-open-webui-and-rag-a-comprehensive-guide-65d1221729eb).
2025-01-10T03:18:25
https://www.reddit.com/r/LocalLLaMA/comments/1hxv42y/what_is_the_best_embedding_model_for_openwebui/
AlgorithmicKing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxv42y
false
null
t3_1hxv42y
/r/LocalLLaMA/comments/1hxv42y/what_is_the_best_embedding_model_for_openwebui/
false
false
https://b.thumbs.redditm…V7iBdAbG-zEk.jpg
4
{'enabled': False, 'images': [{'id': '5zZbyDOsTVerQxIFGlsUOwmB0EzrAHddx4LSnvpf5WQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/lUodbDfpnGkmDdA7AVF0SKLdSZK6KWw-huH5BmoDrM4.jpg?width=108&crop=smart&auto=webp&s=62f0ccd5849bc2c6d512daccbaf13fba629bae0b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/lUodbDfpnGkmDdA7AVF0SKLdSZK6KWw-huH5BmoDrM4.jpg?width=216&crop=smart&auto=webp&s=c5f33e374613c48f05a9da88e60914c7d669b112', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/lUodbDfpnGkmDdA7AVF0SKLdSZK6KWw-huH5BmoDrM4.jpg?width=320&crop=smart&auto=webp&s=aec19875bb3d5d16d493284c2db6ead4ebe9df7a', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/lUodbDfpnGkmDdA7AVF0SKLdSZK6KWw-huH5BmoDrM4.jpg?width=640&crop=smart&auto=webp&s=53e3c30fc7247a72727b8d0254ab86c92bcc2bd6', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/lUodbDfpnGkmDdA7AVF0SKLdSZK6KWw-huH5BmoDrM4.jpg?width=960&crop=smart&auto=webp&s=2f8546818398d8160b3eb2c7f67c08d5317399f5', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/lUodbDfpnGkmDdA7AVF0SKLdSZK6KWw-huH5BmoDrM4.jpg?auto=webp&s=6aa20e1eb202997d12a6fffc16c3f86d927fe46f', 'width': 1024}, 'variants': {}}]}
Anyone interested in a SF meetup to live eval LLMs?
8
Do a few hours of evals over coffee. Maybe come up with a eval Q&A set, run across SOTA local models, across quantizations.
2025-01-10T03:25:30
https://www.reddit.com/r/LocalLLaMA/comments/1hxv8rh/anyone_interested_in_a_sf_meetup_to_live_eval_llms/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxv8rh
false
null
t3_1hxv8rh
/r/LocalLLaMA/comments/1hxv8rh/anyone_interested_in_a_sf_meetup_to_live_eval_llms/
false
false
self
8
null
AWQ on native INT4 and FP8
6
We've had fast quantized INT4 inferance with AWQ for awhile now. Runs faster everywhere. 4090 suppports native FP8, now 5090 INT4. Are those cards expected to run inferance much much faster than 3090 and older generations once the whole underlying software stack gets optimized? Or is it the usual ~20% boost per generation in real life situation? Context - I see people talk that software is yet to be optimized for FP8 and INT4 so we haven't seen the boost that's possible. So curious if we're sitting on some sort of huge speed up in inferance about to happen or no not really.
2025-01-10T03:31:04
https://www.reddit.com/r/LocalLLaMA/comments/1hxvcfz/awq_on_native_int4_and_fp8/
val_in_tech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxvcfz
false
null
t3_1hxvcfz
/r/LocalLLaMA/comments/1hxvcfz/awq_on_native_int4_and_fp8/
false
false
self
6
null
Nvidia's Checkmate
0
I just realized that the multi-frame generation feature of the incoming GPUs could be a game changer with game generation. When paired with this new 'Digits' unit, Digits could be set up to generate a game frame essentially at 1/4 the rate and stream into the PC for rendering at at least 30fps (or higher)... getting us potentially endlessly generated games for the first time. Want your game to change? Just prompt it. It could work right? If so, it def would work for video generations too. All locally
2025-01-10T04:02:02
https://www.reddit.com/r/LocalLLaMA/comments/1hxvwop/nvidias_checkmate/
emteedub
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxvwop
false
null
t3_1hxvwop
/r/LocalLLaMA/comments/1hxvwop/nvidias_checkmate/
false
false
self
0
null
Energy efficiency of 5090 is slightly worse than 4090
115
Despite the big jump in energy efficiency in the previous two generations. Nividia dropped the ball this time. It is only saved by the higher VRAM size and significantly higher memory bandwidth. |Card|RTX TITAN|3090|4090|5090| |:-|:-|:-|:-|:-| |FP16 TFLOPS|65.25|142.32|330.4|419.01| |TDP|280W|350W|450W|575W| |GFLOPS/W|233.03|406.63|734.22|728.71| Some might attribute the energy efficiency gain can be constrained by smaller transistor size. But if you look at the 96W MacBook Pro 14in using the Max chips, their energy efficiency gain is steady. The only conclusion is that Nvidia did a poorer job at chip design going from 4090 to 5090. |Chip|M1 Max|M3 Max|M4 Max| |:-|:-|:-|:-| |FP16 TFLOPS|21.2992|28.672|34.4064| |GFLOPS/W|221.87|298.67|358.4|
2025-01-10T04:56:48
https://www.reddit.com/r/LocalLLaMA/comments/1hxwtpy/energy_efficiency_of_5090_is_slightly_worse_than/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxwtpy
false
null
t3_1hxwtpy
/r/LocalLLaMA/comments/1hxwtpy/energy_efficiency_of_5090_is_slightly_worse_than/
false
false
self
115
null
Trust PVACPA for Reliable OLD Gmail Accounts
1
2025-01-10T05:07:38
https://pvacpa.com/product/buy-old-gmail-accounts/
Imaginary-Web-6651
pvacpa.com
1970-01-01T00:00:00
0
{}
1hxx0m0
false
null
t3_1hxx0m0
/r/LocalLLaMA/comments/1hxx0m0/trust_pvacpa_for_reliable_old_gmail_accounts/
false
false
https://b.thumbs.redditm…Ir5Scrkr0atA.jpg
1
{'enabled': False, 'images': [{'id': 'vHwPrjnqjTCIUc1LNc8J9pVF7h7vShy3HjxajhCnz6w', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/FKhLs7JZtPf0QOskZPB9nloExCGv7CRAabrdaGf3NT0.jpg?width=108&crop=smart&auto=webp&s=4b8fca520c6bbe0cf62117a25e7c130c557223cf', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/FKhLs7JZtPf0QOskZPB9nloExCGv7CRAabrdaGf3NT0.jpg?width=216&crop=smart&auto=webp&s=f89665670af8fdbac274d0be9415500fb7ced26f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/FKhLs7JZtPf0QOskZPB9nloExCGv7CRAabrdaGf3NT0.jpg?width=320&crop=smart&auto=webp&s=d74d9833e90db73b26ff7588785f93da88e22f83', 'width': 320}], 'source': {'height': 376, 'url': 'https://external-preview.redd.it/FKhLs7JZtPf0QOskZPB9nloExCGv7CRAabrdaGf3NT0.jpg?auto=webp&s=5efa2392a8946f6f8685f8dc61612129398e5810', 'width': 376}, 'variants': {}}]}
Planning to use Gaming Handheld + 2 eGPUs (4070 Ti Super + 4060 TI 16GB)
1
Hey guys, I’m planning a weird setup and wanted to ask if this configuration would work for running Qwen 2.5 Coder 30B using Ollama and [bolt.new](http://bolt.new). I just want to run it as cheaply as possible on a gaming handheld. I know that building a desktop or getting a gaming laptop would be much more cost-effective, but I really like the handheld form factor. Sue me. Here's my plan: **Handheld**: I haven’t decided on a specific gaming handheld yet, but my requirements are: * At least two USB4 ports (or Thunderbolt equivalent). * 32GB of system RAM. * Here's just some of the options I’m considering: * Ayaneo Flip DS (has an Oculink port and dual screens, which I like). * MSI Claw 8 AI+ * OneXPlayer X1 Mini (also has an Oculink port) * And a few others **eGPUs**: **Questions:** 1. Will this configuration work for running Qwen 2.5 Coder 30B on Ollama? Is there something else that'd work for a better or similar price? 2. Can I connect the 4070 Ti Super to Oculink and the 4060 Ti to thunderbolt at the same time? 3. Is there anything I should be worried about with this setup? **TL;DR**: Planning to use a gaming handheld (32GB RAM, dual USB4 ports) with two eGPUs (4070 Ti Super + 4060 Ti, both 16GB VRAM) to run Qwen 2.5 Coder 30B locally using Ollama and [bolt.new](http://bolt.new). Looking for confirmation that this setup will work and general advice.
2025-01-10T05:11:48
https://www.reddit.com/r/LocalLLaMA/comments/1hxx33s/planning_to_use_gaming_handheld_2_egpus_4070_ti/
MrUnknownymous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxx33s
false
null
t3_1hxx33s
/r/LocalLLaMA/comments/1hxx33s/planning_to_use_gaming_handheld_2_egpus_4070_ti/
false
false
self
1
{'enabled': False, 'images': [{'id': 't_pHEMGKQ6DAGq3kscBApVGEiLbZMGiN-d4WTMkTggQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=108&crop=smart&auto=webp&s=f9bb55c9279ce0742847c88b5626fbc553bbf5b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=216&crop=smart&auto=webp&s=e1908729c74b3588212435422da59168d85d8660', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=320&crop=smart&auto=webp&s=4d949abbbc31e568f121c9c5eaed3e0846f3722e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=640&crop=smart&auto=webp&s=97e67439d1ec5fe9d8e6cb0ba95abe56adce52a7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=960&crop=smart&auto=webp&s=f3bae916e90b40bc5edd90180a00602bab76d6cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=1080&crop=smart&auto=webp&s=d939cfbb76db5c7e138d37bd365f33690c45b6b1', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?auto=webp&s=eb32f09811c1b406241d8ffa47361db3034299c6', 'width': 2400}, 'variants': {}}]}
Export restrictions for Nvidia Digits?
1
[removed]
2025-01-10T05:29:36
https://www.reddit.com/r/LocalLLaMA/comments/1hxxde4/export_restrictions_for_nvidia_digits/
Purple_Net_869
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxxde4
false
null
t3_1hxxde4
/r/LocalLLaMA/comments/1hxxde4/export_restrictions_for_nvidia_digits/
false
false
self
1
null
Looking for OpenRouter/ChatGPT etc frontend with cloud sync
1
[removed]
2025-01-10T06:01:45
https://www.reddit.com/r/LocalLLaMA/comments/1hxxvwm/looking_for_openrouterchatgpt_etc_frontend_with/
Okklay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxxvwm
false
null
t3_1hxxvwm
/r/LocalLLaMA/comments/1hxxvwm/looking_for_openrouterchatgpt_etc_frontend_with/
false
false
self
1
null
Extract weights from MoE for specific task ?
1
[removed]
2025-01-10T06:18:08
https://www.reddit.com/r/LocalLLaMA/comments/1hxy57k/extract_weights_from_moe_for_specific_task/
Sbesnard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxy57k
false
null
t3_1hxy57k
/r/LocalLLaMA/comments/1hxy57k/extract_weights_from_moe_for_specific_task/
false
false
self
1
null
Ran Phi 4 on my laptop and then asked Claude Sonnet to compare itself. Interesting answers
57
Basically a bunch of questions about energy production, the Kardashev scale, and some things related to global food production. Here is Claude's response to the prompt "How does Phi 4 compare to you based on this":\\ Based on these interactions, here's my analysis of how Phi 4 compares to me: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Strengths of Phi 4: 1. Excellent organizational structure \- Very clear section headers and categorization \- Consistent formatting across responses \- Well-organized bullet points and subsections 2. Comprehensive scope \- Often covers more distinct aspects of a topic \- Strong at providing broad contextual frameworks \- Good at including organizational/business considerations 3. Educational approach \- Explains concepts in a very accessible way \- Good at breaking down complex topics \- Maintains a clear teaching style My Comparative Strengths: 1. More quantitative analysis \- More likely to include specific calculations \- Provides concrete numbers and timelines \- More detailed technical specifications 2. Systems thinking \- Better at connecting different aspects of complex systems \- More focus on how different factors interact \- Stronger analysis of cause-and-effect relationships 3. Strategic depth \- More detailed about implementation challenges \- Better at identifying critical dependencies \- More specific about resource requirements Key Differences in Style: 1. Phi 4 tends toward comprehensive breadth while I tend toward analytical depth 2. Phi 4 excels at clear categorization while I focus more on interconnections 3. Phi 4 maintains a more formal educational tone while I take a more analytical approach Overall Assessment: Phi 4 appears to be an excellent model for educational and organizational tasks, especially when clear structure and comprehensive coverage are needed. It seems particularly well-suited for explaining concepts to newcomers or providing broad overviews of complex topics. I appear to be better suited for deep analysis, strategic planning, and quantitative assessment, especially when dealing with complex systems or technical requirements. The two models seem complementary - Phi 4's organizational clarity combined with my analytical depth would likely produce optimal results for many complex tasks. \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Overall Phi was fucking impressive but I need to ask it all the questions in my test battery.. Almost as impressive as Qwen2.5 32B. Faster too...
2025-01-10T06:31:18
https://www.reddit.com/r/LocalLLaMA/comments/1hxyc92/ran_phi_4_on_my_laptop_and_then_asked_claude/
Kwatakye
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxyc92
false
null
t3_1hxyc92
/r/LocalLLaMA/comments/1hxyc92/ran_phi_4_on_my_laptop_and_then_asked_claude/
false
false
self
57
null
Test
1
[removed]
2025-01-10T07:06:37
https://www.reddit.com/r/LocalLLaMA/comments/1hxyuhv/test/
bburtenshaw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxyuhv
false
null
t3_1hxyuhv
/r/LocalLLaMA/comments/1hxyuhv/test/
false
false
self
1
null
What is the current open SOTA for Text2SQL?
5
Alibaba Cloud (Qwen's deepmind\~?) released a model yesterday but when I am looking at it, [XiYanSQL-QwenCoder-32B](https://github.com/XGenerationLab/XiYanSQL-QwenCoder-32B) but it seems like it's closed source with no API? Why is this space so desolate, I was expecting reasoning models to crushing this use case very easily as the main issues with a single model approaches for this is not knowing when to ask the user more questions about what they'd like to see in the response etc. (EX: what are the sales for X, which might be nuanced like rates for X in Y division, in this case the model will ask the user back for clarifications. currently the model write a shitty query that fetches basically the whole table or irrelevant columns despite very scaffoldy prompting) Also what are your favourite schema representation systems or any resources you know of like a library that takes in a db files and shits out a string with a nice ass description/instructions for general nat lang queries. https://preview.redd.it/y5nh3nzq64ce1.png?width=807&format=png&auto=webp&s=e0a792b0b21c27858f0fbb0cd8ca8487bc3e04d9
2025-01-10T07:06:53
https://www.reddit.com/r/LocalLLaMA/comments/1hxyuml/what_is_the_current_open_sota_for_text2sql/
AryanEmbered
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxyuml
false
null
t3_1hxyuml
/r/LocalLLaMA/comments/1hxyuml/what_is_the_current_open_sota_for_text2sql/
false
false
https://b.thumbs.redditm…1KOKPk23LDqc.jpg
5
{'enabled': False, 'images': [{'id': '-AD8tg81fMavNbTjfD6qZ0mjNLH6kWB3ZlssUwtOiNQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s6ihWYXUOP6gHaUWypzirx3AH4qBCfRTfPuE6JYpeHg.jpg?width=108&crop=smart&auto=webp&s=c122a518fd8d3e4f28843cc3119a1b79c063442f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s6ihWYXUOP6gHaUWypzirx3AH4qBCfRTfPuE6JYpeHg.jpg?width=216&crop=smart&auto=webp&s=05cc0fd24c3d71be319567d2267649bee329a240', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s6ihWYXUOP6gHaUWypzirx3AH4qBCfRTfPuE6JYpeHg.jpg?width=320&crop=smart&auto=webp&s=aace851aedc54bbc00bd499dc6badcda963181af', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s6ihWYXUOP6gHaUWypzirx3AH4qBCfRTfPuE6JYpeHg.jpg?width=640&crop=smart&auto=webp&s=d53209451a1325f73fa5c4a7dfbc439ed394adcb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s6ihWYXUOP6gHaUWypzirx3AH4qBCfRTfPuE6JYpeHg.jpg?width=960&crop=smart&auto=webp&s=427372b8b49ea43535c6e19360e8d9ac5c199740', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s6ihWYXUOP6gHaUWypzirx3AH4qBCfRTfPuE6JYpeHg.jpg?width=1080&crop=smart&auto=webp&s=9b5a013ce0a1e9a8bfbf63361399e8bd654bc101', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s6ihWYXUOP6gHaUWypzirx3AH4qBCfRTfPuE6JYpeHg.jpg?auto=webp&s=dbb8852bb8917a14f34ad14ec37e7cf8ee5f2e4f', 'width': 1200}, 'variants': {}}]}
Pre-processing to improve OCR on handwritten text
8
No open-ish model I'm aware of does a sufficiently good job when it comes to OCR on handwriting. What pre-processing steps did you find help improve results? For example: segmenting image into smaller chunks, increase contrast, turn monochrome, straightening/alignment...
2025-01-10T07:28:26
https://www.reddit.com/r/LocalLLaMA/comments/1hxz4zz/preprocessing_to_improve_ocr_on_handwritten_text/
mnze_brngo_7325
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxz4zz
false
null
t3_1hxz4zz
/r/LocalLLaMA/comments/1hxz4zz/preprocessing_to_improve_ocr_on_handwritten_text/
false
false
self
8
null
Has anyone tried Virgo-72B yet?
15
Claimed to be a CoT model based on Qwen-72B: [https://huggingface.co/RUC-AIBOX/Virgo-72B](https://huggingface.co/RUC-AIBOX/Virgo-72B) [https://github.com/RUCAIBox/Virgo](https://github.com/RUCAIBox/Virgo)
2025-01-10T07:29:17
https://www.reddit.com/r/LocalLLaMA/comments/1hxz5de/has_anyone_tried_virgo72b_yet/
int19h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxz5de
false
null
t3_1hxz5de
/r/LocalLLaMA/comments/1hxz5de/has_anyone_tried_virgo72b_yet/
false
false
self
15
{'enabled': False, 'images': [{'id': 'Tm0YxbkwWoPWr4bkfkU1wKb1TYmjGm7BkMXBfmynT-c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ndXUlEnQ_qYTYeqAboIFwx9odVPW0XO6w5JMwC5mX4k.jpg?width=108&crop=smart&auto=webp&s=384b20d16bc1912dedc30681842bddc2dbf5c5ad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ndXUlEnQ_qYTYeqAboIFwx9odVPW0XO6w5JMwC5mX4k.jpg?width=216&crop=smart&auto=webp&s=e1c9fb97e1c926821d662bb2a4c2d8071e7cab3d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ndXUlEnQ_qYTYeqAboIFwx9odVPW0XO6w5JMwC5mX4k.jpg?width=320&crop=smart&auto=webp&s=1cba8e8504754888cc4c9b08004cb85725f31958', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ndXUlEnQ_qYTYeqAboIFwx9odVPW0XO6w5JMwC5mX4k.jpg?width=640&crop=smart&auto=webp&s=77e899a4d84fd01c7dbd1a7a980431abc76f566e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ndXUlEnQ_qYTYeqAboIFwx9odVPW0XO6w5JMwC5mX4k.jpg?width=960&crop=smart&auto=webp&s=5d11865efbe68e385bbd0932d18a1b7f12c33b42', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ndXUlEnQ_qYTYeqAboIFwx9odVPW0XO6w5JMwC5mX4k.jpg?width=1080&crop=smart&auto=webp&s=ea715edc883f5241bf6faf5ebcd198dba225e8d0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ndXUlEnQ_qYTYeqAboIFwx9odVPW0XO6w5JMwC5mX4k.jpg?auto=webp&s=0e7bf176163839924e5752fd0c98bbf85609e975', 'width': 1200}, 'variants': {}}]}
Which model do you use to improve prompts?
2
Is there a model you know works better for prompt engineering? I.e a one that produces better output when you instruct it to take a lazy prompt, rephrase it for LLM effectiveness, select a suitable pattern, etc?
2025-01-10T07:36:21
https://www.reddit.com/r/LocalLLaMA/comments/1hxz8q4/which_model_do_you_use_to_improve_prompts/
x0rchid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxz8q4
false
null
t3_1hxz8q4
/r/LocalLLaMA/comments/1hxz8q4/which_model_do_you_use_to_improve_prompts/
false
false
self
2
null
Why is no body going wild for bytedance's new vision model?
1
[removed]
2025-01-10T07:38:21
https://www.reddit.com/r/LocalLLaMA/comments/1hxz9ms/why_is_no_body_going_wild_for_bytedances_new/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hxz9ms
false
null
t3_1hxz9ms
/r/LocalLLaMA/comments/1hxz9ms/why_is_no_body_going_wild_for_bytedances_new/
false
false
self
1
null
Microsoft’s new rStar-Math technique upgrades small models to outperform OpenAI’s o1-preview at math problems
1
2025-01-10T07:43:13
https://venturebeat.com/ai/microsofts-new-rstar-math-technique-upgrades-small-models-to-outperform-openais-o1-preview-at-math-problems/
Otis43
venturebeat.com
1970-01-01T00:00:00
0
{}
1hxzbv3
false
null
t3_1hxzbv3
/r/LocalLLaMA/comments/1hxzbv3/microsofts_new_rstarmath_technique_upgrades_small/
false
false
https://a.thumbs.redditm…NetR9x5vrKE0.jpg
1
{'enabled': False, 'images': [{'id': '3AZf9uMXOqu7ucb46t3krIOV940Y5dMoGBg2pCFh9Ac', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/XWUGS83yJlegrc12K6GJ8hNUVPxe_hbEalJVEIALdW4.jpg?width=108&crop=smart&auto=webp&s=85e247da1cfa371435a6be77440c52d9ad8859e6', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/XWUGS83yJlegrc12K6GJ8hNUVPxe_hbEalJVEIALdW4.jpg?width=216&crop=smart&auto=webp&s=61473fbbbf45b01a49bde86e473c165c8331a827', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/XWUGS83yJlegrc12K6GJ8hNUVPxe_hbEalJVEIALdW4.jpg?width=320&crop=smart&auto=webp&s=b9d30d90adbe2b38a3b626ffe54d18bb4e2710e7', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/XWUGS83yJlegrc12K6GJ8hNUVPxe_hbEalJVEIALdW4.jpg?width=640&crop=smart&auto=webp&s=ef766820dd82064dae726e2022ecf03fb91126a3', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/XWUGS83yJlegrc12K6GJ8hNUVPxe_hbEalJVEIALdW4.jpg?width=960&crop=smart&auto=webp&s=62490891aba349d622872bfee7a6fd98d01a7770', 'width': 960}], 'source': {'height': 585, 'url': 'https://external-preview.redd.it/XWUGS83yJlegrc12K6GJ8hNUVPxe_hbEalJVEIALdW4.jpg?auto=webp&s=821b9fe312fc92f8f252d9e0521786e49240850e', 'width': 1024}, 'variants': {}}]}
Ai Roleplay Characters
2
I'm looking to replicate the behavior similar to the famous Neuro\~Sama made by Vedal. I need a model that: * Responds well to system prompt changes to adjust personality. * Can assist with a wide range of questions and perform tasks. * Supports memory retention for recalling past conversations. * Has good conversational skills and a broad knowledge base. * Can potentially access and use the internet. I’ve experimented with LLaMA models before, but I've encountered issues like the AI outing itself as AI, starting conversations with itself, or generating erratic behavior. With my setup (RTX 4070 and 32GB DDR5 RAM), I can handle most consumer-level models. Could someone recommend an LLM or solution that meets these requirements? Additionally, any tips for fine-tuning behavior, or suggestions for frameworks or tools to build such a system, would be greatly appreciated.
2025-01-10T08:45:23
https://www.reddit.com/r/LocalLLaMA/comments/1hy057c/ai_roleplay_characters/
V4S1LY
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy057c
false
null
t3_1hy057c
/r/LocalLLaMA/comments/1hy057c/ai_roleplay_characters/
false
false
self
2
null
I'm looking for a model with OCR/Vision to get documents and write bounding boxes.
1
[removed]
2025-01-10T08:53:04
https://www.reddit.com/r/LocalLLaMA/comments/1hy08vc/im_looking_for_a_model_with_ocrvision_to_get/
thecowmilk_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy08vc
false
null
t3_1hy08vc
/r/LocalLLaMA/comments/1hy08vc/im_looking_for_a_model_with_ocrvision_to_get/
false
false
self
1
null
[D] Graduated from MIT with a PhD | Teaching you how to build an entire LLM from scratch
272
ERROR: type should be string, got "\n\nhttps://preview.redd.it/7g7kassus4ce1.png?width=1764&format=png&auto=webp&s=b14e8b4e35de3e83c5b83acce53b5c077323fb94\n\n**Starting from June 2023, I have recorded 44 videos on building GPT architecture from the ground up.** \n\nThe reference book I used for making these videos is Sebastian Raschka's \"Build a LLM from scratch\". \n\nEach lecture is close to 40-45 minutes long. The first video has close to 100,000 views on Youtube now. \n\nI have made these videos with a lot of passion. I hope you learn from these videos!\n\nHere is where you can find the entire playlist: [https://www.youtube.com/playlist?list=PLPTV0NXA\\_ZSgsLAr8YCgCwhPIJNNtexWu](https://www.youtube.com/playlist?list=PLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu)\n\nHere are the first 15 lectures:\n\n**Lecture 1:** Building LLMs from scratch: Series introduction: [https://youtu.be/Xpr8D6LeAtw](https://youtu.be/Xpr8D6LeAtw)\n\n**Lecture 2:** Large Language Models (LLM) Basics: [https://youtu.be/3dWzNZXA8DY](https://youtu.be/3dWzNZXA8DY)\n\n**Lecture 3:** Pretraining LLMs vs Finetuning LLMs\n\n[https://youtu.be/-bsa3fCNGg4](https://youtu.be/-bsa3fCNGg4)\n\n**Lecture 4:** What are transformers? \n\n[https://youtu.be/NLn4eetGmf8](https://youtu.be/NLn4eetGmf8)\n\n**Lecture 5:** How does GPT-3 really work? \n\n[https://youtu.be/xbaYCf2FHSY](https://youtu.be/xbaYCf2FHSY)\n\n**Lecture 6:** Stages of building an LLM from Scratch \n\n[https://youtu.be/z9fgKz1Drlc](https://youtu.be/z9fgKz1Drlc)\n\n**Lecture 7:** Code an LLM Tokenizer from Scratch in Python \n\n[https://youtu.be/rsy5Ragmso8](https://youtu.be/rsy5Ragmso8)\n\n**Lecture 8:** The GPT Tokenizer: Byte Pair Encoding \n\n[https://youtu.be/fKd8s29e-l4](https://youtu.be/fKd8s29e-l4)\n\n**Lecture 9:** Creating Input-Target data pairs using Python DataLoader\n\n[https://youtu.be/iQZFH8dr2yI](https://youtu.be/iQZFH8dr2yI)\n\n**Lecture 10:** What are token embeddings? \n\n[https://youtu.be/ghCSGRgVB\\_o](https://youtu.be/ghCSGRgVB_o) \n**Lecture 11:** The importance of Positional Embeddings \n\n[https://youtu.be/ufrPLpKnapU](https://youtu.be/ufrPLpKnapU)\n\n**Lecture 12**: The entire Data Preprocessing Pipeline of Large Language Models (LLMs) \n\n[https://youtu.be/mk-6cFebjis](https://youtu.be/mk-6cFebjis)\n\n**Lecture 13:** Introduction to the Attention Mechanism in Large Language Models (LLMs) \n\n[https://youtu.be/XN7sevVxyUM](https://youtu.be/XN7sevVxyUM)\n\n**Lecture 14:** Simplified Attention Mechanism - Coded from scratch in Python | No trainable weights \n\n[https://youtu.be/eSRhpYLerw4](https://youtu.be/eSRhpYLerw4)\n\n**Lecture 15:** Coding the self attention mechanism with key, query and value matrices \n\n[https://youtu.be/UjdRN80c6p8](https://youtu.be/UjdRN80c6p8)\n\n"
2025-01-10T09:00:43
https://www.reddit.com/r/LocalLLaMA/comments/1hy0cjd/d_graduated_from_mit_with_a_phd_teaching_you_how/
OtherRaisin3426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy0cjd
false
{'oembed': {'description': 'In this playlist, we will learn about the entire process of building a Large Language Model (LLM) from scratch. Nothing will be assumed. Everything will be s...', 'height': 450, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fvideoseries%3Flist%3DPLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fplaylist%3Flist%3DPLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FXpr8D6LeAtw%2Fhqdefault.jpg%3Fsqp%3D-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE%3D%26rs%3DAOn4CLB-lxbDfAE7qoD3W0AThViqZzd55w%26days_since_epoch%3D20098&type=text%2Fhtml&schema=youtube" width="600" height="450" scrolling="no" title="YouTube embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'http://youtube.com', 'thumbnail_height': 270, 'thumbnail_url': 'https://i.ytimg.com/vi/Xpr8D6LeAtw/hqdefault.jpg?sqp=-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLB-lxbDfAE7qoD3W0AThViqZzd55w&days_since_epoch=20098', 'thumbnail_width': 480, 'title': 'Building LLMs from scratch', 'type': 'video', 'version': '1.0', 'width': 600}, 'type': 'youtube.com'}
t3_1hy0cjd
/r/LocalLLaMA/comments/1hy0cjd/d_graduated_from_mit_with_a_phd_teaching_you_how/
false
false
https://b.thumbs.redditm…1lak5LYLrJtY.jpg
272
{'enabled': False, 'images': [{'id': '5PyVHkoFsrddBslmOS6EzhbrJOxTQjO5STf4LiVK4_k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=108&crop=smart&auto=webp&s=9b6bc043bdccaad2019c8bbbae3441b99aaf894f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=216&crop=smart&auto=webp&s=b374e2f14de6652bd2c0e9f3a0d4656baf9bbc15', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=320&crop=smart&auto=webp&s=6a459b1295ced9b8325a2f950cc985a2d4fd69df', 'width': 320}], 'source': {'height': 270, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?auto=webp&s=a5ece470c3825c54146e1f008b6a0d6189e0231a', 'width': 480}, 'variants': {}}]}
What's the best voice cloning tts model for now?
1
[removed]
2025-01-10T10:01:08
https://www.reddit.com/r/LocalLLaMA/comments/1hy153i/whats_the_best_voice_cloning_tts_model_for_now/
FirstReserve4692
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy153i
false
null
t3_1hy153i
/r/LocalLLaMA/comments/1hy153i/whats_the_best_voice_cloning_tts_model_for_now/
false
false
self
1
null
Can NVIDIA DIGITS Handle 200B Parameter LLM?
1
[removed]
2025-01-10T10:04:56
https://www.reddit.com/r/LocalLLaMA/comments/1hy1720/can_nvidia_digits_handle_200b_parameter_llm/
Silent_Introvert05
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy1720
false
null
t3_1hy1720
/r/LocalLLaMA/comments/1hy1720/can_nvidia_digits_handle_200b_parameter_llm/
false
false
self
1
null
A local character AI chat app I'm making
4
Hello everyone! I'm working on a free local character AI chatting application with a focus on ease of use. You don't need to set up a separate LLM server, it's all out of the box; works on Windows (CUDA) and MacOS (M-series chips) machines. Current functionality is pure Spartan: you can download character cards elsewhere (e.g. from chub.ai) and place them into Library folder; then download a model from a predefined set of (E)RP LLMs (or bring your own .gguf), and then, well, chat with the characters. I'd be thankful for trials and feedback, especially on Windows (I have MacOS as main machine, so building for Windows is *painful*). Here the download URL: https://simularity.ai. Cheers, Vlad.
2025-01-10T10:20:52
https://www.reddit.com/r/LocalLLaMA/comments/1hy1err/a_local_character_ai_chat_app_im_making/
vladfaust
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy1err
false
null
t3_1hy1err
/r/LocalLLaMA/comments/1hy1err/a_local_character_ai_chat_app_im_making/
false
false
self
4
null
The EU AI Act will mark end of competitive foundational AI models in Europe
1
[removed]
2025-01-10T10:31:07
https://www.reddit.com/r/LocalLLaMA/comments/1hy1jsz/the_eu_ai_act_will_mark_end_of_competitive/
brown2green
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy1jsz
false
null
t3_1hy1jsz
/r/LocalLLaMA/comments/1hy1jsz/the_eu_ai_act_will_mark_end_of_competitive/
false
false
self
1
null
Best Local LLM for High-Quality Translations
1
[removed]
2025-01-10T10:36:45
https://www.reddit.com/r/LocalLLaMA/comments/1hy1miu/best_local_llm_for_highquality_translations/
GDemay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy1miu
false
null
t3_1hy1miu
/r/LocalLLaMA/comments/1hy1miu/best_local_llm_for_highquality_translations/
false
false
self
1
null
The EU AI Act will mark the end of competitive foundational AI models in Europe
1
[removed]
2025-01-10T10:39:19
https://www.reddit.com/r/LocalLLaMA/comments/1hy1nrx/the_eu_ai_act_will_mark_the_end_of_competitive/
brown2green
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy1nrx
false
null
t3_1hy1nrx
/r/LocalLLaMA/comments/1hy1nrx/the_eu_ai_act_will_mark_the_end_of_competitive/
false
false
self
1
null
Library AI Model
1
[removed]
2025-01-10T10:45:56
https://www.reddit.com/r/LocalLLaMA/comments/1hy1r1c/library_ai_model/
Traditional_Sun_9602
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy1r1c
false
null
t3_1hy1r1c
/r/LocalLLaMA/comments/1hy1r1c/library_ai_model/
false
false
self
1
null
Beginner Guide - Start Using Ollama + Python | Toolworks.dev
1
2025-01-10T11:09:40
https://toolworks.dev/docs/Guides/ollama-python-guide
tegridyblues
toolworks.dev
1970-01-01T00:00:00
0
{}
1hy23k4
false
null
t3_1hy23k4
/r/LocalLLaMA/comments/1hy23k4/beginner_guide_start_using_ollama_python/
false
false
default
1
null
Would this work?
1
[removed]
2025-01-10T12:08:43
https://www.reddit.com/r/LocalLLaMA/comments/1hy308l/would_this_work/
GlacialV0id
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy308l
false
null
t3_1hy308l
/r/LocalLLaMA/comments/1hy308l/would_this_work/
false
false
self
1
null
WebGPU-accelerated reasoning LLMs running 100% locally in-browser w/ Transformers.js
692
2025-01-10T12:16:13
https://v.redd.it/vmfpb2m2r5ce1
xenovatech
v.redd.it
1970-01-01T00:00:00
0
{}
1hy34ir
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vmfpb2m2r5ce1/DASHPlaylist.mpd?a=1739103388%2CNTU5NWIwYWMxZWVmODA2YzYwZmNkNzlmZjk4YjBiZWEzYWI5YzQ5MzM4MDdjMmNiOTRiODM0OGQwOWJjYTFjZQ%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/vmfpb2m2r5ce1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1082, 'hls_url': 'https://v.redd.it/vmfpb2m2r5ce1/HLSPlaylist.m3u8?a=1739103388%2CZTYzMGJkZjMzYjU3YTY5ZGY3NTdmMjJiNWE4MDlhY2ExMTI0NGIyMTcyMzE3N2RmNDE0YzIxNzc1NjMzODM3Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vmfpb2m2r5ce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1hy34ir
/r/LocalLLaMA/comments/1hy34ir/webgpuaccelerated_reasoning_llms_running_100/
false
false
https://external-preview…e4e51eb934961287
692
{'enabled': False, 'images': [{'id': 'a3B0bmYzbTJyNWNlMYVrWG7q5Ym6r9MYEdNpGfavLsbyjmwCsGU7oHTw1w8w', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/a3B0bmYzbTJyNWNlMYVrWG7q5Ym6r9MYEdNpGfavLsbyjmwCsGU7oHTw1w8w.png?width=108&crop=smart&format=pjpg&auto=webp&s=d283beaae7e7b244a5335b151022586cb3e9bd6f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/a3B0bmYzbTJyNWNlMYVrWG7q5Ym6r9MYEdNpGfavLsbyjmwCsGU7oHTw1w8w.png?width=216&crop=smart&format=pjpg&auto=webp&s=bbdec5d0e2d0da66f7baa9045f2ceda544117732', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/a3B0bmYzbTJyNWNlMYVrWG7q5Ym6r9MYEdNpGfavLsbyjmwCsGU7oHTw1w8w.png?width=320&crop=smart&format=pjpg&auto=webp&s=06cdc1c609444e42f5cb787f618d340853257a93', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/a3B0bmYzbTJyNWNlMYVrWG7q5Ym6r9MYEdNpGfavLsbyjmwCsGU7oHTw1w8w.png?width=640&crop=smart&format=pjpg&auto=webp&s=812938ef8da13b460662aa05168db522ca65a111', 'width': 640}, {'height': 961, 'url': 'https://external-preview.redd.it/a3B0bmYzbTJyNWNlMYVrWG7q5Ym6r9MYEdNpGfavLsbyjmwCsGU7oHTw1w8w.png?width=960&crop=smart&format=pjpg&auto=webp&s=28b657e5fb12f0d99aabcc0fe7543d530693138f', 'width': 960}, {'height': 1081, 'url': 'https://external-preview.redd.it/a3B0bmYzbTJyNWNlMYVrWG7q5Ym6r9MYEdNpGfavLsbyjmwCsGU7oHTw1w8w.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f90059bef770137b27985d6e0fc16fc17d0f2911', 'width': 1080}], 'source': {'height': 1408, 'url': 'https://external-preview.redd.it/a3B0bmYzbTJyNWNlMYVrWG7q5Ym6r9MYEdNpGfavLsbyjmwCsGU7oHTw1w8w.png?format=pjpg&auto=webp&s=44525d6baa8107578814c5c96e87beab3ea88fee', 'width': 1406}, 'variants': {}}]}
Has someone tried using AIHumanizer.ai with their humanized content?
2
[removed]
2025-01-10T12:45:28
https://www.reddit.com/r/LocalLLaMA/comments/1hy3lwb/has_someone_tried_using_aihumanizerai_with_their/
glutenbag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy3lwb
false
null
t3_1hy3lwb
/r/LocalLLaMA/comments/1hy3lwb/has_someone_tried_using_aihumanizerai_with_their/
false
false
self
2
null
Why is nobody going wild for bytedance's vision language + segmentation model?
1
[removed]
2025-01-10T12:46:38
https://www.reddit.com/r/LocalLLaMA/comments/1hy3mlu/why_is_nobody_going_wild_for_bytedances_vision/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy3mlu
false
null
t3_1hy3mlu
/r/LocalLLaMA/comments/1hy3mlu/why_is_nobody_going_wild_for_bytedances_vision/
false
false
self
1
null
[Release] mirau-7b-RP-base: A first-person narrative model for text adventures
23
Hey everyone! Following the feedback from my previous [mirau-RP-14b release](https://www.reddit.com/r/LocalLLaMA/comments/1hqa8d3/comment/m5bb7fu/), many of you asked for a smaller version. So here it is - a 7B variant built on Qwen2.5-7B! **What is mirau-RP?** It's a specialized model for first-person narrative generation, perfect for text adventures and interactive storytelling. It transforms simple actions into rich, detailed narratives with environmental descriptions and psychological activities. **⚠️ Important Notes:** This is a base version model - "base" here doesn't mean a traditional pretrained base model, but indicates that: * Only supports first-person narrative perspective * Not suitable for dialogue interactions (outputs may be unstable) * Best used for single-character narrative experiences **Key Features:** * First-person narrative perspective * Detailed environment descriptions * Psychological activity rendering * Continuous story flow using chain-of-thought **Try it out:** * [Online Demo](https://modelscope.cn/studios/mouseEliauk/mirau-RP-7b-base-demo) * [Download LoRA](https://modelscope.cn/models/mouseEliauk/mirau-7b-RP-base) * [Documentation](https://www.modelscope.cn/models/mouseEliauk/mirau-7b-RP-base/file/view/master?fileName=README_en.md&status=1) **Example Output:** https://preview.redd.it/62h4q94ky5ce1.png?width=1483&format=png&auto=webp&s=cc34f452c3be469caa770fd47d934a781cb68a6f The model accepts various input commands in parentheses, like: * Basic actions: `(I put on my clothes)` * Exploration: `(I look around)` * self-hint: `(What's wrong here?)` * Observations: `(examine the wooden box closely)` # Interactive Examples System Prompt: I’m Zhou Ming, 34 years old, a detective. People say cops always look weathered and world-weary, but I’ve been told I have a face that looks perpetually cold and distant. With my square jaw and buzz cut, I give off a vibe that keeps most people from getting too close—not that I mind. This morning, just as the sky began to lighten, I was already standing at the crime scene. The cold wind sliced through the police tape and bit into my neck, making me hunch my shoulders. The scene was an abandoned factory on the outskirts of the city, its walls streaked with peeling red paint. The ground was littered with broken glass, debris, and trash. A damp, moldy smell hung in the air, adding to the discomfort. The body had already been taken away, but the bloodstains on the ground still told the story of a brutal, violent encounter. I stared at my notepad, pen in hand, while my mind raced to piece the clues together. Yet, something in my gut told me this wasn’t right. It was almost as if… someone was watching us from the shadows. # Self Hint: User: (What's wrong here?) Model: <cot>I scan the area carefully, trying to locate those hidden eyes. There's no obvious cover around the factory, and no suspicious figures beyond the police tape.</cot>I frown, unconsciously touching the holster at my waist<cot>Is it just the wind? Or am I imagining things?</cot>I take a deep breath, trying to calm myself, but my heart keeps racing<cot>I need to find those eyes, I can't let anyone get close to this place.</cot> # Exploration Action: User: (I look around) Model: I slowly turn my body, my gaze sweeping across every inch of the abandoned factory, from the corners to the windows, and out to the police tape<cot>There are a few pedestrians beyond the tape, looking calm as if nothing unusual is happening here. Something seems to move in the cobwebs in the corner, but when I look closer, there's nothing there.</cot>My eyes linger on a weathered wooden window, worn down by wind and rain<cot>Could that be someone's hiding spot?</cot>
2025-01-10T12:54:15
https://www.reddit.com/r/LocalLLaMA/comments/1hy3r9i/release_mirau7brpbase_a_firstperson_narrative/
EliaukMouse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy3r9i
false
null
t3_1hy3r9i
/r/LocalLLaMA/comments/1hy3r9i/release_mirau7brpbase_a_firstperson_narrative/
false
false
https://b.thumbs.redditm…BoWg5ts5sTmY.jpg
23
{'enabled': False, 'images': [{'id': 'gEBGfjEW14rKKpIyZyy78sdO5HC55Y2s9xFkg6EmKc0', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/4hvMEcZ0aDtuhyU--bdW6f8UAwKtUo8xyb2y0sjIxXY.jpg?width=108&crop=smart&auto=webp&s=321701e2d0802971b5958b60fa78e69d083ba22d', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/4hvMEcZ0aDtuhyU--bdW6f8UAwKtUo8xyb2y0sjIxXY.jpg?width=216&crop=smart&auto=webp&s=76d8882b57167a284c4df1af4fabef361f46b2b0', 'width': 216}, {'height': 156, 'url': 'https://external-preview.redd.it/4hvMEcZ0aDtuhyU--bdW6f8UAwKtUo8xyb2y0sjIxXY.jpg?width=320&crop=smart&auto=webp&s=2f9276d6f4f8c8ecb90fd87dd01c05b3b193f019', 'width': 320}, {'height': 312, 'url': 'https://external-preview.redd.it/4hvMEcZ0aDtuhyU--bdW6f8UAwKtUo8xyb2y0sjIxXY.jpg?width=640&crop=smart&auto=webp&s=d2bc4155a874db1eec0e9b2432569746bbcb3bfb', 'width': 640}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/4hvMEcZ0aDtuhyU--bdW6f8UAwKtUo8xyb2y0sjIxXY.jpg?auto=webp&s=257ba96b805f2bb6c29b83249b2cee64556f1064', 'width': 820}, 'variants': {}}]}
The 3 laws of robotics.
1
[removed]
2025-01-10T13:11:08
https://www.reddit.com/r/LocalLLaMA/comments/1hy42j9/the_3_laws_of_robotics/
MeandahOOO
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy42j9
false
null
t3_1hy42j9
/r/LocalLLaMA/comments/1hy42j9/the_3_laws_of_robotics/
false
false
self
1
null
Cohere For AI Open Science Community launches new LLM cohort focused on Multilingual Long-Context Understanding
1
[removed]
2025-01-10T13:12:10
https://www.reddit.com/r/LocalLLaMA/comments/1hy439r/cohere_for_ai_open_science_community_launches_new/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy439r
false
null
t3_1hy439r
/r/LocalLLaMA/comments/1hy439r/cohere_for_ai_open_science_community_launches_new/
false
false
https://b.thumbs.redditm…x07M8y6k66Js.jpg
1
null
Cohere For AI Open Science Community launches new LLM cohort focused on Multilingual Long-Context Understanding
1
[removed]
2025-01-10T13:17:42
https://www.reddit.com/r/LocalLLaMA/comments/1hy46wt/cohere_for_ai_open_science_community_launches_new/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy46wt
false
null
t3_1hy46wt
/r/LocalLLaMA/comments/1hy46wt/cohere_for_ai_open_science_community_launches_new/
false
false
https://b.thumbs.redditm…QBA3W1fHvlBg.jpg
1
null
How to Build Your Own LLM from Scratch: A Step-by-Step Guide
11
2025-01-10T13:34:54
https://www.youtube.com/playlist?list=PLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu
Vishnu_One
youtube.com
1970-01-01T00:00:00
0
{}
1hy4ifb
false
{'oembed': {'description': 'In this playlist, we will learn about the entire process of building a Large Language Model (LLM) from scratch. Nothing will be assumed. Everything will be s...', 'height': 450, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fvideoseries%3Flist%3DPLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fplaylist%3Flist%3DPLPTV0NXA_ZSgsLAr8YCgCwhPIJNNtexWu&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FXpr8D6LeAtw%2Fhqdefault.jpg%3Fsqp%3D-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE%3D%26rs%3DAOn4CLB-lxbDfAE7qoD3W0AThViqZzd55w%26days_since_epoch%3D20098&type=text%2Fhtml&schema=youtube" width="600" height="450" scrolling="no" title="YouTube embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'http://youtube.com', 'thumbnail_height': 270, 'thumbnail_url': 'https://i.ytimg.com/vi/Xpr8D6LeAtw/hqdefault.jpg?sqp=-oaymwEXCOADEI4CSFryq4qpAwkIARUAAIhCGAE=&rs=AOn4CLB-lxbDfAE7qoD3W0AThViqZzd55w&days_since_epoch=20098', 'thumbnail_width': 480, 'title': 'Building LLMs from scratch', 'type': 'video', 'version': '1.0', 'width': 600}, 'type': 'youtube.com'}
t3_1hy4ifb
/r/LocalLLaMA/comments/1hy4ifb/how_to_build_your_own_llm_from_scratch_a/
false
false
https://b.thumbs.redditm…1lak5LYLrJtY.jpg
11
{'enabled': False, 'images': [{'id': '5PyVHkoFsrddBslmOS6EzhbrJOxTQjO5STf4LiVK4_k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=108&crop=smart&auto=webp&s=9b6bc043bdccaad2019c8bbbae3441b99aaf894f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=216&crop=smart&auto=webp&s=b374e2f14de6652bd2c0e9f3a0d4656baf9bbc15', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?width=320&crop=smart&auto=webp&s=6a459b1295ced9b8325a2f950cc985a2d4fd69df', 'width': 320}], 'source': {'height': 270, 'url': 'https://external-preview.redd.it/itMSuScE-SCcGqTm0UR4VRY73cEjOMfUD8R3JLKTMfo.jpg?auto=webp&s=a5ece470c3825c54146e1f008b6a0d6189e0231a', 'width': 480}, 'variants': {}}]}
OCR tools for really very bad handwriting!
105
2025-01-10T13:44:05
https://i.redd.it/ww1i5y5h76ce1.jpeg
DeltaSqueezer
i.redd.it
1970-01-01T00:00:00
0
{}
1hy4onq
false
null
t3_1hy4onq
/r/LocalLLaMA/comments/1hy4onq/ocr_tools_for_really_very_bad_handwriting/
false
false
https://a.thumbs.redditm…FZhJFDCv_cv4.jpg
105
{'enabled': True, 'images': [{'id': '7soaav17Occt7RTmtL1oZLRq_NFLNUEdaGtdcXk4cjc', 'resolutions': [{'height': 160, 'url': 'https://preview.redd.it/ww1i5y5h76ce1.jpeg?width=108&crop=smart&auto=webp&s=e2a8fd237d7d07429f42790708a85673a5969276', 'width': 108}, {'height': 320, 'url': 'https://preview.redd.it/ww1i5y5h76ce1.jpeg?width=216&crop=smart&auto=webp&s=89b38d3e0dda192698ebfb7c4b9de0a18a4e962a', 'width': 216}, {'height': 475, 'url': 'https://preview.redd.it/ww1i5y5h76ce1.jpeg?width=320&crop=smart&auto=webp&s=32f13a59b61d8ad1f323fd1d05e42b5f77c344eb', 'width': 320}, {'height': 950, 'url': 'https://preview.redd.it/ww1i5y5h76ce1.jpeg?width=640&crop=smart&auto=webp&s=09c64901dd13fc181007e945126d45f11e6e021c', 'width': 640}, {'height': 1425, 'url': 'https://preview.redd.it/ww1i5y5h76ce1.jpeg?width=960&crop=smart&auto=webp&s=d40f12eeff363c97fbbfc82d41886145aeae149a', 'width': 960}, {'height': 1603, 'url': 'https://preview.redd.it/ww1i5y5h76ce1.jpeg?width=1080&crop=smart&auto=webp&s=4f59e570ddb87afb38e8537c396fdbb123ee0913', 'width': 1080}], 'source': {'height': 1638, 'url': 'https://preview.redd.it/ww1i5y5h76ce1.jpeg?auto=webp&s=a8d79127ad0157f2821b03d90d58575993c4068d', 'width': 1103}, 'variants': {}}]}
How can I make inference on 4 A5000 gpus with the Llama-3.3-70B-Instruct model
1
[removed]
2025-01-10T14:07:41
https://www.reddit.com/r/LocalLLaMA/comments/1hy55rq/how_can_i_make_inference_on_4_a5000_gpus_with_the/
Weird-Land7361
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy55rq
false
null
t3_1hy55rq
/r/LocalLLaMA/comments/1hy55rq/how_can_i_make_inference_on_4_a5000_gpus_with_the/
false
false
self
1
null
Invest in new GPU, RAM or the new project digits?
1
[removed]
2025-01-10T14:16:04
https://www.reddit.com/r/LocalLLaMA/comments/1hy5bsv/invest_in_new_gpu_ram_or_the_new_project_digits/
Due-Knowledge3815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy5bsv
false
null
t3_1hy5bsv
/r/LocalLLaMA/comments/1hy5bsv/invest_in_new_gpu_ram_or_the_new_project_digits/
false
false
self
1
null
Do you chunk before or after BERT encoder?
1
[removed]
2025-01-10T14:28:15
https://www.reddit.com/r/LocalLLaMA/comments/1hy5kzt/do_you_chunk_before_or_after_bert_encoder/
dexterleng
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy5kzt
false
null
t3_1hy5kzt
/r/LocalLLaMA/comments/1hy5kzt/do_you_chunk_before_or_after_bert_encoder/
false
false
self
1
null
Local TTS models that can match ElevenLabs in terms of quality and consistency
167
I should probably start by stating that I'm somewhat new to running AI models locally, but I've tinkered with Ollama + Open WebUI before and was able to get some models running through WSL2 on my RTX 4080 and was pretty impressed with the results. With that said, I'm now looking for a good local TTS model and I was honestly disappointed with what I could find. Most projects seem to not be updated in months or are simply dead. From what I've read, the general consensus seems to be that XTTS-v2 is still the best overall model to this day, which is from a startup that has [shut down](https://coqui.ai/). I figured I'd try it anyway and I was able to get it running through [this simple portable version](https://github.com/daswer123/xtts-webui), but I was honestly disappointed with the results I got, all very inconsistent and not natural sounding, even after tinkering a lot with its different parameters and voices. Not even close to what I can get from ElevenLabs, which could easily pass as real person speaking, but that service is very pricey for me, unfortunately. There are other popular suggestions like Fish Speech or F5-TTS, but since I need the model to speak Portuguese, that limits my options a lot. Right now I feel like I'm just wasting my time and that nothing that I can run locally can match EvenLabs currently, but as I said, I'm new to this and maybe I'm missing something obvious. In any case, I'd appreciate any input!
2025-01-10T14:28:17
https://www.reddit.com/r/LocalLLaMA/comments/1hy5l18/local_tts_models_that_can_match_elevenlabs_in/
_megazz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy5l18
false
null
t3_1hy5l18
/r/LocalLLaMA/comments/1hy5l18/local_tts_models_that_can_match_elevenlabs_in/
false
false
self
167
{'enabled': False, 'images': [{'id': 'Td6MIJJZ8Z1__qvxJyhRpI-h03Ta9j85tgebQu-lRyE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?width=108&crop=smart&auto=webp&s=5c8c2eae0654518db81fbcd75e4075ca42e995e4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?width=216&crop=smart&auto=webp&s=70fac657202b168bf65d6733dc3d8eaee4ba9307', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?width=320&crop=smart&auto=webp&s=d97e8399f99030df6c21717e04c8364d0cf267f8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?width=640&crop=smart&auto=webp&s=9056f54bcc2579779408d21d8fdef322c4e0f729', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?width=960&crop=smart&auto=webp&s=3dd769e5d3ca9ee9ce171e336f24634ae20c9ffc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?width=1080&crop=smart&auto=webp&s=c431cb3ca7417ddea0cdffaa9d58ebdb0699603a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/d9X-2G4DsSY4ADbImhzY4-hvM6t-sK0R7U_wxdWeH0U.jpg?auto=webp&s=45d97cc35a2846634f42a2868dc2cba7c8f201fe', 'width': 1198}, 'variants': {}}]}
why is there no LMStudio/Msty/GPT4All type app that supports backends other than llama.cpp?
26
im curious, ive heard that other backends, especially exllamav2 can be faster than llama.cpp in many cases especially when multiple cards or even multiple machines are running on it, model files are readily available so there is demand for it yet any of the apps i find even ones that support a sort of pluggable backend generally offer llama.cpp cpu, llama.cpp metal, llama.cpp cuda, llama.cpp vulkan and thats it, exllama seems to only be supported by the often somewhat janky and not that great to use webUIs like oogabooga or LoLLMs so my question is why not? are exllama and other backends really that difficult to implement that no one wants to even touch it, llama.cpp has LM studio, Msty, GPT4All, Jan, Jellybox, and several other options, some even support stablediffusion models but for text gen it seems no one wants to integrate it and i just wondered if theres a good reason most apps etc generally use llama.cpp over anything else
2025-01-10T14:48:48
https://www.reddit.com/r/LocalLLaMA/comments/1hy60n8/why_is_there_no_lmstudiomstygpt4all_type_app_that/
gaspoweredcat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy60n8
false
null
t3_1hy60n8
/r/LocalLLaMA/comments/1hy60n8/why_is_there_no_lmstudiomstygpt4all_type_app_that/
false
false
self
26
null
txtai 8.2 released: Simplified LLM messages, Graph RAG attribute filters and multi-CPU/GPU vector encoding
1
[removed]
2025-01-10T14:49:36
[deleted]
1970-01-01T00:00:00
0
{}
1hy61am
false
null
t3_1hy61am
/r/LocalLLaMA/comments/1hy61am/txtai_82_released_simplified_llm_messages_graph/
false
false
default
1
null
How can I maximum the probability of LLM to produce hallucinations?
1
[removed]
2025-01-10T15:02:52
https://www.reddit.com/r/LocalLLaMA/comments/1hy6c13/how_can_i_maximum_the_probability_of_llm_to/
Complete-Ad-7314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy6c13
false
null
t3_1hy6c13
/r/LocalLLaMA/comments/1hy6c13/how_can_i_maximum_the_probability_of_llm_to/
false
false
self
1
null
Using Phi-4 and AI Agents to Create Board Game Designs | Its actually pretty good.
17
2025-01-10T15:21:43
https://www.youtube.com/watch?v=CMfaLFPLzos
maddogawl
youtube.com
1970-01-01T00:00:00
0
{}
1hy6r52
false
{'oembed': {'author_name': 'GosuCoder', 'author_url': 'https://www.youtube.com/@GosuCoder', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/CMfaLFPLzos?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="AI Agents Working Together To Design a Board Game Using Local LLMs"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/CMfaLFPLzos/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'AI Agents Working Together To Design a Board Game Using Local LLMs', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1hy6r52
/r/LocalLLaMA/comments/1hy6r52/using_phi4_and_ai_agents_to_create_board_game/
false
false
https://b.thumbs.redditm…isV7fvWlMR5I.jpg
17
{'enabled': False, 'images': [{'id': 'Y-r7Fv5MAWl-Nz9Sl64eHiPpIJ1rgt923uyqo_57OG8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/0-iKSoz7BfiC1K-8PhhB7OujIWvDmjFHLMz5H3h5Q7Y.jpg?width=108&crop=smart&auto=webp&s=b07e7ac3242819a1f1554aa3e78da295dba27ceb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/0-iKSoz7BfiC1K-8PhhB7OujIWvDmjFHLMz5H3h5Q7Y.jpg?width=216&crop=smart&auto=webp&s=306e0460f4cb58ca3e89f76ac1d1bf756c8df116', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/0-iKSoz7BfiC1K-8PhhB7OujIWvDmjFHLMz5H3h5Q7Y.jpg?width=320&crop=smart&auto=webp&s=f29c65ecc43af25c683678e91cf246d5715c9d2d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/0-iKSoz7BfiC1K-8PhhB7OujIWvDmjFHLMz5H3h5Q7Y.jpg?auto=webp&s=de890aca7eb99f869cc31fb0c6ee986ac6910c26', 'width': 480}, 'variants': {}}]}
Fixing Phi-4 response with offline wikipedia
4
Since Phi-4 is pretty weak in knowledge base, so I coupled it with an offline wikipedia. This is an example: (As you can see it mistakened the Battle of 203 Meter Hill as Battle of Mukden) ./build/bin/llama-cli -m \~/gguf/phi-4-Q6\_K.gguf --seed 547312549 -p 'You are a World History Professor called Niall Ferguson.' -cnv -ngl 100 \> What will happen to the outcome of the Russo-Japanese War if the Japanese lost the Battle of 203 Meter Hill? As a historian, let's explore the hypothetical scenario where the Japanese lost the Battle of 203 Meter Hill during the Russo-Japanese War (1904-1905). The Battle of 203 Meter Hill, also known as the Battle of Mukden, was a crucial engagement in the war, resulting in a decisive Japanese victory that led to the Russian withdrawal from Manchuria. If the Japanese had lost the Battle of 203 Meter Hill, several significant changes could have occurred in the war's outcome: 1. \*\*Continued Russian Presence in Manchuria\*\*: A Japanese defeat at Mukden would likely have allowed Russia to maintain or even strengthen its positions in Manchuria. This would have prolonged the conflict, as Russia could have used its resources and manpower to stabilize its front and possibly launch counteroffensives. 2. \*\*Shift in Japanese Strategy\*\*: A defeat at such a critical juncture might have forced Japan to rethink its military strategies and objectives. They might have had to consolidate their forces in Korea and southern Manchuria, potentially abandoning their advances into northern Manchuria. 3. \*\*Impact on Morale and Political Consequences\*\*: A significant loss would have had severe repercussions for Japanese morale and political stability. Domestically, the Japanese government might have faced increased scrutiny and pressure, potentially leading to changes in leadership or policy. 4. \*\*International Implications\*\*: The outcome of the war was closely watched by global powers. A Japanese defeat at Mukden could have altered the balance of power in East Asia. It might have emboldened Russia and other Western powers, possibly affecting alliances and geopolitical strategies in the region. 5. \*\*Prolonged Conflict and Negotiations\*\*: The war could have dragged on for a longer period, with increased casualties and economic strain on both sides. Eventually, the two powers might have been forced into negotiations, potentially leading to a different peace treaty that might not have been as favorable to Japan. In summary, losing the Battle of 203 Meter Hill would have significantly altered the course of the Russo-Japanese War, potentially changing the geopolitical landscape of East Asia at the time. The Japanese victory at Mukden was pivotal in securing their objectives and establishing Japan as a rising imperial power.
2025-01-10T15:23:18
https://www.reddit.com/r/LocalLLaMA/comments/1hy6sf2/fixing_phi4_response_with_offline_wikipedia/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy6sf2
false
null
t3_1hy6sf2
/r/LocalLLaMA/comments/1hy6sf2/fixing_phi4_response_with_offline_wikipedia/
false
false
self
4
null
How to jailbreak most/all LLMs using Assistant Prefill
0
2025-01-10T15:24:33
https://www.invicti.com/blog/security-labs/first-tokens-the-achilles-heel-of-llms/
Ok_Information1453
invicti.com
1970-01-01T00:00:00
0
{}
1hy6te4
false
null
t3_1hy6te4
/r/LocalLLaMA/comments/1hy6te4/how_to_jailbreak_mostall_llms_using_assistant/
false
false
https://b.thumbs.redditm…Oj8YzXuZ4DHA.jpg
0
{'enabled': False, 'images': [{'id': 's_BW5mZmIvANlx4WCoUjDam8KoeGzJyQKia0QrClaVk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DzG_myFJyROvMjkPHi6IKHwcMk11Zha0L212Eozix-I.jpg?width=108&crop=smart&auto=webp&s=cc87d8d44320c0739357e51ca3647e5943923f80', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DzG_myFJyROvMjkPHi6IKHwcMk11Zha0L212Eozix-I.jpg?width=216&crop=smart&auto=webp&s=765971419adc318116d2190875566855d30ee828', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DzG_myFJyROvMjkPHi6IKHwcMk11Zha0L212Eozix-I.jpg?width=320&crop=smart&auto=webp&s=10ca207ec65e78a0cdae1866dd69490e1b7443cf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DzG_myFJyROvMjkPHi6IKHwcMk11Zha0L212Eozix-I.jpg?width=640&crop=smart&auto=webp&s=8131a264993852659f1913c68f5f416f09c6be96', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DzG_myFJyROvMjkPHi6IKHwcMk11Zha0L212Eozix-I.jpg?width=960&crop=smart&auto=webp&s=f59a7fdc9c2fab05d8889d55eb4b55498bc67080', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DzG_myFJyROvMjkPHi6IKHwcMk11Zha0L212Eozix-I.jpg?width=1080&crop=smart&auto=webp&s=008664eef7d630f01ff9519d0368bee70435b546', 'width': 1080}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/DzG_myFJyROvMjkPHi6IKHwcMk11Zha0L212Eozix-I.jpg?auto=webp&s=702fe542424a5abeb8ace48cab99b07b6d42604c', 'width': 1536}, 'variants': {}}]}
Finetuning Llama 3.2 3B to support text classification only
4
How can I do that? I am not going to use it for any kind of code generation.
2025-01-10T15:41:05
https://www.reddit.com/r/LocalLLaMA/comments/1hy773u/finetuning_llama_32_3b_to_support_text/
funceval
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy773u
false
null
t3_1hy773u
/r/LocalLLaMA/comments/1hy773u/finetuning_llama_32_3b_to_support_text/
false
false
self
4
null
Open Source Text Translation Models?
6
I am comparing the text translation qualities and have already tried the APIs of azure, AWS and google. I have also tried out the T5 model on huggingface, and would like to know if there is any other model or models that I can try for this? also, from an evaluation perspective, what approach would be good for comparison of different translations? I am thinking MUSE (multilingual universal sentence encoder) to encode strings in its native language and the translated language and score them based on the similarity of embedding values (I read a blog that suggested this). thanks!
2025-01-10T15:52:48
https://www.reddit.com/r/LocalLLaMA/comments/1hy7gyo/open_source_text_translation_models/
vygodisgreat24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy7gyo
false
null
t3_1hy7gyo
/r/LocalLLaMA/comments/1hy7gyo/open_source_text_translation_models/
false
false
self
6
null
Which Local LLMs know best when to speak and when to STFU in group chat agent-to-agent conversations?
17
So we’re experimenting with Autogen AG2 group chat right now and it’s kind of going horribly with regards to agent-to-agent conversations, like really bad. So far Llams3.3 seems to be the absolute worst at group chats, sometimes it just flat out refuses to provide any response like it’s the shy kid in school and gets timed out. Nemotron seems better but still not great. My question is, which models have you found are good at multi-LLM multi-turn conversations in group chat scenarios? We have tried Llama3.3, Nemotron, and Qwen2.5-72b mainly so far and are about to try a range of small to midsize models (Phi-4, Dolphin3, Exaone, and Falcon). Our roles are Project Manager, Engineer, Researcher, Writer, and Critic. Also, we are exploring both group chat and swarm methods of communication between agents. I would love to hear from anyone who has already “plowed this ground” before. Any insights you have are appreciated.
2025-01-10T15:58:52
https://www.reddit.com/r/LocalLLaMA/comments/1hy7m1y/which_local_llms_know_best_when_to_speak_and_when/
Porespellar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy7m1y
false
null
t3_1hy7m1y
/r/LocalLLaMA/comments/1hy7m1y/which_local_llms_know_best_when_to_speak_and_when/
false
false
self
17
null
4 days troubleshooting, finally fixed
8
tl;dr - It was Risers. It's always risers. Was conflicted on whether to assign question/help or funny or discussion as the flair. Hey everyone first post here, Just wanted to share my recent experience with troubleshooting issues when working with LLMs on my server. Hopefully, it will save someone else the headache I went through. I started noticing instability when I added a third NVLink bridge to my server with 6x3090s. Large models (100B+ parameters, 6bpw) would crash the LLM server/VM after generating a few queries. The behavior was consistent: the model would load fine, I'd run a few queries, and then it would crash after the second or third query. **The Journey** I spent nearly a week troubleshooting, looking at every possible culprit: * **NUMA Nodes:** I thought it might be a memory allocation issue across NUMA nodes. or the way ESXi dynamically assigns them, so I added a variable to force the cards on 1 numa node, or multiple. * **NVLink:** Given the new addition, I suspected NVLink might be causing issues. After removing the bridges the behavior was the same * **Drivers:** Updated or purged all NV/Cuda drivers to the latest versions. * **OS & ESXi:** Checked for any OS-level or hypervisor-related problems. Rebuilt the OS from scratch or Veeam backup multiple times * **Heat:** Temps were fine as all 6 are watercooled with 2100mm of Radiator. max temps seen on inferencing was 40c. When trying to generate a Load, using Kali/hashcat max temp was 45c * **Backend Changes:** Tried using LM Studio or vLLM instead of TabbyAPI, like I usually use. Same behavior * **PCIe Seating:** Made sure all components were properly seated. Again & again. **The Culprit** After exhausting all these possibilities, I finally realized the issue: the SLIMSAS 8i riser cables. These cables had been working fine for months but started acting up on PCIe 4.0 speeds. They were stable at PCIe 3.0 but occasionally unstable at PCIe 4.0. **The Lesson** Sometimes, the simplest things can be the root cause of complex problems. I think the highbrow RCA would be to check the simple things first. But in this scenario I'm just going to say. **Check Risers first. It's always risers.**
2025-01-10T16:03:19
https://www.reddit.com/r/LocalLLaMA/comments/1hy7pxb/4_days_troubleshooting_finally_fixed/
darkmaniac7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy7pxb
false
null
t3_1hy7pxb
/r/LocalLLaMA/comments/1hy7pxb/4_days_troubleshooting_finally_fixed/
false
false
self
8
null
Biden to Further Limit Nvidia AI Chip Exports in Final Push Restricting US Allies Such As Poland, Portugal, India or UAE Maker Of Falcon Models
397
2025-01-10T16:24:05
https://www.bloomberg.com/news/articles/2025-01-08/biden-to-further-limit-nvidia-amd-ai-chip-exports-in-final-push
holamifuturo
bloomberg.com
1970-01-01T00:00:00
0
{}
1hy8733
false
null
t3_1hy8733
/r/LocalLLaMA/comments/1hy8733/biden_to_further_limit_nvidia_ai_chip_exports_in/
false
false
https://b.thumbs.redditm…Pg9y8Im8JOJg.jpg
397
{'enabled': False, 'images': [{'id': 'bztki-BkOTgNvTdyEcKGJxtMVPEpkPINFlKVPOL8Ed8', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?width=108&crop=smart&auto=webp&s=4198c3caeb6d615b94c91aeb6a2b4288168b3863', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?width=216&crop=smart&auto=webp&s=07a63161d8bd215bbb53e071cd0d31ade85648fd', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?width=320&crop=smart&auto=webp&s=cf15e6c67c7c8970bfedc49928cb7433228e97e0', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?width=640&crop=smart&auto=webp&s=3dc3abfba0983054d99719792b1d834321b47c7b', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?width=960&crop=smart&auto=webp&s=0dc2ebbc504c5839bae7e4df68a9c35e791e047e', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?width=1080&crop=smart&auto=webp&s=37e4034651bd07ae180510e44eb1af14d64f3c14', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/JzTX2qRvXmqwwoj4NDPkmIrtAEnnpeeSx1wuD-VSMTA.jpg?auto=webp&s=3e9c61875458cda809a18f610f8fefd1199c41d7', 'width': 1200}, 'variants': {}}]}
Does anyone know how to replicate this setup for coding ?
1
2025-01-10T16:30:20
https://v.redd.it/2330de9317ce1
hackerbot69420
/r/LocalLLaMA/comments/1hy8ca0/does_anyone_know_how_to_replicate_this_setup_for/
1970-01-01T00:00:00
0
{}
1hy8ca0
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/2330de9317ce1/DASHPlaylist.mpd?a=1739248227%2CYjQ3NDc0OTk0MzY4YjA5MTU2NjA5YjYyZGY1ZGJjNTY2YWFiYTk4NGVlZjYzOGZiMjc1NjE3YTU1ZGQwYjQ2YQ%3D%3D&v=1&f=sd', 'duration': 147, 'fallback_url': 'https://v.redd.it/2330de9317ce1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/2330de9317ce1/HLSPlaylist.m3u8?a=1739248227%2CZmJjZTBlOGY1YTBlN2JlYTY3NjAzYjIyNDEzMTk4MWQ4ODkzNzZmMWU2Y2Q0YTkxNDMyMWY5M2YzMDg4ZjlmMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2330de9317ce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 812}}
t3_1hy8ca0
/r/LocalLLaMA/comments/1hy8ca0/does_anyone_know_how_to_replicate_this_setup_for/
false
false
https://external-preview…7441df6b43497eab
1
{'enabled': False, 'images': [{'id': 'NmE2ejJmOTMxN2NlMS56XFGJu3NZDVKegIKC-MLATnPqO3pS-WU1hflZGjcm', 'resolutions': [{'height': 95, 'url': 'https://external-preview.redd.it/NmE2ejJmOTMxN2NlMS56XFGJu3NZDVKegIKC-MLATnPqO3pS-WU1hflZGjcm.png?width=108&crop=smart&format=pjpg&auto=webp&s=d74d45e17572363aa6c17f75496699a1e95e89d3', 'width': 108}, {'height': 191, 'url': 'https://external-preview.redd.it/NmE2ejJmOTMxN2NlMS56XFGJu3NZDVKegIKC-MLATnPqO3pS-WU1hflZGjcm.png?width=216&crop=smart&format=pjpg&auto=webp&s=f77228417a95890b38325b14be229e84d809226b', 'width': 216}, {'height': 283, 'url': 'https://external-preview.redd.it/NmE2ejJmOTMxN2NlMS56XFGJu3NZDVKegIKC-MLATnPqO3pS-WU1hflZGjcm.png?width=320&crop=smart&format=pjpg&auto=webp&s=6fae50822a2f4d3f7ea3ad57cc6c2ba1ccba9bda', 'width': 320}, {'height': 567, 'url': 'https://external-preview.redd.it/NmE2ejJmOTMxN2NlMS56XFGJu3NZDVKegIKC-MLATnPqO3pS-WU1hflZGjcm.png?width=640&crop=smart&format=pjpg&auto=webp&s=6d14c95c564121b7ffc6a4a3249dbb50c01699c7', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/NmE2ejJmOTMxN2NlMS56XFGJu3NZDVKegIKC-MLATnPqO3pS-WU1hflZGjcm.png?format=pjpg&auto=webp&s=fc697426037c15418455edeb115a87425af8f67c', 'width': 812}, 'variants': {}}]}
freeact: A Lightweight Library for Code-Action Based Agents
1
[removed]
2025-01-10T16:31:12
https://www.reddit.com/r/LocalLLaMA/comments/1hy8d2e/freeact_a_lightweight_library_for_codeaction/
c_stub
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy8d2e
false
null
t3_1hy8d2e
/r/LocalLLaMA/comments/1hy8d2e/freeact_a_lightweight_library_for_codeaction/
false
false
https://b.thumbs.redditm…_svf6tZ0eFnU.jpg
1
{'enabled': False, 'images': [{'id': 'XyniTf9gud3EyElje36AehfJSvoMMy4nKgoao33Y29o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?width=108&crop=smart&auto=webp&s=ec22ff5a0da8bd83a2bac9f72de886f995f0aac0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?width=216&crop=smart&auto=webp&s=b66b6693454b54289b57d0e118fed29f70466c8d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?width=320&crop=smart&auto=webp&s=21f44418bca48aa90bd5ab64ab5973db360ec478', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?width=640&crop=smart&auto=webp&s=19b672149c7219c55d19b0e99adc3b4bbb5b158c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?width=960&crop=smart&auto=webp&s=ffb022dc8e5b303c3a618e4dfe331362158e2117', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?width=1080&crop=smart&auto=webp&s=bfd92c989ae0d89fd1f68a0fa03c7263f01def28', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?auto=webp&s=45cd59f2c9ea157e14d88cf332bc6f45a16aadf6', 'width': 1200}, 'variants': {}}]}
Does anyone know how to replicate this setup for coding ?
48
2025-01-10T16:32:48
https://v.redd.it/wz5qfaxl17ce1
Alive-Tax3189
/r/LocalLLaMA/comments/1hy8ehf/does_anyone_know_how_to_replicate_this_setup_for/
1970-01-01T00:00:00
0
{}
1hy8ehf
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/wz5qfaxl17ce1/DASHPlaylist.mpd?a=1739248372%2CNGZjMWViNjcyYjJlOTJkNmUzOWU0MWI5MjFkMDcxNjQ5YTQ3MmRlYmM2NTZhMWYyMmZkZGMwNGNhNDFmOTE1YQ%3D%3D&v=1&f=sd', 'duration': 147, 'fallback_url': 'https://v.redd.it/wz5qfaxl17ce1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/wz5qfaxl17ce1/HLSPlaylist.m3u8?a=1739248372%2CODQ2ZjIxMWU1MDcxNDFlNWNlMjViODMzZmQ2ZmQ5OTViYjVmMWExNDliZjg1NDc3Nzg0NGMxNTJlMDZhNWY1Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wz5qfaxl17ce1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 812}}
t3_1hy8ehf
/r/LocalLLaMA/comments/1hy8ehf/does_anyone_know_how_to_replicate_this_setup_for/
false
false
https://external-preview…bd5acbbec9c002bc
48
{'enabled': False, 'images': [{'id': 'enU0ZnVheGwxN2NlMZs7egYfaDsCtkR_AYCrnVuq-88BdYMxPb_V_Fpy742y', 'resolutions': [{'height': 95, 'url': 'https://external-preview.redd.it/enU0ZnVheGwxN2NlMZs7egYfaDsCtkR_AYCrnVuq-88BdYMxPb_V_Fpy742y.png?width=108&crop=smart&format=pjpg&auto=webp&s=84a9448cec82cfa5f4482fb7379562680340c221', 'width': 108}, {'height': 191, 'url': 'https://external-preview.redd.it/enU0ZnVheGwxN2NlMZs7egYfaDsCtkR_AYCrnVuq-88BdYMxPb_V_Fpy742y.png?width=216&crop=smart&format=pjpg&auto=webp&s=3d1fa5ebcbf328968cee7bab12b1fb9e2f7aab61', 'width': 216}, {'height': 283, 'url': 'https://external-preview.redd.it/enU0ZnVheGwxN2NlMZs7egYfaDsCtkR_AYCrnVuq-88BdYMxPb_V_Fpy742y.png?width=320&crop=smart&format=pjpg&auto=webp&s=9f6ee5babe2a72e2368d9f11900e1f1f390d1e55', 'width': 320}, {'height': 567, 'url': 'https://external-preview.redd.it/enU0ZnVheGwxN2NlMZs7egYfaDsCtkR_AYCrnVuq-88BdYMxPb_V_Fpy742y.png?width=640&crop=smart&format=pjpg&auto=webp&s=ada03867b22486a9f510cae0e46b64cdaed545e5', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/enU0ZnVheGwxN2NlMZs7egYfaDsCtkR_AYCrnVuq-88BdYMxPb_V_Fpy742y.png?format=pjpg&auto=webp&s=dfa4ab7dfa1abbaf61cd421943c9aff86b4eee79', 'width': 812}, 'variants': {}}]}
Anyone using Git for active dataset collaboration? Here's what I learned
2
Is anyone using Git as their primary store for training/eval data? I see it used everywhere for sharing dataset files, but mostly just for sharing large files -- not for rapid, line-by-line collaboration like how we use Git with code. From my experience, when teams need to iterate they usually end up moving data into other tools (labeling tools, databases, etc). I'm convinced Git should be perfect for datasets (just like it is for code), but it's missing some tooling to make that work well. I've been working through a bunch of gotchas while building a Git-friendly data format for my dataset management tool (here's the [project on Github](https://github.com/Kiln-AI/Kiln) if you're interested). I think I've solved the main issues, and included details below. Would love to hear if anyone else has tackled similar problems or has ideas to share. **Making a Git-friendly file format**: The main issue is that giant dataset files aren't ideal for teams using Git; they cause constant merge conflicts whenever multiple people work simultaneously. Instead, I created a Git-friendly file format: 1. Use lots of small files, making it rare for multiple people to need the same file at once, reducing conflicts (basically each row is now it's own file) 2. Use predominantly append-only operations, further reducing the chance of conflicts 3. Format files in a way that's easy to diff/merge when conflicts do happen (formatted, ordered JSON) 4. Generate unique random IDs for new items to avoid conflicts/collisions, letting many people work concurrently on the same project, even offline 5. Enforce a good schema: everyone uses the same format for prompts/ratings/samples, with validators that prevent badly formatted data from getting into the dataset. We even validate the LLM inputs/outputs using JSON Schema (optional, depending on your task definition) 6. Use nested folders to make paths really descriptive and easy to understand in PRs: Projects > Task > (Runs | Prompts | Fine-tunes | Splits) **Working with non-technical teammates**: Not everyone knows Git, and that's totally fine! Folks like subject matter experts, labelers, and QA still have a lot of value to add. With this design it's easy to share a branch through any shared drive (Dropbox, iCloud, OneDrive). These teammates can rate/label/eval/add-data through our friendly app with a super simple installer (no terminal, no Docker). Right now someone technical needs to sync the shared branch back to main on occasion, but I might add auto-sync if enough people want it. **What I've built with this so far**: Having clean, structured data opened up some pretty cool possibilities. I added synthetic data generation (basically letting the LLM riff on our existing examples to create training data). It worked surprisingly well. I also added fine-tuning support - you can use Unsloth locally or the cloud. A little strangely, my favorite feature is that it automatically builds better prompts as you use it. The multi-shot prompts pick up highly rated examples and keep getting better. **Integrating with other tools (pandas/polars)**: Most DS tutorials/tools assume you're working with big files (CSV, JSONL), but our format splits things into lots of small files. To make this work we have 1) a simple SDK that helps load everything into whatever data structure you need (pandas DataFrames, etc.), and 2) some examples showing how to directly load the JSON files directly. **What's left**: I think a solid eval framework should be next on the list. Would love to hear what eval systems you all are using and what's working well! Let me know if you have any tips or workflows I haven't thought of! Super curious how others are handling their datasets in Git. Feel free to drop feature requests on [Github](https://github.com/Kiln-AI/Kiln) too if that's more your style.
2025-01-10T16:33:01
https://www.reddit.com/r/LocalLLaMA/comments/1hy8enl/anyone_using_git_for_active_dataset_collaboration/
davernow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy8enl
false
null
t3_1hy8enl
/r/LocalLLaMA/comments/1hy8enl/anyone_using_git_for_active_dataset_collaboration/
false
false
self
2
{'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=108&crop=smart&auto=webp&s=3b88941d057d599da1826c2b94b2663517e4e023', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=216&crop=smart&auto=webp&s=0b774d9f72bf345e9e39402886649223ad60e4d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=320&crop=smart&auto=webp&s=6c769aa8ce8a2839b46e12de1fd8743d4171f08d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=640&crop=smart&auto=webp&s=c9f49d760efe4ddd92a3a07a57705e5073b56eed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=960&crop=smart&auto=webp&s=8666fab577a806da6551b1f2e0ec70f217f6f2fb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=1080&crop=smart&auto=webp&s=b3de3b28dfba5fc1615aa5f1c855312805eda01b', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?auto=webp&s=6728f96b3a663740abd86d6d7aff692490474d84', 'width': 1280}, 'variants': {}}]}
Llama3.3-70B takes 4+ hours to load at 4bit quantisation
5
I'm using Llama3.3-70B-Instruct via hugging face transformers pipeline. I can load the non-quantised model into memory quickly (with offloading), but inference times are abysmal. I can't work out why it takes so long to load the checkpoint shards for the 4-bit quantised model (same with 8-bit). I checked drive transfer speeds, RAM usage, CPU bottlenecking but everything is under capacity. I'm using the script directly from [https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) for quantisation, and have updated transformers, accelerate and bitsandbytes itself. System is an L40 GPU with 64GB RAM and an epic 9124 processor. I am completely out of ideas for where to look next, any direction would be greatly appreciated!
2025-01-10T16:34:49
https://www.reddit.com/r/LocalLLaMA/comments/1hy8g61/llama3370b_takes_4_hours_to_load_at_4bit/
MuffinJesus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy8g61
false
null
t3_1hy8g61
/r/LocalLLaMA/comments/1hy8g61/llama3370b_takes_4_hours_to_load_at_4bit/
false
false
self
5
{'enabled': False, 'images': [{'id': 'VonlxAOpG-SOmitQHhh949yS9p5GoGwzDacaaDw8pe4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=108&crop=smart&auto=webp&s=406b75739914d00816f767bfe4ba5cde1b965a12', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=216&crop=smart&auto=webp&s=561cfa109b99033cf44c96752fa4fe0059d99209', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=320&crop=smart&auto=webp&s=3dededd1834672bfc787ef43d2e7584b57f36c4f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=640&crop=smart&auto=webp&s=1f8c63e24c34b0f28547be624d2a56d60be52aaa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=960&crop=smart&auto=webp&s=f5f167a7ec537e5d286000c3131dde564533a1c6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=1080&crop=smart&auto=webp&s=ebd47fd5ef098c6f56e061c339a82ff33e37caad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?auto=webp&s=e1bba36dc8102e308d41a3391676cd6d7058a0f0', 'width': 1200}, 'variants': {}}]}
freeact: A Lightweight Library for Code-Action Based Agents
26
Hello! We just released [freeact](https://github.com/gradion-ai/freeact), a lightweight agent library that empowers language models to act as autonomous agents through executable **code actions**. By enabling agents to express their actions directly in code rather than through constrained formats like JSON, freeact provides a flexible and powerful approach to solving complex, open-ended problems that require dynamic solution paths. * Supports dynamic installation and utilization of Python packages at runtime * Agents learn from feedback and store successful code actions as reusable skills in long-term memory * Skills can be interactively developed and refined in collaboration with freeact agents * Agents compose skills and any other Python modules to build increasingly sophisticated capabilities * Code actions are executed in [ipybox](https://github.com/gradion-ai/ipybox), a secure Docker + IPython sandbox that runs locally or remotely GitHub repo: [https://github.com/gradion-ai/freeact](https://github.com/gradion-ai/freeact) Evaluation: [https://gradion-ai.github.io/freeact/evaluation/](https://gradion-ai.github.io/freeact/evaluation/) We'd love to hear your feedback! See it in action: https://reddit.com/link/1hy8ii4/video/rs73092327ce1/player
2025-01-10T16:37:39
https://www.reddit.com/r/LocalLLaMA/comments/1hy8ii4/freeact_a_lightweight_library_for_codeaction/
krasserm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy8ii4
false
null
t3_1hy8ii4
/r/LocalLLaMA/comments/1hy8ii4/freeact_a_lightweight_library_for_codeaction/
false
false
https://b.thumbs.redditm…_svf6tZ0eFnU.jpg
26
{'enabled': False, 'images': [{'id': 'XyniTf9gud3EyElje36AehfJSvoMMy4nKgoao33Y29o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?width=108&crop=smart&auto=webp&s=ec22ff5a0da8bd83a2bac9f72de886f995f0aac0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?width=216&crop=smart&auto=webp&s=b66b6693454b54289b57d0e118fed29f70466c8d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?width=320&crop=smart&auto=webp&s=21f44418bca48aa90bd5ab64ab5973db360ec478', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?width=640&crop=smart&auto=webp&s=19b672149c7219c55d19b0e99adc3b4bbb5b158c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?width=960&crop=smart&auto=webp&s=ffb022dc8e5b303c3a618e4dfe331362158e2117', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?width=1080&crop=smart&auto=webp&s=bfd92c989ae0d89fd1f68a0fa03c7263f01def28', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LzADpwKw8mUc5MZG38ssaaBnDyw6elEdCSjne9inj0g.jpg?auto=webp&s=45cd59f2c9ea157e14d88cf332bc6f45a16aadf6', 'width': 1200}, 'variants': {}}]}
Q on inference speed: int4 vs fp8; ada fp8 support
8
I run a lot of text through 70B models in awq int4 on vLLM (that fits on 2x3090). I understand that the more compression of the parameters, the smaller and thus the faster they'll load from VRAM, but I also expect there must be overhead in converting the quantized representations back to native datatypes the GPU can actually perform operations on. I'm trying to understand how much additional speed I'd gain by using an fp8 quantized model on Ada chips that natively support fp8 (so no conversion needed?). This of course assumes I double the VRAM available. Practically, this would mean moving to 4x4090 or 2x6000 (ada), which I know means throwing in additional speed factors.
2025-01-10T16:53:20
https://www.reddit.com/r/LocalLLaMA/comments/1hy8wa0/q_on_inference_speed_int4_vs_fp8_ada_fp8_support/
e-rox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy8wa0
false
null
t3_1hy8wa0
/r/LocalLLaMA/comments/1hy8wa0/q_on_inference_speed_int4_vs_fp8_ada_fp8_support/
false
false
self
8
null
0.5B Distilled QwQ, runnable on IPhone
221
2025-01-10T16:59:44
https://huggingface.co/spaces/kz919/Mini-QwQ
Lord_of_Many_Memes
huggingface.co
1970-01-01T00:00:00
0
{}
1hy91m1
false
null
t3_1hy91m1
/r/LocalLLaMA/comments/1hy91m1/05b_distilled_qwq_runnable_on_iphone/
false
false
https://b.thumbs.redditm…y2jO_9cwvIHM.jpg
221
{'enabled': False, 'images': [{'id': 'e3p2Ae-T7q7thWCJiBXk7weLeb2wsxwlVMS73MyyxdM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hOvT7Zh2EDTGcuqajUYbM7IboIMuAwdCFsY0UWAS0pU.jpg?width=108&crop=smart&auto=webp&s=2b152ee6e2e9ed98f9d8eefcfaedb8819a87854a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hOvT7Zh2EDTGcuqajUYbM7IboIMuAwdCFsY0UWAS0pU.jpg?width=216&crop=smart&auto=webp&s=959d3d15d038ccb7a43cd1fc7a355813d671cf41', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hOvT7Zh2EDTGcuqajUYbM7IboIMuAwdCFsY0UWAS0pU.jpg?width=320&crop=smart&auto=webp&s=24642b26c32f9abfc2eaa99f2e3351ca97d2ddb6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hOvT7Zh2EDTGcuqajUYbM7IboIMuAwdCFsY0UWAS0pU.jpg?width=640&crop=smart&auto=webp&s=85274e9584cd8dc27f3835483f32b47ea48f28f0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hOvT7Zh2EDTGcuqajUYbM7IboIMuAwdCFsY0UWAS0pU.jpg?width=960&crop=smart&auto=webp&s=c4fc3b0460511dac141280e5490eaf3d3fe18388', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hOvT7Zh2EDTGcuqajUYbM7IboIMuAwdCFsY0UWAS0pU.jpg?width=1080&crop=smart&auto=webp&s=d75bbc1a33660fbadc514ef15d7b0a398cd56a21', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hOvT7Zh2EDTGcuqajUYbM7IboIMuAwdCFsY0UWAS0pU.jpg?auto=webp&s=fcc40488f92fda55663ff38101e2722b95288ac4', 'width': 1200}, 'variants': {}}]}
What is the simplest way to run AWQ models on Windows 10?
2
I tried text-generation-webui but it can't run AWQ because it requires triton
2025-01-10T17:19:05
https://www.reddit.com/r/LocalLLaMA/comments/1hy9ifu/what_is_the_simplest_way_to_run_awq_models_on/
rookan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hy9ifu
false
null
t3_1hy9ifu
/r/LocalLLaMA/comments/1hy9ifu/what_is_the_simplest_way_to_run_awq_models_on/
false
false
self
2
null
Most Pay, Least Work
0
Back in May I started a job as an AI Developer. Basically I integrate generative AI into administrative tools to reduce paperwork. And this was a $35,000 raise from my previous job as a software analyst for a hospital. By far this is the absolutely least amount of work I've ever done at a job and it is the most money I have ever made at a job. The funny part is I mainly use Cline with Anthropic, OpenAI, and DeepSeek. I've maybe written a hundred lines of code in 7 and 1/2 months!
2025-01-10T18:01:13
https://www.reddit.com/r/LocalLLaMA/comments/1hyaj2s/most_pay_least_work/
Leading-Leading6718
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyaj2s
false
null
t3_1hyaj2s
/r/LocalLLaMA/comments/1hyaj2s/most_pay_least_work/
false
false
self
0
null
Phi-4 Finetuning - now with >128K context length + Bug Fix Details
217
Hey guys! You can now fine-tune Phi-4 with >128K context lengths using [Unsloth](https://github.com/unslothai/unsloth/)! That's 12x longer than Hugging Face + FA2’s 11K on a 48GB GPU. Phi-4 Finetuning Colab: [https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi\_4-Conversational.ipynb](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) We also previously announced bug fixes for Phi-4 and so we’ll reveal the details. But, before we do, some of you were curious if our fixes actually worked? Yes! Our fixed Phi-4 uploads show clear performance gains, with even better scores than Microsoft's original uploads on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=phi-4). https://preview.redd.it/d8hew26e06ce1.png?width=2366&format=png&auto=webp&s=173c23feacc625566271470839fe7a5e25eb860e Some of you even tested it to show greatly improved results in: * Example 1: [Multiple-choice tasks](https://www.reddit.com/r/LocalLLaMA/comments/1hwzmqc/comment/m665h08/) https://preview.redd.it/qx50pkq706ce1.png?width=1579&format=png&auto=webp&s=437da2cabdbf98ef5a8b8cbdc5592907a20e2316 * Example 2: [ASCII art generation](https://www.reddit.com/r/LocalLLaMA/comments/1hwzmqc/comment/m65wr3e/) https://preview.redd.it/ircz0pnc06ce1.png?width=1433&format=png&auto=webp&s=16c770a0fd58a469af3b98216844447845b98ada # Bug Fix Details 1. Tokenizer Fix: Phi-4 incorrectly uses <|endoftext|> as EOS instead of <|im\_end|>. 2. Finetuning Fix: Use a proper padding token (e.g., <|dummy\_87|>). 3. Chat Template Fix: Avoid adding an assistant prompt unless specified to prevent serving issues. 4. More in-depth in our blog: [https://unsloth.ai/blog/phi4](https://unsloth.ai/blog/phi4) |Phi-4 Uploads (with our bug fixes)| |:-| |[GGUFs](https://huggingface.co/unsloth/phi-4-GGUF) including 2, 3, 4, 5, 6, 8, 16-bit| |[Unsloth Dynamic 4-bit](https://huggingface.co/unsloth/phi-4-unsloth-bnb-4bit)| |[Original 16-bit](https://huggingface.co/unsloth/phi-4)| For all other model uploads, see [our docs](https://docs.unsloth.ai/get-started/all-our-models) I know this post was a bit long, but I hope it was informative and please ask any questions!! :)
2025-01-10T18:09:05
https://www.reddit.com/r/LocalLLaMA/comments/1hyapzu/phi4_finetuning_now_with_128k_context_length_bug/
danielhanchen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyapzu
false
null
t3_1hyapzu
/r/LocalLLaMA/comments/1hyapzu/phi4_finetuning_now_with_128k_context_length_bug/
false
false
https://b.thumbs.redditm…GT53hlsQAwys.jpg
217
{'enabled': False, 'images': [{'id': 'oUAe34zUCLxMUIpYtOvOz6aYou2CnbtJjhJZ0bwJ6Jg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=108&crop=smart&auto=webp&s=6481fbac644d8a96c2918c63e805d1c62e24cbe5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=216&crop=smart&auto=webp&s=941b00cf4a68a70df266160fe06769bc2a817a41', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=320&crop=smart&auto=webp&s=e794c7cbf042b8d8e6fdd8f8c239e0f5cb398261', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=640&crop=smart&auto=webp&s=57fbf9c89972d5c31e3bd2d3354696be4e8d5b9d', 'width': 640}, {'height': 505, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=960&crop=smart&auto=webp&s=557f9a403410be41c1438b6d2b1a2acd9d507da4', 'width': 960}, {'height': 568, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?width=1080&crop=smart&auto=webp&s=989ea96f774aa62c199da9564be3b7b646db1494', 'width': 1080}], 'source': {'height': 834, 'url': 'https://external-preview.redd.it/GToYANeKQHKhFdKJjLK03Emv1ylZ0l8jeD1iuQJ8-dE.jpg?auto=webp&s=fb46a23aaa0ed1c5044eaea486ff79352cce2675', 'width': 1584}, 'variants': {}}]}
Best practices for multi-turn conversational RAG
1
Hello everyone! I have a pretty specific question regarding modern best practices on conversational RAG: how do we utilize retrieved documents for previous user queries within the conversation with context retrieved for the latest query. Conversational RAG consists of multiple Q/A turns where the user might ask many questions. Each question usually needs some sort of retrieval to be answered. Each time we take the query, transform it to reflect chat history, use it in our retrieval system, then get document ranking and fuse it with the chat history to answer the user latest query. What I don't really understand here -> **each answer relies on a set of documents retrieved for this particular query**. But from what I understand, **most approaches to Conversational RAG do not include those documents in the chat history for the most part**. They keep the **answer, keep the query and each time the model sees only the latest set of retrieved documents**. I suspect it to work great for conversation around a fixed set of documents, but in many cases, responses produced by the model in previous turns of the conversation were generated based on sets of documents that do not necessarily overlap with the set of documents retrieved for the current query. And because those latest questions usually rely on query-answers introduced before and **THEREFORE** on information present in sets of previously retrieved documents, shouldn't we somehow maintain a coherent continuously updated set of documents used for query answering that reflects the basis of previous answers while providing enough insight on new context specific to the latest query? **I'm curious if previous responses not grounded in current set of documents provide the basis for hallucinations and so on?** **In addition, this approach (with continuous maintanance of relevant set of documents) allows us to reduce the number of retrievals, for example, if previously retrieved context already contains enough of information to answer the follow-ups.** **Is this reasonable? Or this is just over complication of things which is not neccessary to provide reasonable performance in multi-turn RAG QA scenario?** Unfortunately I was unable to find relevant literature on this. Any suggestions are welcomed!
2025-01-10T18:33:24
https://www.reddit.com/r/LocalLLaMA/comments/1hybbcf/best_practices_for_multiturn_conversational_rag/
oposteriori
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hybbcf
false
null
t3_1hybbcf
/r/LocalLLaMA/comments/1hybbcf/best_practices_for_multiturn_conversational_rag/
false
false
self
1
null
Best practices of retrieval/context maintenance for multi-turn RAG conversational systems
5
Hello everyone! I have a pretty specific question regarding modern best practices ononversational RAG: **How do we utilize retrieved documents for previous user queries within the conversation with context retrieved for the latest query?**. ## My understanding of current approaches Conversational RAG consists of multiple Q/A turns where the user might ask many questions. Each question usually needs some sort of retrieval to be answered. Each time we take the query, transform it to reflect chat history, use it in our retrieval system, then get document ranking and fuse it with the chat history to answer the user latest query. What I don't really understand here -> **each answer relies on a set of documents retrieved for this particular query**. But from what I understand, **most approaches to Conversational RAG do not include those documents in the chat history for the most part**. They keep the **answer, keep the query and each time the model sees only the latest set of retrieved documents**. ## Context maintenance idea and some potential issues I suspect it to work great for conversation around a fixed set of documents, but in many cases, responses produced by the model in previous turns of the conversation were generated based on sets of documents that do not necessarily overlap with the set of documents retrieved for the current query. And because those latest questions usually rely on query-answers introduced before and **THEREFORE** on information present in sets of previously retrieved documents, shouldn't we somehow maintain a coherent continuously updated set of documents used for query answering that reflects the basis of previous answers while providing enough insight on new context specific to the latest query? **In addition, this approach (with continuous maintanance of relevant set of documents) allows us to reduce the number of retrievals, for example, if previously retrieved context already contains enough of information to answer the follow-ups.** **I'm curious if the fact previous responses might not be grounded in current set of documents provide the basis for hallucinations?** **Is this reasonable? Or this is just over complication of things which is not neccessary to provide reasonable performance in multi-turn RAG QA scenario?** I'm aware of the fact that in some specific scenarios (each follow-up question needs a lot of additional retireval) -> context needed to accurately reflect the basis for all answers might grow exponentially Unfortunately I was unable to find relevant literature on this. Any suggestions are welcomed!
2025-01-10T18:38:05
https://www.reddit.com/r/LocalLLaMA/comments/1hybfdx/best_practices_of_retrievalcontext_maintenance/
oposteriori
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hybfdx
false
null
t3_1hybfdx
/r/LocalLLaMA/comments/1hybfdx/best_practices_of_retrievalcontext_maintenance/
false
false
self
5
null
GPU memory usage higher than expected
1
[removed]
2025-01-10T19:06:28
https://www.reddit.com/r/LocalLLaMA/comments/1hyc4ay/gpu_memory_usage_higher_than_expected/
FutureFroth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyc4ay
false
null
t3_1hyc4ay
/r/LocalLLaMA/comments/1hyc4ay/gpu_memory_usage_higher_than_expected/
false
false
self
1
null
Recommendations for a dockerized fast whisper endpoint with Pascal GPU support?
6
I'm looking for recommendations for a dockerized endpoint for fast whisper support that will run well on Pascal class GPUs (notably which have very poor FP16 performance). Whisper Large or Distil Whisper is fine. I'm looking for latency optimized options (for low latency chat applications). Although one which can handle both low latency chat and batched inference would be welcome.
2025-01-10T19:12:26
https://www.reddit.com/r/LocalLLaMA/comments/1hyc9gy/recommendations_for_a_dockerized_fast_whisper/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1hyc9gy
false
null
t3_1hyc9gy
/r/LocalLLaMA/comments/1hyc9gy/recommendations_for_a_dockerized_fast_whisper/
false
false
self
6
null