title (string, 1-300 chars) | score (int64, 0 to 8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, ⌀) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0 to 2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, ⌀) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0 to 8.54k) | preview (string, 301-5.01k chars, ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Should I get a 14 inch M4 Max 128GB for 123B models? | 25 | Top-end, unbinned, 40 core one.
I heard it throttles down and reduces the t/s for the 14 inch? Is the fan noise unbearable? Also, how is the generation speed for a 123B 16k context prompt? (Prompt Processing doesn't really count since I can cache it)
Space black if that matters | 2024-11-28T18:55:06 | https://www.reddit.com/r/LocalLLaMA/comments/1h2300d/should_i_get_a_14_inch_m4_max_128gb_for_123b/ | TheLocalDrummer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2300d | false | null | t3_1h2300d | /r/LocalLLaMA/comments/1h2300d/should_i_get_a_14_inch_m4_max_128gb_for_123b/ | false | false | self | 25 | null |
QwQ coding .... I am terrified how good it is .... | 192 | `llama-cli.exe --model QwQ-32B-Preview-Q4_K_M.gguf --color --threads 30 --keep -1 --n-predict -1 --ctx-size 16384 -ngl 99 --simple-io -e --multiline-input --no-display-prompt --conversation --no-mmap --in-prefix "<|im_end|>\n<|im_start|>user\n" --in-suffix "<|im_end|>\n<|im_start|>assistant\n" -p "<|im_start|>system\nYou are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step." --top-k 20 --top-p 0.8 --temp 0.7 --repeat-penalty 1.05`
prompt
`"Provide complete working code for a realistic looking tree in Python using the Turtle graphics library and a recursive algorithm."`
Final code - used more or less 5k tokens each generation
import turtle
import random

# Define color palettes
branch_colors = ["saddle brown", "sienna", "peru"]
leaf_colors = ["lime green", "forest green", "dark green"]

# Set up the screen and turtle
screen = turtle.Screen()
pen = turtle.Turtle()
pen.speed(0)  # Set speed to fastest
pen.hideturtle()  # Hide the turtle cursor
screen.delay(0)  # Set delay to 0 for faster drawing

# Function to draw the tree
def draw_tree(branch_length, min_length, angle):
    if branch_length < min_length:
        pen.color(random.choice(leaf_colors))
        leaf_size = random.randint(8, 12)
        pen.dot(leaf_size)
        pen.color(random.choice(branch_colors))
        return
    else:
        pen.color(random.choice(branch_colors))
        pen_size = branch_length / 20 + random.uniform(-0.5, 0.5)
        pen.pensize(max(1, pen_size))  # Ensure pen size is at least 1
        pen.forward(branch_length)
        new_length = branch_length * (random.uniform(0.6, 0.8))  # Vary the scaling factor
        # Draw multiple sub-branches
        num_sub_branches = random.randint(2, 4)  # Random number of sub-branches
        total_angle = angle * (num_sub_branches - 1)
        for i in range(num_sub_branches):
            branch_angle = angle * i - total_angle / 2 + random.randint(-10, 10)
            pen.left(branch_angle)
            draw_tree(new_length, min_length, angle)
            pen.right(branch_angle)
        pen.backward(branch_length)

# Set initial position
pen.penup()
pen.goto(0, -200)
pen.pendown()
pen.setheading(90)  # Point upwards
pen.color(random.choice(branch_colors))

# Draw the tree
draw_tree(100, 10, random.randint(20, 40))

# Keep the window open
screen.mainloop()
Look at the result! QwQ (best of 5 generations)
https://preview.redd.it/4j0sa9ibxo3e1.png?width=1293&format=png&auto=webp&s=a5bbea45feaa8c7301929ee604d9e7d8ef1cc5b9
qwen coder 32b instruct q4km (best of 5 generations)
https://preview.redd.it/qiluh95nxo3e1.png?width=967&format=png&auto=webp&s=06493c82fabc5673e5d1fa51ed8aad356ccde1ad
Seems much better at coding than Qwen 32B! ... wtf | 2024-11-28T19:08:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h23bg5/qwq_coding_i_am_terrified_how_good_is/ | Healthy-Nebula-3603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h23bg5 | false | null | t3_1h23bg5 | /r/LocalLLaMA/comments/1h23bg5/qwq_coding_i_am_terrified_how_good_is/ | false | false | 192 | null |
|
Funniest joke according to QwQ after thinking for 1000 tokens: "Why don't scientists trust atoms? Because they make up everything." | 170 | Prompt:
https://preview.redd.it/nfgj6rnc1p3e1.png?width=1026&format=png&auto=webp&s=0a0866499b07acc1862f7ab3fa87ab3bbe0d36aa
https://preview.redd.it/m3qk99dk1p3e1.png?width=992&format=png&auto=webp&s=296aa9e435f502abdf90425f31d64d91a9a20388
Full output: [https://pastebin.com/XXpj7JKj](https://pastebin.com/XXpj7JKj) | 2024-11-28T19:27:15 | https://www.reddit.com/r/LocalLLaMA/comments/1h23pjc/funniest_joke_according_to_qwq_after_thinking_for/ | cpldcpu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h23pjc | false | null | t3_1h23pjc | /r/LocalLLaMA/comments/1h23pjc/funniest_joke_according_to_qwq_after_thinking_for/ | false | false | 170 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]} |
|
I was told deepseek is good at math and coding - so I threw the following problem | 1 | [removed] | 2024-11-28T19:31:49 | https://www.reddit.com/r/LocalLLaMA/comments/1h23szs/i_was_told_deepseek_is_good_at_math_and_coding_so/ | rahul_msft | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h23szs | false | null | t3_1h23szs | /r/LocalLLaMA/comments/1h23szs/i_was_told_deepseek_is_good_at_math_and_coding_so/ | false | false | 1 | null |
|
Perplexity AI PRO - 1 YEAR PLAN OFFER - 75% OFF | 1 | [removed] | 2024-11-28T19:59:28 | MReus11R | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h24dfn | false | null | t3_1h24dfn | /r/LocalLLaMA/comments/1h24dfn/perplexity_ai_pro_1_year_plan_offer_75_off/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'XRY_r6OzFp645YHnBkMWOSaTrnX4RwZFIvWsAhATl0c', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/o3rr4vab7p3e1.jpeg?width=108&crop=smart&auto=webp&s=3173595624cc6cc177d231f7a1f3978f47c89fa1', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/o3rr4vab7p3e1.jpeg?width=216&crop=smart&auto=webp&s=bad9037f94ed64b28aa7f48228b22166e9dfe8f6', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/o3rr4vab7p3e1.jpeg?width=320&crop=smart&auto=webp&s=6c040e5065ef9a86acaa954648f2d7c335c97c05', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/o3rr4vab7p3e1.jpeg?width=640&crop=smart&auto=webp&s=02e763cd5af36eee4e92935be672ee77879cafdb', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/o3rr4vab7p3e1.jpeg?width=960&crop=smart&auto=webp&s=be7ec42cf4968fed6830096a1c40a9af2563f0f2', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/o3rr4vab7p3e1.jpeg?width=1080&crop=smart&auto=webp&s=abab4e487b6ffffab30796437fcd7222edc3b03d', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://preview.redd.it/o3rr4vab7p3e1.jpeg?auto=webp&s=f4370732e87b5cdd9bf34952c845374d9774ee6c', 'width': 2000}, 'variants': {}}]} |
||
QwQ-32B-Preview, the experimental reasoning model from the Qwen team is now available on HuggingChat unquantized for free! | 473 | 2024-11-28T20:09:35 | https://huggingface.co/chat/models/Qwen/QwQ-32B-Preview | SensitiveCranberry | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h24lax | false | null | t3_1h24lax | /r/LocalLLaMA/comments/1h24lax/qwq32bpreview_the_experimental_reasoning_model/ | false | false | 473 | {'enabled': False, 'images': [{'id': 'kf_we4OdqdVm6kFM7eKRnewZSFaKTou37n2rMV5D0hc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-mO3K8VWzHyuuBSt-xbCM2UjbXHzZ3-odKfb37fL578.jpg?width=108&crop=smart&auto=webp&s=ab22793b06367fc24ad933d3a79b8d4cfff66180', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-mO3K8VWzHyuuBSt-xbCM2UjbXHzZ3-odKfb37fL578.jpg?width=216&crop=smart&auto=webp&s=ca13c8b600d092292e87773ac6fbead7be2b6ca9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-mO3K8VWzHyuuBSt-xbCM2UjbXHzZ3-odKfb37fL578.jpg?width=320&crop=smart&auto=webp&s=980d96498e3312ee995c4ea774557ebd22caa990', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-mO3K8VWzHyuuBSt-xbCM2UjbXHzZ3-odKfb37fL578.jpg?width=640&crop=smart&auto=webp&s=c1e8af9f3bdb79c215210ce6395f47ecb42b48e9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-mO3K8VWzHyuuBSt-xbCM2UjbXHzZ3-odKfb37fL578.jpg?width=960&crop=smart&auto=webp&s=6fa343e39ba8d1eec376c10624ba36f1d5e18339', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-mO3K8VWzHyuuBSt-xbCM2UjbXHzZ3-odKfb37fL578.jpg?width=1080&crop=smart&auto=webp&s=df1fe34c380f860eeccc641c3ed48fb6fb1a9ae7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-mO3K8VWzHyuuBSt-xbCM2UjbXHzZ3-odKfb37fL578.jpg?auto=webp&s=7f59e903620506e41a962f1063fe357e64c8548e', 'width': 1200}, 'variants': {}}]} |
||
M1 Max 64GB vs AWS g4dn.12xlarge with 4x Tesla T4 side by side ollama speed | 10 | 2024-11-28T20:17:00 | https://v.redd.it/wh26rvhz9p3e1 | 330d | /r/LocalLLaMA/comments/1h24qxp/m1_max_64gb_vs_aws_g4dn12xlarge_with_4x_tesla_t4/ | 1970-01-01T00:00:00 | 0 | {} | 1h24qxp | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wh26rvhz9p3e1/DASHPlaylist.mpd?a=1735546630%2CODkyOWVmM2E5MmE2OGZlYjdlNGJmY2I3MjVhY2Q2OGIyZmRhZDI2NDVhYzdiYTlmNjU1ODhhMGQyOTQwOTBjMQ%3D%3D&v=1&f=sd', 'duration': 559, 'fallback_url': 'https://v.redd.it/wh26rvhz9p3e1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/wh26rvhz9p3e1/HLSPlaylist.m3u8?a=1735546630%2CMDY0YjY5ZjBjODc1MDYwMWE0OGUzZWU5ODMwYjcxNjBhY2FlOTUxZmVkNTFiNWNkM2FhMjllNmM5Y2U0OTNiZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wh26rvhz9p3e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1h24qxp | /r/LocalLLaMA/comments/1h24qxp/m1_max_64gb_vs_aws_g4dn12xlarge_with_4x_tesla_t4/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'ZDNxdW52MWdhcDNlMdDKTi9QlTv9JCQI7QKpmyz-weZhmN2cbmnzrzMwpKWI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZDNxdW52MWdhcDNlMdDKTi9QlTv9JCQI7QKpmyz-weZhmN2cbmnzrzMwpKWI.png?width=108&crop=smart&format=pjpg&auto=webp&s=dd54563ae92145be80f3a91b39e2c8af6a4ed728', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZDNxdW52MWdhcDNlMdDKTi9QlTv9JCQI7QKpmyz-weZhmN2cbmnzrzMwpKWI.png?width=216&crop=smart&format=pjpg&auto=webp&s=87490f174909d4c7482a6630ee0758b7a8bdfa9b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZDNxdW52MWdhcDNlMdDKTi9QlTv9JCQI7QKpmyz-weZhmN2cbmnzrzMwpKWI.png?width=320&crop=smart&format=pjpg&auto=webp&s=fe3bfae07f396efecfe267f7398768a43bd79fdc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZDNxdW52MWdhcDNlMdDKTi9QlTv9JCQI7QKpmyz-weZhmN2cbmnzrzMwpKWI.png?width=640&crop=smart&format=pjpg&auto=webp&s=6a4b0e7ed50eb7f6e3145df136cfd4199364f324', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZDNxdW52MWdhcDNlMdDKTi9QlTv9JCQI7QKpmyz-weZhmN2cbmnzrzMwpKWI.png?width=960&crop=smart&format=pjpg&auto=webp&s=bd51e2305c1d867495755583e46ad7d4ebf24455', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZDNxdW52MWdhcDNlMdDKTi9QlTv9JCQI7QKpmyz-weZhmN2cbmnzrzMwpKWI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e0017d496ebe3fbd431b6bb37a87c3ff36615957', 'width': 1080}], 'source': {'height': 2304, 'url': 'https://external-preview.redd.it/ZDNxdW52MWdhcDNlMdDKTi9QlTv9JCQI7QKpmyz-weZhmN2cbmnzrzMwpKWI.png?format=pjpg&auto=webp&s=934069bc2cdcbc1b51dabad8cee16b84adb776fa', 'width': 4096}, 'variants': {}}]} |
||
Just a thought | 0 | If open-source models keep getting better at this pace, we will soon have a second brain, and we ourselves will become dumber | 2024-11-28T20:44:16 | https://www.reddit.com/r/LocalLLaMA/comments/1h25alf/just_a_thought/ | TheLogiqueViper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h25alf | false | null | t3_1h25alf | /r/LocalLLaMA/comments/1h25alf/just_a_thought/ | false | false | self | 0 | null |
The technical component of creating a custom AI-agent | 1 | [removed] | 2024-11-28T20:49:00 | https://www.reddit.com/r/LocalLLaMA/comments/1h25e2c/the_technical_component_of_creating_a_custom/ | just-beginner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h25e2c | false | null | t3_1h25e2c | /r/LocalLLaMA/comments/1h25e2c/the_technical_component_of_creating_a_custom/ | false | false | self | 1 | null |
Is it possible to use Coral TPU as a secondary accelerator together with GPU as main processor? | 1 | [removed] | 2024-11-28T20:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/1h25f76/is_it_possible_to_use_coral_tpu_as_a_secondary/ | Loud-Economics-9795 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h25f76 | false | null | t3_1h25f76 | /r/LocalLLaMA/comments/1h25f76/is_it_possible_to_use_coral_tpu_as_a_secondary/ | false | false | self | 1 | null |
Vulkan - why isn't it everywhere | 1 | [removed] | 2024-11-28T21:02:13 | https://www.reddit.com/r/LocalLLaMA/comments/1h25nxe/vulkan_why_isnt_it_everywhere/ | dp3471 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h25nxe | false | null | t3_1h25nxe | /r/LocalLLaMA/comments/1h25nxe/vulkan_why_isnt_it_everywhere/ | false | false | self | 1 | null |
Usage of Coral dual TPU as a secondary accelerator | 1 | [removed] | 2024-11-28T21:10:16 | https://www.reddit.com/r/LocalLLaMA/comments/1h25tvz/usage_of_coral_dual_tpu_as_a_secondary_accelerator/ | Fer1us | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h25tvz | false | null | t3_1h25tvz | /r/LocalLLaMA/comments/1h25tvz/usage_of_coral_dual_tpu_as_a_secondary_accelerator/ | false | false | self | 1 | null |
RoPE has precision errors when used with BFloat16 | 31 | This recent paper points out a major issue with RoPE and long contexts: [**When Precision Meets Position: BFloat16 Breaks Down RoPE in Long-Context Training**](https://arxiv.org/pdf/2411.13476)
>Despite the computational advantages of BFloat16, we have identified a critical issue: when combined with BFloat16, the relative positional encoding properties of RoPE are broken, especially in long-context scenarios. As shown in Figure 1, this breakdown occurs because of BFloat16’s limited precision. As the training window size increases, numerical errors accumulate, exacerbating the issue and resulting in a more substantial discrepancy. In contrast, this degradation disappears when using Float32, which maintains the integrity of RoPE’s relative positional encoding. Our empirical observations confirm that this breakdown diminishes the benefits RoPE offers for long-context training.
They've got a proposed way to address the problem, of course, but I figured that people around here would be interested in knowing that the problem exists in the first place.
It probably explains some of the problems training at longer sequence lengths and maybe some of the instability after 8K or so...
>Restarting position IDs enhances model performance but introduces a significant drawback: the model can only learn the full spectrum of rotational angles when processing sequences that reach or exceed the context length. This limitation hinders the model’s ability to generalize to longer context length scenarios because, as we increase the context window size, collecting sufficient long sequences to fill the entire context window becomes impractical due to the scarcity of such lengthy data.
TL;DR:
>In summary, the main contributions of this paper are as follows:
>• We found that the relative properties of RoPE are compromised under BFloat16 precision.
>• We identified that the first token of a sequence contributes to the deviation of RoPE’s relative properties, which should be preserved in theory. Moreover, this deviation becomes more pronounced with larger training window sizes.
>• Based on these observations, we introduce a practical approach, AnchorAttention, for long-context continuous training, which improves the model’s ability to handle long contexts, utilizes less than 50% of the training time required by standard attention training, and requires minimal modifications to existing training pipelines. | 2024-11-28T21:40:58 | https://www.reddit.com/r/LocalLLaMA/comments/1h26gbp/rope_has_precision_errors_when_used_with_bfloat16/ | AutomataManifold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h26gbp | false | null | t3_1h26gbp | /r/LocalLLaMA/comments/1h26gbp/rope_has_precision_errors_when_used_with_bfloat16/ | false | false | self | 31 | null |
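A quick way to see the effect described above for yourself (this is my own illustration, not code from the paper): the sketch below applies standard RoPE to the same query/key pair at the same relative offset but at different absolute positions, once with float32 cos/sin tables and once cast to bfloat16, and prints how far the attention score drifts. The head_dim, base, and positions are arbitrary example values, and it assumes PyTorch is installed.

# Illustration only (not from the paper): RoPE's score for a fixed relative
# distance should not depend on the absolute position. Casting cos/sin to
# bfloat16 breaks that property, and the error grows with position.
import torch

def rope_rotate(x, pos, dtype, head_dim=128, base=10000.0):
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
    angles = pos * inv_freq
    cos, sin = angles.cos().to(dtype), angles.sin().to(dtype)  # precision is lost here under bf16
    x = x.to(dtype)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out.float()

torch.manual_seed(0)
q, k = torch.randn(128), torch.randn(128)
for dtype in (torch.float32, torch.bfloat16):
    drift = []
    for m in (0, 2048, 16384, 65536):
        # Same relative distance (64) at different absolute positions m .. m+64.
        near = rope_rotate(q, torch.tensor(64.0), dtype) @ rope_rotate(k, torch.tensor(0.0), dtype)
        far = rope_rotate(q, torch.tensor(float(m + 64)), dtype) @ rope_rotate(k, torch.tensor(float(m)), dtype)
        drift.append(abs((near - far).item()))
    print(dtype, ["%.4f" % d for d in drift])

In float32 the drift stays near zero at every position, while the bfloat16 run wanders further as the absolute position grows, which is the breakdown the paper is describing.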
How do you know how big the models are in RAM or Disk? | 0 | This might be a stupid question, but I'm quite interested in model size in terms of RAM/disk space. Is there an easy way to find this out (didn't spot it on HF or NVIDIA)? | 2024-11-28T22:34:34 | https://www.reddit.com/r/LocalLLaMA/comments/1h27i4i/how_do_you_know_how_big_the_models_are_in_ram_or/ | tdaawg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h27i4i | false | null | t3_1h27i4i | /r/LocalLLaMA/comments/1h27i4i/how_do_you_know_how_big_the_models_are_in_ram_or/ | false | false | self | 0 | null |
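A back-of-the-envelope rule answers most of this without any tool (my own rough math, not an official formula): file size and memory footprint are roughly parameter count times bytes per parameter, plus some overhead for the KV cache and runtime buffers. In Python:

# Rough estimate: size ≈ params * bits_per_weight / 8.
# Typical values: ~16 bits/weight for FP16, roughly 4.5 bits/weight for Q4_K_M GGUFs.
def approx_size_gb(params_billion, bits_per_weight):
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params, bits in [(8, 16), (8, 4.5), (70, 16), (70, 4.5)]:
    print(f"{params}B @ {bits} bits/weight ≈ {approx_size_gb(params, bits):.1f} GB")

For an exact on-disk number, the per-file sizes listed under "Files and versions" on a Hugging Face model page add up to the download size; at load time you additionally need room for the KV cache and activations.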
New architecture scaling | 5 | The new Alibaba QwQ 32B is exceptional for its size and is pretty much SOTA in terms of benchmarks. We had DeepSeek R1 Lite a few days ago, which should be around 15B parameters if it's like the last DeepSeek Lite. It got me thinking about what would happen if we had this architecture with the next generation of scaled-up base models (GPT-5). After all the efficiency gains we've had since GPT-4's release (Yi-Lightning was around GPT-4 level and its training only cost 3 million USD), it makes me wonder what will happen in the next few months along with the new inference scaling laws and test-time training. What are your thoughts? | 2024-11-28T22:48:19 | https://www.reddit.com/r/LocalLLaMA/comments/1h27rl3/new_architecture_scaling/ | user0069420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h27rl3 | false | null | t3_1h27rl3 | /r/LocalLLaMA/comments/1h27rl3/new_architecture_scaling/ | false | false | self | 5 | null |
Memoripy: AI Memory Made Smarter – Now with OpenRouter Support and 400+ Stars | 36 | Hey r/LocalLLaMA!
I’ve been working on [Memoripy](https://github.com/caspianmoon/memoripy), a Python library that brings real memory capabilities to AI applications. Whether you’re building conversational AI, virtual assistants, or projects that need consistent, context-aware responses, Memoripy offers structured short-term and long-term memory storage to keep interactions meaningful over time.
Memoripy organizes interactions into short-term and long-term memory, prioritizing recent events while preserving important details for future use. This ensures the AI maintains relevant context without being overwhelmed by unnecessary data.
With semantic clustering, similar memories are grouped together, allowing the AI to retrieve relevant context quickly and efficiently. To mimic how we forget and reinforce information, Memoripy features memory decay and reinforcement, where less useful memories fade while frequently accessed ones stay sharp.
One of the key aspects of Memoripy is its focus on local storage. It’s designed to work seamlessly with locally hosted LLMs, making it a great fit for privacy-conscious developers who want to avoid external API calls. Memoripy also integrates with OpenAI and Ollama.
**What’s New?**
Thanks to contributions from **FrancescoCaracciolo** and **sjwang05**, Memoripy now includes:
* **Support for Arbitrary Chat Completion Endpoints**: Use any endpoint that works best for your setup.
* **OpenRouter Integration**: Expanded support for more flexible workflows.
* **Bug Fixes**: A smoother, more reliable experience based on community feedback.
**A Huge Thank You**
Memoripy just hit **400+ stars on GitHub**, and I couldn’t have done it without your support! Your feedback and contributions have been invaluable in making this library what it is today.
If this sounds like something you could use, check it out on [GitHub](https://github.com/caspianmoon/memoripy)! It’s open-source, and I’d love to hear what you think, how you’d use it, or what features you’d like to see next. Let me know what you want to see next! | 2024-11-29T00:01:42 | https://www.reddit.com/r/LocalLLaMA/comments/1h2941u/memoripy_ai_memory_made_smarter_now_with/ | xazarall | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2941u | false | null | t3_1h2941u | /r/LocalLLaMA/comments/1h2941u/memoripy_ai_memory_made_smarter_now_with/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': '1wbt_hLLwsp3x86sLEj6-9VD8XNMnyfnPjtYYbH-duY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2QMN6qoJs9WX48egTTI9_laq1MF3ixV410r256LMkjA.jpg?width=108&crop=smart&auto=webp&s=2a00c6d2f2c70d1e364570224c6725c0d1395cbc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2QMN6qoJs9WX48egTTI9_laq1MF3ixV410r256LMkjA.jpg?width=216&crop=smart&auto=webp&s=6ff23597e0be0011903301e8a90ccad830a0768a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2QMN6qoJs9WX48egTTI9_laq1MF3ixV410r256LMkjA.jpg?width=320&crop=smart&auto=webp&s=d4d9ce9cef012e5ec6aa458b680ddcf86b4a6dfb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2QMN6qoJs9WX48egTTI9_laq1MF3ixV410r256LMkjA.jpg?width=640&crop=smart&auto=webp&s=a44667a7ea409f1b2839ed46012711ee8265df99', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2QMN6qoJs9WX48egTTI9_laq1MF3ixV410r256LMkjA.jpg?width=960&crop=smart&auto=webp&s=f590fbee986f7e85c419610260a48216fe638274', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2QMN6qoJs9WX48egTTI9_laq1MF3ixV410r256LMkjA.jpg?width=1080&crop=smart&auto=webp&s=5ee06942641250cd4bfa9577105d745e14454b52', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2QMN6qoJs9WX48egTTI9_laq1MF3ixV410r256LMkjA.jpg?auto=webp&s=aaad274fb5e9747ff8428851816d028be5dcf842', 'width': 1200}, 'variants': {}}]} |
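To make the decay/reinforcement idea above concrete, here is a tiny standalone sketch of that mechanism. This is not Memoripy's actual API, just an illustration of exponential decay with reinforcement on access; the class name, half-life, and example memories are all made up.

# Illustration of memory decay + reinforcement (not Memoripy's real API):
# each memory's score decays over time and gets bumped whenever it is accessed.
import math, time

class MemoryItem:
    def __init__(self, text):
        self.text = text
        self.strength = 1.0
        self.last_access = time.time()

    def score(self, half_life_s=3600.0):
        # Exponential decay since the last access.
        age = time.time() - self.last_access
        return self.strength * math.exp(-age * math.log(2) / half_life_s)

    def reinforce(self):
        # Accessing a memory makes it stick around longer.
        self.strength += 1.0
        self.last_access = time.time()

memories = [MemoryItem("user prefers concise answers"), MemoryItem("user's cat is named Miso")]
memories[0].reinforce()  # a frequently used memory stays sharp
ranked = sorted(memories, key=lambda m: m.score(), reverse=True)
print([m.text for m in ranked])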
Calculating GPT-2's Inference Speedups | 1 | 2024-11-29T00:26:30 | https://njkumar.com/calculating-gpt2s-inference-speedups/ | fendiwap1234 | njkumar.com | 1970-01-01T00:00:00 | 0 | {} | 1h29jyl | false | null | t3_1h29jyl | /r/LocalLLaMA/comments/1h29jyl/calculating_gpt2s_inference_speedups/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bWXVvwa8flCrmkYJvLXfE5G12bSSSTbkElYwWaDiCi0', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?width=108&crop=smart&auto=webp&s=2cd1045517eda93c2aaafc19130bea85c7466318', 'width': 108}], 'source': {'height': 120, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?auto=webp&s=6d730f0aadb2da7eefca105ee16d8e99ecfca4a6', 'width': 124}, 'variants': {}}]} |
||
Calculating GPT-2’s Inference Speedups | 7 | 2024-11-29T00:41:05 | https://njkumar.com/calculating-gpt2s-inference-speedups/ | fendiwap1234 | njkumar.com | 1970-01-01T00:00:00 | 0 | {} | 1h29sw7 | false | null | t3_1h29sw7 | /r/LocalLLaMA/comments/1h29sw7/calculating_gpt2s_inference_speedups/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'bWXVvwa8flCrmkYJvLXfE5G12bSSSTbkElYwWaDiCi0', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?width=108&crop=smart&auto=webp&s=2cd1045517eda93c2aaafc19130bea85c7466318', 'width': 108}], 'source': {'height': 120, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?auto=webp&s=6d730f0aadb2da7eefca105ee16d8e99ecfca4a6', 'width': 124}, 'variants': {}}]} |
||
MyOllama: A Free, Open-Source Mobile Client for Ollama LLMs (iOS/Android) | 0 | Hey everyone! 👋
I wanted to share MyOllama, an open-source mobile client I've been working on that lets you interact with Ollama-based LLMs on your mobile devices. If you're into LLM development or research, this might be right up your alley.
**What makes it cool:**
* Completely free and open-source
* No cloud BS - runs entirely on your local machine
* Built with Flutter (iOS & Android support)
* Works with various LLM models (Llama, Gemma, Qwen, Mistral)
* Image recognition support
* Markdown support
* Available in English, Korean, and Japanese
**Technical stuff you might care about:**
* Remote LLM access via IP config
* Custom prompt engineering
* Persistent conversation management
* Privacy-focused architecture
* No subscription fees (ever!)
* Easy API integration with Ollama backend
**Where to get it:**
* GitHub: https://github.com/bipark/my_ollama_app
* App Store: https://apps.apple.com/us/app/my-ollama/id6738298481
The whole thing is released under GNU license, so feel free to fork it and make it your own!
Let me know if you have any questions or feedback. Would love to hear your thoughts! 🚀
Edit: Thanks for all the feedback, everyone! Really appreciate the support!
| 2024-11-29T01:38:44 | https://www.reddit.com/r/LocalLLaMA/comments/1h2asn9/myollama_a_free_opensource_mobile_client_for/ | billythepark | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2asn9 | false | null | t3_1h2asn9 | /r/LocalLLaMA/comments/1h2asn9/myollama_a_free_opensource_mobile_client_for/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'rPdLQouZMFXkNodsIHClpC1YdRaWpeVP9vjyzKtaj5E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wkhje95SgmH1gdXv2_ki9QCL_fJA1NE2VBorKX_1ivw.jpg?width=108&crop=smart&auto=webp&s=edb270e52f236f126eb3f816bd24f1112d2dc7e4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wkhje95SgmH1gdXv2_ki9QCL_fJA1NE2VBorKX_1ivw.jpg?width=216&crop=smart&auto=webp&s=97a3b87f342e553f7d894a404795a07e74bb2db3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wkhje95SgmH1gdXv2_ki9QCL_fJA1NE2VBorKX_1ivw.jpg?width=320&crop=smart&auto=webp&s=24004f915cd9b8e070adbf7ac34e8064b05b4397', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wkhje95SgmH1gdXv2_ki9QCL_fJA1NE2VBorKX_1ivw.jpg?width=640&crop=smart&auto=webp&s=cbf73dd48f31fde3f58d8e46dfaa3cc421e5ed0c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wkhje95SgmH1gdXv2_ki9QCL_fJA1NE2VBorKX_1ivw.jpg?width=960&crop=smart&auto=webp&s=215fcc4b4912619e5a5f8081dd06d1585dfa5ec9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wkhje95SgmH1gdXv2_ki9QCL_fJA1NE2VBorKX_1ivw.jpg?width=1080&crop=smart&auto=webp&s=1ce34d4d3f4e9ac3d5f342aaae2871f925f4fc08', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wkhje95SgmH1gdXv2_ki9QCL_fJA1NE2VBorKX_1ivw.jpg?auto=webp&s=5bc14203bbeda53848a3d8963f12f49b15ee67fd', 'width': 1200}, 'variants': {}}]} |
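For anyone wondering what "remote LLM access via IP config" boils down to: a client like this just talks to Ollama's HTTP API on whatever host you point it at. A minimal sketch of that kind of call is below; the host address and model name are placeholders for your own setup, and it assumes the standard Ollama /api/chat endpoint with the requests library installed.

# Minimal example of the kind of request a mobile client sends to a remote Ollama server.
import requests

OLLAMA_HOST = "http://192.168.1.50:11434"  # the "IP config" part: your Ollama machine
payload = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Summarize today's notes."}],
    "stream": False,
}
resp = requests.post(f"{OLLAMA_HOST}/api/chat", json=payload, timeout=120)
print(resp.json()["message"]["content"])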
3 new 8B Roleplay / Creative models, L 3.1 // Doc to get maximum performance from all models (any repo/any model). | 34 | Hey there from DavidAU:
Three new Roleplay / Creative models @ 8B, Llama 3.1. All are uncensored. These models are primarily RP models, based on top RP models. Example generations are at each repo. Dirty Harry has the shortest output, InBetween is medium, and BigTalker has the longest output (on average).
Note that each model's output will also vary too - prose, detail, sentence etc. (see examples at each repo).
Models can also be used for any creative use / genre too.
Repo includes extensive parameter, sampler and advanced sampler docs (30+ pages) which can be used for these models and/or any model/repo.
This doc covers quants, manual/automatic generation control, all samplers and parameters and a lot more. Separate doc link below, doc link is also on all model repo pages at my repo.
Models (ordered by average output length):
[https://huggingface.co/DavidAU/L3.1-RP-Hero-Dirty\_Harry-8B-GGUF](https://huggingface.co/DavidAU/L3.1-RP-Hero-Dirty_Harry-8B-GGUF)
[https://huggingface.co/DavidAU/L3.1-RP-Hero-InBetween-8B-GGUF](https://huggingface.co/DavidAU/L3.1-RP-Hero-InBetween-8B-GGUF)
[https://huggingface.co/DavidAU/L3.1-RP-Hero-BigTalker-8B-GGUF](https://huggingface.co/DavidAU/L3.1-RP-Hero-BigTalker-8B-GGUF)
Doc Link - For all models, all repos:
[https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers\_Parameters](https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters) | 2024-11-29T01:45:17 | https://www.reddit.com/r/LocalLLaMA/comments/1h2awnz/3_new_8b_roleplay_creative_models_l_31_doc_to_get/ | Dangerous_Fix_5526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2awnz | false | null | t3_1h2awnz | /r/LocalLLaMA/comments/1h2awnz/3_new_8b_roleplay_creative_models_l_31_doc_to_get/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'jjZpyZQb1XTop9bpTwKmdi5f9xIXAVrEUVTM-Xv0orA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Wna8jM_4dxv1SmQKhWyfBW_HwRhUu5WWZf0LQxvaGw4.jpg?width=108&crop=smart&auto=webp&s=a8162962296b637195b02a6890ab29f9a95e87ed', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Wna8jM_4dxv1SmQKhWyfBW_HwRhUu5WWZf0LQxvaGw4.jpg?width=216&crop=smart&auto=webp&s=d35aa8c013fc555f67931e1a3db66d1021df988b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Wna8jM_4dxv1SmQKhWyfBW_HwRhUu5WWZf0LQxvaGw4.jpg?width=320&crop=smart&auto=webp&s=e3a466e282c2b849fb6fe380ce80e9b027e941f2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Wna8jM_4dxv1SmQKhWyfBW_HwRhUu5WWZf0LQxvaGw4.jpg?width=640&crop=smart&auto=webp&s=7c570f1f59ec918eec22235c94374eef6c0cbc59', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Wna8jM_4dxv1SmQKhWyfBW_HwRhUu5WWZf0LQxvaGw4.jpg?width=960&crop=smart&auto=webp&s=0b38b4276c9785ef1aaa8fab1e70fa47994d3e03', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Wna8jM_4dxv1SmQKhWyfBW_HwRhUu5WWZf0LQxvaGw4.jpg?width=1080&crop=smart&auto=webp&s=da90738b940e2c87192a7197ab657ff1fb31d7bc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Wna8jM_4dxv1SmQKhWyfBW_HwRhUu5WWZf0LQxvaGw4.jpg?auto=webp&s=f547756320841a1296603d9315a055f2fca5cc1e', 'width': 1200}, 'variants': {}}]} |
AI Clone your friends from iMessage conversation | 0 | Hey everyone! I spent the last month working on a Llama chatbot client that lets you clone your friends from your iMessage texts on Mac! Would really appreciate any feedback if you decide to try it. Because we are in beta only the **first 100 users** will be free: [intongues.com](http://intongues.com)
Please let me know if anyone has questions! | 2024-11-29T01:51:26 | https://www.reddit.com/r/LocalLLaMA/comments/1h2b0ii/ai_clone_your_friends_from_imessage_conversation/ | Apprehensive_Sun907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2b0ii | false | null | t3_1h2b0ii | /r/LocalLLaMA/comments/1h2b0ii/ai_clone_your_friends_from_imessage_conversation/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'aikuqEYOQrDZLtKUz09H89PXuxt1Zp1o-j3iWqlTsV4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Xfx7yaNBJmOCMEq00YfjZBD8UEy0--8nL1MhR9-lxBE.jpg?width=108&crop=smart&auto=webp&s=b99680814eb4ed7f0e5fd383e1afa49ffff7a236', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Xfx7yaNBJmOCMEq00YfjZBD8UEy0--8nL1MhR9-lxBE.jpg?width=216&crop=smart&auto=webp&s=027818084d0cf0829554f7775f5c187acea72ed6', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/Xfx7yaNBJmOCMEq00YfjZBD8UEy0--8nL1MhR9-lxBE.jpg?width=320&crop=smart&auto=webp&s=179a992c9bc597700dd2530d53e6f4bda6aa8b36', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/Xfx7yaNBJmOCMEq00YfjZBD8UEy0--8nL1MhR9-lxBE.jpg?width=640&crop=smart&auto=webp&s=7746734627dab9c1e912fbfa2743b853ca77541c', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/Xfx7yaNBJmOCMEq00YfjZBD8UEy0--8nL1MhR9-lxBE.jpg?width=960&crop=smart&auto=webp&s=1d3148a607c75d584e80f27c75ac8edb620a31aa', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/Xfx7yaNBJmOCMEq00YfjZBD8UEy0--8nL1MhR9-lxBE.jpg?width=1080&crop=smart&auto=webp&s=b37dfc1450d6e60688f03cc0403d732c1cbacc23', 'width': 1080}], 'source': {'height': 1256, 'url': 'https://external-preview.redd.it/Xfx7yaNBJmOCMEq00YfjZBD8UEy0--8nL1MhR9-lxBE.jpg?auto=webp&s=d4f3d300aa5be8eee115fb17de8c4fb3f4722b58', 'width': 2400}, 'variants': {}}]} |
How to train Llama on retrieving information from documents? | 5 | I have over 1B pages spread in over 10k documents (docx). What I want is something like:
Set some parameters (I have issue X with variant Y) and get an action plan based on the input. So far I've only seen the approach where you fine-tune by writing a whole lot of questions for each document and feeding Llama with that, but doing that by hand is not humanly feasible. Is there an alternative approach?
Also, those documents have the author's name on them, and I would like to cite those authors in the answer. | 2024-11-29T02:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/1h2bcwq/how_to_train_llama_on_retrieving_information_from/ | grebysama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2bcwq | false | null | t3_1h2bcwq | /r/LocalLLaMA/comments/1h2bcwq/how_to_train_llama_on_retrieving_information_from/ | false | false | self | 5 | null |
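The usual alternative to fine-tuning for this kind of "ask questions about my documents" setup is retrieval (RAG): chunk the documents, embed each chunk with the author attached as metadata, retrieve the top matches for a query, and feed them to Llama with an instruction to cite the listed authors. A minimal sketch under those assumptions follows; it presumes sentence-transformers and python-docx are installed, and the embedding model, paths, and chunk size are illustrative choices, not a recommendation.

# Sketch of retrieval with author metadata (illustrative, not a full pipeline).
import glob
import numpy as np
from docx import Document
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunks = []  # (text, author) pairs
for path in glob.glob("docs/*.docx"):
    doc = Document(path)
    author = doc.core_properties.author or path
    text = "\n".join(p.text for p in doc.paragraphs)
    for i in range(0, len(text), 1500):  # naive fixed-size chunking
        chunks.append((text[i:i + 1500], author))

emb = embedder.encode([c[0] for c in chunks], normalize_embeddings=True)

def retrieve(query, k=5):
    q = embedder.encode([query], normalize_embeddings=True)[0]
    idx = np.argsort(emb @ q)[::-1][:k]  # cosine similarity via dot product
    return [chunks[i] for i in idx]

context = retrieve("issue X with variant Y")
prompt = "Answer using only the context below and cite each author.\n" + \
         "\n".join(f"[{a}] {t}" for t, a in context)
# `prompt` then goes to whichever local Llama model you are running.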
NEW! Leaked System prompts from v0 - Vercels AI component generator. New project structure and XXL long System prompt (+-14000Tokens) (100% legit) | 157 | Hey LLAMA's Gang! It's me again with some more system prompt leaks from v0's component generating tool.
If you are familiar with v0, you will know there have been some awesome new updates lately.
Since the last leak I released they have updated v0 to have the following capabilities.
# Key Updates:
1. **Full-Stack Application Support** (11/21/24):
* Ability to create and run full-stack Next.js and React apps.
* Generate multiple files at once.
* Deploy and link to Vercel projects, including using Vercel environment variables.
* Features include dynamic routes, RSCs, route handlers, and server actions.
* Deploy Blocks to Vercel with custom subdomains.
2. **Environment Variables**:
* Secure connections to databases, APIs, and external services are now supported.
3. **UI Generation Enhancements** (11/23/24):
* Select specific sections of a UI generation for targeted edits.
4. **Improved Code Completeness** (11/23/24):
* v0 now ensures it doesn't omit code in generations.
5. **Version Management for Blocks** (11/25/24):
* Easily switch between or revert to older Block versions.
6. **Console Output View** (11/26/24):
* A new Console tab allows viewing logs and outputs directly in v0.
7. **404 Page Enhancements** (11/26/24):
* Displays possible routes when a 404 page is encountered.
8. **Unread Log Notifications** (11/27/24):
* Notifications for unread logs or errors in the Console.
This new system prompt is super long, up to 14000 tokens. Crazy stuff! You can actually see all the new system prompts for updated capabilities listed above.
Please note I am not 100% sure that the order of the prompt is correct or that it is 100% complete, as it was so long and quite difficult to get the full thing and piece it together.
I have verified most of this by extracting the system prompts through multiple different methods and reaching the same answers each time.
.............
Hope this helps those of you trying to stay at the forefront of AI component generation!
[https://github.com/2-fly-4-ai/V0-system-prompt/blob/main/v0-system-prompt(updated%2029-11-2024)](https://github.com/2-fly-4-ai/V0-system-prompt/blob/main/v0-system-prompt(updated%2029-11-2024))
| 2024-11-29T02:12:36 | https://www.reddit.com/r/LocalLLaMA/comments/1h2bdqy/new_leaked_system_prompts_from_v0_vercels_ai/ | Odd-Environment-7193 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2bdqy | false | null | t3_1h2bdqy | /r/LocalLLaMA/comments/1h2bdqy/new_leaked_system_prompts_from_v0_vercels_ai/ | false | false | self | 157 | null |
AI is starting to get really good (Kling AI) AI Generated K-drama | 1 | [removed] | 2024-11-29T02:51:43 | https://www.reddit.com/r/LocalLLaMA/comments/1h2c1wr/ai_is_starting_to_get_really_good_kling_ai_ai/ | JobConnect4951 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2c1wr | false | null | t3_1h2c1wr | /r/LocalLLaMA/comments/1h2c1wr/ai_is_starting_to_get_really_good_kling_ai_ai/ | false | false | self | 1 | null |
Qwen2-audio vs Gemini for speech analysis? | 1 | [removed] | 2024-11-29T03:03:18 | https://www.reddit.com/r/LocalLLaMA/comments/1h2c90z/qwen2audio_vs_gemini_for_speech_analysis/ | Cl4rk-sh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2c90z | false | null | t3_1h2c90z | /r/LocalLLaMA/comments/1h2c90z/qwen2audio_vs_gemini_for_speech_analysis/ | false | false | self | 1 | null |
Summary: The big AI events of November | 184 | * **Alibaba** released its new model, **QwQ 32B Preview**, which integrates reasoning capabilities before responding. The model competes with, and sometimes surpasses, OpenAI's **o1-preview** model.
* **Alibaba** open-sourced the model **Qwen2.5 Coder 32B**, which offers comparable capabilities to leading proprietary language models in the coding domain.
* **DeepSeek** unveiled its new AI model, **DeepSeek-R1-Lite-Preview**, which incorporates reasoning capabilities and delivers impressive performance on the **AIME** and **MATH** benchmarks, matching the level of OpenAI's **o1-preview**.
* **Suno** upgraded its AI-powered music generator to **v4**, introducing new features and performance improvements.
* **Mistral AI** launched the **Pixtral Large** model, a multimodal language model excelling in image recognition and advanced performance metrics.
* **Google** introduced two experimental models, **gemini-exp-1114** and **gemini-exp-1121**, currently leading the Chatbot Arena leaderboard with enhanced performance.
source: [https://nhlocal.github.io/AiTimeline/](https://nhlocal.github.io/AiTimeline/) | 2024-11-29T03:26:24 | https://www.reddit.com/r/LocalLLaMA/comments/1h2cmnh/summary_the_big_ai_events_of_november/ | nh_local | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2cmnh | false | null | t3_1h2cmnh | /r/LocalLLaMA/comments/1h2cmnh/summary_the_big_ai_events_of_november/ | false | false | self | 184 | null |
[NSFW]Sautoyrs: A web application that generates romantic and explicit sexual stories automatically using a large language model | 4 | DEMO: [https://sautoyrs.oopus.info/](https://sautoyrs.oopus.info/)
GitHub: [https://github.com/sudoghut/Sautoyrs](https://github.com/sudoghut/Sautoyrs)
Notes:
- The application supports only browsers compatible with WebGPU. Most mobile browsers do not support WebGPU. We highly recommend using Chrome or Firefox on a desktop or laptop to access Sautoyrs.
- Sautoyrs downloads the Gemma-2-Ataraxy-9B model (~5.2 GB) to your browser cache. To remove the cache, click the settings icon and select "Remove LLM Cache."
- For reference, here are generation speed cases on Nvidia GTX 3070:
> Prefill: 14.7263 tokens/sec, Decoding: 1.3206 tokens/sec
> Prefill: 38.6973 tokens/sec, Decoding: 2.2036 tokens/sec
Here is the generation speed case on Apple M1 MacBook Pro
> prefill: 10.8472 tokens/sec, decoding: 4.5792 tokens/sec
- This application contains mature content and is intended for adult audiences only. Please use it responsibly and in compliance with all applicable laws and regulations.
Usage: (screenshot not included)
| 2024-11-29T03:34:31 | https://www.reddit.com/r/LocalLLaMA/comments/1h2crec/nsfwsautoyrs_a_web_application_that_generates/ | oopus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2crec | false | null | t3_1h2crec | /r/LocalLLaMA/comments/1h2crec/nsfwsautoyrs_a_web_application_that_generates/ | false | false | nsfw | 4 | null |
How do QWQ and R1 determine if they need more reasoning steps without special tokens like O1? | 20 | Hey everyone! 👋
I've been diving deep into O1-like models recently, especially after seeing Alibaba's QWQ and Deepseek's R1. I'm particularly interested in their reasoning mechanisms.
In my current work with O1-like models (mainly for roleplay applications), I use a two-model approach:
- Main model for generation
- A Verifier (RM) to check if the output is satisfactory
- If not satisfied, I append a special reasoning token and let the model continue
This approach works pretty well (a rough sketch of the loop is below), and interestingly, O1's technical report also mentions using special reasoning tokens.
However, I noticed something curious: Neither QWQ nor R1 seem to use these special tokens or PRM during their reasoning process. This makes me wonder:
- How do they determine if their current output is correct?
- What mechanism do they use to decide whether to continue reasoning?
Would love to hear your thoughts and insights on this! Has anyone else noticed this difference or knows more about their implementation? | 2024-11-29T03:57:27 | https://www.reddit.com/r/LocalLLaMA/comments/1h2d526/how_do_qwq_and_r1_determine_if_they_need_more/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2d526 | false | null | t3_1h2d526 | /r/LocalLLaMA/comments/1h2d526/how_do_qwq_and_r1_determine_if_they_need_more/ | false | false | self | 20 | null |
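For reference, the two-model setup I described above is basically the loop below. It's a simplified sketch with placeholder names (generate, score, and the reasoning token string are whatever your own stack provides), not QwQ's or R1's actual mechanism, which is exactly the part I'm asking about.

# Sketch of the generate -> verify -> continue loop described above.
# `generate` and `score` are placeholders for the main model and the reward model calls.
REASONING_TOKEN = "<|keep_thinking|>"  # hypothetical special token

def answer_with_verifier(prompt, generate, score, threshold=0.7, max_rounds=4):
    text = generate(prompt)
    for _ in range(max_rounds):
        if score(prompt, text) >= threshold:  # the verifier (RM) is satisfied
            break
        # Not satisfied: append a special reasoning token and let the model continue.
        text += REASONING_TOKEN + generate(prompt + text + REASONING_TOKEN)
    return text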
Building a PC for LocalLLaMA. How much does the CPU matter? | 1 | [removed] | 2024-11-29T04:50:30 | https://www.reddit.com/r/LocalLLaMA/comments/1h2e0bj/building_a_pc_for_localllama_how_much_does_the/ | darksnes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2e0bj | false | null | t3_1h2e0bj | /r/LocalLLaMA/comments/1h2e0bj/building_a_pc_for_localllama_how_much_does_the/ | false | false | self | 1 | null |
tiny models that suck least at function calling? | 6 | Anyone have any thoughts?
I'm playing with qwen2.5-coder:0.5b and llama3.2:1b on ollama. They both support tools, but seem to go haywire and return a tool call even when the user message isn't relevant to the tool. For example, running the weather example will hallucinate a random city with each response. Are there any small models more or less capable of this, or is that just not a realistic expectation for models this small? | 2024-11-29T05:22:17 | https://www.reddit.com/r/LocalLLaMA/comments/1h2ejgv/tiny_models_that_suck_least_at_function_calling/ | sha256md5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2ejgv | false | null | t3_1h2ejgv | /r/LocalLLaMA/comments/1h2ejgv/tiny_models_that_suck_least_at_function_calling/ | false | false | self | 6 | null |
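One thing that helps with models this small, whichever one you pick, is refusing to execute a tool call that doesn't validate against the tools you actually registered, and falling back to a plain reply instead. A rough sketch of that guard is below; the tool-call dict shape mirrors the common OpenAI-style format and the registry contents are illustrative, so adjust to whatever your client actually returns.

# Guard against hallucinated tool calls from small models: only execute calls
# whose name and required arguments match a registered tool.
TOOLS = {
    "get_weather": {"required": {"city"}},  # illustrative registry
}

def validate_tool_call(call):
    name = call.get("function", {}).get("name")
    args = call.get("function", {}).get("arguments", {}) or {}
    spec = TOOLS.get(name)
    if spec is None:
        return False, f"unknown tool: {name!r}"
    missing = spec["required"] - set(args)
    if missing:
        return False, f"missing arguments: {sorted(missing)}"
    return True, ""

ok, reason = validate_tool_call(
    {"function": {"name": "get_weather", "arguments": {"city": "Berlin"}}}
)
print(ok, reason)  # True -> safe to execute; otherwise answer without the tool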
TextCraft 1.0.6 Update: Talk to Your AI Directly in Word Comments | 6 | 2024-11-29T06:09:44 | https://github.com/suncloudsmoon/TextCraft | SuccessIsHardWork | github.com | 1970-01-01T00:00:00 | 0 | {} | 1h2fayr | false | null | t3_1h2fayr | /r/LocalLLaMA/comments/1h2fayr/textcraft_106_update_talk_to_your_ai_directly_in/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'cDUzGWyYXcnT2yWMYw1LQ_c20DPwsFx0KULC2tlISQY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ItqPT8_JScY623exTz4_PUMzATZWbkYnKit86GbV5nA.jpg?width=108&crop=smart&auto=webp&s=c9869bf754ee1a8009548630699d06273accfa53', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ItqPT8_JScY623exTz4_PUMzATZWbkYnKit86GbV5nA.jpg?width=216&crop=smart&auto=webp&s=8741a9f8f082529e1b3454baad102355fe7919f9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ItqPT8_JScY623exTz4_PUMzATZWbkYnKit86GbV5nA.jpg?width=320&crop=smart&auto=webp&s=221d61b0568d490742f75843e599f765cc7c12a9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ItqPT8_JScY623exTz4_PUMzATZWbkYnKit86GbV5nA.jpg?width=640&crop=smart&auto=webp&s=b99e28862edd7c9d849dfdc12de258f7d4afe1a4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ItqPT8_JScY623exTz4_PUMzATZWbkYnKit86GbV5nA.jpg?width=960&crop=smart&auto=webp&s=10045e18546d4b4efbdfe23db1309cea02d2d1ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ItqPT8_JScY623exTz4_PUMzATZWbkYnKit86GbV5nA.jpg?width=1080&crop=smart&auto=webp&s=b7eda9f409c5a5fb7938412abeb66ba59bcd929c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ItqPT8_JScY623exTz4_PUMzATZWbkYnKit86GbV5nA.jpg?auto=webp&s=c920ff07301444caeafa3eee39bfe5a1ba10bb67', 'width': 1200}, 'variants': {}}]} |
||
Olama not using Gpu | 1 | [removed] | 2024-11-29T06:13:24 | https://www.reddit.com/r/LocalLLaMA/comments/1h2fcz9/olama_not_using_gpu/ | Expensive_Grocery747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2fcz9 | false | null | t3_1h2fcz9 | /r/LocalLLaMA/comments/1h2fcz9/olama_not_using_gpu/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 't_pHEMGKQ6DAGq3kscBApVGEiLbZMGiN-d4WTMkTggQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=108&crop=smart&auto=webp&s=f9bb55c9279ce0742847c88b5626fbc553bbf5b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=216&crop=smart&auto=webp&s=e1908729c74b3588212435422da59168d85d8660', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=320&crop=smart&auto=webp&s=4d949abbbc31e568f121c9c5eaed3e0846f3722e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=640&crop=smart&auto=webp&s=97e67439d1ec5fe9d8e6cb0ba95abe56adce52a7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=960&crop=smart&auto=webp&s=f3bae916e90b40bc5edd90180a00602bab76d6cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=1080&crop=smart&auto=webp&s=d939cfbb76db5c7e138d37bd365f33690c45b6b1', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?auto=webp&s=eb32f09811c1b406241d8ffa47361db3034299c6', 'width': 2400}, 'variants': {}}]} |
Change my mind: AI Twitter should switch to Bluesky | 1 | [removed] | 2024-11-29T06:36:24 | https://www.reddit.com/r/LocalLLaMA/comments/1h2fps1/change_my_mind_ai_twitter_should_switch_to_bluesky/ | SignificantWords | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2fps1 | false | null | t3_1h2fps1 | /r/LocalLLaMA/comments/1h2fps1/change_my_mind_ai_twitter_should_switch_to_bluesky/ | false | false | self | 1 | null |
Help on training a custom language-unit-based vocoder | 0 | Need help and resources, if any are available, on training and fine-tuning a custom language-unit-based vocoder for speech generation.
Thank you | 2024-11-29T06:53:45 | https://www.reddit.com/r/LocalLLaMA/comments/1h2fyvh/help_on_training_a_custom_langauge_unit_based/ | spiritleader473882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2fyvh | false | null | t3_1h2fyvh | /r/LocalLLaMA/comments/1h2fyvh/help_on_training_a_custom_langauge_unit_based/ | false | false | self | 0 | null |
Trying the QwQ-32B-Preview-Q4_K_M-GGUF and so close to fully on my GPU lol | 4 | I'm trying to test this out and I'm literally offloading 1 layer to the CPU lol. Am I doing something wrong? I'm on Ubuntu with only 2MB already used on the card, so it's nothing. Using this to run it:
`./llama-cli --model /root/.qwq/qwq-32b-preview-q4_k_m.gguf --color --threads 30 --keep -1 --n-predict -1 --ctx-size 16384 --gpu-layers 64 --simple-io -e --multiline-input --no-display-prompt --conversation --in-prefix "<|im_end|>\n<|im_start|>user\n" --in-suffix "<|im_end|>\n<|im_start|>assistant\n" -p "<|im_start|>system\nYou are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step and only respond in english." --top-k 20 --top-p 0.8 --temp 0.7 --repeat-penalty 1.05`
Now, the model has 65 layers, and if I remove --gpu-layers or set it to the full 65, I get OOM. If I do 64 layers it works fine. I'm hoping I'm missing a flag or something, but this is hilarious and frustrating! | 2024-11-29T06:55:33 | https://www.reddit.com/r/LocalLLaMA/comments/1h2fzsz/trying_the_qwq32bpreviewq4_k_mgguf_and_so_close/ | Nimrod5000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2fzsz | false | null | t3_1h2fzsz | /r/LocalLLaMA/comments/1h2fzsz/trying_the_qwq32bpreviewq4_k_mgguf_and_so_close/ | false | false | self | 4 | null |
Made me smile and relate so well. Deepseek R1-Lite. | 1 | 2024-11-29T07:34:26 | CrzyFlky | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h2gjzc | false | null | t3_1h2gjzc | /r/LocalLLaMA/comments/1h2gjzc/made_me_smile_and_relate_so_well_deepseek_r1lite/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'dU3QgmAucvdO3yMw7ojk082yumxT-gdeKvVtA7aEdPM', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/s3rcu014ns3e1.png?width=108&crop=smart&auto=webp&s=239ca82807d64b8974c1e42dc5b043582599e05c', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/s3rcu014ns3e1.png?width=216&crop=smart&auto=webp&s=208add77b77d2e5466bb53155046dcb5e9299df6', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/s3rcu014ns3e1.png?width=320&crop=smart&auto=webp&s=fcd1b1430c705f94c17a65c43da450ea0f761e94', 'width': 320}, {'height': 334, 'url': 'https://preview.redd.it/s3rcu014ns3e1.png?width=640&crop=smart&auto=webp&s=9ed7c2e27f6e0c74d80aea0a12365649508b13cf', 'width': 640}, {'height': 502, 'url': 'https://preview.redd.it/s3rcu014ns3e1.png?width=960&crop=smart&auto=webp&s=38bb51565ed10a883c7834a27ffa3d5a1c0cb5bf', 'width': 960}, {'height': 564, 'url': 'https://preview.redd.it/s3rcu014ns3e1.png?width=1080&crop=smart&auto=webp&s=7cea8d2a48b83afc064bfeeaf7332c67d7e7c9f8', 'width': 1080}], 'source': {'height': 616, 'url': 'https://preview.redd.it/s3rcu014ns3e1.png?auto=webp&s=f12bcc851ba35524671a3285d977f7f6a39be21e', 'width': 1178}, 'variants': {}}]} |
|||
Parameter selection for pc without GPU | 1 | [removed] | 2024-11-29T07:40:45 | https://www.reddit.com/r/LocalLLaMA/comments/1h2gn70/parameter_selection_for_pc_without_gpu/ | Puzzleheaded_Loan461 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2gn70 | false | null | t3_1h2gn70 | /r/LocalLLaMA/comments/1h2gn70/parameter_selection_for_pc_without_gpu/ | false | false | self | 1 | null |
Accurate 4-bit quantization for Tulu 3 and OLMo 2 | 7 | I quantized Tulu 3 and OLMo 2:
- 4-bit
- symmetric quantization
- AutoRound
- GPTQ format
- Apache 2.0 license
The models are all compatible with most inference frameworks.
https://preview.redd.it/nvmkl9yaos3e1.png?width=679&format=png&auto=webp&s=9b338794b2adcc1c66fdfc728c10b65af342dc0b
https://preview.redd.it/pt67sz7dos3e1.png?width=679&format=png&auto=webp&s=e14e34dae9ee456f84cad26d22e71e40b31cc41b
Except for Tulu 3 8B, quantization doesn't degrade the model's accuracy, at least according to MMLU.
The models are here:
[https://huggingface.co/collections/kaitchup/tulu-3-and-olmo-2-quantized-67481ed7e5d2e40141d2ec2c](https://huggingface.co/collections/kaitchup/tulu-3-and-olmo-2-quantized-67481ed7e5d2e40141d2ec2c) | 2024-11-29T07:41:31 | https://www.reddit.com/r/LocalLLaMA/comments/1h2gnki/accurate_4bit_quantization_for_tulu_3_and_olmo_2/ | TheKaitchup | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2gnki | false | null | t3_1h2gnki | /r/LocalLLaMA/comments/1h2gnki/accurate_4bit_quantization_for_tulu_3_and_olmo_2/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'xIyIe70R1vwuxBlzVKLeWW_YP3ZiYnt5tN5y5QrTsTI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NXCf3wJdQp1J1WHw_nVHs4F2Q4kuCBJa2gbqXYY1cco.jpg?width=108&crop=smart&auto=webp&s=0184cdcfce3546618d3289385f9007d49617b1eb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NXCf3wJdQp1J1WHw_nVHs4F2Q4kuCBJa2gbqXYY1cco.jpg?width=216&crop=smart&auto=webp&s=8129b5771f02062563a8cab11389fb84436c8c14', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NXCf3wJdQp1J1WHw_nVHs4F2Q4kuCBJa2gbqXYY1cco.jpg?width=320&crop=smart&auto=webp&s=ea3540a1708b8dc47add94d207a3f4ff28638cfb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NXCf3wJdQp1J1WHw_nVHs4F2Q4kuCBJa2gbqXYY1cco.jpg?width=640&crop=smart&auto=webp&s=2bb586c754233e3d8d2e66cf402421056ca5dfb2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NXCf3wJdQp1J1WHw_nVHs4F2Q4kuCBJa2gbqXYY1cco.jpg?width=960&crop=smart&auto=webp&s=dcf4cab863e99293eb895c27336d8a033d4ef898', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NXCf3wJdQp1J1WHw_nVHs4F2Q4kuCBJa2gbqXYY1cco.jpg?width=1080&crop=smart&auto=webp&s=0f2da47dbbc9ffabe973b020f33f862ab6557a84', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NXCf3wJdQp1J1WHw_nVHs4F2Q4kuCBJa2gbqXYY1cco.jpg?auto=webp&s=c0deb34a27c3dc8883a65973082bafa0e5d04b5b', 'width': 1200}, 'variants': {}}]} |
|
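For anyone who wants to try these, GPTQ-format checkpoints load through the usual Transformers path. The sketch below is not an official snippet from the repo: the repo id is a placeholder for whichever model you pick from the collection, and it assumes a GPTQ-capable backend (e.g. auto-gptq or gptqmodel) and a GPU are available.

# Load a 4-bit GPTQ checkpoint with Transformers (repo id is a placeholder).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kaitchup/SOME-TULU3-OR-OLMO2-4BIT-REPO"  # pick one from the collection linked above
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tok("Briefly explain what symmetric quantization means.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100)
print(tok.decode(out[0], skip_special_tokens=True))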
I made this free online tool to digest a repo into a prompt | 141 | 2024-11-29T08:01:01 | https://v.redd.it/6pnnqnkvrs3e1 | MrCyclopede | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h2gx0w | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6pnnqnkvrs3e1/DASHPlaylist.mpd?a=1735459273%2CMWNhNDM2ZDc5MDU2NWI0Yzk4YzNhYzRkMjEwMGEyODc1MGVjMGY0MDNhZThjZTY0YWFlMzUwNzAxNmNkZjFhMg%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/6pnnqnkvrs3e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6pnnqnkvrs3e1/HLSPlaylist.m3u8?a=1735459273%2CYjBmZGQ4YjU5MTVjOGViNzhiNzBkOTEwZGU2YWEyZTI5NjBmZjFiMDE4YzYyMTRmMDk3OTE2MDE1YzA4MDQwYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6pnnqnkvrs3e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1h2gx0w | /r/LocalLLaMA/comments/1h2gx0w/i_made_this_free_online_tool_to_digest_a_repo/ | false | false | 141 | {'enabled': False, 'images': [{'id': 'd205bGJzanZyczNlMT-Zol0d_0WGgot3V2KODgQ4coo2M1COClmcY-uCWvFp', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d205bGJzanZyczNlMT-Zol0d_0WGgot3V2KODgQ4coo2M1COClmcY-uCWvFp.png?width=108&crop=smart&format=pjpg&auto=webp&s=88374b92bf9f3a35dcd5b5bf9740ed3ce6d833dc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d205bGJzanZyczNlMT-Zol0d_0WGgot3V2KODgQ4coo2M1COClmcY-uCWvFp.png?width=216&crop=smart&format=pjpg&auto=webp&s=2492e826f5e18bacd0dcf380e7d4d6a1ceb610ba', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d205bGJzanZyczNlMT-Zol0d_0WGgot3V2KODgQ4coo2M1COClmcY-uCWvFp.png?width=320&crop=smart&format=pjpg&auto=webp&s=e30c40e4b7bcaa5d9828c02a4d33e4940d278cc4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/d205bGJzanZyczNlMT-Zol0d_0WGgot3V2KODgQ4coo2M1COClmcY-uCWvFp.png?width=640&crop=smart&format=pjpg&auto=webp&s=6e39b787bac9d06e5781dc629c278dbb1827d745', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d205bGJzanZyczNlMT-Zol0d_0WGgot3V2KODgQ4coo2M1COClmcY-uCWvFp.png?width=960&crop=smart&format=pjpg&auto=webp&s=801a6bf460c8b6984a8cc12b3abc1d375b59b911', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d205bGJzanZyczNlMT-Zol0d_0WGgot3V2KODgQ4coo2M1COClmcY-uCWvFp.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ee402901aa3e16f0b8bb39a50b2ca2498e67da0d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/d205bGJzanZyczNlMT-Zol0d_0WGgot3V2KODgQ4coo2M1COClmcY-uCWvFp.png?format=pjpg&auto=webp&s=7f58cdea9328c3e3dd2a30b9f395157a2de7e073', 'width': 1920}, 'variants': {}}]} |
||
QwQ-32B (Q5_K_L) being kind of sus | 1 | 2024-11-29T08:10:24 | KT313 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h2h1ue | false | null | t3_1h2h1ue | /r/LocalLLaMA/comments/1h2h1ue/qwq32b_q5_k_l_being_kind_of_sus/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'tUBZOd2GJ1NAUXT_Dk3TenY-WIixahBeOHzoJO1N-wc', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/te3bzk3pts3e1.png?width=108&crop=smart&auto=webp&s=8e4072b4cb6b8923413a7b154bfd7ec81c259baf', 'width': 108}, {'height': 80, 'url': 'https://preview.redd.it/te3bzk3pts3e1.png?width=216&crop=smart&auto=webp&s=0d1010a58207373cec7b20f8145f727463446317', 'width': 216}, {'height': 118, 'url': 'https://preview.redd.it/te3bzk3pts3e1.png?width=320&crop=smart&auto=webp&s=21642b275afe139b026603ebda96f008607a2c9e', 'width': 320}, {'height': 237, 'url': 'https://preview.redd.it/te3bzk3pts3e1.png?width=640&crop=smart&auto=webp&s=a27007ad6c68ae4b7bc708e16306c6c68d1ef9db', 'width': 640}], 'source': {'height': 352, 'url': 'https://preview.redd.it/te3bzk3pts3e1.png?auto=webp&s=efb2a3a82b4bbf63d219afa629f07f7d6c8dad17', 'width': 949}, 'variants': {}}]} |
|||
QwQ-32B (Q5_K_L), interesting introduction | 1 | 2024-11-29T08:12:51 | KT313 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h2h31r | false | null | t3_1h2h31r | /r/LocalLLaMA/comments/1h2h31r/qwq32b_q5_k_l_interesting_introduction/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'OmVgd8C7YobAe6e5aRaSLo-xEYMt7ZZsGxNP0lsO3yo', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/z2669nb5us3e1.png?width=108&crop=smart&auto=webp&s=0f607e4a82690af1ffe5ceffd2aad48002cab72f', 'width': 108}, {'height': 80, 'url': 'https://preview.redd.it/z2669nb5us3e1.png?width=216&crop=smart&auto=webp&s=c173534854f90421a90af87c04930866116fee39', 'width': 216}, {'height': 118, 'url': 'https://preview.redd.it/z2669nb5us3e1.png?width=320&crop=smart&auto=webp&s=033a23ffb7e47cac0dc861463751f93ab96224f1', 'width': 320}, {'height': 237, 'url': 'https://preview.redd.it/z2669nb5us3e1.png?width=640&crop=smart&auto=webp&s=74dfe54a465a587acae70da417a328a976745d7e', 'width': 640}], 'source': {'height': 352, 'url': 'https://preview.redd.it/z2669nb5us3e1.png?auto=webp&s=54bd343a9fe79cf5ad10f16576bdabdf105951f0', 'width': 949}, 'variants': {}}]} |
|||
Building a chatbot for a packaging company with a large and dynamic database | 1 | [removed] | 2024-11-29T08:13:33 | https://www.reddit.com/r/LocalLLaMA/comments/1h2h3ec/building_a_chatbot_for_a_packaging_company_with_a/ | No_Possibility_7588 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2h3ec | false | null | t3_1h2h3ec | /r/LocalLLaMA/comments/1h2h3ec/building_a_chatbot_for_a_packaging_company_with_a/ | false | false | self | 1 | null |
Alternative to "Chat with RTX" for loading private files and ask about it? | 0 | Hi!
I've been trying to figure out the best solution to host a local LLM and use it to create a database of my pictures, documents, PDFs, and so on and ask the LLM about it.
**Example:** My idea is to ask my local LLM for important information so I don't have to search for it manually: things like IDs, car information, tax documents, and more.
I thought "Chat with RTX" would be a good solution, but it turned out to be quite messy to set up. I spent hours trying to fix missing functions and update packages in the virtual Python environment, but I gave up.
So, is there a good alternative for my use case? Maybe something that works with Ollama? :) | 2024-11-29T08:13:53 | https://www.reddit.com/r/LocalLLaMA/comments/1h2h3k9/alternative_to_chat_with_rtx_for_loading_private/ | Naernoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2h3k9 | false | null | t3_1h2h3k9 | /r/LocalLLaMA/comments/1h2h3k9/alternative_to_chat_with_rtx_for_loading_private/ | false | false | self | 0 | null
QwQ-32B (Q5_K_L) being kind of sus | 0 | 2024-11-29T08:13:54 | KT313 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h2h3kb | false | null | t3_1h2h3kb | /r/LocalLLaMA/comments/1h2h3kb/qwq32b_q5_k_l_being_kind_of_sus/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'x8VILtiDTnsnnZlCKthZobF6ELjSEhUShYOVN7RiRk0', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/0vy0gwg9us3e1.png?width=108&crop=smart&auto=webp&s=cfa77443d4d731636c1ca85a9f145adeff7fd661', 'width': 108}, {'height': 80, 'url': 'https://preview.redd.it/0vy0gwg9us3e1.png?width=216&crop=smart&auto=webp&s=0c1ee9559de541853f1290a1d050596523a554c4', 'width': 216}, {'height': 118, 'url': 'https://preview.redd.it/0vy0gwg9us3e1.png?width=320&crop=smart&auto=webp&s=9dc00b533d5e71189e07e705ae0aa2d82e6ea77a', 'width': 320}, {'height': 237, 'url': 'https://preview.redd.it/0vy0gwg9us3e1.png?width=640&crop=smart&auto=webp&s=ebab31d8768799850ce9ff1c0b4272c08e70b0f7', 'width': 640}], 'source': {'height': 352, 'url': 'https://preview.redd.it/0vy0gwg9us3e1.png?auto=webp&s=dbc4d21006c02721fc9ff2581f9cfb37310eb1f2', 'width': 949}, 'variants': {}}]} |
|||
LLM/AI for pdf’s | 1 | [removed] | 2024-11-29T08:34:20 | https://www.reddit.com/r/LocalLLaMA/comments/1h2hdu2/llmai_for_pdfs/ | Defiant-Attention978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2hdu2 | false | null | t3_1h2hdu2 | /r/LocalLLaMA/comments/1h2hdu2/llmai_for_pdfs/ | false | false | self | 1 | null |
Best MacBook Pro configuration for running LLMs locally (priority: model size over inference speed) | 1 | [removed] | 2024-11-29T08:42:22 | https://www.reddit.com/r/LocalLLaMA/comments/1h2hhqw/best_macbook_pro_configuration_for_running_llms/ | Aggravating_Study235 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2hhqw | false | null | t3_1h2hhqw | /r/LocalLLaMA/comments/1h2hhqw/best_macbook_pro_configuration_for_running_llms/ | false | false | self | 1 | null |
I've made an "ultimate" guide about building and using `llama.cpp` | 311 | https://steelph0enix.github.io/posts/llama-cpp-guide/
This post is relatively long, but I've been writing it for over a month and I wanted it to be pretty comprehensive.
It will guide you through the building process of llama.cpp, for CPU and GPU support (w/ Vulkan), describe how to use some core binaries (`llama-server`, `llama-cli`, `llama-bench`), and explain most of the configuration options for `llama.cpp` and LLM samplers.
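As a small taste of the `llama-server` part: once the server is running, you can query its OpenAI-compatible endpoint from any language. A minimal sketch in Python, assuming the default address of 127.0.0.1:8080 and a model already loaded:

    import requests

    # llama-server exposes an OpenAI-compatible chat endpoint at /v1/chat/completions
    resp = requests.post(
        "http://127.0.0.1:8080/v1/chat/completions",
        json={
            "messages": [{"role": "user", "content": "Explain the KV cache in one sentence."}],
            "temperature": 0.7,
            "max_tokens": 128,
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])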
Suggestions and PRs are welcome. | 2024-11-29T08:44:21 | https://www.reddit.com/r/LocalLLaMA/comments/1h2hioi/ive_made_an_ultimate_guide_about_building_and/ | SteelPh0enix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2hioi | false | null | t3_1h2hioi | /r/LocalLLaMA/comments/1h2hioi/ive_made_an_ultimate_guide_about_building_and/ | false | false | self | 311 | {'enabled': False, 'images': [{'id': 'XZSWdkwcWZ5X_Vd1OYq5ZVrfs--2qDkAa37FQuWsrMk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?width=108&crop=smart&auto=webp&s=aadb4b6cd053db4a1c45926ead166d1f157ddea6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?width=216&crop=smart&auto=webp&s=24b5c396fde815646492c083b714eae6c59a1cb0', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?width=320&crop=smart&auto=webp&s=a9c2c59bfd59d5a49f3818cb7b0ed9ec358d6276', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?width=640&crop=smart&auto=webp&s=316480920221b9fb8928b676452d143067eefc83', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?width=960&crop=smart&auto=webp&s=2fd3e606dd223730143c5ec9c7411f3f2063c910', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?width=1080&crop=smart&auto=webp&s=88dc734b160100815e095f1ee8b5356d42cc201f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?auto=webp&s=c56adb140f25da1a1c8bdc480d6b6f2650c709cf', 'width': 1200}, 'variants': {}}]} |
Weird performance in python | 1 | [removed] | 2024-11-29T09:16:56 | https://www.reddit.com/r/LocalLLaMA/comments/1h2hytf/weird_performance_in_python/ | SyhkiiTryThings | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2hytf | false | null | t3_1h2hytf | /r/LocalLLaMA/comments/1h2hytf/weird_performance_in_python/ | false | false | self | 1 | null |
New AI Startup Maisa with old model & reasoning technique + web search Is really supperior beat claude & o1 the first that can answer this question completely correct | 0 | 2024-11-29T09:56:18 | balianone | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h2ihx7 | false | null | t3_1h2ihx7 | /r/LocalLLaMA/comments/1h2ihx7/new_ai_startup_maisa_with_old_model_reasoning/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'gDVC_dsKaCmGiXXx75CSca6CrGRbg730bIN7PtVbESg', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/26y64zvbct3e1.png?width=108&crop=smart&auto=webp&s=15c0b8defe5df02329091e8782281a6319620eb2', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/26y64zvbct3e1.png?width=216&crop=smart&auto=webp&s=35ee13d1fb12513a30559e5eb2e38d928e145174', 'width': 216}, {'height': 194, 'url': 'https://preview.redd.it/26y64zvbct3e1.png?width=320&crop=smart&auto=webp&s=f56e343f7182638cfcee5a6b1c5184324bbe8376', 'width': 320}, {'height': 388, 'url': 'https://preview.redd.it/26y64zvbct3e1.png?width=640&crop=smart&auto=webp&s=a6bbc750a2876c157e12fdf96ce69dbb34d71ded', 'width': 640}, {'height': 582, 'url': 'https://preview.redd.it/26y64zvbct3e1.png?width=960&crop=smart&auto=webp&s=d62bb067605e6c08a5752974cd5af04f9b1794c3', 'width': 960}, {'height': 655, 'url': 'https://preview.redd.it/26y64zvbct3e1.png?width=1080&crop=smart&auto=webp&s=d130eb8b39b9ce0746f992c9b208fe688b2e813f', 'width': 1080}], 'source': {'height': 889, 'url': 'https://preview.redd.it/26y64zvbct3e1.png?auto=webp&s=bef260b4f8dd9a601ce75f905fe4056f19b60b72', 'width': 1465}, 'variants': {}}]} |
|||
Is 24GB Macbook M4 pro good to play with small LLM/Diffusion models. | 0 | I will be starting a Phd. I have access to the GPU cluster. But I need a laptop to research and study the behavior of the LLM/Diffusion models. I am a bit hesitant to buy 48GB ram version as its so expensive. Please guide me. | 2024-11-29T10:11:24 | https://www.reddit.com/r/LocalLLaMA/comments/1h2ipot/is_24gb_macbook_m4_pro_good_to_play_with_small/ | Vjraven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2ipot | false | null | t3_1h2ipot | /r/LocalLLaMA/comments/1h2ipot/is_24gb_macbook_m4_pro_good_to_play_with_small/ | false | false | self | 0 | null |
Local IOS LLM | 0 | Hi everyone,
Just wondering here if there is an LLM Studio app for iPhone? I would like to make an API connection from my phone as the server, with apps that run on my phone such as Obsidian and Obsidian Webclipper. Can anyone point me to some trusted resources? I've seen some solutions, but none are open source and most are made by individuals. I would prefer it if LLM Studio was available on the phone :) | 2024-11-29T10:18:27 | https://www.reddit.com/r/LocalLLaMA/comments/1h2it4b/local_ios_llm/ | Smokeey1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2it4b | false | null | t3_1h2it4b | /r/LocalLLaMA/comments/1h2it4b/local_ios_llm/ | false | false | self | 0 | null
Question about hardware | 1 | [removed] | 2024-11-29T10:27:42 | https://www.reddit.com/r/LocalLLaMA/comments/1h2ixq9/question_about_hardware/ | Electrical_Access167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2ixq9 | false | null | t3_1h2ixq9 | /r/LocalLLaMA/comments/1h2ixq9/question_about_hardware/ | false | false | self | 1 | null |
GLM-Edge series: 1.5b ~ 5B, Chat & Vision | 81 | 2024-11-29T10:36:42 | https://huggingface.co/collections/THUDM/glm-edge-6743283c5809de4a7b9e0b8b | AaronFeng47 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h2j25e | false | null | t3_1h2j25e | /r/LocalLLaMA/comments/1h2j25e/glmedge_series_15b_5b_chat_vision/ | false | false | 81 | {'enabled': False, 'images': [{'id': 'tlgn4Bzjb5wUXYXKzb2t1RVM01-k90lJ3PHtlXt6u5E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nElgj46uLXZXTlcTGlPnGaqqBATzdVOTx8lzmYB5oXw.jpg?width=108&crop=smart&auto=webp&s=cff497db011aa4d16c52a4aed95ee3c21dd837be', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nElgj46uLXZXTlcTGlPnGaqqBATzdVOTx8lzmYB5oXw.jpg?width=216&crop=smart&auto=webp&s=c8d72db20ac32036fe793fcbb49c0082139b4986', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nElgj46uLXZXTlcTGlPnGaqqBATzdVOTx8lzmYB5oXw.jpg?width=320&crop=smart&auto=webp&s=70f921b3ba78cf00a2eac01ad43dace12df67ea8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nElgj46uLXZXTlcTGlPnGaqqBATzdVOTx8lzmYB5oXw.jpg?width=640&crop=smart&auto=webp&s=80c02438bf235383e643278b9b16cb1d9d22782b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nElgj46uLXZXTlcTGlPnGaqqBATzdVOTx8lzmYB5oXw.jpg?width=960&crop=smart&auto=webp&s=a4c87c0811fb7cc0ab17a5c6ce377f3c9e83801a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nElgj46uLXZXTlcTGlPnGaqqBATzdVOTx8lzmYB5oXw.jpg?width=1080&crop=smart&auto=webp&s=fbb774c0b61930e92dbeaed301fbcd0aeb9d6023', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nElgj46uLXZXTlcTGlPnGaqqBATzdVOTx8lzmYB5oXw.jpg?auto=webp&s=dd9e407e8e2ff4d6bd9dca7637ffb662469bc579', 'width': 1200}, 'variants': {}}]} |
||
Any LLM powered tool to auto generate & run bash, python codes | 1 | [removed] | 2024-11-29T10:49:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h2j8px/any_llm_powered_tool_to_auto_generate_run_bash/ | Eastern_Guarantee857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2j8px | false | null | t3_1h2j8px | /r/LocalLLaMA/comments/1h2j8px/any_llm_powered_tool_to_auto_generate_run_bash/ | false | false | self | 1 | null |
Is Anthropic's MCP the Missing Piece for Local LLMs? A Deep Dive | 0 | Hey everyone!
After seeing some interesting discussion about Anthropic's new Model Context Protocol (MCP), with opinions ranging from "revolution" to "passing fad", I wanted to take a deeper dive into it, and boy, I was not disappointed.
While Anthropic launched it with Claude Desktop, here's the cool part - it's fully open source and so could work with any LLM, including our local models!
Think of it as giving wings to your local LLMs - they can now securely access your files, databases, and tools without any cloud involvement. Want to run Llama or Mistral locally while giving them the same capabilities as Claude? That's exactly what MCP could enable.
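To give an idea of how little glue is involved, this is roughly the shape of the config Claude Desktop uses to attach an MCP server (reproduced from memory of the launch docs, so treat the package name and path as illustrative):

    {
      "mcpServers": {
        "filesystem": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/me/notes"]
        }
      }
    }

An open-source client could read the exact same config and hand those tools to a local model instead.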
Here's the [link](https://www.metadocs.co/2024/11/29/anthropic-model-context-protocol-how-to-give-wings-to-local-mode/) to my article, so don't hesitate!
I really think this is a big win for the Open Source community and I can't wait to have my open-source Claude Desktop.
So what do you think? Would love to hear your ideas!
| 2024-11-29T11:00:37 | https://www.reddit.com/r/LocalLLaMA/comments/1h2je5t/is_anthropics_mcp_the_missing_piece_for_local/ | ravediamond000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2je5t | false | null | t3_1h2je5t | /r/LocalLLaMA/comments/1h2je5t/is_anthropics_mcp_the_missing_piece_for_local/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '94fanjK5z-8DC2mIozC6T6EUC0ejIwybMdraWu1-P1g', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/nLvFZxW8WaJb41gyYKGeW37dzuMTRKoac64lB99xBWI.jpg?width=108&crop=smart&auto=webp&s=6e259133926b400f433d3a5f5c14c47e546bbea1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/nLvFZxW8WaJb41gyYKGeW37dzuMTRKoac64lB99xBWI.jpg?width=216&crop=smart&auto=webp&s=be943321a3cfdb794bca02137f43f6a07680cd38', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/nLvFZxW8WaJb41gyYKGeW37dzuMTRKoac64lB99xBWI.jpg?width=320&crop=smart&auto=webp&s=5c6b58ccb762d439cab4fc118309412e968e3f97', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/nLvFZxW8WaJb41gyYKGeW37dzuMTRKoac64lB99xBWI.jpg?width=640&crop=smart&auto=webp&s=f777febc6fda79acc871ccda0333d25db5f9d01a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/nLvFZxW8WaJb41gyYKGeW37dzuMTRKoac64lB99xBWI.jpg?width=960&crop=smart&auto=webp&s=ec22cd86d4278f9f2e7ee761ee2fe77e216e0025', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/nLvFZxW8WaJb41gyYKGeW37dzuMTRKoac64lB99xBWI.jpg?auto=webp&s=a774c2bf4bdaeb77fed93856150510dbf753493c', 'width': 1024}, 'variants': {}}]} |
Finetune LLM specialized for RAG | 1 | Hello, I need to finetune a LLM which will be used primarily for retrieval augmented generation tasks. In the finetuning dataset I am planning of including corpora of tasks such as knowledge recall, reasoning, math.. but I am wondering: are there datasets of tasks as close as possible to RAG (i.e. answer the user's question given the following information)? I have done a little research but I wasn't able to find anything relevant. Thank you! | 2024-11-29T11:14:43 | https://www.reddit.com/r/LocalLLaMA/comments/1h2jljq/finetune_llm_specialized_for_rag/ | hertric | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2jljq | false | null | t3_1h2jljq | /r/LocalLLaMA/comments/1h2jljq/finetune_llm_specialized_for_rag/ | false | false | self | 1 | null |
Help Deciding Between A6000, Dual 3090s, or a 4090 for LLM Tasks | 0 |
Hey everyone,
I’m currently planning to build a new rig for working with large language models (LLMs). The primary use cases are inference and occasional training, so I want a setup that’s powerful and future-proof for my needs.
After doing some research, I’ve narrowed down my GPU options to:
1. NVIDIA A6000
2. Dual 3090s
3. NVIDIA 4090
Key Points I’m Considering:
VRAM: I know that LLM tasks can require a lot of VRAM, especially during training. The A6000 has 48GB, while the 3090 and 4090 have 24GB each. However, with dual 3090s, I can double the capacity if model parallelism is feasible.
Performance: I want fast inference speeds and solid training capabilities without bottlenecks.
Compatibility and Build Requirements:
For dual 3090s, I’ll need a build that supports NVLink (and I’m aware NVLink doesn’t aggregate VRAM, so parallelization will be key).
The A6000 is attractive for its workstation-grade features but might need special considerations for cooling and power.
The 4090 seems to hit a sweet spot for consumer-grade high performance, but I’m unsure how it stacks up for LLMs compared to the others as it has low VRAM.
Cost: Budget isn’t a deal-breaker, but I want to make the most sensible choice for my use case.
What I’m Looking For:
Build Recommendations: What kind of CPU, motherboard, and PSU would best support each option? I want something scalable and reliable.
Cooling Advice: For any of these cards, what cooling solutions would you recommend? I’ve heard dual 3090s can get really hot.
Real-World LLM Performance: Does anyone have experience using these GPUs specifically for LLM inference/training? How do they compare in terms of efficiency and practicality?
I’d really appreciate any insights or feedback you can provide. If anyone’s gone through a similar decision process, I’d love to hear how you made your choice and how it’s working out for you. I've never actually built a machine like this and we're kind of in a hurry as a company so any help or recommendation is appreciated.
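To put the VRAM point in rough numbers, here is the back-of-the-envelope math I have been working from (a sketch; the bytes-per-parameter figures are approximations and ignore KV cache and activation overhead):

    # Weights-only VRAM estimate: ~1 GB per billion parameters per byte of precision
    def weights_gb(params_billion, bytes_per_param):
        return params_billion * bytes_per_param

    for params in (8, 32, 70):
        for precision, bpp in (("FP16", 2.0), ("Q8", 1.0), ("Q4", 0.5)):
            print(f"{params}B @ {precision}: ~{weights_gb(params, bpp):.0f} GB")

    # Rough takeaway: a 70B model at Q4 (~35 GB) fits on one A6000 (48 GB) or split
    # across 2x 3090 (2x 24 GB), but not on a single 4090 (24 GB).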
Thanks in advance!
(This post was written by chatgpt, why confuse others when chatgpt can explain the situation way better than me?)
| 2024-11-29T11:27:59 | https://www.reddit.com/r/LocalLLaMA/comments/1h2jsgi/help_deciding_between_a6000_dual_3090s_or_a_4090/ | Su1tz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2jsgi | false | null | t3_1h2jsgi | /r/LocalLLaMA/comments/1h2jsgi/help_deciding_between_a6000_dual_3090s_or_a_4090/ | false | false | self | 0 | null |
Key Information Extract multi-page PDF file | 1 | [removed] | 2024-11-29T11:35:26 | https://www.reddit.com/r/LocalLLaMA/comments/1h2jwd8/key_information_extract_multipage_pdf_file/ | I3estNF | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2jwd8 | false | null | t3_1h2jwd8 | /r/LocalLLaMA/comments/1h2jwd8/key_information_extract_multipage_pdf_file/ | false | false | self | 1 | null |
Finetuning doesn't finetune | 1 | Hi,
I'm trying to finetune Phi-3 mini 4k instruct (based on the example provided on their hugging face page) for Named Entity Recognition (NER). I put in a training dataset with roughly 2.5k rows (each is about 3 sentences from PubMed as user input and json schema with entities as output).
My system prompt is:
Please identify all the named entities mentioned in the input sentence provided below. The entities may have category "Disease" or "Chemical". Use **ONLY** the categories "Chemical" or "Disease". Do not include any other categories. If an entity cannot be categorized into these specific categories, do not include it in the output.
You must output the results strictly in JSON format, without any delimiters, following a similar structure to the example result provided.
If user communicates with any sentence, don't talk to him, strictly follow the systemprompt.
Example user input and assistant response:
User:
Famotidine-associated delirium.A series of six cases.Famotidine is a histamine H2-receptor antagonist used in inpatient settings for prevention of stress ulcers and is showing increasing popularity because of its low cost.
Assistant:
[{"category": "Chemical", "entity": "Famotidine"}, {"category": "Disease", "entity": "delirium"}, {"category": "Chemical", "entity": "Famotidine"}, {"category": "Disease", "entity": "ulcers"}]
Im using SFTTtrainer from trl.
Problem 1:
No matter what hyperparameters I use, I still get 0.000000 loss after 20 steps (if I add a validation set, I get 0.000000 loss on it as well after a few steps). When I test the model manually on a random item from the training dataset, I don't get a fully correct answer.
Problem 2:
I tested the unmodified model and the modified model; they output the exact same results, as if no finetuning happened.
    from transformers import pipeline
    import peft

    # Base (unmodified) model wrapped in a generation pipeline
    unmodified_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer, device='cuda')

    # Load the LoRA adapter from the last checkpoint on top of the same base model
    peft_model = peft.PeftModel.from_pretrained(model, "checkpoint_dir/checkpoint-291")
    peft_model.eval()
    peft_pipeline = pipeline("text-generation", model=peft_model, tokenizer=tokenizer, device='cuda')

    # test is the processed testing dataset
    output1 = peft_pipeline(test, **generation_args)
    output2 = unmodified_pipeline(test, **generation_args)  # was `nlp(...)`, presumably the unmodified pipeline
When I do output1==output2, it returns True.
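In case it helps with diagnosing, here is a generic sanity check I can run to see whether the LoRA adapter is attached at all and whether its B matrices (zero-initialized in standard LoRA) actually moved during training; this is plain PEFT/PyTorch, nothing specific to Phi-3:

    import torch

    # lora_B tensors start at zero; if they are still all-zero, the adapter adds nothing
    lora_b = {n: p for n, p in peft_model.named_parameters() if "lora_B" in n}
    print(f"{len(lora_b)} lora_B tensors found")
    still_zero = [n for n, p in lora_b.items() if torch.count_nonzero(p) == 0]
    print(f"{len(still_zero)} of them are still all-zero")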
If anyone gives me any ideas on how to fix it, I'd appreciate it. | 2024-11-29T12:34:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h2ku2k/finetuning_doesnt_finetune/ | IvanOG_Ranger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2ku2k | false | null | t3_1h2ku2k | /r/LocalLLaMA/comments/1h2ku2k/finetuning_doesnt_finetune/ | false | false | self | 1 | null |
Whisper (Whisper.cpp/WhisperKit) for live transcription - why no prompt caching? | 26 | Hi everyone! Some quick questions for today:
1. Why do most streaming-based implementations of Whisper process incoming audio in chunks and then stitch the transcript together?
2. Why not cache the encoded content and then keep that in memory and simply encode more incoming audio?
3. If Whisper is an autoregressive model, and it encodes audio in a sequential manner... why not just keep a running KV cache of encoded audio and update it? Why process in separate batches?
We see this kind of run-on caching a lot in e.g. LLM backends - Llama.cpp and MLX\_lm for instance both implement prompt caching. The encoded KV cache is saved so that next time a prompt is passed in, the already encoded part of the conversation history doesn't need to be calculated again.
And yet I can't find any open source implementations of Whisper that do this - unless I'm just really misunderstanding the code (which is very possible). From what I can see of the codebase, Whisper.cpp seems to do sliding chunks and stitch them together. And you can see the pitfalls when you use it for live transcription; there are clear errors introduced where the chunks overlap and get stitched together.
I've yet to get deep into WhisperKit, but considering it has those same hallmark errors when shifting from one chunk to the next, I can only assume it too has a stitch-together implementation.
KV cache reuse / keeping a running KV cache would eliminate those errors. It would also majorly reduce the complexity with having to implement custom logic for processing multiple chunks and stitching them together in a sliding window fashion. You could just have one stream of audio coming in, and one stream of decoded text coming out.
Cleaner code, no having to compute overlapping sections more than once, no reduction in transcript accuracy versus doing inference on a static file... IMO it seems too good to be true. It leads me to think that maybe run-on prompt caching like we see with LLMs is just simply not possible with Whisper...? That seems the simplest explanation. But I don't understand why that's the case. Anyone happen to know? | 2024-11-29T12:37:42 | https://www.reddit.com/r/LocalLLaMA/comments/1h2kvu2/whisper_whispercppwhisperkit_for_live/ | mark-lord | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2kvu2 | false | null | t3_1h2kvu2 | /r/LocalLLaMA/comments/1h2kvu2/whisper_whispercppwhisperkit_for_live/ | false | false | self | 26 | null
Best programs/projects to fine-tune an LLM locally? And what can you train with 16GB? | 27 | What is the best "program" (I don't know what to call it) or open-source project to fine-tune an LLM locally?
And with 16GB VRAM, what is the most optimal model to train?
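To show the kind of setup I mean, something along these lines is what I have been looking at; a rough QLoRA-style sketch with TRL + PEFT, where the model name, dataset path, and hyperparameters are placeholders rather than recommendations (the JSONL is expected to have a "text" field):

    from datasets import load_dataset
    from peft import LoraConfig
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from trl import SFTTrainer, SFTConfig

    model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder; pick something that fits 16GB
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4"),
        device_map="auto",
    )
    dataset = load_dataset("json", data_files="my_texts.jsonl", split="train")  # your own data

    trainer = SFTTrainer(
        model=model,
        train_dataset=dataset,
        peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
        args=SFTConfig(output_dir="out", per_device_train_batch_size=1,
                       gradient_accumulation_steps=8, num_train_epochs=1),
    )
    trainer.train()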
(Personal use cases/ custom data / OR mimicking myself (cloning my texting style and some knowledge from journal entries )(RP)/ whatnot) | 2024-11-29T12:52:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h2l5cv/best_way_programs_projects_to_fine_tune_an_llm/ | Deluded-1b-gguf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2l5cv | false | null | t3_1h2l5cv | /r/LocalLLaMA/comments/1h2l5cv/best_way_programs_projects_to_fine_tune_an_llm/ | false | false | self | 27 | null |
**"Yo, Run LLaMA 31B Instruct Like a Pro (on a Budget)! Let’s Talk Cheap Hardware + Big Brain Vibes"** | 1 | [removed] | 2024-11-29T12:58:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h2l8h9/yo_run_llama_31b_instruct_like_a_pro_on_a_budget/ | Funny_Acanthaceae285 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2l8h9 | false | null | t3_1h2l8h9 | /r/LocalLLaMA/comments/1h2l8h9/yo_run_llama_31b_instruct_like_a_pro_on_a_budget/ | false | false | self | 1 | null |
Send LLaMA 3.1B Instruct Like a Pro (on a Budget)! Let’s Talk Cheap Hardware + Big Brain Vibes | 1 | [removed] | 2024-11-29T13:00:29 | https://www.reddit.com/r/LocalLLaMA/comments/1h2l9xm/send_llama_31b_instruct_like_a_pro_on_a_budget/ | Funny_Acanthaceae285 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2l9xm | false | null | t3_1h2l9xm | /r/LocalLLaMA/comments/1h2l9xm/send_llama_31b_instruct_like_a_pro_on_a_budget/ | false | false | self | 1 | null |
Looking for Open-Source Tools to Build a Voice Chatbot with TTS, GPT, and Speech-to-Text | 1 | [removed] | 2024-11-29T13:03:22 | https://www.reddit.com/r/LocalLLaMA/comments/1h2lc13/looking_for_opensource_tools_to_build_a_voice/ | tamir12312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2lc13 | false | null | t3_1h2lc13 | /r/LocalLLaMA/comments/1h2lc13/looking_for_opensource_tools_to_build_a_voice/ | false | false | self | 1 | null |
Automated GitHub Issue Creation using Structured Generation | 6 | Check this demo using SmolLM2, XGrammar and MLC WebLLM to convert repo complaints into structured GitHub issues (the model runs locally!) [https://huggingface.co/spaces/reach-vb/github-issue-generator-webgpu](https://huggingface.co/spaces/reach-vb/github-issue-generator-webgpu) | 2024-11-29T13:04:06 | https://www.reddit.com/r/LocalLLaMA/comments/1h2lchf/automated_github_issue_creation_using_structured/ | loubnabnl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2lchf | false | null | t3_1h2lchf | /r/LocalLLaMA/comments/1h2lchf/automated_github_issue_creation_using_structured/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'oC_xJ0xXDJDidVFlW1CVp6ZD2mAhfz-o7F03mOy9W1w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3DiAiou0dpE19Xiq5ayTsw0ybNPozukmzHxOq2K82zI.jpg?width=108&crop=smart&auto=webp&s=248e05bc9e46e8101129fca7b49b4d82aa0596b3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3DiAiou0dpE19Xiq5ayTsw0ybNPozukmzHxOq2K82zI.jpg?width=216&crop=smart&auto=webp&s=679710f2c8063337deb94557139117026bdd4111', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3DiAiou0dpE19Xiq5ayTsw0ybNPozukmzHxOq2K82zI.jpg?width=320&crop=smart&auto=webp&s=2fc908310dec596c81fb41678a2272c48918ef19', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3DiAiou0dpE19Xiq5ayTsw0ybNPozukmzHxOq2K82zI.jpg?width=640&crop=smart&auto=webp&s=6bc251ff47e868bf6a5300b806d58a4f016e6df1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3DiAiou0dpE19Xiq5ayTsw0ybNPozukmzHxOq2K82zI.jpg?width=960&crop=smart&auto=webp&s=e2b5285be2c01f5e84b54fe9f9b7e1558b3c2c76', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3DiAiou0dpE19Xiq5ayTsw0ybNPozukmzHxOq2K82zI.jpg?width=1080&crop=smart&auto=webp&s=f1a41820e6ea1c356416cf0edcb648189e6c329b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3DiAiou0dpE19Xiq5ayTsw0ybNPozukmzHxOq2K82zI.jpg?auto=webp&s=1a4e0851997ec9313ecee54c310ff7582a03b1f2', 'width': 1200}, 'variants': {}}]} |
Fine-tuning on the geometric method of Spinoza to build reasoning capabilities in LLMs | 1 | [removed] | 2024-11-29T13:06:42 | https://www.reddit.com/r/LocalLLaMA/comments/1h2le46/finetuning_on_the_geometric_method_of_spinoza_to/ | Anarsheep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2le46 | false | null | t3_1h2le46 | /r/LocalLLaMA/comments/1h2le46/finetuning_on_the_geometric_method_of_spinoza_to/ | false | false | self | 1 | null |
Just installed 2x 7900 XTX on am5 platform | 1 | [removed] | 2024-11-29T13:13:17 | https://www.reddit.com/r/LocalLLaMA/comments/1h2li87/just_installed_2x_7900_xtx_on_am5_platform/ | Frosty_Chest8025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2li87 | false | null | t3_1h2li87 | /r/LocalLLaMA/comments/1h2li87/just_installed_2x_7900_xtx_on_am5_platform/ | false | false | self | 1 | null |
Extracting specific data from PdF Reports | 3 | So I have a bunch of reports (2-3000) and they are all in PDF format and different structures. The reports are from building sites and contain specific incidents such as crushing injury etc. I want to be able to use an agent to go through get reports and extract certain data and present it as such: Site Name, Year, Construction type, incident 1 (description), incident 2 (description) etc. I have no idea where to start having been limited to basic ready made rag setups like gpt4all. Any advice? | 2024-11-29T13:17:53 | https://www.reddit.com/r/LocalLLaMA/comments/1h2lla2/extracting_specific_data_from_pdf_reports/ | grumpyarcpal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2lla2 | false | null | t3_1h2lla2 | /r/LocalLLaMA/comments/1h2lla2/extracting_specific_data_from_pdf_reports/ | false | false | self | 3 | null |
How can I optimize the cost-performance ratio of my motherboard, CPU, graphics card, and memory? | 1 | I recently purchased 4 Tesla P40 GPUs, which have a memory bandwidth of 348GB/s. I plan to buy another 4 4060TI 16G GPUs or 4 P4s, and then invest in a set of EPYC 9654 or EPYC 7002 series CPUs, along with either 24 sticks of 16G DDR5 memory or 16 sticks of 16G DDR4 memory. I understand that the 9654 can achieve a memory bandwidth of about 390GB/s, making it a good match for the P40, while the memory bandwidth of the EPYC 7002 series CPUs is roughly compatible with the P4. I haven't decided on a motherboard yet.
My initial plan is to buy two EPYC 9654 processors, which I intend to use to run two separate DEEPSEEK V2.5 instances. The four P40 GPUs will handle a 72B QWEN2.5 LLM, and the four 4060ti GPUs will be used for an int4 precision llama 3.1 70B llm.
If I buy two EPYC 7002 series CPUs, I intend to use one EPYC 7002 along with two P4 cards to run DEEPSEEK V2.5. Is it also possible to run two? The other four P40 cards will run a 72B QWEN2.5 LLM.
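For what it's worth, the rough matching rule I am using is just memory bandwidth divided by model size, which gives an upper bound on decode speed for memory-bound inference (the ~40 GB figure for a 72B Q4 model is my own estimate):

    # Crude upper bound: tokens/s <= memory bandwidth / bytes read per generated token
    def max_tps(bandwidth_gb_s, model_size_gb):
        return bandwidth_gb_s / model_size_gb

    print(f"P40 (348 GB/s) on a ~40 GB 72B Q4 model: ~{max_tps(348, 40):.1f} t/s")
    print(f"EPYC 9654 (~390 GB/s) on the same model: ~{max_tps(390, 40):.1f} t/s")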
What method do you think best maximizes the use of motherboard slots, ensuring that all hardware is compatible and can fully leverage the capabilities of the motherboard and memory? | 2024-11-29T13:21:32 | https://www.reddit.com/r/LocalLLaMA/comments/1h2lnmo/how_can_i_optimize_the_costperformance_ratio_of/ | No-Librarian8438 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2lnmo | false | null | t3_1h2lnmo | /r/LocalLLaMA/comments/1h2lnmo/how_can_i_optimize_the_costperformance_ratio_of/ | false | false | self | 1 | null |
AI/ML/LLM Black Friday : What Are Your Best Deals? | 1 | [removed] | 2024-11-29T13:24:46 | https://www.reddit.com/r/LocalLLaMA/comments/1h2lpok/aimlllm_black_friday_what_are_your_best_deals/ | __bee_07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2lpok | false | null | t3_1h2lpok | /r/LocalLLaMA/comments/1h2lpok/aimlllm_black_friday_what_are_your_best_deals/ | false | false | self | 1 | null |
Aion-RP-Llama-3.1-8B: The New Roleplaying Virtuoso in Town (Fully Uncensored) | 17 | Hey everyone,
I wanted to introduce Aion-RP-Llama-3.1-8B, a new, fully uncensored model that excels at roleplaying. It scores slightly better than "Llama-3.1-8B-Instruct" on the "character eval" portion of the RPBench-Auto benchmark, while producing more "natural" and "human-like" outputs.
# Where to Access
* **Weights:** Available on Hugging Face:[ aion-labs/Aion-RP-Llama-3.1-8B](https://huggingface.co/aion-labs/Aion-RP-Llama-3.1-8B).
* **Try It:** Use the model for free at[ aionlabs.ai](https://aionlabs.ai).
# Some things worth knowing about
* **Default Temperature:** 0.7 (recommended). Using a temperature of 1.0 may result in nonsensical output sometimes.
* **System Prompt:** Not required, but including detailed instructions in a system prompt can significantly enhance the output.
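If you want to try it from Python, here is a minimal sketch with transformers; it assumes you have enough VRAM for the unquantized 8B weights (quantize or offload otherwise), and the prompt is just an example:

    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="aion-labs/Aion-RP-Llama-3.1-8B",
        device_map="auto",
        torch_dtype="auto",
    )
    out = pipe(
        [{"role": "user", "content": "Stay in character as a grumpy innkeeper and greet me."}],
        max_new_tokens=200,
        do_sample=True,
        temperature=0.7,  # recommended default from above
    )
    print(out[0]["generated_text"][-1]["content"])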
I’ll do my best to answer any questions :) | 2024-11-29T13:25:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h2lqdq/aionrpllama318b_the_new_roleplaying_virtuoso_in/ | AverageButWonderful | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2lqdq | false | null | t3_1h2lqdq | /r/LocalLLaMA/comments/1h2lqdq/aionrpllama318b_the_new_roleplaying_virtuoso_in/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'fNx5vllf4Kc0fXGWBofIv5uw4nz2d5xI_u5k9rXGNcI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IJUaIENxikYFaHzXBHLh0nfIHBitv2VuMigbs5Uugjs.jpg?width=108&crop=smart&auto=webp&s=d68e1d957da5337f1f234b577893c0b6c21d72a5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IJUaIENxikYFaHzXBHLh0nfIHBitv2VuMigbs5Uugjs.jpg?width=216&crop=smart&auto=webp&s=0031a574b1209b35391e6adeece5e901a1684e67', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IJUaIENxikYFaHzXBHLh0nfIHBitv2VuMigbs5Uugjs.jpg?width=320&crop=smart&auto=webp&s=19014e2ee1b7b3b9739a9091cf782d5529824b43', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IJUaIENxikYFaHzXBHLh0nfIHBitv2VuMigbs5Uugjs.jpg?width=640&crop=smart&auto=webp&s=5dad84a55b47e6165797202266473da46184ea8d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IJUaIENxikYFaHzXBHLh0nfIHBitv2VuMigbs5Uugjs.jpg?width=960&crop=smart&auto=webp&s=089c4c8b5b03c83b118f450f33a7e054b392bffb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IJUaIENxikYFaHzXBHLh0nfIHBitv2VuMigbs5Uugjs.jpg?width=1080&crop=smart&auto=webp&s=47064030ddf8d0201fc16cee2774f50644b6137f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IJUaIENxikYFaHzXBHLh0nfIHBitv2VuMigbs5Uugjs.jpg?auto=webp&s=fa3278017a51a5189e78e159f2694d99b0983925', 'width': 1200}, 'variants': {}}]} |
Dual RX 7900 XTX | 19 |
Just installed 2x 7900 XTX on a Ryzen AM5 motherboard running Ubuntu 24.04. Here are some results.
The first model I tried was qwen2.5:72b; it fits 96% in GPU and 4% spills to CPU.
So it is pretty much bottlenecked, but usable for testing. Now I run qwen2.5-coder:32b-instruct-q8\_0 and it fits entirely in VRAM.
Both GPUs have about 70% of their VRAM in use.
When I use Ollama and set Environment="OLLAMA\_NUM\_PARALLEL=3" the memory usage increases a little.
When running multiple prompts at the same time, power usage reaches about 550W total system draw from the wall. The motherboard provides 2x 8x PCIe 4.0 slots to the CPU.
I have a 1000W PSU, which seems to handle the load so far.
Is there any software that would use both cards simultaneously, increasing the inference speed?
Right now I think each GPU is used one at a time.
**Here are some Ollama benchmarks:**
* qwen2.5-coder:32b-instruct-q8\_0 size 39GB (3 parallel) Prompt: write to me drupal10 custom module to integrate with jitsi
* Power usage: about 400W total for GPUs.
* One GPU's graphics pipe is at 90% (83°C) and the other at 50% (60°C).
​
total duration: 2m13.671637912s
load duration: 15.677831ms
prompt eval count: 1602 token(s)
prompt eval duration: 2.612s
prompt eval rate: 613.32 tokens/s
eval count: 1841 token(s)
eval duration: 2m11.005s
eval rate: 14.05 tokens/s
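One thing I still want to try for using both cards on the same request is tensor parallelism, for example through vLLM's Python API; this assumes its ROCm build actually supports these RDNA3 cards, which I have not verified yet:

    from vllm import LLM, SamplingParams

    # tensor_parallel_size=2 splits every layer across both GPUs so they work on the
    # same request simultaneously, instead of one GPU sitting idle per token.
    llm = LLM(model="Qwen/Qwen2.5-Coder-32B-Instruct", tensor_parallel_size=2)
    outputs = llm.generate(["Write a Drupal 10 custom module skeleton."],
                           SamplingParams(max_tokens=256))
    print(outputs[0].outputs[0].text)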
Which software would work best with multiple GPUs as an endpoint provider? I guess Ollama is not the best option for a server, or is it? | 2024-11-29T13:27:31 | https://www.reddit.com/r/LocalLLaMA/comments/1h2lrh2/dual_rx_7900_xtx/ | badabimbadabum2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2lrh2 | false | null | t3_1h2lrh2 | /r/LocalLLaMA/comments/1h2lrh2/dual_rx_7900_xtx/ | false | false | self | 19 | null
Considering fine tuning a chatgpt with my writting | 1 | [removed] | 2024-11-29T13:29:03 | https://www.reddit.com/r/LocalLLaMA/comments/1h2lsfn/considering_fine_tuning_a_chatgpt_with_my_writting/ | Cassoulet-vaincra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2lsfn | false | null | t3_1h2lsfn | /r/LocalLLaMA/comments/1h2lsfn/considering_fine_tuning_a_chatgpt_with_my_writting/ | false | false | self | 1 | null |
CPU attributes for running GGUF models | 1 | [removed] | 2024-11-29T13:45:31 | https://www.reddit.com/r/LocalLLaMA/comments/1h2m3pc/cpu_attributes_for_running_gguf_models/ | Material_Yam_3535 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2m3pc | false | null | t3_1h2m3pc | /r/LocalLLaMA/comments/1h2m3pc/cpu_attributes_for_running_gguf_models/ | false | false | self | 1 | null |
Quantize big model for free? | 16 | Hope this shows up, but I'd like to merge something to qwen-vl 72b. I have it working and while the model is alright, I'd like to take a crack at magnum-vl or turbocat-vl.
Unfortunately my internet sucks so downloading 160gb of weights twice, merging, converting to EXL2 and then uploading would take me the better part of a week.
Sure, I could rent some RunPod or whatever, but that costs money and I have no clue if the model will be good. Is there some free service where I can get this done? I know people were running models on Colab or Kaggle; do either of those give enough disk space or GPU time to complete the task? | 2024-11-29T13:48:24 | https://www.reddit.com/r/LocalLLaMA/comments/1h2m5od/quantize_big_model_for_free/ | a_beautiful_rhind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2m5od | false | null | t3_1h2m5od | /r/LocalLLaMA/comments/1h2m5od/quantize_big_model_for_free/ | false | false | self | 16 | null
A curated list of MCP servers that let AI models access local files, git, databases, and more
| 1 | [removed] | 2024-11-29T14:19:06 | https://www.reddit.com/r/LocalLLaMA/comments/1h2mr7z/a_curated_list_of_mcp_servers_that_let_ai_models/ | NyproTheGeek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2mr7z | false | null | t3_1h2mr7z | /r/LocalLLaMA/comments/1h2mr7z/a_curated_list_of_mcp_servers_that_let_ai_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'FDqgGuI4h80CE0Ls7weP9kwps-tptYLRExNhg66GLSo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rhFyK_5ImR795UBDiX6HXnDUtq-S2k8UrmEJFUadgso.jpg?width=108&crop=smart&auto=webp&s=24fea2713ff478a69b17e208278c4cd96fa75ace', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rhFyK_5ImR795UBDiX6HXnDUtq-S2k8UrmEJFUadgso.jpg?width=216&crop=smart&auto=webp&s=90f2c3312ea0fd2024899a37f5ff3c9af7806063', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rhFyK_5ImR795UBDiX6HXnDUtq-S2k8UrmEJFUadgso.jpg?width=320&crop=smart&auto=webp&s=1a9968973e7607f84185d21a9d42da588bec6a04', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rhFyK_5ImR795UBDiX6HXnDUtq-S2k8UrmEJFUadgso.jpg?width=640&crop=smart&auto=webp&s=21edcc38858224f3e978cb1287d857d27c5898e5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rhFyK_5ImR795UBDiX6HXnDUtq-S2k8UrmEJFUadgso.jpg?width=960&crop=smart&auto=webp&s=23b3c62a090ec9d3d821e962188160c72b92d2f0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rhFyK_5ImR795UBDiX6HXnDUtq-S2k8UrmEJFUadgso.jpg?width=1080&crop=smart&auto=webp&s=a6e2f23ee88337ca4c8b8f0b48312de34b80c290', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rhFyK_5ImR795UBDiX6HXnDUtq-S2k8UrmEJFUadgso.jpg?auto=webp&s=ca0a37dd987e4c7f298a9b4ebdea537add02bd5b', 'width': 1200}, 'variants': {}}]} |
QwQ - Best Way to Separate Thought Process from Final Output? | 45 | I am trying to use QwQ as a drop-in replacement for other LLM's (Claude sonnet in this case), but I am having some unique challenges in using it - mostly because of the way it thinks out loud:
**Main Question:** How can I reliably and consistently determine where the CoT (Chain-of-Thought) ends, and the answer begins?
**1.** Sometimes QwQ writes **Final Answer**, sometimes it writes **Final Solution** (LOL btw), and sometimes something else. It also doesn't reliably listen to requests to mark its answer in a specific way. Is there a token of some kind I can detect that marks the end of thought - or some other technique for delineating the thinking from the answer?
**2.** With Claude Sonnet, I often ask it to output key details wrapped in pre-defined XML tags. Even if I can get QwQ to listen to this (sometimes it does), it will also include data wrapped in the same tags as part of its *thinking* process. Because of this, when I parse out the data with the tags, the results include the tags from when the model was thinking as well. The final solution is most often right, but again, I can't separate it out from the thoughts.
**3.** Because of how long QwQ *thinks* before it answers, I believe it frequently loses sight of certain key aspects of the prompt - specifically around formatting. Again, back to my first issue... if there was a way to detect when the thinking is done, I believe injecting a system prompt *reminder* of sorts at this point could perhaps improve prompt adherence. I've noticed prompt adherence gets worse after long chains of thought.
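For what it's worth, the stopgap I have been sketching for issue #1 is a dumb marker-based splitter over the raw output; the marker list below only covers the variants I have personally seen, so it is a heuristic rather than a real fix:

    import re

    # Split at the last "final answer"-style marker; everything before it is thinking.
    MARKERS = [r"\*\*Final Answer\*\*", r"\*\*Final Solution\*\*",
               r"Final Answer:", r"Final Solution:"]

    def split_thoughts(text):
        last = None
        for pattern in MARKERS:
            for m in re.finditer(pattern, text, flags=re.IGNORECASE):
                if last is None or m.start() > last.start():
                    last = m
        if last is None:
            return text, ""  # no marker found: treat the whole output as thinking
        return text[:last.start()], text[last.end():].strip()

    thoughts, answer = split_thoughts(model_output)  # model_output = raw QwQ completion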
I would love some suggestions on how to handle the above. Solving issue #1 above would have the most positive impact and make this model a lot more useful and interesting. | 2024-11-29T14:35:41 | https://www.reddit.com/r/LocalLLaMA/comments/1h2n4ct/qwq_best_way_to_separate_thought_process_from/ | JustinPooDough | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2n4ct | false | null | t3_1h2n4ct | /r/LocalLLaMA/comments/1h2n4ct/qwq_best_way_to_separate_thought_process_from/ | false | false | self | 45 | null |
Docker + Colab + Unsloth = Most Convenient VLM Fine-Tuning Environment on local 16 GB RAM GPU! | 1 | [Tech used in my workflow](https://preview.redd.it/szhcv25wtu3e1.png?width=982&format=png&auto=webp&s=2e22602c4dc55c7a40ec09587e01dc336164206d)
[VLM Fine-Tuning Going on Locally](https://preview.redd.it/j7ldyrswtu3e1.png?width=750&format=png&auto=webp&s=fe8fc0dd991342043c38b2d3b357b76997ffb55a)
[Utilization of 99% with 12.1 of VRAM used from 16 GB GPU](https://preview.redd.it/fzqc9nhxtu3e1.png?width=996&format=png&auto=webp&s=4e8fddb2156ba9bd49420c6e0cf5f7f907724f5d)
Since unsloth has finally been updated to support fine-tuning VLMs ([Llama 3.2 Vision (11B)](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing), [Qwen 2.5 VL (7B)](https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing) and [Pixtral (12B) 2409](https://colab.research.google.com/drive/1K9ZrdwvZRE96qGkCq_e88FgV3MLnymQq?usp=sharing)) a few days ago, I tried setting up a local environment on my gaming laptop with RTX3080 Ti using docker, then connect the local runtime to colab and love it so much as I can:
* Work directly with all the files on my laptop with complete data privacy
* Keep the model files on my laptop so the download is only needed once
* Does not need to keep reinstalling libraries every time I reconnect to colab
Note: unsloth notebooks are very well designed and they are a pleasure to work with.
For anyone who has the RTX 3080 Ti, RTX 3090, or RTX 4090, I would recommend to try out the set-up for fine-tuning!
If you are interested, you can read more here: [One GPU Is All You Need: Fine-Tuning 12B Vision-Language Models Locally | Nov, 2024 | AI Advances](https://medium.com/ai-advances/one-gpu-is-all-you-need-fine-tuning-12b-vision-language-models-locally-21438284dc66) | 2024-11-29T14:57:21 | https://www.reddit.com/r/LocalLLaMA/comments/1h2nka2/docker_colab_unsloth_most_convenient_vlm/ | AdOdd4004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2nka2 | false | null | t3_1h2nka2 | /r/LocalLLaMA/comments/1h2nka2/docker_colab_unsloth_most_convenient_vlm/ | false | false | 1 | null |
|
Docker + Colab + Unsloth = Most Convenient VLM Fine-Tuning Environment on local 16GB RAM GPU! | 89 | Since unsloth has finally been updated to support fine-tuning VLMs ([Llama 3.2 Vision (11B)](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing), [Qwen 2.5 VL (7B)](https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing) and [Pixtral (12B) 2409](https://colab.research.google.com/drive/1K9ZrdwvZRE96qGkCq_e88FgV3MLnymQq?usp=sharing)) a few days ago, I tried setting up a local environment on my gaming laptop with RTX3080 Ti using docker, then connect the local runtime to colab and love it so much as I can:
* Work directly with all the files on my laptop with complete data privacy
* Keep the model files on my laptop so the download is only needed once
* Does not need to keep reinstalling libraries every time I reconnect to colab
Note: unsloth notebooks are very well designed and they are a pleasure to work with.
For anyone who has the RTX 3080 Ti, RTX 3090, or RTX 4090, I would recommend to try out the set-up for fine-tuning!
If you are interested, you can read more here: [One GPU Is All You Need: Fine-Tuning 12B Vision-Language Models Locally | Nov, 2024 | AI Advances](https://medium.com/ai-advances/one-gpu-is-all-you-need-fine-tuning-12b-vision-language-models-locally-21438284dc66) | 2024-11-29T15:04:04 | https://www.reddit.com/r/LocalLLaMA/comments/1h2npkw/docker_colab_unsloth_most_convenient_vlm/ | AdOdd4004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2npkw | false | null | t3_1h2npkw | /r/LocalLLaMA/comments/1h2npkw/docker_colab_unsloth_most_convenient_vlm/ | false | false | self | 89 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
AtheneV2 - this model i heard beats claude sonnet 3.5 | 0 | https://ollama.com/library/athene-v2 | 2024-11-29T16:10:19 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h2p512 | false | null | t3_1h2p512 | /r/LocalLLaMA/comments/1h2p512/athenev2_this_model_i_heard_beats_claude_sonnet_35/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'HgGKV5Avibq0xcKqlVG03O7RPDGvH09jsz_a5CGXJvc', 'resolutions': [{'height': 175, 'url': 'https://preview.redd.it/1x43ywec7v3e1.png?width=108&crop=smart&auto=webp&s=2ba36e51bcdad8f18929cc4a8ed5a1357ed4213a', 'width': 108}, {'height': 350, 'url': 'https://preview.redd.it/1x43ywec7v3e1.png?width=216&crop=smart&auto=webp&s=d0fb25c09def63977834a648d502cb760fe7677d', 'width': 216}, {'height': 518, 'url': 'https://preview.redd.it/1x43ywec7v3e1.png?width=320&crop=smart&auto=webp&s=2af4934fc6bc494f116961a8c19dec48914c4bb5', 'width': 320}, {'height': 1037, 'url': 'https://preview.redd.it/1x43ywec7v3e1.png?width=640&crop=smart&auto=webp&s=fad9f4519e375f7dc809b46e798ecd2949f6cee2', 'width': 640}, {'height': 1555, 'url': 'https://preview.redd.it/1x43ywec7v3e1.png?width=960&crop=smart&auto=webp&s=67445ef18d7933bbf8162b1d543af0681b8ef08d', 'width': 960}, {'height': 1750, 'url': 'https://preview.redd.it/1x43ywec7v3e1.png?width=1080&crop=smart&auto=webp&s=8122405a5cfb64c0ca2e4426ce0b67ddf3931bd0', 'width': 1080}], 'source': {'height': 1750, 'url': 'https://preview.redd.it/1x43ywec7v3e1.png?auto=webp&s=626407d78ab4cd9e0bfd7b638dcffd759c5891ad', 'width': 1080}, 'variants': {}}]} |
||
AtheneV2 72B- fine tuned version of qwen specializes in code completion , mathematics and log extraction | 118 | 2024-11-29T16:32:28 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h2pmzd | false | null | t3_1h2pmzd | /r/LocalLLaMA/comments/1h2pmzd/athenev2_72b_fine_tuned_version_of_qwen/ | false | false | 118 | {'enabled': True, 'images': [{'id': 'xVnK5DqIVZhe06YvPn8WCS9E8p0XE9CIqIrlAPzwhDk', 'resolutions': [{'height': 176, 'url': 'https://preview.redd.it/90unt7nabv3e1.png?width=108&crop=smart&auto=webp&s=73ec5e42f3e9a5188da475a7794a4215a6d1e649', 'width': 108}, {'height': 353, 'url': 'https://preview.redd.it/90unt7nabv3e1.png?width=216&crop=smart&auto=webp&s=44e4a5b2459b11a573302f2d26162008b2257cce', 'width': 216}, {'height': 524, 'url': 'https://preview.redd.it/90unt7nabv3e1.png?width=320&crop=smart&auto=webp&s=6858e5463571563f80f96108a43f9af144248a30', 'width': 320}, {'height': 1048, 'url': 'https://preview.redd.it/90unt7nabv3e1.png?width=640&crop=smart&auto=webp&s=ef012ea07286b110caf8063bff9139763d6c49c3', 'width': 640}, {'height': 1572, 'url': 'https://preview.redd.it/90unt7nabv3e1.png?width=960&crop=smart&auto=webp&s=341d05357fdc3c2727b7c7e5f477444ec664b238', 'width': 960}, {'height': 1769, 'url': 'https://preview.redd.it/90unt7nabv3e1.png?width=1080&crop=smart&auto=webp&s=8dd901211a79b3fb0538915102ea675240cb694e', 'width': 1080}], 'source': {'height': 1769, 'url': 'https://preview.redd.it/90unt7nabv3e1.png?auto=webp&s=766dd9de11eae031c44f43cd9af8ab6564cad32b', 'width': 1080}, 'variants': {}}]} |
|||
I built a free tool to create portfolios in 1 click (because I was too lazy to code mine 😅) | 12 | Hey everyone,
I wanted to share my latest project with you all. So here’s the story: I’m a developer (and like many of you here, I love AI, no-code tools, and tinkering). Recently, I needed a portfolio to showcase my projects, skills, and services. I thought, “Okay, let’s build one.” But then reality hit me… **I was way too lazy to code the whole thing myself.** 😅
Between designing something that looks good, optimizing for SEO, making it responsive, and setting up hosting, I realized I didn’t have the time or energy for it. That’s when I thought, **why not build a tool to make this easier?** Something not just for me, but for anyone who wants a clean, professional portfolio without all the hassle.
So, I rolled up my sleeves and built **Freel**. It’s **free**, super easy to use, and lets you create a portfolio in minutes. You can import your LinkedIn in one click, add your projects, services, skills, and even track how it’s performing with built-in analytics. It’s fully hosted, blazing fast, mobile-friendly, and SEO-optimized.
I developed it myself, so if you find any bugs or feel like it’s missing something, **please let me know!** I’d love to hear your feedback, ideas, or feature requests to make it better for everyone.
Here’s the link if you want to check it out: [freel.io](https://freel.io)
I hope it’s as helpful for you as I wanted it to be for me. 😊 Feel free to reach out! | 2024-11-29T16:41:41 | https://www.reddit.com/r/LocalLLaMA/comments/1h2puii/i_built_a_free_tool_to_create_portfolios_in_1/ | aminebenani94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2puii | false | null | t3_1h2puii | /r/LocalLLaMA/comments/1h2puii/i_built_a_free_tool_to_create_portfolios_in_1/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'TURnzdht_bivqTHTyW390M1E2Emm5uqEZL_qq_---eA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/I1n8l2oapllINSdX8cy7YBiVxMLRhl7ikgCzKvyKFYI.jpg?width=108&crop=smart&auto=webp&s=56155de6be427e503f50c3e988e398154f430172', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/I1n8l2oapllINSdX8cy7YBiVxMLRhl7ikgCzKvyKFYI.jpg?width=216&crop=smart&auto=webp&s=71e21af02ac097f4d79aa4832a1b1b34ffac66ff', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/I1n8l2oapllINSdX8cy7YBiVxMLRhl7ikgCzKvyKFYI.jpg?width=320&crop=smart&auto=webp&s=829268205cb80a3f2cc24950c43a4285106a39cc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/I1n8l2oapllINSdX8cy7YBiVxMLRhl7ikgCzKvyKFYI.jpg?width=640&crop=smart&auto=webp&s=a1199770e1056a907b562dd58084f5526d2da62f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/I1n8l2oapllINSdX8cy7YBiVxMLRhl7ikgCzKvyKFYI.jpg?width=960&crop=smart&auto=webp&s=e16d91a52f54abd304c1356560ccd0c78fa7a336', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/I1n8l2oapllINSdX8cy7YBiVxMLRhl7ikgCzKvyKFYI.jpg?width=1080&crop=smart&auto=webp&s=0f4ce534f455b967fddc7c427fb2e7782d65378f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/I1n8l2oapllINSdX8cy7YBiVxMLRhl7ikgCzKvyKFYI.jpg?auto=webp&s=e70ed713b26a6a9f09a00155d7bbd28fa2266beb', 'width': 1200}, 'variants': {}}]} |
[BLACK FRIDAY] Perplexity AI PRO - 1 YEAR PLAN OFFER - 75% OFF | 1 | [removed] | 2024-11-29T16:42:35 | MReus11R | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h2pv7k | false | null | t3_1h2pv7k | /r/LocalLLaMA/comments/1h2pv7k/black_friday_perplexity_ai_pro_1_year_plan_offer/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'MK_5pxyOqH2lL-JJzPcySwK4UhzEGstQg5WeNj-R564', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/xpeljru2dv3e1.jpeg?width=108&crop=smart&auto=webp&s=2d52dcb09edc9285a5490c73f404d915676e1370', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/xpeljru2dv3e1.jpeg?width=216&crop=smart&auto=webp&s=3910dc71e9a116677555bed8082e42e8c1da1c6f', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/xpeljru2dv3e1.jpeg?width=320&crop=smart&auto=webp&s=7a2642c4bd0af2f9dd6d37b3c0503db609084c3f', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/xpeljru2dv3e1.jpeg?width=640&crop=smart&auto=webp&s=42537e75dc8c68c6797ef6802d0cb7411330a972', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/xpeljru2dv3e1.jpeg?width=960&crop=smart&auto=webp&s=a52331df7f1cce2fea5db3b50269859f3490f85d', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/xpeljru2dv3e1.jpeg?width=1080&crop=smart&auto=webp&s=aebf23831e36517e1a5678a664e1691cfcdc62d1', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://preview.redd.it/xpeljru2dv3e1.jpeg?auto=webp&s=29b92978ad27ac63cd71771f957ebaa41fff9cb5', 'width': 2000}, 'variants': {}}]} |
||
Optimizing XTTS-v2: Vocalize the first *Harry Potter* book in 10 minutes & ~10GB VRAM. | 1 | [removed] | 2024-11-29T17:03:23 | https://www.reddit.com/r/LocalLLaMA/comments/1h2qc7p/optimizing_xttsv2_vocalize_the_first_harry_potter/ | Similar_Choice_9241 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2qc7p | false | null | t3_1h2qc7p | /r/LocalLLaMA/comments/1h2qc7p/optimizing_xttsv2_vocalize_the_first_harry_potter/ | false | false | self | 1 | null |
More info on "Qwen with Questions" quantized at "q_8" using MathJax | 0 | Just more interesting results on QWQ. Been having fun testing it for sure.
https://preview.redd.it/16m3okltnv3e1.png?width=3840&format=png&auto=webp&s=6eb1a978add47da41fc13f6831f7faecba7e4f3c
| 2024-11-29T17:43:24 | https://www.reddit.com/r/LocalLLaMA/comments/1h2r9if/more_info_on_qwen_with_questions_quantized_at_q_8/ | SAV_NC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2r9if | false | null | t3_1h2r9if | /r/LocalLLaMA/comments/1h2r9if/more_info_on_qwen_with_questions_quantized_at_q_8/ | false | false | 0 | null |
|
Best r0lep1lay models under 36GB VRAM? | 1 | [removed] | 2024-11-29T18:30:39 | https://www.reddit.com/r/LocalLLaMA/comments/1h2sctp/best_r0lep1lay_models_under_36gb_vram/ | Aggravating-Can-4392 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2sctp | false | null | t3_1h2sctp | /r/LocalLLaMA/comments/1h2sctp/best_r0lep1lay_models_under_36gb_vram/ | false | false | self | 1 | null |
I tested the best models that can run on an iPhone 14 Pro | 55 | I have an iPhone 14 Pro; these are the best models I can run on it at usable speeds.
Llama-3.2-3B Q8
Qwen-2.5-3B Q8
Gemma-2-2b Q8
phi-mini-instruct Q8
Using the PocketPal app I’ve been able to run anything under 4.2GB, but at that size it can be really slow. The speed-to-intelligence tradeoff is best at around 3.5GB.
I wish the new Mistral 3B was released :(
Please let me know if I’m missing anything worthwhile, or better than these. | 2024-11-29T18:38:21 | https://www.reddit.com/r/LocalLLaMA/comments/1h2sjeu/i_tested_the_best_models_that_can_run_on_an/ | Mr-Barack-Obama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2sjeu | false | null | t3_1h2sjeu | /r/LocalLLaMA/comments/1h2sjeu/i_tested_the_best_models_that_can_run_on_an/ | false | false | self | 55 | null |
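For a rough sense of what fits under that 4.2GB ceiling before downloading, here is a back-of-the-envelope sketch in Python. It assumes Q8_0 at roughly 8.5 bits per weight (1 byte per weight plus a small per-block scale), and the parameter counts are approximate, not official figures:

    # Rough size check: will a Q8_0 GGUF fit under the ~4.2 GB limit mentioned above?
    BYTES_PER_WEIGHT_Q8_0 = 8.5 / 8  # assumption: ~8.5 bits per weight for Q8_0

    def q8_gguf_size_gb(params_billions: float) -> float:
        """Approximate size of a Q8_0 GGUF in GB (billions of bytes)."""
        return params_billions * BYTES_PER_WEIGHT_Q8_0

    # Approximate parameter counts, for illustration only
    models = {"Llama-3.2-3B": 3.2, "Qwen2.5-3B": 3.1, "Gemma-2-2b": 2.6, "Phi-3.5-mini": 3.8}
    for name, billions in models.items():
        size = q8_gguf_size_gb(billions)
        print(f"{name}: ~{size:.1f} GB -> {'under' if size < 4.2 else 'over'} the 4.2 GB limit")

By this estimate the ~3.5GB sweet spot corresponds to roughly a 3B-parameter model at Q8, which lines up with the list above.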
qwen2.5 14b + bolt.new | 1 | [removed] | 2024-11-29T19:07:31 | https://www.reddit.com/r/LocalLLaMA/comments/1h2t80m/qwen25_14b_boltnew/ | Expensive_Grocery747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2t80m | false | null | t3_1h2t80m | /r/LocalLLaMA/comments/1h2t80m/qwen25_14b_boltnew/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 't_pHEMGKQ6DAGq3kscBApVGEiLbZMGiN-d4WTMkTggQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=108&crop=smart&auto=webp&s=f9bb55c9279ce0742847c88b5626fbc553bbf5b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=216&crop=smart&auto=webp&s=e1908729c74b3588212435422da59168d85d8660', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=320&crop=smart&auto=webp&s=4d949abbbc31e568f121c9c5eaed3e0846f3722e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=640&crop=smart&auto=webp&s=97e67439d1ec5fe9d8e6cb0ba95abe56adce52a7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=960&crop=smart&auto=webp&s=f3bae916e90b40bc5edd90180a00602bab76d6cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?width=1080&crop=smart&auto=webp&s=d939cfbb76db5c7e138d37bd365f33690c45b6b1', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/r_VBSj_S5VT9KDr7w9hNmFJR8hQg8Wtg2p31yTX1st8.jpg?auto=webp&s=eb32f09811c1b406241d8ffa47361db3034299c6', 'width': 2400}, 'variants': {}}]} |
Simple Prebuilt Local AI Assistant with tools and vision! | 4 | https://reddit.com/link/1h2tlmn/video/cev4uqi55w3e1/player
[https://github.com/cp3249/splaa](https://github.com/cp3249/splaa) | 2024-11-29T19:23:58 | https://www.reddit.com/r/LocalLLaMA/comments/1h2tlmn/simple_prebuilt_local_ai_assistant_with_tools_and/ | Cloudscrypts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2tlmn | false | null | t3_1h2tlmn | /r/LocalLLaMA/comments/1h2tlmn/simple_prebuilt_local_ai_assistant_with_tools_and/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'FDLk0xsyFA194_hHjrZm_ohzTbe85aeYMGJjpCMpXE8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/raJJ0qX75xgiT7Bj3dOnRpYl3qdq3hXFKuGZxSdrqAQ.jpg?width=108&crop=smart&auto=webp&s=23554116acd30ea83e54013e163d94ce31fff773', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/raJJ0qX75xgiT7Bj3dOnRpYl3qdq3hXFKuGZxSdrqAQ.jpg?width=216&crop=smart&auto=webp&s=512f35143f1bd869dee6f7a938613976e49a8b66', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/raJJ0qX75xgiT7Bj3dOnRpYl3qdq3hXFKuGZxSdrqAQ.jpg?width=320&crop=smart&auto=webp&s=0aaeedac977aa63be0f6382647c073850144acfc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/raJJ0qX75xgiT7Bj3dOnRpYl3qdq3hXFKuGZxSdrqAQ.jpg?width=640&crop=smart&auto=webp&s=46c42d3dda325ea3f12d97cf4a4579d75df53479', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/raJJ0qX75xgiT7Bj3dOnRpYl3qdq3hXFKuGZxSdrqAQ.jpg?width=960&crop=smart&auto=webp&s=e48c8bd6117fc43305e5201041ac45a781396e1b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/raJJ0qX75xgiT7Bj3dOnRpYl3qdq3hXFKuGZxSdrqAQ.jpg?width=1080&crop=smart&auto=webp&s=9bfd00f673d90da0e22f27ee3199b3cbb6d54e5d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/raJJ0qX75xgiT7Bj3dOnRpYl3qdq3hXFKuGZxSdrqAQ.jpg?auto=webp&s=81c61e6cd43ea973e080ebc5d6fe22470630f361', 'width': 1200}, 'variants': {}}]} |
|
Best approach for creating a GitHub repo Q&A system with a local LLM? | 1 | [removed] | 2024-11-29T19:32:26 | https://www.reddit.com/r/LocalLLaMA/comments/1h2tsn0/best_approach_for_creating_a_github_repo_qa/ | Quiet_Figure_4483 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2tsn0 | false | null | t3_1h2tsn0 | /r/LocalLLaMA/comments/1h2tsn0/best_approach_for_creating_a_github_repo_qa/ | false | false | self | 1 | null |
OpenRouter client with API vision? | 2 | I know this isn't strictly local, but I'm not sure where else to ask where people know about these things.

Simple question: I'm looking for PC and Android OpenRouter clients that support sending images to vision models through the API. Does anyone know of any? So far, I haven't had any luck. The closest thing is the OR playground, but it has a 42 message limit for some reason. Are there any alternatives I can use on PC and Android? | 2024-11-29T19:34:11 | https://www.reddit.com/r/LocalLLaMA/comments/1h2tu2h/openrouter_client_with_api_vision/ | Concheria | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2tu2h | false | null | t3_1h2tu2h | /r/LocalLLaMA/comments/1h2tu2h/openrouter_client_with_api_vision/ | false | false | self | 2 | null |
Introducing whisper_cpp_macos_utils: A Terminal Workflow for Audio Transcription on macOS | 6 | Hey r/LocalLLaMA,
I wanted to share [**whisper\_cpp\_macos\_utils**](https://github.com/mdbecker/whisper_cpp_macos_utils), a project I created to help streamline audio transcription on macOS using OpenAI’s Whisper via [whisper.cpp](https://github.com/ggerganov/whisper.cpp). This is a lightweight, terminal-based solution that glues together tools like QuickTime Player, BlackHole-2ch, and FFmpeg with bash scripts for an efficient, fully local workflow.
**Why I Built This:**
During meetings, I wanted to focus on discussions instead of taking notes, so I created this to record, process, and transcribe audio files locally without relying on cloud services or standalone apps. It’s ideal for anyone who prefers a shell-based approach and is comfortable with open-source tools.
**Key Features:**
* **Terminal-First Workflow:** Designed for users who love working in the shell.
* **Modular Design:** Use individual scripts for tasks like audio retrieval, conversion, and transcription, or chain them together for full automation.
* **Local Processing:** Compile whisper.cpp directly on your machine for privacy and performance.
* **Lightweight:** No extra bloat—just well-known tools like FFmpeg and Whisper.cpp, glued together with bash.
* **Flexible:** Generic scripts that can be easily adapted or customized to suit your needs.
**What’s New:**
I’ve worked hard to make the scripts more generic and easier for others to use. That said, these changes might have introduced bugs—if you find any, please submit an issue on the repo. Better yet, feel free to submit a fix or new feature!
**Who’s It For?**
* Terminal-savvy users who value control and transparency.
* Privacy-conscious professionals who prefer local tools over cloud solutions.
* DIY enthusiasts who want a simple, open-source alternative to standalone apps.
**How to Get Started:**
You’ll need a few basics installed (Homebrew, BlackHole-2ch, FFmpeg, Xcode tools). Check out the [README](https://github.com/mdbecker/whisper_cpp_macos_utils#readme) for setup instructions and examples.
**Feedback and Contributions Wanted!**
If you try it out, let me know what you think! I’d love to hear how it works for you, and contributions are always welcome. Whether it’s a bug fix, feature idea, or general feedback, your input will help make this project better for everyone.
**Repo Link:** [https://github.com/mdbecker/whisper\_cpp\_macos\_utils](https://github.com/mdbecker/whisper_cpp_macos_utils)
Looking forward to hearing your thoughts! | 2024-11-29T19:53:34 | https://www.reddit.com/r/LocalLLaMA/comments/1h2u9ed/introducing_whisper_cpp_macos_utils_a_terminal/ | beckerfuffle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2u9ed | false | null | t3_1h2u9ed | /r/LocalLLaMA/comments/1h2u9ed/introducing_whisper_cpp_macos_utils_a_terminal/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'yTBwwZkaOwXXD8AZpL1JfJLKRJ-N-x_SUvHgGS1SmB8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YIV_ZDrkDJhBUboyNXvqR8yBWrgUGjwY7gaAis_vJNg.jpg?width=108&crop=smart&auto=webp&s=8276d81d26b1ae9409a0ea7b7b7e97cb7d80945e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YIV_ZDrkDJhBUboyNXvqR8yBWrgUGjwY7gaAis_vJNg.jpg?width=216&crop=smart&auto=webp&s=21d488ccdea883bf4f70f345a8e2dccb77f79a40', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YIV_ZDrkDJhBUboyNXvqR8yBWrgUGjwY7gaAis_vJNg.jpg?width=320&crop=smart&auto=webp&s=de635f39e3eb7d97c42334a8ee8864082deb760b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YIV_ZDrkDJhBUboyNXvqR8yBWrgUGjwY7gaAis_vJNg.jpg?width=640&crop=smart&auto=webp&s=0c4e85fb9c0e0b82cb382be1c5b7a161a576832e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YIV_ZDrkDJhBUboyNXvqR8yBWrgUGjwY7gaAis_vJNg.jpg?width=960&crop=smart&auto=webp&s=76d043489b3da2e1803fc79e655d7f8ee5652f3f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YIV_ZDrkDJhBUboyNXvqR8yBWrgUGjwY7gaAis_vJNg.jpg?width=1080&crop=smart&auto=webp&s=7404f70364cbd14cafa7858bd7114f09315fad6a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YIV_ZDrkDJhBUboyNXvqR8yBWrgUGjwY7gaAis_vJNg.jpg?auto=webp&s=49efebd356a8c91904898ca90703876042d82015', 'width': 1200}, 'variants': {}}]} |
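The repo's glue is bash, but the core of the workflow (convert whatever was recorded into the 16 kHz mono WAV that whisper.cpp expects, then run the transcriber) can be sketched in a few lines of Python. This is a minimal sketch, assuming ffmpeg is on PATH and whisper.cpp has already been compiled with a ggml model downloaded; the binary and model paths below are placeholders, so check the repo's README for the real script names and options:

    # Minimal convert-then-transcribe sketch; paths are assumptions, not the repo's actual layout.
    import subprocess
    from pathlib import Path

    WHISPER_BIN = Path("whisper.cpp/main")  # assumption: compiled CLI ("main" in older checkouts, "whisper-cli" in newer ones)
    WHISPER_MODEL = Path("whisper.cpp/models/ggml-base.en.bin")  # assumption: a downloaded ggml model

    def transcribe(recording: Path) -> Path:
        wav = recording.with_suffix(".wav")
        # whisper.cpp expects 16 kHz mono 16-bit PCM WAV
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(recording),
             "-ar", "16000", "-ac", "1", "-c:a", "pcm_s16le", str(wav)],
            check=True,
        )
        # -otxt writes the transcript next to the input as <wav>.txt
        subprocess.run(
            [str(WHISPER_BIN), "-m", str(WHISPER_MODEL), "-f", str(wav), "-otxt"],
            check=True,
        )
        return Path(str(wav) + ".txt")

    if __name__ == "__main__":
        print(transcribe(Path("meeting.m4a")))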
Question about transformers library and latency | 1 | When the softmax is calculated from the output of the QK attention matrix, doesn’t the CPU have to retrieve the weights from the GPU and then calculate softmax? I assume the GPU can’t perform softmax or non-linear activation functions, and that those happen CPU-side. Is there a way to speed this up? | 2024-11-29T20:11:20 | https://www.reddit.com/r/LocalLLaMA/comments/1h2unmd/question_about_transformers_library_and_latency/ | Ok-Cicada-5207 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2unmd | false | null | t3_1h2unmd | /r/LocalLLaMA/comments/1h2unmd/question_about_transformers_library_and_latency/ | false | false | self | 1 | null |
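One quick way to check that premise empirically: in PyTorch, softmax and the other nonlinearities run as kernels on whatever device the tensor lives on, so nothing has to round-trip through the CPU. A minimal sketch, assuming a CUDA-capable PyTorch install:

    # Where does the softmax actually run? Assumes PyTorch with CUDA available.
    import math
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    heads, seq, d = 8, 1024, 64
    q = torch.randn(1, heads, seq, d, device=device)
    k = torch.randn(1, heads, seq, d, device=device)

    scores = q @ k.transpose(-2, -1) / math.sqrt(d)  # QK^T / sqrt(d), computed on `device`
    attn = torch.softmax(scores, dim=-1)             # also computed on `device`, no CPU copy

    print(attn.device)  # "cuda:0" when a GPU is present

Data only moves back to the host when something like .cpu(), .item(), or printing tensor values forces a copy.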
from interior image to 3D interactive model | 4 | Hello everyone, hope you are well. Does anyone know of, or have an idea about, how to convert an interior (panorama) image into a 3D model using AI?
| 2024-11-29T20:34:53 | https://www.reddit.com/r/LocalLLaMA/comments/1h2v66n/from_interior_image_to_3d_interactive_model/ | LahmeriMohamed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2v66n | false | null | t3_1h2v66n | /r/LocalLLaMA/comments/1h2v66n/from_interior_image_to_3d_interactive_model/ | false | false | self | 4 | null |
What is the best open-source platform to set up AI to format emails from financial data? | 4 | Hello, I have a bookkeeping business. I have been playing with Llama 3.2 8B and it seems pretty smart at analyzing numbers. I am looking to streamline this so I can upload financial data like a profit and loss statement, have it analyzed, and get a follow-up email drafted asking for more info on wonky numbers. Does anyone know of a good open-source platform on GitHub to analyze financial data like a trial balance or P&L? | 2024-11-29T20:46:53 | https://www.reddit.com/r/LocalLLaMA/comments/1h2vfq4/what_is_the_best_open_source_platform_to_set_up/ | TwistNecessary7182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2vfq4 | false | null | t3_1h2vfq4 | /r/LocalLLaMA/comments/1h2vfq4/what_is_the_best_open_source_platform_to_set_up/ | false | false | self | 4 | null |
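One lightweight way to prototype this before committing to a platform is to point a short script at a local OpenAI-compatible server (llama.cpp's llama-server, Ollama, LM Studio, and similar all expose one) and paste the P&L in as text. A minimal sketch; the endpoint URL and model name are assumptions, so adjust them to whatever is running locally:

    # Sketch: send a profit & loss as text to a local OpenAI-compatible endpoint
    # and ask it to draft a follow-up email about unusual numbers.
    import requests

    API_URL = "http://localhost:11434/v1/chat/completions"  # assumption: Ollama's OpenAI-compatible API
    MODEL = "llama3.2"                                       # assumption: whatever model is loaded locally

    def draft_followup_email(pnl_text: str) -> str:
        resp = requests.post(API_URL, json={
            "model": MODEL,
            "messages": [
                {"role": "system", "content": "You are a bookkeeping assistant. Flag unusual line items "
                                              "and draft a polite follow-up email asking for clarification."},
                {"role": "user", "content": pnl_text},
            ],
        }, timeout=120)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    with open("profit_and_loss.csv") as f:
        print(draft_followup_email(f.read()))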
george-ai: An API leveraging AI to make it easy to control a computer with natural language. | 1 | [removed] | 2024-11-29T20:53:26 | https://www.reddit.com/r/LocalLLaMA/comments/1h2vkov/georgeai_an_api_leveraging_ai_to_make_it_easy_to/ | logan__keenan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2vkov | false | null | t3_1h2vkov | /r/LocalLLaMA/comments/1h2vkov/georgeai_an_api_leveraging_ai_to_make_it_easy_to/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'a0FN5OdPmhefWG088YunlyTYSsvZ3Q-wrf-WNc3BiK0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6lKJfKe4fFvIIVr_wgfPOm2SWj_xSu1lQT8AEqxgThg.jpg?width=108&crop=smart&auto=webp&s=317baec88aa71841b961a2b403b53d4617fc28d3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6lKJfKe4fFvIIVr_wgfPOm2SWj_xSu1lQT8AEqxgThg.jpg?width=216&crop=smart&auto=webp&s=5980039802d3f0504985621d3d7b7273f683fa0b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6lKJfKe4fFvIIVr_wgfPOm2SWj_xSu1lQT8AEqxgThg.jpg?width=320&crop=smart&auto=webp&s=054b0e76def9c8beeef1ae251f0ee43bd2dda46f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6lKJfKe4fFvIIVr_wgfPOm2SWj_xSu1lQT8AEqxgThg.jpg?width=640&crop=smart&auto=webp&s=3f415e52c3e6fe8c454fbfb7126a920ad8cfffa0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6lKJfKe4fFvIIVr_wgfPOm2SWj_xSu1lQT8AEqxgThg.jpg?width=960&crop=smart&auto=webp&s=f4c07c851039c32999b91ad24115ac2baaa0073f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6lKJfKe4fFvIIVr_wgfPOm2SWj_xSu1lQT8AEqxgThg.jpg?width=1080&crop=smart&auto=webp&s=d4537125adf3c7cd7fee6ac463a94893fc53280f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6lKJfKe4fFvIIVr_wgfPOm2SWj_xSu1lQT8AEqxgThg.jpg?auto=webp&s=6c6dc5f0ff34d7905230a70fb724fa6565a625b3', 'width': 1200}, 'variants': {}}]} |
|
What LLM would be best to use as a teacher for a child? | 1 | [removed] | 2024-11-29T20:54:13 | https://www.reddit.com/r/LocalLLaMA/comments/1h2vla3/what_lllm_would_be_best_to_use_as_a_teacher_for_a/ | antool13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2vla3 | false | null | t3_1h2vla3 | /r/LocalLLaMA/comments/1h2vla3/what_lllm_would_be_best_to_use_as_a_teacher_for_a/ | false | false | self | 1 | null |
george-ai: An API leveraging AI to make it easy to control a computer with natural language. | 65 | 2024-11-29T20:57:54 | https://v.redd.it/kk68etykmw3e1 | logan__keenan | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h2vo3k | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kk68etykmw3e1/DASHPlaylist.mpd?a=1735505889%2CMjcwZGEyNTI5NzcxYmQ1OTJmYTFiMjU3ODVjMTVkNGYyNGZmMDE0NTRmOWQwYjE4ODcwZWY3OWY5ZTY0ZjZlZQ%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/kk68etykmw3e1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/kk68etykmw3e1/HLSPlaylist.m3u8?a=1735505889%2CMjVmZThkNWM4OTA0OTUxN2NiYzA3N2EzMzJkNWJlZjcyYjA0YTY4Y2IyODliMzAzNjQ1NzM3YTFkMmQ3MDNiZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kk68etykmw3e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1728}} | t3_1h2vo3k | /r/LocalLLaMA/comments/1h2vo3k/georgeai_an_api_leveraging_ai_to_make_it_easy_to/ | false | false | 65 | {'enabled': False, 'images': [{'id': 'bGw2Mmh1eWttdzNlMYO8Onh6DVyGr650yVP4zVXqRTCM5cf7NW6IjGnqkdzY', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/bGw2Mmh1eWttdzNlMYO8Onh6DVyGr650yVP4zVXqRTCM5cf7NW6IjGnqkdzY.png?width=108&crop=smart&format=pjpg&auto=webp&s=e4ce805bacd61e86be605663fdf27958869722cf', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/bGw2Mmh1eWttdzNlMYO8Onh6DVyGr650yVP4zVXqRTCM5cf7NW6IjGnqkdzY.png?width=216&crop=smart&format=pjpg&auto=webp&s=4ca6f318365afaa182edc2b76ec2c9792b1282fe', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/bGw2Mmh1eWttdzNlMYO8Onh6DVyGr650yVP4zVXqRTCM5cf7NW6IjGnqkdzY.png?width=320&crop=smart&format=pjpg&auto=webp&s=a23ee5e785c18fc52f7cb4279558a5d650976d58', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/bGw2Mmh1eWttdzNlMYO8Onh6DVyGr650yVP4zVXqRTCM5cf7NW6IjGnqkdzY.png?width=640&crop=smart&format=pjpg&auto=webp&s=b4ac12aa5842aa8a92d53dc9438ce28125005ebe', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/bGw2Mmh1eWttdzNlMYO8Onh6DVyGr650yVP4zVXqRTCM5cf7NW6IjGnqkdzY.png?width=960&crop=smart&format=pjpg&auto=webp&s=c9c4403d1f7e578d95604322d8440a67f80a3af8', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/bGw2Mmh1eWttdzNlMYO8Onh6DVyGr650yVP4zVXqRTCM5cf7NW6IjGnqkdzY.png?width=1080&crop=smart&format=pjpg&auto=webp&s=96e60fd706d1b808f9168a5aa82aa8593a3fd43c', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/bGw2Mmh1eWttdzNlMYO8Onh6DVyGr650yVP4zVXqRTCM5cf7NW6IjGnqkdzY.png?format=pjpg&auto=webp&s=7daee2335b1af08ac8855ba70a67e6f59352bd07', 'width': 3456}, 'variants': {}}]} |
||
Best creative writing LLM to fit in 16gb vram? | 8 | What is a great LLM for creative writing in 16gb vram? | 2024-11-29T21:25:55 | https://www.reddit.com/r/LocalLLaMA/comments/1h2wa6e/best_creative_writing_llm_to_fit_in_16gb_vram/ | Deluded-1b-gguf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2wa6e | false | null | t3_1h2wa6e | /r/LocalLLaMA/comments/1h2wa6e/best_creative_writing_llm_to_fit_in_16gb_vram/ | false | false | self | 8 | null |
OOD Multimodal Generation is Cu̷̻̇̆ṙ̵̘̌Se̶̪͊̍d̸̢́̚ | 1 | 2024-11-29T21:30:13 | Fun_Concept5414 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h2wdky | false | null | t3_1h2wdky | /r/LocalLLaMA/comments/1h2wdky/ood_multimodal_generation_is_cursed/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'EV_B0jNFpo5WHg4nDxxvhs5lCgcujRHMC9ZvFoDW9HI', 'resolutions': [{'height': 145, 'url': 'https://preview.redd.it/l6dftry5sw3e1.png?width=108&crop=smart&auto=webp&s=0c5b64867f8a0583d79e1efd1e4f1fdc9a811131', 'width': 108}, {'height': 291, 'url': 'https://preview.redd.it/l6dftry5sw3e1.png?width=216&crop=smart&auto=webp&s=5c5b71f2ebb71432772a5898dad05ea875df11ef', 'width': 216}, {'height': 431, 'url': 'https://preview.redd.it/l6dftry5sw3e1.png?width=320&crop=smart&auto=webp&s=af936a19c4e182c099d0add403c50546bbd48a05', 'width': 320}, {'height': 863, 'url': 'https://preview.redd.it/l6dftry5sw3e1.png?width=640&crop=smart&auto=webp&s=17e79d931c0c9d8eea9a371475f4f601eed42620', 'width': 640}], 'source': {'height': 1034, 'url': 'https://preview.redd.it/l6dftry5sw3e1.png?auto=webp&s=b0088fa380f193b94ee4c92ba7115fa840528651', 'width': 766}, 'variants': {}}]} |