title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How would you rank Qwen 2.5 72B vs Llama 3.3 70B Instruct models? | 57 | For those that have used both, I am curious how you would rate them against each other. | 2024-12-12T03:51:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hcchbi/how_would_you_rank_qwen_25_72b_vs_llama_33_70b/ | awebb78 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcchbi | false | null | t3_1hcchbi | /r/LocalLLaMA/comments/1hcchbi/how_would_you_rank_qwen_25_72b_vs_llama_33_70b/ | false | false | self | 57 | null |
It's getting difficult to evaluate models. | 79 | I'm working at a small Korean startup trying to use AI to help lawyers. We have our own evaluation sets. For example, one item gives two different legal queries and asks LLMs whether the queries are in the same context.
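(A simplified sketch of what one eval item looks like, not our actual harness; the prompt and model name are placeholders:)

```
from openai import OpenAI

client = OpenAI()

def same_context(query_a: str, query_b: str) -> bool:
    # Ask the model under evaluation whether two legal queries share the same context
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
            "Are these two legal queries about the same context? Answer yes or no.\n"
            f"A: {query_a}\nB: {query_b}"}],
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

# accuracy = fraction of labeled pairs where same_context() matches the human label
```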
Until a few months ago, the evaluation set made sense. Llama 3 did way better than llama 2, and gpt 4 did better than llama.
But yesterday, I heard that llama 3.3 was released and wanted to see if it's better than llama 3.1. I ran the evaluation and suddenly realized that the entire evaluation is useless.
Claude 3.5 and gpt 4o got 90~95%, llama 3.1 got 85% and llama 3.3 got 88%. Llama 3.3 is better than llama 3.1, but frankly, all the models are doing excellent jobs... | 2024-12-12T05:03:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hcdqbk/its_getting_difficult_to_evaluate_models/ | baehyunsol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcdqbk | false | null | t3_1hcdqbk | /r/LocalLLaMA/comments/1hcdqbk/its_getting_difficult_to_evaluate_models/ | false | false | self | 79 | null |
how do i do this | 0 | here is the link to the post i made on r/ollama. i wanted to do a crosspost to r/LocalLLaMA but wasn't able to, so here is the link for the same
[https://www.reddit.com/r/ollama/comments/1h9qw37/need\_help/?utm\_source=share&utm\_medium=web3x&utm\_name=web3xcss&utm\_term=1&utm\_content=share\_button](https://www.reddit.com/r/ollama/comments/1h9qw37/need_help/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
and i'm making this project out of curiosity so yeah ......
| 2024-12-12T05:41:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hcebzl/how_do_i_do_this/ | Ready-Ad4340 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcebzl | false | null | t3_1hcebzl | /r/LocalLLaMA/comments/1hcebzl/how_do_i_do_this/ | false | false | self | 0 | null |
This is what I think is the most exciting thing about generative AI. Not just LLMs or image gen in isolation. But the synergy of using LLMs with image/video gen. This person is using a LLM to generate the wordy detailed prompts needed to have good quality generative video. | 4 | 2024-12-12T06:06:42 | https://www.reddit.com/gallery/1hcctjy | fallingdowndizzyvr | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hcepu8 | false | null | t3_1hcepu8 | /r/LocalLLaMA/comments/1hcepu8/this_is_what_i_think_is_the_most_exciting_thing/ | false | false | default | 4 | null |
|
Save 80% Memory for DPO and ORPO in Liger-Kernel | 27 | Introducing the first open-source optimized post-training losses in Liger Kernel with \~80% memory reduction, featuring DPO, CPO, ORPO, SimPO, JSD, and more, achieving up to 70% end-to-end speedup through larger batch sizes. Use it like any other PyTorch module - available today in Liger v0.5.0!
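For context, here is a minimal sketch of the plain-PyTorch DPO loss that the fused kernels optimize (this is not the Liger API itself; tensor names are illustrative). The memory savings come roughly from chunking and fusing the final linear projection with the loss so the full logits never need to be materialized:

```
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Standard DPO objective: -log sigmoid(beta * (policy margin - reference margin))
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Toy example: per-sequence summed log-probs for 4 chosen/rejected pairs
loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
```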
[https://x.com/hsu\_byron/status/1866577403918917655](https://x.com/hsu_byron/status/1866577403918917655) | 2024-12-12T06:14:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hceu1i/save_80_memory_for_dpo_and_orpo_in_ligerkernel/ | Icy-World-8359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hceu1i | false | null | t3_1hceu1i | /r/LocalLLaMA/comments/1hceu1i/save_80_memory_for_dpo_and_orpo_in_ligerkernel/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'r-ZBP6JSWL_QAsorClu4F4MAL-ENCax1N5nprPg1Cuo', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/SrsfhapiOew685MpnOCq0SJjdYlwGEJ86IuYzrbFQ4Q.jpg?width=108&crop=smart&auto=webp&s=c7ebdb02438edb8d8d4667bacaf2aae5333e0b56', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/SrsfhapiOew685MpnOCq0SJjdYlwGEJ86IuYzrbFQ4Q.jpg?width=216&crop=smart&auto=webp&s=f57b4f769d8947a9aab1c5c948cc222ceae89111', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/SrsfhapiOew685MpnOCq0SJjdYlwGEJ86IuYzrbFQ4Q.jpg?width=320&crop=smart&auto=webp&s=fadf58d6d9ca9ba17927e4bbde754426442470a3', 'width': 320}, {'height': 395, 'url': 'https://external-preview.redd.it/SrsfhapiOew685MpnOCq0SJjdYlwGEJ86IuYzrbFQ4Q.jpg?width=640&crop=smart&auto=webp&s=7384843f791d9fce07585a983ead822ce5497c26', 'width': 640}], 'source': {'height': 560, 'url': 'https://external-preview.redd.it/SrsfhapiOew685MpnOCq0SJjdYlwGEJ86IuYzrbFQ4Q.jpg?auto=webp&s=7c9d15b5f42a39344d40e1817d42532ace3087c9', 'width': 906}, 'variants': {}}]} |
Is using threads to call my asynchronous OpenAI assistant endpoint in FastAPI the right approach? | 0 | Hi everyone,
I’m working on a FastAPI server that calls an OpenAI assistant via asynchronous endpoints. My current approach is to run something like this inside a function (let’s say `get_response`):
```
import asyncio
import os

from openai import AsyncOpenAI

client = AsyncOpenAI()

async def get_response(sample_prompt: str, output_format: str, max_attempts: int = 3):
    attempts = 1
    while attempts <= max_attempts:
        thread = await client.beta.threads.create()
        prompt = sample_prompt + output_format
        await client.beta.threads.messages.create(
            thread_id=thread.id,
            role="user",
            content=prompt,
        )
        run = await client.beta.threads.runs.create(
            thread_id=thread.id,
            assistant_id=os.getenv("OPENAI_ASSISTANT_ID"),
        )
        # Poll until the run completes or fails
        while run.status not in ["completed", "failed"]:
            run = await client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
            await asyncio.sleep(2)
        if run.status == "failed":
            # Handle failure by retrying on a fresh thread
            attempts += 1
            continue
        # Retrieve the messages produced by the completed run
        message_response = await client.beta.threads.messages.list(thread_id=thread.id)
        messages = message_response.data
        return messages  # do something with `messages`
    raise Exception(f"Assistant run failed after {max_attempts} attempts")
```
I then call `get_response()` in different threads simultaneously, with potentially up to 5 threads at once.
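For comparison, here is roughly what I mean by relying solely on the event loop instead of threads (an untested sketch that reuses `get_response` from above, with a semaphore capping concurrency at 5):

```
sem = asyncio.Semaphore(5)  # cap concurrent assistant runs

async def bounded_get_response(prompt: str, output_format: str):
    async with sem:
        return await get_response(prompt, output_format)

async def handle_batch(prompts: list[str], output_format: str):
    # One event loop, many tasks; no explicit threads involved
    return await asyncio.gather(
        *(bounded_get_response(p, output_format) for p in prompts)
    )
```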
**My questions are:**
1. **Is this a good approach for scaling to handle hundreds of users?** I’m mixing asynchronous calls (`await client.beta.threads...`) with multiple threads. Is it best practice to use threads here, or should I rely solely on async event loops and avoid explicit threading?
2. **Would increasing the number of event loop tasks (e.g., multiple asyncio tasks) or relying on the server’s concurrency model (such as multiple workers from Uvicorn/Gunicorn) be a better approach?**
3. **What patterns or architectures do people recommend when calling OpenAI assistants (or similar APIs) at scale via an async web framework like FastAPI?**
I’m a bit new to this and just want to ensure I’m setting things up in a way that will scale well and not cause hidden issues (like blocking I/O or unnecessary overhead).
Any advice, best practices, or insights would be greatly appreciated! Thank you. | 2024-12-12T06:44:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hcf9h5/is_using_threads_to_call_my_asynchronous_openai/ | SpaceWalker_69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcf9h5 | false | null | t3_1hcf9h5 | /r/LocalLLaMA/comments/1hcf9h5/is_using_threads_to_call_my_asynchronous_openai/ | false | false | self | 0 | null |
how smug are we all feeling right now? | 1 | [removed] | 2024-12-12T06:45:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hcfa17/how_smug_are_we_all_feeling_right_now/ | gaspoweredcat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcfa17 | false | null | t3_1hcfa17 | /r/LocalLLaMA/comments/1hcfa17/how_smug_are_we_all_feeling_right_now/ | false | false | self | 1 | null |
Fine-Tuning Model Llama3 | 1 | [removed] | 2024-12-12T07:30:22 | https://www.reddit.com/r/LocalLLaMA/comments/1hcfw7n/finetuning_model_llama3/ | Ok-Tea-1950 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcfw7n | false | null | t3_1hcfw7n | /r/LocalLLaMA/comments/1hcfw7n/finetuning_model_llama3/ | false | false | self | 1 | null |
[Help] The most up-to-date GPU/model benchmark table? | 1 | [removed] | 2024-12-12T07:45:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hcg3iw/help_the_most_uptodate_gpumodel_benchmark_table/ | flopik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcg3iw | false | null | t3_1hcg3iw | /r/LocalLLaMA/comments/1hcg3iw/help_the_most_uptodate_gpumodel_benchmark_table/ | false | false | self | 1 | null |
Hermes 3 3B is out and I like it! | 95 | Hermes 3 LLM is impressive! I’m trying it with [Hermes-3-Llama-3.2-3B.Q6\_K.gguf](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B-GGUF/blob/main/Hermes-3-Llama-3.2-3B.Q6_K.gguf) on iPhone:
\> Accurately follows instructions
\> Great at storytelling
\> Does a really good job generating structured outputs (e.g., JSON) - without using guided JSON decoding at all.
The Q5-K-M quant didn't generate JSON from the prompt alone the way the Q6 did.
Curious about your experiences with this model so far?
https://reddit.com/link/1hcg7fw/video/mvs3ew46id6e1/player
https://preview.redd.it/8pnh09icid6e1.png?width=1179&format=png&auto=webp&s=0712dcb212c470f65e7200ca9b4f684dfcc7fa48
https://preview.redd.it/aswjd8icid6e1.png?width=1179&format=png&auto=webp&s=049b147011063425a1037e727498307e1b7bf76e
| 2024-12-12T07:53:57 | https://www.reddit.com/r/LocalLLaMA/comments/1hcg7fw/hermes_3_3b_is_out_and_i_like_it/ | Ill-Still-6859 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcg7fw | false | null | t3_1hcg7fw | /r/LocalLLaMA/comments/1hcg7fw/hermes_3_3b_is_out_and_i_like_it/ | false | false | 95 | {'enabled': False, 'images': [{'id': 'Vmh9P_AWI4flcRJuw3Ug40b5f6kOz6ApO0Tuby5LQas', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cKCuQBLuzC-unGVKZU8e89FYCAnq-lJk02HE6bmTA6M.jpg?width=108&crop=smart&auto=webp&s=ad33143777e8abc2e330074089e9f34386fee83a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cKCuQBLuzC-unGVKZU8e89FYCAnq-lJk02HE6bmTA6M.jpg?width=216&crop=smart&auto=webp&s=709f148062fd0a63e15403548bbe64cc80730dcb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cKCuQBLuzC-unGVKZU8e89FYCAnq-lJk02HE6bmTA6M.jpg?width=320&crop=smart&auto=webp&s=9eaa322cb7c6218d771967b0afba1cfc2fd070cb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cKCuQBLuzC-unGVKZU8e89FYCAnq-lJk02HE6bmTA6M.jpg?width=640&crop=smart&auto=webp&s=a5281533a85afdb7406391cef1df758227e231a5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cKCuQBLuzC-unGVKZU8e89FYCAnq-lJk02HE6bmTA6M.jpg?width=960&crop=smart&auto=webp&s=4fe5a6028fa8a391e5b72da3eedca1bae76eef8c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cKCuQBLuzC-unGVKZU8e89FYCAnq-lJk02HE6bmTA6M.jpg?width=1080&crop=smart&auto=webp&s=9f5c0a0e14f93ac5480e6bad27c0743bee4f4ac6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cKCuQBLuzC-unGVKZU8e89FYCAnq-lJk02HE6bmTA6M.jpg?auto=webp&s=9d01560087df80558add58300f528bf06a06515a', 'width': 1200}, 'variants': {}}]} |
|
Comparing Distributed Inference Performance on M4 Pro: llama.cpp Outperforms MLX | 4 | Based on my understanding, MLX typically outperforms llama.cpp in inference performance on macOS. However, after testing llama.cpp (llama-box by GPUStack: https://github.com/gpustack/gpustack) and MLX (exo: https://github.com/exo-explore/exo) with the Qwen 2.5 14B and Gemma2 27B, I arrived at the following conclusions:
1. On a single MacBook M4 Pro, Qwen 2.5 14B running with GPUStack slightly outperforms exo in terms of TPS, with both being in a similar range. However, exo shows a higher TTFT compared to GPUStack.
2. Over a Thunderbolt network, Qwen 2.5 14B achieves comparable TPS on both GPUStack and exo, but exo’s TTFT remains higher than GPUStack’s.
3. Over a Wi-Fi network, exo’s distributed inference performance is about 2x better than GPUStack’s llama.cpp backend but shows instability. Upon reviewing the implementation, I suspect this may be due to llama.cpp using RPC while exo uses gRPC, which leverages the more efficient protobuf library for data serialization. This could be a point where llama.cpp can be further optimized.
4. During testing, I frequently encountered network errors or no response with exo, whereas GPUStack’s llama.cpp backend ran without such issues. This suggests that exo’s stability needs improvement.
5. Running models directly with MLX achieves approximately 20% higher performance compared to llama.cpp. This indicates that exo’s implementation might require further optimization.
6. Using exo with two MacBook M4 Pro 24G devices to run Gemma2 27B consistently resulted in no response and network error crashes. The root cause of this issue remains unclear.
Below are the recorded performance metrics, with three requests logged per case (including prior context, leading to increased TTFT):
**Qwen2.5 14B**
Macbook M4 Pro \*1
\- llama.cpp (llama-box by GPUStack)
0.212s TTFT 24.65 tokens/second
0.207s TTFT 22.71 tokens/second
0.214s TTFT 20.6 tokens/second
\- MLX (exo)
0.55s TTFT 21.4 tokens/second
4.16s TTFT 20.0 tokens/second
8.36s TTFT 20.5 tokens/second
Macbook M4 Pro \*2
Thunderbolt 4 network
\- llama.cpp (llama-box by GPUStack)
0.237s TTFT 20.29 tokens/second
0.234s TTFT 19.08 tokens/second
0.221s TTFT 17.48 tokens/second
\- MLX (exo)
3.11s TTFT 20.4 tokens/second
4.29s TTFT 19.2 tokens/second
8.42s TTFT 18.5 tokens/second
WIFI
\- llama.cpp (llama-box by GPUStack)
0.810s TTFT 3.42 tokens/second
0.842s TTFT 3.01 tokens/second
0.744s TTFT 3.09 tokens/second
\- MLX (exo)
3.18s TTFT 9.0 tokens/second
4.78s TTFT 9.1 tokens/second
Crash (network error)
**Gemma2 27B**
Macbook M4 Pro \*1
\- llama.cpp (llama-box by GPUStack)
0.583s TTFT 13.87 tokens/second
5.532s TTFT 13.63 tokens/second
3.633s TTFT 13.13 tokens/second
\- MLX (exo)
No response
Macbook M4 Pro \*2
Thunderbolt 4
\- llama.cpp (llama-box by GPUStack)
0.610s TTFT 11.2 tokens/second
4.353s TTFT 11.13 tokens/second
11.764s TTFT 9.64 tokens/second
\- MLX (exo)
No response
WIFI
\- llama.cpp (llama-box by GPUStack)
0.954s TTFT 3.06 tokens/second
6.414s TTFT 2.76 tokens/second
6.633s TTFT 2.88 tokens/second
\- MLX (exo)
No response | 2024-12-12T08:11:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hcgfsb/comparing_distributed_inference_performance_on_m4/ | Known-Classroom2655 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcgfsb | false | null | t3_1hcgfsb | /r/LocalLLaMA/comments/1hcgfsb/comparing_distributed_inference_performance_on_m4/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'jd2HgWf5dI47wfdAWs_1S3ZxIJeS0-kzsRMDFt8bE-c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OeeD7Sr2zmJjCuxpoVDjsNpnrOG3gwCYDtmTZV-qaVE.jpg?width=108&crop=smart&auto=webp&s=b3310139425ee3184e5922c9027312c62888c067', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OeeD7Sr2zmJjCuxpoVDjsNpnrOG3gwCYDtmTZV-qaVE.jpg?width=216&crop=smart&auto=webp&s=99f710387f41792813a244ff7231d3efe1cc4d0a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OeeD7Sr2zmJjCuxpoVDjsNpnrOG3gwCYDtmTZV-qaVE.jpg?width=320&crop=smart&auto=webp&s=694ba1dfc323c574b6021b4470c58f6b5c8c29db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OeeD7Sr2zmJjCuxpoVDjsNpnrOG3gwCYDtmTZV-qaVE.jpg?width=640&crop=smart&auto=webp&s=cfb742144898e557fcdeef45ac9a4b952a8f535a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OeeD7Sr2zmJjCuxpoVDjsNpnrOG3gwCYDtmTZV-qaVE.jpg?width=960&crop=smart&auto=webp&s=20a7dc1336ff0b9ca42ffe10d5a4739e1c80b201', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OeeD7Sr2zmJjCuxpoVDjsNpnrOG3gwCYDtmTZV-qaVE.jpg?width=1080&crop=smart&auto=webp&s=abe10a64476e8d17ece0043ccf0f8e2b14a94b44', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OeeD7Sr2zmJjCuxpoVDjsNpnrOG3gwCYDtmTZV-qaVE.jpg?auto=webp&s=33ce1de89a2fdde23d9ae216375f99647de9260f', 'width': 1200}, 'variants': {}}]} |
Anyone else feel the AGI? | 0 | As developers, researchers and enthusiasts, I assume there are many here who have been neck deep in all aspects of these transformer and diffusion models. Not to sound dramatic, but I am feeling the AGI. Testing the Gemini models through various endpoints and interfaces, and letting o1 and Claude interact with one another (including using each OTHER as tools), plus Google grounding, large context windows, search through Google, code execution, and custom functions, gave me multiple glimpses of AGI clearly in action. I was able to replicate a year's worth of AI-assisted work on a project in an hour by tasking this combination of AI tools with clear instructions and minimal feedback, without having to prepare data or orchestrate actions as before. I really hope the RTX 5090 arrives in a dual slot form factor (with p2p enabled) so that local AGI for many is the topic of 2025. | 2024-12-12T08:20:32 | https://www.reddit.com/r/LocalLLaMA/comments/1hcgk5h/anyone_else_feel_the_agi/ | chitown160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcgk5h | false | null | t3_1hcgk5h | /r/LocalLLaMA/comments/1hcgk5h/anyone_else_feel_the_agi/ | false | false | self | 0 | null |
Need help with POCKETPAL app. | 1 | [removed] | 2024-12-12T08:49:50 | Funny_Log2227 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hcgxag | false | null | t3_1hcgxag | /r/LocalLLaMA/comments/1hcgxag/need_help_with_pocketpal_app/ | false | false | 1 | {'enabled': True, 'images': [{'id': '1HWUiQsk8l5q6zqF-eq94pk0WFUpO_nDNKB9wOEygnY', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/kjoezddlsd6e1.jpeg?width=108&crop=smart&auto=webp&s=081e76af950c42dd1d83aca54d82bf04065405fd', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/kjoezddlsd6e1.jpeg?width=216&crop=smart&auto=webp&s=290610556ef784f91b965f6873a866c875c66947', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/kjoezddlsd6e1.jpeg?width=320&crop=smart&auto=webp&s=806279e967c540f871ac9c3a7cb08ab6d2de5766', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/kjoezddlsd6e1.jpeg?width=640&crop=smart&auto=webp&s=90b9fd2d1721f6b42fb835ebe1a57d0bbf029ee3', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/kjoezddlsd6e1.jpeg?width=960&crop=smart&auto=webp&s=610e9f30e0c14195bea9faf4d8726b5e65c46b2a', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/kjoezddlsd6e1.jpeg?width=1080&crop=smart&auto=webp&s=d25ae06848fddc445082a3ab1d5d75e0e382e194', 'width': 1080}], 'source': {'height': 2436, 'url': 'https://preview.redd.it/kjoezddlsd6e1.jpeg?auto=webp&s=4c6de0930502e11c8cc66cd67e813882ac3235b1', 'width': 1125}, 'variants': {}}]} |
||
What is the best coding model that fits a card with 10 GB VRAM? | 1 | [removed] | 2024-12-12T08:58:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hch10l/what_is_the_best_coding_model_that_fits_a_card/ | Jironzo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hch10l | false | null | t3_1hch10l | /r/LocalLLaMA/comments/1hch10l/what_is_the_best_coding_model_that_fits_a_card/ | false | false | self | 1 | null |
Hey NVIDIA, where’s the new Nemotron? 😊 | 34 | I think it’s time to take LLama 3.3 and release something new! | 2024-12-12T08:59:00 | https://www.reddit.com/r/LocalLLaMA/comments/1hch1cp/hey_nvidia_wheres_the_new_nemotron/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hch1cp | false | null | t3_1hch1cp | /r/LocalLLaMA/comments/1hch1cp/hey_nvidia_wheres_the_new_nemotron/ | false | false | self | 34 | null |
Thank God for Microsoft's Phi-3-mini! | 0 | So I've been trying to find a model I could potentially use for asking questions about the contents of a document, running entirely locally on a rather average office PC (mine specifically, which has no GPU, 16GB RAM, and I think a 6-core i5).
I'm using llama.cpp as the back end, by the way. And no, I haven't compiled it for my machine; I'm using the default Windows build from java-llama.cpp. I assumed a custom compilation wasn't likely to help much. It's just a Dell mini OptiPlex with something like a 6-core i5, and I've got only 16GB RAM.
To be clear it's totally capable for most work. But you know it's outdated and not ideal for LLM running.
But this is the hardware I have and probably much more importantly it's typical of the kind of hardware customers would be stuck with.
OK, so back on track. I've got a little test app for asking questions about some PDFs. Of course, if I can just include the entire text in the prompt I can get good results. But on my PC, with almost all models, llama.cpp processes prompt tokens at around 10 per second. Generated tokens are more like 3 per second, but I generally need way more prompt tokens (I want short but high-quality answers).
This was just never going to work. The prompt has to be at least 200 tokens to be useful in general, I'm pretty sure, and of course the more the better. 200 is really skimpy in my opinion. Like, if you can reliably provide the right context in 200 tokens to begin with, WHY DO YOU EVEN NEED TO ASK THE LLM ANYTHING?? AMIRITE?
Anyway, I'm getting double the token processing rate on both input and output from Phi-3-mini with just as good of results on average.
This project is still a stretch, but suddenly I have hope again. This could be doable, I don't know, maybe.
The bright side is my hardware is like the minimum we expect, it's all upside from there. If I can get a useful answer from a document in 15 seconds, anybody with a newer desktop is maybe (I assume) going to have a much better experience.
Anyway, did I also mention I really have no idea what I'm doing still. But I'm making progress at the very least. Light one to Krom for me, boys.
| 2024-12-12T09:47:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hchnxk/thank_god_for_microsofts_phi3mini/ | theRealGleepglop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hchnxk | false | null | t3_1hchnxk | /r/LocalLLaMA/comments/1hchnxk/thank_god_for_microsofts_phi3mini/ | false | false | self | 0 | null |
Open models wishlist | 380 | Hi! I'm now the Chief ~~Llama~~ Gemma Officer at Google and we want to ship some awesome models that are not just great quality, but also meet the expectations and capabilities that the community wants.
We're listening and have seen interest in things such as longer context, multilinguality, and more. But given you're all so amazing, we thought it was better to simply ask and see what ideas people have. Feel free to drop any requests you have for new models
| 2024-12-12T09:49:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hchoyy/open_models_wishlist/ | hackerllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hchoyy | false | null | t3_1hchoyy | /r/LocalLLaMA/comments/1hchoyy/open_models_wishlist/ | false | false | self | 380 | null |
Domain-specific finetuning for text generation | 1 | [removed] | 2024-12-12T09:51:54 | https://www.reddit.com/r/LocalLLaMA/comments/1hchpxk/domainspecific_finetuning_for_text_generation/ | kokimop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hchpxk | false | null | t3_1hchpxk | /r/LocalLLaMA/comments/1hchpxk/domainspecific_finetuning_for_text_generation/ | false | false | self | 1 | null |
Translation of Documents while preserving Formatting (oTranslator alternative) | 1 | [removed] | 2024-12-12T09:53:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hchqnh/translation_of_documents_while_preserving/ | llamahunter1337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hchqnh | false | null | t3_1hchqnh | /r/LocalLLaMA/comments/1hchqnh/translation_of_documents_while_preserving/ | false | false | self | 1 | null |
Local tool/frontend that supports context summarisation? | 4 | So I was thinking that it would be cool if a model could summarise its context in the background and only work with the summarization, effectively enhancing the context window greatly. And I figured it's such an obvious idea, somebody must have already done it.
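Roughly the loop I'm imagining (an untested sketch against a local OpenAI-compatible server such as llama.cpp's llama-server; the endpoint and model name are placeholders):

```
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-local")

def ask(messages):
    out = client.chat.completions.create(model="local-model", messages=messages)
    return out.choices[0].message.content

summary = ""
while True:
    user_msg = input("> ")
    reply = ask([
        {"role": "system", "content": f"Summary of the conversation so far:\n{summary}"},
        {"role": "user", "content": user_msg},
    ])
    print(reply)
    # Fold the latest exchange into the running summary (could run in the background)
    summary = ask([{"role": "user", "content":
        f"Update this summary with the new exchange, keeping it short.\n\n"
        f"Summary:\n{summary}\n\nUser: {user_msg}\nAssistant: {reply}\n\nUpdated summary:"}])
```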
And, sure enough, there are a bunch of different techniques to do this. But my search only led me to various PDFs and professional tools.
Is there anything like that for home users, in particular open source? Maybe some library that could be used with llama.cpp? I saw some lib to implement attention sinks (i.e. tossing out old context before it overwhelms the model), but that's kinda the opposite of what I'm thinking. | 2024-12-12T10:03:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hchvjh/local_toolfrontend_that_supports_context/ | WhoRoger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hchvjh | false | null | t3_1hchvjh | /r/LocalLLaMA/comments/1hchvjh/local_toolfrontend_that_supports_context/ | false | false | self | 4 | null |
Anyone using Docling? I am facing an issue of leaked semaphore objecgts while trying to extract the data from a pdf. Details of the error in body. | 1 | [removed] | 2024-12-12T10:14:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hci0ls/anyone_using_docling_i_am_facing_an_issue_of/ | NewspaperMission8850 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hci0ls | false | null | t3_1hci0ls | /r/LocalLLaMA/comments/1hci0ls/anyone_using_docling_i_am_facing_an_issue_of/ | false | false | self | 1 | null |
Easy Image Analysis with C# and LLaMA 3.2 Vision | 1 | 2024-12-12T10:18:18 | https://argosco.io/easy-image-analysis-with-c-and-llama-3-2-vision/c/ | Don_Crespo | argosco.io | 1970-01-01T00:00:00 | 0 | {} | 1hci2je | false | null | t3_1hci2je | /r/LocalLLaMA/comments/1hci2je/easy_image_analysis_with_c_and_llama_32_vision/ | false | false | 1 | {'enabled': False, 'images': [{'id': '99sgSYpHkDQ2x4m8MUyoNBmUOxXSNr-Zcu6DxcEsXOw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/vgvVfxllV8wHjkuIvgDeqx8eBmiiZVxu90UnJm-utM0.jpg?width=108&crop=smart&auto=webp&s=398831ff1a9c8071396ccf7920e5faf72eb8021f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/vgvVfxllV8wHjkuIvgDeqx8eBmiiZVxu90UnJm-utM0.jpg?width=216&crop=smart&auto=webp&s=a11836ef5b30256de600f33c1a865af6db83ebcc', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/vgvVfxllV8wHjkuIvgDeqx8eBmiiZVxu90UnJm-utM0.jpg?width=320&crop=smart&auto=webp&s=355a80994d28e7006c3cbea12ecfc41894bf2f3f', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/vgvVfxllV8wHjkuIvgDeqx8eBmiiZVxu90UnJm-utM0.jpg?width=640&crop=smart&auto=webp&s=8c701e7b2199a1523947f236f14855991a76bb58', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/vgvVfxllV8wHjkuIvgDeqx8eBmiiZVxu90UnJm-utM0.jpg?width=960&crop=smart&auto=webp&s=6b4165623fdca3875dc41916f1bb82ff99a27f70', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/vgvVfxllV8wHjkuIvgDeqx8eBmiiZVxu90UnJm-utM0.jpg?width=1080&crop=smart&auto=webp&s=7f0e535dbfed86f444853cbeae656bc0d10c7bc6', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/vgvVfxllV8wHjkuIvgDeqx8eBmiiZVxu90UnJm-utM0.jpg?auto=webp&s=d02d0eee1d856c4d2f101653958889d87533d470', 'width': 1200}, 'variants': {}}]} |
||
Reversed role | 11 | I have added a little feature called _reversed role_ in [chatllm.cpp](https://github.com/foldl/chatllm.cpp). In _reversed role_ mode, the LLM generates content for the _user_ role. So, this will let the LLM proactively participate in the conversation.
## Example 1
```
main -m /path/to/qwen2.5-1.5b.bin --reversed_role -i -p "Hello"
________ __ __ __ __ ___ (通义千问)
/ ____/ /_ ____ _/ /_/ / / / / |/ /_________ ____
/ / / __ \/ __ `/ __/ / / / / /|_/ // ___/ __ \/ __ \
/ /___/ / / / /_/ / /_/ /___/ /___/ / / // /__/ /_/ / /_/ /
\____/_/ /_/\__,_/\__/_____/_____/_/ /_(_)___/ .___/ .___/
You are served by QWen2, /_/ /_/
with 1543714304 (1.5B) parameters.
A.I. > Hello
You > Hi. Nice to meet you.
A.I. > How can I improve my public speaking skills?
```
## Example 2
Two LLMs talk to each other.
```
A > what's the meaning of life
B > The meaning of life is a philosophical and metaphysical question related to the purpose or significance of life or existence in general. This question has been asked for centuries and does not have a definitive answer, as it often depends on individual beliefs, cultural, and religious perspectives. Some people find meaning through personal growth, relationships, love, or through contributing to the betterment of humanity. Ultimately, the meaning of life may be a personal and subjective concept.
A > What are some of the common philosophical and religious perspectives on the meaning of life?
B > 1. Existentialism: This philosophy suggests that life has no inherent meaning, and it's up to each individual to create their own purpose.
....
A > Can you give me more details about the concept of existentialism and how it relates to the meaning of life?
B > Existentialism is a philosophical movement that emphasizes individual existence, freedom, and choice. It holds ...
``` | 2024-12-12T10:19:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hci35k/reversed_role/ | foldl-li | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hci35k | false | null | t3_1hci35k | /r/LocalLLaMA/comments/1hci35k/reversed_role/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'fQBM7pdZF_7pgEBPDc1Nvfr4oxiKqRegnwnqaCfllPQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fyr8lTp6Nm4-WVoi8g20PqGL45_sDECEBPhwQc7kZnI.jpg?width=108&crop=smart&auto=webp&s=276f8da83ebc45bbe648ae18f91b002250a5575e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fyr8lTp6Nm4-WVoi8g20PqGL45_sDECEBPhwQc7kZnI.jpg?width=216&crop=smart&auto=webp&s=9a02a2b418b4723958b6a885faf088de99c93e22', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fyr8lTp6Nm4-WVoi8g20PqGL45_sDECEBPhwQc7kZnI.jpg?width=320&crop=smart&auto=webp&s=f4a1946e80de0a0e66f0a4261e6f8d29160d0560', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fyr8lTp6Nm4-WVoi8g20PqGL45_sDECEBPhwQc7kZnI.jpg?width=640&crop=smart&auto=webp&s=fd94f7f2b091c313c0b104b856d757b5cec3a9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fyr8lTp6Nm4-WVoi8g20PqGL45_sDECEBPhwQc7kZnI.jpg?width=960&crop=smart&auto=webp&s=a28405955335421e2fb1b0096c1a090b3d0b31ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fyr8lTp6Nm4-WVoi8g20PqGL45_sDECEBPhwQc7kZnI.jpg?width=1080&crop=smart&auto=webp&s=665b4990d255c54ff53386303d6909941b86eb09', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fyr8lTp6Nm4-WVoi8g20PqGL45_sDECEBPhwQc7kZnI.jpg?auto=webp&s=0044deb5db698cef1d45072d454555e0405edd18', 'width': 1200}, 'variants': {}}]} |
Finetune LLM on domain-specific knowledge | 1 | [removed] | 2024-12-12T10:36:53 | https://www.reddit.com/r/LocalLLaMA/comments/1hcibr7/finetune_llm_on_domainspecific_knowledge/ | kokimop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcibr7 | false | null | t3_1hcibr7 | /r/LocalLLaMA/comments/1hcibr7/finetune_llm_on_domainspecific_knowledge/ | false | false | self | 1 | null |
Microsoft bots extolling Phi3? | 44 | Lately I have been seeing posts extolling the MS model with a certain frequency; however, the posts are always very similar and are always followed by "robotic" comments.
Having open models is always welcome, but the Phi3 is not the best model for its size by a long shot. Easily beaten by tiny models like the Gemma 2 2B or Qwen 1.5.
Are big companies starting to invest in the image of their models? | 2024-12-12T11:14:36 | https://www.reddit.com/r/LocalLLaMA/comments/1hcivmd/microsoft_bots_extolling_phi3/ | Existing_Freedom_342 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcivmd | false | null | t3_1hcivmd | /r/LocalLLaMA/comments/1hcivmd/microsoft_bots_extolling_phi3/ | false | false | self | 44 | null |
bge-multilingual-gemma2 Embeddings Model for Llama - Poor Quality(to say the least). Any Tips? | 1 | Hello everyone!
I'm building a RAG for Llama and using bge-multilingual-gemma2 for embeddings. I’ve set the chunk size to 4000 words (default) and started with a small subset of about 700 chunks. The embeddings are stored in FAISS, and all the data is strictly engineering-related documents.
However, my tests show that the database never returns relevant chunks. The similarity scores are consistently around 0.8-1.2 for each document. What’s even more puzzling is that when I send unrelated queries from fields like botany or astronomy, the similarity scores are still around 0.8! It seems like the database is returning random chunks.
Is bge-multilingual-gemma2 really that poor in quality? Any tips on improving this?
Also, how do you evaluate embedding models? I’d like to iterate through different models and parameters automatically. What tools do you recommend?
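For example, this is roughly what I'd like to automate (an untested sketch; the model names and the toy data are placeholders):

```
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy placeholder data: real chunks/queries would come from my document set
chunks = ["Pump curve and flow rate specification ...",
          "Beam deflection formula for a simply supported beam ...",
          "HVAC duct sizing guidelines ..."]
labeled_pairs = [("How do I size an HVAC duct?", 2),
                 ("How much does a simply supported beam deflect?", 1)]

def recall_at_k(model_name, k=2):
    model = SentenceTransformer(model_name)
    chunk_emb = model.encode(chunks, normalize_embeddings=True)
    query_emb = model.encode([q for q, _ in labeled_pairs], normalize_embeddings=True)
    sims = query_emb @ chunk_emb.T  # cosine similarity on normalized vectors
    hits = sum(int(gold in np.argsort(-row)[:k])
               for (_, gold), row in zip(labeled_pairs, sims))
    return hits / len(labeled_pairs)

for name in ["BAAI/bge-m3", "intfloat/multilingual-e5-large"]:
    print(name, recall_at_k(name))
```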
I think this info could be valuable for others too. Thanks! | 2024-12-12T11:20:56 | https://www.reddit.com/r/LocalLLaMA/comments/1hciyxn/bgemultilingualgemma2_embeddings_model_for_llama/ | pgess | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hciyxn | false | null | t3_1hciyxn | /r/LocalLLaMA/comments/1hciyxn/bgemultilingualgemma2_embeddings_model_for_llama/ | false | false | self | 1 | null |
Knowledge distillation from llama 3 8B to llama 3.2 3B | 1 | [removed] | 2024-12-12T11:21:30 | https://www.reddit.com/r/LocalLLaMA/comments/1hciz8y/knowledge_distillation_from_llama_3_8b_to_llama/ | LeadingFinance6340 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hciz8y | false | null | t3_1hciz8y | /r/LocalLLaMA/comments/1hciz8y/knowledge_distillation_from_llama_3_8b_to_llama/ | false | false | self | 1 | null |
Building smaller system | 1 | [removed] | 2024-12-12T11:23:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hcj056/building_smaller_system/ | Electrical_Ear577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcj056 | false | null | t3_1hcj056 | /r/LocalLLaMA/comments/1hcj056/building_smaller_system/ | false | false | self | 1 | null |
Smaller system fore school. | 1 | [removed] | 2024-12-12T11:24:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hcj0pz/smaller_system_fore_school/ | Electrical_Ear577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcj0pz | false | null | t3_1hcj0pz | /r/LocalLLaMA/comments/1hcj0pz/smaller_system_fore_school/ | false | false | self | 1 | null |
Structured outputs can hurt the performance of LLMs | 49 | 2024-12-12T11:24:41 | https://dylancastillo.co/posts/say-what-you-mean-sometimes.html | dcastm | dylancastillo.co | 1970-01-01T00:00:00 | 0 | {} | 1hcj0ur | false | null | t3_1hcj0ur | /r/LocalLLaMA/comments/1hcj0ur/structured_outputs_can_hurt_the_performance_of/ | false | false | default | 49 | null |
|
Hot Take (?): Reasoning models like QwQ can be a bad fit in a number of scenarios. They tend to overthink a lot, often devolving into nonsense | 61 | 2024-12-12T12:02:59 | https://www.reddit.com/gallery/1hcjlzl | TitoxDboss | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hcjlzl | false | null | t3_1hcjlzl | /r/LocalLLaMA/comments/1hcjlzl/hot_take_reasoning_models_like_qwq_can_be_a_bad/ | false | false | 61 | null |
||
Built an actual working AI meeting copilot - and I am building it in open source, and making it self hostable | 1 | [removed] | 2024-12-12T12:10:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hcjqgf | false | null | t3_1hcjqgf | /r/LocalLLaMA/comments/1hcjqgf/built_an_actual_working_ai_meeting_copilot_and_i/ | false | false | default | 1 | null |
||
Built an actual working AI meeting copilot - and I am building it in open source, and making it self hostable | 1 | [removed] | 2024-12-12T12:12:08 | https://www.reddit.com/r/LocalLLaMA/comments/1hcjrb2/built_an_actual_working_ai_meeting_copilot_and_i/ | stealthanthrax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcjrb2 | false | null | t3_1hcjrb2 | /r/LocalLLaMA/comments/1hcjrb2/built_an_actual_working_ai_meeting_copilot_and_i/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kzmKj3kNjmKMKzD42vPYAhDzOw8j0u6FrNUjNWrRXXo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=108&crop=smart&auto=webp&s=d4471fb28365acf570f667c5e52c445ede99843b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=216&crop=smart&auto=webp&s=9ed21477781bea24f11c6b2f62b32e29a9282910', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=320&crop=smart&auto=webp&s=44a130c532af8f7183cf2399e175a056dc3fe6d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=640&crop=smart&auto=webp&s=afa0dbfb43653eb9bbd17a51d4e10f6e49c730f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=960&crop=smart&auto=webp&s=7b7dc4bcd96b3ae2c71f7f4d4b1d9d644dd03079', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=1080&crop=smart&auto=webp&s=f72305adbc0b3989c24e0e8c2e1226ba9098df68', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?auto=webp&s=17a29141eaa2519a70c9fe6ae76dc2f53d079f93', 'width': 1200}, 'variants': {}}]} |
Looking for Benchmarks: On-Prem vs Cloud for AI Model Training | 1 | I'm currently planning an AI project and trying to decide between training models on-premises or using cloud providers. I'd appreciate it if anyone could share **benchmarks, reports, or experiences** comparing the two approaches, especially regarding performance, cost, and scalability.
Here’s the context:
* **Sensitive and regulated data** (EU regulations) are part of the dataset, alongside non-sensitive data.
* I have concerns about compliance, latency, and security when using cloud providers.
* At the same time, I’m curious about how well on-prem solutions stack up against the flexibility and scalability offered by the cloud.
Has anyone faced a similar decision? Are there benchmarks that highlight training efficiency, hardware utilization, or cost-effectiveness for on-prem vs cloud? Any insights or recommendations would be greatly appreciated!
Thanks in advance! | 2024-12-12T12:12:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hcjre0/looking_for_benchmarks_onprem_vs_cloud_for_ai/ | Secu-Thibz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcjre0 | false | null | t3_1hcjre0 | /r/LocalLLaMA/comments/1hcjre0/looking_for_benchmarks_onprem_vs_cloud_for_ai/ | false | false | self | 1 | null |
Built an actual working AI meeting copilot - and I am building it in open source, and making it self hostable | 1 | [removed] | 2024-12-12T12:13:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hcjs72/built_an_actual_working_ai_meeting_copilot_and_i/ | stealthanthrax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcjs72 | false | null | t3_1hcjs72 | /r/LocalLLaMA/comments/1hcjs72/built_an_actual_working_ai_meeting_copilot_and_i/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kzmKj3kNjmKMKzD42vPYAhDzOw8j0u6FrNUjNWrRXXo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=108&crop=smart&auto=webp&s=d4471fb28365acf570f667c5e52c445ede99843b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=216&crop=smart&auto=webp&s=9ed21477781bea24f11c6b2f62b32e29a9282910', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=320&crop=smart&auto=webp&s=44a130c532af8f7183cf2399e175a056dc3fe6d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=640&crop=smart&auto=webp&s=afa0dbfb43653eb9bbd17a51d4e10f6e49c730f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=960&crop=smart&auto=webp&s=7b7dc4bcd96b3ae2c71f7f4d4b1d9d644dd03079', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=1080&crop=smart&auto=webp&s=f72305adbc0b3989c24e0e8c2e1226ba9098df68', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?auto=webp&s=17a29141eaa2519a70c9fe6ae76dc2f53d079f93', 'width': 1200}, 'variants': {}}]} |
Built an actual working Open Source AI meeting copilot | 2 | I got tired of relying on clunky SaaS tools for meeting transcriptions that didn’t respect my privacy or workflow. Every one I tried had issues:
* Bots awkwardly joining meetings and announcing themselves.
* Poor transcription quality.
* No flexibility to tweak things to fit *my* setup.
So I built **Amurex**, a self-hosted solution that actually works:
* Records meetings quietly, with no bots interrupting.
* Delivers clean, accurate diarized transcripts right after the meeting.
* Automatically drafts follow-up that I can email.
* Keeps a memory of past meetings for easy context retrieval.
But most importantly, it is the only Chrome extension in the world that can give
* Real-time suggestions to stay engaged in boring meetings.
It's completely open source and designed for self-hosting, so you control your data and your workflow. No subscriptions, no vendor lock-in. I would love to know what you all think of it.
It only works on Google Meet for now, but I will be scaling it to all the major meeting providers.
Github - [https://github.com/thepersonalaicompany/amurex](https://github.com/thepersonalaicompany/amurex) | 2024-12-12T12:40:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hck7tu/built_an_actual_working_open_source_ai_meeting/ | arsenfounder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hck7tu | false | null | t3_1hck7tu | /r/LocalLLaMA/comments/1hck7tu/built_an_actual_working_open_source_ai_meeting/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'kzmKj3kNjmKMKzD42vPYAhDzOw8j0u6FrNUjNWrRXXo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=108&crop=smart&auto=webp&s=d4471fb28365acf570f667c5e52c445ede99843b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=216&crop=smart&auto=webp&s=9ed21477781bea24f11c6b2f62b32e29a9282910', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=320&crop=smart&auto=webp&s=44a130c532af8f7183cf2399e175a056dc3fe6d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=640&crop=smart&auto=webp&s=afa0dbfb43653eb9bbd17a51d4e10f6e49c730f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=960&crop=smart&auto=webp&s=7b7dc4bcd96b3ae2c71f7f4d4b1d9d644dd03079', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=1080&crop=smart&auto=webp&s=f72305adbc0b3989c24e0e8c2e1226ba9098df68', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?auto=webp&s=17a29141eaa2519a70c9fe6ae76dc2f53d079f93', 'width': 1200}, 'variants': {}}]} |
A Flask interface for Qwen2.5-Coder-32B-Instruct-GGUF | 4 | I created a GitHub [repo](https://github.com/slyfox1186/script-repo/tree/main/AI/Qwen2.5-Coder-32B-Instruct) in case anyone wants a quick path to set up and use Qwen2.5-Coder-32B-Instruct-GGUF. It should have a simple "memory" to help make the conversation more natural.
You will need llama-cpp-python installed and ready to go. I have a custom script that I personally use to install it, which is available [here](https://github.com/slyfox1186/script-repo/blob/main/Bash/Misc/Conda/llama-cpp-python_installer.sh) if anyone is interested (conda is required to use this script).
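For anyone curious, the core of it boils down to roughly this (a simplified sketch, not the repo's exact code; the GGUF path is a placeholder):

```
from flask import Flask, request, jsonify
from llama_cpp import Llama

app = Flask(__name__)
llm = Llama(model_path="qwen2.5-coder-32b-instruct-q4_k_m.gguf", n_ctx=8192)
history = []  # simple in-process "memory"

@app.post("/chat")
def chat():
    history.append({"role": "user", "content": request.json["message"]})
    out = llm.create_chat_completion(messages=history)
    reply = out["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return jsonify({"reply": reply})

if __name__ == "__main__":
    app.run(port=5000)
```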
https://preview.redd.it/yxuy1i4f1f6e1.png?width=1853&format=png&auto=webp&s=716b698a69555090a73fcba0c79570dc13fcfc31
| 2024-12-12T13:03:09 | https://www.reddit.com/r/LocalLLaMA/comments/1hckmik/a_flask_interface_for_qwen25coder32binstructgguf/ | SAV_NC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hckmik | false | null | t3_1hckmik | /r/LocalLLaMA/comments/1hckmik/a_flask_interface_for_qwen25coder32binstructgguf/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'Y3JNKmmBbmUV0EtbmVdLuoHTkm_X4jOXSrPL9bPttRE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/UgNnv3s4G1MqIdVs32J-5nl_uPEGnurTCLcoVuTBDCQ.jpg?width=108&crop=smart&auto=webp&s=7a3eb38db2a70481bd99a3eeca1a45ba6eea88d1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/UgNnv3s4G1MqIdVs32J-5nl_uPEGnurTCLcoVuTBDCQ.jpg?width=216&crop=smart&auto=webp&s=3354436d0bc5b391d11781aaaa37bc3be54d7f80', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/UgNnv3s4G1MqIdVs32J-5nl_uPEGnurTCLcoVuTBDCQ.jpg?width=320&crop=smart&auto=webp&s=f89f9b008d56c453afbd6b134567d56b09158666', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/UgNnv3s4G1MqIdVs32J-5nl_uPEGnurTCLcoVuTBDCQ.jpg?width=640&crop=smart&auto=webp&s=eeb793ebc735cd7cea162ec8a2ea8f481fb56b17', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/UgNnv3s4G1MqIdVs32J-5nl_uPEGnurTCLcoVuTBDCQ.jpg?width=960&crop=smart&auto=webp&s=db476083307acdaf2db916a1e810d0a81756f1a4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/UgNnv3s4G1MqIdVs32J-5nl_uPEGnurTCLcoVuTBDCQ.jpg?width=1080&crop=smart&auto=webp&s=dd287f1111e17ff7515b64838d67fa0cd9551f9f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/UgNnv3s4G1MqIdVs32J-5nl_uPEGnurTCLcoVuTBDCQ.jpg?auto=webp&s=2c53ec32d6b17b8ffc395328ee152da25cc90363', 'width': 1920}, 'variants': {}}]} |
|
Why is Llama 3.3-70B so immediately good at adopting personas based on the system prompt (and entering roleplay, even when not specified) | 392 | 2024-12-12T13:32:09 | https://www.reddit.com/gallery/1hcl5oh | TitoxDboss | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hcl5oh | false | null | t3_1hcl5oh | /r/LocalLLaMA/comments/1hcl5oh/why_is_llama_3370b_so_immediately_good_at/ | false | false | 392 | null |
||
Built an actual working AI meeting copilot - and I am building it in open source, and making it self hostable | 1 | [removed] | 2024-12-12T13:39:49 | https://www.reddit.com/r/LocalLLaMA/comments/1hclar2/built_an_actual_working_ai_meeting_copilot_and_i/ | stealthanthrax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hclar2 | false | null | t3_1hclar2 | /r/LocalLLaMA/comments/1hclar2/built_an_actual_working_ai_meeting_copilot_and_i/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kzmKj3kNjmKMKzD42vPYAhDzOw8j0u6FrNUjNWrRXXo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=108&crop=smart&auto=webp&s=d4471fb28365acf570f667c5e52c445ede99843b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=216&crop=smart&auto=webp&s=9ed21477781bea24f11c6b2f62b32e29a9282910', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=320&crop=smart&auto=webp&s=44a130c532af8f7183cf2399e175a056dc3fe6d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=640&crop=smart&auto=webp&s=afa0dbfb43653eb9bbd17a51d4e10f6e49c730f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=960&crop=smart&auto=webp&s=7b7dc4bcd96b3ae2c71f7f4d4b1d9d644dd03079', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?width=1080&crop=smart&auto=webp&s=f72305adbc0b3989c24e0e8c2e1226ba9098df68', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/l0fITe3uAT2qneKjN12xd1a9bqRNsJNMGMxdVqc34Rk.jpg?auto=webp&s=17a29141eaa2519a70c9fe6ae76dc2f53d079f93', 'width': 1200}, 'variants': {}}]} |
Accelerate GPT Output Embedding computations with a Vector Index | 1 | 2024-12-12T13:46:49 | https://martinloretz.com/blog/vector-index/ | martinloretz | martinloretz.com | 1970-01-01T00:00:00 | 0 | {} | 1hclfkr | false | null | t3_1hclfkr | /r/LocalLLaMA/comments/1hclfkr/accelerate_gpt_output_embedding_computations_with/ | false | false | default | 1 | null |
|
TalkNexus: Ollama Multi-Model Chatbot & RAG Interface | 3 | Hi everyone,
I recently built TalkNexus, an open-source app that offers an accessible interface for interacting with all Ollama language models. It lets you download and select models to chat with in real time through an intuitive interface. It provides:
* Easy model management for downloading and switching between models;
* Real-time chat with any Ollama model through an intuitive interface;
* Document analysis capabilities powered by RAG system;
* Clean, responsive UI with streamed responses;
If you want to talk with the language models on their own, or leverage them for document analysis with AI assistance, for fun or productivity, with a clean UI and without touching the terminal, this might be interesting for you.
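Under the hood, the chat piece boils down to something like this (an illustrative sketch using the ollama Python client, not the app's actual code; the model name is a placeholder):

```
import ollama

def chat_once(model: str, history: list[dict], user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    # Stream the response piece by piece, like the app's streamed replies
    reply = ""
    for part in ollama.chat(model=model, messages=history, stream=True):
        reply += part["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
print(chat_once("llama3.2", history, "Summarize what RAG is in one sentence."))
```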
Note: To use the app, you'll need to run it locally. Check out the GitHub guide steps to do it.
Feel free to explore it and share your feedback, as it would be very appreciated.
Project Source: [GitHub](https://github.com/TsLu1s/talknexus)
| 2024-12-12T14:07:11 | https://www.reddit.com/r/LocalLLaMA/comments/1hclu2r/talknexus_ollama_multimodel_chatbot_rag_interface/ | TsLu1s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hclu2r | false | null | t3_1hclu2r | /r/LocalLLaMA/comments/1hclu2r/talknexus_ollama_multimodel_chatbot_rag_interface/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'WLpD2deBU0SYVrhYKRxynm9GgEcAI9e7hA8AtCHB69o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/27YwU_scir-f9uKyUZ1QyWmA-ObUzspJcGYCGmWQjeI.jpg?width=108&crop=smart&auto=webp&s=381fb06a72690b1ee8cb7c8262d066797673f47a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/27YwU_scir-f9uKyUZ1QyWmA-ObUzspJcGYCGmWQjeI.jpg?width=216&crop=smart&auto=webp&s=3eccb5a36dd03d7f176f59a8c412ce510ce26085', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/27YwU_scir-f9uKyUZ1QyWmA-ObUzspJcGYCGmWQjeI.jpg?width=320&crop=smart&auto=webp&s=93da20ac92c2347f79fe0b394b292b4d7696b02c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/27YwU_scir-f9uKyUZ1QyWmA-ObUzspJcGYCGmWQjeI.jpg?width=640&crop=smart&auto=webp&s=928cb0fcced423a0b4541b66a8073f463b53a786', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/27YwU_scir-f9uKyUZ1QyWmA-ObUzspJcGYCGmWQjeI.jpg?width=960&crop=smart&auto=webp&s=7fa61c8dffd42df207dc9999167482b97f91bac4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/27YwU_scir-f9uKyUZ1QyWmA-ObUzspJcGYCGmWQjeI.jpg?width=1080&crop=smart&auto=webp&s=5f70deef92f761b79f78a38cef9b1f590ca79987', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/27YwU_scir-f9uKyUZ1QyWmA-ObUzspJcGYCGmWQjeI.jpg?auto=webp&s=ae7a9e9db7c9e2c84d37dfa36575cfe797ce94e9', 'width': 1200}, 'variants': {}}]} |
Using runpod serverless for HF 72b Qwen model --> seeking help from gurus | 2 | Hey all, I'm reasonably new to this and tried loading a HF Qwen 2.5 72b variant on Runpod.
Requesting help from runpod veterans please!
Here's what i did:
1. Clicked serverless
2. Pasted the HF link for the model: [https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2)
3. Chose A100 (80GB) and 2 GPUs (choosing 1 GPU gave me an error message)
3.5 Added a MAX\_MODEL\_LENGTH setting of 20k tokens (previously I got an error message because I didn't set this explicitly and it was busted by the 128k default model context)
4. Clicked deploy
5. Clicked run ("hello world prompt")
6. It then started loading . Took about half and hour, and eventually just had a bunch of error messages, and the pod just kept running:
LOG output was somethhing like this:
4-12-12 21:44:18.390
[v73nvqgodhjqv6]
[info]
[1;36m(VllmWorkerProcess pid=229)[0;0m INFO 12-12 13:44:18 weight_utils.py:243] Using model weights format ['*.safetensors']\n
2024-12-12 21:44:18.380
[v73nvqgodhjqv6]
[info]
INFO 12-12 13:44:18 weight_utils.py:243] Using model weights format ['*.safetensors']\n
2024-12-12 21:44:17.960
[v73nvqgodhjqv6]
[info]
[1;36m(VllmWorkerProcess pid=229)[0;0m INFO 12-12 13:44:17 model_runner.py:1072] Starting to load model EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2...\n
2024-12-12 21:44:17.959
[v73nvqgodhjqv6]
[info]
INFO 12-12 13:44:17 model_runner.py:1072] Starting to load model EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2...\n
2024-12-12 21:44:17.941
[v73nvqgodhjqv6]
[info]
INFO 12-12 13:44:17 shm_broadcast.py:236] vLLM message queue communication handle: Handle(connect_ip='127.0.0.1', local_reader_ranks=[1], buffer=<vllm.distributed.device_communicators.shm_broadcast.ShmRingBuffer object at 0x7fc354c5e6e0>, local_subscribe_port=33823, remote_subscribe_port=None)\n
2024-12-12 21:44:17.936
[v73nvqgodhjqv6]
[warning]
[1;36m(VllmWorkerProcess pid=229)[0;0m WARNING 12-12 13:44:17 custom_all_reduce.py:143] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly.\n
2024-12-12 21:44:17.936
[v73nvqgodhjqv6]
[info]
[1;36m(VllmWorkerProcess pid=229)[0;0m INFO 12-12 13:44:17 custom_all_reduce_utils.py:242] reading GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_0,1.json\n
2024-12-12 21:44:17.936
[v73nvqgodhjqv6]
[warning]
WARNING 12-12 13:44:17 custom_all_reduce.py:143] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly.\n
2024-12-12 21:44:17.936
[v73nvqgodhjqv6]
[info]
INFO 12-12 13:44:17 custom_all_reduce_utils.py:242] reading GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_0,1.json\n
2024-12-12 21:44:01.399
[v73nvqgodhjqv6]
[info]
INFO 12-12 13:44:01 custom_all_reduce_utils.py:204] generating GPU P2P access cache in /root/.cache/vllm/gpu_p2p_access_cache_for_0,1.json\n
2024-12-12 21:44:00.944
[v73nvqgodhjqv6]
[info]
[1;36m(VllmWorkerProcess pid=229)[0;0m INFO 12-12 13:44:00 pynccl.py:69] vLLM is using nccl==2.21.5\n
2024-12-12 21:44:00.944
[v73nvqgodhjqv6]
[info]
INFO 12-12 13:44:00 pynccl.py:69] vLLM is using nccl==2.21.5\n
2024-12-12 21:44:00.944
[v73nvqgodhjqv6]
[info]
[1;36m(VllmWorkerProcess pid=229)[0;0m INFO 12-12 13:44:00 utils.py:960] Found nccl from library libnccl.so.2\n
2024-12-12 21:44:00.944
[v73nvqgodhjqv6]
[info]
INFO 12-12 13:44:00 utils.py:960] Found nccl from library libnccl.so.2\n
2024-12-12 21:43:59.357
[v73nvqgodhjqv6]
[info]
[1;36m(VllmWorkerProcess pid=229)[0;0m INFO 12-12 13:43:59 multiproc_worker_utils.py:215] Worker ready; awaiting tasks\n
2024-12-12 21:43:59.357
[v73nvqgodhjqv6]
[info]
[1;36m(VllmWorkerProcess pid=229)[0;0m INFO 12-12 13:43:59 selector.py:135] Using Flash Attention backend.\n
2024-12-12 21:43:59.313
[v73nvqgodhjqv6]
[info]
INFO 12-12 13:43:59 selector.py:135] Using Flash Attention backend.\n
2024-12-12 21:43:59.134
[v73nvqgodhjqv6]
[info]
INFO 12-12 13:43:59 custom_cache_manager.py:17] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager\n
2024-12-12 21:43:59.120
[v73nvqgodhjqv6]
[warning]
WARNING 12-12 13:43:59 multiproc_gpu_executor.py:56] Reducing Torch parallelism from 252 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.\n
2024-12-12 21:43:58.223 [v73nvqgodhjqv6] [info] INFO 12-12 13:43:58 llm_engine.py:249] Initializing an LLM engine (v0.6.4) with config: model='EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2', speculative_config=None, tokenizer='EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=20000, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=2, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2, num_scheduler_steps=1, chunked_prefill_enabled=False multi_step_stream_outputs=True, enable_prefix_caching=False, use_async_output_proc=True, use_cached_outputs=False, chat_template_text_format=string, mm_processor_kwargs=None, pooler_config=None)
2024-12-12 21:43:58.218 [v73nvqgodhjqv6] [info] INFO 12-12 13:43:58 config.py:1020] Defaulting to use mp for distributed inference
2024-12-12 21:43:58.217 [v73nvqgodhjqv6] [info] INFO 12-12 13:43:58 config.py:350] This model supports multiple tasks: {'embedding', 'generate'}. Defaulting to 'generate'.
2024-12-12 21:43:58.217 [v73nvqgodhjqv6] [info] tokenizer_name_or_path: EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2, tokenizer_revision: None, trust_remote_code: False
2024-12-12 21:43:57.097 [v73nvqgodhjqv6] [info] engine.py :26 2024-12-12 13:43:49,494 Engine args: AsyncEngineArgs(model='EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2', served_model_name=None, tokenizer='EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2', task='auto', skip_tokenizer_init=False, tokenizer_mode='auto', chat_template_text_format='string', trust_remote_code=False, allowed_local_media_path='', download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, seed=0, max_model_len=20000, worker_use_ray=False, distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=2, max_parallel_loading_workers=None, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager='true', swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.95, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, revision=None, code_revision=None, rope_scaling=None, rope_theta=None, hf_overrides=None, tokenizer_revision=None, quantization=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, fully_sharded_loras=False, lora_extra_vocab_size=256, long_lora_scaling_factors=None, lora_dtype='auto', max_cpu_loras=None, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, ray_workers_use_nsight=False, num_gpu_blocks_override=None, num_lookahead_slots=0, model_loader_extra_config=None, ignore_patterns=None, preemption_mode=None, scheduler_delay_factor=0.0, enable_chunked_prefill=None, guided_decoding_backend='outlines', speculative_model=None, speculative_model_quantization=None, speculative_draft_tensor_parallel_size=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, qlora_adapter_name_or_path=None, disable_logprobs_during_spec_decoding=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, disable_log_requests=False)
2024-12-12 21:42:39.655 [v73nvqgodhjqv6] [info] warnings.warn('resource_tracker: There appear to be %d '
2024-12-12 21:42:39.655 [v73nvqgodhjqv6] [info] /usr/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
2024-12-12 21:34:02.450 [v73nvqgodhjqv6] [info] (VllmWorkerProcess pid=229) INFO 12-12 13:34:02 weight_utils.py:243] Using model weights format ['*.safetensors']
2024-12-12 21:34:02.440 [v73nvqgodhjqv6] [info] INFO 12-12 13:34:02 weight_utils.py:243] Using model weights format ['*.safetensors']
2024-12-12 21:34:02.011 [v73nvqgodhjqv6] [info] (VllmWorkerProcess pid=229) INFO 12-12 13:34:02 model_runner.py:1072] Starting to load model EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2...
2024-12-12 21:34:02.010 [v73nvqgodhjqv6] [info] INFO 12-12 13:34:02 model_runner.py:1072] Starting to load model EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2...
2024-12-12 21:34:01.989 [v73nvqgodhjqv6] [info] INFO 12-12 13:34:01 shm_broadcast.py:236] vLLM message queue communication handle: Handle(connect_ip='127.0.0.1', local_reader_ranks=[1], buffer=<vllm.distributed.device_communicators.shm_broadcast.ShmRingBuffer object at 0x7f6aba662620>, local_subscribe_port=57263, remote_subscribe_port=None)
2024-12-12 21:34:01.980 [v73nvqgodhjqv6] [warning] (VllmWorkerProcess pid=229) WARNING 12-12 13:34:01 custom_all_reduce.py:143] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly.
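For reference, the engine arguments in that log correspond to roughly this plain vLLM setup (just a sketch of what the serverless worker appears to be doing — setting disable_custom_all_reduce=True is my guess based on the warning above, not something I've confirmed fixes the hang):

```python
# Rough local equivalent of the engine args shown in the log above (not the
# actual RunPod worker code). disable_custom_all_reduce=True is an assumption
# taken from the warning; tensor_parallel_size=2 matches the two-GPU pod.
from vllm import LLM, SamplingParams

llm = LLM(
    model="EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2",
    tensor_parallel_size=2,           # shard the 72B weights across both GPUs
    max_model_len=20000,              # same context limit as in the log
    gpu_memory_utilization=0.95,
    disable_custom_all_reduce=True,   # silences the P2P warning, per the log hint
)

outputs = llm.generate(["Hello!"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```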
I tried Googling and searching YouTube for tutorials, but haven't found much.
Can anyone point me in the right direction to get this going, please?
Thanks!
| 2024-12-12T14:17:07 | https://www.reddit.com/r/LocalLLaMA/comments/1hcm1ic/using_runpod_serverless_for_hf_72b_qwen_model/ | sprockettyz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcm1ic | false | null | t3_1hcm1ic | /r/LocalLLaMA/comments/1hcm1ic/using_runpod_serverless_for_hf_72b_qwen_model/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '1jPH6yKbY6aA_lmjy-h2_-nv-3PEF7MKN7ICu0qPwBA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JZ2Em0t-AGVzxfzCVZt_PNM7AL4iJxJF3BVZDnPoOY8.jpg?width=108&crop=smart&auto=webp&s=86f052ac533d3d44e4ac6522e1c0b660b0328c0a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JZ2Em0t-AGVzxfzCVZt_PNM7AL4iJxJF3BVZDnPoOY8.jpg?width=216&crop=smart&auto=webp&s=e0d294bdc3d0cdbeeb068cd684ac756fcf13236c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JZ2Em0t-AGVzxfzCVZt_PNM7AL4iJxJF3BVZDnPoOY8.jpg?width=320&crop=smart&auto=webp&s=d7a16560afdc18d62ae3366d25f418bc559f07b2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JZ2Em0t-AGVzxfzCVZt_PNM7AL4iJxJF3BVZDnPoOY8.jpg?width=640&crop=smart&auto=webp&s=e75943c68420b99cfd0db12f4612432fd6089f8f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JZ2Em0t-AGVzxfzCVZt_PNM7AL4iJxJF3BVZDnPoOY8.jpg?width=960&crop=smart&auto=webp&s=796667e724f6790ecc61de9019611eb7202c59bf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JZ2Em0t-AGVzxfzCVZt_PNM7AL4iJxJF3BVZDnPoOY8.jpg?width=1080&crop=smart&auto=webp&s=bd3f8f7542ff0654fee8ec58cdd1e0b27f33e320', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JZ2Em0t-AGVzxfzCVZt_PNM7AL4iJxJF3BVZDnPoOY8.jpg?auto=webp&s=11a2760d5919d9a7ed39a681e6fd7bf5c3643035', 'width': 1200}, 'variants': {}}]} |
Buy new dual 3090 machine now, or wait til after CES for new Nvidia release for LLM PC? | 1 | [removed] | 2024-12-12T14:23:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hcm5ub/buy_new_dual_3090_machine_now_or_wait_til_after/ | No-Emu9365 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcm5ub | false | null | t3_1hcm5ub | /r/LocalLLaMA/comments/1hcm5ub/buy_new_dual_3090_machine_now_or_wait_til_after/ | false | false | self | 1 | null |
What ASR model does WhatsApp use for audio transcription? | 1 | I just noticed the transcription option for audio messages on WhatsApp; it runs locally and it's surprisingly good.
Does anyone know if it's a proprietary model or an open-source one?
| 2024-12-12T14:27:29 | https://www.reddit.com/r/LocalLLaMA/comments/1hcm94c/what_asr_model_uses_whatsapp_for_the_audio/ | JorG941 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcm94c | false | null | t3_1hcm94c | /r/LocalLLaMA/comments/1hcm94c/what_asr_model_uses_whatsapp_for_the_audio/ | false | false | self | 1 | null |
Scale to Zero: Optimize GPU and CPU Workloads | 1 | 2024-12-12T15:00:28 | https://www.koyeb.com/blog/scale-to-zero-optimize-gpu-and-cpu-workloads | Plus_Ad7909 | koyeb.com | 1970-01-01T00:00:00 | 0 | {} | 1hcmxbq | false | null | t3_1hcmxbq | /r/LocalLLaMA/comments/1hcmxbq/scale_to_zero_optimize_gpu_and_cpu_workloads/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'pYxQsaZ6yHgS2mQPVVNA2lu4cr3cTXUn7L9sYIn3688', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/nbnqWRpTQaQNViMeTZpkXU2PkxmhTuDNYnLfJDcygK8.jpg?width=108&crop=smart&auto=webp&s=d6b22f3d9f19c4ccd33536be5ad324475b5818e3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/nbnqWRpTQaQNViMeTZpkXU2PkxmhTuDNYnLfJDcygK8.jpg?width=216&crop=smart&auto=webp&s=02a6c57ad69354cb618191bb5338043345d9be05', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/nbnqWRpTQaQNViMeTZpkXU2PkxmhTuDNYnLfJDcygK8.jpg?width=320&crop=smart&auto=webp&s=e0f7e832d8f5a9012a041078ff28c287aca47e77', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/nbnqWRpTQaQNViMeTZpkXU2PkxmhTuDNYnLfJDcygK8.jpg?width=640&crop=smart&auto=webp&s=39ed5fb7c6e2eb14722797279b3cd7a0c12c887a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/nbnqWRpTQaQNViMeTZpkXU2PkxmhTuDNYnLfJDcygK8.jpg?width=960&crop=smart&auto=webp&s=a2d362320810fce9593d4a94e5c53fd24e2ca72d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/nbnqWRpTQaQNViMeTZpkXU2PkxmhTuDNYnLfJDcygK8.jpg?width=1080&crop=smart&auto=webp&s=e0be59f212287411cc8c3cf7cad8a1f0743b5809', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/nbnqWRpTQaQNViMeTZpkXU2PkxmhTuDNYnLfJDcygK8.jpg?auto=webp&s=dd992b28467cbc961f6fe52e83a8b1925e1dd933', 'width': 1600}, 'variants': {}}]} |
||
John Backflip, the backflipping legend | 1 | [removed] | 2024-12-12T15:09:09 | ghosted_2020 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hcn4c0 | false | null | t3_1hcn4c0 | /r/LocalLLaMA/comments/1hcn4c0/john_backflip_the_backflipping_legend/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'qox6E-P_xVfmzKaOLirbVLd3_2NjneFvir-Hw32-NKo', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/tust4p6aof6e1.png?width=108&crop=smart&auto=webp&s=bebc741c775ac7d9e9f21996b8bfa05908600dab', 'width': 108}, {'height': 289, 'url': 'https://preview.redd.it/tust4p6aof6e1.png?width=216&crop=smart&auto=webp&s=2440d77e80dc3124466d1d6c3a0444e1324d0253', 'width': 216}, {'height': 429, 'url': 'https://preview.redd.it/tust4p6aof6e1.png?width=320&crop=smart&auto=webp&s=f83232f6d7f3ff591a3314ac50ab372b450b6a2d', 'width': 320}, {'height': 858, 'url': 'https://preview.redd.it/tust4p6aof6e1.png?width=640&crop=smart&auto=webp&s=e841db49450de4d11da1150c7e34b20ba50bc27c', 'width': 640}, {'height': 1288, 'url': 'https://preview.redd.it/tust4p6aof6e1.png?width=960&crop=smart&auto=webp&s=6da4b9d850d200b5450aafbb5c54501936a4a72e', 'width': 960}, {'height': 1449, 'url': 'https://preview.redd.it/tust4p6aof6e1.png?width=1080&crop=smart&auto=webp&s=2909bf8e40f0698ed759ac12d1ada3790bc29d56', 'width': 1080}], 'source': {'height': 1449, 'url': 'https://preview.redd.it/tust4p6aof6e1.png?auto=webp&s=ee6f5cef6555dce3136955621b6670d2b8971d46', 'width': 1080}, 'variants': {}}]} |
||
opinions on apple for self hosting large models | 6 | Hey,
My use is primarily reading code. I got really excited about the new Mac mini having 64GB of RAM, since it's considerably cheaper than an equivalent Nvidia system with something like 4 cards. I had the impression that more VRAM matters more than more FLOP/s.
However, after testing it, it's kind of unexciting. It's the first time I'm running large models like Llama 3.3 because my GPU can't fit them, so maybe my expectations were too high?
- it's still not as good as Claude, so for complex queries I still have to use Claude
- qwen2.5-coder:14b-instruct-q4_K_M fits on my GPU just fine and seems not that much worse
- the M4 Pro is not fast enough to run it at "chat speed", so you'd only use it for long-running tasks
- but for long-running tasks I can just use a Ryzen CPU at half the speed
- specialized models that run fast enough on the M4 can run even faster on some cheaper Nvidia card
- 64GB is already not enough anyway to run the really, really big models
Am I holding it wrong, or is self-hosting large models really kind of pointless?
| 2024-12-12T15:09:16 | https://www.reddit.com/r/LocalLLaMA/comments/1hcn4f1/opinions_on_apple_for_self_hosting_large_models/ | arvidep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcn4f1 | false | null | t3_1hcn4f1 | /r/LocalLLaMA/comments/1hcn4f1/opinions_on_apple_for_self_hosting_large_models/ | false | false | self | 6 | null |
Wumlla - Use Discord chat threads as the UI for your local LLM inference server | 1 | [removed] | 2024-12-12T15:11:19 | neilthegreatest | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hcn60e | false | null | t3_1hcn60e | /r/LocalLLaMA/comments/1hcn60e/wumlla_use_discord_chat_threads_as_the_ui_for/ | false | false | 1 | {'enabled': True, 'images': [{'id': '49CCzticJHtx1Wd8fQ4uoQDvVUDdP4zpKt8G-GdQlhs', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/n1slsx1oof6e1.png?width=108&crop=smart&auto=webp&s=50fb3214d0d5c20d0daa84219db739dcba44165d', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/n1slsx1oof6e1.png?width=216&crop=smart&auto=webp&s=e2056a85263de8e7e7b927994735623fb622227a', 'width': 216}, {'height': 198, 'url': 'https://preview.redd.it/n1slsx1oof6e1.png?width=320&crop=smart&auto=webp&s=cbc3e4a59615224dcde6064d6a14e8286af06147', 'width': 320}, {'height': 396, 'url': 'https://preview.redd.it/n1slsx1oof6e1.png?width=640&crop=smart&auto=webp&s=7dc54b621a9167f9045329ee3724892d31968cd2', 'width': 640}, {'height': 594, 'url': 'https://preview.redd.it/n1slsx1oof6e1.png?width=960&crop=smart&auto=webp&s=ce379863bebbfe5429fadb32f16e81fdbafd83bd', 'width': 960}, {'height': 668, 'url': 'https://preview.redd.it/n1slsx1oof6e1.png?width=1080&crop=smart&auto=webp&s=8e1063a84bad79552f1c86e0954dbca843930ff5', 'width': 1080}], 'source': {'height': 1022, 'url': 'https://preview.redd.it/n1slsx1oof6e1.png?auto=webp&s=f874a47c76a074e9b9d1a44ca607b26e35d734dd', 'width': 1651}, 'variants': {}}]} |
||
AI Engineering Lessons from Building Pulumi Copilot | 133 | 2024-12-12T15:12:07 | https://www.pulumi.com/blog/copilot-lessons/ | agbell | pulumi.com | 1970-01-01T00:00:00 | 0 | {} | 1hcn6pq | false | null | t3_1hcn6pq | /r/LocalLLaMA/comments/1hcn6pq/ai_engineering_lessons_from_building_pulumi/ | false | false | 133 | {'enabled': False, 'images': [{'id': 'jOx_OyvQqOaNyXamlDJNy1Ku-D39Q1FFrI-5M8HBvyc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZDVpQ23oLSnqNBqwsts8sPhsQnAeCs-REmLr308M6Xo.jpg?width=108&crop=smart&auto=webp&s=4c96f4006a8bc12cd0cc43d3f97f26cba4cd2b2b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZDVpQ23oLSnqNBqwsts8sPhsQnAeCs-REmLr308M6Xo.jpg?width=216&crop=smart&auto=webp&s=3c14fb19c4a50d2b0ef03dc165c01ace9988d0eb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZDVpQ23oLSnqNBqwsts8sPhsQnAeCs-REmLr308M6Xo.jpg?width=320&crop=smart&auto=webp&s=6d2171fd62ffb2864b2cfdbab071468d825bd36d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZDVpQ23oLSnqNBqwsts8sPhsQnAeCs-REmLr308M6Xo.jpg?width=640&crop=smart&auto=webp&s=a40323c64adfe7f93903fcdabd9a61a6226be1e1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZDVpQ23oLSnqNBqwsts8sPhsQnAeCs-REmLr308M6Xo.jpg?width=960&crop=smart&auto=webp&s=9c89eb8c1bbaacc5420bb43d71bb7a7212c07a3c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZDVpQ23oLSnqNBqwsts8sPhsQnAeCs-REmLr308M6Xo.jpg?width=1080&crop=smart&auto=webp&s=911528c339eaff0b4e16d67c961f36ddda6ddfc0', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/ZDVpQ23oLSnqNBqwsts8sPhsQnAeCs-REmLr308M6Xo.jpg?auto=webp&s=aeb3254827a2df607c108f7c12b9fef68816c949', 'width': 1280}, 'variants': {}}]} |
||
This podcast was created using the new 'Stream Realtime' from the new Google A.I. Studio | 1 | [removed] | 2024-12-12T15:22:16 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1hcneq4 | false | null | t3_1hcneq4 | /r/LocalLLaMA/comments/1hcneq4/this_podcast_was_created_using_the_new_stream/ | false | false | default | 1 | null |
||
OpenAI o1 vs Claude 3.5 Sonnet: Which gives the best bang for your $20? | 155 | OpenAI unveiled the full O1 ($20) and O1 Pro ($200) plans a week ago, and the initial buzz is starting to settle.
O1 Pro is in a different price tier; most people wouldn’t even consider subscribing. The real battle is in the $20 space with the 3.5 Sonnet.
So, I tested both models on multiple questions that [o1-preview](https://composio.dev/blog/openai-o1-preview-a-detailed-analysis/) failed at, plus a few more, to see which subscription I should keep and which I should drop.
The questions covered Mathematics and reasoning, Coding, and Creative writing. For interesting notes on o1 and personal benchmark tests, take a look at my article: [OpenAI o1 vs Claude 3.5 Sonnet.](https://composio.dev/blog/openai-o1-vs-claude-3-5-sonnet/)
Here are the key observations.
# Where does o1 shine?
* Complex reasoning and mathematics are the fortes of o1. It is just much better than any available options at this tier. And o1 could solve all the questions o1-preview struggled or needed assistance with.
* If you don’t want to spend $200, this is the best for math and reasoning. It will cover 90% of your use cases, except some Phd level stuff.
# Sonnet is still the better deal for coding.
* o1 certainly codes better than o1-preview, but 3.5 Sonnet is still better at coding in general, considering the trade-off between speed and accuracy.
* Also, the infamous rate limit of 50 messages/week can be a deal breaker if coding is the primary requirement.
# Who has more personality, and who has IQ?
* Claude 3.5 Sonnet still has the best personality among the big boys, but o1 has more IQ.
* Claude takes the cake if you need an assistant who feels like talking to another person, and o1 if you need a high-IQ but agreeable intern.
# Which subscription to ditch?
* If you need models exclusively for coding, Claude offers better value.
* For math, reasoning, and tasks that aren't coding-intensive, consider ChatGPT, but keep an eye on the per-week quota.
Let me know your thoughts on it and which one you liked more, and maybe share your personal benchmarking questions to vibe-check new models. | 2024-12-12T15:24:15 | https://www.reddit.com/r/LocalLLaMA/comments/1hcngb7/openai_o1_vs_claude_35_sonnet_which_gives_the/ | SunilKumarDash | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcngb7 | false | null | t3_1hcngb7 | /r/LocalLLaMA/comments/1hcngb7/openai_o1_vs_claude_35_sonnet_which_gives_the/ | false | false | self | 155 | {'enabled': False, 'images': [{'id': '84bMMaByaA2k9AYWlAxNth76IG-t6nKvGLtPwuFhAkE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dIFWz-tOO_BqLB_HV-myzfMWMUQZVbYhr3XL40wb4Ug.jpg?width=108&crop=smart&auto=webp&s=526fd99bfbce13fea9852debd511b175d21aa525', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dIFWz-tOO_BqLB_HV-myzfMWMUQZVbYhr3XL40wb4Ug.jpg?width=216&crop=smart&auto=webp&s=062c5d2e1739ee73f091110e52d24324fa6bcf14', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dIFWz-tOO_BqLB_HV-myzfMWMUQZVbYhr3XL40wb4Ug.jpg?width=320&crop=smart&auto=webp&s=3a3fc1bd7dc0cc2b1821bde3988cf0f4405d5034', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dIFWz-tOO_BqLB_HV-myzfMWMUQZVbYhr3XL40wb4Ug.jpg?width=640&crop=smart&auto=webp&s=fa6d115cc689e03846152c0cd0d2eb886ca67489', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dIFWz-tOO_BqLB_HV-myzfMWMUQZVbYhr3XL40wb4Ug.jpg?width=960&crop=smart&auto=webp&s=653c4af305f9b13eb6e2d73eaa3ccaad3b6967b8', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/dIFWz-tOO_BqLB_HV-myzfMWMUQZVbYhr3XL40wb4Ug.jpg?auto=webp&s=5cca950a8d5627ba41e3efa7dacf416ef795a404', 'width': 1024}, 'variants': {}}]} |
How many of you know that we can create two speaker podcast using the new Stream Realtime feature (Google AI Studio)? | 1 | [removed] | 2024-12-12T15:26:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hcni20/how_many_of_you_know_that_we_can_create_two/ | Busy-Basket-5291 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcni20 | false | null | t3_1hcni20 | /r/LocalLLaMA/comments/1hcni20/how_many_of_you_know_that_we_can_create_two/ | false | false | self | 1 | null |
Help needed | 1 | [removed] | 2024-12-12T15:26:33 | https://www.reddit.com/r/LocalLLaMA/comments/1hcni69/help_needed/ | Far_Curve_789 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcni69 | false | null | t3_1hcni69 | /r/LocalLLaMA/comments/1hcni69/help_needed/ | false | false | self | 1 | null |
Local TTS bad output compared to online examples? | 5 | I don't know if anyone has run into this issue before, but running any TTS model on my RTX 3090 produces horrible audio. I've tried Bark, XTTS-V2, MeloTTS. I follow the setup step-by-step, and even use the example scripts to generate audio. If I compare it to the examples on their GitHubs, it's nothing alike. It's hollow, noisy, cuts off too soon and sounds stilted, not natural at all.
Has anyone else have this problem? | 2024-12-12T15:27:04 | https://www.reddit.com/r/LocalLLaMA/comments/1hcnik6/local_tts_bad_output_compared_to_online_examples/ | MonoNova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcnik6 | false | null | t3_1hcnik6 | /r/LocalLLaMA/comments/1hcnik6/local_tts_bad_output_compared_to_online_examples/ | false | false | self | 5 | null |
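For reference, this is roughly the XTTS-v2 call I'm testing — a minimal sketch, with a placeholder path for the reference clip:

```python
# Minimal XTTS-v2 invocation via Coqui TTS; speaker_wav path is a placeholder.
import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

tts.tts_to_file(
    text="This is a short test sentence for voice cloning.",
    speaker_wav="reference_voice.wav",  # ~10 s of clean reference audio
    language="en",
    file_path="output.wav",
)
```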
Opinions on Eliza and Ai16z | 1 | [removed] | 2024-12-12T15:55:43 | https://www.reddit.com/r/LocalLLaMA/comments/1hco57m/opinions_on_eliza_and_ai16z/ | Gatuno619 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hco57m | false | null | t3_1hco57m | /r/LocalLLaMA/comments/1hco57m/opinions_on_eliza_and_ai16z/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dOLG0BRIsxlg6xKlMXA1agz1Gk21P-Ei0TdEiqLEDOQ', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/RaGuCgJNkvLQQi7qzNnnGaAIsSsYDHlLkwl6YxanTZo.jpg?width=108&crop=smart&auto=webp&s=3414f60ab791e43bb7c424a678ab1b16bd7e3275', 'width': 108}, {'height': 81, 'url': 'https://external-preview.redd.it/RaGuCgJNkvLQQi7qzNnnGaAIsSsYDHlLkwl6YxanTZo.jpg?width=216&crop=smart&auto=webp&s=34d9d2bf6e3cf1e0fcdbaa7bd64c7deae4d2b175', 'width': 216}, {'height': 120, 'url': 'https://external-preview.redd.it/RaGuCgJNkvLQQi7qzNnnGaAIsSsYDHlLkwl6YxanTZo.jpg?width=320&crop=smart&auto=webp&s=5cf00bddbb89377e9513ad6f11d8b0442f9a5d4c', 'width': 320}, {'height': 240, 'url': 'https://external-preview.redd.it/RaGuCgJNkvLQQi7qzNnnGaAIsSsYDHlLkwl6YxanTZo.jpg?width=640&crop=smart&auto=webp&s=79ecaee25a50c215dba6dfe7c167693540f71620', 'width': 640}, {'height': 360, 'url': 'https://external-preview.redd.it/RaGuCgJNkvLQQi7qzNnnGaAIsSsYDHlLkwl6YxanTZo.jpg?width=960&crop=smart&auto=webp&s=8a3bbcd7bd48707f28cb3d611fb1942802871de5', 'width': 960}, {'height': 405, 'url': 'https://external-preview.redd.it/RaGuCgJNkvLQQi7qzNnnGaAIsSsYDHlLkwl6YxanTZo.jpg?width=1080&crop=smart&auto=webp&s=36719b89eaa9472d614e19f59fdc50153e062161', 'width': 1080}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/RaGuCgJNkvLQQi7qzNnnGaAIsSsYDHlLkwl6YxanTZo.jpg?auto=webp&s=c1a05474f620315685ac46030ad43da751c32e11', 'width': 2048}, 'variants': {}}]} |
Machine Learning Rigs: Did Your Custom Build Actually Give You a Competitive Edge? 🤔 | 1 | [removed] | 2024-12-12T16:06:21 | https://www.reddit.com/r/LocalLLaMA/comments/1hcoed7/machine_learning_rigs_did_your_custom_build/ | zenMonkLoveWisdom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcoed7 | false | null | t3_1hcoed7 | /r/LocalLLaMA/comments/1hcoed7/machine_learning_rigs_did_your_custom_build/ | false | false | self | 1 | null |
llama 70b inference price. | 1 | [removed] | 2024-12-12T16:25:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hcotm3/llama_70b_inference_price/ | definedb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcotm3 | false | null | t3_1hcotm3 | /r/LocalLLaMA/comments/1hcotm3/llama_70b_inference_price/ | false | false | self | 1 | null |
llama3.3 70b inference price | 1 | [removed] | 2024-12-12T16:26:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hcouw8/llama33_70b_inference_price/ | definedb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcouw8 | false | null | t3_1hcouw8 | /r/LocalLLaMA/comments/1hcouw8/llama33_70b_inference_price/ | false | false | self | 1 | null |
Conductor 🚂🤖🪄 Orchestrate Workflows w/ Python, Regex & TDD | 1 | 2024-12-12T16:33:16 | https://github.com/rabbidave/conductor | Fun_Concept5414 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1hcp091 | false | null | t3_1hcp091 | /r/LocalLLaMA/comments/1hcp091/conductor_orchestrate_workflows_w_python_regex_tdd/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'oX3IMMg0Ln6aCA3oBjmwOWjAHc9jshWIeezCj5E88gU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QLScLv1e1mPpdBTsL4d_x-9OlNFoexQNFjqmOb5USBg.jpg?width=108&crop=smart&auto=webp&s=bbce2a23806c0d855479222cf0642bdd4d818827', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QLScLv1e1mPpdBTsL4d_x-9OlNFoexQNFjqmOb5USBg.jpg?width=216&crop=smart&auto=webp&s=6718d27916eace771c2bd764425fae7d568b1369', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QLScLv1e1mPpdBTsL4d_x-9OlNFoexQNFjqmOb5USBg.jpg?width=320&crop=smart&auto=webp&s=389d33ef4ab65422b2cb0981be30a44418be2d81', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QLScLv1e1mPpdBTsL4d_x-9OlNFoexQNFjqmOb5USBg.jpg?width=640&crop=smart&auto=webp&s=3c6198c736894500d66a4465e98a907bdfe1e5f6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QLScLv1e1mPpdBTsL4d_x-9OlNFoexQNFjqmOb5USBg.jpg?width=960&crop=smart&auto=webp&s=cff7c46fa60d3557e0bfaa37c2f1d52cde48f007', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QLScLv1e1mPpdBTsL4d_x-9OlNFoexQNFjqmOb5USBg.jpg?width=1080&crop=smart&auto=webp&s=0e9c2684de60ac1f6a5911c86afbe564acb72d3c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QLScLv1e1mPpdBTsL4d_x-9OlNFoexQNFjqmOb5USBg.jpg?auto=webp&s=a7a2f5bf44230689db0e7c7a1ae87b331c1b0fc3', 'width': 1200}, 'variants': {}}]} |
||
[HOLIDAY PROMO] Perplexity AI PRO - 1 YEAR PLAN OFFER - 75% OFF | 1 | [removed] | 2024-12-12T16:42:03 | MReus11R | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hcp7al | false | null | t3_1hcp7al | /r/LocalLLaMA/comments/1hcp7al/holiday_promo_perplexity_ai_pro_1_year_plan_offer/ | false | false | 1 | {'enabled': True, 'images': [{'id': '6K92eyvHaCZPLV61FmUuk1PEFjzQf4_qsFseMImkJ2k', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/pvd2dppu4g6e1.jpeg?width=108&crop=smart&auto=webp&s=f5e887e6ee5c73591ba434afa92569369e0a3239', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/pvd2dppu4g6e1.jpeg?width=216&crop=smart&auto=webp&s=3b18e3f6db74a270657f93bac030e8d217b5bf7c', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/pvd2dppu4g6e1.jpeg?width=320&crop=smart&auto=webp&s=cb303fe90bd88ba7aa0717ce993911f2fc335fda', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/pvd2dppu4g6e1.jpeg?width=640&crop=smart&auto=webp&s=c8a70dba25969d28b566f39acd9e9dc4c1bf131a', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/pvd2dppu4g6e1.jpeg?width=960&crop=smart&auto=webp&s=ff40407b138664e3b4ae6f1b38c40ca9408a7111', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/pvd2dppu4g6e1.jpeg?width=1080&crop=smart&auto=webp&s=ffe9197a98b63b33cc8811ff6f255cb96c99c009', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://preview.redd.it/pvd2dppu4g6e1.jpeg?auto=webp&s=7088ebc9294885ab83988cf3b3206f15f8ec3a83', 'width': 2000}, 'variants': {}}]} |
||
U-MATH: New Uni-level math benchmark; Gemini is goat / Qwen is king | 95 | 2024-12-12T16:47:13 | https://www.reddit.com/gallery/1hcpbdz | k4black | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1hcpbdz | false | null | t3_1hcpbdz | /r/LocalLLaMA/comments/1hcpbdz/umath_new_unilevel_math_benchmark_gemini_is_goat/ | false | false | 95 | null |
||
Koboldcpp unbearably slow on a 7900XT? | 1 | So I switched from a 4070 to a 7900XT, and the performance gap is so large that I believe something must be wrong.
I'm using "magnum-v4-22b-IQ4_XS.gguf" for the model and Kobold is set to Vulkan, but generation is unbearably slow. Without loading a character or anything like that, I'll simply ask it to tell me a joke and it will take about 3 minutes to output a simple one, e.g.:
> "Here's a silly joke for you:
> What do you call a bear with no teeth?
A gummy bear!
> Hope that gives you a little chuckle"
That can't be right? I know Nvidia cards generally perform better, but I've seen plenty of discussion about people using AMD and the performance hit isn't... that large. On my 4070 that would have been near-instant output. So. Uh?
Any ideas? | 2024-12-12T16:59:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hcplra/koboldcpp_unbearably_slow_on_a_7900xt/ | PangurBanTheCat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcplra | false | null | t3_1hcplra | /r/LocalLLaMA/comments/1hcplra/koboldcpp_unbearably_slow_on_a_7900xt/ | false | false | self | 1 | null |
Desktop-based Voice Control with Gemini 2.0 Flash | 1 | 2024-12-12T17:01:29 | https://v.redd.it/y98288998g6e1 | codebrig | /r/LocalLLaMA/comments/1hcpnj4/desktopbased_voice_control_with_gemini_20_flash/ | 1970-01-01T00:00:00 | 0 | {} | 1hcpnj4 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/y98288998g6e1/DASHPlaylist.mpd?a=1736744524%2CODc2Mjc4MDE5NTVjNDAzYTE5MzFmZGEwZTFlYmFhNzBiM2YxOTY2OWY5NjBkOTFjN2EyOGZlNjAyN2NlMGZkNQ%3D%3D&v=1&f=sd', 'duration': 87, 'fallback_url': 'https://v.redd.it/y98288998g6e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/y98288998g6e1/HLSPlaylist.m3u8?a=1736744524%2CMDY3ZTE5Y2U0OGY2NGY0YmEwNDIyZWE3ZmJlZGE5NzY0NTA3MTRkNWY5NzYxYTM0ZTdhYmNjNWQ4NTA2YWJiYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/y98288998g6e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1hcpnj4 | /r/LocalLLaMA/comments/1hcpnj4/desktopbased_voice_control_with_gemini_20_flash/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bDNua2wzOTk4ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bDNua2wzOTk4ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?width=108&crop=smart&format=pjpg&auto=webp&s=9b7a418dbfaba5605acff677419f9d25b1f7d34b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bDNua2wzOTk4ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?width=216&crop=smart&format=pjpg&auto=webp&s=0cb6cca06b33592937b9e5be9391b58bb4feb799', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bDNua2wzOTk4ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?width=320&crop=smart&format=pjpg&auto=webp&s=5a621b62b8d7a8b87cbc691291ac8de1ed3aadd2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bDNua2wzOTk4ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?width=640&crop=smart&format=pjpg&auto=webp&s=24f2b1697e62467ebba5969d438bcb074c8ddc45', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bDNua2wzOTk4ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?width=960&crop=smart&format=pjpg&auto=webp&s=f5d5a69653a2918e2b08162b62593273701e2612', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bDNua2wzOTk4ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f0808cc2b548c57ba11d841a27aafc55a52cf8a0', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bDNua2wzOTk4ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?format=pjpg&auto=webp&s=5891ca189632636df8b6d11ebdd40deb443483e7', 'width': 1920}, 'variants': {}}]} |
||
Desktop-based Voice Control with Gemini 2.0 Flash | 133 | 2024-12-12T17:03:34 | https://v.redd.it/6e8c6u3n8g6e1 | codebrig | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hcppft | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6e8c6u3n8g6e1/DASHPlaylist.mpd?a=1736615029%2CZTVhNDgwNmMyMjI4M2YwYmQyNDI5ZDhiOTA4MmJiNzg0MDdiODU4Y2JjZDkyMjJlMmQ0ZTBlMjljNjA2YjNlYw%3D%3D&v=1&f=sd', 'duration': 87, 'fallback_url': 'https://v.redd.it/6e8c6u3n8g6e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6e8c6u3n8g6e1/HLSPlaylist.m3u8?a=1736615029%2CYTJhNTM0YjdmZDljYzNkMGI3Y2U1ZDI3YTZkYzUyNmQzNjc5ZWQwMGJmOWNhMDViNzRkOTEzOGUzZWI5ZjBmMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6e8c6u3n8g6e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1hcppft | /r/LocalLLaMA/comments/1hcppft/desktopbased_voice_control_with_gemini_20_flash/ | false | false | 133 | {'enabled': False, 'images': [{'id': 'Y3p4d3FyM244ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Y3p4d3FyM244ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?width=108&crop=smart&format=pjpg&auto=webp&s=ceb1638c954c613dd929b50643cf8410d1803118', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Y3p4d3FyM244ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?width=216&crop=smart&format=pjpg&auto=webp&s=93db73fc34299bbbe5a8f5647364c4ea73675c1d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Y3p4d3FyM244ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?width=320&crop=smart&format=pjpg&auto=webp&s=004f5478ebb55ec6b53bbfe9f0febd229bbfc515', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Y3p4d3FyM244ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?width=640&crop=smart&format=pjpg&auto=webp&s=326415f61cd7d97819a505238e69438c4d4f90a0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Y3p4d3FyM244ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?width=960&crop=smart&format=pjpg&auto=webp&s=29e85d30ece436c7d9a716219a669a0d79e54708', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Y3p4d3FyM244ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3deed07228278ba5616dddb6ef290e2d38216619', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Y3p4d3FyM244ZzZlMbOJTYlAlM2K-dEyjbFcemsI5i9j3yJcymxB41M05Vzv.png?format=pjpg&auto=webp&s=40908e2b5f9e451132df8083d26ffdd997282658', 'width': 1920}, 'variants': {}}]} |
||
Any local ui to EASILY use the gemini models? | 1 | [removed] | 2024-12-12T17:06:35 | https://www.reddit.com/r/LocalLLaMA/comments/1hcps3j/any_local_ui_to_easily_use_the_gemini_models/ | Appropriate_Bug_6881 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcps3j | false | null | t3_1hcps3j | /r/LocalLLaMA/comments/1hcps3j/any_local_ui_to_easily_use_the_gemini_models/ | false | false | self | 1 | null |
Any chat interface to EASILY interact with gemini models? | 1 | [removed] | 2024-12-12T17:10:38 | https://www.reddit.com/r/LocalLLaMA/comments/1hcpvh2/any_chat_interface_to_easily_interact_with_gemini/ | Appropriate_Bug_6881 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcpvh2 | false | null | t3_1hcpvh2 | /r/LocalLLaMA/comments/1hcpvh2/any_chat_interface_to_easily_interact_with_gemini/ | false | false | self | 1 | null |
Choosing the Right GPUs for Hosting LLaMA 3.1 70B | 1 | [removed] | 2024-12-12T17:33:05 | https://www.reddit.com/r/LocalLLaMA/comments/1hcqer9/choosing_the_right_gpus_for_hosting_llama_31_70b/ | Nice_Detective_6236 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcqer9 | false | null | t3_1hcqer9 | /r/LocalLLaMA/comments/1hcqer9/choosing_the_right_gpus_for_hosting_llama_31_70b/ | false | false | self | 1 | null |
What are the best modern MacOS monitoring tool when running inference? | 0 | When running inference on macOS with llama.cpp, is there an ideal way to monitor inference-specific resources granularly and historically?
Activity monitor, top, htop, etc. are all pretty good, but i would imagine someone has made a more specific tool That I cannot find. | 2024-12-12T17:46:50 | https://www.reddit.com/r/LocalLLaMA/comments/1hcqqah/what_are_the_best_modern_macos_monitoring_tool/ | nonredditaccount | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcqqah | false | null | t3_1hcqqah | /r/LocalLLaMA/comments/1hcqqah/what_are_the_best_modern_macos_monitoring_tool/ | false | false | self | 0 | null |
I have a 4070 Super 12GB vram with 96GB RAM and 13700k CPU, what is the best llama I can run? | 1 | [removed] | 2024-12-12T17:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/1hcqzgo/i_have_a_4070_super_12gb_vram_with_96gb_ram_and/ | manwiththe104IQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcqzgo | false | null | t3_1hcqzgo | /r/LocalLLaMA/comments/1hcqzgo/i_have_a_4070_super_12gb_vram_with_96gb_ram_and/ | false | false | self | 1 | null |
[HOLIDAY PROMO] Perplexity AI PRO - 1 YEAR PLAN OFFER - 75% OFF | 1 | [removed] | 2024-12-12T18:01:39 | MReus11R | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hcr2sw | false | null | t3_1hcr2sw | /r/LocalLLaMA/comments/1hcr2sw/holiday_promo_perplexity_ai_pro_1_year_plan_offer/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'lNY4xabb_h9HcHDV79RgrYyxuyvAaiVgLyG_yh_bFSg', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/ynwup8z1jg6e1.jpeg?width=108&crop=smart&auto=webp&s=d7e48613e81e4f0299e556bf7632ed5895116d49', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/ynwup8z1jg6e1.jpeg?width=216&crop=smart&auto=webp&s=2c987480e524c7ad3130b5077a2a3ae9d5307180', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/ynwup8z1jg6e1.jpeg?width=320&crop=smart&auto=webp&s=51fd91eb07041bb3ba5eef1f4a58c2db0004fb5f', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/ynwup8z1jg6e1.jpeg?width=640&crop=smart&auto=webp&s=e7a12ca6b98f32512b5f8121c2d1f958f7fbb768', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/ynwup8z1jg6e1.jpeg?width=960&crop=smart&auto=webp&s=21662f32b79a7e8917dcf1f3f7f72734177f8160', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/ynwup8z1jg6e1.jpeg?width=1080&crop=smart&auto=webp&s=32313045ae1ef93e335be34587153383ffb2631c', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://preview.redd.it/ynwup8z1jg6e1.jpeg?auto=webp&s=6bd4ec8b871af23e4225e64db131fef49ad824fe', 'width': 2000}, 'variants': {}}]} |
||
Alternatives to Openrouter.ai??(until I got 4090 or even better) | 1 | [removed] | 2024-12-12T18:02:13 | https://www.reddit.com/r/LocalLLaMA/comments/1hcr3c2/alternatives_to_openrouteraiuntil_i_got_4090_or/ | joeyama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcr3c2 | false | null | t3_1hcr3c2 | /r/LocalLLaMA/comments/1hcr3c2/alternatives_to_openrouteraiuntil_i_got_4090_or/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'eSqVwtqI8lEdDB_rmKd0BYBIMU8SrRzZJO1i5nKZGFA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?width=108&crop=smart&auto=webp&s=0c1ad514fc554f44bb46c9152baba6986076ba74', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?width=216&crop=smart&auto=webp&s=6812ecd42fbe74627d9304542cff0afbc249d156', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?width=320&crop=smart&auto=webp&s=edcea14234fffb82794052382f17f20c47c18201', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?width=640&crop=smart&auto=webp&s=7f26739ea8d6076629ad93a3856b95884751bd17', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?width=960&crop=smart&auto=webp&s=c866e4a12291ee18670273f6977eb695701e8fdc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?width=1080&crop=smart&auto=webp&s=d3b816735e116dda737a95791da468cbabc6e140', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/MXw-E3odJ7wq5-Kg3VrRGXgkFv36WJpn_XZYB8zQkkI.jpg?auto=webp&s=bdb67dc29d0015d08538d028017edd9629393606', 'width': 1200}, 'variants': {}}]} |
Cognitum Text Classifier for Social Science | 5 | 2024-12-12T18:09:10 | https://cognitum.cc/ | finnless | cognitum.cc | 1970-01-01T00:00:00 | 0 | {} | 1hcr9b2 | false | null | t3_1hcr9b2 | /r/LocalLLaMA/comments/1hcr9b2/cognitum_text_classifier_for_social_science/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'LVPawYlJQAWq8A3CG4fPcvE5qn69qe_DMJq7HN3Yw7E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/S-D4T8vGPTdO1e8i4MBy0ru3YqJI5lJYo9EhgWYHb0c.jpg?width=108&crop=smart&auto=webp&s=43ddafb40e831f4bf4cc7679fd853b3fdbc70fe6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/S-D4T8vGPTdO1e8i4MBy0ru3YqJI5lJYo9EhgWYHb0c.jpg?width=216&crop=smart&auto=webp&s=084f5ed2b022db78e8421925019786dac49dbec7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/S-D4T8vGPTdO1e8i4MBy0ru3YqJI5lJYo9EhgWYHb0c.jpg?width=320&crop=smart&auto=webp&s=a43f7d99636fcf442dd5a8c7d134c60984dda029', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/S-D4T8vGPTdO1e8i4MBy0ru3YqJI5lJYo9EhgWYHb0c.jpg?width=640&crop=smart&auto=webp&s=9ca570b17ab09fa4116a70cdf90e42da3321b58e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/S-D4T8vGPTdO1e8i4MBy0ru3YqJI5lJYo9EhgWYHb0c.jpg?width=960&crop=smart&auto=webp&s=43209a70500d66bf299f90d19edcf97be2153eed', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/S-D4T8vGPTdO1e8i4MBy0ru3YqJI5lJYo9EhgWYHb0c.jpg?width=1080&crop=smart&auto=webp&s=b113a2e794df386b07b5ac25abdb124bfc2c8089', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/S-D4T8vGPTdO1e8i4MBy0ru3YqJI5lJYo9EhgWYHb0c.jpg?auto=webp&s=a29d01703216160c11c75e6ed280bd092aa6f117', 'width': 1200}, 'variants': {}}]} |
||
Gift giver bot | 2 | I wrote a nifty little demo using structured generation to produce gift ideas from some description of a person! It's got a super simple CLI and a web GUI.
Supports [exa.ai](http://exa.ai) search if you want to get search results related to each of the LLM's gift ideas.
If you end up playing with it, let me know what you love/hate/find interesting/etc.
Code: [https://github.com/dottxt-ai/demos/tree/main/holidays-2024/gifter](https://github.com/dottxt-ai/demos/tree/main/holidays-2024/gifter)
YouTube: [https://youtu.be/t8LSX5AoKjQ](https://youtu.be/t8LSX5AoKjQ)
[A screenshot of the .txt gift generator home page.](https://preview.redd.it/5bxyns98ng6e1.png?width=1697&format=png&auto=webp&s=ea66f80bbc9ceb6bf3860a0416aa35bd07ca3540)
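If you're curious what the structured-generation piece looks like, here's a rough sketch of the idea (not the exact code from the repo — the schema and model here are placeholders, and the outlines API details may differ between versions):

```python
# Sketch of structured gift generation with outlines (placeholder schema and
# model; see the repo linked above for the real implementation).
from typing import List
from pydantic import BaseModel
import outlines

class GiftIdea(BaseModel):
    name: str
    reason: str
    estimated_price_usd: float

class GiftList(BaseModel):
    ideas: List[GiftIdea]

model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")
generator = outlines.generate.json(model, GiftList)

gifts = generator("Gift ideas for my dad, who loves fly fishing and jazz records:")
for idea in gifts.ideas:
    print(f"{idea.name} (~${idea.estimated_price_usd:.0f}): {idea.reason}")
```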
| 2024-12-12T18:34:25 | https://www.reddit.com/r/LocalLLaMA/comments/1hcruhf/gift_giver_bot/ | cameron_pfiffer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcruhf | false | null | t3_1hcruhf | /r/LocalLLaMA/comments/1hcruhf/gift_giver_bot/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'y55AESv33babJ7mSgatTALNYv1pyPK6nnUM0k4p4S08', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/89LpZTWU6eYn_pvk2ZwV6O5o9zdbUvjMg0DL39Sx9lE.jpg?width=108&crop=smart&auto=webp&s=7bcc5157601676cec77325b5128d26979725a686', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/89LpZTWU6eYn_pvk2ZwV6O5o9zdbUvjMg0DL39Sx9lE.jpg?width=216&crop=smart&auto=webp&s=ca83584960f411c6456d8c8684b06f3340615efd', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/89LpZTWU6eYn_pvk2ZwV6O5o9zdbUvjMg0DL39Sx9lE.jpg?width=320&crop=smart&auto=webp&s=cc3d9849a21668dfdd05903a5a694ed2447570b2', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/89LpZTWU6eYn_pvk2ZwV6O5o9zdbUvjMg0DL39Sx9lE.jpg?width=640&crop=smart&auto=webp&s=d159f7375d676141e08433b03d0677be81e5595a', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/89LpZTWU6eYn_pvk2ZwV6O5o9zdbUvjMg0DL39Sx9lE.jpg?width=960&crop=smart&auto=webp&s=09508c488e0c7c0a1bf474129c3a3b769df93ae1', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/89LpZTWU6eYn_pvk2ZwV6O5o9zdbUvjMg0DL39Sx9lE.jpg?width=1080&crop=smart&auto=webp&s=1a1ddf61ceac2cc83d3d6c43b663f999667e6200', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/89LpZTWU6eYn_pvk2ZwV6O5o9zdbUvjMg0DL39Sx9lE.jpg?auto=webp&s=e7a8dd77caddaaceecd8329f02bd8cf4faa4c9fd', 'width': 1910}, 'variants': {}}]} |
|
Prompt to extract the 'opening balance' from an account statement text/markdown extracted from a PDF? | 0 | I'm a noob at prompt engineering.
I'm building a tiny app that extracts information from my account statements in different countries, and I want to extract the 'opening balance' of the account statement (the balance at the start of the period analyzed).
I'm currently converting PDFs to markdown or raw text and feeding it to the LLM. This is my current prompt:
messages=[
{"role": "system", "content": """
- You are an expert at extracting the 'opening balance' of account statements from non-US countries.
- You search and extract information pertaining to the opening balance: the balance at the beginning of or before the period the statement covers.
- The account statement you receive might not be in English, so you have to look for the equivalent information in a different language.
"""},
{"role": "user", "content": f"""
## Instructions:
- You are given an account statement that covers the period starting on {period_analyzed_start}.
- Search the content for the OPENING BALANCE: the balance before or at {period_analyzed_start}.
- It is most likely found in the first page of the statement.
- It may be found in text similar to "balance before {period_analyzed_start}" or equivalent in a different language.
- It may be found in text similar to "balance at {period_analyzed_start}" or equivalent in a different language.
- The content may span different columns, for example: the information "amount before dd-mm-yyyy" might be in a column, and the actual number in a different column.
- The column where the numbers is found may indicate whether the opening balance is positive or negative (credit/deposit columns or debit/withdrawal columns). E.g. if the column is labeled "debit" (or equivalent in a different language), the opening balance is negative.
- The opening balance may also be indicated by the sign of the amount (e.g. -20.00 means negative balance).
- Use the information above to determine whether the opening balance is positive or negative.
- If there is no clear indication of the opening balance, return {{"is_present": false}}
- Return the opening balance in JSON with the following format:
{{
"opening_balance": {{"is_present": true, "balance": 123.45, "date": "yyyy-mm-dd"}},
}}
# Here is the markdown content:
{markdown_content}
"""}
],
Is this too big or maybe too small?
What is it missing?
What am I generally doing wrong? | 2024-12-12T18:36:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hcrw6o/prompt_to_extract_the_opening_balance_from_an/ | dirtyring | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcrw6o | false | null | t3_1hcrw6o | /r/LocalLLaMA/comments/1hcrw6o/prompt_to_extract_the_opening_balance_from_an/ | false | false | self | 0 | null |
How to clone any Twitter personality into an AI (your move, Elon) 🤖 | 1 | 2024-12-12T18:37:35 | https://www.youtube.com/watch?v=rMDu930oNYY | MostlyGreat | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1hcrx6c | false | {'oembed': {'author_name': 'LangChain', 'author_url': 'https://www.youtube.com/@LangChain', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/rMDu930oNYY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Building an AI persona for any X/Twitter user from scratch"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/rMDu930oNYY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Building an AI persona for any X/Twitter user from scratch', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'} | t3_1hcrx6c | /r/LocalLLaMA/comments/1hcrx6c/how_to_clone_any_twitter_personality_into_an_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'EQfnToO3g7ed5NEXWyPEQrFDtkuHVQRZ38ns12QyLhY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wyxetqTW8lspU7L6q1ZjPJykSzjvliFVEgHksMmIRno.jpg?width=108&crop=smart&auto=webp&s=8c291b8b4b367f5bb449c44425e3b6e6cc5e6eab', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/wyxetqTW8lspU7L6q1ZjPJykSzjvliFVEgHksMmIRno.jpg?width=216&crop=smart&auto=webp&s=3bc6143174a500770fcfad909b3d70a3d2c396b2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/wyxetqTW8lspU7L6q1ZjPJykSzjvliFVEgHksMmIRno.jpg?width=320&crop=smart&auto=webp&s=d908ce3fb399b4c6e9ca18c200bc180871716bb3', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/wyxetqTW8lspU7L6q1ZjPJykSzjvliFVEgHksMmIRno.jpg?auto=webp&s=8da9045413762f8fa1c585f13130a1e15ce36b4d', 'width': 480}, 'variants': {}}]} |
||
How to clone any Twitter personality into an AI (your move, Elon) 🤖 | 1 | [removed] | 2024-12-12T18:51:24 | https://www.reddit.com/r/LocalLLaMA/comments/1hcs8qt/how_to_clone_any_twitter_personality_into_an_ai/ | MostlyGreat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcs8qt | false | null | t3_1hcs8qt | /r/LocalLLaMA/comments/1hcs8qt/how_to_clone_any_twitter_personality_into_an_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EQfnToO3g7ed5NEXWyPEQrFDtkuHVQRZ38ns12QyLhY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wyxetqTW8lspU7L6q1ZjPJykSzjvliFVEgHksMmIRno.jpg?width=108&crop=smart&auto=webp&s=8c291b8b4b367f5bb449c44425e3b6e6cc5e6eab', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/wyxetqTW8lspU7L6q1ZjPJykSzjvliFVEgHksMmIRno.jpg?width=216&crop=smart&auto=webp&s=3bc6143174a500770fcfad909b3d70a3d2c396b2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/wyxetqTW8lspU7L6q1ZjPJykSzjvliFVEgHksMmIRno.jpg?width=320&crop=smart&auto=webp&s=d908ce3fb399b4c6e9ca18c200bc180871716bb3', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/wyxetqTW8lspU7L6q1ZjPJykSzjvliFVEgHksMmIRno.jpg?auto=webp&s=8da9045413762f8fa1c585f13130a1e15ce36b4d', 'width': 480}, 'variants': {}}]} |
Meta AI stopped replying my prompt - how to fix? | 0 | I use Meta AI through my whatsapp account(mobile/desktop client). It was working until today morning, it stopped working. I am not getting any replies after I send my prompt. How can I fix this? I did login/logout few times, but problem persisted. Please help. | 2024-12-12T18:59:40 | https://www.reddit.com/r/LocalLLaMA/comments/1hcsfx2/meta_ai_stopped_replying_my_prompt_how_to_fix/ | arup_r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcsfx2 | false | null | t3_1hcsfx2 | /r/LocalLLaMA/comments/1hcsfx2/meta_ai_stopped_replying_my_prompt_how_to_fix/ | false | false | self | 0 | null |
Can someone give me a roadmap to learn LLMs, RAG and all the good stuff? | 0 | I'm familiar with ML and DL. I just wanted to know and understand how to work with LLMs.
I don't want to make API calls to LLMs with Prompts for solutions (I'm already familiar with this). Any other gimmicks that would be useful for me to know? | 2024-12-12T19:13:01 | https://www.reddit.com/r/LocalLLaMA/comments/1hcsrbl/can_someone_give_me_a_roadmap_to_learn_llms_rag/ | Immediate_Ad9718 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcsrbl | false | null | t3_1hcsrbl | /r/LocalLLaMA/comments/1hcsrbl/can_someone_give_me_a_roadmap_to_learn_llms_rag/ | false | false | self | 0 | null |
Local assistant system design | 1 | [removed] | 2024-12-12T19:18:44 | https://www.reddit.com/r/LocalLLaMA/comments/1hcsw36/local_assistant_system_design/ | scary_kitten_daddy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcsw36 | false | null | t3_1hcsw36 | /r/LocalLLaMA/comments/1hcsw36/local_assistant_system_design/ | false | false | self | 1 | null |
Latest AMD Driver 24.12.1 performs significantly worse than 24.8.1 running QwQ. | 14 | My setup, I use LMStudio + AnythingLLM.
I have a 7900XTX with 24GB VRAM
I have ROCm llama.cpp as my configured runtime.
In 24.8.1 I get on average about 25 tokens per second
Upgrading to 24.12.1, my average dropped to under 7 tokens per second running the exact same prompt. I rebooted twice to be sure.
After further analysis, I noticed as soon as the model loaded into memory, the GPU usage spiked and sat at 100% even when it wasn't generating anything via a prompt. I tested loading and re-loading a few times and the behavior is consistent.
I downgraded back to 24.8.1, and my performance immediately went back to normal.
I didn't get a chance to test other models, but i'm curious if anyone else has noticed similar issues with the latest driver? | 2024-12-12T20:16:17 | https://www.reddit.com/r/LocalLLaMA/comments/1hcu7v6/latest_amd_driver_24121_performs_significantly/ | maddogawl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcu7v6 | false | null | t3_1hcu7v6 | /r/LocalLLaMA/comments/1hcu7v6/latest_amd_driver_24121_performs_significantly/ | false | false | self | 14 | null |
Llama 3.1 70B speed down!! | 1 | [removed] | 2024-12-12T20:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/1hcun0j/llama_31_70b_speed_down/ | Temporary_Rice_6538 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcun0j | false | null | t3_1hcun0j | /r/LocalLLaMA/comments/1hcun0j/llama_31_70b_speed_down/ | false | false | self | 1 | null |
N00b Question: GGUF Quantizing with llama.cpp on Colab | 0 | Long story short, I'm trying to quantize an 8x22b model from BF16 to my 3TB Google Drive on Colab using
llama.cpp/convert_hf_to_gguf.py
At first I tried FP32, but the resulting GGUF file is expected to be 500GB+ *(which would def fit on my Google Drive but not in the limited 235GB Colab disk - while the 'convert\_hf\_to\_gguf.py' process is running)*. So I gave up on that idea after Colab kept running out of memory (once the GGUF file reaches over 200GB+ in the Colab memory)
Now, I'm trying FP16 *(yeah, I know it's a compromise but I don't have much choice rn)*, and the file is 280GB which is still larger than the 235GB Colab storage memory.
I have the Google drive mounted in Colab, and I have the correct directories set - but Colab insists own using the 235GB disk space to store the GGUF file while the conversion process is running in Colab. **Basically, the Colab disk serves as nothing but a massive bottleneck at this point.**
I don't have time to figure out how to do this locally yet - I'm a newbie, and I have too many other things to be focusing on these next few weeks.
Is there any way to split up the GGUF file in chucks *(using 'convert\_hf\_to\_gguf.py')* so that Colab actually writes to the Google drive *(instead of trying to write to one massive GGUF file that gets stored in the Colab memory and always maxes out the 235GB limit)*? Or am I just stuck for now until I can do this locally? | 2024-12-12T20:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hcun1p/n00b_question_gguf_quantizing_with_llamacpp_on/ | misterflyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcun1p | false | null | t3_1hcun1p | /r/LocalLLaMA/comments/1hcun1p/n00b_question_gguf_quantizing_with_llamacpp_on/ | false | false | self | 0 | null |
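For context, this is roughly what I'm attempting — the paths are made up, and I haven't verified that pointing --outfile straight at the mounted Drive (or sharding afterwards with llama.cpp's gguf-split tool) actually gets around the local-disk limit:

```python
# Sketch only: hypothetical paths, run from a Colab cell with Drive mounted.
# --outfile/--outtype are flags of convert_hf_to_gguf.py; llama-gguf-split is
# the splitting tool that ships with llama.cpp builds.
import subprocess

DRIVE_OUT = "/content/drive/MyDrive/gguf/model-f16.gguf"  # mounted Google Drive

# Convert directly to the Drive path instead of the local Colab disk
subprocess.run([
    "python", "llama.cpp/convert_hf_to_gguf.py",
    "/content/my-8x22b-checkpoint",   # local HF model dir (placeholder)
    "--outtype", "f16",
    "--outfile", DRIVE_OUT,
], check=True)

# Optionally shard the result into smaller files afterwards
subprocess.run([
    "llama.cpp/build/bin/llama-gguf-split",
    "--split-max-size", "40G",
    DRIVE_OUT,
    "/content/drive/MyDrive/gguf/model-f16",  # output prefix (placeholder)
], check=True)
```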
Ideas for spending $8k in Anthropic credits? | 4 | So far I'm thinking:
* build an app (basic chat or something more novel) and charge a subscription
* open source dataset creation on hugging face
* long-running task/agents with computer use or MCP
Any other ideas/suggestions? What would you do with $8k in credits that expire in a year? | 2024-12-12T21:56:51 | https://www.reddit.com/r/LocalLLaMA/comments/1hcwk5j/ideas_for_spending_8k_in_anthropic_credits/ | benthecoderX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcwk5j | false | null | t3_1hcwk5j | /r/LocalLLaMA/comments/1hcwk5j/ideas_for_spending_8k_in_anthropic_credits/ | false | false | self | 4 | null |
RAG on my music library | 32 | Still tweaking and experimenting with the best way to chunk/embed the metadata.
Embedding model: nomic-embed-text
Chat model: llama3.2 | 2024-12-12T22:45:49 | ranoutofusernames__ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hcxn40 | false | null | t3_1hcxn40 | /r/LocalLLaMA/comments/1hcxn40/rag_on_my_music_library/ | false | false | 32 | {'enabled': True, 'images': [{'id': 'Eg7XQ2DzEiGRYZZ_aIfnoKtcRJAash6H8MJKreA34ZA', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/8mmepgcrxh6e1.jpeg?width=108&crop=smart&auto=webp&s=a1bf21a1bea6851a486614c6ab2d84d62adc7f43', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/8mmepgcrxh6e1.jpeg?width=216&crop=smart&auto=webp&s=d4c6e171b2bfebcdc8a55befe4f443f6b7b03e4f', 'width': 216}, {'height': 318, 'url': 'https://preview.redd.it/8mmepgcrxh6e1.jpeg?width=320&crop=smart&auto=webp&s=495de9b17ea4cf77f7204dac1876da863aebc3ed', 'width': 320}, {'height': 637, 'url': 'https://preview.redd.it/8mmepgcrxh6e1.jpeg?width=640&crop=smart&auto=webp&s=8c1a141f64dc3b110e14e6127ba6f77ae2f0ecb2', 'width': 640}], 'source': {'height': 759, 'url': 'https://preview.redd.it/8mmepgcrxh6e1.jpeg?auto=webp&s=b68407ebe6140100f57ae280cd3fecbf200c0bd5', 'width': 762}, 'variants': {}}]} |
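A rough sketch of the kind of pipeline described above, using the Ollama Python client with the same two models. The track metadata, one-chunk-per-track strategy, and brute-force cosine search are illustrative assumptions, not the poster's actual implementation.

```python
# Illustrative only: embed per-track metadata with nomic-embed-text, answer with llama3.2.
import ollama
import numpy as np

tracks = [
    {"title": "So What", "artist": "Miles Davis", "album": "Kind of Blue", "year": 1959},
    {"title": "Giant Steps", "artist": "John Coltrane", "album": "Giant Steps", "year": 1960},
]

# One chunk per track; the chunking strategy is exactly the part still being tweaked.
chunks = [f'{t["title"]} by {t["artist"]} ({t["album"]}, {t["year"]})' for t in tracks]
vectors = [ollama.embeddings(model="nomic-embed-text", prompt=c)["embedding"] for c in chunks]

def top_match(query: str) -> str:
    """Return the metadata chunk most similar to the query (brute-force cosine)."""
    q = np.array(ollama.embeddings(model="nomic-embed-text", prompt=query)["embedding"])
    sims = [q @ np.array(v) / (np.linalg.norm(q) * np.linalg.norm(v)) for v in vectors]
    return chunks[int(np.argmax(sims))]

context = top_match("something modal from the late 50s")
reply = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": f"Library entry: {context}\nWhat is this track?"}],
)
print(reply["message"]["content"])
```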
Buy new dual 3090 machine now, or wait til after CES for new Nvidia release for LLM PC? | 16 | So, I have been experimenting with running local models, mostly on a 32GB MacBook Pro, and want to take things to the next level, which coincides with my needing a new PC workstation for my work (in trading/finance). What I am hoping to do is to get a new, reasonably priced machine somewhere in the $3-5k range that will allow me to evolve and expand on my local LLM experiments, and maybe even try some finetuning of models for my particular specialized niche and use-cases with regard to some of the trading work I do.
I've gotten a bit antsy and am on the cusp of pulling the trigger on a custom-built PC from CustomLuxPCs for about $4100 with the following specs:
* **CPU:** Intel i9-14900K
* **GPU:** 2x RTX 3090 24 GB (48 VRAM total)
* **RAM:** 128 GB DDR5 6000 Mhz
* **Motherboard:** Z790 DDR5 Wifi Motherboard
* **Storage:** 2 TB NVme Gen 4 SSD
* **Case:** Black Lian Li PC-O11 Dynamic with 9 RGB fans
* **Power Supply:** 1500W 80+ Gold PSU with 15 year warranty
* **Cooler:** 360 mm AIO Liquid cooler
Most of this is overkill for my everyday usage, but it gives me some decent ability to run moderately sized models and do some low-level finetuning, I think. It's not perfectly future-proof, but it should provide a solid 2-3 years where I'm not too far behind on running the latest stuff without having to spend $10k+.
But there's part of me that wonders if it's dumb to make this big purchase less than a month away from CES in January where NVidia will likely release the 5000 series and all that jazz. I doubt it really impacts prices of 3090's or 4090's too much, but I'm no expert. I'm still a moderately experienced beginner.
So, should I just go ahead and get the machine sooner than later so I can start building and experimenting and learning? Or wait and see what's available and what prices are after CES? Or any other suggestions like paying more and getting a A6000 or something like that? 90% of my usage will be for low level stuff, but if the 10% of time I spend on LLM's yields good results, I'd like to be able to further my efforts on that front relatively easily.
Thanks for any help or feedback you can offer! | 2024-12-12T23:23:46 | https://www.reddit.com/r/LocalLLaMA/comments/1hcyg8w/buy_new_dual_3090_machine_now_or_wait_til_after/ | No-Emu9365 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcyg8w | false | null | t3_1hcyg8w | /r/LocalLLaMA/comments/1hcyg8w/buy_new_dual_3090_machine_now_or_wait_til_after/ | false | false | self | 16 | null |
This is why I went Local | 1 | Tried new meta AI from instagram. Error: “something went wrong.” | 2024-12-12T23:25:26 | Kuuumaaaa | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hcyhg7 | false | null | t3_1hcyhg7 | /r/LocalLLaMA/comments/1hcyhg7/this_is_why_i_went_local/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Rlz1naSLAU113A2t_IY6kk62N-sAcdNjRRXw0ZxkqAU', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/f24ge0kt4i6e1.jpeg?width=108&crop=smart&auto=webp&s=f39f146b688e082815f6af75cdd518d753b1a9ae', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/f24ge0kt4i6e1.jpeg?width=216&crop=smart&auto=webp&s=e26ab36c440b61c310900f0d7a4171ba54338f51', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/f24ge0kt4i6e1.jpeg?width=320&crop=smart&auto=webp&s=ec293bd34251ea679022e1e6d63774ddff09b241', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/f24ge0kt4i6e1.jpeg?width=640&crop=smart&auto=webp&s=25ae490e0714c83cb8b5886ffe58b505f092155f', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/f24ge0kt4i6e1.jpeg?width=960&crop=smart&auto=webp&s=8f854c17db26402e0d279580bd76c2a28af0245a', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/f24ge0kt4i6e1.jpeg?width=1080&crop=smart&auto=webp&s=02f12394bf1d176359227d3f1b6a2c9188d9986b', 'width': 1080}], 'source': {'height': 2496, 'url': 'https://preview.redd.it/f24ge0kt4i6e1.jpeg?auto=webp&s=0a93857e5fab75a51dba68fe4ebf53f903350722', 'width': 1242}, 'variants': {}}]} |
Reusing ExllamaV2 Measurements Across Similar Models | 5 | # Reusing ExllamaV2 Measurements Across Similar Models
**TL;DR**
You can reuse exl2 measurement files across similar models instead of taking a new measurement for every new model, potentially saving hours of processing time.
**Background**
For those not already familiar with the process of producing an ExllamaV2 quant of a model, it's essentially a two-step process. The first step involves taking a measurement of the "degradation" introduced by quantizing each layer of the model according to different levels of "compression." (I'm trying to keep it simple here.) The results of that first step are fed into the second step, wherein ExllamaV2's quantization algorithm uses those measurements to select how aggressively to "compress" each layer of the model to target a certain overall level of compression. The measurements are highly dependent on which dataset you use for conducting the measurements. For my purposes, I am only dealing with measurements taken against the default ExllamaV2 dataset, which is frequently used in practice due to its balanced nature.
The ExllamaV2 quantization script supports saving the results of the first step, the measurement pass, as a JSON file. That helps speed up subsequent runs by enabling the reuse of the measurements from a previous run, allowing the user to skip the first step if they want to produce a new quantization of the same model that targets a different average bits per weight (i.e. level of compression.) In my experience with producing ExllamaV2 quants of \~70B parameter models locally on my NVIDIA 3090, the measurement pass can take 2 - 3 hours, so being able to skip it is helpful. This is not a novel insight, but it leads into the crux of my post.
**The Discovery**
For a while now, I have suspected that the ExllamaV2 measurements do not vary significantly between different models within a family: Llama 3.1 and its finetunes, Qwen 2.5 and its finetunes, and so forth. I experiment frequently with model merging, and I do my testing locally using ExllamaV2 quants of my models. To save time, I will sometimes reuse measurement files from similar merges to more quickly produce a quant of a new model for testing. In practice, I have never noticed a difference in performance or perplexity between quants produced using a measurement taken on the parent model directly and quants produced using a measurement taken from a "sibling" model that is similar to the parent but not exactly the same model.
Today I decided to take a deeper look at this relationship. With Claude's help, I wrote a Python script ([GitHub](https://github.com/sophosympatheia/sophos_scripts)) to compare the measurement values between two measurement.json files produced by ExllamaV2. I then compared measurement files for various models I have archived on my system, and what I discovered is that, on average, the difference between measured accuracy values within layers between two different models within a family is quite minimal.
* Average differences between accuracy measurements at different levels of quantization within a layer between models in the same family are typically around 0.2% (0.002)
* Even outliers rarely exceed 0.6% (0.006)
* These differences are too small to meaningfully impact ExllamaV2's optimization decisions
Another way of putting it is this: the difference between levels of compression within a layer (e.g. 2 bpw vs. 3 bpw vs. 5 bpw) dwarfs the difference between the measurements of the same levels of compression between two different models within the same family. The latter difference is too small to realistically result in ExllamaV2 making a bad decision, such as thinking that 2.0 bpw is more accurate than 2.5 bpw for a given layer. The ordering/ranking of compression levels by accuracy remains consistent.
**Practical Impact**
You can save time and compute when producing ExllamaV2 quants of new models that are similar to past models by reusing measurement files taken from models within the same family.
**Conclusion**
Being able to reuse another model's measurements doesn't help that much unless you're frequently quantizing different models within a family of models, but that describes my use case. Eliminating the measurement step in most cases should enable me to innovate more rapidly, and I suppose it will save me a little money on electricity in the long run. I hope this information will be useful (or at least interesting) to others in the community. | 2024-12-12T23:46:27 | https://www.reddit.com/r/LocalLLaMA/comments/1hcyx0y/reusing_exllamav2_measurements_across_similar/ | sophosympatheia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hcyx0y | false | null | t3_1hcyx0y | /r/LocalLLaMA/comments/1hcyx0y/reusing_exllamav2_measurements_across_similar/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'hqftXwg8mmDXM293tt2lOYjNv2t7t5a8M51SK9XdcTg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6MG5yH0dhNbQ5ev2v_7rVFJvgXXgf-mBF4Cap573FoY.jpg?width=108&crop=smart&auto=webp&s=13ef84e2e0791015583397cf6b1df862a61e9b01', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6MG5yH0dhNbQ5ev2v_7rVFJvgXXgf-mBF4Cap573FoY.jpg?width=216&crop=smart&auto=webp&s=3028d1fb219a48b580b3afc5ca77e6ccb7cad215', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6MG5yH0dhNbQ5ev2v_7rVFJvgXXgf-mBF4Cap573FoY.jpg?width=320&crop=smart&auto=webp&s=48b876d5d0dff8d11b914ecf3a626eb8674dc4a1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6MG5yH0dhNbQ5ev2v_7rVFJvgXXgf-mBF4Cap573FoY.jpg?width=640&crop=smart&auto=webp&s=98ec88eb96cf0254e88f3d6d9e871ab325e0df64', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6MG5yH0dhNbQ5ev2v_7rVFJvgXXgf-mBF4Cap573FoY.jpg?width=960&crop=smart&auto=webp&s=0b5b44084fe77f198595d4c208e14714072a862d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6MG5yH0dhNbQ5ev2v_7rVFJvgXXgf-mBF4Cap573FoY.jpg?width=1080&crop=smart&auto=webp&s=c3c1b7f1ee8c151adffeba7451651e5bfbb4728e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6MG5yH0dhNbQ5ev2v_7rVFJvgXXgf-mBF4Cap573FoY.jpg?auto=webp&s=6efa450fd8d44afc9315489b1d206eb053147a0a', 'width': 1200}, 'variants': {}}]} |
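A stripped-down sketch of the kind of comparison described in "The Discovery" above; it assumes each measurement.json holds a `"measurement"` map of per-layer lists of candidate quantization options with an `accuracy` field, which may not match every ExllamaV2 version's schema. The linked repository has the author's full script.

```python
# Illustrative sketch: compare per-layer accuracy values between two ExllamaV2
# measurement.json files. Adjust the keys to whatever your ExllamaV2 version writes.
import json
import sys

def load(path):
    with open(path) as f:
        return json.load(f)["measurement"]  # assumed layout: {layer: [{"accuracy": ...}, ...]}

a, b = load(sys.argv[1]), load(sys.argv[2])

diffs = []
for layer in a.keys() & b.keys():
    for opt_a, opt_b in zip(a[layer], b[layer]):
        diffs.append(abs(opt_a["accuracy"] - opt_b["accuracy"]))

print(f"compared {len(diffs)} accuracy entries")
print(f"mean abs diff: {sum(diffs) / len(diffs):.5f}")
print(f"max abs diff:  {max(diffs):.5f}")
```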
Aider + langchain: A match made in heaven? | 22 | 🚀
Hey everyone! In the spirit of r/localllama, I wanted to share a little experiment I’ve been working on: [RA.Aid](https://github.com/ai-christianson/RA.Aid). This started after Windsurf announced their pricing changes, and I kept running into its limits—usage caps, unreliability, and cost. I thought, why not try building something together that combines the best parts of aider and LangChain to handle programming tasks more effectively?
RA.Aid is an experiment to make aider a tool for a LangChain agent. The agent explores your codebase, finds key facts, files, and snippets for your task, and can even ask tough questions using o1-preview. It’s been exciting to see how well it works with tools like:
- **File exploration tools** for navigating projects.
- **Fuzzy find + ripgrep** for quickly locating key snippets.
- An optional **cowboy 🤠 mode** that lets the agent automatically run shell commands (if you’re feeling adventurous).
So far, it has been performing better for the tasks I’ve tested it on, especially complex ones. Right now, it’s set up with some of the strongest models available (like Claude and o1-preview). However, it should work well with open models too, though we’ll probably need to do more prompting work and add configurability to make it really effective.
If we can get some PRs rolling in, we might be able to create a completely free and open tool that far surpasses even $500/month proprietary solutions like Devin. The code is up on GitHub under Apache 2.0: [RA.Aid](https://github.com/ai-christianson/RA.Aid).
Happy to hear any thoughts or feedback from the community! | 2024-12-13T00:05:39 | https://www.reddit.com/r/LocalLLaMA/comments/1hczbla/aider_langchain_a_match_made_in_heaven/ | ai-christianson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hczbla | false | null | t3_1hczbla | /r/LocalLLaMA/comments/1hczbla/aider_langchain_a_match_made_in_heaven/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'LLNz0OjyU0Txkkofz3Up-breGKz201tL7o3nmUuR9go', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LArVEbJ4jbd_3Jfk7zgXkWrFqWeakZ4490ev5wPCHks.jpg?width=108&crop=smart&auto=webp&s=8704794b432a6e046f304c8e722384032fc71481', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LArVEbJ4jbd_3Jfk7zgXkWrFqWeakZ4490ev5wPCHks.jpg?width=216&crop=smart&auto=webp&s=9b9230a268480d1b22d420912c9035eeb25f81ae', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LArVEbJ4jbd_3Jfk7zgXkWrFqWeakZ4490ev5wPCHks.jpg?width=320&crop=smart&auto=webp&s=c6c9d3a1896875f3b704af3f5907095ed2e46351', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LArVEbJ4jbd_3Jfk7zgXkWrFqWeakZ4490ev5wPCHks.jpg?width=640&crop=smart&auto=webp&s=8ca7e8bd6141432d2522b363db1ae169e19e2245', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LArVEbJ4jbd_3Jfk7zgXkWrFqWeakZ4490ev5wPCHks.jpg?width=960&crop=smart&auto=webp&s=88ed35a8b1ea44c1b8abadbdac57d86ac8c4ab3e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LArVEbJ4jbd_3Jfk7zgXkWrFqWeakZ4490ev5wPCHks.jpg?width=1080&crop=smart&auto=webp&s=3b2194e14bd78ea0b496630b067ffcf422c5da9c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LArVEbJ4jbd_3Jfk7zgXkWrFqWeakZ4490ev5wPCHks.jpg?auto=webp&s=59c187fb75467f4d89e892902d9d7c542a0752c1', 'width': 1200}, 'variants': {}}]} |
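For a picture of what "fuzzy find + ripgrep" as agent tools might look like, here is a minimal LangChain-style sketch. It is an illustration of the idea only, not RA.Aid's actual code, and it assumes `rg` is installed and on PATH.

```python
# Minimal illustration of exposing ripgrep to a LangChain agent as a tool.
import subprocess
from langchain_core.tools import tool

@tool
def ripgrep_search(pattern: str, path: str = ".") -> str:
    """Search the codebase for a regex pattern and return matching lines with line numbers."""
    result = subprocess.run(
        ["rg", "--line-number", "--max-count", "50", pattern, path],
        capture_output=True, text=True,
    )
    return result.stdout or "no matches"

# The tool can then be handed to whatever agent constructor you use, e.g.:
# agent = create_react_agent(llm, tools=[ripgrep_search])
```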
Smartest general-knowledge models for 28gb of VRAM? | 1 | Rocking a 6700 with a 6800xt (12+16) with an effective speed of ~300gb/s memory on the 6700.
I've already determined Codestral 22b Q6_K is the best model for my programming use cases so far - but what about *general use* models? Models that do a good job answering the random things one might Google throughout a day. | 2024-12-13T00:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hczbv9/smartest_generalknowledge_models_for_28gb_of_vram/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hczbv9 | false | null | t3_1hczbv9 | /r/LocalLLaMA/comments/1hczbv9/smartest_generalknowledge_models_for_28gb_of_vram/ | false | false | self | 1 | null |
Discussing the Arousing Power of the 😈 Emoji -- By far the most educational podcast I've generated on NotebookLM | 0 | 2024-12-13T00:39:08 | https://soundcloud.com/epos-nix/the-devil-emoji-arousal-and/s-2rfg1gQs0CV?si=a8e0915363674016821d5cc86636e269&utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing | eposnix | soundcloud.com | 1970-01-01T00:00:00 | 0 | {} | 1hd009w | false | {'oembed': {'author_name': 'Epos Nix', 'author_url': 'https://soundcloud.com/epos-nix', 'description': 'Stream The Devil Emoji_ Arousal and Debauchery by Epos Nix on desktop and mobile. Play over 320 million tracks for free on SoundCloud.', 'height': 500, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fw.soundcloud.com%2Fplayer%2F%3Fvisual%3Dtrue%26url%3Dhttps%253A%252F%252Fapi.soundcloud.com%252Ftracks%252F1982481031%26show_artwork%3Dtrue%26secret_token%3Ds-2rfg1gQs0CV&display_name=SoundCloud&url=https%3A%2F%2Fsoundcloud.com%2Fepos-nix%2Fthe-devil-emoji-arousal-and%2Fs-2rfg1gQs0CV%3Fsi%3Da8e0915363674016821d5cc86636e269%26utm_source%3Dclipboard%26utm_medium%3Dtext%26utm_campaign%3Dsocial_sharing&image=https%3A%2F%2Fsoundcloud.com%2Fimages%2Ffb_placeholder.png&type=text%2Fhtml&schema=soundcloud" width="500" height="500" scrolling="no" title="SoundCloud embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'SoundCloud', 'provider_url': 'https://soundcloud.com', 'thumbnail_height': 130, 'thumbnail_url': 'https://soundcloud.com/images/fb_placeholder.png', 'thumbnail_width': 130, 'title': 'The Devil Emoji_ Arousal and Debauchery by Epos Nix', 'type': 'rich', 'version': '1.0', 'width': 500}, 'type': 'soundcloud.com'} | t3_1hd009w | /r/LocalLLaMA/comments/1hd009w/discussing_the_arousing_power_of_the_emoji_by_far/ | false | false | nsfw | 0 | {'enabled': False, 'images': [{'id': 'fDsecO5VWcrXXsz23l4eEcZ2OtDcXqx-zCA7M4JE5zo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Xby29rQtWdjfwDScAPc4_pFLy59YrsEsz54NQKJHH0U.jpg?width=108&crop=smart&auto=webp&s=19b677dbe9e101ce8b0669d6ce26c1bba5dd6f24', 'width': 108}], 'source': {'height': 130, 'url': 'https://external-preview.redd.it/Xby29rQtWdjfwDScAPc4_pFLy59YrsEsz54NQKJHH0U.jpg?auto=webp&s=085a55a04c8cfbd1cb8b92e17340ec5c4d5a3580', 'width': 130}, 'variants': {'nsfw': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Xby29rQtWdjfwDScAPc4_pFLy59YrsEsz54NQKJHH0U.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=8ca16ddcf2f3b07bc3d18603b85c4dd52abf5095', 'width': 108}], 'source': {'height': 130, 'url': 'https://external-preview.redd.it/Xby29rQtWdjfwDScAPc4_pFLy59YrsEsz54NQKJHH0U.jpg?blur=40&format=pjpg&auto=webp&s=a861109f3d22df436567a00d2189c564891b36ca', 'width': 130}}, 'obfuscated': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Xby29rQtWdjfwDScAPc4_pFLy59YrsEsz54NQKJHH0U.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=8ca16ddcf2f3b07bc3d18603b85c4dd52abf5095', 'width': 108}], 'source': {'height': 130, 'url': 'https://external-preview.redd.it/Xby29rQtWdjfwDScAPc4_pFLy59YrsEsz54NQKJHH0U.jpg?blur=40&format=pjpg&auto=webp&s=a861109f3d22df436567a00d2189c564891b36ca', 'width': 130}}}}]} |
Feeding LLM at the bit level: No Tokens, No Problem. | 1 | [removed] | 2024-12-13T00:43:02 | https://www.reddit.com/r/LocalLLaMA/comments/1hd02z2/feeding_llm_at_the_bit_level_no_tokens_no_problem/ | vkha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd02z2 | false | null | t3_1hd02z2 | /r/LocalLLaMA/comments/1hd02z2/feeding_llm_at_the_bit_level_no_tokens_no_problem/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oBFzFDhOOaDkNiLEdOIFDv_2XEHoRtXtUIKZ9f06060', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/g71Ou43-E8YEhh_aKBvJx8YTSy_ov9BhwvubTWGla0I.jpg?width=108&crop=smart&auto=webp&s=60f0376d29a26d73c08b6bd1857fb3eadad887d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/g71Ou43-E8YEhh_aKBvJx8YTSy_ov9BhwvubTWGla0I.jpg?width=216&crop=smart&auto=webp&s=f640ebc1f4448f76d70ecad0332f2116a51578f4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/g71Ou43-E8YEhh_aKBvJx8YTSy_ov9BhwvubTWGla0I.jpg?width=320&crop=smart&auto=webp&s=217bf704af68ee10279d37af6d6e67843afdf85f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/g71Ou43-E8YEhh_aKBvJx8YTSy_ov9BhwvubTWGla0I.jpg?width=640&crop=smart&auto=webp&s=f5f7a3ccf580e1d8c3b1d6ebc6f421e08d6083bb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/g71Ou43-E8YEhh_aKBvJx8YTSy_ov9BhwvubTWGla0I.jpg?width=960&crop=smart&auto=webp&s=d55d86f4b53ca291071fb47c95037d88f4de5937', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/g71Ou43-E8YEhh_aKBvJx8YTSy_ov9BhwvubTWGla0I.jpg?width=1080&crop=smart&auto=webp&s=a309cf80cb15f4648ec544a374091ffc2210f2ec', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/g71Ou43-E8YEhh_aKBvJx8YTSy_ov9BhwvubTWGla0I.jpg?auto=webp&s=280e35ccb90296d21bff7be190c43719a0b46c74', 'width': 1200}, 'variants': {}}]} |
Feeding LLM at the bit level: No Tokens, No Problem. | 1 | [removed] | 2024-12-13T00:46:37 | https://www.reddit.com/r/LocalLLaMA/comments/1hd05mq/feeding_llm_at_the_bit_level_no_tokens_no_problem/ | vkha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd05mq | false | null | t3_1hd05mq | /r/LocalLLaMA/comments/1hd05mq/feeding_llm_at_the_bit_level_no_tokens_no_problem/ | false | false | self | 1 | null |
Recommendations on how to parse the differences between screenshots of two web-based reports? | 1 | [removed] | 2024-12-13T00:48:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hd07d0/recommendations_on_how_to_parse_the_differences/ | djkoell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd07d0 | false | null | t3_1hd07d0 | /r/LocalLLaMA/comments/1hd07d0/recommendations_on_how_to_parse_the_differences/ | false | false | self | 1 | null |
NaturalLM 7B Instruct - A Natural Sounding LLM | 42 | 2024-12-13T00:49:50 | https://huggingface.co/qingy2024/NaturalLM-7B-Instruct | random-tomato | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1hd07zt | false | null | t3_1hd07zt | /r/LocalLLaMA/comments/1hd07zt/naturallm_7b_instruct_a_natural_sounding_llm/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'uZxe3-UoWuZA-GZBcYpQiaXBENg_-jjZIF3eoMqi4Qo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vbPWL9mSFg2ZNbpKGOAxSAwIo8GSqV6Gq8m4P6P_RUo.jpg?width=108&crop=smart&auto=webp&s=0b4c1ac8a3cc43d7bebd82c025c644b7ebba464a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vbPWL9mSFg2ZNbpKGOAxSAwIo8GSqV6Gq8m4P6P_RUo.jpg?width=216&crop=smart&auto=webp&s=923ef986795e4c5b2407c092b4fa5e67d686c93c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vbPWL9mSFg2ZNbpKGOAxSAwIo8GSqV6Gq8m4P6P_RUo.jpg?width=320&crop=smart&auto=webp&s=406cfdee1c170d17ff62756c574eb17fbbe5a1d5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vbPWL9mSFg2ZNbpKGOAxSAwIo8GSqV6Gq8m4P6P_RUo.jpg?width=640&crop=smart&auto=webp&s=0bc8aec16c6f392bef335da21c9773417fa7c053', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vbPWL9mSFg2ZNbpKGOAxSAwIo8GSqV6Gq8m4P6P_RUo.jpg?width=960&crop=smart&auto=webp&s=89ffe9eb840b5c4fccdfc5974f77a9ab416e34a8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vbPWL9mSFg2ZNbpKGOAxSAwIo8GSqV6Gq8m4P6P_RUo.jpg?width=1080&crop=smart&auto=webp&s=cf19e3d50ab21cf0ac38b70c106bc305332b0a2b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vbPWL9mSFg2ZNbpKGOAxSAwIo8GSqV6Gq8m4P6P_RUo.jpg?auto=webp&s=62d7fb4191401c8a217cf2f05f8e668fa6447182', 'width': 1200}, 'variants': {}}]} |
Insights: How Chinese Players Use Silly Tavern | 1 | [removed] | 2024-12-13T00:53:58 | https://www.reddit.com/r/LocalLLaMA/comments/1hd0az5/insights_how_chinese_players_use_silly_tavern/ | Subject_Log_4903 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd0az5 | false | null | t3_1hd0az5 | /r/LocalLLaMA/comments/1hd0az5/insights_how_chinese_players_use_silly_tavern/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'P72-qNVDQz9IQup33lcHelC7VWeDlP-tmQH36GyvRv8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/dNpqr7LLW6fQ3vAwrmdSWo3Qs84J5J4risyTze8QEVI.jpg?width=108&crop=smart&auto=webp&s=a933380c1524139d7e14f17cc2b1ba76724ce63f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/dNpqr7LLW6fQ3vAwrmdSWo3Qs84J5J4risyTze8QEVI.jpg?width=216&crop=smart&auto=webp&s=108c24a1e5a57d9392b1fda4b4f3c77a9e7a683e', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/dNpqr7LLW6fQ3vAwrmdSWo3Qs84J5J4risyTze8QEVI.jpg?auto=webp&s=c99b2a616e655badb539736bbff2a17cd7d0da4d', 'width': 300}, 'variants': {}}]} |
Can anyone give me tips on training LlaMa 2 on lyrics? | 3 | I have a folder of subfolders containing lyrics in text files.
I'd like to fine-tune LlaMa 2 7B locally on the lyrics using an RTX 4070 Ti Super 16gb.
I have been granted access to the models on HuggingFace.
Should I use 7B-HF or 7B Chat-HF?
Can anyone else point me in the direction of help regarding doing this local training?
If anyone is open to talking to me in DMs about this, it would be very helpful.
Thank you. | 2024-12-13T00:58:03 | https://www.reddit.com/r/LocalLLaMA/comments/1hd0dv0/can_anyone_give_me_tips_on_training_llama_2_on/ | ReasonableFall177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd0dv0 | false | null | t3_1hd0dv0 | /r/LocalLLaMA/comments/1hd0dv0/can_anyone_give_me_tips_on_training_llama_2_on/ | false | false | self | 3 | null |
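A hedged sketch of one common local approach to the question above: QLoRA on the base 7B-HF model (the base model, rather than Chat-HF, is the usual pick for pure style/continuation training). The paths are placeholders, and argument names shift between transformers/peft/trl releases, so treat the trainer call as approximate.

```python
# Sketch of a QLoRA setup for fine-tuning Llama 2 7B on a folder tree of lyric .txt
# files with 16GB of VRAM. Paths are placeholders; API details may vary by version.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTTrainer

# Recursively pick up every lyrics file; each line becomes a training example.
dataset = load_dataset("text", data_files={"train": "lyrics/**/*.txt"})["train"]

model_id = "meta-llama/Llama-2-7b-hf"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=lora,
)
trainer.train()
```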
Introducing Phi-4: Microsoft’s Newest Small Language Model Specializing in Complex Reasoning | 774 | 2024-12-13T01:26:29 | https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090 | metalman123 | techcommunity.microsoft.com | 1970-01-01T00:00:00 | 0 | {} | 1hd0y5j | false | null | t3_1hd0y5j | /r/LocalLLaMA/comments/1hd0y5j/introducing_phi4_microsofts_newest_small_language/ | false | false | 774 | {'enabled': False, 'images': [{'id': 'BfeC_1bgfqIqiWqAzMfQ4aHLoKL13cgpn7LkhqLVW4I', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?width=108&crop=smart&auto=webp&s=13eb0f808259f88846d8d94b88206835059cf516', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?width=216&crop=smart&auto=webp&s=0dd75bb1a0bcff3c538026330da57bd73faf0ddb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?width=320&crop=smart&auto=webp&s=608aa3ce01f9f18a3b9074e9a0ac42ba5aed14be', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?width=640&crop=smart&auto=webp&s=2c79ec68a48b99e3498e986fa7b51ce1237ea79c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?width=960&crop=smart&auto=webp&s=385599694063c9bc3f36f6d7aea2e1973e94b3ff', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?width=1080&crop=smart&auto=webp&s=e26c7affe6520318b7167809f9bc1fd67cf356cd', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/xFiHyBe8e1D0kgfwweXHI1raoCo9fScYtFhc0pW-b2s.jpg?auto=webp&s=3f5a23a7b07b1c2c27370c5d5130736b15b33f3b', 'width': 1920}, 'variants': {}}]} |
Bro WTF?? | 477 | 2024-12-13T01:38:15 | Consistent_Bit_3295 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1hd16ev | false | null | t3_1hd16ev | /r/LocalLLaMA/comments/1hd16ev/bro_wtf/ | false | false | 477 | {'enabled': True, 'images': [{'id': 'ODph1RCn0lyR3VsHq2mVJ8MewVZfxWq0bjLmNGJzYow', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/npjopxbhsi6e1.png?width=108&crop=smart&auto=webp&s=82b24c6be3526cf8e672d9ebc5b25685d5ff9871', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/npjopxbhsi6e1.png?width=216&crop=smart&auto=webp&s=721448cb083f05652dd838a2b064a1031af5a950', 'width': 216}, {'height': 206, 'url': 'https://preview.redd.it/npjopxbhsi6e1.png?width=320&crop=smart&auto=webp&s=bddedb6be117fa69ca5a015e0614ca430121843e', 'width': 320}, {'height': 413, 'url': 'https://preview.redd.it/npjopxbhsi6e1.png?width=640&crop=smart&auto=webp&s=4228b4650a68d0d6b884448db7f83a6747a48035', 'width': 640}, {'height': 620, 'url': 'https://preview.redd.it/npjopxbhsi6e1.png?width=960&crop=smart&auto=webp&s=aa969f8a5aba0f1a371ec5a8027213aac0daca9c', 'width': 960}], 'source': {'height': 689, 'url': 'https://preview.redd.it/npjopxbhsi6e1.png?auto=webp&s=93099c5139011b7a9ce04496fbf2ff4b264f129d', 'width': 1066}, 'variants': {}}]} |
Models that detect buttons? | 2 | Do we have any model that can point out the x,y co-ordinates of the clickable elements in a web page or application?
I did try Claude 3.5 v2 for the same and it’s able to give approximate results but they’re not accurate. Is there a dedicated model for the same? | 2024-12-13T02:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/1hd1ptd/models_that_detect_buttons/ | zzKillswitchzz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd1ptd | false | null | t3_1hd1ptd | /r/LocalLLaMA/comments/1hd1ptd/models_that_detect_buttons/ | false | false | self | 2 | null |
Price oh hosting Llama3.3 on azure | 1 | [removed] | 2024-12-13T02:08:23 | https://www.reddit.com/r/LocalLLaMA/comments/1hd1rmy/price_oh_hosting_llama33_on_azure/ | Raz--8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1hd1rmy | false | null | t3_1hd1rmy | /r/LocalLLaMA/comments/1hd1rmy/price_oh_hosting_llama33_on_azure/ | false | false | self | 1 | null |