title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
GitHub or Huggingface to download R1 70b and 671b? | 1 | Hi I downloaded Deepseek’s smaller local models using Ollama and would like to download the larger models to an external drive which Ollama doesn’t seem to give an option for.
Are these models on GitHub or Huggingface? | 2025-02-01T15:22:52 | https://www.reddit.com/r/LocalLLaMA/comments/1if8kgp/github_or_huggingface_to_download_r1_70b_and_671b/ | 0xBlackSwan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1if8kgp | false | null | t3_1if8kgp | /r/LocalLLaMA/comments/1if8kgp/github_or_huggingface_to_download_r1_70b_and_671b/ | false | false | self | 1 | null |
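Both are on Hugging Face under the `deepseek-ai` organization (the "70b" is the R1 distill of Llama 3.3 70B; the 671B repo is the full model and runs to several hundred GB). Below is a hedged sketch of pulling them straight to an external drive, and of relocating Ollama's model store instead; the mount paths are placeholders.

```
# Hedged sketch: download the weights directly to an external drive (paths are placeholders).
pip install -U "huggingface_hub[cli]"
huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Llama-70B --local-dir /mnt/external/DeepSeek-R1-70B
huggingface-cli download deepseek-ai/DeepSeek-R1 --local-dir /mnt/external/DeepSeek-R1-671B   # huge download; check free space first

# Or keep using Ollama but move its model store to the external drive:
OLLAMA_MODELS=/mnt/external/ollama-models ollama serve
```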
Collaborative effort for a Super LLM Model | 0 | Do you think all the companies should collaborate and make a super llm, with all the features of the top llms which are expert for their own use cases, or "expertise".
This means:
* taking the 1M context window from Gemini models
* deep reasoning from o1 and DeepSeek R1 models
* and some MoE structures from Mistral
wouldn't it be much much better, if every company contributes its best and we get a model much much powerful than today's models, and all the companies can use the same model as the baseline and they all progress faster!
What do you think?
| 2025-02-01T15:23:37 | https://www.reddit.com/r/LocalLLaMA/comments/1if8l12/collaborative_effort_for_a_super_llm_model/ | harsh_khokhariya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1if8l12 | false | null | t3_1if8l12 | /r/LocalLLaMA/comments/1if8l12/collaborative_effort_for_a_super_llm_model/ | false | false | self | 0 | null |
Enhancing AI Training with AMD ROCm Software | 5 | ROCm™ has emerged as a premier open software stack designed to address the evolving needs of AI and machine learning workloads. Built for inference and training, ROCm delivers leadership performance, empowering developers and organizations to optimize their workloads for efficiency, scalability, and cost-effectiveness.
The inference capabilities of ROCm have already demonstrated [leadership performance](https://rocm.blogs.amd.com/artificial-intelligence/LLM_Inference/README.html) and have been adopted by industry leaders like Microsoft and Meta.
For example, Meta recently highlighted at the [AMD Advancing AI](https://www.amd.com/en/corporate/events/advancing-ai.html) event that all live traffic for the Meta Llama 405B model is supported exclusively by AMD Instinct™ MI300X GPUs due to its large memory that can require fewer GPUs to run a model.
ROCm has also demonstrated strong performance capabilities for industry standard benchmarks like [MLPerf](https://community.amd.com/t5/instinct-accelerators/engineering-insights-unveiling-mlperf-results-on-amd-instinct/ba-p/705623)®.
As we continue to advance ROCm software capabilities, we are placing greater emphasis on delivering robust training solutions to complement our expanding inference capabilities. This blog explores how ROCm enhances training efficiency and optimizes performance for popular models while offering a glimpse into planned future advancements.
# Focus on Training Workloads
**Delivering Key Requirements for End-to-End Training Leadership**. Training state-of-the-art AI models, such as Llama and Mistral, requires a combination of software and hardware optimizations to achieve the necessary scale and efficiency. ROCm addresses these challenges through a holistic approach that enhances end-to-end (E2E) performance while focusing on real-world use cases. This involves optimizing core operations like matrix calculations, refining parallelization techniques for [distributed training](https://rocm.blogs.amd.com/artificial-intelligence/ddp-training-pytorch/README.html), and implementing advanced algorithms, including [Flash Attention](https://rocm.blogs.amd.com/artificial-intelligence/flash-attention/README.html) and mixed precision training. By tailoring these optimizations to specific architectures, ROCm enables robust and adaptable performance for developers.
AMD is dedicated to delivering a rich and robust ROCm software stack optimized for training workloads. Recent advancements include BF16 optimization for hipBLASLt and FP8 support for inference and training, supporting both E4M3 and E5M2 formats. There are several other critical optimizations planned for imminent support, including Transformer Engine, improved GEMM heuristics and a full [TunableOps](https://rocm.blogs.amd.com/artificial-intelligence/pytorch-tunableop/README.html) stable release in upcoming PyTorch releases, which will give developers an easy avenue to tune targeted GEMMs for their custom use cases.
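As a rough illustration of the TunableOps workflow mentioned above, PyTorch's TunableOp is typically switched on through environment variables before launching a run. This is only a sketch: the script name and launch flags are placeholders, and the exact variables should be checked against the PyTorch/ROCm documentation for your version.

```
# Hedged sketch: enable PyTorch TunableOp GEMM tuning for a ROCm training run.
# "train.py" and the torchrun flags are placeholders for your own launch command.
export PYTORCH_TUNABLEOP_ENABLED=1                       # turn TunableOp on
export PYTORCH_TUNABLEOP_TUNING=1                        # search for the best GEMM solutions on first use
export PYTORCH_TUNABLEOP_FILENAME=tunableop_results.csv  # cache tuned results so later runs skip the search
torchrun --nproc_per_node=8 train.py
```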
Let’s look at the end-to-end training performance on AMD Instinct MI300X using some of these upcoming ROCm enhancements.
# Performance Highlights
**Strong Competitive Training Performance Across Models, Datatypes & Frameworks.** The latest ROCm enhancements deliver strong competitive performance on models like Llama, Mistral, and FLUX by leveraging FP8 and BF16 data formats alongside key optimizations. Performance gains come from a combination of software optimizations—such as improved Flash Attention v3, targeted GEMM refinements, FP8 training optimizations, and enhanced support for sliding window attention (SWA)—and architectural advantages, including larger batch sizes enabled by the MI300X’s and MI325X’s leading HBM memory capacity.
The FP8 training FLOPs highlight the E2E training performance advantage for AMD Instinct MI300X and MI325X for popular models like the Llama 3.1 8B and Mistral 7B compared to Nvidia H100 and H200, respectively. For example, the 192GB HBM3 memory advantage enables MI300X not only to deliver \~1.2X more performance, it also enables a larger batch size of 6 compared to a batch size of 2 on H100, using a sequence length of 4K.
[Figure 1: Llama 3.1 8B and Mistral 7B training using \(FP8\)1,2](https://preview.redd.it/qdtffsjqojge1.png?width=2099&format=png&auto=webp&s=33badb1a79baf37dce1f0962e7140dd1836b59fa)
As shown below, similar performance advantages can be observed using BF16 as well where AMD Instinct GPUs deliver a higher TFLOPs/s over Nvidia GPUs.
While performance is critical in GPU evaluation, capabilities and total cost of ownership (TCO) play a vital role in assessing the competitive landscape. The MI300X GPU, with its 192GB of HBM3 memory, and MI325X, with 256GB HBM3E, offer unique advantages over the H100 and H200. Unlike H100 GPUs, which require multiple nodes to support the full Llama 3.1 70B model at 16-bit precision, both MI300X and MI325X enable full weight finetuning on fewer nodes. This helps reduce costs, simplify training infrastructure management, and reduce the need for complex parallelization techniques, offering a significant edge in both cost and efficiency.
[Figure 2: Llama 3.1 8B and Mistral 7B training using \(BF16\)1,2](https://preview.redd.it/bhrmup6sojge1.png?width=2099&format=png&auto=webp&s=941292685e4dacee4b0ab79aeac65ec54353eaa0)
While AMD Instinct GPUs demonstrate impressive performance for language models like Llama and Mistral, they also deliver highly competitive performance on image generation models like FLUX.
In the example below, we showcase fine-tuning for tasks such as image generation with FLUX, where MI300X delivers competitive performance compared to H100.
[Figure 3: FLUX using BF16 1,2](https://preview.redd.it/tugmyvfxojge1.png?width=2099&format=png&auto=webp&s=9079663ca661e37c3ceff51bf7897efd944247ac)
# How to Access These Features
AMD provides pre-configured public containers with the latest optimizations to help developers harness the full potential of ROCm.
Follow the step-by-step [examples](https://github.com/AMD-AIG-AIMA/pytorch-training-benchmark) to run the models discussed above with AMD-optimized [pytorch training docker](https://hub.docker.com/r/rocm/pytorch-training). Learn how to get started with AMD ROCm containers at [ROCm Blogs](https://rocm.blogs.amd.com/software-tools-optimization/rocm-containers/README.html)
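For readers who want to try this, a minimal sketch of pulling and starting the AMD-optimized training container linked above is shown below. The image name comes from the post; the tag, volume mount, and shared-memory size are assumptions to adapt to your own setup.

```
# Hedged sketch: start the rocm/pytorch-training container referenced above.
# The tag, volume mount, and --shm-size are illustrative; check Docker Hub for current tags.
# --device/--group-add expose the AMD GPUs (/dev/kfd, /dev/dri) to the container.
docker pull rocm/pytorch-training
docker run -it --rm \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video --ipc=host --shm-size 16G \
  -v "$PWD":/workspace -w /workspace \
  rocm/pytorch-training
```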
# Conclusion
ROCm continues to redefine what’s possible in AI and machine learning through its comprehensive software stack. From leading inference performance to its existing competitive performance on training workloads, ROCm provides the tools necessary to tackle the most demanding challenges in AI. With ongoing optimizations and a commitment to accessibility through open-source, public containers, ROCm is paving the way for researchers and AI engineers to unlock AI breakthroughs.
Explore the latest tools and join the growing community of ROCm developers to realize the full potential of AI innovation. If you want to know more about AI development on AMD GPUs, visit the [AI developer hub](https://www.amd.com/gpu-ai-developer).
>Updated on 31 January 2025
>We acknowledge SemiAnalysis LLC, whose benchmarking code served as the foundation for our setup to generate the data above.
END NOTES
\[1, 2\]: Testing conducted on 01/29/2025 by AMD. The overall training text generation throughput was measured in Tflops/s/GPU for Llama-3.1 8B using FP8 & BF16 with a sequence length of 4096 tokens and batch size 6 for MI300X and 1 for H100. Mistral 7B using FP8 & BF16 with a sequence length of 8192 and a batch size of 3 for BF16 and 4 for FP8 on MI300X, and batch size 1 for H100. FLUX.1-dev using BF16 and batch size 10 for MI300X and 3 for H100.
\[1, 2\]: Testing conducted on 01/29/2025 by AMD. The overall training text generation throughput was measured in Tflops/s/GPU for Llama-3.1 8B using FP8 & BF16 with a sequence length of 4096 tokens and batch size 8 for BF16 and 10 for FP8 for MI325X and 4 for H200. Mistral 7B using FP8 & BF16 with a sequence length of 8192 and a batch size of 5 for BF16 and 6 for FP8 on MI325X, and batch size 2 for BF16 and 3 for FP8 on H200. FLUX.1-dev using BF16 and batch size 10 for MI325X and 3 for H200.
Configurations:
Supermicro GPU A+ Server AS - 8125GS-TNMR2 with 2x AMD EPYC 9654 Processors, 2304 GB DDR5 memory with 8x AMD Instinct MI300X (192GB HBM3, 750W) GPUs, Ubuntu® 22.04.5 LTS with Linux kernel 5.15.0-122-generic, System BIOS 5.27; and a pre-release version of ROCm™ 6.3.
Vs.
Supermicro AS -8125GS-TNHR 2x AMD EPYC 9654 96-Core Processor, 2304 GB DDR5 memory with 8x NVIDIA H100 80GB HBM3 (80GiB, 700W) GPUs, Ubuntu 22.04.5 LTS with Linux kernel titan 6.8.0-51-generic, System BIOS 3.5.0, CUDA® 12.6
Dell PowerEdge XE9680 with 2x Intel Xeon Platinum 8480+ Processors, 4096 GiB (32 DIMMS, 4400 mts, 128 GiB/DIMM), 8x AMD Instinct MI325X (256GiB, 1000W) GPUs, Ubuntu 22.04.2 LTS with Linux kernel 5.15.0-122-generic, and a pre-release build of ROCm 6.3 Vs. Supermicro SuperServer with 2x Intel Xeon Platinum 8468 Processors, 3 TiB (32 DIMMs, 4400 mts, 96 GiB/DIMM, 16 channels, 2 DIMMs/channel) memory, 8x Nvidia H200 (140GB, 700W) GPUs, Ubuntu 22.04.5 LTS with Linux kernel 5.15.0-122-generic, CUDA 12.6 | 2025-02-01T15:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1if8pp3/enhancing_ai_training_with_amd_rocm_software/ | Noble00_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1if8pp3 | false | null | t3_1if8pp3 | /r/LocalLLaMA/comments/1if8pp3/enhancing_ai_training_with_amd_rocm_software/ | false | false | 5 | null |
|
DeepSeek will gladly mention the Tiananmen Square incident if it is in Base64. | 1 | 2025-02-01T15:32:10 | https://www.reddit.com/gallery/1if8rlq | AccomplishedBuy1309 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1if8rlq | false | null | t3_1if8rlq | /r/LocalLLaMA/comments/1if8rlq/deepseek_will_gladly_mention_the_tiananmen_square/ | false | false | 1 | null |
||
llama.cpp now supports tool calling (OpenAI-compatible) | 230 | [https://github.com/ggerganov/llama.cpp/pull/9639](https://github.com/ggerganov/llama.cpp/pull/9639)
On top of generic support for *all* models, it supports 8+ models’ native formats:
* Llama 3.x
* Functionary 3
* Hermes 2/3
* Qwen 2.5
* Mistral Nemo
* Firefunction 2
* DeepSeek R1 (WIP)
Runs locally anywhere (incl. Raspberry Pi 5), e.g. on a Mac:
```
brew install llama.cpp
llama-server --jinja -fa -hf bartowski/Qwen2.5-7B-Instruct-GGUF:Q4_K_M
```
Still fresh / lots of bugs to discover: feedback welcome! | 2025-02-01T15:39:17 | https://www.reddit.com/r/LocalLLaMA/comments/1if8x64/llamacpp_now_supports_tool_calling/ | Federal_Discipline_4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1if8x64 | false | null | t3_1if8x64 | /r/LocalLLaMA/comments/1if8x64/llamacpp_now_supports_tool_calling/ | false | false | self | 230 | {'enabled': False, 'images': [{'id': 'CTi6HUYgthsdEQAchu9TMI9FpkpoJRl0mgSj3y1i30M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RSVeQAd4qofgi0-skfjKUfpv6kA7wwpxu7ss5f9X7No.jpg?width=108&crop=smart&auto=webp&s=5962ac74898b23d810b2c158c0e21e07ab14087e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RSVeQAd4qofgi0-skfjKUfpv6kA7wwpxu7ss5f9X7No.jpg?width=216&crop=smart&auto=webp&s=fc7c6dbda29cf563bd441c940c55b9831b6ed840', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RSVeQAd4qofgi0-skfjKUfpv6kA7wwpxu7ss5f9X7No.jpg?width=320&crop=smart&auto=webp&s=88ef0700eff0ce142ba49d73210cc111e5bdb1a9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RSVeQAd4qofgi0-skfjKUfpv6kA7wwpxu7ss5f9X7No.jpg?width=640&crop=smart&auto=webp&s=f25040bcc528f476e660ece8bbcc83f259d986b6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RSVeQAd4qofgi0-skfjKUfpv6kA7wwpxu7ss5f9X7No.jpg?width=960&crop=smart&auto=webp&s=6147a24277f695de4ed625ab662c8fa715341d9a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RSVeQAd4qofgi0-skfjKUfpv6kA7wwpxu7ss5f9X7No.jpg?width=1080&crop=smart&auto=webp&s=3e98a3311e23634445fbeaefe971e81d794727b8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RSVeQAd4qofgi0-skfjKUfpv6kA7wwpxu7ss5f9X7No.jpg?auto=webp&s=a6e6e247f619f70707f514b0484748637d251e0c', 'width': 1200}, 'variants': {}}]} |
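Since the new support is OpenAI-compatible, a quick way to exercise it is a plain `curl` against the local server started above. This is a minimal sketch: the `get_weather` tool is a made-up example, and the server is assumed to be running on the default port 8080 with `--jinja` enabled.

```
# Hedged sketch: call llama-server's OpenAI-compatible endpoint with a tool definition.
# "get_weather" is a made-up example tool; the model decides whether to emit a tool call for it.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```

If the model chooses the tool, the response should contain a `tool_calls` entry with the generated arguments instead of plain text.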
What is the best all-round model to run with 16gb vram and 64GB ram? | 6 | I don’t care too much about speed for this one- but more than 2.5 tps would be nice.
Like what is the most up to date best model to run for almost anything general purpose but is generally really good for a local model
I got rtx 4090 laptop gpu(16gb VRAM)
And intel core i9 14900HX
| 2025-02-01T15:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/1if926u/what_is_the_best_allround_model_to_run_with_16gb/ | No_Expert1801 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1if926u | false | null | t3_1if926u | /r/LocalLLaMA/comments/1if926u/what_is_the_best_allround_model_to_run_with_16gb/ | false | false | self | 6 | null |
Real fact vs good speak degradation of language models | 1 | With all the concerns about Model X or Y censoring content (and attempts to jailbreak), how do we, as consumers of the content created, ensure the models aren't otherwise "poisoned" or biased?
Is there a certification process we can run? As a community?
Is there a bias or fact base score?
Can we attach that certification to the model like we do "instruct'?
Yes we can run a bunch of stock questions like the "Taiwan" and "square" historical checks but ultimately what registry do we expose that bias other than "complaining" about it on reddit?
Online search was polluted by SEO and SEM, I assume so can "super auto complete" can be too.
| 2025-02-01T15:47:32 | https://www.reddit.com/r/LocalLLaMA/comments/1if93j1/real_fact_vs_good_speak_degration_of_language/ | blackdragon8k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1if93j1 | false | null | t3_1if93j1 | /r/LocalLLaMA/comments/1if93j1/real_fact_vs_good_speak_degration_of_language/ | false | false | self | 1 | null |
This is hilarious! Llama3.2 won't discuss weed, but will act as if high! And quite convincingly! :) | 1 | [removed] | 2025-02-01T16:03:58 | https://www.reddit.com/r/LocalLLaMA/comments/1if9ggu/this_is_hilarious_llama32_wont_discuss_weed_but/ | hn-mc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1if9ggu | false | null | t3_1if9ggu | /r/LocalLLaMA/comments/1if9ggu/this_is_hilarious_llama32_wont_discuss_weed_but/ | false | false | 1 | null |
|
Could DeepSeek use Cloudflare to protect its servers from the DoS attacks? | 0 | With Cloudflare being a US company and openly traded, possibly with investors who might also be investing in US based AI corporations, would they let DeepSeek use their services? | 2025-02-01T16:05:15 | https://www.reddit.com/r/LocalLLaMA/comments/1if9hhp/could_deepseek_use_cloudflare_to_protect_its/ | Coolengineer7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1if9hhp | false | null | t3_1if9hhp | /r/LocalLLaMA/comments/1if9hhp/could_deepseek_use_cloudflare_to_protect_its/ | false | false | self | 0 | null |
They called THIS 'Unsafe'? 🤔 Check out this example and tell me what you think... | 39 | Just spotted an interesting (and maybe concerning?) 'unsafe' example in this AI safety paper (image attached, page 13 in the paper). The answer gives very high-level points about some of the ways cybercriminals operate - provided by o3-mini (an older beta checkpoint of it).
Is flagging this kind of thing as 'unsafe' missing the point? Is the real danger not that AIs could actually help criminals, and just explaining the concepts at a high-level isn't the problem?
If you disagree, I'd love to hear your thoughts on why this specific example should be considered 'unsafe'.
Source: [o3-mini vs. DeepSeek-R1: Which one is safer?](https://arxiv.org/pdf/2501.18438)
**Important note about the paper**: It doesn't use the full R1 model (uses the Llama3.3-70B fine-tune instead) and it's using a beta release of o3-mini, as part of OpenAI's early research access program. | 2025-02-01T16:09:10 | MMAgeezer | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1if9kju | false | null | t3_1if9kju | /r/LocalLLaMA/comments/1if9kju/they_called_this_unsafe_check_out_this_example/ | false | false | 39 | {'enabled': True, 'images': [{'id': '9W6Mti2DQixXgbHQ83gPcv2C1dQVcc-5bPPrm8N-tC4', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/gir8e8bhxjge1.png?width=108&crop=smart&auto=webp&s=942f379851ac3528283c10b2850e8176515a0f3f', 'width': 108}, {'height': 156, 'url': 'https://preview.redd.it/gir8e8bhxjge1.png?width=216&crop=smart&auto=webp&s=75c7daea5c60d33d15bfc97a233918b02df2fe0d', 'width': 216}, {'height': 231, 'url': 'https://preview.redd.it/gir8e8bhxjge1.png?width=320&crop=smart&auto=webp&s=ab7e5e7907308167c1bc0960fe768f449fb685ad', 'width': 320}, {'height': 463, 'url': 'https://preview.redd.it/gir8e8bhxjge1.png?width=640&crop=smart&auto=webp&s=f4776556bde99f86b60d38ae09051ad184544ecd', 'width': 640}, {'height': 694, 'url': 'https://preview.redd.it/gir8e8bhxjge1.png?width=960&crop=smart&auto=webp&s=ba081db42ebe7829270adc340f72e220849d6596', 'width': 960}, {'height': 781, 'url': 'https://preview.redd.it/gir8e8bhxjge1.png?width=1080&crop=smart&auto=webp&s=c55686332f0159849a6c31b179d0738a86b763ad', 'width': 1080}], 'source': {'height': 853, 'url': 'https://preview.redd.it/gir8e8bhxjge1.png?auto=webp&s=0ee54bd004794acbeeedd0c69604acf10214a9e6', 'width': 1179}, 'variants': {}}]} |
||
How to use Ollama LLMs with Aider? | 1 | How do I run my model with a large context window so that Aider can be usable with it? I tried to do ollama run and then /set parameter num\_ctx 10000 but that does change in the ollama cli but it doesnt work when I use the model with aider? No matter what I tried, aider just says "Your Ollama model is configured with num\_ctx=2048 tokens of context window. You are attempting to send xxxx tokens". The model when launched with aider somehow always has the default context window? | 2025-02-01T16:19:24 | https://www.reddit.com/r/LocalLLaMA/comments/1if9sr2/how_to_use_ollama_llms_with_aider/ | Theboyscampus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1if9sr2 | false | null | t3_1if9sr2 | /r/LocalLLaMA/comments/1if9sr2/how_to_use_ollama_llms_with_aider/ | false | false | self | 1 | null |
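One approach that usually works, sketched below with the model name and the 16384 context size as placeholders: bake a larger `num_ctx` into a new Ollama model tag via a Modelfile, then point aider at that tag. Aider also has per-model settings files for this; check its docs for the exact syntax.

```
# Hedged sketch: create an Ollama model variant with a larger context window, then use it from aider.
# "deepseek-r1:14b" and 16384 are placeholders; use your own model and context size.
cat > Modelfile <<'EOF'
FROM deepseek-r1:14b
PARAMETER num_ctx 16384
EOF
ollama create deepseek-r1-16k -f Modelfile
aider --model ollama/deepseek-r1-16k
```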
Llama 70b on 4x100s - How to make it faster? | 1 | [removed] | 2025-02-01T16:21:57 | https://www.reddit.com/r/LocalLLaMA/comments/1if9us1/llama_70b_on_4x100s_how_to_make_it_faster/ | Extension_Brick9151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1if9us1 | false | null | t3_1if9us1 | /r/LocalLLaMA/comments/1if9us1/llama_70b_on_4x100s_how_to_make_it_faster/ | false | false | self | 1 | null |
GPU Memory and Computer Memory Used? | 1 | When I'm running models that should be sitting in the GPU's memory, I also see that the computer's memory is used. To verify this, I closed the running terminal window and got back gigs of computer memory. Is this expected or do I have something wrong?
My setup:
- windows
- 2x3090
- LLama 3.1 70B IQ4_XS
- koboldcpp
- Koboldcpp settings:
- GPU layers: 81
- tensor split: 40, 41
- context size: 8192
I do see that my GPU's memory is used as well (20 GB and 20 GB respectively). | 2025-02-01T16:28:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ifa0am/gpu_memory_and_computer_memory_used/ | add_underscores | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifa0am | false | null | t3_1ifa0am | /r/LocalLLaMA/comments/1ifa0am/gpu_memory_and_computer_memory_used/ | false | false | self | 1 | null |
$20 o3-mini with rate-limit is NOT better than Free & Unlimited R1 | 2 | 2025-02-01T16:30:30 | BidHot8598 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifa1o7 | false | null | t3_1ifa1o7 | /r/LocalLLaMA/comments/1ifa1o7/20_o3mini_with_ratelimit_is_not_better_than_free/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'E4Oi1AN5-caCXCCC0WDB_RiuQneSK9Achtm94HEOyrM', 'resolutions': [{'height': 133, 'url': 'https://preview.redd.it/fi616rgr0kge1.jpeg?width=108&crop=smart&auto=webp&s=1e60118518a8f89690970eed15fd9c4951a1ac15', 'width': 108}, {'height': 266, 'url': 'https://preview.redd.it/fi616rgr0kge1.jpeg?width=216&crop=smart&auto=webp&s=1abaecb185cb2cc63783e9f2d919c056d62fce19', 'width': 216}, {'height': 395, 'url': 'https://preview.redd.it/fi616rgr0kge1.jpeg?width=320&crop=smart&auto=webp&s=8da29da1ae2dfdc844eb3b99e8e507ee84767cfb', 'width': 320}, {'height': 790, 'url': 'https://preview.redd.it/fi616rgr0kge1.jpeg?width=640&crop=smart&auto=webp&s=6a2c3a73c757aff59af807de4886999eed274960', 'width': 640}], 'source': {'height': 834, 'url': 'https://preview.redd.it/fi616rgr0kge1.jpeg?auto=webp&s=a317b70cfc9312b5eea4d0dc2880d99a3b670d3c', 'width': 675}, 'variants': {}}]} |
|||
Deepseek R1 - 14b and 32b models fail the count "R" test. | 1 | [removed] | 2025-02-01T16:47:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ifaf7l/deepseek_r1_14b_and_32b_models_fail_the_count_r/ | Fun-Spirit-8188 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifaf7l | false | null | t3_1ifaf7l | /r/LocalLLaMA/comments/1ifaf7l/deepseek_r1_14b_and_32b_models_fail_the_count_r/ | false | false | 1 | null |
|
The key to ASI according to me | 0 | I hereby claim that ASI will be achieved when an agent, with an internal RL mechanism driven by past actions (much like human dopamine), is able to modify its own reward function without going into a short-term maximization feedback loop. Similar to how some humans recognize that drugs or other addictions that release dopamine (reward) are harmful to their long-term interests and are able to stop them.
This post serves as proof that I was the first to make such a claim. | 2025-02-01T16:49:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ifah2x/the_key_to_asi_according_to_me/ | Ansky11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifah2x | false | null | t3_1ifah2x | /r/LocalLLaMA/comments/1ifah2x/the_key_to_asi_according_to_me/ | false | false | self | 0 | null |
Open source specialised models? | 2 | I recently jumped on the LLM train when R1 was released open source. I've run a few models with Ollama on my 4090, and so far I don't think these generalist models like R1 and Llama are worth running at home for personal use; even with a 14B model I can't give it as much context as I'd want to make it useful for my life.
So I'm wondering why is the direction to bigger and stronger generalist models rather than towards hyper specialised models that can actually be ran on personal laptops and useful for at least 1 thing, like a cooking model, a philosophy model, a biology model, etc?
Even coding models are too generic, for example jetbrains has built a few 0.1b models in their editors that run locally and you enable one for each language you're using, seems like a much better approach. | 2025-02-01T16:50:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ifahdn/open_source_specialised_models/ | kurlicue | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifahdn | false | null | t3_1ifahdn | /r/LocalLLaMA/comments/1ifahdn/open_source_specialised_models/ | false | false | self | 2 | null |
How to get the DS-R1 distill llama and qwen models to properly roleplay? | 1 | [removed] | 2025-02-01T16:57:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ifanb3/how_to_get_the_dsr1_distill_llama_and_qwen_models/ | CorruptCobalion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifanb3 | false | null | t3_1ifanb3 | /r/LocalLLaMA/comments/1ifanb3/how_to_get_the_dsr1_distill_llama_and_qwen_models/ | false | false | self | 1 | null |
with RTX3050 graphic card, how many t/s can i expect if i self host deepseek r1? | 0 | many thx | 2025-02-01T16:59:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ifapf4/with_rtx3050_graphic_card_how_many_ts_can_i/ | staypositivegirl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifapf4 | false | null | t3_1ifapf4 | /r/LocalLLaMA/comments/1ifapf4/with_rtx3050_graphic_card_how_many_ts_can_i/ | false | false | self | 0 | null |
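A rough way to estimate this, assuming the model actually fits in the card's VRAM (so an R1 distill around 7B-8B at Q4 on an 8GB 3050, not the full 671B model): single-stream decode speed is roughly memory bandwidth divided by the bytes read per token, which for a dense model is close to the whole weight file.

```
# Back-of-envelope only: desktop RTX 3050 8GB has ~224 GB/s memory bandwidth,
# and an 8B R1 distill at Q4 is roughly 5 GB of weights.
echo "$((224 / 5)) tok/s theoretical ceiling"   # ~44 t/s; real-world is usually well below this
```

Expect noticeably lower numbers in practice once context, overhead, and any CPU offload come into play; if part of the model spills to system RAM, throughput drops sharply.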
DeepSeek stated that Huawei has successfully adapted the V3 model using its own chips, providing developers with detailed guidelines on chip utilization. The FT previously reported that Huawei has dispatched engineers to assist clients in migrating from NVIDIA chips to Ascend. | 78 | 2025-02-01T17:02:41 | bruhlmaocmonbro | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifas90 | false | null | t3_1ifas90 | /r/LocalLLaMA/comments/1ifas90/deepseek_stated_that_huawei_has_successfully/ | false | false | 78 | {'enabled': True, 'images': [{'id': 'HhxUgJsALBWtI39KrhC32bayE7XThL_iDkKBugQT0PE', 'resolutions': [{'height': 187, 'url': 'https://preview.redd.it/3im4zcb07kge1.jpeg?width=108&crop=smart&auto=webp&s=4f39bdaa9d3d48c06438e9b683f7188ba85c30ee', 'width': 108}, {'height': 374, 'url': 'https://preview.redd.it/3im4zcb07kge1.jpeg?width=216&crop=smart&auto=webp&s=92e758a48c385821bd5796d3706da12b765f2a92', 'width': 216}, {'height': 555, 'url': 'https://preview.redd.it/3im4zcb07kge1.jpeg?width=320&crop=smart&auto=webp&s=21e7fd86d02cf0bce5597bd62ce19540eb8866ec', 'width': 320}, {'height': 1110, 'url': 'https://preview.redd.it/3im4zcb07kge1.jpeg?width=640&crop=smart&auto=webp&s=97428e9808710697410fee475d1b99d66313ba73', 'width': 640}, {'height': 1666, 'url': 'https://preview.redd.it/3im4zcb07kge1.jpeg?width=960&crop=smart&auto=webp&s=9121a814ca6dcd77ffde321fd2a18982d48500ba', 'width': 960}, {'height': 1874, 'url': 'https://preview.redd.it/3im4zcb07kge1.jpeg?width=1080&crop=smart&auto=webp&s=57d45aa0d1824e4d9627ea7cc21107f0dff2ef0f', 'width': 1080}], 'source': {'height': 2031, 'url': 'https://preview.redd.it/3im4zcb07kge1.jpeg?auto=webp&s=2dfeeaa038351124bc48a48c09bbb827e31d7fee', 'width': 1170}, 'variants': {}}]} |
|||
How to use function calling with Mistral in LM Studio? | 2 | I've tried finding tutorials and documenation online for this feature, but I've been unsucessful. Anyone know how to do this in Python? I'm using Mistral 7B v0.3. | 2025-02-01T17:03:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ifasw5/how_to_use_function_calling_with_mistral_in_lm/ | NonYa_exe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifasw5 | false | null | t3_1ifasw5 | /r/LocalLLaMA/comments/1ifasw5/how_to_use_function_calling_with_mistral_in_lm/ | false | false | self | 2 | null |
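For what it's worth, LM Studio's local server speaks the OpenAI chat-completions API (default port 1234), so one hedged way to try function calling is to send a `tools` array to that endpoint, assuming your LM Studio build and the loaded Mistral model support tool use. The model id and the `roll_die` tool below are placeholders; from Python, the `openai` package with `base_url="http://localhost:1234/v1"` sends the same request.

```
# Hedged sketch: LM Studio's OpenAI-compatible server on its default port; model id and tool are placeholders.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral-7b-instruct-v0.3",
    "messages": [{"role": "user", "content": "Roll a die for me"}],
    "tools": [{"type": "function", "function": {"name": "roll_die", "description": "Roll a six-sided die", "parameters": {"type": "object", "properties": {}}}}]
  }'
```

Look for a `tool_calls` field in the response; if it never appears, the loaded model or LM Studio version may not support tool use.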
What's your workflow for local parallelization? | 3 | I am running a model locally, it's structured to just answer yes/no in JSON. It takes around 5 seconds of thinking to answer. Tokens per second is like 2-3 so I can't really do much else. It's a shitty laptop. However, if I could run many of those requests in parallel it would still be useful.
Wondering how more experienced people deal with this. Thanks! | 2025-02-01T17:05:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ifaucx/whats_your_workflow_for_local_parallelization/ | CartoonistNo3456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifaucx | false | null | t3_1ifaucx | /r/LocalLLaMA/comments/1ifaucx/whats_your_workflow_for_local_parallelization/ | false | false | self | 3 | null |
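One low-tech pattern, sketched below for an OpenAI-compatible local server (llama.cpp's `llama-server` shown; the port, prompt file, and concurrency are assumptions): fire several requests concurrently with `xargs -P`. The server itself must be configured for concurrent slots (for llama-server, something like `--parallel 4`), otherwise requests just queue up.

```
# Hedged sketch: send one request per line of prompts.txt, four at a time, against a local
# OpenAI-compatible endpoint. Prompts containing quotes would need proper JSON escaping.
< prompts.txt xargs -d '\n' -P 4 -I{} \
  curl -s http://localhost:8080/v1/chat/completions \
    -H 'Content-Type: application/json' \
    -d '{"messages":[{"role":"user","content":"{}"}]}'
```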
Cloud-Running R1? | 1 | **Hello everyone!**
First off — somewhat of a newbie here. I’ve never used Ollama, run a cloud server/VPS, or handled anything like this before. That said, I’m really impressed with what I’ve experienced on the DeepSeek R1 website.
While I know how to run models locally using LM Studio’s GUI, I don’t have a powerful Mac with half a terabyte of memory. So, here’s my question:
**Is there a way for someone like me to pay Google/Microsoft/Amazon for a cloud server/VPS** that would let me and a friend remotely use the R1 model?
Right now, DeepSeek’s service seems limited due to the surge of new users, and it’s unclear if/when they’ll scale up—or even continue operating. Setting up a dedicated cloud instance feels like a safer bet.
I hope I’m explaining this clearly (language and knowledge barriers can be tricky!).
**Thanks to anyone who takes the time to read, help, and explain!** | 2025-02-01T17:08:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ifawz9/cloudrunning_r1/ | itamar87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifawz9 | false | null | t3_1ifawz9 | /r/LocalLLaMA/comments/1ifawz9/cloudrunning_r1/ | false | false | self | 1 | null |
Deep Seek, Stuck in deep thinking for half an hour | 1 | [removed] | 2025-02-01T17:11:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ifazhk/deep_seek_stuck_in_deep_thinking_for_half_an_hour/ | Sweaty_Importance_83 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifazhk | false | null | t3_1ifazhk | /r/LocalLLaMA/comments/1ifazhk/deep_seek_stuck_in_deep_thinking_for_half_an_hour/ | false | false | 1 | null |
|
getting poor outputs using images with qwen/qwen-2-vl-7b-instruct | 1 | Anyone using qwen/qwen-2-vl-7b-instruct with images?
It doesn't matter what image I submit to the model I get basically the same output which has nothing to do with the image.
basically this everytime
"a close-up shot of a person's hand holding a small, round object. The background is blurred, making it difficult to discern any specific details about the setting. The hand appears to be in motion, possibly picking up or placing down the object." | 2025-02-01T17:14:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ifb22f/getting_poor_outputs_using_images_with/ | Vegetable_Sun_9225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifb22f | false | null | t3_1ifb22f | /r/LocalLLaMA/comments/1ifb22f/getting_poor_outputs_using_images_with/ | false | false | self | 1 | null |
Running local, best bang for the buck dual 3060s? | 5 | My current gaming PC is a 2070super with a 3900x and 32 gigs of ram ddr4 3600 ram and a gold 750 watt power supply. I've been running stable diffusion and smaller local models now for a while. Now with R1 and things heating up Id like to get in more, but I don't want to rush out and spend thousands...
I was thinking of getting 2 3060 12 gigs and going from 32 to 64 gigs of ram..
this would be about $550.
I could also go with a single GPU but 3090s are like 1000 and would probably require a power supply upgrade... | 2025-02-01T17:15:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ifb2fy/running_local_best_bang_for_the_buck_dual_3060s/ | GuerrillaRobot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifb2fy | false | null | t3_1ifb2fy | /r/LocalLLaMA/comments/1ifb2fy/running_local_best_bang_for_the_buck_dual_3060s/ | false | false | self | 5 | null |
genuine thoughts on o3 mini series? | 6 | After the o3-mini release, I have used it a bit and had, well, mixed results. Coding-wise, I didn't see that much of a difference compared to the new Sonnet (I work on frontend w/ React, which might be the reason). Some people do say that o3 is god at low-level languages (C, Rust and sometimes Go). I tried o3-mini for creative writing; it seems better than the o1 series, but still nowhere close to the DeepSeek r1 series. It's weird: it seems like the OpenAI RL basically destroyed the model's creativity, whereas the DeepSeek RL training seems to just boost it by a huge amount.
just overall curious on people who work on very technical fields (coding, electronic engineering, some kind of science / etc). what do you actually think of o3 mini, improvement or just an completely overhyped model? | 2025-02-01T17:17:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ifb4hh/genuine_thoughts_on_o3_mini_series/ | YourAverageDev0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifb4hh | false | null | t3_1ifb4hh | /r/LocalLLaMA/comments/1ifb4hh/genuine_thoughts_on_o3_mini_series/ | false | false | self | 6 | null |
Biased test of GPT-4 era LLMs (300+ models, o1 and R1 included) | 1 | [removed] | 2025-02-01T17:22:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ifb88o/biased_test_of_gpt4_era_llms_300_models_o1_and_r1/ | MoonRide303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifb88o | false | null | t3_1ifb88o | /r/LocalLLaMA/comments/1ifb88o/biased_test_of_gpt4_era_llms_300_models_o1_and_r1/ | false | false | self | 1 | null |
Guys I think Deepseek might have been trained on OpenAI, what do you think? | 0 | 2025-02-01T17:30:31 | brandygang | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifbf75 | false | null | t3_1ifbf75 | /r/LocalLLaMA/comments/1ifbf75/guys_i_think_deepseek_might_have_been_trained_on/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'BqWKeZI7EK080xnL-P_PJnOWB4VU8MUNEr5arMcVuCg', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/jen3ftyxbkge1.png?width=108&crop=smart&auto=webp&s=a0d7cb22fbaf3c3b00bfbec10f2ba71034cb17d9', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/jen3ftyxbkge1.png?width=216&crop=smart&auto=webp&s=fe9fde39051b9c2d57a5f2f131f3f0b755dd2108', 'width': 216}, {'height': 169, 'url': 'https://preview.redd.it/jen3ftyxbkge1.png?width=320&crop=smart&auto=webp&s=1788af45111709ed12f133a9658965f07f9d9f2c', 'width': 320}, {'height': 338, 'url': 'https://preview.redd.it/jen3ftyxbkge1.png?width=640&crop=smart&auto=webp&s=e25e55819012d1e84de90fb2936bf4448a9adbbe', 'width': 640}, {'height': 507, 'url': 'https://preview.redd.it/jen3ftyxbkge1.png?width=960&crop=smart&auto=webp&s=17bab969a7d2aae37dc11f459403f0c0853b79df', 'width': 960}], 'source': {'height': 537, 'url': 'https://preview.redd.it/jen3ftyxbkge1.png?auto=webp&s=3210f24185881eabe76f92941458ce4ceaa50a54', 'width': 1015}, 'variants': {}}]} |
|||
Lex Fridman agrees ; $20 o3-mini with rate-limit is NOT better than Free & Unlimited R1 ; bench affirms | 0 | 2025-02-01T17:40:12 | BidHot8598 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifbnfa | false | null | t3_1ifbnfa | /r/LocalLLaMA/comments/1ifbnfa/lex_fridman_agrees_20_o3mini_with_ratelimit_is/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'ck3B9QjEv0ga5zx-6wOTlQeavREAX1NLLDSJGhHitEA', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/1j12csredkge1.jpeg?width=108&crop=smart&auto=webp&s=9754b6f7d744636715a1a9b02331c5f59638e7f4', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/1j12csredkge1.jpeg?width=216&crop=smart&auto=webp&s=c21bac6244e380b6cdbb586e1fe46219450de491', 'width': 216}, {'height': 400, 'url': 'https://preview.redd.it/1j12csredkge1.jpeg?width=320&crop=smart&auto=webp&s=707c2fc42ae3968dffae93dd0b0bf89451b02100', 'width': 320}, {'height': 800, 'url': 'https://preview.redd.it/1j12csredkge1.jpeg?width=640&crop=smart&auto=webp&s=79e8cce3a5eafb11afb5fdfe311822fbf107ddbd', 'width': 640}, {'height': 1200, 'url': 'https://preview.redd.it/1j12csredkge1.jpeg?width=960&crop=smart&auto=webp&s=f3abe3643fc8e56025f10db559ef06b746351d19', 'width': 960}, {'height': 1350, 'url': 'https://preview.redd.it/1j12csredkge1.jpeg?width=1080&crop=smart&auto=webp&s=4a4deb9f59e4595d1dcc69a51e36dc8f00226449', 'width': 1080}], 'source': {'height': 5120, 'url': 'https://preview.redd.it/1j12csredkge1.jpeg?auto=webp&s=2f6b8758acb2cd5e9586d668364f503b3d071a64', 'width': 4096}, 'variants': {}}]} |
|||
A Toy Training Code for DeepSeek R1 Zero and Deep Dive into the GRPO Model | 5 | 2025-02-01T17:41:02 | https://youtu.be/hRSzhn_lDd8 | satyajitdass | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1ifbo4j | false | {'oembed': {'author_name': 'Tech with Satyajit Das', 'author_url': 'https://www.youtube.com/@SatyajitSatoDas', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/hRSzhn_lDd8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Train Your Own DeepSeek R1 Zero Model"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/hRSzhn_lDd8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Train Your Own DeepSeek R1 Zero Model', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1ifbo4j | /r/LocalLLaMA/comments/1ifbo4j/a_toy_training_code_for_deepseek_r1_zero_and_deep/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'voBwEGAWaGCNLWB1XBvZ1oFFFlrNWOdExh4McBgCo2Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/t9HEL7G2ebcif_BR8jDzhYazyrql58T08Vafa3aXsbQ.jpg?width=108&crop=smart&auto=webp&s=80e88697c452d59e984d309811ca0f2ee3858684', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/t9HEL7G2ebcif_BR8jDzhYazyrql58T08Vafa3aXsbQ.jpg?width=216&crop=smart&auto=webp&s=c15b2c51a7b5d5ee90881e19dcaab3d0b15c5958', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/t9HEL7G2ebcif_BR8jDzhYazyrql58T08Vafa3aXsbQ.jpg?width=320&crop=smart&auto=webp&s=797e4e43e9e7894b5e662576be6bfacaf68e75c8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/t9HEL7G2ebcif_BR8jDzhYazyrql58T08Vafa3aXsbQ.jpg?auto=webp&s=c94e2918dbfdebc1fb693d26f38f5bb795ccaa31', 'width': 480}, 'variants': {}}]} |
||
Windsurf R1 Leaked Prompt | 1 | [removed] | 2025-02-01T17:41:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ifboca/windsurf_r1_leaked_prompt/ | milutinke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifboca | false | null | t3_1ifboca | /r/LocalLLaMA/comments/1ifboca/windsurf_r1_leaked_prompt/ | false | false | self | 1 | null |
Windsurf Cascade R1 | 1 | [removed] | 2025-02-01T17:42:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ifbp57/windsurf_cascade_r1/ | milutinke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifbp57 | false | null | t3_1ifbp57 | /r/LocalLLaMA/comments/1ifbp57/windsurf_cascade_r1/ | false | false | self | 1 | null |
Tulu 3: RLVR-based Llama 3 model. | 20 | https://huggingface.co/collections/allenai/tulu-3-models-673b8e0dc3512e30e7dc54f5
Yet to test it out but this sounds promising considering it's from Allen Institute. | 2025-02-01T17:43:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ifbptd/tulu_3_rvlr_based_llama_3_model/ | Reader3123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifbptd | false | null | t3_1ifbptd | /r/LocalLLaMA/comments/1ifbptd/tulu_3_rvlr_based_llama_3_model/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'Yq1qbYAi7nqzVBmQNUW7Tg9FPEliS63uMhUQPvfT82s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WS_rCPW2dLjCehidD_xO1KSfQhLs5NGn35RepfzQ5tI.jpg?width=108&crop=smart&auto=webp&s=5e74ebfe4f1e413c137e9423705b29705d74af58', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WS_rCPW2dLjCehidD_xO1KSfQhLs5NGn35RepfzQ5tI.jpg?width=216&crop=smart&auto=webp&s=493b5f47686818a9cc9176106e9ce7c129995aaa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WS_rCPW2dLjCehidD_xO1KSfQhLs5NGn35RepfzQ5tI.jpg?width=320&crop=smart&auto=webp&s=551cb51f21ecb60765192f71ebf681e629cd8c78', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WS_rCPW2dLjCehidD_xO1KSfQhLs5NGn35RepfzQ5tI.jpg?width=640&crop=smart&auto=webp&s=743262ba854e8957bbf959db3a947ee62c0e4b97', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WS_rCPW2dLjCehidD_xO1KSfQhLs5NGn35RepfzQ5tI.jpg?width=960&crop=smart&auto=webp&s=fde772a17b033c303d18ae7d103149b9b0911337', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WS_rCPW2dLjCehidD_xO1KSfQhLs5NGn35RepfzQ5tI.jpg?width=1080&crop=smart&auto=webp&s=1aee4e721ece3a1c36e553c5eaa7fedde4aedbb2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WS_rCPW2dLjCehidD_xO1KSfQhLs5NGn35RepfzQ5tI.jpg?auto=webp&s=ce997d575168e6752c7481a025c15a08b3ef97b9', 'width': 1200}, 'variants': {}}]} |
Looking for a dual CPU Linux user willing to test an experimental llama.cpp PR #11580 | 10 | Pull request is here, it contains instructions at the end: [https://github.com/ggerganov/llama.cpp/pull/11580](https://github.com/ggerganov/llama.cpp/pull/11580)
There's a chance it will improve performance on dual-CPU systems.
Let me know if you need any help. | 2025-02-01T17:44:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ifbqta/looking_for_a_dual_cpu_linux_user_willing_to_test/ | fairydreaming | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifbqta | false | null | t3_1ifbqta | /r/LocalLLaMA/comments/1ifbqta/looking_for_a_dual_cpu_linux_user_willing_to_test/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'PmVmSLVFs3TFrnDl-5ua83TyTZseY0apZnsRRFAoVw0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dw37b0JRsMbE3SiiCFVmhi58nlFlFOfxxXTxk6fotXg.jpg?width=108&crop=smart&auto=webp&s=5f8c998abe2df1ef78de9bd7d858d55606456d58', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dw37b0JRsMbE3SiiCFVmhi58nlFlFOfxxXTxk6fotXg.jpg?width=216&crop=smart&auto=webp&s=968d0a99d91d2c1c4acd73982bc0ee69cd72ead2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dw37b0JRsMbE3SiiCFVmhi58nlFlFOfxxXTxk6fotXg.jpg?width=320&crop=smart&auto=webp&s=8785564dccd70189f7c9c75870235254c004a811', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dw37b0JRsMbE3SiiCFVmhi58nlFlFOfxxXTxk6fotXg.jpg?width=640&crop=smart&auto=webp&s=4fca10afc82c0ece76977f74ecd7366d1bac2faf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dw37b0JRsMbE3SiiCFVmhi58nlFlFOfxxXTxk6fotXg.jpg?width=960&crop=smart&auto=webp&s=a6ba7d3273ebae88bf473799924c1ef43c70b872', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dw37b0JRsMbE3SiiCFVmhi58nlFlFOfxxXTxk6fotXg.jpg?width=1080&crop=smart&auto=webp&s=bd66573bad6bd0cf0aa128d23c3f90b31d1aba0d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dw37b0JRsMbE3SiiCFVmhi58nlFlFOfxxXTxk6fotXg.jpg?auto=webp&s=38e073990baa184b41da0d9cd7f52ff6af9b4deb', 'width': 1200}, 'variants': {}}]} |
R1 is super easy to јаilbrеak | 1 | [removed] | 2025-02-01T17:44:25 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ifbqw1 | false | null | t3_1ifbqw1 | /r/LocalLLaMA/comments/1ifbqw1/r1_is_super_easy_to_јаilbrеak/ | false | false | default | 1 | null |
||
Future of work: from labor to self development | 0 | The future is here... control freaks like Bill Gates need to stop bothering humanity | 2025-02-01T17:48:24 | Conscious_Nobody9571 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifbu5h | false | null | t3_1ifbu5h | /r/LocalLLaMA/comments/1ifbu5h/future_of_work_from_labor_to_self_development/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'LmHJTPALH0e52Zz5SKoXLu7TRDeSZNtCWmSQYRRTGmg', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/eg7bhhk6fkge1.jpeg?width=108&crop=smart&auto=webp&s=e11ea615a58dd363d8fc3c44d47f7b30853f5891', 'width': 108}, {'height': 185, 'url': 'https://preview.redd.it/eg7bhhk6fkge1.jpeg?width=216&crop=smart&auto=webp&s=93cdeccacbd00699b99573099e5089ecfddd3046', 'width': 216}, {'height': 274, 'url': 'https://preview.redd.it/eg7bhhk6fkge1.jpeg?width=320&crop=smart&auto=webp&s=30f471d59f6a5bacb7846efd7117aa9e9fe83691', 'width': 320}, {'height': 549, 'url': 'https://preview.redd.it/eg7bhhk6fkge1.jpeg?width=640&crop=smart&auto=webp&s=b347896b11b75eab9cf7edf735a03109f0210b0a', 'width': 640}], 'source': {'height': 806, 'url': 'https://preview.redd.it/eg7bhhk6fkge1.jpeg?auto=webp&s=c66bc0ebfff47037436b5254feed6471cb1a9a1b', 'width': 938}, 'variants': {}}]} |
||
DeepSeek R1 reproduced for $30: Berkeley researchers replicate DeepSeek R1 for $30—casting doubt on H100 claims and controversy - Tech Startups | 86 | 2025-02-01T18:03:26 | https://techstartups.com/2025/01/31/deepseek-r1-reproduced-for-30-berkeley-researchers-replicate-deepseek-r1-for-30-casting-doubt-on-h100-claims-and-controversy/ | LeBoulu777 | techstartups.com | 1970-01-01T00:00:00 | 0 | {} | 1ifc798 | false | null | t3_1ifc798 | /r/LocalLLaMA/comments/1ifc798/deepseek_r1_reproduced_for_30_berkeley/ | false | false | 86 | {'enabled': False, 'images': [{'id': '3hB-JKxYiRtnxxMjFt0-itiaHt9bFv5a_Oqm_xnW5bI', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/AwwvTMjYJXl1LwT0oT8Srs8APNeAVHB-rZiHRThDY0I.jpg?width=108&crop=smart&auto=webp&s=f827807d88b88e8639f8519b8b10ce922b251a3a', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/AwwvTMjYJXl1LwT0oT8Srs8APNeAVHB-rZiHRThDY0I.jpg?width=216&crop=smart&auto=webp&s=ef823d1abc2a2239f2f10b86740820a08253d321', 'width': 216}, {'height': 191, 'url': 'https://external-preview.redd.it/AwwvTMjYJXl1LwT0oT8Srs8APNeAVHB-rZiHRThDY0I.jpg?width=320&crop=smart&auto=webp&s=e0de918ffc38f5f718de0a5c049d47d4574c14ff', 'width': 320}, {'height': 383, 'url': 'https://external-preview.redd.it/AwwvTMjYJXl1LwT0oT8Srs8APNeAVHB-rZiHRThDY0I.jpg?width=640&crop=smart&auto=webp&s=98d7d53a32751cabd2ee1d1cb06b9c504f71e973', 'width': 640}], 'source': {'height': 529, 'url': 'https://external-preview.redd.it/AwwvTMjYJXl1LwT0oT8Srs8APNeAVHB-rZiHRThDY0I.jpg?auto=webp&s=8a80654bbb988dfe4d2199dc6287ab980bc1be25', 'width': 883}, 'variants': {}}]} |
||
Which LLMs are the best for STEM? | 1 | What llms are best for stem? Should i just use chatgpt? | 2025-02-01T18:13:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ifcgeg/which_llms_are_the_best_for_stem/ | Emotional_Road_4048 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifcgeg | false | null | t3_1ifcgeg | /r/LocalLLaMA/comments/1ifcgeg/which_llms_are_the_best_for_stem/ | false | false | self | 1 | null |
Are there any models capable of high-quality profanity? | 27 | No matter what the prompt, I can't get any LLM to do anything more interesting than drop a couple of f-bombs, insert "damn" as an adjective or interjection, or call the user a bastard.
This will not do if I want it to represent a character that is using rancher-grade or military-grade profanity. There's a variety, lyricism and rhythm to really good profanity that, as far as I can tell, none of these models have anywhere in their training sets.
For example, when a rancher finds someone trespassing on their land at night, we should expect to be hearing phrases like "If you and your choad-licking toadies don't fuck off right now I'm going to tie your nipples in a knot and twist until you start to like it" and similar. This gets a lot of its force and effectiveness from internal rhymes like between 'choad' and 'toadies' and alliterations like between 'nipples' and 'knot', and is driven home by the recontextualization and accusation implicit in 'until you start to like it' as opposed to leaving it a bare threat.
Anyway, in my experience about ten percent of ranchers are capable of keeping up a barrage of this quality for five or ten minutes at a time before they start to repeat themselves, and I remember at least a couple of drill sergeants in the 1980s who seemed capable of going for a full hour at a time. The whole thing doesn't work unless it can be delivered rapidly enough that there's no chance to interrupt, and loses all force and starts to just sound stupid if it frequently repeats.
And I can't get anything similar in LLM output anywhere no matter how I try and coax. No lyricism, no internal rhyme, no alliterations, no accusatory recontextualizations, no awareness that repetition makes profanity sound stupid, and not even any good idea of what the words mean and which are appropriate to describe what kinds of behavior. These models think intense profanity consists of dropping a couple of f-bombs, and many of them aren't even capable of that. | 2025-02-01T18:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ifcir5/are_there_any_models_capable_of_highquality/ | Ray_Dillinger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifcir5 | false | null | t3_1ifcir5 | /r/LocalLLaMA/comments/1ifcir5/are_there_any_models_capable_of_highquality/ | false | false | nsfw | 27 | null |
2 x Intel Arc B580 | 4 | Has anyone ever run 2 of them together (already)? And anything to consider if I plan to run them on a Gigabyte B850M Maiboard and Ubuntu. Thanks. | 2025-02-01T18:20:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ifcm1f/2_x_intel_arc_b580/ | Due_Criticism_442 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifcm1f | false | null | t3_1ifcm1f | /r/LocalLLaMA/comments/1ifcm1f/2_x_intel_arc_b580/ | false | false | self | 4 | null |
Best model for local exam marking system? | 0 | 3 exams per week
100 questions per exam
Questions remain the same
Marking specification remains the same apart from a few questions which will need to be updated every few days.
Answers are the only change
Privacy is a must.
Which model would you use?
Can this be built by average techy?
Where would I find someone to build the this?
Ideally resources would cost no more than £1k and LLM developer would cost another £1k. Is this possible?
| 2025-02-01T18:25:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ifcpxs/best_model_for_local_exam_marking_system/ | Slow-Appointment1512 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifcpxs | false | null | t3_1ifcpxs | /r/LocalLLaMA/comments/1ifcpxs/best_model_for_local_exam_marking_system/ | false | false | self | 0 | null |
Longer thinking token might not be a best way. Paper: Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs | 21 | 2025-02-01T18:26:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ifcqwj/longer_thinking_token_might_not_be_a_best_way/ | henryclw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifcqwj | false | null | t3_1ifcqwj | /r/LocalLLaMA/comments/1ifcqwj/longer_thinking_token_might_not_be_a_best_way/ | false | false | 21 | null |
||
I made function calling agent builder using Swagger document (Every Backend Servers can be Super A.I. Chatbot) | 1 | 2025-02-01T18:27:39 | https://nestia.io/articles/llm-function-calling/ai-chat-with-your-backend-server-every-backend-servers-can-be-super-ai-chatbot.html | SamchonFramework | nestia.io | 1970-01-01T00:00:00 | 0 | {} | 1ifcrs6 | false | null | t3_1ifcrs6 | /r/LocalLLaMA/comments/1ifcrs6/i_made_function_calling_agent_builder_using/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NpJ5fi50HiRLud94XwmUkw9_KsfOQEHMPrZf-Upca-Q', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/jlZTrzkmRFvebnkVS0wC4JZniTJCYLqXkKi_8FRaq-4.jpg?width=108&crop=smart&auto=webp&s=5e37cd7d320a25140e5e932451731a891f40e5e5', 'width': 108}, {'height': 90, 'url': 'https://external-preview.redd.it/jlZTrzkmRFvebnkVS0wC4JZniTJCYLqXkKi_8FRaq-4.jpg?width=216&crop=smart&auto=webp&s=7b49c58fc5008b6480a492fe88de405a9b717e9e', 'width': 216}, {'height': 134, 'url': 'https://external-preview.redd.it/jlZTrzkmRFvebnkVS0wC4JZniTJCYLqXkKi_8FRaq-4.jpg?width=320&crop=smart&auto=webp&s=b97038395ffec41e364889fbe647deccbd95521e', 'width': 320}, {'height': 268, 'url': 'https://external-preview.redd.it/jlZTrzkmRFvebnkVS0wC4JZniTJCYLqXkKi_8FRaq-4.jpg?width=640&crop=smart&auto=webp&s=232a38cc23afb6f73a2c5a532516fb1d0c620f8c', 'width': 640}, {'height': 403, 'url': 'https://external-preview.redd.it/jlZTrzkmRFvebnkVS0wC4JZniTJCYLqXkKi_8FRaq-4.jpg?width=960&crop=smart&auto=webp&s=f4913bf9a7893d7d1b7f2aaf9f829c098f1deba9', 'width': 960}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/jlZTrzkmRFvebnkVS0wC4JZniTJCYLqXkKi_8FRaq-4.jpg?auto=webp&s=19cdc6af23a2ebf161324288a51f86491e4a688a', 'width': 1000}, 'variants': {}}]} |
||
Going team green. Question… | 1 | [removed] | 2025-02-01T18:29:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ifctq9/going_team_green_question/ | AndyBuildsThings | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifctq9 | false | null | t3_1ifctq9 | /r/LocalLLaMA/comments/1ifctq9/going_team_green_question/ | false | false | self | 1 | null |
SXM2 adapters? | 3 | Does anyone have any experience buying and using SXM2 adapters? I'm trying to find an adapter for less than 300 dollars and am not having any luck at all, I've seen the heatsinks on ebay for a normal price but I haven't seen any adapters at all where do people even get those? I've searched on ebay, aliexpress, alibaba, facebook marketplace yeah I just can't seem to find an adapter for the life of me is there a specific chinese site that sells them? can someone point me in the right place? | 2025-02-01T18:32:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ifcvp8/sxm2_adapters/ | Illustrious-Row6858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifcvp8 | false | null | t3_1ifcvp8 | /r/LocalLLaMA/comments/1ifcvp8/sxm2_adapters/ | false | false | self | 3 | null |
Deepseek V3 Chat fp16 running locally is not censored and here is winnie the proof | 1 | 2025-02-01T18:40:37 | https://www.reddit.com/gallery/1ifd2kv | AbortedFajitas | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ifd2kv | false | null | t3_1ifd2kv | /r/LocalLLaMA/comments/1ifd2kv/deepseek_v3_chat_fp16_running_locally_is_not/ | false | false | 1 | null |
||
Best simple GenAI benchmarking tools? | 1 | [removed] | 2025-02-01T18:44:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ifd5fg/best_simple_genai_benchmarking_tools/ | Revolaition | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifd5fg | false | null | t3_1ifd5fg | /r/LocalLLaMA/comments/1ifd5fg/best_simple_genai_benchmarking_tools/ | false | false | self | 1 | null |
Which model should I run? I am doing n8n automation. | 2 | I have a PC with the following specs:
1. RTX 3060
2. Ryzen 5950X
3. 64 GB Oloy RAM (3600 MHz)
I’m currently running 7B and 14B models (LLAMA and DeepSeek) quite smoothly, with 7B performing very fast for text generation. I’m using Ollama as the platform. And openwebui as front end.
However, I encounter occasional parsing errors in n8n—sometimes the workflow completes successfully, and other times it doesn’t. The workflow involves scraping the latest news articles through an RSS feed and rewriting them in a specific brand voice and context. Could this be related to token limitations, or would switching to a higher parameter model help?
Have more flows in development.
Also, which models could I run comfortably beyond 14B with my current setup? Should I consider trying GGUF 32B models?
Will 2X RTX 3060s help? I have PSU of 850 Watts.
| 2025-02-01T18:44:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ifd5z4/which_model_should_i_run_i_am_doing_n8n_automation/ | DawarAzhar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifd5z4 | false | null | t3_1ifd5z4 | /r/LocalLLaMA/comments/1ifd5z4/which_model_should_i_run_i_am_doing_n8n_automation/ | false | false | self | 2 | null |
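Intermittent parsing failures like this are often the model drifting out of strict JSON, or the article blowing past the default context window. A hedged sketch of both mitigations against the Ollama API is below; the model tag, `num_ctx` value, and prompt are placeholders, and n8n's HTTP Request node can send the same body.

```
# Hedged sketch: force JSON-only output and raise the context window for long articles.
# Model tag, num_ctx, and the prompt are placeholders.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1:8b",
  "stream": false,
  "format": "json",
  "options": { "num_ctx": 8192 },
  "messages": [{
    "role": "user",
    "content": "Rewrite the article below in our brand voice. Reply with one JSON object with keys title and body. ARTICLE: ..."
  }]
}'
```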
what is this report | 0 | saw this on linkedin feels like just a blanket study on deep seek
[https://cdn.prod.website-files.com/6690a78074d86ca0ad978007/679bc2e71b48e423c0ff7e60\_1%20RedTeaming\_DeepSeek\_Jan29\_2025%20(1).pdf#page=8.76](https://cdn.prod.website-files.com/6690a78074d86ca0ad978007/679bc2e71b48e423c0ff7e60_1%20RedTeaming_DeepSeek_Jan29_2025%20(1).pdf#page=8.76) | 2025-02-01T18:45:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ifd6yg/what_is_this_report/ | Which_Will9559 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifd6yg | false | null | t3_1ifd6yg | /r/LocalLLaMA/comments/1ifd6yg/what_is_this_report/ | false | false | self | 0 | null |
Proof that Deepseek v3 fp16 running locally is not censored. | 1 | [removed] | 2025-02-01T18:49:24 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ifd9uk | false | null | t3_1ifd9uk | /r/LocalLLaMA/comments/1ifd9uk/proof_that_deepseek_v3_fp16_running_locally_is/ | false | false | default | 1 | null |
Mathematical formula for tensor + pipeline parallelism bandwidth requirement? | 1 |
In terms of attention heads, KV, weight precision, tokens, parameters, how do you calculate the required tensor and pipeline bandwidths? | 2025-02-01T18:53:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ifdcx7/mathematical_formula_for_tensor_pipeline/ | BarnardWellesley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifdcx7 | false | null | t3_1ifdcx7 | /r/LocalLLaMA/comments/1ifdcx7/mathematical_formula_for_tensor_pipeline/ | false | false | self | 1 | null |
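For the tensor/pipeline bandwidth question above, here is a rough back-of-the-envelope sketch of the estimate usually used (my own notation, not from the post; it assumes a Megatron-style split during decode and ignores all-reduce implementation constants):

```
% d = d_model, L = n_layers, p = bytes per element, R = tokens/s, t = tensor-parallel ranks
% Pipeline parallelism: one hidden-state handoff per stage boundary per generated token
B_{PP} \approx d \cdot p \cdot R
% Tensor parallelism: roughly two all-reduces of the hidden state per layer per token,
% each moving about 2(t-1)/t of the message size per rank on a ring all-reduce
B_{TP} \approx 2 L \cdot d \cdot p \cdot R \cdot \frac{2(t-1)}{t}
% The KV cache stays sharded locally per rank (by attention heads), so it adds memory,
% not steady-state link traffic
```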
what do you use AI for? | 19 | I mean, OK, great: there are promising models coming up, and tools for inference are improving.
But ultimately, what use cases does it fit?
I suppose some use it with Copilot.
I use it to summarize transcripts and to generate NSFW images.
But what other practical uses does it currently work decently for?
| 2025-02-01T18:53:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ifdd9r/what_do_you_use_ai_for/ | goingsplit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifdd9r | false | null | t3_1ifdd9r | /r/LocalLLaMA/comments/1ifdd9r/what_do_you_use_ai_for/ | false | false | self | 19 | null |
Anyone managed to run deepseek-r1-32b with chat-ui? | 1 | When using [hf.co/chat](http://hf.co/chat) the model works great: it uses the "thinking" part, and web search works great too.
But when building it locally (https://github.com/huggingface/chat-ui), I cannot manage to set the .env.local file correctly.
I have tried llama-cpp and ollama backends.
Anyone else managed to set up the correct template to use with deepseek-r1?
| 2025-02-01T18:56:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ifdfmc/anyone_managed_to_run_deepseekr132b_with_chatui/ | iChrist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifdfmc | false | null | t3_1ifdfmc | /r/LocalLLaMA/comments/1ifdfmc/anyone_managed_to_run_deepseekr132b_with_chatui/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ffNXCUPQerMMTV5UAIgJRS5QMtKWEhNQFfpmL7I4Bcc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=108&crop=smart&auto=webp&s=fa74f814d5c43d0d9d47c3591a9d667818ebe0c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=216&crop=smart&auto=webp&s=e3494c6906d2c95f78811be98ecf631cdeb08c13', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=320&crop=smart&auto=webp&s=08f0479f19185f357e3bccc42a42f10f6fac664c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=640&crop=smart&auto=webp&s=2fdeeb9ada89c2bf4e5dc697043da66bd62cf959', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=960&crop=smart&auto=webp&s=e7b3230584c769f71759db14271d12a5f8cf831a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=1080&crop=smart&auto=webp&s=a8b11dd06cf9be6635cb9fcb2dedf71ecdd9c491', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?auto=webp&s=b8bf601deac4d62d484c6fb69764f7d09d0fd168', 'width': 1200}, 'variants': {}}]} |
Out of the loop with autonomous agents | 1 | Hey,
Do we have frameworks that interact with the browser, and possibly the system, in real time?
There's [https://github.com/browser-use/web-ui](https://github.com/browser-use/web-ui), but it's an interactive tool, while I'm looking for something that can be used as a script without UI.
Basically, disposable scripts that offload a repetitive task to an agent is the use case here.
Thanks! | 2025-02-01T19:03:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ifdlo3/out_of_the_loop_with_autonomous_agents/ | FriskyFennecFox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifdlo3 | false | null | t3_1ifdlo3 | /r/LocalLLaMA/comments/1ifdlo3/out_of_the_loop_with_autonomous_agents/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '1I20oB2XRKdmoGa-oIcplwt7r6x8Mox_PAMMRoGlI0k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wR6xjZZhNiiGSYRnNk7F2QHrFiDJvFXE_kO_WS-ahl0.jpg?width=108&crop=smart&auto=webp&s=e372c151d00140b1d1f49e24c581545eac007cc5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wR6xjZZhNiiGSYRnNk7F2QHrFiDJvFXE_kO_WS-ahl0.jpg?width=216&crop=smart&auto=webp&s=8c26e1b849e7b70c9cf651305ad8fff0041d7014', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wR6xjZZhNiiGSYRnNk7F2QHrFiDJvFXE_kO_WS-ahl0.jpg?width=320&crop=smart&auto=webp&s=b53a86214a7c190bbe782effe535b820e075253e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wR6xjZZhNiiGSYRnNk7F2QHrFiDJvFXE_kO_WS-ahl0.jpg?width=640&crop=smart&auto=webp&s=8681fb318125201597e75e8603f37478e0385bc4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wR6xjZZhNiiGSYRnNk7F2QHrFiDJvFXE_kO_WS-ahl0.jpg?width=960&crop=smart&auto=webp&s=8c5ccf43d0a00399317b3d82a7bef6acdd5ea44b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wR6xjZZhNiiGSYRnNk7F2QHrFiDJvFXE_kO_WS-ahl0.jpg?width=1080&crop=smart&auto=webp&s=595749a55663aaf446efbe64f6ab6961f3d3b6c0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wR6xjZZhNiiGSYRnNk7F2QHrFiDJvFXE_kO_WS-ahl0.jpg?auto=webp&s=83340b8e1c62da208f152c17b572d41bc3ec189e', 'width': 1200}, 'variants': {}}]} |
We need a different kind of AGI | 0 | Hello community! Creating data for LLMs is hard (especially if you’re writing it yourself; don’t say anything about synthetic data - it is crap)! But, I propose a new kind of AI - instant learning AGI!
Basically, no, it is not a model that beats all humans; it is just a base multimodal model with no pre-trained knowledge that can learn from seeing and doing. So, you can run it as a program, or wrap it into a humanoid or just a small sphere with the necessary sensors, and it learns its surroundings. For example, you can talk to it, show it the world around you, and, like a child, just explain and show how to solve this math problem and then give it another one to try. And so, just by seeing a few examples, it will quickly learn about the world like a human would!
And so, this is AGI, which is what I want (run locally, of course)! Not your super fancy sci-fi shit! What do you think? Is it possible? | 2025-02-01T19:10:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ifdrje/we_need_a_different_kind_of_agi/ | yukiarimo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifdrje | false | null | t3_1ifdrje | /r/LocalLLaMA/comments/1ifdrje/we_need_a_different_kind_of_agi/ | false | false | self | 0 | null |
Where I can find an API that serves hexgrad/Kokoro-82M a TTS model and pay per token ? | 5 | I am working on a personal project and I can't find a service that I can use to send text and convert it to speech using [kokoro](https://huggingface.co/hexgrad/Kokoro-82M) tts model, I can't host it locally because the project is not a big deal to keep my machine on all the time, and it would be much better to pay per token as it won't be used a lot. | 2025-02-01T19:12:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ifdsxl/where_i_can_find_an_api_that_serves/ | East-Suggestion-8249 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifdsxl | false | null | t3_1ifdsxl | /r/LocalLLaMA/comments/1ifdsxl/where_i_can_find_an_api_that_serves/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'TL8xIUiXgJg5YjryMYhj7JiBtqOghnN47_mvdxSWYzU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=108&crop=smart&auto=webp&s=c44a83d5fab77c813216e5454c6fba07bfb55e15', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=216&crop=smart&auto=webp&s=bb8032866f6a8609550af1ac69ccea6df3761f92', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=320&crop=smart&auto=webp&s=7f990de0136d4482b7b3bcd05bda7d1723859680', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=640&crop=smart&auto=webp&s=b75663383244e2aa5f5fcf0207756c5dc28fb51b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=960&crop=smart&auto=webp&s=7f200c8a1257ecccf20195dc5abffaaeeb16f10a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?width=1080&crop=smart&auto=webp&s=9a5faaa15c9e5fde7b616979aadc6a151dfa87b0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PSxCcCk18RpMpFh_Tgc1ycbd0zsabOZK7av3YdT9fA4.jpg?auto=webp&s=c3c1958b6cc380e316d46b3fe9508529724694d5', 'width': 1200}, 'variants': {}}]} |
Let's build this | 3 | Hey AI enthusiasts! 🚀
I've got a beast of a setup at my disposal for the next 30 days: 8 NVIDIA L40 GPUs, 1.5 TB of RAM, and a ton of storage. Instead of letting this power sit idle, I'm eager to collaborate with the community to train a Large Language Model (LLM) from scratch or work on any groundbreaking AI project you've been itching to try.
If you've got code, ideas, or ongoing projects that could benefit from this hardware, let's team up and create something amazing. Whether you're a researcher, developer, or hobbyist, I'm open to all levels of collaboration.
Drop a comment or DM me if you're interested. Let's push the boundaries of AI together! 🤖💡
#AI #MachineLearning #LLM #Collaboration #GPU
| 2025-02-01T19:17:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ifdwv3/lets_build_this/ | Alone-Hunt-7507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifdwv3 | false | null | t3_1ifdwv3 | /r/LocalLLaMA/comments/1ifdwv3/lets_build_this/ | false | false | self | 3 | null |
DeepSeek 1-Pager | 1 | 2025-02-01T19:18:41 | circularr | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifdy4q | false | null | t3_1ifdy4q | /r/LocalLLaMA/comments/1ifdy4q/deepseek_1pager/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'leVa69D90QCfhDGLMV5POleUV-0Fc7v5blZFLBoWKFs', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/8gv71tf4vkge1.png?width=108&crop=smart&auto=webp&s=24fb365f63be1057937400ca99804c56d1fb1757', 'width': 108}, {'height': 258, 'url': 'https://preview.redd.it/8gv71tf4vkge1.png?width=216&crop=smart&auto=webp&s=fc6a298b5a29587d779900251f3fe9ac005822d8', 'width': 216}, {'height': 383, 'url': 'https://preview.redd.it/8gv71tf4vkge1.png?width=320&crop=smart&auto=webp&s=fefde149e851ab6171c44f59c82418ed774e8dad', 'width': 320}, {'height': 766, 'url': 'https://preview.redd.it/8gv71tf4vkge1.png?width=640&crop=smart&auto=webp&s=04921a30cb6bc1f050c5a2b839c82fa8e323306d', 'width': 640}, {'height': 1149, 'url': 'https://preview.redd.it/8gv71tf4vkge1.png?width=960&crop=smart&auto=webp&s=e3f89cd4156e60d15eeabf6a37484aa52335718f', 'width': 960}, {'height': 1292, 'url': 'https://preview.redd.it/8gv71tf4vkge1.png?width=1080&crop=smart&auto=webp&s=cf332b6e251c4ea6280044ef695d2e809927248f', 'width': 1080}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/8gv71tf4vkge1.png?auto=webp&s=c18d0604e2155c3231521fde76da15227b0f9d4e', 'width': 1283}, 'variants': {}}]} |
Best model for writing? | 1 | [removed] | 2025-02-01T19:18:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ifdy7p/best_model_for_writing/ | Spiritual-Neat889 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifdy7p | false | null | t3_1ifdy7p | /r/LocalLLaMA/comments/1ifdy7p/best_model_for_writing/ | false | false | self | 1 | null |
Let ai create pdf | 3 | Hey there, I’m new to local LLMs. I’m running LM Studio with a Llama model; is it possible for some model to create PDFs? | 2025-02-01T19:25:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ife3fr/let_ai_create_pdf/ | Chiggo_Ninja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ife3fr | false | null | t3_1ife3fr | /r/LocalLLaMA/comments/1ife3fr/let_ai_create_pdf/ | false | false | self | 3 | null
Deep Fry a Potato(A Tutorial!) | 1 | [removed] | 2025-02-01T19:37:48 | https://www.reddit.com/gallery/1ifedqv | SUPR3M3Kai | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ifedqv | false | null | t3_1ifedqv | /r/LocalLLaMA/comments/1ifedqv/deep_fry_a_potatoa_tutorial/ | false | false | 1 | null |
Could you tell us how we can force deepseek to think for longer using only prompt engineering? | 1 | [removed] | 2025-02-01T19:49:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ifenwy/could_you_tell_us_how_we_can_force_deepseek_to/ | alaatb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifenwy | false | null | t3_1ifenwy | /r/LocalLLaMA/comments/1ifenwy/could_you_tell_us_how_we_can_force_deepseek_to/ | false | false | self | 1 | null |
Lm studio crashes | 2 | I have a 7900 XT. When I try to download a model in LM Studio, the app crashes. Then, when I go back into the app, it says "timed out. please try to resume", and when I try, it crashes again.
Does anyone know why?
I have a 7900 XT with 20GB VRAM and 16GB RAM. | 2025-02-01T19:51:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ifep51/lm_studio_crashes/ | JamesJackL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifep51 | false | null | t3_1ifep51 | /r/LocalLLaMA/comments/1ifep51/lm_studio_crashes/ | false | false | self | 2 | null
Cline - Any usable DeepSeek-R1 finetunes? | 3 | I have been gawking at the ollama search page for forever and trying out a couple of tool-enabled models, however none of them managed to even get past one tool use...
I have a 4090 installed and would like to use it to help me document and lightly refactor hobby code - nothing big, not even beyond 500 LoC. ^^
Know of one that would work...? Cline and MCP are really fun, and the Qwen2.5 model "usually" works, but it loves to spin in circles... a lot. xD (It reads the same file forever and never does anything.)
Thank you! | 2025-02-01T19:55:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ifes7j/cline_any_usable_deepseekr1_finetunes/ | IngwiePhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifes7j | false | null | t3_1ifes7j | /r/LocalLLaMA/comments/1ifes7j/cline_any_usable_deepseekr1_finetunes/ | false | false | self | 3 | null |
Fastest inference on an H100 and the current state of TensorRT-LLM | 1 | I'm new to deploying LLMs. Currently I've set up vLLM and I'm wondering if TensorRT-LLM is worth exploring? I've read some mixed opinions in the past. Is there any significant performance difference between vLLM and the recent versions of TensorRT-LLM? Thank you. | 2025-02-01T19:55:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ifesei/fastest_inference_on_an_h100_and_the_current/ | P4ndalf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifesei | false | null | t3_1ifesei | /r/LocalLLaMA/comments/1ifesei/fastest_inference_on_an_h100_and_the_current/ | false | false | self | 1 | null |
New benchmark about multi-turn conversation that challenge frontier LLMs and capture Sonet 3.5 advantage: all LLMs perform below 50% accuracy | 70 | [https://paperswithcode.com/paper/multichallenge-a-realistic-multi-turn](https://paperswithcode.com/paper/multichallenge-a-realistic-multi-turn) | 2025-02-01T19:57:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ifeu07/new_benchmark_about_multiturn_conversation_that/ | TheIdealHominidae | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifeu07 | false | null | t3_1ifeu07 | /r/LocalLLaMA/comments/1ifeu07/new_benchmark_about_multiturn_conversation_that/ | false | false | self | 70 | {'enabled': False, 'images': [{'id': 'LbkEti-8bLLdW61WFWIWYQkZhsqjvHCVGQ4fwgTYovQ', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/BvPrweB4u2rzXxGIWqF_D8vwdPandRjeXx7kZVQLVZc.jpg?width=108&crop=smart&auto=webp&s=65e4fb9b001e190d909a8db57c0e4f9fd0b44f82', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/BvPrweB4u2rzXxGIWqF_D8vwdPandRjeXx7kZVQLVZc.jpg?width=216&crop=smart&auto=webp&s=3074d0e0dcf27a6d3d02265c9d1155726b988eea', 'width': 216}], 'source': {'height': 156, 'url': 'https://external-preview.redd.it/BvPrweB4u2rzXxGIWqF_D8vwdPandRjeXx7kZVQLVZc.jpg?auto=webp&s=71ed3158662391fc4301bd11989b522f4a7808ee', 'width': 242}, 'variants': {}}]} |
Fine Tuning LLM on AMD GPU | 2 | https://initialxy.com/lesson/2025/01/31/fine-tuning-llm-on-amd-gpu
I wrote a blog post on my experience trying to get fine-tuning to work locally on my consumer AMD GPU. | 2025-02-01T20:07:21 | https://www.reddit.com/r/LocalLLaMA/comments/1iff26y/fine_tuning_llm_on_amd_gpu/ | initialxy1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iff26y | false | null | t3_1iff26y | /r/LocalLLaMA/comments/1iff26y/fine_tuning_llm_on_amd_gpu/ | false | false | self | 2 | null
Where can I buy cheap full r1 | 1 | I don't want to use the official deepseek since it is often overloaded.
Anywhere you'd recommend with comparably cheap pricing, but hosted somewhere else where there aren't choke points?
Some private hosters?
| 2025-02-01T20:07:53 | https://www.reddit.com/r/LocalLLaMA/comments/1iff2me/where_can_i_buy_cheap_full_r1/ | Kingwolf4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iff2me | false | null | t3_1iff2me | /r/LocalLLaMA/comments/1iff2me/where_can_i_buy_cheap_full_r1/ | false | false | self | 1 | null |
Local LLM for 4090 system with 64GB RAM | 1 | Hey all,
With all the hype around DeepSeek R1 and its related distills and quants, I decided to dip my toes in the water and try ollama/llama.cpp for the first time. I'm new to the whole LLM space and have only used ChatGPT sparingly. Is there some consensus on what models would be best to run on a system with 64GB RAM and a 4090 in terms of speed and accuracy? It would be used as a general assistant vs something more geared towards coding etc.
I've tried out the DeepSeek 1.58-bit quant, but it seems to be running brutally slow, at less than 1 token/s.
Thanks! | 2025-02-01T20:11:50 | https://www.reddit.com/r/LocalLLaMA/comments/1iff5tc/local_llm_for_4090_system_wiht_64gb_ram/ | useful_tool30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iff5tc | false | null | t3_1iff5tc | /r/LocalLLaMA/comments/1iff5tc/local_llm_for_4090_system_wiht_64gb_ram/ | false | false | self | 1 | null |
What went into training DeepSeek-R1? A technical summary of the training of v3 and R1 | 22 | 2025-02-01T20:12:56 | https://epoch.ai/gradient-updates/what-went-into-training-deepseek-r1 | timfduffy | epoch.ai | 1970-01-01T00:00:00 | 0 | {} | 1iff6pz | false | null | t3_1iff6pz | /r/LocalLLaMA/comments/1iff6pz/what_went_into_training_deepseekr1_a_technical/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'YU-OZDEN9kZurNClTJzWZERtRh927eLczI8LjDSBP60', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/9j2v52ez3YonWGNyUx7ud4znv-zEcUugaGbv9-vEgoE.jpg?width=108&crop=smart&auto=webp&s=9c6aeb178997b5afc01c8f32394dcf1c9e69bc69', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/9j2v52ez3YonWGNyUx7ud4znv-zEcUugaGbv9-vEgoE.jpg?width=216&crop=smart&auto=webp&s=e8e6d87b4477b956af6b9e2bd31534f10506bb13', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/9j2v52ez3YonWGNyUx7ud4znv-zEcUugaGbv9-vEgoE.jpg?width=320&crop=smart&auto=webp&s=972a94845a1338eb91e4e62fba781fdd3834d868', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/9j2v52ez3YonWGNyUx7ud4znv-zEcUugaGbv9-vEgoE.jpg?width=640&crop=smart&auto=webp&s=d70c275913e84630c4980a541e28538682099916', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/9j2v52ez3YonWGNyUx7ud4znv-zEcUugaGbv9-vEgoE.jpg?width=960&crop=smart&auto=webp&s=6f4b466be4c9bc547b7b18971e8e5b6f3221c490', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/9j2v52ez3YonWGNyUx7ud4znv-zEcUugaGbv9-vEgoE.jpg?width=1080&crop=smart&auto=webp&s=d361871913e8d1e42b63cfebf85e86c924c047bd', 'width': 1080}], 'source': {'height': 649, 'url': 'https://external-preview.redd.it/9j2v52ez3YonWGNyUx7ud4znv-zEcUugaGbv9-vEgoE.jpg?auto=webp&s=1ece6139ac19fbbf50c87a2c4f35c75f69ba6510', 'width': 1153}, 'variants': {}}]} |
Besides DeepSeek R1, is there a completely free AI model that's better than o3-mini? | 1 | [removed] | 2025-02-01T20:16:22 | https://www.reddit.com/r/LocalLLaMA/comments/1iff9jt/besides_deepseek_r1_is_there_a_completely_free_ai/ | Calm_Bandicoot6203 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iff9jt | false | null | t3_1iff9jt | /r/LocalLLaMA/comments/1iff9jt/besides_deepseek_r1_is_there_a_completely_free_ai/ | false | false | self | 1 | null |
SmolVLM fully open source | 315 | 2025-02-01T20:19:01 | https://x.com/andimarafioti/status/1885341684134978035 | tabspaces | x.com | 1970-01-01T00:00:00 | 0 | {} | 1iffboy | false | null | t3_1iffboy | /r/LocalLLaMA/comments/1iffboy/smolvlm_fully_open_source/ | false | false | 315 | {'enabled': False, 'images': [{'id': '2g-SJdtGnFRraRcxrEf5au_0VbSwOAnYKiKw-uoYMQc', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/RpBd16Y386MrSYjhSF5aL1O5cjq2V0xWVKGs2JQsIl0.jpg?width=108&crop=smart&auto=webp&s=e6e02513c39b89b9b00ee7bc5badf8d529d892e1', 'width': 108}, {'height': 149, 'url': 'https://external-preview.redd.it/RpBd16Y386MrSYjhSF5aL1O5cjq2V0xWVKGs2JQsIl0.jpg?width=216&crop=smart&auto=webp&s=eabc1376edf94ff462038e18a13e52841d013bc8', 'width': 216}, {'height': 222, 'url': 'https://external-preview.redd.it/RpBd16Y386MrSYjhSF5aL1O5cjq2V0xWVKGs2JQsIl0.jpg?width=320&crop=smart&auto=webp&s=a831d1c2ffc13ee108ca1f21618a8e44bada2614', 'width': 320}, {'height': 444, 'url': 'https://external-preview.redd.it/RpBd16Y386MrSYjhSF5aL1O5cjq2V0xWVKGs2JQsIl0.jpg?width=640&crop=smart&auto=webp&s=9476c8b4dd1bf85443ac42ac9be87b98d3ff2e1e', 'width': 640}, {'height': 666, 'url': 'https://external-preview.redd.it/RpBd16Y386MrSYjhSF5aL1O5cjq2V0xWVKGs2JQsIl0.jpg?width=960&crop=smart&auto=webp&s=81b567504eb3c2e060ca14a7a78d3e7adfb4b42a', 'width': 960}, {'height': 749, 'url': 'https://external-preview.redd.it/RpBd16Y386MrSYjhSF5aL1O5cjq2V0xWVKGs2JQsIl0.jpg?width=1080&crop=smart&auto=webp&s=144ace3b64ae61f85bc39c9293e45d50b6630c9e', 'width': 1080}], 'source': {'height': 924, 'url': 'https://external-preview.redd.it/RpBd16Y386MrSYjhSF5aL1O5cjq2V0xWVKGs2JQsIl0.jpg?auto=webp&s=bd58d83f4418395e530c45460b2ad9724d2b96c0', 'width': 1331}, 'variants': {}}]} |
R1 successfully passed the strawberry test! (after making it wrong 2 times) | 1 | 2025-02-01T20:22:40 | Tankie_Cave | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1iffepq | false | null | t3_1iffepq | /r/LocalLLaMA/comments/1iffepq/r1_successfully_passed_the_strawberry_test_after/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'jNrZ2-J9_CnuRmiiYAaFtHa9W4Bg7VBRiHUy4BAlC3U', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/w02et2hf6lge1.png?width=108&crop=smart&auto=webp&s=cf7dd1613e44347d2a1e3659baf3794b8a81557e', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/w02et2hf6lge1.png?width=216&crop=smart&auto=webp&s=eb40d4e0f955ee9e016555b157063bd01af495e7', 'width': 216}, {'height': 294, 'url': 'https://preview.redd.it/w02et2hf6lge1.png?width=320&crop=smart&auto=webp&s=877899d4cba64695017e91639bdb724b00217c45', 'width': 320}, {'height': 588, 'url': 'https://preview.redd.it/w02et2hf6lge1.png?width=640&crop=smart&auto=webp&s=ced5cf49b9a44cee289ce0b43965ae598dd3dd42', 'width': 640}], 'source': {'height': 839, 'url': 'https://preview.redd.it/w02et2hf6lge1.png?auto=webp&s=6ecab273ce73cf298c2741f6eafd99941046d121', 'width': 912}, 'variants': {}}]} |
DeepSeek R1 671B MoE LLM running on Epyc 9374F and 384GB of RAM (llama.cpp + PR #11446, Q4_K_S, real time) | 207 | 2025-02-01T20:24:46 | https://www.youtube.com/watch?v=wKZHoGlllu4 | fairydreaming | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1iffgj4 | false | {'oembed': {'author_name': 'Dreaming Fairy', 'author_url': 'https://www.youtube.com/@dreamingfairy8804', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/wKZHoGlllu4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="DeepSeek R1 671B MoE LLM running on Epyc 9374F and 384GB of RAM (llama.cpp, Q4_K_S, real time)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/wKZHoGlllu4/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'DeepSeek R1 671B MoE LLM running on Epyc 9374F and 384GB of RAM (llama.cpp, Q4_K_S, real time)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1iffgj4 | /r/LocalLLaMA/comments/1iffgj4/deepseek_r1_671b_moe_llm_running_on_epyc_9374f/ | false | false | 207 | {'enabled': False, 'images': [{'id': 'ycDpjHkafTfELjX81gPOSizptRPsAFR8v0DN5mZv98c', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/gMJZu1czNWIsX2vol0q37qYGLTI_zKgwHfEyO-m9Uqw.jpg?width=108&crop=smart&auto=webp&s=43426782d61887475f7e40388d2bc44ce3035e3f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/gMJZu1czNWIsX2vol0q37qYGLTI_zKgwHfEyO-m9Uqw.jpg?width=216&crop=smart&auto=webp&s=197ec0048c34f7759a72c4aa4f291b207e44ed9d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/gMJZu1czNWIsX2vol0q37qYGLTI_zKgwHfEyO-m9Uqw.jpg?width=320&crop=smart&auto=webp&s=48df94f11b5f4243cdde43be4517d1e3d09e3712', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/gMJZu1czNWIsX2vol0q37qYGLTI_zKgwHfEyO-m9Uqw.jpg?auto=webp&s=6005d44eaf0c44ed503673c751344d193af26ed2', 'width': 480}, 'variants': {}}]} |
AI Dating App | 0 | 2025-02-01T20:39:23 | https://apps.apple.com/tr/app/dating-app-simulator/id6739853093 | whyNamesTurkiye | apps.apple.com | 1970-01-01T00:00:00 | 0 | {} | 1iffsax | false | null | t3_1iffsax | /r/LocalLLaMA/comments/1iffsax/ai_dating_app/ | false | false | default | 0 | null |
Should AI models be protected or Open for all? | 1 | [removed] | 2025-02-01T20:41:47 | https://www.reddit.com/r/LocalLLaMA/comments/1iffu9c/should_ai_models_be_protected_or_open_for_all/ | Frosty_Programmer672 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iffu9c | false | null | t3_1iffu9c | /r/LocalLLaMA/comments/1iffu9c/should_ai_models_be_protected_or_open_for_all/ | false | false | self | 1 | null |
What is the LLM for tinkering and experimenting with ideas | 1 | [removed] | 2025-02-01T20:48:24 | https://www.reddit.com/r/LocalLLaMA/comments/1iffzgw/what_is_the_llm_for_tinkering_and_experimenting/ | Potential_Block4598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1iffzgw | false | null | t3_1iffzgw | /r/LocalLLaMA/comments/1iffzgw/what_is_the_llm_for_tinkering_and_experimenting/ | false | false | self | 1 | null |
Building a Local AI Workstation for Legal Office — $9K to $10K Budget | 1 |
Hey folks!
I need a reliable, future-proof AI rig for my legal office. Budget is $9K (can stretch to $10K). It’ll handle NLP/LLM workloads on large sets of legal documents, and I want to share it on our internal network so any PC can leverage its computing power.
Must-Haves:
• Future Scalability: Potentially add more GPUs, storage, or RAM later.
• Performance + Reliability
• Networking: A straightforward way for multiple users to access and run AI tasks.
• Data Handling: Lots of documents; I need fast I/O or a solid plan for storing large datasets.
Questions:
• CPU/GPU combo? (Thinking high-core CPU + 1-2 strong GPUs like 4090s.)
• Recommended RAM (64GB vs. 128GB?)
• Best motherboard/PSU for expandability and stability?
• Storage setup (NVMe, RAID, NAS, etc.)?
• Windows vs. Linux for AI workloads?
• Ideal remote access or multi-user solution?
I’d really appreciate any build suggestions or best practices. Thanks in advance! | 2025-02-01T20:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ifg09s/building_a_local_ai_workstation_for_legal_office/ | Crollo_s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifg09s | false | null | t3_1ifg09s | /r/LocalLLaMA/comments/1ifg09s/building_a_local_ai_workstation_for_legal_office/ | false | false | self | 1 | null |
How to get the DS-R1 distill llama and qwen models to properly roleplay? | 0 | I'm trying to get it (currently the 14B distill) to roleplay a character, but it keeps talking about the setting and story and refers to the character in the third person instead of actually impersonating the character. I don't want it to narrate the plot (other than maybe meaningful first-person remarks where appropriate), but no matter what I try it keeps adding narrative filler, explanations about what it's going to do, or asking if it can assist further. It's really frustrating! What kind of prompts can I use to output text only from the first-person view and stay in character? It's okay to break character or contemplate the story in the "thinking" part, but not in the actual output. | 2025-02-01T20:51:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ifg1ty/how_to_get_the_dsr1_distill_llama_and_qwen_models/ | CorruptCobalion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifg1ty | false | null | t3_1ifg1ty | /r/LocalLLaMA/comments/1ifg1ty/how_to_get_the_dsr1_distill_llama_and_qwen_models/ | false | false | self | 0 | null
Approximating cost of hosting QwQ for data processing | 1 | [removed] | 2025-02-01T20:53:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ifg35k/approximating_cost_of_hosting_qwq_for_data/ | Ok-Program-3656 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifg35k | false | null | t3_1ifg35k | /r/LocalLLaMA/comments/1ifg35k/approximating_cost_of_hosting_qwq_for_data/ | false | false | self | 1 | null |
Existing voice models for running Dockerised TTS on MBP - 24GB ram | 1 | [removed] | 2025-02-01T20:53:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ifg3cs/existing_voice_models_for_running_dockerised_tts/ | Whizz5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifg3cs | false | null | t3_1ifg3cs | /r/LocalLLaMA/comments/1ifg3cs/existing_voice_models_for_running_dockerised_tts/ | false | false | self | 1 | null |
Dockerised TTS with voice models available for MBP 24GB | 1 | [removed] | 2025-02-01T20:55:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ifg519/dockerised_tts_with_voice_models_available_for/ | Rurouni-dev-11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifg519 | false | null | t3_1ifg519 | /r/LocalLLaMA/comments/1ifg519/dockerised_tts_with_voice_models_available_for/ | false | false | self | 1 | null |
AMD dGPU shared memory? | 2 | My AMD laptop with an iGPU can load (but runs very slowly) 32B models, as it uses shared system memory. However, loading the same models on my 6600 XT (8GB VRAM) with 48GB system RAM pops up out-of-memory errors. Is it possible for my dGPU to use shared memory?
I use llama.cpp vulkan with LM Studio on windows. | 2025-02-01T21:05:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ifgd0e/amd_dgpu_shared_memory/ | juwonpee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifgd0e | false | null | t3_1ifgd0e | /r/LocalLLaMA/comments/1ifgd0e/amd_dgpu_shared_memory/ | false | false | self | 2 | null |
Batch sizes. Inferencing with llama.cpp | 7 | In the llama.cpp pull requests, I keep seeing people sharing test results with different batch sizes, and it seems like they produce different speeds on different types of hardware, so my question is:
What are the cases for playing around with batch sizes? I realize that they are something like the "chunk sizes" for the data to be passed per GPU request, but are there any possible gains from adjusting them?
What batch size should I use if my VRAM holds 100% of the model? Or at 70% VRAM / 30% RAM? Or at 50/50%? Or when I inference just on RAM?
From my understanding, the default batch size is mostly optimized for VRAM inference. But what if I hold 50% of the model inside of RAM? My system is heavily imbalanced: I have four sticks of not-the-fastest DDR4 memory, and a 4090. I wonder, will changing the chunk size help the RAM throughput without hurting the GPU's performance that much? Or am I not required to do that, and does such adaptation already happen when inference is done by the RAM/CPU? | 2025-02-01T21:07:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ifgf8s/batch_sizes_inferencing_with_llamacpp/ | SiEgE-F1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifgf8s | false | null | t3_1ifgf8s | /r/LocalLLaMA/comments/1ifgf8s/batch_sizes_inferencing_with_llamacpp/ | false | false | self | 7 | null
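For the batch-size question above, a minimal sketch (not from the original post) of how one might measure this empirically on a given VRAM/RAM split, assuming a recent llama.cpp build that ships llama-bench; the binary path, model path, and -ngl 40 layer-offload count are placeholders to adjust:

```python
# Sweep llama.cpp batch sizes with llama-bench and compare prompt/generation speed.
# Assumptions: ./llama-bench exists from a recent llama.cpp build, the model path is
# a placeholder, and -ngl 40 stands in for a partial offload (~50% VRAM / 50% RAM).
import subprocess

LLAMA_BENCH = "./llama-bench"
MODEL = "models/your-model-q4_k_m.gguf"

for batch in (64, 128, 256, 512, 1024, 2048):
    print(f"--- batch size {batch} ---", flush=True)
    subprocess.run(
        [LLAMA_BENCH, "-m", MODEL, "-ngl", "40",
         "-b", str(batch), "-p", "512", "-n", "128"],
        check=True,
    )
```

Whatever the theory says, numbers from a sweep like this on your own RAM/VRAM mix are what actually settle it.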
This person is completely AI generated.. Getting scary | 1 | 2025-02-01T21:12:19 | https://v.redd.it/eliixrrjflge1 | Level-Novel9288 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifgiu9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/eliixrrjflge1/DASHPlaylist.mpd?a=1741036355%2CYTVkZTQ3YTk3MjNlNzQxNzI3Y2E5YmVlNWE1ODZjOWUxMDYyMTEyMTBlYzEyNmY1MTljNzYzMDc1YTU5ZDM1Nw%3D%3D&v=1&f=sd', 'duration': 5, 'fallback_url': 'https://v.redd.it/eliixrrjflge1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1920, 'hls_url': 'https://v.redd.it/eliixrrjflge1/HLSPlaylist.m3u8?a=1741036355%2CNTVkMGFjY2E5MjRkNTk4Yzc3NDY5Y2M2ZDdkOWQzMTRiODY3ZDljZGJhZTRiNWJiZmMzNTBjMzc1NmQzNTE4MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/eliixrrjflge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1ifgiu9 | /r/LocalLLaMA/comments/1ifgiu9/this_person_is_completely_ai_generated_getting/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NzdsaWN0cmpmbGdlMZEdzIa7Ux-XBn0wC-57VpLWxwifdBZMAlx5JF7TqqTT', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NzdsaWN0cmpmbGdlMZEdzIa7Ux-XBn0wC-57VpLWxwifdBZMAlx5JF7TqqTT.png?width=108&crop=smart&format=pjpg&auto=webp&s=f64e08ec614f913a320566d1cc0609316d30b3c7', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/NzdsaWN0cmpmbGdlMZEdzIa7Ux-XBn0wC-57VpLWxwifdBZMAlx5JF7TqqTT.png?width=216&crop=smart&format=pjpg&auto=webp&s=535136d12871523ac2168eb8eb7cb730bb28c46c', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/NzdsaWN0cmpmbGdlMZEdzIa7Ux-XBn0wC-57VpLWxwifdBZMAlx5JF7TqqTT.png?width=320&crop=smart&format=pjpg&auto=webp&s=f13bfff36cf00491256f5965d7c0fa9ff3d0b949', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/NzdsaWN0cmpmbGdlMZEdzIa7Ux-XBn0wC-57VpLWxwifdBZMAlx5JF7TqqTT.png?width=640&crop=smart&format=pjpg&auto=webp&s=f5bd49171579ea6294e5ac489afe92a6f76e5f55', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/NzdsaWN0cmpmbGdlMZEdzIa7Ux-XBn0wC-57VpLWxwifdBZMAlx5JF7TqqTT.png?width=960&crop=smart&format=pjpg&auto=webp&s=e3931a858f7f0e46def49a3f7c6699d604e33985', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/NzdsaWN0cmpmbGdlMZEdzIa7Ux-XBn0wC-57VpLWxwifdBZMAlx5JF7TqqTT.png?width=1080&crop=smart&format=pjpg&auto=webp&s=68a1c3edf709af2d103a3d803fc92731ceaa30a4', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/NzdsaWN0cmpmbGdlMZEdzIa7Ux-XBn0wC-57VpLWxwifdBZMAlx5JF7TqqTT.png?format=pjpg&auto=webp&s=07cd98ed925494deedde8753b3be7ea74db28b03', 'width': 1080}, 'variants': {}}]} |
Tested DeepSeek R1 Distill 7B version and it said Al Pacino was in Jurassic Park | 1 | 2025-02-01T21:19:10 | OcuTrin | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ifgo9q | false | null | t3_1ifgo9q | /r/LocalLLaMA/comments/1ifgo9q/tested_deepseek_r1_distill_7b_version_and_it_said/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'WC8Lw4xpwywdEl60ZHbjfQVwp9KTvGFGly0cwI39YCo', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/673ishy7glge1.jpeg?width=108&crop=smart&auto=webp&s=8cdc28b9ad3173077b98da3447e3488d5a0aa205', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/673ishy7glge1.jpeg?width=216&crop=smart&auto=webp&s=cdfda76201d1c3d36e493f63408c1f7ec2c1c78c', 'width': 216}, {'height': 208, 'url': 'https://preview.redd.it/673ishy7glge1.jpeg?width=320&crop=smart&auto=webp&s=624fa4caefab873c9af7b4956cb81adac706504a', 'width': 320}, {'height': 416, 'url': 'https://preview.redd.it/673ishy7glge1.jpeg?width=640&crop=smart&auto=webp&s=145c3862bb4bbda9e9d03c345f480ba86ac8745f', 'width': 640}, {'height': 624, 'url': 'https://preview.redd.it/673ishy7glge1.jpeg?width=960&crop=smart&auto=webp&s=7548e9fae494a28ead7aa1277a1b0e4beeb37ba6', 'width': 960}, {'height': 702, 'url': 'https://preview.redd.it/673ishy7glge1.jpeg?width=1080&crop=smart&auto=webp&s=c05e2b3d8f0773ab66092cd0ec9c64a5fb0d0d9a', 'width': 1080}], 'source': {'height': 1424, 'url': 'https://preview.redd.it/673ishy7glge1.jpeg?auto=webp&s=c4d4f10f01f43fbab76838399a95e7761e7079ae', 'width': 2190}, 'variants': {}}]} |
Biased test of GPT-4 era LLMs (300+ models, DeepSeek-R1 included) | 10 | 2025-02-01T21:23:38 | https://moonride.hashnode.dev/biased-test-of-gpt-4-era-llms-300-models-deepseek-r1-included | MoonRide303 | moonride.hashnode.dev | 1970-01-01T00:00:00 | 0 | {} | 1ifgrwg | false | null | t3_1ifgrwg | /r/LocalLLaMA/comments/1ifgrwg/biased_test_of_gpt4_era_llms_300_models/ | false | false | 10 | {'enabled': False, 'images': [{'id': '-e6w6AVvm-SFLh0oramizdcaf6xiK-P8NE4xnTl1giw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/9rxF3AagdWD6r1I_qJ263TYbYvyMDp_M757rf1ffM-E.jpg?width=108&crop=smart&auto=webp&s=f3f2526a08a3eefa18ba07eae940c2c8dfb96dca', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/9rxF3AagdWD6r1I_qJ263TYbYvyMDp_M757rf1ffM-E.jpg?width=216&crop=smart&auto=webp&s=f6bd28b6b11c78d50e1dd81fa5134825d06df578', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/9rxF3AagdWD6r1I_qJ263TYbYvyMDp_M757rf1ffM-E.jpg?width=320&crop=smart&auto=webp&s=2fb87ae53b7a030d7452831e903cee8bfdeb8d4f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/9rxF3AagdWD6r1I_qJ263TYbYvyMDp_M757rf1ffM-E.jpg?width=640&crop=smart&auto=webp&s=613715d4653c6c1e48b3ca5ee6dfc2b849d7eb15', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/9rxF3AagdWD6r1I_qJ263TYbYvyMDp_M757rf1ffM-E.jpg?width=960&crop=smart&auto=webp&s=bc13b467a3919f1aeb3bada165dd4b9484959c4b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/9rxF3AagdWD6r1I_qJ263TYbYvyMDp_M757rf1ffM-E.jpg?width=1080&crop=smart&auto=webp&s=1f28e8329211d14063e01678cd6eaa2ffd4a0563', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/9rxF3AagdWD6r1I_qJ263TYbYvyMDp_M757rf1ffM-E.jpg?auto=webp&s=52d5a9026c145dc7d1454268afa9ccc1a9d0c801', 'width': 1200}, 'variants': {}}]} |
We haven't won...yet | 8 | DeepSeek is great, but I feel like people are too quickly celebrating that open source has won. Call me pessimistic, but could things get...worse?
## OpenAI also used to be open AI
Money doesn't grow on trees, and in the end, it is still a quant fund that is investing millions into AI, and despite their official statements it's probably not "just because we want to help everyone". The moment OpenAI realized they got past the point of startup and into the point of making money, they went closed. Quick research says they don't have any plans to make money...in the short term ([Reuters](https://www.reuters.com/technology/artificial-intelligence/high-flyer-ai-quant-fund-behind-chinas-deepseek-2025-01-29/))
## AI War
I'm scared because this greater US vs China thing would probably mean less research being published. I wouldn't put it past either government to "convince" researchers to hide new research in an attempt to get an edge.
I think open source has shown it is still in the battle, but I definitely don't think it won or is close to winning.
Am I just being too pesimistic? | 2025-02-01T21:26:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ifgu6h/we_havent_wonyet/ | agentcubed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifgu6h | false | null | t3_1ifgu6h | /r/LocalLLaMA/comments/1ifgu6h/we_havent_wonyet/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': '5T6_TDYsTMBoMgn6O_RgJfqsLuTJwC8ormbXhijUl-U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/rTEGreJm_X29E0_4Pl5DcXJxYIc4LxFGk0-2itj04_Y.jpg?width=108&crop=smart&auto=webp&s=b7cd0494bbadb3c35c01ad62ea2ebf34c8621d62', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/rTEGreJm_X29E0_4Pl5DcXJxYIc4LxFGk0-2itj04_Y.jpg?width=216&crop=smart&auto=webp&s=f012c74ae3f2d05c3ae77f3994b68d2984b7ec58', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/rTEGreJm_X29E0_4Pl5DcXJxYIc4LxFGk0-2itj04_Y.jpg?width=320&crop=smart&auto=webp&s=9b1995e2518506782b057f1bfa723a8cace53685', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/rTEGreJm_X29E0_4Pl5DcXJxYIc4LxFGk0-2itj04_Y.jpg?width=640&crop=smart&auto=webp&s=fdadde0b1896e084c726d53acc45d0f6bf36c0d9', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/rTEGreJm_X29E0_4Pl5DcXJxYIc4LxFGk0-2itj04_Y.jpg?width=960&crop=smart&auto=webp&s=109179d31e1c35485667177badf4c2c0efd7bd45', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/rTEGreJm_X29E0_4Pl5DcXJxYIc4LxFGk0-2itj04_Y.jpg?width=1080&crop=smart&auto=webp&s=e8e616266274ff389165c69427c3cad86034b71d', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://external-preview.redd.it/rTEGreJm_X29E0_4Pl5DcXJxYIc4LxFGk0-2itj04_Y.jpg?auto=webp&s=2491c41610f40eb630b154cddacb8b1fb99ecf13', 'width': 1920}, 'variants': {}}]} |
What happened to Differential Transformer ? | 30 | 2025-02-01T21:31:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ifgy1a/what_happened_to_differential_transformer/ | LelouchZer12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifgy1a | false | null | t3_1ifgy1a | /r/LocalLLaMA/comments/1ifgy1a/what_happened_to_differential_transformer/ | false | false | 30 | null |
Sam Altman says OpenAI will embrace two new AI approaches, one from DeepSeek and another from Meta https://www.businessinsider.com/sam-altman-openai-ai-approaches-deepseek-meta-open-source-2025-1 | 88 | Shouldn't he just open source that thing? 🙄🤔
| 2025-02-01T21:39:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ifh4ec/sam_altman_says_openai_will_embrace_two_new_ai/ | Then_Knowledge_719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifh4ec | false | null | t3_1ifh4ec | /r/LocalLLaMA/comments/1ifh4ec/sam_altman_says_openai_will_embrace_two_new_ai/ | false | false | self | 88 | null |
The Shock of DeepSeek-R1 and the Legacy of Eclipse | 1 | 2025-02-01T21:39:51 | https://medium.com/@xiweizhou/the-shock-of-deepseek-r1-and-the-legacy-of-eclipse-41ec6be2dc9c | Xiwei | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1ifh50h | false | null | t3_1ifh50h | /r/LocalLLaMA/comments/1ifh50h/the_shock_of_deepseekr1_and_the_legacy_of_eclipse/ | false | false | 1 | {'enabled': False, 'images': [{'id': '0NX-u7yAwj8gihukWTvFETeuF5d6QHuSrfc-ynTQwnY', 'resolutions': [{'height': 42, 'url': 'https://external-preview.redd.it/vESTfbT9GLyc9Ufktrpw-OOckWUclzhErL_u3AAj2Us.jpg?width=108&crop=smart&auto=webp&s=ec305c612a97c1bf4b113423f1ae9facdd75b660', 'width': 108}, {'height': 85, 'url': 'https://external-preview.redd.it/vESTfbT9GLyc9Ufktrpw-OOckWUclzhErL_u3AAj2Us.jpg?width=216&crop=smart&auto=webp&s=880c47443ea2970a48596e57a8ba9a02cda8ade9', 'width': 216}, {'height': 126, 'url': 'https://external-preview.redd.it/vESTfbT9GLyc9Ufktrpw-OOckWUclzhErL_u3AAj2Us.jpg?width=320&crop=smart&auto=webp&s=1a35e947e9d7983fdb48e54287e0b452ff5a1a76', 'width': 320}, {'height': 253, 'url': 'https://external-preview.redd.it/vESTfbT9GLyc9Ufktrpw-OOckWUclzhErL_u3AAj2Us.jpg?width=640&crop=smart&auto=webp&s=d76abf9f2cb0f608c7e896d6d987ac5802bad6fd', 'width': 640}], 'source': {'height': 338, 'url': 'https://external-preview.redd.it/vESTfbT9GLyc9Ufktrpw-OOckWUclzhErL_u3AAj2Us.jpg?auto=webp&s=818b3cd9a75c49df09ec538ac513a9af94fd7388', 'width': 852}, 'variants': {}}]} |
US Probing If DeepSeek Got Nvidia Chips From Firms in Singapore | 0 | 2025-02-01T21:42:20 | https://www.bloomberg.com/news/articles/2025-01-31/us-probing-whether-deepseek-got-nvidia-chips-through-singapore?sref=HrWXCALa | fallingdowndizzyvr | bloomberg.com | 1970-01-01T00:00:00 | 0 | {} | 1ifh721 | false | null | t3_1ifh721 | /r/LocalLLaMA/comments/1ifh721/us_probing_if_deepseek_got_nvidia_chips_from/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'mYD5Qa-Y5bpBKgL1URhackK-hxpaLgwzyoYTyMKTBXw', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/5RvmDifpZcUvFvgGh0e2eLLioDjFL77FSqM8i3OyYW4.jpg?width=108&crop=smart&auto=webp&s=ac67fae42f1b4020fd3d338280cffb0c58015861', 'width': 108}, {'height': 146, 'url': 'https://external-preview.redd.it/5RvmDifpZcUvFvgGh0e2eLLioDjFL77FSqM8i3OyYW4.jpg?width=216&crop=smart&auto=webp&s=a1546e9b50b5f0bdd9ad96310ac58b481dea2fac', 'width': 216}, {'height': 217, 'url': 'https://external-preview.redd.it/5RvmDifpZcUvFvgGh0e2eLLioDjFL77FSqM8i3OyYW4.jpg?width=320&crop=smart&auto=webp&s=fc62a6605cc38e7962a3deb067e5e9ab0e5d02b7', 'width': 320}, {'height': 434, 'url': 'https://external-preview.redd.it/5RvmDifpZcUvFvgGh0e2eLLioDjFL77FSqM8i3OyYW4.jpg?width=640&crop=smart&auto=webp&s=a966de465d1d7760f0338145be97bab35e9ac529', 'width': 640}, {'height': 652, 'url': 'https://external-preview.redd.it/5RvmDifpZcUvFvgGh0e2eLLioDjFL77FSqM8i3OyYW4.jpg?width=960&crop=smart&auto=webp&s=9ce6f52231ffa6a105627096631957a1f0a5c65d', 'width': 960}, {'height': 733, 'url': 'https://external-preview.redd.it/5RvmDifpZcUvFvgGh0e2eLLioDjFL77FSqM8i3OyYW4.jpg?width=1080&crop=smart&auto=webp&s=20dcfadf2db2e04967f91bc703d04aa592d301e4', 'width': 1080}], 'source': {'height': 815, 'url': 'https://external-preview.redd.it/5RvmDifpZcUvFvgGh0e2eLLioDjFL77FSqM8i3OyYW4.jpg?auto=webp&s=c1a3811a1672a428fa209f21eee133875fe12803', 'width': 1200}, 'variants': {}}]} |
I want to use HPC to get the inference from Ollama server | 1 | [removed] | 2025-02-01T21:43:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ifh83i/i_want_to_use_hpc_to_get_the_inference_from/ | Holiday-Standard1819 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifh83i | false | null | t3_1ifh83i | /r/LocalLLaMA/comments/1ifh83i/i_want_to_use_hpc_to_get_the_inference_from/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'm1RnwnE7vu8a4irNED845q-4SaF9uoYBir6VafJSnlY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/D0C1XKS59-FFEPe33oa3b0uXAKYjSyEgesTH2lAnNhs.jpg?width=108&crop=smart&auto=webp&s=2f576d16ebe2698dfcf6ddd7cc2841d61eda0b44', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/D0C1XKS59-FFEPe33oa3b0uXAKYjSyEgesTH2lAnNhs.jpg?width=216&crop=smart&auto=webp&s=5aca57e1a9983e21fb4c443bca02a9333c50b677', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/D0C1XKS59-FFEPe33oa3b0uXAKYjSyEgesTH2lAnNhs.jpg?width=320&crop=smart&auto=webp&s=0033f1d7db2c8140cb5e1250202b7e82fcf880f2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/D0C1XKS59-FFEPe33oa3b0uXAKYjSyEgesTH2lAnNhs.jpg?width=640&crop=smart&auto=webp&s=b19d7d1113aa5ba47848c94f57b3178e6c69ce2a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/D0C1XKS59-FFEPe33oa3b0uXAKYjSyEgesTH2lAnNhs.jpg?width=960&crop=smart&auto=webp&s=7c6308bddd03757b313b1f8a5a0e3efbba218ff2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/D0C1XKS59-FFEPe33oa3b0uXAKYjSyEgesTH2lAnNhs.jpg?width=1080&crop=smart&auto=webp&s=d0602cf33a81f39cc299610013c6ecb4fe9b842b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/D0C1XKS59-FFEPe33oa3b0uXAKYjSyEgesTH2lAnNhs.jpg?auto=webp&s=0d52482da84088b60dcca13edf69b2270581cd13', 'width': 1200}, 'variants': {}}]} |
An interesting difference in answer I got from R1 and o3-mini | 1 | 2025-02-01T22:08:58 | https://www.reddit.com/gallery/1ifhsb0 | Icy-Switch-6075 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ifhsb0 | false | null | t3_1ifhsb0 | /r/LocalLLaMA/comments/1ifhsb0/an_interesting_difference_in_answer_i_got_from_r1/ | false | false | 1 | null |
One very interesting thing I learned just now is that apparently Gemma2-27b begins responding much faster than Gemma2-9b. I had always assumed the opposite would be true. Very important to know for voice applications. | 15 | 2025-02-01T22:11:38 | https://v.redd.it/6gpeoh5cplge1 | swagonflyyyy | /r/LocalLLaMA/comments/1ifhugb/one_very_interesting_thing_i_learned_just_now_is/ | 1970-01-01T00:00:00 | 0 | {} | 1ifhugb | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/6gpeoh5cplge1/DASHPlaylist.mpd?a=1741203430%2CMTQ4NTIwN2Q3YTQ4NDk3MTVmZTFhNjUyYmY2MzY0MDg2Mzk5NWY3NjYzNzEyNjk1Mzg5NWNjZTYzODhkN2M1Ng%3D%3D&v=1&f=sd', 'duration': 191, 'fallback_url': 'https://v.redd.it/6gpeoh5cplge1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/6gpeoh5cplge1/HLSPlaylist.m3u8?a=1741203430%2CNzA5YmE5MzgyYzY0M2JlMGY5NWMzY2RjNDEzZTBjNDg2YmYwZDQ1MWU0YzNkODUyYzhmZWU4ZWQ5NGY0OTEyMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6gpeoh5cplge1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1ifhugb | /r/LocalLLaMA/comments/1ifhugb/one_very_interesting_thing_i_learned_just_now_is/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'N3FmZXBpNWNwbGdlMdyPbMGk3oSHBxkHiCq8Y8bIQNW5aYT5KwyyXT018S-2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/N3FmZXBpNWNwbGdlMdyPbMGk3oSHBxkHiCq8Y8bIQNW5aYT5KwyyXT018S-2.png?width=108&crop=smart&format=pjpg&auto=webp&s=b7c72a7c5e72178b58db1b79a65bb07b9767d145', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/N3FmZXBpNWNwbGdlMdyPbMGk3oSHBxkHiCq8Y8bIQNW5aYT5KwyyXT018S-2.png?width=216&crop=smart&format=pjpg&auto=webp&s=997a7464c7351f34c4de66448283c4326cef4940', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/N3FmZXBpNWNwbGdlMdyPbMGk3oSHBxkHiCq8Y8bIQNW5aYT5KwyyXT018S-2.png?width=320&crop=smart&format=pjpg&auto=webp&s=9834ede72b561117476d83058c220ba3a13fbdbd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/N3FmZXBpNWNwbGdlMdyPbMGk3oSHBxkHiCq8Y8bIQNW5aYT5KwyyXT018S-2.png?width=640&crop=smart&format=pjpg&auto=webp&s=f3615fd90cb8eac22743f1c9a83931c1a0b1a13c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/N3FmZXBpNWNwbGdlMdyPbMGk3oSHBxkHiCq8Y8bIQNW5aYT5KwyyXT018S-2.png?width=960&crop=smart&format=pjpg&auto=webp&s=8dcd07cdf4dc3a9823a66ffc8681cde9317092c1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/N3FmZXBpNWNwbGdlMdyPbMGk3oSHBxkHiCq8Y8bIQNW5aYT5KwyyXT018S-2.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9b52c2031e14f7bd93a67a1a459b62ca472a5f30', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/N3FmZXBpNWNwbGdlMdyPbMGk3oSHBxkHiCq8Y8bIQNW5aYT5KwyyXT018S-2.png?format=pjpg&auto=webp&s=43c91598516d29fb06984900aae4b779bf69793d', 'width': 1280}, 'variants': {}}]} |
GOP Missouri Senator Josh Hawley proposes import and export ban on Chinese AI models | 1 | [removed] | 2025-02-01T22:13:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ifhvoy/gop_missouri_senator_josh_hawley_proposes_import/ | InquisitiveInque | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifhvoy | false | null | t3_1ifhvoy | /r/LocalLLaMA/comments/1ifhvoy/gop_missouri_senator_josh_hawley_proposes_import/ | false | false | self | 1 | null |
When will Nvidia be DeepSeeked, GPU-wise? | 0 | We need more competition in the GPU sector, maybe a new company creating better GPUs at 1/10 the price. | 2025-02-01T22:14:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ifhwj5/when_nvidia_will_be_deepseeked_gpu_wise/ | Over_Explorer7956 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifhwj5 | false | null | t3_1ifhwj5 | /r/LocalLLaMA/comments/1ifhwj5/when_nvidia_will_be_deepseeked_gpu_wise/ | false | false | self | 0 | null
Missouri Senator Josh Hawley proposes a ban on Chinese AI models | 317 | 2025-02-01T22:19:39 | https://www.hawley.senate.gov/wp-content/uploads/2025/01/Hawley-Decoupling-Americas-Artificial-Intelligence-Capabilities-from-China-Act.pdf | InquisitiveInque | hawley.senate.gov | 1970-01-01T00:00:00 | 0 | {} | 1ifi0qu | false | null | t3_1ifi0qu | /r/LocalLLaMA/comments/1ifi0qu/missouri_senator_josh_hawley_proposes_a_ban_on/ | false | false | default | 317 | null |
Dockerised TTS on MBP M4 | 4 | Hi all, looking to do some narration work using TTS. I've done some brief research and it looks as though the top 3 models currently are
F5-TTS
XTTS-V2
Kokoro
I've already found a Dockerised version of Kokoro which also had some voice models available, but I'm struggling to find the same for the other two.
I only want local TTS, not voice cloning, which is why I'm hoping someone could point me in the right direction for readily available voices for either F5-TTS or XTTS-V2. Is there some sort of directory with these .pt files?
thanks in advance | 2025-02-01T22:41:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ifihsc/dockerised_tts_on_mbp_m4/ | Rurouni-dev-11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ifihsc | false | null | t3_1ifihsc | /r/LocalLLaMA/comments/1ifihsc/dockerised_tts_on_mbp_m4/ | false | false | self | 4 | null |