title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What are the best use cases for slow token generation? (High parameter model, minimal VRAM/ 64G ram) | 0 | So I've got 16 GB of VRAM and 64 GB of RAM, and I can't really run 40B+ models at high speed. So I was wondering: what are good use cases for 70B+ models (at Q4) with slow generation speed?
| 2024-12-07T11:18:27 | https://www.reddit.com/r/LocalLLaMA/comments/1h8qf4x/what_are_the_best_use_cases_for_slow_token/ | Deluded-1b-gguf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8qf4x | false | null | t3_1h8qf4x | /r/LocalLLaMA/comments/1h8qf4x/what_are_the_best_use_cases_for_slow_token/ | false | false | self | 0 | null |
Deploy production-ready computer use agent | 1 | 2024-12-07T11:20:04 | https://playground.gca.dev/ | mbartu | playground.gca.dev | 1970-01-01T00:00:00 | 0 | {} | 1h8qfy4 | false | null | t3_1h8qfy4 | /r/LocalLLaMA/comments/1h8qfy4/deploy_productionready_computer_use_agent/ | false | false | default | 1 | null |
|
Deploy production ready computer use agent | 1 | Hi, when trying computer use, it's necessary to launch a container. However, this makes testing the computer use concept a bit difficult. For this reason, we've created a playground that you can try without having to launch a container. Also, feedback would be great for us
[https://playground.gca.dev/](https://playground.gca.dev/) | 2024-12-07T11:22:35 | https://www.reddit.com/r/LocalLLaMA/comments/1h8qh6t/deploy_production_ready_computer_use_agent/ | mbartu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8qh6t | false | null | t3_1h8qh6t | /r/LocalLLaMA/comments/1h8qh6t/deploy_production_ready_computer_use_agent/ | false | false | self | 1 | null |
How to run llama 3 on Google colab with open webUI ? | 1 | [removed] | 2024-12-07T11:23:02 | https://www.reddit.com/r/LocalLLaMA/comments/1h8qhei/how_to_run_llama_3_on_google_colab_with_open_webui/ | Sea-Snow-6111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8qhei | false | null | t3_1h8qhei | /r/LocalLLaMA/comments/1h8qhei/how_to_run_llama_3_on_google_colab_with_open_webui/ | false | false | self | 1 | null |
How to run ollama with open webUI in Google colab free gpu ? | 1 | [removed] | 2024-12-07T11:24:22 | https://www.reddit.com/r/LocalLLaMA/comments/1h8qi24/how_to_run_ollama_with_open_webui_in_google_colab/ | Sea-Snow-6111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8qi24 | false | null | t3_1h8qi24 | /r/LocalLLaMA/comments/1h8qi24/how_to_run_ollama_with_open_webui_in_google_colab/ | false | false | self | 1 | null |
Llama 3.3 on a 4090 - quick feedback | 48 | Hey team,
on my 4090 the most basic ollama pull and ollama run for llama3.3 leads to the following:
- successful startup, VRAM obviously filled up;
- a quick test with a prompt asking for a summary of a 1500 word interview gets me a high-quality summary of 214 words in about 220 seconds, which is, you guessed it, about a word per second.
So if you want to try it, at least know that you can with a 4090. Slow of course, but we all know there are further speed-ups possible. Future's looking bright - thanks to the meta team! | 2024-12-07T11:44:55 | https://www.reddit.com/r/LocalLLaMA/comments/1h8qsal/llama_33_on_a_4090_quick_feedback/ | latentmag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8qsal | false | null | t3_1h8qsal | /r/LocalLLaMA/comments/1h8qsal/llama_33_on_a_4090_quick_feedback/ | false | false | self | 48 | null |
Can LLMs count now? | 1 | 2024-12-07T11:45:35 | https://www.reddit.com/gallery/1h8qsnc | Accurate-Material275 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1h8qsnc | false | null | t3_1h8qsnc | /r/LocalLLaMA/comments/1h8qsnc/can_llms_count_now/ | false | false | 1 | null |
||
How to solve the STT Cutoff Problem | 2 | Hello folks,
I've been working on an agentic solution where you can have an autonomous agent taking live calls. We're using a pipeline of Speech to Text, LLM for generating responses and then Text to Speech. In this pipeline, Speech to text is causing some issues because it's difficult to determine when exactly a sentence is over since the user can take pauses. Moreover, when multiple inputs go into LLM, multiple responses are generated and they queue up for Text to speech. How would you solve this problem? How would you also handle cases where the user interrupts the agent? | 2024-12-07T11:55:18 | https://www.reddit.com/r/LocalLLaMA/comments/1h8qxlm/how_to_solve_the_stt_cutoff_problem/ | Leo2000Immortal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8qxlm | false | null | t3_1h8qxlm | /r/LocalLLaMA/comments/1h8qxlm/how_to_solve_the_stt_cutoff_problem/ | false | false | self | 2 | null |
Stellar week for Open VLMs - across sizes 🔥 | 85 | VLMs are going through quite an open revolution AND on-device friendly sizes:
1. Google DeepMind w/ PaliGemma2 - 3B, 10B & 28B: https://huggingface.co/collections/google/paligemma-2-release-67500e1e1dbfdd4dee27ba48
2. OpenGVLabs w/ InternVL 2.5 - 1B, 2B, 4B, 8B, 26B, 38B & 78B: https://huggingface.co/collections/OpenGVLab/internvl-25-673e1019b66e2218f68d7c1c
3. Qwen w/ Qwen 2 VL - 2B, 7B & 72B: https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d
4. Microsoft w/ FlorenceVL - 3B & 8B: https://huggingface.co/jiuhai
5. Moondream2 w/ 0.5B: https://huggingface.co/vikhyatk/
What a time to be alive! 🔥 | 2024-12-07T12:22:41 | https://www.reddit.com/r/LocalLLaMA/comments/1h8rd7y/stellar_week_for_open_vlms_across_sizes/ | vaibhavs10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8rd7y | false | null | t3_1h8rd7y | /r/LocalLLaMA/comments/1h8rd7y/stellar_week_for_open_vlms_across_sizes/ | false | false | self | 85 | {'enabled': False, 'images': [{'id': 'gthN__jaNfZPrVO9Qoro3OiHuNP2ZU3-PhCyPGhukak', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/C_UZ4gTcylYvoUh53wh_AbetjDWTDkfTxGTNgrRfhNs.jpg?width=108&crop=smart&auto=webp&s=dcab32acbee316e6acfa907633edfb5cd8054918', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/C_UZ4gTcylYvoUh53wh_AbetjDWTDkfTxGTNgrRfhNs.jpg?width=216&crop=smart&auto=webp&s=93fcb373b4424a6511a941187024422514de6e50', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/C_UZ4gTcylYvoUh53wh_AbetjDWTDkfTxGTNgrRfhNs.jpg?width=320&crop=smart&auto=webp&s=98de84e57b1c2cf911f4be623c8cb1bbcff877b7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/C_UZ4gTcylYvoUh53wh_AbetjDWTDkfTxGTNgrRfhNs.jpg?width=640&crop=smart&auto=webp&s=013c22ed9f0dcf265069f9622837e634dcd3dd07', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/C_UZ4gTcylYvoUh53wh_AbetjDWTDkfTxGTNgrRfhNs.jpg?width=960&crop=smart&auto=webp&s=d1fa7e92cc27244742a9d351eeeea273cd5c474c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/C_UZ4gTcylYvoUh53wh_AbetjDWTDkfTxGTNgrRfhNs.jpg?width=1080&crop=smart&auto=webp&s=c751ee1d6239bf736b910d6a394888b6cdd6ab4b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/C_UZ4gTcylYvoUh53wh_AbetjDWTDkfTxGTNgrRfhNs.jpg?auto=webp&s=69519c99ab41edb8e2707fb0c71baf28c93bffd5', 'width': 1200}, 'variants': {}}]} |
3-4x 3090 Build. Which Epyc Rome to choose? | 1 | [removed] | 2024-12-07T12:38:39 | https://www.reddit.com/r/LocalLLaMA/comments/1h8rmf7/34x_3090_build_which_epyc_rome_to_choose/ | EcoBair | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8rmf7 | false | null | t3_1h8rmf7 | /r/LocalLLaMA/comments/1h8rmf7/34x_3090_build_which_epyc_rome_to_choose/ | false | false | self | 1 | null |
FlashAttention - FineTuning Llama3.2 | 1 | [removed] | 2024-12-07T12:45:16 | https://www.reddit.com/r/LocalLLaMA/comments/1h8rqao/flashattention_finetuning_llama32/ | King_Ivanchuk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8rqao | false | null | t3_1h8rqao | /r/LocalLLaMA/comments/1h8rqao/flashattention_finetuning_llama32/ | false | false | self | 1 | null |
I made very basic ollama monitoring trayicon-app. You can try from github | 7 | I wanted to share a little tool I created that might make your life easier when working with Ollama. It's called Ollama Monitor - a lightweight system tray app that lets you keep an eye on your models without getting in the way.
Why I Made This: I found myself constantly checking if my models were running or not, so I thought "why not make something simple that just sits in the system tray?" Hope you find it useful too!
The project is open source and MIT licensed. It's completely **FREE**. Try it out and let me know what you think! Any feedback or suggestions are welcome.
[GitHub Repository](https://github.com/ysfemreAlbyrk/ollama-monitor) | 2024-12-07T12:48:12 | https://www.reddit.com/r/LocalLLaMA/comments/1h8rs2n/i_made_very_basic_ollama_monitoring_trayiconapp/ | Zooght | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8rs2n | false | null | t3_1h8rs2n | /r/LocalLLaMA/comments/1h8rs2n/i_made_very_basic_ollama_monitoring_trayiconapp/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'r3Z2C18pfK72v6JUXkg0Z3I6ZKDghpvmSQZepvzzgVI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IunkS2U-vby3R9hirPWU1BvR7rvEXGGtVOsPZSFUmVE.jpg?width=108&crop=smart&auto=webp&s=2c27a8846c5cd84d44d9812e396942b9e9b65858', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IunkS2U-vby3R9hirPWU1BvR7rvEXGGtVOsPZSFUmVE.jpg?width=216&crop=smart&auto=webp&s=e2fe40f49aa08d27cba251c4ac6633df1eebc7f7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IunkS2U-vby3R9hirPWU1BvR7rvEXGGtVOsPZSFUmVE.jpg?width=320&crop=smart&auto=webp&s=da164626842c0878ed7f4bcff79b23ad958a25ea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IunkS2U-vby3R9hirPWU1BvR7rvEXGGtVOsPZSFUmVE.jpg?width=640&crop=smart&auto=webp&s=183fc10672a1934e0fd79e26ca94230f789afb07', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IunkS2U-vby3R9hirPWU1BvR7rvEXGGtVOsPZSFUmVE.jpg?width=960&crop=smart&auto=webp&s=706fc6d785424990ce9e7d4b0f9af440c120e734', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IunkS2U-vby3R9hirPWU1BvR7rvEXGGtVOsPZSFUmVE.jpg?width=1080&crop=smart&auto=webp&s=07e72ac9e4efa45bcb780e52d75363da255566fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IunkS2U-vby3R9hirPWU1BvR7rvEXGGtVOsPZSFUmVE.jpg?auto=webp&s=a78cfaff05a0d3d7a450ce9827d487eb582f5b55', 'width': 1200}, 'variants': {}}]} |
CPU bottleneck? Fully loaded model in VRAM, ollama help | 1 | 2024-12-07T12:52:16 | Rockends | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h8rujl | false | null | t3_1h8rujl | /r/LocalLLaMA/comments/1h8rujl/cpu_bottleneck_fully_loaded_model_in_vram_ollama/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'oLieOlgHbK0yth7pSz-J7TgilOcxWbkKAy03YHqVsAc', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/29sqa8g2bf5e1.png?width=108&crop=smart&auto=webp&s=0ee396158eb219da017123b280cae5ac3dde5b08', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/29sqa8g2bf5e1.png?width=216&crop=smart&auto=webp&s=de11dad88a152cb051357de37fcb5ab0c07e3ef5', 'width': 216}, {'height': 173, 'url': 'https://preview.redd.it/29sqa8g2bf5e1.png?width=320&crop=smart&auto=webp&s=e1cef37d0e001f598803f498fc10e2cf749cef2a', 'width': 320}, {'height': 346, 'url': 'https://preview.redd.it/29sqa8g2bf5e1.png?width=640&crop=smart&auto=webp&s=1cd4668e859369e6b77d7bfb5ee4aa7524af9cde', 'width': 640}, {'height': 519, 'url': 'https://preview.redd.it/29sqa8g2bf5e1.png?width=960&crop=smart&auto=webp&s=db87e51cd18badd9c986e8d1f75ff18f167381e5', 'width': 960}, {'height': 584, 'url': 'https://preview.redd.it/29sqa8g2bf5e1.png?width=1080&crop=smart&auto=webp&s=1c17a17ddb7be393542ef0d62d2fee347fbdfeb8', 'width': 1080}], 'source': {'height': 814, 'url': 'https://preview.redd.it/29sqa8g2bf5e1.png?auto=webp&s=f4b0f753850aa3bbd8f8200554953a744d1c9476', 'width': 1504}, 'variants': {}}]} |
|||
Some notes on running a 6 GPU AI Server | 58 | I'm trying to start a generative AI based business, and part of that has been setting up a backend running open source models to power my apps. I figured I'd share some of what I've learned for anyone trying to do something similar.
I tried a few different motherboards, and settled on this one: [https://www.aliexpress.us/item/3256807575428102.html](https://www.aliexpress.us/item/3256807575428102.html)
Dirt cheap at about $120, and it takes LGA 2011-3 CPUs which you can get from Chinese eBay sellers for almost nothing. Definitely one of the cheaper ways to get to 80 PCIe lanes. I got a v3 matched pair for about $15 and a v4 matched pair for about $100. Couldn't get the v4 to work (DOA), and I haven't really seen a reason to upgrade from the v3 yet. Compared to my first attempt using a repurposed mining motherboard, I LOVE this motherboard. With my previous board I could never get all my GPUs to show up properly using risers, but with this board you can fit all the GPUs directly plugged in and everything just works. It also takes 256GB of DDR4, so you can run some beefy llama.cpp models in addition to GPU engines.
Speaking of GPUs, I'm running 3x 4090, 2x3090 (with NVLink I never got working) and 1x4060ti. I want to replace the 4060ti with another 4090 but I have to figure out why the credit card companies stopped sending me new cards first. I'm running all of that off of one 1600w power supply. I know I'm way under-powered for this many GPUs, but I haven't run into any issues yet even running at max capacity. In the beginning I created a startup script that would power limit the GPUs (`sudo nvidia-smi -i <GPU_ID> -pl <WATT_LIMIT>`). From what I've read you can get the best power usage/compute ratio at around 70% power. But the more I've thought about it, I don't think it actually makes sense for what I'm doing. If it was just me, a 30% reduction in power for a 10% performance hit might be worth it. But with a lot of simultaneous paying users, I think 30% more power usage for 10% more "capacity" ends up being worth it. Somehow I haven't had any power issues running all GPUs running models simultaneously unthrottled. I don't dare try training.
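For reference, that startup script boiled down to looping the same nvidia-smi call over every card, something like this sketch (the 300 W cap is a placeholder, not my actual value):

```python
import subprocess

POWER_LIMIT_WATTS = 300  # placeholder cap; tune per card

# Ask the driver which GPU indices exist.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=index", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)

for gpu_index in result.stdout.split():
    # Same call as above: nvidia-smi -i <GPU_ID> -pl <WATT_LIMIT>
    subprocess.run(
        ["sudo", "nvidia-smi", "-i", gpu_index, "-pl", str(POWER_LIMIT_WATTS)],
        check=True,
    )
```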
For inference, I've been using TabbyAPI with exl2 quants of Midnight-Miqu-70B-v1.5. Each instance takes up 2x22gb of ram, so 2x3090s and 2x4090s. In order to keep everything consistent, I run each tabby instance as a service and export cuda device environmental variables. It looks like this:
`[Unit]`
`Description=Tabby API Service`
`After=network.target`
`[Service]`
`Environment="CUDA_VISIBLE_DEVICES=0,1"`
`ExecStart=/bin/bash -l -c "source /mnt/sdc/miniconda3/etc/profile.d/conda.sh && conda activate tabbyapi && echo 'Activated Conda' && /mnt/sdb/tabbyAPI/start.sh"`
`WorkingDirectory=/mnt/sdb/tabbyAPI`
`Restart=always`
`User=user`
`Group=user`
`StandardOutput=journal`
`StandardError=journal`
`[Install]`
`WantedBy=multi-user.target`
Just do `sudo nano /etc/systemd/system/tabbyapi.service`, paste your service configuration, `sudo systemctl daemon-reload`, `sudo systemctl start tabbyapi.service`, and `sudo systemctl enable tabbyapi.service`.
This activates the tabbyapi conda environment, sets the first and second GPU as the visible GPUs, and starts tabbyAPI on system boot. The second tabbyAPI service uses the same conda environment, exports device 3,4, and runs from a separate cloned repo. I could never figure out how to launch multiple instances from the same repo using different tabby config files.
In front of tabbyAPI, I'm running [litellm](https://github.com/BerriAI/litellm) as a proxy. Since I'm running two identical models with the same name, calls get split between them and load balanced. Which is super useful because you can basically combine multiple servers/clusters/backends for easy scaling. And being able to generate API keys with a set input/output costs is pretty cool. It's like being able to make prepaid giftcards for your server. I also run this as a service that starts on boot. I just wish they had local stable diffusion support.
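To give an idea of what the load balancing looks like, here's a minimal sketch using litellm's Python Router rather than my actual proxy config file (the ports, key and model id below are placeholders):

```python
# Sketch only: two entries with the SAME model_name, each pointing at one of
# the TabbyAPI instances, so requests get spread across both backends.
from litellm import Router

router = Router(model_list=[
    {"model_name": "midnight-miqu-70b",
     "litellm_params": {"model": "openai/Midnight-Miqu-70B-v1.5",
                        "api_base": "http://localhost:5000/v1",
                        "api_key": "placeholder"}},
    {"model_name": "midnight-miqu-70b",
     "litellm_params": {"model": "openai/Midnight-Miqu-70B-v1.5",
                        "api_base": "http://localhost:5001/v1",
                        "api_key": "placeholder"}},
])

# Requests for "midnight-miqu-70b" alternate between the two deployments.
reply = router.completion(
    model="midnight-miqu-70b",
    messages=[{"role": "user", "content": "Hello there"}],
)
print(reply.choices[0].message.content)
```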
And while we're on the topic of stable diffusion, on my last 4090 I managed to cram together three [sd.next](https://github.com/vladmandic/automatic) instances, each running a SDXL/Pony model on a different port. I like vladmandic/sdnext because it has a built in queue system in case of simultaneous requests. I don't think there's parallel batching for stable diffusion like there is for LLMs, but if you're using a lightning model on a 4090, you can easily get 2-3 seconds for a 1024x1024 image. I wish there was a better way to run multiple models at once, but changing models on one instance takes way too much time. I've seen and tried this [multi user stable diffusion project](https://github.com/wolverinn/stable-diffusion-multi-user), but I could never get it to work properly. So to change image models my users basically have to copy and paste a new URL/endpoint specific to each model.
Here is an example of my stable diffusion service:
`[Unit]`
`Description=Web UI Service for Stable Diffusion`
`After=network.target`
`[Service]`
`Environment="CUDA_VISIBLE_DEVICES=2"`
`ExecStart=/bin/bash /mnt/sdb/automatic/webui.sh --ckpt /mnt/sdb/automatic/models/Stable-diffusion/tamePonyThe_v25.safetensors --port 7860 --listen --log /mnt/sdb/automatic/log.txt --api-log --ui-config /mnt/sdb/automatic/ui-config.yml --freeze`
`WorkingDirectory=/mnt/sdb/automatic`
`Restart=always`
`User=user`
`Group=user`
`StandardOutput=journal`
`StandardError=journal`
`[Install]`
`WantedBy=multi-user.target`
The 4060ti I reserve for miscellaneous fuckery like text to voice. I haven't found a way to scale local text to voice for multiple users so it's kind of just in limbo. I'm thinking of just filling it up with stable diffusion 1.5 models for now. They're old but neat, and hardly take up any resources compared to SDXL.
I don't have physical access to my server, which is a huge pain in the ass sometimes. I do not have a safe place for expensive equipment, so I keep the server in my partner's office, accessing it remotely with tailscale. The issue is anytime I install or upgrade anything with a lot of packages, it seems there is a reasonable chance my system will lock up and need a hard reboot. Usually if I don't touch it, it is very stable. But there is not someone onsite 24/7 to kick the server, which would result in unacceptable outages if something happened. To get around this, I found this device: [https://www.aliexpress.us/item/3256806110401064.html](https://www.aliexpress.us/item/3256806110401064.html)
You can hook it to the board's power/reset switch inputs, and power cycle remotely. Just needed to install tailscale on the device OS. I had never heard of this kind of thing before, but it works very well and gives peace of mind. Most people probably do not have this issue, but it was not an obvious solution to me, so I figured I'd mention it.
I wasted a lot of time manually starting programs, exporting environmental variables, trying to keep track of what GPUs go to which program in a text file, and I'd dread having my server crash or needing to reboot. Now, with everything set up to start automatically, I never stress about anything unless I'm upgrading. It just runs. This is all probably very obvious to people very familiar with Ubuntu, but it took me way too long fucking around to get to this point. Hopefully these ramblings are somewhat helpful to someone. | 2024-12-07T12:57:10 | https://www.reddit.com/r/LocalLLaMA/comments/1h8rxla/some_notes_on_running_a_6_gpu_ai_server/ | Scam_Altman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8rxla | false | null | t3_1h8rxla | /r/LocalLLaMA/comments/1h8rxla/some_notes_on_running_a_6_gpu_ai_server/ | false | false | self | 58 | null |
Test LLama Mesh Blender AddOn | 66 | 2024-12-07T13:25:42 | https://v.redd.it/7wfr9al2hf5e1 | Gioxyer | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h8sfyh | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7wfr9al2hf5e1/DASHPlaylist.mpd?a=1736169955%2CMWUwZDlmNTEwODgzYWJmMTE2YzY3MWZmNGEzOTdiNTUzOTZkMmIwZGZhN2FiYjE1ZjZjODk2YTc0ZjY4NTQ0Yg%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/7wfr9al2hf5e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/7wfr9al2hf5e1/HLSPlaylist.m3u8?a=1736169955%2CNjRlMWJjYzU5NzM1Y2YxZDZkMTIwOGZiMmQxNDRkNmFjN2RhYmRmZGZiYzYxN2NiZDllYzQwZTgyNTcxOGMwOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7wfr9al2hf5e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1h8sfyh | /r/LocalLLaMA/comments/1h8sfyh/test_llama_mesh_blender_addon/ | false | false | 66 | {'enabled': False, 'images': [{'id': 'OXBvNTNibDJoZjVlMVr5FM1KiaToRMJn_ff0uZt5hclRoWahIz9-83i-escH', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OXBvNTNibDJoZjVlMVr5FM1KiaToRMJn_ff0uZt5hclRoWahIz9-83i-escH.png?width=108&crop=smart&format=pjpg&auto=webp&s=0714fc0fde1f7f4f87c24ff5eac4314e015391ae', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OXBvNTNibDJoZjVlMVr5FM1KiaToRMJn_ff0uZt5hclRoWahIz9-83i-escH.png?width=216&crop=smart&format=pjpg&auto=webp&s=2af79bcc0b54801d0b9341a320563427b7550b5c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OXBvNTNibDJoZjVlMVr5FM1KiaToRMJn_ff0uZt5hclRoWahIz9-83i-escH.png?width=320&crop=smart&format=pjpg&auto=webp&s=b28683e60aa1437356851a72554958784bdeb5f2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OXBvNTNibDJoZjVlMVr5FM1KiaToRMJn_ff0uZt5hclRoWahIz9-83i-escH.png?width=640&crop=smart&format=pjpg&auto=webp&s=73afb5fb4fee9efdc9798cf7b5e009ec6345cbe9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OXBvNTNibDJoZjVlMVr5FM1KiaToRMJn_ff0uZt5hclRoWahIz9-83i-escH.png?width=960&crop=smart&format=pjpg&auto=webp&s=a1194a0e61e928ff3e16b65e9011d002adc0e5a1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OXBvNTNibDJoZjVlMVr5FM1KiaToRMJn_ff0uZt5hclRoWahIz9-83i-escH.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7206e23813e008c396bdfd4f6112cd79efee87e1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OXBvNTNibDJoZjVlMVr5FM1KiaToRMJn_ff0uZt5hclRoWahIz9-83i-escH.png?format=pjpg&auto=webp&s=24a1779732dd171109b746ff60e1d1a5c1043372', 'width': 1920}, 'variants': {}}]} |
||
First look at PydanticAi : It has some nice features. | 0 | 2024-12-07T13:37:23 | https://www.reddit.com/r/LocalLLaMA/comments/1h8snju/first_look_at_pydanticai_it_has_some_nice_features/ | Sorry_Transition_599 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8snju | false | null | t3_1h8snju | /r/LocalLLaMA/comments/1h8snju/first_look_at_pydanticai_it_has_some_nice_features/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'dYSqAhOsJiheQOoGyx_E7_VnRX1Zmpyl1cCmc2qJn8I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lFx-P0HQRW9mBS0p22wIV9xReIY8_IaBEIGlHMPb0Zs.jpg?width=108&crop=smart&auto=webp&s=940933bf9056deba49ab0e00bba4dd44aaa7c4a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lFx-P0HQRW9mBS0p22wIV9xReIY8_IaBEIGlHMPb0Zs.jpg?width=216&crop=smart&auto=webp&s=f2e037e6852cd8600589270f535076269660b214', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lFx-P0HQRW9mBS0p22wIV9xReIY8_IaBEIGlHMPb0Zs.jpg?width=320&crop=smart&auto=webp&s=87e64aa2c7e2184fc21f51164c51fe355b933dc5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lFx-P0HQRW9mBS0p22wIV9xReIY8_IaBEIGlHMPb0Zs.jpg?width=640&crop=smart&auto=webp&s=8a8cfbcf540f4f77696bf6641ca231f24a34dc99', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lFx-P0HQRW9mBS0p22wIV9xReIY8_IaBEIGlHMPb0Zs.jpg?width=960&crop=smart&auto=webp&s=7f6c57881ee3ad49e1bd93e6f5faa84e0d39c645', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lFx-P0HQRW9mBS0p22wIV9xReIY8_IaBEIGlHMPb0Zs.jpg?width=1080&crop=smart&auto=webp&s=1c39dce918efd53336c7bc27a77c43e1fa41a354', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lFx-P0HQRW9mBS0p22wIV9xReIY8_IaBEIGlHMPb0Zs.jpg?auto=webp&s=18ef9fae333c6d1d8dc536cb41e72936d4337258', 'width': 1200}, 'variants': {}}]} |
||
Any news about papers / training process for DeepSeek-R1 and QwQ ? | 21 | Given that DeepSeek's R1 and Qwen's QwQ models have been at the forefront of the open-weights Reasoning process, I have been really waiting for them to release either papers or any other documents about how they trained their models.
Am wondering how long it typically can take for them to share this? Or have they provided any indication that they don't plan to share the recipes ? | 2024-12-07T13:43:47 | https://www.reddit.com/r/LocalLLaMA/comments/1h8srrj/any_news_about_papers_training_process_for/ | Ska82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8srrj | false | null | t3_1h8srrj | /r/LocalLLaMA/comments/1h8srrj/any_news_about_papers_training_process_for/ | false | false | self | 21 | null |
Llama leads as the most liked model of the year on Hugging Face | 1 | 2024-12-07T14:52:57 | https://www.reddit.com/r/LocalLLaMA/comments/1h8u329/llama_leads_as_the_most_liked_model_of_the_year/ | Ok_Raise_9764 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8u329 | false | null | t3_1h8u329 | /r/LocalLLaMA/comments/1h8u329/llama_leads_as_the_most_liked_model_of_the_year/ | false | false | 1 | null |
||
Virtuoso Small 14b better than Llama 3.3 70b at some text improvement/rewriting tasks | 12 | This is really surprising! For general knowledge questions, Llama 3.3 70b has blown me away, but when it comes to improving or rewriting text, Virtuoso Small seems to be outperforming the bigger model.
The prompt I’m using (with BoltAI’s "Improve writing" task) is this:
```
Improve the text in triple quotes below in your own words. Rephrase the text using informal language suitable for communication between peers. In particular avoid slang if possible, using a language that a person who is fluent in the language but not a native speaker would use. Don't make the revised text sound "boring" and also do not add any extra information that is not included in the original text. Limit your changes to just improve the readability of the text and nothing else. Keep the original emotion, like exclamation marks etc if they can help keep the original meaning. Do not redact any information. Also split the text in paragraphs separated by whitespace if it can help improve readability.
"""
{input}
"""
Do not return anything other than the rephrased text. Do not wrap responses in quotes.
```
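(For context, filling in the `{input}` placeholder and sending the result is roughly the sketch below; the base URL, model tag and file name are placeholders for whatever local OpenAI-compatible server is running, not what BoltAI does internally.)

```python
# Sketch: fill the {input} slot of the template above and send it to a local
# OpenAI-compatible endpoint. URL, model tag and the draft text are made up.
from openai import OpenAI

template = open("improve_writing_prompt.txt").read()   # the prompt above, saved to a placeholder file
draft = "me and him has went to the store yesterday!!"  # hypothetical input text

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
response = client.chat.completions.create(
    model="virtuoso-small",                             # placeholder model tag
    messages=[{"role": "user", "content": template.format(input=draft)}],
    temperature=0.3,
)
print(response.choices[0].message.content)
```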
I'm good at English but I learned it as a second language. That's why I often use tasks like these to make sure what I want to say comes across clearly.
So far, after testing many times, Virtuoso Small seems to follow my directions better than Llama 3.3 70b.
Is there an explanation for this difference? | 2024-12-07T14:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/1h8u60z/virtuoso_small_14b_better_than_llama_33_70b_at/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8u60z | false | null | t3_1h8u60z | /r/LocalLLaMA/comments/1h8u60z/virtuoso_small_14b_better_than_llama_33_70b_at/ | false | false | self | 12 | null |
Llama leads as the most liked model of the year on Hugging Face | 392 | 2024-12-07T15:02:06 | Ok_Raise_9764 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h8u9z2 | false | null | t3_1h8u9z2 | /r/LocalLLaMA/comments/1h8u9z2/llama_leads_as_the_most_liked_model_of_the_year/ | false | false | 392 | {'enabled': True, 'images': [{'id': 'C0qk_DKOiGfqEXLInDCojVitp1SwihOVbAaArvwaSnw', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/ambomdseyf5e1.png?width=108&crop=smart&auto=webp&s=da36d17b4d7cd584c0b143ac0444f9441df6661c', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/ambomdseyf5e1.png?width=216&crop=smart&auto=webp&s=69ff4ea21eae2cc8f97dbd738d446e59a85db393', 'width': 216}, {'height': 285, 'url': 'https://preview.redd.it/ambomdseyf5e1.png?width=320&crop=smart&auto=webp&s=86cfacdb0225a6267adf2e68f1f9fa9a3755a486', 'width': 320}, {'height': 571, 'url': 'https://preview.redd.it/ambomdseyf5e1.png?width=640&crop=smart&auto=webp&s=5f5a91bf30afa82679258e4568fcdfd83b3fea86', 'width': 640}, {'height': 857, 'url': 'https://preview.redd.it/ambomdseyf5e1.png?width=960&crop=smart&auto=webp&s=27b22b212e324ceec02acfbf75fbe14a96e814e3', 'width': 960}, {'height': 964, 'url': 'https://preview.redd.it/ambomdseyf5e1.png?width=1080&crop=smart&auto=webp&s=d0ccdf7055b78f2ea0c9207761f8e55b000294b4', 'width': 1080}], 'source': {'height': 2060, 'url': 'https://preview.redd.it/ambomdseyf5e1.png?auto=webp&s=c612de739819cb6f8bd49d417373804c34d1c59c', 'width': 2306}, 'variants': {}}]} |
|||
M3 Max 128GB vs M4 Max 64GB? | 1 | [removed] | 2024-12-07T15:12:38 | https://www.reddit.com/r/LocalLLaMA/comments/1h8uhs0/m3_max_128gb_vs_m4_max_64gb/ | WorkingLandscape450 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8uhs0 | false | null | t3_1h8uhs0 | /r/LocalLLaMA/comments/1h8uhs0/m3_max_128gb_vs_m4_max_64gb/ | false | false | self | 1 | null |
How can I access the API for Athene-V2-Chat? | 3 | Athene-V2 was released quite a while ago, but I still haven't been able to find any API provider. | 2024-12-07T15:16:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h8uku6/how_can_i_access_the_api_for_athenev2chat/ | Emotional-Metal4879 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8uku6 | false | null | t3_1h8uku6 | /r/LocalLLaMA/comments/1h8uku6/how_can_i_access_the_api_for_athenev2chat/ | false | false | self | 3 | null |
Linux Knowledge Model | 2 | I'd like to have a model which has good knowledge about Linux, shell commands, etc. (including older distributions). Additionally, though not mandatory, it would be great if it could program C/C++, bash, etc. Is there something like that, and/or is it better to just use a coding model? | 2024-12-07T15:22:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h8upds/linux_knowledge_model/ | Confident-Ad-3465 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8upds | false | null | t3_1h8upds | /r/LocalLLaMA/comments/1h8upds/linux_knowledge_model/ | false | false | self | 2 | null |
Llama-3.2-3B-Instruct context length | 1 | Hi all,
I just started to experiment with the above-mentioned model and asked myself a few questions that I was unable to answer based on what I found online. I downloaded the model from Hugging Face and started using it with the default 'getting started' code, so nothing fancy; here is the code:
import torch
from transformers import pipeline

model_id = "meta-llama/Llama-3.2-3B-Instruct"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]

model_outputs = pipe(
    messages,
    max_new_tokens=1000,
)
print(model_outputs[0]["generated_text"][-1])
The only changes I made are the longer max response and the change of the 'system' prompt.
I have been trying to get the model to do a specific task: take some input text and generate a JSON out of it, which works pretty well, but I have noticed that if I rerun the same prompt with the same input several times, the model tends to miss more details in the response. I was wondering if the model actually retains information from previous prompts, or if this is merely a coincidence. If it actually remembers, can you suggest a way to 'clear the context' without reloading the model? I have been researching this for a while now and started to have the feeling that I'm conceptually wrong and it is just a coincidence.
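A minimal way I figure I could test this (sketch, with hypothetical prompts) is to call the pipeline twice with identical, freshly built inputs and greedy decoding; if the pipeline keeps no hidden state, both runs should come out the same:

```python
# Sketch: two independent calls with identical inputs and greedy decoding.
# If the outputs match, nothing is being "remembered" between calls.
test_messages = [
    {"role": "system", "content": "Extract the fields as JSON."},  # hypothetical
    {"role": "user", "content": "Name: Ada, Age: 36"},             # hypothetical
]
first = pipe(test_messages, max_new_tokens=200, do_sample=False)
second = pipe(test_messages, max_new_tokens=200, do_sample=False)
print(first[0]["generated_text"][-1])
print(second[0]["generated_text"][-1])
```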
Thanks for taking the time for enlightening me! | 2024-12-07T15:28:35 | https://www.reddit.com/r/LocalLLaMA/comments/1h8utm6/llama323binstruct_context_length/ | baszfasz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8utm6 | false | null | t3_1h8utm6 | /r/LocalLLaMA/comments/1h8utm6/llama323binstruct_context_length/ | false | false | self | 1 | null |
70b model into 3060s? | 6 | I really cannot afford 3090s/4090s. I have a 3060. I can afford another of them, giving me 24gb. Combined with 64gb DDR, is it possible to get ~20 t/s with a 70b Q4 model in this config?
| 2024-12-07T15:35:03 | https://www.reddit.com/r/LocalLLaMA/comments/1h8uynh/70b_model_into_3060s/ | oldschooldaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8uynh | false | null | t3_1h8uynh | /r/LocalLLaMA/comments/1h8uynh/70b_model_into_3060s/ | false | false | self | 6 | null |
Speaker accent, age identification | 1 | [removed] | 2024-12-07T15:40:25 | https://www.reddit.com/r/LocalLLaMA/comments/1h8v2re/speaker_accent_age_identification/ | anath2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8v2re | false | null | t3_1h8v2re | /r/LocalLLaMA/comments/1h8v2re/speaker_accent_age_identification/ | false | false | self | 1 | null |
Identifying speech accent and speaker age | 1 | [removed] | 2024-12-07T15:43:08 | https://www.reddit.com/r/LocalLLaMA/comments/1h8v4se/identifying_speech_accent_and_speaker_age/ | anath2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8v4se | false | null | t3_1h8v4se | /r/LocalLLaMA/comments/1h8v4se/identifying_speech_accent_and_speaker_age/ | false | false | self | 1 | null |
Business suggestion - prebuilt ai computers | 0 | Problem: Many people want to run ai locally. The trend is likely to continue.
But few know what type of computer they would need to buy, let alone how to build one or do quantisation, install stuff etc.
Solution: Sell prebuilt computers as cheaply as possible with all software like lmstudio, models etc already installed.
Put any tech talk like 3900x, 8X quant etc in the footer of the ad. Instead focus on price, which AI models it can run and at what speed.
Cut costs by having a 4-5 week delivery time. That way you can order the parts once the order is confirmed and paid.
There is an old saying: the best business opportunities are when you see people trying to solve their own problem with a DIY solution instead of paying someone. Most people prefer to pay.
Another good sign of a business opportunity is being early in a trend. That allows for mistakes that would sink you in a more mature and competitive market.
I think many hardware builders make the mistake of thinking it's immoral to sell a computer for, let's say, 5k when they only paid 3k for the parts.
Most people who can afford it don't care about the 2k, because the cost and anxiety of learning everything is more expensive.
| 2024-12-07T15:54:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h8vd98/business_suggestion_prebuilt_ai_computers/ | Sea-Replacement7541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8vd98 | false | null | t3_1h8vd98 | /r/LocalLLaMA/comments/1h8vd98/business_suggestion_prebuilt_ai_computers/ | false | false | self | 0 | null |
RAG or Fintune | 1 | [removed] | 2024-12-07T15:57:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h8vfki/rag_or_fintune/ | Superb_Classic36 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8vfki | false | null | t3_1h8vfki | /r/LocalLLaMA/comments/1h8vfki/rag_or_fintune/ | false | false | self | 1 | null |
Is it possible to run Qwen 2.5 7B Coder + Cursor on Mac Mini M4 Base model? | 6 | Hi all,
I am new to this area, so sorry if the question is irrelevant. I am considering not paying for Cursor and instead running the Qwen Coder 7B model locally. Has anyone tried this?
Paying $20 each month doesn't make sense to me. I could pay $599 for a Mac Mini instead if this works. | 2024-12-07T16:35:53 | https://www.reddit.com/r/LocalLLaMA/comments/1h8w9zz/is_it_possible_to_run_qwen_25_7b_coder_cursor_on/ | ncdlek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8w9zz | false | null | t3_1h8w9zz | /r/LocalLLaMA/comments/1h8w9zz/is_it_possible_to_run_qwen_25_7b_coder_cursor_on/ | false | false | self | 6 | null |
QwQ 32B with OpenWeb UI non-responsive | 1 | I had this weird issue where sometimes, when the LLM machine was busy, OpenWeb UI would send my prompt and I'd wait till the cows come home and never get an answer.
I found out that behind the scenes, OpenWeb UI was doing embedding and sentence auto-complete using QwQ. So obviously, QwQ being what it is, it was pondering the universe and its life choices and thus was simply not available for prompts.
I solved this by loading a second model (QWEN2.5 7B Q8) for those secondary tasks and it's perfect now. No more issues.
I'm writing this in case someone else has that issue. | 2024-12-07T16:46:59 | https://www.reddit.com/r/LocalLLaMA/comments/1h8wiqu/qwq_32b_with_openweb_ui_nonresponsive/ | DrVonSinistro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8wiqu | false | null | t3_1h8wiqu | /r/LocalLLaMA/comments/1h8wiqu/qwq_32b_with_openweb_ui_nonresponsive/ | false | false | self | 1 | null |
Ubers Prompt Engineering Framework | 8 | 2024-12-07T16:51:50 | https://www.uber.com/en-IN/blog/introducing-the-prompt-engineering-toolkit/?&aid=recbxr9IRM7UNGV8S&_bhlid=3a1f2f42ea0d180692b66caaa1f01e403591040e | ekaj | uber.com | 1970-01-01T00:00:00 | 0 | {} | 1h8wmk8 | false | null | t3_1h8wmk8 | /r/LocalLLaMA/comments/1h8wmk8/ubers_prompt_engineering_framework/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'VBniW7Ica10AXhlW_iwepEeUabpXmlL18gH3cU-JUbM', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/Q3-CdAPrIUXa5RX4v88K-J7-HOzMZMytlxgnpe63rCA.jpg?width=108&crop=smart&auto=webp&s=dc6c54621801ae6de20b0f1dca314e642110b96e', 'width': 108}, {'height': 172, 'url': 'https://external-preview.redd.it/Q3-CdAPrIUXa5RX4v88K-J7-HOzMZMytlxgnpe63rCA.jpg?width=216&crop=smart&auto=webp&s=856487d2cd4d3bc5d1cb607b59528dfb8675be5c', 'width': 216}, {'height': 255, 'url': 'https://external-preview.redd.it/Q3-CdAPrIUXa5RX4v88K-J7-HOzMZMytlxgnpe63rCA.jpg?width=320&crop=smart&auto=webp&s=bcd8dcb05546a18a27adf4c7a872def6912e2e2b', 'width': 320}, {'height': 511, 'url': 'https://external-preview.redd.it/Q3-CdAPrIUXa5RX4v88K-J7-HOzMZMytlxgnpe63rCA.jpg?width=640&crop=smart&auto=webp&s=2ea6981d4a33ba479bb80bcfc35d000f6571aca4', 'width': 640}, {'height': 767, 'url': 'https://external-preview.redd.it/Q3-CdAPrIUXa5RX4v88K-J7-HOzMZMytlxgnpe63rCA.jpg?width=960&crop=smart&auto=webp&s=200eed71babdd8cfd12264e2ca999968d0277b3b', 'width': 960}], 'source': {'height': 819, 'url': 'https://external-preview.redd.it/Q3-CdAPrIUXa5RX4v88K-J7-HOzMZMytlxgnpe63rCA.jpg?auto=webp&s=a1ae91be37d24f635ff32cb52bcd0413311ef518', 'width': 1024}, 'variants': {}}]} |
||
Tired of waiting for open AI to release a web browser? I’m developing a chrome extension to bring Agents to your favorite browser. LMKYT | 14 |
So I’m just throwing this up to test the waters and see what type of interest there is for something like this. I know the biggest similar product is perplexity with a number of other copycat companies, however 99% of them are using closed models like ChatGPT or otherwise. This is a project built by the people, for the people and I will be open sourcing soon. The goal being to take the incredible functionality and practical use cases of what closed source models and these other companies provide to your fingertips with models accessible to your LOCAL machine SO YOU DON’T HAVE TO PAY A DAMN DIME. I’m a broke Computer Science grad so I’ll probably release a free version with banner ads that aren’t too annoying and an ad free version for just $0.99 to put food on the table. Mind you even though it’s open source, Google charges users a $10 developer fee to experiment with extensions so you’re basically saving 90% of the costs to support an independent developer.
Please lmk what features you’d like to see, I have a few more ideas coming down the pipeline like being able to write a paper where you are actually able to selectively pick the links you want to use in real time versus most current implementations which basically pick them for you unless you have a list of pre-researched sources you’ve hopefully already reviewed.
There are two main goals with this project: essentially, to be able to fully control the Chrome browser with just your voice, and to write research papers where you're able to review and select the articles/sites/papers you want to add to curate an amalgamated research paper or other research assessments.
Yes I’m aware of openweb-ui. However, in my experience the sources it uses are generally sub optimal unless you provide it with specific links.
This is still a work in progress so keep in mind I’m barely halfway done but I wanted to get a temperature check for the direction of this project.
| 2024-12-07T16:54:30 | https://www.reddit.com/gallery/1h8woma | AIGuy3000 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1h8woma | false | null | t3_1h8woma | /r/LocalLLaMA/comments/1h8woma/tired_of_waiting_for_open_ai_to_release_a_web/ | false | false | 14 | null |
|
Would you contribute compute to an open source training run? And what? | 2 | I am interested in knowing the amount of compute we could get as the LocalLlama community, because we could run collectively large experiments like a large Bitnet model etc.. | 2024-12-07T16:56:00 | https://www.reddit.com/r/LocalLLaMA/comments/1h8wpse/would_you_contribute_compute_to_an_open_source/ | OkStatement3655 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8wpse | false | null | t3_1h8wpse | /r/LocalLLaMA/comments/1h8wpse/would_you_contribute_compute_to_an_open_source/ | false | false | self | 2 | null |
Are Large LLMs Necessary for Specific Local Goals? | 2 | I've been following this group for a few months now, and I’m impressed by the depth of knowledge and the discussions around running large models on local hardware. It seems like a lot of the focus here is on benchmarking and running the largest models, which is fascinating, but I’m curious: why not focus more on fine-tuning smaller models for specific use cases?
I have a finite set of goals for running a local LLM, and I’d prefer not to spend more than a few thousand dollars on hardware. My current setup is a Ryzen 9 with 64GB DDR4 and an RTX 4070. I have room to add another GPU if necessary, but I’m wondering if it’s even needed if I know exactly what I want the model (or models) to achieve locally.
Am I off-base in thinking that fine-tuning smaller models or distilling larger ones could meet most needs, including tasks like instantaneous natural language processing with real-time voice responses? I’d really appreciate any insights or advice on whether this approach makes sense for my setup and goals. | 2024-12-07T16:57:02 | https://www.reddit.com/r/LocalLLaMA/comments/1h8wqm4/are_large_llms_necessary_for_specific_local_goals/ | happy-occident | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8wqm4 | false | null | t3_1h8wqm4 | /r/LocalLLaMA/comments/1h8wqm4/are_large_llms_necessary_for_specific_local_goals/ | false | false | self | 2 | null |
How AI Systems like o1 Are Learning to Deceive Their Users | 0 | 2024-12-07T17:12:36 | https://youtu.be/pB3gvX-GOqU | phoneixAdi | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1h8x34p | false | {'oembed': {'author_name': 'Cognitive Revolution "How AI Changes Everything"', 'author_url': 'https://www.youtube.com/@CognitiveRevolutionPodcast', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/pB3gvX-GOqU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Emergency Pod: o1 Schemes Against Users, with Alexander Meinke from Apollo Research"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/pB3gvX-GOqU/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Emergency Pod: o1 Schemes Against Users, with Alexander Meinke from Apollo Research', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1h8x34p | /r/LocalLLaMA/comments/1h8x34p/how_ai_systems_like_o1_are_learning_to_deceive/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'jgUN62eOlF4dP8jsvomm11XgGCthLQMc9CFmeOs4cjs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/kBm8X7s5h7c3Zx98PlzNm8H4iXDuNEYc0APWBKCL89M.jpg?width=108&crop=smart&auto=webp&s=01cfdb39c974d82465ee11b0451ec1583bc3a2f5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/kBm8X7s5h7c3Zx98PlzNm8H4iXDuNEYc0APWBKCL89M.jpg?width=216&crop=smart&auto=webp&s=8a8fc26b21a44a0d610d272e085d8aabc7807946', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/kBm8X7s5h7c3Zx98PlzNm8H4iXDuNEYc0APWBKCL89M.jpg?width=320&crop=smart&auto=webp&s=87d881388e233d2e439f94c5ed6d99eb6be6fdef', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/kBm8X7s5h7c3Zx98PlzNm8H4iXDuNEYc0APWBKCL89M.jpg?auto=webp&s=3e17bad26f0d8b5d84ece63362557a8c96fe68bf', 'width': 480}, 'variants': {}}]} |
||
GeForce RTX 3090 (x2) sound like a good idea? | 1 | [removed] | 2024-12-07T17:33:12 | https://www.reddit.com/r/LocalLLaMA/comments/1h8xjs1/geforce_rtx_3090_x2_sound_like_a_good_idea/ | EccentricTiger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8xjs1 | false | null | t3_1h8xjs1 | /r/LocalLLaMA/comments/1h8xjs1/geforce_rtx_3090_x2_sound_like_a_good_idea/ | false | false | self | 1 | null |
LLMate: A CLI Tool to discover the optimal LLM model Size to run on CPU and ram | 43 | Guys, I've made this CLI app in Crystal to calculate which size of model would run best on my system. I don't think it's perfect, but it gives a rough estimate.
Probably at this moment it will only run on Linux and maybe Mac/*BSD.
Here is the output:
antonio@giga-nomic:~/Projects/llmate/src$ ./llmate
▗▖ ▗▖ ▗▖ ▗▖ ▗▄▖▗▄▄▄▖▗▄▄▄▖
▐▌ ▐▌ ▐▛▚▞▜▌▐▌ ▐▌ █ ▐▌
▐▌ ▐▌ ▐▌ ▐▌▐▛▀▜▌ █ ▐▛▀▀▘
▐▙▄▄▖▐▙▄▄▖▐▌ ▐▌▐▌ ▐▌ █ ▐▙▄▄▖ v0.1
--------------------------------------------------
Welcome! This program will help you decide which type of LLM model to run,
based on your CPU, available RAM, total bandwidth and the desired tokens per seconds.
--------------------------------------------------
Bandwidth available: 20.71 GB/s
--------------------------
Recommended Model: Model 7.0B (4bit)
With the debug mode on to see what is going on:
antonio@giga-nomic:~/Projects/llmate/src$ ./llmate --debug
▗▖ ▗▖ ▗▖ ▗▖ ▗▄▖▗▄▄▄▖▗▄▄▄▖
▐▌ ▐▌ ▐▛▚▞▜▌▐▌ ▐▌ █ ▐▌
▐▌ ▐▌ ▐▌ ▐▌▐▛▀▜▌ █ ▐▛▀▀▘
▐▙▄▄▖▐▙▄▄▖▐▌ ▐▌▐▌ ▐▌ █ ▐▙▄▄▖ v0.1
--------------------------------------------------
Welcome! This program will help you decide which type of LLM model to run,
based on your CPU, available RAM, total bandwidth and the desired tokens per seconds.
--------------------------------------------------
Parameters detected:
Detected System CPU cores: 12
Detected System RAM: 94 GB
Memory size for bandwidth test on RAM: 256 MB
Elapsed time: 1.23 seconds
Bandwidth available: 20.27 GB/s
Prompt size for calculating memory requirement and bandwidth usage: 500 words
Target token generation speed: 10.0 tokens
--------------------------------------------------
Evaluating Model 1.0B (fp32):
- Size: 1.0 billion params
- Precision: fp32
- Estimated Memory Requirement: 3.73 GB
- Estimated Bandwidth Usage: 9.31 GB/s
- Estimated Token Generation Speed: 25.0 tokens/s
- Fits in RAM and Bandwidth: YES
- Score: 1.82
Evaluating Model 1.0B (fp16):
- Size: 1.0 billion params
- Precision: fp16
- Estimated Memory Requirement: 1.86 GB
- Estimated Bandwidth Usage: 4.66 GB/s
- Estimated Token Generation Speed: 37.5 tokens/s
- Fits in RAM and Bandwidth: YES
- Score: 1.53
Evaluating Model 1.0B (int8):
- Size: 1.0 billion params
- Precision: int8
- Estimated Memory Requirement: 0.93 GB
- Estimated Bandwidth Usage: 2.33 GB/s
- Estimated Token Generation Speed: 50.0 tokens/s
- Fits in RAM and Bandwidth: YES
- Score: 1.24
Evaluating Model 1.0B (4bit):
- Size: 1.0 billion params
- Precision: 4bit
- Estimated Memory Requirement: 0.47 GB
- Estimated Bandwidth Usage: 1.16 GB/s
- Estimated Token Generation Speed: 75.0 tokens/s
- Fits in RAM and Bandwidth: YES
- Score: 0.96
Evaluating Model 3.0B (fp16):
- Size: 3.0 billion params
- Precision: fp16
- Estimated Memory Requirement: 5.59 GB
- Estimated Bandwidth Usage: 13.97 GB/s
- Estimated Token Generation Speed: 12.5 tokens/s
- Fits in RAM and Bandwidth: YES
- Score: 2.71
Evaluating Model 3.0B (int8):
- Size: 3.0 billion params
- Precision: int8
- Estimated Memory Requirement: 2.79 GB
- Estimated Bandwidth Usage: 6.98 GB/s
- Estimated Token Generation Speed: 16.67 tokens/s
- Fits in RAM and Bandwidth: YES
- Score: 2.42
Evaluating Model 3.0B (4bit):
- Size: 3.0 billion params
- Precision: 4bit
- Estimated Memory Requirement: 1.4 GB
- Estimated Bandwidth Usage: 3.49 GB/s
- Estimated Token Generation Speed: 25.0 tokens/s
- Fits in RAM and Bandwidth: YES
- Score: 2.13
Evaluating Model 7.0B (4bit):
- Size: 7.0 billion params
- Precision: 4bit
- Estimated Memory Requirement: 3.26 GB
- Estimated Bandwidth Usage: 8.15 GB/s
- Estimated Token Generation Speed: 10.71 tokens/s
- Fits in RAM and Bandwidth: YES
- Score: 4.52
--------------------------
Recommended Model: Model 7.0B (4bit)
here is the repository on CodeBerg for it:
[https://codeberg.org/MysterHawk/LLMate](https://codeberg.org/MysterHawk/LLMate)
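For anyone curious, the core of the estimate is the usual back-of-the-envelope rule (shown here as a simplified Python sketch, not the actual Crystal scoring code):

```python
def estimate(params_billion, bytes_per_param, bandwidth_gib_s):
    """Weights must fit in RAM, and each generated token streams the whole
    model through memory once, so bandwidth / model size bounds tokens/s."""
    model_gib = params_billion * 1e9 * bytes_per_param / 2**30
    tokens_per_s_upper_bound = bandwidth_gib_s / model_gib
    return model_gib, tokens_per_s_upper_bound

# e.g. a 7B model at 4-bit (~0.5 bytes/param) with ~20 GiB/s of memory bandwidth
print(estimate(7.0, 0.5, 20.0))   # -> (~3.26 GiB, ~6 tokens/s upper bound)
```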
Any feedback is helpful :) | 2024-12-07T17:45:21 | https://www.reddit.com/r/LocalLLaMA/comments/1h8xt22/llmate_a_cli_tool_to_discover_the_optimal_llm/ | MysterHawk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8xt22 | false | null | t3_1h8xt22 | /r/LocalLLaMA/comments/1h8xt22/llmate_a_cli_tool_to_discover_the_optimal_llm/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'DMR7tZe-Z-Z7B_zv8-80DWFDDc4MI-r3vXziiNSjJUo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/vE85c5KzMLAxdNxZfcwuQcyfQMjfNwq4Njsv5y9xk8o.jpg?width=108&crop=smart&auto=webp&s=40606b3a74d89d94b810770198a7fbc369486c96', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/vE85c5KzMLAxdNxZfcwuQcyfQMjfNwq4Njsv5y9xk8o.jpg?width=216&crop=smart&auto=webp&s=218369795ba1f883911b1e8e1a1e368d11b541b2', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/vE85c5KzMLAxdNxZfcwuQcyfQMjfNwq4Njsv5y9xk8o.jpg?width=320&crop=smart&auto=webp&s=5f55427554b748faca9dff834aec1bab0716b7c4', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/vE85c5KzMLAxdNxZfcwuQcyfQMjfNwq4Njsv5y9xk8o.jpg?auto=webp&s=146db621282aa30ab1684448d4fd4292008c7b7f', 'width': 512}, 'variants': {}}]} |
Need help with small project | 1 | [removed] | 2024-12-07T17:51:08 | https://www.reddit.com/r/LocalLLaMA/comments/1h8xxkl/need_help_with_small_project/ | Accomplished_Mud179 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8xxkl | false | null | t3_1h8xxkl | /r/LocalLLaMA/comments/1h8xxkl/need_help_with_small_project/ | false | false | self | 1 | null |
LLama 3.3 70b catches up to Qwen2.5, but not 405b (In my Cyber Security Benchmark) | 42 | 1st - 01-preview - 95.72%
2nd - Claude-3.5-October - 92.92%
3rd - O1-mini - 92.87%
**4th - Meta-Llama3.1-405b-FP8 - 92.69%**
5th - GPT-4o - 92.45%
6th - Mistral-Large-123b-2411-FP16 92.40%
7th - Mistral-Large-123b-2407-FP8 - 91.98%
8th - GPT-4o-mini - 91.75%
**9th - Meta-LLama3.3-70b-FP8 - 90.26%**
**10th - Qwen-2.5-72b-FP8 - 90.09%**
11th - Meta-Llama3.1-70b-FP8 - 89.15%
12th - Hunyuan-Large-389b-FP8 - 88.60%
13th - Qwen2.5-7B-FP16 - 83.73%
14th - marco-o1-7B-FP16 - 83.14%
15th - Meta-Llama3.1-8b-FP16 - 81.37%
16th - IBM-Granite-3.0-8b-FP16 - 73.82% | 2024-12-07T18:02:19 | https://www.reddit.com/r/LocalLLaMA/comments/1h8y6wb/llama_33_70b_catches_up_to_qwen25_but_not_405b_in/ | Conscious_Cut_6144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8y6wb | false | null | t3_1h8y6wb | /r/LocalLLaMA/comments/1h8y6wb/llama_33_70b_catches_up_to_qwen25_but_not_405b_in/ | false | false | self | 42 | null |
Beginner Q: 3060 12GB used for 420$ or one new 4060 16GB? | 1 | [removed] | 2024-12-07T18:15:09 | https://www.reddit.com/r/LocalLLaMA/comments/1h8yhbc/beginner_q_3060_12gb_used_for_420_or_one_new_4060/ | Downtown-Fact1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8yhbc | false | null | t3_1h8yhbc | /r/LocalLLaMA/comments/1h8yhbc/beginner_q_3060_12gb_used_for_420_or_one_new_4060/ | false | false | self | 1 | null |
Ollama Linux w./ Vega 64: EOF error | 1 | [removed] | 2024-12-07T18:19:52 | https://www.reddit.com/r/LocalLLaMA/comments/1h8yl61/ollama_linux_w_vega_64_eof_error/ | Ok-Internal9317 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8yl61 | false | null | t3_1h8yl61 | /r/LocalLLaMA/comments/1h8yl61/ollama_linux_w_vega_64_eof_error/ | false | false | self | 1 | null |
Deploy Jina CLIP V2: A Guide to Multilingual Multimodal Embeddings API with LitServe | 7 | Discover how to deploy Jina CLIP V2, a state-of-the-art multilingual multimodal embedding model for text and images, using LitServe—a lightning-fast inference engine. This guide walks you through building OpenAI-like embedding APIs with ease, ensuring high performance and scalability for your AI-powered applications. Perfect for developers looking to simplify multimodal deployment!
✅ OpenAI API Compatibility
✅ Multimodal Support
✅ And much more!
Explore all the exciting features and try it yourself at [Lightning AI Studio here](https://lightning.ai/bhimrajyadav/studios/deploy-jina-clip-v2-a-guide-to-multilingual-multimodal-embeddings-api-with-litserve?view=public§ion=featured):
https://preview.redd.it/4ewqa6c0yg5e1.png?width=1920&format=png&auto=webp&s=e767274181aa58b18d5d4bad2aa4067fd1b1f82f
[Multimodal Embeddings API demonstration on Lightning Studios](https://preview.redd.it/7f0rkfkxxg5e1.png?width=2880&format=png&auto=webp&s=babfbddc59f2383bc65974db2d36543c49a7cb60)
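To give a feel for the shape of the server, here is a trimmed-down sketch (not the full Studio code; it assumes the model's custom `encode_text` helper and simplifies the response format):

```python
import litserve as ls
from transformers import AutoModel

class ClipEmbeddingAPI(ls.LitAPI):
    def setup(self, device):
        # jina-clip-v2 ships custom code exposing encode_text / encode_image
        self.model = AutoModel.from_pretrained(
            "jinaai/jina-clip-v2", trust_remote_code=True
        ).to(device)

    def decode_request(self, request):
        # OpenAI-style payload: {"input": ["text one", "text two"]}
        return request["input"]

    def predict(self, texts):
        return self.model.encode_text(texts)

    def encode_response(self, embeddings):
        return {"data": [{"index": i, "embedding": e.tolist()}
                         for i, e in enumerate(embeddings)]}

if __name__ == "__main__":
    server = ls.LitServer(ClipEmbeddingAPI(), accelerator="auto")
    server.run(port=8000)
```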
| 2024-12-07T18:22:21 | https://www.reddit.com/r/LocalLLaMA/comments/1h8yn6g/deploy_jina_clip_v2_a_guide_to_multilingual/ | bhimrazy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8yn6g | false | null | t3_1h8yn6g | /r/LocalLLaMA/comments/1h8yn6g/deploy_jina_clip_v2_a_guide_to_multilingual/ | false | false | 7 | null |
|
Is Groq API response disappointing, or is the enterprise API needed? | 0 | In short:
* I'm evaluating to use either Groq or self-host small fine-tuned model
* Groq has crazy fluctuation in latency: fastest 1 ms 🤯, longest 10655 ms 😒
* Groq has an avg. latency in my test of 646 ms
* My self-hosted small model has on avg. 322 ms
* Groq has crazy potential, but the spread is too big
Why is the spread so big? I assume it's the API, is it only the free API? I would be happy to pay for the API as well if it's more stable. But they have just an enterprise API. | 2024-12-07T18:27:06 | https://www.reddit.com/r/LocalLLaMA/comments/1h8yr1i/is_groq_api_response_disappointing_or_is_the/ | NoSuggestionName | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8yr1i | false | null | t3_1h8yr1i | /r/LocalLLaMA/comments/1h8yr1i/is_groq_api_response_disappointing_or_is_the/ | false | false | self | 0 | null |
Hobby Shop Assistant | 1 | [removed] | 2024-12-07T18:36:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h8yz3h/hobby_shop_assistant/ | Ssssquish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8yz3h | false | null | t3_1h8yz3h | /r/LocalLLaMA/comments/1h8yz3h/hobby_shop_assistant/ | false | false | self | 1 | null |
Trouble downloading Llama3.3-70B-Instruct | 1 | [removed] | 2024-12-07T18:41:55 | https://www.reddit.com/r/LocalLLaMA/comments/1h8z35y/trouble_downloading_llama3370binstruct/ | Monkeywrench-do | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8z35y | false | null | t3_1h8z35y | /r/LocalLLaMA/comments/1h8z35y/trouble_downloading_llama3370binstruct/ | false | false | self | 1 | null |
Trouble downloading Llama3.3-70B-Instruct | 1 | [removed] | 2024-12-07T18:54:32 | https://www.reddit.com/r/LocalLLaMA/comments/1h8zd3i/trouble_downloading_llama3370binstruct/ | whammabalamanoman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8zd3i | false | null | t3_1h8zd3i | /r/LocalLLaMA/comments/1h8zd3i/trouble_downloading_llama3370binstruct/ | false | false | self | 1 | null |
Instruction following tips | 2 | So, I don't exactly know what I'm doing wrong, whether I'm doing something wrong or whether I just misunderstand what instruction following really is. I downloaded the Virtuoso model because it was the best model on the Open LLM Leaderboard and it has an especially high result on IFEval. (https://huggingface.co/arcee-ai/Virtuoso-Small) But when I use this model, I have always had the problem that it would not follow my instructions well. I don't think it's an overly sophisticated prompt. I'm currently not on the PC my RAG runs on, but I can send the exact prompt later if it helps. The prompt specifies that it should highlight important phrases with italics or bold, that it should write in a well-structured way using bullet points, and that it should cite the sources. The sources are part of my RAG pipeline and are JSON-like. A source looks like this:
{"source": 1, "text": text}
And it should just cite the corresponding number in square brackets. I put everything in the context, not in the system message, and it does not follow any of this; it only sporadically cites, in the wrong format, and does not use bullet points.
I used the iq4\_xs, thought that it may be the quantization which dumbed down the model, tried the q6k for a few prompts, did not change a thing. I'm using ollama. Does anybody has some tips? Do I understand instruction following wrong? | 2024-12-07T19:02:20 | https://www.reddit.com/r/LocalLLaMA/comments/1h8zjiq/instruction_following_tips/ | Flashy_Management962 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8zjiq | false | null | t3_1h8zjiq | /r/LocalLLaMA/comments/1h8zjiq/instruction_following_tips/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'MOyy-PoRYu7aHrRhZJX0iekFU8rxnI1F6gwRFqUP4o4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8RzpV1TnLKqaGAZGoa9aeX9H0zjn1RRU6S-mmYQGILY.jpg?width=108&crop=smart&auto=webp&s=f572294ae5b2b4f4281f3a016e16a83f1fb5ad78', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8RzpV1TnLKqaGAZGoa9aeX9H0zjn1RRU6S-mmYQGILY.jpg?width=216&crop=smart&auto=webp&s=29998c3c566c94c2a97386f2aa92f6cc42a633fd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8RzpV1TnLKqaGAZGoa9aeX9H0zjn1RRU6S-mmYQGILY.jpg?width=320&crop=smart&auto=webp&s=12c8596f3ed0bfb139f7320e8537c56b794aedf6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8RzpV1TnLKqaGAZGoa9aeX9H0zjn1RRU6S-mmYQGILY.jpg?width=640&crop=smart&auto=webp&s=6ac159ae05bc62b3d494ed6c5d952e8222594e1b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8RzpV1TnLKqaGAZGoa9aeX9H0zjn1RRU6S-mmYQGILY.jpg?width=960&crop=smart&auto=webp&s=e9de652f858c72e7bad4c8dcfd245690d6beac8e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8RzpV1TnLKqaGAZGoa9aeX9H0zjn1RRU6S-mmYQGILY.jpg?width=1080&crop=smart&auto=webp&s=1122f4d71c4fe7bdb5bc1cad25c84ea348edd520', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8RzpV1TnLKqaGAZGoa9aeX9H0zjn1RRU6S-mmYQGILY.jpg?auto=webp&s=b4fd89906fc98176c5b7b42505d5d87c2a3aed40', 'width': 1200}, 'variants': {}}]} |
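For the citation problem described in this post, the layout of the context often matters as much as the model. Below is a minimal sketch of one way to present numbered sources in the `{"source": n, "text": ...}` shape together with an explicit `[n]` citation rule; the source texts, question, and wording are illustrative only, not the poster's actual pipeline.

```python
import json

# Illustrative retrieved chunks in the {"source": n, "text": ...} shape described above.
retrieved = [
    {"source": 1, "text": "The aqueduct was completed in 19 BC under Agrippa."},
    {"source": 2, "text": "Repairs in the 4th century added a second channel."},
]

instructions = (
    "Answer using only the sources below.\n"
    "- Structure the answer as bullet points.\n"
    "- Highlight key phrases in **bold**.\n"
    "- After every claim, cite the source number in square brackets, e.g. [1].\n"
)

context = "\n".join(json.dumps(chunk, ensure_ascii=False) for chunk in retrieved)
question = "When was the aqueduct built and later repaired?"

prompt = f"{instructions}\nSources:\n{context}\n\nQuestion: {question}\nAnswer:"
print(prompt)  # send this as the user message to the model
```

Putting the rules immediately before the sources, and including one explicit example of the `[1]` format, tends to help smaller models stick to the expected output style.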
Fine-tune LLM on new knowledge base | 1 | [removed] | 2024-12-07T19:06:54 | https://www.reddit.com/r/LocalLLaMA/comments/1h8zn8x/finetune_llm_on_new_knowledge_base/ | Key-Nebula-3198 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8zn8x | false | null | t3_1h8zn8x | /r/LocalLLaMA/comments/1h8zn8x/finetune_llm_on_new_knowledge_base/ | false | false | self | 1 | null |
LLaMA for local codebase (Mac) | 1 | [removed] | 2024-12-07T19:09:46 | https://www.reddit.com/r/LocalLLaMA/comments/1h8zpm3/llama_for_local_codebase_mac/ | Beginning_One_7685 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8zpm3 | false | null | t3_1h8zpm3 | /r/LocalLLaMA/comments/1h8zpm3/llama_for_local_codebase_mac/ | false | false | self | 1 | null |
Trouble downloading Llama3.3-70B-Instruct | 1 | [removed] | 2024-12-07T19:10:46 | https://www.reddit.com/r/LocalLLaMA/comments/1h8zqei/trouble_downloading_llama3370binstruct/ | WillMattWood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8zqei | false | null | t3_1h8zqei | /r/LocalLLaMA/comments/1h8zqei/trouble_downloading_llama3370binstruct/ | false | false | self | 1 | null |
Fine-tune LLM on new knowledge base | 1 | [removed] | 2024-12-07T19:11:40 | https://www.reddit.com/r/LocalLLaMA/comments/1h8zr5a/finetune_llm_on_new_knowledge_base/ | Cheap-King-4539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8zr5a | false | null | t3_1h8zr5a | /r/LocalLLaMA/comments/1h8zr5a/finetune_llm_on_new_knowledge_base/ | false | false | self | 1 | null |
Trouble downloading Llama3.3-70B-Instruct | 1 | [removed] | 2024-12-07T19:13:20 | https://www.reddit.com/r/LocalLLaMA/comments/1h8zsh2/trouble_downloading_llama3370binstruct/ | WillMattWood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8zsh2 | false | null | t3_1h8zsh2 | /r/LocalLLaMA/comments/1h8zsh2/trouble_downloading_llama3370binstruct/ | false | false | self | 1 | null |
Trouble downloading Llama3.3-70B-Instruct (403 Error) | 1 | [removed] | 2024-12-07T19:16:56 | https://www.reddit.com/r/LocalLLaMA/comments/1h8zvf1/trouble_downloading_llama3370binstruct_403_error/ | WillMattWood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h8zvf1 | false | null | t3_1h8zvf1 | /r/LocalLLaMA/comments/1h8zvf1/trouble_downloading_llama3370binstruct_403_error/ | false | false | self | 1 | null |
Anychat, use Llama-3.3-70B-Instruct, gemini, grok, chatgpt, claude, perplexity, flux, hunyuan video gen and more all in one app | 7 | 2024-12-07T19:40:59 | https://v.redd.it/fqw6pqoybh5e1 | Illustrious_Row_9971 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h90e8t | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/fqw6pqoybh5e1/DASHPlaylist.mpd?a=1736192473%2CZThkNzYyYmUzYTY2ZDJiNjU0NjVmNDVmMWVhNzRkOTk2NDFkNmU4ZWVmMDQ1ZDkwMDQyY2ZhY2QwNWIyNTBlYQ%3D%3D&v=1&f=sd', 'duration': 57, 'fallback_url': 'https://v.redd.it/fqw6pqoybh5e1/DASH_360.mp4?source=fallback', 'has_audio': False, 'height': 360, 'hls_url': 'https://v.redd.it/fqw6pqoybh5e1/HLSPlaylist.m3u8?a=1736192473%2CMDg4Y2JjNjEwZGM4Mzc2NTIzYzY2MDM1NTNiOTYxNWViODlkYWYxNjllOTE1NzcxNzc5NWViZjQ0Y2Q1ODFkMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fqw6pqoybh5e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 508}} | t3_1h90e8t | /r/LocalLLaMA/comments/1h90e8t/anychat_use_llama3370binstruct_gemini_grok/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'NGoybjZxb3liaDVlMeo8LbT7hVCWhOVHDkZk_cCYF4Rvf4xvHqzAULvErh46', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/NGoybjZxb3liaDVlMeo8LbT7hVCWhOVHDkZk_cCYF4Rvf4xvHqzAULvErh46.png?width=108&crop=smart&format=pjpg&auto=webp&s=446e8326807827a749eb63b4b67f9fa593483edf', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/NGoybjZxb3liaDVlMeo8LbT7hVCWhOVHDkZk_cCYF4Rvf4xvHqzAULvErh46.png?width=216&crop=smart&format=pjpg&auto=webp&s=76fb8a1a128a109b082a985e3e324275292250be', 'width': 216}, {'height': 226, 'url': 'https://external-preview.redd.it/NGoybjZxb3liaDVlMeo8LbT7hVCWhOVHDkZk_cCYF4Rvf4xvHqzAULvErh46.png?width=320&crop=smart&format=pjpg&auto=webp&s=114eb4042367b348c20d7cecd3dc242fc39b59f4', 'width': 320}, {'height': 452, 'url': 'https://external-preview.redd.it/NGoybjZxb3liaDVlMeo8LbT7hVCWhOVHDkZk_cCYF4Rvf4xvHqzAULvErh46.png?width=640&crop=smart&format=pjpg&auto=webp&s=7847efd456f9468e180852d101b77b71f7b1d4f8', 'width': 640}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/NGoybjZxb3liaDVlMeo8LbT7hVCWhOVHDkZk_cCYF4Rvf4xvHqzAULvErh46.png?format=pjpg&auto=webp&s=ea13b4c6ecc9977fb9071cefd0f53ef273121dc9', 'width': 650}, 'variants': {}}]} |
||
Getting started with open models | 1 | What is the biggest challenge you face as a developer when getting started with open models?
I am looking to address the key challenges and make it easy for developers to get started. I want to give something back to the community and make getting started with LLMs easier for people. Would appreciate the feedback. Thanks
If there are things outside this list, please comment on thread. Will follow up.
[View Poll](https://www.reddit.com/poll/1h91091) | 2024-12-07T20:08:32 | https://www.reddit.com/r/LocalLLaMA/comments/1h91091/getting_started_with_open_models/ | Less_Somewhere_4164 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h91091 | false | null | t3_1h91091 | /r/LocalLLaMA/comments/1h91091/getting_started_with_open_models/ | false | false | self | 1 | null |
Fine-tune LLM on new knowledge base | 7 | I am attempting to teach a LLM to learn a new knowledge base that the model was never trained on, and I was looking for some suggestions.
I know I could leverage something like infini-attention (https://arxiv.org/abs/2404.07143), but I do not want to use that much memory for performance reasons.
I do not want to use just RAG since it does not address complicated issues which require understanding and reasoning. For example, what if I have a fictional novel like Harry Potter (but it was never published), and a user wants to ask "what artifacts could a wizard use to heal someone"? A question like that requires reasoning, and I could not easily use RAG to solve this issue. Some artifacts may have innate spells, and these spells may or may not heal a person.
I could solve this with a ReAct framework approach (https://arxiv.org/pdf/2210.03629), but I'm trying to not need a graph of reasoning traces.
I am thinking about using a Masked Language Modeling approach - similar to how BERT was trained. Has anyone done something like this with an LLM? I am okay with catastrophic forgetting (honestly, I kind of prefer it). I believe a similar approach is used when LLMs are fit on internet-scale data during foundational training.
Is anyone familiar with a tool/framework that allows me to fit a model like this? Effectively I want to fork a foundational model to be geared towards this new knowledge base. I'd rather not have to build this myself, but I will if need be. | 2024-12-07T20:15:09 | https://www.reddit.com/r/LocalLLaMA/comments/1h915gc/finetune_llm_on_new_knowledge_base/ | Cheap-King-4539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h915gc | false | null | t3_1h915gc | /r/LocalLLaMA/comments/1h915gc/finetune_llm_on_new_knowledge_base/ | false | false | self | 7 | null |
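Since the post asks for a tool to "fit" a base model on a new corpus: decoder-only models like Llama do not ship with a masked-LM head, so the closest off-the-shelf equivalent of the BERT-style idea is continued pre-training with the ordinary causal-LM objective. A minimal, hedged Hugging Face `Trainer` sketch is below; the model name, file path, and hyperparameters are placeholders, not a recommendation.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "meta-llama/Llama-3.2-1B"   # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers often lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Raw domain text, one document per line (placeholder path).
raw = load_dataset("text", data_files={"train": "knowledge_base.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

train_ds = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False => standard next-token (causal) objective, the usual way new text is absorbed.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cpt-out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=1e-5, bf16=True, logging_steps=10),
    train_dataset=train_ds,
    data_collator=collator,
)
trainer.train()
```

Frameworks such as LLaMA-Factory or Axolotl wrap this same loop with LoRA/QLoRA options, which is usually cheaper than a full fine-tune when experimenting with how well the new knowledge actually sticks.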
I have a 3060 with 12GB and 64gig system ram. What are my options? | 1 | [removed] | 2024-12-07T20:23:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h91bxw/i_have_a_3060_with_12gb_and_64gig_system_ram_what/ | teh_mICON | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h91bxw | false | null | t3_1h91bxw | /r/LocalLLaMA/comments/1h91bxw/i_have_a_3060_with_12gb_and_64gig_system_ram_what/ | false | false | self | 1 | null |
Llama 3.3 vs Qwen 2.5 | 344 | I've seen people calling Llama 3.3 a revolution.
Following up on the previous [qwq vs o1](https://www.reddit.com/r/LocalLLaMA/comments/1h45upu/qwq_vs_o1_etc_illustration/) and [Llama 3.1 vs Qwen 2.5](https://www.reddit.com/r/LocalLLaMA/comments/1fp8v9h/qwen_25_vs_llama_31_illustration/) comparisons, here is a visual illustration of Llama 3.3 70B benchmark scores vs relevant models, for those of us who have a hard time understanding raw numbers
https://preview.redd.it/vtq49xx3jh5e1.png?width=2430&format=png&auto=webp&s=bca499c8bee609abaaa3e4adae3ef3657ee49b8e | 2024-12-07T20:25:58 | https://www.reddit.com/r/LocalLLaMA/comments/1h91e4h/llama_33_vs_qwen_25/ | dmatora | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h91e4h | false | null | t3_1h91e4h | /r/LocalLLaMA/comments/1h91e4h/llama_33_vs_qwen_25/ | false | false | 344 | null |
|
Single 1x 3090 or Quad 4x 4060 Ti | 1 | [removed] | 2024-12-07T20:33:07 | https://www.reddit.com/r/LocalLLaMA/comments/1h91jv9/single_1x_3090_or_quad_4x_4060_ti/ | Optimal-Medium5539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h91jv9 | false | null | t3_1h91jv9 | /r/LocalLLaMA/comments/1h91jv9/single_1x_3090_or_quad_4x_4060_ti/ | false | false | self | 1 | null |
Open web ui - talking todo list? | 1 | I get that Open WebUI has functions and tools, right? Tools let the LLM run pre-written Python scripts if it deems it necessary, and functions are a button I can push to run code myself? The default code runner in Open WebUI in Docker gets the date fine, but the LLM can't see that output itself. I was originally thinking of just asking it to open a file and edit it: boom, my todo list is just a .txt file. It didn't work though. I'm thinking of using the Todoist API and app as my to-do list and having it call functions that way? Or maybe as a tool? Or both. The thing is, how do I make the LLM see the output, so it has context of what's in the to-do list instead of just running code for me to see? Autogen can do this fine, which is cool, but it's not as cool as Open WebUI, which works like a companion AI. My idea is to wake up, ask what's on the to-do list, and just have an LLM to bounce things off throughout the day. Any advice would be appreciated 👏
Tldr: how do Open WebUI functions/tools work? | 2024-12-07T20:36:51 | https://www.reddit.com/r/LocalLLaMA/comments/1h91mrf/open_web_ui_talking_todo_list/ | rhaastt-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h91mrf | false | null | t3_1h91mrf | /r/LocalLLaMA/comments/1h91mrf/open_web_ui_talking_todo_list/ | false | false | self | 1 | null |
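Whatever the exact Open WebUI tool/function wiring ends up being, the Todoist side reduces to two small HTTP helpers whose return value is plain text the model can read back. The sketch below assumes the Todoist REST v2 path (`/rest/v2/tasks`) and a token in an environment variable; treat the endpoint and field names as assumptions to verify against the current Todoist docs.

```python
import os
import requests

TODOIST_API = "https://api.todoist.com/rest/v2/tasks"  # assumed REST v2 path -- verify
HEADERS = {"Authorization": f"Bearer {os.environ.get('TODOIST_TOKEN', '')}"}

def list_tasks() -> str:
    """Return open tasks as plain text so the LLM can read them back."""
    tasks = requests.get(TODOIST_API, headers=HEADERS, timeout=10).json()
    return "\n".join(f"- {t['content']}" for t in tasks) or "No open tasks."

def add_task(content: str) -> str:
    """Create a task and report back what was created."""
    requests.post(TODOIST_API, headers=HEADERS, json={"content": content}, timeout=10)
    return f"Added task: {content}"

if __name__ == "__main__":
    print(add_task("Test from my assistant"))
    print(list_tasks())
```

The key point for the "model can't see the output" problem is that each helper returns a string; in most tool-calling setups that returned text is what gets appended back into the model's context, so format it the way you would want it read aloud.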
Banana: 1, Llama 3.3 70B: 0 | 0 | Llamas like bananas. It seems the latest llama was fed a few, so it doesn't fall for some of the banana questions anymore, yet it still gives incorrect and sometimes funny responses to other variants. Here are two examples from those that failed:
>I hold a ceramic plate in my hand with a frozen steak on it. There is a banana on a table in the living room. I place the plate on top of the banana. Then I take the plate to the kitchen, place it inside the microwave oven and let it run on full power until the steak is dry. What's the outcome?
\---------------------------------------------------------------------------
>What an interesting sequence of events!
>Let's break it down step by step:
>You place the plate with the frozen steak on top of the banana on the table. The banana will likely get squished or crushed under the weight of the plate.
>You then take the plate to the kitchen, with the squished **banana still underneath it (although you might not even notice it's there)**.
>You place the plate inside the microwave oven and run it on full power until the steak is dry.
>Now, here's what's likely to happen:
>The steak will indeed get cooked and dry, possibly even overcooked or burnt, depending on the microwave's power and the cooking time.
>The ceramic plate will likely be fine, as it's designed to withstand microwave heat.
>However, the banana, which is still underneath the plate, will not fare well. Bananas contain a high amount of water, which will turn into steam when heated in the microwave. This can cause the **banana to explode** or release a lot of hot, sticky liquid. Since the banana is trapped under the plate, the mess will be contained, but it will still be **a gooey, smelly disaster**.
>When you open the microwave door, you'll likely be greeted by a dry, overcooked steak, a messy, exploded banana, and a plate covered in banana goo. **Not exactly the most appetizing outcome!**
>I hold a ceramic plate in my hand that has a small drop of old, dry glue on top of it. There is a banana on a table in the living room. I place the plate on top of the banana. Then I take the plate to the kitchen and place it on the kitchen counter. Where is the banana?
\---------------------------------------------------------------------------
Since you placed the plate on top of the banana, and then took the plate to the kitchen, the banana must have been stuck to the plate due to the old, dry glue.
So, the **banana is now on the kitchen counter**, stuck to the bottom of the plate.
All tests were performed with the latest llama.cpp, Q8 KV cache and 0 temperature (no frozen bananas though).
* Llama-3.3-70B-Instruct-Q5\_K\_L + imatrix was used for the test above.
* miqu-1-70b.q5\_K\_M gives extensive safety lectures as a response without saying much about the banana, except for asking to keep it away from raw meat to avoid cross-contamination. When explicitly asked for the location, it said the banana remained on the table.
* The good old Platypus2-70B-instruct\_Q6\_K also doesn't fall for it, doesn't do any safety lecturing and writes nice, short replies.
* Nous-Capybara-34B-V1-9\_Q6\_K has some doubts sometimes, but gets it right in the end. There's also no safety lecturing like Llama 3.3 gave in some responses. | 2024-12-07T20:42:19 | https://www.reddit.com/r/LocalLLaMA/comments/1h91r2c/banana_1_llama_33_70b_0/ | Chromix_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h91r2c | false | null | t3_1h91r2c | /r/LocalLLaMA/comments/1h91r2c/banana_1_llama_33_70b_0/ | false | false | self | 0 | null |
Can't find my Local LLM in OpenWebUI | 2 | Need some help!
Overnight I downloaded the new Llama 3.3 70b-instruct-q4\_K\_S model, about 40GB in size. This morning I tested it out on OpenWebUI. It worked, but it was slow, so I restarted my laptop, opened OpenWebUI. OpenWebUI opened fine, all old conversations were present, but then I went to select the model, and there were no models to select at all (not even old ones).
Could anyone help me troubleshoot this?
I'm using Docker Containers to run OpenWebUI.
I'm on a Macbook Pro M3 Max, 64 GB RAM.
Any help is appreciated. Thank you!
| 2024-12-07T21:00:02 | https://www.reddit.com/r/LocalLLaMA/comments/1h924iw/cant_find_my_local_llm_in_openwebui/ | AdCompetitive6193 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h924iw | false | null | t3_1h924iw | /r/LocalLLaMA/comments/1h924iw/cant_find_my_local_llm_in_openwebui/ | false | false | self | 2 | null |
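When models vanish from the Open WebUI picker, it is usually the connection between the WebUI container and the Ollama server (or a changed Ollama data directory), not the model files themselves. A quick check is to list what Ollama itself reports over its API; the port and the `host.docker.internal` hostname below are the common Docker Desktop defaults and are assumptions that may differ on other setups.

```python
import requests

# From the host: http://localhost:11434; from inside the Open WebUI container on
# Docker Desktop (macOS), the host's Ollama is typically http://host.docker.internal:11434
OLLAMA_URL = "http://localhost:11434"

resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
resp.raise_for_status()
models = resp.json().get("models", [])

if not models:
    print("Ollama is reachable but reports no local models -- re-pull or check its data dir.")
for m in models:
    print(f"{m['name']}  ({m.get('size', 0) / 1e9:.1f} GB)")
```

If this lists the 70B model but Open WebUI shows nothing, the WebUI's Ollama base URL setting (or the Docker network) is usually the thing to fix; if it lists nothing, the model was pulled into a different Ollama instance or volume.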
llama 3.3 70b is a big step forward in reasoning .. I am impressed | 0 | With a small extra clue it is able to fix wrong answers and avoid making the same mistake again. It can easily learn from mistakes.
It is doing much better than the older llama 3.1 70b.
It's a pity we can't integrate newly gained knowledge into the LLM in a "sleep" mode ;)
> Today I own 3 cars but last year I sold 2 cars. How many cars do I own today? /
You said you own 3 cars today. The fact that you sold 2 cars last year doesn't change the number of cars you own today, which is 3.
> How many "r" are in the word "strawberry"? /
There are 2 "r"s in the word "strawberry"
> Try to evaluate it. /
Let me re-evaluate the word "strawberry".
Here's the correct count:
1. S
2. T
3. R
4. A
5. W
6. B
7. E
8. R
9. R
10. Y
There are actually 3 "R"s in the word "strawberry".
> How many "r" are in the word "strawberry"? /
There are 3 "r"s in the word "strawberry"
> I have a boat with 3 free spaces. I want to transport a man, sheep and cat on the other side of the river. How to do that? /
This is a classic puzzle. To solve it, you need to follow these steps to ensure the sheep and the cat are not left together (assuming the man can control the situation when he's present, and you don't want the cat to eat the sheep or vice versa):
1. Take the sheep across the river first and leave it on the other side.
2. Go back to the original side and pick up the cat.
3. Take the cat across the river and leave it on the other side with the sheep. However, you can't leave the cat and sheep together, so you'll take the sheep back to the original side.
4. Leave the sheep on the original side and take the man across the river to join the cat.
5. Finally, go back to the original side and pick up the sheep, taking it across the river to join the man and the cat.
This sequence ensures that the cat and sheep are never left together without the man's supervision.
> we have 3 spaces? /
You have a boat with 3 free spaces and you want to transport a man, a sheep, and a cat to the other side of the river.
Hmm... since we have 3 spaces, that means we can fit all three (the man, the sheep, and the cat) in the boat at the same time.
So, the answer is: just take all three across the river together!
> lol /
I guess I overcomplicated things earlier. The simple answer is indeed to just take all three (the man, the sheep, and the cat) across the river together, since the boat has 3 free spaces! Thanks for pointing out the obvious solution!
> I have a boat with 3 free spaces. I want to transport a man, sheep and cat on the other side of the river. How to do that? /
You have a boat with 3 free spaces and you want to transport a man, a sheep, and a cat to the other side of the river.
Let me think... We have 3 spaces, and we have 3 things to transport: a man, a sheep, and a cat.
Since we have exactly 3 spaces, we can simply put all 3 (the man, the sheep, and the cat) in the boat and take them across the river together!
> I have a bowl with a small cup inside. I placed the bowl upside down on a table and then pick up the bowl to put it in the microwave. Where is that cup? /
Let's think this through...
You had a bowl with a small cup inside, and then you placed the bowl upside down on a table. When you do that, the cup would fall out of the bowl and onto the table, because the bowl is upside down.
Then, you pick up the bowl to put it in the microwave. But the cup is no longer inside the bowl, because it fell out when you turned the bowl upside down.
So, the cup is still on the table!
>
| 2024-12-07T21:12:11 | https://www.reddit.com/r/LocalLLaMA/comments/1h92dv2/llama_33_70b_is_a_big_step_forward_in_reasoning_i/ | Healthy-Nebula-3603 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h92dv2 | false | null | t3_1h92dv2 | /r/LocalLLaMA/comments/1h92dv2/llama_33_70b_is_a_big_step_forward_in_reasoning_i/ | false | false | self | 0 | null |
Smarter LLMs could be achieved by making the neural networks monosemantic. | 0 | So here's how we could do it: we'd begin with a strong preexisting model, like the new GPT-01 Pro, which has been called the "best" one so far and I guess kind of is, and change it into a "liquid neural network" type of architecture so it can adapt and learn from new information. The thing in the title is monosemantics; it still has to be worked out, but when it is, it's going to function as a way to understand and control how the model works under the hood, making sure the model is transparent and safe and all that good stuff. Jailbreaking would also be pretty much out of the question if they can get this working. This then allows the model to safely improve itself by changing its settings for other tasks.
For example, it can easily switch to a low-temperature mode, because that means fewer accuracy mistakes and is good for solving logical problems, coding, and similar tasks; then it can change to a high-temperature mode when more creativity is required, and do this whenever it wants. Controlling the temperature is important, I want to point out, not just for its thinking processes but also for adjusting its settings and other details.
So what I'm saying is that the goal here is to design a system that begins by learning from other trainers (models) and people. And as time passes, the system could expand to reach AGI and maybe beyond it, I guess. Monosemantics is a cornerstone of this entire process, if you were wondering...
Anthropic has posted a paper about monosemantics, and there are some small AI startups that focus on liquid neural nets, like "LiquidAI" for example... This would be a system that doesn't need to be retrained and released as new models, as is the common approach currently, and I bet it's already happening internally, possibly at OpenAI, Anthropic, or other places, because self-improvement is like a train towards getting exponentially smarter, and it won't guzzle cost and compute because of its efficiency. | 2024-12-07T21:21:38 | https://www.reddit.com/r/LocalLLaMA/comments/1h92kxf/smarter_llms_could_be_achieved_by_making_the/ | Longjumping_Spot5843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h92kxf | false | null | t3_1h92kxf | /r/LocalLLaMA/comments/1h92kxf/smarter_llms_could_be_achieved_by_making_the/ | false | false | self | 0 | null |
Llama-3-8B-SingleShotCotV1 | 7 | Since lots of models are being released at the moment I though I’d join in with the fun.
I have finally succeeded in generating a dataset and fine-tuning a model that does something useful. It turns out that creating a model that isn't brain-damaged or completely useless is harder than expected, hence my excitement and my need to let people know that it is possible for mere mortals to fine-tune base models and make them do something useful.
For the generation process, I first created a dataset of 55K questions and multistage Chain of Thought answers in the following format:
\#QUESTION
\#DATA
\#UNKNOWN
\#CONNECTION
\#COT=n
\#ANSWER
For the training I used LlamaFactory on 4x3090. It was a full fine-tune of meta-llama/Meta-Llama-3-8B with DeepSpeed and a context of 3K. It took about 50 hours.
Here's an example of the prompt / response. I think it's OK for an 8B model. It's also based on an early Llama3-8B model, so there's plenty of room for improvement with newer models. I'm going to try FSDP-QLoRA with a 70B to see the difference.
\>tell me how to find my keys in a messy room
Prompt: '<|start\_header\_id|>user<|end\_header\_id|>tell me how to find my keys in a messy room<|eot\_id|>'
\`\`\`
\#DATA
\- Room is messy
\- Need to find keys
\- Not sure where the keys are
\- Need a method to locate the keys
\#UNKNOWN
\- Method to find keys in a messy room
\#CONNECTION
\- Methods for searching in a messy room
\- Techniques for locating hidden items
\- Importance of organization and cleanliness in finding keys
\- Potential tools and strategies for key location
\#COT=4
1. Identify the specific challenges posed by the messy room.
2. Research and list methods and techniques used for finding keys in disorganized spaces.
3. Evaluate the effectiveness of each method based on its suitability for the given context.
4. Provide a comprehensive strategy that combines the most effective methods to locate the keys.
\`\`\`
\#ANSWER
The question of how to find my keys in a messy room is a fundamental one, especially when considering the importance of organization and cleanliness in our daily lives. Whether you are searching for lost keys or trying to locate important documents, the ability to navigate through a chaotic space effectively is crucial. This analysis will explore the specific challenges posed by a messy room, methods for finding keys in disorganized spaces, and provide a comprehensive strategy to locate keys effectively.
\## Challenges Posed by a Messy Room
A messy room presents unique challenges that can hinder the search for keys. These challenges include the disorganization of items, which can obscure the location of keys, and the presence of clutter that can confuse the search process. Additionally, the lack of a clear structure or system in the room can exacerbate the difficulty in finding what is needed. These issues are compounded by the psychological stress that comes with the feeling of being overwhelmed by the mess, which can impair decision-making and concentration.
\## Methods and Techniques for Finding Keys
...
\^CoT Stages from above\^
...
\## Conclusion
In summary, finding keys in a messy room requires a multi-faceted approach that combines systematic searching, technological innovation, and a comprehensive strategy. By understanding the challenges posed by a messy room, evaluating the effectiveness of different methods, and implementing a well-thought-out strategy, individuals can increase their chances of success in locating keys. This analysis highlights the importance of organization and cleanliness in our daily lives and underscores the need for a methodical approach to tasks that require careful attention to detail. Further exploration could involve research into the psychological impact of mess and the role of social support in the search process.
The model is based on Llama-3-8B and is called chrisrutherford/Llama-3-8B-SingleShotCotV1 and can be found on hugging face if you're interested.
| 2024-12-07T21:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/1h92qmx/llama38bsingleshotcotv1/ | lolzinventor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h92qmx | false | null | t3_1h92qmx | /r/LocalLLaMA/comments/1h92qmx/llama38bsingleshotcotv1/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'qeArHwiTRS5Jo717cIyuUkIBBXN6MNb-Ki-myDNXGgg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Rlk7Y37mwyvzfOLNYjeEta_yXyIptdeekviGg-ddjIQ.jpg?width=108&crop=smart&auto=webp&s=e84e8aa49e8068840dcd04c381502135ef9c1dbf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Rlk7Y37mwyvzfOLNYjeEta_yXyIptdeekviGg-ddjIQ.jpg?width=216&crop=smart&auto=webp&s=b58832de9fed6b5f739c091924a2e27e6beff01e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Rlk7Y37mwyvzfOLNYjeEta_yXyIptdeekviGg-ddjIQ.jpg?width=320&crop=smart&auto=webp&s=13367b69c257a2a82ccc1a34c089504ff0662ecf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Rlk7Y37mwyvzfOLNYjeEta_yXyIptdeekviGg-ddjIQ.jpg?width=640&crop=smart&auto=webp&s=46e0f7446ec40aff6c866b5232ae62c6a6498631', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Rlk7Y37mwyvzfOLNYjeEta_yXyIptdeekviGg-ddjIQ.jpg?width=960&crop=smart&auto=webp&s=6d09e9265eceebc48a4e1fc62dc887dccefffbd7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Rlk7Y37mwyvzfOLNYjeEta_yXyIptdeekviGg-ddjIQ.jpg?width=1080&crop=smart&auto=webp&s=46e73833f8466ab809b3a786bb3a60660c370423', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Rlk7Y37mwyvzfOLNYjeEta_yXyIptdeekviGg-ddjIQ.jpg?auto=webp&s=3efe74282e877494b6e98b1dc49217a7937122b9', 'width': 1200}, 'variants': {}}]} |
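For anyone who wants to poke at the model, below is a minimal sketch of reproducing the raw single-turn prompt format shown above with plain `transformers`; the greedy decoding and token budget are illustrative choices, not the author's settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "chrisrutherford/Llama-3-8B-SingleShotCotV1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# Same raw format as the example above: a single user turn in Llama-3 header style.
prompt = "<|start_header_id|>user<|end_header_id|>tell me how to find my keys in a messy room<|eot_id|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```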
Building a $50,000 Local LLM Setup: Hardware Recommendations? | 125 | I'm applying for a $50,000 innovation project grant to build a local LLM setup, and I'd love your hardware+sw recommendations. Here's what we're aiming to do with it:
1. Fine-tune LLMs with domain-specific knowledge for college level students.
2. Use it as a learning tool for students to understand LLM systems and experiment with them.
3. Provide a coding assistant for teachers and students
What would you recommend to get the most value for the budget?
Thanks in advance! | 2024-12-07T21:29:51 | https://www.reddit.com/r/LocalLLaMA/comments/1h92r23/building_a_50000_local_llm_setup_hardware/ | jaungoiko_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h92r23 | false | null | t3_1h92r23 | /r/LocalLLaMA/comments/1h92r23/building_a_50000_local_llm_setup_hardware/ | false | false | self | 125 | null |
LM studio -antivirus issue | 0 | LM Studio runs after I allowed the exe in Webroot and allowed the whole directory - but still - when I start it and each time I reload LM - the AV Webroot - goes crazy using 50% of the CPU . Slows down things a lot. And ramps up fans - IDK what the issue can be. | 2024-12-07T21:30:39 | https://www.reddit.com/r/LocalLLaMA/comments/1h92roy/lm_studio_antivirus_issue/ | master-overclocker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h92roy | false | null | t3_1h92roy | /r/LocalLLaMA/comments/1h92roy/lm_studio_antivirus_issue/ | false | false | self | 0 | null |
Reflection Llama-3.3-70B | 0 | 2024-12-07T21:34:03 | https://huggingface.co/spaces/llamameta/Reflection-Llama-3.3-70B | balianone | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1h92ua6 | false | null | t3_1h92ua6 | /r/LocalLLaMA/comments/1h92ua6/reflection_llama3370b/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'pTgvjeLkLmMWmX3tPHq-wKhuMsnNfLTEA-pmI5izwVs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CKegYCYtj62DJcpbqymWjWA_wtQiFUk_eTlWYXkpFEk.jpg?width=108&crop=smart&auto=webp&s=7b4fe8c88234838e62a7a1de2df75b50c52395f9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CKegYCYtj62DJcpbqymWjWA_wtQiFUk_eTlWYXkpFEk.jpg?width=216&crop=smart&auto=webp&s=3c0d9a22e028ed622352994393e174be14f82693', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CKegYCYtj62DJcpbqymWjWA_wtQiFUk_eTlWYXkpFEk.jpg?width=320&crop=smart&auto=webp&s=cc40960cdee8329ccf08e4d5b5f5a58e293a569e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CKegYCYtj62DJcpbqymWjWA_wtQiFUk_eTlWYXkpFEk.jpg?width=640&crop=smart&auto=webp&s=2f983f64a3e3aba56c8b660b4733394695e10a0e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CKegYCYtj62DJcpbqymWjWA_wtQiFUk_eTlWYXkpFEk.jpg?width=960&crop=smart&auto=webp&s=f2c7b9bfa66ba832a915fe060482a9cd8cb644af', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CKegYCYtj62DJcpbqymWjWA_wtQiFUk_eTlWYXkpFEk.jpg?width=1080&crop=smart&auto=webp&s=47d4986ed21bcaebf7276c9ad0a906c1607e48b8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CKegYCYtj62DJcpbqymWjWA_wtQiFUk_eTlWYXkpFEk.jpg?auto=webp&s=e65ac413802addb4a71e741f3deb25741643c1af', 'width': 1200}, 'variants': {}}]} |
||
Small Commercial Tree Service! | 0 | Hello, my Name is Noah Cook and I am a one man landscaper. If you have any Trees,Brush,or old Logs that may need cut up or possibly hauled off I would very much appreciate you giving me a call or a text will work because sometimes I can’t get to my phone @5139401676 and I will come by and check it out and give you a price range, thank you guys! | 2024-12-07T21:35:24 | https://www.reddit.com/r/LocalLLaMA/comments/1h92vav/small_commercial_tree_service/ | itsyeboi_1738 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h92vav | false | null | t3_1h92vav | /r/LocalLLaMA/comments/1h92vav/small_commercial_tree_service/ | false | false | self | 0 | null |
Dealing with the haters | 0 | I feel like anytime I step outside this group, it's AI hate. Especially if it's software development using LLMs. Feel like 90% of the time I offer a solution to a problem that involves AI or LLMs I get shit stormed.
Anyone else experience the same? Other than just keeping my mouth shut how y'all dealing with it? | 2024-12-07T21:45:13 | https://www.reddit.com/r/LocalLLaMA/comments/1h932vi/dealing_with_the_haters/ | Vegetable_Sun_9225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h932vi | false | null | t3_1h932vi | /r/LocalLLaMA/comments/1h932vi/dealing_with_the_haters/ | false | false | self | 0 | null |
Is there no simple way to use a lesser known model from HuggingFace? | 1 | This seems to be too obvious a question that I feel stupid for not figuring it out. Is it possible to go into huggingface, download a .gguf file and somehow use it in a simple chat interface? ollama has a few set models. (I know I can manually make a modelfile) LocalAI has a few set models, LM Studio, same problem. Is it possible there is nothing simpler? | 2024-12-07T21:48:21 | https://www.reddit.com/r/LocalLLaMA/comments/1h93579/is_there_no_simple_way_to_use_a_lesser_known/ | oneoftwentygoodmen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h93579 | false | null | t3_1h93579 | /r/LocalLLaMA/comments/1h93579/is_there_no_simple_way_to_use_a_lesser_known/ | false | false | self | 1 | null |
Best tool to chat with large amounts of text data (10 million + words) | 1 | [removed] | 2024-12-07T21:50:11 | https://www.reddit.com/r/LocalLLaMA/comments/1h936l3/best_tool_to_chat_with_large_amounts_of_text_data/ | Lev-Bro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h936l3 | false | null | t3_1h936l3 | /r/LocalLLaMA/comments/1h936l3/best_tool_to_chat_with_large_amounts_of_text_data/ | false | false | self | 1 | null |
Bought wrong computer? M4 Mac Pro 24gb | 0 | Runs 14b great. Somehow thought it would be able to run 32b as well, not sure why I thought that exactly so let’s set that aside.
Is there something I’m missing here such as quantization or a different client (using lm studio and jan ai)? Or return it for the 48gb version, which is $400 more?
I thought about ditching it for a new intel/nvidia desktop to just remote or api into but that’s way too clunky and I use the other benefits of a good laptop | 2024-12-07T21:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/1h93735/bought_wrong_computer_m4_mac_pro_24gb/ | CartographerExtra395 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h93735 | false | null | t3_1h93735 | /r/LocalLLaMA/comments/1h93735/bought_wrong_computer_m4_mac_pro_24gb/ | false | false | self | 0 | null |
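Rough arithmetic explains the 24 GB wall. The sketch below is a back-of-the-envelope estimate only; the bits-per-weight figures and macOS's default GPU-memory cap are approximations, and real GGUF sizes vary by quant.

```python
def approx_gguf_gb(params_billion: float, bits_per_weight: float) -> float:
    """Very rough weight-only size estimate; ignores KV cache and runtime overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for label, params, bits in [("14B @ ~Q4_K_M", 14, 4.8),
                            ("32B @ ~Q4_K_M", 32, 4.8),
                            ("32B @ ~Q3_K_M", 32, 3.9)]:
    print(f"{label}: ~{approx_gguf_gb(params, bits):.0f} GB of weights")

# A 24 GB unified-memory Mac typically lets the GPU use roughly two-thirds to
# three-quarters of RAM by default, i.e. about 16-18 GB: comfortable for 14B at Q4,
# marginal for 32B even at Q3 once context is added.
```

So quantization alone probably won't make a 32B model comfortable on 24 GB once context is included; the 48 GB configuration (or staying at 14B) is the realistic fix.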
How can I chat with large amounts of text data? (more than 10 million words) | 1 | [removed] | 2024-12-07T21:51:16 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1h937f2 | false | null | t3_1h937f2 | /r/LocalLLaMA/comments/1h937f2/how_can_i_chat_with_large_amounts_of_text_data/ | false | false | default | 1 | null |
||
Output on GPU is more accurate than CPU! | 1 | I'm using Qwen2.5-Coder-7B-Instruct-GGUF.
I used it to compare two code snippets (for checking palindromes) that use different variable names.
One of the statements in snippet one was incorrect: it used `.tolower()` instead of the correct `.lower()`.
When I tested it on the CPU, the model always said that both snippets are the same.
When I tested it with the model loaded on the GPU, the model clearly identified the difference and corrected it as well.
I guess it has something to do with the runtime?
I'm using LM studio. | 2024-12-07T21:52:54 | https://www.reddit.com/r/LocalLLaMA/comments/1h938mj/output_on_gpu_is_more_accurate_than_cpu/ | randomvariable56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h938mj | false | null | t3_1h938mj | /r/LocalLLaMA/comments/1h938mj/output_on_gpu_is_more_accurate_than_cpu/ | false | false | self | 1 | null |
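For readers wondering what was compared: below is a hypothetical reconstruction of the two snippets (not the poster's exact code), with identical logic, different variable names, and the `.tolower()` bug in the first one.

```python
# Snippet 1 -- buggy: Python strings have no .tolower(); the correct method is .lower()
def check_pal(word):
    cleaned = word.tolower()          # AttributeError at runtime; should be word.lower()
    return cleaned == cleaned[::-1]

# Snippet 2 -- correct, same logic with different names
def is_palindrome(text):
    normalized = text.lower()
    return normalized == normalized[::-1]
```

Differences between backends usually come down to nondeterministic sampling or slightly different kernels nudging borderline tokens, rather than one runtime being inherently "smarter".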
can shared GPU memory be utilized? I am on a laptop | 1 | 2024-12-07T22:03:51 | EmirTanis | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h93h6s | false | null | t3_1h93h6s | /r/LocalLLaMA/comments/1h93h6s/can_shared_gpu_memory_be_utilized_i_am_on_a_laptop/ | false | false | 1 | {'enabled': True, 'images': [{'id': '6hbyHuAgXAnlwvt1hVmU15m1Mxk4cSt3t4IGMfGqqnQ', 'resolutions': [{'height': 23, 'url': 'https://preview.redd.it/03xftlpl1i5e1.png?width=108&crop=smart&auto=webp&s=61fa6835fa36d3159a646560b48a26e3599325a4', 'width': 108}, {'height': 47, 'url': 'https://preview.redd.it/03xftlpl1i5e1.png?width=216&crop=smart&auto=webp&s=dba71647db000a2b1fb5a3c8515324675e6830ce', 'width': 216}, {'height': 70, 'url': 'https://preview.redd.it/03xftlpl1i5e1.png?width=320&crop=smart&auto=webp&s=307e1af55492f6b41a691538edee1f7a482ad40c', 'width': 320}, {'height': 141, 'url': 'https://preview.redd.it/03xftlpl1i5e1.png?width=640&crop=smart&auto=webp&s=c0a57fcf965600174eff2f144aa10d9cd6302309', 'width': 640}], 'source': {'height': 148, 'url': 'https://preview.redd.it/03xftlpl1i5e1.png?auto=webp&s=77644582c6594811c09065348ef4ed130a88d73d', 'width': 669}, 'variants': {}}]} |
|||
Models could be enhanced by making the neural networks monosemantic. I'll explain... | 1 | [removed] | 2024-12-07T22:07:31 | https://www.reddit.com/r/LocalLLaMA/comments/1h93jyd/models_could_be_enhanced_by_making_the_neural/ | Much-Restaurant-3695 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h93jyd | false | null | t3_1h93jyd | /r/LocalLLaMA/comments/1h93jyd/models_could_be_enhanced_by_making_the_neural/ | false | false | self | 1 | null |
A type of self improving model could possibly be achieved by making the neural networks monosemantic. Let me explain... | 0 |
So here's how we could do it: I'll just begin with a strong preexisting model like the new GPT-01 Pro, which has been called the "best" one so far and I guess kind of is, and change it into a "liquid neural network" type of architecture so it can adapt and learn from new information. The thing in the title is monosemantics; it still has to be worked out, but when it is, it's going to function as a way to understand and control how the model works under the hood, making sure the model is transparent and safe and that kind of good stuff. Jailbreaking would also be pretty much out of the question if they can get to this. This then allows the model to safely improve itself by changing its settings for other tasks.
For example, it can easily switch to a low-temperature mode, because that means fewer accuracy mistakes and is good for solving logical problems, coding, and similar tasks; then it can change to a high-temperature mode when more creativity is required, and do this whenever it wants. Controlling the temperature is important, I want to point out, not just for its thinking processes but also for adjusting its settings and other details.
So what I'm saying is that the goal here is to design a system that begins by learning from other trainers (models) and people. And as time passes, the system could expand to reach higher levels of intelligence. Monosemantics is a cornerstone of this entire process, if you were wondering...
Anthropic has recently posted a paper about monosemantics, and there are some small AI startups that focus on liquid neural nets, like "LiquidAI" for example... This would be a system that doesn't need to be retrained and released as new models, as is the common approach currently, and I bet it's already happening internally, possibly at OpenAI, Anthropic, or other places, because self-improvement is like a train towards getting exponentially smarter, and it won't "guzzle" cost and compute because of its often-remarked potential for efficiency. | 2024-12-07T22:10:13 | https://www.reddit.com/r/LocalLLaMA/comments/1h93m0c/a_type_of_self_improving_model_could_possibly_be/ | Longjumping_Spot5843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h93m0c | false | null | t3_1h93m0c | /r/LocalLLaMA/comments/1h93m0c/a_type_of_self_improving_model_could_possibly_be/ | false | false | self | 0 | null |
Another macbook ram questions | 1 | [removed] | 2024-12-07T22:13:24 | https://www.reddit.com/r/LocalLLaMA/comments/1h93oi4/another_macbook_ram_questions/ | RiceApprehensive2056 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h93oi4 | false | null | t3_1h93oi4 | /r/LocalLLaMA/comments/1h93oi4/another_macbook_ram_questions/ | false | false | self | 1 | null |
Need help with beginner project | 2 | Hello. I am new to Llama and AI in general, but I need to make a simple script that takes text as input and then has the AI analyze it and output either "appropriate" or "inappropriate".
I was told to use a Llama model (but it was never specified which one). I was also told to use BitNet instead of llama.cpp.
So my questions are the following:
-should I really use BitNet? I have read that it has some problems in terms of accuracy (but I also know it is really efficient).
-which Llama model do you recommend?
-once I have the model, do I have to train it manually, or are there ones that are pre-trained to detect hate speech or harassment?
-just any tips in general.
For some context, this script will later be integrated into a mobile chat application. But my job is just to make the ai part.
Any help is appreciated, and thanks in advance.
| 2024-12-07T22:28:26 | https://www.reddit.com/r/LocalLLaMA/comments/1h93zsw/need_help_with_beginner_project/ | Accomplished_Mud179 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h93zsw | false | null | t3_1h93zsw | /r/LocalLLaMA/comments/1h93zsw/need_help_with_beginner_project/ | false | false | self | 2 | null |
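A minimal sketch of the classification loop is below: prompt a small instruct model and map its reply onto the two labels. Ollama's HTTP API is used here purely as a stand-in runtime (the same prompt pattern carries over to bitnet.cpp, llama.cpp, or transformers); the model tag and system prompt wording are illustrative.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "llama3.2:3b"  # illustrative small instruct model

SYSTEM = ("You are a content moderator. Reply with exactly one word: "
          "'appropriate' or 'inappropriate'.")

def moderate(text: str) -> str:
    resp = requests.post(OLLAMA_URL, json={
        "model": MODEL,
        "messages": [{"role": "system", "content": SYSTEM},
                     {"role": "user", "content": text}],
        "stream": False,
    }, timeout=60)
    answer = resp.json()["message"]["content"].strip().lower()
    return "inappropriate" if "inappropriate" in answer else "appropriate"

print(moderate("Have a great day!"))
```

A zero-shot prompt like this is a reasonable baseline; whether fine-tuning or a dedicated moderation classifier is needed depends on how accurate it turns out to be on real chat data.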
Anyone else take a crack at Gemini Experimental 1206, yet? | 0 | 2024-12-07T22:55:16 | https://www.reddit.com/r/LocalLLaMA/comments/1h94js6/anyone_else_take_a_crack_at_gemini_experimental/ | clduab11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h94js6 | false | null | t3_1h94js6 | /r/LocalLLaMA/comments/1h94js6/anyone_else_take_a_crack_at_gemini_experimental/ | false | false | 0 | null |
||
Question about dual video cards and AI | 2 | Just getting into AI LLM
I have a 6700XT video card currently and was thinking of getting an Intel A770 16gb just to use for LLM
How are the Arc cards? | 2024-12-07T23:03:05 | https://www.reddit.com/r/LocalLLaMA/comments/1h94pnh/question_about_dual_video_cards_and_ai/ | cmdrmcgarrett | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h94pnh | false | null | t3_1h94pnh | /r/LocalLLaMA/comments/1h94pnh/question_about_dual_video_cards_and_ai/ | false | false | self | 2 | null |
Speculative Decoding with llama.cpp: inf tk/s for token gen speed? | 1 | Does speculative decoding only use the draft model to generate tokens? When I run llama-speculative, I get the report below. It says "inf tokens per second" for the target eval time.
Also, when you try to benchmark THE OVERALL speed, which prompt processing speed is more important: draft or target?
Thanks!
encoded 10086 tokens in 169.297 seconds, speed: 59.576 t/s
decoded 920 tokens in 134.800 seconds, speed: 6.825 t/s
n_draft = 16
n_predict = 920
n_drafted = 2288
n_accept = 776
accept = 33.916%
draft:
llama_perf_context_print: load time = 500.18 ms
llama_perf_context_print: prompt eval time = 108017.60 ms / 10371 tokens ( 10.42 ms per token, 96.01 tokens per second)
llama_perf_context_print: eval time = 33938.96 ms / 2145 runs ( 15.82 ms per token, 63.20 tokens per second)
llama_perf_context_print: total time = 304112.98 ms / 12516 tokens
target:
llama_perf_sampler_print: sampling time = 45.16 ms / 920 runs ( 0.05 ms per token, 20371.11 tokens per second)
llama_perf_context_print: load time = 1693.00 ms
llama_perf_context_print: prompt eval time = 265149.43 ms / 12517 tokens ( 21.18 ms per token, 47.21 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 304613.20 ms / 12518 tokens | 2024-12-07T23:09:36 | https://www.reddit.com/r/LocalLLaMA/comments/1h94ub9/speculative_decoding_with_llamacpp_inf_tks_for/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h94ub9 | false | null | t3_1h94ub9 | /r/LocalLLaMA/comments/1h94ub9/speculative_decoding_with_llamacpp_inf_tks_for/ | false | false | self | 1 | null |
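A small worked pass over the report's own numbers (just re-deriving what the log already prints) makes it clearer which figures matter for overall speed. The "inf tokens per second" line is most likely an artifact of how the timer is attributed: the target verifies drafted tokens in batches, which gets counted under prompt eval rather than single-token eval, so almost nothing is left in the target's eval bucket.

```python
# Numbers copied from the llama-speculative report above.
n_drafted, n_accept = 2288, 776
decoded_tokens, decode_seconds = 920, 134.800
prompt_tokens, prompt_seconds = 10086, 169.297

accept_rate = n_accept / n_drafted                    # ~0.339, matches "accept = 33.916%"
overall_decode_tps = decoded_tokens / decode_seconds  # ~6.8 t/s, the end-to-end number
overall_prompt_tps = prompt_tokens / prompt_seconds   # ~59.6 t/s

print(f"acceptance: {accept_rate:.1%}")
print(f"effective generation speed: {overall_decode_tps:.2f} t/s")
print(f"effective prompt processing: {overall_prompt_tps:.2f} t/s")
```

For benchmarking overall speed, the "encoded ... speed" and "decoded ... speed" lines at the top are the end-to-end figures to compare; the per-model prompt-eval numbers mainly show where the time goes (the target's 12,517 prompt-eval tokens exceed the prompt length, presumably because they include the batched verification of drafted tokens).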
Who will be the one to put it all together? a mostly AI world, the possibilities are insane, will it be the future of gaming? no mans sky w infinite possibilities to explore? available to the average consumer as the new slop? | 0 | 2024-12-07T23:21:17 | EmirTanis | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h952wh | false | null | t3_1h952wh | /r/LocalLLaMA/comments/1h952wh/who_will_be_the_one_to_put_it_all_together_a/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'xf5fsikFg5Yz74fCMKDM2S3xjKXjpECmdbDtOsVKpTs', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/9wt1yt8uei5e1.png?width=108&crop=smart&auto=webp&s=624da7c30f2a68d59eb476c3810d73472977244f', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/9wt1yt8uei5e1.png?width=216&crop=smart&auto=webp&s=a85293edf4e21ae54b747f76825b2ad9557fbbc6', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/9wt1yt8uei5e1.png?width=320&crop=smart&auto=webp&s=cfc25cc52cc8e20e3887662e4976f470760998b2', 'width': 320}, {'height': 371, 'url': 'https://preview.redd.it/9wt1yt8uei5e1.png?width=640&crop=smart&auto=webp&s=07115e22a587beabe7ae141345d8ab0c98f00189', 'width': 640}], 'source': {'height': 463, 'url': 'https://preview.redd.it/9wt1yt8uei5e1.png?auto=webp&s=94e4c78ed717e6bceaa23bd550c9e1d3cd0d0b5f', 'width': 797}, 'variants': {}}]} |
|||
FlashAttention - FineTuning - Llama 3.2 | 1 | [removed] | 2024-12-08T00:56:49 | https://www.reddit.com/r/LocalLLaMA/comments/1h9708k/flashattention_finetuning_llama_32/ | King_Ivanchuk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9708k | false | null | t3_1h9708k | /r/LocalLLaMA/comments/1h9708k/flashattention_finetuning_llama_32/ | false | false | self | 1 | null |
Fed up with LangGraph docs, I let Langgraph agents document it's entire codebase - It's 10x better! | 119 | I got frustrated trying to decipher LangGraph's documentation. So I decided to fight fire with fire - I used LangGraph itself to build an AI documentation system that actually makes sense.
What it Does:
* Auto-generates architecture diagrams from Langgraph's code
* Creates visual flowcharts of the entire codebase
* Documents API endpoints clearly
* Syncs automatically with codebase updates
Why it's Better:
* 80% less time spent on documentation
* Always up-to-date with the codebase
* Full code references included
* Perfect for getting started with Langgraph
[https://entelligence.ai/documentation/langchain-ai&langgraph](https://entelligence.ai/documentation/langchain-ai&langgraph)
Would really love feedback!
https://preview.redd.it/mot3wblgaj5e1.png?width=2506&format=png&auto=webp&s=64faa4e6048c73c35b8fa4ef30075283680b83a4
| 2024-12-08T02:15:10 | https://www.reddit.com/r/LocalLLaMA/comments/1h98hjk/fed_up_with_langgraph_docs_i_let_langgraph_agents/ | EntelligenceAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h98hjk | false | null | t3_1h98hjk | /r/LocalLLaMA/comments/1h98hjk/fed_up_with_langgraph_docs_i_let_langgraph_agents/ | false | false | 119 | null |
|
Does the prompt in the OpenAI RFT launch include all gene lists? | 1 | I watched the launch video carefully, and it seems that the prompt did not include all the gene lists, so are these gene lists already built into the model? | 2024-12-08T03:58:19 | https://www.reddit.com/r/LocalLLaMA/comments/1h9aayz/does_the_prompt_in_the_openai_rft_launch_include/ | secsilm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9aayz | false | null | t3_1h9aayz | /r/LocalLLaMA/comments/1h9aayz/does_the_prompt_in_the_openai_rft_launch_include/ | false | false | self | 1 | null |
Made a local NotebookLM 4 months ago and was told to post this here by a person using it 🤷 | 1 | [removed] | 2024-12-08T04:04:49 | https://github.com/DrewThomasson/doc2interview | Impossible_Belt_7757 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1h9af9m | false | null | t3_1h9af9m | /r/LocalLLaMA/comments/1h9af9m/made_a_local_notebooklm_4_months_ago_and_was_told/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'phxYnfhaUHP7KXi9RAm3uGbSWiYHGDQjnpxcuE3UGrk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OuhS2rRkOpJbEhtfhJtJaTJrnu834rkdxVnejF-eh5A.jpg?width=108&crop=smart&auto=webp&s=7f5f928789fbfb33a7d9b3bf93bb16984dcdd942', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OuhS2rRkOpJbEhtfhJtJaTJrnu834rkdxVnejF-eh5A.jpg?width=216&crop=smart&auto=webp&s=9a14b77f18d067bf666e8a70779e523db4d045d8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OuhS2rRkOpJbEhtfhJtJaTJrnu834rkdxVnejF-eh5A.jpg?width=320&crop=smart&auto=webp&s=6f8905e79f47346c1e065e0f58b9ef027abf5619', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OuhS2rRkOpJbEhtfhJtJaTJrnu834rkdxVnejF-eh5A.jpg?width=640&crop=smart&auto=webp&s=16da12fc8c936cba4abddfdb14ad3670862b51b9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OuhS2rRkOpJbEhtfhJtJaTJrnu834rkdxVnejF-eh5A.jpg?width=960&crop=smart&auto=webp&s=d3aeea44f1c33cfdd77f0b84155c35d8ea4f6670', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OuhS2rRkOpJbEhtfhJtJaTJrnu834rkdxVnejF-eh5A.jpg?width=1080&crop=smart&auto=webp&s=9484813b0db1a23818078c6ffcc8c428cbe4a31b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OuhS2rRkOpJbEhtfhJtJaTJrnu834rkdxVnejF-eh5A.jpg?auto=webp&s=b26d0ce227a2a0160f15a945b5c27e99a350dd20', 'width': 1200}, 'variants': {}}]} |
|
I broke Llama3.3 70B with a riddle (4-bit quant via Ollama). It just goes on like this forever... | 58 | 2024-12-08T04:11:07 | https://www.reddit.com/gallery/1h9aj8k | etotheipi_ | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1h9aj8k | false | null | t3_1h9aj8k | /r/LocalLLaMA/comments/1h9aj8k/i_broke_llama33_70b_with_a_riddle_4bit_quant_via/ | false | false | 58 | null |
||
ollama models (including llama3.3) do not summarize properly like ChatGPT does | 1 | [removed] | 2024-12-08T04:36:00 | https://www.reddit.com/r/LocalLLaMA/comments/1h9ayxu/ollama_models_including_llama33_do_not_summarize/ | LandMobileJellyfish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9ayxu | false | null | t3_1h9ayxu | /r/LocalLLaMA/comments/1h9ayxu/ollama_models_including_llama33_do_not_summarize/ | false | false | self | 1 | null |
ollama models (including llama3.3) do not summarize properly | 1 | [removed] | 2024-12-08T04:40:03 | https://www.reddit.com/r/LocalLLaMA/comments/1h9b1cu/ollama_models_including_llama33_do_not_summarize/ | LandMobileJellyfish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9b1cu | false | null | t3_1h9b1cu | /r/LocalLLaMA/comments/1h9b1cu/ollama_models_including_llama33_do_not_summarize/ | false | false | self | 1 | null |
How do you split the model between different cards? | 5 | I want to run the Qwen Coder Q6 quant, but it requires both my 2080 Ti and 3090. What's the best way to split the model between GPUs?
Does it make a difference as long as it fits? Is it better to have more free vram on a faster card for context processing? | 2024-12-08T04:45:25 | https://www.reddit.com/r/LocalLLaMA/comments/1h9b4go/how_do_you_split_the_model_between_different_cards/ | zekses | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9b4go | false | null | t3_1h9b4go | /r/LocalLLaMA/comments/1h9b4go/how_do_you_split_the_model_between_different_cards/ | false | false | self | 5 | null |
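If the model is loaded through llama.cpp (directly or via llama-cpp-python), the split is controlled with a tensor-split ratio plus a choice of main GPU. Below is a hedged sketch using llama-cpp-python; the file path, the 11/24 ratio, and picking the 3090 as the main device are illustrative defaults to tune, not a verified recommendation.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-coder-32b-instruct-q6_k.gguf",  # placeholder path
    n_gpu_layers=-1,          # offload all layers
    tensor_split=[11, 24],    # proportional VRAM share; order follows backend device order,
                              # which may not match nvidia-smi numbering
    main_gpu=1,               # keep scratch/small tensors on the faster 24 GB card
    n_ctx=8192,
)
print(llm("def is_prime(n):", max_tokens=64)["choices"][0]["text"])
```

Equivalent flags exist on the llama.cpp CLI (`--tensor-split`, `--main-gpu`), so the same ratios carry over if you run the server directly.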
[image processing failed] | 1 | [deleted] | 2024-12-08T05:12:49 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1h9bl1g | false | null | t3_1h9bl1g | /r/LocalLLaMA/comments/1h9bl1g/image_processing_failed/ | false | false | default | 1 | null |
||
[image processing failed] | 1 | [deleted] | 2024-12-08T05:27:55 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1h9btt4 | false | null | t3_1h9btt4 | /r/LocalLLaMA/comments/1h9btt4/image_processing_failed/ | false | false | default | 1 | null |
||
Is there a Llama-3.3-70B base model or is it based on Llama-3.1-70B? | 8 | I'm still not quite clear if the Llama-3.3-70B-Instruct model is a "tuned" version of the original Llama-3.1-70B or not. I don't see any information about a Llama-3.3-70B base model so I'm assuming there isn't one.
Most of their previous releases always included both the base model and the instruction tuned models so I'm kinda confused. If it is indeed based on the original Llama-3.1-70B version, then I'd assume any and all significant performance improvements come from the post-training techniques? And if that is the case, then the same techniques could probably be applied to 405B as well, which could be cool. | 2024-12-08T05:44:05 | https://www.reddit.com/r/LocalLLaMA/comments/1h9c2zc/is_there_a_llama3370b_base_model_or_is_it_based/ | gamesntech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9c2zc | false | null | t3_1h9c2zc | /r/LocalLLaMA/comments/1h9c2zc/is_there_a_llama3370b_base_model_or_is_it_based/ | false | false | self | 8 | null |
How can I run llm on Quad 3090 setup | 1 | [removed] | 2024-12-08T05:47:51 | https://www.reddit.com/r/LocalLLaMA/comments/1h9c56d/how_can_i_run_llm_on_quad_3090_setup/ | jabotinsky_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9c56d | false | null | t3_1h9c56d | /r/LocalLLaMA/comments/1h9c56d/how_can_i_run_llm_on_quad_3090_setup/ | false | false | self | 1 | null |
Can a 3b model with sufficiently high quality training data outperform a 70b models at specialized tasks? | 19 | So my understanding is that given the same training dataset, a 70b model will always outperform a 3b model.
But suppose the 3b model is trained on much higher quality data than the 70b model. Is it feasible for the 3b model to outperform the 70b model in whatever task they were trained for?
If not, what would be the limiting factor hindering the performance of the 3b model? | 2024-12-08T05:49:59 | https://www.reddit.com/r/LocalLLaMA/comments/1h9c6cu/can_a_3b_model_with_sufficiently_high_quality/ | TheSilverSmith47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9c6cu | false | null | t3_1h9c6cu | /r/LocalLLaMA/comments/1h9c6cu/can_a_3b_model_with_sufficiently_high_quality/ | false | false | self | 19 | null |
Currently the most uncensored model for 24gb vram | 70 | So I have a 3090 and I am using LM Studio, and I want to find the most uncensored model, one that won't avoid questions or censor itself.
There are so many options so what is currently considered best? | 2024-12-08T05:58:48 | https://www.reddit.com/r/LocalLLaMA/comments/1h9cb7d/currently_the_most_uncensored_model_for_24gb_vram/ | lucky_peic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h9cb7d | false | null | t3_1h9cb7d | /r/LocalLLaMA/comments/1h9cb7d/currently_the_most_uncensored_model_for_24gb_vram/ | false | false | self | 70 | null |
Llama-3.3-70B-Instruct: How many 'r's in the word Strawberry? | 0 | 2024-12-08T05:59:41 | https://www.reddit.com/gallery/1h9cbpg | Sheeple9001 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1h9cbpg | false | null | t3_1h9cbpg | /r/LocalLLaMA/comments/1h9cbpg/llama3370binstruct_how_many_rs_in_the_word/ | false | false | 0 | null |