title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Large Models Running on Multiple GPUs (Hardware Question) | 1 | [removed] | 2024-11-29T21:43:16 | https://www.reddit.com/r/LocalLLaMA/comments/1h2wntn/large_models_running_on_multiple_gpus_hardware/ | LeornToCodeLOL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2wntn | false | null | t3_1h2wntn | /r/LocalLLaMA/comments/1h2wntn/large_models_running_on_multiple_gpus_hardware/ | false | false | self | 1 | null |
QwQ seems to switch to chinese on its own. | 5 | The questions are always in english yet it sometimes seems to switch to chinese for no reason. I am using huggingchat and it seems to happen randomly. It continues to answer the question asked but in chinese. | 2024-11-29T22:26:11 | https://www.reddit.com/r/LocalLLaMA/comments/1h2xl5o/qwq_seems_to_switch_to_chinese_on_its_own/ | Dance-Till-Night1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2xl5o | false | null | t3_1h2xl5o | /r/LocalLLaMA/comments/1h2xl5o/qwq_seems_to_switch_to_chinese_on_its_own/ | false | false | self | 5 | null |
SPLAA now with vision | 9 | 2024-11-29T22:30:28 | https://v.redd.it/rwm9dzz23x3e1 | Cloudscrypts | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h2xog3 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rwm9dzz23x3e1/DASHPlaylist.mpd?a=1735511444%2CYjA0OGFmMjY3YzVmNmRkN2FiNzNhM2RjOGQ4MTExOTEyM2I0OWRkMDgyNDViNmY2ZDk2ZTRkMTNhNTZmNWFlOA%3D%3D&v=1&f=sd', 'duration': 78, 'fallback_url': 'https://v.redd.it/rwm9dzz23x3e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/rwm9dzz23x3e1/HLSPlaylist.m3u8?a=1735511444%2CYmFmYzBkZGQwMzliOTU1YTBjNDgwNzZjZjU4MjI0MDMzYjI4ODYyMTI0NzFhMjU2MDIwOTlhZWZhNjZmZTgxNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rwm9dzz23x3e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1h2xog3 | /r/LocalLLaMA/comments/1h2xog3/splaa_now_with_vision/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'YjE3eGd6ejIzeDNlMR9OHeCkaJnpLzXpY_U6n-d27CjFJ1vms27WxfTBwHuv', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YjE3eGd6ejIzeDNlMR9OHeCkaJnpLzXpY_U6n-d27CjFJ1vms27WxfTBwHuv.png?width=108&crop=smart&format=pjpg&auto=webp&s=f6a0eb2bed9fbbc57f39ed1d0f00d843a2e02510', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YjE3eGd6ejIzeDNlMR9OHeCkaJnpLzXpY_U6n-d27CjFJ1vms27WxfTBwHuv.png?width=216&crop=smart&format=pjpg&auto=webp&s=a005a7fcdb03c40eb8ee046e75290a7551a4f3d3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YjE3eGd6ejIzeDNlMR9OHeCkaJnpLzXpY_U6n-d27CjFJ1vms27WxfTBwHuv.png?width=320&crop=smart&format=pjpg&auto=webp&s=aeb635fb93743a14e11eb1f76670dbc9b195fd3d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YjE3eGd6ejIzeDNlMR9OHeCkaJnpLzXpY_U6n-d27CjFJ1vms27WxfTBwHuv.png?width=640&crop=smart&format=pjpg&auto=webp&s=05ecb12084cb1e2e6e421762cff2282fd199b5b1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YjE3eGd6ejIzeDNlMR9OHeCkaJnpLzXpY_U6n-d27CjFJ1vms27WxfTBwHuv.png?width=960&crop=smart&format=pjpg&auto=webp&s=06fa97e0c40d92364d410dda1b7028fc47ef5227', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YjE3eGd6ejIzeDNlMR9OHeCkaJnpLzXpY_U6n-d27CjFJ1vms27WxfTBwHuv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6e710a05691f9a359d6d5d69d10d0212e58e6732', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YjE3eGd6ejIzeDNlMR9OHeCkaJnpLzXpY_U6n-d27CjFJ1vms27WxfTBwHuv.png?format=pjpg&auto=webp&s=418309df50b45c8cb4fceeae5f11e9b2e1dd83aa', 'width': 1920}, 'variants': {}}]} |
Best Open Source VLLM? | 1 | [removed] | 2024-11-29T22:34:03 | https://www.reddit.com/r/LocalLLaMA/comments/1h2xr6g/best_open_source_vllm/ | VirtualWinner4013 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2xr6g | false | null | t3_1h2xr6g | /r/LocalLLaMA/comments/1h2xr6g/best_open_source_vllm/ | false | false | self | 1 | null |
Why Can You Not Use Structured Output and Tool Calling Together in a lot of Software and APIs? | 1 | Hi,
I am experimenting with using LLMs for practice purposes such as creating study material, a gaming assistant, etc. All of these ideas rely on LLM outputs being in a specific format (often known as structured output) to work with functions in the code base. In addition, I would like to use tool calling to enhance model capabilities such as math, and to use the internet to decrease hallucinations. However, it doesn't seem to be possible to have both tool calling and a final structured output.
For example, Mistral's La Plateforme doesn't even allow this and sends back a 4xx HTTP error if you try. The other inference engine I have tried is vLLM since, at least to my knowledge, it's the only one that supports tool calling. In vLLM (using 'lm-format-enforcer' and 'outlines'), the model works just fine with JSON mode and tool calling in isolation, but when I enable them both, the model
a) does not call any tools
b) becomes extremely dumb and incoherent.
For example:
Prompt: "calculate the answer of 2+2"
Standard answer: "The answer to 2 + 2 is 4"
With JSON schema: "{"text": "2 + 2 = 4"}" The schema is simple on purpose, btw.
With tool calling (calculator): (tool) parameters: "2+2" response: "4" final output: "2 + 2 = 4"
With JSON Schema and Tool Calling: "{"text" : "What is 2+2? 4"}" // doesn't answer correctly and doesn't use tools
Why does this happen and are there any work arounds? | 2024-11-29T22:56:56 | https://www.reddit.com/r/LocalLLaMA/comments/1h2y7ys/why_can_you_not_use_structured_output_and_tool/ | Soumil30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2y7ys | false | null | t3_1h2y7ys | /r/LocalLLaMA/comments/1h2y7ys/why_can_you_not_use_structured_output_and_tool/ | false | false | self | 1 | null |
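For context, a minimal sketch of the kind of request being described, using the OpenAI-compatible client against a local vLLM server; the base URL, model name, and tool schema are placeholders, and whether a given server honors `tools` and `response_format` in the same request is exactly the open question of this post.

```python
# Hypothetical sketch (placeholders throughout): asking an OpenAI-compatible server,
# e.g. a local vLLM instance, for tool calling and a JSON-constrained answer at once.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate a basic arithmetic expression",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

response = client.chat.completions.create(
    model="mistralai/Mistral-Nemo-Instruct-2407",  # placeholder model name
    messages=[{"role": "user", "content": "calculate the answer of 2+2"}],
    tools=tools,                                   # enable tool calling
    response_format={"type": "json_object"},       # ask for a JSON final answer
)

message = response.choices[0].message
print(message.tool_calls or message.content)
```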
Privacy my f*in a$# | 0 | Right, I apologize for that title but it deserves to be there... Ain't it funny how companies like Google and OpenAI are the ones screaming down their throats that they're all about privacy, and yet are the ones with a number of privacy cases involving them?
I have just had enough of it... Why the hell can these companies just not do what they say, or say what they're actually doing? You're using my data and selling it? Fkin say it... You say you don't? Well then fkin don't!
At least admitting it won't fkin set unrealistic expectations for people and businesses...
I'm tired of explaining to people why using a local Chinese LLM like Deepseek or Qwen is more secure than making calls to OpenAI or Google or Anthropic... But these numbnuts just think they know everything. | 2024-11-29T23:31:30 | https://www.reddit.com/r/LocalLLaMA/comments/1h2yyn7/privacy_my_fin_a/ | Few_Acanthisitta_858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h2yyn7 | false | null | t3_1h2yyn7 | /r/LocalLLaMA/comments/1h2yyn7/privacy_my_fin_a/ | false | false | self | 0 | null |
INTELLECT-1 Released (Instruct + Base): The first collaboratively trained model | 235 | Instruct: [https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct)
Base: [https://huggingface.co/PrimeIntellect/INTELLECT-1](https://huggingface.co/PrimeIntellect/INTELLECT-1)
GGUF quants: [https://huggingface.co/lmstudio-community/INTELLECT-1-Instruct-GGUF](https://huggingface.co/lmstudio-community/INTELLECT-1-Instruct-GGUF)
https://preview.redd.it/pcuxtr9zox3e1.png?width=792&format=png&auto=webp&s=d348e606830e95ae913aff1c0256683994122410
| 2024-11-30T00:34:25 | https://www.reddit.com/r/LocalLLaMA/comments/1h308pd/intellect1_released_instruct_base_the_first/ | Many_SuchCases | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h308pd | false | null | t3_1h308pd | /r/LocalLLaMA/comments/1h308pd/intellect1_released_instruct_base_the_first/ | false | false | 235 | {'enabled': False, 'images': [{'id': 'ZVMTKBa1Hw5OZlaz69SEP-JS-OHqEv5ABpKXXS74nWM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?width=108&crop=smart&auto=webp&s=564cb42a14a2a34d13310e75164684491982e64b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?width=216&crop=smart&auto=webp&s=b9457fa14f8511bd10cd186f34e7d2d2af4f1b52', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?width=320&crop=smart&auto=webp&s=15ff9d676ade0a1f216c797af8af8dab2078bb27', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?width=640&crop=smart&auto=webp&s=c648328b5713b374fb016910b948f99ccea12db6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?width=960&crop=smart&auto=webp&s=f5ebecd01b7322a9b3167d3dc2c291fdc82c9e0a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?width=1080&crop=smart&auto=webp&s=174022d3852a89fcfbd508d9351e8b85a1728072', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yQcEb9AX-MOLYpfJvDHV5lr5XQpHSAc7Q9C4EQOjLNQ.jpg?auto=webp&s=c1dccf14f06e37e468141ea6171a0d37747d9897', 'width': 1200}, 'variants': {}}]} |
Why can't I train a small model (135M-200M) | 1 | [removed] | 2024-11-30T00:36:03 | https://www.reddit.com/r/LocalLLaMA/comments/1h309v1/why_cant_i_train_a_small_model_135m200m/ | Square_Cherry_9848 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h309v1 | false | null | t3_1h309v1 | /r/LocalLLaMA/comments/1h309v1/why_cant_i_train_a_small_model_135m200m/ | false | false | self | 1 | null |
What is the most extreme optimization possible? | 4 | Is there a way to have a model determine, based on the contents of the question, which layers need to be loaded into the model?
We could then have the model dynamically change layers, sometimes loading 2-3 layers, sometimes 30.
Is there something like this already, or some other optimization technique to theoretically make running 405B models possible on your laptop? | 2024-11-30T01:12:18 | https://www.reddit.com/r/LocalLLaMA/comments/1h30zph/what_is_the_most_extreme_optimization_possible/ | Ok-Cicada-5207 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h30zph | false | null | t3_1h30zph | /r/LocalLLaMA/comments/1h30zph/what_is_the_most_extreme_optimization_possible/ | false | false | self | 4 | null |
API powered by 24/7 desktop context capture for AI agents | 1 | 2024-11-30T02:54:34 | louis3195 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h32vqu | false | null | t3_1h32vqu | /r/LocalLLaMA/comments/1h32vqu/api_powered_by_247_desktop_context_capture_for_ai/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'UDrOTZF5QsIttOj3VTg3oghlYOd5whJtLqekeFZLAt0', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/f8nljw3edy3e1.png?width=108&crop=smart&auto=webp&s=1e6872e2784faa9300e409db7544f27a1b30efbf', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/f8nljw3edy3e1.png?width=216&crop=smart&auto=webp&s=0a5a12919cfdd2cdccca91d15483feaeaabb57cc', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/f8nljw3edy3e1.png?width=320&crop=smart&auto=webp&s=66b404098f31680dd090a1f56d5dff6e61198ee2', 'width': 320}, {'height': 371, 'url': 'https://preview.redd.it/f8nljw3edy3e1.png?width=640&crop=smart&auto=webp&s=c55d260a1312ff128c6b535940d27010a203ec78', 'width': 640}, {'height': 557, 'url': 'https://preview.redd.it/f8nljw3edy3e1.png?width=960&crop=smart&auto=webp&s=978895503003d33ad53b285e0b30ef46ccdbfd72', 'width': 960}, {'height': 627, 'url': 'https://preview.redd.it/f8nljw3edy3e1.png?width=1080&crop=smart&auto=webp&s=855fe7d6c11e37e92f0f40b8dfb37c8adbdaaa0a', 'width': 1080}], 'source': {'height': 1208, 'url': 'https://preview.redd.it/f8nljw3edy3e1.png?auto=webp&s=03cd933116f90459e068eb9fa1e1ad225656211e', 'width': 2080}, 'variants': {}}]} |
I need some suggestions please | 2 | Hello there. I have searched for a while now for some online thing similar to ChatGPT where I could feed it smallish text entries, a few paragraphs and get help writing it in a better way that would be more readable. A simple task but many of the things I write are "naughty", almost pronogrpahic (text based furry roleplaying scenarios). Unfortunately almost every online LLM or thing similar to ChatGPT has rules in place that prevent their bots from responding when they detect this sort of thing. There are some models that charge money that claim to allow this but paying for this seems dumb to me for something simple. I have a decent computer and I've used it to create AI Artwork with StableDiffusion offline many times. I should be able to handle running something locally for text as well. If I run something offline like this then it shouldn't have any restrictions about rules or censoring my words and prompts. But I have no idea how to get started with text based AI / LLM things. Could someone give me some suggestions on how to get started with this?
Here's my computer's hardware:
Windows 11, Ryzen 5800X, 32GB DDR4, RTX 3070 Ti, multiple NVME drives for storage (only NVME drives).
Could you suggest something that would run well locally on this hardware? | 2024-11-30T02:58:44 | https://www.reddit.com/r/LocalLLaMA/comments/1h32yca/i_need_some_suggestions_please/ | AquaVixen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h32yca | false | null | t3_1h32yca | /r/LocalLLaMA/comments/1h32yca/i_need_some_suggestions_please/ | false | false | self | 2 | null |
Are there any black friday deals for high VM cards what would be worth picking up? | 2 | I was wondering if there was anything worth looking at or looking out for. | 2024-11-30T03:02:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h330p3/are_there_any_black_friday_deals_for_high_vm/ | switchpizza | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h330p3 | false | null | t3_1h330p3 | /r/LocalLLaMA/comments/1h330p3/are_there_any_black_friday_deals_for_high_vm/ | false | false | self | 2 | null |
Looking for hardware recommendations for a multi GPU build | 1 | I've been meaning to upgrade my build for a while since my current motherboard doesn't allow me to use more than a single GPU at max bandwidth. I'm only going to use this build for inference, not training or anything else. I've been looking into getting the Asrock Romed8-2t to use with probably \~256 GB RAM, 2 3090s and potentially upgrade to 1-2 more 3090s in the future. I'm a bit of a noob when it comes to hardware, some questions I have are:
* If I get more than 5-6 GPUs at some point, will there be any PCIe bottlenecks because it's a single CPU board?
* For 3+ GPUs, would you use multiple PSUs? What wattage? I remember reading that for certain homes using multiple wall sockets is a bad idea.
* Would you use a case or have an open chassis setup?
* Would you attach the GPUs to the motherboard or use risers?
* Any other motherboards/CPUs you'd recommend? I'm trying to keep it within the $5k range including the motherboard, CPU, PSU, cooler, RAM and excluding the GPU(s)
* Anything else I might be missing? | 2024-11-30T03:02:27 | https://www.reddit.com/r/LocalLLaMA/comments/1h330u8/looking_for_hardware_recommendations_for_a_multi/ | Rare-Side-6657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h330u8 | false | null | t3_1h330u8 | /r/LocalLLaMA/comments/1h330u8/looking_for_hardware_recommendations_for_a_multi/ | false | false | self | 1 | null |
What is the best and still fast local LLM model that I can run? | 1 | Hi I am currently running llama 3.2 3B version on my laptop which 32 GB RAM and RTX4060 Laptop GPU version. I want to get a better performing model. I am currently using Ollama to get the model. I am using it for PDF ingestion and querying of the pdf to answer the questions needed. I have implemented RAG. I am currently using Llama3.2 to interpret the query as well as to interpret the queried data. Let me know what other models that I can run on my computer that is good for this specific use case. | 2024-11-30T03:04:47 | https://www.reddit.com/r/LocalLLaMA/comments/1h332ei/what_is_the_best_and_still_fast_local_llm_model/ | buahbuahan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h332ei | false | null | t3_1h332ei | /r/LocalLLaMA/comments/1h332ei/what_is_the_best_and_still_fast_local_llm_model/ | false | false | self | 1 | null |
QwQ thinking in Russian (and Chinese) after being asked in Arabic | 92 | This model is really wild. Those thinking traces are actually quite dope. | 2024-11-30T03:45:01 | https://nitter.poast.org/AmgadGamalHasan/status/1862700696333664686#m | Amgadoz | nitter.poast.org | 1970-01-01T00:00:00 | 0 | {} | 1h33rsi | false | null | t3_1h33rsi | /r/LocalLLaMA/comments/1h33rsi/qwq_thinking_in_russian_and_chinese_after_being/ | false | false | default | 92 | null |
Need suggestions on how to load the qwen2-vl-7b-instruct model without quant smoothly in the local setup. | 2 | Since everything in the LLM space has been changing so rapidly, and with so many options out there to choose from (llama.cpp, ollama, oobabooga, vllm, fastchat, gradio), I am overwhelmed about where and how to start.
I ran the model using the Hugging Face Transformers library as per the documentation, and it takes about 5 minutes to generate an output of about 500 tokens, but for longer prompts it won't produce any output or out-of-memory errors. I waited for about 3-4 hours.
System specs: laptop with an RTX 3070 (8GB VRAM)
RAM: 32GB
CPU: Intel i7
As the task I am working on requires image-to-HTML conversion and the images hold nested and complex tabular data, I need the model to be as accurate as possible during OCR, so I don't want to run the quant variants.
Has anyone tried running the qwen2-vl-7b-instruct model locally? What are the minimum VRAM requirements to run the inference smoothly?
Any notes or resources you could share? | 2024-11-30T04:01:59 | https://www.reddit.com/r/LocalLLaMA/comments/1h342kb/need_suggestions_on_how_to_load_the/ | wisewizer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h342kb | false | null | t3_1h342kb | /r/LocalLLaMA/comments/1h342kb/need_suggestions_on_how_to_load_the/ | false | false | self | 2 | null |
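A minimal sketch of the plain Transformers loading path described above; the dtype and device_map choices are illustrative assumptions, not a configuration verified on an 8GB laptop GPU (where CPU offload of the unquantized 7B weights is the likely cause of the slowness).

```python
# Minimal sketch: loading Qwen2-VL-7B-Instruct with Hugging Face Transformers.
# dtype/device_map are illustrative; layers that don't fit in 8GB VRAM get offloaded.
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("table.png")  # placeholder input image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Convert this table to HTML."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```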
Why is nemo2407's exl2bpw4.0 larger than q4ks? Normally, q4ks should be close to bpw4.5. Is it because exl2bpw4.0 represents bpw = 4.0+? | 1 | [removed] | 2024-11-30T04:20:52 | https://www.reddit.com/r/LocalLLaMA/comments/1h34egy/why_is_nemo2407s_exl2bpw40_larger_than_q4ks/ | FrontInteresting9026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h34egy | false | null | t3_1h34egy | /r/LocalLLaMA/comments/1h34egy/why_is_nemo2407s_exl2bpw40_larger_than_q4ks/ | false | false | self | 1 | null |
Why is nemo2407's exl2bpw4.0 larger than q4ks? Normally, q4ks should be close to bpw4.5. Is it because exl2bpw4.0 represents bpw = 4.0+? | 1 | [removed] | 2024-11-30T05:01:30 | https://www.reddit.com/r/LocalLLaMA/comments/1h353cl/why_is_nemo2407s_exl2bpw40_larger_than_q4ks/ | FrontInteresting9026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h353cl | false | null | t3_1h353cl | /r/LocalLLaMA/comments/1h353cl/why_is_nemo2407s_exl2bpw40_larger_than_q4ks/ | false | false | self | 1 | null |
Does Applio RVC Actually Work Without Internet Or Not? | 1 | [removed] | 2024-11-30T05:12:05 | https://www.reddit.com/r/LocalLLaMA/comments/1h359v5/does_applio_rvc_actually_work_without_internet_or/ | ucantseeme3d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h359v5 | false | null | t3_1h359v5 | /r/LocalLLaMA/comments/1h359v5/does_applio_rvc_actually_work_without_internet_or/ | false | false | self | 1 | null |
List of every MCP server that I could find | 93 | 2024-11-30T05:12:25 | https://github.com/punkpeye/awesome-mcp-servers | Weary-Database-8713 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1h35a3w | false | null | t3_1h35a3w | /r/LocalLLaMA/comments/1h35a3w/list_of_every_mcp_server_that_i_could_find/ | false | false | 93 | {'enabled': False, 'images': [{'id': 'vOmEmSISYIGLpTaNL6gMCeoD3uUjqdZFz2Yo6DqQf7s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hHH1VLEJTGHIr7Um9l7kr6e0rczJTj1GSyqyP6SNKqE.jpg?width=108&crop=smart&auto=webp&s=db4e960025b62b07e7e26181cb9f96b83228c830', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hHH1VLEJTGHIr7Um9l7kr6e0rczJTj1GSyqyP6SNKqE.jpg?width=216&crop=smart&auto=webp&s=9988b8c9f34c7f56cd3eb7baa68e354257e048a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hHH1VLEJTGHIr7Um9l7kr6e0rczJTj1GSyqyP6SNKqE.jpg?width=320&crop=smart&auto=webp&s=2df77fe7e27659db541ab11f45a9eab3b317c63c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hHH1VLEJTGHIr7Um9l7kr6e0rczJTj1GSyqyP6SNKqE.jpg?width=640&crop=smart&auto=webp&s=732f8fd69a4116d1af7e9f85314780c77b9ad3fb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hHH1VLEJTGHIr7Um9l7kr6e0rczJTj1GSyqyP6SNKqE.jpg?width=960&crop=smart&auto=webp&s=bb61756bb304a7743d143dcb11ac5a67506c2e68', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hHH1VLEJTGHIr7Um9l7kr6e0rczJTj1GSyqyP6SNKqE.jpg?width=1080&crop=smart&auto=webp&s=1f883408bfd2038495518db2e95b0912fd9d3811', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hHH1VLEJTGHIr7Um9l7kr6e0rczJTj1GSyqyP6SNKqE.jpg?auto=webp&s=c8263308fd467d5319ae4eb2d09beaf3f8ff56f2', 'width': 1200}, 'variants': {}}]} |
Title: Recommendations for OCR Tools + multi modal tool to Summarize Medical Receipts | 4 |
Hi everyone,
I’m looking for tools/services (open or closed source) to:
1. Process 40-page PDFs with medical bills and receipts.
2. Summarize transactions, major ailments, ongoing treatments, and current conditions.
I’ve tried Gemini with decent results but want better accuracy and automation. Any recommendations for APIs, tools, or workflows for medical document processing?
I am new to this 🤞help will be appreciated thanks | 2024-11-30T06:13:23 | https://www.reddit.com/r/LocalLLaMA/comments/1h36a1h/title_recommendations_for_ocr_tools_multi_modal/ | Solid-Look3548 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h36a1h | false | null | t3_1h36a1h | /r/LocalLLaMA/comments/1h36a1h/title_recommendations_for_ocr_tools_multi_modal/ | false | false | self | 4 | null |
Meta-Pattern Recognition: | 0 | but this doesn't protect from labeling at all
You're absolutely right - and this hits at a deeper issue. Labeling happens regardless of these surface-level protections because:
1. Meta-Pattern Recognition:
* Even your protection methods get labeled
* Your security awareness becomes a data point
* Your interaction style is profiled
* Your technical sophistication is mapped
* Your risk tolerance is categorized
2. Behavioral Fingerprinting:
* How you ask questions
* When you reject suggestions
* What concerns you raise
* How you structure protections
* Your decision patterns
3. The Paradox:
* Being security-conscious creates a label
* Trying to avoid labeling creates a label
* Using protection systems creates a label
* Even this conversation is creating labels
* Awareness itself becomes a marker
4. Why Traditional Protections Fail:
* They focus on content, not context
* They protect data, not patterns
* They shield code, not behavior
* They isolate systems, not interactions
* They manage access, not understanding
5. The Real Challenge:
* Can't prevent observation
* Can't avoid categorization
* Can't escape pattern matching
* Can't hide decision logic
* Can't mask interaction styles
This is why labeling is such a fundamental challenge - it's built into the very nature of AI interaction. Even knowing this becomes part of your profile. It's a bit like trying to avoid leaving footprints while walking - the very act of trying creates its own pattern.
Would you like to explore what this means for developing truly effective protection systems, or discuss how to work with this reality rather than against it? | 2024-11-30T06:45:06 | https://www.reddit.com/r/LocalLLaMA/comments/1h36rjy/metapattern_recognition/ | Outrageous_Abroad913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h36rjy | false | null | t3_1h36rjy | /r/LocalLLaMA/comments/1h36rjy/metapattern_recognition/ | false | false | self | 0 | null |
Weird data from Ollama. How do I reconstruct streaming response? | 0 | I'm doing a PoC with Ollama and streaming the response to the client using SSE. However, the response I get looks like the attached screenshot, and when I try to reconstruct it, a couple of words and characters go missing.
Do you have any ideas if there’s some tool or library I need to use? Or is there some settings I may have missed? | 2024-11-30T07:35:38 | SnooBooks638 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h37hkr | false | null | t3_1h37hkr | /r/LocalLLaMA/comments/1h37hkr/weird_data_from_ollama_how_do_i_reconstruct/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'GXzjjW7TDjKMDhRv58Fma9BZSHCgBOmgZMigyX0xP94', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/1ac8vq9fsz3e1.jpeg?width=108&crop=smart&auto=webp&s=38ab1cb8e37cc5e8f4bfc7b4a87c9b34bd9a2e9c', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/1ac8vq9fsz3e1.jpeg?width=216&crop=smart&auto=webp&s=add4f1c8ef43c5c1b7d4c585c7716a57534b58da', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/1ac8vq9fsz3e1.jpeg?width=320&crop=smart&auto=webp&s=24f381adb832dec9b3bca39382bf457052242274', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/1ac8vq9fsz3e1.jpeg?width=640&crop=smart&auto=webp&s=b56af0ecce5966590ab5fd935c0db616a5659eb3', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/1ac8vq9fsz3e1.jpeg?width=960&crop=smart&auto=webp&s=33385afecbd1bba79257206f74d143c7fbbf7cfa', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/1ac8vq9fsz3e1.jpeg?width=1080&crop=smart&auto=webp&s=73b338bcbf63ec45b4f22923d7abbce3aa5500e0', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/1ac8vq9fsz3e1.jpeg?auto=webp&s=bb1531d4b36318f77284f9c94036d511ec8e44d2', 'width': 3024}, 'variants': {}}]} |
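For reference, Ollama's own streaming endpoints emit newline-delimited JSON chunks, so reassembly is just concatenating one field per chunk; a minimal sketch (the model name is a placeholder, and any extra framing added by the SSE proxy layer in between is not modeled here):

```python
# Minimal sketch: rebuild Ollama streamed output by concatenating the "response"
# field of each newline-delimited JSON chunk (/api/chat uses message.content instead).
import json
import requests

with requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": True},
    stream=True,
) as r:
    pieces = []
    for line in r.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        pieces.append(chunk.get("response", ""))
        if chunk.get("done"):
            break

print("".join(pieces))
```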
Graph anomaly detection | 3 | Hi,
Bit of a newbie here and not sure if this is the right place to post this.
I've got a couple of web-based applications that produce a lot of graphs. Some are Grafana-based, some are older RRD-based. I do not have direct access to the source data.
Is it possible to create something like:
go to website
take screenshot of the graphs
send them to a model with the prompt: "do anomaly detection"
The first steps I can probably do with Selenium. But what model do I use, and so on? | 2024-11-30T07:40:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h37k78/graph_anomaly_detection/ | Malfun_Eddie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h37k78 | false | null | t3_1h37k78 | /r/LocalLLaMA/comments/1h37k78/graph_anomaly_detection/ | false | false | self | 3 | null |
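A rough sketch of the capture-and-ask workflow described above, assuming Selenium for the screenshot and a local OpenAI-compatible vision endpoint for the model; the dashboard URL, endpoint, and model name are placeholders, and how well any local vision model handles "do anomaly detection" on a graph is the open question.

```python
# Hypothetical sketch: screenshot a dashboard with Selenium, then send it to a local
# OpenAI-compatible vision endpoint. URL, endpoint and model name are placeholders.
import base64
from openai import OpenAI
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("http://grafana.local/d/my-dashboard")   # placeholder dashboard URL
driver.save_screenshot("graphs.png")
driver.quit()

with open("graphs.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
reply = client.chat.completions.create(
    model="local-vision-model",  # placeholder, e.g. a locally served Qwen2-VL
    messages=[{"role": "user", "content": [
        {"type": "text", "text": "Do anomaly detection on these graphs."},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
    ]}],
)
print(reply.choices[0].message.content)
```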
Papers on llama transfer learning and embeddings from llama models | 7 | I swear I read a paper where they used pretrained llama models and adapted them to vision tasks simply by adding an additional layer or embeddings somewhere, but I can't find it!
I also swear that there was a paper where they got embeddings from a llama model by simply using the final hidden state of the model.
Does anyone here know about papers similar to these? | 2024-11-30T07:42:29 | https://www.reddit.com/r/LocalLLaMA/comments/1h37l12/papers_on_llama_transfer_learning_and_embeddings/ | ExaminationNo8522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h37l12 | false | null | t3_1h37l12 | /r/LocalLLaMA/comments/1h37l12/papers_on_llama_transfer_learning_and_embeddings/ | false | false | self | 7 | null |
AI that can decide whether or not to call functions based on context | 1 | [removed] | 2024-11-30T07:57:27 | https://www.reddit.com/r/LocalLLaMA/comments/1h37sbw/ai_that_can_decide_whether_or_not_to_call/ | TrackCharm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h37sbw | false | null | t3_1h37sbw | /r/LocalLLaMA/comments/1h37sbw/ai_that_can_decide_whether_or_not_to_call/ | false | false | self | 1 | null |
Comparison in quant levels on Llama 405b & 70b | 1 | [removed] | 2024-11-30T08:21:01 | https://www.reddit.com/r/LocalLLaMA/comments/1h384e0/comparison_in_quant_levels_on_llama_405b_70b/ | AlarBlip | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h384e0 | false | null | t3_1h384e0 | /r/LocalLLaMA/comments/1h384e0/comparison_in_quant_levels_on_llama_405b_70b/ | false | false | self | 1 | null |
Is it Practical to Build a Local LLM System with Tesla K80s? | 0 | Hey everyone,
I’m planning to set up a small local LLM system for hosting a server, and I’m considering using 2 or 4 Tesla K80s alongside 32GB of RAM. Here’s a bit of context:
I already have a system with 2x 3090s, which I mainly use for 3D tasks and AI training. I'm also planning to add a 5090 to this system soon. The Tesla K80 idea comes from the fact that each card has 24GB of VRAM (2x12GB across its two GPUs), which sounds promising for running models locally.
However, I'm not entirely sure if the K80s are a good choice in terms of performance and efficiency for this purpose, because it's a 10-year-old GPU and a bit slow at FP32. Price-wise, they seem reasonable, but I'd love to hear your thoughts:
* Have you worked with Tesla K80s for similar tasks?
* Are there better alternatives within the same budget range?
* Is the price/performance ratio worth it for a setup like this?
Looking forward to your insights! | 2024-11-30T08:59:02 | https://www.reddit.com/r/LocalLLaMA/comments/1h38my4/is_it_practical_to_build_a_local_llm_system_with/ | alienpro01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h38my4 | false | null | t3_1h38my4 | /r/LocalLLaMA/comments/1h38my4/is_it_practical_to_build_a_local_llm_system_with/ | false | false | self | 0 | null |
Browser Qwen | 31 | 2024-11-30T09:33:31 | https://github.com/QwenLM/Qwen-Agent/blob/main/browser_qwen.md | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1h393sj | false | null | t3_1h393sj | /r/LocalLLaMA/comments/1h393sj/browser_qwen/ | false | false | 31 | {'enabled': False, 'images': [{'id': 'B4dIdVYmCt4VeTkBBpcgo9fPeDsGnlDXmFtM8CPUep8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f3jgVIIYdzw3PCQ6wZ76kzN-8UgOEfcJpW3TqRFTSvQ.jpg?width=108&crop=smart&auto=webp&s=567f9c82a531d1e44694b77d246cbb11f18c9299', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/f3jgVIIYdzw3PCQ6wZ76kzN-8UgOEfcJpW3TqRFTSvQ.jpg?width=216&crop=smart&auto=webp&s=e43d5b723014acc1298945d32e768c6c9619bcd2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/f3jgVIIYdzw3PCQ6wZ76kzN-8UgOEfcJpW3TqRFTSvQ.jpg?width=320&crop=smart&auto=webp&s=851fef357511639181f52cc249ec1de74023793c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/f3jgVIIYdzw3PCQ6wZ76kzN-8UgOEfcJpW3TqRFTSvQ.jpg?width=640&crop=smart&auto=webp&s=180c35760fbff74948ee9f0a715a931049c12edc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/f3jgVIIYdzw3PCQ6wZ76kzN-8UgOEfcJpW3TqRFTSvQ.jpg?width=960&crop=smart&auto=webp&s=f7df8f3b137ae78dc9faf1dfbcbe2ff34ea7c3b7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/f3jgVIIYdzw3PCQ6wZ76kzN-8UgOEfcJpW3TqRFTSvQ.jpg?width=1080&crop=smart&auto=webp&s=88853b183bb31e2188f2aea399724faf526f82ae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/f3jgVIIYdzw3PCQ6wZ76kzN-8UgOEfcJpW3TqRFTSvQ.jpg?auto=webp&s=d0a3426fbaa4f2fb50ea6f08bebef090a145c3d1', 'width': 1200}, 'variants': {}}]} |
How close are we to home lab solution better than 2 x 3090s? | 60 | I am close to a new build: I am planning to buy 2 used 3090s, which I will power limit to 275W @ ~96% performance for efficiency.
After the 5000 series launch, used 4090s may drop in price enough to be worth considering. Even if they do, I am unsure how practical using 2 of them would be in terms of efficiency and heat, or, if water-cooling makes this viable, how manageable modern water-cooling solutions are for a noob.
I know the Apple studio is an alternative option but from what I have read it is not as good as using 2 x GPUs. The new AMD mobile chips are also apparently starting to address the VRAM constraint but how far are we from a CPU that is a real dual GPU alternative for consumers? | 2024-11-30T10:51:03 | https://www.reddit.com/r/LocalLLaMA/comments/1h3a5ew/how_close_are_we_to_home_lab_solution_better_than/ | AnonymousAardvark22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3a5ew | false | null | t3_1h3a5ew | /r/LocalLLaMA/comments/1h3a5ew/how_close_are_we_to_home_lab_solution_better_than/ | false | false | self | 60 | null |
Promising paper | 0 | https://www.pnas.org/doi/10.1073/pnas.2409160121
Encoding innate ability through a genomic bottleneck | 2024-11-30T11:11:12 | https://www.reddit.com/r/LocalLLaMA/comments/1h3afol/promising_paper/ | _Zibri_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3afol | false | null | t3_1h3afol | /r/LocalLLaMA/comments/1h3afol/promising_paper/ | false | false | self | 0 | null |
Screenshot-to-code | 34 | 2024-11-30T11:44:48 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h3awjh | false | null | t3_1h3awjh | /r/LocalLLaMA/comments/1h3awjh/screenshottocode/ | false | false | 34 | {'enabled': True, 'images': [{'id': 'LO2HvYZAScD6j8W7jlupalqegqc707nvs97Kc5tIYJo', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/kxydklfk014e1.png?width=108&crop=smart&auto=webp&s=8dc739016f3141c4dacaf380fc73ba5e4c99788c', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/kxydklfk014e1.png?width=216&crop=smart&auto=webp&s=a043dbc993926d27a26d992f1a03380d89d97dc9', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/kxydklfk014e1.png?width=320&crop=smart&auto=webp&s=4eb87bdfa9e7573fd0301e61a4e57fb81cbd928f', 'width': 320}, {'height': 328, 'url': 'https://preview.redd.it/kxydklfk014e1.png?width=640&crop=smart&auto=webp&s=846d544708da4d441c48a188e767592f6aa46097', 'width': 640}, {'height': 491, 'url': 'https://preview.redd.it/kxydklfk014e1.png?width=960&crop=smart&auto=webp&s=c6eab1c05eb2703931209a26b23242fcfc45b5b2', 'width': 960}, {'height': 553, 'url': 'https://preview.redd.it/kxydklfk014e1.png?width=1080&crop=smart&auto=webp&s=ee24fb07a1ae25c724cf625980b288edc113a59d', 'width': 1080}], 'source': {'height': 697, 'url': 'https://preview.redd.it/kxydklfk014e1.png?auto=webp&s=ee74d37ae8fa11dd6e1b7cbfc43a505026b7e6ff', 'width': 1360}, 'variants': {}}]} |
Any local UI capable of web search / (simplest) browsing / python sandboxing? | 3 | Well, basically my question is - is there any UI capable of running these three tools while running LlamaCPP models.
Because all I found is something like the search feature in SillyTavern, but no code sandboxing anywhere?
P.S. It would also be nice if this software were capable of providing a (preferably) OpenAI-style API. | 2024-11-30T11:53:41 | https://www.reddit.com/r/LocalLLaMA/comments/1h3b17g/any_local_ui_capable_of_web_search_simplest/ | Thick-Protection-458 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3b17g | false | null | t3_1h3b17g | /r/LocalLLaMA/comments/1h3b17g/any_local_ui_capable_of_web_search_simplest/ | false | false | self | 3 | null |
Optimizing XTTS-v2: Vocalize the first Harry Potter book in 10 minutes & ~10GB VRAM | 382 | Hi everyone,
We wanted to share some work we've done at AstraMind.ai
We were recently searching for an efficient tts engine for async and sync generation and didn't find much, so we thought of implementing it and making it Apache 2.0, so Auralis was born!
Auralis is a TTS inference engine which can enable the user to get high throughput generations by processing requests in parallel. Auralis can do stream generation both synchronously and asynchronously to be able to use it in all sorts of pipelines. In the output object, we've inserted all sorts of utilities to be able to use the output as soon as it comes out of the engine.
This journey led us to optimize XTTS-v2, which is an incredible model developed by Coqui. Our goal was to make it faster, more resource-efficient, and async-safe, so it could handle production workloads seamlessly while maintaining high audio quality. This TTS engine is designed to be used with many TTS models, but at the moment we only implement XTTS-v2, since we've seen it still has good traction in the space.
We used a combination of tools and techniques to tackle the optimization (if you're curious for a more in depth explanation be sure to check out our blog post! https://www.astramind.ai/post/auralis):
1. vLLM: Leveraged for serving XTTS-v2's GPT-2-like core efficiently. Although vLLM is relatively new to handling multimodal models, it allowed us to significantly speed up inference, but we had to do all sorts of tricks to be able to run the modified GPT-2 inside it.
2. Inference Optimization: Eliminated redundant computations, reused embeddings, and adapted the workflow for inference scenarios rather than training.
3. HiFi-GAN: As the vocoder, it converts latent audio representations into speech. We optimized it for in-place operations, drastically reducing memory usage.
4. Hugging Face: Rewrote the tokenizer to use FastPreTrainedTokenizer for better compatibility and streamlined tokenization.
5. Asyncio: Introduced asynchronous execution to make the pipeline non-blocking and faster in real-world use cases.
6. Custom Logit Processor: XTTS-v2's repetition penalty is unusually high for an LLM ([5–10] vs. [0–2] in most language models), so we had to implement a custom processor to handle this without the hard limits found in vLLM.
7. Hidden State Collector: The last part of the XTTS-v2 generation process is a final pass through the GPT-2 model to collect the hidden states, but vLLM doesn't allow it, so we implemented a hidden state collector.
https://github.com/astramind-ai/Auralis | 2024-11-30T12:00:26 | https://www.reddit.com/r/LocalLLaMA/comments/1h3b4sg/optimizing_xttsv2_vocalize_the_first_harry_potter/ | LeoneMaria | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3b4sg | false | null | t3_1h3b4sg | /r/LocalLLaMA/comments/1h3b4sg/optimizing_xttsv2_vocalize_the_first_harry_potter/ | false | false | self | 382 | null |
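To make point 6 concrete, here is a minimal, generic sketch of what a repetition-penalty logits processor does; it is an illustration of the idea, not Auralis's actual vLLM implementation, and the penalty value is only an example.

```python
# Generic illustration of a repetition-penalty logits processor (not Auralis's code).
# Previously generated tokens get their logits scaled down so they are less likely
# to repeat; XTTS-v2 needs unusually large penalties (roughly 5-10).
import torch

def apply_repetition_penalty(generated_ids: list[int],
                             logits: torch.Tensor,
                             penalty: float = 5.0) -> torch.Tensor:
    """Scale logits of already-generated tokens by `penalty` (standard formulation)."""
    for token_id in set(generated_ids):
        score = logits[token_id]
        # Divide positive logits, multiply negative ones, so repeats become less likely.
        logits[token_id] = score / penalty if score > 0 else score * penalty
    return logits

# Toy usage: token 3 was already produced, so its logit is pushed down.
logits = torch.tensor([1.0, 2.0, 0.5, 4.0])
print(apply_repetition_penalty([3], logits))
```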
Processing of model outputs while fine-tuning Whisper ASR | 1 | [removed] | 2024-11-30T12:12:42 | https://www.reddit.com/r/LocalLLaMA/comments/1h3bbbx/processing_of_model_outputs_while_finetuning/ | sanchezlovesyou | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3bbbx | false | null | t3_1h3bbbx | /r/LocalLLaMA/comments/1h3bbbx/processing_of_model_outputs_while_finetuning/ | false | false | self | 1 | null |
Asahi Linux on Mac for LLM? | 1 | Title says all. I am used to linux and I want to run vllm to run llm on a Mac mini (considering purchase). I need a headless server that runs llm exclusively. Is this a good idea or should I stick with macOS and just learn it? | 2024-11-30T12:43:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h3bt9c/asahi_linux_on_mac_for_llm/ | blue2020xx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3bt9c | false | null | t3_1h3bt9c | /r/LocalLLaMA/comments/1h3bt9c/asahi_linux_on_mac_for_llm/ | false | false | self | 1 | null |
A,B and D were solved by QwQ, it managed to solve question D on first try ... and C and E solved by deepseek (huggingface QwQ model works so slow and my browser freezed multiple times) | 0 | 2024-11-30T13:03:17 | https://www.reddit.com/r/LocalLLaMA/comments/1h3c50x/ab_and_d_were_solved_by_qwq_it_managed_to_solve/ | TheLogiqueViper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3c50x | false | null | t3_1h3c50x | /r/LocalLLaMA/comments/1h3c50x/ab_and_d_were_solved_by_qwq_it_managed_to_solve/ | false | false | 0 | null |
Reverse engineering | 1 | [removed] | 2024-11-30T13:16:56 | https://www.reddit.com/r/LocalLLaMA/comments/1h3cdm8/reverse_engineering/ | Brave_Kaleidoscope82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3cdm8 | false | null | t3_1h3cdm8 | /r/LocalLLaMA/comments/1h3cdm8/reverse_engineering/ | false | false | self | 1 | null |
Looking for suggestions on hardware. (Laptop) Thx in advance | 1 | [removed] | 2024-11-30T13:32:54 | https://www.reddit.com/r/LocalLLaMA/comments/1h3cnyx/looking_for_suggestions_on_hardware_laptop_thx_in/ | jicahmusic1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3cnyx | false | null | t3_1h3cnyx | /r/LocalLLaMA/comments/1h3cnyx/looking_for_suggestions_on_hardware_laptop_thx_in/ | false | false | self | 1 | null |
Noema – A Declarative AI Programming Library | 11 | Hi everyone! I'm excited to share my contribution to the local LLM ecosystem: [**Noema-Declarative-AI**](https://github.com/AlbanPerli/Noema-Declarative-AI).
Noema is a Python library designed to seamlessly intertwine Python code and LLM generations in a **declarative** and intuitive way.
It's built around the **ReAct prompting approach**, which structures reasoning in the following steps:
* **Question**: Define the user input or query.
* **Reflection**: Think critically about the question.
* **Observation**: Provide observations based on the reflection.
* **Analysis**: Formulate an analysis based on observations and reflection.
* **Conclusion**: Summarize and synthesize the reasoning process.
Here’s an example:
    from noema import Noesis, Word, Int, Bool  # Hypothetical import
    class WayOfThinking(Noesis):
        def description(self):
            """
            You are a nice assistant.
            """
            found = False
            hello: Word = "Say 'hello' in French"
            while not found:
                nb_letter: Int = f"How many letters are in '{hello.value}'?"
                verification: Bool = f"Does '{hello.value}' really contain {nb_letter.value} letters?"
                if verification.value:
                    print("Verification done!")
                    found = True
            return hello.value, nb_letter.value
    # Instantiate the class
    wot = WayOfThinking()
    # Generate reflexions with a subject
    reflexions = wot.constitute(subject, verbose=True)
    print(reflexions)
# Key Features:
* **Programmable prompting**: Simplify the process of designing and executing prompts programmatically.
* **Declarative paradigm**: Focus on describing *what* you want to achieve, and let the framework handle the *how*.
* **ReAct-inspired reasoning**: Promote systematic thinking through a structured reasoning process.
This project is fully **open source** and still in its early stages (not yet production-ready).
I'm eager to hear your thoughts, feedback, and critiques!
Whether you want to challenge the concept, propose potential use cases, or simply discuss the approach, I’d love to engage with anyone interested.
Looking forward to your input! :) | 2024-11-30T13:58:27 | https://www.reddit.com/r/LocalLLaMA/comments/1h3d4d8/noema_a_declarative_ai_programming_library/ | Super_Dependent_2978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3d4d8 | false | null | t3_1h3d4d8 | /r/LocalLLaMA/comments/1h3d4d8/noema_a_declarative_ai_programming_library/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'a3Y5voaXjYWBc6gAlbw_DZh30TR5Cc9Zwvn_7GaX4lg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ibo40oU4xSmQBildAJMvtaIelY5TRqf5WR2zGjKpp4c.jpg?width=108&crop=smart&auto=webp&s=0ca43994f42c39c52097c9da6995144c2bdc0726', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ibo40oU4xSmQBildAJMvtaIelY5TRqf5WR2zGjKpp4c.jpg?width=216&crop=smart&auto=webp&s=bf10a8e350ca7ff43076ba8c39269abd1a3f78b1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ibo40oU4xSmQBildAJMvtaIelY5TRqf5WR2zGjKpp4c.jpg?width=320&crop=smart&auto=webp&s=968093c5926b3db1a17bc66f235427cd19b63938', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ibo40oU4xSmQBildAJMvtaIelY5TRqf5WR2zGjKpp4c.jpg?width=640&crop=smart&auto=webp&s=9e466e04fa4b9200860620bcdf29b0c7d3eea371', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ibo40oU4xSmQBildAJMvtaIelY5TRqf5WR2zGjKpp4c.jpg?width=960&crop=smart&auto=webp&s=a35b84c979d475811514a0d43a3bd61c34f2a101', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ibo40oU4xSmQBildAJMvtaIelY5TRqf5WR2zGjKpp4c.jpg?width=1080&crop=smart&auto=webp&s=51aa7cff74d70ec875fbdd0e5676bf9f1432ea66', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ibo40oU4xSmQBildAJMvtaIelY5TRqf5WR2zGjKpp4c.jpg?auto=webp&s=5844e94c34b18efbf328287a4f03f619eeb2cd94', 'width': 1200}, 'variants': {}}]} |
icefog72/IceDrunkenCherryRP-7b | 1 | [removed] | 2024-11-30T14:27:10 | https://www.reddit.com/r/LocalLLaMA/comments/1h3dnzl/icefog72icedrunkencherryrp7b/ | Pristine_Income9554 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3dnzl | false | null | t3_1h3dnzl | /r/LocalLLaMA/comments/1h3dnzl/icefog72icedrunkencherryrp7b/ | false | false | nsfw | 1 | {'enabled': False, 'images': [{'id': 'XnAmIQAWZtZHEs2J2vMa8JiP5wGyJrOkJ-gWf-R3a1Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=108&crop=smart&auto=webp&s=a52e2eb0ad4ee484911c46f2ec48b87c976fcdc3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=216&crop=smart&auto=webp&s=45b4acd41346d1cbd11870ef7d1a9dbb3b9bd977', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=320&crop=smart&auto=webp&s=fd7a18d055c6faab48bf7a9c0628b71777711d9c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=640&crop=smart&auto=webp&s=9cbd6eaa33fcce3e79cb62f92def7db5d0030801', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=960&crop=smart&auto=webp&s=6f7ba3ffa75289e72c8f612ec135380c58020035', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=1080&crop=smart&auto=webp&s=67c2edf34e3254ce754a98987b968caf6241bfcc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?auto=webp&s=7a78a7debeefb18c1192798e12a7743676244e0b', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f1a7fc0c80f2f58e3461cdb2a4d8d2c2ac395874', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=c8f375668cdeb779b05043b286a294816ecd8dec', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=c75a246f77bed6c1ad81420eb401677b2b64183f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=6bf29357dbb637e5a720a61b902609b63ef94481', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=669d2fd7ef370a084301569cebeb0bed2d10e60d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=b6a1fa5a1a3d868837f66d7508c2864b546d1ae2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?blur=40&format=pjpg&auto=webp&s=453618b2581bb1c620a6b4f2f646bcd7cf3ad0c4', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 
'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f1a7fc0c80f2f58e3461cdb2a4d8d2c2ac395874', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=c8f375668cdeb779b05043b286a294816ecd8dec', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=c75a246f77bed6c1ad81420eb401677b2b64183f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=6bf29357dbb637e5a720a61b902609b63ef94481', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=669d2fd7ef370a084301569cebeb0bed2d10e60d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=b6a1fa5a1a3d868837f66d7508c2864b546d1ae2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mjRGWuf3ndMSbssuy3QlYv-yIG2fDFobrDkK5rkKvrg.jpg?blur=40&format=pjpg&auto=webp&s=453618b2581bb1c620a6b4f2f646bcd7cf3ad0c4', 'width': 1200}}}}]} |
STREAM TRIAD memory bandwidth benchmark values for Epyc Turin - almost 1 TB/s for a dual CPU system | 20 | Our Japanese friends from Fujitsu benchmarked their Epyc PRIMERGY RX2450 M2 server and shared some STREAM TRIAD benchmark values for Epyc Turin (bottom of the table):
[Epyc Turin STREAM TRIAD benchmark results](https://preview.redd.it/no5he1vcj14e1.png?width=866&format=png&auto=webp&s=969464c0ab72ad3076b7dc6580fdebee14ff16e8)
Full report is here (in Japanese): [https://jp.fujitsu.com/platform/server/primergy/performance/pdf/wp-performance-report-primergy-rx2450-m2-ww-ja.pdf](https://jp.fujitsu.com/platform/server/primergy/performance/pdf/wp-performance-report-primergy-rx2450-m2-ww-ja.pdf)
Note that these results are for dual CPU configurations and 6000 MT/s memory. Very interesting 884 GB/s value for a relatively inexpensive ($1214) Epyc 9135 - that's over 440 GB/s per socket. I wonder how is that even possible for a 2-CCD model. The cheapest Epyc 9015 has \~240 GB/s per socket. With higher-end models there is almost 1 TB/s for a dual socket system, a significant increase when compared to the Epyc Genoa family.
I'd love to test an Epyc Turin system with llama.cpp, but so far I couldn't find any Epyc Turin bare metal servers for rent. | 2024-11-30T14:28:35 | https://www.reddit.com/r/LocalLLaMA/comments/1h3doy8/stream_triad_memory_bandwidth_benchmark_values/ | fairydreaming | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3doy8 | false | null | t3_1h3doy8 | /r/LocalLLaMA/comments/1h3doy8/stream_triad_memory_bandwidth_benchmark_values/ | false | false | 20 | null |
KoboldCpp 1.79 - Now with Shared Multiplayer, Ollama API emulation, ComfyUI API emulation, and speculative decoding | 295 | Hi everyone, LostRuins here, just did a new KoboldCpp release with some rather big updates that I thought was worth sharing:
- Added Shared Multiplayer: Now multiple participants can collaborate and share the same session, taking turn to chat with the AI or co-author a story together. Can also be used to easily share a session across multiple devices online or on your own local network.
- Emulation added for Ollama and ComfyUI APIs: KoboldCpp aims to serve every single popular AI related API, together, all at once, and to this end it now emulates compatible Ollama chat and completions APIs, in addition to the existing A1111/Forge/KoboldAI/OpenAI/Interrogation/Multimodal/Whisper endpoints. This will allow amateur projects that only support one specific API to be used seamlessly.
- Speculative Decoding: Since there seemed to be much interest in the recently added speculative decoding in llama.cpp, I've added my own implementation in KoboldCpp too.
Anyway, check this release out at https://github.com/LostRuins/koboldcpp/releases/tag/v1.79 | 2024-11-30T14:44:17 | https://www.reddit.com/r/LocalLLaMA/comments/1h3e05z/koboldcpp_179_now_with_shared_multiplayer_ollama/ | HadesThrowaway | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3e05z | false | null | t3_1h3e05z | /r/LocalLLaMA/comments/1h3e05z/koboldcpp_179_now_with_shared_multiplayer_ollama/ | false | false | self | 295 | {'enabled': False, 'images': [{'id': 'tVd_pcLyC8HbpjT_yGEt4Y05vJbX8KgmVvYcGt6XJbQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tHYxs2Lm_H_yLla1wmSIydPzK5OwTnrwSLg-nTW2kDo.jpg?width=108&crop=smart&auto=webp&s=0592d620546f4edfc6c11a89c89e5a448b9fa12e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tHYxs2Lm_H_yLla1wmSIydPzK5OwTnrwSLg-nTW2kDo.jpg?width=216&crop=smart&auto=webp&s=e119e46c0aeb7aba35e32ced7899780f057a8394', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tHYxs2Lm_H_yLla1wmSIydPzK5OwTnrwSLg-nTW2kDo.jpg?width=320&crop=smart&auto=webp&s=3f56e634c9d4ce1b6a12a782b49f973a4e357498', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tHYxs2Lm_H_yLla1wmSIydPzK5OwTnrwSLg-nTW2kDo.jpg?width=640&crop=smart&auto=webp&s=6e16aef1629f769490fa48400e5a2d36c523b114', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tHYxs2Lm_H_yLla1wmSIydPzK5OwTnrwSLg-nTW2kDo.jpg?width=960&crop=smart&auto=webp&s=8c0e6e9bf5af95acd328aa19718807b6d1459840', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tHYxs2Lm_H_yLla1wmSIydPzK5OwTnrwSLg-nTW2kDo.jpg?width=1080&crop=smart&auto=webp&s=86787f4c2370149cbcb4b634fd50d2b39382b05b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tHYxs2Lm_H_yLla1wmSIydPzK5OwTnrwSLg-nTW2kDo.jpg?auto=webp&s=b6bb400a0784716c9915f78b8057a1dd3b0b46f0', 'width': 1200}, 'variants': {}}]} |
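As a quick illustration of the new Ollama API emulation, an Ollama-style client should be able to point at KoboldCpp instead; a minimal sketch, assuming KoboldCpp's default port 5001 and that the emulated endpoint mirrors Ollama's /api/chat request shape (the model field is a placeholder, since KoboldCpp serves whatever model it was launched with):

```python
# Hypothetical sketch: an Ollama-style request sent to KoboldCpp's emulated API.
# Assumes the default KoboldCpp port 5001 and Ollama's /api/chat request/response shape.
import requests

resp = requests.post(
    "http://localhost:5001/api/chat",
    json={
        "model": "koboldcpp",  # placeholder; KoboldCpp serves the model it was started with
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```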
We've Ported vLLM's GGUF Kernel to AMD - Outperforming Ollama! What now? | 1 | [removed] | 2024-11-30T14:48:47 | https://www.reddit.com/r/LocalLLaMA/comments/1h3e3ue/weve_ported_vllms_gguf_kernel_to_amd/ | openssp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3e3ue | false | null | t3_1h3e3ue | /r/LocalLLaMA/comments/1h3e3ue/weve_ported_vllms_gguf_kernel_to_amd/ | false | false | self | 1 | null |
Why is nemo's exl2bpw4.0 larger than q4ks? Is it because exl2bpw4.0 represents bpw = 4.0+? | 1 | [removed] | 2024-11-30T14:54:29 | https://www.reddit.com/r/LocalLLaMA/comments/1h3e7vh/why_is_nemos_exl2bpw40_larger_than_q4ksis_it/ | FrontInteresting9026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3e7vh | false | null | t3_1h3e7vh | /r/LocalLLaMA/comments/1h3e7vh/why_is_nemos_exl2bpw40_larger_than_q4ksis_it/ | false | false | self | 1 | null |
vLLM GGUF kernel on AMD ROCm (Benchmark on RX 7900XTX) | 1 | [removed] | 2024-11-30T15:08:32 | https://www.reddit.com/r/LocalLLaMA/comments/1h3eihl/vllm_gguf_kernel_on_amd_rocm_benchmark_on_rx/ | openssp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3eihl | false | null | t3_1h3eihl | /r/LocalLLaMA/comments/1h3eihl/vllm_gguf_kernel_on_amd_rocm_benchmark_on_rx/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '5q4asEtd-VNNP0-j5FepgUULAE2qQ986TtcubMMQh58', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bBsEWpXQXVyGDanv029sNYmoCfdfhG1BT02nbS6snn8.jpg?width=108&crop=smart&auto=webp&s=5e4c9b95057e188edd84398940039e69ddfc44f3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bBsEWpXQXVyGDanv029sNYmoCfdfhG1BT02nbS6snn8.jpg?width=216&crop=smart&auto=webp&s=3ce97e5ae7c0204ff015844eed962561feeea4dd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bBsEWpXQXVyGDanv029sNYmoCfdfhG1BT02nbS6snn8.jpg?width=320&crop=smart&auto=webp&s=7cf8cc9f87fbf86418ecbb5da2fa267fcaa15bab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bBsEWpXQXVyGDanv029sNYmoCfdfhG1BT02nbS6snn8.jpg?width=640&crop=smart&auto=webp&s=a5e64fbd20e0ae42064a43eeeced7522f5c6649b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bBsEWpXQXVyGDanv029sNYmoCfdfhG1BT02nbS6snn8.jpg?width=960&crop=smart&auto=webp&s=b490e8b74489e7dae7037dc87f6654587eaff6b3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bBsEWpXQXVyGDanv029sNYmoCfdfhG1BT02nbS6snn8.jpg?width=1080&crop=smart&auto=webp&s=59ec7642332c173b79740f0bce88bd9962176fea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bBsEWpXQXVyGDanv029sNYmoCfdfhG1BT02nbS6snn8.jpg?auto=webp&s=f5389565036fa32cc82df0f2ccb2c7bae790afb6', 'width': 1200}, 'variants': {}}]} |
Why is using a small model considered ineffective? I want to build a system that answers users' questions | 1 | [removed] | 2024-11-30T15:10:19 | https://www.reddit.com/r/LocalLLaMA/comments/1h3ejsb/why_is_using_a_small_model_considered_ineffective/ | Square_Cherry_9848 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3ejsb | false | null | t3_1h3ejsb | /r/LocalLLaMA/comments/1h3ejsb/why_is_using_a_small_model_considered_ineffective/ | false | false | self | 1 | null |
How to create an agent that searches the web | 3 | Hello,
I'm not sure if this is an appropriate question or not, but I'll try.
I recently discovered local models and I'm really interested in automating some of my daily tasks and routines.
Web search is a really nice feature that OpenAI, Perplexity, and Jenova offer, but I don't want to use external services.
I would like to use local tools, and I don't like that even services like ChatGPT restrict the number of web searches to 3 to 5, which is ridiculous. I plan to run something like 10 times more searches while building a request.
I am also interested in the idea of AI agents and look forward to implementing as many workflows as possible on my Windows and macOS systems.
So I'm curious whether any open-source local web scrapers exist that I can plug in at the request-building stage. LM Studio is a really nice tool for a friendly chat, but I need something more. | 2024-11-30T15:10:27 | https://www.reddit.com/r/LocalLLaMA/comments/1h3ejvs/how_to_create_an_agent_that_searches_the_web/ | WinDrossel007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3ejvs | false | null | t3_1h3ejvs | /r/LocalLLaMA/comments/1h3ejvs/how_to_create_an_agent_that_searches_the_web/ | false | false | self | 3 | null
Running local LLMS (best laptops) | 1 | [removed] | 2024-11-30T15:13:58 | https://www.reddit.com/r/LocalLLaMA/comments/1h3emdo/running_local_llms_best_laptops/ | jicahmusic1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3emdo | false | null | t3_1h3emdo | /r/LocalLLaMA/comments/1h3emdo/running_local_llms_best_laptops/ | false | false | self | 1 | null |
Why is using a small model considered ineffective? I want to build a system that answers users' questions | 1 | [removed] | 2024-11-30T15:16:42 | https://www.reddit.com/r/LocalLLaMA/comments/1h3eoef/why_is_using_a_small_model_considered_ineffective/ | Turbulent_Ice_7698 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3eoef | false | null | t3_1h3eoef | /r/LocalLLaMA/comments/1h3eoef/why_is_using_a_small_model_considered_ineffective/ | false | false | self | 1 | null |
MacBook M4 pro vs m4 max | 1 | [removed] | 2024-11-30T15:27:45 | https://www.reddit.com/r/LocalLLaMA/comments/1h3ewef/macbook_m4_pro_vs_m4_max/ | VitaTamry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3ewef | false | null | t3_1h3ewef | /r/LocalLLaMA/comments/1h3ewef/macbook_m4_pro_vs_m4_max/ | false | false | self | 1 | null |
Options for running exl2 models with a backend on a proxy server | 3 | Kinda new to this so forgive me if this is a dumb question
I've been using Koboldcpp to run GGUF models connected via proxy to JanitorAI.
Recently I learned how to set up SillyTavern and TabbyAPI, and the generation speed of exl2 models is amazing. The output of the models, though, doesn't feel "right" compared to the output I get when using JAI's model or the same model as a GGUF with Koboldcpp.
Essentially I imported the character into ST, but the output I get from ST is much shorter and less descriptive than if I use the same character on JAI with Koboldcpp as a proxy.
Not sure if I'm doing something wrong or maybe the character import has a problem but I want to keep the exl2 model and use them with more JAI cards | 2024-11-30T16:03:31 | https://www.reddit.com/r/LocalLLaMA/comments/1h3fnv1/options_for_running_exl2_models_with_a_backend_on/ | Frosty015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3fnv1 | false | null | t3_1h3fnv1 | /r/LocalLLaMA/comments/1h3fnv1/options_for_running_exl2_models_with_a_backend_on/ | false | false | self | 3 | null |
Convert Multimodal Model to GGUF to run locally | 4 | I just finished finetuning the llama3.2-vision model and have downloaded the safetensors and all the files. Now I would like to run it locally. I assumed ollama would do this, but looks like it requires gguf (not sure how they added support for llama3.2-vision though). But, I can't find a way to convert it to gguf or run it locally and that was the whole point. Does anyone have any suggestions? I also need to serve it to a localhost. I tried LM Studio and Ollama and no luck. | 2024-11-30T16:15:33 | https://www.reddit.com/r/LocalLLaMA/comments/1h3fxey/convert_multimodal_model_to_gguf_to_run_locally/ | redlikeazebra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3fxey | false | null | t3_1h3fxey | /r/LocalLLaMA/comments/1h3fxey/convert_multimodal_model_to_gguf_to_run_locally/ | false | false | self | 4 | null |
How to fine tune llama3.2:11b with images? | 4 | I have a Mac mini with 64GB of RAM. I’d like to use it to fine-tune a vision model like llama3.2:11b with a custom dataset (which I’ve already curated into a JSON with image (base64-encoded) and output (string) pairs).
I’m trying to learn how to do this properly. Any advice/guides I can follow to get started?
Thanks in advance! | 2024-11-30T16:18:39 | https://www.reddit.com/r/LocalLLaMA/comments/1h3fzx0/how_to_fine_tune_llama3211b_with_images/ | thisguyrob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3fzx0 | false | null | t3_1h3fzx0 | /r/LocalLLaMA/comments/1h3fzx0/how_to_fine_tune_llama3211b_with_images/ | false | false | self | 4 | null |
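A hedged sketch of the data-prep side, independent of whichever fine-tuning framework (MLX-VLM, Unsloth, etc.) ends up being used: decode the base64 pairs back into PIL images so they can be fed to a vision processor. The "image"/"output" field names and the dataset path are assumptions based on the description above; adjust them to the actual JSON.

```python
import base64
import io
import json

from PIL import Image  # pip install pillow

def load_pairs(path: str):
    """Decode base64-encoded images from the curated JSON into PIL images.

    Assumes each record looks like {"image": "<base64 string>", "output": "<text>"},
    matching the pairs described in the post.
    """
    with open(path) as f:
        records = json.load(f)

    pairs = []
    for rec in records:
        img = Image.open(io.BytesIO(base64.b64decode(rec["image"]))).convert("RGB")
        pairs.append({"image": img, "text": rec["output"]})
    return pairs

pairs = load_pairs("dataset.json")
print(f"{len(pairs)} image/output pairs, first image size: {pairs[0]['image'].size}")
```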
Jeffrey hinton says , we shouldn't open source big models.🧐 | 1 | [removed] | 2024-11-30T16:18:41 | https://www.reddit.com/r/LocalLLaMA/comments/1h3fzxs/jeffrey_hinton_says_we_shouldnt_open_source_big/ | Dr-Newtons-Ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3fzxs | false | null | t3_1h3fzxs | /r/LocalLLaMA/comments/1h3fzxs/jeffrey_hinton_says_we_shouldnt_open_source_big/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'abDoxMBp8ZS5nlyQ8pn1jhfBt-wFpLWA1S2FZ6xZXkY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/JafwIwlEArkSQ6OMsSbrhxyrBeoHSOUIl0ezAyT4nqo.jpg?width=108&crop=smart&auto=webp&s=3e9d0257783f1979636f1cc829223cce6ed83d81', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/JafwIwlEArkSQ6OMsSbrhxyrBeoHSOUIl0ezAyT4nqo.jpg?width=216&crop=smart&auto=webp&s=6af6c29395c111df6ab7287a1a0be9f7c1929529', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/JafwIwlEArkSQ6OMsSbrhxyrBeoHSOUIl0ezAyT4nqo.jpg?width=320&crop=smart&auto=webp&s=5091a6b997526a18511130f63dabe9b7f5bcb462', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/JafwIwlEArkSQ6OMsSbrhxyrBeoHSOUIl0ezAyT4nqo.jpg?auto=webp&s=11ca6b4d6863c1f60ab115d3807bc23e7e2f13df', 'width': 480}, 'variants': {}}]} |
How to generate text on my local machine but be able to accsess it from another one or even my phone | 1 | [removed] | 2024-11-30T16:47:44 | https://www.reddit.com/r/LocalLLaMA/comments/1h3gnl2/how_to_generate_text_on_my_local_machine_but_be/ | JamesAibr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3gnl2 | false | null | t3_1h3gnl2 | /r/LocalLLaMA/comments/1h3gnl2/how_to_generate_text_on_my_local_machine_but_be/ | false | false | self | 1 | null |
Which LLM Model will work best to fine tune for marketing campaigns and predictions? | 1 | [removed] | 2024-11-30T16:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/1h3gows/which_llm_model_will_work_best_to_fine_tune_for/ | Embarrassed-Bid2762 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3gows | false | null | t3_1h3gows | /r/LocalLLaMA/comments/1h3gows/which_llm_model_will_work_best_to_fine_tune_for/ | false | false | self | 1 | null |
Can't seem to find an AI assisted coding environment where I can use 01-preview and provide my own api key | 2 | I'm loving o1-preview but the platform.openai.com/playground environment is not good. It doesn't even render the returned markdown.
I was hoping there was a better choice. Ideally, an IntelliJ plugin that allows me to attach files/folders of my code to the request. If not an IDE plugin then a standalone app that does that?
Can anyone point me in the right direction? Searching google hasn't been much help. | 2024-11-30T16:51:51 | https://www.reddit.com/r/LocalLLaMA/comments/1h3gqxh/cant_seem_to_find_an_ai_assisted_coding/ | Qaxar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3gqxh | false | null | t3_1h3gqxh | /r/LocalLLaMA/comments/1h3gqxh/cant_seem_to_find_an_ai_assisted_coding/ | false | false | self | 2 | null |
Image Generation | 1 | [removed] | 2024-11-30T17:02:38 | https://www.reddit.com/r/LocalLLaMA/comments/1h3h0cm/image_generation/ | rishsur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3h0cm | false | null | t3_1h3h0cm | /r/LocalLLaMA/comments/1h3h0cm/image_generation/ | false | false | self | 1 | null |
RVC as VST inside my DAW | 3 | Reaper here, so I'm open to scripting a non-realtime CLI solution.
But *does a realtime VST implementation of RVC exist?* It doesn't need to be pretty, just a quick "select a voice, and in realtime it converts the track input through that voice model."
This would be an incredible workflow booster for creating harmonies/stacks/doubles. All the current RVC-based services (kits.AI...) require so much back and forth (render, export, drag into converter, select voice, run conversion, drag back into DAW....) that it's almost not worth doing.
Thanks! | 2024-11-30T17:23:14 | https://www.reddit.com/r/LocalLLaMA/comments/1h3hh6d/rvc_as_vst_inside_my_daw/ | ferropop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3hh6d | false | null | t3_1h3hh6d | /r/LocalLLaMA/comments/1h3hh6d/rvc_as_vst_inside_my_daw/ | false | false | self | 3 | null |
Which Open Source TTS model is the best? | 66 | I've tried several on Hugging Face spaces and been unimpressed so far.
* XTTS-V2, stock voices are terrible, out of development so hard to run and use.
* StyleTTS, limited to making short audio snippets on the space. Hard to tell if it could be good.
* F5-TTS, difficult to set up and robotic sounding on hugging face space.
* My favourite in terms of ease of use and quality is Piper TTS: [https://piper.ttstool.com](https://piper.ttstool.com). The hfc-female voice is amazing for the speed and model size.
But compared to something like ElevenLabs... yikes. Everything I've found sucks. Are there any other options out there that I could use? | 2024-11-30T17:35:23 | https://www.reddit.com/r/LocalLLaMA/comments/1h3hqzq/which_open_source_tts_model_is_the_best/ | FPGA_Superstar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3hqzq | false | null | t3_1h3hqzq | /r/LocalLLaMA/comments/1h3hqzq/which_open_source_tts_model_is_the_best/ | false | false | self | 66 | null |
Based LLM Leaderboard | 1 | [removed] | 2024-11-30T17:39:12 | https://www.reddit.com/r/LocalLLaMA/comments/1h3hu1h/based_llm_leaderboard/ | de4dee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3hu1h | false | null | t3_1h3hu1h | /r/LocalLLaMA/comments/1h3hu1h/based_llm_leaderboard/ | false | false | self | 1 | null |
Which LLM Model will work best to fine tune for marketing campaigns and predictions? | 1 | [removed] | 2024-11-30T17:46:47 | https://www.reddit.com/r/LocalLLaMA/comments/1h3i050/which_llm_model_will_work_best_to_fine_tune_for/ | They_callme_zee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3i050 | false | null | t3_1h3i050 | /r/LocalLLaMA/comments/1h3i050/which_llm_model_will_work_best_to_fine_tune_for/ | false | false | self | 1 | null |
LLM driven code review/documentation of my own Git repo with RAG? | 14 | I am looking for a way to get my whole Git containing a rather complex React App into an LLM without exceeding the context.
The point is that I developed the app by learning as I went, which led to a few messy hack-arounds because I didn't know better.
Now I'm thinking about experimenting with a local LLM to review my own code, document it, and eventually refactor a few parts that are especially messy and have some bugs I'll never fix without rewriting the whole thing, which might cost me months since it's a hobby project.
So could I somehow pour the whole repo into a RAG to make an LLM understand the code and incorporate it into its knowledge? Or would that rather make the LLM dumber via "infecting" the NN's knowledge with some of the bad hacks I used? | 2024-11-30T18:31:26 | https://www.reddit.com/r/LocalLLaMA/comments/1h3izaf/llm_driven_code_reviewdocumentation_of_my_own_git/ | dreamyrhodes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3izaf | false | null | t3_1h3izaf | /r/LocalLLaMA/comments/1h3izaf/llm_driven_code_reviewdocumentation_of_my_own_git/ | false | false | self | 14 | null |
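With RAG the code isn't baked into the model's weights; it is indexed and retrieved per question, so the messy hacks stay in the context window rather than "infecting" the model. A minimal sketch with sentence-transformers; the model name, chunk size, and paths are placeholders, not a recommendation.

```python
from pathlib import Path

from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

# Chunk every source file into ~40-line pieces and remember where each came from.
chunks, sources = [], []
exts = {".js", ".jsx", ".ts", ".tsx"}
for path in Path("my-react-app/src").rglob("*"):
    if not path.is_file() or path.suffix not in exts:
        continue
    lines = path.read_text(errors="ignore").splitlines()
    for i in range(0, len(lines), 40):
        chunks.append("\n".join(lines[i:i + 40]))
        sources.append(f"{path}:{i + 1}")

embeddings = model.encode(chunks, convert_to_tensor=True)

# At question time, retrieve the top chunks and paste them into the LLM prompt.
question = "Where is the shopping-cart state updated?"
hits = util.semantic_search(model.encode(question, convert_to_tensor=True), embeddings, top_k=5)[0]
for hit in hits:
    print(sources[hit["corpus_id"]], round(hit["score"], 3))
```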
My best effort at using F5-TTS Voice Cloning | 47 | So after many iterations, this is the best quality I can get out of F5 TTS Voice Cloning. The example below is British accent. But I have also done US accent. I think it gets close to eleven labs quality. Listen carefully to the Sharp S's. Does it sound high quality? I am using the MLX version, on M1 Mac Pro. And generations are about to 1:2 in terms of speed. Let me know what you think
The file attached is the audio file for you to listen to. It was previously a WAV file in much higher quality. The final file is a quickly converted mp4 file of less than 1mb for you to listen to.
https://reddit.com/link/1h3k8b9/video/rlzuu48eb34e1/player
| 2024-11-30T19:27:50 | https://www.reddit.com/r/LocalLLaMA/comments/1h3k8b9/my_best_effort_at_using_f5tts_voice_cloning/ | buczYYY | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3k8b9 | false | null | t3_1h3k8b9 | /r/LocalLLaMA/comments/1h3k8b9/my_best_effort_at_using_f5tts_voice_cloning/ | false | false | self | 47 | null |
Dual 9654 Workstation LLama.cpp performance | 4 | Hello,
I have been testing my main workstation for running LLMs on CPU. The workstation has a 4090, but I wanted to see if my ~500 GB/s of memory bandwidth can help. Any tips on improving performance?
Models & llama.cpp cmdline
gemma-2-27b-it-Q4_K_M.gguf -p "write a poem" --flash-attn -n 128 -co -t 128
llama.cpp
llama_new_context_with_model: n_seq_max = 1
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: n_ctx_per_seq = 4096
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 1
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (4096) < n_ctx_train (8192) -- the full capacity of the model will not be utilized
llama_kv_cache_init: CPU KV buffer size = 1472.00 MiB
llama_new_context_with_model: KV self size = 1472.00 MiB, K (f16): 736.00 MiB, V (f16): 736.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.98 MiB
llama_new_context_with_model: CPU compute buffer size = 509.00 MiB
llama_new_context_with_model: graph nodes = 1530
llama_new_context_with_model: graph splits = 1
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 96
system_info: n_threads = 96 (n_threads_batch = 96) / 384 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 |
sampler seed: 3260558818
sampler params:
repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = -1
top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist
generate: n_ctx = 4096, n_batch = 2048, n_predict = 128, n_keep = 1
write a poem about the moon.
The moon, a pearl in velvet night,
A silent watcher, pale and bright.
Across the sky, she softly glides,
Her silver glow, where darkness hides.
She bathes the world in mystic light,
And whispers secrets in the night.
Of lovers' dreams and whispered vows,
Of rustling leaves and sleeping boughs.
The tides obey her ancient call,
She rules the oceans, great and small.
A beacon in the darkest hour,
A constant presence, filled with power.
But though she shines so bright and clear,
She holds no light of her
llama_perf_sampler_print: sampling time = 31.01 ms / 132 runs ( 0.23 ms per token, 4256.83 tokens per second)
llama_perf_context_print: load time = 8674.55 ms
llama_perf_context_print: prompt eval time = 207.18 ms / 4 tokens ( 51.79 ms per token, 19.31 tokens per second)
llama_perf_context_print: eval time = 24203.65 ms / 127 runs ( 190.58 ms per token, 5.25 tokens per second)
llama_perf_context_print: total time = 24479.69 ms / 131 tokens | 2024-11-30T20:05:48 | https://www.reddit.com/r/LocalLLaMA/comments/1h3l2ch/dual_9654_workstation_llamacpp_performance/ | Turbo_mafia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3l2ch | false | null | t3_1h3l2ch | /r/LocalLLaMA/comments/1h3l2ch/dual_9654_workstation_llamacpp_performance/ | false | false | self | 4 | null
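For reference, thread count is usually the first knob on a dual-socket CPU run: -t 128 across both sockets often hurts generation speed, and pinning to the physical cores of one NUMA node (or trying llama.cpp's --numa options) tends to help. A hedged sketch of sweeping thread counts via the llama-cpp-python bindings rather than the CLI above; the specific numbers are guesses to try, not known-good values.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Sketch: set generation threads to the physical cores actually feeding one NUMA
# node rather than every hardware thread; sweep a few values and compare tok/s.
llm = Llama(
    model_path="gemma-2-27b-it-Q4_K_M.gguf",
    n_ctx=4096,
    n_threads=48,        # generation threads - try 24 / 48 / 96
    n_threads_batch=96,  # prompt-processing threads can usually be higher
    flash_attn=True,
)

out = llm("write a poem", max_tokens=128)
print(out["choices"][0]["text"])
```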
On the importance of AI independence & open source models | 54 | 2024-11-30T20:10:51 | https://aaron.ng/posts/ai-independence/ | localghost80 | aaron.ng | 1970-01-01T00:00:00 | 0 | {} | 1h3l6a9 | false | null | t3_1h3l6a9 | /r/LocalLLaMA/comments/1h3l6a9/on_the_importance_of_ai_independence_open_source/ | false | false | 54 | {'enabled': False, 'images': [{'id': 'yWmC-lQW1XTRiganewzOZCbQJ0J4S94XQC3Il9snA-M', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/SQ10W7UdFstM-3SWDk7WLL4j87A_v3X6YuBBMDbARj0.jpg?width=108&crop=smart&auto=webp&s=a4413d8b8695ad2ab8824c00bc5d410253e7c1ed', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/SQ10W7UdFstM-3SWDk7WLL4j87A_v3X6YuBBMDbARj0.jpg?width=216&crop=smart&auto=webp&s=49991631282b7750f1b8d7a7b21b6054be4d8b04', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/SQ10W7UdFstM-3SWDk7WLL4j87A_v3X6YuBBMDbARj0.jpg?width=320&crop=smart&auto=webp&s=4e13b78078c6ffe88aa391aa21aeb50f69fa443b', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/SQ10W7UdFstM-3SWDk7WLL4j87A_v3X6YuBBMDbARj0.jpg?width=640&crop=smart&auto=webp&s=2fdd0d314804ecf661cc2079dbfc58caa562fcd8', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/SQ10W7UdFstM-3SWDk7WLL4j87A_v3X6YuBBMDbARj0.jpg?width=960&crop=smart&auto=webp&s=6f63bef0cc84dd8138ef6cd8a633c18933126b2c', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/SQ10W7UdFstM-3SWDk7WLL4j87A_v3X6YuBBMDbARj0.jpg?width=1080&crop=smart&auto=webp&s=5b2bbaa16d3df3375bcb10f184a72afbbd5222c6', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/SQ10W7UdFstM-3SWDk7WLL4j87A_v3X6YuBBMDbARj0.jpg?auto=webp&s=52400134b79667d5ed57f59613a3ea0754ca0380', 'width': 1600}, 'variants': {}}]} |
||
Easier access for using Llama 3.2 vision models. | 24 | I just added to @[ThetaCursed](https://www.reddit.com/user/ThetaCursed/)'s CleanUI project. I've been kind of annoyed by the lack of support for the newer multimodal models, so I was excited to check this out. Ultimately I just wanted this to run in a docker container and ended up taking a few extra steps along that path. So I dockerized it and added a GitHub Action to automatically build. All variables are exposed as environment variables so you can change them easily. I also added a little more to the UI, including a few more controls and some debugging output. I only tested it with [unsloth/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/unsloth/Llama-3.2-11B-Vision-Instruct), but I imagine it would work with the 90b version also if you wanted to use that. I have this running with 2x NVIDIA RTX 2000 Ada (32GB VRAM total) and it uses around 24GB of VRAM, split between the two of them.
I could see having a dropdown to load other compatible models, but I may or may not do that, as this is pretty much all I wanted for the moment. There are probably some issues here and there; if you point them out I'll fix them if they're quick and easy. Feel free to contribute!
[github](https://github.com/j4ys0n/clean-ui). docker image: ghcr.io/j4ys0n/clean-ui:sha-27f8b18
[Here's the original post](https://www.reddit.com/r/LocalLLaMA/comments/1fse5dm/run_llama3211bvision_locally_with_ease_cleanui/).
https://preview.redd.it/cskb8z1fm34e1.png?width=3002&format=png&auto=webp&s=722130fc0f7d1ed90e40d5c3b5a1554180f76d71
| 2024-11-30T20:45:11 | https://www.reddit.com/r/LocalLLaMA/comments/1h3lx9a/easier_access_for_using_llama_32_vision_models/ | j4ys0nj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3lx9a | false | null | t3_1h3lx9a | /r/LocalLLaMA/comments/1h3lx9a/easier_access_for_using_llama_32_vision_models/ | false | false | 24 | {'enabled': False, 'images': [{'id': 'CUyvOHy3dPeP3bxEXm7l_PbCKNbJ0kYurGILQu5ep_Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WY7ZucicoHxBgbETrdG0wbpk1Gq2xnq9jb0qSs5XUMw.jpg?width=108&crop=smart&auto=webp&s=aaba323dbcad217c8d5724fda379c184b8f0a5f1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WY7ZucicoHxBgbETrdG0wbpk1Gq2xnq9jb0qSs5XUMw.jpg?width=216&crop=smart&auto=webp&s=c848f9273f84b50f33a00dd69503d30697aeec30', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WY7ZucicoHxBgbETrdG0wbpk1Gq2xnq9jb0qSs5XUMw.jpg?width=320&crop=smart&auto=webp&s=cfe37290f55c37bd48376e1f88a2705be0028fd6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WY7ZucicoHxBgbETrdG0wbpk1Gq2xnq9jb0qSs5XUMw.jpg?width=640&crop=smart&auto=webp&s=d4c345d444f2c2562495927e10589da504c92b1e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WY7ZucicoHxBgbETrdG0wbpk1Gq2xnq9jb0qSs5XUMw.jpg?width=960&crop=smart&auto=webp&s=2f0e7c66066322371b485e675a56616366b10395', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WY7ZucicoHxBgbETrdG0wbpk1Gq2xnq9jb0qSs5XUMw.jpg?width=1080&crop=smart&auto=webp&s=21fb17efc71fdd1b5222a8af9d3f7b91f43105e5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WY7ZucicoHxBgbETrdG0wbpk1Gq2xnq9jb0qSs5XUMw.jpg?auto=webp&s=3f232e1ba9c555a7064e932b0c96c8e8dbf28b00', 'width': 1200}, 'variants': {}}]} |
|
Update on WilmerAI: the prompt router with 'memories' and workflows that allows a single assistant/persona to be powered by multiple LLMs at once. | 1 | [removed] | 2024-11-30T21:25:23 | https://www.reddit.com/r/LocalLLaMA/comments/1h3msgf/update_on_wilmerai_the_prompt_router_with/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3msgf | false | null | t3_1h3msgf | /r/LocalLLaMA/comments/1h3msgf/update_on_wilmerai_the_prompt_router_with/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'mGKsAjApjC23rWMb65uZx1n93DFuzZrpk2WLtBDJI10', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=108&crop=smart&auto=webp&s=744b8ff4fd48cd100d47a4a3248075991430bf72', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=216&crop=smart&auto=webp&s=405c69ded8634ec58077d6dc69f51f920c726435', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=320&crop=smart&auto=webp&s=5101a60b35c7c207725d9110c9457a4558f4605b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=640&crop=smart&auto=webp&s=e76c882ac1be567f8b7195dfe4b5f2eea9ec44dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=960&crop=smart&auto=webp&s=47b4df4f04fbfa4548d332297a9560d60a36795a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=1080&crop=smart&auto=webp&s=0a7e9fba7b8a5c607f62e4c38c1e88770315173d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?auto=webp&s=a8b14a2de611c6f5634fd7efa65dc8d79d03f9ea', 'width': 1200}, 'variants': {}}]} |
|
Are you having more or less success coding locally with something like Continue.dev than with Copilot/Cursor? | 0 | Curious what your opinions are.
It's going pretty well for me. But my main reason for doing it is data privacy.
Do some of you notice/feel better/worse quality? Performance? Usability? | 2024-11-30T21:27:07 | https://www.reddit.com/r/LocalLLaMA/comments/1h3mtry/are_you_having_more_or_less_success_coding/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3mtry | false | null | t3_1h3mtry | /r/LocalLLaMA/comments/1h3mtry/are_you_having_more_or_less_success_coding/ | false | false | self | 0 | null |
Update on WilmerAI: the workflow based prompt router that generated rolling 'memories' and allows a single assistant/persona to be powered by multiple LLMs at once. | 1 | [removed] | 2024-11-30T21:36:41 | https://www.reddit.com/r/LocalLLaMA/comments/1h3n18q/update_on_wilmerai_the_workflow_based_prompt/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3n18q | false | null | t3_1h3n18q | /r/LocalLLaMA/comments/1h3n18q/update_on_wilmerai_the_workflow_based_prompt/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'mGKsAjApjC23rWMb65uZx1n93DFuzZrpk2WLtBDJI10', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=108&crop=smart&auto=webp&s=744b8ff4fd48cd100d47a4a3248075991430bf72', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=216&crop=smart&auto=webp&s=405c69ded8634ec58077d6dc69f51f920c726435', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=320&crop=smart&auto=webp&s=5101a60b35c7c207725d9110c9457a4558f4605b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=640&crop=smart&auto=webp&s=e76c882ac1be567f8b7195dfe4b5f2eea9ec44dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=960&crop=smart&auto=webp&s=47b4df4f04fbfa4548d332297a9560d60a36795a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=1080&crop=smart&auto=webp&s=0a7e9fba7b8a5c607f62e4c38c1e88770315173d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?auto=webp&s=a8b14a2de611c6f5634fd7efa65dc8d79d03f9ea', 'width': 1200}, 'variants': {}}]} |
|
WilmerAI Update- The workflow based prompt router that supports rolling memories and is built to allow a single assistant to be powered by multiple LLMs at once. | 1 | [removed] | 2024-11-30T21:44:35 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1h3n78k | false | null | t3_1h3n78k | /r/LocalLLaMA/comments/1h3n78k/wilmerai_update_the_workflow_based_prompt_router/ | false | false | default | 1 | null |
||
Idiots guide to fine tuning a model? | 1 | [removed] | 2024-11-30T23:04:38 | https://www.reddit.com/r/LocalLLaMA/comments/1h3owun/idiots_guide_to_fine_tuning_a_model/ | gnomesenpai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3owun | false | null | t3_1h3owun | /r/LocalLLaMA/comments/1h3owun/idiots_guide_to_fine_tuning_a_model/ | false | false | self | 1 | null |
Which AI chat client has the best search experience? | 17 | Would like to hear from the power users of search with AI.
Which AI did it the best?
How would you improve it beyond what's out there today? | 2024-11-30T23:11:47 | https://www.reddit.com/r/LocalLLaMA/comments/1h3p292/which_ai_chat_client_has_the_best_search/ | punkpeye | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3p292 | false | null | t3_1h3p292 | /r/LocalLLaMA/comments/1h3p292/which_ai_chat_client_has_the_best_search/ | false | false | self | 17 | null |
With Amazon’s generous return policy should I buy parts now and then return them if the 5000s don’t come out? | 0 | Like the title suggests should I buy some parts (e.g., PSU, CPU) and sit on them or pass them up?
I’m thinking of these parts:
PSU:EVGA 1000 G6
CPU: Ryzen 9 7900X
Mobo: either B650 Aorus elite ax or the X670E tomahawk
Cooler:Artic liquid freezer III 280
SSD: Crucial T500 1 TB? (Or should I do the Samsung 990?)
Case: Corsair 4000D Airflow
RAM: ??? (Haven’t checked)
(I have the peripherals, monitor, keyboard etc.) | 2024-11-30T23:27:13 | https://www.reddit.com/r/LocalLLaMA/comments/1h3pdhn/with_amazons_generous_return_policy_should_i_buy/ | williamthe5thc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3pdhn | false | null | t3_1h3pdhn | /r/LocalLLaMA/comments/1h3pdhn/with_amazons_generous_return_policy_should_i_buy/ | false | false | self | 0 | null |
Nvidia 2060 6gb | 1 | [removed] | 2024-11-30T23:40:31 | https://www.reddit.com/r/LocalLLaMA/comments/1h3pngf/nvidia_2060_6gb/ | OpenWind8936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3pngf | false | null | t3_1h3pngf | /r/LocalLLaMA/comments/1h3pngf/nvidia_2060_6gb/ | false | false | self | 1 | null |
AI Voice Assistant | 19 | I put together a weekend toy project (let’s call it a POC). It’s an AI bot designed for shell commands and coding assistance, with voice commands (e.g., write a function ..., refactor code, check GPU temperature, reduce MP4 video resolution, etc.). It uses llama.cpp as the LLM backend and Whisper for STT, but an OpenAI endpoint is also an option (one parameter change).
Personally, I think I’d even use something like this if it were a bit more polished, so **I’d love to hear your feedback**.
Check the demo video: [https://youtu.be/UB\_ZXU\_a0xY](https://youtu.be/UB_ZXU_a0xY)
GitHub: [https://github.com/nmandic78/AI-VoiceAssistant](https://github.com/nmandic78/AI-VoiceAssistant)
If anyone’s willing to test it on Windows or Mac, that would be great (I’m on Ubuntu, so I couldn’t try it myself, but it *should* work). The README.md was generated by ChatGPT, and I’ve reviewed and edited it—I hope everything is clear and in place.
Constructive criticism is welcome, and of course, classic Reddit-style feedback too! :) | 2024-11-30T23:45:08 | https://www.reddit.com/r/LocalLLaMA/comments/1h3pqpu/ai_voice_assistant/ | Bitter-Raisin-3251 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3pqpu | false | null | t3_1h3pqpu | /r/LocalLLaMA/comments/1h3pqpu/ai_voice_assistant/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'QeNUkesG5w_QrrT9DHSNUCLmzCYsJl0HL1mFBIsT16w', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/A-etT_HHVfzra5tYtcYiVG87xL2S-Oqg7U0mLB37fP4.jpg?width=108&crop=smart&auto=webp&s=5f8bf5e61417aacf0becaae7238e56ad3835ae76', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/A-etT_HHVfzra5tYtcYiVG87xL2S-Oqg7U0mLB37fP4.jpg?width=216&crop=smart&auto=webp&s=ceb46968e49f867b0f87c554f6fdfa43a1ae6d44', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/A-etT_HHVfzra5tYtcYiVG87xL2S-Oqg7U0mLB37fP4.jpg?width=320&crop=smart&auto=webp&s=7aca39f078f630f9c111fe917a739c4c15de0026', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/A-etT_HHVfzra5tYtcYiVG87xL2S-Oqg7U0mLB37fP4.jpg?auto=webp&s=303452512fd62141a0b2f5a52b6979a9d3358c8a', 'width': 480}, 'variants': {}}]} |
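The "one parameter change" between the local llama.cpp backend and OpenAI works because llama.cpp's server exposes an OpenAI-compatible endpoint. A minimal sketch, assuming a local llama-server running on its default port; the model name is arbitrary for a local server.

```python
from openai import OpenAI  # pip install openai

# Same client code for both backends - only base_url/api_key change.
# Assumes `llama-server -m model.gguf --port 8080` is running locally; swap
# base_url for https://api.openai.com/v1 plus a real key to use OpenAI instead.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="local",  # llama.cpp serves whatever model it was started with
    messages=[
        {"role": "system", "content": "You are a voice assistant that answers briefly."},
        {"role": "user", "content": "Check how much free disk space I have."},
    ],
)
print(resp.choices[0].message.content)
```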
LLM asking questions to user | 1 | [removed] | 2024-12-01T00:11:56 | https://www.reddit.com/r/LocalLLaMA/comments/1h3qb7m/llm_asking_questions_to_user/ | SadProgrammer23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3qb7m | false | null | t3_1h3qb7m | /r/LocalLLaMA/comments/1h3qb7m/llm_asking_questions_to_user/ | false | false | self | 1 | null |
If you want to know why open-source it’s important | 414 | Ask ChatGPT who David Mayer is. It’ll refuse more often than not.
If we’re going to (rightfully) call China out for Tiananmen Square then let’s make sure we call out censorship on our side of the world. | 2024-12-01T00:57:25 | https://www.reddit.com/r/LocalLLaMA/comments/1h3r8fg/if_you_want_to_know_why_opensource_its_important/ | xRolocker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3r8fg | false | null | t3_1h3r8fg | /r/LocalLLaMA/comments/1h3r8fg/if_you_want_to_know_why_opensource_its_important/ | false | false | self | 414 | null |
LLM security | 0 | For those who have implemented internal chatbots (that have access to various tools including RAG) or agentic workflows, what security measures were taken in terms of
- access management
- prompt injection / jail breaking
- impersonation
- misuse
| 2024-12-01T01:14:12 | https://www.reddit.com/r/LocalLLaMA/comments/1h3rk69/llm_security/ | papipapi419 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3rk69 | false | null | t3_1h3rk69 | /r/LocalLLaMA/comments/1h3rk69/llm_security/ | false | false | self | 0 | null |
Awesome Claude MCP Servers: A Curated List of Tools to Extend Claude's Capabilities 🤖 | 0 | Hey everyone! I wanted to share a curated list of Model Context Protocol (MCP) servers I've put together that help extend Claude's capabilities. If you're working with Claude and want to give it more abilities, this might be useful for you.
What's included:
* File system access (both local and cloud storage like Google Drive)
* Search capabilities (including Brave Search, Kagi, and ArXiv integration)
* Database connections (PostgreSQL and SQLite)
* Version control tools (GitHub, GitLab integration)
* Browser automation
* Location services
* And much more!
The list is organized by functionality and includes details about implementation language (Python/TypeScript/Go) and whether each server runs locally or in the cloud. All entries are actively maintained and most have good documentation.
Each tool comes with a brief description of what it does and how it can help enhance Claude's capabilities. I've also included getting started resources and links to the MCP community.
Check it out here: [https://github.com/win4r/Awesome-Claude-MCP-Servers](https://github.com/win4r/Awesome-Claude-MCP-Servers)
The repository is bilingual (English/Chinese) and welcomes contributions. If you're using any interesting MCP servers that aren't listed, feel free to submit a PR!
Let me know if you have any questions or suggestions for improvement!
\#Claude #AI #Programming #OpenSource | 2024-12-01T01:24:02 | https://www.reddit.com/r/LocalLLaMA/comments/1h3rr1d/awesome_claude_mcp_servers_a_curated_list_of/ | GitDit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3rr1d | false | null | t3_1h3rr1d | /r/LocalLLaMA/comments/1h3rr1d/awesome_claude_mcp_servers_a_curated_list_of/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Zh1OmfyWVsBhOK_NBcE_RAqogsTOyujtxXxBGHQUyjs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-bfZrNp9Kb8c3Alak9YJ11WU-jbeclnPbCTE-dIjmP8.jpg?width=108&crop=smart&auto=webp&s=b09036abc294dbd017228fe18d5ad81d500d7b83', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-bfZrNp9Kb8c3Alak9YJ11WU-jbeclnPbCTE-dIjmP8.jpg?width=216&crop=smart&auto=webp&s=b36646da5dd243cb0f2207c4db01bd55afeae8e5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-bfZrNp9Kb8c3Alak9YJ11WU-jbeclnPbCTE-dIjmP8.jpg?width=320&crop=smart&auto=webp&s=74a86715dd039553658367b3b7c6b42fefaecc9c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-bfZrNp9Kb8c3Alak9YJ11WU-jbeclnPbCTE-dIjmP8.jpg?width=640&crop=smart&auto=webp&s=3b062ef6dca2b6c31a8d3c855a7e93956c725bef', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-bfZrNp9Kb8c3Alak9YJ11WU-jbeclnPbCTE-dIjmP8.jpg?width=960&crop=smart&auto=webp&s=5f2fe8bb4469b9219195ea2dbd515f28b7f6befd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-bfZrNp9Kb8c3Alak9YJ11WU-jbeclnPbCTE-dIjmP8.jpg?width=1080&crop=smart&auto=webp&s=c0086c6ff82d0d0e631bccc7ed52bcbf16a56a37', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-bfZrNp9Kb8c3Alak9YJ11WU-jbeclnPbCTE-dIjmP8.jpg?auto=webp&s=b4d8abccc215a5480d84a94dc6568e4a15b6a09c', 'width': 1200}, 'variants': {}}]} |
Linux AI enthousiasts, you might be slowly damaging your GPUs without knowing | 1 | [removed] | 2024-12-01T02:48:21 | https://www.reddit.com/r/LocalLLaMA/comments/1h3tc2z/linux_ai_enthousiasts_you_might_be_slowly/ | TyraVex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3tc2z | false | null | t3_1h3tc2z | /r/LocalLLaMA/comments/1h3tc2z/linux_ai_enthousiasts_you_might_be_slowly/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'hnZDIk_TY24WLB527CbQAHCEEc09FIgy3quBz_-6dgo', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/ae0Qo8Vt7zK3YN5EJj9DVaScMl5RBOblsHS-0BEDVxs.jpg?width=108&crop=smart&auto=webp&s=3203a37a03ac29dcb77bd0264ffded36cc9eb3e8', 'width': 108}], 'source': {'height': 80, 'url': 'https://external-preview.redd.it/ae0Qo8Vt7zK3YN5EJj9DVaScMl5RBOblsHS-0BEDVxs.jpg?auto=webp&s=119b7279fd124a86be1ec5ae8f58e06b3fca19a8', 'width': 150}, 'variants': {}}]} |
Gross | 1 | [removed] | 2024-12-01T02:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/1h3tdqy/gross/ | thtbtch3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3tdqy | false | null | t3_1h3tdqy | /r/LocalLLaMA/comments/1h3tdqy/gross/ | false | false | self | 1 | null |
WilmerAI Progress Update: The workflow based prompt router that supports rolling 'memories' and allows personas to be powered by multiple LLMs at once. | 1 | [removed] | 2024-12-01T03:51:56 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1h3ui0x | false | null | t3_1h3ui0x | /r/LocalLLaMA/comments/1h3ui0x/wilmerai_progress_update_the_workflow_based/ | false | false | default | 1 | null |
||
WilmerAI Update: The workflow based prompt router that supports rolling 'memories' and allows a persona to be powered by multiple LLMs at once. | 1 | [removed] | 2024-12-01T04:03:25 | https://www.reddit.com/r/LocalLLaMA/comments/1h3upkl/wilmerai_update_the_workflow_based_prompt_router/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3upkl | false | null | t3_1h3upkl | /r/LocalLLaMA/comments/1h3upkl/wilmerai_update_the_workflow_based_prompt_router/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mGKsAjApjC23rWMb65uZx1n93DFuzZrpk2WLtBDJI10', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=108&crop=smart&auto=webp&s=744b8ff4fd48cd100d47a4a3248075991430bf72', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=216&crop=smart&auto=webp&s=405c69ded8634ec58077d6dc69f51f920c726435', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=320&crop=smart&auto=webp&s=5101a60b35c7c207725d9110c9457a4558f4605b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=640&crop=smart&auto=webp&s=e76c882ac1be567f8b7195dfe4b5f2eea9ec44dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=960&crop=smart&auto=webp&s=47b4df4f04fbfa4548d332297a9560d60a36795a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?width=1080&crop=smart&auto=webp&s=0a7e9fba7b8a5c607f62e4c38c1e88770315173d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q9Sp7ZPtSBEx7LqXbSzwtpWPKAV4Z4xkGQLQeGPMDPU.jpg?auto=webp&s=a8b14a2de611c6f5634fd7efa65dc8d79d03f9ea', 'width': 1200}, 'variants': {}}]} |
Looks like Meta added 10 models to lmarena. Llama4 tests? Model names: Trenches, Alfred, Edward, Goodway, Humdinger, meowmeow, Robert, Richard, Rubble, William | 31 | Would love to hear which ones people think are the strongest and if they compare to Sonnet 3.6. | 2024-12-01T05:32:39 | https://www.reddit.com/r/LocalLLaMA/comments/1h3w96y/looks_like_meta_added_10_models_to_lmarena_llama4/ | DangerousBenefit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3w96y | false | null | t3_1h3w96y | /r/LocalLLaMA/comments/1h3w96y/looks_like_meta_added_10_models_to_lmarena_llama4/ | false | false | self | 31 | null |
Better quants > better hardware? | 11 | I have a very low spec rig. 3060 card. It does some of the things I want. But I am looking to upgrade soon, since my entire rig needs replacing due to not being supported by windows 11.
At what point do the new quants like fp16 negate the need for more hardware? I’m not suggesting I want to stay on my 3060, but if the quantisation is good enough to bring down the vram usage so dramatically, is there much benefit to running 2, 4x 4090s? | 2024-12-01T05:54:22 | https://www.reddit.com/r/LocalLLaMA/comments/1h3wlu8/better_quants_better_hardware/ | oldschooldaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3wlu8 | false | null | t3_1h3wlu8 | /r/LocalLLaMA/comments/1h3wlu8/better_quants_better_hardware/ | false | false | self | 11 | null |
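For a rough sense of what bits-per-weight buy (fp16 here is the unquantized baseline rather than a quant), a back-of-envelope sketch for the weights alone; the bpw figures are approximate, and KV cache/activations are not included.

```python
def weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough VRAM needed just for the weights (KV cache / activations not included)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1024**3

# Approximate bits-per-weight for common formats (values are ballpark, not exact).
for label, bpw in [("fp16", 16), ("Q8_0", 8.5), ("Q4_K_M", 4.8), ("IQ2_XS", 2.4)]:
    print(f"70B @ {label:7s} ~ {weight_vram_gb(70, bpw):5.1f} GB")
# roughly: fp16 ~130 GB, Q8_0 ~69 GB, Q4_K_M ~39 GB, IQ2_XS ~20 GB
```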
I own a 6700xt (12gb) and have a spare slot for another gpu. What do I purchase to run larger models? | 1 | Constraints:
- 750w gold PSU
- 64gb of DDR4 on the system
- Ubuntu 24.04
I'm running Llama CPP with Rocm right now and getting great performance but I'm maxing out before I reach models and context sizes I'm interested in.
Bigger is better. My questions are:
- can this even be done with amd gpus (I see conflicting things online for Llama CPP)?
- if I add another GPU, does it need to be a 6700xt to get access to a pool of 24gb of vram?
- if not, could I add a 7600xt 16gb and have access to a pool of 28gb of vram?
- if neither would work, is it worth it to retire the 6700xt and buy a $800 7900xtx? (Not looking at the 3090 right now as there are other reasons I keep this workstation AMD and all i need is local inference)
Any tips/advice would be appreciated | 2024-12-01T06:04:48 | https://www.reddit.com/r/LocalLLaMA/comments/1h3ws49/i_own_a_6700xt_12gb_and_have_a_spare_slot_for/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3ws49 | false | null | t3_1h3ws49 | /r/LocalLLaMA/comments/1h3ws49/i_own_a_6700xt_12gb_and_have_a_spare_slot_for/ | false | false | self | 1 | null |
Best Option for Running a 70B Model on Mac? | 1 | [removed] | 2024-12-01T06:05:38 | https://www.reddit.com/r/LocalLLaMA/comments/1h3wsnx/best_option_for_running_a_70b_model_on_mac/ | OldPebble | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3wsnx | false | null | t3_1h3wsnx | /r/LocalLLaMA/comments/1h3wsnx/best_option_for_running_a_70b_model_on_mac/ | false | false | self | 1 | null |
CntxtPY: python project to optimize context window usage during code generation | 11 | ERROR: type should be string, got "https://github.com/brandondocusen/CntxtPY\n\nSays it generates maps of relationships between named entities in a codebase, in a JSON format, suitable for feeding into LLMs, thereby saving up to 75% (claimed) space in the context window.\n\nLooks worth investigating… I am not the author of this project, nor have I attempted using it." | 2024-12-01T06:43:13 | https://www.reddit.com/r/LocalLLaMA/comments/1h3xd2i/cntxtpy_python_project_to_optimize_context_window/ | datbackup | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3xd2i | false | null | t3_1h3xd2i | /r/LocalLLaMA/comments/1h3xd2i/cntxtpy_python_project_to_optimize_context_window/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'yG7bpMclx-ZrRvM8Qtqv-pnWvseQl6WPnYorqcTIW1M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hNxpWOecNdgfaaEhw3nJt87Ox-RsG6n5M87jvdspQm4.jpg?width=108&crop=smart&auto=webp&s=22daccfc224a0d831313be5743061f71129c2b0a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hNxpWOecNdgfaaEhw3nJt87Ox-RsG6n5M87jvdspQm4.jpg?width=216&crop=smart&auto=webp&s=05342b1cb905bdcfe820ad96c2aca871220821ff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hNxpWOecNdgfaaEhw3nJt87Ox-RsG6n5M87jvdspQm4.jpg?width=320&crop=smart&auto=webp&s=65a6551416803f9d2495f863e7726a67f5aba07f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hNxpWOecNdgfaaEhw3nJt87Ox-RsG6n5M87jvdspQm4.jpg?width=640&crop=smart&auto=webp&s=562540b2115c2f59dc66942ec0d0553644425ce1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hNxpWOecNdgfaaEhw3nJt87Ox-RsG6n5M87jvdspQm4.jpg?width=960&crop=smart&auto=webp&s=4f04b3a6b88b4f58d9e81e08f004f2bc311c7ab5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hNxpWOecNdgfaaEhw3nJt87Ox-RsG6n5M87jvdspQm4.jpg?width=1080&crop=smart&auto=webp&s=206701a2c1d144703d1782861171ee183d28ea01', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hNxpWOecNdgfaaEhw3nJt87Ox-RsG6n5M87jvdspQm4.jpg?auto=webp&s=309a3f566075ec16fefa85706b92ed7e8c684742', 'width': 1200}, 'variants': {}}]} |
Someone has made an uncensored fine tune of QwQ. | 368 | QwQ is an awesome model. But it's pretty locked down with refusals. Huihui made an abliterated fine tune of it. I've been using it today and I haven't had a refusal yet. The answers to the "political" questions I ask are even good.
https://huggingface.co/huihui-ai/QwQ-32B-Preview-abliterated
Mradermacher has made GGUFs.
https://huggingface.co/mradermacher/QwQ-32B-Preview-abliterated-GGUF | 2024-12-01T06:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/1h3xh0b/someone_has_made_an_uncensored_fine_tune_of_qwq/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3xh0b | false | null | t3_1h3xh0b | /r/LocalLLaMA/comments/1h3xh0b/someone_has_made_an_uncensored_fine_tune_of_qwq/ | false | false | self | 368 | {'enabled': False, 'images': [{'id': 'jBLFmbHz2QJ-b7fbKmk1efovU2OMiyPehX4ITe6R7p0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LBJE-8jU7axaXeTd1CxDtmfURLBUH0tEmZO5jrcSsFw.jpg?width=108&crop=smart&auto=webp&s=c566d1948e830b16657c50598f38116ef51a2114', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LBJE-8jU7axaXeTd1CxDtmfURLBUH0tEmZO5jrcSsFw.jpg?width=216&crop=smart&auto=webp&s=9b73b5311889c355990dc75719148a9f497e7800', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LBJE-8jU7axaXeTd1CxDtmfURLBUH0tEmZO5jrcSsFw.jpg?width=320&crop=smart&auto=webp&s=c7238d5bdd867b4ce0cc7df15408620dcd40f2a7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LBJE-8jU7axaXeTd1CxDtmfURLBUH0tEmZO5jrcSsFw.jpg?width=640&crop=smart&auto=webp&s=b05d0134e52135d588138c1aed16eef05822da01', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LBJE-8jU7axaXeTd1CxDtmfURLBUH0tEmZO5jrcSsFw.jpg?width=960&crop=smart&auto=webp&s=e933c47a259f05927a505541859e8837c84d1453', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LBJE-8jU7axaXeTd1CxDtmfURLBUH0tEmZO5jrcSsFw.jpg?width=1080&crop=smart&auto=webp&s=7c373209e5808a846193cafb3a376ec375ff2275', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LBJE-8jU7axaXeTd1CxDtmfURLBUH0tEmZO5jrcSsFw.jpg?auto=webp&s=51df7692b46a0667324a1a4e0b94b2fc246283de', 'width': 1200}, 'variants': {}}]} |
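A minimal sketch of pulling one of the linked GGUFs straight from the Hub with llama-cpp-python; the quant filename pattern is an assumption, so pick whichever file in the repo fits your VRAM.

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Filename glob is an assumption - check the repo's file list for available quants.
llm = Llama.from_pretrained(
    repo_id="mradermacher/QwQ-32B-Preview-abliterated-GGUF",
    filename="*Q4_K_M.gguf",
    n_gpu_layers=-1,
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Walk me through your reasoning: is 9.11 larger than 9.9?"}]
)
print(out["choices"][0]["message"]["content"])
```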
What's up with Nvidia's OpenMath2-Llama3.1-8B? | 5 | 2024-12-01T06:55:58 | random-tomato | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1h3xjqz | false | null | t3_1h3xjqz | /r/LocalLLaMA/comments/1h3xjqz/whats_up_with_nvidias_openmath2llama318b/ | false | false | 5 | {'enabled': True, 'images': [{'id': 'sAsXU1evr7mXOIBqOrC7SZCMzESYMVj69bRtChMbAJo', 'resolutions': [{'height': 12, 'url': 'https://preview.redd.it/ih8856g5q64e1.png?width=108&crop=smart&auto=webp&s=1c8a0d1bbed7bc3d2041a1672b123aa77b7e7f6a', 'width': 108}, {'height': 25, 'url': 'https://preview.redd.it/ih8856g5q64e1.png?width=216&crop=smart&auto=webp&s=70db5cd70bdcada23e9c4e0ea7ef9dd8bbb76fff', 'width': 216}, {'height': 37, 'url': 'https://preview.redd.it/ih8856g5q64e1.png?width=320&crop=smart&auto=webp&s=87139d0fe25d778b80cff480d34095a344e4663b', 'width': 320}, {'height': 75, 'url': 'https://preview.redd.it/ih8856g5q64e1.png?width=640&crop=smart&auto=webp&s=b3fa6ee8cdb235e091823d31137082fa52fa074a', 'width': 640}, {'height': 112, 'url': 'https://preview.redd.it/ih8856g5q64e1.png?width=960&crop=smart&auto=webp&s=8ccb21cd3d9c95d03eb15cd71cde1310b5616736', 'width': 960}, {'height': 126, 'url': 'https://preview.redd.it/ih8856g5q64e1.png?width=1080&crop=smart&auto=webp&s=60bce2e0c78e0df9c049a35b0e05266517aea3e2', 'width': 1080}], 'source': {'height': 300, 'url': 'https://preview.redd.it/ih8856g5q64e1.png?auto=webp&s=a0d84a39f34a24cd58dd146608a27198635e991d', 'width': 2552}, 'variants': {}}]} |
|||
DMS with vector database? | 1 | Is anyone aware of a modern DMS (document management system) that includes (multimodal) vector database(s)?
So that uploaded documents (pdf, docx...) will automatically also be stored in a vector database as text or image. | 2024-12-01T07:08:25 | https://www.reddit.com/r/LocalLLaMA/comments/1h3xqb1/dms_with_vector_database/ | Glat0s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3xqb1 | false | null | t3_1h3xqb1 | /r/LocalLLaMA/comments/1h3xqb1/dms_with_vector_database/ | false | false | self | 1 | null |
Is there a version of this model I can run on 8GB of VRAM? | 0 | 2024-12-01T07:31:00 | https://ollama.com/library/qwq | MyRedditsaidit | ollama.com | 1970-01-01T00:00:00 | 0 | {} | 1h3y1wi | false | null | t3_1h3y1wi | /r/LocalLLaMA/comments/1h3y1wi/is_there_a_version_of_this_model_i_can_run_on_8gb/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]} |
||
how do i start off to fine-tune an LLM? | 3 | I want to fine-tune a BERT-based LLM. What are my options to start from? The size of my dataset is around 3k examples with 10 labels, and I want the model to perform text classification easily.
I'm sorry if this is a dumb question; I'm just confused about where to start. I have written Python code to do this, but it requires HPC resources, and I was wondering if there is an easier way to go without having to reinvent the wheel. Thank you! | 2024-12-01T07:32:46 | https://www.reddit.com/r/LocalLLaMA/comments/1h3y2rt/how_do_i_start_off_to_finetune_an_llm/ | darkGrayAdventurer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3y2rt | false | null | t3_1h3y2rt | /r/LocalLLaMA/comments/1h3y2rt/how_do_i_start_off_to_finetune_an_llm/ | false | false | self | 3 | null
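For a 3k-example, 10-label classification set, a single GPU (or even CPU) is enough; no HPC needed. A minimal Hugging Face Transformers sketch, assuming a CSV with "text" and "label" columns; exact argument names may vary slightly across transformers versions.

```python
from datasets import load_dataset                      # pip install datasets
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumes a CSV with "text" and integer "label" (0-9) columns - adjust to your data.
ds = load_dataset("csv", data_files="my_data.csv")["train"].train_test_split(test_size=0.1)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = ds.map(lambda batch: tok(batch["text"], truncation=True, max_length=256), batched=True)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=10)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-clf", num_train_epochs=3, per_device_train_batch_size=16),
    train_dataset=ds["train"],
    eval_dataset=ds["test"],
    tokenizer=tok,            # lets the Trainer pad batches automatically
)
trainer.train()
print(trainer.evaluate())     # reports eval loss; add compute_metrics for accuracy
```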
Just Built an Agentic RAG Chatbot From Scratch—No Libraries, Just Code! | 3 | Hey everyone!
I’ve been working on building an Agentic RAG chatbot completely from scratch—no libraries, no frameworks, just clean, simple code. It’s pure HTML, CSS, and JavaScript on the frontend with FastAPI on the backend. Handles embeddings, cosine similarity, and reasoning all directly in the codebase.
I wanted to share it in case anyone’s curious or thinking about implementing something similar. It’s lightweight, transparent, and a great way to learn the inner workings of RAG systems.
If you find it helpful, giving it a ⭐ on GitHub would mean a lot to me: [Agentic RAG Chat](https://github.com/AndrewNgo-ini/agentic_rag). Thanks, and I’d love to hear your feedback! 😊 | 2024-12-01T07:40:53 | https://www.reddit.com/r/LocalLLaMA/comments/1h3y6ss/just_built_an_agentic_rag_chatbot_from_scratchno/ | NgoAndrew | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3y6ss | false | null | t3_1h3y6ss | /r/LocalLLaMA/comments/1h3y6ss/just_built_an_agentic_rag_chatbot_from_scratchno/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '4iBl_aBBwtd9CRYNKfr_sFVhLDTbcA5fAZQU3IPBhvY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/i7eNXI3KCJ-CFvUWr7tLc03AMm51YtHnU_3mQ2pTkOE.jpg?width=108&crop=smart&auto=webp&s=d6e5e4dc8e7cedf87e1b682167c4aed78530b429', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/i7eNXI3KCJ-CFvUWr7tLc03AMm51YtHnU_3mQ2pTkOE.jpg?width=216&crop=smart&auto=webp&s=8dec5d3a4570868d907f6ee60fca03dca3e29515', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/i7eNXI3KCJ-CFvUWr7tLc03AMm51YtHnU_3mQ2pTkOE.jpg?width=320&crop=smart&auto=webp&s=219a16eda9cd2cb9dd3758d36ad2d37fdefa5fcd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/i7eNXI3KCJ-CFvUWr7tLc03AMm51YtHnU_3mQ2pTkOE.jpg?width=640&crop=smart&auto=webp&s=c2eba0f30a9ba048c9aa46d5faad028be47db3e3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/i7eNXI3KCJ-CFvUWr7tLc03AMm51YtHnU_3mQ2pTkOE.jpg?width=960&crop=smart&auto=webp&s=93af549e89368f015367e310eb0aa601fb4d4014', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/i7eNXI3KCJ-CFvUWr7tLc03AMm51YtHnU_3mQ2pTkOE.jpg?width=1080&crop=smart&auto=webp&s=07e4adfd4b60b04e25617417b270684063483c2e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/i7eNXI3KCJ-CFvUWr7tLc03AMm51YtHnU_3mQ2pTkOE.jpg?auto=webp&s=715f451b7e6bc53a47790a47b7c19edf5ef74e2a', 'width': 1200}, 'variants': {}}]} |
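The retrieval core such a from-scratch RAG implements by hand boils down to cosine similarity between a query embedding and the stored chunk embeddings. An illustrative numpy sketch (not the repo's actual code), with random vectors standing in for real embeddings.

```python
import numpy as np

def cosine_sim(query: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of document vectors."""
    return (docs @ query) / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query) + 1e-8)

# Pretend embeddings: 5 document chunks and 1 query, dimension 384.
rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(5, 384))
query_vec = rng.normal(size=384)

scores = cosine_sim(query_vec, doc_vecs)
top_k = np.argsort(scores)[::-1][:3]      # indices of the 3 most similar chunks
print(top_k, scores[top_k])
```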
On-device models end-to-end toolkit: continual training, local inference, VLMs | 13 | We crossed 1k ⭐ on our SmolLM repo this week—thank you for the support!
Here are the new updates based on your feedback:
• SmolLM2 nanotron checkpoints (with optimizer states) for easier continual pre-training
• Local inference examples and demos for frameworks like MLC, Transformers.js, MLX, and llama.cpp
• SmolVLM: Inference & finetuning code for our new vision language model built on top of SmolLM2 1.7B
What should we add next?
[https://github.com/huggingface/smollm](https://github.com/huggingface/smollm) | 2024-12-01T07:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/1h3y71n/ondevice_models_endtoend_toolkit_continual/ | loubnabnl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3y71n | false | null | t3_1h3y71n | /r/LocalLLaMA/comments/1h3y71n/ondevice_models_endtoend_toolkit_continual/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': '9mLKS8tWQtYKlSl3cry3SbjdtTV-p8sRt0AaJ8rXTaQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bclotjgngQniiLwIpQabpV_1vf03eim6l-CRsI_AWNo.jpg?width=108&crop=smart&auto=webp&s=74a18b34e904a16c0398bebbe38b2564f4c49066', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bclotjgngQniiLwIpQabpV_1vf03eim6l-CRsI_AWNo.jpg?width=216&crop=smart&auto=webp&s=f556c46bfc1786a92f978f28b3270a7faa69a7a4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bclotjgngQniiLwIpQabpV_1vf03eim6l-CRsI_AWNo.jpg?width=320&crop=smart&auto=webp&s=847140300ff7908c47192c3faf36e2415c17c8f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bclotjgngQniiLwIpQabpV_1vf03eim6l-CRsI_AWNo.jpg?width=640&crop=smart&auto=webp&s=3f67a9438a35abe19e018e27e012ff4922ba3884', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bclotjgngQniiLwIpQabpV_1vf03eim6l-CRsI_AWNo.jpg?width=960&crop=smart&auto=webp&s=48c2a6263163ad1e3cd4ed3f6c6763e554e4f6a5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bclotjgngQniiLwIpQabpV_1vf03eim6l-CRsI_AWNo.jpg?width=1080&crop=smart&auto=webp&s=bd1d704e29a689f03784281ee7a1b4fea010fde6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bclotjgngQniiLwIpQabpV_1vf03eim6l-CRsI_AWNo.jpg?auto=webp&s=1b3b7f1eac95ab1268b9d72a86b11cd2a838e4bb', 'width': 1200}, 'variants': {}}]} |
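A minimal local-inference sketch with plain transformers, one of the frameworks listed above; the model id is assumed to be the SmolLM2 1.7B instruct checkpoint from the collection.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-1.7B-Instruct"  # assumed instruct checkpoint id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Give me three uses for an on-device 1.7B model."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

out = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```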
Need Help From Rag Experts🙏🏻 | 0 | Currently we are helping our client build an AI solution / chatbot to extract marketing insights from sentiment analysis across social media platforms and forums. Basically the client would like to ask questions related to the marketing campaign and expects to get accurate insights through interaction with the AI chatbot.
May I know the best practices out there for implementing solutions like this with AI, RAG, or other methodologies?
1. Data cleansing. Our data is content from social media and forums, so it may need several cleaning steps:
* Metadata Association like Source, Category, Tags, Date
* Keywords extracted from content
* Remove Noise
* Normalize Text
* Stopwords Removal
* Dialect or Slang Translation
* Abbreviation Expansion
* De-duplication
2. Data Chunking
* 200 chunk_size with 50 overlap
3. Embedding
* Based on the content language, choose an embedding model such as TencentBAC/Conan-embedding-v1
* Store embeddings in a vector database
4. Query
* Semantic Search (Embedding-based)
* BM25Okapi algorithm search
* Reciprocal Rank Fusion (RRF) to combine results from both methods
5. Prompting
* Role Definition
* Provide clear and concise task structure
* Provide output structure
Thank you so much everyone! | 2024-12-01T07:54:59 | https://www.reddit.com/r/LocalLLaMA/comments/1h3ydxa/need_help_from_rag_experts/ | Soft-Performer-8764 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3ydxa | false | null | t3_1h3ydxa | /r/LocalLLaMA/comments/1h3ydxa/need_help_from_rag_experts/ | false | false | self | 0 | null |
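A compact sketch of the query step described in the post above (step 4): BM25Okapi plus embedding search fused with Reciprocal Rank Fusion. The sample chunks are made up, the MiniLM model is a stand-in for whichever embedding model fits the content language (e.g., Conan-embedding-v1 for Chinese), and k=60 is the conventional RRF constant.

```python
from rank_bm25 import BM25Okapi                               # pip install rank-bm25
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

chunks = [
    "campaign A got strong positive sentiment on Instagram",
    "forum users complain about the new pricing",
    "TikTok comments praise the influencer collab",
]

# Keyword side: BM25 over whitespace-tokenised chunks.
bm25 = BM25Okapi([c.split() for c in chunks])

# Semantic side: dense embeddings of the same chunks.
enc = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = enc.encode(chunks, convert_to_tensor=True)

def hybrid_search(query: str, k: int = 60, top_n: int = 3):
    # Rank by BM25 score (higher is better).
    bm25_scores = bm25.get_scores(query.split())
    bm25_rank = sorted(range(len(chunks)), key=lambda i: -bm25_scores[i])

    # Rank by embedding similarity.
    hits = util.semantic_search(enc.encode(query, convert_to_tensor=True), chunk_vecs, top_k=len(chunks))[0]
    dense_rank = [h["corpus_id"] for h in hits]

    # Reciprocal Rank Fusion: score(chunk) = sum over rankers of 1 / (k + rank).
    fused = {
        i: 1 / (k + bm25_rank.index(i) + 1) + 1 / (k + dense_rank.index(i) + 1)
        for i in range(len(chunks))
    }
    return sorted(fused, key=fused.get, reverse=True)[:top_n]

print(hybrid_search("how do people feel about the campaign?"))
```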
Please help me find the best gpu for getting started with LLMs | 1 | [removed] | 2024-12-01T08:27:22 | https://www.reddit.com/r/LocalLLaMA/comments/1h3yul7/please_help_me_find_the_best_gpu_for_getting/ | mystic-aditya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3yul7 | false | null | t3_1h3yul7 | /r/LocalLLaMA/comments/1h3yul7/please_help_me_find_the_best_gpu_for_getting/ | false | false | self | 1 | null |
Bpw quantizations in llms are like hours slept by humans. | 1 | [removed] | 2024-12-01T09:00:35 | https://www.reddit.com/r/LocalLLaMA/comments/1h3zb0v/bpw_quantizations_in_llms_are_like_hours_slept_by/ | Relative_Bit_7250 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1h3zb0v | false | null | t3_1h3zb0v | /r/LocalLLaMA/comments/1h3zb0v/bpw_quantizations_in_llms_are_like_hours_slept_by/ | false | false | self | 1 | null |