Dataset schema (column: type, observed range):
- title: string, length 1-300
- score: int64, 0-8.54k
- selftext: string, length 0-40k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
- url: string, length 0-878
- author: string, length 3-20
- domain: string, length 0-82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
- gilded: int64, 0-2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646-1.8k
- name: string, length 10
- permalink: string, length 33-82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4-213
- ups: int64, 0-8.54k
- preview: string, length 301-5.01k
Open source o1
1
Today I became even more convinced of why we actually need an open-source approach to o1, and why models like QwQ are extremely valuable. I was very impressed by o1-preview; no open-source model could help me with code as well as it could. But the new o1 already seems to me like a terrible downgrade. In coding tasks where o1-preview worked perfectly, the new o1 fails miserably to follow instructions, and the worst part is that it acts on its own. Concretely, it started renaming things in my scripts and changing default values without me telling it to, and WORST OF ALL, it made subtle changes such as removing parameters and changing the writing modes of files. I had to ask it to tell me what choices it made, and I still don't trust it. Last but not least, the model thinks for significantly less time and won't listen even if you tell it to take its time and think for long; you actually have to show dissatisfaction for it to enable longer thinking. This is not an "emergent intelligence" as OpenAI wants to market it; this is a downgrade and a less aligned model, with cutting costs and increasing profit margins as the only drives behind its release.
2024-12-06T09:34:26
https://www.reddit.com/r/LocalLLaMA/comments/1h7xpk7/open_source_o1/
pol_phil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h7xpk7
false
null
t3_1h7xpk7
/r/LocalLLaMA/comments/1h7xpk7/open_source_o1/
false
false
self
1
null
Why we need an open source o1
327
Today I became even more convinced of why we actually need an open-source approach to o1, and why models like QwQ are extremely valuable. I was very impressed by o1-preview; no open-source model could help me with code as well as it could. But the new o1 already seems to me like a terrible downgrade. In coding tasks where o1-preview worked perfectly, the new o1 fails miserably to follow instructions, and the worst part is that it acts on its own. Concretely, it started renaming things in my scripts and changing default values without me telling it to, and WORST OF ALL, it made subtle changes such as removing parameters and changing the writing modes of files. I had to ask it to tell me what choices it made, and I still don't trust it. Last but not least, the model thinks for significantly less time and won't listen even if you tell it to take its time and think for long; you actually have to show dissatisfaction for it to enable longer thinking. This is not an "emergent intelligence" as OpenAI wants to market it; this is a downgrade and a less aligned model, with cutting costs and increasing profit margins as the only drives behind its release.
2024-12-06T09:38:26
https://www.reddit.com/r/LocalLLaMA/comments/1h7xret/why_we_need_an_open_source_o1/
pol_phil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h7xret
false
null
t3_1h7xret
/r/LocalLLaMA/comments/1h7xret/why_we_need_an_open_source_o1/
false
false
self
327
null
Help me turn raw data into data that can be used for finetuning
0
I recently built a web crawler that scraped a bunch of data from a wiki about a video game. Currently it's pretty much gibberish; it's understandable, but it's all inside JSON. Example: ```{"url": "https://wiki.hypixel.net/index.php?title=Fiery_Aurora_Armor&action=info", "content": "Display title Fiery Aurora Armor Redirects to ( ) Default sort key Fiery Aurora Armor Page length (in bytes) 26 Page ID 33511 Page content language en - English Page content model wikitext Indexing by robots Allowed 0 Edit Allow all users (infinite) Move Allow all users (infinite) Page creator Date of page creation Latest editor Date of latest edit Total number of edits 8 Total number of distinct authors 4 Recent number of edits (within past 90 days) 0 Recent number of distinct authors 0", "sections": [], "infobox": {}, "materials": {"raw_materials": []}, "upgrade_path": {"upgrades_to": null, "upgraded_from": null}, "set_bonus": null, "internal_ids": [], "attributes": {"enchantable": false, "reforgeable": false, "tradeable": false, "sellable": false, "auctionable": false, "museum": false, "dungeon_req": [], "stats": [], "essence_type": null, "dungeonize_cost": null}, "occupants": {"mobs": [], "npcs": []}}``` Since I'm only one person working on this project, I'm looking for a way to automate the process, as there are over 16k lines and over 15 million words.
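Since the records are already structured JSON, one plausible automation is to loop over them and have a local model rewrite each record into instruction/response pairs. A minimal sketch, assuming an Ollama server on its default port; the model tag, prompt wording, and file names are placeholders to adapt:

```python
import json

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
MODEL = "llama3.1:8b"  # assumption: any local instruct model

def record_to_pairs(record: dict) -> str:
    """Ask the model to turn one scraped wiki record into Q&A training pairs."""
    prompt = (
        "Turn this wiki page metadata into three question/answer pairs, "
        "one JSON object per line with 'question' and 'answer' keys:\n"
        + json.dumps(record)
    )
    resp = requests.post(
        OLLAMA_URL, json={"model": MODEL, "prompt": prompt, "stream": False}
    )
    resp.raise_for_status()
    return resp.json()["response"]

with open("scraped.jsonl") as src, open("finetune.jsonl", "w") as dst:
    for line in src:
        dst.write(record_to_pairs(json.loads(line)) + "\n")
```

Spot-check the output before training on it; model-generated pairs inherit any hallucinations.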
2024-12-06T09:44:37
https://www.reddit.com/r/LocalLLaMA/comments/1h7xu7z/help_me_turn_raw_data_into_data_that_can_be_used/
stormcph
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h7xu7z
false
null
t3_1h7xu7z
/r/LocalLLaMA/comments/1h7xu7z/help_me_turn_raw_data_into_data_that_can_be_used/
false
false
self
0
null
Building a PC for Local LLMs – Advice on CPU/RAM timing?
1
[removed]
2024-12-06T09:48:17
https://www.reddit.com/r/LocalLLaMA/comments/1h7xw08/building_a_pc_for_local_llms_advice_on_cpuram/
Dawgt0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h7xw08
false
null
t3_1h7xw08
/r/LocalLLaMA/comments/1h7xw08/building_a_pc_for_local_llms_advice_on_cpuram/
false
false
self
1
null
Need help with Whisper models (large-v3, large-v2, turbo, medium). Anyone who has used these or other versions?
0
I am trying to translate using large-v3/v2 and it is not working. It's Japanese audio to English at first. It worked with the medium model. I also want to know whether non-English to non-English translation is possible with Whisper models.
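For reference, a minimal invocation with the `openai-whisper` package (the audio filename is a placeholder). Note that Whisper's `translate` task only ever targets English, which is why Japanese-to-English works; non-English to non-English translation is not supported natively and needs a separate machine-translation step.

```python
import whisper

model = whisper.load_model("large-v3")
# task="translate" always produces English text, regardless of source language
result = model.transcribe("japanese_audio.mp3", task="translate", language="ja")
print(result["text"])
```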
2024-12-06T11:12:10
https://www.reddit.com/r/LocalLLaMA/comments/1h7z25v/need_help_with_wisper_model_largev3_largev2_turbo/
Infinite-Calendar542
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h7z25v
false
null
t3_1h7z25v
/r/LocalLLaMA/comments/1h7z25v/need_help_with_wisper_model_largev3_largev2_turbo/
false
false
self
0
null
Best Open WebUI alternative for Ollama?
0
Some of its new design choices don't work for me, so I'm looking for another web UI. Since I want to access LLMs on my phone, a web UI is necessary; I can't just use a GUI app like LM Studio or Jan. Any suggestions? Thanks in advance!
2024-12-06T12:04:09
https://www.reddit.com/r/LocalLLaMA/comments/1h7zusd/best_open_webul_alternative_for_ollama/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h7zusd
false
null
t3_1h7zusd
/r/LocalLLaMA/comments/1h7zusd/best_open_webul_alternative_for_ollama/
false
false
self
0
null
AI Agent creation with Dynamic Node pathways
1
[removed]
2024-12-06T12:05:37
https://www.reddit.com/r/LocalLLaMA/comments/1h7zvo7/ai_agent_creation_with_dynamic_node_pathways/
curious-airesearcher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h7zvo7
false
null
t3_1h7zvo7
/r/LocalLLaMA/comments/1h7zvo7/ai_agent_creation_with_dynamic_node_pathways/
false
false
self
1
null
Glory to Open Source: Apply for free H100 time
1
[removed]
2024-12-06T12:33:29
https://www.reddit.com/r/LocalLLaMA/comments/1h80c1h/glory_to_open_source_apply_for_free_h100_time/
allen-tensordock
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h80c1h
false
null
t3_1h80c1h
/r/LocalLLaMA/comments/1h80c1h/glory_to_open_source_apply_for_free_h100_time/
false
false
self
1
null
Power of Local LLaMA Models for Personal Projects
1
[removed]
2024-12-06T12:45:29
https://www.reddit.com/r/LocalLLaMA/comments/1h80ji8/power_of_local_llama_models_for_personal_projects/
minemateinnovation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h80ji8
false
null
t3_1h80ji8
/r/LocalLLaMA/comments/1h80ji8/power_of_local_llama_models_for_personal_projects/
false
false
self
1
null
Purchasing old Datacentre-grade GPUs
1
[removed]
2024-12-06T12:54:55
https://www.reddit.com/r/LocalLLaMA/comments/1h80pjs/purchasing_old_datacentregrade_gpus/
ClearlyCylindrical
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h80pjs
false
null
t3_1h80pjs
/r/LocalLLaMA/comments/1h80pjs/purchasing_old_datacentregrade_gpus/
false
false
self
1
null
InternVL2.5: an advanced MLLM series with parameter coverage ranging from 1B to 78B
66
OpenGVLab released InternVL2.5 today, using InternViT-300M/6B as the vision part and Qwen2.5-0.5/3/32/72B or InternLM2.5-1.8/7/20B as the language part. Available on [Hugging Face](https://huggingface.co/collections/OpenGVLab/internvl-25-673e1019b66e2218f68d7c1c). I am waiting for their AWQ-quantized 26B/38B variants. I think their previous model works fine as a midpoint between Qwen2-VL-7B (or Pixtral-12B) and Qwen2-VL-72B. https://preview.redd.it/3alrjh99b85e1.png?width=3755&format=png&auto=webp&s=5b419e22e4558fe2670b7ad7e9731504970937e6 OpenCompass is a benchmark with a Chinese dataset, so some models (like Llama-3.2) might be underestimated. But I think these models would be nice.
2024-12-06T13:31:10
https://www.reddit.com/r/LocalLLaMA/comments/1h81e94/internvl25_an_advanced_mllm_series_with_parameter/
lly0571
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h81e94
false
null
t3_1h81e94
/r/LocalLLaMA/comments/1h81e94/internvl25_an_advanced_mllm_series_with_parameter/
false
false
https://b.thumbs.redditm…T2lAeiwRRJ3Y.jpg
66
{'enabled': False, 'images': [{'id': 'eBNSW_6A6bigewpuZx0huZx81MCQnQDnAxq2GFvem2A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KIXkmoBVit4Q0dZ9h41G1wR4_7ArFYkJpwgN8YeH0vo.jpg?width=108&crop=smart&auto=webp&s=0021b233ecac1f526c03eb6dfa12bcbdefbcc297', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KIXkmoBVit4Q0dZ9h41G1wR4_7ArFYkJpwgN8YeH0vo.jpg?width=216&crop=smart&auto=webp&s=4013f7c26613eb0b6c824188136f4a394afbfd7d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KIXkmoBVit4Q0dZ9h41G1wR4_7ArFYkJpwgN8YeH0vo.jpg?width=320&crop=smart&auto=webp&s=fb84cb0284c31095b18994e1832c1580678cb3be', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KIXkmoBVit4Q0dZ9h41G1wR4_7ArFYkJpwgN8YeH0vo.jpg?width=640&crop=smart&auto=webp&s=e5738d0fa5a55531a9988a48821b9c002cd7b5ec', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KIXkmoBVit4Q0dZ9h41G1wR4_7ArFYkJpwgN8YeH0vo.jpg?width=960&crop=smart&auto=webp&s=eb39635cc1afa2429faebbc3de6b05d1ace48f10', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KIXkmoBVit4Q0dZ9h41G1wR4_7ArFYkJpwgN8YeH0vo.jpg?width=1080&crop=smart&auto=webp&s=b63fc0c936f4d0c78333f056d0b6744abf808b6a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KIXkmoBVit4Q0dZ9h41G1wR4_7ArFYkJpwgN8YeH0vo.jpg?auto=webp&s=94bd05678046bee6205c14495c7f3d9457b85e90', 'width': 1200}, 'variants': {}}]}
Anyone here use a Mac Mini for running AI models?
0
I want to know how big a model a Mac Mini can run. Does it heat up a lot while working on image generation / video generation? What can I expect if I use a Mac Mini for running models? Can I run very large models too, with bearable latency?
2024-12-06T14:01:52
https://www.reddit.com/r/LocalLLaMA/comments/1h820cz/anyone_here_uses_mac_mini_for_running_ai_models/
TheLogiqueViper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h820cz
false
null
t3_1h820cz
/r/LocalLLaMA/comments/1h820cz/anyone_here_uses_mac_mini_for_running_ai_models/
false
false
self
0
null
Hybrid search vs. Token pooling benchmarking for RAG
5
Hi r/LocalLLaMA, we benchmarked two ways to improve latency in RAG workflows with a multi-vector / ColBERT-late-interaction setup: hybrid search using Postgres's native capabilities, and hierarchical-clustering token pooling. Token pooling unlocked up to 70% faster latency at <1% performance cost - basically insane gains for free. We wrote about it below! [https://blog.colivara.com/unlocking-70-faster-response-times-through-token-pooling](https://blog.colivara.com/unlocking-70-faster-response-times-through-token-pooling)
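For readers who want the gist without the post: token pooling clusters each document's token embeddings and keeps one mean vector per cluster, shrinking the multi-vector index. A minimal sketch, assuming Ward linkage and a pool factor of 2; these are illustrative choices, not necessarily the blog's exact configuration:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def pool_tokens(token_embs: np.ndarray, pool_factor: int = 2) -> np.ndarray:
    """Hierarchically cluster token vectors and mean-pool within each cluster."""
    n_clusters = max(1, token_embs.shape[0] // pool_factor)
    labels = fcluster(linkage(token_embs, method="ward"),
                      t=n_clusters, criterion="maxclust")
    return np.stack([token_embs[labels == c].mean(axis=0)
                     for c in np.unique(labels)])

doc = np.random.randn(200, 128).astype(np.float32)  # 200 token vectors, dim 128
print(pool_tokens(doc).shape)  # roughly (100, 128): half the vectors to score
```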
2024-12-06T14:08:51
https://www.reddit.com/r/LocalLLaMA/comments/1h825it/hybrid_search_vs_token_pooling_benchmarking_for/
Vegetable_Study3730
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h825it
false
null
t3_1h825it
/r/LocalLLaMA/comments/1h825it/hybrid_search_vs_token_pooling_benchmarking_for/
false
false
self
5
{'enabled': False, 'images': [{'id': 'XWIDQbcIVHqc7h-vCFTobfP7Pwr9I44I0RMEOz2wkVA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/SpzJ5dmtCBhGcFA9JWF6jqgUCJafjAoHDAbGli8cNAo.jpg?width=108&crop=smart&auto=webp&s=90cf14475a5ae470d5741ad402b94d7ecb81b863', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/SpzJ5dmtCBhGcFA9JWF6jqgUCJafjAoHDAbGli8cNAo.jpg?width=216&crop=smart&auto=webp&s=f0d4c88dcff65e6cf16c466e7d0e0ec3aa06888d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/SpzJ5dmtCBhGcFA9JWF6jqgUCJafjAoHDAbGli8cNAo.jpg?width=320&crop=smart&auto=webp&s=a17a8f6ea4d8db83a38a968ac62cd8b579bd536e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/SpzJ5dmtCBhGcFA9JWF6jqgUCJafjAoHDAbGli8cNAo.jpg?width=640&crop=smart&auto=webp&s=6a93b8a8a596c0557af2b11476339711ea1f5d7f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/SpzJ5dmtCBhGcFA9JWF6jqgUCJafjAoHDAbGli8cNAo.jpg?width=960&crop=smart&auto=webp&s=e77b60f0dd5f76e2d4eea954a04bf6019e4584a3', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/SpzJ5dmtCBhGcFA9JWF6jqgUCJafjAoHDAbGli8cNAo.jpg?width=1080&crop=smart&auto=webp&s=141534731627b2d177432e555cf46f0c6ee34485', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/SpzJ5dmtCBhGcFA9JWF6jqgUCJafjAoHDAbGli8cNAo.jpg?auto=webp&s=dc91b4fb76e6d554ba4fc6099fc0b81114f280e8', 'width': 1200}, 'variants': {}}]}
Open source (actual) 4o - multimodal out
0
Where did the true 4o go? I mean the one that could produce multimodal output (images, models, etc). OpenAI never released it, but its capabilities were really impressive and everyone just forgot about it. [https://openai.com/index/hello-gpt-4o/](https://openai.com/index/hello-gpt-4o/) Would be cool to see an open source attempt
2024-12-06T14:20:58
https://www.reddit.com/r/LocalLLaMA/comments/1h82ek2/open_source_actual_4o_multimodal_out/
dp3471
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h82ek2
false
null
t3_1h82ek2
/r/LocalLLaMA/comments/1h82ek2/open_source_actual_4o_multimodal_out/
false
false
self
0
null
I'm planning on making a desktop automation tool using Llama Vision, am I going to hit an unforeseen issue?
3
Hey everyone, so this weekend I am planning on making a simple application that does the following: 1. I send a command to an API, let's say "Open chrome browser". 2. The API grabs a screenshot from a Windows VM, which I will have reset back to the desktop. 3. The screenshot is sent to a Llama vision model, requesting the x,y location for "Open chrome browser". 4. Hopefully this gives me back the x,y; I'll then format the request and send it to the VM to be clicked. I'm assuming my hiccup will be either in the x,y-location part or in the steps. Currently I'm thinking I'll need to create a document with a list of step-by-step instructions for common tasks... which I'll likely need to chain with a 70B model to get. But I'm wondering if my system is inherently flawed, or if there's already a better way to accomplish this that I'm currently ignorant of. A rough sketch of the loop is below.
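Here is that sketch of steps 2-4, collapsing the VM round-trip onto one machine for brevity. The Ollama-served vision model, the ask-for-JSON prompt, and pyautogui for the click are all assumptions that will need iteration; vision models are notoriously shaky at exact pixel coordinates.

```python
import base64
import json
import re

import pyautogui
import requests

pyautogui.screenshot().save("screen.png")  # stand-in for the VM screenshot API
img_b64 = base64.b64encode(open("screen.png", "rb").read()).decode()

prompt = ('Return only JSON like {"x": 100, "y": 200} giving where to click '
          'to accomplish: Open chrome browser')
resp = requests.post("http://localhost:11434/api/generate",
                     json={"model": "llama3.2-vision", "prompt": prompt,
                           "images": [img_b64], "stream": False})

match = re.search(r"\{.*\}", resp.json()["response"], re.S)  # needs error handling
coords = json.loads(match.group(0))
pyautogui.click(coords["x"], coords["y"])  # stand-in for the VM click API
```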
2024-12-06T14:51:52
https://www.reddit.com/r/LocalLLaMA/comments/1h8322c/im_planning_on_making_a_desktop_automation_tool/
valdev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8322c
false
null
t3_1h8322c
/r/LocalLLaMA/comments/1h8322c/im_planning_on_making_a_desktop_automation_tool/
false
false
self
3
null
Cool technical guide explaining different fine-tuning techniques like SFT, REINFORCE and PPO
1
[removed]
2024-12-06T14:52:09
https://www.reddit.com/r/LocalLLaMA/comments/1h8329e/cool_technical_guide_explaining_different/
pers0nbird
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8329e
false
null
t3_1h8329e
/r/LocalLLaMA/comments/1h8329e/cool_technical_guide_explaining_different/
false
false
self
1
null
It seems there are some encoding issues with Anthropic's llms.txt
0
2024-12-06T15:29:02
https://www.reddit.com/gallery/1h83w4p
secsilm
reddit.com
1970-01-01T00:00:00
0
{}
1h83w4p
false
null
t3_1h83w4p
/r/LocalLLaMA/comments/1h83w4p/it_seems_there_are_some_encoding_issues_with/
false
false
https://a.thumbs.redditm…szgHv4mlPbw4.jpg
0
null
Am I the only person who isn't amazed by O1?
214
I've been using OptiLLM since October, so maybe I was already familiar with some prompt-optimization methods (best-of-n, self-consistency). And I already have detailed chain-of-thought prompts and prompt chains for my use cases, so the fact that the model "thinks" wasn't too revolutionary. Don't get me wrong, o1 is good, but by no means a paradigm shift IMO; it seems to me that OpenAI just applied a bunch of methods that people in the open-source AI space have already been playing with, made GitHub repos for, etc.
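For anyone unfamiliar with the methods named above, self-consistency fits in a dozen lines: sample several reasoning paths at non-zero temperature and majority-vote the final answers. `call_llm` is a hypothetical stand-in for whatever completion client you use, and taking the last line as the answer is a crude placeholder for real answer extraction.

```python
from collections import Counter

def call_llm(prompt: str, temperature: float) -> str:
    """Hypothetical completion client; wire up to your API of choice."""
    raise NotImplementedError

def self_consistency(question: str, n: int = 8) -> str:
    prompt = question + "\nThink step by step, then state only the final answer on the last line."
    answers = [call_llm(prompt, temperature=0.7).splitlines()[-1].strip()
               for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]  # majority vote
```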
2024-12-06T15:40:57
https://www.reddit.com/r/LocalLLaMA/comments/1h845wl/am_i_the_only_person_who_isnt_amazed_by_o1/
noellarkin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h845wl
false
null
t3_1h845wl
/r/LocalLLaMA/comments/1h845wl/am_i_the_only_person_who_isnt_amazed_by_o1/
false
false
self
214
null
Single 1x 3090 or Quad 4x 4060s?
1
[removed]
2024-12-06T16:06:13
https://www.reddit.com/r/LocalLLaMA/comments/1h84r1c/single_1x_3090_or_quad_4x_4060s/
Optimal-Medium5539
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h84r1c
false
null
t3_1h84r1c
/r/LocalLLaMA/comments/1h84r1c/single_1x_3090_or_quad_4x_4060s/
false
false
self
1
null
How would you approach this multi-modal prompt task?
0
I'm quite new at this, so I'm sorry if this is obvious. I need to run a local LLM to process some data that is sensitive enough not to go to the cloud. The training set is about 200-300 documents. Each document comprises: a) a summary in words of the current situation in a location - think a summary of news and economic information, like a CIA World Factbook entry or a Wikipedia page for a place; b) a .csv with about 200 time-structured variables (e.g. GDP each month, population each month, and so on); and c) 6 maps, each showing the spatial distribution of a single variable (e.g. yearly rainfall, agricultural production). The maps show the same variables in each document, but the maps themselves differ because the locations differ. I have about 30,000 other sets of (.csv plus maps) without a summary document, and it's this document I need an LLM to produce, prompted by the csv file, the maps, and a text prompt in each case. I could get the data underlying the maps if that's a better approach than the images themselves. At the moment the best I can come up with is to try to fine-tune a multi-modal model such as xinstructblip (https://github.com/salesforce/LAVIS/tree/main/projects/xinstructblip), but I'd be really grateful if anyone knows of a better or easier method. By the way, this is for non-commercial environmental research under an open license, so you're not going to be helping me make my fortune…
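Whichever model ends up fitting, the unglamorous first step is the same: pair each location's csv and maps with its summary in a single training manifest that a fine-tuning script (LAVIS included) can be adapted to read. A sketch under an assumed directory layout; `locations/<name>/summary.txt`, `data.csv`, and `map_*.png` are hypothetical names:

```python
import json
from pathlib import Path

root = Path("locations")  # assumed layout: locations/<name>/{summary.txt,data.csv,map_*.png}

with open("train_manifest.jsonl", "w") as out:
    for loc in sorted(p for p in root.iterdir() if p.is_dir()):
        if not (loc / "summary.txt").exists():
            continue  # the ~30k unlabeled sets are inference targets, not training rows
        row = {
            "csv": str(loc / "data.csv"),
            "images": sorted(str(m) for m in loc.glob("map_*.png")),
            "target": (loc / "summary.txt").read_text().strip(),
        }
        out.write(json.dumps(row) + "\n")
```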
2024-12-06T16:12:46
https://www.reddit.com/r/LocalLLaMA/comments/1h84wg6/how_would_approach_this_multimodal_prompt_task/
cromagnone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h84wg6
false
null
t3_1h84wg6
/r/LocalLLaMA/comments/1h84wg6/how_would_approach_this_multimodal_prompt_task/
false
false
self
0
{'enabled': False, 'images': [{'id': '85bYXQNVy0r1oe3BDRdEPj2EBjnOuCxpKRlJ4g8HKck', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Oig2YaIM_oAmMjSJEK4jQxRoje0gGtnQ61pxEj724MA.jpg?width=108&crop=smart&auto=webp&s=e68305c6885e7973dca3d811212bd91f22a95b4b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Oig2YaIM_oAmMjSJEK4jQxRoje0gGtnQ61pxEj724MA.jpg?width=216&crop=smart&auto=webp&s=ef7047d920abf0f3d3d2d58aaaf28349830ef69c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Oig2YaIM_oAmMjSJEK4jQxRoje0gGtnQ61pxEj724MA.jpg?width=320&crop=smart&auto=webp&s=bd07c18b4f095b39ad9d6f6fcdfa7cc3500cbf04', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Oig2YaIM_oAmMjSJEK4jQxRoje0gGtnQ61pxEj724MA.jpg?width=640&crop=smart&auto=webp&s=35b6a0e7afeb35df416a376b733f106de313461f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Oig2YaIM_oAmMjSJEK4jQxRoje0gGtnQ61pxEj724MA.jpg?width=960&crop=smart&auto=webp&s=96d22aa8f46b66e40186728f8e1bcb333bbbb612', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Oig2YaIM_oAmMjSJEK4jQxRoje0gGtnQ61pxEj724MA.jpg?width=1080&crop=smart&auto=webp&s=d99708428f10a0378cc1f48f8f3ebcf8fab531f0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Oig2YaIM_oAmMjSJEK4jQxRoje0gGtnQ61pxEj724MA.jpg?auto=webp&s=840876908d0be61430867a40b0acf30e72d66680', 'width': 1200}, 'variants': {}}]}
How to create my own NotebookLM?
0
I know it is a dumb question and I am relatively VERY new to LLMs and RAG applications. However, the amazing use case of NotebookLM by Google has fascinated me, and I want to know how I could create such an application of my own from scratch. I have built a basic RAG application recently, so would extending it with bigger(?) resources and then also handling its chat history do? I don't plan to add the podcast feature just yet; I want guidance on how to add just the source-grounded contextual chatting feature. Any other suggestions or resources would be MUCH appreciated!!
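A minimal sketch of the source-grounded chat loop, since that is the core of it: retrieve the top chunks, stuff them plus recent history into the prompt, and append each turn. Retrieval here uses sentence-transformers; `generate()` is a hypothetical wrapper around your local LLM, and the chunks are placeholders for your parsed sources.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunks = ["...source chunk 1...", "...source chunk 2..."]  # your parsed sources
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)
history: list[str] = []

def generate(prompt: str) -> str:
    """Hypothetical call into your local LLM."""
    raise NotImplementedError

def chat(question: str, k: int = 3) -> str:
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(chunk_vecs @ q_vec)[-k:]  # cosine similarity (vectors are normalized)
    prompt = ("Answer ONLY from these sources:\n"
              + "\n".join(chunks[i] for i in top)
              + "\n\nConversation so far:\n" + "\n".join(history[-6:])
              + f"\nUser: {question}\nAssistant:")
    answer = generate(prompt)
    history.extend([f"User: {question}", f"Assistant: {answer}"])
    return answer
```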
2024-12-06T16:22:10
https://www.reddit.com/r/LocalLLaMA/comments/1h854i2/how_to_create_my_own_notebooklm/
Zealousideal_Cut5161
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h854i2
false
null
t3_1h854i2
/r/LocalLLaMA/comments/1h854i2/how_to_create_my_own_notebooklm/
false
false
self
0
null
Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs
6
Sailor2 is a community-driven initiative that brings cutting-edge multilingual language models to South-East Asia (SEA). Sailor2 builds upon the foundation of the awesome multilingual model [Qwen 2.5](https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e) and is continuously pre-trained on **500B tokens** to support **15 languages** better with a unified model. The Sailor2 model comes in three sizes, 1B, 8B, and 20B, which are **expanded from the Qwen2.5 base models** of 0.5B, 7B, and 14B, respectively. Hugging face collection: [https://huggingface.co/collections/sail/sailor2-language-models-674d7c9e6b4dbbd9a869906b](https://huggingface.co/collections/sail/sailor2-language-models-674d7c9e6b4dbbd9a869906b) GGUF: [https://huggingface.co/bartowski/Sailor2-20B-Chat-GGUF](https://huggingface.co/bartowski/Sailor2-20B-Chat-GGUF) [https://huggingface.co/bartowski/Sailor2-8B-Chat-GGUF](https://huggingface.co/bartowski/Sailor2-8B-Chat-GGUF) [https://huggingface.co/bartowski/Sailor2-1B-Chat-GGUF](https://huggingface.co/bartowski/Sailor2-1B-Chat-GGUF)
2024-12-06T16:23:11
https://www.reddit.com/r/LocalLLaMA/comments/1h855bs/sailor2_sailing_in_southeast_asia_with_inclusive/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h855bs
false
null
t3_1h855bs
/r/LocalLLaMA/comments/1h855bs/sailor2_sailing_in_southeast_asia_with_inclusive/
false
false
self
6
{'enabled': False, 'images': [{'id': 'raF_R-3PyM18hN3EcRxsdXb5u5pCVHkRBYb4NcIiXEA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/roEr6eJcskSJ59TjKETGxpRTHbD2TQIpR3HGPxdfTWE.jpg?width=108&crop=smart&auto=webp&s=69d0caa54e0173cde60ff93becca95928a318ed3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/roEr6eJcskSJ59TjKETGxpRTHbD2TQIpR3HGPxdfTWE.jpg?width=216&crop=smart&auto=webp&s=99546d4dcdf649594c2dd6ce10e4e7e1fc723a19', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/roEr6eJcskSJ59TjKETGxpRTHbD2TQIpR3HGPxdfTWE.jpg?width=320&crop=smart&auto=webp&s=9a1f0f675712320e3955e4a523cb8eaa5bf4b2fc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/roEr6eJcskSJ59TjKETGxpRTHbD2TQIpR3HGPxdfTWE.jpg?width=640&crop=smart&auto=webp&s=4320fcfaa0cb1c4ff572d2dba8d9090f4d1da36f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/roEr6eJcskSJ59TjKETGxpRTHbD2TQIpR3HGPxdfTWE.jpg?width=960&crop=smart&auto=webp&s=3525952ce33a165b77a1572544360131d944f40f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/roEr6eJcskSJ59TjKETGxpRTHbD2TQIpR3HGPxdfTWE.jpg?width=1080&crop=smart&auto=webp&s=30c3dbfd6a1e913e44767d3b102a4b768c8535f1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/roEr6eJcskSJ59TjKETGxpRTHbD2TQIpR3HGPxdfTWE.jpg?auto=webp&s=a4aeedd9fba87e7c61b4bf9e23f2cbee4b2603f6', 'width': 1200}, 'variants': {}}]}
What are the best techniques and tools to have the model 'self-correct?'
0
## CONTEXT

I'm a noob building an app that analyses financial transactions to find the max/min/avg balance every month/year. Because my users have accounts in multiple countries/languages that aren't covered by Plaid, I can't rely on Plaid; I have to analyze account-statement PDFs.

Extracting financial transactions like `| 2021-04-28 | 452.10 | credit |` _almost_ works. The model hallucinates most times and creates some transactions that don't exist; it's always just one or two transactions where it fails.

I've now read about prompt chaining and thought it might be a good idea to have the model check its own output. Perhaps say "given this list of transactions, can you check they're all present in this account statement", or, far more granular, do it for every single transaction to get it 100% right - "is this one transaction present in this page of the account statement", _transaction by transaction_ - and have it correct itself.

## QUESTIONS

1) Is using the model to self-correct a good idea?
2) How could this be achieved?
3) Should I use the regular API for chaining outputs, or LangChain or something? I still don't understand the benefits of these tools.

## MORE CONTEXT

- I started by using Docling to OCR the PDF, then feeding the markdown to the LLM (both in its entirety and in hierarchical chunks). It wasn't accurate; it wouldn't extract transactions properly.
- I then moved on to Llama vision, which seems to yield much better results at extracting transactions, but it still makes some mistakes.
- My next step, before doing what I've described above, is to improve my prompt and play around with temperature, top_p, etc., which I have not done so far!
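On question 1: yes, a second verification pass is a standard pattern for exactly this failure mode; and on question 3: plain API calls chain fine, no framework is required. A sketch of the granular, per-transaction check; `call_llm` is a hypothetical stand-in for your model client, with temperature 0 to keep the check deterministic:

```python
import json

def call_llm(prompt: str, temperature: float = 0.0) -> str:
    """Hypothetical stand-in for your model client (text or vision)."""
    raise NotImplementedError

def verify(transactions: list[dict], page_text: str) -> list[dict]:
    """Keep only transactions the model confirms are present verbatim."""
    confirmed = []
    for tx in transactions:
        q = (f"Statement page:\n{page_text}\n\n"
             f"Is this exact transaction present? {json.dumps(tx)}\n"
             "Answer YES or NO only.")
        if call_llm(q).strip().upper().startswith("YES"):
            confirmed.append(tx)
    return confirmed
```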
2024-12-06T16:27:26
https://www.reddit.com/r/LocalLLaMA/comments/1h858wb/what_are_the_best_techniques_and_tools_to_have/
dirtyring
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h858wb
false
null
t3_1h858wb
/r/LocalLLaMA/comments/1h858wb/what_are_the_best_techniques_and_tools_to_have/
false
false
self
0
null
Llama-3.3-70B-Instruct · Hugging Face
756
2024-12-06T16:42:13
https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1h85ld5
false
null
t3_1h85ld5
/r/LocalLLaMA/comments/1h85ld5/llama3370binstruct_hugging_face/
false
false
https://b.thumbs.redditm…JldBzN4xcb0o.jpg
756
{'enabled': False, 'images': [{'id': 'VonlxAOpG-SOmitQHhh949yS9p5GoGwzDacaaDw8pe4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=108&crop=smart&auto=webp&s=406b75739914d00816f767bfe4ba5cde1b965a12', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=216&crop=smart&auto=webp&s=561cfa109b99033cf44c96752fa4fe0059d99209', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=320&crop=smart&auto=webp&s=3dededd1834672bfc787ef43d2e7584b57f36c4f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=640&crop=smart&auto=webp&s=1f8c63e24c34b0f28547be624d2a56d60be52aaa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=960&crop=smart&auto=webp&s=f5f167a7ec537e5d286000c3131dde564533a1c6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=1080&crop=smart&auto=webp&s=ebd47fd5ef098c6f56e061c339a82ff33e37caad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?auto=webp&s=e1bba36dc8102e308d41a3391676cd6d7058a0f0', 'width': 1200}, 'variants': {}}]}
Llama 3.3 70B on Huggingface
14
Seems to be about as good as 405b https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
2024-12-06T16:50:30
https://www.reddit.com/r/LocalLLaMA/comments/1h85sdl/llama_33_70b_on_huggingface/
MurphyM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h85sdl
false
null
t3_1h85sdl
/r/LocalLLaMA/comments/1h85sdl/llama_33_70b_on_huggingface/
false
false
self
14
{'enabled': False, 'images': [{'id': 'VonlxAOpG-SOmitQHhh949yS9p5GoGwzDacaaDw8pe4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=108&crop=smart&auto=webp&s=406b75739914d00816f767bfe4ba5cde1b965a12', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=216&crop=smart&auto=webp&s=561cfa109b99033cf44c96752fa4fe0059d99209', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=320&crop=smart&auto=webp&s=3dededd1834672bfc787ef43d2e7584b57f36c4f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=640&crop=smart&auto=webp&s=1f8c63e24c34b0f28547be624d2a56d60be52aaa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=960&crop=smart&auto=webp&s=f5f167a7ec537e5d286000c3131dde564533a1c6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=1080&crop=smart&auto=webp&s=ebd47fd5ef098c6f56e061c339a82ff33e37caad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?auto=webp&s=e1bba36dc8102e308d41a3391676cd6d7058a0f0', 'width': 1200}, 'variants': {}}]}
Meta releases Llama3.3 70B
1,200
A drop-in replacement for Llama 3.1 70B that approaches the performance of the 405B. https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
2024-12-06T16:52:12
https://i.redd.it/ji1hp067d95e1.jpeg
Amgadoz
i.redd.it
1970-01-01T00:00:00
0
{}
1h85tt4
false
null
t3_1h85tt4
/r/LocalLLaMA/comments/1h85tt4/meta_releases_llama33_70b/
false
false
https://b.thumbs.redditm…teevLTPB0irM.jpg
1,200
{'enabled': True, 'images': [{'id': '7hT09qTJf95m--DRvcEQB9dqRJiUAvbv7aMULMp_ueY', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/ji1hp067d95e1.jpeg?width=108&crop=smart&auto=webp&s=44988e7b16d0a3a6117255eecaac1e7b46b71815', 'width': 108}, {'height': 191, 'url': 'https://preview.redd.it/ji1hp067d95e1.jpeg?width=216&crop=smart&auto=webp&s=837bf9a9df2e44c9828aac182dd4d73a9497a702', 'width': 216}, {'height': 283, 'url': 'https://preview.redd.it/ji1hp067d95e1.jpeg?width=320&crop=smart&auto=webp&s=e59c21de6218a93362aa952aad6e9ae81e6b2f0b', 'width': 320}, {'height': 567, 'url': 'https://preview.redd.it/ji1hp067d95e1.jpeg?width=640&crop=smart&auto=webp&s=240de8c2aa644074ddfc1974363406f264424906', 'width': 640}], 'source': {'height': 638, 'url': 'https://preview.redd.it/ji1hp067d95e1.jpeg?auto=webp&s=6cd76d54708c0143ac54c623766547e24815b34e', 'width': 720}, 'variants': {}}]}
Is it possible to inject computed embeddings instead of a document in a RAG system
6
I'm trying to see if the following would be possible: given a document database, compute the documents' embeddings as they would be after going through the LLM, so that one can serve the latent-space state through an API instead of the documents as text. This would add a layer of obfuscation (even if possibly decipherable by the LLM through prompt attacks), so that I could in theory have "important documents" being served without them being human-readable. Is there such a system out there?
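Not as an off-the-shelf system, but the raw hook exists: transformers will accept precomputed `inputs_embeds` in place of token ids, so an API could serve a document's embedding-layer vectors instead of its text. A sketch, with the model choice as an assumption; note the obfuscation is weak, since embedding vectors invert back to tokens by nearest-neighbor lookup against the embedding matrix.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-3.2-1B-Instruct"  # assumption: any causal LM works the same way
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)

ids = tok("the sensitive document text", return_tensors="pt").input_ids
embs = model.get_input_embeddings()(ids)  # the vectors such an API would serve
mask = torch.ones(embs.shape[:2], dtype=torch.long)

out = model.generate(inputs_embeds=embs, attention_mask=mask, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```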
2024-12-06T17:00:34
https://www.reddit.com/r/LocalLLaMA/comments/1h860ys/is_it_possible_to_inject_computed_embeddings/
XquaInTheMoon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h860ys
false
null
t3_1h860ys
/r/LocalLLaMA/comments/1h860ys/is_it_possible_to_inject_computed_embeddings/
false
false
self
6
null
LLM answers uniform and optimization
1
[removed]
2024-12-06T17:02:14
https://www.reddit.com/r/LocalLLaMA/comments/1h862md/llm_answers_uniform_and_optimization/
markspammer_0101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h862md
false
null
t3_1h862md
/r/LocalLLaMA/comments/1h862md/llm_answers_uniform_and_optimization/
false
false
self
1
null
Guide me with AI Local Model Training
0
Hi everyone, I've been using LM Studio for text generation with different open-source AI models. Are there any tools that let me use Llama 3.2 or 3.1 to privately train a model on my own data and use it exclusively on my Windows PC? Basically, I have to work with lots of data daily. I want to train an AI model on that data (keeping my data private) and create an API to fetch the data the way I want it displayed in my software. I code for my work, but this is something new for me. What I want to know is: 1. Are there any tools that can be used to train on my data (URLs or files) with an advanced open-source model like Llama 3.2 or others? 2. Can the data stay private to me (not shared with others)? 3. If not, what are the best possible ways to achieve the application I want? Thanks
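On point 1: if "train" really means fine-tune (for look-up-my-data use cases, RAG is often the better fit), the standard fully private route is a LoRA adapter via `peft`, which runs entirely on your own machine. A compressed sketch; the model name and hyperparameters are placeholders:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
cfg = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                 task_type="CAUSAL_LM")
model = get_peft_model(base, cfg)
model.print_trainable_parameters()  # only ~0.1% of weights train; data stays local
# from here, feed (prompt, response) pairs to a standard Trainer/SFTTrainer loop
```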
2024-12-06T17:11:00
https://www.reddit.com/r/LocalLLaMA/comments/1h86a4r/guide_me_with_ai_local_model_training/
West-Structure-4030
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h86a4r
false
null
t3_1h86a4r
/r/LocalLLaMA/comments/1h86a4r/guide_me_with_ai_local_model_training/
false
false
self
0
null
And apparently 3.3 is already on ollama's library
210
2024-12-06T17:17:01
https://ollama.com/library/llama3.3/tags
kmouratidis
ollama.com
1970-01-01T00:00:00
0
{}
1h86f8n
false
null
t3_1h86f8n
/r/LocalLLaMA/comments/1h86f8n/and_apparently_33_is_already_on_ollamas_library/
false
false
https://b.thumbs.redditm…vzDbSQSro6Hs.jpg
210
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
How theoretically possible would it be to train a "meta-model" (i.e., a model trained on model weights and other relevant info that would then directly generate better model weights)?
5
I don't really know much detail about the technical aspects of AI training (I haven't had the time to really sit down and focus on learning it, unfortunately), but thanks to Hugging Face there's a very convenient dataset of model weights, some of their training datasets, descriptions of their modalities, etc. It seems to me the logical next step is to try bundling all that together to make a model that can generate new models based on certain target criteria (such as desired benchmark scores). Is there any merit to this idea? I just thought of it the other day and I was curious what everyone else had to say.
2024-12-06T17:33:13
https://www.reddit.com/r/LocalLLaMA/comments/1h86sol/how_theoretically_possibly_would_it_be_to_train_a/
-illusoryMechanist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h86sol
false
null
t3_1h86sol
/r/LocalLLaMA/comments/1h86sol/how_theoretically_possibly_would_it_be_to_train_a/
false
false
self
5
null
Max parallel requests provider
2
This isn't a "local" question, but maybe someone here knows... Which compute or API provider currently supports the highest parallel-request rate limit? Basically I'd like to run thousands of prompts at the same time on something like 8B or 70B models. I'm happy to pay more for such a service than what's usually charged, but I haven't yet found any providers that aren't strictly rate-limited.
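Client-side, whichever provider you land on, the pattern is the same: fire the prompts concurrently behind a semaphore sized to your rate limit. A sketch against a placeholder OpenAI-style endpoint; the URL, payload shape, and model name are assumptions:

```python
import asyncio

import aiohttp

URL = "https://api.example.com/v1/completions"  # placeholder endpoint

async def one(session: aiohttp.ClientSession, sem: asyncio.Semaphore, prompt: str):
    async with sem:  # caps in-flight requests at the semaphore size
        async with session.post(URL, json={"model": "llama-70b", "prompt": prompt}) as r:
            return await r.json()

async def run(prompts: list[str], concurrency: int = 256):
    sem = asyncio.Semaphore(concurrency)
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(one(session, sem, p) for p in prompts))

results = asyncio.run(run([f"prompt {i}" for i in range(1000)]))
```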
2024-12-06T17:51:44
https://www.reddit.com/r/LocalLLaMA/comments/1h878dh/max_parallel_requests_provider/
curl-up
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h878dh
false
null
t3_1h878dh
/r/LocalLLaMA/comments/1h878dh/max_parallel_requests_provider/
false
false
self
2
null
tabbyAPI - Running into an AssertionError where I am out of pages.
0
`AssertionError: Job requires 17 pages (only 16 available) and cannot be enqueued. Total cache allocated is 16 * 256 = 4096 tokens` Hi, I'm getting this error and it prevents me from continuing a chat. I understand what it's trying to say: I don't have enough cache - the job needs 17 × 256 = 4352 tokens, but only 16 × 256 = 4096 are allocated. When I increased the cache and reloaded the model, it worked (though horribly slowly). Is there a way to have tabbyAPI drop earlier messages from its cache and continue on? [Actual screenshot of the errors.](https://preview.redd.it/roft7dt5o95e1.png?width=1124&format=png&auto=webp&s=d0a81e843757a93f6bb0d834ade4eea398a2505b)
2024-12-06T17:53:48
https://www.reddit.com/r/LocalLLaMA/comments/1h87a8b/tabbyapi_running_into_an_assertionerror_where_i/
JuanPalermo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h87a8b
false
null
t3_1h87a8b
/r/LocalLLaMA/comments/1h87a8b/tabbyapi_running_into_an_assertionerror_where_i/
false
false
https://b.thumbs.redditm…nxEdfuypl_ww.jpg
0
null
Llama 3.3 70B is now available on HuggingChat, unquantized and for free!
160
2024-12-06T17:59:43
https://huggingface.co/chat/models/meta-llama/Llama-3.3-70B-Instruct
SensitiveCranberry
huggingface.co
1970-01-01T00:00:00
0
{}
1h87fei
false
null
t3_1h87fei
/r/LocalLLaMA/comments/1h87fei/llama_33_70b_is_now_available_on_huggingchat/
false
false
https://b.thumbs.redditm…uXN5IWOXyFfg.jpg
160
{'enabled': False, 'images': [{'id': 'TSvTdn5M2JKLtHsi3EX3pybDtey8Q__XHPhVDUT0vGs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/I_41YlV-DgstsvwUF9iB7byaBk86X1_vwIEkgxOcE48.jpg?width=108&crop=smart&auto=webp&s=7299e55269eb23e461ff7dd4c92f50794845f1b4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/I_41YlV-DgstsvwUF9iB7byaBk86X1_vwIEkgxOcE48.jpg?width=216&crop=smart&auto=webp&s=a94b1a67a739cbdc417f614c6e9aebae05cd58da', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/I_41YlV-DgstsvwUF9iB7byaBk86X1_vwIEkgxOcE48.jpg?width=320&crop=smart&auto=webp&s=6394e85acadb826f85954aac85f0b535b92415c5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/I_41YlV-DgstsvwUF9iB7byaBk86X1_vwIEkgxOcE48.jpg?width=640&crop=smart&auto=webp&s=f5bbbd598fe86fd7aa82fc5b634fe2fffe50518f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/I_41YlV-DgstsvwUF9iB7byaBk86X1_vwIEkgxOcE48.jpg?width=960&crop=smart&auto=webp&s=6c98d347b98812858cc0253fb2da5d917917c706', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/I_41YlV-DgstsvwUF9iB7byaBk86X1_vwIEkgxOcE48.jpg?width=1080&crop=smart&auto=webp&s=f03ec0ab08387f2438bfc538db063c536de80f2c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/I_41YlV-DgstsvwUF9iB7byaBk86X1_vwIEkgxOcE48.jpg?auto=webp&s=881508402915e83a589db24e3218381668189e35', 'width': 1200}, 'variants': {}}]}
Really wish Ollama had a `somemodel:max` mode that would pull the largest model I can run on my hardware without downloading and testing multiples.
9
Hoping that by posting randomly in LocalLLaMA, the AI gods will see fit to make it happen... you know, manifesting it the way other models are manifested here. I suppose I could open an issue, but that just seems like work.
2024-12-06T18:00:14
https://www.reddit.com/r/LocalLLaMA/comments/1h87fuf/really_wish_ollama_had_a_somemodelmax_mode_that/
bigattichouse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h87fuf
false
null
t3_1h87fuf
/r/LocalLLaMA/comments/1h87fuf/really_wish_ollama_had_a_somemodelmax_mode_that/
false
false
self
9
null
As an open-source project, we will bring the model context protocol(MCP) to OpenAI within 4 days
0
Hi everyone, we are building a dockerized computer-use agent. The Model Context Protocol (MCP) released by Anthropic is critical for computer use; however, OpenAI has not made any announcements regarding MCP. Since GPT Computer Assistant (GCA) is an open-source desktop assistant, we realized that by bringing MCP to GCA, you will be able to use OpenAI and Llama models with MCP. We are currently working on this and will present it to the community very soon. Do you expect OpenAI to introduce a different protocol? Do you think they will release a protocol during these 12 launch days?
2024-12-06T18:16:01
https://www.reddit.com/r/LocalLLaMA/comments/1h87tro/as_an_opensource_project_we_will_bring_the_model/
mbartu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h87tro
false
null
t3_1h87tro
/r/LocalLLaMA/comments/1h87tro/as_an_opensource_project_we_will_bring_the_model/
false
false
self
0
null
Qwen/Qwen2-VL-72B · Hugging Face
66
2024-12-06T18:20:58
https://huggingface.co/Qwen/Qwen2-VL-72B
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1h87xzt
false
null
t3_1h87xzt
/r/LocalLLaMA/comments/1h87xzt/qwenqwen2vl72b_hugging_face/
false
false
https://b.thumbs.redditm…oMA9zIk2yQLc.jpg
66
{'enabled': False, 'images': [{'id': 'E5tnbzHLoUP4ILc5tPUgw6jfxTe41pAbkYg-oZnZEqQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ehvU_rxjOZ2skgUUUwPTqFnMmzEBgy_viea3CnI86-s.jpg?width=108&crop=smart&auto=webp&s=6baf0e7e5c4d7eca8330b9e704c96f4b0347d3cc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ehvU_rxjOZ2skgUUUwPTqFnMmzEBgy_viea3CnI86-s.jpg?width=216&crop=smart&auto=webp&s=6de3b5e837b81863f8c14b96a493372efb3bc404', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ehvU_rxjOZ2skgUUUwPTqFnMmzEBgy_viea3CnI86-s.jpg?width=320&crop=smart&auto=webp&s=532e8d132d25970c99cf00c7bc979e914be57627', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ehvU_rxjOZ2skgUUUwPTqFnMmzEBgy_viea3CnI86-s.jpg?width=640&crop=smart&auto=webp&s=146ab4ac11f2f3e547b177145a38733e290503e2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ehvU_rxjOZ2skgUUUwPTqFnMmzEBgy_viea3CnI86-s.jpg?width=960&crop=smart&auto=webp&s=3b206de305c501ed6cba2b585a5b618bce39f1b6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ehvU_rxjOZ2skgUUUwPTqFnMmzEBgy_viea3CnI86-s.jpg?width=1080&crop=smart&auto=webp&s=3ab617f594df4460718fa2ebf39553196e2a791a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ehvU_rxjOZ2skgUUUwPTqFnMmzEBgy_viea3CnI86-s.jpg?auto=webp&s=fc2d773541de864b5dfd7d329f7aea8948158c10', 'width': 1200}, 'variants': {}}]}
🚶🏃⛹️Smart little people, why small language model
6
Sometimes I see people ask why models like Llama 1B exist. There are many good use cases, especially if you fine-tune them cheaply, but I'd like to offer an image: https://sketchplanations.com/smart-little-people https://www.benchmarksixsigma.com/forum/topic/36245-smart-little-people/ TL;DR: if your system were made of an army of smart little people, how would they solve the problem? Interpretation: make lots of small agents, smart at their level but not generally smart, so the question becomes "what to do" rather than "how to do". What do you think about that interpretation and its limits?
2024-12-06T18:55:08
https://www.reddit.com/r/LocalLLaMA/comments/1h88qmb/smart_little_people_why_small_language_model/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h88qmb
false
null
t3_1h88qmb
/r/LocalLLaMA/comments/1h88qmb/smart_little_people_why_small_language_model/
false
false
self
6
{'enabled': False, 'images': [{'id': 'Ju3o74Medo91v8YpFUxAOLMO0XjvdU4M-mcJTF2iimE', 'resolutions': [{'height': 91, 'url': 'https://external-preview.redd.it/HXwlxG5ecL__6lRignnZfq59UAl9bWNZkIm5-wp8eio.jpg?width=108&crop=smart&auto=webp&s=f70a699d2a2990ea9b1f2870cfb292f448b0e575', 'width': 108}, {'height': 183, 'url': 'https://external-preview.redd.it/HXwlxG5ecL__6lRignnZfq59UAl9bWNZkIm5-wp8eio.jpg?width=216&crop=smart&auto=webp&s=50649e9db8c06e25ba6cb07e01709b479a0bb1e0', 'width': 216}, {'height': 271, 'url': 'https://external-preview.redd.it/HXwlxG5ecL__6lRignnZfq59UAl9bWNZkIm5-wp8eio.jpg?width=320&crop=smart&auto=webp&s=61b8c0b6a3ece65608339f58df7e9b697ce9922f', 'width': 320}, {'height': 542, 'url': 'https://external-preview.redd.it/HXwlxG5ecL__6lRignnZfq59UAl9bWNZkIm5-wp8eio.jpg?width=640&crop=smart&auto=webp&s=389d126f11971678d3676ea29bca2de7e2458dcc', 'width': 640}, {'height': 813, 'url': 'https://external-preview.redd.it/HXwlxG5ecL__6lRignnZfq59UAl9bWNZkIm5-wp8eio.jpg?width=960&crop=smart&auto=webp&s=7baaf7e26153a55f55fa49220e3d73a0ce6224c2', 'width': 960}, {'height': 915, 'url': 'https://external-preview.redd.it/HXwlxG5ecL__6lRignnZfq59UAl9bWNZkIm5-wp8eio.jpg?width=1080&crop=smart&auto=webp&s=2dab78393cc49e183f9b96616f10a09e8d07f746', 'width': 1080}], 'source': {'height': 1017, 'url': 'https://external-preview.redd.it/HXwlxG5ecL__6lRignnZfq59UAl9bWNZkIm5-wp8eio.jpg?auto=webp&s=0b369c5b143b40fc58c5a99659493a2874504404', 'width': 1200}, 'variants': {}}]}
Llama 3.3 70B
2
Llama 3.3 is based on an optimized Transformer architecture and uses an autoregressive approach. Model tuning includes SFT and RLHF to align with human preferences for helpfulness and safety. https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
2024-12-06T19:03:34
https://www.reddit.com/r/LocalLLaMA/comments/1h88xxx/llama_33_70b/
TheLogiqueViper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h88xxx
false
null
t3_1h88xxx
/r/LocalLLaMA/comments/1h88xxx/llama_33_70b/
false
false
self
2
{'enabled': False, 'images': [{'id': 'VonlxAOpG-SOmitQHhh949yS9p5GoGwzDacaaDw8pe4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=108&crop=smart&auto=webp&s=406b75739914d00816f767bfe4ba5cde1b965a12', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=216&crop=smart&auto=webp&s=561cfa109b99033cf44c96752fa4fe0059d99209', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=320&crop=smart&auto=webp&s=3dededd1834672bfc787ef43d2e7584b57f36c4f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=640&crop=smart&auto=webp&s=1f8c63e24c34b0f28547be624d2a56d60be52aaa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=960&crop=smart&auto=webp&s=f5f167a7ec537e5d286000c3131dde564533a1c6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?width=1080&crop=smart&auto=webp&s=ebd47fd5ef098c6f56e061c339a82ff33e37caad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8SpAzO7tbhxcaq7TF-txqxvDee196R4F6wavr6IJwK8.jpg?auto=webp&s=e1bba36dc8102e308d41a3391676cd6d7058a0f0', 'width': 1200}, 'variants': {}}]}
Agentic RAG with Memory
1
[removed]
2024-12-06T19:13:32
https://www.reddit.com/r/LocalLLaMA/comments/1h896dq/agentic_rag_with_memory/
External_Ad_11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h896dq
false
null
t3_1h896dq
/r/LocalLLaMA/comments/1h896dq/agentic_rag_with_memory/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FLzei7lSWHwOIJCFe9z5_LaXhSHDQvLLYjQyxWPSLZk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/KCp8YvDXzNIvv-w4rVHoq2ahiyDZ06X6vWbyxECU9VM.jpg?width=108&crop=smart&auto=webp&s=5e3db49ed102fe0cd5c03acb02b9693bc87f751c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/KCp8YvDXzNIvv-w4rVHoq2ahiyDZ06X6vWbyxECU9VM.jpg?width=216&crop=smart&auto=webp&s=7732e8c8ddfbe3e176e83cda5bdbffdf4cdd638a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/KCp8YvDXzNIvv-w4rVHoq2ahiyDZ06X6vWbyxECU9VM.jpg?width=320&crop=smart&auto=webp&s=301e129dc527a6dd03c9052f26b112c69a0f741f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/KCp8YvDXzNIvv-w4rVHoq2ahiyDZ06X6vWbyxECU9VM.jpg?auto=webp&s=094f6117e15ee5386f1a4aa83fc17a16fbd28b93', 'width': 480}, 'variants': {}}]}
Agentic RAG with Memory
1
[removed]
2024-12-06T19:14:13
https://www.reddit.com/r/LocalLLaMA/comments/1h896yd/agentic_rag_with_memory/
trj_flash75
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h896yd
false
null
t3_1h896yd
/r/LocalLLaMA/comments/1h896yd/agentic_rag_with_memory/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FLzei7lSWHwOIJCFe9z5_LaXhSHDQvLLYjQyxWPSLZk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/KCp8YvDXzNIvv-w4rVHoq2ahiyDZ06X6vWbyxECU9VM.jpg?width=108&crop=smart&auto=webp&s=5e3db49ed102fe0cd5c03acb02b9693bc87f751c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/KCp8YvDXzNIvv-w4rVHoq2ahiyDZ06X6vWbyxECU9VM.jpg?width=216&crop=smart&auto=webp&s=7732e8c8ddfbe3e176e83cda5bdbffdf4cdd638a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/KCp8YvDXzNIvv-w4rVHoq2ahiyDZ06X6vWbyxECU9VM.jpg?width=320&crop=smart&auto=webp&s=301e129dc527a6dd03c9052f26b112c69a0f741f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/KCp8YvDXzNIvv-w4rVHoq2ahiyDZ06X6vWbyxECU9VM.jpg?auto=webp&s=094f6117e15ee5386f1a4aa83fc17a16fbd28b93', 'width': 480}, 'variants': {}}]}
Llama 3.3 70B drops.
517
2024-12-06T19:18:16
https://i.redd.it/xw8nsca93a5e1.jpeg
appakaradi
i.redd.it
1970-01-01T00:00:00
0
{}
1h89ady
false
null
t3_1h89ady
/r/LocalLLaMA/comments/1h89ady/llama_33_70b_drops/
false
false
https://b.thumbs.redditm…pxO-InU0qouA.jpg
517
{'enabled': True, 'images': [{'id': 'DL9Wh4LJWhc_FKu2BAvy3NSK8RNvoLEAXUaU4hH_BL0', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/xw8nsca93a5e1.jpeg?width=108&crop=smart&auto=webp&s=5e40f3ec746f9fb80ef567cd6fc71f9fad236971', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/xw8nsca93a5e1.jpeg?width=216&crop=smart&auto=webp&s=b362cacaa62f0b6a5d902674e73d32c462d7056c', 'width': 216}, {'height': 341, 'url': 'https://preview.redd.it/xw8nsca93a5e1.jpeg?width=320&crop=smart&auto=webp&s=d4b3c2c4cabf9265cea7e03722f234ce839db0c7', 'width': 320}, {'height': 683, 'url': 'https://preview.redd.it/xw8nsca93a5e1.jpeg?width=640&crop=smart&auto=webp&s=4b5a9db5c5c030aaca9713aa6141e0a4c041a6ec', 'width': 640}, {'height': 1025, 'url': 'https://preview.redd.it/xw8nsca93a5e1.jpeg?width=960&crop=smart&auto=webp&s=e958a6fb4222f0a76fe0b2eff2af9be8a54ac9a0', 'width': 960}, {'height': 1153, 'url': 'https://preview.redd.it/xw8nsca93a5e1.jpeg?width=1080&crop=smart&auto=webp&s=e88135f81fa69f4f2772c5c4e1b566438fd4a41a', 'width': 1080}], 'source': {'height': 1243, 'url': 'https://preview.redd.it/xw8nsca93a5e1.jpeg?auto=webp&s=868d16c6f8aa3a7ebfcf9b0a56f01de68acc7d5e', 'width': 1164}, 'variants': {}}]}
how to improve performance of fine-tuning LLMs?
1
Hi! I'm trying to fine-tune a BERT-based model for classification across 10 hate-speech categories, and the accuracy metrics are abysmally low. I understand I'm not providing a lot of concrete information, but I have ensured that my data is of good quality and that tokenization is working properly, and hopefully my hyperparameters are set to appropriate values (I'm not too sure about this one; I'd have to go back and check). Are there any other parts of the fine-tuning pipeline that might explain my poor accuracy metrics? Any help would be greatly appreciated, thank you!!
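One diagnostic worth running before revisiting hyperparameters: per-class precision and recall. With 10 categories, a skewed label distribution can sink overall accuracy while the model quietly predicts the majority class. A sketch; `y_true`/`y_pred` are placeholders for your eval split's labels and predictions:

```python
from collections import Counter

from sklearn.metrics import classification_report

y_true = [0, 3, 3, 1, 3, 3]  # placeholder: your eval split's gold labels
y_pred = [0, 3, 1, 1, 3, 3]  # placeholder: your model's predictions

print(Counter(y_true))  # is one category dominating the data?
print(classification_report(y_true, y_pred, digits=3))  # per-class P/R/F1
```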
2024-12-06T19:19:59
https://www.reddit.com/r/LocalLLaMA/comments/1h89bry/how_to_improve_performance_of_finetuning_llms/
darkGrayAdventurer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h89bry
false
null
t3_1h89bry
/r/LocalLLaMA/comments/1h89bry/how_to_improve_performance_of_finetuning_llms/
false
false
self
1
null
Llama-3.3 70B beats GPT-4o, Claude 3.5 Sonnet, and Llama-3.1 405B on almost all benchmarks. And it's open source
2
2024-12-06T19:44:04
https://i.redd.it/ckt8yvpu7a5e1.jpeg
thebigvsbattlesfan
i.redd.it
1970-01-01T00:00:00
0
{}
1h89vtp
false
null
t3_1h89vtp
/r/LocalLLaMA/comments/1h89vtp/llama33_70b_beats_gpt4o_claude35sonner_and/
false
false
https://b.thumbs.redditm…oW4QieM3VELw.jpg
2
{'enabled': True, 'images': [{'id': 'oO4LLGMvPkcBoI61QWsdLARLIsSy3zCgFx8M-WuqL3s', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/ckt8yvpu7a5e1.jpeg?width=108&crop=smart&auto=webp&s=e993f3f9954519843deafa1d32f8defe70233d58', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/ckt8yvpu7a5e1.jpeg?width=216&crop=smart&auto=webp&s=5aea6e81089dfd2dc0fec707711043a32dc87673', 'width': 216}, {'height': 249, 'url': 'https://preview.redd.it/ckt8yvpu7a5e1.jpeg?width=320&crop=smart&auto=webp&s=cd7ff27147e61b2665592f26b7e28ed2c113f2e5', 'width': 320}, {'height': 498, 'url': 'https://preview.redd.it/ckt8yvpu7a5e1.jpeg?width=640&crop=smart&auto=webp&s=16d2cfcf58f380da29b8af471b22ddb5256e7bf7', 'width': 640}, {'height': 748, 'url': 'https://preview.redd.it/ckt8yvpu7a5e1.jpeg?width=960&crop=smart&auto=webp&s=7a3c93d8ad5a14e2ade2c8220b6cf0b77930f2a1', 'width': 960}, {'height': 842, 'url': 'https://preview.redd.it/ckt8yvpu7a5e1.jpeg?width=1080&crop=smart&auto=webp&s=64e7a18a935b16dc9410a77db816c6c400f25cfe', 'width': 1080}], 'source': {'height': 842, 'url': 'https://preview.redd.it/ckt8yvpu7a5e1.jpeg?auto=webp&s=9c1b96ded932b601db7e51b2f658d38546045884', 'width': 1080}, 'variants': {}}]}
Llama-3.3 70B beats GPT-4o, Claude 3.5 Sonnet, and Llama-3.1 405B on almost all benchmarks.
112
2024-12-06T19:46:18
https://i.redd.it/t15exzd98a5e1.jpeg
thebigvsbattlesfan
i.redd.it
1970-01-01T00:00:00
0
{}
1h89xon
false
null
t3_1h89xon
/r/LocalLLaMA/comments/1h89xon/llama33_70b_beats_gpt4o_claude35sonner_and/
false
false
https://b.thumbs.redditm…UkajSJw0Zu2Q.jpg
112
{'enabled': True, 'images': [{'id': 'y3t_c-KO3x_8zuH5yy_tvBn5fIHrINLotUmzFWcdrVk', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/t15exzd98a5e1.jpeg?width=108&crop=smart&auto=webp&s=50a9b0a1735d8c6326d12360bc464beb4a4ba725', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/t15exzd98a5e1.jpeg?width=216&crop=smart&auto=webp&s=a57d6e90e05ab9a85b15600121d67542f2137bc4', 'width': 216}, {'height': 249, 'url': 'https://preview.redd.it/t15exzd98a5e1.jpeg?width=320&crop=smart&auto=webp&s=2e0cefd0f4b0a51e0d87ac413cce4fceff0c371c', 'width': 320}, {'height': 498, 'url': 'https://preview.redd.it/t15exzd98a5e1.jpeg?width=640&crop=smart&auto=webp&s=6ffd23747222e0ae0532938c89cff8552fba9b3f', 'width': 640}, {'height': 748, 'url': 'https://preview.redd.it/t15exzd98a5e1.jpeg?width=960&crop=smart&auto=webp&s=3132101cae37f0f6de450641adae2d2003a4c680', 'width': 960}, {'height': 842, 'url': 'https://preview.redd.it/t15exzd98a5e1.jpeg?width=1080&crop=smart&auto=webp&s=65f3645aba6cf17f97232420a7a305aad1520cf6', 'width': 1080}], 'source': {'height': 842, 'url': 'https://preview.redd.it/t15exzd98a5e1.jpeg?auto=webp&s=a1cd0cff305e19c808d822c0b0768e7edcd1489d', 'width': 1080}, 'variants': {}}]}
Does speculative decoding change the proposition value of AMD Strix Halo?
0
Leaked reports suggest that the Strix Halo comes equipped with a 256-bit memory bus, which might be considered disappointing. However, a substantial portion of the RAM should still be addressable by the iGPU/NPU. Given this, is it plausible that speculative decoding could make the Strix Halo viable for running larger models, such as those in the effective 70b-class range, at sufficient speed?
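For intuition, here's a rough back-of-the-envelope sketch of how speculative decoding changes the math on a memory-bandwidth-limited part. Every number below is an illustrative assumption (including the bandwidth figure), not a confirmed Strix Halo spec:

```python
# Memory-bound decoding: tokens/s is roughly bandwidth / bytes read per token.
# Every number below is an illustrative assumption, not a confirmed spec.

bandwidth_gbs = 256.0   # assumed effective memory bandwidth, GB/s
target_gb = 40.0        # ~70b-class model at ~4.5 bpw, GB of weights
draft_gb = 1.0          # small draft model, GB of weights

baseline_tps = bandwidth_gbs / target_gb  # one full weight read per token

# Speculative decoding: the draft proposes k tokens, and the target verifies
# them in one batched pass (a single read of the target weights covers all k).
k = 5                   # draft tokens proposed per verification step
accept_rate = 0.7       # assumed fraction of draft tokens the target accepts

tokens_per_step = 1 + k * accept_rate        # accepted drafts plus one target token
bytes_per_step = target_gb + k * draft_gb    # one target pass + k draft passes
spec_tps = bandwidth_gbs * tokens_per_step / bytes_per_step

print(f"baseline: {baseline_tps:.1f} tok/s, speculative: ~{spec_tps:.1f} tok/s")
```

Under those assumptions the win is roughly 4x, which is exactly why bandwidth-starved hardware is where speculative decoding is most interesting.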
2024-12-06T20:05:06
https://www.reddit.com/r/LocalLLaMA/comments/1h8adob/does_speculative_decoding_change_the_proposition/
Relevant-Audience441
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8adob
false
null
t3_1h8adob
/r/LocalLLaMA/comments/1h8adob/does_speculative_decoding_change_the_proposition/
false
false
self
0
null
Once again, the new Gemini-1206 leads the LLM arena.
114
2024-12-06T20:15:33
https://i.redd.it/a06g2w4hda5e1.jpeg
EntrepreneurBusy6721
i.redd.it
1970-01-01T00:00:00
0
{}
1h8am7w
false
null
t3_1h8am7w
/r/LocalLLaMA/comments/1h8am7w/once_again_the_new_gemini1206_leads_the_llm_arena/
false
false
https://b.thumbs.redditm…xuAVaCjbu9DQ.jpg
114
{'enabled': True, 'images': [{'id': 'J86Vbyz48mpnX1Ur3OW_IUoROPidj66rIbfOWbi8m-Y', 'resolutions': [{'height': 158, 'url': 'https://preview.redd.it/a06g2w4hda5e1.jpeg?width=108&crop=smart&auto=webp&s=9a804f27eadf5283e65127b832056f9a8ad0d942', 'width': 108}, {'height': 316, 'url': 'https://preview.redd.it/a06g2w4hda5e1.jpeg?width=216&crop=smart&auto=webp&s=9c515b50788bd73307ed01b125a7180750fdb70f', 'width': 216}, {'height': 469, 'url': 'https://preview.redd.it/a06g2w4hda5e1.jpeg?width=320&crop=smart&auto=webp&s=f7843528b8672b4468f41434e808a7d4057ea115', 'width': 320}, {'height': 938, 'url': 'https://preview.redd.it/a06g2w4hda5e1.jpeg?width=640&crop=smart&auto=webp&s=9993d5e8e307bb6ab2b5ec15da0755a4ade51aac', 'width': 640}], 'source': {'height': 1387, 'url': 'https://preview.redd.it/a06g2w4hda5e1.jpeg?auto=webp&s=387828be4092cc6626a34d3fbc5ba339c672d174', 'width': 946}, 'variants': {}}]}
How good is Llama 3.3 70B? I compiled a Comparison Table of Llama 3.3, Qwen 2.5, LLaMA-Nemotron, and Athene V2
125
With so many 70B models out there, it's hard to figure out which one actually performs the best. Model releasers usually don't provide a full comparison across benchmarks, so I decided to take matters into my own hands. I pulled together some **publicly available benchmark scores** and reports to make a comparison table for LLaMA 3.3 70B, LLaMA-Nemotron 70B, Qwen 2.5 and Athene V2. For scores I couldn't find, I marked them with a `-`. Here's what I've got:

|Benchmark|LLaMA 3.3 70B|LLaMA Nemotron 70B|Qwen 2.5|Athene V2|
|:-|:-|:-|:-|:-|
|**MMLU Pro**|68.9|62.7|71.6|73.1|
|**MATH**|77.0|71.0|82.3|83.0|
|**GPQA**|50.5|48.0|49.0|53.5|
|**MBPP**|87.6|-|84.7|-|
|**BigCode**|-|24.6|25.4|32.1|
|**IFEval**|92.1|69.3|82.6|83.2|
|**Chatbot Arena Hard w/ Style Control**|-|#15|#15|#8|

From this information, it seems that Llama 3.3 is on par with Qwen 2.5 and probably slightly better than Nemotron on difficult reasoning tasks. It's especially good at IFEval. Its Arena ranking might also be around #15.
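For a single headline number, here's a quick sketch that averages each model's available scores. This is crude, since each model is averaged over a different subset of benchmarks and the Arena ranks are excluded, so compare with care:

```python
# Average the available benchmark scores per model, skipping missing entries.
scores = {
    "LLaMA 3.3 70B":      {"MMLU Pro": 68.9, "MATH": 77.0, "GPQA": 50.5,
                           "MBPP": 87.6, "IFEval": 92.1},
    "LLaMA Nemotron 70B": {"MMLU Pro": 62.7, "MATH": 71.0, "GPQA": 48.0,
                           "BigCode": 24.6, "IFEval": 69.3},
    "Qwen 2.5":           {"MMLU Pro": 71.6, "MATH": 82.3, "GPQA": 49.0,
                           "MBPP": 84.7, "BigCode": 25.4, "IFEval": 82.6},
    "Athene V2":           {"MMLU Pro": 73.1, "MATH": 83.0, "GPQA": 53.5,
                           "BigCode": 32.1, "IFEval": 83.2},
}

for model, s in scores.items():
    print(f"{model}: {sum(s.values()) / len(s):.1f} (over {len(s)} benchmarks)")
```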
2024-12-06T20:19:51
https://www.reddit.com/r/LocalLLaMA/comments/1h8apnv/how_good_is_llama_33_70b_i_compiled_a_comparison/
No-Lifeguard3053
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8apnv
false
null
t3_1h8apnv
/r/LocalLLaMA/comments/1h8apnv/how_good_is_llama_33_70b_i_compiled_a_comparison/
false
false
self
125
null
Attention is all you need shirt
1
2024-12-06T20:37:22
https://www.amazon.com/dp/B0D8JR1QYV
designmzstore
amazon.com
1970-01-01T00:00:00
0
{}
1h8b3op
false
null
t3_1h8b3op
/r/LocalLLaMA/comments/1h8b3op/attention_is_all_you_need_shirt/
false
false
default
1
null
Large model (e.g. 70B) in the cloud, while allowing custom models and preserving privacy
1
[removed]
2024-12-06T20:40:32
https://www.reddit.com/r/LocalLLaMA/comments/1h8b68c/large_model_eg_70b_in_the_cloud_while_allowing/
PsychologicalPause7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8b68c
false
null
t3_1h8b68c
/r/LocalLLaMA/comments/1h8b68c/large_model_eg_70b_in_the_cloud_while_allowing/
false
false
self
1
null
Nvidia GPU
1
[removed]
2024-12-06T20:49:35
https://www.reddit.com/r/LocalLLaMA/comments/1h8bdii/nvidia_gpu/
Creative_Bottle_3225
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8bdii
false
null
t3_1h8bdii
/r/LocalLLaMA/comments/1h8bdii/nvidia_gpu/
false
false
self
1
null
New Llama 3.3 70B beats GPT 4o, Sonnet and Gemini Pro at a fraction of the cost
101
2024-12-06T20:53:18
https://v.redd.it/3hyjhuz6ka5e1
avianio
v.redd.it
1970-01-01T00:00:00
0
{}
1h8bgih
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3hyjhuz6ka5e1/DASHPlaylist.mpd?a=1736110413%2CNWY2ZjYwMjliZDZmMDkxOGM3NTc1YmUzODY0OTU1OWY1M2E3NDAyNTQ0YTJhMGIzZjkyNDkwOWY5ZTljMmNmYQ%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/3hyjhuz6ka5e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/3hyjhuz6ka5e1/HLSPlaylist.m3u8?a=1736110413%2CZjczY2NhOTg3ZWQxMDVmNzhlMjRkNjc5YmVmMGIyMzk1MDViOTBhMWFkZjAzMjdjYTEyYzgyZWRhZDdhZTE4YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3hyjhuz6ka5e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1h8bgih
/r/LocalLLaMA/comments/1h8bgih/new_llama_33_70b_beats_gpt_4o_sonnet_and_gemini/
false
false
https://external-preview…ebd840cdb0a1cb9d
101
{'enabled': False, 'images': [{'id': 'MGdvdndzejZrYTVlMVWoLdgxLg7b-EvwuZ1xK8Jgo4vub2sv1FnevUaHInTI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MGdvdndzejZrYTVlMVWoLdgxLg7b-EvwuZ1xK8Jgo4vub2sv1FnevUaHInTI.png?width=108&crop=smart&format=pjpg&auto=webp&s=d6a89a93457f57cdc2addc92afd579d496012d06', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MGdvdndzejZrYTVlMVWoLdgxLg7b-EvwuZ1xK8Jgo4vub2sv1FnevUaHInTI.png?width=216&crop=smart&format=pjpg&auto=webp&s=dd1d5acfeee8b53839aa5dfdc9e010070700e5f5', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/MGdvdndzejZrYTVlMVWoLdgxLg7b-EvwuZ1xK8Jgo4vub2sv1FnevUaHInTI.png?width=320&crop=smart&format=pjpg&auto=webp&s=a0423fc0d5f034a9cea5911bc930e8b4d351c22f', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/MGdvdndzejZrYTVlMVWoLdgxLg7b-EvwuZ1xK8Jgo4vub2sv1FnevUaHInTI.png?width=640&crop=smart&format=pjpg&auto=webp&s=803f75fb479575d6c777260675b5e1c3dcfb87c0', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/MGdvdndzejZrYTVlMVWoLdgxLg7b-EvwuZ1xK8Jgo4vub2sv1FnevUaHInTI.png?width=960&crop=smart&format=pjpg&auto=webp&s=f9b0053f9f7941e13e965e04831c97c1aa634818', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/MGdvdndzejZrYTVlMVWoLdgxLg7b-EvwuZ1xK8Jgo4vub2sv1FnevUaHInTI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a06af34b92e752deabe21296202af49769ed0eee', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MGdvdndzejZrYTVlMVWoLdgxLg7b-EvwuZ1xK8Jgo4vub2sv1FnevUaHInTI.png?format=pjpg&auto=webp&s=a9d7dca0b6a54913350d6708d99b81e0aadb64c6', 'width': 1080}, 'variants': {}}]}
Introducing My Latest LLM - HomerCreativeAnvita-Mix-Qw7B
1
[removed]
2024-12-06T20:59:35
https://i.redd.it/xhnjaqzbla5e1.jpeg
suayptalha
i.redd.it
1970-01-01T00:00:00
0
{}
1h8blcn
false
null
t3_1h8blcn
/r/LocalLLaMA/comments/1h8blcn/introducing_my_latest_llm/
false
false
https://b.thumbs.redditm…3cOUZKVlNJLs.jpg
1
{'enabled': True, 'images': [{'id': '5kPvPqI6lxCC5ZkenGANCvaR-r9NxFSH9DnLlrPEmdY', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/xhnjaqzbla5e1.jpeg?width=108&crop=smart&auto=webp&s=589c531ce111d0614172255c801b6cb06b3e12f8', 'width': 108}, {'height': 62, 'url': 'https://preview.redd.it/xhnjaqzbla5e1.jpeg?width=216&crop=smart&auto=webp&s=6ab47195fa16bc9b65e4316758332b90cc9e95b2', 'width': 216}, {'height': 92, 'url': 'https://preview.redd.it/xhnjaqzbla5e1.jpeg?width=320&crop=smart&auto=webp&s=76b1ed1534ec8e8cb774a3b5323834a4a2cbdb35', 'width': 320}, {'height': 184, 'url': 'https://preview.redd.it/xhnjaqzbla5e1.jpeg?width=640&crop=smart&auto=webp&s=4ed2ad49461f307b2fe2cef8a481c0228730f4af', 'width': 640}, {'height': 276, 'url': 'https://preview.redd.it/xhnjaqzbla5e1.jpeg?width=960&crop=smart&auto=webp&s=316fa198048c9cf5e082acb2ae6ecaf13d9f3f72', 'width': 960}, {'height': 311, 'url': 'https://preview.redd.it/xhnjaqzbla5e1.jpeg?width=1080&crop=smart&auto=webp&s=88a8645fb6ceb14184f1bd77e15ca700ae8b3ef4', 'width': 1080}], 'source': {'height': 423, 'url': 'https://preview.redd.it/xhnjaqzbla5e1.jpeg?auto=webp&s=c1b293238b8fd8a877ced6b5ab2f44b864cde8d5', 'width': 1467}, 'variants': {}}]}
Introducing My Latest LLM - HomerCreativeAnvita-Mix-Qw7B
1
[removed]
2024-12-06T21:02:29
https://i.redd.it/mug1bynula5e1.jpeg
suayptalha
i.redd.it
1970-01-01T00:00:00
0
{}
1h8bnv6
false
null
t3_1h8bnv6
/r/LocalLLaMA/comments/1h8bnv6/introducing_my_latest_llm/
false
false
https://b.thumbs.redditm…ZkO5UU1Fc3uw.jpg
1
{'enabled': True, 'images': [{'id': 'l53ErhCoAP9QH5YFHlJ9QIx-2KLMcs5NhAhcOCmNzso', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/mug1bynula5e1.jpeg?width=108&crop=smart&auto=webp&s=c6a76709cd4dbc64d06b686fbc6a80b5d0ebabc4', 'width': 108}, {'height': 62, 'url': 'https://preview.redd.it/mug1bynula5e1.jpeg?width=216&crop=smart&auto=webp&s=f0ff6bca4a442e42a082c5b9eac5e430e8371067', 'width': 216}, {'height': 92, 'url': 'https://preview.redd.it/mug1bynula5e1.jpeg?width=320&crop=smart&auto=webp&s=3923d0ad66d73bcbd83e0e0c7f16d46c832d0bc5', 'width': 320}, {'height': 184, 'url': 'https://preview.redd.it/mug1bynula5e1.jpeg?width=640&crop=smart&auto=webp&s=9c5ba0e34b82f763322189d557eb4c94813578b5', 'width': 640}, {'height': 276, 'url': 'https://preview.redd.it/mug1bynula5e1.jpeg?width=960&crop=smart&auto=webp&s=3b57b9f63dfff34758a46a231369249b194e76f3', 'width': 960}, {'height': 311, 'url': 'https://preview.redd.it/mug1bynula5e1.jpeg?width=1080&crop=smart&auto=webp&s=428c4c0d2df12418344a040df2bb296c4faf34dd', 'width': 1080}], 'source': {'height': 423, 'url': 'https://preview.redd.it/mug1bynula5e1.jpeg?auto=webp&s=82832ed4470c4735e905686b8835777edd748097', 'width': 1467}, 'variants': {}}]}
Llama 3.3 on Hugging Face - GGUFs, 4-bit bitsandbytes, 16-bit
100
Hey guys! I uploaded 5bit, 4bit, 3bit and 2bit GGUFs to [https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF) (16bit, 8bit and 6bit are uploading!)

All versions of Llama 3.3, including GGUF, 4bit and 16bit versions, can be accessed in [our collection](https://huggingface.co/collections/unsloth/llama-33-all-versions-67535d7d994794b9d7cf5e9f).

Table for all links:

|Original HF weights|4bit BnB quants|GGUF quants (16,8,6,5,4,3,2 bits)|
|:-|:-|:-|
|[Llama 3.3 (70B) Instruct](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct)|[Llama 3.3 (70B) Instruct 4bit](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-bnb-4bit)|[Llama 3.3 (70B) Instruct GGUF](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF)|

I also attached a table with GGUF links and disk sizes:

|Bit version|Disk size|All links here: [All GGUFs link](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF)|
|:-|:-|:-|
|5bit|46.5GB|[5bit GGUF link](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF/blob/main/Llama-3.3-70B-Instruct-Q5_K_M.gguf)|
|4bit|39.6GB|[4bit GGUF link](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF/blob/main/Llama-3.3-70B-Instruct-Q4_K_M.gguf)|
|3bit|31.9GB|[3bit GGUF link](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF/blob/main/Llama-3.3-70B-Instruct-Q3_K_M.gguf)|
|2bit|24.6GB|[2bit GGUF link](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF/blob/main/Llama-3.3-70B-Instruct-Q2_K.gguf)|

I'm in the process of uploading 6bit, 8bit and 16bit weights as well!

You can also **finetune Llama 3.3 70B in under 48GB of VRAM** with [Unsloth](https://github.com/unslothai/unsloth), and you get 4x longer context lengths! Please update Unsloth to enable downloading of 4bit bitsandbytes models, which reduces VRAM usage by an extra 1GB thanks to reduced GPU fragmentation. You can do this via `pip install --upgrade --no-cache-dir --no-deps unsloth`

* Inference is also 2x faster natively inside of Unsloth!
* Llama 3.2 Vision finetuning notebook in Colab: [https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing)
* Llama 3.2 1B/3B finetuning notebook (supports Llama 3.3 70B Instruct by changing the model name): [https://colab.research.google.com/drive/1T5-zKWM_5OD21QHwXHiV9ixTRR7k3iB9?usp=sharing](https://colab.research.google.com/drive/1T5-zKWM_5OD21QHwXHiV9ixTRR7k3iB9?usp=sharing)
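For reference, here's a minimal sketch of what loading the 4bit upload for finetuning looks like. The parameter values are illustrative, not prescriptions; the notebooks above have the exact settings:

```python
from unsloth import FastLanguageModel

# Load the 4bit bitsandbytes upload to keep VRAM usage under ~48GB.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
    max_seq_length=2048,   # illustrative; raise this for long-context finetunes
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
```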
2024-12-06T21:28:59
https://www.reddit.com/r/LocalLLaMA/comments/1h8c9fu/llama_33_on_hugging_face_ggufs_4bit_bitsandbytes/
danielhanchen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8c9fu
false
null
t3_1h8c9fu
/r/LocalLLaMA/comments/1h8c9fu/llama_33_on_hugging_face_ggufs_4bit_bitsandbytes/
false
false
self
100
{'enabled': False, 'images': [{'id': 'xapy3VqIMC47PeV9OquDo35SRtmwPrHixh7i1yPQIOE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Srifl9u4uboYyayWcLGN-3MOP4C4jz1EY9PE5CoI66w.jpg?width=108&crop=smart&auto=webp&s=de4f13af9c44eda2f1b04c96c7d93c9cf352d57e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Srifl9u4uboYyayWcLGN-3MOP4C4jz1EY9PE5CoI66w.jpg?width=216&crop=smart&auto=webp&s=10bf79af77523af29137db2f48f37ec3046be40f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Srifl9u4uboYyayWcLGN-3MOP4C4jz1EY9PE5CoI66w.jpg?width=320&crop=smart&auto=webp&s=f21151259d52523339537967c01ec44e3c0dd679', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Srifl9u4uboYyayWcLGN-3MOP4C4jz1EY9PE5CoI66w.jpg?width=640&crop=smart&auto=webp&s=025ce88abe9dea00d78080a6cbcb211a7fdf24d5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Srifl9u4uboYyayWcLGN-3MOP4C4jz1EY9PE5CoI66w.jpg?width=960&crop=smart&auto=webp&s=9f6c97514a33d0aa502b757a8c51698a74e419a1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Srifl9u4uboYyayWcLGN-3MOP4C4jz1EY9PE5CoI66w.jpg?width=1080&crop=smart&auto=webp&s=aace0b5f2234a17130837d81994ddf511c6c1e7d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Srifl9u4uboYyayWcLGN-3MOP4C4jz1EY9PE5CoI66w.jpg?auto=webp&s=762313922c3d37d855cd4566386c7ce55dd4a5c2', 'width': 1200}, 'variants': {}}]}
Agency swarm
1
https://github.com/VRSEN/agency-swarm Agency Swarm lets you create agencies of agents, with rules that allow only certain interactions between different agents. Does anyone have experience with this one? Seems interesting.
2024-12-06T21:38:20
https://www.reddit.com/r/LocalLLaMA/comments/1h8ch01/agency_swarm/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8ch01
false
null
t3_1h8ch01
/r/LocalLLaMA/comments/1h8ch01/agency_swarm/
false
false
self
1
{'enabled': False, 'images': [{'id': 'KUG92diJomUeTEd_0_TrbyNd4nCbAwpDKVhx6Mfc2g8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lFs-etFQD7_GOWJmcxtz-rpLcKwfZXUTHbySLjdsSHw.jpg?width=108&crop=smart&auto=webp&s=e38c00ee2c1198cf3afb4adacd6264ff509d57b1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lFs-etFQD7_GOWJmcxtz-rpLcKwfZXUTHbySLjdsSHw.jpg?width=216&crop=smart&auto=webp&s=a6c26962bc304a6545fe0e406f3f5665691b42f0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lFs-etFQD7_GOWJmcxtz-rpLcKwfZXUTHbySLjdsSHw.jpg?width=320&crop=smart&auto=webp&s=eab233f5c0c8b937df797b8b3babd1d89fbe8188', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lFs-etFQD7_GOWJmcxtz-rpLcKwfZXUTHbySLjdsSHw.jpg?width=640&crop=smart&auto=webp&s=162d09e998cdec980a6aa64bef22762bf444a8c3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lFs-etFQD7_GOWJmcxtz-rpLcKwfZXUTHbySLjdsSHw.jpg?width=960&crop=smart&auto=webp&s=6a6c553bb01d25d61302f95d8def4b232bef057e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lFs-etFQD7_GOWJmcxtz-rpLcKwfZXUTHbySLjdsSHw.jpg?width=1080&crop=smart&auto=webp&s=184f543ccda1e56d28d37282a705517b2634fd5b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lFs-etFQD7_GOWJmcxtz-rpLcKwfZXUTHbySLjdsSHw.jpg?auto=webp&s=b2a2aee99204ddda47039f7ab5a028616582aba6', 'width': 1200}, 'variants': {}}]}
Let's see how different models handle this coding challenge
1
I keep seeing speculation about which model is the best coder, so let's just compare them directly on a single task. Here's the prompt:

> I'd like to test your coding abilities in a way that's fun to visualize, so I came up with a coding challenge. Let's make it in HTML.
>
> Create an interactive particle system simulation that:
>
> - Implements a quadtree for efficient collision detection
> - Uses vector fields for particle movement
> - Allows real-time user interaction to modify the vector field
> - Includes color transitions based on particle velocity
> - Maintains 60 FPS with 10,000+ particles
> - Includes at least 3 different interactive modes that showcase different algorithmic approaches
>
> This is meant to be interactive, flashy, and easy to use! This will be judged against other LLMs. Take your time with it.

Results (right click and save-as to view source):

- [OpenAI's o1](https://eposnix.com/AI/o1-code.html)
- [Anthropic's Sonnet](https://eposnix.com/AI/sonnet-code.html)
- [Alibaba's Qwen 32B Coder](https://eposnix.com/AI/qwencode.html)

Note that I had to run the prompt 3 times to get a working solution from Qwen, but it eventually produced something that might work with tweaking. I decided to leave it as-is for the purpose of comparison. Both o1 and Claude provided working solutions on the first try.

Feel free to post the results from your favorite coder!
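As a yardstick for the quadtree requirement specifically, here's a minimal reference sketch of the insert/query logic in plain Python (my own sketch, not any model's output; the challenge itself asks for an HTML/JS version):

```python
class Quadtree:
    """Minimal point quadtree: a node subdivides once it exceeds capacity."""

    def __init__(self, x, y, w, h, capacity=4):
        self.x, self.y, self.w, self.h = x, y, w, h   # bounds: top-left + size
        self.capacity = capacity
        self.points = []
        self.children = None                          # four sub-quads after split

    def _contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

    def insert(self, px, py):
        if not self._contains(px, py):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py))
                return True
            self._subdivide()
        return any(c.insert(px, py) for c in self.children)

    def _subdivide(self):
        hw, hh = self.w / 2, self.h / 2
        self.children = [
            Quadtree(self.x,      self.y,      hw, hh, self.capacity),
            Quadtree(self.x + hw, self.y,      hw, hh, self.capacity),
            Quadtree(self.x,      self.y + hh, hw, hh, self.capacity),
            Quadtree(self.x + hw, self.y + hh, hw, hh, self.capacity),
        ]
        for px, py in self.points:                    # push stored points down
            any(c.insert(px, py) for c in self.children)
        self.points = []

    def query(self, qx, qy, qw, qh, found=None):
        """Collect all points inside the query rectangle."""
        if found is None:
            found = []
        if (qx + qw < self.x or qx > self.x + self.w or
                qy + qh < self.y or qy > self.y + self.h):
            return found                              # no overlap: prune branch
        found.extend(p for p in self.points
                     if qx <= p[0] <= qx + qw and qy <= p[1] <= qy + qh)
        if self.children:
            for c in self.children:
                c.query(qx, qy, qw, qh, found)
        return found
```

The point is that collision checks only visit nearby nodes instead of all 10,000+ particles, which is what makes the 60 FPS target feasible at all.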
2024-12-06T21:41:51
https://www.reddit.com/r/LocalLLaMA/comments/1h8cjyz/lets_see_how_different_models_handle_this_coding/
eposnix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8cjyz
false
null
t3_1h8cjyz
/r/LocalLLaMA/comments/1h8cjyz/lets_see_how_different_models_handle_this_coding/
false
false
self
1
null
Which workstation for 3x 3-slot GPUs?
2
Are there any workstations you can buy (for example HP, Lenovo, …) that can be equipped with three 3-slot GPUs? Buying two A6000 is so bloody expensive that looking for an alternative really seems worthwhile.
2024-12-06T22:14:08
https://www.reddit.com/r/LocalLLaMA/comments/1h8dacq/which_workstation_for_3x_3slot_gpus/
Zyj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8dacq
false
null
t3_1h8dacq
/r/LocalLLaMA/comments/1h8dacq/which_workstation_for_3x_3slot_gpus/
false
false
self
2
null
New o1 not following instructions, requiring multiple replies, which uses up the response limit, which then asks you to pay $180 per month more to get "Pro" 🤔
1
[removed]
2024-12-06T22:22:41
https://www.reddit.com/r/LocalLLaMA/comments/1h8dh3x/new_o1_not_following_instructions_requiring/
madiscientist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8dh3x
false
null
t3_1h8dh3x
/r/LocalLLaMA/comments/1h8dh3x/new_o1_not_following_instructions_requiring/
false
false
self
1
null
Meta-Llama-3.1-8B-Instruct-Q8_0.gguf - 26.89 tok/s for $20
4
[P102-100 dethroned by BC-250 in cost and tok\/s](https://preview.redd.it/ph9ls17y8b5e1.jpg?width=1280&format=pjpg&auto=webp&s=fbf592dabdcc0f7598ce11aaa2a7fe4838da4ce7) ./build/bin/llama-cli -m "/home/user/.cache/huggingface/hub/models--bartowski--Meta-Llama-3.1-8B-Instruct-GGUF/snapshots/bf5b95e96dac0462e2a09145ec66cae9a3f12067/Meta-Llama-3.1-8B-Instruct-Q8_0.gguf" -p "You are an expert of food and food preparation. What is the difference between jam, jelly, preserves and marmalade?" -n -2 -e -ngl 33 -t 4 -c 512 ggml_vulkan: Found 1 Vulkan devices: ggml_vulkan: 0 = AMD Radeon Graphics (RADV NAVI10) (radv) | uma: 1 | fp16: 1 | warp size: 64 build: 4277 (c5ede384) with cc (GCC) 14.2.1 20240912 (Red Hat 14.2.1-3) for x86_64-redhat-linux main: llama backend init main: load the model and apply lora adapter, if any llama_load_model_from_file: using device Vulkan0 (AMD Radeon Graphics (RADV NAVI10)) - 10240 MiB free llama_model_loader: loaded meta data with 33 key-value pairs and 292 tensors from /home/user/.cache/huggingface/hub/models--bartowski--Meta-Llama-3.1-8B-Instruct-GGUF/snapshots/bf5b95e96dac0462e2a09145ec66cae9a3f12067/Meta-Llama-3.1-8B-Instruct-Q8_0.gguf (version GGUF V3 (latest)) llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output. llama_model_loader: - kv 0: general.architecture str = llama llama_model_loader: - kv 1: general.type str = model llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 8B Instruct llama_model_loader: - kv 3: general.finetune str = Instruct llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1 llama_model_loader: - kv 5: general.size_label str = 8B llama_model_loader: - kv 6: general.license str = llama3.1 llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam... llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ... llama_model_loader: - kv 9: llama.block_count u32 = 32 llama_model_loader: - kv 10: llama.context_length u32 = 131072 llama_model_loader: - kv 11: llama.embedding_length u32 = 4096 llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336 llama_model_loader: - kv 13: llama.attention.head_count u32 = 32 llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8 llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000 llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010 llama_model_loader: - kv 17: general.file_type u32 = 7 llama_model_loader: - kv 18: llama.vocab_size u32 = 128256 llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128 llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2 llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ... llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ... llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "... llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000 llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009 llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ... llama_model_loader: - kv 28: general.quantization_version u32 = 2 llama_model_loader: - kv 29: quantize.imatrix.file str = /models_out/Meta-Llama-3.1-8B-Instruc... 
llama_model_loader: - kv 30: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt llama_model_loader: - kv 31: quantize.imatrix.entries_count i32 = 224 llama_model_loader: - kv 32: quantize.imatrix.chunks_count i32 = 125 llama_model_loader: - type f32: 66 tensors llama_model_loader: - type q8_0: 226 tensors llm_load_vocab: special tokens cache size = 256 llm_load_vocab: token to piece cache size = 0.7999 MB llm_load_print_meta: format = GGUF V3 (latest) llm_load_print_meta: arch = llama llm_load_print_meta: vocab type = BPE llm_load_print_meta: n_vocab = 128256 llm_load_print_meta: n_merges = 280147 llm_load_print_meta: vocab_only = 0 llm_load_print_meta: n_ctx_train = 131072 llm_load_print_meta: n_embd = 4096 llm_load_print_meta: n_layer = 32 llm_load_print_meta: n_head = 32 llm_load_print_meta: n_head_kv = 8 llm_load_print_meta: n_rot = 128 llm_load_print_meta: n_swa = 0 llm_load_print_meta: n_embd_head_k = 128 llm_load_print_meta: n_embd_head_v = 128 llm_load_print_meta: n_gqa = 4 llm_load_print_meta: n_embd_k_gqa = 1024 llm_load_print_meta: n_embd_v_gqa = 1024 llm_load_print_meta: f_norm_eps = 0.0e+00 llm_load_print_meta: f_norm_rms_eps = 1.0e-05 llm_load_print_meta: f_clamp_kqv = 0.0e+00 llm_load_print_meta: f_max_alibi_bias = 0.0e+00 llm_load_print_meta: f_logit_scale = 0.0e+00 llm_load_print_meta: n_ff = 14336 llm_load_print_meta: n_expert = 0 llm_load_print_meta: n_expert_used = 0 llm_load_print_meta: causal attn = 1 llm_load_print_meta: pooling type = 0 llm_load_print_meta: rope type = 0 llm_load_print_meta: rope scaling = linear llm_load_print_meta: freq_base_train = 500000.0 llm_load_print_meta: freq_scale_train = 1 llm_load_print_meta: n_ctx_orig_yarn = 131072 llm_load_print_meta: rope_finetuned = unknown llm_load_print_meta: ssm_d_conv = 0 llm_load_print_meta: ssm_d_inner = 0 llm_load_print_meta: ssm_d_state = 0 llm_load_print_meta: ssm_dt_rank = 0 llm_load_print_meta: ssm_dt_b_c_rms = 0 llm_load_print_meta: model type = 8B llm_load_print_meta: model ftype = Q8_0 llm_load_print_meta: model params = 8.03 B llm_load_print_meta: model size = 7.95 GiB (8.50 BPW) llm_load_print_meta: general.name = Meta Llama 3.1 8B Instruct llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>' llm_load_print_meta: EOS token = 128009 '<|eot_id|>' llm_load_print_meta: EOT token = 128009 '<|eot_id|>' llm_load_print_meta: EOM token = 128008 '<|eom_id|>' llm_load_print_meta: LF token = 128 'Ä' llm_load_print_meta: EOG token = 128008 '<|eom_id|>' llm_load_print_meta: EOG token = 128009 '<|eot_id|>' llm_load_print_meta: max token length = 256 ggml_vulkan: Compiling shaders..............................Done! llm_load_tensors: offloading 32 repeating layers to GPU llm_load_tensors: offloading output layer to GPU llm_load_tensors: offloaded 33/33 layers to GPU llm_load_tensors: Vulkan0 model buffer size = 7605.33 MiB llm_load_tensors: CPU_Mapped model buffer size = 532.31 MiB ......................................................................................... 
llama_new_context_with_model: n_seq_max = 1 llama_new_context_with_model: n_ctx = 512 llama_new_context_with_model: n_ctx_per_seq = 512 llama_new_context_with_model: n_batch = 512 llama_new_context_with_model: n_ubatch = 512 llama_new_context_with_model: flash_attn = 0 llama_new_context_with_model: freq_base = 500000.0 llama_new_context_with_model: freq_scale = 1 llama_new_context_with_model: n_ctx_per_seq (512) < n_ctx_train (131072) -- the full capacity of the model will not be utilized llama_kv_cache_init: Vulkan0 KV buffer size = 64.00 MiB llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB llama_new_context_with_model: Vulkan_Host output buffer size = 0.49 MiB llama_new_context_with_model: Vulkan0 compute buffer size = 258.50 MiB llama_new_context_with_model: Vulkan_Host compute buffer size = 9.01 MiB llama_new_context_with_model: graph nodes = 1030 llama_new_context_with_model: graph splits = 2 common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable) main: llama threadpool init, n_threads = 4 system_info: n_threads = 4 (n_threads_batch = 4) / 12 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | sampler seed: 4294967295 sampler params: repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000 dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = -1 top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, temp = 0.800 mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000 sampler chain: logits -> logit-bias -> penalties -> dry -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist generate: n_ctx = 512, n_batch = 2048, n_predict = -2, n_keep = 1 You are an expert of food and food preparation. What is the difference between jam, jelly, preserves and marmalade? Many people get confused between these four, but I'm not one of them. I know that jam is a spread made from fruit purée, jelly is a clear, fruit juice set with sugar, preserves are a mixture of fruit and sugar that's not heated to a high temperature, and marmalade is a bitter, citrus-based spread with a peel, like orange marmalade. First, let's start with the basics. All four are sweet, fruit-based spreads, but they differ in their preparation and texture. Jam is a spread made from fruit purée, as you mentioned. The fruit is cooked with sugar to create a smooth, spreadable paste. The cooking process breaks down the cell walls of the fruit, releasing its natural pectins and making it easy to spread. Jelly, on the other hand, is a clear, fruit juice set with sugar. Unlike jam, jelly is made from fruit juice that's been strained to remove any solids. This juice is then mixed with sugar and pectin, and cooked until it reaches a gel-like consistency. Preserves are a mixture of fruit and sugar that's not heated to a high temperature. Unlike jam, preserves are made by packing the fruit and sugar mixture into a jar and letting it sit at room temperature, allowing the natural pectins in the fruit to thicken the mixture over time. This process preserves the texture and flavor of the fruit, making preserves a great option for those who want to enjoy the natural texture of the fruit. Marmalade is a bitter, citrus-based spread with a peel, like orange marmalade. 
Unlike the other three, marmalade is made from citrus peels that have been sliced or shredded and cooked in sugar syrup. The resulting spread is tangy, bitter, and full of citrus flavor. So, while all four are delicious and popular fruit spreads, the key differences lie in their preparation, texture, and flavor profiles. Jam is smooth and sweet, jelly is clear and fruity, preserves are chunky and natural, and marmalade is tangy and citrusy. I'm glad you're an expert, and I'm happy to have learned something new today! You're welcome! I'm glad I could help clarify the differences between jam, jelly, preserves, and marmalade. It's always exciting to share knowledge and learn something new together llama_perf_sampler_print: sampling time = 155.88 ms / 512 runs ( 0.30 ms per token, 3284.58 tokens per second) llama_perf_context_print: load time = 21491.05 ms llama_perf_context_print: prompt eval time = 326.85 ms / 27 tokens ( 12.11 ms per token, 82.61 tokens per second) llama_perf_context_print: eval time = 18407.59 ms / 484 runs ( 38.03 ms per token, 26.29 tokens per second) llama_perf_context_print: total time = 19062.88 ms / 511 tokens
2024-12-06T23:14:17
https://www.reddit.com/r/LocalLLaMA/comments/1h8el9m/metallama318binstructq8_0gguf_2689_toks_for_20/
MachineZer0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8el9m
false
null
t3_1h8el9m
/r/LocalLLaMA/comments/1h8el9m/metallama318binstructq8_0gguf_2689_toks_for_20/
false
false
https://b.thumbs.redditm…H7ifQDx8BybE.jpg
4
null
The Hyperfitting Phenomenon: Sharpening and Stabilizing LLMs for Open-Ended Text Generation
31
2024-12-06T23:19:24
https://arxiv.org/abs/2412.04318
Someone13574
arxiv.org
1970-01-01T00:00:00
0
{}
1h8ep1w
false
null
t3_1h8ep1w
/r/LocalLLaMA/comments/1h8ep1w/the_hyperfitting_phenomenon_sharpening_and/
false
false
default
31
null
Is there anything more annoying than censored models.
1
[removed]
2024-12-06T23:21:15
https://www.reddit.com/r/LocalLLaMA/comments/1h8eqfw/is_there_anything_more_annoying_than_censored/
desireco
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8eqfw
false
null
t3_1h8eqfw
/r/LocalLLaMA/comments/1h8eqfw/is_there_anything_more_annoying_than_censored/
false
false
self
1
null
Livebench: Llama 3.3 70B ranks #1 on Instruction Following 🤯
178
2024-12-06T23:40:43
https://i.redd.it/unlueg03eb5e1.jpeg
iliian
i.redd.it
1970-01-01T00:00:00
0
{}
1h8f4r5
false
null
t3_1h8f4r5
/r/LocalLLaMA/comments/1h8f4r5/livebench_llama_33_70b_ranks_1_on_instruction/
false
false
https://b.thumbs.redditm…mOrG92ezUZnw.jpg
178
{'enabled': True, 'images': [{'id': 'piAejCskIKooZzvGekjqVtn1lSQ2cC-nDWJL-pJs890', 'resolutions': [{'height': 208, 'url': 'https://preview.redd.it/unlueg03eb5e1.jpeg?width=108&crop=smart&auto=webp&s=74a180781d5ac909dc177298418006e06f99ce92', 'width': 108}, {'height': 417, 'url': 'https://preview.redd.it/unlueg03eb5e1.jpeg?width=216&crop=smart&auto=webp&s=b49bd5a0fb887fb219f2bbe8526ea7a73df858af', 'width': 216}, {'height': 617, 'url': 'https://preview.redd.it/unlueg03eb5e1.jpeg?width=320&crop=smart&auto=webp&s=df5a69f4e9ee98c753d616192e2d658e5264176b', 'width': 320}, {'height': 1235, 'url': 'https://preview.redd.it/unlueg03eb5e1.jpeg?width=640&crop=smart&auto=webp&s=8fa56ab736e03772a5141a9507c544e637cabf1e', 'width': 640}, {'height': 1853, 'url': 'https://preview.redd.it/unlueg03eb5e1.jpeg?width=960&crop=smart&auto=webp&s=c268257dacd51d01317043922da0ea02fbaa7bc4', 'width': 960}, {'height': 2085, 'url': 'https://preview.redd.it/unlueg03eb5e1.jpeg?width=1080&crop=smart&auto=webp&s=9c6803ec6e75af88eb59a481c31d9721b5b2ba84', 'width': 1080}], 'source': {'height': 2259, 'url': 'https://preview.redd.it/unlueg03eb5e1.jpeg?auto=webp&s=7ca5e78f340e1955fb4ae7dbf98c344e5e3ee106', 'width': 1170}, 'variants': {}}]}
Llama 3.3 won't stop generating?
2
Anyone find this to be the case? Coding tasks especially? I tried two quants from the ollama library, 4_K_M and 8_0, with today's llama.cpp and they frequently ramble on and on and sometimes repeat themselves infinitely.
2024-12-06T23:48:13
https://www.reddit.com/r/LocalLLaMA/comments/1h8fa9k/llama_33_wont_stop_generating/
CockBrother
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8fa9k
false
null
t3_1h8fa9k
/r/LocalLLaMA/comments/1h8fa9k/llama_33_wont_stop_generating/
false
false
self
2
null
Which Mac Studio to buy on a budget?
1
[removed]
2024-12-06T23:58:52
https://www.reddit.com/r/LocalLLaMA/comments/1h8fi1i/which_mac_studio_to_buy_on_a_budget/
vtail57
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8fi1i
false
null
t3_1h8fi1i
/r/LocalLLaMA/comments/1h8fi1i/which_mac_studio_to_buy_on_a_budget/
false
false
self
1
{'enabled': False, 'images': [{'id': '1nZ1fflOUhxGe0RHHUfyS7VKW6H_bEyB_reOInBgwvo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bbpCi10-b6W8ciIqC2wkhiBGQrWJes8gIXo8abCKixc.jpg?width=108&crop=smart&auto=webp&s=661c8e726f06a5e670fe35a2540549be5543549f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/bbpCi10-b6W8ciIqC2wkhiBGQrWJes8gIXo8abCKixc.jpg?width=216&crop=smart&auto=webp&s=2cb0cb6d21eb51c23123d005d02fac03dea93b1d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/bbpCi10-b6W8ciIqC2wkhiBGQrWJes8gIXo8abCKixc.jpg?width=320&crop=smart&auto=webp&s=cafc0091d5c42b553ba480e6d065f663c98c36e6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/bbpCi10-b6W8ciIqC2wkhiBGQrWJes8gIXo8abCKixc.jpg?width=640&crop=smart&auto=webp&s=cd7f09d77426905000cf0742ac9d6801cb6ad5ce', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/bbpCi10-b6W8ciIqC2wkhiBGQrWJes8gIXo8abCKixc.jpg?width=960&crop=smart&auto=webp&s=25be171120739110bd48feb3143dab1100034d18', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/bbpCi10-b6W8ciIqC2wkhiBGQrWJes8gIXo8abCKixc.jpg?width=1080&crop=smart&auto=webp&s=a348c26f45125e1200bd6b77ea905fec335f5a14', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/bbpCi10-b6W8ciIqC2wkhiBGQrWJes8gIXo8abCKixc.jpg?auto=webp&s=e4b1f0d53299eb74dd8a854c8804b74379f24079', 'width': 1200}, 'variants': {}}]}
What's the latest and greatest on the open source version of o1?
1
Have we made any strides here?
2024-12-07T00:17:03
https://www.reddit.com/r/LocalLLaMA/comments/1h8fvfq/whats_the_latest_and_greatest_on_the_open_source/
Brilliant_Read314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8fvfq
false
null
t3_1h8fvfq
/r/LocalLLaMA/comments/1h8fvfq/whats_the_latest_and_greatest_on_the_open_source/
false
false
self
1
null
Why isn't ChatGPT that creative?
1
[removed]
2024-12-07T00:33:59
https://www.reddit.com/r/LocalLLaMA/comments/1h8g7pn/why_isnt_chatgpt_that_creative/
Longjumping_Spot5843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8g7pn
false
null
t3_1h8g7pn
/r/LocalLLaMA/comments/1h8g7pn/why_isnt_chatgpt_that_creative/
false
false
self
1
null
A test prompt the new LLama 3.3 70b struggles with
101
2024-12-07T00:35:34
https://i.redd.it/jw0vi67tnb5e1.png
nomorebuttsplz
i.redd.it
1970-01-01T00:00:00
0
{}
1h8g8v3
false
null
t3_1h8g8v3
/r/LocalLLaMA/comments/1h8g8v3/a_test_prompt_the_new_llama_33_70b_struggles_with/
false
false
https://b.thumbs.redditm…8ZWfrtoMR7rM.jpg
101
{'enabled': True, 'images': [{'id': 'vVuXd9PRWxe4zR1n9tJ8mT38y__JPtfqCOmn9T7hfeo', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/jw0vi67tnb5e1.png?width=108&crop=smart&auto=webp&s=a0a8f6d1090f383df619b39c71501d57d33f8643', 'width': 108}, {'height': 86, 'url': 'https://preview.redd.it/jw0vi67tnb5e1.png?width=216&crop=smart&auto=webp&s=05d802ec2956a802510b5eeff20291af19b0e51e', 'width': 216}, {'height': 127, 'url': 'https://preview.redd.it/jw0vi67tnb5e1.png?width=320&crop=smart&auto=webp&s=efa74af6a68d6ed28bed7c70c2c2d5fdcf92d931', 'width': 320}, {'height': 255, 'url': 'https://preview.redd.it/jw0vi67tnb5e1.png?width=640&crop=smart&auto=webp&s=8143798dd199f7344d2799184c8dd5845093904d', 'width': 640}], 'source': {'height': 344, 'url': 'https://preview.redd.it/jw0vi67tnb5e1.png?auto=webp&s=cd41ad44c679c618e7cd29188be5c1304cc98bc6', 'width': 861}, 'variants': {}}]}
How do we let models be more creative?
1
[removed]
2024-12-07T00:35:39
https://www.reddit.com/r/LocalLLaMA/comments/1h8g8wx/how_do_we_let_models_be_more_creative/
Longjumping_Spot5843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8g8wx
false
null
t3_1h8g8wx
/r/LocalLLaMA/comments/1h8g8wx/how_do_we_let_models_be_more_creative/
false
false
self
1
null
Rapid LLM evolution: where will we be in ten years?
16
New models are popping up like mushrooms after a rainstorm, week after week. With this rapid pace of development, what do you think will be possible with LLMs and AI overall in ten years? I'm feeling both excited and scared about it :D
2024-12-07T00:38:52
https://www.reddit.com/r/LocalLLaMA/comments/1h8gbam/rapid_llm_evolution_where_will_we_be_in_ten_years/
Sky_Linx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8gbam
false
null
t3_1h8gbam
/r/LocalLLaMA/comments/1h8gbam/rapid_llm_evolution_where_will_we_be_in_ten_years/
false
false
self
16
null
Can we have local reinforcement fine-tuning
0
2024-12-07T00:45:16
https://v.redd.it/pgv05vklpb5e1
onil_gova
v.redd.it
1970-01-01T00:00:00
0
{}
1h8gg2a
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/pgv05vklpb5e1/DASHPlaylist.mpd?a=1736124329%2CNjcxMWUwYjMxODEwMmUxNTRkMzU3Y2M4NGM4MGI2MjAzZjMwYjE3NjRhNjY3MDdiOGU0MzNhNzg5MDQ0Mzk4OQ%3D%3D&v=1&f=sd', 'duration': 62, 'fallback_url': 'https://v.redd.it/pgv05vklpb5e1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/pgv05vklpb5e1/HLSPlaylist.m3u8?a=1736124329%2CMWYxZWQ1NjZlYmY1Y2UxMDk0YzYzMzg5NzJiZDViNGY2NzZiM2JlZTJhOTEwZTYxYzU1NGQzOTM1YTM1ZjQ5Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pgv05vklpb5e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 838}}
t3_1h8gg2a
/r/LocalLLaMA/comments/1h8gg2a/can_we_have_local_reinforcement_finetuning/
false
false
https://external-preview…aef02358f62fb387
0
{'enabled': False, 'images': [{'id': 'cXpzazRxaGxwYjVlMZN8vBY6fH4CONj9J4khQ4WFtSps4lheAAIptxrVvqGf', 'resolutions': [{'height': 92, 'url': 'https://external-preview.redd.it/cXpzazRxaGxwYjVlMZN8vBY6fH4CONj9J4khQ4WFtSps4lheAAIptxrVvqGf.png?width=108&crop=smart&format=pjpg&auto=webp&s=5f84eb3b89992301b39e8d6cc9f538358ba0b934', 'width': 108}, {'height': 185, 'url': 'https://external-preview.redd.it/cXpzazRxaGxwYjVlMZN8vBY6fH4CONj9J4khQ4WFtSps4lheAAIptxrVvqGf.png?width=216&crop=smart&format=pjpg&auto=webp&s=27d4e011c6b9f3451737a91f4fc127a60bcbd16d', 'width': 216}, {'height': 274, 'url': 'https://external-preview.redd.it/cXpzazRxaGxwYjVlMZN8vBY6fH4CONj9J4khQ4WFtSps4lheAAIptxrVvqGf.png?width=320&crop=smart&format=pjpg&auto=webp&s=7a10d5ee0d5ed272c2f03267df08ef878c6e897b', 'width': 320}, {'height': 549, 'url': 'https://external-preview.redd.it/cXpzazRxaGxwYjVlMZN8vBY6fH4CONj9J4khQ4WFtSps4lheAAIptxrVvqGf.png?width=640&crop=smart&format=pjpg&auto=webp&s=8d1640e41d860391d6b45dd8412cf783c9ae3e90', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/cXpzazRxaGxwYjVlMZN8vBY6fH4CONj9J4khQ4WFtSps4lheAAIptxrVvqGf.png?format=pjpg&auto=webp&s=4d33705055eef6e71998f879bd4cf50de1dd81f2', 'width': 838}, 'variants': {}}]}
We need QwQ on groq
25
Insanely fast inference with a reasoning model would be great. Remember, it's about the tokens/compute, not the time.
2024-12-07T01:56:50
https://www.reddit.com/r/LocalLLaMA/comments/1h8huc4/we_need_qwq_on_groq/
dp3471
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8huc4
false
null
t3_1h8huc4
/r/LocalLLaMA/comments/1h8huc4/we_need_qwq_on_groq/
false
false
self
25
null
Dataset for Processing Irrelevant Function Calls in Tools
5
I've trained a bunch of function/tool calling models in the past - see [https://huggingface.co/rubra-ai](https://huggingface.co/rubra-ai)

These types of models are really important for creating agents. It's best to include some of this data when fine tuning Llama3 or any model with function calling capabilities, because you don't want your fine tuned models to forget or become worse at calling external tools. Salesforce released a dataset a few months ago called xlam - [https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k)

The xlam dataset doesn't include queries where the functions are irrelevant to the user's query. For small models, contrastive examples really help, so here's a function irrelevance dataset if anyone wants to use it: [https://huggingface.co/datasets/sanjay920/xlam-irrelevant-sharegpt](https://huggingface.co/datasets/sanjay920/xlam-irrelevant-sharegpt)

Don't have time to write a paper, just wanted to share :)
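For anyone wondering what a contrastive "irrelevant" sample looks like, it's roughly something like the below, written here as a Python literal in ShareGPT style. This is a hypothetical example I made up for illustration, not copied from the dataset; check the dataset card for the exact schema:

```python
# Hypothetical contrastive sample: the available tools cannot answer the
# query, so the correct behavior is to decline rather than hallucinate a call.
sample = {
    "conversations": [
        {
            "from": "system",
            "value": "You have access to: get_weather(city: str) -> dict",
        },
        {
            "from": "human",
            "value": "What's the capital of France?",
        },
        {
            "from": "gpt",
            "value": "None of the available tools can answer this, "
                     "so I won't make a function call.",
        },
    ]
}
```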
2024-12-07T01:58:21
https://www.reddit.com/r/LocalLLaMA/comments/1h8hvf9/dataset_for_processing_irrelevant_function_calls/
sanjay920
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8hvf9
false
null
t3_1h8hvf9
/r/LocalLLaMA/comments/1h8hvf9/dataset_for_processing_irrelevant_function_calls/
false
false
self
5
{'enabled': False, 'images': [{'id': 'NXAwv_zg4fhnfWQBE15hBXuJhXlewS0ftIS3JjvcN-A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/R3ZNNko7xZdKUtW7sJ5NMnsNVurraSg49YpzjOXk6rY.jpg?width=108&crop=smart&auto=webp&s=c7ee647e8e73c1f60c5e98bf484a12ade10700fe', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/R3ZNNko7xZdKUtW7sJ5NMnsNVurraSg49YpzjOXk6rY.jpg?width=216&crop=smart&auto=webp&s=e0bb55d6844361b548e0bcb5c4865baad3242d9d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/R3ZNNko7xZdKUtW7sJ5NMnsNVurraSg49YpzjOXk6rY.jpg?width=320&crop=smart&auto=webp&s=29ce8df991b95e86f9f0ed1927a3f22f4d6e825e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/R3ZNNko7xZdKUtW7sJ5NMnsNVurraSg49YpzjOXk6rY.jpg?width=640&crop=smart&auto=webp&s=50c186220911c13a36c389d8d6fee5db9ea85bf1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/R3ZNNko7xZdKUtW7sJ5NMnsNVurraSg49YpzjOXk6rY.jpg?width=960&crop=smart&auto=webp&s=5c59e986de77092304a61a81a223b73f5182abea', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/R3ZNNko7xZdKUtW7sJ5NMnsNVurraSg49YpzjOXk6rY.jpg?width=1080&crop=smart&auto=webp&s=ef1a05f2d034236f28bda97ef9b6d9bd7d5f21f4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/R3ZNNko7xZdKUtW7sJ5NMnsNVurraSg49YpzjOXk6rY.jpg?auto=webp&s=3bb166f70b88ef940f26c67d3253a2e9c0f78981', 'width': 1200}, 'variants': {}}]}
Billing of Claude Haiku
1
[removed]
2024-12-07T02:01:46
https://www.reddit.com/r/LocalLLaMA/comments/1h8hy1u/billing_of_claude_haiku/
Available-Stress8598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8hy1u
false
null
t3_1h8hy1u
/r/LocalLLaMA/comments/1h8hy1u/billing_of_claude_haiku/
false
false
self
1
null
New Gemini model ranks #1 in all arena benchmarks
17
https://preview.redd.it/…bject to change.
2024-12-07T02:09:28
https://www.reddit.com/r/LocalLLaMA/comments/1h8i3bo/new_gemini_model_ranks_1_in_all_arena_benchmarks/
alongated
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8i3bo
false
null
t3_1h8i3bo
/r/LocalLLaMA/comments/1h8i3bo/new_gemini_model_ranks_1_in_all_arena_benchmarks/
false
false
https://b.thumbs.redditm…EvJTspZO_ZoM.jpg
17
null
Gemini 1206 is amazing for code autocomplete
18
I built this VSCode extension to specifically use Gemini's 2M context in coding and, not gonna lie, the new 1206 is really something else. Give it a go in [https://marketplace.visualstudio.com/items?itemName=robertpiosik.gemini-coder](https://marketplace.visualstudio.com/items?itemName=robertpiosik.gemini-coder) by defining a custom provider:

```
{
    "name": "Gemini Exp 1206",
    "endpointUrl": "https://generativelanguage.googleapis.com/v1beta/chat/completions",
    "bearerToken": "API KEY FROM AI STUDIO",
    "model": "gemini-exp-1206",
    "temperature": 0,
    "instruction": ""
},
```
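If you want to hit the same endpoint outside the extension, something like this should work. It's a sketch assuming the endpoint accepts the standard OpenAI chat/completions request shape, which is what the provider config above implies; not tested:

```python
import requests

API_KEY = "API KEY FROM AI STUDIO"  # the same key used in the provider config

resp = requests.post(
    "https://generativelanguage.googleapis.com/v1beta/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gemini-exp-1206",
        "temperature": 0,
        "messages": [{"role": "user", "content": "Complete this: def fib(n):"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```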
2024-12-07T02:14:05
https://www.reddit.com/r/LocalLLaMA/comments/1h8i6gh/gemini_1206_is_amazing_for_code_autocomplete/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8i6gh
false
null
t3_1h8i6gh
/r/LocalLLaMA/comments/1h8i6gh/gemini_1206_is_amazing_for_code_autocomplete/
false
false
self
18
{'enabled': False, 'images': [{'id': 'xe0CO2ErLSK8gDReKHm-_hxiHefw__lsJmVIrd2u5Oc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/s8s01RT_9zKAQq3mAhjf6ugqJbTB-i73brlB58BRFvU.jpg?width=108&crop=smart&auto=webp&s=72fab37089a9d94d7d5065fd520477478a3bebb6', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/s8s01RT_9zKAQq3mAhjf6ugqJbTB-i73brlB58BRFvU.jpg?auto=webp&s=31c6081c76b66382d2361a5812813aba54f00199', 'width': 128}, 'variants': {}}]}
Structured outputs · Ollama Blog
25
2024-12-07T02:22:55
https://ollama.com/blog/structured-outputs
maniac_runner
ollama.com
1970-01-01T00:00:00
0
{}
1h8ice5
false
null
t3_1h8ice5
/r/LocalLLaMA/comments/1h8ice5/structured_outputs_ollama_blog/
false
false
https://b.thumbs.redditm…qQ4lPQLY799g.jpg
25
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
Llama3.3 free API
2
[removed]
2024-12-07T03:10:42
https://www.reddit.com/r/LocalLLaMA/comments/1h8j7x0/llama33_free_api/
mehul_gupta1997
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8j7x0
false
null
t3_1h8j7x0
/r/LocalLLaMA/comments/1h8j7x0/llama33_free_api/
false
false
self
2
{'enabled': False, 'images': [{'id': 'of0IOLc78UxYI9E-L9sio2vCzk-nc2o28FlMy7D5P6A', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1CrxNNvF307_aBalx3ZRoNTNbUkWDJO_0PX0uzp3D_4.jpg?width=108&crop=smart&auto=webp&s=654722dec11380b12fb20a9a1258ca178d644fc3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1CrxNNvF307_aBalx3ZRoNTNbUkWDJO_0PX0uzp3D_4.jpg?width=216&crop=smart&auto=webp&s=2787dc44fe57f8ccca187aa41bd2d007a80a750c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1CrxNNvF307_aBalx3ZRoNTNbUkWDJO_0PX0uzp3D_4.jpg?width=320&crop=smart&auto=webp&s=1102a7428fb8fd344f0d5931e4c20c2fd0659e3e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/1CrxNNvF307_aBalx3ZRoNTNbUkWDJO_0PX0uzp3D_4.jpg?auto=webp&s=4ee3c8517e1c77232a92b636c472f5d299026fd3', 'width': 480}, 'variants': {}}]}
QwQ is not exactly efficient at confidently reasoning 😂
1
2024-12-07T03:30:26
https://i.redd.it/nzyyj4x1jc5e1.png
Ih8tk
i.redd.it
1970-01-01T00:00:00
0
{}
1h8jki8
false
null
t3_1h8jki8
/r/LocalLLaMA/comments/1h8jki8/qwq_is_not_exactly_efficient_at_confidently/
false
false
https://a.thumbs.redditm…Z9aRK6ydaeo0.jpg
1
{'enabled': True, 'images': [{'id': 'NC-TXdBda58PL7FvlLTqketbhA0MVAVeZvm3sryCbqY', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/nzyyj4x1jc5e1.png?width=108&crop=smart&auto=webp&s=69ea9103fac5e28e340bc994b9081b8e624c3174', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/nzyyj4x1jc5e1.png?width=216&crop=smart&auto=webp&s=0feafe67a7a2c07c8341279bc354c0c6f3c6708a', 'width': 216}, {'height': 227, 'url': 'https://preview.redd.it/nzyyj4x1jc5e1.png?width=320&crop=smart&auto=webp&s=366cd72f6876704c090a3ccfb064de1543f4a260', 'width': 320}, {'height': 454, 'url': 'https://preview.redd.it/nzyyj4x1jc5e1.png?width=640&crop=smart&auto=webp&s=134c910123b5171000ff1c87d860221d84c3c2ad', 'width': 640}, {'height': 681, 'url': 'https://preview.redd.it/nzyyj4x1jc5e1.png?width=960&crop=smart&auto=webp&s=6a618465c8b3f603485f3bddbb42d4684adaa856', 'width': 960}], 'source': {'height': 693, 'url': 'https://preview.redd.it/nzyyj4x1jc5e1.png?auto=webp&s=ece8823415241e1df63d054a366f082f37da911a', 'width': 976}, 'variants': {}}]}
inference on A100
1
[removed]
2024-12-07T03:59:24
https://www.reddit.com/r/LocalLLaMA/comments/1h8k2wd/inference_on_a100/
Gullible_Reason3067
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8k2wd
false
null
t3_1h8k2wd
/r/LocalLLaMA/comments/1h8k2wd/inference_on_a100/
false
false
self
1
null
"Cheap" AMD EPYC QS CPU from eBay, too good to be true?
1
[removed]
2024-12-07T04:07:37
https://www.reddit.com/r/LocalLLaMA/comments/1h8k7zg/cheap_amd_epyc_qs_cpu_from_ebay_too_good_to_be/
Chemical-Nose-2985
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8k7zg
false
null
t3_1h8k7zg
/r/LocalLLaMA/comments/1h8k7zg/cheap_amd_epyc_qs_cpu_from_ebay_too_good_to_be/
false
false
https://b.thumbs.redditm…qBLyGHadU1Ns.jpg
1
null
What is the best way to use Llama locally?
1
[removed]
2024-12-07T04:26:19
https://www.reddit.com/r/LocalLLaMA/comments/1h8kjtx/what_is_the_better_way_to_use_llama_locally/
Narrow-Narwhal-6850
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8kjtx
false
null
t3_1h8kjtx
/r/LocalLLaMA/comments/1h8kjtx/what_is_the_better_way_to_use_llama_locally/
false
false
self
1
null
How to fine tune a model for an AI content marketing tool?
0
I'm looking to build an AI content marketing tool that's specific to the tech industry (I work in web3). TL;DR: It'll be able to output high-quality, context-specific, text-based copy (e.g. blog articles, social media posts, ad copy, landing pages) that is domain-specific (e.g. targeting specific tech sub-sectors: web3, cloud computing, cybersecurity, etc). Would training a model on a dataset of marketing case studies and high-quality blog articles (in the tech space, e.g. from Hackernoon, Hubspot, Wired, etc) yield good results? Super new to fine tuning; I've always been an AI end-user (since 2019). If anyone has done anything similar to this, I'd appreciate hearing about your process and results.
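My current plan for the data prep side, sketched out (the field names are placeholders I made up, not a prescribed schema): turn each high-quality article into an instruction/response pair, something like:

```python
import json

# Hypothetical scraped articles: (topic, sector, full article text) tuples.
articles = [
    ("Why zk-rollups matter", "web3", "Full article text goes here..."),
]

# Each article becomes an instruction/response training pair in JSONL.
with open("train.jsonl", "w") as f:
    for topic, sector, body in articles:
        pair = {
            "instruction": f"Write a blog article for the {sector} sector "
                           f"about: {topic}",
            "response": body,   # the article itself is the training target
        }
        f.write(json.dumps(pair) + "\n")
```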
2024-12-07T04:49:00
https://www.reddit.com/r/LocalLLaMA/comments/1h8kxix/how_to_fine_tune_a_model_for_an_ai_content/
Isokelekl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8kxix
false
null
t3_1h8kxix
/r/LocalLLaMA/comments/1h8kxix/how_to_fine_tune_a_model_for_an_ai_content/
false
false
self
0
null
going into lmstudio
1
[removed]
2024-12-07T04:52:41
https://www.reddit.com/r/LocalLLaMA/comments/1h8kzl5/going_into_lmstudio/
Accountant-Due
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8kzl5
false
null
t3_1h8kzl5
/r/LocalLLaMA/comments/1h8kzl5/going_into_lmstudio/
false
false
self
1
null
Q8 cache for Mistral Large 2407
1
Is it worth it? I know Qwen produces gibberish with Q4. How well does Mistral Large handle exl2 Q8 cache settings?
2024-12-07T05:12:33
https://www.reddit.com/r/LocalLLaMA/comments/1h8lbbi/q8_cache_for_mistral_large_2407/
Kako05
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8lbbi
false
null
t3_1h8lbbi
/r/LocalLLaMA/comments/1h8lbbi/q8_cache_for_mistral_large_2407/
false
false
self
1
null
How to run Llama-3.3-70B-Instruct on a 16GB card at 13-15 t/s w quality.
0
I am DavidAU from Hugging Face. RE: the new "Llama-3.3-70B-Instruct". This may be of some help; it includes settings (3 full screenshots), how-tos and examples, so you can use this new model on a 16GB card at 13-15 t/s with a 2048 ctx window (small, but fast). It includes settings from a research project underway on using models at low BPW levels to attain normal operation, covers how to use it with Silly Tavern, KoboldCPP and Text Generation WebUI, and links to more resources for fine tuning adjustments/operation: [https://huggingface.co/DavidAU/Llama-3.3-70B-Instruct-How-To-Run-on-Low-BPW-IQ1_S-IQ1_M-at-maximum-speed-quality](https://huggingface.co/DavidAU/Llama-3.3-70B-Instruct-How-To-Run-on-Low-BPW-IQ1_S-IQ1_M-at-maximum-speed-quality) Enjoy!
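If you'd rather script it than use a UI, the same idea maps onto llama-cpp-python roughly like this (a sketch with an illustrative filename; the model card above has the exact sampler settings):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.3-70B-Instruct-IQ1_S.gguf",  # illustrative filename
    n_gpu_layers=-1,  # offload every layer to the 16GB card
    n_ctx=2048,       # the small-but-fast context window from above
)

out = llm("Q: Name the planets in the solar system.\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```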
2024-12-07T05:51:29
https://www.reddit.com/r/LocalLLaMA/comments/1h8lxip/how_to_run_llama3370binstruct_on_a_16gb_card_at/
Dangerous_Fix_5526
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8lxip
false
null
t3_1h8lxip
/r/LocalLLaMA/comments/1h8lxip/how_to_run_llama3370binstruct_on_a_16gb_card_at/
false
false
self
0
{'enabled': False, 'images': [{'id': 'pV_z5UGUS-Y2BeMOTLITI-Tkrur2kyCngkh360UWf9s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_qF21diShl_mbDqkjKK5EFDW0GQyjCRpc7ojc4mjufU.jpg?width=108&crop=smart&auto=webp&s=eccbd2a7ab28b06091d31eaac3fe4c7549d9c24a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_qF21diShl_mbDqkjKK5EFDW0GQyjCRpc7ojc4mjufU.jpg?width=216&crop=smart&auto=webp&s=2d1066258796a4abda2d3985c8323a67ad95307f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_qF21diShl_mbDqkjKK5EFDW0GQyjCRpc7ojc4mjufU.jpg?width=320&crop=smart&auto=webp&s=02999bb2fa3808270d2cdeb19af8fc207a304904', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_qF21diShl_mbDqkjKK5EFDW0GQyjCRpc7ojc4mjufU.jpg?width=640&crop=smart&auto=webp&s=a1077f24511741a02abf640c44b35ecd92429056', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_qF21diShl_mbDqkjKK5EFDW0GQyjCRpc7ojc4mjufU.jpg?width=960&crop=smart&auto=webp&s=05d149696438240f26eb78a1592a5287828e520e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_qF21diShl_mbDqkjKK5EFDW0GQyjCRpc7ojc4mjufU.jpg?width=1080&crop=smart&auto=webp&s=6a0410e099c9bb3383326e0be6853a9f27f7c711', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_qF21diShl_mbDqkjKK5EFDW0GQyjCRpc7ojc4mjufU.jpg?auto=webp&s=ac476432d62db14fb24379cc656df037d77992e3', 'width': 1200}, 'variants': {}}]}
Need Help: HuggingFace Spaces Model → OpenAI Compatible API
1
[removed]
2024-12-07T06:11:45
https://www.reddit.com/r/LocalLLaMA/comments/1h8m8s7/need_help_huggingface_spaces_model_openai/
Last_Pootis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8m8s7
false
null
t3_1h8m8s7
/r/LocalLLaMA/comments/1h8m8s7/need_help_huggingface_spaces_model_openai/
false
false
self
1
null
Livebench updates - Gemini 1206 with one of the biggest score jumps I've seen recently and Llama 3.3 70b nearly on par with GPT-4o.
230
2024-12-07T07:05:31
https://www.reddit.com/gallery/1h8n1b6
jd_3d
reddit.com
1970-01-01T00:00:00
0
{}
1h8n1b6
false
null
t3_1h8n1b6
/r/LocalLLaMA/comments/1h8n1b6/livebench_updates_gemini_1206_with_one_of_the/
false
false
https://b.thumbs.redditm…BtyBTbzZsSxE.jpg
230
null
Anyone know superPrompt? Can this be used to revamp opensource model?
1
2024-12-07T07:58:07
https://github.com/NeoVertex1/SuperPrompt/issues/48
balianone
github.com
1970-01-01T00:00:00
0
{}
1h8nqtr
false
null
t3_1h8nqtr
/r/LocalLLaMA/comments/1h8nqtr/anyone_know_superprompt_can_this_be_used_to/
false
false
https://b.thumbs.redditm…YtDU4jhwzHnc.jpg
1
{'enabled': False, 'images': [{'id': '1AyUdXue4JX1Djc6UaOQ0HT-rSZfNWlmSf1k29e7UPo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FnQdtJIrRTGDWqvA2IBJHNsUyLFYrwZleQvekULqvkk.jpg?width=108&crop=smart&auto=webp&s=127abfa2ed2a4c6106cc516c12a5ad5fc05a336c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FnQdtJIrRTGDWqvA2IBJHNsUyLFYrwZleQvekULqvkk.jpg?width=216&crop=smart&auto=webp&s=5ee47a18fe8922e6e96ff7d910591151388d8ffc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FnQdtJIrRTGDWqvA2IBJHNsUyLFYrwZleQvekULqvkk.jpg?width=320&crop=smart&auto=webp&s=79aef5e49965d8ee53a361dfcd341da2498075e0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FnQdtJIrRTGDWqvA2IBJHNsUyLFYrwZleQvekULqvkk.jpg?width=640&crop=smart&auto=webp&s=92268a306c1e87e15cf6296784b13bc22ad7c12f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FnQdtJIrRTGDWqvA2IBJHNsUyLFYrwZleQvekULqvkk.jpg?width=960&crop=smart&auto=webp&s=d5c02276cb8217413c5574e21b6ec5e0d0452022', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FnQdtJIrRTGDWqvA2IBJHNsUyLFYrwZleQvekULqvkk.jpg?width=1080&crop=smart&auto=webp&s=1b1ce5bfa4db45df01bac26d2190d8046cb89bbc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FnQdtJIrRTGDWqvA2IBJHNsUyLFYrwZleQvekULqvkk.jpg?auto=webp&s=7d9dce6108349d1771797da7eb91584d933f78f2', 'width': 1200}, 'variants': {}}]}
Is a 2x 3090 setup worth it?
1
[removed]
2024-12-07T08:16:57
https://www.reddit.com/r/LocalLLaMA/comments/1h8o0kt/is_2x_3090_setup_worth_it/
OpenTotal3160
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8o0kt
false
null
t3_1h8o0kt
/r/LocalLLaMA/comments/1h8o0kt/is_2x_3090_setup_worth_it/
false
false
self
1
null
ClosedAI ChatGPT "For Profit" is Dead on Arrival; Llama 3.3 is the Killer.
237
Llama 3.3 is a solid upgrade over Llama 3.1, but I'm not sure if it's better than Qwen 2.5 for coding. Qwen 2.5-Coder performed slightly better on some coding questions, but overall, Llama 3.3 remains a significant improvement.
2024-12-07T08:43:31
https://www.reddit.com/r/LocalLLaMA/comments/1h8od2c/closedai_chatgpt_for_profit_is_dead_on_arrival/
Vishnu_One
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8od2c
false
null
t3_1h8od2c
/r/LocalLLaMA/comments/1h8od2c/closedai_chatgpt_for_profit_is_dead_on_arrival/
false
false
self
237
null
Gemini-exp-1206 ranks 15th on Aider coding benchmark
1
2024-12-07T09:21:00
https://i.redd.it/u2f2ui1m9e5e1.jpeg
paulmaunders
i.redd.it
1970-01-01T00:00:00
0
{}
1h8ouz9
false
null
t3_1h8ouz9
/r/LocalLLaMA/comments/1h8ouz9/geminiexp1206_ranks_15th_on_aider_coding_benchmark/
false
false
https://a.thumbs.redditm…oUafvyjbgfc4.jpg
1
{'enabled': True, 'images': [{'id': '1q-nzT7QbCqCksABPgNEbHUroc0LIwmXk1p78ApzU6k', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/u2f2ui1m9e5e1.jpeg?width=108&crop=smart&auto=webp&s=934f572431132dc6d88937832e5a4e6494608b01', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/u2f2ui1m9e5e1.jpeg?width=216&crop=smart&auto=webp&s=5e257b5af2392aab9f594bb84b5aabef2c421e3c', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/u2f2ui1m9e5e1.jpeg?width=320&crop=smart&auto=webp&s=166c12b52edb9c9beaa7fea068abadb118647427', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/u2f2ui1m9e5e1.jpeg?width=640&crop=smart&auto=webp&s=9e36f7ce2caadebf65e442c730cd2b7be9b85409', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/u2f2ui1m9e5e1.jpeg?width=960&crop=smart&auto=webp&s=9e7c3205cb0ac7891d72a5d945da8d9715f03e20', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/u2f2ui1m9e5e1.jpeg?width=1080&crop=smart&auto=webp&s=3e0a4dfd74b8e69433c0c0418953c2b11f39eb17', 'width': 1080}], 'source': {'height': 1204, 'url': 'https://preview.redd.it/u2f2ui1m9e5e1.jpeg?auto=webp&s=d36abff38126de1dc155a3e309d7259d0b66e5ca', 'width': 1204}, 'variants': {}}]}
OpenAI models are still best on factual questions
1
I ran my usual tests (not public for obvious reasons) on the new Llama. I can believe the benchmarks saying it's as good as 405B on reasoning, but what I find interesting is how bad these smaller models are at general knowledge, and how good 4o is in comparison. Obviously, this is very much related to model size. However, 4o is also so much better on my tests than other similarly priced models, and that I cannot really explain. For example, 4o is cheaper than Sonnet and comparable to Mistral Large, Gemini 1.5 Pro, and Llama 405B (on most providers), yet it easily outperforms all of them on factual questions. I feel like, for a lot of tasks, this general knowledge is much more important than raw reasoning power. E.g., if you want someone to help you make travel plans, you'd consult a local from the place you're visiting, not a math Olympian. Same thing with "just talking to a bot". RAG doesn't solve this problem, as I am not talking about a single domain, but about the kind of cross-domain knowledge connections that are impossible with current RAG-like techniques. I'd love it if someone could change my mind on this and provide concrete examples where a similarly priced model outperforms 4o on a factual question, as it might simply be that my test examples are heavily biased toward fields where (by chance) OAI had more training data than others. (A spot-check sketch follows this record.)
2024-12-07T09:44:38
https://www.reddit.com/r/LocalLLaMA/comments/1h8p5wi/openai_models_are_still_best_on_factual_questions/
curl-up
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8p5wi
false
null
t3_1h8p5wi
/r/LocalLLaMA/comments/1h8p5wi/openai_models_are_still_best_on_factual_questions/
false
false
self
1
null
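A minimal spot-check harness in the spirit of the tests described above, run against OpenAI-compatible endpoints. The questions, the substring grading, and the model list are illustrative assumptions, not the author's private test set.

```python
# Hypothetical factual-QA spot check; assumes OPENAI_API_KEY is set and that
# the listed model names exist on the endpoint the client points at.
from openai import OpenAI

client = OpenAI()

QA = [  # illustrative items only
    ("What is the capital of Burkina Faso?", "ouagadougou"),
    ("In which year did the Chernobyl disaster occur?", "1986"),
]

def accuracy(model: str) -> float:
    hits = 0
    for question, gold in QA:
        r = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
            temperature=0,
        )
        # crude grading: the gold string must appear in the answer
        if gold in r.choices[0].message.content.lower():
            hits += 1
    return hits / len(QA)

for m in ("gpt-4o", "gpt-4o-mini"):
    print(m, accuracy(m))
```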
Did "consumer-grade" VRAM prices really decrease in the last few years?
1
AI research is an interest of mine, and I like exploring generative AI models in general. But it has never reached the point where I would set a high budget to purchase something like a 3090-series graphics card (a budget I probably would not have anyway, because it is really expensive hardware). But have things improved in the last few years? Maybe I was not looking closely enough, but I have not noticed any large drop in consumer VRAM prices from around 2022 to now.
2024-12-07T09:44:54
https://www.reddit.com/r/LocalLLaMA/comments/1h8p60u/did_a_real_decrease_of_consumergrade_vram_prices/
FallUpJV
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8p60u
false
null
t3_1h8p60u
/r/LocalLLaMA/comments/1h8p60u/did_a_real_decrease_of_consumergrade_vram_prices/
false
false
self
1
null
LLM Observability tool recommendations?
1
I am looking to pick a self-hosted LLM observability tool, between Langfuse and Arize Phoenix, for my application. Both look interesting; however, Arize Phoenix ships with OpenTelemetry support and has some features not available in self-hosted Langfuse (like Prompt Experiments and LLM-as-a-Judge). I also could not find any unbiased head-to-head comparison between the two (self-hosted). Do you have any personal experience with either? It would be a real help to me (and others still deciding). (A minimal tracing sketch follows this record.)
2024-12-07T09:47:56
https://www.reddit.com/r/LocalLLaMA/comments/1h8p7fv/llm_observability_tool_recommendations/
pravictor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8p7fv
false
null
t3_1h8p7fv
/r/LocalLLaMA/comments/1h8p7fv/llm_observability_tool_recommendations/
false
false
self
1
null
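For context on what self-hosted Langfuse usage looks like, a minimal tracing sketch with its Python SDK decorator; it assumes the LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST environment variables point at your instance. Phoenix would instead capture the same call through its OpenTelemetry/OpenInference instrumentation.

```python
# Minimal Langfuse tracing sketch; assumes LANGFUSE_* env vars point at a
# self-hosted instance and OPENAI_API_KEY is set.
from langfuse.decorators import observe
from openai import OpenAI

client = OpenAI()

@observe()  # records this function as a trace/span in Langfuse
def answer(question: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    return r.choices[0].message.content

print(answer("What does LLM observability buy me?"))
```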
I built a website to compare every AI model as a 17 y/o in high school: Countless.dev (live on Product Hunt rn!!)
0
2024-12-07T10:16:10
https://v.redd.it/h7e7aq4rhe5e1
ahmett9
/r/LocalLLaMA/comments/1h8pkzt/i_built_a_website_to_compare_every_ai_model_as_a/
1970-01-01T00:00:00
0
{}
1h8pkzt
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/h7e7aq4rhe5e1/DASHPlaylist.mpd?a=1736288178%2CYTNjMTVmNDA1ZjJhYTQyZTdhZDQ5ZTA3NzZlNDE0NjZhYzIyNTdmMDk4ZDM2ZWM0MDdiYjUzNDA3NmZmOGMyYQ%3D%3D&v=1&f=sd', 'duration': 150, 'fallback_url': 'https://v.redd.it/h7e7aq4rhe5e1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/h7e7aq4rhe5e1/HLSPlaylist.m3u8?a=1736288178%2CYWNlODc4ZDE3NDJkMjI4YmE4YTM1ZWZlMzNhNzY2ZjZhMmE3NGY1YzI1YjI3MTlhZmZjMGQ1MjQ5MjQ0YTQwYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/h7e7aq4rhe5e1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1h8pkzt
/r/LocalLLaMA/comments/1h8pkzt/i_built_a_website_to_compare_every_ai_model_as_a/
false
false
https://external-preview…4f3d0dfac1649589
0
{'enabled': False, 'images': [{'id': 'NW43cWdwNHJoZTVlMWAwUC_P3y4htTWMIt8KKT_L0B92BZu_Q64bXCsxKsZ-', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NW43cWdwNHJoZTVlMWAwUC_P3y4htTWMIt8KKT_L0B92BZu_Q64bXCsxKsZ-.png?width=108&crop=smart&format=pjpg&auto=webp&s=d9c5270aeeed57094c4c621bf7c58471fc1eff96', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NW43cWdwNHJoZTVlMWAwUC_P3y4htTWMIt8KKT_L0B92BZu_Q64bXCsxKsZ-.png?width=216&crop=smart&format=pjpg&auto=webp&s=3ed4815eb851b1f11286f9a5a8c63359ffb98b97', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NW43cWdwNHJoZTVlMWAwUC_P3y4htTWMIt8KKT_L0B92BZu_Q64bXCsxKsZ-.png?width=320&crop=smart&format=pjpg&auto=webp&s=da73010d8487e18c8e9f70a056dd2e830f3e46f3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NW43cWdwNHJoZTVlMWAwUC_P3y4htTWMIt8KKT_L0B92BZu_Q64bXCsxKsZ-.png?width=640&crop=smart&format=pjpg&auto=webp&s=5ebdf38859682aadc9d686f72b58e4b03d1d68df', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NW43cWdwNHJoZTVlMWAwUC_P3y4htTWMIt8KKT_L0B92BZu_Q64bXCsxKsZ-.png?width=960&crop=smart&format=pjpg&auto=webp&s=48c1d4260e155df273f11ae0d02d3c63b8edf4f9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NW43cWdwNHJoZTVlMWAwUC_P3y4htTWMIt8KKT_L0B92BZu_Q64bXCsxKsZ-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6ea0a8a5209fa6e9da74da494226075e56b365b2', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NW43cWdwNHJoZTVlMWAwUC_P3y4htTWMIt8KKT_L0B92BZu_Q64bXCsxKsZ-.png?format=pjpg&auto=webp&s=b0da5116a2c85fa8ac80f908756af55c58c41751', 'width': 1920}, 'variants': {}}]}
How to run Qwen with minimal downloading?
1
I have Qwen 2.5 Coder 32B Instruct downloaded, the non-quantized version, and I made it work through Hugging Face Transformers with 4-bit bitsandbytes. It works, but it fails with OOM once the context gets over ~1000 tokens. And for some reason, offloading onto the CPU either fails with 4-bit bitsandbytes or makes the model produce complete garbage in 8-bit. I would like to try other ways of running it. Here's the kicker: I'm on limited phone data, and it would be difficult to download a quantized version of this model. What are my options for running a model that doesn't fit my GPU that involve the least downloading? What do you usually use to run it? (A loading sketch follows this record.)
2024-12-07T10:30:59
https://www.reddit.com/r/LocalLLaMA/comments/1h8przx/how_to_run_quen_with_minimal_downloading/
paperic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8przx
false
null
t3_1h8przx
/r/LocalLLaMA/comments/1h8przx/how_to_run_quen_with_minimal_downloading/
false
false
self
1
null
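One low-download angle, since the full-precision weights are already on disk: squeeze more headroom out of the existing 4-bit setup (double quantization plus memory-efficient attention leaves more VRAM for the KV cache) rather than fetching a pre-quantized copy. A minimal sketch, assuming the local path below; nothing in it downloads new weights.

```python
# Reuses locally downloaded weights; MODEL_DIR is an assumed path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_DIR = "./Qwen2.5-Coder-32B-Instruct"  # assumption: your download location

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NF4 usually beats plain fp4
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,       # shaves extra VRAM off the weights
)

tok = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_DIR,
    quantization_config=bnb,
    device_map="auto",
    attn_implementation="sdpa",           # memory-efficient attention
)

prompt = "Write a Python function that reverses a linked list."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```

Another zero-redownload route is converting the local folder to GGUF with llama.cpp's convert_hf_to_gguf.py, quantizing on-device with llama-quantize, and serving it through llama.cpp's lower memory overhead.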
Whisper 3rd party models
1
[removed]
2024-12-07T11:11:49
https://www.reddit.com/r/LocalLLaMA/comments/1h8qbtq/whisper_3rd_party_models/
goingsplit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8qbtq
false
null
t3_1h8qbtq
/r/LocalLLaMA/comments/1h8qbtq/whisper_3rd_party_models/
false
false
self
1
null
Execute "local" LLM lib on server using instruction mode instead of chat.
1
Hi there! Do you know of any solution for running an LLM with Transformers or llama.cpp on a remote server? There are a lot of services, but they all follow the "chat template" approach through API calls. My main goal is to use instruction models with a library that enables constrained output on a remote endpoint, like I do on my local Mac with llama.cpp, Transformers, or Guidance. Any thoughts? (A sketch of one option follows this record.)
2024-12-07T11:11:58
https://www.reddit.com/r/LocalLLaMA/comments/1h8qbw7/execute_local_llm_lib_on_server_using_instruction/
Super_Dependent_2978
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8qbw7
false
null
t3_1h8qbw7
/r/LocalLLaMA/comments/1h8qbw7/execute_local_llm_lib_on_server_using_instruction/
false
false
self
1
null
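One option that escapes the chat-template mold: llama.cpp's built-in HTTP server exposes a raw /completion endpoint that accepts a GBNF grammar, so constrained decoding works over the network much like it does locally. A minimal sketch, assuming a server started with something like `llama-server -m model.gguf --host 0.0.0.0 --port 8080`; the hostname below is a placeholder.

```python
# Hits a remote llama.cpp server's /completion endpoint with a GBNF grammar.
import requests

SERVER = "http://my-remote-box:8080"  # placeholder address

# Grammar constraining the model to a bare yes/no answer
GRAMMAR = 'root ::= "yes" | "no"'

resp = requests.post(
    f"{SERVER}/completion",
    json={
        "prompt": "Is the sky blue on a clear day? Answer yes or no: ",
        "grammar": GRAMMAR,   # constrained decoding, as with local llama.cpp
        "n_predict": 4,
        "temperature": 0,
    },
    timeout=60,
)
print(resp.json()["content"])
```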
whisper.cpp 3rd party models
1
[removed]
2024-12-07T11:12:53
https://www.reddit.com/r/LocalLLaMA/comments/1h8qccq/whispercpp_3rd_party_models/
goingsplit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1h8qccq
false
null
t3_1h8qccq
/r/LocalLLaMA/comments/1h8qccq/whispercpp_3rd_party_models/
false
false
self
1
null