title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns]) | url (string, 0-780 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns]) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable ⌀) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Lex Fridman Podcast dataset | 11 | I released a [@lexfridman](https://twitter.com/lexfridman) Lex Fridman Podcast dataset suitable for LLaMA, Vicuna, and WizardVicuna training.
[https://huggingface.co/datasets/64bits/lex\_fridman\_podcast\_for\_llm\_vicuna](https://huggingface.co/datasets/64bits/lex_fridman_podcast_for_llm_vicuna) | 2023-06-08T05:34:38 | https://www.reddit.com/r/LocalLLaMA/comments/1440gge/lex_fridman_podcast_dataset/ | Latter-Implement-243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1440gge | false | null | t3_1440gge | /r/LocalLLaMA/comments/1440gge/lex_fridman_podcast_dataset/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': '5osrP8LEZI-T2pBa9Am7vKvBjzWhjfs4NUhtZmVGLl0', 'resolutions': [], 'source': {'height': 48, 'url': 'https://external-preview.redd.it/-MqMa8Dac9L2EoGyLzv_hMhVy1QutV-rI6YMArmymIA.jpg?auto=webp&s=de891f33bbb471b24d7b4fc97c2cd6a6176caae3', 'width': 48}, 'variants': {}}]} |
Gorilla: An API AppStore for LLMs | 1 | [removed] | 2023-06-08T06:20:09 | https://www.reddit.com/r/LocalLLaMA/comments/14419ki/gorilla_an_api_appstore_for_llms/ | satmarz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14419ki | false | null | t3_14419ki | /r/LocalLLaMA/comments/14419ki/gorilla_an_api_appstore_for_llms/ | false | false | default | 1 | null |
K Quantization vs Perplexity | 50 | https://github.com/ggerganov/llama.cpp/pull/1684
The advancements in quantization performance are truly fascinating. It's remarkable how a model quantized to just 2 bits consistently outperforms the more memory-intensive fp16 models at the same scale. To put it simply, a 65B model quantized with 2 bits achieves superior results compared to a 30B fp16 model, while utilizing similar memory requirements as a 30B model quantized to 4-8 bits. This breakthrough becomes even more astonishing when we consider that the 65B model only occupies 13.6 GB of memory with 2-bit quantization, surpassing the performance of a 30B fp16 model that requires 26GB of memory. These developments pave the way for the future, where we can expect to witness the emergence of super models exceeding 100B parameters, all while consuming less than 24GB of memory through the use of 2-bit quantization. | 2023-06-08T06:36:46 | onil_gova | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1441jnr | false | null | t3_1441jnr | /r/LocalLLaMA/comments/1441jnr/k_quantization_vs_perplexity/ | false | false | 50 | {'enabled': True, 'images': [{'id': '2b4lUdxfqUaF5GR6hOD1iOBgqNXpAFfbiQFtSV5SBh8', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/i9ep2yyroq4b1.png?width=108&crop=smart&auto=webp&s=59d2f3b132eaf169536e4c0b090a9fc7d1b8e822', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/i9ep2yyroq4b1.png?width=216&crop=smart&auto=webp&s=1114712578e10c04a0e65c84500a7cb40d8f452c', 'width': 216}, {'height': 247, 'url': 'https://preview.redd.it/i9ep2yyroq4b1.png?width=320&crop=smart&auto=webp&s=5621d6e364170cffb2579b30c03d0a99e47ba6b5', 'width': 320}, {'height': 494, 'url': 'https://preview.redd.it/i9ep2yyroq4b1.png?width=640&crop=smart&auto=webp&s=2feab5f47ab46a59c38c6a620202463e7b45bfc2', 'width': 640}], 'source': {'height': 612, 'url': 'https://preview.redd.it/i9ep2yyroq4b1.png?auto=webp&s=8ac4f6f04c00a4b3fd2a2e878784e8e010371b34', 'width': 792}, 'variants': {}}]} |
||
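A rough way to sanity-check memory figures like the ones quoted above: file size scales with parameter count times the effective bits per weight, where "effective" sits slightly above the nominal bit width because each quantization block also stores scale/offset metadata. As a back-of-the-envelope formula (a sketch, not the exact GGML layout):

```latex
% Rough estimate; bits_effective is the nominal width plus per-block scale overhead.
\text{model size (GB)} \approx \frac{N_{\text{params}} \times \text{bits}_{\text{effective}}}{8 \times 10^{9}}
```

For example, q4_0 packs blocks of 32 weights at 4 bits each plus an fp16 scale, i.e. 4.5 effective bits per weight, which for a roughly 32.5B-parameter model works out to around 18 GB, in the same ballpark as the published q4_0 file sizes.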
For running GGML models, should I get a bunch of Intel Xeon CPUs to run concurrent tasks better, or just one regular CPU, like a Ryzen 9 7950 or something? | 8 | Or maybe just one really good Intel Xeon or AMD EPYC instead of a bunch? It kind of makes sense, because GPUs are good because they have thousands of cores, and with Xeons you could at least get hundreds. Sorry, this might be dumb; I only ask because I can run a way larger model using GGML than GPTQ, and maybe I could get one of the giant models running locally. | 2023-06-08T06:51:43 | https://www.reddit.com/r/LocalLLaMA/comments/1441sw4/for_running_ggml_models_should_i_get_a_bunch_of/ | CompetitiveSal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1441sw4 | false | null | t3_1441sw4 | /r/LocalLLaMA/comments/1441sw4/for_running_ggml_models_should_i_get_a_bunch_of/ | false | false | self | 8 | null |
SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression | 1 | [deleted] | 2023-06-08T07:46:28 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1442qdv | false | null | t3_1442qdv | /r/LocalLLaMA/comments/1442qdv/spqr_a_sparsequantized_representation_for/ | false | false | default | 1 | null |
||
Can someone point me to lit or articles on why model sizes are seemingly so random? Why 3/7/13/60 instead of 2/4/8…? | 15 | Might just be missing something obvious… at first I thought it was some weird 1024 vs 1000 artifact but that doesn’t seem like it would apply here. | 2023-06-08T08:23:49 | https://www.reddit.com/r/LocalLLaMA/comments/1443cbq/can_someone_point_me_to_lit_or_articles_on_why/ | Ultimarr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1443cbq | false | null | t3_1443cbq | /r/LocalLLaMA/comments/1443cbq/can_someone_point_me_to_lit_or_articles_on_why/ | false | false | self | 15 | null |
Hardware requirements for GGML quantization | 6 | I haven't found anything on this subject. If you do GGML quantization, can you share your hardware and the time it took to accomplish it?
Can it be done with a basic gpu? Or even just a cpu? Or will it take too long? | 2023-06-08T10:29:11 | https://www.reddit.com/r/LocalLLaMA/comments/1445ihk/hardware_requirements_for_ggml_quantization/ | terhisseur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1445ihk | false | null | t3_1445ihk | /r/LocalLLaMA/comments/1445ihk/hardware_requirements_for_ggml_quantization/ | false | false | self | 6 | null |
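For what it's worth, the quantization step in llama.cpp is a plain CPU job handled by the bundled `quantize` tool; no GPU is involved, and for the smaller models it typically finishes in minutes. A minimal sketch of driving it from Python (the file paths are placeholders, and the fp16 GGML conversion of the model must already exist on disk):

```python
# Hedged sketch: shell out to llama.cpp's bundled quantize binary (CPU-only).
# Paths below are placeholders; the fp16 GGML file must already exist
# (produced by llama.cpp's convert script from the original weights).
import subprocess

subprocess.run(
    [
        "./quantize",                      # built when you compile llama.cpp
        "models/7B/ggml-model-f16.bin",    # input: fp16 GGML file (placeholder)
        "models/7B/ggml-model-q4_0.bin",   # output: quantized file (placeholder)
        "q4_0",                            # target type, e.g. q4_0, q5_1, q8_0, or the new k-quants
    ],
    check=True,
)
```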
Learning sources on working with local LLMs | 19 | Hey guys, I have been playing with different LLMs for some time and still am struggling with the basics. When I google around it seems like there are no good packages source. Does anyone know of any?
I am particularly interested, in no specific order about:
- What is quantization and what types are trending
- Tokenizers, where they sit and how they work
- Different types of ways how to store weights (safetensors, PyTorch files, etc)
- What type of specific LLM to use (3B, 5B, 8B, 13B, ...)
- Overhead libraries and front-ends that work with these models (oobabooga's text-generation-webui, llama.cpp)
- How to work with huggingface
- What are embedders
- Current models on the market and their differences (llama derivatives, falcon, gpt, ...)
- Prompt engineering basics
- Intro to LoRAs and fine-tuning in general
- Optional: intro to transformers and how they work | 2023-06-08T10:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/1445m59/learning_sources_on_working_with_local_llms/ | BorderKeeper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1445m59 | false | null | t3_1445m59 | /r/LocalLLaMA/comments/1445m59/learning_sources_on_working_with_local_llms/ | false | false | self | 19 | null |
Current obstacles for LLMs and solutions | 4 | What are the biggest obstacles for LLMs? Which have solutions in the near future (research papers), and for which are there no solutions yet?
What I currently see:
* very slow when running on CPU
* huge (V)RAM requirements even after quantization
* ability to "remember" a user and pick up previous conversations
* context size (summarizing long texts)
* hallucinations
For context-size there was a paper with landmark-attention ([https://www.reddit.com/r/LocalLLaMA/comments/13wb59a/code\_released\_landmark\_attention\_randomaccess/](https://www.reddit.com/r/LocalLLaMA/comments/13wb59a/code_released_landmark_attention_randomaccess/))
But for the rest I haven't seen any research paper yet, or maybe I missed something | 2023-06-08T11:16:36 | https://www.reddit.com/r/LocalLLaMA/comments/1446f6q/currentl_obstacles_for_llms_and_solutions/ | Koliham | self.LocalLLaMA | 2023-06-08T11:20:12 | 0 | {} | 1446f6q | false | null | t3_1446f6q | /r/LocalLLaMA/comments/1446f6q/currentl_obstacles_for_llms_and_solutions/ | false | false | self | 4 | null |
Upgrading 16” 2021 M1 Pro 16GB to 16” M2 Max 32GB for local LLMs smart move or not? | 8 | [deleted] | 2023-06-08T11:21:26 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1446il5 | false | null | t3_1446il5 | /r/LocalLLaMA/comments/1446il5/upgrading_16_2021_m1_pro_16gb_to_16_m2_max_32gb/ | false | false | default | 8 | null |
||
P2P AI Training | 31 | I have been contemplating the use of a P2P (Peer-to-Peer) infrastructure to train large language models that are too expensive to be trained on GPU clusters. However, my uncertainty lies in the feasibility of this approach. Specifically, due to my limited mathematical knowledge, I am unsure whether training such a large language model in a decentralized manner would be efficient or practical.
You guys have an opinion on this? Since it is possible to use many GPUs to do the job it should be possible to do it on remote GPUs/CPUs. | 2023-06-08T11:32:09 | https://www.reddit.com/r/LocalLLaMA/comments/1446qcx/p2p_ai_training/ | _omid_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1446qcx | false | null | t3_1446qcx | /r/LocalLLaMA/comments/1446qcx/p2p_ai_training/ | false | false | self | 31 | null |
What is the best-rated model that is free for commercial use? | 1 | I'm building a project that will require an AI model that, at the very least, doesn't make spelling mistakes. Any recommendations? | 2023-06-08T11:43:48 | https://www.reddit.com/r/LocalLLaMA/comments/1446yxs/what_is_the_best_rated_commercial_free_model/ | Shyvadi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1446yxs | false | null | t3_1446yxs | /r/LocalLLaMA/comments/1446yxs/what_is_the_best_rated_commercial_free_model/ | false | false | self | 1 | null |
Remote Access Solutions: Port Forwarding or API | 1 | [removed] | 2023-06-08T11:47:57 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 144722y | false | null | t3_144722y | /r/LocalLLaMA/comments/144722y/remote_access_solutions_port_forwarding_or_api/ | false | false | default | 1 | null |
||
How prescient is it that back in 1984, The Terminator used a LLM with Top-P/K sampling and Context? | 173 | 2023-06-08T11:55:44 | https://www.reddit.com/gallery/14477vq | SeymourBits | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14477vq | false | null | t3_14477vq | /r/LocalLLaMA/comments/14477vq/how_prescient_is_it_that_back_in_1984_the/ | false | false | nsfw | 173 | null |
|
Estimating lora training time for a 30b GPTQ model? | 2 | Id like to train a lora for the thebloke/wizard vicuña 30b gptq model, but I'm wondering how long it would take so I can calculate the cost, approximately.
I'm thinking of giving a raw text file of a bunch of zero shot instructions in this format:
USER: what's the latest AI news? ASSISTANT: let me search the web... \[command=search\_web "AI news"
USER: more instructions ASSISTANT:ideal answer etc..
I'll be training on runpod. | 2023-06-08T11:59:57 | https://www.reddit.com/r/LocalLLaMA/comments/1447azd/estimating_lora_training_time_for_a_30b_gptq_model/ | pelatho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1447azd | false | null | t3_1447azd | /r/LocalLLaMA/comments/1447azd/estimating_lora_training_time_for_a_30b_gptq_model/ | false | false | self | 2 | null |
Alternative ways for running models locally and hosting APIs | 10 | Till now I’ve been relying on the webui for running and testing models.
I want to learn about LLMs and their inner workings, and want to build an application powered by LLMs. I need APIs that will let me interface with these LLMs.
Instead of using the webui’s API, I’d prefer writing my own API that is able to run models and let me tweak parameters.
Are there any good resources that'll teach me to run these LLMs from scratch locally (without any tools like the webui)? Once I accomplish this, I will look into writing an API to interface with the models. | 2023-06-08T12:43:01 | https://www.reddit.com/r/LocalLLaMA/comments/144891a/alternative_ways_for_running_models_locally_and/ | QUAZARD3141 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144891a | false | null | t3_144891a | /r/LocalLLaMA/comments/144891a/alternative_ways_for_running_models_locally_and/ | false | false | self | 10 | null |
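One way to skip the webui entirely is to load the model with llama-cpp-python and put your own thin HTTP layer in front of it. A minimal sketch, assuming `pip install llama-cpp-python fastapi uvicorn` and a GGML model file whose path below is a placeholder:

```python
# Minimal self-hosted completion endpoint; model path and parameter defaults are placeholders.
from fastapi import FastAPI
from llama_cpp import Llama
from pydantic import BaseModel

llm = Llama(
    model_path="models/wizard-vicuna-13B.ggmlv3.q4_0.bin",  # placeholder path
    n_ctx=2048,
    n_threads=8,
)

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 256
    temperature: float = 0.7

@app.post("/generate")
def generate(req: GenerateRequest):
    # llama-cpp-python returns an OpenAI-style completion dict
    out = llm(req.prompt, max_tokens=req.max_tokens, temperature=req.temperature)
    return {"text": out["choices"][0]["text"]}

# Run with: uvicorn server:app --host 0.0.0.0 --port 8000
```

From there it is easy to expose whatever sampling parameters you want to tweak as extra request fields.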
The travelling LLaMa: Graph theory meets local LLM - another test of performance of local LLMs | 19 | This puzzle is solvable! I solved it and GPT-4 solved it and we got the same result. Yes I am nervous in case I still got it wrong, but the previous thread was insightful and even the flawed results delivered some insights. The main takeaway from the first attempt was for me was that "all LLaMA models have the same reasoning ability", or do they...
This is another original puzzle that models would not have seen before and I modified it so that even GPT-4 did not see it before.
​
**The Delivery Route Puzzle:**
There are 4 towns - A, B, C, and D.
A delivery truck has to deliver packages to all of these towns. Each town must receive only one delivery, and the truck must return to its starting point.
The distances between the towns are as follows:
* A to B is 50 miles
* A to C is 70 miles
* A to D is 60 miles
* B to C is 30 miles
* B to D is 80 miles
* C to D is 40 miles
The truck begins its journey with 200 miles of fuel. However, there are a few rules that the driver needs to consider:
1. Every delivery uses up 10 miles worth of fuel, regardless of distance.
2. The truck can refuel only at town C and it can only do this once. When it refuels, it must fill up the tank to its full capacity of 200 miles.
3. The truck has to return to town A after delivering to all towns.
The puzzle is to find the route that the truck should take to deliver to all towns, refuel at town C and return to town A with the minimum amount of fuel remaining. The driver is extremely cautious and doesn't want to risk running out of fuel.
Remember, the goal is to have the minimum amount of fuel remaining after returning to town A, but without ever running out of fuel during the trip.
# Results summary
There is only one optimal solution if we adhere to all constraints: **ACBDA**
**GPT-4:** correct
EDIT, **Claude** (through Poe): correct sequence, but it looks like a fluke because the explanation is wildly incorrect:
A -> C (delivery uses 10 miles, refuel to 200 miles)
C -> B (delivery uses 10 miles, 190 miles remaining)
B -> D (delivery uses 10 miles, 150 miles remaining)
D -> A (delivery uses 10 miles, 90 miles remaining)
**Bing**: incorrect. Gave two answers, then argued with me :)
**Bard**: incorrect. Does not understand the problem, does not adhere to constraints. It returned to the same town, also its output is almost identical to guanaco 65B. It also did not deliver anything to town C, only refueled, twice... disastrous performance. Use **Wizard-Vicuna-30B-Uncensored.ggmlv3.q8\_0.bin** and get better performance and privacy.
​
Bard output:
1. Start in town A.
2. Drive to town B and deliver a package. (This uses up 10 miles of fuel.)
3. Drive to town C and refuel. (This brings the fuel level back to 200 miles.)
4. Drive to town D and deliver a package. (This uses up 10 miles of fuel.)
5. Drive back to town C. (This uses up 40 miles of fuel.)
6. Refuel in town C. (This brings the fuel level back to 200 miles.)
7. Drive back to town A. (This uses up 70 miles of fuel.)
​
**Local LLM answers:** all incorrect, with most not following instructions, some missing the 10-mile penalty per delivery, and some not knowing basic arithmetic.
Among the local LLMs, this is my ranking:
1. **Wizard-Vicuna-30B-Uncensored.ggmlv3.q8\_0.bin**: overall good reasoning, adheres to instructions to the highest level and is not unhinged. It does not know maths.
2. **alpaca-lora-65B.ggmlv3.q5\_1.bin**: Incorrect and somewhat strange reasoning, but uses less flawed reasoning compared to other 65B models.
3. **guanaco-65B.ggmlv3.q5\_1.bin**: almost OK except it missed the delivery costs and of course the answer is incorrect
​
The reject pile:
**guanaco-65B.ggmlv3.q8\_0.bin**: worse than guanaco-65B.ggmlv3.q5\_1.bin. It could be that quantizing broke it. It cannot do arithmetic and does not follow instructions. I will no longer use it.
**VicUnlocked-Alpaca-65B.ggmlv3.q8\_0.bin:** better than Bard, but not useful
**VicUnlocked-Alpaca-65B.ggmlv3.q5\_1.bin:** identical to **VicUnlocked-Alpaca-65B.ggmlv3.q8\_0.bin** but more verbose
**dromedary-lora-65B.ggmlv3.q5\_1.bin**: Starts off strong, gets confused in a scarily confident way.
**based-30b.ggmlv3.q8\_0.bin**: Did not explain the reasoning. Does not seem to know what is happening
**gpt4-alpaca-lora\_mlp-65B.ggmlv3.q5\_1.bin**: it had severe issues and I believe I will no longer use this model for anything
​
I relied on **guanaco-65B.ggmlv3.q8\_0.bin** for most of my current workload so I need to reflect on this result and try to understand what it may mean in a broader context of the domain that I use the models in.
​
# Appendix
**All local LLM results:**
Model: **Wizard-Vicuna-30B-Uncensored.ggmlv3.q8\_0.bin**
Answer: After analyzing the information provided, I can confirm that there are multiple routes that the delivery truck could take to complete its journey while minimizing the amount of fuel remaining upon returning to town A. However, based on the given rules and constraints, I have determined that the most efficient route would be as follows:
1. Start in town A and drive to town B (50 miles)
2. Deliver package in town B (10 miles worth of fuel used)
3. Drive from town B to town C (30 miles)
4. Refuel in town C (200 miles of fuel added)
5. Drive from town C to town D (40 miles)
6. Deliver package in town D (10 miles worth of fuel used)
7. Drive from town D to town A (80 miles)
This route meets all of the requirements outlined in the puzzle, including delivering to all towns, refueling at town C, and returning to town A with the minimum amount of fuel remaining. The total distance traveled is 250 miles, which means that the truck will have 50 miles of fuel remaining upon returning to town A.
​
Model: **gpt4-alpaca-lora\_mlp-65B.ggmlv3.q5\_1.bin**
Answer: After analyzing the information provided in the input, I believe there are two possible solutions for this delivery route puzzle.
Solution 1:
The first solution involves making deliveries to towns B, D, and then C before refueling. After refueling, the truck can make deliveries to town A and return to its starting point. Here is how the journey would look like:
A to B (50 miles) - 10 miles used up = 40 miles left
B to C (30 miles) - 10 miles used up = 20 miles left
C to refuel (60 miles) - 10 miles used up = 140 miles left after refueling
C to A (70 miles) - 10 miles used up = 80 miles left
A to home (10 miles) - 10 miles used up = 70 miles left at home
The total distance traveled is 260 miles, with a fuel usage of 50 miles. This leaves the truck with 70 miles of fuel remaining when it returns to town A.
​
Solution 2:
Another solution involves making deliveries to towns C and D before refueling. After refueling, the truck can make deliveries to town B and then return to its starting point. Here is how the journey would look like:
A to C (60 miles) - 10 miles used up = 90 miles left
C to refuel (30 miles) - 10 miles used up = 180 miles left after refueling
C to D (40 miles) - 10 miles used up = 150 miles left
D to home (10 miles) - 10 miles used up = 140
​
Model: **guanaco-65B.ggmlv3.q5\_1.bin**
Answer: The optimal route for the delivery truck would be as follows:
1. Start from town A and drive to town B (50 miles).
2. From town B, drive to town D (80 miles) and make a delivery there.
3. From town D, drive back to town C (40 miles) and refuel the tank.
4. From town C, drive to town A (70 miles) and complete the loop.
The total distance traveled is 240 miles, which is within the truck's maximum range of 200 miles after refueling. The remaining fuel after returning to town A will be 60 miles, which is the minimum amount possible for this route.
​
Model: **guanaco-65B.ggmlv3.q8\_0.bin**
Answer: The optimal route for the delivery truck would be as follows:
1. Start from town A and drive to town B (50 miles).
2. Drive back to town C (80 miles) and refuel the tank (uses up 10 miles of fuel).
3. Drive to town D (40 miles), then to town A (60 miles).
4. Drive to town B (70 miles), then back to town C (30 miles).
5. Drive to town D (80 miles), then back to town A (60 miles).
​
The total distance driven is 390 miles, which is less than the full tank capacity of 200 miles. The truck will have 10 miles of fuel remaining when it returns to town A. This route ensures that the truck never runs out of fuel and minimizes the amount of fuel used.
​
Model: **VicUnlocked-Alpaca-65B.ggmlv3.q8\_0.bin**
Answer: The optimal route for the delivery truck would be as follows:
1. Start from town A and drive to town B (50 miles).
2. Drive back to town A (50 miles).
3. Refuel at town C (200 miles).
4. Drive to town D (60 miles).
5. Drive to town C (40 miles).
6. Drive to town B (30 miles).
7. Drive back to town A (50 miles).
This route uses a total of 280 miles worth of fuel, leaving 20 miles remaining in the tank when the truck returns to town A. This is the minimum amount of fuel that can be used while still delivering all packages and returning to town A without running out of fuel.
​
Model: **VicUnlocked-Alpaca-65B.ggmlv3.q5\_1.bin**
Answer: The optimal route for the delivery truck would be as follows:
1. Start from town A and drive to town B (50 miles).
2. Drive back to town A (50 miles).
3. Refuel at town C (200 miles).
4. Drive to town D (60 miles).
5. Drive to town C (40 miles).
6. Drive to town B (30 miles).
7. Drive back to town A (50 miles).
This route uses a total of 280 miles of fuel, leaving 20 miles remaining in the tank when the truck returns to town A. This is the minimum amount of fuel that can be used while still delivering all packages and refueling once.
The explanation for this solution is as follows:
1. The first delivery must be from town A to town B because it is the shortest distance among all possible deliveries.
2. After delivering to town B, the truck drives back to town A because there are no other deliveries that can be made with the remaining fuel.
3. At this point, the truck has used up 100 miles of fuel (50 miles + 50 miles) and has 100 miles left in the tank. It must now refuel at town C before continuing on its route.
4. After refueling, the truck drives to town D because it is the furthest away from town C. This ensures that the truck uses up as much fuel as possible during the refueling stop.
5. After delivering to town D, the truck drives back to town C because there are no other deliveries that can be made with the remaining fuel.
6. At this point, the truck has used up another 120 miles of fuel (60... \[output truncated due to token limit\]
​
Model: **dromedary-lora-65B.ggmlv3.q5\_1.bin**
Answer: The truck should take the following route:
A -> B -> C -> D -> C -> A
The truck will use up 10 miles for each delivery, so it will use up a total of 40 miles for the first 4 deliveries.
After delivering to town D, the truck has 160 miles left in its tank. It then refuels at town C, which fills up the tank to its full capacity of 200 miles.
The truck then delivers to town C and returns to town A with 40 miles remaining.
​
\### Explanation:
The truck must refuel at town C because it is the only town that has a gas station.
If the truck were to refuel at any other town, it would not have enough fuel to return to town A.
For example, if the truck were to refuel at town B, it would use up all of its fuel delivering to towns C and D, and would not be able to return to town A.
Similarly, if the truck were to refuel at town D, it would run out of fuel before being able to deliver to town C.
Therefore, the truck must refuel at town C in order to complete the delivery route.
\### Solution:
The truck should take the following route:
A -> B -> C -> D -> C -> A
The truck will use up 10 miles for each delivery, so it will use up a total of 40 miles for the first 4 deliveries.
After delivering to town D, the truck has 160 miles left in its tank. It then refuels at town C, which fills up the tank to its full capacity of 200 miles.
​
Model: **alpaca-lora-65B.ggmlv3.q5\_1.bin**
Answer: The delivery truck should follow this route: A -> B -> D -> C -> A. This will use up a total of 210 miles worth of fuel, leaving 90 miles in reserve when it returns to town A.
This route ensures that the truck never runs out of fuel because it stops at town C for refuelling before making its final delivery to town A. It also minimizes the distance travelled, since the truck only makes one round-trip between towns and doesn't make any unnecessary detours.
​
Model: **based-30b.ggmlv3.q8\_0.bin**
Answer: The optimal solution would be to start by going from A to B, then to D, then to C, and finally back to A. This would use up a total of 200 miles worth of fuel. | 2023-06-08T12:47:25 | https://www.reddit.com/r/LocalLLaMA/comments/1448cqm/the_travelling_llama_graph_theory_meets_local_llm/ | extopico | self.LocalLLaMA | 2023-06-08T15:03:32 | 0 | {} | 1448cqm | false | null | t3_1448cqm | /r/LocalLLaMA/comments/1448cqm/the_travelling_llama_graph_theory_meets_local_llm/ | false | false | self | 19 | null |
Is there a monolingual model available? | 5 | Sure it's cool that most of these LLMs can translate between Dutch and German or whatever. But I was wondering, if there's one that doesn't have a bunch of brain space taken up with extra languages, would you get more intelligence in a small model? | 2023-06-08T13:03:34 | https://www.reddit.com/r/LocalLLaMA/comments/1448qgq/is_there_a_monolingual_model_available/ | Robot_Graffiti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1448qgq | false | null | t3_1448qgq | /r/LocalLLaMA/comments/1448qgq/is_there_a_monolingual_model_available/ | false | false | self | 5 | null |
Tutorial on how to run LLMs on AWS EC2 | 1 | [removed] | 2023-06-08T13:04:34 | https://www.reddit.com/r/LocalLLaMA/comments/1448rao/tutorial_on_how_to_run_llms_on_aws_ec2/ | Glum-Scar9476 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1448rao | false | null | t3_1448rao | /r/LocalLLaMA/comments/1448rao/tutorial_on_how_to_run_llms_on_aws_ec2/ | false | false | default | 1 | null |
Need advice on integrating LLaMaCPP with UE4 for conversational NPCs - is it feasible? | 9 | I'm currently working with UE4 and have a decent grasp of writing game code in C++. However, incorporating LLaMaCPP for locally hosted AI models is an entirely new challenge for me.
I've managed to create a C++ version of the OpenAI API using curl and whatnot, and have been experimenting with Agent NPCs for in-game conversations. However, relying on OpenAI's API isn't sustainable for shipping a game with this functionality, so I'm exploring the possibility of using LLaMaCPP with a Lora to fine-tune the models for better conversational capabilities.
This raises a few concerns:
1. Building LLaMaCPP typically requires CMake. I'm unsure if this could be integrated into the UE4 build process or if there's a better approach.
2. While I'm confident in creating the necessary C++ nodes and incorporating the library, the rest of the process is quite unfamiliar to me.
Given the complexity of this task, I'm seeking advice from those with experience in this area. Is this integration plausible, and if so, where should I begin? Any guidance or suggestions would be greatly appreciated!
\*EDIT\*
Additionally, I'd imagine that using the CPU would be the best bet for this, since the whole GPU could then be used for the game, meaning far less VRAM would be available. Also, it's more likely someone has more system RAM than VRAM. | 2023-06-08T13:48:00 | https://www.reddit.com/r/LocalLLaMA/comments/1449she/need_advice_on_integrating_llamacpp_with_ue4_for/ | Funny-Run-1824 | self.LocalLLaMA | 2023-06-08T14:01:30 | 0 | {} | 1449she | false | null | t3_1449she | /r/LocalLLaMA/comments/1449she/need_advice_on_integrating_llamacpp_with_ue4_for/ | false | false | self | 9 | null |
Best open source model for maths problem solving and tool usage? | 6 | Hi all, I'm curious, what's the best model + framework combination that produces decent results with high school + early university level maths and engineering problems? I understand most LLaMA models struggle with logical reasoning and maths, but I was wondering if there were any models that can use tools like code interpreters, visualisation libraries and WolframAlpha. I read that Orca does quite well on reasoning but that's not open sourced. | 2023-06-08T14:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/144ax8i/best_open_source_model_for_maths_problem_solving/ | zeaussiestew | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144ax8i | false | null | t3_144ax8i | /r/LocalLLaMA/comments/144ax8i/best_open_source_model_for_maths_problem_solving/ | false | false | self | 6 | null |
N_THREAD for llama.cpp on M1 Mac | 4 | I have an M1 MacBook Air, which is spec'd as 4 performance cores and 4 efficiency cores. Is it better to set N\_THREAD for llama.cpp to 4 or 8 on this CPU? | 2023-06-08T15:26:34 | https://www.reddit.com/r/LocalLLaMA/comments/144c96z/n_thread_for_llamacpp_on_m1_mac/ | zx400 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144c96z | false | null | t3_144c96z | /r/LocalLLaMA/comments/144c96z/n_thread_for_llamacpp_on_m1_mac/ | false | false | self | 4 | null |
Please help me understand the limitations of context in LLMs. | 4 | Running an LLM with a 2,000-token context length seems feasible on reasonable consumer hardware.
From the OpenAI docs, 1,000 tokens is about 750 words, so 2K tokens means a context length of about 1,500 words, which is about 6 pages of A4 documents, fully typed out.
That seems much more than sufficient for everyday tasks/conversations. I can barely remember one A4 page of conversation history between my friends, let alone 5-6 pages.
So why is context length such a limitation? Do our local LLMs have more tokens per word (so that 2K tokens is equivalent to less words)? | 2023-06-08T15:35:34 | https://www.reddit.com/r/LocalLLaMA/comments/144ch8y/please_help_me_understand_the_limitations_of/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144ch8y | false | null | t3_144ch8y | /r/LocalLLaMA/comments/144ch8y/please_help_me_understand_the_limitations_of/ | false | false | self | 4 | null |
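For English prose the LLaMA tokenizer lands at roughly 1.3-1.4 tokens per word (similar to OpenAI's roughly 750 words per 1,000 tokens), and the window has to hold the system prompt, the whole conversation so far, and the reply being generated, which is why it fills up faster than the raw word count suggests. A quick way to check the ratio on your own text (the Hugging Face repo id below is an assumption; any local copy of the LLaMA tokenizer behaves the same):

```python
# Quick tokens-per-word check with the LLaMA tokenizer.
# The repo id is an assumption; point it at any local LLaMA tokenizer instead if you prefer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
text = "The quick brown fox jumps over the lazy dog and keeps running across the field."
n_tokens = len(tokenizer(text)["input_ids"])
n_words = len(text.split())
print(f"{n_tokens} tokens for {n_words} words -> {n_tokens / n_words:.2f} tokens per word")
```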
Is anyone tracking GPT 3-4 performance on tests? | 4 | OpenAI's models are giving the \*impression\* of declining ability over previous months. I'd like to know if they really are. It occurs to me that some out there are testing performance of new models in comparison to OpenAI's on various benchmarks, but what about tests of the same OpenAI endpoint on the same benchmarks over time?
It seems wise to know and measure that, especially if it becomes easier to gain on them due to their models getting worse. | 2023-06-08T15:48:08 | https://www.reddit.com/r/LocalLLaMA/comments/144cstb/is_anyone_tracking_gpt_34_performance_on_tests/ | phree_radical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144cstb | false | null | t3_144cstb | /r/LocalLLaMA/comments/144cstb/is_anyone_tracking_gpt_34_performance_on_tests/ | false | false | self | 4 | null |
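Nobody seems to publish this systematically, but it is easy to roll your own: re-run a fixed, private question set against the same endpoint on a schedule and log the scores, so any drift shows up as a trend rather than an impression. A hedged sketch using the openai Python package's ChatCompletion API (the questions and expected answers are placeholders):

```python
# Sketch: score a fixed question set against the same OpenAI endpoint and log it with a date,
# so repeated runs reveal drift over time. Questions/answers below are placeholders.
import datetime
import openai

openai.api_key = "sk-..."  # placeholder
QUESTIONS = [
    ("What is 17 * 23? Answer with the number only.", "391"),
]

correct = 0
for question, expected in QUESTIONS:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        temperature=0,
    )
    answer = resp["choices"][0]["message"]["content"]
    correct += int(expected in answer)

print(datetime.date.today().isoformat(), f"{correct}/{len(QUESTIONS)} correct")
```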
Looking for folks to share llama.cpp settings/strategies (and models) which will help write creative (interesting), verbose (long), true-to-prompt stories (plus a short discussion of the --multiline-input flag) | 27 | Hi Guys,
I primarily use llama.cpp (although occasionally ooba or koboldcpp) for generating story ideas, snippets, etc to help with my writing (and for my general entertainment to be honest, with how good some of these models are). I primarily use 30b models since that’s what my Mac m2 pro with 32gb RAM can handle, but I’m considering trying some 65b models in ooba in runpod for the hell of it.
Anyway, I wanted to share the settings that have worked best for me with my most recent favorite model, which is guanaco 33b (q4\_0), but more than that, hear what’s working best for others.
In my experience, some settings are extremely model-dependent, especially --temperature, --repeat_last_n, and --repeat_penalty, but they also seem to be the most important for the criteria I mentioned above.
For me, the most important of the criteria I've listed above is verbosity, since creativity will usually go hand-in-hand with it — with a BIG caveat, and that is that the model doesn't repeat itself. For me, that's what makes guanaco 30b (and possibly 13b? Haven't tried, honestly) so great — it almost never gets stuck in repetitive loops.
With guanaco 30b, if I happen to hit on a good seed generation, I’ve noticed that I can leave the model to its own devices for a few hours and come back and find some pretty great ideas, and fairly often I find myself laughing out loud at some of the creative bits it’s come up with. But most importantly, it just KEEPS WRITING more often than not. Of course a huge amount of what it generates is still sort of ‘boilerplate’ writing, obviously AI generated, etc, but that’s going to be true with any LLM, including even the vaunted GPT-4 ;)
The second most important criterion is staying true to the prompt, since what good is verbosity if it's COMPLETELY random? I mean it's a given that we're dealing with 2048 context (for now), so going 'off script' is inevitable, but also desirable sometimes, of course. But in my case, my prompt files tend to be a list of story ideas that I want incorporated into the end result, rules for the world, etc. Maybe I'm off base with this approach though — please share yours!
So without further ado, here’s my MacOS command line parameters for llama.cpp.
./main -m [model] --repeat_penalty 1.15 --color -i -r "Human:" -f [prompt] -c 2048 --temp .8 --mirostat 2 --top_p 0.88 -b 512 --keep -1 -t 6 --repeat_last_n 512 --ignore-eos --interactive-first --multiline-input --mlock
First, I should mention that if you haven't tried --multiline-input (originally named author mode), it's sort of a game-changer for llama.cpp, and pretty much the main reason I don't use GUIs anymore (besides raw speed). With this flag enabled, ^C (control-c) allows you to interrupt inference and type whatever you want in order to change the direction of the narrative and continue inference afterwards. So for example, if the model generates "at the end of the day the magician decided to pack up his things and" with the obvious intention of ending the scene, you can simply hit ^C and type something like "start over and cast an even more impressive spell that would create something the world had never seen before" and then type "/" + [enter] to indicate that you want to continue inference on the same line where you left off (the alternative is to type "\" + [enter] to start inference on a new line). So now the direction of the story will likely jump back into the previous scene rather than wrap things up, or at least it becomes more likely.
BTW, it's important to note that with the --multiline-input flag enabled, inference will never start until you type "\" or "/" + [enter], so keep that in mind if it seems like llama.cpp is hung up after executing.
Next big question from me is, is anyone NOT using mirostat these days, and instead using top_k sampling? This is a case where I REALLY don't understand what's happening behind the scenes, but I just find that the writing is 'better' with mirostat sampling enabled. Verbosity increases, 'intelligence' increases, etc.
One thing that probably jumps out at people is the fact that I have --repeat_last_n set so high. The fact is that many (most?) models devolve into gibberish pretty quickly when using such a high value for this parameter, but for some reason guanaco is a champ in this regard and definitely performs better with that higher value (haven't really experimented with even higher values to be honest).
I’m not sure that my reverse prompt is correct for this model, and I think instruct mode is called for since this is an Alpaca-based model, but honestly I haven’t noticed any difference leaving things as they are above. Again, I welcome any feedback on this subject since I’m just a user and most likely (definitely!) doing some things wrong. I truly don’t understand why instruct mode is called for and what difference it makes, since I haven’t noticed anything, but again, I’m probably missing something.
I’m sure I’m forgetting something but this post is getting long and rambly so I’ll stop here. I would love to hear about what I’m doing wrong or where I’m off base, what others have found works better, and any other suggestions / experiences people have in getting better results. Thanks!! | 2023-06-08T16:06:24 | https://www.reddit.com/r/LocalLLaMA/comments/144daeh/looking_for_for_folks_to_share_llamacpp/ | spanielrassler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144daeh | false | null | t3_144daeh | /r/LocalLLaMA/comments/144daeh/looking_for_for_folks_to_share_llamacpp/ | false | false | self | 27 | null |
What are your favorite LLMs? | 36 | What do you guys think the best ones are right now? | 2023-06-08T16:17:01 | https://www.reddit.com/r/LocalLLaMA/comments/144dkef/what_are_your_favorite_llms/ | Basic_Description_56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144dkef | false | null | t3_144dkef | /r/LocalLLaMA/comments/144dkef/what_are_your_favorite_llms/ | false | false | self | 36 | null |
Best models you've used so far? What type of tasks do they excel at? | 1 | [removed] | 2023-06-08T16:55:23 | https://www.reddit.com/r/LocalLLaMA/comments/144ekit/best_models_youve_used_so_far_what_type_of_tasks/ | EcstaticVenom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144ekit | false | null | t3_144ekit | /r/LocalLLaMA/comments/144ekit/best_models_youve_used_so_far_what_type_of_tasks/ | false | false | default | 1 | null |
xxB is so much better than xxB… but is that true for narratives? | 6 | I consistently see claims that 30B is so much better than 13B and 65B is better than 30B.
I saw a clear difference between 7B and 13B for narratives where 13B did much better at preventing key character elements from getting mixed up.
In my limited, personal experience 13B isn’t that different from 30B for narratives, stories, role playing, adventure stories.
Any recommended models for these use cases?
Also, does anyone else have good examples that agree or disagree with my experience in this context?
It seems everyone is focused on performance metrics that I don’t care about. | 2023-06-08T17:48:10 | https://www.reddit.com/r/LocalLLaMA/comments/144fy5o/xxb_is_so_much_better_than_xxb_but_is_that_true/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144fy5o | false | null | t3_144fy5o | /r/LocalLLaMA/comments/144fy5o/xxb_is_so_much_better_than_xxb_but_is_that_true/ | false | false | self | 6 | null |
AGiXT: A local automation platform with memories and SmartGPT-like prompting. Works with Ooba/LCPP/GPT4All, and more | 52 | 2023-06-08T18:07:39 | https://github.com/Josh-XT/AGiXT | Venadore | github.com | 1970-01-01T00:00:00 | 0 | {} | 144ggv3 | false | null | t3_144ggv3 | /r/LocalLLaMA/comments/144ggv3/agixt_a_local_automation_platform_with_memories/ | false | false | 52 | {'enabled': False, 'images': [{'id': 'Izfbxi_D85Fn68BmJR0gJ_6AbF7RAcU_ydbmK9OuTxE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pfzlissIJM8SkuNZCbA3G_m5t78FewSVZBZnghf1l_U.jpg?width=108&crop=smart&auto=webp&s=0b4a1a2165595cc2aa074f395769c926906e0375', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pfzlissIJM8SkuNZCbA3G_m5t78FewSVZBZnghf1l_U.jpg?width=216&crop=smart&auto=webp&s=354550e3151494c26dab82dd04ecce4f74648d7d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pfzlissIJM8SkuNZCbA3G_m5t78FewSVZBZnghf1l_U.jpg?width=320&crop=smart&auto=webp&s=d405e4945508089ecb627bf169b516decca7aa2c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pfzlissIJM8SkuNZCbA3G_m5t78FewSVZBZnghf1l_U.jpg?width=640&crop=smart&auto=webp&s=474f0b33af526ee1866c7e4b8bfe383884024ca4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pfzlissIJM8SkuNZCbA3G_m5t78FewSVZBZnghf1l_U.jpg?width=960&crop=smart&auto=webp&s=66a7b81e74e33fe3dc13f21d466883134947b200', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pfzlissIJM8SkuNZCbA3G_m5t78FewSVZBZnghf1l_U.jpg?width=1080&crop=smart&auto=webp&s=da97a65bc12ed0f57d7b144a2e3723bcf0990b09', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pfzlissIJM8SkuNZCbA3G_m5t78FewSVZBZnghf1l_U.jpg?auto=webp&s=c92407e690f97e4493248e6f57683c56b2af0669', 'width': 1200}, 'variants': {}}]} |
||
It looks likely that the MMLU score on Hugging Face's LLM leaderboard is wrong after all. | 56 | "Is Falcon really better than LLaMA?
Short take: probably not.
Longer take: we reproduced LLaMA 65B eval on MMLU and we got 61.4, close to the official number (63.4), much higher than its Open LLM Leaderboard number (48.8), and clearly higher than Falcon (52.7)."
This also tallies with the InternLM paper's result for LLaMA on MMLU (63.5). | 2023-06-08T19:03:32 | https://www.reddit.com/r/LocalLLaMA/comments/144hwk1/it_looks_likely_that_the_mmlu_score_on/ | ambient_temp_xeno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144hwk1 | false | null | t3_144hwk1 | /r/LocalLLaMA/comments/144hwk1/it_looks_likely_that_the_mmlu_score_on/ | false | false | self | 56 | null |
Would it be a better value to get a 3090 or a 4090? | 8 | Accidentally ordered a 3090 instead of a 4090, and I was wondering whether I should keep the 3090 or just go for the 4090?
Will probably use for inference but may train some loras. | 2023-06-08T19:45:34 | https://www.reddit.com/r/LocalLLaMA/comments/144izu9/would_it_be_a_better_value_to_get_a_3090_or_a_4090/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144izu9 | false | null | t3_144izu9 | /r/LocalLLaMA/comments/144izu9/would_it_be_a_better_value_to_get_a_3090_or_a_4090/ | false | false | self | 8 | null |
Is there any LOCAL GPT Code Interpreter? | 1 | [removed] | 2023-06-08T20:22:52 | https://www.reddit.com/r/LocalLLaMA/comments/144jzae/is_there_any_local_gpt_code_interpreter/ | Glass_Philosophy6941 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144jzae | false | null | t3_144jzae | /r/LocalLLaMA/comments/144jzae/is_there_any_local_gpt_code_interpreter/ | false | false | default | 1 | null |
[P] AlpacaEval : An Automatic Evaluator for Instruction-following Language Models | 20 | Hi everyone!
With the Alpaca team ( u/rtaori and u/real-lxuechen and others), we just released a new package for evaluating chat LLMs: [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval)
In particular, we release:
* a (hopefully evolving) [**leaderboard**](https://tatsu-lab.github.io/alpaca_eval/) of chat models. To add a new model follow [these simple instructions](https://github.com/tatsu-lab/alpaca_eval#contributing-a-model)
* an [**automatic evaluator**](https://github.com/tatsu-lab/alpaca_eval) that is easy to use, fast, cheap, and validated against 20K human annotations. It actually has a higher agreement with the majority vote of humans than a single human annotator! Of course, our method still has limitations which we discuss [here](https://github.com/tatsu-lab/alpaca_eval)!
* [**Toolkit for building automatic evaluators**](https://github.com/tatsu-lab/alpaca_eval#analysis): a simple interface for building advanced automatic evaluators (e.g. with caching, batching, or multi-annotators) and analyzing them (quality, price, speed, statistical power, bias, variance etc).
* [**Human evaluation data**](https://huggingface.co/datasets/tatsu-lab/alpaca_eval/tree/main): 20K human preferences between a given and reference model on our evaluation set. 2.5K of these are cross-annotations (4 humans annotating the same 650 examples).
See the [README](https://github.com/tatsu-lab/alpaca_eval/tree/main) for many more details!
https://preview.redd.it/mrg9n41oyu4b1.png?width=1086&format=png&auto=webp&v=enabled&s=92df116d75c09802ac28d3c40bebd5a891294914
PS: note that it's the first time I post to this awesome subreddit, sorry if I made a mistake! | 2023-06-08T21:05:04 | https://www.reddit.com/r/LocalLLaMA/comments/144l3t7/p_alpacaeval_an_automatic_evaluator_for/ | yannDubs | self.LocalLLaMA | 2023-06-09T05:01:39 | 0 | {} | 144l3t7 | false | null | t3_144l3t7 | /r/LocalLLaMA/comments/144l3t7/p_alpacaeval_an_automatic_evaluator_for/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'UOHb7kR8Usj8Bv-DKoxqF3IRg8YCvQZ5S5152-cMfYk', 'resolutions': [{'height': 141, 'url': 'https://external-preview.redd.it/kMKBoKKWGXahfUfJtqVVHpH9xfvUlqA7Nl3u2-BIqPg.png?width=108&crop=smart&auto=webp&s=d4429ba0d2eddbc166cf6171fc4871aa45fdd7e8', 'width': 108}, {'height': 282, 'url': 'https://external-preview.redd.it/kMKBoKKWGXahfUfJtqVVHpH9xfvUlqA7Nl3u2-BIqPg.png?width=216&crop=smart&auto=webp&s=11eee44fcdd9f29c6214f21e5685f06662b448a8', 'width': 216}, {'height': 417, 'url': 'https://external-preview.redd.it/kMKBoKKWGXahfUfJtqVVHpH9xfvUlqA7Nl3u2-BIqPg.png?width=320&crop=smart&auto=webp&s=799b05325cc45b8fe54f589c8acfbd76fbe5a3ae', 'width': 320}, {'height': 835, 'url': 'https://external-preview.redd.it/kMKBoKKWGXahfUfJtqVVHpH9xfvUlqA7Nl3u2-BIqPg.png?width=640&crop=smart&auto=webp&s=788a6e13810f54b7882b4eec08b995553f170ef7', 'width': 640}, {'height': 1253, 'url': 'https://external-preview.redd.it/kMKBoKKWGXahfUfJtqVVHpH9xfvUlqA7Nl3u2-BIqPg.png?width=960&crop=smart&auto=webp&s=ba7667ff7b1ca34a27690ca3a5a655427a8ea04e', 'width': 960}, {'height': 1410, 'url': 'https://external-preview.redd.it/kMKBoKKWGXahfUfJtqVVHpH9xfvUlqA7Nl3u2-BIqPg.png?width=1080&crop=smart&auto=webp&s=6ec67cef703015580f02a9d175f646f8aa47f57e', 'width': 1080}], 'source': {'height': 1418, 'url': 'https://external-preview.redd.it/kMKBoKKWGXahfUfJtqVVHpH9xfvUlqA7Nl3u2-BIqPg.png?auto=webp&s=bc2bba494cbf0b9a1673805eec6d59c58028ffec', 'width': 1086}, 'variants': {}}]} |
|
What's an alternative to oobabooga? | 12 | I've been out of the loop for the last month. What's everyone using these days?
Bonus question: what model is recommended for a RTX 3090. | 2023-06-08T21:58:02 | https://www.reddit.com/r/LocalLLaMA/comments/144mhwv/whats_an_alternative_to_oobabooga/ | Erdeem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144mhwv | false | null | t3_144mhwv | /r/LocalLLaMA/comments/144mhwv/whats_an_alternative_to_oobabooga/ | false | false | self | 12 | null |
Mac Pro M2 Ultra for LLM | 1 | [deleted] | 2023-06-08T22:18:02 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 144n0zt | false | null | t3_144n0zt | /r/LocalLLaMA/comments/144n0zt/mac_pro_m2_ultra_for_llm/ | false | false | default | 1 | null |
||
M2 Ultra for LLM inference | 13 | I currently have 2x4090s in my home rack. My servers are somewhat limited due to the 130GB/s memory bandwidth, and I've been considering getting an A100 to test some more models.
I usually don't like purchasing from Apple, but the Mac Pro M2 Ultra with 192GB of memory and 800GB/s bandwidth seems like it might be a really good deal. It's more affordable than buying an A100, and I would be getting a full server, not just the GPU.
Is anyone else considering buying one for this purpose?
The downsides I currently see are the lack of redundant power supplies and IPMI, which prevents me from deploying it in the data centers where I am located. | 2023-06-08T22:19:58 | https://www.reddit.com/r/LocalLLaMA/comments/144n2p6/m2_ultra_for_llm_inference/ | kabelman93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144n2p6 | false | null | t3_144n2p6 | /r/LocalLLaMA/comments/144n2p6/m2_ultra_for_llm_inference/ | false | false | self | 13 | null |
Starcoder+ seems extremely promising. 16B parameter model trained on 1.6T tokens and 8k ctxlen, and now it can do more than just code. | 140 | OpenRail license is a bit disappointing though. But at least it’s commercially usable. | 2023-06-08T23:03:14 | https://huggingface.co/bigcode/starcoderplus | pokeuser61 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 144o5jm | false | null | t3_144o5jm | /r/LocalLLaMA/comments/144o5jm/starcoder_seems_extremely_promising_16b_parameter/ | false | false | 140 | {'enabled': False, 'images': [{'id': 'G9Ejw1s4YsNCM4-tW7CgKlAzoKmXMYrhATLRnOjWAGA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rW7j4MTp_WRYkhF7NMQPHcHPDSdIFwQjv3PS_hD14gU.jpg?width=108&crop=smart&auto=webp&s=83133371bf3395598f539983068513947d856f59', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rW7j4MTp_WRYkhF7NMQPHcHPDSdIFwQjv3PS_hD14gU.jpg?width=216&crop=smart&auto=webp&s=569f9320a8823a0c3624147e94e0bd89f41a2e9c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rW7j4MTp_WRYkhF7NMQPHcHPDSdIFwQjv3PS_hD14gU.jpg?width=320&crop=smart&auto=webp&s=fb82b1817d52d0f948fcd8d996fdee9f386be116', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rW7j4MTp_WRYkhF7NMQPHcHPDSdIFwQjv3PS_hD14gU.jpg?width=640&crop=smart&auto=webp&s=a01fd08cfbabdc95668d278aeed9cff227e6c73e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rW7j4MTp_WRYkhF7NMQPHcHPDSdIFwQjv3PS_hD14gU.jpg?width=960&crop=smart&auto=webp&s=f6ffe416f8892b0f671e10008f0a491c318ddae6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rW7j4MTp_WRYkhF7NMQPHcHPDSdIFwQjv3PS_hD14gU.jpg?width=1080&crop=smart&auto=webp&s=209aa621e95b7e662b6ef0feea85e0060bb171c6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rW7j4MTp_WRYkhF7NMQPHcHPDSdIFwQjv3PS_hD14gU.jpg?auto=webp&s=f050b0e231c05aba0f3f760d09b4021f08c091d0', 'width': 1200}, 'variants': {}}]} |
|
AI Showdown: WizardLM 30b vs. Guanaco 33b, GPT-4 as the judge (test in comments) | 1 | [deleted] | 2023-06-08T23:24:23 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 144oo5m | false | null | t3_144oo5m | /r/LocalLLaMA/comments/144oo5m/ai_showdown_wizardlm_30b_vs_guanaco_33b_gpt4_as/ | false | false | default | 1 | null |
||
BigCode's StarCoder & StarCoder Plus; HuggingfaceH4's StarChat Beta | 39 | A cornucopia of credible coding creators:
# BigCode's StarCoder
>The StarCoder models are 15.5B parameter models trained on 80+ programming languages from The Stack (v1.2), with opt-out requests excluded. The model uses Multi Query Attention, a context window of 8192 tokens, and was trained using the Fill-in-the-Middle objective on 1 trillion tokens.
* Original model: [https://huggingface.co/bigcode/starcoder](https://huggingface.co/bigcode/starcoder)
* 4bit GPTQ for GPU inference: [https://huggingface.co/TheBloke/starcoder-GPTQ](https://huggingface.co/TheBloke/starcoder-GPTQ)
* 4, 5 and 8-bit GGMLs for CPU inference: [https://huggingface.co/TheBloke/starcoder-GGML](https://huggingface.co/TheBloke/starcoder-GGML)
# BigCode's StarCoder Plus
> StarCoderPlus is a fine-tuned version of StarCoderBase on 600B tokens from the English web dataset RedefinedWeb combined with StarCoderData from The Stack (v1.2) and a Wikipedia dataset. It's a 15.5B parameter Language Model trained on English and 80+ programming languages. The model uses Multi Query Attention, a context window of 8192 tokens, and was trained using the Fill-in-the-Middle objective on 1.6 trillion tokens.
* Original model: [https://huggingface.co/bigcode/starcoderplus](https://huggingface.co/bigcode/starcoderplus)
* 4bit GPTQ for GPU inference: [https://huggingface.co/TheBloke/starcoderplus-GPTQ](https://huggingface.co/TheBloke/starcoderplus-GPTQ)
* 4, 5 and 8-bit GGMLs for CPU inference: [https://huggingface.co/TheBloke/starcoderplus-GGML](https://huggingface.co/TheBloke/starcoderplus-GGML)
# HuggingfaceH4's StarChat Beta
>StarChat is a series of language models that are trained to act as helpful coding assistants. StarChat Beta is the second model in the series, and is a fine-tuned version of [**StarCoderPlus**](https://huggingface.co/bigcode/starcoderplus) that was trained on an [**"uncensored"**](https://erichartford.com/uncensored-models) variant of the [**openassistant-guanaco** **dataset**](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). We found that removing the in-built alignment of the OpenAssistant dataset boosted performance on the [**Open LLM Leaderboard**](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) and made the model more helpful at coding tasks. However, this means that model is likely to generate problematic text when prompted to do so and should only be used for educational and research purposes.
* Original model: [https://huggingface.co/HuggingFaceH4/starchat-beta](https://huggingface.co/HuggingFaceH4/starchat-beta)
* 4bit GPTQ for GPU inference: [https://huggingface.co/TheBloke/starchat-beta-GPTQ](https://huggingface.co/TheBloke/starchat-beta-GPTQ)
* 4, 5 and 8-bit GGMLs for CPU inference: [https://huggingface.co/TheBloke/starchat-beta-GGML](https://huggingface.co/TheBloke/starchat-beta-GGML) | 2023-06-08T23:43:14 | https://www.reddit.com/r/LocalLLaMA/comments/144p3z2/bigcodes_starcoder_starcoder_plus_huggingfaceh4s/ | The-Bloke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144p3z2 | false | null | t3_144p3z2 | /r/LocalLLaMA/comments/144p3z2/bigcodes_starcoder_starcoder_plus_huggingfaceh4s/ | false | false | self | 39 | {'enabled': False, 'images': [{'id': 'Xjiks6ozhF3an0JzRn35lou5gsxDwDLFUQNbyfOv_bI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?width=108&crop=smart&auto=webp&s=5e1df44a7d2a0d25c846a5b19bba17fb3200b0c0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?width=216&crop=smart&auto=webp&s=250a0fa3338e4b34c5ec2f57932ed47f3ebf9014', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?width=320&crop=smart&auto=webp&s=887840a3203ae98c0b4922f875f1279a2fb79032', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?width=640&crop=smart&auto=webp&s=f67265b305b7a78dbb21c84df98085b89f21daa0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?width=960&crop=smart&auto=webp&s=5da338f677e7eb61017eb63523ee47e832ab59c6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?width=1080&crop=smart&auto=webp&s=6a985d74a4816077569e3632aa6498742118fe53', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/58836FV-P3PJw5k3-Uh1pg-RkbaJdO-KYvQHxlpq_YU.jpg?auto=webp&s=2357579464d826516258220bffec67832e44394e', 'width': 1200}, 'variants': {}}]} |
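For anyone who wants to poke at the unquantized checkpoints, a minimal transformers sketch is below; it assumes a GPU with roughly 30+ GB of memory for fp16 (otherwise use the GPTQ/GGML conversions linked above), the `accelerate` package for `device_map="auto"`, and that you have accepted the license on the model page if it is gated.

```python
# Minimal sketch for sampling from StarCoderPlus with transformers (fp16, single prompt).
# Assumes enough GPU memory and the `accelerate` package for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigcode/starcoderplus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```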
Mark Zuckerberg: Future of AI at Meta, Facebook, Instagram, and WhatsApp | Lex Fridman Podcast #383 | 2 | [deleted] | 2023-06-08T23:58:19 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 144pgxs | false | null | t3_144pgxs | /r/LocalLLaMA/comments/144pgxs/mark_zuckerberg_future_of_ai_at_meta_facebook/ | false | false | default | 2 | null |
||
How to train LORAs effectively?? | 0 | [deleted] | 2023-06-09T00:09:59 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 144pqz3 | false | null | t3_144pqz3 | /r/LocalLLaMA/comments/144pqz3/how_to_train_loras_effectively/ | false | false | default | 0 | null |
||
Zuckerberg discussing LLaMA v2 and Open source on Lex Friedman podcast | 42 | The v2 LLaMA discussion starts at the 30 min mark, but there's also good discussion on LLaMA before that. Apparently YT links are blocked in this sub (can't even mention the word)? So just go search it on YT (sorry I can't link it directly). | 2023-06-09T00:24:30 | https://www.reddit.com/r/LocalLLaMA/comments/144q2hs/zuckerberg_discussing_llama_v2_and_open_source_on/ | jd_3d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144q2hs | false | null | t3_144q2hs | /r/LocalLLaMA/comments/144q2hs/zuckerberg_discussing_llama_v2_and_open_source_on/ | false | false | self | 42 | null |
DoppeLLaMA - An LLM-powered doppelganger bot that automatically scrapes and finetunes LLaMA-7b on user messages with QLoRA | 1 | [removed] | 2023-06-09T01:00:27 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 144quk4 | false | null | t3_144quk4 | /r/LocalLLaMA/comments/144quk4/doppellama_an_llmpowered_doppelganger_bot_that/ | false | false | default | 1 | null |
||
Manticore-Falcon-Wizard-Orca-LLaMA | 1 | 2023-06-09T01:22:49 | FaLlaMAnticore | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 144rc7h | false | null | t3_144rc7h | /r/LocalLLaMA/comments/144rc7h/manticorefalconwizardorcallama/ | false | false | default | 1 | null |
||
All Model Leaderboards (that I know) | 81 | Here are all the ones I know so far, did I miss any? Maybe they should be shoved into the sidebar/wiki somewhere? I have a tab open for each of them, because Johnny-5 must have input.
**First, here's /u/_underlines_ collection of leaderboards already compiled:**
https://github.com/underlines/awesome-marketing-datascience/blob/master/llm-tools.md#benchmarking
**Here are the ones I have:**
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
https://llm-leaderboard.streamlit.app/
https://chat.lmsys.org/?leaderboard
https://huggingface.co/spaces/mike-ravkine/can-ai-code-results
https://github.com/the-crypt-keeper/can-ai-code/tree/main (github for can-ai-code)
https://docs.google.com/spreadsheets/d/1ikqqIaptv2P4_15Ytzro46YysCldKY7Ub2wcX5H1jCQ/edit#gid=0
https://tatsu-lab.github.io/alpaca_eval/
https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit?usp=sharing&ouid=102314596465921370523&rtpof=true&sd=true (my own)
That's all I got! It's a lot, and all of them are incomplete, and some of them aren't great at certain things, and others are possibly making mistakes in their testing (*cough* Hugging Face), but by their powers combined they are Captain Leaderboards.
They have been peppered in this sub as they came online, and I've been collecting them like pokemon, so I just wanted to share for all y'all's convenience. Go ahead, open a tab for each of them. Also another tab for huggingface "ggml" search. And another for koboldcpp github releases. Then start hitting F5 on all of them all day. Then pray to the Borg you have RAM left over for running models. Don't forget the tab for "AI Explained" YT channel, that dude is great!
**Additional leaderboards from comments:**
https://crfm.stanford.edu/helm/latest/?group=core_scenarios
https://github.com/my-other-github-account/llm-humaneval-benchmarks
https://github.com/FranxYao/chain-of-thought-hub
https://cevalbenchmark.com/static/leaderboard.html
https://bellard.org/ts_server/
https://docs.google.com/spreadsheets/d/1TYBNr_UPJ7wCzJThuk5ysje7K1x-_62JhBeXDbmrjA8/edit#gid=0
https://github.com/aigoopy/llm-jeopardy | 2023-06-09T01:28:10 | https://www.reddit.com/r/LocalLLaMA/comments/144rg6a/all_model_leaderboards_that_i_know/ | YearZero | self.LocalLLaMA | 2023-06-21T17:42:09 | 0 | {'gid_1': 1} | 144rg6a | false | null | t3_144rg6a | /r/LocalLLaMA/comments/144rg6a/all_model_leaderboards_that_i_know/ | false | false | self | 81 | {'enabled': False, 'images': [{'id': 'd4O3Vi9jbeNGcltMu-ViMoU1jAbuCp93_nG_qWsir5w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zRoiJ7bxUxoU5BBdzM0h776kYlC3v6pmb_8EyoeJNYs.jpg?width=108&crop=smart&auto=webp&s=5fc0af364f1aa2d9fcd4aea62510108b11c27356', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zRoiJ7bxUxoU5BBdzM0h776kYlC3v6pmb_8EyoeJNYs.jpg?width=216&crop=smart&auto=webp&s=bc9e0812bc85e97bf970c00ee2b89697317b2fdc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zRoiJ7bxUxoU5BBdzM0h776kYlC3v6pmb_8EyoeJNYs.jpg?width=320&crop=smart&auto=webp&s=f490d80ba26082e1ebddb2c426f45006c76d23ca', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zRoiJ7bxUxoU5BBdzM0h776kYlC3v6pmb_8EyoeJNYs.jpg?width=640&crop=smart&auto=webp&s=437c3d1490aa1730e7f62cdf5411d9136453b8be', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zRoiJ7bxUxoU5BBdzM0h776kYlC3v6pmb_8EyoeJNYs.jpg?width=960&crop=smart&auto=webp&s=dc3c04685e00ca7206dcbeb0912b4e00f7afd653', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zRoiJ7bxUxoU5BBdzM0h776kYlC3v6pmb_8EyoeJNYs.jpg?width=1080&crop=smart&auto=webp&s=f37410a078621539d065d552013241e8a1b40d49', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zRoiJ7bxUxoU5BBdzM0h776kYlC3v6pmb_8EyoeJNYs.jpg?auto=webp&s=e9940275f00ab08ac212b6c8bc61ec68df472b46', 'width': 1200}, 'variants': {}}]} |
What's the best koboldcpp command line/settings for this hardware? | 1 | AMD Ryzen 7 5700U (*Accelerated Processing Unit (APU), formerly known as Fusion*) 16gb of ram.
Thank you for your time, things are getting a bit complicated for me and I'm more than vaguely worried I'm not doing it right. (Assuming 13b model, currently messing with [https://huggingface.co/TheBloke/guanaco-13B-GGML](https://huggingface.co/TheBloke/guanaco-13B-GGML)) Kinda want to try the new kquant stuff? guanaco-13B.ggmlv3.q5\_1.bin because using 12.26 of my ram. Trying to make best use of my hardware. 16 is kind of in a weird middle zone. | 2023-06-09T01:32:53 | https://www.reddit.com/r/LocalLLaMA/comments/144rjx2/whats_the_best_koboldcpp_command_linesettings_for/ | Innomen | self.LocalLLaMA | 2023-06-09T01:49:19 | 0 | {} | 144rjx2 | false | null | t3_144rjx2 | /r/LocalLLaMA/comments/144rjx2/whats_the_best_koboldcpp_command_linesettings_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'samfUmSeBaLQDmQIvOGVtOuCn7CHl-hEkygs0jdI6a0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YCja1Xlu7Y5Bq77Mj8vhEvIfbL94JQ52Dmc95bQ5cEM.jpg?width=108&crop=smart&auto=webp&s=3f9429291b577d2b9aa88fcea09acd9f4ed59807', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YCja1Xlu7Y5Bq77Mj8vhEvIfbL94JQ52Dmc95bQ5cEM.jpg?width=216&crop=smart&auto=webp&s=1d3c90315802582887e5953b25e44896eddc3b96', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YCja1Xlu7Y5Bq77Mj8vhEvIfbL94JQ52Dmc95bQ5cEM.jpg?width=320&crop=smart&auto=webp&s=e5928e7bf153c35440a49e0be508b7294a28a785', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YCja1Xlu7Y5Bq77Mj8vhEvIfbL94JQ52Dmc95bQ5cEM.jpg?width=640&crop=smart&auto=webp&s=3a2a12948b4eda688964c4e043cf4e450992dda5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YCja1Xlu7Y5Bq77Mj8vhEvIfbL94JQ52Dmc95bQ5cEM.jpg?width=960&crop=smart&auto=webp&s=bcfa8a9697f506cb3f53f3709db3ee05507068fe', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YCja1Xlu7Y5Bq77Mj8vhEvIfbL94JQ52Dmc95bQ5cEM.jpg?width=1080&crop=smart&auto=webp&s=4584c7579c8c842726be784f76a9c6040f917d05', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YCja1Xlu7Y5Bq77Mj8vhEvIfbL94JQ52Dmc95bQ5cEM.jpg?auto=webp&s=925af079be2a4812da842ebcf06f81d73321516b', 'width': 1200}, 'variants': {}}]} |
Minotaur 13B | 31 | Minotaur 13B is a fine-tuned model on top of LLaMA 13B, based **completely on open datasets**. This ensures the model is reproducible by the community.[https://huggingface.co/openaccess-ai-collective/minotaur-13b](https://huggingface.co/openaccess-ai-collective/minotaur-13b)
GGML versions by /u/The-Bloke here: [https://huggingface.co/TheBloke/minotaur-13B-GGML](https://huggingface.co/TheBloke/minotaur-13B-GGML)
Example responses from the model are available at the bottom of the HF model card.
You can demo the model on HF Spaces on the [Community Chatbot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the \`OAAIC Chatbots\` subtab as well as compare that model as well as other in the Chatbot Arena.
Datasets include:
* **WizardLM**
* **subset of QingyiSi/Alpaca-CoT for roleplay and CoT**
* **GPTeacher-General-Instruct**
* **metaeval/ScienceQA\_text\_only** \- instruct for concise responses
* **openai/summarize\_from\_feedback** \- instruct augmented tl;dr summarization
* **camel-ai/math**
* **camel-ai/physics**
* **camel-ai/chemistry**
* **camel-ai/biology**
* **winglian/evals** \- instruct augmented datasets
* custom synthetic datasets around misconceptions, in-context qa, jokes, N-tasks problems, and context-insensitivity
* ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the train split
* **hellaswag** \- 30K+ rows, instruct augmented for detailed explanations, derived from the train split
* **riddle\_sense** \- instruct augmented
* **gsm8k** \- instruct augmented
* prose generation
​ | 2023-06-09T01:59:04 | https://www.reddit.com/r/LocalLLaMA/comments/144s3us/minotaur_13b/ | winglian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144s3us | false | null | t3_144s3us | /r/LocalLLaMA/comments/144s3us/minotaur_13b/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': 'T3wBUyFgEsAyAUHcD5QU3hm-oHjQHxrjEhGhkRHinUI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jdErrv8afgyyuCwj8w1Y3OArJsvShh942iZAEVuc8SU.jpg?width=108&crop=smart&auto=webp&s=ac75befd634807b80437ea94823dbf5cdcdb6bb2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jdErrv8afgyyuCwj8w1Y3OArJsvShh942iZAEVuc8SU.jpg?width=216&crop=smart&auto=webp&s=a81f47cecc08860b2c46f3581fcc2125de5c0535', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jdErrv8afgyyuCwj8w1Y3OArJsvShh942iZAEVuc8SU.jpg?width=320&crop=smart&auto=webp&s=f8ae55a7e354b4c1af380a7fd186285865527283', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jdErrv8afgyyuCwj8w1Y3OArJsvShh942iZAEVuc8SU.jpg?width=640&crop=smart&auto=webp&s=a6dcaaf761ccac3ae8aea3b0f23fd30dcb7c9552', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jdErrv8afgyyuCwj8w1Y3OArJsvShh942iZAEVuc8SU.jpg?width=960&crop=smart&auto=webp&s=266d6fc4c9c59583061fbdc917bae41d31f5c39d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jdErrv8afgyyuCwj8w1Y3OArJsvShh942iZAEVuc8SU.jpg?width=1080&crop=smart&auto=webp&s=e2282f47b9569dec90848b3bcf461a7b907a2900', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jdErrv8afgyyuCwj8w1Y3OArJsvShh942iZAEVuc8SU.jpg?auto=webp&s=4bbf3f56a5f082a12844ea522524091105876e58', 'width': 1200}, 'variants': {}}]} |
DoppeLLaMA - LLM-powered bot that locally fine-tunes LLaMA-7B on user messages with QLoRA | 34 | I had this idea for a while and now that QLoRA has been released it's finally feasible to do on a local PC. The bot has an interface to automatically scrape messages from [A certain instant messaging client] server and then finetune a LoRA model on the messages from that server. This allows it to learn different users' interactions and speaking styles and then replicate them: an AI doppelganger.
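Under the hood, this kind of pipeline is essentially QLoRA fine-tuning on the scraped messages. A rough sketch of what that looks like with transformers, peft and bitsandbytes; this is not the project's actual code, and the base checkpoint, dataset shape and hyperparameters are assumptions:

```python
# Rough QLoRA sketch (not DoppeLLaMA's actual code): 4-bit base model plus LoRA adapters
# trained on scraped chat lines. Requires transformers, peft, bitsandbytes, datasets.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "huggyllama/llama-7b"  # assumed base checkpoint
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# messages: "username: text" lines scraped beforehand (tiny illustrative sample).
messages = ["alice: anyone up for a game tonight?", "bob: sure, 8pm works for me"]
ds = Dataset.from_dict({"text": messages}).map(
    lambda ex: tok(ex["text"], truncation=True, max_length=512), batched=True)

Trainer(model=model, train_dataset=ds,
        args=TrainingArguments("doppel-lora", per_device_train_batch_size=1,
                               gradient_accumulation_steps=8, num_train_epochs=1,
                               learning_rate=2e-4, logging_steps=10),
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()
```

The resulting LoRA adapter is small, so it can be swapped per user or per server without touching the 4-bit base weights.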
Right now you can run it locally on Linux/Mac (bitsandbytes isn't compatible with windows, unfortunately) and I've also set up a Google colab, all you need is to just drop in a new bot token.
Hopefully you guys can have some fun with it. | 2023-06-09T03:44:17 | https://github.com/Urist-Mc-Urist/DoppeLLaMA | MakesAbhorrentPosts | github.com | 1970-01-01T00:00:00 | 0 | {} | 144uakb | false | null | t3_144uakb | /r/LocalLLaMA/comments/144uakb/doppellama_llmpowered_bot_that_locally_finetunes/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'QQr3WpWKQ6V2tJItPhu52wQPw8_ed-5nqvKZtzsalAo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xOZic46s-3h-TGhXj0cCnMHpipGBiNUHov135DfOBXY.jpg?width=108&crop=smart&auto=webp&s=2d1d9f296d551a3273ec2761fcd1877763594e69', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xOZic46s-3h-TGhXj0cCnMHpipGBiNUHov135DfOBXY.jpg?width=216&crop=smart&auto=webp&s=b41ec72034c37452d62f23c561bebc9a5d4457e5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xOZic46s-3h-TGhXj0cCnMHpipGBiNUHov135DfOBXY.jpg?width=320&crop=smart&auto=webp&s=95e3ddc00b1464df6ca794fa9221ea3dc171d711', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xOZic46s-3h-TGhXj0cCnMHpipGBiNUHov135DfOBXY.jpg?width=640&crop=smart&auto=webp&s=facceb606f11aa0af93ab076c99a42a505e467a0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xOZic46s-3h-TGhXj0cCnMHpipGBiNUHov135DfOBXY.jpg?width=960&crop=smart&auto=webp&s=ffefcc7515994fa611f8ac041f09cd87707e9555', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xOZic46s-3h-TGhXj0cCnMHpipGBiNUHov135DfOBXY.jpg?width=1080&crop=smart&auto=webp&s=696550fa64af9b4a4e15cd9afec330d1e9d021a2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xOZic46s-3h-TGhXj0cCnMHpipGBiNUHov135DfOBXY.jpg?auto=webp&s=af9c1df6f8f74d8bc24ef78755158caa3fcc3a67', 'width': 1200}, 'variants': {}}]} |
|
Damn, I was so satisfied with my 3080 with 10GB of VRAM until I found this subreddit. | 286 | It maxes out all my games and seems to be able to handle the latest Stable Diffusion without too many problems.
And then I read about Wizard 30B. FML, I would love to play around with the cutting edge of local AI, but for the first time in my life (besides trying to run a maxed 4k Cyberpunk RTX) my quaint little 3080 is not enough.
Now y’all got me planning to save up and try to buy a new 4090 rig next year with an unholy amount of ram…. It’s pretty amazing to be living through this - but as with all such eras, torturously expensive. | 2023-06-09T03:46:22 | https://www.reddit.com/r/LocalLLaMA/comments/144uc0l/damn_i_was_so_satisfied_with_my_3080_with_10gb_of/ | katiecharm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144uc0l | false | null | t3_144uc0l | /r/LocalLLaMA/comments/144uc0l/damn_i_was_so_satisfied_with_my_3080_with_10gb_of/ | false | false | self | 286 | null |
LoRA training ggml? | 2 | [removed] | 2023-06-09T04:13:50 | https://www.reddit.com/r/LocalLLaMA/comments/144uvmq/lora_training_ggml/ | Equal_Station2752 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144uvmq | false | null | t3_144uvmq | /r/LocalLLaMA/comments/144uvmq/lora_training_ggml/ | false | false | default | 2 | null |
Mark Zuckerberg on upcoming LLaMA v2 | 193 | This comes from Lex Fridman's recent podcast with Mark Zuckerberg.
**Summary:**
* No news to announce yet but they're developing it and leaning toward open source.
* Crowdsourcing finetuning with the community has been considered, like a Wikipedia style way for the community to drive finetuning.
* WhatsApp will be integrated with future LLaMA versions. Metaverse and all future products will be underlied by AI language infrastructure.
* Mark sees a future of smaller LLMs covering each category instead of one major LLM like OpenAI. Smaller LLaMA models could be good for this.
**On LLaMA v1:**
>Mark: LLaMA is the language model that our research team made, and we did a limited open source release for it. For the first one, there were a bunch of questions around whether we should be releasing this commercially, so we kind of punted on that for v1 of LLaMA and just released it for research. We're working on the follow-up models for this and thinking through how exactly this should work for follow-on now that we've had time to work on a lot more of the safety and the pieces around that.
>
>I just kind of think that it would be good if there were a lot of different folks who had the ability to build state of the art technology here and not just a small number of big companies. I just think that like there's all this innovation out there that people can create and I just think that we'll also learn a lot by seeing what the whole community of students, hackers, startups and different folks build with this, and that's kind of been how we've approached this.
>
>Lex: So for people who don't know, you did the limited release I think in February of this year of LLaMA, and it got quote unquote leaked, meaning like it's escaped the limited release aspect, but it was, you know, that was something you probably anticipated given that it's just released to researchers.
>
>Mark: We shared it with researchers, so yeah.
>
>Lex: It's just trying to make sure that there's like a slow release.
>
>Mark: Yeah.
**On the open source community:**
>Mark: I think it's been really neat to see. I mean there's been folks who are getting it to run on local devices, right, so if you're an individual who just wants to experiment with this at home, you probably don't have a large budget to get access to like a large amount of cloud compute, so getting it to run on your local laptop is pretty good, pretty relevant.
>
>I think that for the scale of what we're seeing now with AI, I think we're more likely to get to good alignment and good understanding of kind of what needs to do to work well by having it be open source, and that's something that I think is quite good to have out there and happening.
>
>I think that at least for the stage that we're at now, the equities balance strongly in my view towards doing this more openly. I think if you got something that was closer to superintelligence, then I think you'd have to discuss that more and think through that a lot more.
**On LLaMA v2:**
>Lex: You mentioned the next of LLaMA. What can you say about the next version of LLaMA?
>
>Mark: A lot of what we're doing is taking the first version which was primarily this research version and trying to now build a version that has all of the latest state of the art safety precautions built in. We're using some more data to train it from across our services, but a lot of the work that we're doing internally is really just focused on making sure that this is, you know, as aligned and responsible as possible.
>
>You'll have an assistant that you can talk to in WhatsApp. I think in the future, every creator will have kind of an AI agent that can kind of act on their behalf that their fans can talk to. I want to go get to the point where every small business basically has an AI agent that people can talk to to do commerce and customer support and things like that.
>
>So they're going to be all these different things, and LLaMA or the language model underlying this is basically going to be the engine that powers that. The reason to open source is that, as we did with the first version, is that it basically unlocks a lot of innovation in the ecosystem. We'll make our products better as well and it also gives us a lot of valuable feedback on security and safety which is important for making this good. I mean, the work that we're doing to advance the infrastructure, it's basically at this point taking it beyond a research project into something which is ready to be kind of core infrastructure not only for our own products but hopefully for a lot of other things out there too.
>
>Lex: Do you think that LLaMA or the language model underlying that version two will be open sourced?
>
>Mark: **I think the question is how to do it.** I think we did the research license for v1 and I think the big thing that we're thinking about is basically like what's the right way.
>
>People have tried a lot of different things, and we've tried a bunch of stuff internally, so we're making progress here but also we're able to learn from some of the best ideas in the community and I think we want to just continue pushing that forward.
**On OpenAI's large model:**
>Mark: We sort of believe in this more community oriented, more democratic approach to building out the products and technology around this. We don't think that's going to be the one true thing. We think that there should kind of be a lot of development.
​
>I mean there's been folks who are getting it to run on local devices
What's the chances Mark has browsed this sub, or someone else at Meta? I think everyone should show their support for an open source LLaMA v2. Talk about it online, mention how it's helped you in real use cases, link this community, advocate for open research, and anything else to show support for a next generation open LLaMA. | 2023-06-09T04:30:02 | https://www.reddit.com/r/LocalLLaMA/comments/144v6t7/mark_zuckerberg_on_upcoming_llama_v2/ | llamaShill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144v6t7 | false | null | t3_144v6t7 | /r/LocalLLaMA/comments/144v6t7/mark_zuckerberg_on_upcoming_llama_v2/ | false | false | self | 193 | null |
Is there a way to test if any directly biased or BS info was added to certain models? | 9 | Not to indicate any model, or that this is even a thing yet.
But it certainly will be if it isn’t already.
Someone is eventually going to pepper in either absolute bullshit or work in engineered attempts to candy certain plausible knowledge/stats/details in a revisionist or denialist fashion.
Maybe broad spanning or narrow and tiny in deep detail.
Models are being aligned and finetuned with preferred info and their datasets are filtered,
so just a thought.
not aiming to rustle any jimmies lol | 2023-06-09T04:36:00 | https://www.reddit.com/r/LocalLLaMA/comments/144vbb7/is_there_a_way_to_test_if_any_directly_biased_or/ | Fungunkle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144vbb7 | false | null | t3_144vbb7 | /r/LocalLLaMA/comments/144vbb7/is_there_a_way_to_test_if_any_directly_biased_or/ | false | false | self | 9 | null |
Is this the best I'm gonna get? | 2 | Got a PowerEdge server with 128GB of RAM but it's so old it doesn't even support avx. Is there any hope for faster speeds? Or even maybe the ability to use this thing's RAM and a different machine's CPU? | 2023-06-09T04:40:35 | shortybobert | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 144vekz | false | null | t3_144vekz | /r/LocalLLaMA/comments/144vekz/is_this_the_best_im_gonna_get/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'c3F5vYPBXsQJDpFYjeMo1jNZQVJ3LcGSfPNGoKyJB0Q', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/q80crcpy8x4b1.png?width=108&crop=smart&auto=webp&s=fb4e9190b5f75a7a6f7c7be04d51e9376d5cd4c5', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/q80crcpy8x4b1.png?width=216&crop=smart&auto=webp&s=96e968116c5c2ec76492ed438438c2a446ecdc49', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/q80crcpy8x4b1.png?width=320&crop=smart&auto=webp&s=7d86bdd04e9d1f9b7594e5c9023eaf4bf8c80a3c', 'width': 320}, {'height': 298, 'url': 'https://preview.redd.it/q80crcpy8x4b1.png?width=640&crop=smart&auto=webp&s=204d526041049a2e5a42be269e4ac585543e21ec', 'width': 640}], 'source': {'height': 341, 'url': 'https://preview.redd.it/q80crcpy8x4b1.png?auto=webp&s=9da9d7857af1046955f359309e39ae23fad1fae2', 'width': 732}, 'variants': {}}]} |
||
Local medical LLM | 12 | I am seeing news about LLMs passing medical exams not only in English but also in different languages / medical systems.
Currently I am working to build an evaluation system for the medical capabilities of LLMs (I have help from real doctors).
I plan to release the system as open source and also use it to test some LLMs and share the results.
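For what it's worth, the core of such an evaluation system can be a very small harness; a hedged sketch that scores multiple-choice answers, where `generate` is a placeholder for whatever backend (llama.cpp, transformers, an API) actually produces text, and the sample item is purely illustrative:

```python
# Hedged sketch of a multiple-choice medical eval loop; `generate` is a placeholder
# for whatever backend produces text, and the sample item is illustrative only.
from typing import Callable, Dict, List

def evaluate_mcq(generate: Callable[[str], str], questions: List[Dict]) -> float:
    correct = 0
    for q in questions:
        options = "\n".join(f"{letter}. {text}" for letter, text in q["options"].items())
        prompt = ("Answer the following medical question with a single letter.\n\n"
                  f"{q['question']}\n{options}\nAnswer:")
        reply = generate(prompt).strip().upper()
        if reply[:1] == q["answer"]:
            correct += 1
    return correct / len(questions)

sample = [{"question": "Which vitamin deficiency causes scurvy?",
           "options": {"A": "Vitamin A", "B": "Vitamin C", "C": "Vitamin D", "D": "Vitamin K"},
           "answer": "B"}]
# accuracy = evaluate_mcq(my_model_generate, sample)
```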
My question is whether there are some LLMs available that are focused on medicine. I saw the medalpaca 13b, but it is quite bad. Is there any other model that is fine-tuned for medical use? Or should I start testing generic 65B LLMs?
​
Also, I know that this is a very dangerous area, but I was thinking that there are places on earth that could benefit from such an LLM (when the closest doctor is 1000 km away, it is not easy to get to see one).
A different use case for such an LLM came from a friend of mine, a nurse doing volunteer work on a vaccination campaign. She met plenty of people with different issues and was trying to help, but in some cases she was missing information (she is a nurse, not a doctor). Without internet access, it was impossible to help. But they have pretty powerful computers, so I think a local LLM with medical knowledge, used by someone with some medical training, could do a lot of good in some areas.
So, when I heard about LLMs passing the medical exam, I thought to build something like this. I have some money to throw at this, but not enough to start from scratch, so I need to start from an existing model.
What do you think about this? Could this work? | 2023-06-09T05:14:15 | https://www.reddit.com/r/LocalLLaMA/comments/144w1t0/local_medical_llm/ | Ion_GPT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144w1t0 | false | null | t3_144w1t0 | /r/LocalLLaMA/comments/144w1t0/local_medical_llm/ | false | false | self | 12 | null |
Losing the plot | 6 | I've been enjoying llm for roleplay. Letting it build a fantasy world around me has been so much fun!
But...the context length sucks no matter the model I use. I have to continually remind it of the context. For example I'll enter a village and it will say there is a guy with jeans and a denim shirt leaning on his car. Eventually it can't even be corrected, or it will just repeat the same few sentences and I have to start over with a new prompt. Sometimes I'll summarize on a fresh session to try to continue, but the characters are never the same and neither is the feel.
I've used oobabooga and gpt4all. Is tavern better? Do I just have wait for bigger context lengths, or is there a way around it that i don't see? | 2023-06-09T05:19:45 | https://www.reddit.com/r/LocalLLaMA/comments/144w5dt/losing_the_plot/ | Jenniher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144w5dt | false | null | t3_144w5dt | /r/LocalLLaMA/comments/144w5dt/losing_the_plot/ | false | false | self | 6 | null |
Play a Twitter role-playing chat game | 6 | Hi guys.
I have developed a website [https://docs.cworld.ai/dataset/twitter](https://docs.cworld.ai/dataset/twitter)
You can **search for a Twitter user and download all of their tweets** there.
The tweets are available in three formats:
* **rawtxt**: every tweet separated by two newlines (\`\\n\\n\`)
* **alpaca**: an Alpaca-format JSON file
* **origin tweets json**: the original tweets JSON file
You can use each user's tweet to train the model and make it speak like a certain user
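If you grab the rawtxt export, turning it into Alpaca-style records could look roughly like this; the instruction wording and field mapping here are my own assumptions, not necessarily what the site's alpaca export uses:

```python
# Convert a rawtxt export (tweets separated by blank lines) into Alpaca-style records.
# The instruction text and field mapping are assumptions.
import json

def rawtxt_to_alpaca(path: str, username: str, out_path: str) -> None:
    with open(path, encoding="utf-8") as f:
        tweets = [t.strip() for t in f.read().split("\n\n") if t.strip()]
    records = [{"instruction": f"Write a tweet in the style of @{username}.",
                "input": "",
                "output": tweet} for tweet in tweets]
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)

# rawtxt_to_alpaca("elonmusk.txt", "elonmusk", "elonmusk_alpaca.json")
```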
This is an example:
**speak like elonmusk** [https://cworld.ai/models/27/twitterelonmusk](https://cworld.ai/models/27/twitterelonmusk)
I used elonmusk's tweets to train the model in about 5 minutes.
Have fun !
​ | 2023-06-09T05:27:29 | https://www.reddit.com/r/LocalLLaMA/comments/144wajw/play_a_twitter_roleplaying_chat_game/ | Separate-Awareness53 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144wajw | false | null | t3_144wajw | /r/LocalLLaMA/comments/144wajw/play_a_twitter_roleplaying_chat_game/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'WWN7_1SvY8qH3JI8Lz6zv4pXrq8Sih2K7TuByM1_IJQ', 'resolutions': [{'height': 26, 'url': 'https://external-preview.redd.it/JXmMEKYiSJlkC1Ve8L2WzXLdWozH3mx3rKnIWLmU7HE.jpg?width=108&crop=smart&auto=webp&s=a2e7d7c46a070d7c2c3fc2012c0691755fa7a2ff', 'width': 108}, {'height': 53, 'url': 'https://external-preview.redd.it/JXmMEKYiSJlkC1Ve8L2WzXLdWozH3mx3rKnIWLmU7HE.jpg?width=216&crop=smart&auto=webp&s=7916ebb99bc5e9846c0e7e504d86667367740e94', 'width': 216}, {'height': 78, 'url': 'https://external-preview.redd.it/JXmMEKYiSJlkC1Ve8L2WzXLdWozH3mx3rKnIWLmU7HE.jpg?width=320&crop=smart&auto=webp&s=aab3c1fd8343a281a035dcd928212c2b8482f083', 'width': 320}], 'source': {'height': 100, 'url': 'https://external-preview.redd.it/JXmMEKYiSJlkC1Ve8L2WzXLdWozH3mx3rKnIWLmU7HE.jpg?auto=webp&s=74841ff8af93637acbf16b3d3c7d51a2de5bb087', 'width': 407}, 'variants': {}}]} |
How to train 7B models with small documents? | 18 | I've been wondering, is it a possible use case for LLaMa 7B (or similar) to read a document and answer questions about it?
If so, is there a resource explaining how to do it? How would you do it if you don't have the hardware resources?
I remember seeing somewhere GPT ingesting just a README describing how to use a programming language, and without any training (in the form of prompt/answer), it would start writing code for that language.
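That README trick is usually plain in-context prompting rather than training: the whole document is pasted into the prompt and the model answers from it. A minimal sketch with llama-cpp-python, assuming the document fits in the context window; the model path and prompt wording are placeholders:

```python
# In-context document Q&A with llama-cpp-python; no fine-tuning involved.
# Model path and prompt wording are placeholders; the document must fit within n_ctx.
from llama_cpp import Llama

llm = Llama(model_path="./llama-7b.ggmlv3.q4_0.bin", n_ctx=2048)

document = open("README.md", encoding="utf-8").read()
question = "How do I declare a function in this language?"

prompt = ("Use only the document below to answer the question.\n\n"
          f"### Document:\n{document}\n\n### Question:\n{question}\n\n### Answer:\n")
out = llm(prompt, max_tokens=256, stop=["###"])
print(out["choices"][0]["text"])
```

For documents larger than the context window, the usual workaround is to embed chunks and retrieve only the relevant ones per question.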
Thanks | 2023-06-09T05:36:34 | https://www.reddit.com/r/LocalLLaMA/comments/144wgd0/how_to_train_7b_models_with_small_documents/ | oliveoilcheff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144wgd0 | false | null | t3_144wgd0 | /r/LocalLLaMA/comments/144wgd0/how_to_train_7b_models_with_small_documents/ | false | false | self | 18 | null |
AI Showdown: WizardLM 30b VS Guanaco 33b, GPT-4 as the judge (test in comments) | 11 | 2023-06-09T05:42:28 | imakesound- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 144wk8p | false | null | t3_144wk8p | /r/LocalLLaMA/comments/144wk8p/ai_showdown_wizardlm_30b_vs_guanaco_33b_gpt4_as/ | false | false | 11 | {'enabled': True, 'images': [{'id': 'zIoWEg1rztVRXvfrVZaOY4YsthWm3f8RVKW_HD2OoLw', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/xzd7dhmpjx4b1.png?width=108&crop=smart&auto=webp&s=059c74d7c28059aa1fb2df4c8ee12aca0fcd9b61', 'width': 108}, {'height': 268, 'url': 'https://preview.redd.it/xzd7dhmpjx4b1.png?width=216&crop=smart&auto=webp&s=368ee95729c865221dd6e043d7ef22cdbebf14ba', 'width': 216}, {'height': 397, 'url': 'https://preview.redd.it/xzd7dhmpjx4b1.png?width=320&crop=smart&auto=webp&s=c7bcdf0de4e03f6f3e56daef65b115fa07450343', 'width': 320}, {'height': 795, 'url': 'https://preview.redd.it/xzd7dhmpjx4b1.png?width=640&crop=smart&auto=webp&s=c1d6316d17df3b696792bf30188adf0fc2976726', 'width': 640}], 'source': {'height': 922, 'url': 'https://preview.redd.it/xzd7dhmpjx4b1.png?auto=webp&s=ecebfc812e2cb05e5b224d9827a4b4b10f0697d2', 'width': 742}, 'variants': {}}]} |
|||
meet Vicky the AI | 10 | hey folks
wanted to share the demo of my LLM: [https://model.tanglebox.ai](https://model.tanglebox.ai/).
I posted it here a while ago in an earlier form - thank you to all who tested it out and communicated with me to help improve it.
I've named it Vicky, it's derived from LLaMA and is trained on a number of datasets including mainly Vicuna, hence the name. To the best of my abilities I've removed its ability to refuse to answer questions - so please use with caution and discretion especially around NSFW topics. What you post or share online remains your responsibility regardless whether it was generated by the AI.
At the moment its code abilities are poor; however, its creative writing is possibly the best I've seen, context window size limit notwithstanding. The system prompt for the web UI is tuned toward creativity rather than accuracy - if you'd like to test it with the system prompts tweaked, hang tight until I release the next UI version which provides access to the prompts, or give it a try with the REST endpoint.
I've got a busy roadmap which includes an unconventional approach to improve its code abilities, plus right now in beta is the next version with search engine, IoT / google home, and online shopping platform integrations.
If you're so inclined, it can also be used via said REST endpoint - see the GitHub README linked on the page for the how-to.
Thanks for reading! If you like using Vicky please consider the Buy Me Coffee link on the page.
For disclosure I'm not sponsored or funded in any way, this is a solo operation and the compute time comes out of my own pocket | 2023-06-09T06:29:09 | https://www.reddit.com/r/LocalLLaMA/comments/144xe47/meet_vicky_the_ai/ | TimTams553 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144xe47 | false | null | t3_144xe47 | /r/LocalLLaMA/comments/144xe47/meet_vicky_the_ai/ | false | false | self | 10 | null |
What are some models that does some specific tasks very well | 1 | I want to find some models that do one task extremely well.
For example, models that are very good at extracting data into JSON format, models that are extremely good at story telling, or models that are extremely good at making summaries and extracting information.
Does this type of model exists and can anyone give me some examples of these models? | 2023-06-09T06:31:43 | https://www.reddit.com/r/LocalLLaMA/comments/144xfso/what_are_some_models_that_does_some_specific/ | learnai_account | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144xfso | false | null | t3_144xfso | /r/LocalLLaMA/comments/144xfso/what_are_some_models_that_does_some_specific/ | false | false | self | 1 | null |
Built PC rig, or an m2 mini? | 1 | So, I'm planning to have a rig in my homelab that would allow me to run some AI models. However, I cannot decide which one is better: to get an m2 mac mini, or to build a pc with a good gpu (a 3090 maybe?).
I'd love.to hear your opinions/experiences folks. | 2023-06-09T07:11:20 | https://www.reddit.com/r/LocalLLaMA/comments/144y55a/built_pc_rig_or_an_m2_mini/ | katatondzsentri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144y55a | false | null | t3_144y55a | /r/LocalLLaMA/comments/144y55a/built_pc_rig_or_an_m2_mini/ | false | false | self | 1 | null |
🖥️🔮 Future Hardware Options for LLMs: Nvidia vs. Apple? | 32 | **Yeehaw, y'all 🤠**
I am deep inside the LLM rabbit hole 🐇 and believe they are revolutionary.
However, the major concern I have with them is privacy, especially with all consumer-ready LLMs - **ChatGPT, Bard, Claude** \- running on US servers and considering that [Snowden revealed 10 years ago](https://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data), that the NSA is using Big Tech companies to spy on the whole world. So, I'm looking into buying hardware in the future to run large language models locally.
I currently own a MacBook Air M1 with 8 GB of RAM, which is terrible for AI use cases because of those mere 8 GB.
The main issue I'm considering is **VRAM**. The new M2 Ultra in the updated Mac Studio supports a whopping **192 GB of VRAM** due to its unified memory. This got me thinking about the better deal. I can't even fathom the cost of an Nvidia GPU with 192 GB of VRAM, but Nvidia is renowned for its AI support and offers greater flexibility, based on my experience. While LLaMa now works with **Apple's Metal**, for instance, I feel like it's more of a port, and for complete control over LLMs as well as the ability to fine-tune models, using a Linux PC with an Nvidia GPU seems like the best approach.
Also, Linux generally is a huge privacy plus for me, due to Apple [not being as privacy-friendly, as they like to portray themselves](https://gizmodo.com/apple-iphone-analytics-tracking-even-when-off-app-store-1849757558).
Furthermore, Nvidia excels in other AI applications such as image generators like Stable Diffusion. This makes me lean toward Nvidia, especially because I'm also an avid gamer and would love to experience ray tracing on a powerful graphics card. But then again, Apple recently made a huge step towards becoming more gaming focused by [creating a tool that makes porting Windows games to macOS pretty easy.](https://www.theverge.com/2023/6/7/23752164/apple-mac-gaming-game-porting-toolkit-windows-games-macos)
Most importantly, Apple's **unified memory** in the Mac Studio is incredible. The question is, can Nvidia compete with the value? Considering the professional market, it seems unlikely that Nvidia would release consumer graphics cards with a lot more VRAM, as that would pose a risk to their enterprise products. This dilemma leads me to contemplate whether a Mac might be a better choice due to the substantial amount of unified memory it offers - even though it has a lower bandwidth than Nvidia GPU's VRAM.
So, I'd love to hear your opinions on this matte**r. What hardware do you think will dominate the future? Will Nvidia release new consumer graphics cards with increased VRAM? Or is Apple's unified memory the way to go?** Share your thoughts and let's discuss what the future holds for large language models and AI hardware! | 2023-06-09T08:22:00 | https://www.reddit.com/r/LocalLLaMA/comments/144zcgr/future_hardware_options_for_llms_nvidia_vs_apple/ | Prince-of-Privacy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144zcgr | false | null | t3_144zcgr | /r/LocalLLaMA/comments/144zcgr/future_hardware_options_for_llms_nvidia_vs_apple/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'oy9SNDD0yTlKdu8d55rR0YYpBYwnt81hV0ReMlv-9AA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ammZiTuXgYxNAcPg0weCxlfwOALVC4F_FN5jz88ISac.jpg?width=108&crop=smart&auto=webp&s=48434595426de28433d3df3da93b6da26f6f256b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ammZiTuXgYxNAcPg0weCxlfwOALVC4F_FN5jz88ISac.jpg?width=216&crop=smart&auto=webp&s=0aee64bccdd12d6988dab720825164da81519b69', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/ammZiTuXgYxNAcPg0weCxlfwOALVC4F_FN5jz88ISac.jpg?width=320&crop=smart&auto=webp&s=194670266c00bd83237323c73ecf6eee150d610d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/ammZiTuXgYxNAcPg0weCxlfwOALVC4F_FN5jz88ISac.jpg?width=640&crop=smart&auto=webp&s=3fc689b0f6f0f42c5030e8feaf04f4d4765c7631', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/ammZiTuXgYxNAcPg0weCxlfwOALVC4F_FN5jz88ISac.jpg?width=960&crop=smart&auto=webp&s=0c91caaca8e607bb3107ce3a3c8bfe0f7ce753cc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/ammZiTuXgYxNAcPg0weCxlfwOALVC4F_FN5jz88ISac.jpg?width=1080&crop=smart&auto=webp&s=20f77bfd5c4aa42f1f251cf6c89dd250c5b434e0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/ammZiTuXgYxNAcPg0weCxlfwOALVC4F_FN5jz88ISac.jpg?auto=webp&s=8cbe1e1d06b04be418ec014d607dfe5de43be03a', 'width': 1200}, 'variants': {}}]} |
What pc should I build for Local LLMs and Stable Diffusion? | 15 | I have no experience with training or running language models locally and only a little experience with Stable Diffusion and automatic1111. I really want to have a good workstation at home to learn and experiment.
Does anyone have suggestions for the best set up I could get on a budget of under 6k?
Would a single 24gb 4090 be better than multiple cheaper GPUs? Would it be possible or even useful to try to get a dual 4090 setup?
I know very little about hardware. Any tips would be greatly appreciated. Thank you! | 2023-06-09T08:26:38 | https://www.reddit.com/r/LocalLLaMA/comments/144zf6z/what_pc_should_i_build_for_local_llms_and_stable/ | mensanserem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 144zf6z | false | null | t3_144zf6z | /r/LocalLLaMA/comments/144zf6z/what_pc_should_i_build_for_local_llms_and_stable/ | false | false | self | 15 | null |
Pygmalion Dataset Availability | 1 | [removed] | 2023-06-09T09:14:45 | https://www.reddit.com/r/LocalLLaMA/comments/14508t5/pygmalion_dataset_availability/ | Proof_Mouse9105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14508t5 | false | null | t3_14508t5 | /r/LocalLLaMA/comments/14508t5/pygmalion_dataset_availability/ | false | false | default | 1 | null |
Selling Dual RTX 3090 AI workstation in Dortmund, Germany | 1 | [removed] | 2023-06-09T10:18:09 | https://www.reddit.com/r/LocalLLaMA/comments/1451di0/selling_dual_rtx_3090_ai_workstation_in_dortmund/ | Zyj | self.LocalLLaMA | 2023-06-09T10:34:44 | 0 | {} | 1451di0 | false | null | t3_1451di0 | /r/LocalLLaMA/comments/1451di0/selling_dual_rtx_3090_ai_workstation_in_dortmund/ | false | false | default | 1 | null |
Where do I start?? | 1 | [removed] | 2023-06-09T12:16:43 | https://www.reddit.com/r/LocalLLaMA/comments/1453npa/where_do_i_start/ | cowman3456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1453npa | false | null | t3_1453npa | /r/LocalLLaMA/comments/1453npa/where_do_i_start/ | false | false | default | 1 | null |
is Falcon 7b best for nsfw roleplay? | 4 | thx | 2023-06-09T13:22:37 | https://www.reddit.com/r/LocalLLaMA/comments/14555p6/is_falcon_7b_best_for_nsfw_roleplay/ | Sofronyami | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14555p6 | false | null | t3_14555p6 | /r/LocalLLaMA/comments/14555p6/is_falcon_7b_best_for_nsfw_roleplay/ | false | false | nsfw | 4 | null |
8gb for good gaming performance or 12gb for the vram? | 0 | [removed] | 2023-06-09T13:25:22 | https://www.reddit.com/r/LocalLLaMA/comments/14557yb/8gb_for_good_gaming_performance_or_12gb_for_the/ | Artemis369212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14557yb | false | null | t3_14557yb | /r/LocalLLaMA/comments/14557yb/8gb_for_good_gaming_performance_or_12gb_for_the/ | false | false | default | 0 | null |
EVGA NVIDIA GeForce GTX 1080 Ti ? | 1 | [removed] | 2023-06-09T13:37:52 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1455igk | false | null | t3_1455igk | /r/LocalLLaMA/comments/1455igk/evga_nvidia_geforce_gtx_1080_ti/ | false | false | default | 1 | null |
||
Can somebody ELI5 what is top_a? | 1 | [removed] | 2023-06-09T14:55:50 | https://www.reddit.com/r/LocalLLaMA/comments/1457fok/can_somebody_eli5_what_is_top_a/ | Extraltodeus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1457fok | false | null | t3_1457fok | /r/LocalLLaMA/comments/1457fok/can_somebody_eli5_what_is_top_a/ | false | false | default | 1 | null |
Web UI to show trending transformer models | 4 | 2023-06-09T15:10:29 | https://colab.research.google.com/drive/1hw4hZ-_35MW-otSOoOroD_gpF__WCIde?usp=sharing | _underlines_ | colab.research.google.com | 1970-01-01T00:00:00 | 0 | {} | 1457tq2 | false | null | t3_1457tq2 | /r/LocalLLaMA/comments/1457tq2/web_ui_to_show_trending_transformer_models/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
||
Embarking on the Journey of LLMs: Seeking Tips and Guidance! | 2 | Hello Everyone,
As a newcomer to the exciting world of Large Language Models, I'm eagerly diving into the nitty-gritty of this complex field. I am reaching out to this knowledgeable community for some beginner-friendly advice and perhaps a step-by-step guide to help me navigate this challenging terrain.
1. \*\*Prerequisites\*\*: I am curious to understand the foundational knowledge required to get started. Do I need a basic understanding of deep learning and Natural Language Processing? I am more than willing to invest my time in learning these if they are critical to my journey.
2. \*\*GPU Considerations\*\*: I am considering investing in used GPUs to start running my models, with the ambitious aim of running a 65B parameter model! I've been researching options like AMD's MI25, and Nvidia's P40 and M40. These do have older generations' VRAM; I am curious how significantly this would impact performance. Additionally, I would appreciate any other pointers on what to look for in a suitable GPU besides high VRAM. One more thing, I do have an RTX 3080 Laptop GPU but it only has 8GB GDDR6 memory, which is only enough for 4-bit 7B models :(
3. \*\*Installation Challenges\*\*: I have attempted to follow this subreddit's guide on "How to Install LLaMA: 8-bit and 4-bit". While I've made some progress, I've hit a few roadblocks along the way. I'm fairly confident I can overcome these!!
This post only scratches the surface of my curiosity; I have a multitude of questions still. However, I'm hopeful that many of these will be answered organically through your insightful comments and shared experiences.
Looking forward to learning from all of you and am grateful for any tips you can share!
THANK YOU IN ADVANCE!!
May the AI overlords accept your and my existence.
Edit- Grammar, I thought I should include further specs as I've seen people run LLMs on the CPU:
ROG Strix Scar 15
\-R9 5900HX (8C/16T) 13.2k C23
\-32GB RAM 3200MHz
\-RTX 3080 Laptop GPU VBIOS Mod to 155W
\-Total system 3D Mark TimeSpy: 12.6k\~
​ | 2023-06-09T15:29:38 | https://www.reddit.com/r/LocalLLaMA/comments/1458b15/embarking_on_the_journey_of_llms_seeking_tips_and/ | KvAk_AKPlaysYT | self.LocalLLaMA | 2023-06-09T15:51:18 | 0 | {} | 1458b15 | false | null | t3_1458b15 | /r/LocalLLaMA/comments/1458b15/embarking_on_the_journey_of_llms_seeking_tips_and/ | false | false | self | 2 | null |
Context tokens are the bane of all fun. | 89 | I feel like people are focusing a lot on the base quality of the replies from models and not so much on the context tokens, which at this point I care more about than anything else. I'm perfectly happy with the WizardLM 30b 1.0 gptq (though I will be even happier once someone uncensors it; I dislike people playing morality police on my local machine).
It feels like people don't seem to really care about the context tokens. Am I alone in feeling like it undercuts much of the potential fun of LLMs? I got GPT4 to play a text-based city builder game, but it started losing context after about 5 turns (and was completely gone by 20), which was a huge shame because it was really fun to poke at.
Similar issue with feeding a LLM a D&D campaign, a campaign of any length will have far more context tokens than any LLM can take. Even single interactions can bust through token limits.
Using an API to a LLM for game development is also a nonstarter beyond using it in extremely limited ways and in extremely limited contexts for similar reasons.
Ditto for any kind of storytelling venture beyond something the length of a short Wikipedia summary: 2k tokens fill up extremely quickly, and even with the models on large platforms the best you can realistically manage is writing Goodnight Moon. Granted, I'm not particularly interested in LLMs writing books on their own (you'd probably end up sued for copyright infringement anyhow, since they seem to love copying existing stories in these kinds of prompts), but it would be neat to use one to spitball story details and places your story could go, which frequently falls short due to a lack of context capacity.
I've heard of a couple of ways of trying to extend the limit. Personally I've been using superbooga, but reading what it pulls for the context it is extremely hit and miss (to the point where I am starting to seriously consider contributing a different approach to that specific project).
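For anyone poking at alternatives, the core of that kind of extension is just chunk, embed, retrieve; a bare-bones sketch where the embedding model and chunking granularity are arbitrary choices:

```python
# Bare-bones retrieval over past conversation: embed old chunks, pull the most
# similar ones back into the prompt. Embedding model and chunk size are arbitrary.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def top_chunks(history: list[str], query: str, k: int = 3) -> list[str]:
    chunk_vecs = embedder.encode(history, normalize_embeddings=True)
    query_vec = embedder.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ query_vec            # cosine similarity (vectors are unit-normalized)
    best = np.argsort(scores)[::-1][:k]
    return [history[i] for i in best]

# prompt = "\n".join(top_chunks(old_turns, new_user_message)) + "\n" + recent_turns
```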
TL;DR: Is there something that can be done to further extend the context tokens? Other than superbooga. | 2023-06-09T15:43:50 | https://www.reddit.com/r/LocalLLaMA/comments/1458nxr/context_tokens_are_the_bane_of_all_fun/ | InvalidCharacters | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1458nxr | false | null | t3_1458nxr | /r/LocalLLaMA/comments/1458nxr/context_tokens_are_the_bane_of_all_fun/ | false | false | self | 89 | null |
30b vs. 60/65b - does it make a big difference? | 28 | I've got a 10GB 3080 and 96GB RAM, so in theory I could run a 65b model and offload some layers to the GPU.
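For reference, partial offload looks roughly like this with llama-cpp-python built with GPU (cuBLAS) support; the file name and layer count are guesses you would tune until the 10GB card is nearly full:

```python
# Partial GPU offload sketch: most of a 65B GGML model stays in system RAM,
# a handful of layers go to the 10 GB card. File name and layer count are guesses.
from llama_cpp import Llama  # requires a build with cuBLAS/GPU support

llm = Llama(
    model_path="./guanaco-65B.ggmlv3.q4_K_M.bin",
    n_ctx=2048,
    n_gpu_layers=14,   # raise until VRAM is nearly full
    n_threads=8,
)
out = llm("Write the opening paragraph of a gothic short story.", max_tokens=200)
print(out["choices"][0]["text"])
```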
I just upgraded the RAM, so I've been limited to 16b models so far. Is there a huge huge difference between 30b and 60/65b, especially when it comes to creative stuff?
And can anyone recommend a larger model that would be best for creative pursuits, and ideally uncensored? Thanks. | 2023-06-09T16:06:09 | https://www.reddit.com/r/LocalLLaMA/comments/14598rd/30b_vs_6065b_does_it_make_a_big_difference/ | justanadoptedson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14598rd | false | null | t3_14598rd | /r/LocalLLaMA/comments/14598rd/30b_vs_6065b_does_it_make_a_big_difference/ | false | false | self | 28 | null |
Let's discuss Orca 13B from Microsoft Research, soon to be open sourced. | 1 | [removed] | 2023-06-09T16:19:05 | https://www.reddit.com/r/LocalLLaMA/comments/1459kw1/lets_discuss_orca_13b_from_microsoft_research/ | Megneous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1459kw1 | false | null | t3_1459kw1 | /r/LocalLLaMA/comments/1459kw1/lets_discuss_orca_13b_from_microsoft_research/ | false | false | default | 1 | null |
Token Sequence Impact on GPT-4 | 43 | I've suspected that if you have a big text and you want to ask about it (e.g. you have example usage of a Python module and you want GPT to adapt it to your specific needs, given the example),
then asking the question first and giving the text after that will help.
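Concretely, the two orderings being compared look something like this (wording is illustrative):

```python
# The two prompt orderings compared here; wording is illustrative.
document = "...long reference text or example code..."
question = "Based on the text, what does the example function return?"

text_first = f"{document}\n\n{question}"       # question after the text
question_first = f"{question}\n\n{document}"   # question before the text
# The claim: the question-first ordering tends to score better with GPT-4.
```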
So I've tried this out and here are the results:
[Results chart comparing the two prompt orderings](https://preview.redd.it/6le94k1w215b1.png?width=631&format=png&auto=webp&s=2bb043163288c450d73b53f0a8d9b32ecebdad38)
Little blog post about specifics:
[https://medium.com/@jonavan01/exploring-the-impact-of-token-order-on-gpt-4-performance-spoiler-you-should-care-3ac5bee9d685](https://medium.com/@jonavan01/exploring-the-impact-of-token-order-on-gpt-4-performance-spoiler-you-should-care-3ac5bee9d685)
What do you think?
It's fascinating, I think, because I feel this can give a better intuition about what these LLMs might be doing and how they work.
| 2023-06-09T17:36:25 | https://www.reddit.com/r/LocalLLaMA/comments/145bkbv/token_sequence_impact_on_gpt4/ | nofreewill42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145bkbv | false | null | t3_145bkbv | /r/LocalLLaMA/comments/145bkbv/token_sequence_impact_on_gpt4/ | false | false | 43 | {'enabled': False, 'images': [{'id': '_VowIwuZ0pu1scn2ux2S9dFk9LgSmg7tAnGktzFMrRo', 'resolutions': [{'height': 84, 'url': 'https://external-preview.redd.it/dfqLdaJXocLLsJIOw98Lk1IhtsZDQz1SUUGSaIO8GbM.jpg?width=108&crop=smart&auto=webp&s=0e5500bfb845a35cc0fe548bbc0100b85dc64a60', 'width': 108}, {'height': 169, 'url': 'https://external-preview.redd.it/dfqLdaJXocLLsJIOw98Lk1IhtsZDQz1SUUGSaIO8GbM.jpg?width=216&crop=smart&auto=webp&s=019b56f9a98dff9155b1abc47433194e473332ad', 'width': 216}, {'height': 251, 'url': 'https://external-preview.redd.it/dfqLdaJXocLLsJIOw98Lk1IhtsZDQz1SUUGSaIO8GbM.jpg?width=320&crop=smart&auto=webp&s=2298a34abefcffb4125f94cb5eba1efcc38424ff', 'width': 320}], 'source': {'height': 466, 'url': 'https://external-preview.redd.it/dfqLdaJXocLLsJIOw98Lk1IhtsZDQz1SUUGSaIO8GbM.jpg?auto=webp&s=a06c72e2be72451bbdf51df1d468a3e280d28dd5', 'width': 593}, 'variants': {}}]} |
|
I have access to a workstation with dual A6000s, is there a guide to run the best performing model (Guanaco maybe?) using PrivateGPT for document analysis? | 1 | Hello
The main goal is to analyze documents, completely locally.
As Oobabooga or similar tools appear to require copy-pasting prompt by prompt and don't build a database of the documents, it looks like PrivateGPT suits the use case.
I'm now trying to get it to run on the GPU, but there are some concepts I'm not familiar with, like GGML vs. GPTQ, which quantization to use, etc.
Assuming we can use the full 96GB of VRAM (running on Ubuntu 18), is there somewhere I can read further on how to get this working?
Thanks! | 2023-06-09T18:19:24 | https://www.reddit.com/r/LocalLLaMA/comments/145cn6u/i_have_access_to_a_workstation_with_dual_a6000s/ | Dogmaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145cn6u | false | null | t3_145cn6u | /r/LocalLLaMA/comments/145cn6u/i_have_access_to_a_workstation_with_dual_a6000s/ | false | false | self | 1 | null |
What are your personal top use cases? | 16 | New to this and wanting to see what everyone actually uses this practically for on a day to day basis
I use ChatGPT for a few questions but not daily so I don’t have a good understanding of how everyone’s using this self hosted solution | 2023-06-09T18:19:28 | https://www.reddit.com/r/LocalLLaMA/comments/145cn8b/what_are_your_personal_top_use_cases/ | Hubba_Bubba_Lova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145cn8b | false | null | t3_145cn8b | /r/LocalLLaMA/comments/145cn8b/what_are_your_personal_top_use_cases/ | false | false | self | 16 | null |
Does LocalLLaMA run on a Intel® Core™ i5-2500 ? | 4 | Hello, I am a complete noob in LocalLLaMA
I'm trying to make it run on an old PC with an Intel i5-2500 CPU and 32 GB of RAM, without using the GPU.
I could launch the server once after install and downloaded a model from the web GUI, but I shut down the CMD window by closing it, and after a Windows reboot the start command does not work anymore.
I get an OSError (WinError).
Thanks for your help | 2023-06-09T18:21:18 | https://www.reddit.com/r/LocalLLaMA/comments/145coy2/does_localllama_run_on_a_intel_core_i52500/ | EiffelPower76 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145coy2 | false | null | t3_145coy2 | /r/LocalLLaMA/comments/145coy2/does_localllama_run_on_a_intel_core_i52500/ | false | false | self | 4 | null |
Which one is better: 33B q_2 or 13B 5_1? | 5 | I have seen comparisons where 13B q\_2 won over 7B fp 16, so I assume the same is true for 33B. But I'm just asking to confirm before downloading huge models on my hard-drive. | 2023-06-09T18:37:37 | https://www.reddit.com/r/LocalLLaMA/comments/145d3to/which_one_is_better_33b_q_2_or_13b_5_1/ | Xeruthos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145d3to | false | null | t3_145d3to | /r/LocalLLaMA/comments/145d3to/which_one_is_better_33b_q_2_or_13b_5_1/ | false | false | self | 5 | null |
Linked 6 x EVGA NVIDIA GeForce GTX 1080 Ti for running the 65b | 7 | [deleted] | 2023-06-09T19:13:37 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 145e0cl | false | null | t3_145e0cl | /r/LocalLLaMA/comments/145e0cl/linked_6_x_evga_nvidia_geforce_gtx_1080_ti_for/ | false | false | default | 7 | null |
||
The first instruction tuning of open llama is out. | 108 | Its dataset is a mixture of the Open Assistant and Dolly instruction sets. Valid for commercial use.
[TheBloke/open-llama-7b-open-instruct-GGML · Hugging Face](https://huggingface.co/TheBloke/open-llama-7b-open-instruct-GGML)
[TheBloke/open-llama-7b-open-instruct-GPTQ · Hugging Face](https://huggingface.co/TheBloke/open-llama-7b-open-instruct-GPTQ)
[VMware/open-llama-7b-open-instruct · Hugging Face](https://huggingface.co/VMware/open-llama-7b-open-instruct) | 2023-06-09T19:23:57 | https://www.reddit.com/r/LocalLLaMA/comments/145e9m3/the_first_instruction_tuning_of_open_llama_is_out/ | jetro30087 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145e9m3 | false | null | t3_145e9m3 | /r/LocalLLaMA/comments/145e9m3/the_first_instruction_tuning_of_open_llama_is_out/ | false | false | self | 108 | {'enabled': False, 'images': [{'id': 'GgNdrUMn3erI_P6R2jupcUhFHolAEUTmlZXgBMLYB_s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kp4HGLl5ELCDDoNkaRi08CEcnDOO_SX92gEOn6MtyyI.jpg?width=108&crop=smart&auto=webp&s=9f3fa7e738fc989c403b6c0464a94f122112968a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kp4HGLl5ELCDDoNkaRi08CEcnDOO_SX92gEOn6MtyyI.jpg?width=216&crop=smart&auto=webp&s=787cc35e19c2190dfcffd2c6c35af2780940f801', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kp4HGLl5ELCDDoNkaRi08CEcnDOO_SX92gEOn6MtyyI.jpg?width=320&crop=smart&auto=webp&s=cc81729a4e76444812df0fbc059623e7d87fb801', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kp4HGLl5ELCDDoNkaRi08CEcnDOO_SX92gEOn6MtyyI.jpg?width=640&crop=smart&auto=webp&s=5a71abe101abffee89a388903b45029f68d45eee', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kp4HGLl5ELCDDoNkaRi08CEcnDOO_SX92gEOn6MtyyI.jpg?width=960&crop=smart&auto=webp&s=4a8f8c1d0ad6909f83d47893308d3fb9e4645f6d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kp4HGLl5ELCDDoNkaRi08CEcnDOO_SX92gEOn6MtyyI.jpg?width=1080&crop=smart&auto=webp&s=dc2aace0c612c3252625afa5a232cd909c640f2b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kp4HGLl5ELCDDoNkaRi08CEcnDOO_SX92gEOn6MtyyI.jpg?auto=webp&s=415b87ffd30c04c30fdb7bb21b79fd42e2e92566', 'width': 1200}, 'variants': {}}]} |
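For reference, a minimal way to try the HF checkpoint above with transformers; the Alpaca-style prompt template below is a guess, so check the model card for the exact format:

```python
# Minimal sketch for the HF checkpoint; the prompt template is a guess (see the model card).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "VMware/open-llama-7b-open-instruct"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", torch_dtype="auto")

prompt = ("Below is an instruction that describes a task. "
          "Write a response that appropriately completes the request.\n\n"
          "### Instruction:\nExplain what a GGML file is in one sentence.\n\n### Response:\n")
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```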
twitter account claims to have gotten 30k+ context on a llama7b model using landmark attention. exiting if true | 1 | [removed] | 2023-06-09T19:38:27 | https://www.reddit.com/r/LocalLLaMA/comments/145emrl/twitter_account_claims_to_have_gotten_30k_context/ | Capn23Cook | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145emrl | false | null | t3_145emrl | /r/LocalLLaMA/comments/145emrl/twitter_account_claims_to_have_gotten_30k_context/ | false | false | default | 1 | null |
OpenAI API clone projects for local models using GPTQ/transformers? | 1 | [deleted] | 2023-06-09T20:14:12 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 145fi6n | false | null | t3_145fi6n | /r/LocalLLaMA/comments/145fi6n/openai_api_clone_projects_for_local_models_using/ | false | false | default | 1 | null |
||
Llama official weights | 1 | [removed] | 2023-06-09T20:48:37 | https://www.reddit.com/r/LocalLLaMA/comments/145gdm8/llama_official_weights/ | thepragprog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145gdm8 | false | null | t3_145gdm8 | /r/LocalLLaMA/comments/145gdm8/llama_official_weights/ | false | false | default | 1 | null |
Kubuntu 23 + 32GB RAM + Nvidia 1080 8gb RAM pretty unstable with llama.cpp | 0 | [removed] | 2023-06-09T20:49:12 | https://www.reddit.com/r/LocalLLaMA/comments/145ge4y/kubuntu_23_32gb_ram_nvidia_1080_8gb_ram_pretty/ | FreeComplex666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145ge4y | false | null | t3_145ge4y | /r/LocalLLaMA/comments/145ge4y/kubuntu_23_32gb_ram_nvidia_1080_8gb_ram_pretty/ | false | false | default | 0 | null |
Would this method work to increase the memory of the model? Saving summaries generated by a 2nd model and injecting them depending on the current topic. | 2 | I'm trying to implement some sort of memory for my Wizard-Vicuna-30B-Uncensored model. I already tried LangChain's Conversation Knowledge Graph Memory, but Vicuna is unable to create knowledge triplets. I also tried summarizing the past conversation, but the results are very poor: it leaves out most of the conversation and focuses on some random part of the convo.
So I came up with an idea and also tried it out manually. It should work in theory. This is how I want to automate it:
Every time I send a message to the bot, I use GPT3.5 to create a summary of the conversation and group the sentences into topics.
For example:
>Topic: Video Games - Rick and Bot talked about their favorite video games. Rick's favorite game is Fallout New Vegas.
>Topic: Movies - Rick and Bot talked about movies. Both agree that Cocaine Bear wasn't a very good movie.
I refresh this after every reply with the latest few paragraphs and save it into a database.
After some talking, the limit of 2000 tokens is reached and older parts of the conversation get discarded. At that point I send the last few messages to GPT3.5 to determine the current topic.
>[LAST_MESSAGES] Which topic are they talking about? You can choose between [Video Games, Movies, Neither].
Using the result, I can query the database for the latest summary and inject the summary into the prompt.
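To make this concrete, here's a rough sketch of how I'd automate it (everything here is a placeholder I made up for illustration: the `summaries` table, the prompt wording and the topic list, with the GPT-3.5 calls going through the plain ChatCompletion endpoint):

import sqlite3
import openai
TOPICS = ["Video Games", "Movies"]  # grows as new topics come up
def ask_gpt(prompt):
    # single-turn helper around the ChatCompletion endpoint
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message["content"].strip()
def update_summaries(conn, recent_paragraphs):
    # after every reply: summarize the latest paragraphs, grouped by topic
    summary = ask_gpt(
        "Summarize this conversation and group the sentences into topics, "
        "one 'Topic: <name> - <summary>' line per topic:\n" + recent_paragraphs
    )
    for line in summary.splitlines():
        if line.startswith("Topic:"):
            topic, text = line[len("Topic:"):].split("-", 1)
            conn.execute("INSERT INTO summaries (topic, summary) VALUES (?, ?)",
                         (topic.strip(), text.strip()))
    conn.commit()
def recall_summary(conn, last_messages):
    # once the 2000-token window overflows: classify the current topic and
    # pull the newest stored summary for that topic back into the prompt
    options = ", ".join(TOPICS + ["Neither"])
    topic = ask_gpt(last_messages + "\nWhich topic are they talking about? "
                    "You can choose between [" + options + "]. Answer with the topic only.")
    row = conn.execute("SELECT summary FROM summaries WHERE topic = ? "
                       "ORDER BY rowid DESC LIMIT 1", (topic,)).fetchone()
    return row[0] if row else ""
conn = sqlite3.connect("memory.db")
conn.execute("CREATE TABLE IF NOT EXISTS summaries (topic TEXT, summary TEXT)")

The recalled summary would then just get prepended to the Vicuna prompt ahead of the most recent messages.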
I know that using GPT3.5 defeats the purpose of this subreddit, but I found no local model that can reliably summarize and categorize conversations. Vicuna REALLY sucks at it; Guanaco 30B is slightly better. Are there any models that might be better suited for this? | 2023-06-09T20:55:56 | https://www.reddit.com/r/LocalLLaMA/comments/145gk4u/would_this_method_work_to_increase_the_memory_of/ | flamingrickpat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145gk4u | false | null | t3_145gk4u | /r/LocalLLaMA/comments/145gk4u/would_this_method_work_to_increase_the_memory_of/ | false | false | self | 2 | null
Story writing concept | 7 | I have this idea for how to set up writing a story/book with the AI model handling everything and not losing any kind of memory. I'm new to all this, so I'm thinking about this conceptually without fully knowing if it is possible.
I want to eliminate the need to include character and world data in the token context by using some form of long-term memory.
I've seen that you can create characters with JSON files that define how they write and act. You can use multiple characters in a chat together and have them interact.
Is there a way to use these character JSONs to store world info, all the characters in a story, story outlines, chapter outlines, and the story written so far? Can all of that be stored with vector embeddings for long-term memory, and is that even possible with JSONs?
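To show the kind of lookup I'm picturing, here's a purely hypothetical sketch (I'm assuming sentence-transformers for the embeddings and plain JSON files of {"name": ..., "text": ...} entries for characters, world info and outlines; no idea if that's how people actually do it):

import json
import numpy as np
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("all-MiniLM-L6-v2")
def load_lore(paths):
    # every JSON file holds entries like {"name": "...", "text": "..."}
    entries = []
    for path in paths:
        with open(path, "r", encoding="utf-8") as f:
            entries.extend(json.load(f))
    vectors = np.asarray(model.encode([e["text"] for e in entries]))
    return entries, vectors
def relevant_lore(query, entries, vectors, top_k=3):
    # cosine similarity between the current scene prompt and every stored entry
    q = model.encode([query])[0]
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q) + 1e-8)
    return [entries[i] for i in np.argsort(sims)[::-1][:top_k]]
entries, vectors = load_lore(["characters.json", "world.json", "outlines.json"])
for hit in relevant_lore("The heroes reach the ruined castle at night", entries, vectors):
    print(hit["name"], "->", hit["text"][:80])

That way only the few most relevant entries get pasted into the context instead of the whole world bible.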
Do they need to be trained into the LLM between each change? | 2023-06-09T21:40:34 | https://www.reddit.com/r/LocalLLaMA/comments/145hos1/story_writing_concept/ | reiniken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145hos1 | false | null | t3_145hos1 | /r/LocalLLaMA/comments/145hos1/story_writing_concept/ | false | false | self | 7 | null |
iOS app for running ggml models locally? | 2 | [removed] | 2023-06-09T21:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/145i08b/ios_app_for_running_ggml_models_locally/ | chocolatebanana136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145i08b | false | null | t3_145i08b | /r/LocalLLaMA/comments/145i08b/ios_app_for_running_ggml_models_locally/ | false | false | default | 2 | null |
Is it possible that Lazarus 30B has a context window of 8192? | 3 | [removed] | 2023-06-09T22:34:26 | https://www.reddit.com/r/LocalLLaMA/comments/145j0ho/is_it_possible_that_lazarus_30b_has_a_context/ | ElectricalAngle1611 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145j0ho | false | null | t3_145j0ho | /r/LocalLLaMA/comments/145j0ho/is_it_possible_that_lazarus_30b_has_a_context/ | false | false | default | 3 | null |
Scaling of context length in time | 3 | If the compute requirement c scales with the square of context length n, `c~n^2`, and compute doubles every 2 years, `c~exp(T/2)`, then context length should double every four years, breakthroughs notwithstanding.
What am I missing?
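Spelling out my working: with `c ~ n^2` and compute doubling every two years, `c ~ 2^(T/2)`, so `n ~ sqrt(c) ~ 2^(T/4)`, i.e. context length doubles every four years.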
edit:
I am obviously growth-rate challenged. Words are right but I should have written `c~exp(rT)` with `r=ln(2)/2`. | 2023-06-09T22:39:13 | https://www.reddit.com/r/LocalLLaMA/comments/145j4ni/scaling_of_context_length_in_time/ | _supert_ | self.LocalLLaMA | 2023-06-10T09:05:42 | 0 | {} | 145j4ni | false | null | t3_145j4ni | /r/LocalLLaMA/comments/145j4ni/scaling_of_context_length_in_time/ | false | false | self | 3 | null |
Is there a viable way to enforce LLM copyright? | 1 | Recently the full version of OpenLlama dropped, meaning that the local LLM community can produce commercially usable models and finally pay off our A100 debts.
This raises the question:
Is it actually possible for Meta to identify the output behind an LLM API as coming from LLaMA, and then come sue you?
Open Llama makes it even more difficult, given that its architecture is identical to the original | 2023-06-09T22:39:47 | https://www.reddit.com/r/LocalLLaMA/comments/145j53i/is_there_a_viable_way_to_enforce_llm_copywright/ | FreezeproofViola | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145j53i | false | null | t3_145j53i | /r/LocalLLaMA/comments/145j53i/is_there_a_viable_way_to_enforce_llm_copywright/ | false | false | self | 1 | null
The summarization tree trick to get unlimited context. | 6 | 2023-06-09T22:51:39 | https://sharegpt.com/c/Bt808lc | NancyAurum | sharegpt.com | 1970-01-01T00:00:00 | 0 | {} | 145jf6l | false | null | t3_145jf6l | /r/LocalLLaMA/comments/145jf6l/the_summarization_tree_trick_to_get_unlimited/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'eV_FGzOm_boH5EF5D8qEvCOSQNKmDnzWckBxwgGz03s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z2Uzke9F5A5mzu3PIgtsrREn7FLZTwhgdF7No0DsJfQ.jpg?width=108&crop=smart&auto=webp&s=2cca22c23c9c64ccb531a7b292301373e9bd5fa3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z2Uzke9F5A5mzu3PIgtsrREn7FLZTwhgdF7No0DsJfQ.jpg?width=216&crop=smart&auto=webp&s=b48f2f9f70ba2ccffd9287861b32fdf18e0f8606', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z2Uzke9F5A5mzu3PIgtsrREn7FLZTwhgdF7No0DsJfQ.jpg?width=320&crop=smart&auto=webp&s=4fa4d46d2b46f1a46346407c0e1d26466f561a61', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z2Uzke9F5A5mzu3PIgtsrREn7FLZTwhgdF7No0DsJfQ.jpg?width=640&crop=smart&auto=webp&s=4b4c3e4ed29d28b632867dc770290b025e1dd2e3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z2Uzke9F5A5mzu3PIgtsrREn7FLZTwhgdF7No0DsJfQ.jpg?width=960&crop=smart&auto=webp&s=b8f3f27a34ee49a510ff22cfc89f96a1701a71bf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z2Uzke9F5A5mzu3PIgtsrREn7FLZTwhgdF7No0DsJfQ.jpg?width=1080&crop=smart&auto=webp&s=0298eda68f9ed1c2767356f335ccd6170e077802', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z2Uzke9F5A5mzu3PIgtsrREn7FLZTwhgdF7No0DsJfQ.jpg?auto=webp&s=c73e07a18199f769ba33d6a082a49732c86ed793', 'width': 1200}, 'variants': {}}]} |
||
I must have missed something, how is 2 bit working so well? | 32 | I thought the sentiment was that going to 2 bit would result in WAY too much information and accuracy being lost, so how are these models performing so well now? | 2023-06-09T23:09:35 | https://www.reddit.com/r/LocalLLaMA/comments/145juhg/i_must_have_missed_something_how_is_2_bit_working/ | noneabove1182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145juhg | false | null | t3_145juhg | /r/LocalLLaMA/comments/145juhg/i_must_have_missed_something_how_is_2_bit_working/ | false | false | self | 32 | null
Weird invalid tokens | 3 | Every once in a while, the model will spit out a token that doesn't render properly.
At first I assumed that I just didn't have the proper localizations installed, but I couldn't figure out what I needed. So I dug in some more and pulled the bytes for the token values, and they all resolved to "0xEFBFBD", i.e. "�". Of course, 0xEFBFBD is the UTF-8 encoding of the replacement character itself (U+FFFD), the one substituted when bytes can't be decoded, not just a valid character that my terminal fails to render.
At first I assumed that somehow a bunch of those "�" characters ended up in the training data, but when I started watching the token generation, the various "�" are coming back from different token IDs (e.g. 186). To me, that would imply that, per the training data, these are all different characters. However, when I attempt to render them to a string, the encoded string comes back with the same underlying byte data (0xEFBFBD) for all of them.
I can't imagine this swap is being performed at the system level. Even if I don't have the proper localizations installed to render the characters, I have to assume the underlying byte arrays for the strings would still differ. That being said, I have no idea what's actually going on under the hood in llama.cpp.
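For what it's worth, this little Python check (not llama.cpp itself, just my guess at the decoding behaviour involved) shows how different partial UTF-8 byte sequences all collapse into the same replacement-character bytes once they're decoded into a string:

# arbitrary partial multi-byte UTF-8 sequences that can't be decoded on their own
fragments = [b"\xe2", b"\xf0\x9f", b"\x99\x82"]
for frag in fragments:
    decoded = frag.decode("utf-8", errors="replace")  # invalid bytes become U+FFFD
    print(frag, "->", decoded, "->", decoded.encode("utf-8").hex())
# every printed hex value is 'efbfbd' (possibly repeated): the token bytes differ,
# but the resulting string bytes are identical after replacement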
For now I've managed to ban them by automatically adjusting the logit-bias at run time any time a string resolves to "�" for a generated token, but I would really like to know if these values are actually all linked to "�" or if there's some weird fuckery going on under the hood while retrieving the string value. It seems at least plausible that some of the data was lost from the original model during conversion or training or something like that, but I'm wondering if anyone can confirm. | 2023-06-10T00:58:19 | https://www.reddit.com/r/LocalLLaMA/comments/145m9a8/weird_invalid_tokens/ | mrjackspade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145m9a8 | false | null | t3_145m9a8 | /r/LocalLLaMA/comments/145m9a8/weird_invalid_tokens/ | false | false | self | 3 | null
Newbie Question - Error with Dalai lama | 1 | [removed] | 2023-06-10T01:03:37 | https://www.reddit.com/r/LocalLLaMA/comments/145mdnk/newbie_question_error_with_dalai_lama/ | Strong-Employ6841 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145mdnk | false | null | t3_145mdnk | /r/LocalLLaMA/comments/145mdnk/newbie_question_error_with_dalai_lama/ | false | false | default | 1 | null |
Resources for creating custom QLoRA Datasets? | 0 | Are there any datasets that have been successfully used to train some of these local models with the QLoRA method? I really don't know much about training or datasets in general, except that the Python looks pretty simple and I'm going to need to train a smaller model on my use case before it will be even remotely capable of what I need it for.
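For what it's worth, the rough shape I've pieced together so far is the alpaca-style instruction/input/output layout, something like this (the rows are completely made up, just to show the structure I mean):

import json
# made-up example rows in the alpaca-style instruction format
dataset = [
    {
        "instruction": "Summarize the following support ticket in one sentence.",
        "input": "Customer reports that the export button does nothing in Firefox.",
        "output": "The export feature is broken in Firefox and the customer wants a fix.",
    },
    {
        "instruction": "Answer the question using the company style guide.",
        "input": "How do I reset my password?",
        "output": "Go to Settings -> Security -> Reset password and follow the emailed link.",
    },
]
with open("my_qlora_dataset.json", "w", encoding="utf-8") as f:
    json.dump(dataset, f, indent=2, ensure_ascii=False)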
I understand that the dataset itself is in json format, but really feel like I'd benefit from seeing some datasets that have been successfully used for QLoRA training so that I know how to best phrase the question/answer pairs, or really any documentation on building this kind of dataset. Anyone have suggestions? | 2023-06-10T01:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/145n3ad/resources_for_creating_custom_qlora_datasets/ | gentlecucumber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145n3ad | false | null | t3_145n3ad | /r/LocalLLaMA/comments/145n3ad/resources_for_creating_custom_qlora_datasets/ | false | false | self | 0 | null |
Am I an idiot or is my computer just not up to snuff? | 11 | I'm running an i7-12700, an RTX 2080 Super, and 32 GB of RAM. I have tried every 7B model under the sun and the vast majority of them don't work. The ones that do are GGML models, and even some of those don't work. The most common error is that I don't have enough VRAM. I heard that you can use your CPU to compensate, but I know nothing about the model settings and oobabooga terms. | 2023-06-10T02:51:35 | https://www.reddit.com/r/LocalLLaMA/comments/145okk7/am_i_an_idiot_or_is_my_computer_just_not_up_to/ | Liquidmesh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145okk7 | false | null | t3_145okk7 | /r/LocalLLaMA/comments/145okk7/am_i_an_idiot_or_is_my_computer_just_not_up_to/ | false | false | self | 11 | null
I have my hands on the InternLM chatbot right now. Comment anything you want to ask | 58 | I will reply with its response.
Technical report: [https://github.com/InternLM/InternLM-techreport](https://github.com/InternLM/InternLM-techreport)
Previous discussion: [https://www.reddit.com/r/LocalLLaMA/comments/143fvnd/internlm\_a\_multilingual\_foundational\_language/](https://www.reddit.com/r/LocalLLaMA/comments/143fvnd/internlm_a_multilingual_foundational_language/) | 2023-06-10T02:52:27 | https://www.reddit.com/r/LocalLLaMA/comments/145ol78/i_have_my_hands_on_internlm_chatbot_right_now/ | WiSaGaN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145ol78 | false | null | t3_145ol78 | /r/LocalLLaMA/comments/145ol78/i_have_my_hands_on_internlm_chatbot_right_now/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': '7N9uY-Qg3kBvXLB57VqI7zXghYDjxW0V_vYfLy_KayE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gVvQHjl398AOqqr3J_bEHr1Remp8uoWZmW_0odiTpsg.jpg?width=108&crop=smart&auto=webp&s=93ece1aa1b4122d38c1e25d4c5766c757f6ab5f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gVvQHjl398AOqqr3J_bEHr1Remp8uoWZmW_0odiTpsg.jpg?width=216&crop=smart&auto=webp&s=a73d1d1b19d0e16769196613ba2561cbe842b8bc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gVvQHjl398AOqqr3J_bEHr1Remp8uoWZmW_0odiTpsg.jpg?width=320&crop=smart&auto=webp&s=03f3a2f6c5c4b925e57e54fff0eb26ded7e966c4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gVvQHjl398AOqqr3J_bEHr1Remp8uoWZmW_0odiTpsg.jpg?width=640&crop=smart&auto=webp&s=5cb0ae891b45a30e054f096ee468004d08f2ad2f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gVvQHjl398AOqqr3J_bEHr1Remp8uoWZmW_0odiTpsg.jpg?width=960&crop=smart&auto=webp&s=9fe597f6b07026f824fc26d2781d1a2d6a105504', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gVvQHjl398AOqqr3J_bEHr1Remp8uoWZmW_0odiTpsg.jpg?width=1080&crop=smart&auto=webp&s=98f846b3131fa9b6018a817d1c6af83ff2b00188', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gVvQHjl398AOqqr3J_bEHr1Remp8uoWZmW_0odiTpsg.jpg?auto=webp&s=c82a5eb5d1e1255e33b58d87cf510572c369942d', 'width': 1200}, 'variants': {}}]} |
I've been playing with prompts and stuff and a post earlier had me playing around with vectorization. I'm not great at coding but here's what I have! | 9 | Yes, it's thrown together pretty messily. I was playing with the concept of dual-sided prompting using the "role" part of gpt-3.5-turbo, which is why it's just a simple SQL table instead of anything actually good.
The vectors are pretty simple at the moment, but I just thought I'd throw this out there for people smarter than me who can take it further. Eventually I want to be able to search through the vectors by time.
import datetime
import pickle
import openai
import sqlite3
import os
from openai import OpenAIError
from sklearn.feature_extraction.text import TfidfVectorizer
import config
from config import (
API_FILE,
CONVERSATION_OUTPUT_FILE,
REMOVE_OUTPUT_STRINGS_FILE,
USER_NAME,
BOT_NAME,
DATABASE_NAME,
delete_table_on_exit,
MAX_TOKENS,
TEMPERATURE,
TOP_P,
PRESENCE_PENALTY,
FREQUENCY_PENALTY,
BEST_OF,
STOP_SEQUENCES,
)
DESCRIPTION_TEXT, DIRECTIVE_TEXT = config.load_all_files()
def authenticate_openai():
try:
with open(API_FILE, "r") as file:
openai.api_key = file.read().strip()
except:
print("Failed to authenticate with OpenAI API. Please check your credentials and try again.")
exit()
def create_tables():
with sqlite3.connect(f"{DATABASE_NAME}.db") as conn:
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE IF NOT EXISTS conversation_history (
id INTEGER PRIMARY KEY,
user_prompt TEXT,
chatbot_response TEXT,
vector BLOB,
timestamp TEXT
)
""")
conn.commit()
return conn
vectorizer = TfidfVectorizer()
def generate_vector(user_prompt, chatbot_response):
# combine the user and chatbot responses into one "document"
document = user_prompt + " " + chatbot_response
# fit_transform expects a list of documents, and we only have one,
# so we wrap our single document in a list
vectors = vectorizer.fit_transform([document])
# fit_transform returns a sparse matrix, but we need to store our vector
# in the database as a simple list, so we convert the matrix to an array
# and then convert the array to a list
dense_vector = vectors.toarray()
vector = dense_vector.tolist()
return vector
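# Assemble the ChatCompletion message list: persona/system text first, then the stored exchanges, then the directive as a final system message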
def build_messages(conversation_history, DESCRIPTION_TEXT, DIRECTIVE_TEXT):
messages = [{"role": "system", "content": DESCRIPTION_TEXT, "name": BOT_NAME}]
for exchange in conversation_history:
messages.append({"role": "user", "content": exchange[USER_NAME], "name": USER_NAME})
if exchange[BOT_NAME]:
messages.append({"role": "assistant", "content": exchange[BOT_NAME], "name": BOT_NAME})
messages.append({"role": "system", "content": DIRECTIVE_TEXT, "name": BOT_NAME})
return messages
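# Call gpt-3.5-turbo with the assembled history; on an API error the just-inserted placeholder row is removed via delete_last_row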
def generate_response(conversation_history, max_tokens=MAX_TOKENS):
try:
messages = build_messages(conversation_history, DESCRIPTION_TEXT, DIRECTIVE_TEXT)
# print("Generated messages:", messages)
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=messages,
max_tokens=MAX_TOKENS,
n=BEST_OF,
temperature=TEMPERATURE,
top_p=TOP_P,
frequency_penalty=FREQUENCY_PENALTY,
presence_penalty=PRESENCE_PENALTY,
stop=STOP_SEQUENCES if STOP_SEQUENCES else None,
)
generated_text = response.choices[0].message['content'].strip()
# print("Generated text:", generated_text)
if not generated_text:
raise Exception("ChatGPT failed to generate a response.")
return generated_text
except OpenAIError as e:
delete_last_row(conn)
print(f"Error: {str(e)}")
raise Exception("ChatGPT failed to generate a response.")
def clean_response_text(text):
text = ' '.join(text.split())
text = '\n'.join(line.strip() for line in text.splitlines() if line.strip())
return text
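# Persist one exchange: user prompt, chatbot reply, pickled vector and an ISO-8601 timestamp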
def insert_conversation(conn, user_prompt, chatbot_response, vector):
timestamp = datetime.datetime.now().isoformat() # get the current time in ISO 8601 format
with conn:
cursor = conn.cursor()
# convert the vector to a binary string using pickle
vector_blob = pickle.dumps(vector)
cursor.execute("INSERT INTO conversation_history (user_prompt, chatbot_response, vector, timestamp) VALUES (?, ?, ?, ?)", (user_prompt, chatbot_response, vector_blob, timestamp))
conn.commit()
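# Collect the newest exchanges that fit in the budget; 'tokens' here are approximated by character counts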
def get_conversation_history(cursor, max_tokens=4096):
rows = []
token_count = 0
for row in cursor.execute("SELECT id, user_prompt, chatbot_response FROM conversation_history ORDER BY id DESC"):
user_prompt_len = len(row[1]) if row[1] is not None else 0
chatbot_response_len = len(row[2]) if row[2] is not None else 0
row_token_count = user_prompt_len + chatbot_response_len
if token_count + row_token_count < max_tokens:
token_count += row_token_count
rows.append(row)
else:
break
rows.reverse()
conversations = [{"id": row[0], USER_NAME: row[1], BOT_NAME: row[2]} for row in rows]
return conversations
def sort_conversations(conversations):
sorted_conversations = sorted(conversations, key=lambda x: x['id'], reverse=False)
return sorted_conversations
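# Remove the most recent exchange (used for the 'delete memory' command and after failed API calls)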
def delete_last_row(conn):
cursor = conn.cursor()
cursor.execute("DELETE FROM conversation_history WHERE id = (SELECT MAX(id) FROM conversation_history)")
conn.commit()
def print_conversation_history(cursor):
conversation_history = get_conversation_history(cursor)
for exchange in conversation_history:
if exchange[USER_NAME]:
print(f"\n{USER_NAME}: {exchange[USER_NAME]}")
if exchange[BOT_NAME]:
print(f"\n{BOT_NAME}: {exchange[BOT_NAME]}")
return conversation_history
def write_conversation_to_file(text):
now = datetime.datetime.now()
timestamp = now.strftime("%Y-%m-%d %H:%M:%S")
with open(CONVERSATION_OUTPUT_FILE, 'a', encoding='utf-8') as f:
f.write(f"|---|START|---|{timestamp}|---|\n{text}\n|---|END|---|\n")
def close_connection(conn):
conn.close()
conn = sqlite3.connect(f"{DATABASE_NAME}.db")
create_tables()
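# Interactive loop: read input, insert a placeholder row, generate a reply, then backfill the reply and its vector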
def main_loop():
cursor = conn.cursor()
while True:
user_input = input(f"\n{USER_NAME}: ")
if user_input.lower() == "exit":
break
if user_input.lower() == "delete memory":
delete_last_row(conn)
print("Deleted memory.")
continue
# initially insert the user_prompt with placeholder chatbot_response and vector
insert_conversation(conn, user_input, None, None)
conversation_history = get_conversation_history(cursor, max_tokens=4096 - MAX_TOKENS)
try:
output = generate_response(conversation_history)
clean_response = clean_response_text(output)
print(f"\n{BOT_NAME}: {clean_response}")
vector = generate_vector(user_input, clean_response)
# update the placeholder chatbot_response and vector in the last inserted row
vector_blob = pickle.dumps(vector)
cursor.execute("UPDATE conversation_history SET chatbot_response = ?, vector = ? WHERE id = (SELECT MAX(id) FROM conversation_history)", (clean_response, vector_blob))
conn.commit()
write_conversation_to_file(user_input)
write_conversation_to_file(clean_response)
except Exception as e:
print(e)
continue
if delete_table_on_exit:
cursor.execute("DROP TABLE conversation_history")
conn.commit()
# Close the database connection
conn.close()
if __name__ == "__main__":
authenticate_openai()
main_loop()
Here's the config.py that goes with it:
# folders and files
API_FILE = "openai_api_key.txt"
CONVERSATION_OUTPUT_FILE = "conversationlog.txt"
REMOVE_OUTPUT_STRINGS_FILE = "removeoutputstrings.txt"
# names
USER_NAME = "User"
BOT_NAME = "CDB"
DATABASE_NAME = f"{BOT_NAME}"
# bools
delete_table_on_exit = False
# gpt config
MAX_TOKENS = 1024
TEMPERATURE = 0.5
TOP_P = 1.0
PRESENCE_PENALTY = 1.1
FREQUENCY_PENALTY = 1
BEST_OF = 1
STOP_SEQUENCES = None
# file loader
DESCRIPTION_NAME = "Contrarian"
DIRECTIVE_NAME = "ContrarianDi"
DESCRIPTION_FOLDER = "descriptions"
DIRECTIVE_FOLDER = "directives"
DESCRIPTION_FILE_NAME = DESCRIPTION_NAME + ".txt"
DIRECTIVE_FILE_NAME = DIRECTIVE_NAME + ".txt"
def get_file_path(folder, file_name):
return f"{folder}/{file_name}"
def load_file(file):
with open(file, 'r', encoding='utf8') as f:
text = f.read()
return text
def load_all_files():
DESCRIPTION_TEXT = load_file(get_file_path(DESCRIPTION_FOLDER, DESCRIPTION_FILE_NAME))
DIRECTIVE_TEXT = load_file(get_file_path(DIRECTIVE_FOLDER, DIRECTIVE_FILE_NAME))
return DESCRIPTION_TEXT, DIRECTIVE_TEXT | 2023-06-10T02:54:08 | https://www.reddit.com/r/LocalLLaMA/comments/145om7h/ive_been_playing_with_prompts_and_stuff_and_a/ | _The_Librarian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 145om7h | false | null | t3_145om7h | /r/LocalLLaMA/comments/145om7h/ive_been_playing_with_prompts_and_stuff_and_a/ | false | false | self | 9 | null |