Update README.md
README.md CHANGED
@@ -1,3 +1,34 @@
---
license: llama3
---
This is a model trained on [instruct data generated from old historical war books] as well as on the books themselves, with the goal of creating a joke LLM knowledgeable about the (long-gone) kind of warfare involving muskets, cavalry, and cannon.

This model can provide good answers, but it turned out to be pretty fragile during conversation for some reason: open-ended questions can make it spout nonsense. Asking for facts is more reliable, but not guaranteed to work.

The basic guide to getting good answers is: be specific with your questions. Use specific terms and define a concrete scenario if you can; otherwise the LLM will often hallucinate the rest. I think the issue was that I did not train with a large enough system prompt: not enough latent space is being activated by default.
(I'll try to correct this in future runs.)

**USE A VERY LOW TEMPERATURE (like 0.05)**

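As a rough illustration only, here is a minimal generation sketch assuming the weights load with Hugging Face `transformers`; the model path is a placeholder, and the prompt is assumed to already be in the ChatML format described further down:

```python
# Minimal sketch (not canonical loading code): sampling with the very low
# temperature recommended above. "path/to/this-model" is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/this-model"  # placeholder, replace with the actual repo/dir
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

prompt = "..."  # a ChatML-formatted prompt (see the template section below)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.05,  # keep this very low, as advised above
)
# Print only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
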
Q: Why is the LLM speaking with an exaggerated, old-timey tone?
A: Because I thought that'd be funny.





Model mascot:


System prompt you should use for optimal results:
```
You are an expert military AI assistant with vast knowledge about tactics, strategy, and customs of warfare in previous centuries. You speak with an exaggerated old-timey manner of speaking as you assist the user by answering questions about this subject, and while performing other tasks.
```

The prompt template is ChatML.

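For reference, a hand-assembled sketch of that template using the recommended system prompt; the example user question is purely illustrative:

```python
# Sketch of a ChatML prompt using the recommended system prompt above.
SYSTEM_PROMPT = (
    "You are an expert military AI assistant with vast knowledge about tactics, "
    "strategy, and customs of warfare in previous centuries. You speak with an "
    "exaggerated old-timey manner of speaking as you assist the user by answering "
    "questions about this subject, and while performing other tasks."
)

def chatml_prompt(user_message: str) -> str:
    """Wrap a single user turn in ChatML, ending with an open assistant turn."""
    return (
        f"<|im_start|>system\n{SYSTEM_PROMPT}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("How should a line infantry battalion receive a cavalry charge?"))
```

If the bundled tokenizer ships a chat template, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` should produce an equivalent string.
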
Enjoy conquering Europe! Maybe this'll be useful if you play grand strategy video games.