Update README.md
README.md CHANGED
@@ -8,4 +8,34 @@ This model was fine-tuned on a parsed version of The Wizard of Wikipedia dataset
 
 `script_speaker_name` = `person alpha`
 
-`script_responder_name` = `person beta`
+`script_responder_name` = `person beta`
+
+## usage
+
+### in ai-msgbot
+
+```
+python ai_single_response.py --model GPT2_conversational_355M_WoW10k --prompt "hi! what are your hobbies?"
+
+... generating...
+
+finished!
+
+'i like to read.'
+
+```
+
+### in huggingface inference API
+
+The model training (and the ai-msgbot scripts) "force" GPT-2 to generate text in a chat-like structure. If you want non-garbage outputs, the speaker tags need to be specified in the prompt manually:
+
+```
+person alpha:
+hi! what are your hobbies?
+```
+
+the model will then respond, ideally with `person beta: "response text"`
+
+---
+
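
For reference, the chat-structured prompt described in the added "in huggingface inference API" section can also be sent to the hosted Inference API over HTTP rather than typed into the widget. The sketch below is an assumption about the wiring, not part of the README: the repo owner and the API token are placeholders you would replace with your own.

```python
import requests

# Placeholders (not from the README): fill in the actual Hub repo id and your own token.
API_URL = "https://api-inference.huggingface.co/models/<owner>/GPT2_conversational_355M_WoW10k"
HEADERS = {"Authorization": "Bearer <HF_API_TOKEN>"}

# The prompt has to carry the chat structure the model was fine-tuned on:
# the speaker tag on its own line, then the utterance.
prompt = "person alpha:\nhi! what are your hobbies?\n"

resp = requests.post(API_URL, headers=HEADERS, json={"inputs": prompt})
print(resp.json())  # the reply should ideally start with "person beta:"
```

Generation settings (for example `max_new_tokens` or `top_p`) can be passed in a `parameters` dict alongside `inputs` if the default output rambles.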
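The same prompt structure applies when loading the checkpoint locally with `transformers` instead of using the hosted widget or ai-msgbot. This is only a sketch; it assumes the model name from the README doubles as the Hub repo id, which may not be the case.

```python
from transformers import pipeline

# Assumption: the README's model name is used as the repo id here; adjust to the real one.
chat = pipeline("text-generation", model="GPT2_conversational_355M_WoW10k")

# Same chat-structured prompt as in the README's inference API example.
prompt = "person alpha:\nhi! what are your hobbies?\n"
out = chat(prompt, max_new_tokens=40, do_sample=True, top_p=0.95)

text = out[0]["generated_text"]
# Everything after "person beta:" is the reply; truncate at the next
# "person alpha:" if the model keeps the dialogue going on its own.
print(text)
```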