Update README.md
README.md CHANGED
@@ -31,7 +31,13 @@ model = AutoModelForCausalLM.from_pretrained(
 tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
 input_ids = tokenizer(input_text, return_tensors="pt")
 outputs = model.generate(**input_ids, max_length=128)
-
+output = tokenizer.decode(outputs[0])
+#print("llm output:",output)
+
+backstory=(output.split("\n\n"))[1].split("\n\n")[0]
+goal=(output.split(backstory)[1].replace("<eos>","")).replace("\n\n","")
+print("backstory: ",backstory)
+print("goal: ",goal)
 ```
 # Training Data
 We have 103 rows of descriptions of different roles and their respective goals and backstory which is used to train the models, see [this dataset](https://huggingface.co/datasets/DrDrek/crewai_finetuning_dataset) for details.
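For context on the post-processing lines added in this commit: they assume the decoded generation contains the backstory and then the goal as blank-line-separated blocks, terminated by an `<eos>` token. The sketch below shows how the splits recover the two fields; the sample `output` string is hypothetical and only illustrates the assumed shape of the model output.

```python
# Hypothetical decoded output in the format the parsing above assumes:
# prompt echo, then backstory, then goal, separated by blank lines and ending in <eos>.
output = (
    "role: Senior Data Scientist\n\n"
    "A veteran analyst with a decade of experience turning raw data into decisions.\n\n"
    "Deliver accurate, actionable insights to the rest of the crew.<eos>"
)

# The second blank-line-separated block is taken as the backstory ...
backstory = output.split("\n\n")[1]
# ... and everything after it, minus the <eos> marker and blank lines, as the goal.
goal = output.split(backstory)[1].replace("<eos>", "").replace("\n\n", "")

print("backstory:", backstory)
print("goal:", goal)
```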
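To inspect the 103-row dataset referenced under Training Data, it can be pulled with the `datasets` library; this is a minimal sketch, and the `train` split name is an assumption.

```python
from datasets import load_dataset

# Load the role/goal/backstory dataset linked above and look at one example.
ds = load_dataset("DrDrek/crewai_finetuning_dataset", split="train")
print(len(ds))
print(ds[0])
```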