Update README.md
README.md CHANGED
@@ -81,6 +81,10 @@ gen_tokens = model.generate(input_ids, do_sample=True, max_length=400)
print("-"*20 + "Output for model" + 20 * '-')
print(tokenizer.batch_decode(gen_tokens)[0])
```
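For context, the lines above are only the tail of the README's generation example; the part of the snippet outside this hunk is what defines `model`, `tokenizer`, and `input_ids`. A minimal self-contained sketch of that flow, assuming the checkpoint is published as `LLM360/CrystalChat` and needs `trust_remote_code=True` (both assumptions, not taken from this diff; see the model card for the exact snippet), might look like:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM360/CrystalChat"  # assumed repo id; check the model card
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumption: bf16 weights fit your hardware
    trust_remote_code=True,       # assumption: custom modeling code is required
)

prompt = "Write a Python function that reverses a string."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

gen_tokens = model.generate(input_ids, do_sample=True, max_length=400)

print("-" * 20 + "Output for model" + 20 * "-")
print(tokenizer.batch_decode(gen_tokens)[0])
```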
+# Evaluation
+
+Coming Soon!
+

# Bias, Risks, and Limitations
CrystalChat has not been aligned to human preferences for safety through an RLHF phase, nor deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The training data is known and made available [here](https://huggingface.co/datasets/LLM360/CrystalCoderDatasets). It consists primarily of the SlimPajama, StarCoder, and WebCrawl datasets.
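Since the training data is published as a Hugging Face dataset, a quick way to inspect it is with the `datasets` library. A minimal sketch, assuming the repo id `LLM360/CrystalCoderDatasets` from the link above exposes a `train` split and supports streaming (both assumptions; check the dataset card for the actual configurations):

```python
from itertools import islice

from datasets import load_dataset

# Assumed split name and streaming support; the dataset card documents
# the exact subsets (e.g. SlimPajama, StarCoder, WebCrawl) that are published.
ds = load_dataset("LLM360/CrystalCoderDatasets", split="train", streaming=True)

# Print a few records without downloading the whole corpus.
for record in islice(ds, 3):
    print(record)
```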