import reflex as rx
p2 = '''
# Steps
### Dataset Selection
We begin with the [layoric/labeled-multiple-choice-explained](https://huggingface.co/datasets/layoric/labeled-multiple-choice-explained) dataset, which includes reasoning explanations generated by GPT-3.5-turbo. These explanations serve as a starting point, but they may differ from Mistral's reasoning style.
0. *[00-poe-generate-mistral-reasoning.ipynb](https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/00-poe-generate-mistral-reasoning.ipynb)*: To align with Mistral, we need to create a refined dataset: [derek-thomas/labeled-multiple-choice-explained-mistral-reasoning](https://huggingface.co/datasets/derek-thomas/labeled-multiple-choice-explained-mistral-reasoning).
1. *[01-poe-dataset-creation.ipynb](https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/01-poe-dataset-creation.ipynb)*: Then we need to create our prompt experiments.
2. *[02-autotrain.ipynb](https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/02-autotrain.ipynb)*: We generate autotrain jobs on spaces to train our models.
3. *[03-poe-token-count-exploration.ipynb](https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/03-poe-token-count-exploration.ipynb)*: We do some quick analysis so we can optimize our TGI settings.
4. *[04-poe-eval.ipynb](https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/04-poe-eval.ipynb)*: We finally evaluate our trained models.
**The flowchart below is _clickable_.**
'''
def mermaid_svg():
    # Load the pre-rendered flowchart SVG and embed it directly in the page.
    with open('assets/prompt-order-experiment.svg', 'r') as file:
        svg_content = file.read()
    return rx.html(
        f'<div style="width: 300%; height: auto;">{svg_content}</div>'
    )
def page():
    return rx.vstack(
        rx.markdown(p2),
        mermaid_svg(),
    )
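

# Registration sketch (assumptions: the app object, route, and title below are
# illustrative, not taken from this repo; the page is presumably wired up from
# the main app module, so this stays commented out here):
#
#   app = rx.App()
#   app.add_page(page, route="/steps", title="Steps")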