Saibo-backup committed on
Commit 6d80e45 · 1 Parent(s): 2f4db23

fix italic font to bold

Files changed (1): app.py (+3 −3)
app.py CHANGED
@@ -72,7 +72,7 @@ if __name__ == "__main__":
         gr.Markdown(
             """
     # 👻 Transformers-CFG JSON Demo
-    This is a demo of how you can constrain the output of a GPT-2 model to be a *valid* JSON string(*up to max length truncation*).
+    This is a demo of how you can constrain the output of a GPT-2 model to be a **valid** JSON string(**up to truncation**).
     Here we use a simple JSON grammar to constrain the output of the model.
     The grammar is defined in `json_minimal.ebnf` and is written in the **Extended Backus-Naur Form (EBNF)**.
 
@@ -82,8 +82,8 @@ if __name__ == "__main__":
     The inference is a bit slow because of the inference is run on **CPU(~20s for 30 tokens)**.
     The constraint itself **doesn't** introduce significant overhead to the inference.
 
-    The output may be *truncated* to 30 tokens due to the limitation of the maximum length of the output.
-    In practice, with a decent `max_length` parameter, your JSON output will be *complete* and *valid*.
+    The output may be **truncated** to 30 tokens due to the limitation of the maximum length of the output.
+    In practice, with a decent `max_length` parameter, your JSON output will be **complete** and **valid**.
     """
     )
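The demo text above describes grammar-constrained generation: at each decoding step, every token the grammar forbids is masked out, and the model only chooses among what remains. A minimal self-contained sketch of that masking loop (the helper names `logits_fn` and `allowed_fn` are hypothetical; this is an illustration of the idea, not the transformers-cfg API):

```python
import math

def constrained_greedy(logits_fn, allowed_fn, max_steps):
    """Greedy decoding restricted to grammar-allowed tokens.

    logits_fn(prefix)  -> dict mapping token -> model score (unconstrained)
    allowed_fn(prefix) -> list of tokens the grammar permits next;
                          an empty list means the grammar has accepted.
    """
    out = []
    for _ in range(max_steps):
        logits = logits_fn(out)      # unconstrained model scores
        allowed = allowed_fn(out)    # tokens the grammar permits here
        if not allowed:
            break                    # grammar reached an accepting state
        # Masking: forbidden tokens are never candidates, so even a token
        # with the highest raw score cannot be emitted if the grammar bans it.
        out.append(max(allowed, key=lambda t: logits.get(t, -math.inf)))
    return out
```

This also illustrates why the constraint adds little overhead, as the demo text claims: computing the allowed set and taking a masked argmax is cheap compared to the model's forward pass that produces the logits.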