Tonic committed on
Commit
173be96
·
unverified ·
1 Parent(s): 4e89b40

adds usage example

Files changed (1)
  1. app.py +33 -1
app.py CHANGED
@@ -132,7 +132,39 @@ def init_chat(formal_statement, informal_prefix):
     ]
 
 @spaces.GPU
-def chat_handler(user_message, informal_prefix, max_tokens, chat_history):
+def chat_handler(
+    user_message: str,
+    informal_prefix: str = "/-Explain using mathematics-/\n",
+    max_tokens: int = 2500,
+    chat_history: list = None
+):
+    """
+    Handles a single chat interaction with the Kimina Prover model, a state-of-the-art large language model for formal mathematical theorem proving in Lean 4, developed by Project Numina and the Kimi teams [[Kimina-Prover-Preview]](https://github.com/MoonshotAI/Kimina-Prover-Preview/tree/master).
+
+    Kimina Prover is designed for competition-style problem solving and rigorous formal reasoning, achieving state-of-the-art results on the miniF2F benchmark. It supports context windows of up to 32K tokens and is capable of both formal and informal mathematical reasoning. This function manages prompt construction, model inference, and output formatting for the Gradio interface, enabling interactive theorem proving and mathematical problem solving.
+
+    Args:
+        user_message (str): The user's input message or formal statement to be solved or discussed by the model. This can be a Lean 4 goal, theorem, or mathematical problem statement.
+        informal_prefix (str, optional): An informal explanation or context to prepend to the formal statement. Defaults to '/-Explain using mathematics-/\n'.
+        max_tokens (int, optional): The maximum number of tokens to generate in the model's response. Must be between 150 and 4096. Defaults to 2500.
+        chat_history (list, optional): The conversation history as a list of message dicts, each with 'role' and 'content' keys. Used to maintain context across turns. Defaults to None.
+
+    Returns:
+        tuple: (
+            display_history (list of tuples): List of (role, content) pairs for display in the chat UI.
+            output_data_json (str): JSON string containing the model input, output, extracted Lean 4 code, and updated chat history.
+            code (str): Extracted Lean 4 code from the model's response, if present.
+            chat_history (list): Updated chat history including the latest user and assistant messages.
+        )
+
+    Example:
+        >>> user_message = "Goal:\n X : UU\n Y : UU\n P : UU\n xp : (X → P) → P\n yp : (Y → P) → P\n X0 : X × Y → P\n x : X\n ============================\n (Y → P)"
+        >>> informal_prefix = "/-Explain using mathematics-/\n"
+        >>> max_tokens = 1234
+        >>> chat_history = None
+        >>> display_history, output_data_json, code, chat_history = chat_handler(user_message, informal_prefix, max_tokens, chat_history)
+        # display_history contains the chat turns, output_data_json the full model output, and code the extracted Lean 4 code.
+    """
     if not chat_history or len(chat_history) < 2:
         chat_history = init_chat(user_message, informal_prefix)
         display_history = [("user", user_message)]
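
The docstring describes a `code` return value holding Lean 4 extracted from the model's reply, but the extraction logic is outside this hunk. A minimal sketch of how such extraction might work, assuming the model wraps Lean 4 in ```lean4 fenced blocks; the function name `extract_lean4_code` and the regex are illustrative, not taken from app.py:

```python
import re

def extract_lean4_code(response: str) -> str:
    # Illustrative helper (not from app.py): return the body of the first
    # ```lean4 fenced block in the model response, or "" if none is present.
    match = re.search(r"```lean4\s*\n(.*?)```", response, re.DOTALL)
    return match.group(1).strip() if match else ""

sample = (
    "Here is a proof attempt:\n"
    "```lean4\n"
    "theorem trivial_goal : True := trivial\n"
    "```\n"
)
print(extract_lean4_code(sample))  # theorem trivial_goal : True := trivial
```

`re.DOTALL` lets `.` span newlines so multi-line proofs are captured; the non-greedy `(.*?)` stops at the first closing fence.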