---
license: mit
---
## [miniCTX: Neural Theorem Proving with (Long-)Contexts](https://arxiv.org/abs/2408.03350)
|
State-tactic model from [miniCTX: Neural Theorem Proving with (Long-)Contexts](https://arxiv.org/abs/2408.03350).
|
|
|
- Base language model: `deepseek-ai/deepseek-coder-1.3b-base` |
|
- Data: [ntp-mathlib-instruct-st](https://huggingface.co/datasets/l3lab/ntp-mathlib-instruct-st) |
|
|
|
The model is fine-tuned specifically for Lean 4 tactic prediction given a proof state.
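
The instruction data linked above can be inspected with the Hugging Face `datasets` library. The sketch below is a minimal example that only prints the available splits and columns, since the exact schema is not described in this card:

```python
# Minimal sketch: peek at the state-tactic instruction-tuning data.
# Split and column names are not assumed here; print them instead.
from datasets import load_dataset

dataset = load_dataset("l3lab/ntp-mathlib-instruct-st")
print(dataset)  # available splits, column names, and example counts
```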
|
|
|
#### Performance |
|
|
|
Please see [our paper](https://arxiv.org/abs/2408.03350) for performance results.
|
|
|
#### Example input |
|
```
/- You are proving a theorem in Lean 4.
You are given the following information:
- The current proof state, inside [STATE]...[/STATE]

Your task is to generate the next tactic in the proof.
Put the next tactic inside [TAC]...[/TAC]
-/
[STATE]
m n : ℕ
h : Nat.Coprime m n
⊢ Nat.gcd m n = 1
[/STATE]
[TAC]
```
|
|
|
#### Example output |
|
```
rw [Nat.Coprime] at h
[/TAC]
```
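
#### Example usage

For illustration, here is a minimal sketch of querying the model with Hugging Face `transformers` using the prompt format shown above. The model ID below is a placeholder for this repository's ID, and the greedy decoding settings are illustrative rather than the configuration evaluated in the paper.

```python
# Minimal usage sketch. Assumptions: the model ID below is a placeholder for
# this repository's ID, and greedy decoding is used purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "l3lab/minictx-state-tactic"  # placeholder: replace with this repo's ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

state = "m n : ℕ\nh : Nat.Coprime m n\n⊢ Nat.gcd m n = 1"

# Build the prompt exactly as in the "Example input" section above.
prompt = (
    "/- You are proving a theorem in Lean 4.\n"
    "You are given the following information:\n"
    "- The current proof state, inside [STATE]...[/STATE]\n"
    "\n"
    "Your task is to generate the next tactic in the proof.\n"
    "Put the next tactic inside [TAC]...[/TAC]\n"
    "-/\n"
    f"[STATE]\n{state}\n[/STATE]\n[TAC]\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens and strip the closing [/TAC] tag.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
tactic = completion.split("[/TAC]")[0].strip()
print(tactic)  # e.g. rw [Nat.Coprime] at h
```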
|
|
|
#### Citation |
|
|
|
Please cite: |
|
```
@misc{hu2024minictx,
  title={miniCTX: Neural Theorem Proving with (Long-)Contexts},
  author={Jiewen Hu and Thomas Zhu and Sean Welleck},
  year={2024},
  eprint={2408.03350},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2408.03350},
}
```