Update README.md
## Training

We train our TAT-LLM model in various sizes, including 7B, 13B, and 70B, using both parameter-efficient fine-tuning and full-parameter fine-tuning of LLaMA 2 on a combination of financial data from the FinQA, TAT-QA, and TAT-DQA training sets ([🤗HuggingFace Repo](https://huggingface.co/datasets/next-tat/tat-llm-instructions)). To further improve accuracy, we introduce an External Executor that processes the model's intermediate outputs to derive the final answers. Please refer to the [paper](https://arxiv.org/abs/2401.13223) for more details.
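As a rough illustration of the parameter-efficient route, the sketch below LoRA-fine-tunes a LLaMA 2 base checkpoint on the instruction data with 🤗 `transformers`, `datasets`, and `peft`. The base-model name, the dataset's split and field names, and all hyperparameters are illustrative assumptions, not the exact TAT-LLM training configuration.

```python
# A LoRA fine-tune of LLaMA 2 on the tat-llm-instructions data.
# NOTE: split/field names and hyperparameters below are assumptions
# for illustration; check the dataset card for the real schema.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; requires access
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default

model = AutoModelForCausalLM.from_pretrained(base_model)

# Parameter-efficient fine-tuning: freeze the base weights and train
# only small low-rank adapter matrices injected into attention layers.
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
               target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
)

# Combined instruction data from the FinQA / TAT-QA / TAT-DQA training sets.
data = load_dataset("next-tat/tat-llm-instructions", split="train")  # split name assumed

def tokenize(example):
    # "instruction" is an assumed column holding the full prompt + answer text.
    return tokenizer(example["instruction"], truncation=True, max_length=2048)

tokenized = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="tat-llm-7b-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-4,
        num_train_epochs=3,
        bf16=True,
    ),
    train_dataset=tokenized,
    # Causal-LM collator copies input_ids into labels (no masked-LM objective).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```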
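The External Executor can be reduced to a minimal sketch as well: safely evaluate an arithmetic expression emitted as the model's intermediate output, rather than trusting a generated number. The expression format shown is an assumption; the actual intermediate format TAT-LLM emits may differ.

```python
# Derive the final answer by executing the model's intermediate output
# (here assumed to be a plain arithmetic expression).
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.USub: operator.neg}

def _eval(node):
    """Recursively evaluate a restricted arithmetic AST (no names, no calls)."""
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.operand))
    raise ValueError("unsupported expression")

def execute(intermediate_output):
    """Turn an equation string from the model into a numeric answer."""
    return _eval(ast.parse(intermediate_output, mode="eval"))

# e.g. a change-ratio question over two table cells:
print(execute("(2408 - 2143) / 2143"))  # 0.1236... -> "12.36%"
```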
## Inference & Evaluation