rubentito committed on
Commit 9ddad89
1 Parent(s): 9985ad0

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -54,10 +54,12 @@ answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
 ## Metrics
 **Average Normalized Levenshtein Similarity (ANLS)**
 
-The standard metric for text-based VQA tasks (ST-VQA and DocVQA). It evaluates the method's reasoning capabilities while smoothly penalizes OCR recognition errors. For more information check [Scene Text Visual Question Answering](https://arxiv.org/abs/1905.13648)
+The standard metric for text-based VQA tasks (ST-VQA and DocVQA). It evaluates the method's reasoning capabilities while smoothly penalizing OCR recognition errors.
+Check [Scene Text Visual Question Answering](https://arxiv.org/abs/1905.13648) for detailed information.
 
 **Answer Page Prediction Accuracy (APPA)**
-In the MP-DocVQA task, the models can provide the index of the page where the information required to answer the question is located. For this subtask accuracy is used to evaluate the predictions: i.e. if the predicted page is correct or not. For more information, check [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/abs/2212.05935)
+In the MP-DocVQA task, the models can provide the index of the page where the information required to answer the question is located. For this subtask, accuracy is used to evaluate the predictions, i.e. whether the predicted page is correct or not.
+Check [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/abs/2212.05935) for detailed information.
 
 
 ## Model results
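For intuition, here is a minimal, self-contained sketch of how the two metrics in the diff above are typically computed. It is not the official evaluation code: the function names (`anls`, `appa`, `levenshtein`) and the pure-Python edit-distance helper are illustrative assumptions; the 0.5 truncation threshold comes from the ST-VQA paper, but the official scorer's text normalization may differ.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance via the classic two-row dynamic program."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute ca -> cb
            ))
        prev = curr
    return prev[-1]


def anls(predictions: list[str], gold_answers: list[list[str]], tau: float = 0.5) -> float:
    """Average Normalized Levenshtein Similarity over a set of questions.

    Each prediction is scored against the closest of its gold answers;
    answers whose normalized distance is >= tau score 0, so small OCR slips
    are penalized smoothly while outright wrong answers get no credit.
    """
    total = 0.0
    for pred, answers in zip(predictions, gold_answers):
        best = 0.0
        for ans in answers:
            p, g = pred.strip().lower(), ans.strip().lower()
            nl = levenshtein(p, g) / max(len(p), len(g), 1)  # normalized distance
            if nl < tau:
                best = max(best, 1.0 - nl)
        total += best
    return total / len(predictions)


def appa(predicted_pages: list[int], gold_pages: list[int]) -> float:
    """Answer Page Prediction Accuracy: exact-match accuracy on page indices."""
    hits = sum(p == g for p, g in zip(predicted_pages, gold_pages))
    return hits / len(predicted_pages)


# Hypothetical example: one OCR-ish slip is penalized smoothly, a wrong answer scores 0.
print(anls(["forty twa", "cat"], [["forty two"], ["dog"]]))  # (8/9 + 0) / 2 ≈ 0.444
print(appa([3, 0], [3, 1]))  # 0.5
```

Taking the max over multiple gold answers and lower-casing both strings follow common ANLS evaluation practice, but treat the details above as a sketch rather than a reference implementation.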