krutrim-admin committed
Commit a06fe4f · verified · 1 Parent(s): 13525df

Updated BharatBench evals

Files changed (1)
  1. README.md +13 -0
README.md CHANGED
@@ -100,6 +100,19 @@ After fine-tuning, the model underwent Direct Preference Optimization (DPO) with
  | FloresIN (1-shot, xx-en) (chrf++) | 50% | 54% | 58% |
  | FloresIN (1-shot, en-xx) (chrf++) | 34% | 41% | 46% |
 
+ ### BharatBench
+ Existing Indic benchmarks are not natively in Indian languages; rather, they are translations of existing English benchmarks, and they do not sufficiently capture the linguistic nuances of Indian languages or aspects of Indian culture. To address this, Krutrim released BharatBench - a natively Indic benchmark that covers the linguistic and cultural diversity of the Indic region, ensuring that evaluations are relevant and representative of real-world use cases in India.
+
+ | Benchmark | Metric | Krutrim-1 7B | MN-12B-Instruct | Krutrim-2 12B | Llama-3.1-8B-Instruct | Llama-3.1-70B-Instruct | Gemma-2-9B-Instruct | Gemma-2-27B-Instruct | GPT-4o |
+ |-----------------------------------|------------|--------------|-----------------|---------------|-----------------------|------------------------|---------------------|----------------------|--------|
+ | Indian Cultural Context (0-shot)  | BERTScore  | 0.86 | 0.56 | 0.88 | 0.87 | 0.88 | 0.87 | 0.87 | 0.89 |
+ | Grammar Correction (5-shot)       | BERTScore  | 0.96 | 0.94 | 0.98 | 0.95 | 0.98 | 0.96 | 0.96 | 0.97 |
+ | Multi Turn (0-shot)               | BERTScore  | 0.88 | 0.87 | 0.91 | 0.88 | 0.90 | 0.89 | 0.89 | 0.92 |
+ | Multi Turn Comprehension (0-shot) | BERTScore  | 0.90 | 0.89 | 0.92 | 0.92 | 0.93 | 0.91 | 0.91 | 0.94 |
+ | Multi Turn Translation (0-shot)   | BERTScore  | 0.85 | 0.87 | 0.92 | 0.89 | 0.91 | 0.90 | 0.91 | 0.92 |
+ | Text Classification (5-shot)      | Accuracy   | 0.61 | 0.71 | 0.76 | 0.72 | 0.88 | 0.82 | 0.86 | 0.89 |
+ | Named Entity Recognition (5-shot) | Accuracy   | 0.31 | 0.51 | 0.53 | 0.55 | 0.61 | 0.61 | 0.65 | 0.65 |
+
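
For reference, BERTScore and accuracy figures like those above can be computed with standard tooling. The sketch below uses the open-source `bert-score` package; the JSONL path, field names, language code, and example predictions are placeholders for illustration, not the actual BharatBench data or evaluation harness.

```python
# Minimal sketch of the two metrics reported above. Assumes the open-source
# `bert-score` package (pip install bert-score); the JSONL path, field names,
# language code, and example predictions are placeholders, not BharatBench data.
import json

from bert_score import score as bert_score


def load_pairs(path):
    """Load (model response, reference answer) pairs from a JSONL file."""
    responses, references = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            responses.append(row["model_response"])
            references.append(row["reference"])
    return responses, references


# BERTScore for generative tasks (e.g. Indian Cultural Context, Grammar Correction).
responses, references = load_pairs("bharatbench_cultural_context.jsonl")  # placeholder path
precision, recall, f1 = bert_score(responses, references, lang="hi")  # lang code is illustrative
print(f"BERTScore F1: {f1.mean().item():.2f}")

# Plain accuracy for classification-style tasks (Text Classification, NER).
predictions = ["positive", "negative", "positive"]  # placeholder model outputs
gold_labels = ["positive", "negative", "negative"]  # placeholder gold labels
accuracy = sum(p == g for p, g in zip(predictions, gold_labels)) / len(gold_labels)
print(f"Accuracy: {accuracy:.2f}")
```
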
  ### Qualitative Results
  Below are the results from manual evaluation of prompt-response pairs across languages and task categories. Scores are between 1 and 5 (higher is better). Model names were anonymised during the evaluation.