Update constants.py
constants.py (+1 -1)
@@ -114,7 +114,7 @@ The CommonVoice Test provides a Word Error Rate (WER) within a 20-point margin o
 
 Moreover, it's worth noting that selecting the model with the lowest WER on CommonVoice aligns with choosing the model based on the lowest average WER. This approach proves effective for ranking the best-performing models with precision. However, it's essential to acknowledge that as the average WER increases, the spread of results becomes more pronounced. This can pose challenges in reliably identifying the worst-performing models. The test split size of CommonVoice for a given language is a crucial factor in this context, and it's worth considering. This insight highlights the need for a nuanced approach to ASR model selection, considering various factors, including dataset characteristics, to ensure a comprehensive evaluation of ASR model performance.
 
-Additionally, it
+Additionally, it has come to our attention that Nvidia's models, trained using NeMo with custom splits from common datasets, including Common Voice, may have had an advantage due to their familiarity with parts of the Common Voice test set. It's important to note that this highlights the need for greater transparency in data usage, as OpenAI itself does not publish the data they used for training. This could explain their strong performance in the results. Transparency in model training and dataset usage is crucial for fair comparisons in the ASR field and ensuring that results align with real-world scenarios.
 
 Custom splits and potential data leakage during training can indeed lead to misleading results, making it challenging to compare architectures accurately.
 
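The text above ranks models by Word Error Rate (WER). For reference, here is a minimal sketch of how WER itself is computed: word-level edit distance (substitutions + deletions + insertions) divided by the reference length. The function name and example strings are illustrative assumptions, not part of the leaderboard's constants.py.

```python
def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six reference words -> WER of 1/6.
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

Note that when hypotheses come from a model that has seen parts of the test set during training, as discussed above, this metric can look artificially low without reflecting real-world accuracy.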