# NB-Whisper small (beta)
This is a **_public beta_** of the Norwegian NB-Whisper model released by the National Library of Norway. NB-Whisper is a series of models for automatic speech recognition (ASR) and speech translation, building upon the foundation laid by [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). All models are trained on 20,000 hours of labeled data.
## Model Details
NB-Whisper models will be available in five different sizes:

| Model Size | Parameters | Availability |
|------------|------------|--------------|
| tiny | 39M | _Will be released in public beta shortly_ |
| base | 74M | _Will be released in public beta shortly_ |
| small | 244M | This model, available in public beta |
| medium | 769M | _Will be released in public beta later this summer_ |
| large | 1550M | _Will be released in public beta later this summer_ |
- **Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** http://ai.nb.no/demo/nb-whisper
## Uses
### Downstream Use
For Norwegian transcriptions, we are confident that this public beta will give you state-of-the-art results compared to currently available Norwegian ASR models of the same size. However, the model is still known to show occasional hallucinations, as well as a tendency to drop parts of the transcript from time to time. Please also note that the transcripts are typically not word-for-word: spoken language and written language are often very different, and the model aims to "translate" spoken utterances into grammatically correct written sentences. We strongly believe that the best way to understand these models is to try them yourself.

A significant part of the training material comes from TV subtitles. Subtitles often shorten the content to make it easier to read, and non-essential parts of the utterance are typically dropped as well. In some cases this is a desired ability, in other cases it is undesired. The final release of these models will provide a mechanism to control this behaviour.
## Bias, Risks, and Limitations
This is a public beta that is not intended for production use. Production use without adequate assessment of risks and mitigation may be considered irresponsible or harmful. These models may exhibit bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## How to Get Started with the Model
```python
# 'text': ' Først så kan vi ta og henge dem kjemme, og så får vi gjøre vårt valget når vi kommer dit.'}]}
```
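
A minimal, self-contained sketch of a call that produces output of this shape, using the standard `transformers` ASR pipeline; the model id and the audio file name below are illustrative assumptions:

```python
from transformers import pipeline

# Load NB-Whisper small through the standard ASR pipeline.
# The model id is an assumption; adjust it to the actual repository name.
asr = pipeline("automatic-speech-recognition", model="NbAiLab/nb-whisper-small-beta")

# Transcribe a local audio file (placeholder name). `chunk_length_s` lets the
# pipeline process audio longer than Whisper's 30-second window, and
# `return_timestamps=True` returns the per-segment 'chunks' alongside 'text'.
prediction = asr("audio.mp3", chunk_length_s=30, return_timestamps=True)
print(prediction["text"])
```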
## Environmental Impact
- **Compute Region:** `us-central1`
- **Carbon Emitted:** Total emissions are estimated to be 247.77 kgCO₂eq, 100% of which was directly offset by the cloud provider.
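
The Machine Learning Impact calculator estimates training emissions as roughly power draw × hours used × regional carbon intensity. A minimal sketch of that arithmetic, with purely hypothetical placeholder figures rather than the actual training numbers:

```python
# Rough shape of the Machine Learning Impact calculator's estimate:
#   kgCO2eq = power draw (kW) * hours used * carbon intensity (kgCO2eq/kWh)
# All figures below are hypothetical placeholders, not the real training numbers.
power_draw_kw = 0.3     # hypothetical accelerator power draw
hours_used = 1000       # hypothetical total training time
carbon_intensity = 0.5  # hypothetical kgCO2eq per kWh for the compute region
emissions_kg = power_draw_kw * hours_used * carbon_intensity
print(f"Estimated emissions: {emissions_kg} kgCO2eq")  # 150.0 with these placeholders
```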
## Software
The model was trained using Jax/Flax and converted to PyTorch, whisper.cpp, and ONNX formats. Please let us know if you would like future models to be converted to other formats.
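
As an illustration, the converted checkpoints can be loaded in either framework through `transformers`; a minimal sketch, assuming the `NbAiLab/nb-whisper-small-beta` model id:

```python
from transformers import (
    FlaxWhisperForConditionalGeneration,  # original Jax/Flax weights
    WhisperForConditionalGeneration,      # converted PyTorch weights
)

# Model id assumed for illustration; adjust it to the actual repository name.
MODEL_ID = "NbAiLab/nb-whisper-small-beta"

pt_model = WhisperForConditionalGeneration.from_pretrained(MODEL_ID)
flax_model = FlaxWhisperForConditionalGeneration.from_pretrained(MODEL_ID)
```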
## Citation & Authors
This model was developed within the scope of the _NoSTram_ project, led by _Per Egil Kummervold_. The Jax code and training scripts were crafted by _Javier de la Rosa_, _Freddy Wetjen_, _Rolv-Arild Braaten_, and _Per Egil Kummervold_. Dataset curation was carried out by _Freddy Wetjen_, _Rolv-Arild Braaten_, and _Per Egil Kummervold_. Documentation was composed by _Javier de la Rosa_ and _Per Egil Kummervold_. The AiLab is under the direction of _Svein Arne Brygfjeld_. Each author contributed to the development and deliberations on the optimal way to train a Norwegian ASR model using Whisper. The work on this model was conducted as part of their professional roles at the National Library of Norway.

_A paper is coming soon!_

If you plan on using this model in your research, we strongly encourage you to cite the upcoming paper once it is available.
## Acknowledgements
Thanks to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for supporting this project with extensive training resources. Thanks to Google Cloud for supporting us with credits for translating large parts of the corpus. A special thanks to [Sanchit Gandhi](https://huggingface.co/sanchit-gandhi) for providing thorough technical advice on debugging and on getting the training to run on Google TPUs. A special thanks also to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus.
## Contact
We are releasing this ASR Whisper model as a public beta to garner constructive feedback on its performance. Please do not hesitate to contact us with any experiences, insights, or suggestions that you may have. Your input is invaluable in helping us to improve the model and ensure that it effectively serves the needs of users. Whether you have technical concerns, usability suggestions, or ideas for future enhancements, we welcome your input. Thank you for participating in this critical stage of our model's development.
<a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>