---
library_name: transformers
tags: []
---
# ConvNext (trained on XCL from BirdSet)
ConvNext trained on the XCL dataset from BirdSet, covering 9736 bird species from Xeno-Canto. Please refer to the [BirdSet Paper](https://arxiv.org/pdf/2403.10380) and the
[BirdSet Repository](https://github.com/DBD-research-group/BirdSet/tree/main) for further information.
## Model Details
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them.
## How to use
The model does not ship with a processor; BirdSet data requires the custom processor available in the BirdSet repository.
The model accepts a mono spectrogram image as input (e.g., `torch.Size([16, 1, 128, 343])`).
- The model is trained on 5-second clips of bird vocalizations.
- num_channels: 1
- pretrained checkpoint: facebook/convnext-base-224-22k
- sampling_rate: 32_000
- normalize spectrogram: mean: -4.268, std: 4.569 (from ESC-50)
- spectrogram: n_fft: 1024, hop_length: 320, power: 2.0
- melscale: n_mels: 128, n_stft: 513
- dbscale: top_db: 80
```python
import torch
from transformers import AutoModelForImageClassification
from datasets import load_dataset

# Load the HSN subset of BirdSet for evaluation
dataset = load_dataset("DBD-research-group/BirdSet", "HSN")

# Load this model from the Hub (replace <model-id> with this model's Hub id)
model = AutoModelForImageClassification.from_pretrained("<model-id>")
model.eval()
```
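To verify the expected input and output shapes without downloading any weights, a randomly initialized ConvNeXT can be built from a config with this card's dimensions (1 input channel, 9,736 species). This checks shapes only; the trained weights must be loaded from the Hub:

```python
import torch
from transformers import ConvNextConfig, ConvNextForImageClassification

# Randomly initialized stand-in with the card's dimensions
# (shape check only, not the trained BirdSet model)
config = ConvNextConfig(num_channels=1, num_labels=9736)
model = ConvNextForImageClassification(config)
model.eval()

spec = torch.randn(2, 1, 128, 343)  # batch of mono spectrograms
with torch.no_grad():
    logits = model(spec).logits
print(logits.shape)  # torch.Size([2, 9736])
```

The global average pooling in ConvNeXT makes the classifier agnostic to the time dimension, so spectrograms of varying length produce logits of the same shape.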
## Model Source
- **Repository:** [BirdSet Repository](https://github.com/DBD-research-group/BirdSet/tree/main)
- **Paper:** [BirdSet Paper](https://arxiv.org/pdf/2403.10380)
## Citation