RaphaelSchwinger committed
Commit cfa17bd · verified · 1 Parent(s): 27704b5

Add Colab link

Files changed (1): README.md (+3, -2)
README.md CHANGED
@@ -13,7 +13,7 @@ ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Visi
 
 ## How to use
 The BirdSet data needs a custom processor that is available in the BirdSet repository. The model does not have a processor available.
-The model accepts a mono image (spectrogram) as input (e.g., `torch.Size([16, 1, 128, 343])`)
+The model accepts a mono image (spectrogram) as input (e.g., `torch.Size([16, 1, 128, 334])`)
 
 - The model is trained on 5-second clips of bird vocalizations.
 - num_channels: 1
@@ -24,7 +24,8 @@ The model accepts a mono image (spectrogram) as input (e.g., `torch.Size([16, 1,
 - melscale: n_mels: 128, n_stft: 513
 - dbscale: top_db: 80
 
-See [example inference notebook](https://github.com/DBD-research-group/BirdSet/blob/main/notebooks/tutorials/model_inference.ipynb):
+See [example inference notebook](https://github.com/DBD-research-group/BirdSet/blob/main/notebooks/tutorials/model_inference.ipynb).
+Run in [Google Colab](https://colab.research.google.com/drive/1pp_RCJEjSR4gPBGFtxDdgnr4Uk1_KimU?usp=sharing):
 
 ```python
 from transformers import ConvNextForImageClassification
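
For context, below is a minimal, hypothetical sketch of how the preprocessing described in this README could be assembled with plain torchaudio transforms in place of the BirdSet processor. The 32 kHz sample rate, the 1024-point FFT (which yields the 513 STFT bins), the 480-sample hop length (chosen so a 5-second clip produces roughly 334 frames), and the `<model-repo-id>` placeholder are assumptions, not values taken from this repository; the linked notebook and Colab remain the authoritative reference.

```python
# Illustrative sketch only -- uses torchaudio transforms instead of the BirdSet
# processor. Assumed: 32 kHz sample rate, n_fft=1024 (=> 1024 // 2 + 1 = 513 STFT
# bins), hop_length=480 (=> ~334 frames per 5 s clip), "<model-repo-id>" placeholder.
import torch
import torchaudio
from transformers import ConvNextForImageClassification

SAMPLE_RATE = 32_000          # assumed sample rate
CLIP_SECONDS = 5              # the model is trained on 5-second clips

# Mel spectrogram with the parameters listed in the README
# (n_mels: 128, n_stft: 513, dbscale with top_db: 80).
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE,
    n_fft=1024,               # 513 frequency bins before mel scaling
    hop_length=480,           # assumed; picked so a 5 s clip gives ~334 frames
    n_mels=128,
)
to_db = torchaudio.transforms.AmplitudeToDB(stype="power", top_db=80)

model = ConvNextForImageClassification.from_pretrained("<model-repo-id>")  # placeholder id
model.eval()

# Dummy batch of 16 mono 5-second waveforms -> input of shape [16, 1, 128, 334]
waveforms = torch.randn(16, 1, SAMPLE_RATE * CLIP_SECONDS)
spectrograms = to_db(mel(waveforms))

with torch.no_grad():
    logits = model(pixel_values=spectrograms).logits  # shape [16, num_labels]
```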