Update README.md
Added reference to training and validation image lists.
README.md CHANGED
```diff
@@ -19,7 +19,7 @@ This model takes in an image of a fish and segments out traits, as described [be
 See [github.com/Cadene/pretrained-models.pytorch#resnext](https://github.com/Cadene/pretrained-models.pytorch#resnext) for documentation about the source.
 
 The segmentation model was first trained on ImageNet ([Deng et al., 2009](10.1109/CVPR.2009.5206848)), and then the model was fine-tuned on a specific set of image data relevant to the domain: [Illinois Natural History Survey Fish Collection](https://fish.inhs.illinois.edu/) (INHS Fish).
-The Feature Pyramid Network (FPN) architecture was used for fine-tuning, since it is a CNN-based architecture designed to handle multi-scale feature maps (Lin et al., 2017: [IEEE]
+The Feature Pyramid Network (FPN) architecture was used for fine-tuning, since it is a CNN-based architecture designed to handle multi-scale feature maps (Lin et al., 2017: [IEEE](10.1109/CVPR.2017.106), [arXiv](arXiv:1612.03144)).
 The FPN uses SE-ResNeXt as the base network (Hu et al., 2018: [IEEE](10.1109/CVPR.2018.00745), [arXiv](arXiv:1709.01507)).
 
 
```
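For context, the hunk above describes an FPN segmentation head on an SE-ResNeXt encoder pretrained on ImageNet. A minimal sketch of that configuration is shown below; it assumes the `segmentation_models_pytorch` package and the `se_resnext50_32x4d` encoder variant, neither of which is stated in this diff, so treat it as an illustration rather than the authors' training code.

```python
# Minimal sketch (assumed, not the authors' code): an FPN segmentation model
# with an SE-ResNeXt encoder pretrained on ImageNet, using the
# segmentation_models_pytorch package.
import segmentation_models_pytorch as smp

model = smp.FPN(
    encoder_name="se_resnext50_32x4d",  # SE-ResNeXt base network (Hu et al., 2018); exact variant is an assumption
    encoder_weights="imagenet",         # ImageNet pretraining (Deng et al., 2009)
    in_channels=3,                      # RGB fish images
    classes=12,                         # one output channel per trait mask (see the second hunk)
)
```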
```diff
@@ -110,12 +110,15 @@ The image data were annotated using [SlicerMorph](https://slicermorph.github.io/
 <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
 <!--We only have 295 training images, 99 testing images, and 98 validation images. -->
 To increase the size and diversity of the training dataset (originally 295 images), we employed data augmentation techniques such as flipping, shifting, rotating, scaling, and adding noise to the original image data to increase the dataset 10-fold.
 We developed 12 target classes, or trait masks, for our segmentation problem, each representing different morphological traits of a fish specimen.
 The segmentation classes are: dorsal fin, adipose fin, caudal fin, anal fin, pelvic fin, pectoral fin, head minus the eye, eye, caudal fin-ray, alt fin-ray, alt fin-spine, and trunk.
 Although minnows do not have adipose fins, the segmentation model was trained on a variety of fish image data, some of which had adipose fins.
 We retained this class because the segmentation model may erroneously assign an adipose fin to a minnow (Fig. S1), and a domain scientist examining these outputs may want to analyze the accuracy of the model.
 
+The training dataset utilized the image files listed in [training_dataset_INHS.txt](https://huggingface.co/imageomics/BGNN-trait-segmentation/blob/main/training_dataset_INHS.txt).
+
+The validation dataset utilized the image files listed in [validation_dataset_INHS.txt](https://huggingface.co/imageomics/BGNN-trait-segmentation/blob/main/validation_dataset_INHS.txt).
 
 ### Training Procedure
 
```
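The hunk above mentions flipping, shifting, rotating, scaling, and noise augmentation used to grow the 295 training images roughly 10-fold. A minimal sketch of such a pipeline is given below; it assumes the `albumentations` library, which is not named in the card, and is only an illustration of the listed transformations applied consistently to image and mask.

```python
# Minimal sketch (assumed, not the authors' pipeline): produce ~10 variants per
# image with albumentations, mirroring the augmentations named in the card.
import albumentations as A
import numpy as np

augment = A.Compose([
    A.HorizontalFlip(p=0.5),                                                    # flipping
    A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.1, rotate_limit=15, p=0.7),  # shifting, scaling, rotating
    A.GaussNoise(p=0.3),                                                        # adding noise
])

def tenfold(image: np.ndarray, mask: np.ndarray, copies: int = 10):
    """Yield the original (image, mask) pair plus `copies - 1` augmented versions."""
    yield image, mask
    for _ in range(copies - 1):
        out = augment(image=image, mask=mask)  # the mask is transformed consistently with the image
        yield out["image"], out["mask"]
```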
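The added lines point to the published training and validation image lists. A small sketch for fetching them is shown below; it assumes each `.txt` file holds one image filename per line (the format is not specified in the diff) and uses the repository's standard `resolve` download URLs.

```python
# Minimal sketch: download the published train/validation image lists from the
# model repository. Assumes one image filename per line.
import urllib.request

BASE = "https://huggingface.co/imageomics/BGNN-trait-segmentation/resolve/main"

def read_image_list(filename: str) -> list[str]:
    with urllib.request.urlopen(f"{BASE}/{filename}") as resp:
        return [ln.strip() for ln in resp.read().decode("utf-8").splitlines() if ln.strip()]

train_files = read_image_list("training_dataset_INHS.txt")
val_files = read_image_list("validation_dataset_INHS.txt")
print(len(train_files), len(val_files))  # the card's comments suggest 295 training and 98 validation images
```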