nanmao committed · verified
Commit ac5dedd · Parent(s): 4243607

Update README.md

Files changed (1)
  1. README.md +0 -4
README.md CHANGED
@@ -8,7 +8,6 @@ WalnutData
 
 As UAV technology matures, it can provide powerful support for smart agriculture and precision monitoring. However, there is currently no green-walnut dataset in the field of agricultural computer vision. To promote algorithm design in this field, we used a UAV to collect remote-sensing data from 8 walnut sample plots. Because green walnuts appear under varied lighting conditions and are frequently occluded, we constructed a large-scale, fine-grained dataset: WalnutData. The dataset contains 30,240 images and 7,062,080 instances across 4 target categories: illuminated from the front and not occluded (A1), backlit and not occluded (A2), illuminated from the front and occluded (B1), and backlit and occluded (B2). We provide labels in three formats (VOC, COCO, and YOLO), which are compatible with most mainstream object detection models. We then evaluated many mainstream algorithms on WalnutData and report these results as baselines. The WalnutData paper is available at **https://doi.org/10.48550/arXiv.2502.20092**.
 
- Examples of the categories in WalnutData are shown in the following figure. (a) represents category A1, that is, the green walnuts are illuminated from the front and not occluded. (b) represents category A2, that is, the green walnuts are backlit and not occluded. (c) represents category B1, that is, the green walnuts are illuminated from the front and occluded. (d) represents category B2, that is, the green walnuts are backlit and occluded.
 
 <div align="center">
 
@@ -45,9 +44,6 @@ The following table shows the detailed information of WalnutData.
 ## 3.1 Grayscale Value Analysis
 The average grayscale values of the training, validation, and test sets are 107.316, 108.048, and 107.544, respectively, and the proportions of values below the middle grayscale value of 127.5 are 76.31%, 75.59%, and 75.81%, respectively. This indicates that most green walnuts in WalnutData are backlit or occluded by leaves in relatively dark areas.
 
- The following figure shows the distribution of the average grayscale values of WalnutData.
-
- ![image](https://github.com/1wuming/WalnutData/blob/WalnutData/README_IMAGES/Grayscale%20Value%20Statistics.jpg)
 
 ## 3.2 Distribution of Category Instances in WalnutData
 The training, validation, and test sets are split 7:2:1, with 21,167, 6,048, and 3,025 images respectively. In addition, we arranged the per-category instance counts to keep their distributions as similar and balanced as possible across the splits.
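
The README excerpt above notes that labels are provided in VOC, COCO, and YOLO formats for the four categories A1, A2, B1, and B2. Below is a minimal sketch of reading YOLO-format labels, assuming the conventional `class x_center y_center width height` layout and a 0–3 → A1–B2 class-index mapping; the mapping and file paths are assumptions, not confirmed by this commit.

```python
# Minimal sketch: parse one YOLO-format label file from WalnutData.
# Assumptions (not confirmed by the commit): class indices 0-3 map to
# A1, A2, B1, B2, and each line reads "class x_center y_center width height"
# with coordinates normalized to [0, 1].
CLASS_NAMES = {0: "A1", 1: "A2", 2: "B1", 3: "B2"}  # assumed mapping

def read_yolo_labels(path):
    boxes = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 5:
                continue  # skip malformed lines
            cls = int(parts[0])
            xc, yc, w, h = map(float, parts[1:])
            boxes.append((CLASS_NAMES.get(cls, str(cls)), xc, yc, w, h))
    return boxes

# Hypothetical usage:
# for name, xc, yc, w, h in read_yolo_labels("labels/train/0001.txt"):
#     print(name, xc, yc, w, h)
```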
 
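Section 3.1 reports per-split average grayscale values and the share of values below the midpoint 127.5. The README does not specify whether the statistic is computed per pixel or per image; a minimal per-pixel sketch, assuming JPEG images under a split directory (the paths and layout are assumptions), could look like this:

```python
# Sketch of the Section 3.1 statistics: mean grayscale value and the fraction
# of pixel values below the midpoint 127.5, computed over one split.
# The directory layout and per-pixel interpretation are assumptions.
from pathlib import Path

import numpy as np
from PIL import Image

def grayscale_stats(image_dir):
    total, count, below = 0.0, 0, 0
    for path in Path(image_dir).glob("*.jpg"):
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        total += gray.sum()
        count += gray.size
        below += int((gray < 127.5).sum())
    return total / count, below / count

# Hypothetical usage:
# mean, frac_below = grayscale_stats("WalnutData/images/train")
# The README reports means of roughly 107-108 and about 76% of values below 127.5.
```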
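
Section 3.2's 7:2:1 split can be sanity-checked against the reported image counts; the small deviations from the exact ratio come from rounding to whole images.

```python
# Check that the reported per-split image counts sum to 30,240 and roughly
# match the stated 7:2:1 ratio.
reported = {"train": 21_167, "val": 6_048, "test": 3_025}
total = sum(reported.values())
assert total == 30_240

for name, ratio in [("train", 0.7), ("val", 0.2), ("test", 0.1)]:
    print(f"{name}: reported {reported[name]}, expected ~{round(total * ratio)}")
# train: reported 21167, expected ~21168
# val: reported 6048, expected ~6048
# test: reported 3025, expected ~3024
```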