Rice detection

Real-time rice phenology detection using handheld camera images

Small farmers play an important role in the global food supply. As smartphones become more common, they enable small farmers to collect images at a very low cost.

In this study, the researchers proposed an effective deep convolutional neural network (DCNN) architecture to detect the development stage (DVS) of rice from photos taken with a handheld camera. The DCNN model was trained with different strategies and compared with the traditional time-series green chromatic coordinate (time-series Gcc) method and a support vector machine built on hand-extracted feature combinations (MF-SVM). The effects of shooting angle, model training strategy, and the interpretation of the DCNN predictions were also studied. The DCNN model trained with the proposed two-step fine-tuning strategy achieved the best results, with an overall accuracy of 0.913 and a low mean absolute error of 0.090.
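The post does not spell out the two-step fine-tuning strategy, so the sketch below only illustrates one common reading of it: first train a new classification head on top of a frozen, pretrained backbone, then unfreeze and fine-tune the whole network at a lower learning rate. The ResNet-18 backbone, 10-class head, and hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_STAGES = 10  # illustrative: one class per DVS group used in the study

# Start from an ImageNet-pretrained backbone (assumed; the backbone choice is illustrative)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_STAGES)

def train(model, loader, epochs, lr, params):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()

# Step 1: freeze the backbone, train only the new classification head
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
# train(model, train_loader, epochs=10, lr=1e-3, params=model.fc.parameters())

# Step 2: unfreeze everything and fine-tune end-to-end at a lower learning rate
for p in model.parameters():
    p.requires_grad = True
# train(model, train_loader, epochs=10, lr=1e-4, params=model.parameters())
```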

The results show that images taken at wide angles contain more valuable information, and that using images taken from multiple angles can further improve model performance. The two-step fine-tuning strategy greatly improves the model's robustness to viewpoint randomness. The interpretation results show that phenologically relevant features can be extracted from the images. This study provides a real-time phenology detection method based on handheld camera images and offers some important insights into applying deep learning in real-world scenarios.
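The post does not name the interpretation technique. One common way to check whether a CNN attends to phenologically meaningful regions is a Grad-CAM-style class activation map; the sketch below illustrates that generic idea and should not be read as the authors' method.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative Grad-CAM-style interpretation; an ImageNet-pretrained ResNet-18 stands in
# for the trained DVS classifier here.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

layer = model.layer4  # last convolutional block of ResNet-18
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

def grad_cam(image_tensor, target_class):
    """image_tensor: (1, 3, H, W) normalized image; returns an (H, W) heatmap in [0, 1]."""
    logits = model(image_tensor)
    model.zero_grad()
    logits[0, target_class].backward()
    acts, grads = activations["value"], gradients["value"]    # both (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)            # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))   # (1, 1, h, w)
    cam = F.interpolate(cam, size=image_tensor.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]
```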

The hypothesis of this study is that characteristics of crop phenotypes, traditionally identified by agricultural experts through observation, can be captured by machine learning from images. However, deep learning research on crop phenology detection is still very limited. Yalcin (Plant phenology recognition using deep learning: Deep-Pheno. In 2017 Sixth International Conference on Agro-Geoinformatics (pp. 1–5). https://doi.org/10.1109/Agro-Geoinformatics.2017.8046996) applied a DCNN to classify growth stages from fixed-angle images. Bai et al. (Rice heading stage automatic observation by multi-classifier cascade based rice spike detection method. Agricultural and Forest Meteorology, 259, 260–270. https://doi.org/10.1016/j.agrformet.2018.05.001) used a support vector machine and a DCNN to distinguish image patches containing rice panicles; the number of detected panicle patches then determines the heading stage of the rice.

The above two studies focused on images taken at fixed angles and positions, whereas small farmers capture images at random angles and positions. A general method is therefore needed to handle such random images. It would also be valuable to extract as much phenological information as possible from images taken at multiple angles, and to develop a training strategy that improves the performance of deep learning methods by reducing the impact of viewpoint uncertainty.

Study area

The experimental area (23° 5′ 52″–23° 7′ 23″ N, 108° 57′ 7″–108° 58′ 34″ E) is located in Binyang County, Guangxi, China. The area's 160 hectares of land are divided into more than 800 plots managed by local farmers. The average annual precipitation is approximately 1,600 mm and the average temperature is 21°C. Seventy plots were randomly selected, and 12 managed plots were used for analysis.

Research Framework

Data collection and processing

To use images taken at different viewing angles for phenological identification, four vertical angles between the photography direction and the direction of gravity were chosen: 0° (A), 20° (B), 40° (C) and 60° (D), as shown in Figure a below. In the study area, most plots were machine-transplanted in rows. When images are taken along the row direction, the soil between two rows of rice is clearly captured, whereas images taken in other directions capture less soil. To avoid this row-orientation effect, three horizontal angles between the photography direction and the sowing (row) direction were chosen: 0° (a), 45° (b) and 90° (c) (see Figure b). Twelve photos were taken for each observation, with the camera held roughly 1.5 meters above the ground and the viewing angle controlled manually. Local farmers collected images on the 70 plots seven times. A dataset of 622 observations containing 7,464 images was constructed, of which 7,320 images were used for analysis; the remaining images were of poor quality and discarded. The resulting 610 observations were divided into 10 groups according to DVS, and each group was further split into training (60%), validation (20%) and testing (20%) sets, as sketched below.
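The 60/20/20 split within each DVS group can be reproduced with a standard stratified split. The sketch below is a generic illustration using scikit-learn; the column names and placeholder labels are assumptions, not taken from the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Assumed layout: one row per observation with a 'dvs_group' column (10 DVS groups).
obs = pd.DataFrame({
    "obs_id": range(610),
    "dvs_group": [i % 10 for i in range(610)],  # placeholder labels, 10 groups
})

# 60% train, then split the remaining 40% evenly into validation and test,
# stratifying on the DVS group so each split keeps the same stage distribution.
train, rest = train_test_split(obs, train_size=0.6, stratify=obs["dvs_group"], random_state=0)
val, test = train_test_split(rest, train_size=0.5, stratify=rest["dvs_group"], random_state=0)

print(len(train), len(val), len(test))  # roughly 366 / 122 / 122 observations
```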

Data augmentation 

Data augmentation strategies

The cropping scheme of three datasets

The image above shows image blocks cropped from the original images. After cropping, the resulting distribution of the dataset is more even than the original distribution.
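The exact cropping scheme is not described here; the following is a minimal sketch of random block cropping as augmentation, where the crop size and per-image crop count are assumptions (under-represented DVS classes could be cropped more often to even out the distribution).

```python
import random
from PIL import Image

def random_crops(path, crop_size=512, n_crops=4):
    """Cut n_crops random square blocks out of one photo (sizes are illustrative)."""
    img = Image.open(path)
    w, h = img.size
    crops = []
    for _ in range(n_crops):
        left = random.randint(0, w - crop_size)
        top = random.randint(0, h - crop_size)
        crops.append(img.crop((left, top, left + crop_size, top + crop_size)))
    return crops

# n_crops can be set inversely proportional to class frequency so that rarer
# DVS classes contribute more cropped blocks to the training set.
```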

Deep convolutional neural network approach experimental results 

The table above shows the ACC/MAE of the time-series Gcc method. (a) Confusion matrix for the Bb set; (b) confusion matrix for the Dc set.
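Gcc itself is a standard vegetation index, Gcc = G / (R + G + B). A minimal per-image computation is sketched below (ROI selection and the time-series analysis step are omitted).

```python
import numpy as np
from PIL import Image

def mean_gcc(path):
    """Green chromatic coordinate: Gcc = G / (R + G + B), averaged over the image."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b
    gcc = np.divide(g, total, out=np.zeros_like(g), where=total > 0)
    return gcc[total > 0].mean()

# A time series of per-date Gcc values can then be analysed for stage transitions,
# which is roughly what the time-series Gcc baseline does.
```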

ACC/MAE of the MF-SVM method. (a)–(d) are results yielded by 18 features from one channel, 54 features from 3 channels, 108 features from 6 channels, and 270 features from 15 channels, respectively.
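The 18 hand-extracted features per channel are not listed in the post; the sketch below only illustrates the general MF-SVM pattern, with a few simple per-channel statistics standing in for the real feature set, so the specific statistics and feature counts are assumptions.

```python
import numpy as np
from PIL import Image
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def channel_features(path):
    """Simple per-channel statistics as stand-ins for the hand-extracted features."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    feats = []
    for c in range(3):  # R, G, B; other color spaces could supply further channels
        ch = rgb[..., c].ravel()
        feats += [ch.mean(), ch.std(), np.median(ch),
                  np.percentile(ch, 25), np.percentile(ch, 75), ch.max() - ch.min()]
    return np.array(feats)  # 6 statistics x 3 channels here; the paper uses 18 per channel

# X = np.stack([channel_features(p) for p in image_paths])  # image_paths: your own list
# y = np.array(stage_labels)                                 # DVS class per image
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X, y)
```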

Comparison of photos from different shooting angles. A, B, C and D in the figure above represent 0°, 20°, 40° and 60°, respectively.

Final Results:


Origin blog.csdn.net/qq_29788741/article/details/132997152