
      High-throughput and separating-free phenotyping method for on-panicle rice grains based on deep learning

      methods-article


          Abstract

          Rice is a vital food crop that feeds most of the global population. Cultivating high-yielding, superior-quality rice varieties has always been a critical research direction, and grain-related traits serve as crucial phenotypic evidence for assessing yield potential and quality. However, the analysis of rice grain traits still relies mainly on manual counting or various seed evaluation devices, both of which are costly in time and money. This study proposed a high-precision phenotyping method for rice panicles based on visible light scanning imaging and deep learning, which achieves high-throughput extraction of critical panicle traits without separating or threshing the panicles. Imaging of the rice panicles was realized through visible light scanning. Grains were detected and segmented using a Faster R-CNN-based model, and an improved Pix2Pix model cascaded with it compensated for the information loss caused by natural occlusion between grains. An image processing pipeline was designed to calculate fifteen phenotypic traits of the on-panicle rice grains. Eight rice varieties were used to verify the reliability of the method. The R² values between the method's extractions and manual measurements of grain number, grain length, grain width, grain length/width ratio, and grain perimeter were 0.99, 0.96, 0.83, 0.90, and 0.84, respectively; the corresponding mean absolute percentage error (MAPE) values were 1.65%, 7.15%, 5.76%, 9.13%, and 6.51%. The average imaging time per panicle was about 60 seconds, and data processing plus trait extraction took less than 10 seconds in total. By randomly selecting one thousand grains from each of the eight varieties and analyzing their traits, certain differences were found between varieties in the distributions of thousand-grain length, thousand-grain width, and thousand-grain length/width ratio.
The results show that this method is suitable for high-throughput, non-destructive, and high-precision extraction of on-panicle grain traits without separation. Its low cost and robust performance make it easy to popularize. These results provide new ideas and methods for extracting panicle traits of rice and other crops.
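As a rough illustration of the validation reported above, the two agreement metrics (R² and MAPE) between method extractions and manual measurements can be computed as below. This is a minimal sketch, not the study's code, and the grain-length numbers are hypothetical, not the paper's data.

```python
def r_squared(manual, extracted):
    """Coefficient of determination between manual and extracted trait values."""
    mean_m = sum(manual) / len(manual)
    ss_res = sum((m - e) ** 2 for m, e in zip(manual, extracted))
    ss_tot = sum((m - mean_m) ** 2 for m in manual)
    return 1.0 - ss_res / ss_tot

def mape(manual, extracted):
    """Mean absolute percentage error, in percent, against the manual values."""
    n = len(manual)
    return 100.0 / n * sum(abs((m - e) / m) for m, e in zip(manual, extracted))

# Hypothetical grain-length measurements (mm) for five grains
manual    = [8.1, 7.9, 8.4, 8.0, 7.6]
extracted = [8.0, 8.1, 8.3, 7.8, 7.7]
print(round(r_squared(manual, extracted), 3))  # → 0.676
print(round(mape(manual, extracted), 2))       # → 1.75
```

With many grains and small per-grain errors, R² approaches the high values (0.83–0.99) reported in the abstract.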


          Most cited references (30)


          Deep Residual Learning for Image Recognition


            Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.

            State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features; using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on the PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In the ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
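The RPN described above predicts objectness and box offsets for a set of reference "anchors" tiled over every feature-map position. A minimal, dependency-free sketch of that anchor enumeration is shown below; the default scales and aspect ratios match the k = 9 anchors per position used in the paper, but the tiny feature-map size and stride here are illustrative only.

```python
def generate_anchors(feat_h, feat_w, stride,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Enumerate RPN anchor boxes (x1, y1, x2, y2) in image coordinates.

    Each of the feat_h * feat_w feature-map positions gets
    len(scales) * len(ratios) anchors (k = 9 with these defaults).
    """
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            # anchor centre: middle of the receptive-field cell in the image
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for scale in scales:
                for ratio in ratios:
                    # vary aspect ratio while keeping the area ~ scale ** 2
                    w = scale * ratio ** 0.5
                    h = scale / ratio ** 0.5
                    anchors.append((cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2))
    return anchors

anchors = generate_anchors(feat_h=2, feat_w=3, stride=16)
print(len(anchors))  # → 54, i.e. 2 * 3 positions × 9 anchors each
```

The RPN's two sibling heads then score each anchor for objectness and regress it toward the nearest ground-truth box; only the top-scoring proposals (e.g. 300 per image, as quoted above) are passed on to the Fast R-CNN detector.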

              Feature Pyramid Networks for Object Detection


                Author and article information

                Contributors
                Journal: Frontiers in Plant Science (Front. Plant Sci.), Frontiers Media S.A.
                ISSN: 1664-462X
                Published: 18 September 2023
                Volume 14, article 1219584
                Affiliations
                1. Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, China
                2. Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, China
                3. MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, Hubei, China
                4. Department of Physics, School of Science, Hainan University, Haikou, China
                Author notes

                Edited by: Jeffrey Too Chuan Tan, Genovasi University College, Malaysia

                Reviewed by: Jiong Mu, Sichuan Agricultural University, China; Md Nashir Uddin, The University of Tokyo, Japan

                *Correspondence: Lejun Yu, yulj@hainanu.edu.cn; Qian Liu, qliu@hainanu.edu.cn

                †These authors have contributed equally to this work

                Article
                DOI: 10.3389/fpls.2023.1219584
                PMCID: 10544938
                PMID: 37790779
                Copyright © 2023 Lu, Wang, Fu, Yu and Liu

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 09 May 2023
                Accepted: 28 August 2023
                Page count
                Figures: 11, Tables: 3, Equations: 5, References: 31, Pages: 12, Words: 5257
                Funding
                This work was supported by Hainan Yazhou Bay Seed Lab (B21HJ0904), Sanya Yazhou Bay Science and Technology City (SCKJ-JYRC-2023-25), Hainan Provincial Natural Science Foundation of China (322MS029).
                Categories
                Plant Science
                Methods
                Custom metadata
                Technical Advances in Plant Science

                Plant science & Botany
                rice, rice panicle traits, high-throughput phenotyping, visible light scanning, deep learning
