
      Accurate and fast implementation of soybean pod counting and localization from high-resolution image

      research-article


          Abstract

          Introduction

          Soybean pod count is one of the crucial indicators of soybean yield. However, because of the challenges associated with counting pods, such as crowded and unevenly distributed pods, existing pod counting models prioritize accuracy over efficiency and therefore fail to meet the requirements of lightweight, real-time tasks.

          Methods

          To address these challenges, we designed a deep convolutional network called PodNet. It employs a lightweight encoder and an efficient decoder that effectively decodes both shallow and deep information, alleviating the indirect interactions caused by information loss and degradation between non-adjacent levels.
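The idea of letting deep, coarse features reach the output directly alongside shallow, fine features can be caricatured in a few lines of NumPy. This is a toy sketch under stated assumptions, not the paper's actual PodNet architecture: the function names, the 4x downsampling depth, and the simple averaging fusion are all illustrative choices, and real feature maps would come from learned convolutions rather than pooling.

```python
import numpy as np

def avg_pool2(x):
    """2x average pooling: stands in for one 'encoder' downsampling stage."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour 2x upsampling: stands in for one 'decoder' stage."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fused_count(image):
    """Fuse a shallow (full-resolution) map with a deep (4x-downsampled)
    map in a single decoding step, so coarse context reaches the output
    without passing through every intermediate level; the count is the
    integral of the fused density map."""
    shallow = image                                  # fine detail
    deep = avg_pool2(avg_pool2(image))               # coarse context
    fused = 0.5 * shallow + 0.5 * upsample2(upsample2(deep))
    return float(fused.sum())
```

The point of the sketch is only the wiring: the deep path is upsampled and merged with the shallow path in one step, rather than being relayed level by level.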

          Results

          We used a high-resolution dataset of soybean pods from field harvesting to evaluate the model's generalization ability, and confirmed the effectiveness of PodNet through experimental comparisons between manual counting and model yield estimation. The experimental results indicate that PodNet achieves an R² of 0.95 for predicted soybean pod quantities against ground truth, with only 2.48M parameters, an order of magnitude fewer than the current SOTA model YOLO POD, while running at a much higher FPS.
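The R² reported above is the standard coefficient of determination between manual counts and model-predicted counts. As a quick illustration of how such a score is computed (the per-plant counts below are hypothetical, not taken from the paper):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination between ground-truth and predicted counts."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# hypothetical pod counts for four plants, illustration only
manual    = [120, 95, 140, 80]
predicted = [118, 99, 135, 84]
print(round(r_squared(manual, predicted), 3))
```

An R² of 1.0 would mean the predictions match the manual counts exactly; values near 0.95 indicate the model explains almost all of the variance in the manual counts.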

          Discussion

          Compared to advanced computer vision methods, PodNet significantly enhances efficiency with almost no sacrifice in accuracy. Its lightweight architecture and high FPS make it suitable for real-time applications, providing a new solution for counting and locating dense objects.


                Author and article information

                Contributors
                URI: https://loop.frontiersin.org/people/1886836
                URI: https://loop.frontiersin.org/people/2541041
                Journal
                Front. Plant Sci. (Frontiers in Plant Science)
                Frontiers Media S.A.
                ISSN: 1664-462X
                Published: 20 February 2024
                Volume: 15
                Article ID: 1320109
                Affiliations
                [1] College of Robotics, Guangdong Polytechnic of Science and Technology, Zhuhai, China
                [2] Department of Network Technology, Guangzhou Institute of Software Engineering, Conghua, China
                [3] School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
                [4] College of Business, Guangzhou College of Technology and Business, Foshan, China
                [5] School of Mechanical Engineering, Guangxi University, Nanning, China
                Author notes

                Edited by: Gregorio Egea, University of Seville, Spain

                Reviewed by: Zejun Zhang, Zhejiang Normal University, China

                Fenmei Wang, University of Science and Technology of China, China

                Chenqiang Gao, Chongqing University of Posts and Telecommunications, China

                *Correspondence: Zhenghong Yu, honger1983@gmail.com; Jianxiong Ye, jxye59720@gmail.com

                †These authors have contributed equally to this work and share first authorship

                Article
                DOI: 10.3389/fpls.2024.1320109
                PMCID: 10913015
                PMID: 38444529
                Copyright © 2024 Yu, Wang, Ye, Liufu, Lu, Zhu, Yang and Tan

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 11 October 2023
                Accepted: 30 January 2024
                Page count
                Figures: 6, Tables: 4, Equations: 22, References: 38, Pages: 13, Words: 7597
                Funding
                The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported in part by 2022 key scientific research project of ordinary universities in Guangdong Province under Grant 2022ZDZX4075, in part by 2022 Guangdong province ordinary universities characteristic innovation project under Grant 2022KTSCX251, in part by the Collaborative Intelligent Robot Production & Education Integrates Innovative Application Platform Based on the Industrial Internet under Grant 2020CJPT004, in part by 2020 Guangdong Rural Science and Technology Mission Project under Grant KTP20200153, in part by the Engineering Research Centre for Intelligent equipment manufacturing under Grant 2021GCZX018, in part by GDPST&DOBOT Collaborative Innovation Centre under Grant K01057060 and in part by Innovation Project of Guangxi Graduate Education under Grant YCSW2022081.
                Categories
                Plant Science
                Original Research
                Custom metadata
                Technical Advances in Plant Science

                Plant science & Botany
                soybean pod, convolutional network, computer vision, counting and locating, dense objects
