      Is Open Access

      Multi-Input Convolutional Neural Network for Flower Grading

      Journal of Electrical and Computer Engineering
      Hindawi Limited



          Abstract

          Flower grading is a significant task because it greatly simplifies flower management in greenhouses and markets. With the development of computer vision, flower grading has become an interdisciplinary focus of both botany and computer vision. A new dataset named BjfuGloxinia contains three quality grades; each grade consists of 107 samples and 321 images. A multi-input convolutional neural network (CNN) is designed for large-scale flower grading. The multi-input CNN achieves a satisfactory accuracy of 89.6% on BjfuGloxinia after data augmentation. Compared with a single-input CNN, the accuracy of the multi-input CNN is higher by 5% on average, demonstrating that a multi-input convolutional neural network is a promising model for flower grading. Although data augmentation improves the model, accuracy is still limited by a lack of sample diversity. The majority of misclassifications come from the medium grade. Image-processing-based bud detection helps reduce these misclassifications, increasing flower grading accuracy to approximately 93.9%.
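          The core idea of a multi-input CNN, fusing features extracted from several images of the same specimen before classification, can be sketched as follows. This is a minimal NumPy illustration of the concept only, not the paper's architecture: the two views, kernel sizes, global average pooling, and random weights are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid 2-D convolution of an (H, W) image with (K, kh, kw) kernels."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def branch(img, kernels):
    """One input branch: conv -> ReLU -> global average pooling."""
    fmaps = np.maximum(conv2d(img, kernels), 0.0)
    return fmaps.mean(axis=(1, 2))  # one feature per kernel

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Two hypothetical views of the same flower (e.g. top view and side view).
top_view = rng.random((16, 16))
side_view = rng.random((16, 16))

# Each branch has its own (here randomly initialised) convolution kernels.
k_top, k_side = rng.random((4, 3, 3)), rng.random((4, 3, 3))

# Fuse the branch features by concatenation, then classify into 3 grades.
features = np.concatenate([branch(top_view, k_top), branch(side_view, k_side)])
W, b = rng.random((3, features.size)), np.zeros(3)
probs = softmax(W @ features + b)

print(features.shape, probs.shape, round(probs.sum(), 6))  # (8,) (3,) 1.0
```

          In a trained network the kernels and classifier weights would be learned jointly by backpropagation; the point here is only the fusion step, where feature vectors from independent input branches are concatenated before the final classifier.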


          Most cited references (6)


          Face recognition: a convolutional neural-network approach.

          We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
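            The SOM quantisation step described above can be sketched in a few lines. This is a toy NumPy illustration of the general technique, not the cited paper's implementation; the 4x4 grid, 9-dimensional patch vectors, and fixed learning rate and neighbourhood width are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy self-organizing map: quantise 3x3 image patches (9-D vectors) onto a
# 4x4 grid so that similar patches map to nearby grid nodes.
grid_h, grid_w, dim = 4, 4, 9
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)  # (4, 4, 2) node positions

def best_matching_unit(x):
    """Grid coordinates of the node whose weight vector is closest to x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(d.argmin(), d.shape)

def train_step(x, lr=0.5, sigma=1.0):
    """Pull the winning node and its grid neighbours toward the input."""
    global weights
    bmu = np.array(best_matching_unit(x))
    grid_dist = np.linalg.norm(coords - bmu, axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
    weights = weights + lr * h * (x - weights)

patches = rng.random((200, dim))  # stand-in for sampled image patches
for x in patches:
    train_step(x)

# After training, each 9-D patch is represented by the 2-D grid coordinates
# of its best-matching unit -- the dimensionality reduction the SOM provides.
print(best_matching_unit(patches[0]))
```

            The topology-preserving property comes from the neighbourhood function `h`: updating grid neighbours along with the winner means inputs that are close in patch space end up mapped to nearby grid coordinates.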

            Convolutional face finder: a neural architecture for fast and robust face detection


              Object Instance Segmentation and Fine-Grained Localization Using Hypercolumns


                Author and article information

                Journal: Journal of Electrical and Computer Engineering
                Publisher: Hindawi Limited
                ISSN: 2090-0147, 2090-0155
                Year: 2017
                Volume: 2017
                Pages: 1-8
                DOI: 10.1155/2017/9240407
                Article ID: e12aac56-9abb-410d-a80e-b353e6d45aff
                © 2017
                License: http://creativecommons.org/licenses/by/4.0/

