
      C3D-ConvLSTM based cow behaviour classification using video data for precision livestock farming

      Computers and Electronics in Agriculture
      Elsevier BV


          Most cited references (44)


          Rethinking the Inception Architecture for Computer Vision


            Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

            Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average and outperforms some previously reported results by up to 9%. The results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights about their optimisation.
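            The combination described in this abstract (convolutional feature extractors followed by LSTM recurrent layers over a window of sensor data) can be sketched roughly as follows. This is only an illustrative PyTorch sketch, not the authors' implementation; the channel count, class count, and layer sizes are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class ConvLSTMClassifier(nn.Module):
    """Sketch of a convolutional + LSTM network for multichannel
    sensor sequences (illustrative layer sizes, not from the paper)."""

    def __init__(self, n_channels=113, n_classes=18,
                 conv_filters=64, lstm_units=128):
        super().__init__()
        # 1-D convolutions over time extract local features from the raw channels.
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, conv_filters, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM layers model the temporal dynamics of the feature activations.
        self.lstm = nn.LSTM(conv_filters, lstm_units,
                            num_layers=2, batch_first=True)
        self.classifier = nn.Linear(lstm_units, n_classes)

    def forward(self, x):
        # x: (batch, time, channels); Conv1d expects (batch, channels, time).
        x = self.features(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(x)               # (batch, time, lstm_units)
        return self.classifier(out[:, -1])  # classify from the last time step

# Example: a batch of 8 windows, 24 time steps, 113 sensor channels.
model = ConvLSTMClassifier()
logits = model(torch.randn(8, 24, 113))     # -> shape (8, 18)
```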

              Multivariate LSTM-FCNs for time series classification


                Author and article information

                Journal: Computers and Electronics in Agriculture
                Publisher: Elsevier BV
                ISSN: 0168-1699
                Publication date: February 2022
                Volume: 193
                Article number: 106650
                DOI: 10.1016/j.compag.2021.106650
                Copyright: © 2022
                License: https://www.elsevier.com/tdm/userlicense/1.0/
