      High-resolution density assessment assisted by deep learning of Dendrophyllia cornigera (Lamarck, 1816) and Phakellia ventilabrum (Linnaeus, 1767) in rocky circalittoral shelf of Bay of Biscay

      Research article


          Abstract

          This study presents a novel approach to high-resolution density distribution mapping of two key species of the 1170 “Reefs” habitat, Dendrophyllia cornigera and Phakellia ventilabrum, in the Bay of Biscay using deep learning models. The main objective of this study was to establish a pipeline based on deep learning models to extract species density data from raw images obtained by a remotely operated towed vehicle (ROTV). Different object detection models were evaluated and compared in various shelf zones at the head of submarine canyon systems using metrics such as precision, recall, and F1 score. The best-performing model, YOLOv8, was selected for generating density maps of the two species at a high spatial resolution. The study also generated synthetic images to augment the training data and assess the generalization capacity of the models. The proposed approach provides a cost-effective and non-invasive method for monitoring and assessing the status of these important reef-building species and their habitats. The results have important implications for the management and protection of the 1170 habitat in Spain and other marine ecosystems worldwide. These results highlight the potential of deep learning to improve efficiency and accuracy in monitoring vulnerable marine ecosystems, allowing informed decisions to be made that can have a positive impact on marine conservation.
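The model comparison described above rests on standard detection metrics. As a hedged illustration (not the authors' code; the counts below are invented), precision, recall, and F1 score can be computed from the outcome of matching predicted boxes to annotations, and per-image counts can then be turned into a density estimate for mapping:

```python
# Illustrative sketch of the evaluation metrics used to compare object
# detection models, plus a density estimate from counts. TP/FP/FN counts
# are assumed to come from matching predictions to annotations at some
# IoU threshold; all numbers are hypothetical, not from the study.

def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Precision, recall, and F1 score from detection match counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

def density_per_m2(count: int, surveyed_area_m2: float) -> float:
    """Individuals per square metre for one image footprint."""
    return count / surveyed_area_m2

# Example: 80 correct detections, 20 spurious, 10 missed annotations.
m = detection_metrics(tp=80, fp=20, fn=10)
print(m)                      # precision 0.8, recall ~0.889, F1 ~0.842
print(density_per_m2(12, 4))  # 12 colonies in a 4 m^2 footprint -> 3.0
```

A model with the highest F1 on a held-out set would be the natural pick for the mapping stage, which matches the paper's use of F1 to select YOLOv8.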

          Related collections

          Most cited references (37)


          Deep learning.

          Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
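The core mechanism named in this abstract can be shown in miniature. Below is a toy, hypothetical example (not from the cited paper): backpropagation computes how each layer's parameters should change, using the representation produced by the previous layer, and a single gradient step reduces the loss:

```python
import numpy as np

# Toy two-layer network (dense -> ReLU -> dense) with squared-error loss.
# All shapes and values are illustrative.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # batch of 4 inputs, 3 features
y = rng.normal(size=(4, 2))          # targets
W1 = rng.normal(size=(3, 5)) * 0.1   # layer 1 weights
W2 = rng.normal(size=(5, 2)) * 0.1   # layer 2 weights

def forward(x, W1, W2):
    h = np.maximum(0.0, x @ W1)      # hidden representation (ReLU)
    return h, h @ W2                 # linear output layer

def loss(pred, y):
    return 0.5 * np.mean((pred - y) ** 2)

# One backpropagation pass: the error signal flows from the output layer
# back through the hidden representation to the first layer's weights.
h, pred = forward(x, W1, W2)
g_out = (pred - y) / y.size          # dL/dpred
gW2 = h.T @ g_out                    # gradient for layer 2
g_h = (g_out @ W2.T) * (h > 0)       # backprop through the ReLU
gW1 = x.T @ g_h                      # gradient for layer 1

lr = 0.5
before = loss(pred, y)
W1 -= lr * gW1
W2 -= lr * gW2
after = loss(forward(x, W1, W2)[1], y)
# After one gradient step, the loss on this batch decreases.
```

Stacking many such layers, each trained this way, is what gives deep models their multiple levels of abstraction.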

            Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.

            State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features. Using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on the PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In the ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
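Two building blocks behind proposal pipelines like the RPN can be sketched concisely. The following is a hedged, illustrative implementation (box values are invented): intersection-over-union to measure overlap, and greedy non-maximum suppression, which is how a dense set of scored proposals gets pruned to a short list such as the 300 proposals per image mentioned above:

```python
import numpy as np

# Boxes are [x1, y1, x2, y2] in pixel coordinates; values are illustrative.

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes: np.ndarray, scores: np.ndarray, thresh: float = 0.5) -> list:
    """Greedy NMS: keep the best-scoring box, drop overlapping rivals."""
    order = list(np.argsort(scores)[::-1])   # indices, best score first
    keep = []
    while order:
        best = int(order.pop(0))
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # -> [0, 2]: the two near-duplicate boxes collapse
```

The same pruning step also appears at the end of one-stage detectors such as the YOLO family used in the study above.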

              You Only Look Once: Unified, Real-Time Object Detection


                Author and article information

                Contributors
                Journal
                PeerJ
                PeerJ Inc. (San Diego, USA)
                2167-8359
                7 March 2024
                Volume 12: e17080
                Affiliations
                [1] Department of Animal Biology, Soil and Geology, University of La Laguna, San Cristóbal de La Laguna, Santa Cruz de Tenerife, Spain
                [2] Photonics Engineering Group, University of Cantabria, Santander, Cantabria, Spain
                [3] Santander Oceanographic Center, Spanish Institute of Oceanography (IEO-CSIC), Santander, Cantabria, Spain
                [4] Complutum Geographic Information Technologies (COMPLUTIG), Alcalá de Henares, Madrid, Spain
                Author information
                http://orcid.org/0000-0002-4626-3194
                Article
                17080
                DOI: 10.7717/peerj.17080
                PMCID: PMC10924775
                PMID: 38464748
                278afc20-40f0-4bb4-b235-7386deca9ffa
                © 2024 Gayá-Vilar et al.

                This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ) and either DOI or URL of the article must be cited.

                History
                Received: 6 November 2023
                Accepted: 19 February 2024
                Funding
                Funded by: LIFE IP INTEMARES Project
                Funded by: Biodiversity Foundation of the Ministry for the Ecological Transition and the Demographic Challenge
                Funded by: European Union’s LIFE Program
                Award ID: LIFE 15 IPE ES 012
                This research was carried out within the scope of the LIFE IP INTEMARES project, coordinated by the Biodiversity Foundation of the Ministry for the Ecological Transition and the Demographic Challenge, and funded by the European Union’s LIFE program (LIFE 15 IPE ES 012). No additional external funding was received for this study. Data collection: ROV images.
                Categories
                Conservation Biology
                Ecology
                Marine Biology
                Data Mining and Machine Learning

                Keywords: artificial intelligence, vulnerable marine ecosystem, habitat mapping, object detection model, Natura 2000 network
