      BdSLW-11: Dataset of Bangladeshi sign language words for recognizing 11 daily useful BdSL words

      data-paper

          Abstract

Datasets of Bangladeshi sign language words (BdSLW) are rare. Although many datasets exist for BdSL sign alphabets, numbers, or characters, there are not enough datasets of sign words; to the authors' knowledge, this is the first dataset of BdSL sign words. The dataset was developed by collecting data directly from sign language users and consists of 1105 images of sign words. A total of 11 sign-word categories were selected for their importance in daily life. The images were captured with a camera from sign language users in Bangladesh: the authors visited individual users and photographed them with their permission. The images were then analyzed and segmented, keeping only those of sufficient quality (e.g., plain background, clear, and bright). This dataset can be used for recognizing BdSL sign words.
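The task described above is an 11-class image classification problem. The sketch below shows how a dataset of this kind might be loaded and used to train a baseline classifier. It is illustrative only: the directory name BdSLW-11, the folder-per-class layout, the 80/20 split, and the choice of ResNet-18 are assumptions for the example, not details given in the data paper.

# Minimal sketch (not from the paper): train a small classifier on a
# BdSLW-11-style image dataset. Assumes the images are organized as one
# sub-folder per sign-word class, e.g. BdSLW-11/<class_name>/*.jpg;
# the actual archive layout may differ.
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

DATA_DIR = "BdSLW-11"   # hypothetical path to the extracted dataset
NUM_CLASSES = 11        # 11 daily-use BdSL sign words

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder infers the 11 class labels from the sub-folder names.
full_set = datasets.ImageFolder(DATA_DIR, transform=transform)
n_train = int(0.8 * len(full_set))
train_set, val_set = random_split(full_set, [n_train, len(full_set) - n_train])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# A small off-the-shelf CNN; the paper does not prescribe a model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Quick validation accuracy after each epoch.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    print(f"epoch {epoch + 1}: val accuracy {correct / total:.3f}")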


Most cited references (9)


          A survey on Image Data Augmentation for Deep Learning


            Data-Driven Structural Health Monitoring and Damage Detection through Deep Learning: State-of-the-Art Review

Data-driven methods in structural health monitoring (SHM) are gaining popularity due to recent technological advancements in sensors, as well as high-speed internet and cloud-based computation. Since the introduction of deep learning (DL) in civil engineering, particularly in SHM, this emerging and promising tool has attracted significant attention among researchers. The main goal of this paper is to review the latest publications in SHM using emerging DL-based methods and provide readers with an overall understanding of various SHM applications. After a brief introduction, an overview of various DL methods (e.g., deep neural networks, transfer learning) is presented. The procedures and applications of vibration-based and vision-based monitoring, along with some of the recent technologies used for SHM, such as sensors and unmanned aerial vehicles (UAVs), are discussed. The review concludes with prospects and potential limitations of DL-based methods in SHM applications.

              CNN Based on Transfer Learning Models Using Data Augmentation and Transformation for Detection of Concrete Crack

Cracks in concrete cause initial structural damage to civil infrastructures such as buildings, bridges, and highways, which in turn causes further damage and is thus regarded as a serious safety concern. Early detection can assist in preventing further damage and can improve safety by avoiding accidents during the use of those infrastructures. Machine learning-based detection is gaining favor over time-consuming classical detection approaches, as it can better fulfill the objective of early detection. To identify concrete surface cracks from images, this research developed a transfer learning (TL) approach based on Convolutional Neural Networks (CNNs). This work employs the transfer learning strategy by leveraging four existing deep learning (DL) models, namely VGG16, ResNet18, DenseNet161, and AlexNet, with weights pre-trained on ImageNet. To validate the performance of each model, four performance indicators are used: accuracy, recall, precision, and F1-score. Using the publicly available CCIC dataset, the suggested technique with AlexNet outperforms existing models with a testing accuracy of 99.90%, precision of 99.92%, recall of 99.80%, and F1-score of 99.86% for the crack class. Our approach is further validated on an external dataset, BWCI, available on Kaggle, where VGG16, ResNet18, DenseNet161, and AlexNet achieve accuracies of 99.90%, 99.60%, 99.80%, and 99.90%, respectively. The proposed CNN-based transfer learning method is demonstrated to be effective at detecting cracks in concrete structures and is also applicable to other detection tasks.
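As an illustration of the transfer-learning recipe summarized above (a pretrained backbone with a replaced classification head, evaluated by accuracy, precision, recall, and F1-score), the sketch below uses an ImageNet-pretrained AlexNet from torchvision for binary crack / no-crack classification. The crack_data/ paths, the frozen backbone, and the hyperparameters are assumptions for the example, not the authors' exact setup.

# Minimal transfer-learning sketch: reuse an ImageNet-pretrained AlexNet
# and replace its final layer for binary crack / no-crack classification.
# Paths and hyperparameters are illustrative assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder-per-class layout: crack_data/train/{crack,no_crack}/...
train_set = datasets.ImageFolder("crack_data/train", transform=transform)
test_set = datasets.ImageFolder("crack_data/test", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.parameters():               # freeze the pretrained feature extractor
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 2)   # new head: crack vs. no crack

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in train_loader:
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()

# Evaluate with the four indicators named in the abstract.
model.eval()
y_true, y_pred = [], []
with torch.no_grad():
    for images, labels in test_loader:
        y_pred += model(images).argmax(dim=1).tolist()
        y_true += labels.tolist()

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f} f1={f1:.4f}")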

                Author and article information

Journal
Data in Brief (Data Brief)
Publisher: Elsevier
ISSN: 2352-3409
Published online: 13 November 2022
Issue: December 2022
Volume: 45
Article number: 108747
                Affiliations
[a] Department of Computer Science and Engineering, University of Information Technology and Sciences (UITS), Dhaka 1212, Bangladesh
[b] Department of Computer Science and Engineering, Bangladesh University of Business and Technology (BUBT), Mirpur, Dhaka 1216, Bangladesh
[c] Department of Computer Science and Engineering, Atish Dipankar University of Science & Technology, Dhaka 1230, Bangladesh
[d] Department of Computer Science and Engineering, Dhaka University of Engineering & Technology, Gazipur 1707, Bangladesh
                Author notes
[*] Corresponding author. monirul.islam@uits.edu.bd
Article
PII: S2352-3409(22)00951-9
DOI: 10.1016/j.dib.2022.108747
PMCID: 9679746
PMID: 36425983
                © 2022 The Author(s)

                This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

History
Received: 30 September 2022; Revised: 5 November 2022; Accepted: 7 November 2022
                Categories
                Data Article

Keywords: deaf & dumb community, Bangla sign language words, image classification, computer vision, image processing
