
      Real-time detection of particleboard surface defects based on improved YOLOV5 target detection

      research-article


          Abstract

Particleboard surface defect detection technology is of great significance to the automation of particleboard inspection, but current detection techniques suffer from low accuracy and poor real-time performance. This paper therefore proposes an improved lightweight detection method based on You Only Look Once v5 (YOLOv5), named PB-YOLOv5 (Particle Board-YOLOv5). First, the gamma transform method and the image difference method are combined to correct the uneven illumination of the acquired images. Second, the Ghost Bottleneck lightweight deep convolution module is added to the Backbone and Neck modules of the YOLOv5 detection algorithm to reduce model size. Third, the SELayer attention module is added to the Backbone module. Finally, the Conv modules in the Neck are replaced with depthwise convolution (DWConv) to compress the network parameters. The experimental results show that the proposed PB-YOLOv5 model can accurately identify five types of defects on the particleboard surface, namely Bigshavings, SandLeakage, GlueSpot, Soft and OilPollution, and meets real-time requirements. Specifically, the precision, recall, F1 score, mAP@.5 and mAP@.5:.95 values of the PB-YOLOv5s model were 91.22%, 94.5%, 92.1%, 92.8% and 67.8%, respectively; for Soft defects the corresponding results were 92.8%, 97.9%, 95.3%, 99.0% and 81.7%. The single-image detection time of the model is only 0.031 s, and the model weight file is only 5.4 MB. Compared with the original YOLOv5s, YOLOv4, YOLOv3 and Faster R-CNN, the PB-YOLOv5s model has the shortest single-image detection time: it is faster by 34.0%, 55.1%, 64.4% and 87.9%, respectively, while the model weight is compressed by 62.5%, 97.7%, 97.8% and 98.9%, and the mAP value is increased by 2.3%, 4.69%, 7.98% and 13.05%. These results show that the proposed PB-YOLOv5 model achieves rapid and accurate detection of particleboard surface defects and fully meets the requirements of a lightweight embedded model.
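The abstract names several concrete changes to the stock YOLOv5s network. As a point of reference only, the sketch below shows standard PyTorch formulations of the three building blocks involved (SE channel attention, depthwise separable convolution, and a Ghost module); the class names, kernel sizes and reduction ratio are illustrative assumptions, not the authors' released code, and the exact placement of these blocks inside the Backbone and Neck is described in the full article.

# Minimal sketch of the building blocks named in the abstract, assuming their
# standard formulations (PyTorch). Not the authors' code.
import torch
import torch.nn as nn


class SELayer(nn.Module):
    """Squeeze-and-Excitation channel attention (added to the Backbone)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: one value per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels


def dw_conv(c_in: int, c_out: int, k: int = 3, s: int = 1) -> nn.Sequential:
    """Depthwise separable convolution (DWConv), used in place of plain Conv."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, k, s, k // 2, groups=c_in, bias=False),  # depthwise
        nn.Conv2d(c_in, c_out, 1, bias=False),  # pointwise 1x1
        nn.BatchNorm2d(c_out),
        nn.SiLU(inplace=True),
    )


class GhostModule(nn.Module):
    """Ghost convolution: compute half the channels, derive the rest cheaply."""

    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        c_half = c_out // 2  # assumes an even output channel count
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_half, 1, bias=False),
            nn.BatchNorm2d(c_half),
            nn.SiLU(inplace=True),
        )
        self.cheap = nn.Sequential(  # cheap depthwise "ghost" features
            nn.Conv2d(c_half, c_half, 3, 1, 1, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)  # (B, c_out, H, W)

For example, GhostModule(64, 128) produces a 128-channel output while computing only 64 of those channels with a full convolution and deriving the other 64 from cheap depthwise filters, which is where reductions in parameters and weight size of this kind come from.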

          Related collections

Most cited references: 16


          Tomato detection based on modified YOLOv3 framework

Fruit detection forms a vital part of a robotic harvesting platform. However, uneven environmental conditions, such as branch and leaf occlusion, illumination variation, clusters of tomatoes and shading, make fruit detection very challenging. To solve these problems, modified YOLOv3 models, called YOLO-Tomato models, were adopted to detect tomatoes in complex environmental conditions. By applying a label-what-you-see approach, dense architecture incorporation, spatial pyramid pooling and the Mish activation function to the modified YOLOv3 model, the YOLO-Tomato models (YOLO-Tomato-A at AP 98.3% with detection time 48 ms, YOLO-Tomato-B at AP 99.3% with detection time 44 ms, and YOLO-Tomato-C at AP 99.5% with detection time 52 ms) performed better than other state-of-the-art methods.
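Two of the less common modifications mentioned here, the Mish activation and spatial pyramid pooling (SPP), are sketched below in their usual PyTorch form; the pooling kernel sizes are an assumption for illustration and are not taken from the cited paper.

# Minimal sketches of the Mish activation and an SPP block, assuming their
# usual formulations. Kernel sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Mish(nn.Module):
    """Mish activation: x * tanh(softplus(x))."""

    def forward(self, x):
        return x * torch.tanh(F.softplus(x))


class SPP(nn.Module):
    """Max-pool the same feature map at several scales and concatenate."""

    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(k, stride=1, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, x):
        # Output has len(kernel_sizes) + 1 times the input channel count.
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)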

            Automated cattle counting using Mask R-CNN in quadcopter vision system


              Detection of concealed cracks from ground penetrating radar images based on deep learning algorithm


                Author and article information

                Contributors
                gezhedong@sdjzu.edu.cn
Journal
Scientific Reports (Sci Rep)
Nature Publishing Group UK (London)
ISSN: 2045-2322
Published online: 5 November 2021
Volume: 11
Article number: 21777
Affiliations
[1] School of Information and Electrical Engineering, Shandong Jianzhu University, Jinan 250101, China
[2] Department of Quality Control, Shandong Institute for Quality Inspection, Jinan 250000, China
Article
DOI: 10.1038/s41598-021-01084-x
PMCID: PMC8571343
PMID: 34741057
                © The Author(s) 2021

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

History
Received: 18 August 2021
Accepted: 21 October 2021
Funding
Funded by: Youth Fund of Shandong Natural Science Foundation
Award ID: ZR2020QC174
Funded by: Taishan Scholar Advantage Characteristic Discipline Talent Team Project of Shandong Province of China
Award ID: 2015162
                Categories
                Article

Keywords
engineering, electrical and electronic engineering
