
      Evaluation of a decided sample size in machine learning applications

      research-article
      BMC Bioinformatics
      BioMed Central
      Sample size, Machine learning, Effect sizes, Criteria


          Abstract

          Background

          An appropriate sample size is essential for obtaining a precise and reliable outcome of a study. In machine learning (ML), studies with inadequate samples suffer from overfitting and have a lower probability of detecting true effects, while increasing the sample size improves prediction accuracy but may not produce a significant change beyond a certain point. Existing statistical approaches that use the standardized mean difference, effect size, and statistical power to determine sample size are potentially biased by miscalculations or missing experimental details. This study aims to design criteria for evaluating sample size in ML studies. We examined the average and grand effect sizes and the performance of five ML methods on simulated datasets and three real datasets to derive the criteria for sample size. We systematically increased the sample size, starting from 16, by random sampling, and examined the impact of sample size on classifier performance and on both effect sizes. Tenfold cross-validation was used to quantify accuracy.
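The sweep described above (growing the sample from 16 by random sampling and scoring each size with tenfold cross-validation plus an effect size) can be sketched as follows. This is a minimal stand-in, not the paper's pipeline: it assumes Cohen's d as the effect-size measure and substitutes a one-dimensional nearest-centroid classifier for the paper's five ML methods.

```python
import math
import random
import statistics

random.seed(0)

def cohens_d(a, b):
    """Cohen's d between two samples, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = math.sqrt(
        ((na - 1) * statistics.variance(a) + (nb - 1) * statistics.variance(b))
        / (na + nb - 2)
    )
    return abs(statistics.mean(a) - statistics.mean(b)) / pooled

def nearest_centroid_cv(xs, ys, k=10):
    """k-fold cross-validated accuracy of a 1-D nearest-centroid classifier."""
    idx = list(range(len(xs)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    correct = 0
    for fold in folds:
        train = [i for i in idx if i not in fold]
        m0 = statistics.mean([xs[i] for i in train if ys[i] == 0])
        m1 = statistics.mean([xs[i] for i in train if ys[i] == 1])
        for i in fold:
            pred = 0 if abs(xs[i] - m0) <= abs(xs[i] - m1) else 1
            correct += pred == ys[i]
    return correct / len(xs)

# Simulated two-class data with good discriminative power (true d = 1.0).
results = {}
for n in [16, 32, 64, 128, 256]:          # samples per class
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(1.0, 1.0) for _ in range(n)]
    xs, ys = a + b, [0] * n + [1] * n
    results[n] = (cohens_d(a, b), nearest_centroid_cv(xs, ys))

for n, (d, acc) in results.items():
    print(f"n={n:4d}  d={d:.2f}  10-fold CV accuracy={acc:.2f}")
```

On a well-separated dataset like this, the estimated d stabilizes and its run-to-run variance shrinks as n grows, which is the behavior the criteria below exploit.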

          Results

          The results demonstrate that when a dataset has good discriminative power between two classes, the effect sizes and classification accuracies increase, and the variances of the effect sizes shrink, as samples are added. By contrast, indeterminate datasets had poor effect sizes and classification accuracies, which did not improve with increasing sample size in either the simulated or the real datasets. A good dataset also exhibited a significant difference between the average and grand effect sizes. From these findings we derived two criteria that combine effect size and ML accuracy to assess a chosen sample size: a sample size is considered suitable when the effect sizes are appropriate (≥ 0.5) and the ML accuracy is adequate (≥ 80%). Beyond an appropriate sample size, adding more samples yields little benefit, since neither the effect size nor the accuracy changes significantly; stopping there therefore gives a good cost-benefit ratio.
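The two criteria above reduce to a simple decision rule. A minimal sketch (the function name is ours; the thresholds are the ones stated in the abstract):

```python
def sample_size_adequate(effect_size, cv_accuracy):
    """The two criteria from the abstract: effect size >= 0.5
    and cross-validated ML accuracy >= 80%."""
    return effect_size >= 0.5 and cv_accuracy >= 0.80

print(sample_size_adequate(0.8, 0.85))  # True: both criteria met
print(sample_size_adequate(0.3, 0.90))  # False: effect size too small
```

Both conditions must hold: a high accuracy alone (e.g. from an overfit model) or a large effect size alone (e.g. from a tiny, lucky sample) is not enough.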

          Conclusion

          We believe that these practical criteria can serve as a reference for both authors and editors when evaluating whether the selected sample size is adequate for a study.

          Supplementary Information

          The online version contains supplementary material available at 10.1186/s12859-023-05156-9.

          Related collections

          Most cited references (50)


          Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs

          Effect sizes are the most important outcome of empirical studies. Most articles on effect sizes highlight their importance for communicating the practical significance of results. For scientists themselves, effect sizes are most useful because they facilitate cumulative science: they can be used to determine the sample size for follow-up studies, or to examine effects across studies. This article aims to provide a practical primer on how to calculate and report effect sizes for t-tests and ANOVAs, such that effect sizes can be used in a priori power analyses and meta-analyses. Whereas many articles about effect sizes focus on between-subjects designs and address within-subjects designs only briefly, I provide a detailed overview of the similarities and differences between within- and between-subjects designs. I suggest that some research questions in experimental psychology examine inherently intra-individual effects, which makes effect sizes that incorporate the correlation between measures the best summary of the results. Finally, a supplementary spreadsheet is provided to make it as easy as possible for researchers to incorporate effect-size calculations into their workflow.
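The two standard effect-size formulas this primer distinguishes can be sketched with the standard library alone: Cohen's d with a pooled SD for between-subjects t-tests, and Cohen's d_z (mean difference divided by the SD of the differences) for within-subjects t-tests. The example data are purely illustrative.

```python
import math
import statistics

def cohens_d_independent(a, b):
    """Cohen's d for an independent-samples (between-subjects) t-test."""
    na, nb = len(a), len(b)
    pooled_sd = math.sqrt(
        ((na - 1) * statistics.variance(a) + (nb - 1) * statistics.variance(b))
        / (na + nb - 2)
    )
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

def cohens_dz_paired(a, b):
    """Cohen's d_z for a paired (within-subjects) t-test."""
    diffs = [x - y for x, y in zip(a, b)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

pre  = [2.0, 4.0, 6.0, 6.0]
post = [1.0, 2.0, 3.0, 4.0]
print(round(cohens_d_independent(pre, post), 3))  # 1.225
print(round(cohens_dz_paired(pre, post), 3))      # 2.449
```

Note how the same data give different effect sizes under the two designs: d_z incorporates the correlation between the paired measures, which is exactly the point the primer makes about within-subjects designs.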

            PhysioBank, PhysioToolkit, and PhysioNet

            Circulation, 101(23)

              Power failure: why small sample size undermines the reliability of neuroscience.

              A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.
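The low-power argument above can be made concrete. Under a normal approximation (slightly optimistic relative to the exact t-test), the power of a two-sided, two-sample comparison of means at a medium effect (d = 0.5) stays well below the conventional 80% until the groups are fairly large. A sketch:

```python
import math
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means,
    using the normal approximation to the noncentral t distribution."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)       # 1.96 for alpha = 0.05
    ncp = d * math.sqrt(n_per_group / 2)     # noncentrality parameter
    return (1 - nd.cdf(z_crit - ncp)) + nd.cdf(-z_crit - ncp)

for n in [10, 20, 40, 70]:
    print(f"n={n:3d} per group  power={approx_power(0.5, n):.2f}")
```

With 20 participants per group the approximate power is only about 0.35, so most true medium-sized effects would be missed, and the significant ones that do appear will tend to overestimate the effect size, as the reference above argues.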

                Author and article information

                Contributors
                mail2daniyal@gmail.com
                Journal
                BMC Bioinformatics
                BioMed Central (London)
                1471-2105
                Published: 14 February 2023
                Volume 24, Article 48
                Affiliations
                [1] Institute of Cognitive Neuroscience, National Central University, No. 300, Zhongda Rd, Zhongli District, Taoyuan City 320317, Taiwan, ROC
                [2] Taiwan International Graduate Program in Interdisciplinary Neuroscience, National Central University and Academia Sinica, Taipei, Taiwan, ROC
                [3] Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan, ROC
                [4] Department of Computer Science and Information Engineering, National Central University, Taoyuan, Taiwan, ROC
                Article
                DOI: 10.1186/s12859-023-05156-9
                PMCID: PMC9926644
                PMID: 36788550
                © The Author(s) 2023

                Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

                History
                Received: 13 October 2022
                Accepted: 23 January 2023
                Categories
                Research
                Custom metadata
                Bioinformatics & Computational biology
                sample size, machine learning, effect sizes, criteria
