
      Robust and generalizable embryo selection based on artificial intelligence and time-lapse image sequences


          Abstract

Assessing and selecting the most viable embryos for transfer is an essential part of in vitro fertilization (IVF). In recent years, several attempts have been made to improve and automate the procedure using artificial intelligence (AI) and deep learning. Based on images of embryos with known implantation data (KID), AI models have been trained to automatically score embryos according to their chance of achieving a successful implantation. However, as of now, only limited research has been conducted to evaluate how embryo selection models generalize to new clinics and how they perform in subgroup analyses across various conditions. In this paper, we investigate how a deep learning-based embryo selection model using only time-lapse image sequences performs across different patient ages and clinical conditions, and how it correlates with traditional morphokinetic parameters. The model was trained and evaluated on a large dataset from 18 IVF centers comprising 115,832 embryos, of which 14,644 were transferred KID embryos. In an independent test set, the AI model sorted KID embryos with an area under the receiver operating characteristic curve (AUC) of 0.67 and all embryos with an AUC of 0.95. A clinic hold-out test showed that the model generalized to new clinics with an AUC range of 0.60–0.75 for KID embryos. Across subgroups of age, insemination method, incubation time, and transfer protocol, the AUC ranged between 0.63 and 0.69. Furthermore, model predictions correlated positively with blastocyst grading and negatively with direct cleavages. The fully automated iDAScore v1.0 model was shown to perform at least as well as a state-of-the-art manual embryo selection model. Moreover, full automation of embryo scoring entails fewer manual evaluations and eliminates bias due to inter- and intraobserver variation.
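For readers unfamiliar with the metric, the AUC values above have a direct probabilistic reading: the probability that a randomly chosen implanted embryo outscores a randomly chosen non-implanted one. A minimal sketch in Python, using invented scores and labels rather than any data from the study:

```python
def auc(labels, scores):
    """AUC as pairwise concordance: the fraction of (positive, negative)
    pairs in which the positive example receives the higher score."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a concordant pair
    return wins / (len(pos) * len(neg))

# Invented example: 3 implanted (1) and 3 non-implanted (0) embryo scores
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc(labels, scores))  # 8 of 9 pairs concordant -> ~0.889
```

An AUC of 0.5 corresponds to random ranking, 1.0 to a perfect one, which is why 0.95 over all embryos (viable vs. clearly non-viable) is far easier to reach than 0.67 over transferred KID embryos, which were already pre-selected.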

          Related collections

Most cited references (48)


          Long Short-Term Memory

Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
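The gating mechanics described above can be sketched in a few lines. The following toy NumPy step illustrates the general LSTM equations only; it is not the cited paper's (or the embryo model's) implementation, and the gate ordering and weight shapes are conventions chosen here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. Shapes (a convention assumed here):
    x: (D,), h_prev/c_prev: (H,), W: (4H, D), U: (4H, H), b: (4H,).
    Gate order: input, forget, output, candidate."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])        # input gate: admit new information
    f = sigmoid(z[H:2*H])      # forget gate: decay the cell state
    o = sigmoid(z[2*H:3*H])    # output gate: expose the cell state
    g = np.tanh(z[3*H:4*H])    # candidate update
    c = f * c_prev + i * g     # additive "constant error carousel"
    h = o * np.tanh(c)
    return h, c
```

The additive update of `c` is the constant error carousel: gradients flow through it without repeated squashing, which is what lets the cell bridge long time lags.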

            pROC: an open-source package for R and S+ to analyze and compare ROC curves

Background Receiver operating characteristic (ROC) curves are useful tools to evaluate classifiers in biomedical and bioinformatics applications. However, conclusions are often reached through inconsistent use or insufficient statistical analysis. To support researchers in their ROC curve analysis we developed pROC, a package for R and S+ that contains a set of tools for displaying, analyzing, smoothing and comparing ROC curves in a user-friendly, object-oriented and flexible interface. Results With data previously imported into the R or S+ environment, the pROC package builds ROC curves and includes functions for computing confidence intervals, statistical tests for comparing total or partial area under the curve or the operating points of different classifiers, and methods for smoothing ROC curves. Intermediary and final results are visualised in user-friendly interfaces. A case study based on published clinical and biomarker data shows how to perform a typical ROC analysis with pROC. Conclusions pROC is a package for R and S+ specifically dedicated to ROC analysis. It proposes multiple statistical tests to compare ROC curves, and in particular partial areas under the curve, allowing proper ROC interpretation. pROC is available in two versions: in the R programming language or with a graphical user interface in the S+ statistical software. It is accessible at http://expasy.org/tools/pROC/ under the GNU General Public License. It is also distributed through the CRAN and CSAN public repositories, facilitating its installation.
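pROC itself is an R/S+ package; a rough Python analogue of the core of what its `roc()` function computes — sweep a threshold over the scores, collect (FPR, TPR) points, then integrate by trapezoids — might look like this (function names are mine, not pROC's):

```python
def roc_points(labels, scores):
    """(FPR, TPR) pairs obtained by sweeping a decision threshold
    over the observed scores, from most to least strict."""
    thresholds = sorted(set(scores), reverse=True)
    P = sum(labels)
    N = len(labels) - P
    pts = [(0.0, 0.0)]
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / N, tp / P))
    return pts

def trapezoid_auc(pts):
    """Area under the piecewise-linear ROC curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

pts = roc_points([1, 1, 1, 0, 0, 0], [0.9, 0.8, 0.4, 0.7, 0.3, 0.2])
print(trapezoid_auc(pts))  # ~0.889 on this invented data
```

With no ties, the trapezoidal area agrees exactly with the pairwise-concordance definition of AUC; pROC adds what this sketch omits — confidence intervals, smoothing, and DeLong-style tests for comparing curves.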

              Focal loss for dense object detection

              The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https://github.com/facebookresearch/Detectron.
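The down-weighting the abstract describes is the modulating factor (1 − p_t)^γ on the cross-entropy loss. A minimal binary version in plain Python (the paper's dense-detection setting adds per-anchor bookkeeping; γ = 2 and α = 0.25 are the defaults reported in the paper):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t),
    where p is the predicted probability of the positive class and y is 0 or 1."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A well-classified positive (p = 0.95) contributes far less than a hard one (p = 0.3),
# so the flood of easy negatives no longer dominates the gradient.
print(focal_loss(0.95, 1), focal_loss(0.3, 1))
```

Setting gamma to 0 and alpha_t to 1 recovers the standard cross-entropy, which makes the role of each factor easy to check.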

                Author and article information

                Contributors
Roles: Data curation, Formal analysis, Writing – original draft, Writing – review & editing
Roles: Formal analysis, Investigation, Writing – original draft, Writing – review & editing
Roles: Formal analysis, Investigation, Writing – original draft, Writing – review & editing
Roles: Data curation, Formal analysis, Writing – original draft
Roles: Formal analysis, Writing – original draft, Writing – review & editing
Role: Editor
Journal
PLoS ONE, Public Library of Science (San Francisco, CA, USA)
ISSN: 1932-6203
2 February 2022; 17(2): e0262661
                Affiliations
                [1 ] Vitrolife A/S, Aarhus, Denmark
                [2 ] Harrison AI, Sydney, New South Wales, Australia
                [3 ] Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
School of Sciences and Languages, São Paulo State University (UNESP), Brazil
                Author notes

Competing Interests: This study was supported by Vitrolife, the employer of J.B., J.R., J.T.L., M.F.K. All authors participated in the study design, data collection and analysis, and preparation of the manuscript. The decision to publish was taken by J.B. Vitrolife produces and markets iDAScore. The study was also supported by Harrison.AI, the employer of D.T. He participated in the study design, data collection and analysis. D.T. has a patent related to the current study. J.B. and J.R. are Vitrolife A/B shareholders. This does not alter our adherence to PLOS ONE policies on sharing data and materials, as detailed online in our guide for authors.

                Author information
                https://orcid.org/0000-0002-5976-1002
                https://orcid.org/0000-0002-0829-1450
Article
PONE-D-21-06393
DOI: 10.1371/journal.pone.0262661
PMCID: PMC8809568
PMID: 35108306
© 2022 Berntsen et al.

                This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

History
Received: 25 February 2021
Accepted: 3 January 2022
                Page count
                Figures: 3, Tables: 4, Pages: 18
Funding
Funded by: Innovation Fund Denmark (Award ID: 7039-00068B)
Funded by: Vitrolife (multiple awards)
Funded by: Harrison.AI
Vitrolife provided support in the form of salaries for J.B., J.R., J.T.L., M.F.K., but did not have any additional role in the study design, decision to publish, or analysis and preparation of the manuscript. Vitrolife supported data collection and equipment for AI model training. Harrison.AI supported data collection. Data collection was done in collaboration with local Vitrolife offices. However, the study design, analysis, and decision to publish were made solely by the authors.
Categories
Research Article
Biology and Life Sciences > Developmental Biology > Embryology > Embryos
Biology and Life Sciences > Developmental Biology > Embryology > Blastocysts
Biology and Life Sciences > Bioengineering > Biotechnology > Medical Devices and Equipment > Medical Implants
Engineering and Technology > Bioengineering > Biotechnology > Medical Devices and Equipment > Medical Implants
Medicine and Health Sciences > Medical Devices and Equipment > Medical Implants
Computer and Information Sciences > Artificial Intelligence > Machine Learning > Deep Learning
Biology and Life Sciences > Developmental Biology > Fertilization > Insemination
Computer and Information Sciences > Artificial Intelligence
Physical Sciences > Physics > Optics > Focal Planes
Biology and Life Sciences > Developmental Biology > Embryology > Embryo Development
                Custom metadata
The data underlying this study are not publicly available. The data sets are owned by the 18 participating clinics. The data agreement contract between each clinic and Vitrolife explicitly states that data must not be made public due to potentially sensitive information. The dataset contains anonymized patient demographics, basic treatment information (e.g. start month, insemination method, length of incubation), transfer decisions/outcomes, and time-lapse images for each embryo. Data access requests can be sent to the individual IVF clinics or clinic chains; contact information is listed below:
a. Virtus Health Head Office, Level 3, 176 Pacific Highway, Greenwich NSW 2065, Australia. E-mail: info@ivf.com.au
b. Ciconia Fertilitetsklinik, Saralyst Allé 50, 8270 Højbjerg, Denmark. E-mail: aarhus@ciconia.dk
c. Skive Fertilitetsklinik, Resenvej 25, 7800 Skive, Denmark. E-mail: fertilitetsklinikken@midt.rm.dk
d. Horsens Fertilitetsklinik, Sundvej 30, 8700 Horsens, Denmark. E-mail: fertilitet@horsens.rm.dk
e. Kato Ladies Clinic, Nishishinjuku 7, Shinjuku City, 160-0023 Tokyo, Japan. E-mail: klc@towako-kato.com
f. Maigaard Fertilitetsklinik, Jens Baggesens Vej 88H, 8200 Aarhus, Denmark. E-mail: mail@maigaard.dk
The authors did not receive any special privileges in accessing the data.

