      First-Trimester Influenza Infection Increases the Odds of Non-Chromosomal Birth Defects: A Systematic Review and Meta-Analysis


          Abstract

Viral infections during pregnancy raise several clinical challenges, including birth defects in the offspring. This systematic review and meta-analysis therefore aimed to estimate the risk of non-chromosomal congenital abnormalities in the offspring of mothers affected by first-trimester influenza infection. Our systematic search was performed on 21 November 2022. Studies that reported maternal influenza infection in the first trimester and non-chromosomal congenital abnormalities were considered eligible. We used odds ratios (ORs) with 95% confidence intervals (CIs) to measure effect size. Pooled ORs were calculated with a random-effects model, and heterogeneity was assessed with I² and Cochran's Q tests. We found that first-trimester maternal influenza infection was associated with increased odds of any type of birth defect (OR: 1.5, CI: 1.30–1.70). Moreover, newborns were more than twice as likely to be diagnosed with neural tube defects (OR: 2.48, CI: 1.95–3.14) or cleft lip and palate (OR: 2.48, CI: 1.87–3.28). We also found increased odds of congenital heart defects (OR: 1.63, CI: 1.27–2.09). In conclusion, first-trimester influenza infection increases the odds of non-chromosomal birth defects.
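The pooled estimates above come from combining study-level odds ratios under a random-effects model. As a rough illustration of that kind of calculation only (not the authors' actual analysis; the three studies below are made-up numbers), a DerSimonian–Laird pooling of log odds ratios with Cochran's Q and I² can be sketched in Python:

```python
import math

# Hypothetical study-level data: (odds ratio, 95% CI lower bound, 95% CI upper bound)
studies = [
    (1.8, 1.2, 2.7),
    (1.3, 0.9, 1.9),
    (1.6, 1.1, 2.3),
]

# Convert each study to a log-OR and derive its variance from the CI width.
log_or = [math.log(or_) for or_, lo, hi in studies]
var = [((math.log(hi) - math.log(lo)) / (2 * 1.96)) ** 2 for _, lo, hi in studies]

# Fixed-effect (inverse-variance) weights and Cochran's Q for heterogeneity.
w = [1 / v for v in var]
pooled_fixed = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
q = sum(wi * (yi - pooled_fixed) ** 2 for wi, yi in zip(w, log_or))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird estimate of the between-study variance (tau^2).
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled log-OR, and a 95% CI on the OR scale.
w_re = [1 / (v + tau2) for v in var]
pooled = sum(wi * yi for wi, yi in zip(w_re, log_or)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"Pooled OR: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f}-{math.exp(pooled + 1.96 * se):.2f}), "
      f"I^2 = {i_squared:.0f}%")
```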


Most cited references (54)


          The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

          The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

            RoB 2: a revised tool for assessing risk of bias in randomised trials


              Interrater reliability: the kappa statistic

The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement due to its inability to account for chance agreement. He introduced Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, kappa can range from −1 to +1. While kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen's suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
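As a quick numerical illustration of the difference between percent agreement and Cohen's kappa described above (a minimal sketch with made-up ratings, not data from the cited paper):

```python
from collections import Counter

# Hypothetical ratings of the same 10 items by two raters (1 = present, 0 = absent).
rater_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]

n = len(rater_a)
# Observed agreement: proportion of items both raters scored identically.
p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: expected agreement if each rater assigned categories
# independently at their own observed rates.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(rater_a) | set(rater_b))

# Cohen's kappa corrects the observed agreement for chance agreement.
kappa = (p_o - p_e) / (1 - p_e)
print(f"Percent agreement: {p_o:.2f}, Cohen's kappa: {kappa:.2f}")
```

With these ratings the two raters agree on 80% of items, but kappa is only about 0.58 once chance agreement is removed, which is the gap the abstract highlights.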

                Author and article information

Journal: Viruses (MDPI AG)
ISSN: 1999-4915
Published: 2 December 2022
Volume 14, Issue 12, Article 2708
DOI: 10.3390/v14122708
License: © 2022, Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/)
