
      Benefits and Challenges in Using Seroprevalence Data to Inform Models for Measles and Rubella Elimination

      research-article


          Abstract

          Using simulations, we highlight how high-quality measles and rubella serological surveys, interpreted in their dynamical context, can be used to inform key control and elimination questions if the challenges of conducting, analyzing, and interpreting them are overcome.


          Background

          Control efforts for measles and rubella are intensifying globally. As elimination is approached, it becomes increasingly important to identify and reach the remaining susceptible populations.

          Methods

          Serological surveys for measles and rubella can potentially measure susceptibility directly, but their use remains rare. In this study, using simulations, we outline key subtleties in interpretation associated with the dynamic context of age-specific immunity, highlighting how the patterns of immunity predicted from disease surveillance and vaccination coverage data may be misleading.

          Results

          High-quality representative serosurveys could provide a more accurate assessment of immunity if challenges of conducting, analyzing, and interpreting them are overcome. We frame the core disease control and elimination questions that could be addressed by improved serological tools, discussing challenges and suggesting approaches to increase the feasibility and sustainability of the tool.

          Conclusions

          Accounting for the dynamical context, serosurveys could play a key role in efforts to achieve and sustain elimination.


          Most cited references (39)


          Measles antibody: reevaluation of protective titers.

          A school blood drive before a measles outbreak permitted correlation of preexposure measles antibody titers with clinical protection using the plaque reduction neutralization (PRN) test and an EIA. Of 9 donors with detectable preexposure PRN titers ≤120, 8 met the clinical criteria for measles (7 seroconfirmed), compared with none of 71 donors with preexposure PRN titers >120 (P < .0001). Seven of 11 donors with preexposure PRN titers of 216-874 had a ≥4-fold rise in antibody titer (mean, 43-fold), compared with none of 7 with preexposure PRN titers ≥1052 (P < .02). Of 37 noncases with preexposure PRN titers <1052, 26 (70%) reported one or more symptoms, compared with 11 (31%) of 35 donors with preexposure PRN titers ≥1052 (P < .002). By EIA, no case had detectable preexposure antibody; the preexposure geometric mean titer of asymptomatic donors (220) was not significantly higher than that of symptomatic donors who did not meet the clinical criteria for measles (153) (P = .10). The study suggests that PRN titers ≤120 were not protective against measles disease, and that illness without rash due to measles may occur in persons with PRN titers above this level.

            Measuring Coverage in MNCH: Design, Implementation, and Interpretation Challenges Associated with Tracking Vaccination Coverage Using Household Surveys

This paper is part of the PLOS Medicine “Measuring Coverage in MNCH” Collection.

Introduction

The percentage of a population that has been vaccinated—vaccination coverage—is an imperfect but helpful measure of the effectiveness of vaccination programs and of public health more broadly [1]. Vaccination coverage is a tracer condition for results-based financing [2], an indicator of eligibility for Millennium Challenge Account assistance [3], and a criterion for support from the GAVI Alliance for the introduction of new vaccines [4]. Making funding decisions contingent on coverage potentially incentivizes inflation of coverage figures, and there is wide recognition of the need to improve the data [5]–[8].

Ideally, vaccination coverage should be monitored continuously using registries or administrative reports [9]. Electronic immunization registries aim to document all vaccinations of each individual in each birth cohort [10],[11]. Denominators may derive from the same registry [11] or from a separate vital statistics system. When well implemented, electronic immunization registries can provide data for coverage measurement and for program management activities such as monitoring vaccine supply and requisitions and sending vaccination reminders. However, challenges facing such registries include accounting for migration within and between countries, ensuring complete birth registration and vaccination reporting, avoiding record duplication [12], and ensuring continuity after organizational changes [13]. Although pilot studies of electronic registries are ongoing in low- and middle-income countries including Albania, Guatemala, India, and Viet Nam [14], these challenges currently limit their use.
Therefore, in most low- and middle-income countries, “administrative coverage” is calculated using aggregate reported data on the number of doses of each vaccine administered to children in the target age group in a given time period and target population estimates from censuses [1]. Health workers at each health facility typically compile data manually each month from clinic records such as immunization registers or tally sheets. At each vaccination visit, the health worker records vaccinations on clinic records and on a vaccination card, child health card, or other home-based record (HBR) that the mother keeps. The HBR serves as an educational tool for the mother and is also an important data source in household surveys. In many countries, however, the quality of primary recording of vaccinations, of transcription and compilation of data, and of reporting is low, and numerators may be either inflated (e.g., because doses outside the recommended age range are included) or too low (e.g., if private practitioners do not report). Moreover, denominators are often grossly inaccurate [5],[7]. Hence, wherever possible other data sources such as surveys are still considered in the World Health Organization (WHO)–United Nations Children's Fund (UNICEF) estimates of national immunization coverage [15]. Given current problems with coverage estimates based on administrative reports in many countries, we believe that surveys will continue to provide important information in the short-to-medium term, at national and sub-national levels. It is therefore critical that surveys are conducted rigorously. In this review, which is part of the PLOS Medicine “Measuring Coverage in MNCH” Collection, we discuss the survey methods used to estimate vaccination coverage in low- and middle-income countries, highlight potential pitfalls, and propose strategies to improve coverage measurement. 
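The administrative-coverage arithmetic described above can be sketched in a few lines (the numbers below are hypothetical, purely to illustrate how an inaccurate denominator distorts the figure):

```python
def administrative_coverage(doses_administered, target_population):
    # Administrative coverage = reported doses / estimated target population
    return doses_administered / target_population

# Hypothetical district: 10,500 first doses reported against a census
# projection of 10,000 surviving infants.
print(administrative_coverage(10_500, 10_000))  # 1.05, i.e., reported coverage of 105%
```

Because the numerator comes from clinic tallies and the denominator from census projections, either an inflated numerator or an undercounted denominator can push reported coverage above 100%, which is one reason surveys remain an important cross-check.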
Our review aims to inform public health practitioners and the researchers who design and implement surveys, as well as Ministry of Health officials and donors who interpret and use data from surveys.

Survey Methods Used to Measure Vaccination Coverage

Four types of surveys are commonly employed to estimate vaccination coverage (Table 1). The Demographic and Health Surveys (DHS) [16] and Multiple Indicator Cluster Surveys (MICS) [17] are probability sample surveys, in which each household has a known and nonzero probability of being selected in the sample. There have been about 10–15 DHS and 20 MICS per year since 1995. These large, important, and generally well-conducted household surveys, which are used to collect data about many aspects of health, are described in detail in a companion paper in this Collection [18].

Table 1. Characteristics of common surveys used to measure vaccination (PPES, probability proportional to estimated population size).

Primary objectives
DHS: collection of information on a wide range of population, health, and nutrition topics, plus additional optional modules.
MICS: collection of information on population health, child protection, and child development.
EPI: estimation of vaccination coverage.
LQAS: classification of lots (catchment areas) into two groups: those with adequate coverage and those with inadequate coverage.

Sampling scheme
DHS: stratified cluster sampling; clusters selected using PPES; clusters are usually census enumeration areas.
MICS: stratified cluster sampling; clusters selected using PPES; clusters are usually census enumeration areas.
EPI: cluster sampling with or without stratification; clusters are usually villages or urban neighborhoods, selected using PPES.
LQAS: the classic method uses simple random sampling within a lot; when lots are large, cluster sampling is sometimes employed.

Household selection
DHS: households selected randomly based on a complete household listing and mapping in the sample clusters.
MICS: current practice is random selection of households based on a complete listing and mapping of enumeration areas.
EPI: varies; usually non-probability; the first household is selected randomly, then neighboring households are selected until seven children can be enrolled.
LQAS: when cluster sampling is used, the first household is selected randomly before moving in a consistent direction, sampling every kth household.

Total sample size
DHS: based on desired precision for key indicators at the regional level; the number of children aged 12–23 months covered in recent surveys is typically around 1,800 at the national level.
MICS: based on desired precision of key indicators selected by implementing agencies; usually >2,000 women and several hundred children aged 12–23 months.
EPI: usually 30 clusters of seven children aged 12–23 months; sized to yield an estimate of ±10% assuming a design effect of two.
LQAS: varies greatly; 19 respondents per lot is a common size with simple random sampling; 50 or 60 is common when using cluster sampling.

Respondents
DHS: all men and women aged 15–49 years; vaccination data on children <5 years if the biological mother is interviewed, and on women of childbearing age.
MICS: all women aged 15–49 years; vaccination data on children <5 years if the primary caretaker is interviewed, and on women of childbearing age.
EPI: mother or primary caretaker of children aged 12–23 months.
LQAS: varies; field workers interview the caretaker and when possible substantiate the response with a vaccination record or sometimes an indelible ink finger mark on the child.

Questionnaire length
DHS: household questionnaire: 25 pages; woman's questionnaire: about 70 pages.
MICS: household: 18 pages; woman's: 38 pages; children under 5 years: 18 pages.
EPI: 1–2 pages.
LQAS: often 1 page.

Implementers
DHS: usually the National Statistical Office or equivalent, with capacity-building from MEASURE DHS.
MICS: usually the National Statistical Office, with support from UNICEF and other partners.
EPI: varies; often national- or district-level Ministry of Health employees.
LQAS: varies; usually independent from the vaccination team.

Duration
DHS: 12 months or more to plan, implement, analyze, and report.
MICS: 12 months or more to plan, implement, analyze, and report.
EPI: several months to plan; weeks to implement, analyze, and report.
LQAS: varies; 1–2 days per lot to implement and analyze.

The Expanded Programme on Immunization (EPI) cluster survey was developed by the WHO and was described in 1982 as a practical tool to quickly estimate coverage to within ±10 percentage points of the point estimate [19]. The original EPI survey method selects 30 clusters, from which seven children in each cluster are selected using the “random start, systematic search” method. Specifically, a starting dwelling is chosen by starting at a central location in the village or town, selecting a direction at random, counting the dwellings lying in that direction up to the edge of the village, and selecting one of them randomly; adjacent households are then visited until seven children aged 12–23 months have been enrolled [20],[21]. The central starting location may bias the method toward households with good access to vaccination, and it is difficult to assign unbiased probabilities of selection to households sampled in this way. The method therefore does not meet the above criteria for a probability sample and is a “non-probability sampling” survey method [22]. EPI surveys are widely used at national and sub-national levels, but there is no central database of results, so the total number of surveys conducted is unknown. Adaptations of the EPI survey have incorporated probability sampling at the final stage of sample selection [22]–[26], and the updated WHO guidelines [21] as well as a recent companion manual on hepatitis B immunization surveys emphasize the need for probability sampling for scientifically robust estimates of coverage [27].
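The classic EPI design (30 clusters of seven children for ±10 percentage points, assuming a design effect of two) can be approximately recovered from the standard cluster-survey sample-size formula, n = DEFF · z²·p(1 − p)/e² with DEFF = 1 + (m − 1)·ICC. A minimal sketch (the function name and the ICC value are illustrative choices, not taken from the WHO guidelines):

```python
import math

def cluster_sample_size(p, margin, icc, per_cluster, z=1.96):
    """Total children and clusters needed for a cluster survey of coverage."""
    n_srs = z**2 * p * (1 - p) / margin**2   # simple-random-sample size
    deff = 1 + (per_cluster - 1) * icc       # design effect for m children/cluster
    n_total = math.ceil(n_srs * deff)
    return n_total, math.ceil(n_total / per_cluster)

# Conservative p = 0.5, +/-10 points, ICC chosen so that DEFF = 2 with 7 per cluster:
print(cluster_sample_size(0.5, 0.10, 1/6, 7))  # (193, 28), close to the classic 30 x 7 = 210
```

The conservative choice p = 0.5 maximizes p(1 − p), so the resulting sample size is safe whatever the true coverage turns out to be.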
The main design differences between EPI surveys (if probability sampling is used) and DHS or MICS surveys are that EPI surveys focus specifically on vaccination data, while DHS and MICS surveys cover a wide range of population and health topics and include a much larger sample size. In addition, field implementation of EPI surveys is variable and often done without external technical assistance, while the DHS and MICS are highly standardized and have substantial technical assistance and quality control. A final household survey method commonly used to estimate health intervention coverage in low- and middle-income countries is Lot Quality Assurance Sampling (LQAS). LQAS surveys use a stratified sampling approach to classify “lots,” which might be districts, health units, or catchment areas, as having either “adequate” or “inadequate” coverage of various public health interventions. For vaccination coverage measurement, LQAS has been nested within cluster surveys to evaluate neonatal tetanus elimination [28], coverage of yellow fever vaccination [29], and coverage of meningococcal vaccine campaigns [30], and to monitor polio vaccination coverage after supplementary immunization activities [31].

Survey Design and Implementation

Surveys used to estimate vaccination coverage should have a sample size that results in an acceptable sampling error, and implementation should minimize non-sampling errors, including selection bias and information bias (Table 2). In DHS and MICS surveys, the sample size is determined by the estimated number of households required for the desired precision of key indicators (not vaccination coverage), and all children in the eligible age groups in those households are included. In recent DHS surveys this design has given sample sizes of around 1,800 children aged 12–23 months.
EPI surveys traditionally included 210 children aged 12–23 months in 30 clusters, but the sample size and number of clusters should be calculated according to assessments of the likely coverage, intra-cluster correlation, and desired precision of the vaccination coverage estimate [21],[27].

Table 2. Main potential sources of error and strategies to minimize them in population-based surveys measuring vaccination coverage.

Sampling error (random error)
Effect on results: reduces precision.
Strategies: choose an optimum sample design (e.g., number and size of clusters) and adjust the sample size to achieve the desired precision while retaining budgetary and logistical practicality.

Systematic errors:

Selection bias—sampling frame
Effect on results: depends on the size of the excluded population and the difference in vaccination uptake between those excluded and included.
Strategies: use the most recent census data available; assess the likelihood of census projections reflecting reality and update the census if necessary; if large populations have been excluded (e.g., security constraints at the time of the census), consider special efforts to include them.

Selection bias—sampling procedures
Effect on results: non-probabilistic sampling may lead to bias in either direction.
Strategies: use a probability sampling method (plan time for listing of households within selected clusters); use appropriate weighting in the analysis.

Selection bias—poor field procedures
Effect on results: most likely to lead to upward bias in coverage results.
Strategies: preselect households and ensure strict supervision; conduct the survey at the time of year and of day when people are most likely to be available; work with communities to enhance survey participation rates; conduct revisits as necessary to locate caregivers and HBRs; do not substitute households.

Information bias—lack of HBR or poorly filled HBR
Effect on results: may underestimate or overestimate coverage depending on how missing data are handled and how HBRs are read by enumerators.
Strategies: public health programs need to educate families to retain HBRs and improve primary recording of vaccination data; publicize reminders about HBRs prior to the survey (e.g., during the household listing step); allow time for mothers to look for the HBR, revisiting if necessary; include younger age groups in surveys and measure age-appropriate vaccination coverage; include questions on the condition of the HBR and checks for errors; seek health facility–based records on children without an HBR.

Information bias—inaccurate verbal history
Effect on results: most likely to bias infant coverage upwards, as mothers may feel pressure to say their children have been vaccinated; for tetanus toxoid in adult women, verbal history usually underestimates the percent of women protected.
Strategies: ensure interviewers maintain a neutral attitude; give mothers time to respond; shorter questionnaires are likely to produce less interviewee fatigue; standardize questions, use visual aids, and conduct close supervision; for tetanus toxoid, ask careful questions about all doses received in previous and current pregnancies and in campaigns (though this still does not account for diphtheria-tetanus-pertussis vaccination received in infancy); sero-surveys play a useful role in measuring the prevalence of protection.

Data transcription and data entry errors
Effect on results: may increase data classed as missing; can bias coverage results.
Strategies: conduct close supervision; conduct range and consistency checks; enumerators can revisit the household if necessary to correct data.

Missing data
Effect on results: if nonrandom, biases results, often upwards.
Strategies: conduct high-quality planning, training, and supervision to reduce missing data; include appropriate statistical adjustment for missing data.

Selection bias may occur due to use of an outdated or nonrepresentative sampling frame, use of non-probability sampling, or poor field worker practices such as substituting a selected household with one that is easier to reach. The “random start, systematic search” method used in traditional EPI surveys has intrinsic geographic bias.
It allows field workers to select households rather than this being part of the initial sampling process, does not document reasons for nonparticipation, and cannot adjust for biases resulting from out-of-date size estimates for selection of clusters using probability proportional to estimated size sampling. Moreover, in EPI and LQAS surveys, teams are likely to replace households where no one is home or where eligible respondents refuse to participate. If respondents are not selected randomly and if the same forces that influence participation in the survey also influence participation in vaccination (e.g., families missed by interviewers because they work in the fields all day may also lack time to attend vaccination clinics), replacement is likely to result in bias, probably upwards. Finally, surveys of the vaccination status of living individuals are inherently subject to selection bias since death is more likely in unvaccinated than in vaccinated children. In settings where there is a high infant mortality rate, this bias may be substantial. There are multiple potential sources of information error and bias in measuring the vaccination status of each child in surveys (Figure 1), many of which also affect data included in administrative reports. Mistakes can occur during primary data recording each time a child attends a vaccination point or when survey interviewers transcribe birth and vaccination dates onto a paper or digital questionnaire. If a paper questionnaire is used, further errors can occur during digital data entry. Data on source documents can also be incomplete [32] or inaccurate [33] and, when new vaccines are introduced, old HBRs may remain in circulation, requiring health workers to improvise in their recording (Figure 2). There is further confusion regarding recording of vaccines administered during campaigns such as “vaccination weeks” on HBRs [34]. 
Figure 1. Schematic of recording of vaccination data at the time of vaccination and during community surveys. Recording at the time of vaccination (primary recording) is indicated in black boxes; recording during surveys is indicated in green boxes. Main potential sources of information error and bias are highlighted in blue. DOB, date of birth.

Figure 2. Several instances of improvisation on a vaccination card. (Photo courtesy of Carolina Danovaro, Pan American Health Organization.)

When the HBR is not available (it may be lost or locked away, or mothers may not be given enough time to find it), interviewers question the child's parent or guardian to construct a verbal vaccination history. The reliability of such histories may vary with the information received or understood by mothers at the time of vaccination; the interviewer's skills, carefulness, neutral demeanor, and use of appropriate language; the recall period; and the length of the questionnaire and resulting interview fatigue [35]–[37].

The complexity of the vaccination schedule can also affect the reliability of a verbal vaccination history. When the EPI survey was introduced in the 1980s, the infant EPI schedule comprised five visits, which lent themselves to straightforward questions to the mother (Table 3). Because current schedules are much more complex (Table 4) and vary over time and between countries, constructing a verbal history of vaccinations received is now much more difficult and likely to become increasingly so. Thus, the questions included in surveys need substantial and continuous adaptation.

Table 3. Illustrative questions used in the past to elicit a verbal history of vaccination according to the EPI schedule in the 1980s.
Birth: BCG (intradermally, usually in the upper arm). Question: “Did the child receive an injection in the upper arm soon after birth?” (check for scar).
6 weeks: first dose of DTP (subcutaneous or intramuscular injection, usually in the thigh) and OPV (oral). Questions: “Did the child receive an injection in the thigh (the ‘triple vaccine’)? If yes, how many times? Did the child also receive drops in the mouth? If yes, how many times?”
10 weeks: DTP and OPV, dose 2. Questions: same as for 6 weeks.
14 weeks: DTP and OPV, dose 3. Questions: same as for 6 weeks.
9 months: measles (subcutaneous injection, usually in the upper arm). Question: “Did the child receive an injection in the arm against [use local term for measles], after he/she was old enough to sit up or crawl?”

DTP, diphtheria toxoid, tetanus toxoid, and whole cell pertussis vaccine combination; OPV, oral polio vaccine.

Table 4. World Health Organization–recommended EPI schedule, 2012. Adapted from [67].

Birth
Parenteral: BCG, HBV (a)
Oral: OPV (b)

6 weeks (some countries give this dose at 8 weeks)
Parenteral: DTP, Hib, and HBV, usually administered as a pentavalent combination (a); 10- or 13-valent PnCV (c)
Oral: OPV; rotavirus vaccine (Rotateq or Rotarix)

10 weeks (some countries give this dose at 16 weeks)
Parenteral: pentavalent combination (a); 10- or 13-valent PnCV (3p+0 schedule) (c)
Oral: OPV; rotavirus vaccine (Rotateq or Rotarix)

14 weeks (some countries give this dose at 24 weeks)
Parenteral: pentavalent combination (a); 10- or 13-valent PnCV (c)
Oral: OPV; rotavirus vaccine (d) (Rotateq)

9–12 months
Parenteral: measles (e) (rubella (f), with measles); 10- or 13-valent PnCV (2p+1 schedule) (c); yellow fever (endemic countries) (g); Japanese encephalitis (endemic countries) (h)

a. Since perinatal or early postnatal transmission is an important cause of chronic infections globally, all infants should receive their first dose of hepatitis B vaccine as soon as possible (<24 hours) after birth, even in low-endemicity countries. The primary hepatitis B immunization series conventionally consists of three doses of vaccine (one monovalent birth dose followed by two monovalent or combined vaccine doses at the time of the DTP1 and DTP3 vaccine doses). However, four doses may be given for programmatic reasons (e.g., one monovalent birth dose followed by three monovalent or combined vaccine doses with DTP vaccine doses), according to the schedules of national routine immunization programs.
b. OPV alone, including a birth dose, is recommended in all polio-endemic countries and those at high risk for importation and subsequent spread. A birth dose is not considered necessary in countries where the risk of polio virus transmission is low, even if the potential for importation is high/very high.
c. For infants, three primary doses (the 3p+0 schedule) or, as an alternative, two primary doses plus a booster (the 2p+1 schedule). If the 3p+0 schedule is used, vaccination can be initiated as early as 6 weeks of age with an interval between doses of 4–8 weeks. If the 2p+1 schedule is selected, the two primary doses should ideally be completed by 6 months of age, starting as early as 6 weeks of age with a minimum interval of 8 weeks between the two doses (for infants aged ≥7 months a minimum interval of 4 weeks between doses is possible). One booster dose should be given at 9–15 months of age.
d. If Rotarix is used, only two doses are administered.
e. In countries that have achieved a high level of control of measles, the initial dose of measles vaccine can be administered at 12 months of age. All children are currently expected to receive a second dose of measles vaccine. In the least developed countries this is often administered through mass immunization campaigns.
f. Rubella vaccine, administered in combination with measles vaccine, is recommended for countries that reliably administer two doses of measles vaccine and have achieved a high level of measles control.
g. Yellow fever should be co-administered at the infant visit when measles vaccine is administered.
h. Japanese encephalitis vaccines may be given at age 12 months for children living in highly endemic areas.

DTP, diphtheria toxoid, tetanus toxoid, and whole cell pertussis vaccine combination; HBV, hepatitis B vaccine; Hib, Haemophilus influenzae type b conjugate vaccine; OPV, oral polio vaccine; pentavalent combination, DTP+HBV+Hib formulated to be administered in combination as a single injection; PnCV, pneumococcal conjugate vaccine containing either 10 or 13 separate conjugates of different capsular serotypes.

Data Analysis and Reporting Issues

Traditionally, surveys report the proportion of persons who have been vaccinated as recorded “by card only” and by “card plus history,” both by age 12 months and by age at the time of the survey. EPI surveys also calculate and report separately on coverage of “valid” doses among children with cards, such as a minimum interval of 28 days between doses of diphtheria-tetanus-pertussis-containing vaccines and a minimum age of 270 days for measles vaccination [21]. As coverage increases, evaluation of the timeliness of vaccination among children with documented dates of birth, and of each vaccine dose, provides additional information to guide program performance. Timeliness can be illustrated through graphs of the distribution of age at receipt of each dose compared to the national schedule [32],[38] or by time-to-event curves of the cumulative coverage by age [32],[39]. The mean number of extra days or weeks that children remain under-vaccinated and at risk of disease [38],[40],[41] and risk factors for delay in vaccination can be assessed [39],[40].

Surveys that use probability proportional to estimated size sampling without stratification assume that each cluster has equal weight in the analysis. EPI surveys do not collect data on the number of eligible households in each cluster, and cannot validate this assumption.
Consequently, if outdated or inaccurate sampling frames are used, the resulting point estimate may be biased. If, however, a household listing step is included in the survey preparation and sampling stages, appropriate weights can be calculated and used to derive national estimates and confidence intervals, as is now recommended by DHS and MICS protocols [18].

The standard error of the coverage estimate is traditionally used to calculate and report a 95% confidence interval around the point estimate. The confidence interval is affected by the sample size, the sampling design, and the underlying proportion itself. Because individuals living in one cluster of a population tend to be more similar to each other than persons from different clusters, respondents in a cluster sample each contribute less independent information about the overall population than respondents in a simple random sample. This positive intra-cluster correlation causes cluster samples to have a wider confidence interval around the point estimate of the population parameter than a simple random sample of the same size. DHS, MICS, and EPI surveys all provide guidance on estimation of confidence intervals for key indicators, but the degree to which confidence intervals are reported and used varies widely, as discussed elsewhere in this Collection [42]. The application of standard statistical techniques to estimate confidence intervals has been challenged for surveys that use non-probability sampling of households within each cluster [43], although simulations of results from EPI surveys have shown that confidence intervals in these surveys are generally within the desired precision of ±10 percentage points [44]. Some variations on the EPI survey method take a probability sample (e.g., a systematic random sample in the final stage) [22]–[26], which makes it possible to calculate sampling weights and construct meaningful confidence intervals.
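The widening effect of intra-cluster correlation described above can be illustrated with a Wald interval whose variance is inflated by the design effect (a deliberate simplification with made-up numbers; production survey analyses apply sampling weights and variance-estimation methods such as Taylor linearization rather than this shortcut):

```python
import math

def coverage_ci(p_hat, n, deff=1.0, z=1.96):
    # 95% Wald CI for a coverage proportion, variance inflated by the design effect
    se = math.sqrt(deff * p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# 80% coverage estimated from 210 children:
print(coverage_ci(0.8, 210, deff=1.0))  # roughly (0.746, 0.854)
print(coverage_ci(0.8, 210, deff=2.0))  # roughly (0.723, 0.877): same data, wider interval
```

A design effect of two, as assumed in the classic EPI design, widens the interval by a factor of √2 relative to a simple random sample of the same size.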
LQAS surveys inevitably have a central range of coverage (the gray area) within which neither the “adequate” nor the “inadequate” classification can be made reliably. For fixed values of alpha and beta (the probabilities of type I and type II errors, respectively), a larger sample size per lot will result in a narrower gray area and a correspondingly more confident conclusion about whether coverage is likely to be adequate (Figure 3). When data are combined across numerous lots, it is possible to estimate a region-wide proportion and confidence interval using formulae from stratified sampling and applying strata and cluster weights. However, at the level of the individual lot, the user does not obtain a precise coverage estimate from an LQAS survey, but only an assurance that coverage in populations where there is very low coverage is very likely to be classified as inadequate and that coverage in populations where there is very high coverage is very likely to be classified as adequate.

Figure 3. Operating characteristic curves for four LQAS sampling plans. In each panel, the curve indicates the probability of finding d* or more vaccinated children in a random sample of size n. Lots with coverage ≤ the lower threshold (LT) will be classified as having inadequate coverage with probability ≥(1 − α). Lots with coverage ≥ the upper threshold (UT) will be classified as having adequate coverage with probability ≥(1 − β). The gray area is the region where LT < coverage < UT.
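The operating characteristics of an LQAS decision rule follow directly from the binomial distribution: a lot is classified as adequate when at least d* of the n sampled children are vaccinated. A sketch (the plan n = 19, d* = 16 is illustrative, not taken from any of the surveys cited above):

```python
from math import comb

def prob_classified_adequate(n, d_star, coverage):
    # P(at least d_star of n sampled children are vaccinated),
    # i.e., the probability the lot is classified as "adequate"
    return sum(comb(n, k) * coverage**k * (1 - coverage)**(n - k)
               for k in range(d_star, n + 1))

# Lots with very high coverage are almost always accepted, lots with very low
# coverage almost never; coverage in between falls in the gray area.
for cov in (0.5, 0.7, 0.9):
    print(cov, round(prob_classified_adequate(19, 16, cov), 3))
```

Evaluating this function across the full range of coverage values traces out exactly the kind of operating characteristic curve shown in Figure 3.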

              Seroepidemiology: an underused tool for designing and monitoring vaccination programmes in low- and middle-income countries


                Author and article information

                Journal: The Journal of Infectious Diseases (J Infect Dis)
                Publisher: Oxford University Press (US)
                ISSN: 0022-1899 (print); 1537-6613 (electronic)
                Published online: 19 March 2018
                Issue date: 01 August 2018
                Volume 218, Issue 3, Pages 355-364
                Affiliations
                [1 ]Department of Ecology and Evolutionary Biology, Princeton University, Princeton, New Jersey
                [2 ]Department of Environmental Health Sciences, Mailman School of Public Health, Columbia University, New York, New York
                [3 ]London School of Hygiene and Tropical Medicine, London, United Kingdom
                [4 ]Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
                [5 ]International Vaccine Access Center, Department of International Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
                [6 ]Center for Infectious Disease Dynamics, The Pennsylvania State University, State College, Pennsylvania
                [7 ]Netherlands National Institute for Public Health and the Environment, Bilthoven, Netherlands
                [8 ]Leiden University Medical Center, Leiden, Netherlands
                Author notes
                Correspondence: A. K. Winter, MPH, PhD, 106A Guyot Hall, Princeton, NJ 08544 (awinter@princeton.edu).
                Article: jiy137
                DOI: 10.1093/infdis/jiy137
                PMCID: PMC6049004
                PMID: 29562334
                © The Author(s) 2018. Published by Oxford University Press for the Infectious Diseases Society of America.

                This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.

                History: 05 December 2018; 16 March 2018
                Page count: 10
                Funding: Bill and Melinda Gates Foundation (10.13039/100000865)
                Categories: Major Articles and Brief Reports; Vaccines

                Subject: Infectious disease & Microbiology
                Keywords: elimination, measles, rubella, serology, serosurvey
