Abstract. This article explains how papers submitted to Psychological Test Adaptation and Development should be structured and is intended to guide the preparation of submissions. Each submission should adhere as strictly as possible to the following structure. If, for any reason, certain aspects cannot be provided, this should be explained and taken into account in the limitations and recommendations. The outline in Table 1 is followed by a detailed explanation of each section.

Table 1. Content required for each section; specifics for registered reports are noted where they apply.

Theory/Introduction

(A) What is the construct being measured?
• Define each construct measured
• Define a possible hierarchy
• Elaborate on score intercorrelations
• Elaborate on the nomological net
  - Relations with manifest variables (items)
  - Relations with other constructs
• Derive hypotheses regarding
  - Structural validity
  - Convergent and discriminant validity
• Explain how the items were constructed and how they reflect the theory defined above
• Consider the consequences of adaptations/translations

(B) What is the intended use?
• Define the intended use(s) and rule out what the measure is not intended for
  - Elaborate on the necessary data selection contexts
• Derive hypotheses regarding test-criterion correlations
  - Pay attention to possible construct overlap; do not rely on bivariate correlations alone (use regression, for example, for facet scores)
• Delineate requirements for item difficulties
• Delineate requirements for reliability estimates
  - Prognosis vs. status
  - Survey vs. individual assessment
  - Measurement precision

(C) What is the intended target population?
• Define the target population(s)
• Explain how this is adhered to during the studies
• Delineate requirements for item difficulties and content

Methods
• Provide specifics about a possible translation or about the adaptation/development undertaken
• Describe the data collection
• Report all measures used in the entire study
• Provide sample information
  - Descriptive statistics
  - Size
  - Composition (e.g., age, gender)
• Justify the sample size
• Provide rules for stopping data collection
• Statistical analyses
  - Define the alpha level and whether a test is one- or two-tailed
  - Explain which methods match which hypotheses/assumptions and which result is indicative of supporting evidence
  - Structural validity
    • Clearly state whether analyses are exploratory (no prior evidence) or confirmatory
    • Exploratory factor analysis (EFA) vs. confirmatory factor analysis (CFA)
    • Cluster vs. factor mixture modeling
    • Define cutoffs
    • Define what to do in case of misfit
  - Define rules for item selection (e.g., based on loadings, difficulties, variances, content)
  - Report the software (version) and packages used
  - Provide code that allows a reproduction of the analyses (a minimal sketch follows this outline)

Results
• Provide all information necessary to evaluate the evidence with regard to the assumptions/hypotheses in A, B, and C
• Report all exploratory analyses that were additionally conducted

Discussion (for registered reports, not required upon initial submission)
• Evaluate the evidence provided with regard to A, B, and C
• Elaborate on limitations
• Make clear recommendations on how the score(s) can be used
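To illustrate the final point under Methods (providing code that reproduces the analyses), the following is a minimal sketch in Python; the journal does not prescribe a language, and the file name items.csv, the column layout, and the particular statistics shown (Cronbach's alpha and corrected item-total correlations) are illustrative assumptions rather than requirements.

```python
# Minimal, hypothetical reproducibility script. Assumes item-level responses are
# stored in "items.csv" with one column per item; file name and statistics are
# illustrative choices, not journal requirements.
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


def corrected_item_total_correlations(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items (one basis for item selection)."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items.columns}
    )


if __name__ == "__main__":
    data = pd.read_csv("items.csv")  # hypothetical file with one column per item
    print("Cronbach's alpha:", round(cronbach_alpha(data), 3))
    print(corrected_item_total_correlations(data).round(3))
```

Submitting such a script together with the software and package versions used addresses the reproducibility items listed under Methods.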