The use of psychological networks that conceptualize behavior as a complex interplay of psychological and other components has gained increasing popularity in various research fields. While prior publications have tackled the topics of estimating and interpreting such networks, little work has been conducted to check how accurately such networks are estimated (i.e., how prone they are to sampling variation), and how stable inferences from the network structure (such as centrality indices) are (i.e., whether interpretation remains similar with fewer observations). In this tutorial paper, we aim to introduce the reader to this field and tackle the problem of accuracy under sampling variation. We first introduce the current state of the art of network estimation. Second, we provide a rationale for why researchers should investigate the accuracy of psychological networks. Third, we describe how bootstrap routines can be used to (A) assess the accuracy of estimated network connections, (B) investigate the stability of centrality indices, and (C) test whether network connections and centrality estimates for different variables differ from each other. We introduce two novel statistical methods: for (B) the correlation stability coefficient, and for (C) the bootstrapped difference test for edge weights and centrality indices. We conducted and present simulation studies to assess the performance of both methods. Finally, we developed the free R package bootnet, which allows for estimating psychological networks in a generalized framework in addition to performing the proposed bootstrap methods. We showcase bootnet in a tutorial, accompanied by R syntax, in which we analyze a dataset of 359 women with posttraumatic stress disorder available online. Electronic supplementary material The online version of this article (doi:10.3758/s13428-017-0862-1) contains supplementary material, which is available to authorized users.
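The bootstrap workflow the abstract describes can be sketched in a few lines of R with the bootnet package. This is a minimal, hedged sketch, not the paper's supplementary syntax: the dataset name `mydata` and the estimation default are illustrative assumptions.

```r
library(bootnet)

# Estimate a regularized partial-correlation network
# ("EBICglasso" is one of bootnet's default estimation options)
network <- estimateNetwork(mydata, default = "EBICglasso")

# (A) Nonparametric bootstrap: accuracy of edge-weight estimates,
#     which also supports the bootstrapped difference test (C)
bootEdges <- bootnet(network, nBoots = 1000, type = "nonparametric")
plot(bootEdges, order = "sample")

# (B) Case-dropping bootstrap: stability of centrality indices,
#     summarized by the correlation stability (CS) coefficient
bootCases <- bootnet(network, nBoots = 1000, type = "case")
corStability(bootCases)
```

The case-dropping bootstrap re-estimates the network on progressively smaller subsets of participants; the CS coefficient then reports the largest proportion of cases that can be dropped while centrality in the subsets still correlates highly with centrality in the full sample.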
The Mini International Neuropsychiatric Interview (MINI) is a short diagnostic structured interview (DSI) developed in France and the United States to explore 17 disorders according to Diagnostic and Statistical Manual (DSM)-III-R diagnostic criteria. It is fully structured to allow administration by non-specialized interviewers. In order to keep it short, it focuses on the existence of current disorders. For each disorder, one or two screening questions rule out the diagnosis when answered negatively. Probes for severity, disability, or medically explained symptoms are not explored symptom by symptom. Two joint papers present the inter-rater and test-retest reliability of the MINI, and its validity versus the Composite International Diagnostic Interview (CIDI) (this paper) and the Structured Clinical Interview for DSM-III-R patients (SCID) (joint paper). Three hundred and forty-six patients (296 psychiatric and 50 non-psychiatric) were administered the MINI and the CIDI 'gold standard'. Forty-two were interviewed by two investigators, and 42 were re-interviewed within two days. Interviewers were trained to use both instruments. The mean duration of the interview was 21 min with the MINI and 92 min for the corresponding sections of the CIDI. Kappa coefficient, sensitivity, and specificity were good or very good for all diagnoses, with the exception of generalized anxiety disorder (GAD) (kappa = 0.36), agoraphobia (sensitivity = 0.59), and bulimia (kappa = 0.53). Inter-rater and test-retest reliability were good. The main reasons for discrepancies were identified. The MINI provided reliable DSM-III-R diagnoses within a short time frame. The study permitted improvements in the formulations for GAD and agoraphobia in the current DSM-IV version of the MINI.