Generalized anxiety disorder (GAD) is one of the most common mental disorders; however, there is no brief clinical measure for assessing GAD. The objective of this study was to develop a brief self-report scale to identify probable cases of GAD and evaluate its reliability and validity. A criterion-standard study was performed in 15 primary care clinics in the United States from November 2004 through June 2005. Of a total of 2740 adult patients completing a study questionnaire, 965 patients had a telephone interview with a mental health professional within 1 week. For criterion and construct validity, GAD self-report scale diagnoses were compared with independent diagnoses made by mental health professionals; functional status measures; disability days; and health care use. A 7-item anxiety scale (GAD-7) had good reliability, as well as criterion, construct, factorial, and procedural validity. A cut point was identified that optimized sensitivity (89%) and specificity (82%). Increasing scores on the scale were strongly associated with multiple domains of functional impairment (all 6 Medical Outcomes Study Short-Form General Health Survey scales and disability days). Although GAD and depression symptoms frequently co-occurred, factor analysis confirmed them as distinct dimensions. Moreover, GAD and depression symptoms had differing but independent effects on functional impairment and disability. There was good agreement between self-report and interviewer-administered versions of the scale. The GAD-7 is a valid and efficient tool for screening for GAD and assessing its severity in clinical practice and research.
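As an illustration of how a brief scale like this is scored in practice, here is a minimal Python sketch. The function names are our own; the seven items are each rated 0-3 ("not at all" to "nearly every day"), giving a total of 0-21, and the cut point of 10 is the one widely reported for this scale.

```python
def gad7_score(responses):
    """Sum the seven GAD-7 item responses, each coded 0-3."""
    if len(responses) != 7 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("GAD-7 requires seven item responses coded 0-3")
    return sum(responses)


def screens_positive(total, cut_point=10):
    """Flag a probable case using the commonly reported cut point of 10."""
    return total >= cut_point


# Example: a patient endorsing moderate symptoms on most items.
total = gad7_score([2, 2, 1, 2, 1, 2, 1])
print(total, screens_positive(total))  # -> 11 True
```

A screening tool like this flags probable cases for follow-up; as the abstract notes, diagnoses were confirmed by independent interviews with mental health professionals.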
We consider the problem of estimating sparse graphs by applying a lasso penalty to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm, the graphical lasso, that is remarkably fast: it solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30 to 4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on cell-signaling data from proteomics.
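The technique described above, lasso-penalized sparse inverse-covariance estimation, is available in scikit-learn as `GraphicalLasso`, an implementation of this coordinate-descent approach. The sketch below uses toy data of our own invention, not the paper's proteomics data; zero entries in the estimated precision matrix correspond to absent edges in the conditional-independence graph.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Toy data: 200 samples of 5 variables with one known dependency
# (variable 1 is driven partly by variable 0; the rest are noise).
X = rng.standard_normal((200, 5))
X[:, 1] += 0.8 * X[:, 0]

# Fit the graphical lasso; alpha controls the l1 penalty strength
# and hence the sparsity of the estimated inverse covariance.
model = GraphicalLasso(alpha=0.2).fit(X)
precision = model.precision_  # estimated inverse covariance matrix

# Nonzero off-diagonal entries are the edges of the estimated graph.
edges = np.abs(precision) > 1e-4
np.fill_diagonal(edges, False)
print(edges.astype(int))
```

Larger `alpha` values zero out more off-diagonal entries, trading recovery of weak edges for a sparser, more interpretable graph.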
The use of psychological networks that conceptualize behavior as a complex interplay of psychological and other components has gained increasing popularity in various research fields. While prior publications have tackled the topics of estimating and interpreting such networks, little work has been conducted to check how accurately networks are estimated (i.e., how prone they are to sampling variation) and how stable inferences from the network structure (such as centrality indices) are (i.e., whether the interpretation remains similar with fewer observations). In this tutorial paper, we aim to introduce the reader to this field and tackle the problem of accuracy under sampling variation. First, we introduce the current state of the art of network estimation. Second, we provide a rationale for why researchers should investigate the accuracy of psychological networks. Third, we describe how bootstrap routines can be used to (A) assess the accuracy of estimated network connections, (B) investigate the stability of centrality indices, and (C) test whether network connections and centrality estimates for different variables differ from each other. We introduce two novel statistical methods: for (B), the correlation stability coefficient, and for (C), the bootstrapped difference test for edge weights and centrality indices. We conducted simulation studies, presented here, to assess the performance of both methods. Finally, we developed the free R package bootnet, which implements the proposed bootstrap methods and allows for estimating psychological networks in a generalized framework. We showcase bootnet in a tutorial, accompanied by R syntax, in which we analyze a dataset, available online, of 359 women with posttraumatic stress disorder. Electronic supplementary material: The online version of this article (doi:10.3758/s13428-017-0862-1) contains supplementary material, which is available to authorized users.
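The bootstrap idea described above, resampling observations and re-estimating the network to see how much edge weights vary, can be sketched in a few lines. This is our own minimal Python illustration, not the authors' implementation (which is the R package bootnet), and for brevity it uses raw correlations as edge weights rather than the regularized partial correlations typical of psychological networks.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 300 observations of 4 "symptom" variables, with one
# built-in dependency between variables 0 and 1.
data = rng.standard_normal((300, 4))
data[:, 1] += 0.6 * data[:, 0]
n, p = data.shape


def edge_weights(x):
    """Edge weights as zero-diagonal correlations (a stand-in for a
    regularized partial-correlation network)."""
    w = np.corrcoef(x, rowvar=False)
    np.fill_diagonal(w, 0.0)
    return w


# Nonparametric bootstrap: resample rows with replacement and
# re-estimate the network each time.
boot = np.array([
    edge_weights(data[rng.integers(0, n, size=n)])
    for _ in range(500)
])

# 95% bootstrap interval for every edge; wide intervals flag edges
# whose estimates are unstable under sampling variation.
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("edge (0,1) 95% CI:", (round(lo[0, 1], 2), round(hi[0, 1], 2)))
```

The same resampling loop underlies the stability checks for centrality indices: one recomputes the index on each bootstrap network and examines how much the resulting rankings vary.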