
What Is the Maximum Likelihood Estimate When the Initial Solution to the Optimization Problem Is Inadmissible? The Case of Negatively Estimated Variances


Abstract

The default procedures of the software programs Mplus and lavaan tend to yield an inadmissible solution (also called a Heywood case) when the sample is small or the parameter is close to the boundary of the parameter space. In factor models, a negatively estimated variance often occurs. One strategy for dealing with this is to fix the variance to zero and then estimate the model again in order to obtain the estimates of the remaining model parameters. In the present article, we present one possible approach for justifying this strategy. Specifically, using a simple one-factor model as an example, we show that the maximum likelihood (ML) estimate of the variance of the latent factor is zero when the initial solution to the optimization problem (i.e., the solution provided by the default procedure) is a negative value. The basis of our argument is the very definition of ML estimation, which requires that the log-likelihood be maximized over the parameter space. We present the results of a small simulation study, which was conducted to evaluate the proposed ML procedure and compare it with Mplus' default procedure. We found that the proposed ML procedure increased estimation accuracy compared to Mplus' procedure, rendering the ML procedure an attractive option for dealing with inadmissible solutions.
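
As a concrete illustration of the strategy described in the abstract, the sketch below refits a one-factor model with the factor variance fixed to zero once the unconstrained estimate turns out negative. It uses lavaan (named in the abstract); the data set dat, the indicators y1-y4, the tau-equivalent loadings, and the parameter label phi are illustrative assumptions, not details taken from the article.

    # Sketch only: 'dat' and the tau-equivalent one-factor model are
    # assumed for illustration, not taken from the article.
    library(lavaan)

    model_free <- '
      f =~ 1*y1 + 1*y2 + 1*y3 + 1*y4  # loadings fixed to 1 for identification
      f ~~ phi*f                      # label the factor variance
    '
    fit_free <- cfa(model_free, data = dat)

    # Heywood case: the default (unconstrained) solution for phi is negative.
    # Because the admissible parameter space is [0, Inf), the ML estimate is 0:
    if (coef(fit_free)["phi"] < 0) {
      model_fixed <- '
        f =~ 1*y1 + 1*y2 + 1*y3 + 1*y4
        f ~~ 0*f                      # fix the factor variance to zero
      '
      fit_fixed <- cfa(model_fixed, data = dat)  # re-estimate the rest
    }

The refit reflects the article's argument: maximizing the log-likelihood over the admissible parameter space places the estimate at the boundary value zero whenever the unconstrained maximizer is negative.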

Most cited references (33)

lavaan: An R Package for Structural Equation Modeling

Prior distributions for variance parameters in hierarchical models (comment on article by Browne and Draper)

Principles of multilevel modelling.

Author and article information

Journal: Psych (MDPI AG), ISSN 2624-8611
Published: June 30, 2022 (September 2022 issue)
Volume 4, Issue 3, pp. 343-356
DOI: 10.3390/psych4030029
© 2022. Open access under the CC BY 4.0 license: https://creativecommons.org/licenses/by/4.0/
