      Is Open Access

      Application and theory gaps during the rise of Artificial Intelligence in Education

      Computers and Education: Artificial Intelligence
      Elsevier BV


Most cited references: 137


          Deep learning.

          Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
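The mechanism this abstract describes — a forward pass that builds a representation in each layer from the representation in the previous layer, then backpropagation indicating how each internal parameter should change — can be sketched with a minimal NumPy network. This is a toy illustration, not code from the cited paper; the XOR task, architecture, and hyperparameters are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a task no single linear layer can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two processing layers, each computing a new representation of its input.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass: each layer's representation comes from the previous layer's.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: backpropagation indicates how each parameter should change.
    dp = p - y                        # cross-entropy gradient w.r.t. the logits
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)   # chain rule through the tanh layer
    dW1 = X.T @ dh; db1 = dh.sum(0)

    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad

loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```

After training, the hidden layer has learned an intermediate representation in which the XOR classes become linearly separable — the layered representation learning the abstract refers to, in miniature.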

            Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement.

              Is Open Access

              Interrater reliability: the kappa statistic

The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement due to its inability to account for chance agreement. He introduced Cohen’s kappa, which accounts for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, kappa can range from −1 to +1. While kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research have been questioned. Cohen’s suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
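The two quantities this abstract contrasts can be computed directly. A minimal sketch (hypothetical helper names, not from the cited article): percent agreement is the number of agreement scores divided by the total number of scores, and Cohen's kappa subtracts out the agreement expected by chance, κ = (p_o − p_e) / (1 − p_e):

```python
from collections import Counter

def percent_agreement(rater1, rater2):
    # Number of agreement scores divided by the total number of scores.
    return sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)

def cohens_kappa(rater1, rater2):
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    # and p_e is the agreement expected by chance, estimated from each
    # rater's marginal label frequencies.
    n = len(rater1)
    p_o = percent_agreement(rater1, rater2)
    counts1, counts2 = Counter(rater1), Counter(rater2)
    p_e = sum(counts1[label] * counts2[label] for label in counts1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters scoring the same six items:
r1 = ["yes", "yes", "no", "yes", "no", "no"]
r2 = ["yes", "no", "no", "yes", "no", "yes"]
# percent_agreement(r1, r2) -> 0.666...; cohens_kappa(r1, r2) -> 0.333...
```

Note that kappa (1/3) falls well below the raw percent agreement (2/3) here, illustrating the abstract's point: chance agreement inflates percent agreement, and kappa corrects for it.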

                Author and article information

                Contributors
                Journal
Computers and Education: Artificial Intelligence
Elsevier BV
ISSN: 2666-920X
2020
Volume: 1
Article: 100002
DOI: 10.1016/j.caeai.2020.100002
                © 2020

                https://www.elsevier.com/tdm/userlicense/1.0/

                http://creativecommons.org/licenses/by-nc-nd/4.0/
