
      Evaluation of the Feasibility, Reliability, and Repeatability of Welfare Indicators in Free-Roaming Horses: A Pilot Study

      research-article



          Simple Summary

Animal welfare assessment is an essential tool for maintaining positive animal wellbeing. Validated welfare assessment protocols have been developed for farm, laboratory, zoo, and companion animals, including horses in managed care. However, wild and free-roaming equines have received relatively little attention, despite populations being found worldwide. In the UK, free-roaming ponies inhabit areas including Exmoor, Dartmoor, and the New Forest in England, and Snowdonia National Park in Wales. Visitors and local members of the public who encounter free-roaming ponies occasionally raise concerns about their welfare, as the animals are not provided with additional food, water, or shelter. In this study, we evaluated the feasibility, reliability, and repeatability of welfare indicators that can be applied to a population of free-roaming Carneddau Mountain ponies to address such concerns. Our findings indicate that many of the trialed indicators were successfully repeated and had good levels of inter-assessor reliability. Reliable and repeatable welfare indicators for free-roaming and semi-free-roaming ponies will enable population managers and conservation grazing schemes to monitor and manage the welfare of these horses and ponies.

          Abstract

          Validated assessment protocols have been developed to quantify welfare states for intensively managed sport, pleasure, and working horses. There are few protocols for extensively managed or free-roaming populations. Here, we trialed welfare indicators to ascertain their feasibility, reliability, and repeatability using free-roaming Carneddau Mountain ponies as an example population. The project involved (1) the identification of animal and resource-based measures of welfare from both the literature and discussion with an expert group; (2) testing the feasibility and repeatability of a modified body condition score and mobility score on 34 free-roaming and conservation grazing Carneddau Mountain ponies; and (3) testing a prototype welfare assessment template comprising 12 animal-based and 6 resource-based welfare indicators, with a total of 20 questions, on 35 free-roaming Carneddau Mountain ponies to quantify inter-assessor reliability and repeatability. This pilot study revealed that many of the indicators were successfully repeatable and had good levels of inter-assessor reliability. Some of the indicators could not be verified for reliability due to low/absent occurrence. The results indicated that many animal and resource-based indicators commonly used in intensively managed equine settings could be measured in-range with minor modifications. This study is an initial step toward validating a much-needed tool for the welfare assessment of free-roaming and conservation grazing ponies.


Most cited references (65)


          Interrater reliability: the kappa statistic

The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While a variety of methods exist to measure interrater reliability, it was traditionally measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement due to its inability to account for chance agreement. He introduced Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, kappa can range from −1 to +1. While kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen's suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
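The two statistics described above, percent agreement and Cohen's kappa, can be sketched in a few lines. This is a minimal illustration; the rater scores below are hypothetical and not taken from the study:

```python
from collections import Counter

def percent_agreement(r1, r2):
    # Fraction of items on which both raters assigned the same score.
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    # Observed agreement.
    po = percent_agreement(r1, r2)
    # Expected chance agreement, from each rater's marginal frequencies.
    n = len(r1)
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    # Kappa rescales observed agreement by removing the chance component.
    return (po - pe) / (1 - pe)

# Two raters scoring the same 10 subjects on a binary scale.
r1 = ["good", "good", "good", "poor", "good", "poor", "good", "good", "poor", "good"]
r2 = ["good", "good", "poor", "poor", "good", "poor", "good", "poor", "poor", "good"]

print(percent_agreement(r1, r2))         # 0.8
print(round(cohens_kappa(r1, r2), 3))    # 0.6 (chance agreement pe = 0.5)
```

Perfect agreement gives kappa = 1, agreement no better than chance gives 0, which is why kappa is preferred over raw percent agreement.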

            A Coefficient of Agreement for Nominal Scales


              Understanding interobserver agreement: the kappa statistic.

              Items such as physical exam findings, radiographic interpretations, or other diagnostic tests often rely on some degree of subjective interpretation by observers. Studies that measure the agreement between two or more observers should include a statistic that takes into account the fact that observers will sometimes agree or disagree simply by chance. The kappa statistic (or kappa coefficient) is the most commonly used statistic for this purpose. A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement equivalent to chance. A limitation of kappa is that it is affected by the prevalence of the finding under observation. Methods to overcome this limitation have been described.
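The prevalence limitation noted above is easy to demonstrate numerically. In this hypothetical sketch (labels and counts invented for illustration), two raters agree on 18 of 20 items, yet kappa comes out negative because one category dominates both raters' marginals:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    # kappa = (po - pe) / (1 - pe): observed agreement corrected for chance.
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (po - pe) / (1 - pe)

# 'sound' is highly prevalent; each rater flags a different single item as 'lame'.
r1 = ["lame"] + ["sound"] * 19
r2 = ["sound"] * 19 + ["lame"]

po = sum(a == b for a, b in zip(r1, r2)) / len(r1)
print(po)                              # 0.9 -- high raw agreement
print(round(cohens_kappa(r1, r2), 3))  # -0.053 -- kappa depressed by prevalence
```

High percent agreement alongside a low or negative kappa is exactly the pattern the abstract flags; prevalence-adjusted variants of kappa are among the remedies that have been described.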

                Author and article information

                Contributors
                Role: Academic Editor
                Role: Academic Editor
Journal
Animals (Basel)
Animals: an Open Access Journal from MDPI
Publisher: MDPI
ISSN: 2076-2615
Published: 02 July 2021
Volume: 11
Issue: 7
Article: 1981
                Affiliations
[1] Animal Behaviour & Welfare Research Group, Department of Biological Sciences, University of Chester, Chester CH1 4BJ, UK; k.mclennan@chester.ac.uk (K.M.M.); christina.stanley@chester.ac.uk (C.R.S.)
[2] Faculty of Health and Life Sciences, Institute of Infection, Veterinary & Ecological Sciences, The University of Liverpool, Neston CH64 7TE, UK; J.D.Stack@liverpool.ac.uk (J.D.S.); H.Braid@liverpool.ac.uk (H.B.)
                Author notes
[*] Correspondence: 1914124@chester.ac.uk
                Author information
                https://orcid.org/0000-0002-9355-9641
                https://orcid.org/0000-0003-2582-7584
                https://orcid.org/0000-0002-8888-540X
                https://orcid.org/0000-0002-5053-4831
                Article
                animals-11-01981
DOI: 10.3390/ani11071981
                8300213
                34359108
                © 2021 by the authors.

Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

History
Received: 21 May 2021
Accepted: 29 June 2021
                Categories
                Article

Keywords: animal welfare, assessment, equine, extensively managed, feral horses
