Inter-rater reliability/Related Articles
See also changes related to Inter-rater reliability, or pages that link to Inter-rater reliability or to this page or whose text contains "Inter-rater reliability".
Parent topics
Subtopics
Bot-suggested topics
Auto-populated based on Special:WhatLinksHere/Inter-rater reliability. Needs checking by a human.
Fleiss' kappa: Statistical measure for assessing the reliability of agreement between a fixed number of raters assigning categorical ratings to a number of items.
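For orientation, a minimal sketch of the standard definition (not drawn from the linked article itself): Fleiss' kappa compares the mean observed per-item agreement <math>\bar{P}</math> with the agreement <math>\bar{P}_e</math> expected by chance from the overall category proportions:

<math>\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}</math>

A value of 1 indicates complete agreement among raters; values near 0 indicate agreement no better than chance.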