Interrater reliability of Texas all-region choir audition adjudicators: The effects of training and judging experience

Date

2015-08

Abstract

The purpose of this study was to evaluate the extent to which participation in a structured music adjudication training program and prior judging experience affected interrater reliability for adjudicators of middle school All-Region Choir Auditions. Participants included pre-service and professional music educators (N=152) who judged All-Region Choir auditions in three different Texas Music Educators Association (TMEA) Regions. Novice participants, defined as having 0–1 years’ prior judging experience (n=55), were divided into training experimental (n=26) and control (n=29) groups. Experimental group members participated in a structured adjudication training workshop prior to the day of All-Region Choir Auditions, while members of the control group received no training beyond the general instructions given to all adjudicators on the morning of auditions.

There were 24 judging panels, each consisting of 5–8 adjudicators, with every judge on a panel ranking the auditioning students. TMEA requires its Regions to use Olympic scoring, in which the highest and lowest rankings for each auditioning student are dropped. The present study employed a modified form of Olympic scoring, and the rate at which a judge’s rankings were dropped (drop rate) served as the measure of interrater reliability for each individual adjudicator; a lower drop rate indicates higher interrater reliability. Pearson correlations indicated higher reliability for adjudicators who received training (r = -0.27, p = 0.045, n = 55) but lower reliability for adjudicators with any prior judging experience (r = 0.21, p = 0.012, n = 140). The mean drop rate for trained novice judges (27.5%) was lower than that for both untrained novice (32.9%) and untrained experienced (33.7%) adjudicators. This result suggests that training, not prior judging experience, has a positive effect on interrater reliability. Additional findings indicated that reliability was lower for judging panels that evaluated more than 50 auditioning students, and that there was no relationship between prior music teaching experience and interrater reliability.
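The abstract does not specify how the study's Olympic scoring was modified, so the short Python sketch below illustrates only the standard version of the drop-rate idea: for each student, the single highest and single lowest rank on the panel are dropped, and each judge's drop rate is the fraction of students for whom that judge's rank was dropped. Judge and student names, and the ranks themselves, are hypothetical.

    from collections import defaultdict

    def drop_rates(rankings):
        """Per-judge drop rate under standard (unmodified) Olympic scoring.

        `rankings` maps judge -> {student: rank}, with every judge on the
        panel ranking every student (rank 1 = best). For each student, the
        single highest and single lowest rank are dropped; a judge's drop
        rate is the share of students for whom that judge's rank was dropped.
        """
        judges = list(rankings)
        students = list(next(iter(rankings.values())))
        dropped = defaultdict(int)

        for student in students:
            panel = [(rankings[judge][student], judge) for judge in judges]
            dropped[max(panel)[1]] += 1  # judge who gave the highest (worst) rank
            dropped[min(panel)[1]] += 1  # judge who gave the lowest (best) rank

        return {judge: dropped[judge] / len(students) for judge in judges}

    # Hypothetical four-judge panel ranking three students.
    panel = {
        "judge_a": {"s1": 1, "s2": 2, "s3": 1},
        "judge_b": {"s1": 2, "s2": 1, "s3": 2},
        "judge_c": {"s1": 5, "s2": 4, "s3": 5},
        "judge_d": {"s1": 3, "s2": 3, "s3": 3},
    }
    print(drop_rates(panel))  # judge_c's outlying ranks are dropped for every student

Under a metric of this kind, a judge who consistently disagrees with the panel consensus approaches a drop rate of 100%, while a judge whose rankings sit near the middle of the panel approaches 0%, which is what allows the drop rate to stand in for individual interrater reliability.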

Keywords

Interrater Reliability, Inter-Rater Reliability, Adjudication, All-Region Choir, Choral Auditions, Judging, Adjudication Training, Adjudicator Training, Training
