That’s not fair! An approach to evaluating the fairness of 4-H competitive activities: Part 2

Evaluating the “fairness” of 4-H contests can be a challenge. By following a succinct method of investigation, 4-H leaders, superintendents and staff can determine the fairness of events for participants and make changes, if necessary.

Michigan State University Extension programs are founded on research-based knowledge, and determining the “fairness” of contests and events should be no exception. To determine the fairness of contests and events, evaluators should examine the mechanics of the contest and the perceptions of youth, parents and volunteers, and create a common language that communicates the intent of the 4-H project and its associated contests.

To date, a majority of studies on youth development and fairness have relied heavily on youth perceptions, which has led to a subjective view of “fairness.” The study “How Do We Know if Our Contests Are ‘Fair’?” examined the fairness of the Clackamas County Master Showmanship Contest. The author’s approach to answering the question of fairness with statistical data can help program planners determine whether they are judging fairness based on the perceptions of parents and youth or on statistical data and empirical evidence.

The first step in determining the fairness of your competitive activity is to gather data. Data should be gathered over a three- to five-year time span so that trends can be identified. This can easily be done by developing a single survey and administering it to program participants and contest officials using a one-to-five rating system:

1 = Strongly disagree

2 = Disagree

3 = Neutral

4 = Agree

5 = Strongly agree

It is important to keep the survey questions exactly the same over the course of the study so that ratings can be compared from year to year.
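For planners who record responses in a spreadsheet or simple data file, a short script can make those year-to-year comparisons easier. The following is a minimal sketch, not part of any 4-H program tool, using entirely hypothetical responses; it averages the one-to-five ratings for each survey question by year so trends can be spotted.

```python
# Minimal sketch with hypothetical data: average 1-5 survey ratings
# for each question by year so year-to-year trends can be spotted.
from collections import defaultdict
from statistics import mean

# Each response: (year, question, rating on the 1-5 scale)
responses = [
    (2021, "The contest was fair", 4),
    (2021, "The contest was fair", 2),
    (2021, "The 'best' participant was chosen as the winner", 5),
    (2022, "The contest was fair", 3),
    (2022, "The contest was fair", 5),
    (2022, "The 'best' participant was chosen as the winner", 4),
]

# Group ratings by (question, year).
ratings = defaultdict(list)
for year, question, rating in responses:
    ratings[(question, year)].append(rating)

# Report the average rating for each question in each year.
for question, year in sorted(ratings):
    scores = ratings[(question, year)]
    print(f"{year}  {question}: average rating {mean(scores):.2f} "
          f"({len(scores)} responses)")
```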

Questions for program participants could be:

  • Do you feel that the contest was fair?
  • Do you feel that the “best” participant was chosen as the winner?

Questions for contest officials could be:

  • Did the program do a good job in recognizing the “best” participants in each contest?
  • Do you feel that any participant had an advantage over other participants?
  • Do you feel that the contest is fair?

Additionally, open-ended questions should be included in the surveys. Open-ended questions for participants could be:

  • Does any program participant have an unfair advantage? Why?
  • What do you like about the contest format?
  • What would you change about the contest format?

Contest officials should be asked similar open-ended questions, such as:

  • Do you think that it is easier for some participants to win the contest? Why?
  • Does any program participant have an unfair advantage? Why?
  • What do you like about the contest format?
  • What would you change about the contest format?

Although these questions yield subjective answers, they can help identify perceptions of the event so that staff and program managers can get a better sense of how the contest is viewed from participants’ and contest officials’ points of view.

Simultaneously, program managers must gather empirical data for the contests. For example, in a statewide contest, if staff learn that a certain county is perceived as having an unfair advantage, they can break down the number of winners from each county over a three-year span to determine whether the perception is, in fact, reality.
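As one illustration, the tally below is a minimal sketch using made-up years and county names; it counts how many contest winners came from each county over a three-year span, so the perception of an unfair advantage can be checked against the actual results.

```python
# Minimal sketch with hypothetical data: tally contest winners by county
# over a three-year span to compare perception against actual results.
from collections import Counter

# Each record: (year, winner's county)
winners = [
    (2021, "County A"), (2021, "County B"), (2021, "County A"),
    (2022, "County A"), (2022, "County C"),
    (2023, "County B"), (2023, "County A"), (2023, "County C"),
]

# Count wins per county across the three-year span.
overall = Counter(county for _, county in winners)
print("Wins by county, 2021-2023:")
for county, wins in overall.most_common():
    share = wins / len(winners)
    print(f"  {county}: {wins} wins ({share:.0%} of all winners)")
```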

The next article in this series will share best practices for staff and program managers in examining the data that they have collected in order to propose solutions and make program changes, if necessary.
