While thinking of a topic for my blog this week, I decided I’d like to talk about something I’m passionate about. In a recent course rep meeting with members of staff, we discussed the impact and accuracy of grades. This discussion made me realise that grades are a key factor in determining your future life choices: your grades at school determine whether you can pursue your interest at degree level, and your grades at university can determine the likelihood of achieving your dream job. Therefore, over the upcoming weeks, I will discuss whether grading is the most effective method for assessing students, and the multitude of ways grades affect them. As Boud (1995, as cited in Yorke, Bridges & Woolf, 2000) put it, “Students can, with difficulty, escape from the effects of poor teaching, they cannot (by definition, if they want to graduate) escape the effects of poor assessment”. Grading is a complex process, and the marker has an impact on the grade before the grade has an impact on the student. Students often assume their grade solely reflects their performance, when in reality a number of other factors can influence the grade they are given.
Lecturers who mark your work are subject to influence, just like anyone else. A marker’s personality, energy, available time, and their interest in and experience of both marking and the subject area can all affect their accuracy (Suto & Nadas, 2008; Yorke, Bridges & Woolf, 2000). In addition, another student’s work can influence an individual’s mark. Markers often struggle to treat each piece of work separately, and comparisons are often made with the previous assignment (Crisp, 2010). For example, if the assignment read before yours was outstanding, your work may not be graded as highly by comparison. In this respect, marking is clearly vulnerable to subjectivity.
Marking assignments isn’t the only thing university lecturers do, as they are often also involved in lecturing, research and pastoral care. As a result, a team of markers, including postgraduate students, is often used for a quicker marking turnaround (Bird & Yucel, 2010). This may be time-effective, but the inter-rater reliability between markers is often questioned as a result (Suto & Nadas, 2008; Bird & Yucel, 2010). It is important for markers to have a shared understanding of what is expected from an assignment; however, self-reports suggest they often use different judgement processes when marking. Some markers have learnt their assessment techniques from colleagues, others from how they were graded at school, or through training within their educational institution (Yorke, Bridges & Woolf, 2000).
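Inter-rater reliability isn’t just a vague worry, by the way; it can be quantified. As an illustration (this is my own sketch with made-up grades, not a calculation from any of the studies cited above), here is a minimal Python version of Cohen’s kappa, a common statistic that measures how often two markers agree after correcting for the agreement you’d expect by chance alone:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items both raters graded identically.
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement, from each rater's marginal grade frequencies.
    counts1, counts2 = Counter(rater1), Counter(rater2)
    p_expected = sum(counts1[g] * counts2[g] for g in counts1) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Two hypothetical markers grading the same ten essays.
marker_a = ["A", "A", "B", "B", "B", "C", "C", "A", "B", "C"]
marker_b = ["A", "B", "B", "B", "C", "C", "C", "A", "B", "B"]
print(round(cohens_kappa(marker_a, marker_b), 2))  # prints 0.54
```

Here the markers agree on 7 of 10 essays, yet kappa is only about 0.54, because some of that agreement would happen by chance anyway. Moderate values like this are exactly the kind of result that marking schemes and moderation sessions aim to improve.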
There are methods for tackling this lack of inter-rater reliability. Bird & Yucel (2010) suggested an “integrated model of assessment programme”, which uses marking schemes, moderation sessions and examples of graded work. This particular programme reduced both marker variability and the time taken to mark work. It is often assumed that there is a speed-accuracy trade-off in marking; however, it has been found that practice effects allow markers to assess work more automatically, and occasionally more accurately (Nadas & Suto, 2010). Thus practice makes perfect, and inexperienced postgraduates in particular should be given ample opportunity to practise their assessment skills.
Clear marking schemes will increase reliability amongst markers. However, it is important to ensure this focus on reliability doesn’t come at the expense of validity. A valid assessment tests whether students have in-depth knowledge of the area, and Education Secretary Michael Gove has supported this by suggesting our focus should shift from reliability to validity to truly stretch students’ abilities (Ofqual, 2013). Also, if marking schemes are expected to be followed strictly, what happens when a student produces something novel? Will their creativity be inhibited by the need to follow guidelines? I will discuss the effect grading has on creativity in a later blog.
Research has also investigated the range of marks given for subjective assignments. In a subject such as Maths, where there is a definitive answer, the full range of marks is often used. In a subject such as Psychology, however, where there is often no right or wrong answer, grades tend to be restricted to the middle of the range. The top marks (e.g. A*) are reserved for a ‘perfect’ answer, yet under subjective marking, ‘perfect’ is almost impossible to achieve (Yorke, Bridges & Woolf, 2000).
Some educators believe marker inaccuracy and subjectivity are such a problem that alternative methods have been suggested. For example, automated assessment programmes have been developed that can assess essay answers (Valenti, Neri & Cucchiarelli, 2003). This approach removes the subjective element of human marking, but would a student’s motivation suffer if the essay they slaved over for days was never read by a person? Inevitably, a computer cannot be inspired and entertained by an essay the way a marker can (Rosa, 2013). This is a fascinating, controversial method which has sparked debate amongst educators, and I will explore it further in next week’s post.
From this research it is clear that markers play a vital role in students’ grades. Regardless of training and strict criteria, some human error is inevitable, so it is important that educators aim to minimise this error even if they cannot eliminate it. Subjectivity, after all, is what makes us human, and it is what allows students to take a novel approach to a question. Our markers aren’t machines; it would therefore be illogical, and stifling for both student and marker, to idealise completely objective marking. So after you have read this blog, how are you going to assess it? Will you follow strict criteria for how an assignment should be structured, or allow novel ideas and subjectivity to override the guidelines?