Assessment patterns across degree programs: Recent research has implications for curriculum leaders
I want to draw on some recent research on patterns of assessment across degree programs and link this to the challenges of effectively assessing the Mathematics and Science Threshold Learning Outcomes (TLOs). First, I’ll start with students because it is the students who we are ultimately doing this work for.
Luckily, students recognise the importance of many of the skills that underpin the TLOs in both science and mathematics (at least the students in this study did).
However, our recent research shows that assessment opportunities to demonstrate development of key skills (written and oral communication) are limited and often not scaffolded across years within a degree program. Even with major curriculum reform unfolding over several years, shifting students’ perceptions of developing key skills (quantitative skills, in this case) is difficult.
What is a curriculum leader to do?!
Assessment drives learning, right? Clearly, assessment is a critical piece of the solution to building students’ TLOs, but perhaps not in the ways we think it does.
A new study, led by David Boud, on the disruptive effects of assessment patterns has many lessons for curriculum leaders.
Boud has previously written about how assessment in higher education has to do ‘double duty’. That is, it has to be an indicator of student learning outcomes (assessment of learning) and a means for guiding students’ learning (assessment for learning).
In this study, assessment is viewed as serving both purposes PLUS helping students learn to make sensible judgements about their own learning (self-assessing accurately or, as Sadler puts it, developing evaluative expertise).
The researchers collected data via an online system that captured students’ self-assessed grades alongside teacher-generated grades, and analysed years’ worth of assessment tasks. This allowed them to explore patterns and factors correlated with successful self-assessment within units of study and across several units of study from first to second to final year.
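To make the idea of "calibration" concrete, here is a minimal sketch of how one might quantify the gap between self-assessed and teacher-generated grades for a single student across successive tasks. This is an illustration only, not the authors' actual analysis; the data and the improvement check are hypothetical.

```python
# Illustrative sketch (NOT the Boud et al. method): measure self-assessment
# calibration as the absolute gap between a student's self-assessed grade
# and the teacher-generated grade, tracked across successive tasks.

def calibration_gaps(records):
    """Return the absolute self/teacher grade gap for each task, in order.

    records: list of (self_grade, teacher_grade) tuples for one student,
    ordered from earliest to latest assessment task.
    """
    return [abs(self_g - teacher_g) for self_g, teacher_g in records]

def is_improving(gaps):
    """A crude check: is the average gap in the later half of tasks
    smaller than the average gap in the earlier half?"""
    half = len(gaps) // 2
    early, late = gaps[:half], gaps[half:]
    return sum(late) / len(late) < sum(early) / len(early)

# Hypothetical student: over-estimates early on, then converges
# towards the teacher's judgement.
student = [(80, 65), (75, 66), (70, 68), (69, 68)]
gaps = calibration_gaps(student)
print(gaps)                # [15, 9, 2, 1]
print(is_improving(gaps))  # True
```

A shrinking gap over time is one simple way to read "developing evaluative expertise"; the actual study works with much richer data across units and years.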
The analysis answered many interesting questions:
- Does self-assessment vary by performance? (yes, but perhaps not like you think)
- Is self-assessment linked to improved performance? (yes)
- Does self-assessment improve within a unit of study? (depends)
- Do students get better at self-assessing across units of study/years? (yes, but the time taken to improve judgement varies)
Equally interesting questions, perhaps more relevant to curriculum leaders, included:
- Within a sequence of units of study, does self-assessment improve? (not as much as expected)
- Does the type (e.g. report, presentation) of assessment task influence self-assessment abilities over time? (yes but depends)
- Do the criteria used for assessment types influence self-assessment abilities over time? (yes)
Conclusion: “This supports the notion that familiarity in assessment type accelerates students’ ability to make accurate judgements. Disruptive assessment patterns where there is no consistency of assessment type or use of criteria across tasks appear to delay students’ development of evaluative expertise.” (p. 56)
Caveat: The study was conducted within a specific institutional context, in certain disciplines (not science or mathematics).
Implications for Science and Maths Curriculum Leaders
If we think self-awareness and judgement are important for graduating science and mathematics students (that is, students know what they know, and what they don’t know, with regard to learning outcomes), then coordinating patterns of assessment should become an important focus of curriculum development.
Specifically, as curriculum leaders, we should look at sensible/appropriate opportunities to share marking criteria across units of study for common modes of assessment tasks to make these familiar and coherent for students.
We have all heard the calls for “more explicit” teaching of TLOs to make such learning more visible to students. I think what we really want are students who know what they know and can articulate that learning to others (like employers).
Yes, the Boud-led study did not look at science or mathematics programs. But I think we can safely “transfer” some lessons and perhaps even “apply” this study in our disciplinary contexts as a future research focus.
This blog post was inspired by this work:
David Boud, Romy Lawson & Darrall G. Thompson (2015) The calibration of student judgement through self-assessment: disruptive effects of assessment patterns, Higher Education Research & Development, 34:1, 45-59, DOI: 10.1080/07294360.2014.934328
I encourage you to read the full study for yourself – it is well worth it and will reveal nuances I have simplified for the sake of brevity and to suit this medium.
Kelly Matthews, Senior Lecturer in Higher Education
Institute for Teaching and Learning Innovation, Faculty of Science, The University of Queensland