White Paper #2 - How to offer better education and training with the best blend of in-person study, on-line study, and outcomes-driven assessment.
A version of this article was first published as a LinkedIn post in December 2022.
What’s the best mix of on-line and in-person study for effective, affordable learning?
Three long-term projects at Datchet Consulting reached significant milestones in 2022. The second is making blended learning work in an engaging and affordable way.
Motivation
I became an academic in my early 40s and was fortunate to join a department of high-quality thinkers and practitioners in what was then Information Systems and Computing and is now Computer Science.
A lunchtime talk around 2002 by Mark Harman on how to mark exams faster and better piqued my interest in criterion-based (or threshold-based) marking. As I worked with others on these ideas, I discovered how heavily they relied on realistic, robust learning outcomes being taken seriously, and how they shifted effort from marking (saving significant time) to writing exam papers (which now required careful design).
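To make the idea concrete, here is a minimal sketch of criterion-based marking. It is my own illustration, not the scheme we actually used: the criteria and grade bands are invented. The point is that each script is judged met or not-met against explicit learning outcomes, and the grade follows from which outcomes are met, rather than from accumulating partial marks.

```python
# A minimal, invented illustration of criterion-based marking: each
# script is judged met / not-met against explicit learning outcomes,
# and the grade band follows from which outcomes are met.
# (Hypothetical criteria and bands, not the module's actual scheme.)

CRITERIA = [
    "states a realistic project plan",
    "identifies and prioritises risks",
    "justifies the chosen lifecycle",
    "estimates cost and schedule credibly",
]

def mark(judgements: dict[str, bool]) -> str:
    """Map a marker's yes/no judgements against each criterion to a band."""
    met = sum(judgements.get(c, False) for c in CRITERIA)
    if met == len(CRITERIA):
        return "distinction"
    if met >= 3:
        return "merit"
    if met >= 2:
        return "pass"
    return "fail"

print(mark({c: True for c in CRITERIA}))  # -> distinction
```

Because each criterion is a threshold judgement, marking a script is quick; the hard work moves to designing questions whose answers expose whether each outcome has genuinely been met.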
But a new approach to marking was only the start. I wanted:
To reduce student failures and give students more real-world insight.
To understand which elements of the module were most strongly reflected in student achievement.
To use staff time to much greater effect.
Experimental work
From 2010, a group of us (see below) migrated a mandatory final-year module on software project management away from the standard format. We videoed lectures, put them on-line with self-study material wrapped around them, and focused face-to-face time on discourse. We mixed things up with a game session, a movie, a speed-reading course, and a peer-assessment session. We made assessment integral to the module: a single assessment in two parts, one coursework and one exam, both criterion-marked.
We tracked students’ activity and, where we could, their engagement. We rewarded cohorts that leapt ahead with an earlier crack at the next phase of learning, and e-mailed those who were behind, encouraging them to make a start. We even told one class how their predecessors had been doing at the same stage the year before.
Failure rates dropped markedly compared with the traditional format, although student feedback was mixed: on-line learning felt less secure to students than lectures did.
From 2014/15 to 2016/17 we collected and analysed data, generating valuable insights. It was possible to see, for instance, which elements of the module correlated most strongly with final grade (in our case, the first few weeks of mainly on-line study), or to track the performance of distinct cohorts of students within the class. We could measure whether coursework resits helped or hindered later achievement. It was clear that this combination provided a powerful tool for continuous improvement, both within a year and from year to year.
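The kind of correlation analysis this involved can be sketched in a few lines. The snippet below is an illustration only, with hypothetical column names and figures rather than the study's dataset: it correlates per-element engagement scores with final grade using pandas.

```python
# A sketch of the kind of analysis described above: correlating
# engagement with each module element against final grade.
# Column names and figures are hypothetical, not the study's data.

import pandas as pd

# One row per student: engagement per module element, plus final grade.
df = pd.DataFrame({
    "weeks_1_3_online": [0.9, 0.4, 0.7, 0.2, 0.8],
    "game_session":     [1.0, 0.0, 1.0, 1.0, 0.0],
    "peer_assessment":  [0.8, 0.5, 0.6, 0.3, 0.9],
    "final_grade":      [72, 48, 65, 40, 70],
})

# Pearson correlation of each element's engagement with final grade.
correlations = df.corr()["final_grade"].drop("final_grade")
print(correlations.sort_values(ascending=False))
```

Run year on year, a table like this shows which parts of the module are pulling their weight, which is what made the approach useful for continuous improvement.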
In the public domain
Four of us have written up the experiment in detail, with post-hoc analysis of the findings:
Exploring Student Engagement and Outcomes: Experiences from Three Cycles of an Undergraduate Module (Robert D. Macredie, Martin Shepperd, Tommaso Turchi, Terry Young, 2022)
I have also written up some of the implications – including the affordability of these methods:
Content is free: universities should stop producing it (Times Higher, October 2021)
A new business model for the business school (EFMD Global Blog, October 2020)
An industry-style focus on teaching costs is vital to survive the pandemic (Times Higher, September 2020)
What Higher Education can learn from the healthcare reforms (Times Higher, January 2016)
Finally, I have put the whole method up in bite-sized vlogs.
Interested? Please contact terry@datchet.consulting.