White Paper #2 - How to offer better education and training with the best blend of in-person and on-line learning and outcomes-driven assessment.

Brain model. Photo by Robina Weermeijer on Unsplash.

A version of this article was first published as a LinkedIn post in December 2022

What’s the best mix of on-line and in-person study for effective, affordable learning?

Three of Datchet Consulting’s long-term projects reached significant milestones in 2022. The second is making blended learning work in an engaging and affordable way.

Motivation

I became an academic in my early 40s but was fortunate to join a department with high-quality thinkers and practitioners in what was then Information Systems and Computing and is now Computer Science.

A lunchtime talk around 2002 by Mark Harman on how to mark exams more quickly and more effectively piqued my interest in criterion-based (or threshold-based) marking. As I worked with others on these ideas, I discovered how heavily they relied on realistic and robust learning outcomes being taken seriously, and how they shifted effort from marking (saving significant time) to writing exam scripts (which now required careful design).

But a new approach to marking was only the start. I wanted:

  1. To reduce failures by students and provide them with more real-world insight.

  2. To understand what elements of the module were most reflected in student achievement.

  3. To use staff time to much greater effect.

Experimental work

From 2010, a group of us (see below) migrated a mandatory final-year module on software project management away from a standard format. We videoed lectures, put them on-line with a self-study wrapper around them, and focused face-to-face time on discourse. We mixed things up with a game session, a movie, a speed-reading course, and a peer assessment session. We made assessment integral to the module: a single assessment in two parts, one coursework and one exam, both criterion-marked.

We tracked students’ activity and, where we could, their engagement. We rewarded groups that leapt ahead with an earlier crack at the next phase of learning and e-mailed those who had fallen behind to encourage them to make a start. We even told one class how their predecessors had been doing at the same stage the year before.

Failure rates dropped markedly compared with the traditional format, although student feedback was mixed, since on-line learning felt less secure to students than lectures did.

From 2014/15 to 2016/17 we collected and analysed data, generating some valuable insights. It was possible to see, for instance, which elements of the module correlated most strongly with final grade (in our case, the first few weeks of mainly on-line study), or to track how different groups of students within the class performed. We could measure whether coursework resits helped or hindered later achievement. It was clear that this combination provided a powerful tool for continuous improvement, both within a year and from year to year.

In the public domain

Four of us have written up the experiment in detail, with post-hoc analysis of the findings:

Exploring Student Engagement and Outcomes: Experiences from Three Cycles of an Undergraduate Module (Robert D. Macredie, Martin Shepperd, Tommaso Turchi and Terry Young, 2022)

I have also written up some of the implications – including the affordability of these methods:

Finally, I put the whole method up in bite-sized vlogs:

Getting Started this Autumn

Interested? Please contact terry@datchet.consulting.
