Peer Instruction was developed in the 1990s by Harvard physics professor Eric Mazur as a response to the gulf between conceptual understanding and rote learning in his first-year intake.1 Professor Mazur noticed that students would often score very highly on procedural, algorithmic questions with a numerical output, but fail at questions testing purely conceptual understanding, such as predicting whether a lightbulb in an electrical circuit would get brighter or dimmer when a switch elsewhere in the circuit was closed.
Peer Instruction is classed as an active learning method, and its key feature is peer discussion of a conceptually challenging question. There is an overwhelming body of evidence that active learning produces a drastic, measurable increase in knowledge retention over traditional instruction, and Peer Instruction is the most popular technique employed within an active learning framework.3,5 Specifically, a meta-analysis of more than 250 research papers indicates an average 10% decrease in student failure rates when active learning is adopted.
I largely followed the implementation guides of Galloway2 and Newbury2, as follows. A Peer Instruction instance revolves around a “conceptest”: a conceptually challenging, multiple-choice question that tests understanding rather than the rote application of a memorised procedure.
As such, the question acts as a surrogate measure of conceptual understanding: an individual who understands the underlying concept ought to be able to answer correctly, and to explain why, without prior coaching.
Within a lecture, students are first prompted to tackle the question individually and then to vote on the correct answer. Voting can be done using flashcards, “clickers”, or student-device polling with software such as Socrative or Mentimeter. If between 30% and 70% of the class have picked the correct answer, students are prompted to turn to a neighbour who answered differently and try to convince them that they are correct. The ensuing lively discussion is held without any knowledge of the overall vote or the correct answer, and lasts for 1–2 minutes. Students then vote again individually on the same question, at which point the correct answer is revealed and discussed.
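The decision point after the first vote can be sketched as simple logic. This is a hypothetical illustration, not software I used in class: the 30–70% band comes from the text above, while the branches outside that band are common Peer Instruction practice rather than anything stated here.

```python
def next_step(correct_fraction: float) -> str:
    """Decide what follows the first individual vote.

    The 30-70% discussion band is as described in the text;
    behaviour outside that band (re-teach or move on) is common
    practice and ultimately the instructor's judgement call.
    """
    if 0.30 <= correct_fraction <= 0.70:
        return "peer discussion, then re-vote"
    elif correct_fraction < 0.30:
        return "instructor revisits the concept"
    else:
        return "reveal and briefly discuss the answer"

# A split first vote is exactly what triggers the peer discussion:
print(next_step(0.55))  # peer discussion, then re-vote
```

The band matters: with too few correct answers there are not enough “convincers” in the room, and with too many the discussion adds little.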
A successful Peer Instruction implementation will show an increase in the proportion of students answering correctly from pre- to post-discussion. This increase indicates that, in most of the peer-to-peer discussions, the person with the correct conceptual understanding was more convincing than the person with the incorrect one. By pairing students who have only just acquired a “threshold concept” with those who have not, we sidestep the so-called “curse of knowledge” that comes with our own ingrained expertise.1
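The success criterion above amounts to comparing the correct-answer proportion before and after discussion. As a toy illustration (the vote counts below are invented, and the helper function is my own):

```python
def correct_fraction(votes: dict[str, int], correct: str) -> float:
    """Fraction of all votes cast on the correct option."""
    return votes[correct] / sum(votes.values())

# Invented vote counts for options A-D; suppose "C" is correct.
pre  = {"A": 30, "B": 25, "C": 55, "D": 10}  # 55/120, about 46% correct
post = {"A": 12, "B": 10, "C": 92, "D": 6}   # 92/120, about 77% correct

gain = correct_fraction(post, "C") - correct_fraction(pre, "C")
print(f"pre {correct_fraction(pre, 'C'):.0%} -> post {correct_fraction(post, 'C'):.0%}")
```

A positive gain after discussion is the signature of a working conceptest; a flat or negative gain suggests the question, or the pairing, needs revisiting.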
The impact of Peer Instruction is well supported by the literature, and it forms the core of most active learning and flipped classroom implementations, for which there is an overwhelming body of evidence.3,5 The evidence for success is such that I have never conducted a purely traditional lecture at Strathclyde. I therefore have no personal baseline data to compare against, as I felt that withholding the intervention from a control group would be unethical.
In my first year of deploying Peer Instruction, student exam scores in the module increased by approximately 25%, bringing them in line with the year average for other modules. Student reception has been extremely positive in every instance so far, and engagement with the quizzes is excellent.
I encountered two main challenges in implementation: writing good questions, and picking a good technology to facilitate voting.
A good question should not be solvable by memorised algorithms, whether numerical routines or more general problem-solving schemata. Questions whose answers primarily depend on following a routine will show improvement during a Peer Instruction instance, but this will not reflect an increase in conceptual understanding. In these cases, the peer dialogue is limited to “I used this algorithm” rather than a tussle over concepts.6
The polling method used should be scalable, but also capable of obscuring results until the end of the Peer Instruction instance. As such, a show of hands is unsuitable, but the voting still needs a low barrier to participation. Fortunately, a number of dedicated software packages have emerged in recent years that use student devices to replace the venerable “clicker” technology. I have settled on Mentimeter, but would encourage a curious reader to discuss the options with their faculty or departmental learning technologist.
I would rather have deployed Peer Instruction into an existing course, where the conceptual bottlenecks are well known and well understood – this would have accelerated implementation. As it was, a number of my questions were too easy, being answered correctly first time around by more than 80% of students. These questions can be cut or made harder for next year!
Keeping the true answer hidden until the end of the Peer Instruction instance is absolutely key – I managed to inadvertently sabotage one of my questions by reading it aloud, putting a subconscious stress on the correct answer; students picked up on this rather than relying on understanding.
One of the main purposes of Peer Instruction is to provide scalable interactivity, something not possible with a single instructor. Peer Instruction scales extremely well and can be used in very large lecture theatres; it has also been successfully deployed on MOOCs with tens of thousands of participants. Although it works at scale, it remains an effective tool for catalysing and structuring discussion in small-group teaching.
Peer Instruction is already used across several different departments, most notably CIS and SIPBS. It is transferable beyond STEM subjects, and has been used worldwide in humanities and social sciences teaching.4