Here are two lessons on interviewing witnesses (cognitive interview) and suspects (ethical interview). Each lesson assumes you have set advance reading from whichever textbook or other source you are using. Lesson one starts with students comparing standard police interviews and cognitive interviews using this visible thinking routine for comparing. The main application activity is to write a letter to a chief constable persuading her to adopt cognitive interviewing in her force. I’ve found that some students get uptight about writing an essay because it smells like assessment, and they do a better job if they write a letter instead, even though the same skills are required. The slideshow gives a structure for the lesson.
Here are some resources for teaching weapon focus, research methods and statistics. There is a set of stimuli for a weapon focus experiment and a response sheet (copy it for the students or project it). The experiment is designed with at least one fatal flaw (failure to counterbalance in a repeated measures design) and several extraneous variables (e.g. image quality). You could use it to demo the general idea underlying most weapon focus research, use it as a stimulus for class discussion, etc. Alternatively, there is a slideshow to structure a lesson and a set of activities on weapon focus, research methods and statistics.
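The counterbalancing flaw is there deliberately, and students often ask what fixing it would involve. Here is a minimal Python sketch, not part of the resources, of one way to counterbalance presentation order in a repeated measures design; the condition labels and participant IDs are invented for illustration.

```python
# A minimal sketch (illustrative only) of counterbalancing presentation order
# in a repeated measures design: half the participants see the weapon scene
# first, half see the neutral scene first, so order effects are spread evenly
# across the two conditions. Condition names and IDs are placeholders.
from itertools import cycle

CONDITIONS = ["weapon_scene", "neutral_scene"]

def counterbalanced_orders(participants):
    """Alternate which condition each participant sees first."""
    orders = cycle([CONDITIONS, list(reversed(CONDITIONS))])
    return {p: list(next(orders)) for p in participants}

if __name__ == "__main__":
    participants = [f"P{i:02d}" for i in range(1, 9)]
    for p, order in counterbalanced_orders(participants).items():
        print(p, "->", " then ".join(order))
```

In a real study you would probably randomise assignment to the two orders rather than simply alternate, but for class discussion the point is that any practice or fatigue effects no longer pile up on one condition.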
I’m a big fan of the jigsaw classroom (Aronson et al., 1978), to the point where I probably overuse it. If you’re not familiar, it’s a cooperative learning activity format in which students learn part of a topic so they can teach it to others and, in turn, are taught other parts by them. The aim is that all the students end up learning the whole topic. The students are organised into ‘jigsaw’ groups. Each jigsaw group is then split up and rearranged into ‘expert’ groups. Each expert group is given responsibility for mastering one part of the topic knowledge. The expert groups are then returned to their jigsaw groups, where they teach each other. There’s a good guide to the jigsaw technique here.
When it’s done well, jigsaw promotes a high degree of interdependence amongst learners and exposes all the students to the material to be learned, both of which contribute to its effectiveness as a psychology teaching strategy (Tomcho & Foels, 2012). Compared to non-cooperative methods (i.e. those that do not require interdependence), techniques like jigsaw provide more effective learning of conceptual knowledge, a greater sense of competence and more enjoyment of learning. This is particularly so when the activity is highly structured, with assigned roles, prompts for self-reflection, and both individual and group feedback on performance (Supanc et al., 2017).
When I use it I like to keep group sizes to a maximum of four. If you have 16 or 32 students in a class that’s great, because you can divide the material into four and have four students in each jigsaw/expert group. A class of 25 also works well, with the material divided into five parts. It can be a headache to assign groups when you have inconvenient numbers of students, so you need to plan ahead and think about how you will ensure that every student learns all the content.
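When the numbers are inconvenient, I find it helps to work the grouping out before the lesson rather than on the fly. The sketch below is a rough illustration only (the class size and student names are placeholders, and it is not part of the resources here): it deals students into jigsaw groups and then forms the expert groups by pooling the members assigned to each part.

```python
# A rough planning sketch (illustrative only): deal students into jigsaw
# groups of up to n_parts members, give each member one part of the material,
# then pool students by part to form the expert groups. With awkward class
# sizes the final jigsaw group may be short of some parts.
def plan_jigsaw(students, n_parts):
    jigsaw_groups = [students[i:i + n_parts]
                     for i in range(0, len(students), n_parts)]
    expert_groups = {part: [] for part in range(1, n_parts + 1)}
    for group in jigsaw_groups:
        for part, student in enumerate(group, start=1):
            expert_groups[part].append(student)
    return jigsaw_groups, expert_groups

if __name__ == "__main__":
    students = [f"Student {i}" for i in range(1, 19)]  # an awkward class of 18
    jigsaw, experts = plan_jigsaw(students, n_parts=4)
    print("Jigsaw groups:", jigsaw)
    print("Expert groups:", experts)
```

Running something like this in advance makes the gaps obvious: with 18 students and four parts, the final jigsaw group of two has nobody covering parts three and four, which is exactly the sort of thing you want to spot and fix before the lesson rather than during it.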
In my experience, the jigsaw approach works best when:
You stress that the activity is all about understanding what they are learning and remind students throughout of their responsibility for both teaching and learning the material. The danger is that it can easily become an ‘information transfer’ exercise, with students copying down material verbatim and dictating to each other without understanding. It is sometimes useful to impose rules to prevent this (e.g. limiting the number of words students are allowed to use when making notes in their expert groups, or only allowing them to draw pictures, etc.).
The learning material is tailored to the students. This means adjusting the difficulty/complexity level of the material to be just difficult enough so that the students need to engage with it and each other to co-construct an understanding. Too difficult and they can’t do it; too easy and it becomes trivial; either way, they lose interest.
The learning material is tailored to the timescale. Again, we want the students to create meaning from the materials and this takes time. If too little time is given then either some of the material won’t get taught, or students will resort to ‘information transfer’ and there will be no co-construction.
You actively monitor what’s going on in the groups, particularly the expert groups. This is how we moderate the difficulty of the materials. We don’t want the students teaching each other things that are wrong. At the same time, it’s important not to just charge in and instruct the learners directly. Doing that undermines the point of the approach. In any case, I wouldn’t use jigsaw to teach fundamental concepts for the first time; it’s just too risky. I prefer to use it to elaborate on and deepen understanding of ideas.
You have an accountability mechanism (i.e. a test). Multiple choice/online assessment is quick and effective if the test items are well written. Plickers and Socrative are useful tools for this. One approach that can work here is to tell the students that everyone will do the test but that each student will receive the average mark for their jigsaw group. This creates an incentive for students to ensure that everyone in the group does well (although it also creates an incentive to blame people if the group does badly, so YMMV).
Here’s a set of materials for teaching some of the factors that moderate the misinformation effect on eyewitness testimony using the jigsaw method. This is for a one-hour lesson with a 10-15 minute expert groups phase and a 15-20 minute jigsaw groups phase. There is a slideshow that structures the lesson and a set of learning materials covering the moderating effects of time, source reliability, centrality and awareness of misinformation. You can extend the activity by prompting students to evaluate the evidence offered. If you are a Socrative user (free account with paid upgrades), you can get the multiple-choice quiz using this link. As with all these approaches, there is no guarantee that it’s superior to the alternatives, but the available evidence suggests it is worth trying. And, like everything, its effectiveness is likely to grow when both teacher and students are practised in the technique.
Aronson, E., Blaney, N., Stephan, C., Sikes, J., & Snapp, M. (1978). The jigsaw classroom. Beverly Hills, CA: Sage.
Supanc, M., Völlinger, V.A., & Brunstein, J.C. (2017). High-structure versus low-structure cooperative learning in introductory psychology classes for student teachers: Effects on conceptual knowledge, self-perceived competence, and subjective task values. Learning and Instruction, 50, 75-84.
Tomcho, T.J. & Foels, R. (2012). Meta-analysis of group learning activities: Empirically based teaching recommendations. Teaching of Psychology, 39(3), 159-169.
I’m not a massive fan of presenting a set of learning objectives (or whatever we’re calling them this inspection cycle) at the start of every lesson. I agree it’s important that students know where they’re heading and how what they’re engaging with relates to other things they are learning; I just don’t think that sticking today’s LOs on the board and reading them out/getting students to copy them down is a particularly effective way of accomplishing this. That said, there is still an argument for defining a clear set of LOs when we plan. When we teach a syllabus whose content and examination format we don’t determine (like A-Level Psychology), careful thought needs to be given to translating its potentially vague statements into terms that are meaningful given the people we’re teaching, the context in which we’re teaching them and the timescales involved.
I’ve done this a variety of ways in the past. I’ve always found it a very useful exercise for me, but of relatively little apparent value to my students. To try to extract some more mileage from the process I’m currently experimenting with proficiency scales (Marzano, 2017). Besides communicating clearly what students need to be able to do, Marzano’s format also requires us to consider what progression in knowledge and understanding might look like in a topic and gives a scoring rubric we can use as the basis for assessment and feedback. I am interested to see how this works in practice.
Here is a set of proficiency scales for the Edexcel criminological psychology topic and a generic proficiency scale (RTF) you can adapt for your own purposes. I’ve divided up the content using SOLO levels (Biggs & Collis, 1982) because it’s a fairly useful model of how students’ knowledge and understanding can be expected to develop. I’ll upload more topic proficiency scales when I’ve finished writing them.
Biggs, J.B. & Collis, K.F. (1982). Evaluating the quality of learning: the SOLO taxonomy. New York: Academic Press.
Marzano, R.J. (2017). The new art and science of teaching. Alexandria: Solution Tree/ASCD.