There is a lot of talk about mastery in science education. It has become the new ‘buzzword’, and some might claim the panacea, in teaching and learning. Having done a bit of reading and thinking about this, I am yet to be convinced that it is really the solution to success in science. In my role as assessment editor, I have had to explore the possibilities of mastery and the opportunities it offers for effective assessment and feedback.
What is mastery?
To me, the essence of mastery is that there are very clearly defined objectives and a programme of ‘instruction’, followed by a test. If a student scores 80% or higher, they have ‘mastered’ that content; if not, they are made aware of what they don’t know and must revisit the core instruction and retake the test until they reach the 80% pass mark. There are lots of models based on this approach: set objectives in chunks; instruct, teach and learn; test; then either pass and move on, or fail, receive feedback and repeat.
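The pass/feedback loop described above can be sketched in a few lines of Python. This is purely illustrative (the function name and the idea of a list of successive scores are my own; only the 80% threshold comes from the model itself):

```python
PASS_MARK = 0.8  # the 80% threshold used in the mastery model described above

def attempts_to_mastery(score_history):
    """Given a student's successive test scores on one chunk of content,
    return the attempt on which they 'mastered' it (reached 80%),
    or None if they never did within the attempts recorded."""
    for attempt, score in enumerate(score_history, start=1):
        if score >= PASS_MARK:
            return attempt  # mastered: move on to the next chunk
        # otherwise: feedback identifies the gap, the student revisits
        # the core instruction, and the test is retaken
    return None

# A student scoring 65%, then 75%, then 85% masters the chunk on attempt 3
print(attempts_to_mastery([0.65, 0.75, 0.85]))
```

The point the sketch makes is that the model is a simple loop with a fixed exit condition; everything interesting happens in the feedback step inside it.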
Mastery in science
Some subjects lend themselves to the mastery model better than others: it suits subjects in which concepts and ideas build upon one another. Maths is a classic example; if taught in a particular way, such as the Shanghai method, we can see significant learning gains. If mastery is the answer to all the ills of science education, then we need first to ascertain that, second to develop an instructional programme, and third to train teachers in the methods. This would take significant investment in research, educational assessment expertise and training programmes for teachers.
One of the biggest issues teachers face is defining when a student has ‘mastered’ something: at what point can a concept be deemed mastered? When I was developing the Activate assessment model, I used the word ‘secure’ as the benchmark of success. This was because the term was used when describing ‘secure understanding of blocks of knowledge’ in the 2014 National Curriculum. I then worked with others to decide what ‘secure’ might look like, based on previous curricula, Key Stage 3 assessment, Key Stage 4 demands and Bloom’s Taxonomy. This has been useful for developing learning objectives and outcomes, for developing suitable teaching, learning and assessment activities, and finally for tracking progress.
I shied away from using the term ‘mastery’, because I felt mastery was associated with a specific method of instruction. It is one, at this stage, that I have reservations about applying to science at Key Stage 3.
More recently, AQA produced their own Key Stage 3 syllabus. This is focussed on a mastery model in that once a student can ‘apply’ knowledge, they have mastered it in that context. In order to ‘apply’ knowledge, the student first has to gain it: the ‘know’ phase. This is equivalent to our original ‘Developing’, ‘Secure’, ‘Extending’ model, but emphasises what has to be done for ‘mastery’ at each of the ‘Know’, ‘Apply’ and ‘Extend’ stages.
Instead of mastery per se, I have developed a model of assessment that allows for effective feedback in a variety of ways. For me, effective feedback is made up of careful diagnosis of a gap, clear communication of that gap, and suitable intervention by the student or the teacher. This goes back to Paul Black and Dylan Wiliam’s 1998 paper on formative assessment. It is worth noting that several of the case studies they sourced as evidence used mastery methodologies. One of the main conclusions of their review was that students need to be able to perceive a gap through effective feedback. Since then, John Hattie and colleagues have produced meta-analyses of hundreds of quantitative studies. These studies overwhelmingly show that the most effective method for improving academic attainment is, you’ve guessed it, feedback.
Over the past five years, many schools have introduced a range of strategies to ensure teachers give feedback and their students respond. This is commendable, but in many cases it has become a performance and is effectively meaningless. Just because a child has written something in purple pen, green pen or in a specific box does not mean that they have learnt anything more.
There is a long list of statements in the national curriculum that pupils have to ‘know and understand’ by the end of Key Stage 3 – too many to check that every statement is ‘secure’ or ‘mastered’.
This is why we introduced ‘Check Point’ lessons. These start with an online diagnostic test which identifies ‘gaps’ in knowledge and understanding of a block of knowledge (in this case a science topic). This then allows pupils who fall below the expectation to do intervention work, and those who ‘pass’ to extend and deepen their knowledge and understanding. This is very similar to a mastery model. The important point here is that students only revisit what they got wrong. The feedback is specific and, hopefully, accessible enough for them to respond to and learn the concepts required.
At the Association for Science Education Annual Conference in January I will be presenting my ideas concerning effective feedback through Check Point lessons and the idea of Pinch Point concepts.
Dr Andrew Chandler-Grevatt has a PhD in school assessment and a passion for science teaching and learning. Having worked as a science teacher for ten years, of which five were as an AST, Andy has a real understanding of the pressures and joys of teaching. Alongside his research in school assessment, Andy is a teaching fellow on the PGCE course at the University of Sussex, and is a successful published assessment author. He is the Assessment Editor for Activate, AQA GCSE Sciences Third Edition and OCR Gateway GCSE Science.
Andy Chandler-Grevatt will be presenting at ASE Annual Conference in Reading in January 2017. Don’t miss his sessions on Five-year assessment for AQA KS3 and GCSE, and Pinch points: planned intervention in science education.