Don’t mix the six! Thinking about assessment as six different tools with six different jobs.

 

With the very best of intentions, assessment has gone rogue.

It’s hard to imagine now, but when I started teaching in the late 80s, we didn’t really do assessment. We didn’t do much by way of marking, there were no SATS, no data drops and GCSE results didn’t get turned into league tables.  A few years later I remember being incredibly excited by the idea that school improvement should be based on actual data about what was and wasn’t going well, as opposed to an unswerving belief in triple mounting as the benchmark of best practice. That seemed such a modern, progressive idea that would really help schools focus on the right kind of things.  Around about the same time came ideas about the power of assessing for learning.  Now we would actually know, rather than just assert, what really was effective practice.  Henceforth we would teach children what they really needed to learn. A bright new future beckoned. I was an enthusiast.

The ‘father’ of sociology Max Weber talks about routinisation. All charismatic movements have to change in order to ensure their long-term survival. But in changing they must give up their definitive, charismatic qualities. Instead of exciting possibilities we get routines, policies and KPIs as the charisma is, of necessity, institutionalised.

Exciting ideas are all very well, but of course they need, to use a ghastly word, ‘operationalising’. But what happens over time is that the originally revolutionary impulse becomes so well established in systems and routines that those systems become more important than the original idea. Powerful ideas arise to address specific problems. Once routinised, the specific problem can get forgotten. Instead, we get unthinking adherence to a set of practices, divorced from reflection on whether or not those practices actually serve the purposes they were set up to address.

One of the things that has gone wrong with assessment is that it has morphed into a single magical big ‘thing’ that schools must do, rather than a repertoire of different practices thoughtfully employed in different circumstances. The performance of assessment rituals is perceived as creating the reality of educational ‘righteousness’ – by doing certain things, like data drops and targets and marking and so on and so forth, a school becomes good, or at the very least avoids being bad. Not having some big system becomes unthinkable.

But assessment is not one thing. It is not a ritual to be performed. Assessment is a tool, or rather set of tools, not an end in itself. Assessment is the process of doing something in order to find something out and then doing something as a result of having that new information. Because there are lots of different things we might want to find out, and lots of different ways we might seek to find that information, assessment cannot be one thing. The term assessment covers a range of different tools, all with different purposes. Whenever we are tempted to assess something, we should ask ourselves what is it we are trying to find out and what will be done differently as a result of having this information? If we can’t answer those two questions, we are on a hiding to nothing.

So what are these different purposes? The familiar language of formative and summative assessment – or more correctly – formative and summative inferences drawn from assessments is a helpful starting point. Formative assessment helps guide further learning; summative assessment evaluates learning at the end of a period of study by comparing it against a standard or benchmark.

But if we are to remind ourselves about the actual reasons why we might want to assess something, I think we need to expand beyond these two categories. I’ve come up with six: three that are different kinds of formative assessment and three that are summative. By being clear about the purpose of each different type and not mixing them up, we can get assessment back to being a powerful set of tools that can be used thoughtfully where and when appropriate.

Formative assessment includes:

Diagnostic assessment which provides teachers with information which enables them to diagnose individual learning needs and plan how to help pupils make further progress.  Diagnostic assessment is mainly for teachers rather than pupils. If a pupil does not know enough about a topic, then they do not need feedback, they need more teaching.  Feedback is for the teacher so they can adapt their plans.  Trying to teach a child who does not know how to do something by giving the kind of feedback that involves writing a mini essay on their work is not only incredibly time consuming for the teacher, it is also highly unlikely to be effective. Further live teaching that addresses problem areas in subsequent lessons is going to do much more to address a learning issue than performative marking rituals.

Diagnostic assessment involves checking for understanding:

  • In the moment, during lessons, so that teachers can flex their teaching on the spot to clarify and address misconceptions.
  • After lessons, through looking at pupils’ work, in order to plan subsequent lessons to meet pupil needs.
  • At the end of units of work, in order to evaluate how successful the teaching of a particular topic has been and what might need to be improved the next time this unit is taught. An end of unit assessment of some sort is one possible way of doing this. Another might be looking through children’s books or using a pupil book study approach.
  • In the longer term, in order to check what pupils have retained over time, so that we can provide opportunities for revisiting and consolidating learning that has been forgotten.

Diagnostic assessment should not be conflated with motivational assessment or pupil self-assessment. A lot of the problems with assessment have arisen because the various kinds of formative assessment have been lumped together into one thing alongside a huge emphasis on evidencing that they have taken place.  This has led to an obsession with teachers physically leaving an evidence trail by putting their ‘mark’ on pupils’ work – in rather the same way that cats mark out their territory through leaving their scent on various trees.

Diagnostic assessment is assessment for teaching. The next two forms of formative assessment are assessment for learning. Assessment for teaching is probably the most powerful of all forms of assessment, and yet it has been overlooked in favour of AfL approaches selected mainly for their visibility.

Motivational assessment provides pupils (or their parents/carers) with information about what they have done well and what they can do to improve future learning. For motivational assessment to be effective in improving future learning, it must tell the pupil something that is within their power to do something about. Telling a child to ‘include more detail’ when they do not know more detail is demotivating and counterproductive. To use the familiar example from Dylan Wiliam, there is no point in telling a child to ‘be more systematic in their scientific enquiries’ because if they knew how to be systematic, they would have done it in the first place.

Only where the gap between actual and desired performance is small enough for the pupil to address it with no more than a small nudge can feedback be motivating. On the other hand, feedback about effort, attendance, behaviour or homework could provide information that has the potential to motivate pupils to make different choices.[1]

Pupil self-assessment: Pupil agency, resilience and independence can be built by teaching subject-specific metacognitive self-assessment strategies. Teaching pupils about the power of retrieval practice and how they can use it to enhance their learning is a very powerful strategy and should form a central plank of each pupil’s self-assessment repertoire. Retrieval practice is not one thing; there is a range of ways of doing it. Younger pupils benefit from a degree of guided recall, whereas as children get older, greater emphasis on free recall is likely to be effective.

Pupils should also be taught strategies for checking their own work – for example monitoring writing for transcription errors, reading written work aloud to check for sense and clarity, using inverse operations in maths to check answers, and monitoring one’s comprehension when reading and then rereading sections when what has been read does not make sense. Pupils need to be given time to use these tools routinely to check and improve their work.

Summative assessment includes:

Assessment for certification.  This includes exams and qualifications. Some of these – a grade 5 music exam, for example – state that a certain level of performance has been achieved. Others, such as A levels and to an extent GCSEs, are rationing mechanisms to determine access to finite resources in a relatively fair way. Unfortunately, some of these assessments have been used evaluatively. This is not what these qualifications are designed for, and all sorts of unhelpful and unintended consequences fall out of using qualifications as indicators of school quality. In particular, it distorts the profession’s understanding of what assessment looks like and leads to the proliferation of GCSE-like wannabe assessments used throughout secondary schools.

Evaluative assessment enables schools to set targets and benchmark their performance against a wider cohort. Evaluative assessment can also feed into system-wide data, allowing MATs, Local Authorities and the DfE to monitor and evaluate performance at individual-school and whole-system level.

It is perfectly reasonable for large systems to seek to gather information about performance, as long as this is done in statistically literate ways. This generally means using standardised assessments and being aware of their inherent limitations. Just because we want to be able to ‘measure’ something doesn’t mean it is actually possible. (Indeed, I have a lifelong commitment to eradicating the word ‘measure’ from the assessment lexicon.) Standardised assessments have a degree of error (as do all assessments – though with standardised assessments we at least know the likely range of this error). As a result, the inferences we are able to make from them are more reliable when talking about attainment than progress, because progress scores involve the double whammy of two unreliable numbers.[2] They are also far more reliable at cohort level than for making inferences about individuals, since over- and under-performance by individuals will balance each other out when considering the performance of a cohort as a whole.

Since standardised assessments do not exist for many subjects, it is not possible to evaluate performance in, say, geography in the same way as it is for maths. Non-standardised assessments that a school devises might give the school useful information – for example, they could tell the school how successfully its curriculum has been learnt – but they don’t allow for reliable inferences about performance in geography beyond that school.

Given these limitations – the unreliability at individual pupil level, the unreliability inherent in evaluating progress and the unavailability of standardised assessments in most subjects – schools should think very carefully about any system for tracking pupil attainment or progress. By all means have electronic data warehouses of attainment information, but be very aware of what the information within can and can’t tell you. I’d recommend reading Dataproof Your School to make sure you are fully aware of the perils and pitfalls involved in seeking to make inferences from data.

What is more, summative assessment in reading is notoriously challenging since reading comprehension tests suffer from construct-irrelevant variance. In other words, they assess things other than reading comprehension such as vocabulary and background knowledge. More reliable inferences could be made were there standardised assessments of reading fluency. However, the one contender to date that could do this – the DIBELS assessment – explicitly rules out its use to evaluate performance of institutions.

Evaluative assessment is just one type of assessment with a limited, narrow purpose. It should not become the predominant form of assessment.

Informative assessment enables schools to report information about performance relative to other pupils to parents/carers, as well as information to help older pupils make choices about examination courses, qualifications and careers. This is the most challenging aspect to get right when seeking to develop an assessment system that avoids the problems of previous practice. Often, schools use the same system that is used for evaluative assessment for accountability purposes. But evaluative assessment is most reliable when talking about large groups of pupils, not individuals, so where schools share standardised scores, they need to caveat this with an explanation of the limits of their accuracy.

Let’s ask ourselves: what is it that parents want to find out about their child?

Most parents want to know

  • Is my child happy?
  • Is my child trying hard?
  • How good are they compared to what you would expect for a child of this age?
  • What can I do to help them?

However, parents do not necessarily want to have the answer to all of these questions in all subjects all of the time.

The first question is obviously important and schools will have a variety of ways of finding this out. It is probably most pressing when a child starts at a school. For example, it would be an odd secondary school that didn’t seek to find out if their new year 7s had settled in well at some point during the autumn term.

The second question involves motivational assessment.  Schools sometimes have systems of effort grades. These can work well where the school has worked hard with staff to agree narrative descriptors of what good effort actually involves and what it means to improve effort. For example, as well as attendance and punctuality, this could include the extent to which pupils

  • Monitor their own learning for understanding and ask for help when unsure or stuck
  • Contribute to paired or group tasks
  • Show curiosity
  • Show a positive attitude to homework
  • Work independently

Thus they create a metalanguage that allows a shared understanding of what it means for a child to work effortfully. This can then be shared with pupils and parents. This metalanguage is portable between subjects. To a large degree, to work effortfully in Spanish involves the same behaviours as working effortfully in art. The metalanguage provides a short cut to describe what those behaviours are and, where necessary, how they could be further built upon. If there is a disparity between subjects, it allows for meaningful conversation about what it is specifically that the child isn’t doing in a particular subject that they could address.

If this work developing a shared understanding does not take place and individual teachers are just asked to rate a child on a 4-point scale, then inevitably some teachers will grade children more harshly than others. I am sure I am not the only parent who has interrogated their child as to why their effort is only 3 in geography, yet 4 in everything else. When maybe the geography teacher reserves 4 for truly exceptional effort whereas the others give 4 for generally fine?

But it’s the third question that is really challenging. Schools sometimes avoid this altogether and talk about effort and what wonderful progress a child has made, which is all well and good but can go horribly wrong if no one has ever had an honest conversation with parents about how their child’s performance compares with what is typical. It shouldn’t come as a surprise to parents if their child gets 2s and 3s at GCSE, for example. This might represent significant achievement and brilliant progress, but parents should be aware that, relatively speaking, their child is finding learning in this subject more challenging than many of their peers.

However, many schools go to the other extreme and give parents all sorts of numerical information that purports to report with impressive accuracy how their child is doing. The problem is that this accuracy is not only entirely spurious but rests on teachers spending valuable curriculum time on assessment activities and then even more valuable leisure time marking those assessments. And why? Just so that parents can be served up some sort of grade or level at regular intervals.

Grades or levels are important for qualifications because they represent a shared metalanguage, a shared currency that opens – or closes – doors to further study or jobs. Pandemics aside, considerable statistical modelling goes into making sure grades have at least some sort of consistency between years. Schools, however, do not need to try to generate assessments that can then be translated into some kind of metalanguage that is translatable across subjects. The earlier example of effort worked because effort is portable and comparable. It is possible to describe the effort a child habitually makes in Spanish and in DT and be talking about the same observable behaviours. This is not the same for attainment. There isn’t some generic, context-free thing called standards of attainment that can be applied from subject to subject. We can measure length in a variety of different contexts because we have an absolute measure of a metre against which all other metres can be compared. There isn’t an absolute standard grade 4 in a vault at Ofqual. Indeed, some subjects, such as maths, assess in terms of difficulty whereas others, such as English, assess in terms of quality. Even within the same subject it is not straightforward to compare standards in one topic with another. Attainment in athletics might not bear any relation to attainment in swimming or dance, for example, let alone mean the same sort of standard of attainment in physics. So even if it were desirable for schools to communicate attainment to parents via a metalanguage, it wouldn’t actually communicate anything of any worth.

Yet in many schools the feeling persists that unless there is a conditionally formatted spreadsheet somewhere, learning cannot be said to have taken place. Learning is not real until it has been codified and logged. But schools are not grade farms that exist to grow crops of assessment data. What we teach children is inherently meaningful and does not acquire worth or value through being assessed and labelled, let alone assessed and labelled in a self-deceiving, spurious way.

But if we do not have a metalanguage of some sort, how can we communicate to parents how well their child is doing?

First of all, the idea that telling parents that their child is working at ‘developing plus’, at a grade 3 or whatever other language we use is helpful because it uses a shared language is fanciful. The vast majority of parents will have no idea whether a grade 3 or developing plus or whatever is any good. Even if they do, we are very likely misleading parents by purporting to share information with an accuracy that it just can’t have. If we tell parents that their child is grade 3c in RE but grade 3b in science, does that actually mean their RE is weaker than their science? If in the next science assessment the child gets a 3c, have they actually regressed? Do they really know less than they did previously? And in any case, is a 3b good, bad or indifferent?

Nor is the use of metalanguage particularly useful for teachers. What helps teachers teach better is knowing the granular detail of what a child can and can’t do. Translating performance into a metalanguage by averaging everything out removes exactly the detail that makes assessment useful. Teachers waste time translating granular assessment information into their school’s metalanguage, then meeting with leaders who want to know why such and such a child is flagging as behind, then having to translate back from the metalanguage into the granular to explain what the problem areas are. All this just because conditionally formatted spreadsheets give an illusion of rigour and dispassionate analysis.

While most parents will probably want to know how well their child is doing relative to what might be typical for a child of their age, this does not mean parents want this information for every subject every term. Secondary schools in particular seem to have been sucked into a loop of telling parents every term about attainment in every subject. Not only is this not necessary, it also actively undermines standards in subjects with lesser teaching time. Take music for example. A child might get 1 lesson a week in music and 4 lessons a week in maths. If both music and maths have to summatively assess children at the same frequency, then a disproportionate amount of time that could be used for teaching music will be used instead to assess it.

Instead, schools could have a reporting rota system. For example, in a secondary school context it might look something like this:

October year 7: information about how the child is settling.

Effort descriptors for 4 subjects

December year 7: attainment information for English, maths and history

Effort descriptors for 4 other subjects

Music concert

March year 7: attainment information for science, geography and languages

Effort descriptors for 4 other subjects

Art and DT exhibition.

July year 7: attainment information for RE and computing, plus English and maths standardised scores

Effort descriptors for all subjects

with a similar pattern in year 8 and year 9, though with information for all subjects coming earlier in the year for year 9 to inform children as they make their option choices.

This reduces workload and allows teaching time to focus on teaching rather than generating assessments to feed a hungry data system. It does not mean that teachers can’t round off a topic with a final task that brings together various strands that have been taught over a series of lessons if this would enhance learning. It makes this a professional decision. It may be that writing an essay or doing a test or making a product or giving a performance gives form and purpose to a unit of work. And it may be that the teacher then gives feedback about strengths and areas to work on. But the timing of such set pieces should be determined by the inner logic of the curriculum and not shoehorned into a reporting schedule. And they may not be necessary at all. Some subjects by their very nature need to be shared with an audience. Rather than trying to grade performance in art or music or drama, have events that showcase the work of all, to which parents are invited. As well as celebrating achievement, this gives parents the opportunity to see a range of work and draw their own conclusions about how well their child is doing compared to their peers.

There is one metalanguage that could potentially be used to report attainment that is portable between subjects: the language of maths. If we are trying to provide a meaningful answer to the question ‘how good is my child compared to what you would expect for a child of this age?’ then we are talking about making a comparative evaluation. Where they exist, standardised assessments can be used. These allow parents to understand not just how their child is doing in comparison to their class but in comparison to a national sample. There is no point in doing this, though, unless the assessment assesses what you have actually taught them. This sounds obvious, but I’ve heard many a conversation with parents along the lines of ‘they got a low mark because lots of the test was on fractions, but we haven’t taught fractions yet!’

For those subjects which don’t have standardised assessments, and where it makes sense to do so, assessments of what has actually been taught can be marked and given a percentage score or a score out of ten. There will be a range of scores within the class or year group. Where the child lies within that range can be communicated by sharing the child’s score, the year group average and possibly the range of scores. In the same way, standardised scores – which in their raw form may not make much sense to most parents – can be reported in terms of where the child lies on the continuum from well above average to well below average.
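For schools reporting standardised scores in this way, the translation from raw score to a plain-English position on the continuum is simple enough to sketch. This is purely illustrative: the band boundaries below are my own assumptions for the sake of example, not any official scale.

```python
# An illustrative sketch only: the band boundaries are assumptions for
# illustration, not an official standard. Standardised scores are
# typically scaled so that 100 is the national average.
def describe_standardised_score(score):
    """Translate a raw standardised score into a plain-English band."""
    if score < 85:
        return "well below average"
    elif score < 95:
        return "below average"
    elif score <= 105:
        return "average"
    elif score <= 115:
        return "above average"
    else:
        return "well above average"

print(describe_standardised_score(100))  # average
```

A school adopting something like this would, of course, want to agree its own boundaries and share the caveats about accuracy discussed above.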

Some reading this may flinch here, especially for children who find learning in a subject more challenging. Yet if we want to give parents information about how well their child is doing compared to what we might typically expect, we can’t get away from the fact that some children are doing much less well than their peers. What we can do, and should do, is not let this kind of reporting dominate what we understand assessment to be. It has its place, but it is just one tool among a range. Other tools, such as those that enable responsive teaching, share information about motivation, or equip pupils to assess and improve their own learning, are much more likely to actually make a difference.


[1] Some children may face additional barriers that make it much more challenging to make improvements in one or more of these areas. Young children are not responsible for their attendance, for example. Some children with SEMH needs require more than information to help them improve their behaviour.

[2] See Dylan Wiliam p35 in The ResearchED Guide to Assessment


Cognitive load: a case study

This is a shortened version of the talks I gave at ResearchED Durrington and ResearchED Rugby

When we are taught something, the information our teacher is sharing passes first into our working memory. The working memory is the place where we think.  What many teachers do not realise is that the capacity of the working memory is fixed and limited; as a result, it can only think about a very small number of things at a time.  Once the working memory is full, it can only take on more information by ‘dropping’ something, in the same way that you might be able to juggle with two balls easily enough, but add a third into the mix and everything would go pear shaped. The technical term in cognitive science for ‘going pear shaped’ is cognitive overload. 

Fortunately, there is a work-around. Unlike the teeny-tiny working memory, the long-term memory is vast. I like to think of it a bit like the Room of Requirement in Harry Potter. The long-term memory is the place where things go when we have thought hard about them. The great thing about this is that once something makes it to the long-term memory, we can bring that memory back into the working memory when we want to think about something. We can remember things. With things we have thought about over and over again, retrieval of memories can become completely effortless and automatic. For example, you can read these words with minimal effort because reading for you has become automatic. This means you have cognitive capacity to spare in your working memory to think about what these words are actually saying. You don’t have to use any of your capacity trying to work out what the words say.

This cognitive architecture has implications for teachers. We will need to consider the cognitive load involved in what we are teaching  and be keenly aware of the limited nature of working memory. This means we will need to present information in really small steps. Another implication is that we will need to make sure that students have to think hard about what we want them to remember (rather than thinking hard about something else, like the format of the lesson).  A third implication is that because we want students to remember what we taught them, we will need to give them lots and lots of opportunities to retrieve what we have taught them from their long-term memories, as this will make the memories stronger.

Some things we learn form the building blocks of much of our later thinking so secure recall of these is vital. They must be practised over and over until they are so automatic, it is impossible to forget them. We need these tools to be available to us in our working memory whenever we want them, without any conscious effort. We don’t want to have to remember how to read before we can read anything  or have to resort to counting on our fingers in the middle of our maths GCSE. (For more about how we remember things, see here.)

However, we don’t always bear these implications in mind. For example, we don’t break things down into small enough steps because we are experts in the things we are teaching. Things seem easy to us precisely because various steps in the learning process have become so automated and unconscious that we don’t even recognise all the different things we are doing at once. Wieman called this ‘the curse of knowledge’[1].

I’m going to explore this using a case study: how we learn to tell the time. However, since I am assuming that you probably can already tell the time using a conventional, analogue clock, I am going to teach you using a kind of clock I’m pretty sure most people reading this won’t be familiar with. Please let me introduce the Fibonacci clock.

fib clock

The Fibonacci clock uses the Fibonacci sequence, rather than the more conventional numbers 1-12. To work out the Fibonacci sequence, start with 0 and 1, and add them together. This is equal to 1, which now forms the third number in our sequence: 0, 1, 1. To get the next number, add the last number in the sequence to the one before it. So the next number will be 2. The number after that will be 3, then 5 and so on. If you really want to get into the spirit of things, you might wish to pause and work out the next few numbers in the sequence for yourself. For ease of reference, I’ve put them here.[2]
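For readers who like to see the rule written out precisely, it can be expressed as a few lines of Python. This is just a minimal sketch of the rule described above; the function name is mine, not anything to do with the clock itself.

```python
# A minimal sketch of the rule above: start with 0 and 1, then
# repeatedly add the last number in the sequence to the one before it.
def fibonacci(n):
    """Return the first n numbers of the Fibonacci sequence."""
    sequence = []
    a, b = 0, 1
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b
    return sequence

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```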

However, for the purposes of our clock, we only need the first 5 of these (the first 5 after zero, that is: 1, 1, 2, 3, 5). Another property of Fibonacci numbers is that if you draw squares whose sides equal the numbers in the Fibonacci sequence, you can arrange these squares into an ever-expanding spiral, known as the golden spiral or the Fibonacci spiral.

fib sprial numbers

For our purposes, we only want to look at the rectangle formed when 1,1,2,3,5 are placed together in this spiral formation. This rectangle will form our clock face.

fib 1 to 5

fib clock numbers

The panels on the face light up in different colours, and the pattern of colours is what tells us the time. (The clock is only accurate to 5 minutes.) These are the rules for telling the time on a Fibonacci clock:

  • The hours are displayed using red and the minutes using green.
  • To work out the minutes just add up the green squares and multiply by 5

That seems simple enough, so let’s have a go (answers at the end as footnotes)

a)[3]

7 oclcok

 

b)[4]

6 30 1

That’s not so bad. The hours are quite straightforward. The minutes are a little bit more clunky to work out – worth remembering when we expect children to grasp that with the minute hand you also have to count in 5s.

However, it isn’t quite as straightforward as that. Here is the full set of rules.

  • The hours are displayed using red and the minutes using green.
  • When a square is used to display both the hours and minutes it turns blue.
  • So to work out the hours just add up the red and blue squares.
  • To work out the minutes just add up the green and blue squares and multiply by 5
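The full rule set above is compact enough to write as a short function, which makes the logic explicit. This sketch is my own, not part of the clock’s design; representing each square as a (value, colour) pair is an assumption made for illustration.

```python
# A sketch of the full rule set: each square is labelled with its
# Fibonacci value (1, 1, 2, 3, 5) and a colour. 'red' counts towards
# the hours, 'green' towards the minutes, 'blue' towards both,
# and 'off' towards neither.
def read_fibonacci_clock(squares):
    """squares: list of (value, colour) pairs. Returns (hours, minutes)."""
    hours = sum(v for v, c in squares if c in ('red', 'blue'))
    minutes = 5 * sum(v for v, c in squares if c in ('green', 'blue'))
    return hours, minutes

# One way of displaying half past six: hours 5 + 1, minutes (1 + 2 + 3) x 5
print(read_fibonacci_clock(
    [(5, 'red'), (1, 'red'), (1, 'green'), (2, 'green'), (3, 'green')]))
# (6, 30)
```

Notice that because several combinations of squares can sum to the same totals, more than one pattern of colours can encode the same time, a point returned to below.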

Ok, let’s try telling the time now

c)[5]

545

d)[6]

6 30 2

e)[7]

6 30 3

 

There’s more than one way to display the same time on a Fibonacci clock.

 

f)[8]

[Image: Fibonacci clock face]

I’m hoping that you are finding this a bit taxing. There’s a lot to think about and you are a good way off being able to ‘read’ the time in the same way you can read your watch without thinking.

Now let’s contrast the rules for telling the time on a Fibonacci clock with those for telling the time on an analogue clock.

[Image: the rules for telling the time on a Fibonacci clock compared with those for an analogue clock]

The analogue clock actually has more complicated rules. Yet we expect children to pick this up with a couple of three-week blocks in year 2 and year 4, and then wonder why so many of them can’t tell the time! Because there are 4 different rules that all need orchestrating simultaneously, the cognitive load is too high for many children, so learning fails. The ones that do get it have probably already had a fair bit of practice at home, so some of the rules were already automated and didn’t need to be consciously worked through. This meant these pupils had more space left in their working memories to think about the rules that were new to them. So, extrapolating from telling the time, whenever children struggle with something, it is worth asking ourselves whether we have overwhelmed their working memory by underestimating how complex that something is. More often than not, the answer will be yes.

If we really did teach the time using a Fibonacci clock, what would be an effective way to do it? We’d break it down into small steps, one rule at a time, practising each one lots and lots before introducing the next. So we would start off just telling the time in hours, using red only. If we did this lots and lots, the children would start to benefit from what is known as the ‘chunking effect’. If we gave children plenty of time to practise each component separately, that step would become stored in long-term memory as a ‘chunk’.

Have you ever tried to carry a large bundle of washing upstairs? First of all, you drop a sock. When you pick this up, you then drop some pants. Precariously balancing your pants on top of the pile causes yet more socks to cascade to the floor. Now consider the same load, packed into 5 carrier bags. You easily manage to climb the stairs without depositing underwear on the landing or hosiery in the corridor. A similar thing happens in our brains with chunking. The classic illustration of this effect is to ask someone to try to remember a sequence of letters or numbers. For example, look at this sequence for a few seconds (or even better, have somebody read it to you), then look away and try to recall it.

TCV  QBM  TBI  NTS

Now try this sequence, which has exactly the same letters:

BBC ITV NQT SMT

British readers should find this much, much easier, as the groups now form instantly recognisable chunks. (For non-Brits: the BBC is the British Broadcasting Corporation, ITV is another TV channel, NQT stands for ‘newly qualified teacher’ and SMT stands for ‘senior management team’ – the leadership team in a school.)

Each of these ‘chunks’ of meaning takes up only one slot in working memory, so in the second example we only have to remember 4 things, not 12. We use chunking when we read a clock face. When we read a watch, we don’t count round in 5s; we automatically ‘read’ the time from the position of the hands. We can even do it when the numbers are missing!

[Image: watch face with no numbers]

 

In the same way that we no longer consciously sound out every letter when we read but can just ‘see’ what a word says, given sufficient practice, children will be able to just ‘read’ a clock or watch. So now let’s practise reading our Fibonacci clock, sticking just to red for the moment. You may find you begin to recognise certain patterns if you do this a few times.

a)[9]

[Image: Fibonacci clock face]

b)[10]

[Image: Fibonacci clock face]

c)[11]

[Image: Fibonacci clock face]

d)[12]

[Image: Fibonacci clock face]

e)[13]

[Image: Fibonacci clock face]

f)[14]

[Image: Fibonacci clock face]

g)[15]

[Image: Fibonacci clock face]

h)[16]

[Image: Fibonacci clock face]

i)[17]

[Image: Fibonacci clock face]

j)[18]

[Image: Fibonacci clock face]

k)[19]

[Image: Fibonacci clock face]

l)[20]

[Image: Fibonacci clock face]

m)[21]

[Image: Fibonacci clock face]

n)[22]

[Image: Fibonacci clock face]

o)[23]

[Image: Fibonacci clock face]

p)[24]

[Image: Fibonacci clock face]

q)[25]

[Image: Fibonacci clock face]

r)[26]

[Image: Fibonacci clock face]

s)[27]

[Image: Fibonacci clock face]

Once we were able to just read all these red clock faces automatically, we could move on to reading hours using a mixture of red and blue. When that was completely fluent, we would concentrate on minutes, first of all just using green and, when that was very secure, green and blue minutes. Eventually we would be in a position to put it all together. This would take a lot of time and a lot of short but frequent practice.

If we translate this into how we teach children to tell the time using an analogue clock, it is little wonder children find it so hard and teachers find it so frustrating to teach. We don’t break it down enough and we don’t do nearly enough practice once we’ve finished teaching the unit on time. In fact, it’s a miracle anyone learns to tell the time at all! If you want to find out about a better way of teaching time, I suggest you look at my blog here, where I advocate teaching using the hour hand only at first, and then teaching the minute hand separately. When both of these can be read fluently, read two clocks side by side, one showing hours, the other minutes. Finally, after all this practice, you can introduce a standard two-handed clock.

As I said earlier, there are some things we learn in the early years and key stage one that form the building blocks of much of our later thinking. If we want children to have the mental capacity to be independent, critical thinkers, we need to move heaven and earth to make sure as many as possible of these crucial building blocks become completely automatic, so that precious working memory space can be used for more creative thinking. These key skills must be practised over and over until they are so automatic we cannot forget them and don’t need to think about them. Drivers may well remember how difficult it was, when first learning to drive, to change gear, steer, signal and read the traffic all at the same time. A year or so later, the process is so automatic you can arrive home without even remembering much of your journey. Instead, you’ve been able to think about other, more important things on the way.

In the same way, our children have an entitlement to be given the time and encouragement to commit the basic building blocks of thinking to their long-term memories. Primary schools owe it to the children they teach to make sure that, as a bare minimum, all of these are learnt to automaticity:

  • Number bonds
  • Times tables
  • Phonics
  • Handwriting
  • Telling the time
  • Full stops and capital letters
  • Weeks and months
  • Recognising maps of the UK and beyond

Yet there is a reluctance to spend time practising basic skills. It is derided as ‘meaningless rote learning’. Nothing could be further from the truth. What is really meaningless is condemning children to a lifetime of having to count on their fingers when we could have set them free from the bondage of counting by making sure they knew their number bonds to automaticity. What could hinder problem solving more than not being able to manipulate numbers effortlessly because you were never given the opportunity to learn your tables by heart, because your teacher described that sort of thing as ‘regurgitation’? What could be less creative than not being able to read fluently because your teacher thought phonics was boring? It is our duty as educators to ensure that we help children move as much information as possible into long-term memory, so that their cognitive capacity can be devoted to the fun stuff, the clever stuff, the important stuff.

 

[1] Wieman, C. (2007) ‘The “curse of knowledge”, or why intuition about teaching often fails’. APS News 16, p. 9

[2] 0,1,1,2,3,5,8,13,21,34,55,89…

 

[3] 7 o’clock

[4] 6:30

[5] 5:45

[6] 6:30

[7] 6:30

[8] 9:25

[9] 1 o’clock

[10] 2 o’clock

[11] 2 o’clock (those annoying duplicates!)

[12] 3 o’clock

[13] 4 o’clock

[14] 5 o’clock

[15] 5 o’clock

[16] 6 o’clock

[17] 6 o’clock

[18] 7 o’clock

[19] 7 o’clock

[20] 8 o’clock

[21] 9 o’clock

[22] 9 o’clock

[23] 10 o’clock

[24] 10 o’clock

[25] 11 o’clock

[26] 12 o’clock

[27] This is also 12 o’clock. I forgot to tell you that rule, in the same way we forget to tell children that 12 is also zero on an analogue clock

Cognitive load: a case study

Oven-ready, Hello Fresh or Just Eat? What’s the beef about pre-planned lessons?

Another weekend, another Twitterstorm. The Policy Exchange have just released a paper arguing for more availability of ‘coherent curriculum programmes’ which include, among other things, lesson plans, textbooks and lesson resources such as worksheets. Unfortunately the TES reported this as ‘The solution to the workload crisis? Stop teachers designing their own lessons’, which, understandably, has gone down like a bucket of cold sick on Twitter. The fear is that this augurs the triumph of the neo-liberal take-over of education, with lesson plans direct from Pearson delivered straight to the classroom by Amazon drone. Or, to refer to my possibly obscure title, delivered by motorbike by Just Eat, with the teacher’s role limited to opening the plastic cartons and serving them out; lamb bhuna tonight, whether you want it or not.

Having read the entire report, I can say that what it is actually proposing is something much more reasonable: debatable, but reasonable. The argument goes that the 2014 National Curriculum is not being implemented as well as it possibly could be, because the appropriate resources and training to implement it well either don’t exist or, if they do exist, are hard to locate among the myriad of online resources. It bemoans the current situation where many teachers trawl through online resources, of possibly dubious quality, late into the night, as they attempt to plan each and every lesson ‘from scratch’ – although in reality, probably ‘from Twinkl’.[1] This is wrong, the report argues, because the ‘lesson by lesson’ approach is highly unlikely to result in a coherent curriculum that hangs together across the year groups, or that provides sufficient opportunity for revisiting previous learning. The workload argument is more of a side issue in the report, not its main thrust. Its main thrust is about having a coherent curriculum.

I’m all in favour of a coherent curriculum. Indeed, in this blog I argue for curriculum design that has coherence not only within each specific subject, but across subjects. Yet the type of ‘3D’ curriculum I’m advocating is extremely time-consuming to write. We’ve been at it for almost 2 years and it’s not where I want it to be yet. The same situation is being replicated across the country. In my ideal world, the DfE would pay me and my selected Twitter mates to devote ourselves to this task, but since (doubtless due to unintended oversight) the report fails to mention me explicitly by name, it comes up with the suggestion that the Government should have a curriculum fund that brings ‘teachers with curriculum planning ideas together with institutions who can provide quality assurance and wider scale distribution’ (p. 36). The kinds of institutions it posits as being in a position to do this are multi-academy trusts, learned societies, subject associations and museums. What about schools not in MATs, I’d ask? Otherwise the vast majority of primary schools would be overlooked, and surely some of us have something to offer? And what about the BBC?

So, while I might argue with the detail about who might and might not secure funding to write detailed, coherent curriculum programmes, I think this is an excellent idea. I’d much rather use a curriculum resource written by a bunch of teachers in partnership with, say, a museum than by most educational publishers – especially if there existed a range of quality-assured, kite-marked resources that schools could choose to use if they wanted to. Many primary schools already use ‘off the peg’ curriculum packages, usually for discrete subjects but occasionally across the curriculum.[2] What is lacking is the all-important question of quality assurance. At the moment, schools buy in all sorts of ready-made packages for aspects of their curriculum. With Ofsted signalling its intention to scrutinise the quality of the curriculum (which in a primary school context is shorthand for ‘everything other than English and maths’), primary headteachers are tearing their hair out trying to rustle up a coherent curriculum offer for the foundation subjects, while secondary heads fret about KS3. Just off the top of my head, I can think of the following resources that primary schools of my acquaintance use:[3] Jolly Phonics, Third Space Learning, Cornerstones Curriculum, White Rose, Literacy Shed Plus, International Primary Curriculum, Developing Experts, Jigsaw PSHE, Discovery RE, Val Sabin PE, Rigolo, Discovery Education Coding, ReadWriteInc, Maths Mastery, Charanga.

The thing that strikes me going through this list is that there are lots of different resources out there for maths and phonics, and plenty for those really specialist areas of the primary curriculum where many primary teachers are more than willing to ’fess up to having little to no subject knowledge and welcome explicit handholding: PE, music, computing. But for geography and history, I know of nothing except for Cornerstones and the IPC, which offer many subjects. I think it is fair to say to both parties that the IPC is not quite what the authors of the 2014 National Curriculum had in mind. And neither of these curriculum packages has the sort of horizontal, vertical and diagonal links that I would argue an excellent curriculum should be striving to build within and across subjects.

However, I really do understand the horror some teachers are expressing on Twitter today about having the planning of lessons taken away from them. The two main objections are that no ‘off the peg’ lesson can ever hope to meet the specific learning needs of the diverse classes we all teach and that planning lessons specifically for one’s children was one of the best bits of teaching, part of what made the job rewarding.

So, finally, let’s get back to the title.  In the report, the author John Blake suggests that coherent curricular programmes could be thought of as ‘oven-ready’ – presumably a sort of educational ready meal that just needs a bit of warming up. He argues that these would be especially useful for teachers new to the profession or new to a particular subject. And to be honest, even those of us who love lesson planning probably don’t mind using ‘ready meals’ for some subjects where they lack subject knowledge. If you told most primary school teachers that they were not allowed to use externally produced resources for computing, MFL or music, for example, and had to plan every lesson entirely from scratch, then there would be tears. (Except for the highly knowledgeable minority, of course, who might not understand what all the fuss was about).

Blake then goes on to talk about ‘the final foot’. What he means here is how teachers could take an ‘oven-ready’ resource and then use their professional expertise to adapt it as necessary for the realities of their class. Much of the groundwork having already been done, the teacher is freed up to tweak the lesson to fit their children. This is what I meant by the ‘Hello Fresh’ approach. Hello Fresh is one of those companies that delivers boxes of food with all the ingredients you need to make the particular recipes it also provides. Everything is already in exactly the right quantity; all the cook needs to do is chop, peel and actually cook the ingredients. Unlike a ready meal, this gives you scope either to follow the recipe slavishly or, for those who feel confident, to add or omit ingredients according to your family’s preferences, play about with cooking times (because you know your cooker best, right?) or even go completely rogue and use the ingredients in a completely different recipe, maybe adding in other ingredients bought elsewhere and chucking others out.

Yet I understand that some teachers will still object and see this as an assault on their professional autonomy and creativity. When I was a class teacher, I loved lesson planning. So it was with some trepidation that, 4 years ago, we tried out a particular maths scheme that has very detailed, partially scripted lesson plans. I’m not going to say which one, because I’m not arguing for or against the merits of that particular programme, but for the idea of using very detailed plans written by others. (Besides which, many of you will either already know or be able to guess.) Anyway, we got funding and started with one class. The class teacher was happy to give it a go, though she was already an experienced, skilled teacher. The reason she soon loved it was that it wasn’t a ready meal; it was more of a ‘Hello Fresh’ kind of thing. In fact, you had to tweak the lessons because, as the programme makes quite clear, they are aimed at the average child, and your actual children aren’t average. Some will need more challenge, more depth; others will need more support. So the teacher needed to think about how to adapt every lesson to the particulars of her class. The teacher also needed to decide whether or not to spend more time on a particular lesson, skip lessons the class didn’t need, or swap suggested manipulatives for something else, and had the freedom to design her own worksheets or to use no worksheets at all.

What made this possible was that the programme wasn’t really a set of resources, it was a training programme, of which resources were a part.  Each unit of work included a video explaining key concepts, an overview, links to articles and research, as well as the lesson plans and flipchart slides to go with it. These resources are excellent and go far beyond what any of us in school would have been able to offer. And ours is a school unusually blessed with knowledgeable maths teachers.  There was also some central training and the expectation that the maths leader was regularly coaching teachers new to the programme. Indeed, during the first year, our maths leader, a year 6 teacher, had to teach year 1 maths once a week using the programme, so that she became familiar with it.  Without this training, the resource would not have had half the impact it did.

Now you may argue: if you had to do all that tweaking, what on earth is the point? You might as well have designed the whole thing yourself. Well no – even with the tweaking, lesson planning was much quicker. But the reason our first teacher really loved it, and why the subsequent teachers to use it also love it, is that it is so clever. The progression, the way it comes back to topics again and again, the way it builds in reasoning at every step, the way it moves children away from reliance on counting and towards reasoning based on known facts – all excellent. We might rate ourselves as excellent maths teachers who can plan fantastic lessons, but we simply do not have the expertise or time to develop a scheme of such quality. What really struck me doing lesson observations one week was how brilliantly progression is planned into the scheme. I saw addition lessons in years 1, 3 and 4, and in each lesson exactly the same structure was used, but with increasing complexity. Given its obvious superiority to anything we could produce, it would be foolish and arrogant to insist on the ‘freedom’ to plan our own lessons just because we liked doing it. Nor do teachers feel reduced to mere delivery bots. I really feared they might, but that just didn’t happen. Because the lessons made sense. And where, very occasionally, a lesson didn’t seem to work, they had the freedom to teach it again, their way.

That’s not to say we don’t occasionally do things differently. For example, I think this way of teaching telling the time is better, so we don’t use all of their resources for that – just some. And we are encouraged to comment on lessons and suggest improvements which are listened to. Because the resource is online, rather than a textbook, when they adapt the programme, we don’t have to throw out costly resources. Were there to be similar quality programmes in other areas of the curriculum, I would buy them like a shot.

However, I also really understand that many teachers love the creativity that planning affords and would be loath to relinquish it. On the other hand, just because you love doing something doesn’t mean everybody does. As Michael Fordham says:

[Image: tweet from Michael Fordham]

Instead, as an alternative to moving into leadership, more experienced teachers should have the option to move into curriculum design themselves. This is what happens in Singapore, where experienced teachers have options to move into senior specialist roles that work on areas such as curriculum design, testing, educational research or educational psychology.

With talk here of sabbaticals for teachers, maybe one sabbatical opportunity could be to work within a curriculum development team, producing resources for others to use?


[1] The report doesn’t mention Twinkl, that’s me, being facetious.

[2] I can only comment in detail on primary schools. Maybe it’s different in secondary schools where teachers are subject specialists?  But from talking to many secondary teachers, I don’t think it is as different as all that.

[3] Inclusion in this list does not mean I think the resource is either good or bad. We use some of these; some I wouldn’t touch with a bargepole.


The 3D curriculum that promotes remembering

In my previous blog I explained how memory works, and how teachers can use strategies from cognitive science, such as retrieval practice, to promote long-term learning. After all, the learned curriculum is the only curriculum that actually counts in the end.

The curriculum is the means by which we ensure that all our children get their fair share of the rich cultural inheritance our world affords. A good curriculum empowers children with the knowledge they are entitled to: knowledge that will nourish both them and the society of which they are members. Because, as Angela Rayner, the Labour shadow education secretary, says, knowledge belongs to the many, not the few.

But if children don’t remember what we have taught them, then even the richest curriculum is pointless. Knowledge can’t empower if it is forgotten. So as well as thinking about what is the richest, best material to put into our curriculum, we also have to structure our curriculum in a way that makes remembering almost inevitable. This blog relies very heavily on the thinking of Christine Counsell – so much so that I asked her if it was alright to use her ideas about building a memorable curriculum. She was much more concerned that the ideas got ‘out there’ than with claiming ownership of them, but much of what follows is a result of her sharing her vision of a memorable, knowledge-rich curriculum with me. The actual examples from different year groups come from me, so if you find the specifics lacking, that’s my fault, not hers.

Schools tend to spend a lot of time thinking about how children are going to learn, rather than what. Then when schools start to think about what they want children to learn – when they start to think hard about their curriculum – they overlook planning systematically how they can build their curriculum so that children remember it.

When I first started teaching there was no National Curriculum, no SATs and no Ofsted. Schools were completely free to teach whatever they liked. Indeed, it was often down to the individual teacher to choose what they wanted to teach. My mother was a primary teacher, and her colleague said she didn’t like maths so didn’t teach it. That’s pretty extreme, but it really was more or less up to you. The school I started in was more prescriptive than most – we had a maths scheme and a reading scheme, which meant it was ultra-traditional for its time – but I was still asked what I wanted to teach for my first ‘topic’. Your topic drove the curriculum. The idea was that under the umbrella theme, you tried to find bits of learning from each subject that linked with it. So, for example, I decided for my first topic that I would do ‘the weather’ – actually quite a good topic, as it goes. We made rain gauges and wind socks and measured rainfall, wind direction and temperature, and we learnt about wind speeds and the Beaufort scale in a geography/science combo. We made mobiles with the symbols from weather forecasts. (I think that was art, but it might have been DT.) We wrote stories about storms. We played percussion instruments to make a storm. RE? Well, Noah’s Ark, obviously. We didn’t do any history that term – not in a deliberately planned way, but just because it didn’t fit. Well, I suppose I could have done the history of umbrellas or something.

This approach hasn’t completely died out either. Not long ago, some poor year 6 teacher on Twitter asked for help in planning what to teach in history that term to fit with her topic. Her topic was roller coasters. A topic chosen not by her but by some senior manager who decided that since going on a roller coaster was fun, learning about them would be too.

Actually, planning a topic like this was quite fun, and the best teachers were really inventive and taught good stuff. The rationale behind this approach was that by linking stuff together, it would be more interesting and hence more memorable than teaching a series of atomised, unrelated subjects. Strong links between the subjects were its raison d’être. The problem was that it was just so arbitrary. It was quite possible for children to do the same topic twice (or even three times) because it was just down to the individual teacher. Whole subjects could be left untaught for term after term after term, just because they didn’t ‘fit’ with the topic, and not because a strategic decision had been made to concentrate on something else. Or, in a desperate attempt to shoehorn a subject into a topic, tenuous links were made. I once joked that my topic that term was ‘tenuous links across the curriculum’. I was chatting to Christine Counsell the other day about this and she told me about a teacher who was doing a topic on colours. Desperate to fit in some history, the teacher plumped for teaching them about the Black Death!

But actually, this emphasis on links wasn’t completely misguided. If we want to build a curriculum that promotes remembering, we will absolutely need to build links in. In fact, we will need to build in those links in a far more systematic and structural way than the ‘topic web’ approach ever imagined. The very bones of our curriculum, across the years and across subjects, will need to link up in a well-thought-out way, so that knowledge taught in one subject is explicitly reinforced and revisited not only in other subjects, but in subsequent years. In this way, key concepts and vocabulary are reinforced because new words and concepts are encountered repeatedly in meaningful contexts. I am calling this way of building a curriculum a 3D curriculum, for reasons which I hope will become obvious.

First of all, vertical links should be deliberately constructed within a subject so that, over the years, key ‘high-yield’ concepts are encountered again and again. Not only are these concepts practised through retrieval practice while the unit of work is being taught, the curriculum design also provides planned opportunities to revisit each concept in subsequent years.

So, for example, let us consider the word ‘tyrant’ and its associates ‘tyranny’ and ‘tyrannical’ in the context of teaching history.

We first meet a ‘tyrant’ in year 1, when our students encounter King John (of Magna Carta fame) and learn that he was (until the barons got him) a tyrant. We don’t meet any tyrants in history again until year 5, when we encounter Dionysius of Syracuse (the definitive tyrant), whose tyranny is counterpoised with the democracy of the Ancient Greek city states. While it’s quite a stretch to expect that children will remember the word ‘tyrant’ from 4 years previously, it provides an opportunity to remind students about the Magna Carta and how power is limited in Britain. Then in year 6, we can compare Hitler with Churchill. By now, we also know the adjective ‘tyrannical’.

Alongside this, we need to develop horizontal links between subjects within a year. These are the sort of links we loved back in the old days of topic webs. In year 3, students learn about rivers in geography and about the importance of the river Nile when learning about the history of Ancient Egypt. In year 4, we learn that Vikings invade England, that microbes invade bodies, and about invasion games in PE.

Important grammar concepts, such as nominalisation – so important for academic writing – are also addressed when children write a non-chronological report or an explanation about something they have learnt in another subject. For example, rather than writing that the Nazis invaded Poland, we teach that it is more effective to write about the invasion. Instead of saying the French were defeated, we write about the defeat of the French, and later about the opposition and resistance of the French.

Finally, we need to map out the diagonal links. That is to say, links that join concepts across both year groups and subjects. So when, in year 3, children learn in RE the story of the Exodus and encounter the brutality of Pharaoh, they are reminded that he is behaving like a tyrant – a term they learnt in history in year 1! To give another example, the word ‘source’ is the place where a river begins when studying the River Nile in year 3, but is also the person or book that provides information for a news story or for historical research when we discuss primary and secondary sources in later years. In English in year 6, students revisit our beloved word ‘tyranny’ when they encounter the Warden in ‘Holes’ and her tyrannical regime. A later study of the biography of Harriet Tubman affords the opportunity to describe slavery as being a form of tyranny, albeit one of one group of people who ‘rule’ over another.

Each time a concept is encountered within a different context, not only is the concept more likely to be remembered, the understanding of that concept becomes more nuanced.

What is really important is that this revisiting is done in a deliberate, planned way, and not as an inconsequential aside along the lines of ‘remember when you learnt about plants’, without explicitly reminding the students exactly what it is about plants that you want them to link with what they are learning now – for example, explicitly revisiting the different types of plants that grow in different biomes when learning about adaptation. References to previously studied content need to build on or develop previous learning, as well as strengthening students’ ability to remember the terms. None of this should be ad hoc. These links form the bones of the curriculum. That’s why we can talk of the curriculum as the progression model.

I’m not saying building such a curriculum is easy. Primary school teachers are not used to knowing what children have learnt in foundation subjects in previous year groups, let alone which key concepts might provide fruitful opportunities for development. In other words, which key concepts really are ‘key’.  Indeed, in my experience, most primary schools are only just beginning to map out the kind of knowledge they think children should be learning, let alone thinking about the route map of key concepts within and across years and subjects.

Yet imagine the incredible head start our children would have if they arrived at secondary school with a sophisticated understanding, grounded in different contexts, of the following concepts that I’ve lifted from our knowledge organisers. (I’ve tried to give each word in its nominalised form where possible, but obviously we need to make sure they know the other words in the ‘family’ too.)

(Primarily from history) Ruler, king, monarch, monarchy, reign, democracy, election, tyranny, dictator, opposition, resistance, rebellion, invasion, conquest, triumph, parliament, government, tribe, emperor, empire, defeat, occupation, exploration, taxation, civilisation, citizen, culture, state, military, conflict, alliance, treaty, coalition, surrender, warrior, poverty, flee, exile, hostility, community, migration, persecution, oppression, liberation, neutral, eye-witness, source, archaeologist, expedition, navigation, exploration

(Primarily from RE) Creation, gratitude, compassion, victim, sacrifice, sacred, holy, pagan, monotheism, polytheism, immortal, salvation, forgiveness, sin, incarnation, reincarnation, prophet, liberation, obedience, commandment, prayer, worship, wisdom, commitment, faith, belief

(Primarily from geography) Climate, weather, temperature, erosion, fertile, irrigation, meander, crop, trade, settlement, environment, abundance, scarcity, resources, habitat, adaptation, population, predator, prey, immigration

(Primarily from science) Flammable, conductor, insulator, dissolving, soluble, solvent, evaporation, condensation, pitch, volume, circuit, particle, reversible, irreversible, extinct, orbit, reflection, reproduction, sexual, asexual, friction

This list is self-evidently far too long. We are only at the beginning of building our 3D curriculum.

I gave a talk partly based on this blog at the conference at Reach Academy last Monday, and someone asked me the very sensible question: am I talking about Isabel Beck’s tier two words? In case you haven’t read her work (and you really should, it’s all about vocabulary), Beck divides words into 3 categories: tier 1 are everyday words like table, cup, house; tier 3 words are technical, subject-specific words such as photosynthesis or glacier; tier 2 is where we find words that provide more precise or mature ways of referring to ideas children already know about – for example, knowing the word benevolent as well as kind, or fortunate as well as lucky. See here for more. Tier 2 words are the words teachers should really concentrate on, argues Beck, because they lend a sophistication and maturity to communication that many children may not encounter at home and hence need explicit instruction in.

While I agree with this, I think the key concepts we need to build a 3D curriculum form a set I’m going to call tier 2.5!  I’m still reflecting on this, but I think the key concepts we need are ones that, although often grounded in a specific subject domain (so tier 3), are also used in a metaphorical or looser way outside that domain (so tier 2, possibly?). For example, meander has a very specific – in fact tier 3 – usage in geography, yet is a useful word to describe thoughts or a route through a shopping mall. It’s probably not quite rich enough to form part of the endoskeleton of our curriculum, though ideally all our teachers will know that in year 3 children learn about meanders, so that should the occasion arise where meander would be a useful verb, they will explicitly reference river bends in their explanation.

Looking at my long list, it seems that the humanities afford more words able to be co-opted for use in other domains, whereas science vocabulary is more likely to be hyper-specific and domain-bound. I also note that most of my history words tend to be about power, and a fair few geography words about economics. I’m not sure if that’s my lefty bias coming into play or not. But since power and money are such powerful drivers, it is no wonder that words which formally mean one thing in one context – empire, for example – are pressed into service to describe the more commonplace human interactions of the power-crazed. English teachers, I presume, would look at that long list of words first encountered in history lessons and be delighted to think that children would come to English lessons already with an understanding of each word, albeit in a very specific context – an understanding that is exploited when authors use words figuratively.  This is much less likely to happen with scientific words such as isotope. There are, however, still links to be made: coalition/coalesce, for example.

Much of the detail of this approach is still tentative. I welcome comments.


Education and Stockholm Syndrome: the road to recovery

In 1973, 4 bank employees in Stockholm were taken hostage and held by their captors for 6 days. Yet when they were released, not one of them would testify against their captors; on the contrary, they raised money for their defence.

In June of this year, at the Festival of Education, Amanda Spielman released the English educational establishment from its captivity to a narrowly data-driven paradigm of educational excellence. Yet so strongly has this paradigm held us in its grasp for so many years, it is hard to let it go.  More than that, it is difficult to appreciate quite how perniciously this paradigm has permeated into our psyches, so that we find it difficult to detect just how far its corrupting influence distorts what we do. We suffer from a data-induced myopia. There are a myriad of possibilities we cannot ‘see’ because our focus is firmly fixed elsewhere. Our sense of what ‘good’ looks like has been so warped, we flounder when challenged to concentrate ‘on the curriculum and the substance of education, not preparing your pupils to jump through a series of accountability hoops.’ Surely ‘good’ looks like good results? Take away this guiding light and we are all at sea. You mean, my good results aren’t enough anymore? You mean I can have good results and still be bad? Those wicked jailers have taken away our security blanket; no wonder we want it back!

The penny is slowly beginning to drop. Now we don’t know what ‘bad’ looks like. Before, as long as we cleared those hoops, we were ok. If we cleared them in spectacular style, we might even be double ok with a cherry on the top. But unless we did something really horrific like having out of date plasters or the wrong type of fencing, we could be pretty sure we weren’t actually bad, as long as our results held up. Until now.

Of course we’ve always said there’s too narrow a focus on data and there’s more to education than English and maths and what about the arts and personal development and so on and so forth.  But when our jailers not only agree with us but blow up the jail, without this familiar reference point we find it hard to negotiate the landscape.  We keep looking back to where the jail once was to orientate ourselves.

In May, a month before Amanda’s talk, we held a governor away day to think about our ‘vision’. It was a good day. We spent much more time looking at our values than our results and ended up with our vision statement, which at the time I was really pleased with. It went like this:

Our Vision 

‘Learning to live life in all its fullness’ 

  1. Maintain and improve pupil progress and achievement within a responsibly balanced budget.
  2. Reduce educational inequality through maximising progress for all.
  3. Encourage personal development in line with the school’s values.
  4. Work in collaboration and not competition with local schools for the good of all our pupils: ‘all pupils are our pupils’.

But now, when I look through it with Spielman-spectacles, I see how prison-bound it is.  3) and 4) are ok; it’s 1) and 2) I have a problem with. Let’s look at 1).   (Forget the bit about the budget; that’s just an acknowledgement of the challenge of maintaining provision in the face of a drastically reduced budget.)

Maintain and improve pupil progress and achievement.

We all know what this is code for. What it really means is ‘get good Sats results’ in English and maths. Now I’m not saying that Amanda thinks for one moment that getting good results isn’t important, of course it is. But we’ve forgotten that these results are an imperfect proxy for being suitably literate and numerate rather than an end in themselves. This is compounded by 2)

Reduce educational inequality through maximising progress for all

This is code for ‘make sure pupil premium children get good results too.’

Which is a worthy aim, as far as it goes, but it’s all just a bit reductionist.   Amanda’s speech, on the other hand, shared a vision of education ‘broadening minds, enriching communities and advancing civilization.’ Now getting good Sats results will contribute to that to a certain degree; let’s not understate the case. Minds are not going to be broadened very much unless children can read and write well and are confident in their use of maths.  There are many things that might enrich a community and advance civilization, but most of them are greatly helped by agents who are literate and numerate.

It’s the fixation on measuring things (implicit here) that’s the problem. To an outsider, ‘progress’ and ‘achievement’ sound like perfectly good things to aim for. But we all know that progress ain’t progress as the lay person might understand it. It’s Progress™, something quantifiable, something on a spreadsheet, something with the illusion of tangibility.  Our vision statements may be vague and aspirational, but that’s ok because pretty soon they will be translated into SMART targets with numbers and everything. But, as the saying goes, measure what you value, because you will value what you measure.  Our jailers measured us relentlessly and soon we valued their measurements above all things. We may have denied this with our words but our actions spoke louder.

Of course we want to broaden minds, enrich communities and advance civilisation. That’s a dream job description!  But mark my words, before long someone will invent a ‘broadened mind’ rubric so we can report how many microGoves of Progress™ we have made in the mind broadening business.

Grade | Descriptor
9 | A superlatively broad mind. Sublime community enrichment. Establishment of heaven on earth.
8 | An extremely broad mind. Excellent community enrichment. Rapid advancement of civilisation.
7 | An impressively broad mind. Impressive community enrichment. Notable advancement of civilisation.
6 | A broad mind. Community enriched. Civilisation advancing.
5 | A mainly broad mind with occasional narrowness. Community showing fledgling signs of enrichment. Civilisation inching forwards.
4 | Some narrowness with outbreaks of broadening. Community just about managing. Civilisation in two minds whether to go forwards or backwards.
3 | Quite a narrow mind. Community a bit impoverished. Civilisation retreating slowly.
2 | A narrow mind. Community impoverished. Civilisation in retreat.
1 | A very narrow mind. Community very impoverished. Civilisation put to rout.

(With thanks to Alex Ford for the inspiration and this great blog, written about those who, like Hiroo Onoda, are behind with the news)

A few people have been asking me recently about curriculum development, wanting to know more about our attempts to create a knowledge-rich curriculum that builds cultural capital. A question that sometimes comes up is, ‘Why are you doing this? How is it contributing to rising standards?’ ‘Standards’ of course being another code word for ‘great Sats results in English and maths’.  As if everything has to be justified – especially major initiatives – in terms of the payback in test results. Cos that’s what the prison guards used to fixate on, so that’s what we find it hard to think beyond.

But surely, I hear you saying, a broad, knowledge-rich curriculum will result in higher standards across the board. Why, I said this myself here.  I argued that because inference depends on broad general knowledge, ‘cutting back on foundation subjects to improve reading is a false economy.’   This is true, of course, of improving reading in a qualitative sense. However, while knowledge is essential for reading comprehension, the kind of knowledge gaps that thwart children in the Sats reading comprehension tend to be about why cats appear well looked after because they have shiny coats – not the sort of stuff you study in history or geography, or science for that matter. The idea that curriculum time and financial and human resources might be poured into something that might not make that much impact on our data, on Standards, is one that is going to take schools some time to get their heads around. It seems reckless, profligate even, when looked at from a prison perspective.

But if we dare lift our eyes above the accountability horizon and contemplate the impact of a broad, knowledge-rich curriculum on the longer-term achievement of our pupils at secondary school and beyond, we will see that we have given them the intellectual nourishment they need to thrive. We need to think hard about what words like ‘standards’ and ‘achievement’ and ‘progress’ might mean when liberated from data-jail. Maybe they look like broadening minds, enriching communities and advancing civilisation?


The highs and lows of knowledge organisers: an end of year report

In January, after one term of using knowledge organisers, I posted this blog about how our experiment with them was going. Six months later, the academic year over, I thought it might be useful to share my reflections on what we’ve learnt along the way.  Since January, the importance of schools taking a good, long look at the curriculum they offer has really come to the fore, thanks to those trendsetters down at Ofsted Towers. Amanda Spielman’s talk at the Festival of Education underlined what Sean Harford has been talking (and tweeting) about all year – stop obsessing about data (sort of) and the inevitable narrow focus on English and maths that this necessitates[1]; the curriculum is where it’s at these days, guys. So there is a lot of waking up and smelling the coffee going on as we begin to realise just how iconoclastic this message really is.  The ramifications are huge and startling. It’s a bit like the emperor with no clothes suddenly berating us for our poor fashion sense. We feel indignant (the data nonsense was Ofsted-driven, after all), pleased (we always wanted a broader curriculum), terrified (are they asking to have their cake and eat it? – schools side-lined the rest of the curriculum for a reason and not on a whim – how possible is it to really go for quality in the other subjects when getting good Sats/GCSE results is still such a monumental struggle?) and woefully ill-prepared.

I’m going to focus on the ‘pleased’ bit. It’s not that I don’t share the indignation and the terror. The indignation we will just have to get over. A broader curriculum will only happen if Ofsted want a broader curriculum – such is the power they wield – so let’s try and move on from the exasperation we feel when the curriculum poachers turn curriculum gamekeepers. As for the terror, let’s keep on letting Amanda and Sean know why we are so scared. I wrote another blog a while back about the triple constraint – the idea (from engineering project management) that the three variables of time, cost and scope (a term which embraces both quality and performance specification) are constrained by one another.  If you wish to increase the scope of a project by wanting quality in a broader range of areas than previously, then that will inevitably cost you either more time or more money. Time in education is relatively inelastic.  We can’t just deliver the ‘project’ later.  We can’t say we will get high standards across all areas of the curriculum by doing our GCSEs when the ‘children’ are 20 (though this school did try something along those lines. It didn’t end well.)  So that leaves spending more on our project as the only other option. Mmmm, a few problems with that.

But I digress. Back to being pleased. I am really pleased. After all, we started revamping our ‘afternoon’ subjects well before Ofsted started banging on about this. We did so not because of Ofsted but because a) developments from cognitive science make a very strong case for ensuring children are explicitly taught knowledge if they are to become critical thinkers and creative problem solvers and b) children are entitled to a knowledge-rich curriculum.  I have become convinced of the moral duty to provide our children with a curriculum that ensures that they get their fair share of the rich cultural inheritance our nation and our world affords, an inheritance hitherto seen as the birthright of the rich and not the poor.

By sharing our experience so far, I hope I can save other schools some time (that precious commodity) by helping them avoid making the mistakes we did when we rolled out knowledge organisers and multiple choice quizzes last September.

A quick recap about what we did. We focused on what I am going to call ‘the big four’, i.e. the 4 ‘foundation’[2] subjects: history, geography, RE and science.  In July 2016 I shared some knowledge organisers from other schools with the staff – almost all from secondary schools, as I could only find one example from a primary school at that point. Staff then attempted to write their own for these 4 subjects for the coming academic year.  It seemed to me at the time that this would be a relatively straightforward thing to do. I was wrong, but more of that later. Our afternoon curriculum had been timetabled into 3-week blocks, with strict cut-offs once the 3 weeks had elapsed. This worked extremely well. It tightened planning – much less faff – much more deciding up front what really mattered, hitting the ground running with specific coverage in mind. It gave an excitement to the learning. Neither the children nor the teacher got bored by a topic that drifted on and on just because that half term was quite long. It also meant that subjects did not fall off the edge of the school year, never taught, because people had run out of time. I would highly recommend this way of structuring the delivery of most of the foundation subjects. Obviously it doesn’t work for PE (though a good case can be made for doing it in swimming), MFL or PSHE, which need to be done at least weekly, but that still leaves at least 3 afternoons for the other stuff.

The weekend before each block started, the children took home the knowledge organiser for the new block, the idea being that they read the KO, with their parents’ help where necessary. Then on Monday, the teacher started to teach them the content, some of which some of them would have already read about at the weekend. The next weekend, the KOs went home again, along with a multiple choice quiz based on them, the answers to which were all (in theory) in the KO. These didn’t have to be given in and the scores were not recorded, although in some classes children stuck the KO and each quiz in a homework book.  The same procedure was repeated on the second weekend of the block. Then on the final Friday of each block, a multiple choice quiz was done and marked in class. The teacher took notice of the scores but we didn’t track them on anything. This is something we are changing this September, with a very simple Excel spreadsheet to record just the final end-of-unit quiz score.

Since we didn’t have KOs for computing, art or DT, I suggested that during these curriculum blocks, children should take home the KO from a previous block, revise that and then do a quiz on it at the end of the art (or whatever) block, the idea being that by retrieving the knowledge at some distance from when it was originally taught, the testing effect would result in better long-term recall.  However, as it was only a suggestion, and I didn’t really explain about the testing effect, and teachers are busy and the curriculum overfull, it just didn’t happen. From this September, I’ve explicitly specified what needs to be revisited when in our curriculum map. Towards the end of last year, I also gave over some staff meeting and SMT time to studying cognitive psychology, and this will continue next term with the revamp of our teaching and learning policy, which is being rewritten with the best insights from cognitive science explicitly in mind.

Then, in the dying days of term, in mid-July, the children took an end-of-year quiz in each of the 4 subjects, which mixed up questions from all the topics they had studied that year. In the two weeks prior to this, children had revised from a mega KO, in effect a compilation of all the previous KOs and quizzes that year. They had revised this in lessons (particularly helpful at the end of term, when normal service is interrupted by special events, handover meetings and so forth) and at the weekend for homework. It hadn’t really been my intention to do this at the start of the year, but I confess to being a bit spooked by Ofsted reports that had (the lack of) assessment in the foundation subjects down as a key issue, something I wrote about here.  But having done so, I think it is a good idea. For one, it gives the children another chance to revisit stuff they learnt several months previously, so improving the likelihood that they will be able to recall this information in the longer term.  Secondly, it gives these subjects status. We did the tests after our reports were written and parents’ meetings held. Next year I want to get the end-of-year scores (just a simple mark out of 10 or 15) on reports and shared with parents.  The results from the end-of-year tests were interesting. In the main, almost all children did very well. Here are the results, expressed as average class percentages. I’m not going to tell you which year group is which, as my teachers might rightly feel a bit perturbed about that, so I’ve mixed up the order here, but it represents year groups 2 to 6.

History | RE | Science | Geography
86% | 93% | 85% | 84%
79% | 85% | 91% | 82%
83% | 95% | 87% | n/a
75% | 75% | 67% | 74%
70% | 76% | 66% | n/a

One class was still studying their geography block when we took the tests, and another did Ancient Egypt as a mixed geography/history block, geography coming off somewhat the worse in this partnership – something I may not have noticed without this analysis, and which we are now changing for next year.

From this I notice that we seem to be doing something right in RE and that, by contrast, science isn’t as strong.  The tests threw up some common errors; for example, children confusing evaporation and condensation, something we can make sure we work on. Looking at the class with the lowest results, it is striking that the average is depressed by a few children scoring really badly (4 out of 10, 5 out of 15). These are not the children with SEN but generally children about whose attitude to learning we already have concerns.  All the more reason to share these results with their parents.

Even so, the lowest score here is 66%, and that is without doing any recap once the block has finished until the very end of the year, something we will do next year.  I don’t have anything to compare these results with, but my gut instinct is that in previous years, children would have been hard pressed to remember two-thirds of what they had learnt that year, let alone 95% of it. As Kirschner and co remind us, if nothing has been changed in long-term memory, nothing has been learned.[3] Or as Joe Kirby puts it, ‘learning is remembering in disguise.’  So next year, I’d like us to aim for averages around the 90% mark – mainly achieved by going back over tricky or easily confused content and by keeping a close eye on the usual suspects. Are they actually doing their revision at home?
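By way of illustration, the arithmetic behind those class percentages is simply each child’s raw quiz mark converted to a percentage and then averaged across the class. Here is a minimal sketch – the marks below are invented for the example, not our real data:

```python
def percentage(raw_mark, out_of):
    """Convert a raw quiz mark (e.g. 12 out of 15) to a percentage."""
    return 100 * raw_mark / out_of


def class_average(marks, out_of):
    """Average percentage for a class, given each child's raw mark."""
    return sum(percentage(m, out_of) for m in marks) / len(marks)


# An invented class of six children sitting a quiz marked out of 15.
# Note how one very low scorer (5/15) depresses the class average.
science_marks = [13, 14, 12, 15, 5, 11]

avg = class_average(science_marks, out_of=15)
print(f"Class average: {avg:.0f}%")
```

A simple Excel spreadsheet does the same job, of course; the point is just that a couple of very low raw marks can pull a whole class average down noticeably, which is why it is worth looking past the average at the individual scores.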

So, after that lengthy preamble, what are the main pitfalls when using KOs and MCQs for the first time?

  1. Deciding which knowledge makes it onto a KO is hard, particularly in history and sometimes RE. One teacher did a KO on Buddhism that had enough information for a degree! In general, the less you know about something, the harder it is to make judicious choices, because you simply do not know what is and isn’t really important. In science it is pretty easy: go to BBC Bitesize for the relevant topic and use that. For history you actually have to decide how to cut a vast topic down to size. Who will do this deciding? The class teacher, the subject co-ordinator, the SLT or the head teacher? For what it’s worth, I’d start with the class teacher so they own the learning, but make sure it is scrutinised by someone else, someone who understands what is at stake here[4]. Quite a few primary schools have developed KOs this year, so look at these and adapt from there, rather than starting from scratch. I’m going to put ours on @Mr_P_Hillips’ padlet https://padlet.com/jack_helen12/czfxn9ft6n8o once I’ve removed any copyright-infringing images. It’s one thing using these images on something just used in one school, quite another putting them up on the web. There are some up already by other people, so do take a look. I definitely think this hive-mind approach to developing KOs at primary level is the way ahead.  We are unlikely to have subject specialists for all the subjects in the curriculum in our individual schools, let alone ones who are up to date with the latest debates about what makes for a good curriculum. However, by combining forces across the edu-twittersphere, I’m sure we can learn from each other, refining each other’s early attempts until we get something we know is really good. We’ve revised ours twice this year: once in January, after a term of writing ones that were too long, and then again in July with the benefit of hindsight.
  2. Seems obvious but…if you are using quizzes, make sure the answers are in the KO! Someone – a secondary school teacher, I think – tweeted a while back that KOs are only KOs if they can help children self-quiz. I think he was alluding to the grid sort of KO that looks like this (here’s an extract):
When did the ancient Greeks live? | about 3,000 years ago
When was Greek civilisation most powerful? | Between 800 BC and 146 BC
Ancient Greece was | not a single country but was made up of many city states
Some examples of city states are | Athens, Sparta and Corinth
City states used to | fight each other a lot. But if enemies not from Greece attacked, they all joined together to fight back
The first city states started | about 800 BC
All Greeks | spoke the same language and worshipped the same gods
Ancient Greece is sometimes called | the ‘cradle of Western civilisation’
‘Cradle of Western civilisation’ means | the place where European culture all started
The climate in Greece is | warm and dry
In ancient Greece most people earned their living by | farming, fishing and trade
The two most powerful city states were | Athens and Sparta

 

As opposed to the same information presented as continuous prose like this.

The ancient Greeks lived about 3,000 years ago

Greek civilisation was most powerful between 800 BC and 146 BC.

Ancient Greece was not a single country but was made up of many city states such as Athens, Sparta and Corinth, but all Greeks spoke the same language and worshipped the same gods.

City states used to fight each other a lot. But if enemies who were not from Greece attacked, they all joined together to fight back.

Ancient Greece has been called ‘the cradle of Western civilisation’ because writing, art, science, politics, philosophy and architecture in Europe all developed from Greek culture.

Ancient Greece had a warm, dry climate, as Greece does today. Most people lived by farming, fishing and trade.

The idea with the grid is that children cover one half and write the answers (or questions) as a way of revising.  I get this for secondary children, but it doesn’t seem suitable for primary-aged children – especially the younger ones. The grid is just too forbidding to read. And we don’t expect them to write out answers for homework to check themselves; again, for younger children that would turn it into a chore rather than something we have found our children actually like doing.  Maybe we might develop a grid alongside the continuous prose? (I did both for Ancient Greece to see which worked better, but went for the prose version in the end.)  Maybe for years 5 and 6 only?

When we audited the KOs against the quizzes, we found that the quizzes sometimes asked questions that weren’t on the KO! We spent a couple of staff meetings putting that right, so I think that’s all sorted now, but if you spot any omissions when I finally do post our KOs and quizzes, do let me know. Keep thinking hive mind.

  3. If you think KOs are hard to write, wait until you try to write quizzes! The key to a good MCQ is that the other answers – the distractors, as they are known in the trade – are suitably plausible. Maybe some of our high scores were down to implausible distractors? However, a really good distractor can help you spot misconceptions, so they are really useful formatively.

Polar explorers (year 4, joint history/geography topic)

Question | Answer A | Answer B | Answer C
Which one of these is NOT a continent? | North America | Europe | Russia
Which one of these is NOT a country? | Argentina | Africa | Hungary
Pemmican is… | an animal that lives in water and has wings | high energy food made of meat and fat | high energy food made out of fish and protein
Great Britain is surrounded by water so it is an… | island | Ireland | continent
If you travel north east from the UK you will reach… | Norway | Belgium | Austria
Shackleton’s ship was called… | The Antarctica | The Elephant | The Endurance
When did Henson and Peary make a mad dash for the North Pole? | 1909 | 1609 | 1979

 

I think this example has good distractors. I particularly like the way the common misconception that Africa is a country is addressed. With the dates, you may argue that children are using deduction rather than recall. I don’t think at this point that is a problem. Besides the fact that by having to think about the question their recall will have been strengthened anyway, we all know how hard it is for children to develop a sense of time. 2009 was the year many of year 4 were born, so if they think the dash for the Pole happened in 1979, a mere 30 years before they were born – when possibly their teacher was already alive – then we know their sense of chronology is still way out. But I would hope that most children would automatically dismiss this date and then be faced with a choice between 1609 and 1909. Some will just remember 1909, of course. But others might reason that 1609 is a really long time ago, before the Fire of London, whereas 1909 is only just over 100 years ago, and appreciate that while the story is set in the past, it’s not that long ago, and the technology needed to make the voyage far outstripped that around even in 1666. On the other hand, if they can reason that well about history, they probably already know it was 1909! When at primary level we try to get children to remember dates, it is in order to build up their internal timeline and relate events to one another. By the time children study this in year 4, they have previously learnt about the Magna Carta, the Fire of London, the Crimean War and World War 1 (the year 2 ‘nurses’ topic on Florence Nightingale, Mary Seacole and Edith Cavell), the Stone Age, the Iron Age, Ancient Egypt, the Romans, the Anglo-Saxons and the Vikings, as well as knowing that Jesus was born 2017 years ago (and hopefully beginning to understand BC and why the numbers go backwards). I would hope they would be able to group these into a sequence that was roughly accurate – that’s something else we should develop some assessments for.
Elizabeth Carr and Christine Counsell explored this with ks3 children; I’m going to adapt it for ks2 next year.

  4. I had hoped to bring all the KOs and quizzes together into a nicely printed and bound book ready for revision before the final end-of-year assessments. In fact, ideally this booklet would be ready at the start of next year, so that children could revise from it at spare moments – not only at home and during specific revision lessons, but also when they had a supply teacher for part of the day, for example, or in those odd 20-minute slots you sometimes get after a workshop has finished or before it starts. I wanted it to be properly printed and spiral bound to look ‘posh’ and important. However, I really underestimated how much paper all this generates. There was I, worrying we weren’t covering enough content – when we gathered it all together it took up 36.4MB. The price for getting a hard copy printed for each child (for their year group only) came to over £1500 – well beyond our budget. So a member of the admin team spent a whole day photocopying everything. By copying back to back we were able to make it slim enough for the photocopier to staple. These were then put into those A4 see-through plastic pouches – we call them ‘slippery fish’ at our school.  They didn’t have anywhere near the gravitas I’d hoped for – stapled at one corner only, with pages inevitably tearing off. The teachers didn’t let them go home until the final weekend because they were scared they would get lost. So much for the lovely idea that we would present leavers with a bound copy of all the KOs and quizzes they had had since year 2. So unless you have a friendly parent in the printing business or can get someone to sponsor you, be prepared for a low-tech, photocopier-intensive solution. In hindsight, if every class had had a homework book that the KOs and quizzes went into as we went along, that would have solved the problem.

So there we have it. The top tip is to learn from what is already out there, adapting and honing what others have already done. Then please share back.

[1] I’m talking from a primary perspective here. The message to secondary schools is similar, but more along the lines of ‘forget your PiXL box of magic tricks and start making sure your kids are really learning important stuff.’

[2] Yes, I know, officially RE and science are ‘core’ subjects. They are not really, though, in practice, are they? That’s partly what Amanda and Sean are getting at.

[3] Kirschner, P. A., Sweller, J. and Clark, R. E., 2006. Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), pp. 75–86

[4] I had intended to write about what is at stake in this blog but it’s long enough already. Another time, maybe. I do talk about the issues in my initial blog on KO’s mentioned at the start, though, if you are looking for help.

The highs and lows of knowledge organisers: an end of year report

Beyond the blame game –  the trouble with transfer

It’s a laugh a minute in the Sealy household at dinner as two teachers swap amusing anecdotes about their day while our sons listen on enthralled. Yes, I’m lying. The sons are sticking pins in their eyes in a vain effort to MAKE IT STOP while we drone on to each other about the trials and tribulations of our respective days.  My partner is a maths intervention teacher and trainer who mainly spends his time training other teachers and TA’s how to teach maths to children who are struggling.  The interventions he trains people in are all very effective and have tonnes of evidence to back them up (albeit too expensive to staff for most of us in these cash-strapped times, when having a class teacher and the lights on at the same time is considered a luxury). Among his top ten moans[1] is the situation when class teachers fail to recognise that ex-intervention students are now actually quite good at maths, instead seating them in the 7th circle of hell that is ‘orange table’, where there might as well be a sign saying ‘despair all who enter here’ and where the cognitive challenge is low.  When the intervention teacher tries to argue their case, the class teacher, who does not consider their colleague to be a ‘real’ teacher, argues that ‘she might be able to do place value (or whatever) with you, but she can’t do it in the classroom where it really matters.’  The unspoken assumption being that intervention teachers – who are not real teachers anyway – don’t really know what they are doing and are easily tricked into thinking that a child has got something because they’ve played a nice game with their not-real teacher, who doesn’t understand about important things like SATs and tests and being at the expected level and obviously couldn’t hack it in the classroom. Indeed, a quite senior teacher, worried for her value added, once said to him that he ‘artificially inflated’ pupils’ learning by teaching them stuff.
To which he countered that all teaching ‘artificially’ inflates learning – that’s what we’re paid to do! We are employed to use artifice to achieve learning.

It occurred to me recently that cognitive science provides an explanation as to why this conflict happens; an explanation that blames neither teacher and also explains equally well why, every September, class teachers shake their heads in disbelief at the assessment information provided by their colleague, the former teacher – a disbelief that is amplified on the transfer from primary to secondary school.

Transferring learning is, quite simply, a bitch.  There are three cognitive hurdles to overcome on the journey from the pupil’s first encounter with an idea to them being able to understand whatever it is in a flexible and adaptable way. First, they need to be presented with the idea in an understandable way that makes them think hard[2] about what they are learning. If they think hard about it, it is more likely to make that all-important journey from their short-term memory to their long-term memory. Sometimes teachers try and make ideas memorable by making them exciting in some way. This can backfire if the ‘exciting’ medium becomes more memorable than the actual message the teacher wants to get across. I recall one child who was finding learning to count really tricky, so to engage him we used gold paper plates and toy dinosaurs. He was totally absorbed, but not by the maths, unfortunately – and did much better with plain paper plates and cubes.  But hurdle one is not where the intervention vs class teacher fault line lies.

The second hurdle lies in overcoming the ‘I’ve taught it therefore they know it’ fallacy, particularly common among less experienced teachers.  But even if our panoply of AfL strategies tells us that a particular child has grasped a particular concept, it is highly likely that by the next day they will have forgotten most of what we taught them. That is just how our brains work. But that does not mean we labour in vain; the forgetting is an important part of remembering.  The forgotten memory is not really forgotten; it’s floating about somewhere in our long-term memory, ready to be reactivated. All it takes is for us to re-teach the information and, on second encounter, the material is learned much faster. By the next week it is mostly forgotten again, but with a third presentation, the material is learned very quickly indeed.  And so on.  Each time we forget something, we relearn it more quickly and retain it for longer.

This means that we need to build into our lessons routine opportunities to revisit material we taught the day before, the week before, the month before, the term before and the year before.  This is known in the trade as ‘spaced repetition.’  Each time we do so, we enhance the storage strength of those memories. Ignorance of this phenomenon accounts for part of the professional friction between colleagues. It wasn’t wishful thinking on the part of the ‘sending’ teacher.  The pupil genuinely did know how to partition 2-digit numbers, for example, but has now forgotten. That’s an inevitable part of how our brains work and not some other professional’s ‘fault’.  When faced with a conflict between what it is reported that a student can do and what they actually appear able to do, the most charitable and scientifically probable explanation is that they have forgotten how to do something that they once could do well; with a bit of input it will all come back fairly quickly. If we remind ourselves of this each September and expect to have to cover a lot of ‘old’ ground, that will be better for our students, for our blood pressure and for professional relationships.
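To make the revisiting pattern concrete, here is a minimal sketch in Python of what an expanding revisit schedule for one topic might look like. The exact gaps (1, 7, 30, 90 and 365 days) are my own illustrative choices, not numbers prescribed anywhere in the research or this blog:

```python
from datetime import date, timedelta

# Illustrative expanding gaps: roughly the day after, the week after,
# the month after, the term after and the year after first teaching.
# The precise numbers are an assumption for the sake of the example.
REVIEW_GAPS_DAYS = [1, 7, 30, 90, 365]

def revisit_dates(first_taught):
    """Return the dates on which a topic is due a quick revisit."""
    return [first_taught + timedelta(days=gap) for gap in REVIEW_GAPS_DAYS]

# A topic first taught on Monday 3 September comes round again the next
# day, the following Monday, in early October, and so on through the year.
schedule = revisit_dates(date(2018, 9, 3))
```

The point of the widening gaps is exactly the forgetting-and-relearning cycle above: each revisit arrives just as the material is fading, so relearning is quick and retention stretches further each time.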

However, hurdle number three has, to my mind, the best explanatory power for this aggravating situation.  To understand this, I will have to explain the difference between episodic and semantic memory.  Episodic memory remembers… episodes… events… experiences. It is autobiographical, composed of memories of times, places and emotions, and derived from information from our five senses.  Semantic memory is memory of facts, concepts, meanings and knowledge, cut free from the spatial and temporal context in which it was acquired.  Generally, especially where teaching is concerned, memories start off as episodic and then, with lots of repetition, particularly in different contexts with different sensory cues, the memory becomes semantic and can be recalled in any context. This is the destination we want all learning to arrive at.

So when we learn something new, we remember it episodically at first.   We’ve all had those lessons when we remind our class about the previous lesson and they can recall, in minute detail, that Billy farted, but not what an adverb is.  Or they’ll remember that you spilled your coffee or that Samira was late or even that ‘we used highlighter pens.’  But anything actually important… gone!  Of course, when you recap on yesterday’s lesson, it will all come flooding back.  See hurdle two.  However, the problem with transferring this knowledge beyond working with this teacher in this classroom is that with episodic memories, environmental and emotional cues are all-important.  Take these cues away and the memory is hard to recall. We don’t want a situation, for many reasons, where our children can only recall what an adverb is if prompted by the environmental cue provided by Billy’s posterior.  We are a proud profession; we aim a little bit higher than that. We want what we teach to be transferable to any context.  Until that has occurred, how can we say learning has successfully happened?

So, back to our maths intervention teacher. The pupil has learnt a whole heap of maths and made many months of progress in a short space of time.  However, although their teacher has got them to think hard about this material and to apply their new knowledge in many different situations, and although the teacher has also used the principles of spaced repetition and revisited previously taught material many times, there is still the very real possibility that the memory of some of this material is still mainly episodic, still mainly dependent on familiar environmental cues for recall.  It is not that the child is emotionally dependent on the familiar adult to boost their confidence – though that can also happen – but that the academic memory is bundled with the sound and sight (and possibly the coffee breath) of their intervention teacher and the room in which the intervention happened.  Without these, the memory is inaccessible.

This problem is only exacerbated when the transfer is from one year group to another – with the added difficulty that the student is unlikely to have been doing much hard thinking about either denominators or adverbs over the six-week summer holiday. It is even more of a barrier when students are transferring to a completely different school, such as at secondary transfer, with all the other attendant changes that brings.

To counter this, when teaching material, we need to try and play about with the environmental conditions to lessen the impact of context cues. So when an intervention teacher asks to come and work in class alongside a pupil as part of weaning them off the intervention, that is not some namby-pamby special-snowflake treatment by a teacher who is clearly too attached to their pupils, but a strategy rooted in cognitive science to help the pupil access episodic memories with most of the familiar context cues removed. Class teachers can try and break the dependence on context cues for material they teach by, at the very least, getting pupils to sit in different seats with different pupils from time to time.[3]  Year 6 teachers, now faced with the post-SATs quandary of what to teach, would do well to teach nothing much new and instead ensure overlearning of what pupils already know, but within as many different physical contexts as possible – maths in the playground, or the hall, or even just by swapping classrooms for the odd lesson.  If pupils are used to sitting next to the same group of pupils in every lesson, now is the time to mix things up, to lessen the dependence on the emotional cues (again, episodic) gained from the sense of familiarity of sitting with the same people day in, day out.[4]

Transfer can also be facilitated by applying learning in different parts of the curriculum – using maths in DT, for example, or in art lessons, or maths through drama – and also by applying the learning in open-ended problem solving.  Indeed, the very sort of ‘progressive’ teaching strategies that card-carrying traditionalists usually eschew are fine for transfer, once the learning is securely understood but probably still remembered episodically. It’s the use of these methods for the initial teaching of ideas that’s a bad idea – explicit teaching does that job so much better. Whizzy bangy stuff early on in – or even in the middle of – a sequence of learning runs the very real danger of getting children to think hard about the whizz bangs and not the content – so the whizz-bangery will be what gets remembered in the episodic memory. See hurdle one. But that’s a whole other blog post.

Accepting the inevitability of the difficulties of transferring learning from one context to another can help us plan better for it and be less frustrated by it, both in preparing to say goodbye to pupils in July and when saying hello to students in September.   It’s not that learning slumps as such in September; it’s that it is being reawakened and then transferred from episodic to semantic memory. Once memories have made this journey, they are so much stronger and more flexible, so worth the frustration.  So this September, when your new pupils don’t seem to be able to remember anything their assessment information would indicate they should know, take a deep breath and remember the three hurdles – that is just how learning and memory work. It probably isn’t their former teacher’s fault at all.  Maybe you just don’t smell right.

[1] Just in case a colleague of my partner is reading, he insists I make it abundantly clear this has not happened for a long while where he teaches. It does happen to some of the people he trains (in other schools) though – it is an occupational hazard of being an intervention teacher.

[2] Memory being the residue of thought, as Daniel Willingham explains in this book, which you really should read.

[3] I am relying heavily on chapter 6 of ‘What every teacher needs to know about psychology’ by David Didau and Nick Rose for all of this. This is also a very good book for teachers to read. If you read both this and the Willingham one above, you would be well set up.

[4] Not that I would recommend this in the first place, but if that is how you do things, shake them up for the last few weeks of term in the interest of better transfer.


Test to the Teach

When Daisy Christodoulou told us not to teach to the test, I assumed she was mainly concerned with teachers spending too much lesson time making sure children understood the intricacies of the mark scheme at the expense of the intricacies of the subject. Personally, I’ve never spent that much time on the intricacies of any mark scheme. I’ve been far too busy making sure children grasp the rudimentary basics of how tests work to have time spare for anything intricate.   For example, how important it is to actually read the question.  I spend whole lessons stressing: if the question says underline two words that mean the same as…, that means you underline TWO words. Not one word, not three words, not two phrases. TWO WORDS. Or if the question says ‘tick the best answer’ then, and yes, I know this is tricky, the marker is looking to see if you can select the BEST answer from a selection which will have been deliberately chosen to include a couple that are half right. BUT NOT THE BEST. (I need to lie down in a darkened room just thinking about it.)

But this is not Christodoulou’s primary concern.

Christodoulou’s primary concern is that the way we test warps how we teach. While she is well aware that the English education system’s mania for holding us accountable distorts past and present assessment systems into uselessness, her overriding concern is one of teaching methodology.  She contrasts the direct teaching of generic skills (such as inference) with a methodology that believes such skills are better taught indirectly, by teaching a range of more basic constituent things first and getting those solid.  This approach, she argues, creates the fertile soil in which inferring (or problem solving or communicating or critical thinking or whatever) can thrive. It is a sort of ‘look after the pennies and the pounds will look after themselves’ or (to vary the metaphor) a ‘rising tide raises all boats’ methodology. Let me try to explain…

I did not come easily to driving. Even steering – surely the easiest part of the business – came to me slowly, after much deliberate practice in ‘not hitting anything.’ If my instructor had been in the business of sharing learning objectives she would surely have told me that ‘today we are learning to not hit anything.’

Luckily for the other inhabitants of Hackney, she scaffolded my learning by only letting me behind the wheel once we were safely on the deserted network of roads down by the old peanut factory. The car was also dual control, so she pretty much covered the whole gears and clutch business whilst I concentrated hard on not hitting anything.  Occasionally she would lean across and yank the steering wheel too.  However, thanks to her formative feedback (screams, yanks, the occasional extempore prayer), I eventually mastered both gears and not-hitting-anything.  Only at that point did we actually go on any big roads or ‘play with the traffic’, as she put it.  My instructor did not believe that the best way to get me to improve my driving was by driving. Daisy Christodoulou would approve.

Actually, there was this book we were meant to complete at the end of each lesson. Michelle (my instructor) mostly ignored this, but occasionally she would write something – maybe the British School of Motoring does book looks – such as ‘improve clutch control’, knowing full well the futility of this – if I actually knew how to control a clutch, I bloody well would. She assessed that what I needed was lots and lots of safe practice of clutch control with nothing else to focus on. So most lessons (early on anyway) were spent well away from other traffic, trying to change gears without stalling, jumping or screeching, with in-the-moment verbal feedback guiding me. And slowly I got better. If Michelle had had to account for my progress towards passing my driving test, she would have been in trouble. Whole areas of the curriculum, such as overtaking, turning right at a junction and keeping the correct distance between vehicles, were not even attempted until many months of lessons had taken place.  Since we did not do mock versions of the driving test (until right near the very end), she was not able to show her managers a nice linear graph showing what percentage of the test I had, and had not yet, mastered.  I would not have been ‘on track’. Did Michelle adapt my learning to fit in with these assessments?  Of course not!  She stuck with clutch control until I’d really got it and left ‘real driving’ to the future – even though this made it look like I was (literally) going nowhere, fast.  Instead, Michelle just kept on making sure I mastered all the basics and gradually added in other elements as she thought I was ready for them.  In the end, with the exception of parallel parking, I could do everything just about well enough. I passed on my third attempt.

I hope this extended metaphor helps explain Christodoulou’s critique of teaching and assessment practices in England today. Christodoulou’s book ‘Making Good Progress?’ explores why it is that the assessment revolution failed to transform English education. After all, the approach was rooted in solid research and was embraced by both government and the profession. What could possibly go wrong?

One thing that went wrong, explains Christodoulou, is that instead of teachers ‘using evidence of student learning to adapt…teaching…to meet student needs’[1], teachers adapted their teaching to meet the needs of their (summative) assessments. Instead of assessment for learning we got learning for assessment.

Obviously assessments don’t actually have needs themselves. But the consumers of assessment – and I use the word advisedly – do.  There exist among us voracious and insatiable accountability monsters, who need feeding at regular intervals with copious bucketfuls of freshly churned data.  Imagine if the British School of Motoring held pupil progress meetings with their instructors. Michelle might have felt vulnerable that her pupil was stuck at such an early stage, looked at the driving curriculum and seen if there were some quick wins she could get ticked off before the next data drop.  Preferably anything that doesn’t require you to drive smoothly in a straight line… signalling, for example.

But this wasn’t even the main thing that went wrong. Or rather, something was already wrong that no amount of AfL could put right. We were trying to teach skills like inference directly when, in fact, these, so Christodoulou argues, are best learnt more indirectly by learning other things first. Instead of learning to read books by reading books, one should start with technical details like phonics. Instead of starting with maths problem solving, one should learn some basic number facts. Christodoulou describes how what is deliberately practised – the technical detail – may look very different from the final skill in its full glory. Phonics practice isn’t the same as reading a book.  Learning dates off by heart is not the same as writing a history essay.  Yet the former is a necessary, if not sufficient, basis for the latter. To use my driving metaphor, practising an emergency stop on a deserted road at 10mph when you know it’s coming is very, very different from actually having to screech to a halt from 40mph on a rainy day in real life, when a child runs out across the road. Yet the former helped you negotiate the latter.

The driving test has two main parts: technical control of the vehicle and behaviour in traffic (a.k.a. playing with the traffic). It is abundantly clear that to play with the traffic safely, the learner must have mastered a certain amount of technical control of the vehicle first. Imagine Michelle had adopted the generic driving-skill approach, assumed these technical matters could be picked up en route, in the course of generally driving about, and assumed that I could negotiate left and right turns at the same time as maintaining control of the vehicle. When I repeatedly stall, because the concentration it takes to both brake and steer distracts me from changing gears to match the slower speed, Michelle tells me that I did not change down quickly enough, which I find incredibly frustrating because I know I’ve got a gears problem, and it is my gears problem I need help with. But what I don’t get with the generic-skill approach is time to practise changing gears up and down as a discrete skill. That would be frowned on as being ‘decontextualised’. I might protest that I’d feel a lot safer doing a bit of decontextualised practice right now – but drill and practice is frowned upon – it isn’t real driving, after all – and in the actual test I am going to have to change gears and steer and brake all at the same time (and not hit anything), so I’d better get used to it now.

Christodoulou argues that the direct teaching of generic skills leads to the kind of assessment practice that puts the cart, if not before the horse, then parallel with it. Under this approach, if you want the final fruit of a course of study to be an essay on the causes of the First World War, the route map to this end point will be punctuated with ‘mini-me’ variations of this final goal; shorter versions of the essay, perhaps. These shorter versions are then used by the teacher formatively, to give the learner feedback about the relative strengths and weaknesses of these preliminary attempts. All the learner then has to do, in theory, is marshal all this feedback together and address any shortcomings whilst retaining, and possibly augmenting, any strengths. However, this often leaves the learner none the wiser about precisely how to address their shortcomings.  Advice to ‘be more systematic’ is only useful if you understand what being systematic means in practice, and if you already knew that, you probably would have done so in the first place.[2]

It is the assessment of progress through interim assessments that strongly resemble the final exam that Christodoulou means by teaching to the test.  Not because students shouldn’t know what format an exam is going to take and have a bit of practice on it towards the very end of a course of study.  That’s not teaching to the test. Teaching to the test is working backwards from the final exam and then writing a curriculum punctuated by slightly reduced versions of that exam – and then teaching each set of lessons with the next test in mind.   The teaching is shaped by the approaching test.  This is learning for assessment.  By contrast, Christodoulou argues that we should just concentrate on teaching the curriculum, and that there may be a whole range of other activities to assess how this learning is going that may look nothing like the final learning outcome. These, she contends, are much better suited to helping the learner actually improve their performance. For example, the teacher might teach the students what the key events were in the build-up to the First World War, and then, by way of assessment, ask students to put these in correct chronological order on a timeline.  Feedback from this sort of assessment is very clear – if events are in the wrong order, the student needs to learn them in the correct order.  The teacher teaches some small component that will form part of the final whole, and tests that discrete part. Testing to the teach, in other words, as opposed to teaching to the test.

There are obvious similarities with musicians learning scales and sports players doing specific drills – getting the fine details off pat before trying to orchestrate everything together.  David Beckham apparently used to practise free kicks from all sorts of positions outside the penalty area, until he was able to hit the top corner of the goal with his eyes shut.  This meant that in the fury and flurry of a real, live game, he was able to hit the target with satisfying frequency.  In the same way, Christodoulou advocates spending more time teaching and assessing progress in acquiring decontextualised technical skills and less time on the contextualised ‘doing everything at once’, ‘playing with the traffic’ kind of tasks that closely resemble the final exam.  Only when we do this, she argues, will assessment for learning be able to bear fruit. When the learning steps are small enough and comprehensible enough for the pupil to act on them, then and only then will AfL be a lever for accelerating pupil progress.

Putting my primary practitioner hat on, applying this approach in some areas (for example reading) chimes with what we already do, but in others (I’m thinking writing here) the approach seems verging on the heretical.  Maths deserves a whole blog to itself, so I’m going to leave that for now – whilst agreeing whole-heartedly that thorough knowledge of times tables and number bonds (not just to ten but within ten and within twenty – so including 3+5 and 8+5, for example) is absolutely crucial. Indeed, I’d go so far as to say number bonds are even more important than times table knowledge, but harder to learn and rarely properly tested.

I’ve mentioned Hit the Button in a previous blog. We have now created a simple spreadsheet that logs each child’s score from year 2 to year 6 in the various categories for number bonds. Children start with ‘make 10’ and stay on this until they score 25 or more (which means 25 correct in 1 minute, which I reckon equates to automatic recall).  They then proceed through the categories in turn – with ‘missing numbers’ and ‘make 100’ having lower target scores of 15.  Finally they skip the two decimals categories and go to the times tables section – which has division facts as well as multiplication facts. Yes!  When they’ve got those off pat, they can return to do the decimals and the other categories. We’ve shared this, and the spreadsheet, with parents, and some children are practising at home each night. With each game only taking one minute, it’s not hard to insist that your child plays, say, three rounds of this first, before relaxing.  In class, the teachers test a group each day, using their set of 6 iPads.  However, since Kindle Fires were on sale for £34.99 recently, we’ve just bought 10 of them (the same as the cost of one iPad). We’ll use them for lots of other things too, of course – anything where all you really need is access to an internet browser.
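The progression rule our spreadsheet follows is essentially a small lookup. Here is a minimal sketch of it in Python; the category labels, the ordering of the deferred decimals work, and the times tables target score are my own illustrative assumptions, since the blog only specifies 25 for ‘make 10’ and 15 for the next two categories:

```python
# Hypothetical sketch of the 'which category next?' rule described above.
# Only the 25 and the two 15s come from the blog; the rest is assumed.
TARGETS = [
    ("make 10", 25),          # 25+ correct in one minute ~ automatic recall
    ("missing numbers", 15),
    ("make 100", 15),
    ("times tables", 25),     # decimals categories are deferred until after this
    ("decimals", 15),
]

def next_category(best_scores):
    """Return the first category whose target score has not yet been met,
    or None once everything is mastered."""
    for category, target in TARGETS:
        if best_scores.get(category, 0) < target:
            return category
    return None
```

So a child whose best ‘make 10’ score is 27 but who has attempted nothing else would be sent to ‘missing numbers’ next, and a child with every target met is done.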

When we talk about mastery, people often talk about it like it’s this elusive higher plane that the clever kids might just attain in a state of mathematical or linguistic nirvana, when really what it means is that every single child in your class – unless they have some really serious learning difficulty – has automatic recall of these basic number facts and (then later) their times tables.  And can use full stops and capital letters correctly the first time they write something. And can spell every word on the year 3 and 4 word list (and year 1 & 2 as well, of course).  And read fluently – at least 140 words a minute – by the time they leave year 6. And have books they love to read – having read at least a million words for pleasure in the last year. (We use Accelerated Reader to measure this – about half of year 6 are word millionaires already this year and a quarter have read over 2 million words.) How about primary schools holding themselves accountable to their secondary schools for delivering cohorts of children who have mastered all of these (with allowances for children who have not been long at the school or who have special needs)?  A bit like John Lewis is ‘Never Knowingly Undersold’, we should aim (among other things) to ensure that, at the very least, all our children who possibly could have got these basics securely under their belt.

(My teacher husband and I now pause to have an argument about what should make it to the final list.   Shouldn’t something about place value be included? Why just facts?  Shouldn’t there be something about being able to use number bonds to do something?  I’m talking about a minimum guarantee here – not specifying everything that should be in the primary curriculum. He obviously needs to read the book himself.)

Reading

My extended use of the metaphor of learning to drive to explain Christodoulou’s approach has one very obvious flaw. We usually teach classes of 30 children, whereas driving lessons are normally conducted 1:1. It is all very well advocating spending as much time on the basics as is necessary before proceeding to orchestrating several different skills all at the same time, but imagine the frustration a more able driver would have felt stuck in a class with me and my poor clutch control.  They would want to be out there on the open roads, driving, not stuck behind me and my kangaroo petrol.  Children arrive at our schools at various starting points. Some children pick up the sound–grapheme correspondences almost overnight; for others it takes years. I lent our phonics cards to a colleague to show her three-year-old over the weekend; by Monday he knew them all. Whereas another pupil, now in year 5, scored under 10 in both his KS1 phonics checks.  I tried him again on it recently and he has finally passed.  He is now just finishing turquoise books. In other words, he has just graduated from year 1 level reading, four years later.  This despite daily 1:1 practice with a very skilled adult, reading from a decodable series he adores (Project X Code), as well as recently starting on the most decontextualised reading programme ever (Toe by Toe – which again he loves) and playing SWAP. He is making steady progress – which fills him with pride – but even if his secondary school carries on with the programme[3], at this rate he won’t really be a fluent reader until year 10. I keep on hoping a snowball effect will occur and the rate of progress will dramatically increase.

Outliers aside, there is a range of ability (or prior attainment if you prefer) in every class, and for something as technical as phonics, this is most easily catered for by having children in small groups, depending on their present level. We use ReadWriteInc in the early years and KS1. Children are assessed individually by the reading leader for their technical ability to decode, segment and blend every half term, and groups are adjusted accordingly. So that part of our reading instruction is pretty Christodoulou-compliant, as I would have thought it is in most infant classes. But what about the juniors, or late year 2 – once the technical side is pretty much sorted and teachers turn to teaching reading comprehension? Surely, if ever a test was created solely for the purpose of being able to measure something, it was the reading comprehension test, with the whole KS2 reading curriculum one massive, time-wasting exercise in teaching to the test?

I am well aware of the research critiquing the idea that there are generic comprehension skills that can be taught through specific texts and then applied across many texts, as Daniel Willingham explores here. Christodoulou quotes Willingham several times in her book and her critique of generic skills is obviously influenced by his work. As Willingham explains, when we teach reading comprehension strategies we are actually teaching vocabulary, noticing understanding, and connecting ideas. (My emphasis.) In order to connect ideas (which is what inference is), the reader needs to know enough about those ideas to work out what hasn’t been said as well as what has been. Without specific knowledge, all the generic strategies in the world won’t help. As Willingham explains:

‘Inferences matter because writers omit a good deal of what they mean. For example, take a simple sentence pair like this: “I can’t convince my boys that their beds aren’t trampolines. The building manager is pressuring us to move to the ground floor.” To understand this brief text the reader must infer that the jumping would be noisy for the downstairs neighbors, that the neighbors have complained about it, that the building manager is motivated to satisfy the neighbors, and that no one would hear the noise were the family living on the ground floor. So linking the first and second sentence is essential to meaning, but the writer has omitted the connective tissue on the assumption that the reader has the relevant knowledge about bed‐jumping and building managers. Absent that knowledge the reader might puzzle out the connection, but if that happens it will take time and mental effort.’

So what the non-comprehending reader needs is very specific knowledge (about what it’s like to live in a flat), not some generic skill. It could be argued, then, that schools should spend more time teaching specific knowledge and less time on elusive and non-existent generic reading skills. However, Willingham concedes that the research shows that, even so, teaching reading comprehension strategies does work. How can this be, he wonders? He likens the teaching of these skills to giving someone vague instructions for assembling Ikea flat-pack furniture:

‘Put stuff together. Every so often, stop, look at it, and evaluate how it is going. It may also help to think back on other pieces of furniture you’ve built before.’

This is exactly the process we go through during shared reading. On top of our daily phonics lessons we have two short lessons a week of shared reading, where the class teacher models being a reader using the eric approach. In other words, we have daily technical lessons, and twice a week we also have a bit of ‘playing with the traffic’ – or, more accurately, listening to the teacher playing with the traffic and talking about what they are doing as they do it. In our shared reading lessons, by thinking out loud about texts, the teacher makes it very explicit that texts are meant to be understood and enjoyed, not just barked at, and that we should therefore check as we go along that we are understanding what we are reading (or looking at). If we don’t understand something, we should stop and ask ourselves questions. It is where the teacher articulates that missing ‘connective tissue’, or ‘previous experience of building furniture’ to use Willingham’s Ikea metaphor, sharing new vocabulary and knowledge of how the world works – knowledge that many of our inner-city children do not have. (Although actually, for this specific instance, many of them would know about noisy neighbours, bouncing on beds and the perils of so doing whilst living in flats.)

eric

For example, this picture (used in the ‘eric’ link above) gives the teacher the opportunity to share their knowledge that sometimes the sea can get rough, and that this means the waves get bigger and the wind blows strongly. Sometimes it might blow so hard that it could even blow your hat right off your head. As the waves rise and fall, the ship moves up and down and tilts first one way, and then the other. (Pictures are sometimes used for this rather than texts so that working memory is relieved of the burden of decoding.)

When teaching children knowledge is extolled as the next panacea, it’s not that I don’t agree; it’s just that I reckon people really underestimate quite how basic some of the knowledge we need to impart to our younger children is. I know of primary schools proudly adopting a ‘knowledge curriculum’ and teaching two hours of history a week, with two years given over to learning about the Ancient Greeks. I just don’t see how this will help children understand texts about noisy neighbours, or about what the sea is like (although you could do that in the course of learning about Ancient Greece if you realised children didn’t know), or, for that matter, what it is like to mill around in bewilderment. The only kind of assessment that will help here is the teacher’s ‘ear to the ground’, minute-by-minute assessment – realising that, oh, some of them haven’t ever seen the sea, or been on a boat; they don’t know about waves or how windy it can be or how you rock up and down. This is the kind of knowledge that primary teachers in disadvantaged areas need to talk about all the time. And why we need to go on lots of trips too. But it is not something a test will pick up, nor something you can measure progress gains in. The only way to increase vocabulary is one specific word at a time. It is also why we should never worry about whether something is ‘relevant’ to the children or not. If it is too relevant, then they already know about it – the more irrelevant the better.

I don’t entirely agree with the argument that, since we can’t teach generic reading skills, we should instead teach lots more geography and history because this will give children the necessary knowledge they need to understand what they read. We need to read and talk, talk, talk about stories and their settings – not just what a mountain is but how it feels to climb a mountain or live on a mountain, how that affects your daily life, how you interact with your neighbours. We need to read more non-fiction aloud, starting in the early years. We need to talk about emotions and body language and what the author is telling us by showing us. A quick google will throw up writers’ body-language ‘cheat sheets’. We need to reverse engineer these and explain that if the author has their character fidgeting with darting eyes, that probably means they are feeling nervous. Some drama probably wouldn’t go amiss either. Willingham’s trio of teaching vocabulary, noticing understanding, and connecting ideas is a really helpful way for primary teachers to think about what they are doing when they teach reading comprehension. What we need to assess and feed back to children is how willing they are to admit they don’t understand something, to ask what a word means, to realise they must be missing some connection. None of this is straightforwardly testable. That doesn’t mean it isn’t important.

Writing

Whereas most primary schools, to a greater or lesser degree, teach reading by first teaching phonics, writing is much more likely to be taught through students writing than through teaching a series of sub-skills. It is the idea that we ensure technical prowess before we spend too much time on creative writing that most challenges the way we currently do things.

Of course we teach children to punctuate their sentences with capital letters and full stops right at the start of their writing development. However, patently, this instruction has limited effectiveness for many children. They might remember when they are at the initial stages and only write one sentence anyway – it’s not so hard to remember the final full stop in that case. Where it all goes wrong is once they start writing more than one sentence, further complicated when they start writing sentences with more than one clause. I’ve often thought we underestimate how conceptually difficult it is to understand what a sentence actually is. Learning to use speech punctuation is far easier than learning what is, and what is not, a sentence. Many times we send children back to put in their full stops when, actually, they don’t really get where full stops go. On my third session doing 1:1 tuition with a year 5 boy, he finally plucked up the courage to tell me that he knew he should use them, but he just didn’t get how you knew where sentences ended. So I abandoned what I’d planned and instead we learnt about sentences. I told him that sentences had a person or a thing doing something, and then after those two crucial bits we might get some extra information about where or why or with whom or whatever, which belongs with the person/thing and so needs to be in the same sentence. We analysed various sentences, underlining the person/thing in one colour, the doing-something word in another colour and finally the extra information (which could be adjectives, adverbs, prepositions, the object of the sentence – the predicate minus the verb, basically) in another. This was some time ago, before the renaissance of grammar teaching, so it never occurred to me to use the terms ‘subject’, ‘noun’, ‘verb’ etc., but I would do now. It was all done on the hoof, but after three lessons he had got it, and even better, could apply it in his own writing.

What Christodoulou is advocating is that instead of waiting until things have got so bad they need 1:1 tuition to put right, we systematically teach sentence punctuation (and other common problems such as subject–verb agreement), giving greater priority to this than to creative writing. In other words, stop playing with the traffic before you’ve mastered sufficient technical skills to do so properly. This goes against normal primary practice, but I can see the sense in it. If ‘practice makes permanent’, as cognitive psychology tells us (see chapter 7 of What Every Teacher Needs to Know About Psychology by Didau and Rose for more on this), then the last thing we want is for children to practise doing something incorrectly again and again. But this is precisely what our current practice does. Because most of the writing we ask children to do is creative writing, children who can’t punctuate their sentences get daily practice in doing it wrong. The same goes for letter formation and the spelling of high-frequency common exception words. Maybe instead we need to spend far more time in the infants, and into year 3 if necessary, on drills where we punctuate text without the added burden of composing as we go. Maybe this way, working memories would not become so overburdened with thinking about what to say that the necessary technicalities went out the window. After that, we could rewrite this correctly punctuated text in correctly formed handwriting. Some children have genuine handwriting or spelling problems, and I wouldn’t want to condemn dyslexic and dyspraxic children to permanent technical practice. However, if we did more technical practice in the infants – which would mean less time for writing composition – we might spot who had a genuine problem earlier and then put in place specific programmes to help them and/or aids to get round the problem another way. After all, not all drivers use manual transmission; some drive automatics.

Christodoulou mentions her experience of using the ‘Expressive Writing’ direct instruction programme, which I duly ordered. I have to say it evoked a visceral dislike in me; nasty cheap paper, crude line drawings, totally decontextualised – it’s everything my primary soul eschews (and it’s expensive to boot). However, the basic methodology is sound enough – and Christodoulou only mentions it because it is the one she is familiar with. It is not as if she’s giving it her imprimatur or anything. I’m loath to give my teachers more work, but I don’t think it would be too hard to invent some exercises that are grounded in the context of something else children are learning; some sentences about Florence Nightingale or the Fire of London, for example, or a punctuation-free excerpt from a well-loved story. Even if we only did a bit more of this and a bit less of writing compositions where we expect children to orchestrate many skills all at once, we should soon see gains in children’s creative writing too. Certainly, we should insist on mastery of these core writing skills by year 3, and where children still can’t punctuate a sentence, be totally ruthless in focusing on that until the problem is solved. And I don’t just mean that they can edit in their full stops after the fact; I mean they put them (or almost all of them) in as they write. It needs to become an automatic process. Once it is automatic, it is easy. Otherwise we are not doing them any favours in the long term, as we are just making their error more and more permanent and harder and harder to undo.

Certainly pupil progress meetings would be different. Instead of discussing percentages and averages, the conversation would be very firmly about the teacher sharing the gaps in knowledge they had detected, the plans they had put in place to bridge those gaps, and progress to date in doing so – maybe courtesy of the ‘Hit the Button’ spreadsheet, some spelling tests, end-of-unit maths tests, records of increasing reading fluency. Already last July our end-of-year reports for parents shared with them which number facts, times tables and spellings (from the year word lists) their child did not yet know… with the strong suggestion that the child work on these over the summer! We are introducing ‘check it’ mini assessments so that we can check that what we taught three weeks ago is still retained. It’s easy: we just test to the teach.

[1] Christodoulou quoting D. Wiliam, p. 19, Making Good Progress?

[2] Christodoulou quoting D. Wiliam, p. 20, Making Good Progress?

[3] I say this because our local secondary school told me they didn’t believe in withdrawing children from class for interventions. Not even reading interventions. Surely he could miss MFL and learn to read in English first? As a minimum. Why not English lessons? I know he is ‘entitled’ to learn about Macbeth, but at the expense of learning to read? Is Macbeth really that important? Maybe he will go to a different secondary school, or they’ll change their policy.


Curating knowledge/organising knowledge

Jon Brunskill has started off an interesting exchange on Twitter after posting two blogs about knowledge organisers in primary schools, including one about the Apollo 11 mission to the Moon. Some commentators don’t like the idea much, on the grounds that it is a disembodied list of facts and therefore dull and uninteresting. To which Jon (and I) reply that of course it would be, were that the only thing presented in the lesson. However, bringing knowledge to life in a lively way is the job of the teacher; a job made much easier by having spent time deciding which knowledge to include and which to discard. Among the myriad concepts, definitions, dates, events, descriptions, quotations, hypotheses, opinions and arguments that we could potentially include, what exactly is it that is so crucial to the topic that it warrants inclusion on the KO? What knowledge should we curate? (Those of us of a left-wing bent could decide to call our knowledge organisers knowledge curators, to make it clear that despite having gone all ‘knowledgey’ and seemingly in the same camp as Lord Nash, Civitas and Michael Gove, our socialist credentials remain intact and we acknowledge that the selection of knowledge is a political act. We could do, but people would laugh at us. Even more.)

The national curriculum, punctiliously specific in English and maths, relaxes into vague suggestions for the wider curriculum, particularly in history. I’d rather that than the breathless charge through British history that was in Mr Gove’s draft national curriculum (p165 and following), which appeared to assume curriculum time was infinitely expandable. However, the lack of explicit direction leaves non-specialist primary teachers with the task of choosing what to include and what to leave out within topic headings such as ‘the Roman Empire and its impact on Britain’, followed by five non-statutory suggestions. Does that mean we should try and pick one of the five? Do all five? If we did something completely different, would that matter? Given that curriculum time for foundation subjects is all too finite, what should make the final cut?

Other commentators like the concept of knowledge organisers but want to refine the idea, something that Jon welcomes: ‘the friction from the resistance is ultimately… what will polish the diamond.’ Do let’s have more constructive Twitter/blog exchanges like this that help us all reflect on and improve what we do. It is particularly useful to have contributions from secondary specialists explaining which areas of knowledge it is most useful for children to have acquired during their primary years.

At St Matthias, we too have been using KOs since September, so I thought I’d post some of ours to help the discussion along. I didn’t write these; the class teachers did. I’m really pleased with how they’ve taken to the idea, but there’s a lot more to this KO business than it seems, and they will all need polishing and refining further. I’ve included three: a history-focused one from year 2 on the Fire of London, another history-focused one from year 3 on Ancient Egypt, and a geography-focused one that uses the stories of Ernest Shackleton and Matthew Henson as a context for learning about continents and countries. All three units cross over with literacy but, unlike Jon’s, the KOs were not written in order that the children could write a non-fiction piece at the end. Instead the children write shorter pieces of writing throughout the unit. However, I like Jon’s idea, so maybe that is something we will develop.

As a result, ours are a bit different from Jon’s. For example, they contain pictures as well as text. This is deliberate: partly because a diagram can sometimes express information more succinctly and lucidly than text can (Jon’s KO would, in my opinion, be improved by a diagram of the lunar module and command module and a diagram showing the path the mission took, along these lines), and partly because images of the major players bring the text to life. For example, it reminds us that Matthew Henson was African American. Maybe I’m not sufficiently hardcore; the pictures do make the KOs look more inviting, more primary. Compare, for example, with the excellent, but stern-looking, example in Robert Peal’s blog. But each picture takes up space that could contain more text, so each picture needs to be justifiable beyond being pretty. For example, in the year 2 one below, I think the map and the picture of Pepys have a stronger claim to space than the other two pictures. However, I am happy with the text; it doesn’t seem to me that anything important has been omitted, so they can stay. The geography one on polar explorers obviously needs its maps, as learning where things are on the globe forms the key knowledge pupils are meant to learn in this unit. The year 3 one originally had an annotated map of the Nile, which I replaced with more text; partly because I was worried about being sued for copyright by Dorling Kindersley and partly because I thought there was not enough emphasis on historical causation or chronology.

Which brings us to the heart of the matter. If our KOs are not to become just lists of highly specific fun facts – canopic jars or pemmican, anyone? – then they must have some transferability. Hush my mouth, I’ve said a bad word! What I mean is that in a history KO we must make sure that at least some of the facts we teach knit the different topics children will study together, by developing chronological understanding and an understanding of causality and consequence. Of course these are not free-standing ‘skills’ that make sense without the facts, but especially for us primary non-specialists, it would be easy to omit those all-important aspects out of ignorance. Which is why, on the Ancient Egypt KO below, I made sure it included, as facts to be learnt, the awareness that Ancient Egypt existed contemporaneously with the late Stone Age, Bronze Age and Iron Age; while we were grubbing about in the mud, a far more advanced civilisation was flourishing elsewhere.

Chronology is notoriously badly understood by primary children. I’ve come across year 3 children who think the tallest teacher is the oldest, despite having youthful tall teachers and short grey-haired ones. The idea that ‘the past’ is not just one ‘place’ but many, all related to each other and some occurring simultaneously, seems to be very difficult for some children to grasp. Certainly we must use number lines and teach dates, and remember that chronological awareness encompasses duration and interval as well as sequence. For this reason, I think that Jon’s KO should include the fact that JFK was president and that this happened when Queen Elizabeth II was queen. Maybe all history KOs used in the UK should carry a chronological anchor fact like this; elsewhere the anchor fact will need to reference whoever is significant in that locale. For that matter, our polar explorers one should reference Edward VII for Matthew Henson and George V for Shackleton. I don’t think learning dates by themselves is sufficient. The dates need fleshing out, with explicit links stressed. Who was on the throne? What else was going on in the world?

We’ve also tweaked our KOs so that they include explanations as to why things happened. Ancient Egypt flourished because the land was fertile and the deserts provided protection from invaders. The land was fertile because the Nile flooded. London burned because wood is flammable, dry wood even more so, and the houses were close together. Buckets were leather because plastic had not been invented. Shackleton could not radio for help because long-range radio didn’t exist. Pemmican was good to eat because in extreme cold you need high-energy foods. Causality cannot be taught in a vacuum apart from knowledge; it is a concept that becomes denser the more times children encounter different scenarios needing different… or not so different… explanations. Having studied the Fire of London (houses close together: fire spread easily), Ancient Egypt (River Nile floods yearly: soil very fertile) and Shackleton (Antarctic too cold for plants: very little lives there), the transferable concept that the physical environment influences the prosperity or otherwise of those who live there gains traction.

I’m looking again at Jon’s KO on the Apollo 11 mission. He explains the terms ‘quarantine’ and ‘space race’. ‘Space race’ will, we hope, be a small step, as it were, towards our students eventually grasping that explorations of distant, unknown regions are very costly and are therefore funded by very rich patrons, functioning as, among other things, a status symbol. I’m sure there was discussion in class about why there had to be separate lunar and command modules, rather than one module that landed on the moon and then returned to earth. However, given the complexity of that explanation and the age of the pupils (6 going on 7), I can see why Jon hasn’t tried to condense it into a sentence!

As I mentioned above, Jon wrote his KO so that his class would have some rich facts with which, in literacy, to write an information text. It was not written as a history topic per se, even though Neil Armstrong is included in the KS1 history national curriculum as an example of a significant individual whom one might compare with Christopher Columbus. Having learnt about Armstrong in literacy, I would urge Jon to then do that comparison, thus exploiting the obvious similarities and differences between the two explorers – and the reasons for them.

By saying there should be some transferability of ideas between different topics, I’m not saying that’s the only function of the unit. Knowledge is neither the master nor the slave of transferability, but rather its bedrock. Maybe transferability is the wrong word. We teach what at first seem like isolated islands of ‘knowledge’, then bit by bit we realise these islands are joined in ways we couldn’t at first see. The more we know, the more we are able to predict, infer, make links. When deciding which knowledge to teach, we make choices based both on what specific facts educated children should know, regardless of wider, more general links, and on what might be more useful. For example, in an earlier blog I contrasted learning about the history of chocolate with learning about the history of the Romans, and made the point that learning about the Romans in year 3 helps you understand more about British and European history in general and about how Christianity became a global religion. As I said, knowledge may be power, but not all knowledge is equally powerful.

The other ‘transferable’ element within our KOs is vocabulary. Most of the vocabulary within our KOs is necessarily very context-specific. However, a few words are more generalisable – ‘high-yield’ words children will encounter and need to understand again and again, across many different domains of knowledge. Looking at the KOs above (and Jon’s), I find the words flammable, eyewitness, expedition, navigate, crop, fertile, trade, afterlife, archaeology, crew, quarantine and module, all of which are necessary for understanding many other areas of the curriculum. Again, Jon wrote his for a different reason, and maybe did a separate one when the class studied the solar system, but I wonder whether ‘expedition’, ‘voyage’, ‘orbit’, ‘atmosphere’, ‘launch’ and ‘gravity’ (the last two both briefly mentioned) should explicitly feature in the vocabulary column.

So within our KOs, alongside the specific dates, names, places and other vocabulary specific to the topic in hand, we must also include those high-dividend words that will recur across the curriculum, rather like Isabel Beck’s tier two words that I wrote about here. (You will see the St Matthias year 2 one also includes the tier one words oven and bakery, and you might wonder why such basic words are included. A large proportion of our children speak English as an additional language, and it is exactly these words, primarily used in a domestic sphere, that they might not ever hear in English unless we explicitly teach them – for example, I remember a very eloquent year 6 child referring to a cup and plate, because she had never heard the word saucer; the world of the kitchen was a world where she only spoke Bengali.)

The more I write, the more complicated it seems. I started out just wanting key facts, then facts including dates and quotations, then some chronological anchor facts (if it’s history), possibly a diagram or two, definitely a map if it is geography, some explicit causality, and now tier two type vocabulary. Am I asking it to bear too much? Have I departed from the basic concept? I look at Robert Peal’s KS3 KO (link above) and his is just a long list of facts (including some specific vocabulary, e.g. fealty) in question-and-answer form, and then a brief key-dates summary at the bottom. I presume that because the pupils learn all this knowledge and have it at their fingertips, he can then spend more lesson time talking about causes, making links across time periods and describing similarities and differences. Maybe this side of things needs to be more explicit in primary KOs because a) the children are younger, know and understand less and have less well-developed vocabularies, and b) the teachers are generalists who might otherwise forget to talk about these aspects. Quality textbooks are in short supply, and even if they existed we couldn’t afford them.

Here are three knowledge organisers, from years 2, 3 and 4. I welcome comments.

fire-of-london-ko

geog-ko-uk-europe

 

ko-geog-shackleton-vocab

shackleton-henson-stories

yr3-ancient-egypt-facts

ancient-egypt-key-words


What if lesson observations were every week? How we reduced stress by observing staff more often.

Every year I commission a staff survey and every year – although the overall findings are always really positive – I have to lock myself away to read it, as it always makes me so cross initially! It is of course done anonymously, but with results reported separately for teachers and support staff. Because we are a relatively small school – one form entry – the temptation is to try and second-guess exactly which miserable ***** it was who disagreed (and strongly) that ‘the SLT provides them with the support and guidance they need to do their job effectively.’ The fact it was a teacher rather than a TA makes it worse. Teacher results are always more positive than TA results. But not, apparently, in this case. And there are only 11 possible people it could be. So I run through the eligible candidates, thinking about who this Brutus could possibly be. I summon my deputy (I know it’s not her) and together we run through the remaining 10 teachers and decide it has to be teacher X, and find ways to dismiss their opinion as totally invalid. Then we move on to pinpoint which Judas TA (from the 16 who completed it) does not agree ‘they are treated with fairness and respect.’ And so on, through 100 or so criteria. I’m sure we don’t make for a very edifying spectacle, which is why I make sure to give myself time to throw my hissy fit in private.

Considering I commissioned the survey in the first place, you’d think I’d welcome the honest feedback. I suppose I assume that everybody thinks everything is absolutely marvellous, so am always disappointed that I get any negative feedback at all. In my heart of hearts, I don’t quite understand why people only vote ‘agree’ rather than ‘strongly agree’ that everything within my kingdom, I mean our school, is 100% amazing. The anonymity is necessary, I get that, but so frustrating. It hurts that there is a member of staff out there who really feels that they are not treated with fairness and respect. Hissy fit aside, I want to hear what it is that has made them feel that way, and to reassure them that of course I value and respect them, and that if something has made them feel otherwise, I want to put it right straight away. But I don’t know who it was, so maybe they still feel the same now as they did in May, when they did the survey.

The day after I first read the survey, I re-read it, and now I am able to have a bit of distance, be a bit more objective and learn from it. The most useful learning comes from statements where quite a few people disagree. Usually these are things I can put my hands up to with relative ease, or are long-running problems – for example, 45% of teachers do not feel they are able to strike the right balance between their work and home life. The previous year it was 62%, so we’re heading in the right direction, although *gnashes teeth* three teachers ‘do not agree that senior leaders are looking at ways of reducing teacher workload.’ (Strangled cry of ‘yes we are, it’s in the bloody development plan; I’ve banned marking for God’s sake’… deep breaths Clare, deep breaths.)

The stand-out finding from last year was that 37% of support staff ‘did not feel they received regular and constructive feedback.’  This was not entirely surprising – we knew only too well that we had never found a workable appraisal system for TAs, and although the SENCO started each year with the best of intentions regarding coaching those TAs who supported children with statements or EHCPs, other matters somehow took precedence in real life. But in some ways it was surprising; our major school development initiative that year had been implementing the MITA project (Maximising the Impact of Teaching Assistants), which I have written about here. It had been, by all accounts, incredibly successful and we all wanted to build on it this year.  It involved a lot of feedback – including watching video footage of your own teaching – from one’s peers, with some teacher involvement from the 3 teachers running the project. We deliberately did not include the SLT (apart from myself, as I was part of the team) to make the project less intimidating.  But we appeared to have uncovered a desire for better, more regular structured conversations with line managers about how well TAs were doing their jobs. What manager could ask for more! What a gift!

At the same time, I was anxious to do all I could to reduce the stress-load on teachers.  Reducing the workload might be a Sisyphean task, but I could try and make the work less stressful. Top of the list of stressors came lesson observations and teaching and learning reviews. How could I make sure I had an accurate picture of our strengths and weaknesses in the classroom in a way that was less stressful? We already didn’t grade lesson observations, but the termly half-hour visit was still perceived as incredibly stressful however much I chucked around terms like ‘developmental’ and ‘helpful.’

By chance, I was reading Leverage Leadership by Paul Bambrick-Santoyo.  At the centre of his school improvement work – at the centre of his leadership, in fact – lay a system where he (or other leaders) observed every teacher for 10 minutes every week in a low-stakes observation, followed by a very brief 10-minute feedback session.  He goes through and dismisses all the excuses about the impossibility of timetabling this, and is pretty persuasive. (I’m not quite sure how big his school is – obviously it is smaller than a typical secondary school – but I’m sure an adapted form of this could be used in even the largest school.) I wondered if this might provide a model that enabled the SENCO to get around her TAs, allowed us to build on the success of the MITA project, reduced stress for teachers and gave me a stronger picture than the traditional model we had been using. More than that, I hoped it would be a strong lever, as Leverage Leadership alleged, for further school improvement.  We were in the strong position that every one of our teachers was at least good – many were very strong indeed – and here lay a system of observation and feedback that would enable really great teachers to make incremental improvements every week, which, taken altogether, would lead to a substantial school-wide improvement. In other words, we were embracing the UK Olympic cycling team’s philosophy of going for marginal gains.

I was a bit wary about how staff would feel about it; it could come across as even more stressful. The SLT thought it a good idea, so I dropped the idea into conversation with various teachers, stressing how light-touch the approach was and promising no reviews by outsiders this year. These initial soundings were all positive. Then I discussed it formally at our end-of-year INSET day (held on June 23rd, when the school was closed for the referendum) and no one objected, so into our development plan it went.

Over an enjoyable dinner in early September with my two former deputies, both now head teachers in their own right, there was much chortling about how impossible I would find it to fit it all in. This was actually a great spur to prove them wrong. We had already set up our next dinner date for early December. I was determined to come back with tales of how well it had worked.

How it worked was as follows. We didn’t start until the third week back, to give people a bit of time to get to know their classes. In my experience, the first two weeks are full of realising you’ve pitched work too low or too high, so I thought a bit of grace time to sort that out would be more productive all round.  There are 8 class teachers plus the deputy, who teaches groups in the morning. Of these 9 teachers, I would aim to see 7 a week while the deputy would see the remaining 2 (obviously she didn’t observe herself). The SENCO would dedicate every Thursday to seeing TAs when they were explicitly supporting children with SEN, aiming to get through them all every 3 weeks.  She saw each TA with each child – so where TAs work with more than one child, she would do a separate observation for each child (rather than one per adult). The Early Years leader would use part of her non-contact time each Friday to see at least two of her support staff (out of a team of 6) each week. I would then mop up anyone not included in the system (for example, a couple of TAs who do not work with statemented/EHCP children at all), staying 5-10 minutes longer in the class where I was seeing the teacher so I could also take notes on their role. Notes were written up as bullet points on an Excel spreadsheet, and feedback was given during assembly, at the next break or after school. Sometimes, if a TA was in class just before lunch or the end of the day, they would cover the last 10 minutes so feedback could be done then.

I immediately loved the new system. I got into classes so much more regularly and could give bite-sized feedback that was acted on immediately. I also timetabled myself to sometimes go at unusual times – at the very beginning of a lesson as the children came in, or during story time, for example – to check that every minute of the teaching day was being exploited for maximum learning gain. Often feedback was all positive, with only the most minor point being picked up on (that green pen on your whiteboard is hard to read from the back of the class – try something with better colour contrast[1]) or suggestions for how to build on what I’d seen rather than development points as such.  Often, when there were points, they were about getting smoother, slicker transitions – something we need to introduce as a proper whole-school system in due course. It also allowed me to introduce things I’ve had waiting in the wings for the right time, such as SLANT, to teachers for whom they seemed appropriate. For example, in one class children listened really well to the teacher but possibly in a less focused way when their peers were addressing the class. Getting pupils to track each other when talking has really increased their attention to each other. Most of my comments were generic, about teaching itself – use of voice, transitions, questioning routines – rather than subject-specific as such, although subject-specific points did feature.

Alongside these ‘drop-ins’, as we called them, the maths leader continued to coach and team-teach with teachers new to our maths mastery programme for the full hour during her non-contact time; the early reading teacher runs a coaching programme for all staff (teachers and TAs) teaching phonics, rotating around them, sometimes coaching, sometimes modelling; and our literacy lead is in different classes each week, helping staff implement the new system that has replaced guided reading and modelling how to give feedback on writing now that we do not remotely mark any writing. Obviously, if during drop-ins we had encountered concerns about teaching, we would have had to swap the drop-in programme for something more formal for the person concerned, but that is not where we are. I did see one (10-minute slot of a) lesson that did not work – it was a guided reading lesson, before we replaced them, and it exemplified everything about why we moved away from that model: independent groups busy doing activities that don’t actually help them learn much. So we had a discussion about what learning the teacher had assumed would occur, problem-solved why reality and aspiration were so adrift, and moved on.  I already knew this teacher was usually fabulous, but under this system I was back the next week anyway, and the next and the next, so any sustained loss of form would have been quickly picked up. The system allowed me to see it for the blip it was – we all make mistakes.

We also still do work surveys, although we now look at pupils’ books in lots of different ways. Most staff meetings involve bringing books along for us to share together what’s working and what’s not, and we now make sure every full governing body meeting, and some committee meetings, involve looking at pupils’ work. Again, because books are looked at so often, problems are picked up early and the whole thing becomes routine rather than a make-or-break, high-stakes stress-fest.

All the staff doing the drop-ins liked them and felt they knew so much more about learning in the school than previously. The SENCO managed, more or less, to stick to her Thursdays-for-drop-ins timetable, although time for feedback can be an issue. Unfortunately, due to ‘exigencies of the service’, the deputy is now teaching almost all day, so she had to drop out of the rota after a few weeks – she manages maybe one a week on a good week. But how did the staff feel about them?

Bit by bit, various teachers volunteered feedback.  One NQT said she loved the new system and found it so much less stressful (the previous year she was a TeachFirst trainee) and more helpful to be given regular small steps to work on. One teacher who gets ridiculously nervous around observations said it really helped, because now they happened so regularly she just couldn’t get herself all worked up about them. As a result, I’ve actually seen her teach really well rather than being impeded by nerves. A teacher new to the school, who previously came from an outstanding school where, reading between the lines, observations had been extremely high stakes, said that initially she thought it sounded a bit barmy and lacking in rigour, but having had it for a term she now found it simultaneously less stressful and more rigorous – in that it actually moved her practice on more effectively. Previously she would go to extreme effort, put on an all-singing, all-dancing show and be given very positive feedback. Now we saw her bread-and-butter, day-in, day-out teaching, warts and all, so she received feedback that helped her improve without feeling she had failed in some way. She could risk teaching normally.

In order to get a more rounded picture, I wrote a quick 7-question survey using Survey Monkey and attached it to the weekly calendar I send out every Friday, asking colleagues to let me know how it was going. Out of 19 respondents, 17 preferred the new system and 2 did not have a preference – so no going back, that’s for sure. The next question asked how useful the feedback they had received was: 11 (again out of 19) said very useful, 6 fairly useful and 2 occasionally useful. Considering some people are so strong that most visits I failed to find anything of significance to feed back, and it is mainly meant to be about finding marginal gains, I’m more than happy with this.  Asked if they agreed with the feedback given, 10 said yes, always, 7 said often and 1 said sometimes (1 person didn’t answer that question). Asked if they thought they had improved as a result, 7 said they had improved a lot, 9 said they had improved in the areas identified and the remaining 3 a bit. I can’t know for certain, but I’m assuming that last group is the strongest teachers being tactful in the face of my not very helpful feedback.

Timetabling the drop-ins was the biggest headache, so I was hoping that staff were now so blasé about them that they wouldn’t need any prior notice and would be happy for me to pop in at any time… in my dreams. 5 staff would be happy with this, and 1 – the one who used to get really nervous – said she would be OK if she knew the day but not the time. The remaining 7 said a resolute NO, so I guess that is not going to change. Not yet, anyway.

The other bugbear is time for feedback.  Sometimes I feel it would work just as well by email, as long as both parties reserved the right to ask to meet in person, either before or after the observation.  This really divided opinion: 5 said it was a great idea, 6 didn’t mind either way, 4 would prefer face to face but didn’t hate the idea, and 4 hated it as far too impersonal. So next term people will get a choice between email and in person, although the person dropping in will reserve the right to meet face to face if what they need to say is too complicated or needs the human touch. With 11 people opting for email, I’m hoping that will save loads of time.

Finally, I asked whether they would like the opportunity to drop in on colleagues themselves.  6 said they would find this very useful, 9 quite useful, and 2 said yes, but it would need to be for longer to be useful. The remaining 2 said no thanks, they would find it too awkward.  I also asked colleagues to email any further thoughts, although these comments would not be anonymous. Only 1 teacher did. She reminded us that when we launched the idea, the SLT were sometimes going to cover the class teacher so that the class teacher could observe (and subsequently better guide) their TA.  In class, teachers are too busy doing their own role to observe their TA as well, yet this would be really useful. We had forgotten this, and it was great to get the reminder – something we must remember to do next term.

So overall it’s been a great success and one that we will continue to build on. It’s been particularly useful in enabling me to track the implementation of key development plan initiatives as, week by week, things move from being innovations to becoming routine.  Or not.  At my evening out with my former deputies, I reported back that apart from one week when I was ill, one week when everyone was either out training, on a trip, doing an art workshop or performing in a concert, and the last 3 weeks of term when it was all assessments and nativity plays, I had done every week – 8 weeks in all. Next term I will skip the first week, as we are doing pupil progress meetings, and probably the last, but intend to do every week in between. Some weeks I’ll be covering teachers so they can observe their TAs. I’ve also started teaching Year 6 science one lesson a week, so I will have to get myself dropped in on, at least occasionally.

It may not be suitable for all situations. One of my former deputies has recently taken over a school that was in a bit of a mess (though it thought it was amazing). She didn’t think it was ready for this yet, particularly as she didn’t yet have a strong leadership team around her whose judgements she could trust.  And it’s a bigger school, so she would absolutely need colleagues to help out. She’s thinking about starting it with her leaders, though.

If I could change one thing about how we implemented it, I’d change how we recorded it.  I copied Leverage Leadership and created a spreadsheet with a page per teacher. Maybe they are much more familiar with Excel than we are, or used a different programme, but this has been so unwieldy.  I finally know how to use the return key and add a bullet point in Excel, but I have to remind myself afresh every time I use it. Then there was the problem that if one person had the spreadsheet open, no one else could use it. We are only a small school, so I don’t think we need an expensive web-based system like BluSkyEducation (although do leave a comment if you have a suggestion for a system that might work), so we will probably transfer over to a Word-based system next term.

Looking back, I’d never return to the old system. It seems so inflexible and uninformative.  This system tells me what I need to know about the school improvement journey, helps staff improve, reduces the stress they feel and reaches all staff – not just teachers.  I’m looking forward to our staff survey next May and hoping various key indicators show a marked improvement.  I’ll still be grumpy initially though.

[1] They’d only used the green pen for a couple of comments – not for the main text.

What if lesson observations were every week? How we reduced stress by observing staff more often.