Interviews – why do we make them so complicated?

Many moons ago, as an NQT, I remember that attending an interview was a relatively straightforward process. I was rarely asked to teach a lesson, for example. Much of the interviewing was in the style of a friendly chat. Yet I was aware that behind the seemingly innocuous questions, some razor sharp minds were evaluating my responses.

“So Mr Fish, I see you’re a Man of Kent. Or is it a Kentish Man? What is the difference by the way?” The head teacher looked up from poring over my CV.

My actual thought was, “I bet you know the difference full well!” But when you think about it, my ability to answer this seemingly unimportant question would have told the head teacher a lot. An inability to answer it would suggest either that I had been dishonest on my CV and hadn’t actually grown up in Kent, or that I lacked a piece of general knowledge about my home county, which might indicate a lack of interest in, or commitment to, the community where I lived. For those not from Kent wondering what I am on about, the deciding factor (though sometimes disputed) is which side of the River Medway you were born on.

I was born in Dover and attribute my interest in foreign languages and other countries to walks by the sea with my parents. They would point across the Channel, saying, “You can see France today!” The distant grey line of cliffs was fascinating for me. France. What was it like? I wondered. This line of reasoning would doubtless be dismissed by modern-day interviewers, who would expect me to spout some stuff about having been inspired by a passionate MFL teacher at my school. Not that my teachers weren’t passionate subject specialists, but my interest went back to before I ever started learning languages at school.

I remember once taking a prospective teacher round my current school. This candidate had impressive credentials and had apparently taught a wonderful lesson earlier that morning. But the lack of interest in the school, the department, education or life in general told me all I needed to know. I am often told that watching a candidate teach is a good thing because you can see how they interact with the children. I disagree – a show lesson is a totally artificial environment. How they interact with the children taking them round the school is far more revealing.

Children interviewing prospective candidates has understandably had a bad press and I suppose I am lucky in that, when I have experienced it, I did not have a problem. I remember inwardly smiling when it was obvious that the interviewer (the head boy) had been on Rate My Teachers to look me up before the interview. If there are to be student panels (I have reservations), I think it should be older pupils who have some position of responsibility. But actually, for secondary schools, I would say an interview with the head teacher, the deputy and another with the head of department (not at the same time on a panel, but separately) is enough.

I’m afraid I don’t do the usual stuff when I interview. I don’t ask about a lesson they thought really went well. Nor do I ask them questions about pedagogy. This is because I believe that if I do, I will get people trying to guess what I want them to say. So I tell them about the school and the department and watch and listen carefully to how they react to what I am saying. This doesn’t mean I want someone who just nods and smiles at what I say – I am looking for sparks of interest or a willingness to ask me challenging questions. Simple but effective, in my opinion.

Perhaps the only good thing about the current recruitment crisis is that the long, over-complicated interviews, scrutiny of lesson plans and expectations of an all-singing, all-dancing show lesson are on the wane. While I would say that more than one person needs to be involved in the interview process, a chat and a tour of the school will usually tell you all you need to know.


Progress or attainment? The real question

As an MFL teacher, I am used to the fact that many pupils find my subject extremely difficult and much harder than their other subjects. Actually, I do not believe that UK pupils are intrinsically less able in MFL than pupils in other countries. However, schools in other countries seem to recognise that “little and often” is essential in MFL if material is not to be forgotten. I recall my time spent teaching in a selective school in eastern Europe where pupils did not start learning a foreign language until their equivalent of our “Year 9”. However, once started, they had nine 45-minute lessons a week in the first year. They were taught very traditionally, with a lot of drilling and grammar practice. As the native speaker, I had to do the “conversation lessons”, which were based on the particular grammar points covered. Surprisingly, you might think, other subjects including maths, science and the native language were relegated to just one or two periods a week in that crucial first year of pupils learning a foreign language. The time for foreign languages was accordingly reduced in the second and third years. But the pupils’ level of English after three years was far in advance of the standard needed to gain an A grade in MFL in the UK.

However, I digress. I accept I have to deal with the curriculum time I am given and try to make it work as best I can. Before “Progress 8” this was much easier. Basically, my task was to ensure my pupils passed, i.e. got a grade C or above. As they were academically able pupils, they were usually doing so in their other subjects, and if there was a subject where they weren’t quite making the grade, it was often (but not always!) MFL. In response to this, my department began to organise appropriate “intervention”. Pupils would turn up (however reluctantly in some cases!) and our underachievers began to get the vital C grade, which they knew was considered the key “pass grade” and could therefore be put on their UCAS forms. I lost count of the number of times Year 11 pupils rushed up to me on results day delighted at having attained that C grade, even when they had a string of As and Bs in everything else.

Since Progress 8, the situation is rather different. Instead of just having intervention in their “worst” subject, pupils are now pulled all over the place. They are told they need to attend intervention in every subject where their current attainment is below their target grade. The “Moving the A to an A*” intervention group (soon to be moving the 8 to a 9!) is of equal importance to the “Moving the D to a C” group.

One unintended consequence of this is that pupils attend the intervention sessions in their best subjects, rather than their worst. After all, a pupil working at a grade A standard in a subject might well be considering taking it for A level. But what if their target grade is an A*?  Right then, intervention is required! But what about the subjects where the pupil isn’t quite making the C grade? Oh, no need to bother with those – it’s too much effort to get them up to a “pass grade.”

Another consequence is that if a pupil is below target in English and Maths, the school is likely to encourage the pupil to attend intervention in those subjects at the expense of others, even if they are already achieving top grades in them. English and Maths count for double points, remember! So off they go to unnecessary intervention in those subjects and then are too exhausted, or have no time, to attend intervention in subjects where they are not quite at a pass grade standard.

The Progress 8 cheerleaders might be happy with this situation. I am not. As I have stated before, I see nothing wrong in having a pass/fail benchmark. In the world outside education, no one seems to have a problem with this. Speaking personally, I would find it far more satisfying to say “I passed the test” rather than “I achieved my target grade.” I also feel it is more beneficial for a pupil to have achieved pass grades in a range of subjects, rather than superb grades in Maths and English (double points in Progress 8!) and mediocre “fail” grades in subjects which do not count for so much under the Progress 8 system. I may be a lone voice at the moment, but I look forward to the abolition of Progress 8 in a few years’ time…


The sheer pointlessness of “fine grading” with the new GCSEs

From what I can make out, fine grading essentially did to GCSE grades what “sublevels” did to levels. Instead of inputting a grade, you inputted a grade and a number. For example, A1 would imply that the pupil was at the top of the A grade range and borderline A*, A2 meant the pupil was in the middle of the A grade range, and A3 meant the pupil was at the bottom of it.

Let’s face it, the controlled assessment regime lent itself to “fine grading/laser grading”, and I could see its value under that system in my subject, where 60% of the GCSE was down to controlled assessments. Basically, you could add up the UMS marks gained in controlled assessments and work out precisely how many UMS marks a pupil would need in the “final examination” to get a certain grade, thanks to the converter tables published by the examination boards. While the final examination was an unknown, the controlled assessments were under the control of the teacher. Pupils were therefore encouraged to repeat controlled assessments in order to improve and “bank” even more UMS points. As a consequence, GCSE predictions could be made with some degree of certainty.
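The arithmetic behind that “banking” approach can be sketched in a few lines. The total and grade boundaries below are purely illustrative, not taken from any real specification:

```python
# A minimal sketch of the old UMS arithmetic.
# These boundary figures are invented for illustration only.
GRADE_BOUNDARIES = {  # hypothetical minimum UMS per grade
    "A*": 360, "A": 320, "B": 280, "C": 240,
}

def ums_needed_in_final_exam(banked_ums: int, target_grade: str) -> int:
    """UMS marks still required in the final exam to reach target_grade."""
    return max(GRADE_BOUNDARIES[target_grade] - banked_ums, 0)

# A pupil who has banked 250 UMS from controlled assessments
# would need 70 more for an A (320 - 250).
print(ums_needed_in_final_exam(250, "A"))  # → 70
```

Because the banked component was fixed and known, a sum like this could be done for every pupil, which is exactly why predictions under that regime felt so certain.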

As has been said many times on Twitter, blogs and by the examination boards, the new system does not lend itself to this. To my mind, therefore, a new system is required. In my view, no predictions at all should be made until the spring term of year 11.

“But how do you decide whether a pupil is making sufficient progress?” is the next question. And this is where I become controversial.  For I actually believe that there should be a threshold grade in GCSEs which denotes a “pass”. Grades below that should be a fail. All we can do, until the end of year 11, is look at a piece of work and say “Based on that particular unit of work, has the student demonstrated a sufficient understanding of the knowledge and skills involved in that particular unit?” It is then up to individuals and departments to judge whether they have or have not. A simple yes/no system could be created for tracking purposes.

And here I come to what has always been my issue with Progress 8. I did not see anything wrong with schools focussing intervention on the old “C/D borderline”. Yes, there were probably others who with intervention could have moved their As to A*s. At the other end, there were others who could have moved their Gs to Es. Schools are now trying to focus intervention all over the place and it does not work. At the end of the day, in the world outside education, attainment rather than progress is what matters. Progress is simply a means to an end. Of course we should praise pupils who strive towards a goal. But progress should not be an end in itself. The world will not stop asking for pass/fail exams, however much some in education would like it to.

Imagine sports day. You won the race? You expect a cup? Oh no! After all, you were 2nd last year, so you’ve made less progress than the child who came 2nd this year, but was 4th last year, so they should get the cup! As for you, the fact that you won the race is irrelevant. You need to go away and have some intervention to boost your progress!

Sounds daft, but that is the logic behind the denigration of “attainment” and the celebration of “progress”. At this point someone often says, “Yes, but we shouldn’t set up a school system which creates ‘losers’.” But would it? If I fail my driving test, I can try again next year. This is the rationale behind the idea of “repeating a year”, which is used in many European countries.

To summarise: I realise that a lot of people will disagree, but I think we should shift our focus back to “attainment”. The whole nonsense about predictions is the product of a system which has become obsessed with forensically measuring progress.


The astounding diversity in curriculum provision

As a head of department, I am interested in how different schools make provision for my subject, MFL. This is often difficult to discover. While all school websites state the subjects offered, it takes considerable effort to find out just how much curriculum time is offered to each subject. Often it is not stated at all. When it is, a lot of calculation is required. School periods can be anything from 35 minutes to 3 hours. Most schools that I have seen seem to have 50 to 60 minute periods. Some of them run two week timetables. Some run carousel systems for some subjects. Some change the length of lessons according to the year group.

In the event, I took four different schools, three of which had periods of 60 minutes and the other one periods of 50 minutes. All four were state schools. I added up the minutes given to my subject in all four schools from year 7 to year 11. I assumed an academic year of 40 weeks in all cases, which I am aware does not take into account the fact that year 11 finish early. Nevertheless, the results were astounding.

School A had 720 hours of MFL teaching with 2 languages studied in KS3

School B had 580 hours of MFL teaching with 2 languages studied in KS3

School C had 500 hours of MFL teaching with 2 languages studied in KS3

School D had 400 hours of MFL teaching with 1 language studied in KS3

Therefore, over a period of 5 years, a pupil in school A would have some 320 hours more MFL teaching than a pupil in School D. Of course, it might be argued that School D chose to concentrate on just the one language, therefore they would have similar curriculum time to the other schools for that one language. But then look at the gap between School A and School C, where in both schools 2 languages are studied. There are 220 hours more MFL teaching in School A.
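For anyone wanting to reproduce this kind of calculation, here is a rough sketch. The periods-per-week figures below are hypothetical examples (not the actual timetables of Schools A and D), chosen so that they happen to land on the same 720- and 400-hour totals:

```python
# A rough sketch of the curriculum-hours calculation described above.
# Periods-per-week allocations are illustrative, not real timetables.

WEEKS_PER_YEAR = 40  # the assumption used in this post

def total_hours(period_minutes, periods_per_week):
    """Total MFL teaching hours across Years 7-11.

    periods_per_week maps year group (7-11) to MFL periods per week.
    """
    minutes = sum(period_minutes * n * WEEKS_PER_YEAR
                  for n in periods_per_week.values())
    return minutes / 60

# 60-minute periods: e.g. 4 a week in KS3, 3 a week in KS4
print(total_hours(60, {7: 4, 8: 4, 9: 4, 10: 3, 11: 3}))  # → 720.0

# 50-minute periods: e.g. 2 a week in KS3, 3 a week in KS4
print(total_hours(50, {7: 2, 8: 2, 9: 2, 10: 3, 11: 3}))  # → 400.0
```

The point of spelling it out is how sensitive the total is: a difference of one or two periods a week, compounded over five years, is what produces a gap of hundreds of hours.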

As an MFL teacher, I am naturally envious of School A. Yet I am aware that all this provision for MFL must come at a cost to other areas of the curriculum. Maybe a history teacher, a technology teacher or a creative arts teacher in School A could do a similar calculation and find the gap just as wide in the other direction. I suppose I have two questions.

1) Is this diversity to be welcomed?

2) Is it fair?

Although I am broadly traditional and I believe in evidence-based practice, the liberal in me tends to recoil at the thought of scripted, uniform lessons. I can’t explain it. Maybe I would be won over if I saw it in action. Yet at the moment, I am not convinced. I would feel a twinge of regret if scripted lessons meant the demise of the eccentric, individualistic teacher. And while I use textbooks and generally follow the course book, I would hate having to use someone else’s lesson plans all the time. I believe there should be scope for schools to try new ideas. In addition, if a school has a large cohort of pupils who do not have English as a first language, I can see that it makes sense to account for this in curriculum design by giving more time to English. If a school has a specialism in a particular subject area, this obviously has to be considered when allocating curriculum time to subjects. So I guess my answer to my first question is, “Some diversity should be welcomed – yes – but this much…probably not.”

Because however you look at it, it cannot be fair that academic outcomes in School A and School D will be judged without any reference to the amount of curriculum time offered. A gap of 320 hours is, I would say, simply too much.

So what is the solution? As I say, I would not like a situation of total uniformity. Yet it does occur to me that parents looking at the subjects offered by a school can easily be deceived, since schools are not obliged to publish the curriculum time given to each subject over a period of years. A number of schools now offer Mandarin Chinese as they believe it will appeal to parents. I remember reading a comment by a pupil on the teaching of Mandarin at his school. “You can’t learn such a difficult language to any decent level if you only have one period a week”.

To conclude, therefore: if I were the responsible minister at the DfE, I would introduce a requirement for schools to display on their websites the number of hours given to each subject in each year group. Yes, I know, yet more wretched data! But I come back to the 320-hour gap and I think, “Parents do need to know curriculum time allocations”. Schools should not be able to conceal just how much or how little time is being given to a subject.

Incidentally, I often hear “grammar schools” lumped together as one homogeneous group. In my table, only School D is a comprehensive. Schools A, B and C are grammar schools….


Another chat over the custard creams

One of my most read posts has been A chat over the custard creams – a light-hearted piece which I’m told raised a few smiles. I thought I’d do a follow-up…

  • So, Miss Pedagogy. You see the problem?
  • I do, Mrs Head!
  • I’ve managed to talk the governors round to introducing Project Based Learning in the lower school. Not just me, of course. I’m really grateful that Mr Verynaive was elected staff governor and put on the curriculum committee.
  • (Shocked) Mrs Head! We don’t call it that!
  • Of course! Sorry! The Facilitation of Learning Committee. Chaired by Mr Meanswell. He attended a fantastic course on the knowledge-free school.
  • Really? You mean it was called that?
  • Oh no! It was called “Forward looking governors in forward looking schools”! Brilliant, eh?
  • Great, Mrs Head. So what’s the issue?
  • Well, firstly, I’m a little worried that results might drop.
  • That’s easily sorted, Mrs Head. Compulsory after-school and holiday interventions in KS4 to make up for anything they might have missed out on at KS3.
  • Good idea! Secondly, I want to put on the website that our staff are engaged with research.
  • But that’s great, Mrs Head. Don’t worry, I’ll make sure they get given a list of suitable articles. And Mr Verynaive has volunteered to pilot the idea with his class in the summer term. Followed by the student voice survey asking the children how it could be made even more fun! That’ll keep the staff on their toes!
  • Yes that’s great, but what about Mrs Stuckinthemud and Mr Awkward?
  • Coasting teachers who refuse to engage with research!
  • But that’s just it, they do engage with research. They’re on Twitter! They question our ideas!
  • I see the problem, Mrs Head. They’ll read the research we don’t want them to read!
  • Yes. You know, that annoying thing from the EEF. And that paper from Kirschner!
  • It’s all right, we’ll counter it. I’ve got some good research from the 1970s in favour of it.
  • Brilliant. Now then, KS4. The move to introduce APP grids in each subject.
  • Ah yes. Problem is, staff in some subjects say descriptors might not be appropriate for marking essays.
  • What? Never heard such a thing! Everything must be broken down and classified. Everything!
  • Some of them are expressing an interest in this thing called comparative judgement…
  • Damn these modern ideas! I’ve heard of this and it’s terrible! Those poor students! How can anyone write an essay or story without descriptors to tick off?
  • Well I gather a lot of people do and indeed have done so in the past…..
  • But they need to know what to do to improve! How can the poor students do this unless they have criteria to tick off?
  • I believe you show them exemplars Mrs Head. That way, they can see that things that work in one essay are not necessarily appropriate in another – it depends on context.
  • Ridiculous! If the descriptor states “use longer sentences” that’s what they do!  How on earth do you mark them?
  • I gather staff simply take essays and say which one is better. The computer eventually ranks them. You can add some exemplars to the set if you wish. Apparently for essays it works much better than ticking off a list of descriptors…
  • Dreadful! Thank goodness the exam boards are sticking to descriptors then!
  • I agree Mrs Head. But this could change…
  • Oh, Miss Pedagogy! Life was so much easier when teachers didn’t do their own research and just read what we told them to!
  • I know, Mrs Head. Sadly, it’s the world we live in today……

How I avoided descriptors – assessing against questions rather than criteria

In my subject, MFL, descriptors are everywhere and I’ve always disliked them, for the reasons given by Daisy Christodoulou in “Making Good Progress”. Here are some descriptors I have come across:

Can write some single words from memory, with plausible spelling

OK – how about this: je    salu   sack   an   gomm   abite   mappell   onz

Can write simple words and several short phrases from memory, with understandable spelling

This maybe? je mapple         dan mon sack       jabite       fermay la port         eel y a   quell nombr

Can write words, phrases and short simple sentences from memory, with understandable spelling

This? je mappel Olivia       jai onz an       eel y a un livres dan sack     mon anniversary cest le 13 may

Yes – you can see “progress”. But what is the difference between plausible and understandable? Who decides? And indeed, as a French national said to me, “they are all rubbish anyway.”

However, the above examples of descriptors are better than some I have seen. How about “Uses a range of linguistic devices” (what do we mean by a range, and what do we mean by a linguistic device?) Or “Can write longer sentences” (longer than what?)

I rejoiced when levels were abolished. But I was aware of the danger of simply replicating them and wanted to avoid lessons where pupils spent time ticking off vague “can do” statements on APP grids. Language learning is too complex to be defined in a list of descriptors. I became attracted by Shaun Allison’s idea of the Growth and Thresholds model of assessment and looked at ways this could be adapted to MFL.

One thing I decided to do was avoid simply dividing each assessment into the 4 skills of listening, speaking, reading and writing. As far as I am aware, other European countries do not use this rigid classification when teaching English. My summative assessments (which normally take one lesson – an hour in my school) therefore consist of two sections: “Linguistic Competence” (which may comprise reading, listening or writing tasks) and “Grammar and Vocabulary,” which usually encompasses translation to and from the target language or gap filling tasks. Both sections have equal weighting. We use textbooks and conduct assessments at the end of every unit in Year 7 and every two units in years 8 and 9. There is a threshold mark in each section for “Above expectations”, “Meeting expectations” or “Below expectations”. This fits in with the system my school uses to track progress since levels were abolished.
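As a sketch of how such section thresholds translate into a tracking band – the percentage cut-offs below are invented for illustration, since where exactly to set them is a matter for each department:

```python
# A minimal sketch of threshold banding for one assessment section.
# The cut-off percentages (75 and 50) are hypothetical examples.

def band(percentage, above=75.0, meeting=50.0):
    """Map a section percentage to a tracking band."""
    if percentage >= above:
        return "Above expectations"
    if percentage >= meeting:
        return "Meeting expectations"
    return "Below expectations"

print(band(82))  # → Above expectations
print(band(61))  # → Meeting expectations
print(band(43))  # → Below expectations
```

The appeal of this over descriptors is that the judgement lives in the question paper and the threshold, not in vague “can do” statements.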

But what about speaking, I hear you ask? Well, the problem with speaking tests is that they take up a ridiculous amount of time. Moreover, I remember from teaching in other countries that they managed to get their pupils speaking without hauling them out to the front one by one for conversation, or without whole lessons where teachers went round the class listening to pair work. So I leave a formal speaking test until the end of the year. This of course does not mean that we do not get pupils speaking in lessons – far from it – but it does mean we avoid lessons where the teacher’s time is taken up hearing 32 pupils say not very much.

When assessments are given back, pupils write their own paragraphs saying where they did well and where and why they went wrong. These paragraphs are checked by the teacher. Mostly we just tick them off. If a pupil has not analysed their performance sufficiently, we add our own suggestions.


The first challenge I anticipated was to ensure that the questions we asked in our assessments really did encompass as much as possible of the language covered in that unit of work and also contained both easier and more difficult tasks. Fortunately I have a team of experienced teachers and we were able to do this without too much difficulty.

The second challenge is one we are still working on – where to set the thresholds for each section in each assessment. What is the minimum percentage in each section needed for “Above expectations”, “Meeting expectations” and “Below expectations”? We are still discussing this.

Will it work long term? Too early to say. I had hoped that GCSEs would move towards the mixed-skill approach used at A level. Sadly, this was not to be. My current hope is that the work being done on Comparative Judgement will spread to MFL GCSE writing tasks. Having marked to rubrics for an exam board, I have become convinced we need a different system for assessing writing tasks. Comparative Judgement is, in my view, the right way to go.
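For readers unfamiliar with how Comparative Judgement produces a rank order: judges see pairs of scripts and simply pick the better one, and the accumulated decisions are turned into a ranking. Real CJ engines fit a statistical model (Bradley–Terry style) to the pairwise decisions; ranking by raw win count, as in this toy sketch, is a deliberate simplification:

```python
# A toy illustration of comparative judgement.
# Real systems fit a Bradley-Terry-type model; this uses raw win counts.
from collections import Counter

def rank_scripts(judgements):
    """judgements: list of (winner, loser) pairs. Returns scripts, best first."""
    wins = Counter(winner for winner, _ in judgements)
    scripts = {script for pair in judgements for script in pair}
    # Scripts never chosen as winner get a count of 0 from Counter.
    return sorted(scripts, key=lambda s: wins[s], reverse=True)

pairs = [("Essay B", "Essay A"), ("Essay B", "Essay C"),
         ("Essay A", "Essay C")]
print(rank_scripts(pairs))  # → ['Essay B', 'Essay A', 'Essay C']
```

The attraction is that no judge ever consults a descriptor grid: each decision is a holistic “which is better?”, which is exactly the kind of judgement experienced markers are good at.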

No assessment system is perfect, but the abolition of levels gives us a chance to change things for the better, provided, of course, your SLT is willing to give some autonomy to departments. In that respect, I have been very fortunate. Most other departments in my school use a descriptor based model. As I say, I wanted something different…..


Learning a subject is not exactly the same as building a car – my issue with Assessment for Learning

As a not particularly sporty individual, it took me longer to learn how to play tennis than it did some of the others on the course I attended back in the early 1990s. Time and again, I couldn’t get the serve right. I knew the techniques and was trying to follow them. I asked the coach for guidance and he smiled. “You’ll get there, Fish, you just need more practice.”

“But everyone else seems to be getting it” was my response.

“Probably because they’ve done racquet sports before and you haven’t. Just keep at it.”

He was right. I’ll never be a tennis star, but after practising evening after evening, I did eventually “get it”.

The reason I am sharing this anecdote is that we are now encouraged to think that improvements in learning can always be framed in terms of “what they are currently doing wrong” and “next steps”. At a recent parents’ evening, I was asked the depressingly common question, “So what does she need to do to improve?” In this case it was the student’s translation skills which were letting her down.

The honest answer was “She simply needs more practice and she will be getting that over the next few months” and it was the answer I initially gave. Yet I could see that this did not satisfy the parent. “But where exactly is she falling down – what extra things does she need to do?” was the slightly puzzled response to my first answer. I then mumbled something about identifying different tenses, which was noted down with satisfaction. I had identified something concrete. “More practice” wasn’t really seen as OK.

Yet the student concerned could identify tenses when doing specific grammar exercises on tenses. What she and indeed all the other students needed was practice at identifying tenses in the middle of a prose passage which contained numerous other grammar points. Nor was it just tenses which needed to be identified, but relative clauses, passives, conditionals, adjective endings, the lot. Nearly all my students could identify and use these grammar points in specific grammar exercises. We were now beginning to look at putting all that knowledge together. This does not come immediately. It takes time, which is why I have always done translation with my classes, even when it was completely unfashionable. I have never bought the argument which says that if a language teacher practises translation with students, then they are addicted to the “grammar-translation method” and obviously never use the target language or engage in communicative tasks. I have always done both.

I gather Dylan Wiliam’s idea of “Assessment for Learning” took its analogy from Japanese car makers, noting how “quality assurance” took place at all stages and not simply at the end. This was then applied in education. I do not dismiss the importance of students knowing what they need to do to improve, but the analogy does not take into account the need for practice. A car maker who has inserted a widget the wrong way simply needs the error pointing out. In education, I have no problem with students having their mistakes pointed out – indeed, it is necessary for improvement. Nevertheless, education is not really the same as building a car. Sometimes the answer to the question “What do they need to do to improve?” is simply “More practice, which they will get throughout the course, so they don’t really need to do anything extra”. Yet a whole generation of parents, teachers and pupils is now so conditioned by the idea of analysing exactly what a student is doing wrong that the idea of simply needing practice has become anathema. Moreover, the practice needed will come over a series of lessons – a lot of the time a student does not need to do anything “extra” other than attend lessons regularly, where they will get all the practice they need.

Practice makes perfect. A hackneyed phrase, but it can be forgotten in an education climate of targets, WWW and EBI (“What Went Well” and “Even Better If”)…
