In this interview Professor Rama Mathew, retired from the Department of Education, University of Delhi, shares her views on formative assessment, a challenging yet crucial dimension of evaluation within education. In her discussion she covers both the secondary and tertiary education assessment systems. She also suggests several examples of assessment that teachers can draw on to capture the ‘growth’ of ESL learners.
Lina Mukhopadhyay: Thank you, Professor Rama Mathew, for agreeing to share your views on language assessment for this special issue of Fortell on Assessment. What, according to you, is assessment, and how can it be carried out in class to support learning?
Rama Mathew: The topic is close to my heart and I’m very happy to talk about it. Assessment within the school/college curriculum deals broadly with summative assessment (SA) and formative assessment (FA). SA is assessment of the sum total of learning in a given year, i.e. the ‘product’ of learning, while FA reflects a commitment to understand and support learning during the ‘process’ of learning. SA could also be seen as externally conducted, in that the Board/university takes responsibility for conducting a common test (or tests) across schools/colleges for purposes of comparability and certification. FA falls under the purview of teachers and students. In this sense we can contrast the two modes in terms of the purpose of assessment and also in terms of who is involved in conducting it.
LM: Formative assessment, more often than not, seems to be imitating the summative assessment format. Do you think that teachers are allowed to design formative assessments independently?
RM: Probably this perception, more common in schools than at the tertiary level, arises because school boards are increasingly ‘fixing’ test formats to such an extent that teachers hardly need to think or work on their own about what to test and how to test; it is all pre-determined by the board. This is probably because teachers are believed to be incapable of designing their own assessment tools, let alone constructing suitable questions/test items.
LM: Is this the reason why we focus a lot on content-based assessment in language classrooms in India?
RM: Content-based assessment, i.e. asking questions on already taught content (when the content is supposed to be only the means for fine-tuning your language in a language classroom), emanates from a prescribed textbook (PTB) approach. There is no easy escape from content-based testing in a PTB system: whenever the prescribed texts (poems, stories, plays, etc.) are not overtly tested, it is seen as ‘outside’ the PTB and is considered a waste of time and resources. I remember, when CBSE implemented the communicative approach and introduced the ‘Interact in English’ series, the Board strongly resisted too much outside content in the exam papers. The plea, then, was: how are the weak learners going to pass the exam? And interestingly, even teachers, at least those who did not receive adequate orientation to the approach, saw teaching the ‘Main Course Book’ as a waste of time, as there were not many ‘direct’ questions students could prepare for from the exam point of view. No one saw the value of practising all four skills in engaging ways and sharpening them in the process, for which this book provided ample opportunities; parents, understandably, opposed what they saw as unnecessary emphasis on listening and speaking skills, which were not formally tested then. Clearly the high-stakes final exam had a powerful washback on what happened in the classroom.
LM: Making language tests based on the prescribed textbook seems to be the norm. Is it in any way related to the idea of maintaining uniformity in test formats across schools?
RM: The answer is manifold, and its strands are intertwined: PTBs allow uniformity across schools and define the limits of the syllabus, which is otherwise elusive for a teacher. This way teachers are held accountable: exams that are based on a given PTB can easily be seen to be within or outside the syllabus. Further, teachers do not have to select materials on a daily basis, for which most of them would not have the necessary skills. We know that critiquing a set of materials, adapting them and using them according to varying student levels and interests is a highly specialised area and needs training. Therefore, PTBs actually serve as shortcuts to an otherwise complex task that not every teacher is ready for. There is also the issue of access to good and authentic materials; not every teacher has the time or the ability to find appropriate materials for the given syllabus. So what we have, in essence, is a standardised system where ‘experts’ put together a textbook based on the syllabus, the exam stems from that, and given these two fixed entities, the teacher ensures that students are prepared to answer exam questions so that they score high marks/grades.
LM: What is the role of the teacher in all this preparation – is it only to ensure that students answer questions?
RM: The teacher is merely an assembly-line worker and students are the ‘products’ that emerge at the end of the assembly line: the more uniform they are (all high scorers, with a few exceptions), the better for the system to justify the whole process. I know I’m presenting a very gloomy picture, and not all teachers are assembly-line functionaries. But given that the system is based on an input-output model, there is very little we can do about what happens in the classroom, the ‘black box’ as it is often called. That explains why we do not interrogate what teachers and students actually do inside the classroom.
LM: As part of this ‘gloomy’ scenario, are we not also undermining the potential of assessment, in addition to treating teachers as ‘assembly-line workers’?
RM: Assessment often fulfils a fait-accompli function. Although it has enormous possibilities to give ample evidence of where one is at a given stage, what one wants to achieve and how one can get where one wishes to go, we seldom exploit it to the full. CCE (Continuous and Comprehensive Evaluation), which CBSE introduced (and has now withdrawn), is an example of how something with huge potential is under-utilised or even distorted quite often.
LM: How did CCE influence teachers and what kind of tests were they required to create?
RM: Probably for the first time, the term formative assessment became part of teachers’ active vocabulary under this scheme. It was intended to break away from the typical ‘unit test’ concept and provided for assessing students’ learning continually, not just through paper-pencil tests but through a variety of methods such as quizzes, assignments, projects, portfolios, pair and group work, and so forth. Given the magnitude of operations and the need to maintain some sort of uniformity for purposes of comparability across schools, the Board required every school to follow a scheme of four FAs and two SAs during the two terms of an academic year. The SAs were typically paper-pencil tests with a given break-up for reading, writing, grammar and literature, based on a PTB. CBSE also set aside 20 marks for assessing listening and speaking at the school level, for which guidelines and very often actual tasks were provided.
LM: Given this fantastic structure proposed through the concept of CCE, are there no doubts regarding its implementation?
RM: Actually many questions arise in this context: Are teachers using a variety of assessment tools as part of FA? If so, how well are they being designed and used? What skill(s) do they focus on? We know from experience that good assessment requires a lot of training and practice. Are teachers equipped to handle this with a reasonable level of sophistication? Even as they learn on the job, is continuing support provided? The short (3–5 days) training in assessment that some teachers might have received is definitely not adequate. Therefore what appears to be a ‘modern’ scheme, while definitely an improvement over the earlier traditional unit-test model, is not achieving its full potential. I don’t have any research evidence to claim its efficacy one way or the other, but from what teachers generally report, practice seems to range from ‘not satisfactory’ to ‘quite good’. And this seems to depend on whether a given school expects its teachers to follow sound assessment practices or not. I can say one thing with some certainty: I have met many teachers who admit to their ‘ignorance’ of how FA should be carried out; worse still, they are not aware of what FA is actually all about. The very impressive scheme and the teachers’ manual for CCE seem to be just a policy document that outlines everything that is desirable, but what happens in reality is a far cry from what is envisaged.
Having said that, I must now mention that CBSE has withdrawn the CCE scheme and introduced what they call a ‘Uniform Scheme of Assessment’, which no longer talks about FA and SA but about Periodic Assessment (20%), comprising periodic tests worth 10 marks and notebook submission and enrichment activities worth 5 marks each. The yearly exam gets a weight of 80%. I’d call this move regressive, even if it is intended only to maintain uniformity across the 18,000-odd schools in the country.
The situation I described earlier, of neither entrusting the teacher with any responsibility nor supporting him/her with adequate training, is now further entrenched. The entire concern seems to be about making report cards comparable across schools for easy mobility of children from one school/state to another. I’m quite concerned that other states will now, whole-heartedly or half-heartedly, follow this pattern.
LM: Let me move to a slightly different but related area now. What about the assessment systems in colleges and universities in India? Are there similar problems to what we face at the school level?
RM: It is actually a different story with college/university teachers. There are, in my view, two types: one, a very traditional category where ‘old type’ paper-pencil tests are used that usually test knowledge and understanding of prescribed texts and some ‘stock’ essay/paragraph questions, peppered with discrete grammar and vocabulary items. The more recent type tries to make assessment as communicatively oriented as possible; this is found in some of the more recently set up progressive universities or engineering colleges/universities, where students are assessed on reading, writing, vocabulary and grammar items. The extent to which such tests manage to test the abilities they claim to varies from one situation to another: from not at all, or hardly, to quite well on a validity or effectiveness scale.
LM: So does that mean that in the more progressive universities listening and speaking also get assessed, given that they are an integral part of everyday communication?
RM: Almost invariably, students are not tested on speaking and listening skills in a formal way, as they are on reading and writing skills in a test situation. However, they are assessed on these skills, especially in the more progressive contexts, in a seminar mode through PPTs, group presentations, and so forth. This is then assessed internally by their teachers and usually counts towards the final grade. The English proficiency courses offered to undergraduate students at Ambedkar University, Delhi are an example.
LM: Can we say that these communicative modes of assessment help teachers capture ‘growth’ in learning the second language?
RM: Well, I would say that teachers can see ‘growth’ in students’ performance if they wish to. Put simply, whatever the criteria for assessment, if student X is making progress from assessment 1 to 2 to 3, it is clearly visible in terms of marks or grades. Teachers also have a good sense of how their students are doing through their observations of and interactions with students, but very often this goes unrecorded or unacknowledged. It is even better if performance is described qualitatively: Did the student(s) make fewer grammatical errors? Was their presentation more coherent, and in what way? Did it have a more effective introduction and conclusion, and so forth? One can clearly see a progression in the context of the criteria used. More importantly, this change in performance should be perceptible to students, provided we decide to involve them in the assessment design and process, i.e. what tasks to use, what criteria to apply, and the assessing process itself.
LM: Can you give us some more examples of formative assessment you have used in your courses?
RM: I would like to share my experience of teaching the course on Evaluation at the B.Ed. level and also the course on Qualitative Research Methods to M.Phil and Ph.D. scholars at CIE (Delhi University), where I worked. With B.Ed. students, we spent the first several hours critically examining the assessment practices they had been subjected to as students, both in school and college. This provided the basis for thinking about and practising ‘new’ approaches such as portfolios, open-book exams, writing as a process, etc. What made the course effective was that they, as students, experienced first-hand all the ‘new’ approaches we were learning about, to be used with their own students later on.
LM: Give us some more details of how the students were supported.
RM: On the research methodology course, the M.Phil and Ph.D. scholars worked in teams of 2–3 on a small research study and wrote it up with continuous support from each other and from me, the tutor. In the process they learnt about researching, collaboration and academic writing, which involved 2–3 drafts with peer feedback and all of that. Peer review involved looking for a good introduction and conclusion, coherence, hedging, the rationale for headings and subheadings, the language used and other features that they found relevant. By the end of the entire process, they produced a ‘paper’ that in 50% of cases could be considered for publication. They found the writing part quite tough but enjoyable, as they could see progress towards a full-fledged paper. Peer review, editing, giving and receiving feedback and doing at least three drafts were all required for a satisfactory grade on this assignment. That was the only way I could ensure everyone went through the process and understood what academic writing involved. During the final ‘test’ they critiqued their own assignment from the early stages to the end and graded themselves on it. I couldn’t have taught a course on Academic Writing without the mini research study providing the base. On the whole it was very satisfying and a huge learning experience for all of us.
LM: You must have guided quite a few doctoral dissertations on assessment. Does any example come to your mind where growth is systematically captured and reported?
RM: Nupur Samuel from the University of Delhi did a research study that involved young adults in learning to write. Students refined a set of criteria for assessing writing tasks, and in fact it was only when they understood what those criteria actually meant that they started making improvement. Of course, the research study was set up in such a way that it gradually enabled the involvement of students and empowered them to take responsibility for their learning.
LM: You have told us a lot about assessment at the school and college level, and also a little bit about research that you have guided in the area. Can I take you back to formative assessment and link that with training? Would you say that teachers can practise formative assessment only if they are trained?
RM: Assessment is all about practice: a good scheme can only go so far. Whether teachers who have to concretise it in live classroom contexts are equipped to handle it, including self- and peer-assessment, and how they feed the evidence back into their future teaching to improve learning: all of this is more easily said than practised. Therefore FA, which is the teacher’s responsibility, is much more complex and demanding than SA, which can be externally designed and managed. For this to happen the single most important component is training in assessment. We know that training programmes that offer courses or modules on classroom methodology seldom have a full module on assessment. CIEFL (now EFL University), where I worked for several years, offered assessment only as an optional course that teachers could opt out of. But a teacher is by default an assessor and can’t opt out of it. Even the most famous B.Ed. programme had the course on Evaluation as an elective, and on the two-year programme, I understand, it now has half the weight of a methodology course. But why? What is the rationale? My colleague who teaches courses on assessment at the Central Institute of Education in Delhi University remarked quite seriously: ‘Where is the need for teachers to learn about assessment? Anyway students have to pass!’ We have trivialised this field so much that we will have to work quite hard to redeem it.
LM: I agree that it has been trivialised. Any way out of this problem?
RM: We will need to professionalise assessment in a way we have not done so far. We will need to spend time, effort and money on assessment training. When we shy away from it, the results are dangerous and harm the education system as a whole. A last comment on this issue: at the tertiary level, no training of any sort exists, and we can imagine how teachers stumble and learn things on the job. A recent thesis on how college teachers conceptualise learning and assessment throws light on many of these issues (see Violet Macwan’s thesis from Delhi University).
LM: Undoubtedly formative assessment design and practice is challenging. What are your suggestions for teachers?
RM: First of all, as I said earlier, teachers need one thing for which there is no substitute or shortcut: training in assessment. While it doesn’t have to be face-to-face for a given number of days, it will have to be a planned, structured and hands-on programme where teachers can together read about, discuss, design and construct assessments for different levels of learners. There are many MOOCs available, for example Designing Assessments to Measure Student Outcomes (https://www.futurelearn.com/courses/assessments-student-outcomes). Teachers in a school or college can form a friendly group and decide what they want to learn in assessment: is it how to monitor student progress in the classroom; or developing test items to assess different skills, i.e. listening, reading, writing and speaking; or developing appropriate criteria for marking the productive skills, i.e. speaking and writing? The area is pretty large and one has to go about it in small steps. A very important dimension of teachers making such efforts together is that they can try out new ideas in their classrooms, share their experiences with others, and in this way engage in action research. I would like to re-emphasise that assessment is all about practice. All the theoretical concepts come alive in practice; knowing ‘theory’ will not automatically ensure quality assessment.
LM: Do learners have a role in formative assessment as well?
RM: Yes, a crucial dimension is to involve learners. When you ask them what kinds of assessment they like and how they would like to be assessed, you will be amazed at how much they know and how well they can assess themselves or their peers; of course, you will need to monitor them unobtrusively and guide them as they progress. We often feel that if we leave assessment to students, they might cheat or inflate their marks, or that they don’t know enough to be able to assess, which is in fact the teacher’s job. Some of this might even be true, but when you create an atmosphere of mutual trust and bonhomie, part of what we call ‘learner-centred pedagogy’, you will see that children of all age groups can be ethical, trustworthy and honest, and more importantly, competent. In one study that I carried out long ago, I found that students of Grade 9 were more ‘critical’ of their writing than I was, and they always gave themselves at least one score less than I did.
LM: If teachers decide to get help from assessment experts in training, will that help?
RM: I’d say that when teachers get together and decide how they might want to go about training themselves, they might want an assessment ‘expert’ to guide or mentor them: it is always possible to call experts in, but be very sure of what you want them to do and at what stage of your group’s work you want their inputs. Given a chance, experts will give you good lectures on how to do credible assessment, but these are seldom useful for actual work. And these days ‘experts’ are available at a click of the mouse; there is so much help available in the form of YouTube videos, TED talks, PPTs and articles that we just need to spend some time on the computer. But one thing: I’ve found that it’s fun to learn together and share.
LM: Many thanks, Professor Mathew for sparing your valuable time for this interaction and sharing your thoughts and suggestions on this significant area of assessment.
Lina Mukhopadhyay, an Associate Professor in the Department of Training and Development, The English and Foreign Languages University, Hyderabad, has taught, researched and held workshops in the area of language assessment. She can be reached at linamukhopadhyay@efluniversity.ac.in
Rama Mathew can be contacted for any questions/comments at ramamathew@yahoo.co.in