Good morning, and welcome to the first meeting of the Education and Skills Committee in 2019. I take this opportunity to wish everyone a happy new year. Can I remind everyone to turn their mobile phones and other devices to silent, in case they interfere with the broadcasting? Agenda item 1 today is declarations of interest. We have received apologies from Gordon MacDonald; his substitute today is Gil Paterson, who is attending the committee for the first time, so I welcome Gil Paterson and invite him to make any declaration of interest.

Could I refer to my declaration of interests in the public record? I have no additions to that in regard to my being here.

Thank you, and welcome. Agenda item 2 is a decision to take business in private. We would like to take agenda item 4 in private today, and to decide whether to take future consideration of evidence on the SNSA inquiry in private. Are we content to take agenda item 4 and all such future items in private? Thank you very much.

Agenda item 3 is the Scottish national standardised assessments inquiry. It is the first week of the committee's inquiry, and we are starting by hearing from a panel of witnesses, including those involved in designing and delivering the standardised assessments. I welcome this morning Mary Shaw, director of education, East Renfrewshire Council, on behalf of the Association of Directors of Education in Scotland (ADES); Juliet Mendelovitz, director of assessment and reporting for the Australian Council for Educational Research (ACER); Professor Sue Ellis, professor of education at the University of Strathclyde; and Professor Christine Merrill, professor in the School of Education and deputy head of the Faculty of Social Science and Health at Durham University. Welcome to you all this morning. Can I begin by asking you to say briefly what your involvement is in the SNSAs, and then we will move on to detailed questions from the panel. If I could just start with yourself, Mary.

Morning, convener. Thank you.
I'm a director of education, obviously. I oversee that within East Renfrewshire, although I am not directly involved in it, and as a member of ADES I obviously continue to monitor its implementation, to see how we can make best use of those assessments at both a local and a national level. Thank you.

Professor Mendelovitz.

Juliet Mendelovitz, from ACER. Until October 2018, I was the research director and general manager for ACER UK, which is an independent company but a subsidiary of the ACER group. In that capacity, I was the person who put together our bid for the SNSA, as it became, and led the implementation and development of the SNSA for the three years until I returned to Melbourne. I am now based back in Melbourne as a research director in the ACER group at large, but I still have a very strong and recent connection with the SNSAs. Thank you.

Professor Ellis.

I'm Sue Ellis, from the University of Strathclyde. I was involved in some early meetings around assessment as a result of the Joseph Rowntree report that we produced on closing the poverty attainment gap in Scotland. One of the things about writing that report was that we noticed that there was no way of assessing whether or not any initiative had closed or widened the gap, because there was no data. That report did call for better data in schools. As a result of that, I think I attended three meetings on assessment for the Scottish Government. Thank you.

And finally, Professor Merrill.

I'm from Durham University. Until 1 July 2018, I was the director of research in the Centre for Evaluation and Monitoring, which provides standardised assessments as part of monitoring systems for schools; many, many hundreds of schools in Scotland have used those assessments since about 1996. Thank you very much.
Before we get into some of the detail of the evidence that you have submitted to the committee, I think that what we are generally interested in in this Parliament just now is some of the criteria that make for good quality assessment. Professor Ellis has just flagged up the fact that it is very important that we have good quality data, and that it did not exist at the time. Could you give us your views as to whether you think that data is getting better, and what else we have to do to ensure that parents, pupils and teachers specifically understand exactly what it is that makes for good quality assessment? I think that we would be very interested in the general parameters there.

First and foremost, we need to consider the needs of the stakeholders and establish the primary purpose for the assessment. Before we get into the technical details of reliability, validity, content and so on, that needs to be very clear from the outset, because different stakeholders will have different needs. If we think of the learner, perhaps they want to know about their current level of understanding and the next steps that they need to work towards; we have parents and carers who will need some information; we have teachers who are looking for various levels of information; head teachers; management information; and the authority and national levels. We need to be really clear about what we are conducting the assessment for in the first place, and then move on from there to look at the quality and at how we might best assess to get the information that we want.

I think that we have to start with the idea that any assessment is a tool, and it takes time for professionals to learn to use it and to learn to use it well: to learn what it can do and what it cannot do, what you can do with it and what you cannot do with it.
I think that University of Strathclyde staff would want any conversation to be rooted not just in ideological arguments about what stakeholders would like and what they need, but also in an understanding of where Scottish education and Scottish educators are actually coming from in terms of their current data use and how that is currently seen. For us, I think we would argue very strongly that you do need some good standardised data, but you also need a really robust set of ethics around that, to make clear to teachers, local authorities, inspectors, parents, the media and politicians what you can do with that data and what you cannot do with it. If you had a good ethics policy around it, you would actually be educating teachers to use the data well and you would be creating a system that actually worked for the children of Scotland.

I think that the criteria have to be based on what will make learning better and what will make teaching better: any information that helps to inform teachers, and indeed children and young people as well as their parents, of the progress that they are making against national benchmarks, if you like, or any other sort of curricular element that would measure that progress. That is essentially what it has to bring about, and that in itself will raise attainment eventually. I think that Professor Sue Ellis is right: there needs to be a real sense of ethics around it and about how we use it, and I think we all have responsibilities in how we do that, including parents, schools, local authorities and the media, but essentially it has to be about improving the experiences of children and young people and making sure that they reach their potential. That has to be borne out in the criteria. In terms of whether I think the data is getting better, yes, I do. I think the publication of the CfE teacher judgments is improving.
I know that it is still experimental data, but the SNSAs will help to moderate those teacher judgments, as will other activities that we will undertake to make sure that there is professional dialogue around those teacher judgments. That will be, as Sue says, one tool of assessment, not the be-all and end-all, but certainly one that teachers will be able to use to measure their progress and their judgments about whether they are on the same page as their colleagues. I think that criteria around those sorts of aspects would make it most useful for the whole system. Thank you.

I suppose that the focus of your question was about assuring the community of the quality of the assessment, and in that respect I think the quality of the assessment instrument itself is fundamental. We take a lot of trouble and pains, and apply expertise, to ensure that the instruments are sound, robust and valid from a number of different perspectives, and that includes ensuring that we consult very carefully and widely with people in the education community, the stakeholders, to make sure that what we are assessing is what is important. If we are going to measure something, we need to know that it is what we intend to measure, and we ensure that by getting qualitative feedback from stakeholders, learners, teachers, and people in Education Scotland and the Scottish Government, for instance. We also have statistical tools to ensure that the assessments are measuring something coherent, something that has meaning and is not just a random form-filling exercise. So we have a lot of quality assurance measures in place, and we have tried to make it transparent to the public how those are, excuse me, implemented.
As a number of the submissions, including my own, stated, no matter how good the assessment is, if its results are not used and not understood, it is pointless, and the reporting is of course a very key element in that. The clarity, the transparency and the accessibility of the reports are something that we have worked very hard with the Scottish Government to try to ensure. So there are different levels of reporting. Fundamentally, the school-level reports are designed for teachers, to give them information about individual pupil performance. There are also school-level reports that aggregate some of the data in a way that we hope is transparent and useful for schools, and there are local authority reports that have a wider aggregative purpose but also give a lot of detail to local authorities so that they can analyse the results in their own ways, with support.

Finally, the third key element in making an assessment valid, useful and quality assured is ensuring that there is a good mechanism for providing professional learning to schools, teachers and local authorities, to make sure that they are able to interpret the results with information, with clarity, with intelligence and with effect. The training programme that has been implemented alongside the implementation of the assessment itself from the beginning, as part of our contract, which SCHOLAR from Heriot-Watt University is running, is an absolutely key element, and a really unusual element internationally, I think, in a national programme: there was the foresight to bring along a professional development and training programme at the inception of a national assessment, to ensure that it is used in the way that is intended.

Thank you. My second question, before other colleagues go into specific details, really has two parts to it. Do any of you feel that there is a set of data that we do not currently have that would be helpful in informing the whole process of assessment?
The second part is this; Professor Ellis, I was very interested in what you said about an ethical strand to all of this. Is it your view that the interpretation of the results that we currently have is not being used effectively in terms of giving us the right results for what it is that we are trying to do, in other words to raise attainment? Could you just expand on what you feel about that?

I am in and out of local authorities and schools a lot, and I talk to teachers, head teachers, local authority improvement officers and directors of education, and there is an emerging body of research that looks at the variability in the progression pathways of children. They are so variable that it is actually not appropriate to use any one-off standardised assessment for target setting, for tracking or for whole-scale interventions. There are examples in Scotland where local authorities will test all the children in the local authority at a particular point in time and then put the bottom 20 per cent automatically into fairly rigid and, for some, inappropriate sets of work, and they are sometimes using it for streaming and setting. Now, that is not just to do with the standardised data that they are getting; some schools and local authorities are doing it with the formative data that they are getting through from nurseries. In some of the work that I do, I go into a school that might have a two-form entry, and I will look at whatever data the school has got. It might be book-level data or it might be standardised data, but I will see a difference between the two classes, and the head will explain, "Well, of course, we set on entry to primary 1, and we set on the basis of the formative data we are getting through from nurseries."
Now, when I explain to them how that enshrines disadvantage and how it is not an ethical use of data, they very often change their policy. Sometimes it is when you talk to their directors of education; indeed, two directors of education I can think of have just sent emails round to schools saying that they are not to do this. But I think that there is a very poor understanding at the moment of the research on how reliably an assessment score can predict results. The research I am looking at at the moment is the research by Becky Allen when she was at Education Datalab, where only 9 per cent of children actually followed the projected pathway from their first standardised assessment to their fourth one; 91 per cent either overshot or undershot. Now, if you have got that much variability in the system at any point, and somebody comes in and says, "Right, we're going to group these kids on the basis of a single assessment score," that is unethical, because you are going to get that amount of variability anyway. So I think that, in learning to use standardised assessment well, we will need a really big mind shift, a professional shift, in terms of how staff think about assessment and how they use it. It is probably the sort of shift that is not helped by high-level ideological debates; it is probably the sort of shift that needs to be made in terms of how you actually respond to the data that you get. I think you see similar ethical difficulties when the media look at data from schools and try to pitch one school against another, because very often in primary school the actual sample size is not big enough to be able to make those sorts of judgments. So I suppose what I would argue for is a very grounded view of standardised assessment.

One thing: would you equate unethical with misuse? Is that what you are saying?
I think that some of the ways in which the sort of standardised tests that have been going on, and non-standardised tests, by which I mean local authority devised tests, are being used in schools at the moment are not ethical, and I actually see the introduction of a national assessment as an opportunity to open that up for debate and to get a much better use of assessment, one that actually works for children and parents.

Can I just let the other panel members respond, and then I will bring someone in, if anyone does want to respond to that point about data that we need but do not have, if anybody has any points on that.

I think there is a lot...

Can I just clarify: are you thinking about what schools need, or what we as a committee could do with it? Because the schools are the ones that are delivering the assessment.

Yes. Is there any data that you feel is missing when it comes to good quality assessment, or our ability to produce good quality assessment?

I think there are assessments that can be done at different time points to the national assessments, and I have examples of schools doing that, so they are collecting information from multiple different sources to inform their practice; assessments from the CEM would be one example of that. I had an example of a teacher of a primary 1 class in this current year who is assessing her children with the CEM assessment at the start of primary 1, because she wants some information about what those children know and what they can do, to inform her practice, and then she is using the standardised national assessment later on in the year to confirm her own judgments about where the children are now. That is a nice blend, I think, of both assessments.

With good results? Because that is the key thing: what assessment process is giving the best results? That is the whole point there.

It is giving the best results, and it is not too onerous on the child or the teacher as well, so that was one nice example. And as you go up through the primary school
again, they are using assessments, maybe in alternate years, to give a bit more information, so that you are not waiting more than one year for information coming in.

I would just like to add to Professor Ellis's comments about predictions. One study that we have done looked at children at the start of school, around 45,000 children, and followed them up to the end of secondary school; that was in England. The correlation between your attainment at the start of school and at age 16 is 0.5. There is a lot of variation in there, and children do not necessarily follow a linear trajectory: you will have a real burst in activity and then you will have a consolidation phase. So I think it is really important to look at this holistically and not just between two particular time points. It could be that a particular child is consolidating their learning, or maybe they have just learnt something new and they are going on from that. So I think that is an interesting study to look at as well, and that is fair to bear in mind: there is a relationship, but it is not a fixed one, set in stone.

And that matters when schools then respond to data in ways that are not appropriate.

Does anyone else want to come in on those points?
I think one of the pieces of information that will be really useful to schools and the wider education community over time, which is initiated now but is not yet in force, is the mapping of progress over time, which the methodology being used for the SNSA allows to happen in a quite transparent way. There is a long scale, implemented in this year's assessment, that was not available in the first year, which will allow tracking of pupils over time as they go through the years of schooling, from primary 1 through to secondary 3, in each of the subject areas, and will also allow equating over time at a year group. For primary 4, for instance, a school can look at how this year's primary 4 results compare with last year's, next year's and so on; the methodology that we are implementing in the SNSA allows that to happen. So I think that is an area of data that is going to be improved and is already instigated.

I suppose another area that I think is important, and that could be developed alongside or within the SNSA, is qualitative, explanatory information about how children engage with their learning, their attitudes to learning, the school atmosphere and so on. There is not currently any instrument or survey mechanism in or alongside the SNSA that captures that kind of information. I think ways of managing that could be integrated with the SNSA, and I think that would be really helpful in trying to work out why things are happening the way they are. So...

Yes. Specifically on this question of whether something is ethical or not: if it is not ethical to get information about a child and then decide how you are going to support that child, or presume how that child might be supported in terms of what work they would get, why would it be ethical to make judgments about an individual or a school against national benchmarks? I'm not...
Are you saying that the only way we should use the data is to support the individual child, but that we cannot make any presumptions or assumptions about the child's learning from it, because it then maybe locks them into a particular form of support, or because we are making judgments about an individual? We heard from our colleague here that in fact you would judge the school or the individual against national benchmarks, and that that is seen as a way of pushing up attainment. I wonder, where does the ethical balance lie?

I think that any short assessment can only give you a snapshot of where that child is at that one period in time. If you then take that snapshot and use it to make systemic changes to how that child is educated, so that you put them into a bottom set, or you put them into a catch-up learning group from which they find it difficult to escape, then that is... It is not about the support; it is about what then happens, and that you cannot progress from it. Probably the school that I have seen making the best use of it is a school in Linwood, Woodlands Primary, and they make hard use of assessment data to have hard conversations with staff, but completely keep the children as part of a class, as part of a learning community. They recognise that learning is not just about the programme you provide the child with; it is actually about the whole environment that the child is in. I think that, when you look at how some of the data, formative and standardised, is being used at the moment, schools are sometimes overplaying their hand. Now, they are not doing it because they want to be bad; they are just doing it because they have not realised that the data does not actually have the predictive...

Would that not be true of standardised assessments as well?

It is true of everything, yes.
So it is a snapshot, and we should not presume from it how a child should be supported; we should not assume there to be predictability about it. Can you explain, then, why it would be given such a priority in terms of education policy at a Scottish level, if it neither predicts the child's ability in the future nor determines the support that they should have?

Because it gives useful information to schools and local authorities about how individual children are getting on; it could be very useful for a class teacher.

I think you have got two very different ideas there about what the assessment is good for. One of the things about curriculum for excellence is that it is a complex curriculum, with many layers, and very responsive, and the emphasis in curriculum for excellence is very much on teachers getting the right learning mix for children. Now, that is different from 5-14, which was a much more rigid curriculum: kids progressed through the curriculum at different rates, but you did not really change a huge amount. You perhaps made tasks a wee bit easier or a wee bit harder, but you did not actually change the learning; it was very difficult in 5-14 to change the learning mix. Curriculum for excellence is premised on the idea that the learning mix really matters, so you do need points where you actually check that we are getting the right learning mix, and know who the learning mix is serving well and who is not being served well by that kind of learning mix.

So a standardised assessment is not then just a snapshot; it is showing us whether an individual child is getting the right learning mix. Is that what it is for?
It can act as an opportunity to reflect on that; it can be quite diagnostic. And I think what we do not know, because the data on the predictive power of assessments is very much based on English data, is whether, because the Scottish assessments are in many ways broader and are better linked, they have a better predictive capacity; but you would need to be using them for about 12 or 15 years to work that out. So until that point, the ethical position has to be "do no harm": you do not set, you do not stream, you do not put children into catch-up programmes that remove them from the main body of the class and put them in a different category from other children, on the basis of one snapshot.

Thank you. With permission, I will just ask a very specific question about the process around the standardised assessment, and then I will let my colleagues ask their questions. We were told, in the briefing we were given by Scottish Government officials, about how the test would be run in a child's primary 1 year (let us talk about primary 1 rather than further up), so at any time between the ages of four and a half and six. To what extent, if there is that range, can that be a standardised assessment, given the gap in ability between a four-and-a-half-year-old and a six-year-old? The second thing we were told about is the actual running of the test. It is multiple choice: you can answer A, B or C; does this word sound like this word; which word sounds the same as that word. There is a little button that you can press to hear the word, so you hear the answer being said. I asked whether there would be any distinction, in the assessment that was given to the teacher or whomsoever, between a child who needs to press the button to hear the words and those who do not, and I was told that there would be no distinction between those two. Would you not think that a child who was able to go through that whole process without having to hear the words said to them, but could read it and hear
it themselves, would at least have that reflected in the test? So there is the question of the age range and the question of actual functionality: how much more information are we getting than a teacher might get through working with a child in the class?

The standardisation of the assessment resides in the fact that there is a single pool of items, of questions, from which an assessment is selected for each child taking the assessment in the year groups that have been identified. As you know, it is an adaptive assessment, so, depending on how the child is performing in the assessment as they go along, they will get more difficult questions or easier questions, depending on the capacity they have shown, so that it is pitched at an appropriate level for the child, to get maximum information about what they know and what they do not yet know. So the standardisation is in the pool of items being common to all children in the year group, in the parameters around the administration of the assessment, and in the fact that the results are processed in the same way for all children. Within that, there is some flexibility, which is appropriate for an assessment that is low stakes: no individual child's future depends on the results. It takes into account the different equipment that the child might have at their disposal, because of the availability of hardware and so on at the school, and also the child's way of approaching the assessment, so there is flexibility in the way children approach an item, depending on what their capacity is. Now, you mentioned whether they hear audio or not. There are some items where the child would need to hear the audio in order to answer the question; there are other items where, if they can already decode, they do not need the audio support.

But if they cannot, will the assessment that the teacher gets reflect that difference? Because I thought that was quite a basic thing. But, secondly, there is another question
about how valid a grouping it is, if a child could be four and a half or a child could be six.

Well, the fact is that, in the way that Scottish education works, children can enter school at different ages, and there is a debate to be had about age rather than stage. Our approach to this is that children are in a particular year group, there is a curriculum that is established for that year group, and we are assessing where children are in their stage of learning. As Sue has said, any assessment is only taking a measure of a child's capacity at a particular stage. When the teacher receives a report on the child, it gives the child's age, in case they did not know it, which they probably will, but that is one of the facts that they would take into account in interpreting the results of the assessment. And, as you pointed out, the child can take the assessment at any time in the school year, so it is not standardised in the sense that there is a particular day on which they need to take it. One of the key elements of this assessment is that children are able to take the assessment when the school deems that they are ready to take it. It is designed to provide information to the teacher about where the child is in their learning, given that there are benchmarks for learning for the stages of schooling in Scotland, and the teacher can take into account the other factors that they know about the child (their age, how they are faring at school, their attitude to school and so on) when they interpret the results, as well as other kinds of formative assessment.

So they have to assess whether the child is ready to be assessed before they are assessed, and that includes a practice run of that kind of test with the child, so that they know what to do.

We are assuming that teachers will take into account when the child is ready to do the assessment. That does not mean when they are going to be able to answer all the questions correctly; it means when they think the child is emotionally or
psychologically or intellectually ready to take the assessment.

Thank you.

Can I just pick up on the standardisation as well? When we think of a child, they learn through maturation, through their environment and so on, and they also learn at school, and so we have been looking at the impact of schooling on children at different ages. Now, we have only done this through the primary school, but a huge amount of learning takes place in primary 1. Children might come in knowing a few letters and so on; by the end of primary 1, many of them can read quite a lot of words, they can do some comprehension and they can do quite a lot of maths. As you go up through the year groups, that amount of progress tends to get less and less, and when you get to secondary it starts to flatten out; when you get into adulthood, you are probably on a flat line and then declining. So you have got the age and you have also got the stage. I think it is quite problematic to have a standardisation that covers a large stage as well as the age, especially for the young year groups. It is not so problematic when you get older; at the top end of primary and in secondary, I do not think this is a problem. But we have quantified the amount of learning that takes place in a school year, and that needs to be accounted for in standardisation.

Mary, did you want to come in on this?
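The adaptive mechanism described earlier (harder questions after a correct answer, easier ones after an incorrect answer, all drawn from a common item pool) can be sketched roughly as follows. This is a hypothetical, much-simplified illustration, not ACER's actual algorithm: real adaptive tests typically select items using item response theory and ability estimates, not a simple difficulty ladder, and the names here (`Item`, `run_adaptive_assessment`, `answer_fn`) are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Item:
    prompt: str
    difficulty: int  # 1 (easiest) to 10 (hardest); an illustrative scale

def run_adaptive_assessment(pool, answer_fn, n_questions=5, start_difficulty=5):
    """Simplified adaptive selection: after a correct answer the next item
    comes from nearer the harder end of the pool, after an incorrect answer
    from nearer the easier end. answer_fn(item) -> bool stands in for the
    child's response."""
    target = start_difficulty
    administered = []
    remaining = list(pool)
    for _ in range(n_questions):
        # pick the unused item closest to the current target difficulty
        item = min(remaining, key=lambda it: abs(it.difficulty - target))
        remaining.remove(item)  # each item is administered at most once
        correct = answer_fn(item)
        administered.append((item.difficulty, correct))
        target = min(10, target + 1) if correct else max(1, target - 1)
    return administered

# Usage: a child who reliably answers items up to difficulty 6
pool = [Item(f"q{d}-{i}", d) for d in range(1, 11) for i in range(3)]
print(run_adaptive_assessment(pool, lambda item: item.difficulty <= 6))
# The sequence oscillates around difficulty 6, the child's working level
```

The point of the sketch is the one the witness makes: the assessment homes in on the level where the child is working, so most children answer a mix of items they can and cannot yet do, which maximises the information gained per question.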
Thanks, convener. There were a few bits and pieces related to the ethics and the use of assessments. I would emphasise that this is only one piece of assessment, and that leadership is certainly key in all of this. That leadership needs to be at all levels, from the directorate and especially from head teachers, about the ethical use of that data, and it has to be about using it as one point in time, showing how that child has performed in that assessment in the midst of all the other assessment information that a teacher will be using on a daily basis about a child's performance against the curriculum and the activities that are set for that child to make progress with the curriculum.

Going back to an earlier point about a pupil questionnaire or survey about pupil attitudes to learning and so on, one of the things that was good about the SSLN was that there were both pupil and teacher questionnaires, which gave very good information to local authorities, and indeed nationally, about how confidence levels could be improved in particular aspects of the curriculum. That would be something that I would suggest, if we are building that into the SNSA, that we go back and look at; it was very valuable information, and certainly something that we used in East Renfrewshire when we reviewed particular areas of the curriculum.

Thank you. Professor Ellis, you wanted to come in.

I think part of the difficulty is that, when you ask what teachers can do with it, you are almost looking at it in terms of a summative assessment, as if it is there to determine whether or not a child has achieved a level. What I found quite interesting about the EIS submission of evidence was that they did have this debate about whether it is about confirming teachers' assessments or informing teachers' assessments, and if you are talking about confirming teachers' assessments you are almost talking about using the national assessment as a summative
tool have they reached the level or not I think that with curriculum for excellence we actually need a shift in that sort of mindset and so we need to get teachers looking at how they can use the assessments in a much more diagnostic way and that diagnosis can be in terms of particular items so you might have a whole class where the comprehension levels are quite low and that gives a head teacher or the teacher herself opportunities to actually say well actually I've not got the mixed rights but you've also got opportunities in the new assessments to actually look across items and actually take quite a diagnostic view so for example there's one item where children have to listen to a story that's read to them and answer comprehension questions on it and it can be very easy as a class teacher to have a child who you know isn't really comprehending when they read they can't retell the story at the end of it after they've read it two minutes ago and they still can't remember it and retell it and as a class teacher if you have two children both of whom do badly on the reading comprehension but only one of whom does badly on the listening comprehension your actual point of intervention there is different it can be very very easy in a class of 25 or 30 to miss poor oral story comprehension and so the opportunity when I say that the assessments are a tool that teachers need to learn to use well and space to learn to use it well that these are the sorts of uses that we could actually be exploring I think that there's a lot of opportunity to provide really good case studies about how these assessment items are being used well and how they're being used ethically with lots and lots of explanations about why that's good use I think there's a slight danger in that we do have the there is a sort of a common strand going all the way up where you can actually look at that and there's a danger that schools will look at that and think that that is a predictive measure and so I 
think there needs to be a lot of education around that but teachers in Scotland want to do their best for the children that they care for and so giving them opportunities to actually explore that is important I think if you look across at the different views of assessment in all the submissions that you've had to this committee you'll actually find teachers thinking about assessment in very very different ways OK, Mr Gray Thanks very much I had a couple of specific questions for Juliet about the design of the SNSA but I wanted to start with something that Christine said you said Christine that the most important thing about the design and assessment was to have the primary purpose clear from the outset and that has been part of the debate around SNSA are these tests designed primarily to provide information what teachers can use diagnostically and their learning strategies with pupils or are they a way of measuring standards in schools and progress in the attainment gap so my question to Juliet is in designing SNSA were you clear what the primary purpose was and what was it? I guess I would answer that by saying that there are dual purposes so a single primary purpose is not something that I think we've subscribed to So there isn't a primary purpose? 
Well, that is not to say that there is no purpose; there are two really important purposes.

No, but Christine's point was that the most important thing is establishing the primary purpose. You are saying that, in designing these, you did not know what the primary purpose was.

No, I did not say that. I said that there was more than one very important purpose. One very important purpose was to give teachers good information about where children are in their stage of learning, which would allow them to reflect on where the children are and whether they are finding out something new about the children, and to help them to take the next steps, whether the children are showing some challenges in their stage of learning or whether they are going great guns, going ahead, so that there is something to be done to help to extend them.

Children as individuals, you mean?

Yes, and there is also class-level information, so that you can look at where children are performing as a group, perhaps very well, or perhaps not so well, so that you might take action to support them. That is one really important purpose. Another important purpose is to help the Scottish Government and the education community both to improve the overall attainment of children across the country and to close the attainment gap, and in order to have information about what the gap is and whether it is being widened or narrowed, one needs some national-level data as well as data at the individual school level. Both of those are very important, prime purposes for the assessment, and I think that, in the way that the assessment has been designed and reported, we are working towards meeting those goals.

So, when the first report on the SNSA was published back in December, ACER was quoted as saying that the national-level results had to be treated with caution, and I think that was because the tests were taking place at different times of the year in different areas, so I wonder if you could enlarge on that. This is a quote from that report: "Results from all learners should be interpreted with some caution when making any comparative judgments." I wonder if you could elaborate, because you said that one of the purposes was that national monitoring, but the report seemed to imply that that only really works if all the children take the test at the same time, which they do not.

I think that the quotation that you have just read out was referring to the interpretation of results of individual children, or class groups, or school groups, or even local authority groups: taking into account when the assessments were administered to the children when interpreting results against the national norms. The norming study was conducted at two particular points in time, in November 2017 and March 2018, and there was a scientifically drawn sample of pupils across Scotland, which was stratified so that it took into account the local authorities and gender; two other variables were also taken into account in the sample. So we are confident that the measures of children's performance from those two norming studies are robust and reliable. What was pointed out in that report was that, when you are looking at smaller groups of children's results, at the school level or whatever, in relation to the national norms, you need to take into account when the assessment was done. So we have achieved, in that very first year of implementation, robust national standards across the country that were scientifically derived. Those are norms; those are benchmarks. If in future years you compare results against those benchmarks, where those results come from tests that have been taken at different times, then yes, caution is needed. There is a flexibility in the design of the programme, as we have mentioned before, for children to be administered the assessments at a time that the school judges to be appropriate, but when interpreting the results of a child's performance, you need to take into account the point at which the child was administered the assessment if you are looking at the national norms or points of comparison. So both things are true: you need to be cautious in making comparisons, but there is a set of statistics that allows you to look at what is happening nationally.

I am really asking about the year-on-year comparisons of the performance of the system. Does that not imply that that has to be treated cautiously? If we look at one year and then another year and say that there has been an improvement, or that the attainment gap has closed, that would surely be affected by when the children took the tests.

That is true, in the first year as well as in subsequent years, but what we are recommending to the Scottish Government, and I think they are also enthusiastic about this idea, is that national norming studies are conducted regularly, perhaps every couple of years, so that we can track how the nation is performing over time.

Okay. Can I ask a slightly different question? I just wanted to explore a wee bit further: if the purpose of the SNSAs is to confirm, verify or moderate a teacher's assessments, then the primacy, in terms of measuring performance, measuring progress and whether the country is improving, would rest with those teacher judgments, not necessarily with the SNSA. I get that. So, in the submission from ACER about the design of the test, it said that, when the tests were being designed, the pool of questions was reviewed and critiqued by panels of experts from Education Scotland and the Scottish Government, and later on those panels were consulted again. I just wanted to ask what the involvement of teachers was in this. It says that the experts were from Education Scotland and the Scottish Government; I just wondered what the involvement of practising teachers was in the design.

There was a little involvement; I would not say there was a great deal. We did some piloting in February 2017 in schools, and we invited teachers to give feedback on the assessments as they saw them, so we took that into account. The representatives who were nominated by Education Scotland are people who come from schools originally, so in that sense teachers were consulted, although not teachers working in the classroom at that point. I might add that we are implementing at the moment a questionnaire for teachers, which will be distributed widely in February, to ask them about their responses on several dimensions: ease of administration, usefulness of the reports, and the behaviour of the children and their attitude to the assessment. So we are gathering systematic data from teachers during the current year.

But practising teachers were not involved in the design of the test?

As far as the brief that we were given in the contract goes, I do not know how much teacher input there was, but during the development of the instrument there was only a small amount of direct teacher consultation.

Mr Scott.

Thank you. Can I just try to understand what you have been saying to Iain Gray about these norms? Please correct me if I misinterpret what you have said. A norm is a benchmark. Are you suggesting that these norms that you described to Mr Gray would be the way in which the Government or policy makers would assess what was happening nationally?

That is one way that they can assess what is happening nationally, via the SNSA.

What is the other way, then?

Well, as Mary has pointed out, the ACEL collection is the primary means of measuring whether the children are attaining the standards.

Which is it going to be, then?

I do not think it is a dichotomy.

I am sorry, I just do not understand. I think we have all been asking these questions about purpose. Is it about teacher judgment or is it about the national performance of schools?
Well, the SNSA is one contribution to the overall assessment picture, and it is taken into account alongside all the other kinds of assessment that teachers are doing daily in their classroom practice, so I do not think that setting those two things against each other is the way to think about the development of the assessment profile.

But again, you said that there was a proposal to the Government for producing information that allows them to make a national assessment of what is happening in education, and that that will have to happen every couple of years, because of Iain Gray's point that schools are not doing these tests at the same time, so the data cannot be perfect and cannot be comparable year on year.

The schools are getting information annually from the SNSA, and they will be taking that into account along with the other kinds of assessment that they are doing. The way that the SNSA can contribute to information at the national level is by conducting norming studies at regular intervals, to track, for instance, whether the attainment gap is closing, which is one of the primary focuses of Scottish education.

But I thought I read ACER's view as well. My understanding is that your suggestion to the Government, when you were first asked to do this work, was that these tests should all be done at the same time, so as to be comparable.

Well, if you want to have a strict comparison of results from one year to the next.

And isn't that what the Government asked you to do?

The Government asked us to help them to develop an assessment that would allow teachers to understand where children were in their development of literacy and numeracy. One of the purposes of having a national assessment is that there is consistent data, so that you do not have to have any doubt about different instruments being used and so on. So there are a number of different factors that we combine in development programmes such as this, and it has a lot of really wonderful features. From ACER's point of view, our mission is to improve learning. We are a not-for-profit organisation, and we are really keen to promote programmes that honour teacher judgment, that respect that teachers are in the best position to make decisions about the individual child's learning in the school, and that combine that with being able to generate some useful larger-scale data sets that can be used to work out whether things are working well and where there might be a need to reflect on what is not going so well.

I think that is all entirely fair. I am just trying to establish whether you think that is best achieved if these tests are all taken at the same time during the course of the school year.

One of those aims might be best achieved that way. If you want to have very strict comparisons between how a child is going from one year to the next, then taking a measure at the same time every year is important, and that goes for large groups too. That is why we put that caveat in the national report about the caution that must be taken when making comparisons. That does not mean that no comparisons can be made; it means that people have to reflect on and appreciate the results that are coming out in a nuanced, intelligent way.

I think that is entirely fair, but you sought to achieve that consistency of data; I guess that was the whole purpose of the work that you have been doing for the Government in establishing this testing regime across Scotland. So is it fair for me to assume, therefore, that your preferred consistency-of-data approach would be to have the tests done at the same time?

If that were the sole purpose of the programme, yes, but given that there are other purposes that are at least as important, namely providing some formative information to schools, teachers and individual learners, of the kind that Sue has outlined, I think that combining those purposes is the way that we have moved forward.

That is fine. So there are, therefore, a number of purposes to these standardised tests.

As I tried to answer in Iain's question, there is not one single purpose that supersedes all the others. We are looking for an assessment programme that combines the best features to serve a number of purposes.

Thank you very much. Mary Shaw, as a director of education, you were very clear at the outset to Liz Smith that there was one purpose. Again, do not let me misinterpret what you said, but you said very clearly to Liz Smith that there was one purpose, and that was to assist teacher judgment on the learner journey that pupils were taking. Am I fair in saying that?

I think that that is the primary purpose, and I think that it would bring about improvement in learning and teaching. It can be used at many different levels: at individual teacher level and individual pupil level, but also at whole-school, local authority and indeed national level, so that you are putting the right supports in to help improve those learners' experiences. It is about the multiple use of the same data, essentially, and in East Renfrewshire we are very experienced at using that. I am not sure that Sue would agree with what we do in East Renfrewshire, but I think that our results certainly stand for themselves.

And do your schools test in May?

Our schools, in terms of the SNSAs, have a six-week window to test. We continue to use our own standardised assessments to bridge the gap between those tailing off and the SNSAs becoming more robust in terms of the information that they give us at the moment.

Is that a transitional measure?

Yes, yes.

Now that these tests are in place, what are they going to add, particularly at primary 1, to information that you already have?
They do give that measure against a national benchmark, so teachers can look at it and say whether children are performing well or not, or whether they are performing as the teacher would expect them to perform, given their performance in the classroom, against those particular judgments. In primary 1 we do something very similar to what Christine outlined: we use baseline information on entry to primary school, and then the SNSAs come along later in the school year, once teachers expect that children are ready to take those assessments against the national benchmarks. And yes, you would expect teachers to be looking at the curricular advice that they have and thinking about how children are performing against that. I think that there is still a bit of confusion around this. In terms of what used to happen with the 5 to 14 national assessments, those were taken when teachers deemed a child to have completed a level. The SNSAs do not have to be, but they do have to be taken within that six-week window, and we do them in May, along with, I think, the majority of the rest of the country, which helps with those norming exercises as well. But I would point out that your papers quite clearly indicate that the EIS was instrumental in taking away that opportunity, because everyone taking them at the same time would make them more high stakes.
I would not want that, and I am sure that parents would not want it either. We do not want to end up with children having tutoring in the lead-up, in the way that they do in England, and therefore teacher judgment should be primary in all of this, with the assessments helping to moderate it.

I agree, but do you not think that that is a danger, simply because of the pressure now to close the attainment gap and all the national things that are being said by education secretaries and so on and so forth? Do you not think that that is inevitably what is going to happen: that the pressure will come on schools at all levels, from P1 up, to make sure that these things are all done at the same time, so that national results can be produced and things can be said nationally about what is happening in Scottish education?

But the curriculum for excellence teacher judgments are gathered at the same time, and therefore the timing of the SNSAs is important in being able to inform those teacher judgments, which are gathered nationally. So no. That goes back to Sue's point about the ethics of all of it, and my point about leadership: everyone has responsibilities in terms of leading and making sure that this data is used appropriately, and that it is robust enough to be used appropriately. I have to say that I have spoken to class teachers who have used the raw data. They will go in and look at how children have performed against particular skills or particular questions in the SNSA, and it has made them question, or indeed have dialogue about, pupil progress, either confirming their judgments or suggesting that children are making progress more quickly or more slowly than they would have expected from class work alone. But it is just one piece of information, and we would never say that it is the only piece of information; it has to be in the mix with everything else. In that sense, the EIS was right to make sure that it does not all happen at the same time, because if you do go down that road, it does become high stakes.

You are kind of saying that we should stop obsessing about standardised tests, aren't you?

I think that my advice to you would be that the profession has welcomed them.

Some of the profession has welcomed them.

Well, a lot of the profession, certainly in East Renfrewshire, if I can speak about that, have welcomed them, but I think that the EIS's agenda is slightly different; you would need to ask them about that. I do think that the 5 to 14 assessments were always rubbished as not being robust enough, and then, when they were taken away, everybody thought that they had been the best thing since sliced bread, because at least they gave some sort of information. That is why, as the ADES submission says, a lot of people went into standardised assessments and used the Durham approach. This is bringing back that opportunity for people to measure their own children's progress against what are national benchmarks, with assessments that are taken on a national basis.

Can I ask Professor Ellis one question? And I do not want you to go on about East Renfrewshire. Professor Ellis has wanted to come in on a couple of points already, so if you could answer the question. I do not want to get you wrong here, so do correct me, but I think you said something along the lines that we would need 12 to 15 years of data before we could fully understand what was happening. I cannot exactly remember the context of the answer that you gave to Johann Lamont earlier on, but that strikes me as one heck of a long time to find out what is happening, and please...
Maybe not that long, but you would need data for children who have moved all the way through the school system before you saw how, and whether or not, a score that they got in primary 1 was determining what university degree they got, so I think that is an issue. The point that I want to make is that any assessment, if you are spending taxpayers' money on it, has to be useful, and for that it has to make an impact. I have worked in Scotland for a long time, 30 years, and with the SSLN — as you know, the last few results went down — the only people I heard talking about that were politicians and the odd academic. I did not hear directors of education saying, "We are going to go back and look at our system, because obviously there is something in what we are doing that we should reassess in our teaching," and I did not hear class teachers talking in that way. If you want something that actually works and benefits the children of Scotland, you need something that has some sort of purchase with the practitioners who can actually make a difference, so it has to really speak to the teaching and learning that goes on in classrooms and to how teachers think about the children sitting in front of them. None of this is going to be perfect; it has to be good enough, and you have to interpret those results as being good enough rather than as a truth. There are lots of different sorts of truths.

That is a Donald Rumsfeld if ever I heard one. Thank you. Mr Greer.

Just before I move on to a different question, a couple of supplementaries on some interesting points that have been brought up so far. Just to start, Lynwood Primary, which you mentioned, is an interesting example. My understanding of what you said was that they are using the results to have what I think you described as hard conversations with teachers, but they are not jumping straight into ability-set groups or anything like that. Just to clarify, are those hard conversations about the needs of the individual children, or are conversations being had with the teachers in which judgments are made on teachers based on their class results?

They are hard conversations with teachers about the children they are teaching and what those children need, because you would not have, in any one data set for a primary classroom, enough data to say whether the teacher was doing a good job or not; it is just not a robust enough sample. What the head will do is sit down and say, "How does this child feel about their reading? What sorts of things are they enjoying reading? What do they find difficult? What are you noticing about that? Can we have a bit more information about this? What can we introduce them to? Who are their friends? What are their friends doing? Can we network them in?" So it is a very, very inclusive approach. We have actually done a very small study in Renfrewshire that looks at really hard-to-teach children, children whom the school system is not serving well at the moment. What we found, and it is a very small case-study approach that we have been adopting, is that the children who made the most progress were those where, at school level, the health and wellbeing data about how the child felt about school and about themselves as a learner was integrated in professional conversations with their literacy data. When you bring those two things together — it is not rocket science, in a way: if a child is happy, they are going to learn better — and you get those conversations actively being integrated at school level by the head teacher, you actually get children who are both more relaxed and happier in school and who learn more effectively as well. So the hard conversations are very specific conversations about how the child feels in their class, what opportunities they are getting in their class and how those opportunities can be maximised.

Can I just open up this question about the class-level data? I take your point that the results of SNSAs on their own would not constitute enough evidence to start making judgments on a teacher's ability or performance, but should class-level results ever be used as part of a wider judgment of a teacher's performance? That is a concern that has been raised by a number of teachers; they are concerned that this data will be used by management or by the local authority to read into their performance. Should it ever be used as a contributing factor?

I do not think that there is a robust research base for saying that. In fact, the British Educational Research Association has recently done a publication on baseline assessments and has made the point about how unrobust a way that is to do it, as do the academic papers published by Becky Allen from Education Datalab.

I think that other means of finding out about a teacher's performance would come first; the assessment results would be a reflection of other pieces of information that you already have. There would be classroom observations; there would be a whole host of indicators in a school that a teacher was not bringing the best out of their children. Rather than using the results as the motivation to investigate a teacher's performance, all that they should really do is reflect what you already know about that teacher. If you use them as the motivation, then you bring in all of the risks with those data — the children being coached, or whatever other risks might arise — because the teachers are fearful that that is what the primary purpose of those data is going to be.

And just to move a couple of steps up again, to the substantial question that I had around the use of this data at a local authority level: I think that we broadly understand what the purpose would be in using it at a class level, with an individual pupil, or at a school level. Mary, I was wondering if you could elaborate a little on what local authorities are using this data for.

I do not think that we are using it for anything yet, certainly not in East Renfrewshire, beyond getting results on how well children are doing. One of the things that we have developed, and indeed I think we are speaking to ACER about, is getting the information into our tracking database. We have a tracking database within East Renfrewshire that tracks all individual children, with lots of information in there, not just about standardised assessments but also about teacher judgments and so on, and we want the SNSA information to go in there as well. Then we can use that and cut it in lots of different ways to have conversations, and essentially that is all it is about: asking questions through the analysis of the data that it generates. What you would be able to do is look at whether there were particular components, for instance at a school level, where you would say, "We are not teaching, say, addition and subtraction particularly well." If we looked at that as a local authority and said that there is an issue there, then it is incumbent upon us to do something about it, to bring about improvement and to help teachers to improve the learning experiences of youngsters through that sort of data. That is how the summative information that we get is used formatively.

And from what you have seen through ADES, is that a consistent approach across the 32 local authorities, or are you seeing local authorities taking a different approach to this?
No, I don't think that that and I think local authorities will all be at different stages of development and in East Renfisher we've used standardised assessments for over 20 years and that information and that approach we didn't always get it right I think we're getting it better now and I do think that we do use it to ask the questions and it's the same as any analysis of data all it does is point the finger and ask you if you want to shine your torch on a particular area how would you bring about improvement certainly through regional improvement collaboratives you would expect that that system leadership or system improvement there will be sharing a practice in that so I can't speak for all local authorities I can't speak about what we are doing in the west partnership in terms of bringing about improvement I don't want us to take us down that road though and in terms of analysis of data and the analysis of data I think the scholar work that is going on will lead to real improvement and understanding of how the analysis can take place and what teachers should be extrapolating from the results of their pupils so that is to be welcomed but I think we are at different stages in real variability with at all levels in the system over the use of data so you mentioned that you've spent quite a lot of time and quite a number of schools recently what's your experience of the consistency between local authorities and the approaches they're taking to the data I think that some local authorities aren't taking slightly different approaches but we don't know often what those approaches are you also have things happening at school level that directs of education don't always know about so and different sorts of pressures on teachers to do and head teachers to do different things so there's one very popular literacy scheme that recommends if children are not doing well in their literacy in primary 4 they go and sit in the primary 2 classroom to learn for their literacy lessons 
now that walk of shame the daily walk of shame must do terrible things to how children feel about themselves as learners and be positively detrimental to their health and wellbeing but a direct education wouldn't necessarily know that that's what's happening in their school unless someone like me notices it or HMI call it out and I think that when we're talking about making assessments low stakes we actually need to be looking quite carefully at the checks and balances within the system so the asking HMI when they inspect schools education Scotland to actively ask parents about things like teaching to the test repeated repetition test so building that in so that you've got a monitoring going on there looking at how the inspector at themselves think about data and use data and talk about data looking at the language that we use and here's something that I think that parliamentarians could be really useful about because very often people talk about ability that the data is about the ability it doesn't just tell you about the attainment on that day for that particular child it doesn't have any sort of capacity or ability implications but so I think that the language that we use is really important I think having getting the unions to and local authorities to have really robust whistleblowing processes so that if teachers feel that they are being pressurised to use data in inappropriate ways that could be something so that there are a lot of very practical things that we could do that move away from assessment debates being simply about ideological differences and actually look at the grounded picture of how they're actually used in Scotland and how we can get them used really really well because that's the practical problem that needs solved if Scotland started doing that it would probably be the only nation that I've heard of that has those sorts of checks and balances in place it's not an impossible thing to do but it does require quite a hard and collaborative 
debate around that. But it is something that actually would be possible and would really serve the children of Scotland well.

Thank you. Did anyone else want to comment on those questions? Thank you. Mr Mundell.

Thank you. I want to go back to an earlier point, really, around whether the tests themselves are robust enough to give that snapshot. I've heard from teachers in my own constituency who are concerned about how the tests are actually formatted and whether they work, for example, for young people with additional support needs: dyslexia, dyspraxia, autism; whether things being adaptive actually works, or whether people lose interest; whether they've got the fine motor skills to actually manoeuvre the mouse; whether the difference in time limit for how long it takes people to complete the test masks all the things that are going on; and whether, going back to Johann's point, we're actually testing different skills at the same time in the same questions. Do you recognise any of those concerns?
The question about accessibility for children with additional support needs is something that loomed very large in our development of the assessment, and we've implemented a lot of affordances in the programme to help children who have visual impairment or motor skill needs to allow them to do the assessment. That was clearly a very high priority in the Scottish Government's request, and we have had a lot of workshops and consultations with accessibility experts in Scotland and beyond Scotland about how to make the assessments accessible for all those children, so we've used WCAG, a world standard of accessibility, to the double-A measure.

Do you recognise that in making the tests more accessible you can actually end up masking other difficulties a child is facing? By making adjustments to the test, is it making it more difficult for the teacher, particularly if you're looking at P1, when the test has been taken, to pick up some of those nuances? When Professor Ellis was talking before about the differences in comprehension or other things, by allowing more variables and allowing differences in how the questions are answered, does that not make it more difficult for the teacher to identify some of those things?

One answer is that when we introduce affordances in the assessments to allow children with additional support needs to take them, we are always conscious of what the key intent of the question is, and we don't adjust the question in such a way as to obliterate what it's trying to measure.

I can't see, having looked at the test and spoken to teachers, how that can be the case, because if you're looking at two pictures, and we're back to the decoding, for example, there's a huge difference between listening to a question and reading a question. Those are two completely different things, are they not? Is it not possible for them to get muddled up?
So if the point of the question is to know whether a child can hear rhyming words, for instance, then they will have to press the button to hear the word to answer the question, regardless of whether they're a child with additional support needs or not. If we can't measure that skill for a child who doesn't have hearing capacity, then that child won't be able to do that question, so there are some questions that are not available for all children, but on the whole, as far as possible, they are available for all children. What we've said in our guidance to teachers is: give the children the kind of normal classroom support that you would give them to do this assessment. As far as possible we've made the assessments available to children with additional support needs without teacher support, but if the child has an aid assisting them in normal classroom practice, then that should be available to them. So it's striking a balance between making the assessments available to as many children as possible, to the vast majority of children, and preserving the integrity of what the assessments try to measure.

Do you think you've got that balance right?
I think we've done a lot better than many other assessments do. It's not perfect, of course. The fact that about 95% of the available assessments were taken, when I think there are about 10% of children with additional support needs in the SEEMiS database, suggests that many of the children with additional support needs have been able to take the assessment, and when the teachers receive the reports for that child they can reflect on them and understand what the child's additional support needs are. So it's a matter of interpretation.

But it is in theory possible for children with additional support needs, for example, to perform better, because of the adaptations within the test, than their ability might actually be? Do you think that it's possible that in some cases issues that children have are masked because of adaptations that have been made, perhaps even for other children, not necessarily for those children themselves?

The mantra that I use as a test developer, and that's my background, in test development, when making adjustments to items for children with additional support needs, is: would the affordances that are being added help a child who doesn't have additional support needs to do better on the assessment? If it would, then that's not a good affordance. So what we're trying to do is create a level playing field, so that children with additional support needs can approach the item in a similar way to a child without additional support needs. Does that make sense to you?
It does make sense to me, but it doesn't seem to match up with what teachers are saying about the test, because when you say that in particular, I think even for me, now as an adult, it is actually easier to hear words that rhyme than it is to see them, and certainly if you can both see and hear them together, that's going to make it easier to identify that they rhyme than just having one option or the other. I think if you take a bright young person, they might well take the opportunity to listen as well as to read in order to maximise the chance of getting the question right. Certainly from speaking to teachers, even ones who are very positive about assessments, they have questions about how it's actually been configured and road tested and how it compares to what's done elsewhere. I'll leave that there.

The other question I wanted to ask was just going back to what we've heard about rapid change in the early years of primary school, particularly in terms of people's ability and how much knowledge they pick up. Does that mean that standardised assessment is more useful at some stages than at others, and is it better to let some of those things even out before starting to make judgments?

Yes. I think Professor Merrill was going to come in first.

I'll go first. I think standardised assessments at all ages should be able to give you useful information. What I was saying earlier on was that when you do your standardisation is more important in the earlier years, when there's that rapid period of change.
If you assess children within a period of, say, about a month, and you base your standardisation on that, and you then compare other children who are assessed at that time of their school year, you're going to get, I think, a more reliable result than if you have a standardisation that spans, say, six months of the year, because then how are you going to control for the amount of learning that the children have done as well as their increase in maturity through age?

What it comes back to is the basic point that a standardised assessment can give useful information throughout your education career.

My question, I guess, was whether there are fewer risks in taking your baseline once some of those initial variables have settled down, after that period of rapid change, because there are some children, because of home circumstances or other things, who maybe start off with less knowledge, who might not be familiar with particular animals or might not have done a lot of reading at home, but within a year or two years of being at school some of those things, particularly for more able children, settle down.

Absolutely. That's a reflection of their learning, isn't it?

Whether it's useful at all to have a snapshot of individual people's knowledge before they've had the chance to start their formal learning, I guess, was my question.
It comes back to what you want to use the assessment for, doesn't it? If you want to use it to inform your practice and the way that you're going to tailor activities towards the level of development of that child, that early phase is really helpful in doing that, and then of course if you are looking at progress from that, you're going to see the progress that they've made during school time. If you leave it too late, you're not capturing the amount of progress that they've made.

Can I just add to that? I agree with everything Christine said, but your suggestion that perhaps it would be better to wait until later on, when children's knowledge, understanding and skills will have evened out a bit, is a view that the data actually doesn't support. What we see in our data from the first year of implementation, and consistent with what was found in the SSLN, is that the gaps between children's knowledge actually increase over time; they don't decline. So getting a good measure of where children are early on matters.

Individual people shift around, and there's a huge variation in individual performance in that time, so is something of particular interest happening in that period of change for individuals, or does it follow enough of a pattern to make that a useful measure?

Well, I guess what I'm saying is that, overall, looking at aggregates, we see gaps in skills, understanding, capacity and attainment. For the individual child there are different trajectories, many different trajectories of growth, and work that ACER has done indicates that there are up to six years of difference in attainment within any one year group. That's something that I guess we'd like to minimise to some extent, as far as possible, but I think we need to recognise that children are at different stages and develop in different ways. Can I just add one more point?
Just the last point on your line of questioning there: we don't want to be assessing children on their first day in primary 1. That's not what I'm saying. We want to give them a little bit of time to acclimatise to the new school and the new classroom, to settle down in that respect, but not wait too long for that to happen.

A very quick supplementary from Mr Greer on additional support.

It should be very quick. Just to Juliet, and hopefully this is a yes/no question: are these tests a diagnostic tool for additional support needs? Are they designed to be a diagnostic tool for additional support needs?

No, they're not.

Thank you. Can I move on to Dr Allan?

One of the messages for me, without putting words in your mouth, coming across loud and clear from what's been said so far, is that assessment isn't anything new, and neither are some of the temptations or risks that others have identified to do with problems associated with assessment. What we haven't perhaps talked about so much yet is where these assessments fit in with the Curriculum for Excellence. You've already described how it's a multi-layered curriculum we have, and it's not a statutory curriculum, but can you say anything about the content of these assessments as it measures up against what we're trying to teach and to measure in the Curriculum for Excellence?

Can I answer that one?
Yes.

So the brief for the SNSA is literacy and numeracy only; it's not the whole of the Curriculum for Excellence, which has many other facets, and even within literacy and numeracy there is no attempt to cover every aspect of the Curriculum for Excellence. I think we have to be perfectly frank about that and acknowledge it. For instance, engagement in reading is something we can't hope to assess in the kind of assessment that the SNSA is. The benchmarks that were published in draft form in June 2016, I think, and then again in a finalised form in August 2017, are the basis for the development of the framework for the assessments, the blueprint. We've taken, in consultation with the Scottish Government and Education Scotland, key organisers within each of numeracy, reading and writing, and shaped the assessment around those organisers, and every item in the assessment has been aligned with one of the benchmark statements. So it is definitely a Scottish assessment; it's designed for Scotland. As you know, the original items came from an international pool, but they've been reviewed, in some cases modified, and in other cases items were rejected because they didn't align well with the benchmarks. So the assessment does address aspects of the Curriculum for Excellence literacy and numeracy benchmarks, but there's no attempt to say that it covers every aspect, and of course it's only one ingredient in teachers' evaluation of how children are coping with the curriculum. So it's got a particular focus, but it is a focus that's matched to the Curriculum for Excellence.

I'd be interested to hear from ADES about that. I know that ADES has made a submission around the benchmarks themselves and how you think that the assessments will measure up against those in the future. Is there anything you want to add about that?

I mean, certainly we take as read what has just been said in terms of the design of the assessments. As I understand them, they are measured to, or designed to make sure that they
are reflective of the benchmarks and the experiences and outcomes in Curriculum for Excellence, and therefore teachers can use them to confirm, or otherwise, their own judgments about children's progress with the curriculum. So I don't know that I've got very much more to add to what Juliet has already said.

So would your experience be that, as the assessments were being devised and developed, the expertise or the views of teachers themselves were fed into the process? Is that something that you are content took place?

I'm very content that that took place. There was quite a bit of evidence gathering, and people within East Renfrewshire, other officers within East Renfrewshire, were heavily involved in supporting both ADES and the Scottish Government with their brief before the tendering document went out, for instance, which ACER won. So I'm very content that we've had input to all of that, and continued dialogue indeed around improving where there were aspects to improve, and we've found ACER to be very open to that and indeed willing to work with us and listen. I think what Juliet said earlier about then taking information from teachers' feedback on their experiences and youngsters' experience of them is to be welcomed.

That leads me to ask, in that case, whether the people on the panel feel that the assessments we're now talking about, the standardised assessments, are a better fit with Scotland's Curriculum for Excellence than the kind of assessments that were taking place before. I see Professor Ellis nodding her head; I wonder if she has a view.

I think they are. I think that they are measuring a broader range of skills, and I think that, if we get the right sort of ethics debates around them, we can help teachers, politicians, the media and parents to understand that an assessment score isn't necessarily about some children being more able than others; it's simply about the sort of experience they bring, and
Curriculum for Excellence is very, very much about working to the needs of children in a rich and inclusive way. So I think that they are better than most of the assessments that I saw happening, both local authority internally devised assessments and published ones.

If I can come back in there: we redesigned our own internal standardised assessments to fit with the experiences and outcomes as they were published quite a number of years ago. What we also do is ask our teachers to make judgments about children's progress, so that judgment can also be benchmarked against the outcomes from those standardised assessments. We took those steps, so I'm not sure at this point whether SNSAs are giving us any more information other than that ability to look at how they're doing against a national benchmark, if you like, but I can't speak for what used to happen in, I think, around 24 local authorities who used the Durham assessments.

One advantage is timing, in that teachers get instant feedback. I do think there is a bit of time that teachers need to get their heads around it. I can't remember which submission it was, but one of them said it gives you a lot of information at a very granular level and teachers don't have time to look at that, and part of me was thinking, well, suppose your doctor said, "Actually, I haven't got time to look at the granular level of your blood tests." You might not want to look at that deep granular level for every single child, but if my child is not being well served by the curriculum, I actually do want the teacher to have that data that she can go into and actually look at and interrogate and think about in lots of different ways. I think one of the things that any assessment does is get teachers to look at lots of different kinds of data about what progress might mean for individual children. Making progress as a reader is not a very linear thing; it's working to a broad horizon, and there are lots of different pathways you can
take to that. So I do think teachers just need that time to look at it, think about it and learn how to use it in the context of Curriculum for Excellence, but I think it's got the potential to do it if there's the professional and political will to let teachers do it.

Finally, to pick up on a couple of coded references, or not so coded but polite references, about the way that we as politicians talk about these assessments: are there any lessons for the body politic in Scotland as to how we talk about these assessments and how we promote public understanding of what they are and what they are not? Does anyone want to take that one?

I would welcome cross-party, collaborative, professional consideration about how local authorities, schools, teachers, parent groups and the media could work together to design a system that actually makes the system work well for children. I think there are issues about making sure it's not used to classify and grade teachers or classify and grade schools, because those have negative effects on children, but ultimately it has to be the thing that Mary started off by saying: it's about teaching and learning, and it's about empowering teaching and learning. I suppose I would like politicians to talk about more experienced and less experienced children rather than more able and less able. I would perhaps like a little bit more focus on what's happening in the system at the moment that isn't particularly perfect and isn't particularly desirable, and to have discussions that are quite grounded in how we make things better rather than this is right, this is wrong, this is good, this is bad.

Mr Scott, a very quick supplementary.

I'd like Liverpool to win the league, but we don't get everything we wish for. Can I just quote, on this very area that Alasdair Allan has been rightly asking about, the EIS submission to us today? It tells us that Scottish Government officials, when introducing testing, sorry, a terribly
pejorative word, to Scottish education, said, and I quote, that "the assessments were said to cover at a maximum around one tenth of the skills and knowledge expected at each CfE level in literacy and numeracy". Do you recognise, and I suppose this is maybe a better question for a director of education, do you recognise that as the reality? How many tenths does it cover?

I don't happen to be able to answer that; I don't work with the curriculum day to day, so it's maybe an unfair question about the one tenth.

But is it fair? The EIS is saying here, and this is the point about how important these are, which I think Alasdair has been driving at, how seriously we in politics should take these assessments. The EIS is saying here, and it rather supports your contention that we're all getting too obsessed by them, that only one tenth of the skills and knowledge gains at each CfE level comes from these assessments.

I think the point that the EIS is making, though, is that we shouldn't blow this out of all proportion, and my advice earlier was about making sure that the assessments are allowed to be used for their primary purpose, which is about monitoring that the system is working well. Yes, it gives you as politicians information about whether attainment is growing and the gap is being closed, but more important is that it should inform the professional judgments of teachers, and in that sense what we need to do is to make sure that we give teachers confidence that we will allow them to make those judgments, and expect them to use that information in a professional way. So if that one tenth is measuring the most important skills, which has been built into the design to make sure that those are the key elements a teacher would want to see children making progress with, then yes, that's enough, I would say. But we all have a responsibility, and I have said that quite often this morning, about making sure that we allow these assessments to give the public at large more
confidence in the system itself and confidence that teachers are getting it right, and essentially that takes you back to that primary purpose of informing teacher judgments.

Okay, thank you. Professor Merrill.

Actually, I think that's quite a helpful point for them to make. An assessment will never assess every single aspect of learning in one area of the curriculum, so I think it's a really healthy way to look at it, actually, and also, if you are saying that openly, it might prevent a narrowing of the curriculum down to just those aspects. They're important aspects, and we can use them to monitor progress, but don't let anyone think it's the be-all and end-all, like you were saying, and let's try to prevent that narrowing down to only learning those things.

Professor Mendelovitz?

I'd just like to go back to Alasdair's question about what you could do. I would hope that Scottish parliamentarians would grow to feel quite proud of this assessment. It's got a lot of features that will be admired internationally, and I think Scotland should be shouting about some of its excellent features: for instance, the way that it values very explicitly teacher professional judgment, in combining the results of this assessment with teachers' own judgments about children's progress, and the fact that it's an online and adaptive assessment. I don't think that there's any other national assessment yet that has those features. There are some attempts happening in my own country, for instance, but they have not been as successful, just technically, as the introduction of this assessment. And the fact that, notwithstanding some caveats or questions about the accessibility features that a colleague here pointed out before, the assessment has tried to take accessibility into account and is designed to be as inclusive as it is, is also a very important feature that I think Scotland should be proud of. So I would like you as a committee, and your colleagues in the Parliament, to really take pride in what's been achieved so far,
not that there's not room for improvement, there is, but it's been quite an achievement so far.

Thank you. I'm going to bring in Ms Gilruth.

Good morning to the panel. I'd like to pick up on Alasdair Allan's point with regard to benchmarking the assessments against Curriculum for Excellence, and it's a bit of a historical question to start with. Professor Merrill, perhaps you can help with it: were the CEM assessments that were used previously benchmarked against 5 to 14?

We did a prediction of your 5 to 14 level on the basis of the CEM assessment, but it was a percentage prediction rather than a direct link to a level, and that's an important thing to note, because it was one assessment predicting how you were going to do on another one; we wouldn't give a one-to-one mapping of that. In terms of the content as well, we worked with Fife authority teachers and with authority staff there many years ago to make sure that we were aligned to the Scottish curriculum.

And the CEM assessments themselves, did they happen every year, with every stage? Is that correct?
They were available for use every year, but different authorities and different schools chose to use them with different year groups.

The reason I ask, obviously, is because I'm a Fife MSP, and you might be aware that Fife voted recently to scrap the SNSAs and revert to CEM and test every year, so they would be assessing more now than they would have been had they gone with the SNSA; it's just a point locally. I'd like to pick up on your point with regard to educating teachers to use data well, which I thought was a really interesting one, because I think historically in Scottish education data has been used by management in schools, so principal teachers, deputy heads and head teachers. You spoke as well about the SSLN: when I was teaching, kids would be taken out of my class, I had no idea where they were going, and then they would suddenly appear back in the classroom, so that data to me as a practitioner was really not useful; it was not great in terms of informing my understanding as a practitioner. I also note that the OECD's 2011 review said that, without adequate training, teachers may not have the assessment literacy and ability to appropriately interpret results and to identify areas where curricular strategies may require adjustment. So I'd like to ask a question about teacher training and what kind of teacher training you think might be required for teachers' understanding of this data.

My experience of working with teachers is that the most useful education they find is when they're actually working with real data from their children, so I think that there is a job of work to be done where teachers don't just learn about assessments in the abstract but actually learn to navigate what's in front of them, and how and when they can take a deep dive and actually look at the granular information that's being provided.
I think one of the advantages of this assessment is that teachers do get the result instantly, and they can click through and see the different ways that particular children responded. So, yes, I can't remember what your question was.

It was about training teachers to use the data. I'd quite like to bring in Mary Shaw here, because I wonder whether there is a consistent approach nationally. Could I just get a view with regard to how this is monitored at local authority level, and the training given to teachers to ensure that there is parity of access, to train them all up in the same way and give them a good understanding? There is a bit of a gap at the moment between what this data will provide teachers with at the end and how it will help to inform their practice.

Scholar is already providing that, and I think the way that is organised is that we each have a link person from Scholar that we can have a dialogue with to say this is where we're at. That training is for head teachers and deputes, and I take your point about it not always being available to class teachers; the expectation is that it will be cascaded, especially to those staff who would be using it, who are in P1, P4 and P7, although they wouldn't always be at those stages, of course, in primary schools in particular. Secondaries probably have more experience in using attainment data historically, obviously. We in East Renfrewshire are very pleased, and I'm sure that my colleagues within ADES are also appreciative of the information, and we are able to tailor it: our conversations are along the lines of, our staff are already well versed in using such and such, so let's see if we can bring about a more granular use of it, and so on. So I think that is already in place, and it's up to us as local authorities, of course, to make use of it.

I think it might be worth going back to Scholar and asking for some things that are stand-alone, things that teachers can download off school premises
outside school time, and actually get the information that they need; and also having things like checklists for local authorities, head teachers and teachers about what they know and what they don't know, to identify where the gaps might be for assessment. The roll-out, in terms of both initial teacher education and continuing professional development, hasn't been quite as proactive as it could have been, but I think it hit the schools at a really, really busy time, with PEF funding and a whole load of other things going on, so I think take two now is our opportunity to actually improve that: growth mindset.

In my list of things that Scotland should be proud of, another one was what I mentioned earlier, in that a training programme was initiated at the beginning of the assessment programme, which I think is a really innovative move on the part of the Scottish Government. What we are doing is developing the professional learning programme as the SNSA matures. In the first year, a lot of it was about just how do I access the assessment, how do I assign logins for the children, how do I download the reports, the more technical dimension; increasingly the emphasis will be on interpretation of reports and what do I do with the information that I've got from them. Those programmes are being developed at the moment. There are face-to-face meetings, which are extremely important and probably more fun than sitting and looking at a webinar, but there are webinars as well, there are PowerPoints, and there is text guidance on the platform for teachers to help them become familiar with the assessment and how it might be used.

I'd like to talk about something you mentioned right at the start: the questionnaire that will be going out to teachers. Will that questionnaire look to, I suppose, consider their experiences? One of the things that has certainly come up in conversation with a lot of my friends
who are still teachers is the provision of ICT in schools and the lack of opportunity to access appropriate ICT to deliver the assessment. Now, that's not a critique of the assessment itself; it's a critique of the provision of ICT. Is that something that you're going to consider?

There is a section of questions about the ease of implementation in the classroom: whether the school used the diagnostic assessment before kids took the assessment to make sure that they had the appropriate level of equipment, how easy it was to get the children to log on, and all those questions, as well as the ones about the quality of the reports.

Mary Shaw, is that something that ADES are looking at, that equality of provision across the country?

It wouldn't necessarily be something that ADES would look at in terms of what the provision of ICT is in each individual local authority, but I think it's an important point that, where wi-fi, for instance, isn't available, it does make those assessments more high stakes if children have to be taken to ICT suites to undertake them. We therefore need to be mindful that, where possible, it would be best done in tablet form or within the classroom, to make them as low stakes as we possibly can, and that's certainly the advice that we would be giving to schools.

Before I move on to my final colleague, can I just ask a little bit, so I've got a better understanding of the use of standardised testing previous to this? We've talked about the CEM tests, but my own son went through 5 to 14 and I remember a bit about CAT testing in schools. To what extent were these being used by schools? From understanding what you said, Mary, East Renfrewshire has its own developed model. Is there any other local authority that has its own developed model, or is everyone else using commercial tests, and will the introduction of the new test replace the requirement or necessity for those CEM or CAT tests to take place?
My understanding was that there were 24 local authorities that used the Durham assessments; I think that there were 31 local authorities in total that used some form of standardised assessment, although obviously not all the same ones. Certainly, the publicity and the advice around the introduction of the SNSAs was that they were going to save local authorities money because they would not need to continue with those. I cannot speak for other local authorities as to whether they have stopped using the assessments that they used previously to the SNSAs, but certainly I would think that the intention would be that you would not over-assess children. We are involved in helping to shape, and continuing to shape, the SNSAs until we get them into a form that will be able to replace the assessments that we have.

Does anyone else want to comment on that? That is fine; I am going to move to my final colleague, Rona Mackay.

I am conscious of time, so I will keep it very brief. I was going to ask what you thought could be done to maximise the potential of these tests, and I am really interested in something that Professor Ellis said earlier on about the health and wellbeing of the child; clearly, these tests do not provide that kind of data. Is that something that you think should be done? Could it be done easily, by adding a few extra questions to the test?

Schools do collect health and wellbeing data. A lot of them will use the SHANARRI wheels, and they will ask children about their friendships, how they feel about the curriculum, how they feel about different aspects of learning in the curriculum, how they feel about coming to school, all sorts of things. That data exists.

Sorry, where does that data go? Who sees it?
That will be kept at school level. Even in local authorities where all the schools are doing that, what we found was that, in the schools where the head teacher actually had all that data and talked about it together at their progression meetings with teachers, about the planning for the class, what the class needed and what individual children might need, the children seemed to be happier and to make better progress than where that data was kept quite separate and discussed in separate meetings.

Okay, so do you think that this should be included as part of the assessment?

No, it should be kept separate. There are some things where you need to keep it simple, and good enough is good enough. There are points where you actually just have to say to teachers, "This is really complicated. You are the professional; you pull it all together." But we need to be learning from the schools that seem to do that really, really well, and be promoting that as a possibly good way forward for others to follow in their data use.

Okay, thank you. I am afraid to say that I have a final comment from Johann Lamont.

You are quite right to be afraid. Forgive me, but thank you very much for allowing me back. It is just, very briefly, to ask a question about the suggestion that this is a political battle, an ideological battle and so on. Would you accept that the debate is really about the balance between the benefits of these tests and the consequence of the costs, to local authorities or to individual schools, of running them? The evidence that the committee has gathered around the experience of teachers, and of parents or carers of young people with additional support needs, has been that those needs are not being met currently. There are a number of reports that suggest that schools are under huge pressure and that there are fewer and fewer support staff to support teachers in doing their job.
I wonder whether there comes a point where the consequence of running these tests is not so much that more resources are identified and brought in, as was suggested by a colleague from ADES, but that resources have been taken away in order to deliver the tests. Do you accept that that is what has been said by many people at school level: that the consequence of running the tests is that support staff have been taken away in order to do that? If you believe that to be the case, are there other policy choices? In terms of a school exercising leadership, would it be reasonable for a primary school head to say that the consequence of running these tests, which may theoretically be good, is that there is less support for young people in the classroom, and that, if that is the case, they will exercise leadership and say that the tests are not a priority?

I think that that goes back to the ICT question about whether there is an opportunity for the test to be administered in a classroom setting without taking children to another setting, and about who would do that. I am not sure that I have heard comments about the tests being linked to pupil support assistants. As well as the pupil support assistants who are allocated to schools for the purpose of additional support needs, there will be other pupil support assistants who would come under the category of, for instance, classroom assistants, as opposed to those who are there for particular children.

They are all now categorised, in that respect, as classroom assistants, as we have learned from Mr Sussians. It is a genuine question.

I am not sure that that is accurate, actually. I am told anecdotally by people who work in schools, primary teachers in particular, that additional support staff, who are under phenomenal pressure, are saying that these tests are bringing added pressure and taking them away from their core job of supporting young people in the classroom.
If that were true, and if we could evidence it in a way that would satisfy you, would your view be that you should be making different policy choices, and that your first priority would not be managing these tests but would be to ensure that schools are properly resourced to support young people in their learning, particularly young people with the additional support needs described in the reports that have been given to this committee?

I have to say that I think that that is an unfair question. To ask us to weigh the information that you would gain from such assessments against supporting particular and individual children; it should not necessarily be an either/or.

It should not be, but if somebody tells you that, at a local level, in classrooms, in schools, in primary schools, people are making that choice, would that mean that you would want to reflect again on the importance of, and the priority given to, the policy?

Being a solution-focused person, I would find a solution.

Would that involve further resources?

I would find a solution.

Which may include further resources?

I would find a solution.

Does anyone else have any final thoughts? On that basis, I thank everyone on the panel for their attendance this morning. It has been quite a long session, and we really appreciate you coming along. I am going to suspend for five minutes, as we have to go back into private session and allow the witnesses to leave. Thank you.