Welcome to the sixth meeting of the Education and Skills Committee in 2019. Can I remind everyone present to turn their mobile phones and other devices to silent for the duration of the meeting? Agenda item 1 is the Continuing Care (Scotland) Amendment Order 2019. It is a draft piece of subordinate legislation that is subject to affirmative procedure. Information about the instrument is provided in paper 1 of the committee papers. There are two agenda items regarding this. The first gives the committee an opportunity to ask questions of the minister and her officials. After that, we will turn to agenda item 2, which is a debate on the motion that is published in the agenda. I welcome Maree Todd MSP, Minister for Children and Young People; David Hannigan, team leader of the looked-after children unit; and Elizabeth Blair, senior principal legal officer, children, families and education division, Scottish Government. I invite the minister to make an opening statement. Thank you for the opportunity to introduce the draft instrument before the committee today. The continuing care amendment order amends article 2 of the Continuing Care (Scotland) Order 2015 with the effect that, from 1 April 2019, the higher age limit for eligible persons specified for the purposes of section 26A(2)(b) of the Children (Scotland) Act 1995 is increased from 20 to 21 years of age. That means that, from 1 April 2019, an eligible person for the purposes of the duty on local authorities to provide continuing care under section 26A of the 1995 act is a person who is at least 16 years of age and who has not yet reached the age of 21. By virtue of article 3 of the 2015 order, the local authority's duty to provide continuing care lasts from the date on which an eligible person ceases to be looked after until the date of their 21st birthday.
This draft order is essentially a procedural amendment to increase the higher age limit for eligible persons from 20 to 21 years of age and is the final stage of the agreed annual roll-out strategy, which has increased the higher age range in step with the first eligible cohort of 16-year-olds, allowing all young people who cease to be looked after on or after their 16th birthday to remain in continuing care up to their 21st birthday. The draft order will revoke the Continuing Care (Scotland) Order 2018. Continuing care and the accompanying secondary legislation stress the importance of encouraging and enabling young people to remain in their care setting until they are able to demonstrate their readiness and willingness to move on to interdependent living. Interdependence more accurately reflects the day-to-day realities of an extended range of healthy interpersonal relationships, social support and networks. Continuing care undoubtedly normalises the experience of care-experienced young people in kinship, foster and residential care by allowing strong and positive relationships between young person and carer to be maintained and by reducing the risk of multiple simultaneous disruptions occurring in their lives as they approach adulthood. The recent consultation responses have shown that there continues to be widespread support for the policy of continuing care. However, we are listening, and we are aware that implementation has not happened as intended in every part of the country. There appears to be some inconsistency in the approaches being taken by local authorities and, as such, variation in the support being offered to young people leaving care. From engaging with the sector, we know what issues are being faced and where the key barriers are. We are working together to broaden our evidence base and to gather examples of good practice where they exist, in order to share knowledge and understanding.
We will continue to work collaboratively with stakeholders to consider all the evidence, to explore how best to support implementation and to remove any unnecessary barriers, so as to ensure that all care leavers are given the support that is best suited to their individual needs. I will be happy to take questions. Thank you. Minister, we do have some questions. I will move to Ms Lamont. Just two questions, really. One probably relates to the key issue, although not to the actual instrument itself. You said that there is some evidence that this might not be getting implemented everywhere in the same way. In your evidence to us, you say that you are awaiting the latest publication of national statistics from social work on children looked after in Scotland, which is due on 27 March, to assess whether the quality of data will give us an accurate indication of uptake. That sounds as though we are not even at first base in terms of knowing whether the policy is a reality in the real world. We do not even know whether the figures that we are gathering are going to tell us what is happening. Do you have any idea of what proportion of young people are remaining in foster care or continuing care under the previous orders at all? We have some idea. A continuing care category was introduced into the children looked after in Scotland data collection in 2017. The first full year of data on continuing care has been collected for 2017-18, and it is currently undergoing quality assurance. That is what will be published at the end of March, subject to the quality assurance process. We will publish aggregated data on continuing care as a destination for those who are ceasing to be looked after. We are exploring the feasibility of publishing continuing care figures with a breakdown by local authority.
The information provided through that data collection is only one piece of the puzzle, and we work regularly with local authorities and stakeholders in the care sector to improve the collection of information around the uptake of, and eligibility for, continuing care. Do we know roughly what proportion of young people are remaining in continuing care at 17, 18, 19 and 20? At the moment, we do not have actual statistics on that. We are waiting to receive the quality-assured data that has been gathered over the past year. Officials have been working with organisations such as CELCIS to try to improve the evidence base that we have collected and will be collecting going forward. We are always looking to improve on that. We do not have statistics on the exact numbers in continuing care at the moment. With respect, you are looking to improve on nothing, because we have not got anything so far. I understand that it is a policy that everybody agrees with, but the legislation is the easy bit, and we can sit here every year and increase the age. However, the reality on the ground may be very different. I am not sure whether you were obliged to bring the order forward before 27 March, although you may have been, but I think that there is a question about the gap between what the legislation that we all agree with is asking for and whether the same effort has been put into making sure that it makes a difference on the ground. I do not know whether we can get a progress report on that. The other question that I wanted to ask, which I think is related, is about the review of support for kinship care and foster care, and the whole question of parity. I know that you mentioned the review in your letter. Those things are interrelated because, if somebody is going to continue to be supported, the financial issues around that are connected. Do you think that it is reasonable for a review group to meet only three times in 15 months on an issue that is so important?
We have accepted the recommendations that the review group made. I am currently working with COSLA and other partners to consider those recommendations fully and to see what we can do. I am comfortable with how the review group was constituted and how it met, and I am keen to work with COSLA to see how we can take that forward. They have not completed their work. Do you mean the national review of kinship and adoption care allowances? That was undertaken by a review group, which published its final report and recommendations last September. It reported last September, but we have not got anything in front of us that shows that there is progress. That again is a fundamental issue that I think people across the Parliament supported: that there should be parity of esteem and support for kinship care and foster care, and, of course, that is all related to whether people can remain in continuing care. The group met three times from November 2017, and maybe at some point in the future we will do something about it. Do we have a timescale or timetable? A lot of people campaigned very hard to get recognition of kinship care. Do we have a timetable for that? I am not sure whether we are talking about the same thing. We are talking about the review of foster care allowances. I can certainly write to you and update you on progress with that. At the moment, my officials have met officials from COSLA to try to consider the recommendations and see what we can do. I can certainly write to you and update you on that. You said that they are going to submit a set of options for your consideration in the summer. Do we even have an end point? Do we know the point at which we will be able to say that the scheme will be up and running? You could work back from that. The foster care allowance is, as you will understand, a decision for local authorities to make. I will work very closely with COSLA and local authorities.
Surely the logic of that is that you would not bother having a national review; just be honest about it. We have agreed, across the Parliament and elsewhere, that there should be parity between foster carers and kinship carers. I understand that the technicalities have to be worked out through COSLA. However, you are going to be agreeing an option; you are going to be given options in the summer. Do you have an idea of when the scheme will at least be clear, so that local authorities can adhere to it or not? I can certainly write to you and update you on that. I do not think that there are any further questions. I am sorry, Rona Mackay. I wonder whether you could clarify whether the legislation extends to young people in secure residential care. I was visiting a care unit in my constituency recently, and I was concerned to hear that one young man left at 16 and was going straight into a homeless unit. I just wondered how that was going to fit. The duty to provide continuing care does not apply if the accommodation that the person was in immediately before ceasing to be looked after was secure accommodation. For a number of reasons, secure accommodation is not an appropriate setting for continuing care. When external factors outwith the young person's control make a continuing care placement unavailable, local authorities are expected to discuss and agree alternative support measures that meet the young person's needs. It is the local authorities' responsibility. I apologise, Rona Mackay. Agenda item 2 is the debate on the Continuing Care (Scotland) Amendment Order 2019. I remind the officials that they are not permitted to contribute to this formal part of the debate, and I invite the minister to move the motion. Are there any contributions from members?
I just wanted to make the point that the easy bit for us is accepting what is a good policy, but I am concerned that we do not seem to be putting the same amount of effort into establishing whether it is actually making a difference on the ground, both in practical terms for those who are supporting young people and in the kind of example that Rona Mackay gave of what it might mean in certain circumstances. I would certainly look for reassurance that more is being done than simply passing the order, because if all we do is pass it, it will make us feel good, but there will be nothing on the ground showing that there is a huge difference in young people's lives. That is problematic. I recognise the work that the Scottish Government is doing around the care review, and I have respect for the work that has been done there, but I think that we would want to be given reassurance that the two things have been brought together. Are there any other contributions from members? Would the minister like to respond? I can assure you that I, too, want to ensure that the policy is properly implemented and that it makes a difference on the ground. My officials are working very closely with stakeholders right across the country to ensure that that happens. It is a very high priority for this Government. I now move to the question that motion S5M-15747 be agreed to. The committee must report to Parliament on the instrument. Are members content for me, as convener, to sign off a report to the Parliament on the instrument? That concludes our consideration of agenda item 2. I thank the minister and her officials for their attendance at the committee this morning, and I will suspend the meeting for a few minutes while the witnesses change over. We now move to agenda item 3, the Scottish national standardised assessments inquiry.
It is our final evidence session on the inquiry, and I welcome to the committee this morning John Swinney MSP, Cabinet Secretary for Education and Skills; Andrew Bruce, deputy director of strategy and performance; and David Lange, professional adviser on improvement, both officials from the Scottish Government. I invite the cabinet secretary to make an opening statement. Thank you very much, convener. I welcome the opportunity to contribute to the committee's inquiry on the Scottish national standardised assessments. When we published the draft national improvement framework in 2015, the Scottish Government set out that the SNSA results would help to inform teachers' professional judgment of whether children have achieved curriculum for excellence levels, which remains our key, annually published measure in the national improvement framework; those are judgments that take into consideration the full range of assessment evidence available to teachers. We have been clear throughout that the SNSAs would be introduced for diagnostic purposes, to provide teachers with objective, nationally consistent information regarding individual learners' strengths and the areas in which they might benefit from further support. The national improvement framework in its entirety is the means by which we wish to review performance at every level in Scottish education: national, local authority and school. In response to a widely accepted recognition that we needed to improve the data on Scottish education, the national improvement framework takes a holistic look at the education system, bringing together evidence and information from all levels of the system and across all aspects that impact on performance. The idea is that no one aspect takes precedence; rather, it is the way in which different parts of the system interact and connect with one another that drives improvement.
Within the assessment of children's progress driver itself, children's progress is considered in its widest sense, from development in the early years right through to their destination on leaving school, recognising the primacy of health and wellbeing throughout. I would like to highlight to committee members the value that SNSAs bring to Scottish education, to classroom practitioners and, most importantly, to children and young people. Assessment has long been recognised in Scottish education as a core aspect of daily learning. The SNSAs were designed specifically for the Scottish curriculum and, as such, should complement rather than detract from core learning. The information that the assessments generate is used by teachers expressly to direct next steps in learning to best effect for the individual learner. We know that the overwhelming majority of local authorities have been using some form of standardised assessment for many years. The SNSAs remove the need for local authorities to buy in those various assessments. They provide, at zero cost to local authorities, a set of nationally consistent assessments, which are, for the first time, aligned to curriculum for excellence and linked to the literacy and numeracy benchmarks. Comments have been advanced that the SNSAs will tell teachers nothing new. One perspective on that is that it is a strength rather than a weakness, indicating both that the assessments themselves are correctly pitched and that teachers' judgments of individual learners' progress are predominantly sound. Equally, comments have been made about the value that SNSAs provide in ensuring moderation of performance across the system, resulting in greater confidence within the profession and vital diagnostic information when aspects of assessment outcomes suggest greater or lesser capacity than a teacher had expected.
It is certainly the case that SNSAs do not cover all aspects of literacy and numeracy in the curriculum, nor have they been designed to do so. However, attainment in literacy and numeracy has been identified as one of the key priorities within the national improvement framework. Literacy and numeracy are the core building blocks for learning across the curriculum and, as such, it is imperative that we identify early on in a child's learning journey any obstacles to their progress in those areas. Committee members have expressed great interest in returning to the Scottish survey of literacy and numeracy, whether as was, as an enhanced version or run in parallel with the SNSAs. I understand the attraction of that objective. It was, after all, the Government's own starting point. As the Scottish Government's evidence to the committee has outlined, we undertook a review of the SSLN in 2014. That review concluded that scaling up the survey model to produce local authority-level results was not a viable or realistic option, essentially because that was not the envisaged purpose of the survey at its inception and the design features of the survey did not lend themselves to that kind of upscaling. In short, it cannot address the deficit in our data that was identified by the OECD and that the CFE levels data is now designed to address. It remains a strong belief of the Scottish Government that the SNSAs are the right thing to do, that they will bring value to the broad general education in Scotland and, fundamentally, that they will bring value to children and young people, enabling all learners to reach their potential. That is not to suggest that the assessments, at this early stage in their implementation, are perfect. Our contract with the assessment providers has built into it the need for continuous improvement. The Scottish Government is alive to the need to continue to work with practitioners to seek feedback and to identify means of enhancing the system moving forward.
The primary one practitioners forum, the independent inquiry being undertaken by David Reedy, and feedback from schools and teachers and, importantly, from children themselves will all help to inform our next steps in that regard. We will strive to improve the operation of, and communication around, the SNSAs, and I look forward to the dialogue with the committee as part of that process. Good morning, cabinet secretary. In taking evidence, the committee has asked most witnesses the same question: what is the most important thing in designing a scheme of assessment? Witnesses have given much the same answer, put perhaps most clearly by Professor Merrell of Durham University, who said that the first thing is to be absolutely sure of what the purpose of the assessment is. I would like to ask you whether you feel that you have always been clear about what the purpose of the SNSA is. I think that that has been the case. As I said in my opening remarks, the purpose of the SNSA is to provide a diagnostic tool to members of the teaching profession, which will allow them to enhance their confidence about the moderation that they are applying in relation to the performance of an individual child, and to identify any particular diagnostic or performance issues that emerge from the SNSA. The purpose of the SNSA, in what has in my view been a clearly expressed Government approach, has been as a diagnostic tool within the education system. The assessments are formative, rather than summative. Some of our witnesses—I think that Professor Bloomer, for example—were strongly of the view that the purpose of the national standardised assessments has changed: that they began as a summative assessment that would give information about the performance of the education system, but then changed into a formative diagnostic test. I do not think that that is the case. I have read the EIS evidence on this, and I see the point that they make.
The EIS evidence essentially says—although I disagree with its starting point—that the assessments started off as summative and ended up being formative because of the EIS's intervention. That reinforces my point that, whatever they are now, the EIS and I agree that they are formative assessments. I do not think that the Government at any stage suggested that they should be summative. What the SNSAs contribute towards—again, I covered this in my opening statement—is the formulation by a teacher of the information on the achievement of a CFE level, which, of course, is aggregated at a national level. However, that is a separate piece of data that is not the SNSA data itself; the SNSA is only one aspect of the information that is drawn together to formulate that assessment. Where the EIS makes a fair point is that, in the construction of the national improvement framework, it was influential in ensuring that we ended up with a framework that is based fundamentally on teacher professional judgment, which is what we have, because it is an aggregation of the achievement of levels data that is contributed each year in June by individual teachers around the country. I was looking back this morning at an exchange at First Minister's question time between Liz Smith and the First Minister, when Liz Smith asked about the removal of the SSLN and how we would have national-level information about the performance of our education system. The First Minister said that the SSLN had been replaced by the SNSA, which she argued was superior because it was not a sample but a census of all pupils. That completely contradicts what you have just said. The argument that I am advancing is that that picture is constructed by the levels data. I am sorry, but the argument that the First Minister was advancing was a different one.
If I go back to what the First Minister said when she launched the programme for government in 2015, she was crystal clear about what the purpose of it all was. If Mr Gray gives me a second, I will get the quote in front of me. The First Minister said that the assessments will inform, not replace, teacher judgment; that they will provide robust and consistent evidence to help teachers to judge whether a child is achieving the required level of curriculum for excellence; that the new assessments will introduce greater consistency to curriculum for excellence; and that they will provide reliable evidence of a child's performance and progress, but will not be the sole measurement. That is what the First Minister said on 1 September 2015, when she launched the programme for government, which included for the first time the commitment to SNSAs. I read that quote this morning, but I also read the quote from her from June 2018, in which she made it very clear that this is a replacement for the SSLN. I think that what the committee has found puzzling, and has asked some of the witnesses about, is how it can be both of those things, and the witnesses have been quite clear that it cannot be both of those things. I think that what is important is that we vest our analysis in the quote that I have just read out from the First Minister in September 2015, because what the First Minister was saying there is that the SNSAs will contribute to the formulation of teacher judgment, and the teacher judgment will then be aggregated into the achievement of levels data, which will be aggregated at national level. We will have a more comprehensive picture of the performance of young people within the education system and their achievement of levels than the SSLN, as a limited survey, could ever have given us. That is the squaring of the argument that Mr Gray is wrestling with.
I am not sure how we are supposed to know which quotes from the First Minister we are supposed to accept as correct and which ones we are supposed to discount, but let us leave that for another time. Does that not mean, then, that we now have no statistically rigorous information about the performance of the school system as a whole at a national level: information that was previously provided in a statistically rigorous way by the SSLN and has now gone? I think that we have a statistically rigorous analysis at national level of the performance of the education system because of the comprehensive data that we now collect on the achievement of CFE levels at early level and first, second and third levels, which are reported nationally and openly on an annual basis. That is a superior and more comprehensive analysis than an SSLN survey, which at most would identify the contribution of two primary four and two primary seven pupils in each primary school in the country and, if my memory serves me right, twelve pupils in each secondary school in the country. Sorry, did you say that we have national-level information through summing the SNSA results? No, through the presentation of the CFE levels data, which comes from teacher judgment. That is right, but our witnesses have repeatedly said that that introduces some questions of moderation or statistical rigour. Well, the SNSAs provide the statistical consistency and rigour to inform moderation across the country. But they cannot be summed nationally? I am not attempting to sum them nationally; that is what I said. They are diagnostic, but what they do is provide the moderation of performance across the system, which many teaching professionals have said to me has been a challenge through curriculum for excellence, and I think that the committee has taken evidence that substantiates that point.
Also, if you look at the evidence that Professor Paterson gave to the committee, his evidence was that the SNSAs are statistically strong and robust enough to provide that consistency of moderation across the system, which then underpins the teacher judgment that informs the performance data that we collect and aggregate nationally. Another aspect of the difference between the SSLN and the SNSAs and the CFE levels, which some witnesses brought up in the course of the evidence, was that the SSLN was undertaken across both state and private sector schools, but the SNSA happens only in state sector schools. Given that the Scottish Government's main objective in education is to close the attainment gap, can you explain how removing a cohort—largely, it is fair to assume, from the more privileged end of the pupil population—from the data that is being collected can help to do that? The purpose of the SNSA is to help to inform the diagnostic analysis of pupils' performance. That enables schools, and I think that there is increasing evidence of that now, to better focus on the individual challenges that young people face in achieving the CFE levels that we expect them to achieve. That puts a rigour into the education system that I think is of benefit to individual teachers in assisting pupils to improve their performance, and improving pupils' performance is our best means of closing the attainment gap. How will we know whether you are achieving that if a significant cohort at the privileged end has been removed from the data that you are collecting? We cannot make that comparison any longer. What the data will be able to demonstrate is what levels are being achieved by young people within the Scottish education system.
That will demonstrate openly what improvements in capacity and performance they have made, which, as a consequence, helps to show whether we are seeing the type of educational performance that we would expect to see from young people within the education system. That will not be affected by our removing from the data a part of the education system, that is, the independent sector. What we are doing, and the information that we present, demonstrates the extent of the attainment gap within Scottish education, and we then demonstrate year on year how we are closing that gap. That is done on comparable data, year on year, on a whole series of different indicators on which the Government has consulted and on which it has reached a position of broad consensus that the measures that we tabulate indicate the framework within which the closure of the attainment gap should be undertaken. We are excluding from that data a group who, I think it is fair to say, we would expect to be at the higher end of the attainment gap because of their privileged position in the private sector. They are no longer part of the data that you are collecting to make the comparison that you have just described. No, because the data that we are providing looks at the performance of pupils across the education system, where the data on CFE levels will be captured and monitored year on year. We will demonstrate what progress has been made in that respect. In the public part of the education system? It will be, yes. Would all of that not have been simpler if the Government had simply maintained the run of data provided by the SSLN, even as it introduced the SNSA, which, as you have explained at some length this morning, is for an entirely different purpose? We have to be mindful, and the committee hears this very frequently, of the burdens on the education system. The SSLN involved an additional burden on the education system.
What we judged was that we had to put in place a system that would enable us to provide for teachers the core diagnostic information, because all of the analysis and all of the guidance that we have looked at from the OECD—the OECD review of Scottish education in 2015—makes it crystal clear that we did not have sufficient effective data at our disposal in the Scottish education system to be focused on how we were improving performance. We had to put in place arrangements to enable that to be undertaken. Our judgment was that that would be best served by introducing standardised assessments and moving to the capturing of CFE levels data that would be valid and available in relation to all pupils in the education system. If we had continued with the SSLN, we would have been asking the education system to generate more information when we had already judged what would be useful and reliable to meet the test that was supplied by the OECD report on Scottish education, which identified deficiencies in the information that we gather and collect on the performance of pupils in the education system below the senior phase. No witness in any of the evidence that we have heard has ever described the SSLN as a burden on either the system or teachers. The EIS themselves made it quite clear that they felt that the SSLN was proportionate and relatively easy to administer. What we had to address was the fact that the OECD report indicated to us significant deficiencies in the information that was available about the performance of individuals within the education system. What the SSLN did not enable anyone to do—and I think that Professor Sue Ellis was really pretty emphatic with the committee about this point—was result in any scrutiny at local authority level of how to improve performance.
That is not a criticism of local authorities; it is just that they would not know where to start, because they would not know the extent to which their authority or schools within their authority were in any way contributing to the challenge that might be highlighted by the SSLN. We realised that we had to do something about that point, because there were not in place the mechanisms to assess that performance. We put in place a performance framework that enabled us to do that, but, in so doing, we did not want to add that on to existing arrangements that were within the education system, which is why we discontinued the SSLN. However, what the SSLN did provide was a degree of accountability, with statistical rigour, for government in terms of the performance of the education system. Have you not thrown that baby out with the bath water? No, we have enhanced that, because we have put in place a framework that requires the aggregation of judgments made by teachers around the country—professional judgments supported by the moderation offered by SNSAs—into a nationwide, individual-by-individual picture of the performance of the education system, which is much more comprehensive than anything that the SSLN was ever delivering for us. None of the witnesses that we have heard from believed that to be the case at a national level—even those who were very supportive of a lot of the other points that you have made about the SNSA. My contention is that we now have available to us, through the collection of that CFE-level data, a much wider cross-section of information about the performance of the education system. Crucially, what underpins it is the ability to deliver enhancements in the performance of young people, because the data is driven by the judgment of individual teachers on individual pupils—which the SSLN, whatever picture it presented, did not provide: an analysis or a route by which performance could be improved. 
The Scottish Government's own evidence to the committee says that the SNSA specifically is not for accountability purposes. However, when I ask you about the accountability of Government in delivering success and improvements in our education system, you say that that is provided by data that is underpinned by the SNSA. Do you understand why there is an ambiguity here about the purpose of doing the SNSA? There is no ambiguity, because I resolved that ambiguity for Mr Gray quite a while ago when I said that SNSAs were one part of information that was statistically reliable, consistent across the system, linked to the curriculum and linked to the benchmarks. All of those points were put on the record by Professor Paterson. That helps to inform teacher judgment in the creation of performance levels that give us a much more comprehensive conclusion. Let me finish my point—which gives us a much more comprehensive picture than was ever the case under the SSLN, and the ability to interact to resolve those questions. Cabinet Secretary, I am going to bring in Mr Scott. I am just trying to understand, Cabinet Secretary, your points about the collection of national data, because the evidence that the committee was given on ACEL—sorry, your point about the levels under curriculum for excellence—is that the data is, quote, badged as experimental. The report on the statistics stated that over a third reported that they were generally confident in the robustness of that data. By definition, two thirds were not confident. How can you be so confident that you are building up that picture? As with all data, there is a period that you have to go through to get to a point where statisticians will give you that appropriate standard. 
We are on course in that work, but it is underpinned by the robustness of the assistance to the moderation of performance, which I believe the SNSAs provide us in contributing towards enhancing teacher judgment, along with other interventions that I would add about the training and the development that is undertaken by Education Scotland and local authorities. When do you think the ACEL information will be comparable? In which academic year would you expect that to be the case? With the caveat that that is not a judgment that I can make, because it has to be made independently of me by statisticians, my expectation is that, probably in the next academic year, we would see that being the case. The 2017-18 report cautions against comparing the data across years, so you are expecting next year, 2018-19, or 2019-20 for the data to be... 2019-20 is when I would expect the data to be comparable, but I caveat that with the fact that it is not my decision to make; that would be my expectation. That would mean that 2020 would be the first year in which we will have a set of figures for the performance of Scotland's national education system, which we will then be able to compare in 2021 or 2022. Would I be correct in making that assumption? In terms of all the interventions that have been made since national testing was introduced, it will be 2021-22—three, four or five years—before we can assess what has changed. Obviously, we publish, as Mr Scott will know, a range of information through the national improvement framework in December, which tabulates a range of indicators of the performance of the education system. We have consulted widely on that to give us an assessment of what represents the attainment gap within Scottish education. We went to considerable trouble to consult on those questions to make sure that we had a comprehensive picture that would command confidence within the education system. 
There is a lot of data in there that enables us to form a picture about the progress that has actually been made. I am sure that that is entirely true. It is just that you have laid very considerable stress this morning, in your answers to Iain Gray and to the committee, on ACEL—very considerable stress. Therefore, I judge by all the stress that you have laid on that that you consider that data to be the pre-eminent data in your ability as an education secretary to assess what is happening in Scottish education. Well, with respect, we are having an inquiry here about standardised assessments and about the curriculum for excellence data, which is why much of the conversation has been concentrated on that point. If we had started off today and you had asked a question about what I would be judging the closure of the attainment gap by, I would have rested my argument on a whole host of different factors, which are already published and which give a very good and broad set of measurements as to how we are working to close the poverty-related attainment gap. I entirely take that point, but I am not going to go back into what the First Minister said or did not say from earlier on. However, the arguments that have been used are all about how we do not know what is going on—the Government does not know enough of what is going on. I am just suggesting that you have laid very considerable stress on ACEL as your measure to produce enough data that tells us what is happening. As part of that wider assessment of the closure of the attainment gap that is designed through the national improvement framework? Okay. Can I just ask a couple of questions about the testing itself? 
In the evidence that has been laid in front of us by a variety of witnesses, I think that it is fair to say that, particularly at P1—but presumably at other levels as well; P1 was the point that was particularly made—tests are taken at different times of year, they are taken with different levels of support, some children are taken out of the classroom, some children have the test taken in class time, and teachers judge when these things are done and are appropriate. Would it be fair, therefore, to say that that is not standard at all across Scotland? Indeed, I think that you have made that point frequently. What I would say is that, obviously, it is the same assessment that is carried out in each year. The assessment does not change in the year; it changes from year to year—we have replenished a third of the questions from the first year to the second. The same assessment is undertaken. It is designed to give a degree of consistency to support the moderation process within the teaching profession. It represents an opportunity to provide that assessment of the performance of individual young people, so it is an approach that, in my view, gives that consistency, which, to me, is a standardised assessment. I just genuinely cannot take the point about it being the same, because if children are taking it at different times of year and they are getting different levels of support, that is not the same, is it? It cannot possibly be considered the same, statistically or in any other way. It will be, because of the fact that we are confident through the norming studies that we have done that the assessment is a true reflection of the level that individual children are expected to be achieving. 
Therefore, it gives a teacher an insight into how close to or far away a child is, and what relevant issues the child is facing, in the achievement of a particular CFE level, and that will then ultimately inform the data that is recorded and upon which we place the emphasis for assessing the performance of individual young people in the system. I can accept that it is about where a child is and how a particular child is doing in respect of a level. However, if child A takes that test in November and child B takes that test in the spring of the year, and child A does the test with all the other children in the classroom and child B does the test outside with a support teacher helping him or her through that process, that is not the same, is it? That is what is happening; that is the reality of Scottish education. That is an argument for relying on teacher judgment about how best to utilise that resource to identify the performance of young people and the diagnostic issues that need to be addressed. That is best left to individual teachers to judge, in my view. The purpose of the assessment is not a summative purpose, and we have been very clear about that. It is not to identify who has passed or who has failed. It is to provide an analysis, in the teacher's judgment during the year, of the point at which it is best to assess whether the child is in command of the level that is being assessed. In some circumstances, a teacher may take the judgment that to do it early in the year will provide them with good diagnostic information about what they have to try to do in the remainder of the year to satisfy themselves that the child is actually achieving that level. In other circumstances, a teacher may say, well, actually, I am best to leave this to later on—I have a fair sense about what this individual child requires and we will concentrate on that, and then undertake the assessment to see if some of those issues have been addressed and what else flows from it. 
It is ultimately, for me, a matter of teacher judgment. I agree with that, but my argument is that it is not standardised. It is standardised because it is the same assessment that is being undertaken. I feel like I am dancing on the head of a pin, so I will stop. One last question for me. I agreed with your comments about Renfrewshire. I think that a report about Renfrewshire and closing the attainment gap was published on Monday. This was a BBC report, so it may not have reflected everything that you said, but nowhere in the report about Renfrewshire closing the attainment gap is school testing mentioned—is standardised testing mentioned. There were lots of things that I recognise as extremely sensible that are being done there about helping young people to develop and to grow, but in all of this—three pages of quotes from lots of different people—there is no mention of standardised testing. I just wonder what the link is. I mean, if we are to be convinced that this is so important to future education, it does not even get a mention in an area that has obviously made considerable progress in closing the attainment gap. Well, obviously, the data that is collected is there to provide a comparability of performance over time about the progress that we are making, which informs the wider performance information in the national improvement framework. The fact that the news reports about Renfrewshire do not make reference to that is a matter for the compilation of news reports, but fundamentally what we are looking at is the contribution that is made in those particular areas by individual local authorities and schools in a very focused way to meeting our expectations in the Scottish attainment challenge, which are about closing the poverty-related attainment gap. I have a quick supplementary question from Ms Gilruth. I want to return to Tavish Scott's line of questioning there; he argued that the SNSAs were not standardised. 
Prior to the SNSAs, 28 out of 32 different local authorities used some form of assessment. Was any of that standardised? Do we know how many of them, for example, were benchmarked against curriculum for excellence previously? Well, 29 local authorities used some form of standardised assessment, but those assessments were not standardised from one local authority to another, and they were not bespoke to the curriculum in Scotland. So, invariably, they were products that local authorities bought in to provide some form of moderation within those local authority areas about the performance of young people, but they were not related to the curriculum and they were not related to other local authorities. I also want to return to Iain Gray's point on the SSLN. I think that Iain Gray alluded to the EIS, who said that it was not a burden. Previously, when I was in the classroom, it was a pain in the neck, if I am honest, to have a group of children removed from your class during the context of a lesson, because you would then have to revisit the content that you had covered in that lesson. It was quite disruptive to teaching and learning. Sue Ellis told the committee previously that the only people she heard talking about the SSLN were politicians and the odd academic. Does the SNSA rather offer an opportunity to track individual pupil progress, which the SSLN could not do? Yes, that is the fundamental point about SNSAs. They are related to the performance of individual young people and they are there for diagnostic purposes to assist teachers in supporting young people to achieve CFE levels. As a consequence, that improvement of itself helps to improve the performance of the Scottish education system. Is there a cultural challenge in Scottish education whereby, I suppose, data is seen as being for academics and politicians and not for teachers? If that is a cultural problem, I think that it is changing. 
Increasingly, it is very obvious in the feedback from Her Majesty's inspectorate that there is greater utilisation of data within the classroom to identify particular challenges that children face and to use that data to assist in a variety of purposes, not least of which is closing the poverty-related attainment gap. I think that there is very clear evidence that the system is becoming much more focused on the utilisation of data for a purpose—that purpose being to improve the performance of individual young people—and, as a consequence of that, we will see improvement in the performance of the whole education system. I would like to return to the evidence that was submitted by the EIS. They quite clearly made the case that some of the concerns that have been raised about SNSAs have been a result of local authorities not following the guidance that was agreed and issued at the start of the process. Many local authorities have not followed guidance that has been issued since then. Cabinet Secretary, do you believe that that is because the local authorities disagree with what is in the guidance, or is it because there is still confusion about the SNSAs—confusion about the purpose or confusion about the guidance itself? They either disagree or they are confused as to what the guidance is telling them. The answer, respectfully, is probably neither. I think that it is because we are in the early days, to be honest. This is the first year that we have had SNSAs. I think that there are some established practices that would have been used under the old standardised assessment arrangements that local authorities would have run. The principal issue that the EIS has raised concerns about has been the window for undertaking the assessments. The guidance is very clear that there should be no window prescribed and that it should be up to teacher judgment. 
I think that a number of local authorities have rolled forward their previous arrangements, which used a window. I think that it is because we are in the early days. On some of the points, as I made clear in one of the contributions that I made to Parliament, we have reinforced that guidance to local authorities, and I would expect there to be changes and differences in the approach as we see assessments rolled out over future years. The window issue is an interesting example. You will be aware that the committee, in the course of the inquiry, has written to local authorities. You have probably seen the responses that we received. Inverclyde Council's was quite a notable one on exactly that. The authority does not support the idea of testing a pupil when they are ready, as the tests are not designed to be used in that way. We are also wary of individual teachers choosing to test pupils later in the academic year, because that might give the best-perceived results. Inverclyde sets a window between January and March. Do you agree that that contributes towards a culture of high-stakes testing? A window between January and March is, in my view, a pretty big window. If it were the first two weeks in March, I would think that that was a real window. With a three-month period, we are beginning to get into the territory of stretching some of the definitional issues here. To be honest, I do not actually think that Inverclyde's position of three months is particularly unreasonable, but I would say that teachers should be making a judgment about when to undertake that assessment. Part of the issue that has been raised previously—and I am sure that teachers have mentioned it to a number of members—is about the fundamental tension between formative and summative assessment and teachers' concern about how standardised it is and how comparable the data may be. The example that Tavish Scott gave is a very relevant one, particularly in primary one. 
Some pupils taking the SNSA at the start of primary one will be four and a half years old. Others who take it at the end of primary one will be six years old. There is obviously a significant difference in development between a four-and-a-half-year-old and a six-year-old. If the purpose of that was entirely about a teacher's judgment and information that they could glean about that individual pupil, that would not be an issue. However, there is clearly an impression, or at least confusion, about how that data is used at any level beyond the individual, whether it is classroom level, school level, local authority level or nationally. Do you recognise that there is a concern there about the validity of data that is aggregated at any level beyond the individual pupil when there are these inconsistencies? I think that we have got to get these messages clear and clearly understood. Yes, I accept that, which is why, in my opening remarks, I said that we are one year into this and there may well be lessons that we have got to learn and that we have got to strengthen communication about those points. However, what I hope that I can do with the committee this morning is explain where the SNSAs fit into the wider assessment of performance of the Scottish education system. Fundamentally, the SNSA is there to assist with a diagnostic assessment by an individual teacher, which will then help to inform practice and performance in relation to individual children, which will ultimately contribute towards a determination of whether a child has reached a CFE level, and that information will then be aggregated. I appreciate the sensitivities of all those questions, because I have absolutely no interest in the aggregation of data for the purposes that get many such systems into disrepute. That is why I am heartened by some of the judgments that are made about what we are doing by people like Andy Hargreaves or by Alison Scarrett, who are saying that we are trying to do something different. 
What we are fundamentally trying to do here is to provide in the classroom an approach that helps teachers within curriculum for excellence to have greater advice and support in the moderation of the performance of young people, to undertake a diagnosis of their performance and to support them to overcome any challenges that arise out of that. That is the means by which we are trying to drive improvements in the performance of the education system, because we will not drive improvements in the performance of the education system if we do not drive improvements in performance for individuals. At the level of the individual, cabinet secretary, you have said that there is a step in this process at which data is aggregated. However, the question that has been raised is how comparable that data is and how valid the aggregate data is when you have inconsistencies such as, for example, at primary 1, an 18-month gap in development between children who have taken the SNSA. Where the aggregation comes in is that, fundamentally, at different stages in our curriculum—we judge it at early level, first level, second level and third level—we make a judgment about whether young people have reached that level. I appreciate that young people at early level, first, second and third will be at different ages as they are going through that process. That is just a fact of life. Fundamentally, we have to make a judgment as to whether or not the needs of young people are being met, and that information will be publicly reported as part of the reporting that we undertake on an annual basis. I am still confused about the validity of the aggregate data here, because it could be about the issue of the age gap or it could be the other issues that Tavish Scott has raised about inconsistencies in the circumstances under which the test is taken. I absolutely accept what you are saying. 
I would disagree with some of it, but I accept what you are saying about the use to an individual teacher of the data that is gleaned on an individual pupil and how they can adjust that pupil's learning experience. At any level of aggregation, that is not entirely standardised data. Any level of aggregation is going to have issues with the validity of the data, but you have not yet explained how those issues are resolved. You have explained that that data is aggregated. The result is the reporting of the crude SNSA data at a national level. It will give a profile of how performance is being delivered across a variety of indicators and elements of the assessment. Ultimately, the assessment that matters to us is whether young people have reached the particular level within curriculum for excellence. That is informed not just by SNSAs, but by a whole variety of other... Absolutely. SNSAs are one of the data points there. I am questioning the validity of that data once aggregated. Essentially, the reporting demonstrates the range of performance against the particular elements of assessment within the SNSA. Fundamentally, it flows into the aggregation of data on the curriculum for excellence levels, which is the assessment that matters to us as to whether young people are in command of the details of the curriculum. Just one final question, convener. How is that not summative? If that is being aggregated, how is that not summative? That part of it is ultimately summative. SNSAs are both formative and summative. In that aggregation of data, it is summative, but the purpose of it is to be a formative assessment. I think that that is a fundamental tension, but I recognise how much time I have taken up and I am sure that other members will want to talk about that. That is the character of the SNSA. It is a formative assessment because it leads to a diagnostic assessment— It is a formative assessment that leads to summative assessment data. 
Only because of the ultimate aggregation of the data. I just wanted to pick up on Ross Greer's point about high stakes and the tests possibly being high stakes. The Government has consistently said that the tests are not high stakes, and Professor Hargreaves, during evidence, agreed with that. Do you think that some of the narrative surrounding those tests that has been played out has skewed the purpose and meaning of those tests? There is an active debate. Indeed, we looked very carefully at this debate about the nature of standardised assessment. The 2011 report on standardised assessment by the OECD is a report that carefully maps out the benefits and the dangers of standardised assessment. If I identify some of the learning that we took from the report—the 2011 "Student Standardised Testing: Current Practices in OECD Countries and a Literature Review"—it identified a number of themes and key lessons that countries had to be mindful of if they went down a route of standardised assessment. It did not, in any way, say that standardised assessments are not appropriate, but it did say that there are dangers that can end up creating that image and impression of a high-stakes approach. What we founded our view on was the analysis that was undertaken again by the OECD in 2015, where it said that standardised assessment tools can be used formatively in all parts of the system if they are referenced to the curriculum, which ours is, and flexible in their use, which I would contend ours is, and provide high-quality, just-in-time information for teaching and learning, which I would contend is the case, while at the same time having efficient ways to aggregate the results through the system. That was the advice that we got from the OECD in 2015, and I would contend that in every characteristic we have followed it. 
That advice was derived from a report from the OECD that said that there are merits in standardised assessments, but you have to be careful about some of the details of your implementation, which we have taken great care to make sure was the case. Mr Swinney, you have quite rightly said in the committee's past that you have ultimate responsibility for what goes on in the schools across Scotland, and I think that every parent sees you as the person who has responsibility for raising attainment. In light of the discussion that we have had about the purpose of those assessments—we accept what you are saying in terms of their prime focus being formative and diagnostic assessment, but you have just admitted that there is a summative purpose as well—could you explain to the committee how the Scottish Government would be looking at the evidence of those SNSAs in the future, particularly if you see schools or local authorities with an underperformance? What would you be looking for in terms of evidence to support comments that might mean that that school or that local authority has to improve matters? Well, fundamentally, the approach that I am bringing to the education system is that I want to create a constantly improving education system. That is my mantra, and everything that I am doing is trying to support that objective. So, in relation to school inspection, I welcome the focus of the chief inspector, which is on ensuring that inspection is about driving improvement within our education system. The work that we are undertaking through regional improvement collaboratives is about improvement—about improving professional learning, about improving standards that are taken forward, about improving pedagogy. The work that is being taken forward through the attainment challenge is very focused on creating an improving system. I want to ensure that we have a relentlessly improving... 
In previous appearances before this committee, I have said that I have a relentless focus on improvement, and that is exactly what drives the agenda that we are pursuing. In the context of your previous comments today, the SNSAs, in your opinion, will provide an enhanced assessment of the youngsters in our schools. You have also made very clear that you expect, because of that enhancement, that there will be a better focus and, hopefully, better attainment at the levels of the curriculum for excellence. Could I ask you how that information would be used by the Scottish Government, or what your expectation would be of a local authority in using that evidence? I think that it is important that I take the answer right back to the individual children. We will not have an improving education system in Scotland if the education and performance of children are not improving. The improvement has to be felt within an individual classroom, which is why, in my answer a moment ago to Jenny Gilruth, I talked about the greater utilisation of data within the education system to identify where improvement is required to be made, child by child. Improvement in the system is not an amorphous thing that floats about in the ether. It is driven by the performance of individual children, which is why we have opted to ask for teachers' judgment about the achievement of levels at the different levels of CFE. Essentially, what the OECD said to us was that—I paraphrase their words—we were essentially pretty blind on individual performance across the system until the senior phase. That is the gap that we have remedied here. We have remedied it by reliance on a teacher judgment about whether a CFE level has been achieved. One part of the information base that informs that is SNSAs. Fundamentally, it is not just about me or local authorities being focused on improvement. 
It is individual class teachers having the available data to help them to improve the performance of young people's education. My job is to ensure that the education system is well supported and well resourced to enable that work to be undertaken at classroom level. Just to pursue the point about the role that you have, which you have rightly identified, if you felt through the inspection reports that you received that one or two local authorities were not performing as well as you had hoped, and there were clearly some issues, would it be the curriculum for excellence levels across the schools in that local authority that you would look at in order to try to help to support that local authority to improve, or are there other aspects of what the SNSA provides? There would be a variety of information that I would look at. I have looked at one local authority—Could you spell out what it is? For example, I have looked very carefully at the performance of one local authority very specifically, which was causing concern. That was Argyll and Bute Council. A lot of that information predates the availability of the CFE levels data, but there was a general concern about the performance of the local authority. I asked the inspectors to look at that. They published requirements for improvement on that local authority. To the credit of Argyll and Bute Council, it has responded positively to that call. I have powers in statute that enable me to do that, if it is required. However, I would look at a range of data and information, and the CFE levels would undoubtedly be part of that judgment. In a wider context, in trying to create a constantly improving education system, I want to see local authorities bought into and participating in that journey of improvement, and leading that journey of improvement supported by the Scottish Government. 
I think that the willingness of local authorities to work with us on the standardised assessments demonstrates that, because this has been a tremendous local authority collaboration in introducing the standardised assessments. Why? For some of the reasons why the exchange that I had with Jenny Gilruth is relevant: individual local authorities had undertaken assessment, but it was not related to curriculum for excellence. It could not give them any read-across as to whether what they were doing in local authority A was of a sufficiently comparable standard to what was going on in local authority B or across the whole of the country. Standardised assessments now help to inform that judgment by local authorities. I think that my final question goes back to the point that you raised in your introduction to the committee this morning, which is about what the OECD said. Despite the many strengths of Scottish education, it highlighted this issue about the richness of data, and it made some suggestions about what we should do to enhance that richness. You yourself have said that one of the reasons for introducing those new standardised assessments is precisely that. At the same time, however, the Scottish Government took Scotland out of other measurements, principally the PIRLS and TIMSS measurements. In the light of what the OECD was saying when we did not have enough evidence and data, could you explain what the Scottish Government's thinking was in taking Scotland out of those other measurements? Some of those decisions are quite historic, if I remember correctly, but I think that the relevant quote here is from the OECD, which is relevant to that answer. 
From its 2015 report, at page 155: "The light sampling of literacy and numeracy at the national level has not provided sufficient evidence for other stakeholders to use in their own evaluative activities or for national agencies to identify with confidence the areas of strength in the years of the broad general education across the four capacities of CFE, nor has it allowed identification of those aspects or localities where intervention might be needed." That is essentially a commentary on what the SSLN delivered for us. That would be my assessment of some of the international work as well. It does not give us an insight into what needs to be done to improve the system, which the current framework that we have put in place enables us to do. You would accept that some of the experts who have been in front of the committee—Professor Hargreaves being one, and Lindsay Paterson and some of the others—have made very clear that those international measurements are important. We have subscribed to some international assessments, as Liz Smith will know, but our judgment was that we had to focus on the international judgments that were relevant and appropriate for us, in balance with the other changes that we had to make in relation to enhancing data within our own system. First of all, you have done quite a lot of talking about formative assessment and the importance of standardised assessment. Professor Dylan Wiliam is an expert on formative assessment, and most people would agree with that. He says that those tests would have little, if any, formative use. Do you have a view on why he might say that? Well, I cannot speak for Professor Wiliam, but I cannot— You would accept that he is an expert in his field? I acknowledge his support for formative assessments. He disagrees with what you are saying about the purpose of the assessments? Well, yes, but lots of people disagree with lots of people in education. If I can perhaps help— So, we get to pick our experts, do we? 
I am just asking you whether you agree with him. Well, I think that this is a good system. I am not sure of the extent to which Professor Wiliam has studied the particular system that we have introduced. I do not know the answer to that question, but what I would say to Johann Lamont is to refer to the expert opinion that informed our judgments, which was in the OECD report at page 157, where it said that standardised assessment tools can be used formatively in all parts of the system if they are referenced to the curriculum, flexible in their use, and provide high-quality, just-in-time information for teaching and learning, while at the same time having efficient ways to aggregate the results through the system. Now, there is expert opinion from the OECD, which I would contend we have followed to the letter in making the judgments that we have taken forward. I suppose that what we are now testing or probing is whether what you have put in place meets that purpose. Can I go back to the question of standardised testing and the fact that young people could sit the tests under different conditions at different times? A child could be four and a half when they start primary 1. They could be six by the time they finish primary 1. They could be in a class of two or three young people being supported individually to sit the test, or they could be sitting in a bank of 30 children, whether with a laptop or an iPad or whatever is in front of them. In what way is that standardised? To take the four-and-a-half-year-old doing the test on their own without any support against a six-year-old fully supported with weeks of practice—and your own officials told me that they were allowed to practise beforehand—what would be standardised about the results of those tests? Because they are undertaking the same assessment, they are being supported appropriately within the context of their learning, which is one of the important parts of the policy. Could you explain what that means? 
I mean that assessment is an integral part of learning within the education system. I think that anybody would acknowledge that that is a key part or characteristic of the system. It is therefore being undertaken within recognisable classroom conditions with the support that any young person would ordinarily get for their education. The assessments provide consistent reporting and diagnostic information on individual young people. Can you explain to me how it is consistent classroom conditions for a child of four and a half to be doing the test in a group of 30 and a child of six to be doing it with one individual supporting them? How is that the same classroom conditions? It is theoretically the same test. If I am doing a test—I do not know—a driving test in a car that does not work, on my own, with no support whatsoever and no licence, against somebody who has been supported all the way through it, those are very different things. I venture to suggest that the example that Johann Lamont presents there is a ridiculous example in the context of the performance of a classroom teacher. A classroom teacher will never conduct an assessment in that fashion. Can you explain to me how a four-and-a-half-year-old in a class of 30 doing that test without practice can possibly be the same as a child of six doing it with one person supporting them, having had a lot of practice? Fundamentally, it is the same assessment that is being undertaken by every child, and the purpose, in proper classroom conditions with appropriate support—not in the example that Johann Lamont cited to me—is that it would be undertaken within a child's learning so that it was not something that seemed different or unusual. 
If committee members look at the practitioner feedback, it is that practitioners wanted to deploy those assessments as part of the learning process of children in the classroom, which one day might have a child working, with support or independently, with a piece of technology and then, on another day, being invited to do this particular assessment using the same piece of technology. I would imagine a degree of supervision, because it would be unusual for a five-year-old child to be undertaking such a process without appropriate supervision. Obviously, when you get to P4 or P7 or S3—which this inquiry is looking at as well—you get into very different circumstances where young people have greater capacity to do those things independently, as they should have. Would you reflect on the fact that even those people who came before the committee to advocate standardised testing said that this bit of it, where there was this complete flexibility, meant that the notion of standardised information coming out of it would be a bit of a challenge? Can I ask you about an example of the benefits of a diagnostic test? I think that we know the challenges around that. There was one question that we were shown where a child had to identify the word that rhymed with another word. There is a little button, and if you press the little button, you can hear the word. I asked about that at the presentation that you put on for us, and we were very grateful for it. Was there any distinction made in the reporting between the child who could identify the word without hearing it and the child who could not identify it without hearing it? I was told that there would be no such distinction. When I raised that with experts before the committee, they said that that was problematic, because it was not giving information that was of any benefit. 
How can it possibly be a valid test when you are not, even in this theoretical diagnostic formative mode, making a distinction between a child who can read a word without hearing it and a child who needs to hear it in order to understand whether it is a rhyming word or not? I am going to invite David Leng, the former director of education in Stirling and Clackmannanshire, who is the professional adviser on this programme. It is a question on a level of detail on which I think it would be appropriate for David Leng to give a view. Good morning, everybody. I recognise the question. I think that I was part of the demonstration that we gave to you. It was a question about a word that rhymed with "pie", if I remember correctly. The point about the standardised assessments is that they are adaptive assessments. The assessment questions progress through the range of skills—in this particular case, literacy skills—for primary 1. The question that you were talking about was particularly assessing phonological awareness, which was about whether or not the child could identify a sound with another sound. Whether they were able to read the word or listen to it for that particular question was not key. What was key was their ability to make a phonological identification. Therefore, children were given that choice. It was a supportive process. If they were able to read it, fine; if they wished to listen to it, fine. That was not the significant part. The part that you are talking about—the decoding of a question, that ability to be able to read a question and understand what it means—comes later in the assessment. What happens when the teacher gets feedback is that they get feedback on the particular skill that the question is seeking to assess. What you have is a range. Many of you will remember that there were questions later in the assessment where children were asked to read without any audio support. Some of the children found that quite challenging. 
Others found it straightforward. That enabled the teacher to identify which children were able to do that decoding. What you have is a range of questions. The children go through that range of questions. They are being assessed on certain key skills, and the assessment builds on that and adapts to their ability level. To take one question and ask whether the child needed the audio or not is not the point. The question is whether the whole assessment will tell the teacher how well a child is able to deal, or not, with audio or visual stimulus. That is what they get. If you have seen the report—and I know that you have seen examples of it—that is very clear in the outcomes. It is not the specific support but the skill that the child has been assessed on, the level of difficulty of the question and whether or not they were able to do it. Have you factored into the assessment the extent to which a child would be comfortable navigating themselves around the system? We were told at the presentation that there were lots of opportunities and people could have practised and so on. Will the test reveal the extent of that support, so that we understand how comfortable the child was in navigating the system? Does that make a difference? Have you factored that into the report? At an individual school level, the teacher will know how much support there was. In order to report at a standardised level and to make that comparison a reasonable one, surely you need to know whether there was a lot of support and a lot of practice beforehand or not. Is that something that you do not think distorts the results in any way? I will go back to the purposes of the assessment. We have said all along that they are diagnostic and that their primary purpose is at a school level. Therefore, we have tried to build flexibility and support at that school level. In a sense, the teacher makes the decision on how much support they are going to provide. The rich information is therefore at the individual level. 
I have heard a lot of discussion this morning about aggregation. What we have been very clear about is: support the child and ensure that the child goes through this in as comfortable and as easy a way as they can, given the support that they require. What we are assessing is their ability on elements of literacy or numeracy, not their manipulation of the assessment tool. When you are looking at the reports, you understand the level of support that was provided by the teacher and the school. Therefore, any aggregation that they may make is based on that understanding of the support that the child was given. The standardisation is therefore about the assessment, the questions, the matching to the curriculum, the common reporting and the common scaling that they are on. It is not about the exact same conditions or the exact same timing, and that is a different understanding of standardisation. I get the point that it makes sense at a local level that people can look at things differently. It is a question of whether the system can do the two things at once. I think that that is really one of the challenges. I have not heard anything convincing here that says that there is a way around both. The teacher knows exactly what support was given and, therefore, contextualises it. However, I am not sure how that might feed through to a Government level, but I think that that is perhaps something that has come out of the evidence. In conclusion, might I ask the cabinet secretary about the SSLN? You said that it did not give sufficient information. Maybe people could accept that. Is that grounds, first of all, for getting rid of it? 
Secondly, you have said that we got rid of the SSLN because it was deemed to be a burden when, at the same time, there are significant reports of the burden created by standardised testing, in taking senior management staff from their work in order to support the classroom, learning support staff being taken out in order to ensure that those tests are done, and a lot of anecdotal evidence that there is a consequence to running those tests, which has created a burden. First of all, on the question of sufficiency, would you accept that the SSLN's not being sufficient is not grounds in itself for getting rid of it? If it is about the burden that was created, have you been able to quantify what that burden was? It sounds to me like the equivalent of groups of kids being taken out of class for an hour. It is the same as the inconvenience that teachers face all the time when there is a school show on or whatever it might be. Have you been able to quantify the burden that led you to decide that you needed to drop the SSLN? On the first point, the judgment in relation to the SSLN was that we were essentially moving to a more comprehensive collection of data about performance within the education system. I do not want to interrupt you, but did you at any point consider having a transition period? We have heard from Tavish Scott that the achievement of CFE levels data is not really going to be something that you could look at until 2019-20. Did you think of a transition period, even when the SSLN was not sufficient? It is not perfect, but it was at least giving you information in the meantime. Did you at any point consider a transition period, so that you would continue it until the other process was in place? Or was the burden so overwhelming that you decided that it was necessary to get rid of it? What did you quantify in that respect? 
What we considered was the potential to expand the SSLN to provide us with some of the information that would address the gaps clearly identified by the OECD. We gave that very close consideration. Indeed, expanding the SSLN was the first option that the Government looked at, because it had a degree of longevity about it, to try to ensure that we got that more comprehensive picture. However, our judgment was that to do that we would create a burden within the education system that would have been even more significant than the measures that we have in fact opted for. One of the reasons why we opted for those measures was that many local authorities—29 out of 32—were operating some form of standardised assessment. However, as I explained earlier on, those local authorities carried with them a frustration that they were operating those assessments individually, the assessments were not related to the curriculum and they were not providing moderation of standards across a wider canvas. A local authority may have been making an assessment to the best of its ability in relation to the curriculum, but it was doing so without a bespoke assessment mechanism and had no basis of moderation across local authority boundaries. We were essentially putting in place a replacement for all of that, although I accept that, in a limited number of cases, some local authorities are, for good reason, continuing with their previous systems because of the line of sight that they are interested in establishing in that respect. The final point that I would make is that we are ultimately trying to fulfil the objective of enhancing the information that is available to ensure that we can support young people in the improvement of their own performance, and that is why we have taken the decisions that we have taken. On the issue about the burden of the SSLN, what was the burden of taking a sample group of young people out across the system? 
I constantly wrestle with the issues about workload. So you quantified it as what? I did not quantify it, no. Right, so it was not really about the burden then? Of course it is. I did not quantify it by doing a sum. What I did was make a judgment that it was not necessary, because we were opting for more comprehensive reporting of the performance of young people, which would address the key challenges that are set out in the OECD report. I am mindful at all times—and I am under constant pressure about this—to remove things from schools when we are asking schools to do different things. I felt that, on balance, the right thing to do was to discontinue the SSLN. There was no serious suggestion that the SSLN in itself was creating a burden, but the removal of it has created a gap in the information that is available while the new system goes into place. I think that Johann Lamont is being cavalier with the question about burdens in schools. Oh, no. Oh, no. Not at all. Well, I am constantly— I am asking you—forgive me—one of the serious questions around this is the displacement of people from supporting individual young people into running these tests. It is entirely legitimate for me to ask you what was the burden that the SSLN created, which was so great that it meant that there is now a gap in the information. Your own experts who have supported this have said that they think that the SSLN could continue in an amended form. Forgive me; of all the issues around this that I am concerned about, burden is one of them. You must always balance burden against purpose. I am asking you what was so burdensome about the SSLN in comparison with the level of burden that the standardised testing has created for many folk inside the education system. Johann Lamont has just said that we have to balance burden with purpose. I made that judgment about the SSLN in the balance between burden and purpose. 
With the pressure that I am constantly under to remove burdens from schools, I judged that the right thing to do was to remove the SSLN, because we were moving to a more comprehensive performance measurement system. I am constantly focused on how I can remove burdens from individual schools. I cannot afford to be cavalier about that question. No one is being cavalier about it. With respect, of all the arguments that you have put forward today, the least credible is that the reason we are getting rid of the SSLN is because of burdens. Everybody on the committee is very conscious of the level of burden on teaching and support staff in our schools. I am going to move on to Dr Allan. Thank you, convener. One of the things that you have mentioned today has been about making sure that the new assessments align with the curriculum. Indeed, the Government's submission to the committee mentioned that "alignment to the Scottish curriculum is also key for Scottish teachers and sets the SNSA apart from other standardised assessments previously used in schools." I wonder whether you can say a bit more about what the process was in deciding how to make that alignment and whether you are satisfied that it is there. In the implementation of curriculum for excellence, we have gone through a number of steps essentially to ensure that the system was well informed about what the different staging posts and levels were envisaged to be in the progress that children and young people would make through the curriculum. That discussion has been informed over time by the production of the experiences and outcomes, which provide a shape and a framework for the curriculum. That remains a major part of the judgments that teachers make about the content of the curriculum. However, I certainly detected a level of concern within the teaching profession when I came to office that there was not sufficient clarity in the profession about what the individual levels represented. 
I asked Education Scotland to do some work on the creation of benchmarks that were designed to give clarity about what individual levels looked like, as opposed to prescribing what the contents of those levels had to be. The feedback that I get from the profession is that the benchmarks have now made a significant impact in stabilising the understanding of what the levels are. As a consequence, the profession is more confident about what the levels look like. Against that background, the SNSAs had to be set within the context of those benchmarks, and the material and contents of them were judged by our education specialists to be appropriate. It was the blending of that education advice with the content of the curriculum for excellence benchmarks that has given us the approach that we are taking, but that is kept under review with the replenishment of questions that is undertaken in each of the assessments. You have said something about levels and the structure of curriculum for excellence, but curriculum for excellence, as you know, is about philosophy and pedagogy. How does that fit in with some of the comments that have been made across, if you like, the spectrum of views about how the assessments fit in with the philosophy behind primary 1 play-based learning? It is important that we highlight the term that Dr Allan refers to, which is play-based learning. The play-based approach in the early level is designed to ensure that young people are able to have command of the learning that they would be expected to acquire as part of the early level. The play-based approach is a medium through which they are undertaking that learning. The assessments have to operate in a fashion that is consistent with that approach. That is one of the questions that I have asked David Reedy, in his independent assessment of the P1 assessments, to challenge and to tackle: whether or not we have got that approach correct. 
Fundamentally, the assessments are there to consider whether young people have command of the learning that would be expected to be undertaken as part of the early level and that is acquired through that play-based approach. In putting those things together—the play-based learning and the approach to the assessment—how do you ensure that the breadth of what is being tested is relevant in terms of producing information in primary 1? We have to be clear that the standardised assessments capture or address only a part of the literacy and numeracy elements of the curriculum. Of course, the curriculum is much broader than that. They consider and assist in the diagnosis of challenges that young people will face in that respect. Ultimately, teachers have to make a more comprehensive, holistic judgment about whether a young person has command of the CFE levels, which will be reflected in the judgment that is applied by individual teachers in the reporting that is undertaken on the CFE levels. My question is with regard to teacher training. Professor Hargreaves previously told us about developing a collaborative culture in teaching. We have spoken today about moderation and having a better understanding of shared standards. He also said that the profession needed a role in informing the continuing development of the SNSAs. Is that the Government's intention going forward? Yes. Obviously, that fits into the wider work that we are undertaking to create a more collaborative climate within Scottish education. The regional improvement collaboratives are now gaining more significant momentum and, as a consequence, are beginning to influence and enhance classroom practice. That more collaborative climate is crucial to how we advance our agenda in education. In a number of different respects, we engage the profession on our learning from the assessments in a very direct way. 
The P1 practitioner forum that I established is doing really good work in engaging with practitioners about the experience of undertaking the assessments in P1 and in helping us to identify some of the challenges that have to be addressed. Obviously, we are capturing the opinions and views of members of the teaching profession about how they are taking forward the implementation of the assessments more generally, and we are undertaking extensive survey work of teachers to gauge their experience in that respect. Obviously, we will reflect on that as we take forward further developments of the assessments. On the teacher training front, I know that SCHOLAR has developed a programme with Heriot-Watt University, which 95 per cent have rated as satisfactory. Obviously, that is an online training programme, and there is a GTCS training package as well. Are you looking to develop a more consistent approach at a national level, or perhaps through the collaboratives, to training teachers in the use of the SNSAs? That is what I envisage being undertaken within the regional improvement collaboratives. We see some work being undertaken on that in the different collaboratives. The West Partnership has undertaken some of this work, particularly in relation to moderation, which remains—I think that moderation will perhaps always be a challenge within the system, particularly given the nature of curriculum for excellence. However, I certainly want to see steps being taken to ensure that those programmes have an effect in the classroom, that the profession feels strengthened by the ability to have access to this type of training, and that it is considered as part of the professional learning of the teaching profession. I wonder whether I could first go back to the points that David Leng made in relation to Johann Lamont's question about those specific questions with the button that you can press to hear the word. I am 29 now. 
I have a problem with reading, and I am prone to making stupid mistakes. If you had asked me that at four or five, I would not have known that, and I would have been quite confident in my own reading ability and my ability to go along the line and read a simple short question. I do not see how there would be a difference in the result that the test shows based on whether or not I knew to press the button, to listen, to check and to confirm that my reading was correct. The thing that we have tried to stress all along—I am assuming that your teacher knew you well and knew what your ability was in the classroom. No, they did not. I did not get a formal diagnosis until further on in school. I would not have known, nor would the teacher have known, at that point that I would have found that difficult. Those assessments, first of all, are categorically not about assessing whether or not children have additional support needs. That is not the purpose of them. That is not the precision, if you like, of the tool. However, they will indicate to the teacher particular developmental needs that you may or may not have. That is the diagnostic process of it. That will then be added to the teacher's knowledge of you and all the other work that you are doing. That is why we have stressed all along that the assessment is indicative of the ability that you have at that particular moment in time. It indicates to the teacher areas for further exploration. It does not definitively say whether a child can or cannot do something. Do you accept that the variability and adaptive nature of the test come with a risk—a risk that certain specific learning difficulties or particular learning styles or patterns could be disguised by adaptive features within the test? There is a risk. I think that the point here, Mr Mundell, is that the assessments are trying to establish the developmental capabilities and challenges that an individual may have in their education. 
Can I just pull you up when you say "developmental"? I find that that is a somewhat confusing term to use, really, because when people talk about a developmental test they are not really looking at whether someone is achieving certain points in the curriculum. I think that, in the nature of this conversation, you would then be looking at a different type of developmental test to assess those things, such as those that they use in other countries and that we heard about from certain witnesses. Forgive me if I have used the wrong terminology, but what the assessments are trying to do is to find out the areas of further development that will arise out of the diagnostic assessment that is made. In my earlier remarks, I talked about areas in which a teacher may identify greater or lesser capacity within a young person, given the evidence that is thrown up by the undertaking of the test. I accept that, but my question really is, does the adaptive nature of the test, in some cases, come with a risk that, because of the design of the test and the design of the questions, certain difficulties that a child is facing in their learning might be disguised or might not be as obvious as if the test or questions were done in a different way? What I was trying to get to in my answer is that I would rather not consider this in a specific fashion. That is why I asked your official rather than you about a particular question. No, but the point that I am trying to make is that I think that the analysis—the diagnostic assessment that is undertaken as a consequence of, or that arises out of, this assessment—might give rise to a number of points of inquiry by education professionals, which might beg questions as to why a young person might not be able to handle particular elements of the assessment as well as one might have expected them to be able to handle them. 
It will not provide a diagnosis of a particular condition, but it might give rise to a series of questions about how we need to configure the learning of this young person to ensure that they can achieve their full potential. That might then open up a whole series of other questions. I understand that argument, but, by the same logic, is it not entirely possible that, through the design of the test, you could get a false positive that would close off lines of inquiry that, through the teacher's individual judgment, might otherwise have been picked up? The test, particularly its adaptive nature, its time-related nature and the different possibilities for how you complete the questions, can present a false positive in terms of how well the pupil performs, particularly for bright pupils who may have a specific learning difficulty. The teacher might be inclined to think that they were a middling student who was doing okay, and close off lines of inquiry that, if a diagnostic test had been done, as was suggested by some witnesses, particularly at that early stage, might have helped to pick up the actual needs of the child better. Fundamentally, the question relates to the ultimate question that is being answered here, which is the teacher's professional judgment about the child. The SNSA informs that process, but does not dictate it. It is determined by the professional judgment of the teacher. In that context, a teacher would have to take into account a whole range of different information and experience about the educational performance and contribution of a child, which, given the focus that we have within our system on getting it right for every child, would potentially beg several questions about what the needs of the child were and how they could best be met. Fundamentally, that is the judgment that we look to educators to make within our system. 
If the test is poorly designed and features within the test make it more difficult for teachers to exercise that judgment, is that not a problem? Obviously, nobody wants that to be the case. I do not think that it is the case with the assessments that we have undertaken, but obviously we are not impervious to that. What specific analysis was done to ensure that the tests were suitable for young people with additional support needs, what assessment was made of them, and what evidence from experts did you solicit ahead of introducing them and suggesting that they would be a good thing for all children to take at age four and a half to six? It was one of our key principles at the beginning that we be as inclusive as possible. We took steps to involve experts in this field. We used CALL Scotland. We had a reference group of additional support needs experts. We did trialling with children in special schools and with children in mainstream schools who had additional support needs. We had one-to-one cognitive labs, which looked at how children were interacting with the assessment. We made every effort to ensure that we included as many children as possible and were able to give useful information across the spectrum of abilities that are in our classrooms. We accept that that is a very big ask of one standardised assessment that can cover every single child in Scotland. It is an ambition and an aspiration. We continue to take feedback and to improve the process. However, by calling in expert reference groups and by testing and trialling questions prior to their release, it was a very serious attempt to be as inclusive as possible. The feedback that we got was that it is significantly more inclusive than any other standardised assessment that is currently used in the Scottish system. We have made progress. I am not going to claim 100 per cent, but we are making the right moves in that direction. 
Would you be willing to share those expert views and analysis of the questions and tests or assessments with the committee? The short answer is that I am happy to share what they shared with us. Most of it was very practical: it was sitting and looking at assessments and comments, and we had Scottish Government accessibility people also looking at it. I am happy to make that available. Just a final question for the cabinet secretary. Can you understand why I and other colleagues in the Parliament might be angry and irritated that, during the debate that we had on this topic, some members of the Parliament chose to suggest that the tests could be used for the diagnosis of additional support needs? Can you see why that would cause confusion and annoyance? I understand that point. I have just finished asking the cabinet secretary a question about the way forward for standardised assessments. One of the areas that has been discussed in detail is the time constraints on schools, the technology available to them and how they have been able to implement the assessments. We were lucky enough to have a focus group with some teachers. When we asked them about the previous standardised assessments that were brought in at either local authority or school level, and how they managed the time around those tests, the answer that came back was that it was exactly the same: they had to accommodate all those tests within the curriculum. It was something that they were used to. Moray Council, in its submission, stated that technology had not been an issue because most of the schools were following procedures that were in place for previous assessments. Indeed, North Lanarkshire Council, in my area, has stated that it is universally accepted that there are real benefits in SNSAs and that any issues encountered in the first year of implementation can be overcome in the subsequent session. 
Are you content that this is being embedded in schools, that there is not a particular issue in terms of resource and time for teachers, and that lessons will have been learned from the first set of assessments that have taken place? We are just in year 2. In year 1, there was a theoretical maximum of 613,000 assessments that could be undertaken. In fact, 578,000 were undertaken, which was a completion rate in excess of 94 per cent. On a first-year implementation, given that we have taken the view that it is up to individual schools to judge whether it is appropriate and suitable for pupils to undertake the assessment, some pupils would not be given the assessment. A rate of 94 per cent in year 1 of a new system is a pretty high level of participation. If I look at the data as of yesterday, 144,941 assessments have been taken during this academic year, compared with 119,616 at the same date last year, so there is a really quite significant increase in the number of assessments undertaken by 19 February this year. I think that that is indicative that the system is adjusting to the assessments and taking them forward. I have to say that, in all of the work that I take forward, I am not always able to reach agreement readily with local government on many questions. Local government was very supportive and participative in taking forward the SNSAs because of the issues that I raised about the fact that the assessments that local authorities were undertaking, through no fault of their own, were not curriculum related and did not support moderation effectively. Local authorities have embraced that very significantly and, obviously, to get a 94 per cent participation rate, schools would have had to embrace the assessments readily as well. As with any new approach that we take, we have to be open to doing things better or differently, and I am open to that. 
That is why we are taking forward the P1 practitioner forum. That is why we have taken feedback extensively from teachers and pupils about their experience, and why we will continue to do so. Professor Louise Hayward of Glasgow University said that the idea of having information from tests that support teachers' professional judgment is an entirely appropriate approach. Relating to what Oliver Mundell was saying, friends have said to me that they wish that those tests had been available when their children were young. Do you agree that the test is an assessment that will enable parents to know at a general level how their child is doing, that it will confirm teachers' professional judgment, and that we are in danger of overcomplicating the meaning of those tests? Fundamentally, it is up to individual teachers to make a judgment about what information they share with parents about the performance of children through the SNSA, but if a parent was interested in that information, I could see no reason why it should not be shared with them as to how their child had performed. That applies particularly to what issues come out of the assessment, such as particular strengths or particular challenges, because, ultimately, if a young person has a challenge that has been identified by a diagnostic assessment, valuable parental support could be mobilised to ensure that, through school and parental support, that challenge could be overcome for the individual child. Ultimately, I think that schools will be reporting on whether or not children are achieving levels and where they are in that journey. Those are the milestones of the broad general education. Obviously, our objective is to enhance the quality of that information by virtue of the robustness of the SNSAs. Thank you very much for your attendance. It has been a long session today, cabinet secretary, and thank you to your officials. We will now move into private session.