So Chris Martin is going to talk a little bit more about the Scottish Crime and Justice Survey, in particular around COVID and mode effects. Chris is research director at Ipsos in Scotland. He's been involved with large-scale surveys in Scotland, the Scottish Crime and Justice Survey, the Scottish Household Survey and the Scottish House Condition Survey, for over 20 years. OK, so Chris, over to you.

Thank you very much. It's lovely to be here. I'm going to talk about mode effects and how the changes pre- and post-pandemic in the Scottish Crime Survey did or, spoiler, and I think Stuart has already given you the punchline, did not change the results. I'm a little bit nervous after the questions in the second session, so this might be a little bit whistle-stop. I'll whizz through a bit of background on mode effects and touch on three previous studies that I think are particularly relevant. I'll briefly summarise the change of approach pre- and post-pandemic, but I want to concentrate on our analysis of what those changes were and how they impacted the estimates, making a distinction between the mode of approach and the mode of interview, and then try and finish by drawing out some lessons for the future.

Please allow me one slide just to summarise from a theoretical perspective at the start: how does survey mode relate to survey accuracy? Well, surveys are susceptible to many types of error, and the main way of looking at this is through the total survey error framework. Survey mode tends to influence two areas, and here it's useful to draw the distinction between how people are interviewed, whether that's by telephone or face-to-face or some form of self-completion, and how people are approached to take part, whether an interviewer calls at their address, whether they're phoned up, or whether they receive a survey by email or by post, or a combination of those.

Research that relies on voluntary participation is always vulnerable to non-response error; in other words, those who take part are different from those who don't. How people are approached to take part, whether face-to-face, by telephone or by post, tends to impact on non-response error and the patterns you get there. Over the years, the crime survey in Scotland has been pretty good: when you compare it to the other big population surveys, it has done very well. I know the importance of anchoring, so that's why the comparison is against the big population surveys rather than the Crime Survey for England and Wales, but it has performed well.

The other side of total survey error, the other way that mode impacts quality, is through how interviews are conducted, the mode of interview, and this tends to influence measurement error: the differences between the response given and the true value. So, for example, whether an interviewer is present or not: respondents tend to be more engaged when there's an interviewer there, but they're also more likely to give socially desirable answers. There are also differences in whether information is transmitted visually, so whether show cards are used or not.
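To pin down the non-response side of that framework, here is the standard textbook decomposition of non-response bias as a worked equation; the notation is mine rather than anything from the talk or the slides:

```latex
% Bias of the respondent mean under deterministic non-response:
% the population splits into respondents (mean \bar{Y}_r, share W_r)
% and non-respondents (mean \bar{Y}_{nr}, share W_{nr} = 1 - W_r).
\operatorname{Bias}(\bar{y}_r)
  = \bar{Y}_r - \bar{Y}
  = W_{nr}\,\bigl(\bar{Y}_r - \bar{Y}_{nr}\bigr)
```

The bias is the product of the non-response share and how different non-respondents are from respondents, which is why a mode of approach that changes who responds can matter more than the headline response rate on its own.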
Before turning to the crime survey, I just want to mention briefly three studies that I think provide useful context. The first is the Scottish Crime and Victimisation Calibration Study. It reminds us that issues around public value and value for money are not new; they've been around for a long time. In 2003 there were moves to run a large chunk of the crime survey in Scotland using a telephone approach. They ran parallel fieldwork, a large face-to-face study alongside a large random digit dialling telephone study, and the calibration exercise asked, well, how do they compare? At the end of it, the report that compared the two basically concluded that they couldn't devise a weighting strategy that satisfactorily corrected for all the many demographic biases that were observable in the data, so in the end they went back to the face-to-face approach.

The second is a much smaller study, and it looks at the impact not really of mode but of response rates on survey estimates. This analysis was similar to previous work done by Joel Williams and others on the Crime Survey for England and Wales: it assessed the impact a lower response rate had on survey estimates by looking at what the estimates would look like if you hadn't done any re-issues. For most surveys, after the initial issue of the sample, addresses are re-issued to try and convert refusals or non-contacts. In the Scottish crime survey, that increased the response rate by eight or nine percentage points, so the question is what happens to the estimates if you didn't have those re-issues; it's basically asking what would happen if your response rate was eight or nine percentage points lower. Overall, the impact was pretty small.

These, I think, are the key findings. We did this over two waves: on the left are the 2012-2013 estimates of victimisation, and on the right the 2016-2017 ones, and you can see that on both waves the difference is less than half of one percentage point. That relates to a difference of eight or nine percentage points in the response rate, so it's a pretty small difference for something that costs a lot; re-issues actually cost even more money to do face-to-face than your first-issue interviews.
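As a rough illustration of that re-issue analysis, here is a minimal sketch of the comparison, assuming a respondent-level file with the hypothetical columns "victim", "weight" and "reissue"; these are my names for illustration, not the survey's actual variables:

```python
# Sketch: compare the full-sample victimisation estimate with the estimate
# you would have got if re-issued (refusal/non-contact conversion)
# interviews had never happened.
import pandas as pd

def weighted_rate(df: pd.DataFrame, flag: str = "victim", w: str = "weight") -> float:
    """Weighted prevalence of a 0/1 flag."""
    return (df[flag] * df[w]).sum() / df[w].sum()

def reissue_impact(df: pd.DataFrame) -> dict:
    full = weighted_rate(df)
    first_issue = df[df["reissue"] == 0].copy()
    # Crude rescaling so first-issue weights still sum to the full total;
    # the real analysis would re-run the whole weighting scheme instead.
    first_issue["weight"] *= df["weight"].sum() / first_issue["weight"].sum()
    return {
        "full_sample": full,
        "first_issue_only": weighted_rate(first_issue),
        "difference_pp": 100 * (weighted_rate(first_issue) - full),
    }
```

The finding reported above is that this difference came out at under half a percentage point in both waves tested.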
The third one I just wanted to touch on is the Scottish Household Survey mode report. Like most major surveys, the SHS hadn't changed its approach radically over 20-plus years; it used a standard face-to-face approach. Then the pandemic hit, things had to change, and it went back into the field pretty early. Instead of interviewers visiting addresses, we relied on people opting in in response to advance mail-outs; we tried to match telephone numbers to the sampled addresses so we could make an approach by telephone; and we carried out the interviews remotely, either by telephone or by video.

Four takeaways from that analysis. First, interviewers are really good at persuading people to take part in surveys: the revised approach resulted in much, much lower response rates because you're relying on that opt-in. Secondly, the difference between the pre-pandemic estimates and the revised approach was small for most estimates, but there were some really notable exceptions, such as tenure. The people you tend to lose are the most deprived, the least educated, those with the most chaotic lives, the most vulnerable, those with low literacy skills, sometimes the very people you're most interested in when developing policy, and interviewers are really good at getting these types of people to take part in surveys. Thirdly, while the telephone matching increased the overall response rate, it didn't make the achieved sample more representative; it made it worse. A higher response rate doesn't necessarily mean a more representative sample. And finally, it's not just who takes part, it's how you ask the questions that matters: we found evidence of differences between people who took part by video and people who took part by telephone. So the report concluded that you couldn't compare the time series results with the household survey because of the change in approach.

So how did the crime survey change during COVID? I'm not going to go through the methodology before COVID, a standard face-to-face approach, and we were quite lucky with the crime survey because when the pandemic closed everything down, we'd almost finished the 2019-2020 wave. This slide is, I think, how I felt when the pandemic hit; I'm still not entirely sure where we are now, what the fifth panel would be. I think it may have stopped raining, but I think I might still be up the tree. The telephone survey happened in the autumn, with the post-pandemic wave starting in November 2021, and it's the contrast between the pre-pandemic wave and the post-pandemic wave, not the telephone one, that I want to draw out.

So how did the approach change? Well, the change wasn't that great in terms of the overall approach. The response rate assumptions were amended, dropping from the 60s to the 40s. In the first half of the fieldwork, the mode of approach was knock-to-nudge, where interviewers would still go to people's homes. Interviewer travel was allowed by then, unlike when the SHS started earlier in the pandemic. But the mode of interview was different: it wasn't face-to-face in the home, it used either telephone or video. So it was mainly the mode of interview that changed. The second half of the fieldwork was very close to a return to normal, apart from the lower response rates.

So what was the impact on the estimates? Let's look first at the impact of the approach, starting with the response rates. Pre-pandemic, 63%; post-pandemic, that dropped to 47%. So a drop of 16 percentage points, similar to comparable surveys. Although it's not straightforward to calculate, we can broadly say that the knock-to-nudge stage, the earlier stage, had a response rate somewhere around three to six percentage points lower. But as was discussed in the earlier session, as well as the overall response rate, the variation in response rates is also important: greater variation between different types of area would suggest a greater potential for bias. Response rates tend to be lowest in the most deprived areas, and you want to minimise that variation. In the post-pandemic wave, although the response rate was lower, the variation was still very similar. That's in marked contrast to things like the telephone survey, where there was much wider variation by area deprivation.
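Here is a simple sketch of how that variation might be quantified, assuming an issued-sample file with one row per sampled address; the column names ("simd_quintile" for the Scottish Index of Multiple Deprivation band, "responded") are my own illustrative choices:

```python
# Sketch: response rates by area deprivation, plus simple measures of
# how much they vary, which is the quantity you want to minimise.
import pandas as pd

def response_rates_by_deprivation(issued: pd.DataFrame) -> pd.Series:
    """Response rate within each SIMD quintile (1 = most deprived)."""
    return issued.groupby("simd_quintile")["responded"].mean().sort_index()

def rate_spread(rates: pd.Series) -> dict:
    """Summaries of variation across quintiles, in percentage points."""
    return {
        "overall_rate": rates.mean(),
        "range_pp": 100 * (rates.max() - rates.min()),
        "std_pp": 100 * rates.std(),
    }
```

On this kind of measure, the post-pandemic crime survey wave showed a spread similar to the pre-pandemic wave, whereas the telephone survey showed a much wider one.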
Moving on to the estimates, we looked at a number of different measures, both weighted and unweighted, in a number of different ways. Overall, for most of these measures, there was very little difference in the point estimates after weighting. The largest differences were in tenure, household income and educational attainment. Let me just whizz through a few of the results.

First, age. As you can see, there was really not much difference in the weighted figures; unsurprising, since age is part of the weighting scheme. But there also wasn't that much difference in the unweighted figures, in terms of the profile of respondents, which I found quite surprising. In terms of tenure, after the pandemic there was an increase in owner occupiers and a decrease in renters, but a relatively modest one, 2.4 percentage points. This reflected the fact that the achieved sample was very slightly more affluent on a range of measures post-COVID than pre-COVID. I think it reflects that when the response rate drops, the people you lose tend to be the more deprived and those with lower educational attainment. And this was the finding with the biggest difference between the pre and post waves: in the post-COVID waves the sample is more educated, and you can see a drop of five and a half percentage points in the proportion with no qualifications between the two.

Now, how did the two halves of the fieldwork differ? If you look again at attainment and split the figures between the two halves of the fieldwork, this can't be a perfect analysis because they're not perfectly randomly selected samples, but they shouldn't be too far off. As you can see, the return-to-in-home results, the dark blue bars, were a bit closer to the 2019-2020 results than the initial knock-to-nudge stage, the light blue bars; for example, closer on the estimate for people with a degree. And I suppose that's as you'd expect, because the return-to-in-home stage is much more similar to the pre-COVID approach than the knock-to-nudge stage is.

Just finally, one of the most important areas of the survey is victimisation. Overall, the results suggested that there had been a small but significant drop in victimisation post-COVID, and in violent crime. But how confident can we be that this reflects a real change and not just a change in the sample profile? I think the key thing to do is to look at victimisation by the thing that changed most in terms of sample profile, which is educational qualifications. Two things to note here. First, the likelihood of being a victim of crime is not that associated with educational attainment; it's relatively flat across the different groups. And secondly, across the different groupings, the overall pattern is similar: a small drop in victimisation in each of those groups between the pre- and post-pandemic waves. So we concluded that the change to the sample profile was unlikely to have more than a marginal impact on the estimates of victimisation.

I just want to briefly talk about the second half of this, which is not so much the impact of the change of approach, how people are asked to take part, but the impact of the mode of interview. Overall, 57% of the interviews were carried out face-to-face, so a majority; around four in ten by telephone; and just a tiny proportion, less than 2%, by video. Looking at the impact of the mode of interview is much more complicated and much harder to estimate than the mode of approach and the response rates, because the effects can happen in a variety of different ways. It's not binary, did they take part or did they not; the factors that drive how people respond to questions are pretty complex.

We looked for two potential effects. First, we looked to see if people gave fewer answers to multi-code questions where the survey was completed by telephone without show cards, for example looking at educational qualifications: the number of qualifications they said they had, and whether they said they had a degree. And secondly, we looked at how they used five-point scales from strongly agree to strongly disagree, as in the sketch below.
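A minimal sketch of that first check, assuming an interview-level file with my own illustrative column names ("mode", "n_quals_selected"); the real analysis would also need to account for the different profiles of the two mode groups:

```python
# Sketch: do telephone respondents (no show cards) select fewer options
# on a multi-code question than face-to-face respondents?
import pandas as pd
from scipy import stats

def multicode_mode_check(df: pd.DataFrame) -> None:
    ftf = df.loc[df["mode"] == "face-to-face", "n_quals_selected"]
    tel = df.loc[df["mode"] == "telephone", "n_quals_selected"]
    print(f"Mean options selected: face-to-face={ftf.mean():.2f}, "
          f"telephone={tel.mean():.2f}")
    # Welch's t-test as a rough significance check on the gap.
    t, p = stats.ttest_ind(ftf, tel, equal_var=False)
    print(f"Welch t={t:.2f}, p={p:.3f}")
```

The same comparison can be run on the share of "don't know" and midpoint responses for the five-point scale items, which is the second check described next.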
The five-point agree-to-disagree scale is a relatively standard format in questionnaire design, but previous research suggests that, when you're using that five-point scale, face-to-face approaches that use show cards can build more respondent engagement, so you get fewer don't knows and refusals, but also that show cards tend to capture more neutral responses, the middle response on the five-point scale. We found very little evidence of an impact of this in the crime survey, much less than in the earlier SHS, the household survey work. I think there were two major mitigating factors compared to the household survey. It's partly that the crime survey is comparatively less reliant on long show cards to give those visual cues to respondents in terms of how they should answer multi-code questions. But it was also because crime survey interviewers, unlike on the SHS, were able to visit each home and could give people paper copies of the show cards on the doorstep, so show cards could almost always be used. Overall, the vast majority of interviews used some form of show card, and I think that was the key thing in minimising the impact of the mode of interview, which was the big change between the pre and post waves.

So just a couple of final reflections, which echo some of the things said earlier today. Specifically on the Scottish Crime Survey, where you can make pre and post comparisons, the difference between the approaches was relatively small, so the impact of these changes on the estimates was small and unlikely to have a big impact on key measures. That meant we concluded, and were confident, that trends over time in substantive findings represented genuine changes.

But I think there are wider lessons for the future in terms of mode effects. One thing we should say is that mode effects depend on what you're measuring: a difference in approach may have no effect on one variable but a really sizeable effect on another, so it works at the question level, not at the survey level. The second thing is that response rates aren't everything. When you look at representativeness, you want to look at how response rates differ between different types of area, and at the characteristics of the sample on the profile variables that are most likely to be impacted; it's not just the overall response rate that's important. Also, in terms of inclusivity, I don't think that offering a choice of mode necessarily improves inclusivity, and I think face-to-face is still the best way of reaching hard-to-reach groups: the less affluent, those with lower literacy, the less research-literate. But these debates around public value, quality and cost are going to be there, and trying to unpack the quality side of the cost-versus-quality equation is much harder to do, because these effects work in many different ways.

So that's pretty much me. This is obviously an area where there's lots of work going on at the moment in a number of different studies, the Labour Force Survey for one, but just to highlight that there's also work going on through the ESRC Survey Futures work, and the Scottish Government is currently undertaking its review of the long-term survey strategy, looking at issues around mixed modes and the future of survey research. If anyone wants the full details, the paper's online on the Scottish Government website. Thank you very much.