[Opening remarks garbled in the recording: the chair, Sarah, introduces the session and the Scottish Crime and Justice Survey team, including Catriona Coldwell and Joslyn Hickey.] OK, then, so over to you, Anna. Thanks very much, Sarah. And hello, it's a delight to be joining you. Hopefully you can see the slides, OK, and can hear me as well. Yes, I can see and hear you. Brilliant, that's good to know. OK, so today Katrina, Joe and I will be presenting an update in roughly three parts. I will start with a brief overview and background to the Scottish Crime and Justice Survey pre-COVID-19. I will then hand over to Katrina, who will talk more about how COVID-19 has affected the survey, and Joe will close with an update on re-procuring the next SCJS contract and the future of crime surveying in Scotland. So, as I say, I will just start with a quick overview of how the SCJS has been run up until March 2020, and hopefully this will provide some context for the time series and also where we were starting from when we needed to make adaptations to the survey during the pandemic. So, looking back, typically the SCJS is an annual survey of around 5,500 adults aged 16 and over, living in private households in Scotland.
The sample is designed to be representative of all private residential households across Scotland, including Highland and Island communities, for which a systematic random selection of private residential addresses is typically produced from the Royal Mail's postcode address file and allocated in batches to interviewers. And to note here, the survey doesn't provide information on crimes against adults living in other circumstances, for example tourists and those living in institutions or communal residences such as hospitals and student accommodation. The survey also excludes crimes against businesses. The design of the survey has been broadly similar from 2008-09 up to 2019-20 and, as this slide illustrates, has been completed face to face in the homes of respondents, with sections on more sensitive topics completed by the respondents themselves using the interviewer's laptop or tablet as part of the main interview. And the SCJS is primarily a victimisation survey which captures information on adults' experiences of violent and property crime, including those not reported to the police. It also asks adults about their perceptions of crime, the police and the justice system in Scotland, and we collect information on more sensitive topics (drug use, stalking and harassment, partner abuse and sexual victimisation) via a self-completion element of the survey. Data collected via the self-completion element of the SCJS is collated over two survey years and published biennially, with the latest results available for 2018-19 and 2019-20 combined, which we published back in March. So I'll just hand over now to Katrina to talk more about developments since COVID-19. OK, I'll carry on for now just while Katrina tries to join. I have to apologise, I haven't prepared any speaking notes for this, so it may not be as smooth as the presentation Katrina would have given.
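As an illustrative aside, the "systematic random selection" of addresses mentioned above can be sketched roughly as follows. This is a toy sketch with a made-up frame: the real PAF design is stratified and clustered, so this only shows the core equal-interval idea.

```python
import random

def systematic_sample(addresses, n):
    """Draw a systematic random sample of n records from an ordered frame.

    Simplified illustration only: pick a random start within the first
    sampling interval, then take every interval-th address after it.
    """
    interval = len(addresses) / n           # sampling interval
    start = random.random() * interval      # random start in [0, interval)
    return [addresses[int(start + i * interval)] for i in range(n)]

frame = [f"address_{i}" for i in range(100_000)]   # hypothetical frame
sample = systematic_sample(frame, 5_500)
print(len(sample))  # 5500
```

Because the interval here is well above 1, every selected index is distinct, which is what makes the interviewer batch allocation straightforward.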
So yes, going back to March 2020 and the impact of the COVID-19 pandemic, which meant that all face-to-face in-home interviews were suspended. At the point where we needed to suspend interviewing on the SCJS, we then needed to undergo a period of rapid redevelopment to think about how we would be able to meet the evidence gap that opened up as a result of that suspension, because the SCJS forms that crucial piece of evidence capturing crimes which are not reported to the police, and therefore sources such as police recorded crime cannot provide an alternative. So that really was our main aim: to capture victimisation. And so we developed a telephone survey using a re-contact sample of SCJS respondents from the previous two years who had agreed to be re-contacted for further research. So there were a few key differences between the SCJS and the new, standalone Scottish Victimisation Telephone Survey (SVTS), one of those being that the interview was shortened to 20 minutes compared to an average of 40 minutes for the SCJS. And instead of interviews running continuously through the year, as is the case for the SCJS, they ran from September through to October, with a smaller achieved sample size of 2,654. And clearly, with the change in mode as well, we saw that play out in the response rate, as you would expect: a response rate of 39%. And so we were very clear in the production and communication of the statistics that the results were not going to be comparable to the SCJS. But to overcome that, we asked questions that were very specific to the pandemic and also had elements which were comparable to the equivalent telephone crime survey for England and Wales. So the next few slides just run through some of the results that we found.
And so this gives an overview of what the SVTS tells us about crime in Scotland. And you can see, as was to be expected, that 39% of crime occurred after the start of the UK's first national lockdown on the 23rd of March, and 61% in the period before the lockdown. And this was actually where the survey was really interesting, because we retained that 12-month reference period over which people recalled the experiences they had had. But because of the timing of the interviews, it meant that we had six months pre the first national lockdown and six months post, so it gave us some really interesting comparisons in that way. So, other findings, stepping away from the victimisation element and more towards perceptions of crime, safety and policing since the virus outbreak, that is, since the start of the UK's first national lockdown on the 23rd of March: just over half of people (54%) felt that crime in their local area had stayed about the same since the start of the pandemic. For example, 87% of adults reported no change in how safe they were feeling walking alone in their local area after dark since the virus outbreak. These slides will be available as well, so you can have a look at the detail in there in your own time after the conference. And then, I think, just finally on findings from the SVTS, we also asked questions relating to policing since the virus outbreak. So one of the findings there: 74% of people were satisfied with the way the police in their local area were responding to the virus outbreak. So that's providing information that was really specific to the pandemic, where we couldn't rely on the time series. So, just very briefly, because I know that we're starting to run out of time and I want to make sure we get through to the next section: the return of the SCJS.
So we're now back in the field, which is very exciting, and we are doing things slightly differently, well, very differently now. So we're now operating a mixed-mode format where an interviewer will send out a letter with all the details of the survey and how the interview will work. The interviewer then knocks on the door (this is referred to as "knock to nudge") and a telephone or a video interview is offered. At the moment we're not offering a face-to-face interview, but at the stage where it's considered safe and appropriate to do so in the Scottish context, we will be making that an option as well, though there will still be the phone and video interview options. And again, the self-completion element, which would normally be completed by the respondent using the interviewer's tablet, is now being made available for completion via web and paper. And again, at the stage where face-to-face interviewing returns, we would then be offering the in-home, on-tablet completion method. So I will just hand over now to Joe, who hopefully will be with us, to talk us through the re-procurement of the SCJS. It's a very busy time for us in the team at the moment: alongside getting the fieldwork back up and running for the 2021-22 survey year, we are also undergoing a re-procurement process. So this is the last year of surveying under the current contract that we have with Ipsos MORI and NatCen. And so we now need to work on re-procuring that contract, with a timescale of going out to tender in spring 2022, letting the contract in October, fieldwork starting under the new contract in spring 2023, and the first set of results being published in early 2025. And this is a really involved process which involves a number of different projects.
And our goal, essentially, written here on this slide, is for this re-procurement process to be transparent and publicised, informed by the evidence, including expert opinion and advice, to respond to the needs of users, and to align with the priorities of the Scottish Government. And here I just want to emphasise the importance of user and stakeholder engagement as an input informing what we do next on the survey. And it's really a time and an opportunity to reimagine crime surveying in Scotland and set the tone for the next four to six years. So yeah, very exciting stuff. We've got a consultation which is live at the moment, and I really want to flag this because it's got a few more days left to run, so there's definitely still time to be submitting a response; I think it closes within the next few days. And the way that we've structured this consultation is by a number of themes for feedback, which you can see on the slide, and hopefully this really gives a flavour of the fact that we're up for revisiting, discussing and challenging pretty much every aspect of the survey as we know it at the moment. And one of the things which may be of particular interest to this group is the SCJS and further research, so how the SCJS can be more conducive to research which others may be doing. And our engagement doesn't stop with the consultation, so we just wanted to flag and let people know that we have user workshops coming up in January. And they will be informed by the themes emerging from consultation responses and discussions that we've been having with users to date. And these are the three emerging themes at the moment in terms of how we may structure those workshops.
So the first is on options appraisal, which is really about continuing the discussion that we've been having over the last year, almost two years, about how we adapt the survey given the new context that we find ourselves in with the pandemic, and really making sure that we're future-proofing the survey. The second is around our ongoing user engagement and how we can be most helpful to those who use our data. And then the third is on questionnaire development, which will recap on what we've been hearing and talk about what work we'll be taking forward over the next few months. So, yep, that's the final slide there, which just gives a bit of information about how you can get in touch and get involved. So please do consider submitting a response to our consultation, or just reach out and send us an email. We'd love to hear from you. Great. OK, so we will move on now to the next talk. So the next two talks are about the Crime Survey for England and Wales. The first one is an update on the latest data and findings by Catherine Grant. Catherine is responsible for the management of the crime survey at the ONS, and she recently joined the ONS team having been involved in the survey for a number of years at Kantar, overseeing the development of the 10-to-15-year-olds survey and the introduction of measures of fraud and computer misuse. OK, then, so over to you. So, yeah, I'm Catherine, responsible for the management of the survey at ONS. I'm here to give an update on the crime survey and the work that we've been doing over the last year, and very short-term developments. And then I'll hand over to Joe, who will be talking about longer-term transformation work. So I just wanted to run through a general update on the survey, and then really start to look at the detail of the Telephone-operated Crime Survey for England and Wales, the TCSEW. We published the annual update in July this year, showing the first estimates for the full year from that.
I wanted to touch briefly on comparability with previous face-to-face estimates before turning to the return to face-to-face interviewing, our plans, and an update on how that is going. And then I'll just give a quick update at the end on microdata access. So, as you're probably aware, face-to-face fieldwork for the crime survey was suspended on 17 March 2020, obviously due to the pandemic. We'd interviewed 33,735 adults at that point and the response rate was 64%. So overall, apart from a slight shortfall in the interview numbers and response rate, the impact on the survey estimates from COVID has been pretty minimal for the 2019-20 year, so we were able to do a normal publication for that year. We wanted to try and find another way to produce the crime survey estimates in the period when we weren't able to do face-to-face interviewing, so the Telephone-operated Crime Survey was launched on the 20th of May as an interim measure to provide those headline crime estimates. That sample was formed from respondents who had previously participated in the face-to-face crime survey in the last two years and who'd agreed to be contacted for research purposes. It operates as a panel survey, re-interviewing respondents at three-monthly intervals. The sample size was approximately 3,200 households per month. At Wave 1, we ask all respondents about crime they've experienced in the last 12 months, and then at Wave 2 and subsequent waves we ask about crimes experienced since the last interview, so that's broadly speaking a three-month reference period from there on. We're currently on Wave 6 of the telephone survey and we expect to continue that interviewing up until the end of March 2022. I've just listed out here the current sample design; we chose a panel design to try and maximise the sample. Obviously, because of GDPR, we were only able to use respondents who'd taken part in the survey in the last two years.
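The wave structure just described (a 12-month look-back at Wave 1, then "since your last interview" at later waves) can be sketched like this. The dates and the simple 365-day rule are illustrative assumptions, not the survey's actual fieldwork rules:

```python
from datetime import date, timedelta

def reference_period(interview_date, previous_interview=None):
    """Return the (start, end) recall window for one panel interview."""
    if previous_interview is None:
        # Wave 1: recall the 12 months before the interview
        # (approximated as 365 days for simplicity).
        start = interview_date - timedelta(days=365)
    else:
        # Wave 2+: recall only the period since the previous interview.
        start = previous_interview
    return start, interview_date

# Hypothetical panel member: Wave 1 in June 2020, Wave 2 three months later.
wave1 = reference_period(date(2020, 6, 1))
wave2 = reference_period(date(2020, 9, 1), previous_interview=date(2020, 6, 1))
print(wave1)
print(wave2)
```

The point of the Wave 2+ rule is that consecutive recall windows tile rather than overlap, which matters for the double-counting discussion later on.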
So we had quite limited numbers of people that we were able to re-contact, and we wanted to make the best use of that as possible. We felt that three-monthly intervals were the best option for conducting those waves. And then the first annual update was published in July 2021, and I've just listed here the spread of interviews across the year that were included in that update. I think one thing to note about the TCSEW estimates for 2020-21 is that they can't be compared with the 2019-20 face-to-face survey, and that's due to the overlapping reference periods. I'm going to come on a bit later to talk in more detail about those overlapping reference periods, so I'll just hold that for now. But when you are looking at TCSEW data, there are a couple of notes on comparability that need to be considered. So: the population of the study is restricted to those 18 years and over; overlapping data periods mustn't be used for the main estimates of crime; and the threats and harassment screener question needs to be removed from both the current and comparative years when you're using the data. I mentioned earlier that the TCSEW survey's main purpose was to provide the estimates of crime and to make sure that we were still able to produce those estimates throughout the pandemic. That was our focus, and it meant that some of the detail and some of the modules from the survey had to be cut to make sure we could have a short questionnaire. The survey itself was set up in a six-week period, and the telephone survey needed to be shorter than the face-to-face interview. As you'll no doubt be aware, it's much harder to maintain engagement with a respondent on a long telephone interview than it is when you've got that face-to-face rapport built up. So the telephone survey questionnaire was shorter than the face-to-face survey.
It was just 25 minutes, compared with the average of 50 minutes that we have for the face-to-face survey. And as a result, that meant that we were unable to include a lot of the survey questions that you're used to seeing on the face-to-face survey. In addition, the 10-to-15-year-olds survey was suspended, and we're not planning to relaunch that until April 2022 on the face-to-face survey. Just thinking about what was on that shorter questionnaire: what we included was the household grid, picking up the relationships of people in the household, and the screener module, although that excluded the screener question that asked about sexual victimisation. And then there was a change made to the threats screener, which I'll come back to in a moment. We then used a very shortened victim form for both traditional and fraud and computer misuse offences, and that shorter victim form was designed to focus solely on being able to classify offences, so a lot of the detail that you might be used to seeing from the victim form wasn't included. We included a new module covering concerns about crime in the COVID-19 context, and then the usual demographic modules were included as well. I think one key point is that the self-completion modules were not included in the telephone survey, and that's been one of the key driving factors in our thinking around pushing to restart face-to-face interviewing as soon as we're able to. I mentioned briefly the changes to the harassment question. Towards the end of the screener module there's a threats screener question, and we changed the wording of this question to include harassment. And that was really to try and fill a gap in the survey that we were aware had been there, and we wanted to take this opportunity to plug it. So I've just put the wording up here.
So the wording is: "And apart from anything you've already mentioned, in that time, has anyone threatened, harassed or intimidated you in a way that was intended to cause you alarm or distress?" The consequences of changing the wording for the TCSEW screener question were much bigger than we'd envisaged at the start. It's quite a minor change to the wording, but actually it ended up increasing the number of offences that were captured across a range of offence types, in particular estimates of violence without injury. So, for anyone who's not that familiar with how the crime survey operates, we ask a range of screener questions, which then trigger a more detailed victimisation module for each incident that the respondent has mentioned at the screener section. The classification of the offences then comes from that more detailed victimisation module. So what we were seeing was that the yes answers to the threats and harassment question were generating more victim forms, and they weren't necessarily always being classified as threats or harassment, but were also classified as other types of crime. So, looking at the analysis, we realised that both the comparative year, the year ending March 2019, and the TCSEW survey year would be more comparable if we removed all offences resulting from the original screener question and those from the new screener question in the TCSEW, although that does mean that it's an underestimate of the true level of crime. So you'll see from our publications that we produce two estimates: we have a total estimate of crime, and we provide a separate appendix table which compares with the year ending March 2019. Looking now at how to use the TCSEW data. As I've said, the main measures of crime are broadly comparable provided the estimates are on the following basis: the population of the study being restricted to those aged 18 years and over, and that's because we were using the re-contact sample.
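The comparability adjustment described above amounts to dropping every victim form generated by the threats/harassment screener, regardless of how the offence was ultimately classified. A minimal sketch, where the record layout and field names are hypothetical rather than the actual dataset structure:

```python
# Hypothetical victim-form records; 'screener' names the screener question
# that generated the form, 'offence' the final classification.
victim_forms = [
    {"id": 1, "screener": "theft", "offence": "theft from person"},
    {"id": 2, "screener": "threats_harassment", "offence": "violence without injury"},
    {"id": 3, "screener": "threats_harassment", "offence": "harassment"},
]

def comparable_offences(forms):
    """Drop every offence generated by the threats/harassment screener,
    whatever it was classified as, so that the comparison year and the
    TCSEW year are put on the same basis."""
    return [f for f in forms if f["screener"] != "threats_harassment"]

print(len(comparable_offences(victim_forms)))  # 1
```

Note that record 2 is dropped even though it was classified as violence without injury, not harassment: the filter keys on the originating screener, which is exactly why the adjusted totals understate the true level of crime.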
So the minimum age for people that would have taken part in the face-to-face version of the crime survey was 16, so obviously when we came to re-contact them for the TCSEW survey they would have got older, and we had no way of refreshing that sample to bring younger people in. So that's why we need to restrict the population to those aged 18 years and over. We have overlapping data periods that mustn't be used for the main estimates of crime. And, as I've just talked through, the threats and harassment screener question needs to be removed from both the current and the comparative years for the main estimates of crime. The other key thing about using the TCSEW data is that estimates are obviously based on interviews rather than respondents, as we see in the CSEW. The result of that is that standard errors on the TCSEW are much higher than those that we've seen on the CSEW, because they're based on re-interviewing the same person rather than interviewing that person once only. So really the effect is that the TCSEW has a stronger clustering effect on the sample, and a larger standard error once the original sample strata and the primary sampling units are taken into account. So drawing a sample based on those previous respondents does mean that the sample isn't unique: it's a subset of the households or individuals that we selected in that original sample. Under normal circumstances, the crime survey sample is based on a unique sample of individuals which changes from year to year, so one survey year can be compared with the next. That's not possible for the TCSEW, because we'd end up double counting incidents that fall in the overlapping reference period, and that's because the samples are not unique over those time periods. Hopefully this chart illustrates this a little better. So if you look here, we've got the month of interview down the side and the reference period dates along the top of the chart.
And you can see the shaded boxes here are the reference periods. So if you look at the bottom of the chart, you can see that if you were doing an interview in July 2020, the shading there overlaps with the shading from the previous year, so you could potentially have had the respondent reporting the same incident in both reference periods. That's why we've had to use the earlier comparative year rather than just the prior year. For further information, the year ending March 2021 publication is available on the website. We've also produced a comparability study, which is really interesting, I think, although I'm a bit biased, but definitely worth reading to look at the differences across the face-to-face survey and the telephone survey. And I've just put a link here as well to the latest quarterly publication, which came out at the end of October. So, over the summer, the restrictions around COVID started to ease, and with that we started to look at a return to face-to-face interviewing. As I said, one of the main driving factors around that return is the fact that we've not been able to include the self-completion modules, which include the drugs, sexual victimisation and domestic abuse modules, and we're very keen to be able to get back to producing those statistics. So we resumed face-to-face interviewing in October 2021. I think it's fair to say that the return to face-to-face is definitely experimental at this stage: there are quite a number of unknowns, and we're waiting to see how that pans out during fieldwork. Obviously, we don't know how willing the public will be to participate in in-home research again, or, similarly, how willing interviewers will be to be back out working, knocking on doors and conducting in-home interviews. And, as we've seen over this weekend as well, we're starting to see new COVID variants coming up.
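As an aside, the double-counting risk from the overlapping reference periods discussed a moment ago can be sketched in a few lines; the dates below are illustrative, not the survey's actual fieldwork or publication dates:

```python
from datetime import date

def overlap(period_a, period_b):
    """Return the span covered by both (start, end) reference periods,
    or None if they don't intersect."""
    start = max(period_a[0], period_b[0])
    end = min(period_a[1], period_b[1])
    return (start, end) if start <= end else None

# A July 2020 telephone interview with a 12-month look-back...
tcsew = (date(2019, 7, 2), date(2020, 7, 1))
# ...overlaps the prior face-to-face year (year ending March 2020),
# so the same incident could be reported in both...
prior_year = (date(2019, 4, 1), date(2020, 3, 31))
# ...but not the earlier comparative year (year ending March 2019).
earlier_year = (date(2018, 4, 1), date(2019, 3, 31))

print(overlap(tcsew, prior_year))    # non-empty span: double-counting risk
print(overlap(tcsew, earlier_year))  # None: safe comparison
```

This is the logic behind comparing against the year ending March 2019 rather than the immediately preceding year.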
So I think the future, within the next six-month period, is still quite uncertain in terms of how successful the return to face-to-face will be. But we're hoping to be able to produce some estimates based on that six months of data, and definitely to be in a good position to continue with face-to-face interviewing from April 2022. And, as I mentioned, in April 2022 we then also plan to relaunch the child survey, which we haven't launched during this sort of experimental return to face-to-face fieldwork. So I just wanted to give a brief update on the latest results from the survey. For the year ending June 2021, what we saw was a 12% increase in total crime, largely driven by a 43% increase in fraud and computer misuse. We did see a 14% decrease in total crime excluding fraud and computer misuse, and that was generally driven by an 18% decrease in theft offences. There was little change overall in the total number of incidents of violence, but a 27% decrease in the number of victims of violent crime, largely driven by falls in violence where the offender was a stranger, and that in part reflects the closure of the night-time economy for several months of that year. When we reported the differences, what was quite interesting is that fraud and computer misuse offences don't follow the lockdown-related pattern of reduced victimisation: increases in those offences actually more than offset the reductions that we've seen in other types of crime. So what we've seen is a 32% increase in fraud incidents, largely driven by substantial increases in both consumer and retail fraud and advance fee fraud, and an 85% increase in computer misuse incidents, driven entirely by an increase in unauthorised access to personal information, including hacking. Just a brief update on the data sets: the 2019-20 data set is now available in the data archive, and we're planning to deposit the 2020-21 data to the SRS and UKDA by early 2022.
But that will be the non-victim file only. I mentioned earlier that our focus across the TCSEW has been producing the files that enable us to get the crime survey estimates out, so the structure of the TCSEW victim files is quite complex. What we're planning to do with those files is to wait until that survey is completed, at the end of March, and we'll then deposit the full data once we've had the chance to edit those data files and make sure that they're appropriate for use. At the moment, I think the structure would be very difficult to work through, so our recommendation will be against wider use of the victim files until then. But the plan is to pull them all together into a single set of files and deposit them later on in 2022. And we're currently just working through the remaining historic data sets for resupply, following the change in methodology for handling repeat victimisation, and we expect those to be available in the data archive in January 2022. So that's all from me. Let's move on then to Joe's talk. Joe is going to be talking about the future plans for the Crime Survey for England and Wales. Joe, over the last 20 years, has worked on and developed a number of household surveys for government, including the Labour Force Survey, the Annual Population Survey and the Crime Survey for England and Wales. Joe managed the Crime Survey for England and Wales for the last 10 years and is now responsible for the future development of the survey. Before I start, given the presentation is on future plans and survey development, I was just thinking about the last two presentations and reflecting on what they might mean for the future development of victimisation surveys.
And as we all know, or at least those of us that are familiar with the Crime Survey for England and Wales, the survey has operated using a similar approach and the same mode of interview for at least the last 20 years. That is, of course, until the pandemic hit, which triggered a necessary transition to a different mode and to telephone operation, and that happened similarly for both the Crime Survey for England and Wales and the Scottish survey. We also heard from Kat's presentation that we're returning to face-to-face interviewing, and for very good reason, you know, collecting domestic abuse and sexual assault estimates and so on. But I do wonder whether the days of single-mode face-to-face surveying are actually over. And I think it's worth reflecting on the fact that multi-mode surveys appeared more robust during the pandemic than those that were single mode. In particular, I'm thinking of the American National Crime Victimization Survey, the NCVS. It's interesting that the American survey re-interviews people repeatedly every six months for up to three and a half years. They conduct the first interview primarily face to face and then follow it up by telephone, and as a result it was very much easier for them to react to the COVID pandemic. And what struck me is that it's notable that, in effect, the Crime Survey for England and Wales, and indeed the Scottish survey, moved pretty much in line with the NCVS for the pandemic: basically, we took a sample of previous respondents to the face-to-face Crime Survey for England and Wales interview and then conducted telephone interviews. So, if you think about it, it isn't really that much different to the American multi-mode, panel-survey kind of design, which I think is certainly interesting and worth reflecting on. Anyway, it was just a thought that struck me about the last two presentations.
OK, so just moving on then to this survey. So, what are we going to do? Well, it was about four years ago that we started doing some development work around the transformation of the crime survey. It was part of a wider programme of social survey transformation, and we did some work looking at self-completion and the main screener questions. So, I just wanted to mention some of the original work that we'd been doing around moving the survey online, before the pandemic hit, and there were a couple of other small pieces of research that were going on alongside the pandemic, so I'm just going to talk a little bit about those as well. The interesting thing about the pandemic itself, of course, is that in a way it was almost like a live experiment in looking at alternative modes of interview for victimisation surveys. So, on that, I just want to reflect a little bit in the presentation on things that we might have learned in relation to victimisation surveys from the pandemic and how we reacted to it; in fact, what I was saying earlier was probably similar to that. And then the last few things I want to do are just to say what the next phase of research is that we're going to conduct, what the factors affecting that research are, and maybe then just touch on, and maybe we can have a discussion about, what a future survey might possibly look like. OK, so I think there are three notable pieces of work that we did over the last few years. The first one was the redesign of the crime survey core questions for online collection. The second bit of work was on the ethics of online data collection relating to the sensitive topics that we usually ask in the self-completion modules. And then also some research that we've done on domestic abuse statistics, all of which very much affects how the future survey may operate.
So just touching on that first piece, then: investigating the approach for adapting core sections of the crime survey for mixed-mode data collection, particularly online. In February 2017, Kantar was commissioned to undertake a three-stage scoping and testing project. It was a very large programme of work which included, amongst other research, 99 in-depth interviews with people using a slightly redesigned crime survey instrument. And what we were doing was seeing whether that complex initial and key part of the crime survey, the screener questions and the victim forms, the bits which derive the main estimates of crime in the previous 12 months, could actually operate in an online environment without an interviewer being present to help and assist. The conclusion was that the test script did provide data of sufficient quality to assign offence codes in all cases: you could ask questions where you could actually get a good description of the offence and be able to code it. The main issue related to complex cases, and to two things really. One was that somebody who fell victim to multiple or repeat victimisation found it difficult to indicate the number of incidents that took place, and became confused. And also, single incidents which involved multiple features of crime were susceptible to double counting. So rather than following the primary offence rule, as the counting rules do, people would start recording multiple crimes at that point, which is really problematic. Anyway, the recommendations from that piece of work were that more testing should take place, especially on these more complex scenarios, and of course that we should do more investigation of other modules and other parts of the survey instrument, including sexual assault from the self-completion module.
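The double-counting problem described here can be made concrete with a small sketch. This is purely illustrative: the offence names and severity ordering below are invented, not the survey's actual offence coding frame.

```python
# Illustrative sketch of a "primary offence rule": when one incident has
# several offence features, count a single crime - the most serious
# feature - rather than one crime per feature (double counting).
# The offence list and severity ordering are invented for illustration.

SEVERITY = ["criminal damage", "theft", "burglary", "robbery", "violence"]

def primary_offence(features):
    """Return the single most serious offence among the features present."""
    return max(features, key=SEVERITY.index)

# One incident: an intruder damages a door and steals goods in a burglary.
# Under the rule it is counted once, as a burglary, not as three crimes.
incident = ["criminal damage", "theft", "burglary"]
print(primary_offence(incident))  # burglary
```

An unassisted online respondent, by contrast, might record three separate crimes for that one event, which is exactly the inflation the Kantar research flagged.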
So in light of that, we then started some work in early 2020 looking at the ethics of moving crime survey data collection online, assessing the risks to the physical, emotional and psychological well-being of respondents when they're asked questions on these topics. We did this work with 25 in-depth interviews, conducted with victim survivors of domestic abuse, sexual victimisation, stalking and abuse in childhood, all of those characteristics which we collect in the self-completion module. In summary, I think it's worth saying that it was a mixed picture. Some of it was very positive, but there were also some negative impacts on respondents' well-being and on data quality. And I think, importantly, it concluded that while many victim survivors would be able to weigh up the risks of responding to the survey, it's actually impossible to estimate the extent to which such risks might occur, and that, you know, you have to consider the possibility of serious harm being caused to an online respondent should a perpetrator of one of the crimes of interest become aware of their participation in the survey. And I think that was probably one of the key conclusions of that piece of work, although, as I said, there was no clear argument either for or against proceeding with further work looking into it. We then moved on: in autumn 2020 we put out a research tender, which was won by the University of Bristol, to look at the redevelopment of domestic abuse statistics, specifically to explore the issues with the current survey questions and data collected, alongside user requirements, and to investigate the use of alternative survey modes to ask respondents about their experiences of domestic abuse. There are a number of issues with the data currently collected.
Mainly, they don't align with the definition of domestic abuse introduced in the Domestic Abuse Act 2021. They also exclude offences of controlling or coercive behaviour, introduced in 2015. They don't measure the number of incidents or the frequency of abuse. And more data is required to understand the nature of the abuse that takes place. So there were a number of conclusions, but the top three were that the headline prevalence measure of abuse should be revised and improved; that the survey should measure coercive control, following robust questionnaire development and testing; and that there should be a measurement of the impact of domestic abuse. And then another interesting one was that consideration should be given to the feasibility of a move to long-term online delivery as the preferred primary data collection mode for domestic abuse data. So a slightly different conclusion to the work that Kantar had done previously. So that was all the work that was ongoing just before, or around the time, we switched to telephone operation of the crime survey due to the pandemic. So what are the lessons learned from that? Well, one of the first things is a shorter interview length. We reduced the number of victim form questions required for the estimation of crimes in the previous 12 months, so you can rationalise the survey down a bit and make it a bit more efficient, and that was certainly done. And that matters for multimodal interviewing going forward, of course, because the face-to-face crime survey interview is quite long. And again, we heard from the Scotland presentation and from Kat as well that when the crime survey switched to telephone operation, there was a need to reduce the length of the survey.
And of course that's an issue going forward, because if you do move to a multimodal operation in the future, then the length of the survey has to reduce and the amount of information collected has to reduce. The face-to-face random sample design was set aside in favour of a recontact sample, based on those who had previously taken part in the face-to-face survey over the last two years. It had low attrition rates and very high agreement-to-recontact rates. I mean, OK, these were people who had responded to the survey in the previous two years and there was a recontact question added. The response rate at wave one, when we first started re-interviewing, was 50%, which is pretty good really. You know, we hadn't recruited them to a panel survey; we hadn't specifically said that we were going to be conducting multiple interviews with them, so a 50% response rate was great. The agreement to recontact across all the waves over the following year was consistently 98%, a really, surprisingly high rate of agreement. And then the response rate at wave two was 79% and at wave three 81%, so really low attrition rates between waves. We were re-interviewing people at three-monthly intervals on that survey. And then we also looked at comparability by mode against the face-to-face survey, and I think the conclusion of the report was that the estimates were broadly comparable. I know Kat was mentioning that there were lots of issues, and there were, but those were to do with the way the sample was designed: because we were re-interviewing people, there were no 16-year-olds in the sample, for instance, or very few 16- and 17-year-olds. So one of the issues was that the sample had to be based on the 18-plus age groups; and we redesigned the harassment question, which I regret having included, really.
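As a rough back-of-the-envelope check on what those quoted rates mean cumulatively (treating each wave's response rate as the only loss, which is a simplification):

```python
# Rough arithmetic on the quoted wave response rates: 50% at wave one,
# then 79% and 81% at waves two and three. Multiplying through gives the
# approximate share of the original issued sample still responding at
# each wave - ignoring other losses between waves, a simplification.

wave_rates = [("wave 1", 0.50), ("wave 2", 0.79), ("wave 3", 0.81)]

def cumulative_retention(rates):
    """Cumulative product of per-wave response rates."""
    out, cum = [], 1.0
    for wave, r in rates:
        cum *= r
        out.append((wave, round(cum, 3)))
    return out

print(cumulative_retention(wave_rates))
# [('wave 1', 0.5), ('wave 2', 0.395), ('wave 3', 0.32)]
```

So even with the low between-wave attrition described, roughly a third of the originally issued sample is still responding by wave three, which is why the 50% wave-one rate does most of the work.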
So those were the issues with comparability in relation to mode. It appeared that there was very little difference on those main estimates; I think, overall, we can conclude that. And indeed, that's something reflected in the American victimisation survey, which does face-to-face and telephone interviewing and has done a limited amount of research also suggesting there's very little difference between at least those two modes of operation. So really, there are some quite positive lessons there about the ability to shorten the survey, about the sample design, and about re-contacting respondents and maybe moving to a panel survey. There's lots of valid information in there about how we might move forward with the survey. So I just wanted to touch on ongoing development work. In October 2021, just recently, we awarded a research contract to Kantar for the development of an online survey questionnaire that can be used to estimate the prevalence and incidence of crime. It's quite a large programme of work to be conducted at pace. It starts with a rapid evidence review, which is going on at the moment. We're going to redevelop the existing script: that 2018 piece of research that Kantar did on the screener and victim form questions, we're going to look at that, take on all the recommendations from that report about how those questions should be developed, and push that forward. And then we're going to do some pre-testing with people with complex histories. Remember, as I said earlier, the original Kantar research showed that there were problems with people estimating the number of incidents when they have complex histories: repeat victimisation, multiple victimisation or complex crime scenarios. So we're going to do some pre-testing with people around that.
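Since the new contract covers estimating both prevalence and incidence, it may be worth spelling out the difference with a minimal sketch. The respondent counts below are invented, and the cap on repeat "series" incidents (five is assumed here, echoing the convention historically used in published CSEW estimates) is an assumption, not the project's actual specification.

```python
# Minimal sketch of prevalence vs incidence from per-respondent counts.
# Prevalence: proportion of adults victimised at least once in the period.
# Incidence: crimes per 1,000 adults, with long series of repeat
# victimisation capped (a cap of 5 is assumed for this illustration).

SERIES_CAP = 5  # assumed cap on repeat incidents in a series

def prevalence(counts):
    """Share of respondents reporting one or more incidents."""
    return sum(1 for n in counts if n > 0) / len(counts)

def incidence_per_1000(counts, cap=SERIES_CAP):
    """Capped incident total scaled to a rate per 1,000 respondents."""
    return 1000 * sum(min(n, cap) for n in counts) / len(counts)

counts = [0, 0, 1, 0, 3, 0, 0, 12, 0, 0]  # invented: 10 adults
print(prevalence(counts))          # 0.3   (3 of 10 were victims)
print(incidence_per_1000(counts))  # 900.0 (1 + 3 + capped 5 = 9 crimes)
```

The capping choice matters because, as the earlier Kantar work showed, it is precisely the respondents with many repeat incidents who struggle to report counts accurately without an interviewer.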
And then there's quite a large-scale live trial on Kantar's Public Voice panel, where we'll be using the redesigned survey instruments, testing across two modes, telephone and online. And then we're going to conduct some post hoc cognitive and usability interviews. And there'll be a final report around April or May 2022. There's also ongoing development work around domestic abuse. In October 2021, at the same time, we awarded a research contract to a consortium led by the University of Bristol. The main aims of that research are to understand how questions should be asked to provide the information that users need, focusing on measuring coercive and controlling behaviour, the impact of abuse and the frequency of abuse; to develop and qualitatively test survey questions with victim survivors of domestic abuse and the general public; and to investigate the use of alternative survey modes and further examine the options and associated issues through qualitative research with victim survivors, the general public, and victim services and support providers. So, again, that's building on the research previously done in relation to domestic abuse, taking on board those recommendations and trying to move that work forward. We're going to be reporting on that in the spring of 2022. The final thing is just on work that's been ongoing around the survey of 10-to-15-year-olds, which I haven't really mentioned and did want to mention briefly. To date, the work on crimes against children has concentrated on assessing the feasibility of a separate survey measuring the prevalence of child abuse in the UK. We conducted a consultation with stakeholders on proposals for what the survey would look like in January to April of this year, and we published a response in July. We also consulted key stakeholders on the content of the 10-to-15-year-old survey as well.
And we've just now done a programme of work, again with Kantar, as part of the research on the main survey instrument; we've also included a piece of research which investigates how parents and children feel about taking part in an online 10-to-15-year-old survey, and how that should be conducted. I just wanted to touch on that, and of course we'll be moving it forward over the next year or so. So now, just looking at future issues to address, I'll give a summary of probably where we're at in terms of survey development for the Crime Survey for England and Wales. I think you can see that the focus of research to date has been on key parts of the survey instrument: the self-completion module, looking at the ethics, looking at the type of questions, what we ask and how we ask it, redesigning the questions for online or multimodal surveys; then those other key aspects of the survey around the screener and victim form questions; and then doing some work on the children's side as well. So you can see that we've been very focused on some of the more important, strategic, I suppose, parts of the survey instrument, some of the more complex parts, the bits which have complex questions around them about moving to different modes. So we've started doing that work, but there's a lot more information and data carried on the survey, and it's a question of how you actually achieve that when operating a multimodal survey instrument, and also, as I was saying earlier, of considerations around the length of the survey. And the other thing I suppose we can say is that future development is dependent on funding. At the moment we know we have funding till the end of March, and there's office-wide prioritisation going on following the current spending review.
And we should know very shortly how much money we get for redevelopment work going forward for the crime survey, and of course the programme of research will very much reflect what those conclusions are. So that's another factor we need to take into account. On the future programme of work, I think we can say that the delivery mechanism, so the sample design and whether we move to an online-first survey instrument and what the issues surrounding that are, hasn't been addressed in the current research. So there's got to be something around what response rates are like, what the sample design is, and what kind of non-response bias we get from those response rates. There's also a complication here. Other surveys, like the Labour Market Survey, the LMS, which is kind of a replacement for the Labour Force Survey, have moved online with some success. But there's one big difference between the LMS and LFS on the one hand and the Crime Survey for England and Wales on the other. The crime survey has traditionally always interviewed only a single person in the household: we select one adult at random once we get round to the address. For online data collection, that's quite complex. If it's an online-first survey, you only have the PAF, you only have a household, a postal address; that's the only thing we use when we do the sample design. So in a push-to-web survey, which uses the PAF, you have to somehow contact the household and then somehow select an individual within that household. It's very difficult to do that, or to think of ways of doing it, in an online situation. The Scottish survey presentation again this morning was interesting, because of course they're going round to people's houses and then asking them to take part multimodally, and that's one way you could actually operate it.
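The within-household selection step being described, picking one adult at random when all you start with is a PAF address, could in principle be scripted. The sketch below is a hypothetical workaround for a push-to-web design, not the crime survey's actual procedure; with an interviewer present this would typically be done with something like a Kish grid or a birthday rule.

```python
# Illustrative sketch of unassisted within-household selection for a
# push-to-web design: the first person to open the survey lists the
# resident adults, and the script itself picks one at random. This is
# a hypothetical workaround, not the crime survey's actual method.

import random

def select_adult(adult_names, seed=None):
    """Choose one adult uniformly at random from those listed."""
    if not adult_names:
        raise ValueError("at least one adult must be listed")
    return random.Random(seed).choice(adult_names)

household = ["adult A", "adult B", "adult C"]  # self-reported roster
print(select_adult(household, seed=2021))  # one of the three, at random
```

The obvious weakness is that nobody can verify the roster or that the selected adult actually responds, which is part of why the mode switch is so difficult without a doorstep visit.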
One of the advantages of operating online is meant to be cost, and I don't think sending somebody round to somebody's house to ask them to take part in an online survey is particularly effective at reducing costs. And then the other thing is the possibility of a panel survey. The telephone survey (TCSEW) experience indicates that a panel survey has its advantages, so, you know, I think that's another thing that may well affect the delivery mechanism. And then the final bit, as I was saying, and this is just a thought really: do we need to make these surveys pandemic-proof? Is all this going to go away in the next few years, or is in-home interviewing going to remain more complicated from now on? I don't know. All I do know is that multimodal surveys seem, as I said, to have operated through, or reacted to, the pandemic slightly better than single-mode face-to-face in-home surveys. And, as I've already mentioned, there's the survey length and the remaining survey content to consider. There are just some links here to some of the research work we've conducted to date, and I think I'll leave it there.