Alright, so I'm Anastasia, and I'm coming from an organization called Intersect, where I manage the training portfolio. My background is in physics, and as a hard-science person my mind is always looking for concrete solutions. So we have been working a lot on training impact and on measuring it, ever since I joined Intersect in 2016 or 2017, I don't remember anymore.

Intersect is a not-for-profit organization providing its services in a few states and territories in Australia: New South Wales, the ACT, Victoria and South Australia. One of the major things that we do is the training portfolio: we provide digital skills training to researchers. Just to show you how much we do, this year we have already delivered 279 live hands-on workshops and trained around 5,500 researchers across different states; you can see on the map where these people are based. Historically, we started really slowly back in 2012 in terms of people trained, and then it started climbing. The more training courses we offer, the more it climbs, and we reached the milestone of 20,000 participants in 2021, which is incredible.

In this presentation I'm going to talk about two aspects of evaluating our training program: the short term and the long term. The short-term evaluation is something we have done almost since day one, trying to evaluate our training of HDR students, researchers and staff. Here is our training administration system; I don't want to overcomplicate things, so I want you to focus on the survey part. The survey uses questions on a scale from 0 to 10, where 0 is "not at all" and 10 is "extremely", and then we evaluate our training based on a few different metrics.

One of the most common ones is the net promoter score, which is widely used in industry. It is the typical question: how likely is it that you would recommend a service to colleagues or friends? So if you see this question in other services, that's the net promoter score. Then we use five quality-of-teaching metrics. For example, we ask how the training atmosphere was, how comfortable it was to interact with the instructors, whether the instructors were knowledgeable, whether they gave clear answers, and whether they were good communicators. So we're trying to get a feeling for the teaching style. We also use three additional evaluation metrics: whether attending was worthwhile, how likely it is that you will use this technology, and whether you feel confident applying what you learned. There is a tricky part there, because we know that people feel super confident right after training, but it's good to capture it at that point as well.

We also capture qualitative feedback, which is used to evaluate the teaching, improve the course material and get feedback on course development; we hear what people would like to see and what is missing in our course catalog. This is an anonymous survey, so we only capture the course date, the course name and where it happened. All trainers and instructors ask the attendees to fill in the survey at the end of the course, but we found that people who leave the course early may miss it, so to boost the numbers we created an automatic reminder that is sent on the following Monday: all the participants who did a training the previous week get reminded the week after.
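Going back to the net promoter score question for a moment: the talk doesn't spell out the formula, but the conventional industry definition classifies ratings of 9-10 as promoters and 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal Python sketch under that assumption:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 'how likely to recommend' ratings.

    Conventional industry definition: promoters rate 9-10, detractors 0-6;
    NPS = %promoters - %detractors (passives, 7-8, only dilute the score).
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Toy example: 5 promoters, 2 passives, 1 detractor out of 8 responses
print(net_promoter_score([10, 9, 9, 8, 10, 7, 6, 10]))  # 50
```

On this scale, the +76 mentioned next simply means that promoters outnumber detractors by 76 percentage points.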
Okay, let's see some numbers now, what's actually going on. Our net promoter score this year is +76, based on 2,100 responses, which is around 40% of the attendees, a great response rate. In terms of the five quality-of-teaching metrics, on the 0-to-10 scale you can see that all of them are above 9.4. We keep capturing all these metrics in order to understand better whether what we provide is really valuable for the attendees, and it lets us evaluate all the different aspects of the teaching. Historically, you can see that this is growing as well: the responses keep improving, and our average NPS is improving too, based on 8,000 responses. We're trying to get as much data as possible, so we can feel more confident about the story we're telling and about the quality of the teaching.

But now the biggest question: how do we evaluate the long term? The short term is something we feel confident that we capture quite well, and we get a lot of feedback. But what about the long term: how do you understand the long-term behavioral change and the impact of digital skills training on HDR students, researchers and staff? We wanted to capture the long-term impact on researchers' workflows; which support services the researchers use after training, because if you look at the literature, the confidence of people after digital skills training is very high in the beginning but then drops very fast, and they need some support to keep going and adopt the technology; and also whether there is a link between tools and technologies and research outputs, which is something all the higher-ups are looking for.

So what we did this year: we started with a team of eight people from Intersect for this project. There were initial discussions about how to do it; we reviewed the literature, looked at other initiatives, and explored different metrics that are widely used. Then some experts in the team designed the survey: what the best design is, and how we capture all these things. We distributed the survey for the first round, did some preliminary analysis, and the goal is to integrate this into our systems.

The survey has three sections. The first section is about training impact, capturing the long-term behavioral change in researchers' workflows. The second section is about post-training support: which services they use after the training. The third is about research productivity: is there a link between all these digital tools and the research outputs and grants?

The first round was sent to almost 5,000 people and we received 743 responses. We're planning to send another round later this year, and we're going to receive even more, so hopefully we can reach a thousand responses by the end of the year. It's an overwhelming dataset, to be honest, after the 743 responses, with such rich information in there. So we had to set up somewhere to store all of this; it's in the cloud, and we are using different tools for the analysis. The analysis can happen in different ways: it can be grouped by faculty, because we had this information, by the technology they use, by role and position, by competency, and by the number of courses, so whether they did one course or came to several courses.
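To make that concrete, here is a pandas-style sketch of the kind of grouping just described. The file name and column names (faculty, n_courses, confidence) are hypothetical placeholders for illustration, not Intersect's actual survey schema:

```python
import pandas as pd

# Hypothetical export of the long-term survey (all names are placeholders).
responses = pd.read_csv("long_term_survey_responses.csv")

# Average self-reported confidence, grouped by faculty.
by_faculty = responses.groupby("faculty")["confidence"].mean()

# Compare people who attended one course with those who attended several.
responses["multi_course"] = responses["n_courses"] > 1
by_repeat = responses.groupby("multi_course")["confidence"].agg(["mean", "count"])

print(by_faculty)
print(by_repeat)
```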
There are also different topics of analysis: the demographics; the behavioral change; the post-training support; the eResearch analysis services that we provide to them; and linking the digital tools with the research grants and outputs. The last bit is the reporting, which I'll talk about quickly.

Just to show you the preliminary results: this is what Katherine mentioned, the Kirkpatrick model, so we use the four levels that you can see here: reaction, learning, behavior and results. I'm going to go through these quickly. For the first one, whether attending the course was worthwhile, more than 75% said very worthwhile or extremely worthwhile. I forgot to mention that this was sent to people who attended a course at least a year prior to the survey, so all these people are responding a year after. In terms of behavioral change, you can see how frequently they use it. The other two are the most important ones for me: confidence, where more than 80% say they feel much more confident or more confident after a year, and to what extent the technology has been helpful, where more than 50% say very helpful, and if you include somewhat helpful it's close to 90%. So very good results, very promising, trying to capture the impact in quantitative ways.

I'm also going to show you a bit about the link between digital tools and research outputs. We asked: did the knowledge acquired in the course contribute to your ability to produce materials that led, or may lead, to the following research outputs? People could select one or more, and here you can see the distribution of responses. Most people answered journal article, followed by thesis, presentation, conference abstract, et cetera. Almost 80% of the survey respondents selected at least one research output, which suggests a strong link between all these digital tools and the research outputs. Of course further analysis needs to be done, but this is something preliminary, just to show you a bit of the data.

What's next? As I said, we need to integrate this into an automatic procedure, so that the survey is sent to everyone a year after they do a training; hopefully we can get some thousands of responses. We send it twice a year, so the second round is going out by the end of the year. We hope to produce a report with all our key findings and share it with the wider community. Our ambition was to do it by the end of the year; with all these things that are happening, hopefully it will happen by early next year. So we're definitely going to share our key findings with everybody, and then of course enable our members to explore the data even further: we're going to share the data openly with our members, so they can check the data and hopefully provide some more input on the training impact.
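To give a flavor of the exploration this opens up, here is a minimal pandas sketch of tallying the multi-select research-output question described above. Again, the file and column names are hypothetical placeholders rather than Intersect's actual published schema:

```python
import pandas as pd

# Hypothetical export: one boolean column per research-output type,
# True where the respondent ticked that output.
output_cols = ["journal_article", "thesis", "presentation", "conference_abstract"]
df = pd.read_csv("long_term_survey_responses.csv")

# Distribution of selections per output type (most-selected first).
print(df[output_cols].sum().sort_values(ascending=False))

# Share of respondents who selected at least one research output.
at_least_one = df[output_cols].any(axis=1).mean()
print(f"At least one research output: {at_least_one:.0%}")
```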