All right, so thanks, Nicola, for the introduction. I hope everybody is able to see what I'm seeing on my screen. Hi everyone, I'm Adrian, the project coordinator and training consultant on this joint project involving the ARDC, Bond University, the University of New South Wales, and the University of Sydney. Today I'll be presenting on behalf of my other project team members, Adele, Hayton Wade, Brock Esky, and Jackie Cho. Our contact details will be provided in the slides later, and at the end of this session we will email you the slide deck and some other instructions too. Okay. So today's session will be in three parts. We'll start with some context setting and what we're trying to achieve here. Then I'll show you some of the PISCI highlights based on the prototype. Whenever I use the term PISCI, I'm referring to the RDM training experience that we're developing and testing: PISCI stands for principles-aligned, institutionally contextualized training. And the last section will be a bit of a hands-on, where I'll walk you through how you can be involved in this project and provide feedback on PISCI. Just a bit of background here. The ARDC and participating universities are developing an institutional underpinnings (IU) framework that will inform the management, sharing, retention, and disposal of data across all universities. In other words, we are working towards more consistent and coordinated RDM practices across the sector. As part of the IU project, Bond University, the University of Sydney and UNSW have banded together to see if it's possible to develop a common, principles-based introductory RDM training that can be adapted to a university's context, context in terms of policies, processes, and systems.
And the long-term, pretty ambitious goal here is that if the PISCI RDM training is run at all institutions, there will then be a raised baseline awareness of RDM best practices, which will, one, assist researchers to make informed decisions around managing their data, and two, given that there is principles-aligned RDM training across institutions, most likely facilitate cross-institutional management of data. That's the ultimate goal we're heading towards. So there are some baseline RDM competencies identified by the IU framework, which is now out for consultation; at this point in time there are 12 required competencies in the document itself. I've highlighted RDM areas in green. The orange and gray areas indicate how much focus is given to an RDM area in our PISCI training. So for example, competencies that require actions like applying or determining are highlighted in orange and will be given a little bit more focus in our training, and competencies that relate to thinking, like knowledge or awareness, are highlighted in gray. So in a way, this minimum competency set helps us ring-fence what should be covered in our PISCI training. From the IU framework, and also from the collective RDM experience at the three universities, we've come up with these objectives to ring-fence the PISCI design and content. The first one: training is just a piece of the puzzle if you're going after behavioral change, that is, RDM best practice. So whatever we are doing here has to join up with, and be consistent with, university policies, processes and systems. The second objective is that PISCI is specifically designed with new HDRs in mind, and it's only about getting them started with RDM. This links back to the IU baseline or minimum competencies, and there is a thinking and a doing component to it.
So that's our objective here. For PISCI to be engaging and effective, the content will be highly contextualized, but without losing the common RDM principles component. The training will also be designed so that it's relevant to the early phase of the HDR project. The upshot is that certain competencies will be foregrounded in our training. This slide gives everyone a high-level overview of how PISCI was developed. Over the last couple of months, we have not only developed an RDM principles version, which is the green-colored track below, but from that principles version we have created contextualized versions of it, so you can see the different tracks coming out from it. Through this co-design process, we found that while we are three very different universities, there is in fact a common RDM ground that we can fall back on. So this snapshot really shows that we can be same-same, yet different. This next slide is really a summary of the RDM training experience that we are developing here. Again, we are focusing on new HDRs, somewhere within three to six months of their candidature. The content focus is really introductory and baseline. And the objective, as I mentioned before, has a thinking component, which is thinking about RDM, and hopefully they'll also be able to adopt RDM best practices. We are targeting somewhere between 75 and 90 minutes for an average user to complete the experience, and we have structured it around eight different sections. Eight sections seems like a lot, but quite a few of the sections just take somebody a couple of minutes to go through, so it shouldn't be that onerous. In the next slide, I'll unpack those sections a little bit more.
And at the end of this PISCI training, there are some demonstrable outcomes that we hope the HDRs will exhibit. For instance, the training is designed so that there will be activities or quizzes embedded within the training itself, and the end user can't proceed unless they get the correct answer. Of course, to help them get the correct answer, we will provide feedback if they choose the incorrect options on the first or second attempt. The quizzes and activities are really formative in nature, so they won't be penalized if they keep trying; they can basically keep trying and trying. And at the end of the training, we hope that they will draft some sort of RDM plan, or even submit a plan on the uni's RDMP system. And since they are really new HDRs, we really hope that after doing the training they will be able to have a chat with their supervisors around RDM, for instance how to share their data or how to access their data. Finally, at the end of the training, we will get them to complete a survey. I'll provide a little more detail about the survey later, but it will have two components: one is really about checking their understanding and knowledge of research data management, and the other is about the overall experience of the training itself. So these are the eight PISCI sections that we have created at this point in time. You can see the details there, and also the approximate time it takes to complete each section. The longest sections would definitely be the case studies; I'll talk a little more about that later on. I'm just going to leave this up for a couple of seconds for everybody to have a quick look. And as mentioned, the slides will be shared with everybody at the end of this session.
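The formative gating described here (the learner can't proceed until they choose the correct option, hints are shown on wrong attempts, and retries are never penalized) can be sketched roughly as follows. This is a minimal illustrative Python sketch, not the actual iSpring implementation; the question text, options and hints are made up:

```python
# Minimal sketch of a formative, non-penalizing quiz gate.
# The question, options and hint wording are illustrative only.
QUESTION = {
    "prompt": "Which classification fits interview recordings with identifiable participants?",
    "options": {"A": "Public", "B": "Sensitive", "C": "Highly sensitive"},
    "answer": "C",
    "hints": {
        "A": "Public data carries no privacy risk - identifiable participants do.",
        "B": "Close, but identifiable human data usually sits one level higher.",
    },
}

def attempt(question, choice):
    """Return (passed, feedback). Wrong choices get a hint, never a penalty."""
    if choice == question["answer"]:
        return True, "Correct - identifiable human data needs the highest protection."
    return False, question["hints"].get(choice, "Not quite - have another look.")

def run_gate(question, choices):
    """Process a sequence of attempts; unlock only once an answer is right."""
    for choice in choices:
        passed, _feedback = attempt(question, choice)
        if passed:
            return True
    return False
```

The key design point is that `run_gate` only ever unlocks on a correct answer, however many attempts that takes, which mirrors the "can't proceed until correct" behaviour described in the talk.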
So the survey items that we will be using, or adapting, for this project come from my previous project. We have published a paper on it, so if you're interested, do have a look at the paper. As mentioned, there are really just two components: one on research data management knowledge and understanding, and the other on the quality of experience. Of course, we may tweak the survey items a little, because the setting around the training is a bit different. Okay, so that's really the context setting, to give everybody an idea of where we're coming from and where we are heading. The next part is just showing everybody some of the highlights of the prototype that we have developed in iSpring. iSpring is an authoring tool for creating online training or online experiences; it is based on PowerPoint, but it allows interactive activities to be embedded. So I'm going to show everybody the Bond version of that prototype. This is really about giving everybody an idea of the look and feel of the product that we will be developing. One of the key features is definitely the accessibility mode. Clicking on that button gets rid of all the extra graphics and images and just displays text. This is really for people who have a visual impairment and may need a screen reader, so this function will help those people access the training. There's also a history panel button where end users can easily navigate to screens they have previously visited. So that's a quick overall look and feel. As mentioned, we will have activities in there, and the activities are interactive: you can click and select options, and there are pop-ups to provide feedback.
And some activities are really just checking in or tuning in, and sometimes they're just there to give the end users a mental break. One of the key highlights of our training is definitely the case studies. This is to ensure that we are able to push information that is very relevant to the HDRs. In this case, the case studies are really about data classification, classifying the sensitivity of their data. We get them to choose which data types they will be collecting or handling, and clicking on one of the options brings up a second menu where they can choose the case studies. At this point in time we have designed four groups, or buckets. As I mentioned, when you click on one of those options, it brings up another separate screen, so there's a branching option here. For instance, if somebody clicked on the lab experiment data, it will bring up the lab experiment data menu, and from there the HDR can choose a case study that is relevant to whatever they are doing. So we are really placing the end user front and center here. This is an example of a case study. It's really short, just a couple of sentences, and then we ask the HDRs: what do you think the classification of this data is? Based on their selection, if they choose the wrong option, we provide feedback or some hints on how to get to the correct answer. So it's very formative in nature, as I mentioned. And even if they get it correct, we'll reinforce why they've gotten the answer correct. So those are some of the useful features of the iSpring authoring tool. So that was data classification, and we also have case studies on data handling, where the focus is really about storage and access.
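The two-level branching just described (first click picks a data-type bucket, second click picks a case study from that bucket's sub-menu) can be sketched as a simple nested lookup. A rough Python sketch; apart from "lab experiment data", which is mentioned in the talk, the bucket and case names are illustrative placeholders:

```python
# Sketch of the two-level case-study branching: data type -> sub-menu -> case.
# Only "lab experiment data" comes from the talk; the rest are placeholders.
CASE_STUDY_MENU = {
    "lab experiment data": ["instrument readings", "lab notebook records"],
    "survey data": ["anonymous online survey"],
    "interview data": ["recorded interviews with identifiable participants"],
    "image data": ["microscopy images"],
}

def open_bucket(menu, data_type):
    """First click: return the sub-menu of case studies for that data type."""
    return menu.get(data_type, [])

def pick_case(menu, data_type, index):
    """Second click: return the chosen case study, or None if out of range."""
    cases = open_bucket(menu, data_type)
    return cases[index] if 0 <= index < len(cases) else None
```

Keying the menu on data types rather than faculties is exactly what the focus-group participants later praised: a learner picks the branch matching the data they handle, regardless of discipline.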
That is, how to safely secure your data. Same thing: they'll be able to select a case study that is relevant to whatever they are doing, and we have broken this case study up into four parts. For instance: they're at the university before their confirmation review, what should they do? If they're out in the field collecting data, what should they do? When they are back at the university and they need to share data with their supervisors or with external collaborators, what should they do? And finally, when they leave the university, what's the best place for them to archive or retain their data? This is the project timeline that we are working towards. At this point in time, we are somewhere in April, and that's the reason why we're having all this consultation. As mentioned before, we have also conducted focus groups with the HDRs, as well as running similar info sessions at the three universities, to get their internal stakeholders' inputs and to review the content. Hopefully, if all goes well, the pilot testing phase will come somewhere in July, and we will have a pilot product for everybody to engage with; hopefully everybody will be keen to provide feedback on that. And just to share a bit of good news based on the focus groups that we have conducted with 32 HDRs across the three universities. I also forgot to mention that in parallel, we're also running this as a research project, and so we have gotten ethics approval for it too. This project really adopts a participatory design approach, so it's important to involve our target end users, our HDRs in this case, in our design and development process.
This should lead to a product that our end users will really find useful; that's the whole idea of the participatory design approach. We want to create a product that's really useful and relevant for end users, and so we conducted focus groups with HDRs to get their inputs on what we're trying to achieve here. Based on the focus group discussions, three themes emerged. The first is on the interaction and layout; these are some of the inputs given by the HDRs. Basically, they liked the interactiveness of the prototype we have developed. They liked the graphics, they liked clicking on buttons, and they felt that even though it's interactive, we have designed it such that there are only a couple of buttons to click, so it's really easy to use. That's a good win for us at this point in time. The second theme is on tone and language. We've designed the training to use a very conversational tone, so when you're reading it, it feels like somebody's talking to you, like a coach or mentoring figure. And as you can see, we've injected some humor into it, and most of our HDRs did like that part; there are some funny parts and some serious parts. Even a couple of HDRs for whom English is not their first language found the language very easy to follow, really comprehensible and easy to read. And finally, on the case studies, which are one of the highlights of our training: they found the examples we used very relevant and appropriate. The design focuses on data types instead of research contexts or faculties. One of the HDRs said it really works for him because, even though he's in a big faculty, he chose a case study that was outside his faculty, because he was really focusing on the data types that he would be handling and collecting.
And of course, they found the information on data handling very informative and useful, because it's really about storing data on supported, secure platforms. So all these things are really early indications that we are on the right track. Of course, there will be suggestions for improvement, and we take them very seriously. Quite a fair few HDRs asked for more data types in the case studies, so that it appeals to a wider audience. The other thing is that they are really looking for, not quite a certificate, but some kind of takeaway; after completing the training, they hope to be able to take something away from it. They also said it would be good if there were a one- or two-page key-takeaways document: what are some of the important things we've covered, and where to look for more information. So that's something they are looking for. And the last one is not a surprise: because this is a prototype, there will be bugs, for instance blurry images and screens not displaying properly. These are things that we will seriously look into as we move to the next development phase. So at this point, that brings us to the end of me talking. These are our contact details; if you need to look for us, this is where you can find us. I'm going to stop now to open the floor to questions, if anybody has any, before we move on to the so-called hands-on phase. Yeah, I can look at the chat. Yeah, I can throw chat questions to you if you like. Do you think the individual eight sections or sub-parts could be reshuffled to be appropriate to deliver to newly hired research or supervising staff at institutions? Very good question.
In fact, that is a common theme appearing across the three universities, because we have been very upfront that whatever we're doing now is just for HDRs. But a lot of the other units, after looking at what we've developed, say that this might also be applicable to academic staff and supervisors. So yes, it can definitely be adapted. At this point in time, what we have is just the HDR one, but because it's principles-based, I think we are fairly confident that down the road we can adapt it for other target audiences. Okay. And for professional staff too, got that noted. A really cool tool. How did you talk to, slash engage, HDR students to begin getting them to go through the iSpring tool? Like a 10-minute talk, or a whole-hour run-through with the RDMP aims, etc.? Okay, so the focus group was one and a half hours long. For the first 15 minutes, we got them to engage with just one section of that iSpring training. That one section basically gives them an idea of how to interact with it and how it looks. The rest of the focus group time was actually spent on them looking at the content itself on a static digital whiteboard, which brings us to our next section. After this Q&A, I will actually get everybody to access that Miro board, the digital whiteboard that we have developed, and that's where you can see all our information as well as provide feedback. Yeah. What was the greatest challenge the team had to overcome in contextualizing the training for each of the three universities? All right, so it was already a challenge when we tried to develop a training for one university in the previous project, and now we've taken on more by engaging with all three universities. So I would say the challenge is really around building consensus.
Because every university may have unique needs and requirements, but we always try to remind them that there are these higher-level RDM principles, and what we are really trying to do here is focus on introductory and baseline training. So it's really about reminding the universities, the internal stakeholders, that this is the scope of the project and this is what we can do based on our current resources. And as I mentioned earlier, even though all three universities are different, we do have a common RDM ground. For instance, all three universities see the importance of being able to classify your data first before doing anything else; that was a very important thing. And in terms of handling data, I think all three universities agree that using university-supported platforms is the first choice. So there are commonalities across all three universities. Do you think the training objectives slash principles that emerged from the IU project element might change now that they're being exposed to sector feedback? In an ideal world, I hope they will stay the same. We know it is possible they might change, but I think we are fairly confident that most of the principles and information we've designed into our training is pretty common. As I mentioned, at least based on the experience of the three universities, there's far more commonality than difference, so I think we should be able to work around that. Yeah. And from a UNSW perspective, looking at the IU principles: because they are described as minimum competencies, we don't feel that they would change significantly. Obviously, each institution may have different ideas on what the ultimate best practice on data management is.
But this is really about setting that initial baseline: what is the minimum standard that we want our HDRs to be able to reach? And so far in conversations, most of the institutions have agreed these are just some very basic understandings that we've struggled over the last couple of years to get our HDRs up to. That is what both the IU competencies and our training are designed to do, and we believe that standard is fairly common across Australian institutions. How do you anticipate rolling this excellent content out to other universities? So the reason why we chose to go with iSpring is, first of all, that iSpring is an add-on to PowerPoint, so if you are good with PowerPoint, it's quite easy for you to pick up iSpring. And the output of the iSpring product is actually a SCORM package, and we know that it's pretty easy to deploy a SCORM package on the major learning management systems. For instance, the three universities are using three different learning management systems: at UNSW it's Moodle, at the University of Sydney it's Canvas, and at Bond University it's Blackboard. We have done some preliminary testing, and the SCORM packages actually work across all three universities. But in the end, what we hope to make publicly available to everybody, to the whole sector, is really the static content, and it is really up to the universities to decide how best to deploy that static content. Later on you'll see what I mean when I say static content; what we're doing is really just developing the information there. We've got just a couple more here. Great work. The data classification module will be very useful. Have you done web content accessibility testing? How do you address accessibility requirements at each institution? So we are not at that stage yet. As mentioned, we have just done a prototype.
Once we have built the pilot product, we'll get each university to test out the training on their own LMSs, because even though I did mention that it's generic enough to be deployed across all the different LMSs, it actually looks a little bit different when it's deployed. So we'll do further testing on that. And then finally, I think an interesting challenge: why should developing common training on RDM be such a challenge? The Code for the Responsible Conduct of Research applies to all universities, and the disposal schedules applying to research data aren't so different from one state to another. Agreed? That's why this is being termed PISCI, which is principles-aligned, institutionally contextualized. The principles are the common part; as I mentioned, there is some common ground which everybody agrees upon, and in this case I think David is right: there is a code of conduct and so on and so forth, and we even have schedules for the management of data. The tricky or challenging part is really contextualizing it for the institution by putting in the institution's policies, processes and systems. The idea is that once you have a training package or information that's highly contextualized to an institution, it reduces the cognitive load for the end users. They don't have to make the leap of "all right, how can I apply this in my setting?"; we make it very explicit for them: in your university, this is what you should do. I hope that answered your question. Yeah, and John adds there was less coordination previously; we're working towards coordination in this project. I think it's also interesting to look at the differences between universities in the ways that policies and processes are applied, the differences in the way they are structured, the way that RDM support services are provided, and the differences in who is responsible for providing training at different universities.
And sometimes that training doesn't all come from one place; it can be distributed across the different business units who are responsible for different parts of RDM. So I think those things all add up to differences, at least historically, in the way that training has been provided. Yep, well said, Nicola. Thank you. Sorry, I can't help myself. Were there any other questions? Here we go. This looks great. Can I ask why you've identified HDR students as the main target? Is there a plan to expand the audience for the training? Yeah, one reason is that HDRs are a more accessible target audience for pilot testing; it's easier to get them. Everybody's busy, we know, but researchers tend to be busier than most people, and HDRs are very happy to come. It's easier to target a new group of people, I would say, and it's just easier to ring-fence that group; it's a lot easier for our systems to identify who's an HDR. Well, I think from an institutional perspective, we have a much more structured management process around our HDR cohorts in terms of when they can do training, when they should do training, when they graduate and that sort of thing. With staff, it's a much more fluid process: staff can move between faculties, and there isn't really a start and end date for staff. So it's a much more fluid situation for staff, particularly professional and academic staff, and differentiating between the two is sometimes difficult as well. I think that's a very reasonable answer, and the practicality is obviously really important. But do you foresee, in terms of tone or content, that there will need to be many adaptations moving from an HDR cohort to, say, an early-career researcher cohort or beyond?
So, yeah, as mentioned, the principles will be the same across all cohorts, because they are basically best practice, so they should be the same for all researchers, and even for professional staff. In terms of adaptation, certain parts could be shorter; you don't have to front-load as much information, because assuming they have gone through those four or five years of research experience, you can cut down on some stuff. I think that's one thing. The other would be the tone and language, things like that. But we don't foresee much adaptation. Yeah, thank you, Elena. All right. Okay, let's move on to the fun part, where everyone here gets to see the work that we've done so far. But before going there: what kind of feedback are we looking for? These are just three prompts to frame your feedback on what you'll be seeing. One: does the draft content achieve the PISCI objectives, for instance, does it align to the IU framework? Two: is the information designed and framed to encourage RDM best practice, for instance secure storage and access? And the last one, I think the most important: based on what you see, do you think you will be able to adapt it for use at your institution? Because, as mentioned, at the end of this project we hope to make the principles version of the training available, and other universities can then adapt it for their own use. And of course, other feedback will be welcome too. So this is where I will paste the link in the chat. Or, where is the link? I can do that.
There you go. All right. Fantastic. So can everybody click on the link? It is a public access link, so you don't need to sign in to anything; once you click on it, it should bring up a Miro board. What does a Miro board look like? It will look like this. It will take a while for the Miro board to load, so if you have any issues accessing it, please just post something in the chat and we'll see how we can help you. I can see people coming in already. Some of you may already be very familiar with Miro, but I'll just do a quick walkthrough. Let's give maybe one or two minutes for everybody to join the Miro board. Okay, I will share my Miro board screen, just in case some people have issues accessing it. Okay. So I assume everyone has access to the Miro board already. Okay, I'll try to pull everybody to me. I think the most important thing here is that what we are showing is really the principles version and the UNSW version, just to give everybody an idea of how the principles version can be contextualized to any university. If you are interested in the University of Sydney's version or Bond University's version, please contact the relevant project leads: for the University of Sydney that's Adele, and for Bond University that's Brock. Okay. The other thing is that at this point in time we are reserving all rights to the content, because it's really just in development, and I feel it's probably not suitable for wider use and adoption yet. As I mentioned, it's planned that a form of the principles version will be made available under a Creative Commons license at a later date. Right. Again, if you have any questions, feel free to contact me. All right, so without further ado, let's have a look at the Miro board.
For everybody else, you probably only have two functions. One is the hand function, which allows you to pan around the Miro board; you can zoom in and out using the scroll wheel. The other one, the most important, is the comment button. That's where you can provide feedback on our PISCI training: you just drop a comment on the relevant screen that you want to give feedback on, and that's good to go. I'll show you. So if you click on the comment button — those of you who have access to the Miro board can just try putting in a comment; click anywhere, just to test whether the comment button works. Yep, somebody has a comment there already. That's John. Fantastic, thanks, John. And most importantly, please identify yourself in the comments, so that in the event we need some clarification we can come back to you. But if you want to remain anonymous, that's fine too. Yep. Wow, fantastic. Thanks, Carl. Okay, so there are basically two different colors: the yellow boards for UNSW, and the green-colored boards, which are the so-called generic principles version. So you can see how we have contextualized or tailored it for a particular university. Section one is there, section two, section three. Once you go to section three, that's the data classification case studies, and it gives you an idea of the different types of case studies we have. But for the purpose of this particular round of consultation, we only have the details for two case studies for everybody to review; it's really just to give everybody an idea of what a case study is and what it contains. The final version will have around nine to ten case studies, but just for this consultation we are providing two examples.
Case number one and case number three. Same thing once you navigate to section five, which is the data handling case study: in the final product we are looking at three different branches, but for the purpose of this consultation we are just providing one example, case number one. You will find the details of the case studies at the end of the relevant section. One thing to note, and the reason why we only provided a small number of case studies: as you can see, the case study examples are highlighted in yellow, which means they are the UNSW contextualized version. The case studies have to be contextualized for the data classifications that are available at your institution, and the data handling case studies have to be aligned with what platforms and storage systems are available at different institutions. So that's why we've only provided a couple of those, as an example of how it works at UNSW; obviously for Sydney it will be completely different, and for Bond completely different. The types of data and the example cases may be the same, but the answers will obviously vary depending on the institution. Correct. So for instance, at UNSW and Bond University our data classification levels are highly sensitive, sensitive, private, and public, so we have four levels, whereas at the University of Sydney they only have three levels: highly protected, protected, and public. That's the reason why the case studies are very contextualized, and the same goes, as Jackie mentioned, for the data handling case studies: each university uses very different platforms, and they have very different preferred platforms, so to speak. That's why we're only providing an example, just to give everybody a flavor of what it can look like. Is anyone stuck in the Miro board? It should be fairly straightforward.
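To make the contextualization idea concrete, here is a minimal sketch of how case-study answers could be keyed to an institution's classification scheme. The level names are the ones mentioned in the session; the dictionary structure and function name are purely illustrative assumptions, not part of the actual Pisci materials.

```python
# Hypothetical sketch: mapping each institution to its data classification
# levels, ordered from most to least restrictive. Only the level names come
# from the session; the structure and function are illustrative.

DATA_CLASSIFICATIONS = {
    # UNSW and Bond University both use a four-level scheme
    "UNSW": ["highly sensitive", "sensitive", "private", "public"],
    "Bond": ["highly sensitive", "sensitive", "private", "public"],
    # The University of Sydney uses a three-level scheme
    "Sydney": ["highly protected", "protected", "public"],
}

def contextualise_answer(institution: str, level_index: int) -> str:
    """Return the institution-specific label for a case-study answer.

    level_index counts down from 0 (most restrictive). Indices beyond an
    institution's scheme are clamped to its least restrictive level, so a
    four-level answer still maps onto a three-level scheme.
    """
    levels = DATA_CLASSIFICATIONS[institution]
    return levels[min(level_index, len(levels) - 1)]
```

For example, the same case-study answer slot resolves to "private" at UNSW but "public" at Sydney, which is exactly why the yellow (contextualized) boards differ between institutions.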
As mentioned, you only have two icons: one to pan yourself around and one to leave comments, plus zooming in and out, of course. So you can put specific feedback on the specific screens you want to. If you have any general comments or feedback, you can leave them at the end of section eight, where you will see a pinkish, salmon-colored board; that's where general comments or feedback go. Yep, happy to take more questions too, if you like. John? Yep. Sorry, I'm not too sure where Bond is located. Are they all New South Wales institutions? Yep, so as mentioned, we are only showing the UNSW version, just to give everybody an idea of the contextualized version. If you want to have a look at the University of Sydney one or the Bond one, you have to contact the project leads directly. Bond is in Queensland. Okay, awesome. I was just wondering about some of the retention things, but that's awesome. Yeah, retention. There are slight differences, but overall it's pretty similar in terms of the retention periods. Generally, these state-based variations tend to center around the more sensitive data types, like health data and that sort of thing. And usually in introductory training, for those types of information we always come back to "please contact your ethics team or your data support team," because there needs to be tailored support for those types of data. In terms of the next stage of the pilot, are you thinking there would be state-based groups, or would it be individuals, whoever puts their hand up? I'm thinking in terms of the Western Australian unis: a lot of what we're talking about will be unified across the state, so I wasn't too sure how the pilot was going to be organized.
Oh no, when we talk about the pilot phase, we are only really running the pilot testing at the three universities. I mean, if you ask me, I would love to run it across all the universities, and we will make the pilot available for you to have a play with, but the real pilot testing will only be done at the three universities. Can I check, because this is really useful: so for the longer term, the principles (green) version will be made available to universities, and then we look at it and go, okay, cool, we're now going to change the contents of the slides, the case studies, and the answers to the case studies to suit our needs? Correct. That's great. Yeah, so you put your colors on it, you put your things on it, and you're away. Correct. So, for instance, if you look at some of the green screens, there are areas I've highlighted in yellow; you can see that we just change the wording for the three different universities. Same thing for the case studies: you would just have to look at the case studies and then put in your preferred answers. So we will open this board for approximately two weeks for everybody to give their feedback. I'm happy for you to share it within your team or within your university. We will send a follow-up email detailing all the steps, including the deadline for providing feedback, because we need some time to collate and consolidate all the feedback before moving to the next phase. Just so you know, I'm at the University of Canberra, and I'm effectively giving this material to new HDR students in two weeks. I've developed a version of this, though it might not be this good, so I might be in touch with you if I get around to implementing some of this, and let you know how it went. Yep, so just get in touch with me.
If you want to use some of it, yeah. Because as mentioned, it looks pretty complete, but actually there are a lot of places we still need to improve. That's why we're saying that at this point in time it's all rights reserved, but if you need some parts of it, just contact me and we can have a chat about it. Yeah, I agree, this looks awesome. I was thinking this is probably going to be super useful for any university coming in at step zero and trying to get to step one. I'm just wondering if there are areas that could be flagged within the training, saying this is an area where, if you are at a more advanced level or a more mature service, you could include additional things. I know Jackie was saying it's the entry-level requirements, but maybe if there were just little flags or notes saying, look here if you are doing advanced stuff in this area, like big data or text mining or something like that, that could be useful for institutions that are further along in the journey. Yep, definitely. As I mentioned, this is an introductory baseline training, and of course you can add or link other trainings to it. Because at the end of the day, whatever we're developing will be deployed on some sort of LMS, some sort of learning management system, so it's going to be a course page. I would guess that any additional supplementary resources would be on that course page too; this is just one part of that course page. That's my thinking. Right, so it seems that everybody is able to navigate the Miro board without any issues. As you know, there's a public link, it's an open link, just click on it. Okay.
So, Janice, I think the answer is yes: further down in section four, you will see that we talk about storage options and classification. It seems pretty quiet; everybody seems to be having fun with Miro. We can end this session early if nobody has any other questions or comments. Can I ask a general question around UNSW's approach to training? These online modules are fantastic, and I think they're a really nice, streamlined way of thinking about data management. In terms of online delivery versus in-person training, what's your strategy around pushing people through the online modules versus doing more intensive engagement with faculties or with different areas of the university? Our current view for our rollout is that the primary point of engagement with the material will still be online, as a way of quickly targeting a large cohort of incoming students, and also because faculties and schools tend to hold their induction sessions at various times throughout the year. Then, after they do their online training, we tend to follow up with more detailed support sessions that we tailor for the faculty. So for example, if we go to medicine, we select specific tools that may be more relevant to the medicine HDRs, and we provide a more in-depth run-through of what support the university can offer those HDRs. That's the way we've approached it at the moment: online first, just to make sure that everybody understands the base-level expectations, knows how to navigate all the university web pages and emails, and has all the information up front.
And then we follow up with, as you said, the more in-depth, more intensive support. Every year we evaluate, based on data on the usage of platforms and that sort of thing, which faculties we need to target a little bit more. That sounds very reasonable. Sorry, Adrian and Jackie, I did have one last question. Would it be possible to add a timeline of the implementation plans onto the Miro board, just so we can see when everyone else can get their grubby mitts on it? Sure, I'll paste the project timeline again. As of today, of course. I think if we don't have any more questions, then we might end it there and give everyone a couple of minutes to get to their next meeting. Brilliant. Thank you so much, Adrian and Jackie. It's a really cool project, and really awesome to see such a collaborative project coming out of this program. Really exciting. And thank you everyone for your input and feedback. As these guys said, keep it coming for the next couple of weeks, and I'm sure we'll be in touch again soon to let people know how this and the other projects in the program are going. See you all. Thank you everyone.