Sure, thanks very much Jim, and I'm more than happy to talk right the way through the presentation slides, but equally please interrupt me. I haven't got the chat box in front of me, but Jim, if there are any questions as I go, or people want to comment on what I say or disagree, that's brilliant; it gives me a rest as well, so I'd be happy to be stopped in my tracks. Anyway, thanks very much for the invitation to present to you all. The first thing is a declaration: in my current role as head of programme design and learning technology at the University of York, there's no contractual obligation for me to do any research at all, and some parts of the university think that professional services shouldn't be doing this, that it's the domain of the academics. That's a culture I've come up against, and one I firmly disagree with, because I think evidence-based practice is the key to good practice and to winning the hearts and minds of academics. We're in a research-based university, so we're often challenged about where's your evidence for introducing this change or that change. Supporting evidence-based evaluation of central services is something I've always done, but equally I've been passionate about supporting academics who want to enter the space of educational research, and the paper I'm going to be speaking about is very much one of those: a collaboration with two academics, a husband-and-wife team in our Department of Biology, who wanted to learn more about educational research, so we teamed up. That's what I'll be talking about. In terms of the menu, these are the points Jim asked me to address.
So I'll say something about how the project came about, then defining the scope of the research and how we arrived at the right methods, and then I'll talk more generally about the journey to final publication in the Research in Learning Technology journal, and I'll finish up with some lessons learned. That's what I'm going to cover for you today. First of all, as a former historian I always find timelines and chronology helpful, so here's the chronology of my involvement in this paper and its writing. I was invited into the Department of Biology in February 2018 to talk about their plans around peer-led learning and the use of Slack, and a little more than three years later the paper was published in the journal. Quite a lengthy period of time, but the first point is that it was never predetermined, at my first involvement with the project in February, that we would publish, and I think that's the nature of our profession and what we're here to do: support evaluation, but not necessarily more than that. Evaluation outputs can come in many different formats, and a journal article isn't always the right one; even at the midpoint of the work it wasn't clear that this was the direction of travel we were going to follow. Equally, I just want to reassure people: I'm not suggesting for one minute that it takes three years from start to finish to get something published. You can do it far quicker than that, but as you can see from the timeline, this was broken up into concentrated periods when I had time to work on the project.
So 2018 to 2019 was the data collection period, then there was a hiatus, then in May we started to put together a skeleton of what a paper might look like, and the real nitty-gritty drafting was over the summer, when we could have some quality time to do that work, from June to July, through to first submission in August 2020. After that it was obviously waiting to hear back from the journal, resubmitting, and finally seeing it published in March. So a three-year period, but it doesn't take all that time to get something done. How did I become involved? The first thing, as a learning technologist and programme design person, is that it's good to have a network across the university: teachers and scholars in different departments are very interested in doing applied research based on their own subject domain, so that's a good outlet for collaborations, as well as programme leads and more senior managers. I always encourage my team, and myself, to pick up on what's going on in departments and offer our services, which may simply be to review evaluation plans that departments are thinking about, or in some cases to be actively involved, like this one. It helps if you've got credit in the bank already, in terms of having worked with them before and demonstrated that you're capable in this area. With the Biology department I'd done work previously on student transition to higher education, building VLE sites and reviewing their impact in terms of how they help students on the journey to higher education, as one example, and also MOOCs, that sort of thing.
Having that relationship as a starting point is money in the bank, because you don't know when you're going to cash it in to do something really exciting, but by building that trust and those relationships, things can move quite quickly. In this case I was brought in to advise on a project they'd already started, an elective for a final-year undergraduate biology research project which they were leading. It was around genomes and the use of bioinformatics tools, and I won't go into that detail, but what they were seeking to do was address a long-standing problem: when students do these sorts of projects, they tend to work in silos. All the collaborative behaviours you would see in a physical lab, where people spontaneously ask questions and share what's going on with their own lab work, just don't happen in dry research projects. So although students were conducting their own independent projects, they wanted to build in that sense of information sharing, collaboration and support, and from a multidisciplinary perspective as well, because it's important to say that each of these projects was looking at different aspects of genomes, dealing with different data sets. They weren't doing the same thing, but their approaches to the work could be shared, and my colleagues James and Seth really felt that by baking in that collaboration, this would be good for employability as well. That's the genesis of the project and what they wanted to do.
The solution they came up with, and what I helped them with, was to frame this as a blended-learning, peer-led group research project, based around the use of near peers to facilitate unguided group and individual project work. Near peers and peer-led group work are an offshoot of collaborative learning theory. Collaboration is when students are all working on one enterprise at the same time, as opposed to cooperative learning, where students have individual components of a larger task which they get on with and which feed in. This is different in that it is a group of independent researchers with different projects, facilitated in sharing ideas about their different approaches by someone who is more experienced. The use of near peers is very common in the biology lab in wet projects, through demonstrators and facilitators who are on tap to support students, so it was that sort of approach they were looking for. The blended aspect was weekly plenary face-to-face meetings, when the academics, the students and the near peers (the postgraduate students performing that role) would come together; during the week, when students were left to their own devices, they could be working on these projects on campus or off campus, anywhere, and they were given a virtual space where they were encouraged to feel comfortable sharing what they were doing, asking questions and sharing practice. The tools used for that were institutional ones. Google Team Drive, which I'm not going to talk a lot about and which wasn't central to the research, was a repository for shared digital files which all students had as a starter for how to approach these projects, setting out the context and protocol for what they were doing. But Slack was the
medium through which they interacted. When we started this in 2017, Slack was one of the new kids on the block: part of a new generation of team-based platforms with the benefits of social media, prompts for communication and push notifications, alongside a searchable message repository and different channels through which students could interact with their peers and others. So a bit of everything, mixing social media with digital repository facilities. Those were institutionally owned platforms, but Slack really hadn't got going as a teaching tool at that point; it was more for staff use, at a departmental collaboration level, but not for teaching. When we started, and this is one important aspect of the project, you don't have to have big numbers to do good research. You can do rich-picture research with small numbers, and with our first cohort in 2017-18 only six students went for this bioinformatics elective. What Seth and James presented to me, in terms of how they wanted to research this, was the start of an agenda: we're interested in learning about student engagement with Slack, whether it will work as a collaborative tool (because this is new), whether it can support peer-led project work, and what lessons we can learn for future course design, because this was the first time they were doing things in this blended way. We've all been through COVID and remote learning now, so this doesn't seem as revolutionary as it was then, when they were really taking the whole project work off campus for the majority of student time. That leads me on to this statement of the challenge we were facing, and I think for all of us, if we're not full-time researchers, this is the reality we face.
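As an aside, for a feel of what raw Slack engagement data looks like, here is a minimal sketch, purely illustrative and not from our project. It assumes the standard Slack workspace-export format, where each channel is exported as daily JSON files of message records with `type`, `user` and `ts` fields; the user IDs and messages below are invented:

```python
from collections import Counter
from datetime import datetime, timezone

def weekly_message_counts(messages):
    """Count messages per (user, ISO week) from Slack-export-style records.

    Each record is a dict with at least 'type', 'user' and 'ts' (a Unix
    timestamp string), as found in the daily JSON files of a standard
    Slack workspace export.
    """
    counts = Counter()
    for msg in messages:
        # Skip channel joins, pins and other non-message events
        if msg.get("type") != "message" or "user" not in msg:
            continue
        when = datetime.fromtimestamp(float(msg["ts"]), tz=timezone.utc)
        iso_year, iso_week, _ = when.isocalendar()
        counts[(msg["user"], f"{iso_year}-W{iso_week:02d}")] += 1
    return counts

# Hypothetical sample records mimicking one export file
sample = [
    {"type": "message", "user": "U_STUDENT1", "ts": "1540000000.000100",
     "text": "Sharing my assembly results for feedback"},
    {"type": "message", "user": "U_NEARPEER", "ts": "1540000500.000200",
     "text": "Looks right, but try filtering low-quality reads first"},
    {"type": "member_joined_channel", "user": "U_STAFF", "ts": "1540001000.0"},
]
print(weekly_message_counts(sample))
```

Counts like these are only a starting point, of course; as I'll come on to, we found we needed qualitative coding of message content, plus focus groups, to understand why students interacted the way they did.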
So there is, in my view, an idealized pathway for how research can be conducted. I was originally research-trained through a PhD. I did some outline reading, building up papers to identify the target research area I wanted to look at; then, once on the PhD track, a more detailed literature review to understand the state of the art and where research was at, and from there defined my niche and built research questions around that niche area. From there I thought about the experimental design, how I was going to research those key questions, and then identified and selected the right research methods. I put it all into a position paper which I defended in front of a faculty, then sought ethical approval, and away I went. That's the idealized pathway, but it just doesn't work for learning technologists most of the time, because we're not paid to do that; we're largely reacting to opportunities for collaboration where they arise. That's certainly the case for me. When I got into this cycle, the experimental design, the use of Slack and Google, was already predefined, the higher-level research questions were already being formed, and in terms of research methodology my colleagues had already designed some survey instruments, entry and exit surveys for their students, focusing on the confidence levels those students had with their use of the bioinformatics tools. So when I came in, it was really to take a step back, and as you can see from the quote below, which I won't read out, an extract from an email I sent to colleagues early in the project, I was trying to take a step
back to think about the big questions we were looking at, to hone and refine those research questions, and from there confirm the right research methods to do the job of understanding what we were looking for, and then, and only then, seek ethical approval and get on with the research. So it's a little cart-before-horse, not quite the conventional way you'd want to do a research project, but I think that reflects the reality of how things get done. From that we came up with the definition of the research questions, which at first sight may not seem to move us on a great deal. The first question focused on the blended design approach, which had not really been fleshed out in the original thinking: looking specifically at how peer-led learning works both face-to-face and online, and the complementary nature of the virtual time students spent within Slack. The second question was about Slack as a collaborative platform: can it do the job of supporting peer-led learning? By that I mean looking at the affordances of Slack as a tool for knowledge sharing in particular, and problem solving, which were the other kinds of behaviours we were interested in and wanted to see students engaging in during their research tasks. So that was the definition, and once those research questions were agreed and we were all happy with them, then and only then should you really be thinking about the research methods and how you're going to look at this. What we came up with was, I think, a transition from simply looking at surveys, which is the default behaviour of academics and quite a lot of people, because of the ubiquitous end-of-term module survey. There was a slight twist on this, as I said, in that we had
originally identified entry and exit surveys to measure the changes in confidence that students had with bioinformatics tools, but when we interrogated what the research project was really all about, we changed tack, focusing much more on the learning and the blended learning design. There are many ways you can approach measuring learning from an objective perspective. We could look purely at the activity logs that Slack generates, in terms of how and when students communicated within Slack and with whom; we could track that and even do network analysis around it. The activity logs are very interesting, but we were also interested in understanding student intentions: why they used the Slack environment in the way they did, who they contacted and why. Equally, there is a lot of behaviour that is not transparent to the researcher through statistics alone, taking place outside the formal environment we were working in. To capture all of that, we went for semi-structured focus groups, which was one of the things I did: being new to and completely independent of the course and the students, I could do that. The focus groups were arranged after the main project work had been completed by students, but before they'd received any assessment, because we wanted to avoid what's called the halo/horns effect: bias based on students' perceived performance in assessment. The focus group side of things was really important for getting a sense, through student eyes, of their take on the experience. But we also did something more with the activity logs. We wanted to get a sense of what was going on there in terms of the different layers of learning, and for that I had
some prior experience of doing that type of work, content analysis, from a previous paper on blended problem-based learning research which I did for a master's law programme. There I used Fox and MacKeogh's framework of content analysis, which involves some 16 different categories of cognitive thinking. It's a very useful framework, and I'd recommend looking at it if you're doing content-based analysis. I was very interested in using it again because it mapped very well to the stated objectives of the peer-led learning research we were interested in. Essentially what we were looking at here is self-directed research, and the types of behaviour could range from simply sharing resources, opinion-forming, making declarative statements about how students were progressing with their projects and sharing their results, all the way up to higher-order cognitive skills: articulating and explaining research findings, looking at what their colleagues were doing, critiquing and challenging the conclusions they were reaching. That's very much the higher-order end. So what we did, and I'll say a little more about this later, was go through all the contributions by students, staff and near peers (our postgraduate students). Each message was one unit of analysis, but a unit could contain multiple categories of behaviour, and we classified them against those 16 categories from the framework. There were some hard yards in doing that work; it's quite mind-numbing, and it would have been great to have had a wider research team to share it out, but it was very revealing in the end in providing an insight into the types of behaviour being exhibited. Once we'd done all of that, James, who's highly technically literate, developed a Python script which enabled us to log
all this information and then categorize those behaviours with a further layer of categorization. We used Rod Regres's methodology to group some of these behaviours together, to distinguish the purely collaborative behaviours, such as information sharing, sharing resources, acknowledging contributions and asking questions, which we classified together as a meta-category around collaboration, from what we saw as the higher-order cognitive behaviours, grouped under the banner of criticality. From the Fox and MacKeogh framework, those were: proposing actions based on developed ideas, which was often something our near peers did, taking ideas from students and suggesting further research actions based on what they proposed; supporting positions on issues, where students would weigh in and give confirmation to a particular research plan or action based on reasoning; and individuals re-evaluating their whole view of what they'd done based on the feedback they received. That was all part of the criticality grouping. As the slide shows, we then produced a spreadsheet with those categories, and that data helped us: before we embarked on the focus groups we already had an idea of what was going on, and through the focus groups we could probe it further with students, to get, from the students' own mouths, the explanation of why they did what they did and when. The other aspect of research is that when you're doing it in the conventional way, no doubt you've got research funding or a sponsor, a clear start point and end point, and probably predetermined outputs you've pledged to deliver. When you're offering an evaluation service as a learning technologist, that's not the case, but in some ways that's not necessarily a bad
thing: research can be quite dynamic and the end point can be flexible. So we started off with me brought in to work with the 2017-18 cohort, to run the focus group and do some of the data analysis, and some of the results from that weren't quite what we wanted. We found that students were still contacting academics rather than going through near peers, and the level of information sharing wasn't as good as we'd hoped for, but we learned an awful lot about the rationale for why that took place. So through a kind of action-research approach, unintended at the start but drawing on those lessons learned, the next time we ran this, for the 2018-19 cohort, we introduced the near peers and their role more clearly to students at the outset: they were the main go-to people for queries, they were going to facilitate the learning, and they were the hub through which students should go. We also gave students a specific steer to share their insights and questions within their group spaces in Slack, rather than doing what they'd sometimes done in 2017-18, direct-messaging the academics, where no one else could benefit from the questions being asked because only the academic could see them. With those corrections we re-ran the experiment for the 2018-19 cohort. I'll just show you some slides; I won't go into a huge amount of detail about the research findings, because you can read about those in the paper if you're interested. What we were doing in terms of the objective research, looking at the logs, was, as I mentioned before, tracking how many times students, staff and near peers interacted in the shared spaces and in the plenary space. We had two near peers managing the groups, so each had their own specific group of students they were working with, and
then there was a plenary channel within Slack where everyone could share ideas as well. We tracked that by looking at who was asking questions, sharing solutions, displaying data, that sort of thing, focusing on the peer interactions: that's the objective side of things. Here are some screenshots of how that was done. This first slide shows students sharing some of the output from their work and asking for approval, essentially "am I on the right track?", with the near peer providing immediate feedback and confirmation of what's going on. In the second slide, if you can read the text, the near peer is looking at the results and providing guidance and suggestions on how to tackle particular errors found in the data calculations. Through the tracking and coding of student, staff and near-peer responses, using the meta-categories we created, we could graph the pattern of interactions over the course of this particular project, and you can see that displayed here. I think this is for the 2018-19 cohort, so the revised experiment based on the lessons learned, and it seems to be going in the right direction. At the beginning of the project staff activity is highest, which is to be expected as they introduce the project, the ground rules and the protocol, and that drops off after about week 10, when we see students and near peers becoming much more active in collaborating in the middle to latter parts of the project. This next slide gives a little more detail about how we visualized the data, done, as I say, through the Python scripting and colour coding. It's not very accessible, I apologize for that, breaking all sorts of
accessibility rules here, but you can see how we presented visually the different categories of behaviour that were coded, such as students offering resources, offering information, replying and raising questions, which were very much in the ballpark of collaboration, and the different levels over the different weeks of the course, based on the different roles. Then on to criticality, which looked at the higher-order behaviours, and again the corrective approach we introduced for the 2018-19 cohort seems to have worked, in that you see this peak with near peers from week 12 onwards really leading the critical engagement with students, and staff very much dropping off, which is what we wanted. We wanted staff to deal with the bigger-picture summaries of students' work in the face-to-face sessions, but we wanted the online space, Slack, to be the domain where the near peers would lead the facilitation, and that seems to be borne out by our coding and the figures there. So we had all this wonderful data and those insights, and the immediate payback, if you like, is that the behaviours we were looking for in the 2018-19 cohort were actually realized: less traffic going directly to academics, and more critical engagement supported by the near peers. That's all good, but what more could we do with that data? Well, at the start we didn't have a clear dissemination plan, and again, in a conventional project you probably would have one, in terms of knowing and pledging to deliver a report or an output of some shape or form. This was very much a dynamic process where we started off very small, and however experienced you are with research, I think it's really helpful to treat dissemination as an iterative process, just as you iteratively test your ideas and the
research questions; you do the same, I think, with your dissemination. We started very small with our own institutional learning and teaching conference, a poster presentation simply for the first cohort, laying out the use of Slack and what we were doing, and out of the focus group, the survey and some other research we produced a case study, largely for internal consumption, for other academics within the Biology department to look at, as well as more broadly across the university, to think about what team-based environments like Slack can do to support student learning. That's what we did in summer 2018. Then, as we moved on with the project, transferred these methods over to the 2018-19 cohort and built up a bit more data, a colleague in Biology decided to take a paper to the Advance HE STEM conference, and with a more complete data set I went up to Edinburgh in September 2019 to ALT-C and presented a short paper. It was a presentation, but I actually drafted out a very short summary of our findings at that point, and those various steps were very helpful in refining the angle of the future paper we were going to write, honing in on the USP for the paper, and thinking about the target journal: what was that journal interested in, and how could our results and our approach align to it? Get that alignment right and you've got a better chance of getting published. So, going back to the original project timeline slide now, to round off with the actual writing up of the research. We're in May 2020 now; we've got the data, we've been to ALT-C, and the paper was
warmly received. There was a lot of supportive commentary in the session, and someone blogged about it afterwards, which was really encouraging; I would encourage people to do that whenever you attend ALT-C, to give that feedback and that support. What I've tended to do since with ALT-C, even for a short paper, is to actually write something up and offer it when giving the presentation, in the hope and expectation of getting some interest and critical commentary, because that's really helpful in testing out your ideas and getting on the pathway to a more formal publication. So, May 2020: there had been a bit of a hiatus after ALT-C, but in May I managed to find some time to do a skeleton outline, which was shared with James and Seth. We then had various meetings where we did some collaborative drafting on the paper, and then we had something we could put into Research in Learning Technology as our first submission. Why did we choose that particular journal? A number of reasons. First of all, ALT has made an explicit link between the conference and the journal for those interested in research, so it's one that's easily identifiable for our profession, with a clear learning technology mission. Equally, it's interested in innovations, evaluative studies and, specifically for what we were doing here, longitudinal studies as well. If there's a criticism of what we do in learning technology, it's that it tends to be very short-lived case studies; we produce results for one-offs rather than for a persistent innovation, so it's hard to have that longer-term picture of
how effective our interventions are. What we've done here, through action research, is gather data over a two-year period, so we thought that would play well with the journal too. All of those things are important, but equally the fact that it's an open journal, Creative Commons, with a decent impact factor, was attractive to us in our choice. With all of those factors in mind, that helped in the way we positioned the paper, and also the cover letter explaining why we wanted to submit this paper to the journal. These things are all important, and you shouldn't underestimate them when presenting your paper. So we submitted it, then had to wait a couple of months, and then we got feedback from the reviewers. Now, I can only speak from personal experience here, but I think I've only had one or two papers accepted first time around; I think that's very rare, and it's quite common to be asked for revisions. Although that's disappointing, because, like sculpting, you've gone through many iterations and much fine-tuning of the paper and it's a bit of a shock to get some searching criticisms, I've learned over the years to see this as an extended part of the writing process, because what you're getting is fresh perspectives to really sharpen up the paper and ensure it delivers well. That's what we got through the resubmission process. I'll just show you this slide on responding to the reviewers, which is how I did it: write a cover letter, obviously thanking the reviewers for the time they've taken when they've offered constructive but searching criticisms, and then, for each reviewer, there's
a table with the point-by-point criticisms they've raised, and then our response to each of those criticisms. One of the things we got back was that you can't make a causal link between your research design and all students doing well in terms of their academic output. And it's true: all students in the 2018-19 cohort got marks above and beyond what we would normally have expected for these projects, but the reviewers were absolutely right to say that's a claim we couldn't possibly make. We needed to be clear about the contribution we were making in the paper, so we couldn't make any case for cause and effect between the learning design and outcomes. What we could talk about, more clearly, was the engagement patterns we observed through the use of Slack, demonstrating the potential of collaborative platforms to support near-peer learning, as part of an exploratory research approach. We could also talk about some of the emergent variables which we had seen with the first cohort and then addressed in the guidance we provided students: clear communication of instructional roles, socialisation of students, and of course training for our near peers around their responsiveness to the needs of students. From all of that we could make a clear contribution. There's always a "so what" with a paper: why should you publish this, and what is your contribution to the body of knowledge? Ours was a framework for near-peer learning conducted virtually, but one that we felt could be applied to other team-based environments, so Microsoft Teams, not just Slack, and to other disciplinary contexts as well. So that's how we defined our USP, and the reviews were
very helpful in helping us to sharpen that up, as well as to make a few connections, within our own literature survey, to some other learning technology papers which we hadn't seen. So that's it, Jim, more or less. I'll just round off with some learning points; these are the things I take out of this three-year journey. One is an obvious one: be open to these collaboration opportunities, which you can generate in a variety of ways, as I said, through the work you do with departments. We run evaluations of learning technology projects, we run training around evaluation approaches, and we build that into our Postgraduate Certificate of Academic Practice training. That's a way of getting the message out to academics that if you're interested in doing applied research, we can help. The second point: be flexible and adaptable in your evaluation approaches, but that takes two to tango as well, and I had fantastic collaborators in Sat and James Chong, who were open to changing their thinking, from the use of surveys and the confidence approach to a much broader take on blended learning. And then patience. I think the key thing to take out of today, from what we did, is that nothing's predetermined, and the beauty of it is that there's nobody putting a gun to your head saying you've got to publish. "Publish or perish" is the pressure that professional researchers have to live under, and that's one of the reasons I'm not doing that job, because it does put you under a lot of strain. The beauty of this is that we could test out ideas along the way with different outputs, get feedback, and then decide when we were ready to go for a formal publication. In some cases you'll look at the data, and
we certainly did after the 2017-18 cohort, and we thought there isn't enough here to do a publication; there isn't a compelling message. It was really only after we'd done the corrective work and refined our focus that we could see we had something. Then, don't be discouraged by criticism. I'm sure you're all probably involved in reviewing articles; I do that, and I think it's quite right and proper that we all muck in in that respect, so I've seen the review process from both sides. If it's done in a constructive way, however challenging, it's valuable, and those comments are really valuable for refining your article. The final point, just to finish off: the journey doesn't finish once you get that publication. It's wonderful to tweet it out afterwards, but who's going to read it? You've got to actually point people to the article. Obviously I've got my agent here, a secret agent, Jim, who does some of my publicity work for me, but no, you've got to summarise and boil down the key messages, the takeaways. So what I did, and what I do with my research work, is blog about it. I'm on a WordPress site, and I blogged about it, for internal purposes, to draw people in from within the university, but also externally as well. That's important; don't forget to do it. And of course with Research in Learning Technology you get wonderful metrics to see who's tweeting about and looking at your work. Just to round off, it would be remiss of me not to acknowledge my collaborators, James and Sat Chong: they were the disciplinary domain experts, they kicked this all off, and they played a big part in the writing of this paper. Also our near peers, Annabelle and Kimberly, who played such a key part in it, and the
undergraduate students. And my final slide, and then I will finish: these are some of the references, including the case study which we produced after the first cohort, the Fox and McKeough reference, the Rodriguez one, and then the blended PBL one, which explains a little more about content analysis and how I did it. And that's that. Brilliant, thank you Richard, thank you so much. I've got a whole list of questions.