We're really excited to have everybody here for our sixth Gibbs seminar for this semester. A reminder to everybody that next week and the following week we have a two-week teaching break, so our next Gibbs seminar will actually be in a couple of weeks' time. I'm going to hand over to Michaela to do housekeeping, and then I'll introduce our speaker for today. Hi, everyone. Thank you for joining the Cecil Gibbs psychology seminar series. Just a few housekeeping rules. A reminder that this seminar is being recorded and will be available on the psychology events page and YouTube; you'll also be sent the recording via email. A quick note there: we have been having some issues with our YouTube channel, so as soon as that's available, we'll send it out. Upon entry to the webinar you've all been muted, and we ask that you stay muted for the duration of the seminar. If you have any questions, you can write them in the question box below at any time, and we'll have question time at the end. Thank you, and I'll hand back to Christian. So we are really excited about our presenter today. Dr. Conal Monaghan joined us as a new member of staff here in RSP in late 2019, going into 2020, and this is our first clinical health theme speaker for this semester. We're really thrilled that Conal was willing to share his exciting research with us today. Conal is a lecturer here in RSP, primarily teaching in the Master of Professional Psychology program, so I'm very lucky to call him a close colleague and compatriot in our day-to-day work. He has a range of research interests: really strong interests in personality and personality disorders, particularly Machiavellianism as a construct, but also in statistics, psychometrics, and clinician burnout. In addition to that, he informs me that he is a saltwater fish tank and cycling enthusiast.
And we're really happy to hear from him and his research today about how to make research more Shiny. So I'm going to hand over to Conal, and then we'll have time for questions at the end. Fantastic. Thank you so much, Christian. Give me one second to share my slides. It's really fantastic to get the opportunity to talk to everyone today about some research that we've been doing on making research more Shiny, in terms of increasing the quality of participant engagement. This is part of our new lab here at the ANU, the personality, individual differences and assessment lab, but we're still working on a title, so if you have any suggestions, we'd absolutely love to hear them. Although today's presentation is part of the health and wellbeing stream, we're going to try talking about something slightly different today; in the words of John Cleese, something completely different. But hopefully the concepts we discuss today will spark some new ideas across different areas of the RSP and open your mind to some new possibilities. So where will we go today? We'll start by recapping some of the difficulties that we all face in collecting face-to-face or online survey research, and talk about the key role of participant investment in getting good quality data. Then I'd like to talk about one of the solutions that myself, Boris here at the ANU, and some colleagues came up with. To do this, we made our research more Shiny, using an open-source platform developed by RStudio for making interactive web apps. Finally, I'd like to talk about how you can make your research Shiny too; I think the only limitation of this platform is your own creativity. So for a long period of time, we used to do research like this: paper and pencils to do our surveys, and needing good marketing skills to approach people and ask them to complete our surveys.
I guess my personal strategy was often to target the libraries, to try and find students who, around exam time, were really keen to find a desperate excuse to procrastinate. They were really my target market. Luckily, things have changed now, and the majority of our research is done like this: we collect surveys online, which is really fantastic. And no wonder; online survey research is really dominating psychology. It's comfortable, it's easy, and we can reach a very large, broad range of audiences. These days it's so rare for people to be more than a few meters away from a phone or a computer. The only reason these days that you might still do paper and pencil is for something like a cognitive task or an ECG task, or anything that requires the person to respond in person. But let's look at survey data. Top journals in many fields of psychology now publish the majority of their manuscripts based on survey data. In fact, as you can see there, depending on the journal, upwards of 90% of manuscripts submitted are based on self-report surveys. If we choose to administer the surveys ourselves, the cost is that it almost requires approaching each person we want to do the survey face to face, a direct-contact methodology. Sometimes we can use fantastic groups like SampleSize on Reddit or different Facebook groups. However, even using these online platforms often still requires reaching out with a similar marketing spiel to the one we used when approaching students in the library, almost selling our research to each person individually. So it's not really efficient. No wonder, then, the dominance of online research platforms. Luckily, we can now get good quality responses from an amazing array of online platforms, many of which can get entire samples in under a day.
And you can see there that over 1,000 studies in 2015 were published based on MTurk data alone. But unfortunately here we're still often faced with uninterested and what they call professional respondents: people who are completing the survey to make an income, who don't really care much about their individual responses or the interests of the survey. You can see the first logo there is Qualtrics; there's also WJX in China, and Prolific. A lot of these platforms are very reasonably priced and can get a lot of data very quickly. But for those people who have used the larger, more professional platforms, the Online Research Unit is a good example of this, or official Qualtrics panel samples, well, if you've had that experience, you might have a similar reaction to this. The costs can be somewhat eye-watering and prohibitive. A short 15-to-20-minute survey can cost $4,000 for 300 participants or so, let alone if you're using a large-sample technique: with 1,000 participants, you can get upwards of $10,000. So it really does become a huge burden in terms of costs. As I said before, we're often faced with professional respondents, or people, even students, who don't care too much about their individual responses or the survey at large. But this is quite different: we are definitely blessed to have many people who volunteer their time for a cause that they see as important or personal to them, who can see that the research is really worth doing. We'd like to have those people involved in our research. And there are many different factors that we've identified that can really influence the quality of responses. Obviously, the length of the survey is a big factor: if somebody is completing a survey with 5,000 questions in it, the chances of them accurately responding to question 4,300 are slim; they're overwhelmed by the length of the survey.
So short surveys definitely do much better. There's also the type of task and what participants are asked to do, whether it's a cognitive task, a survey, watching something, or rating something; and the size of the remuneration or prize they're going to receive, with some more recent research suggesting that remuneration can be quite important up to a certain point, beyond which large prizes actually don't add that much to the quality of data. There's also the oversight that's there, individual investment and interest in the task, and impact on knowledge. All these things can have an impact on the quality of responses, but also on people's motivation to do the research. Even for people who are quite interested in our research, often we might only give them a thank-you, or our contact details, or, worst-case scenario, "please click this link if distressed", which can be a big letdown for people who are genuinely interested in the research, keen to participate, and want to know what's going on. More diligent researchers might provide an information sheet or links to other papers; they might upload their paper to a repository or an online resource to access later. A lot of us, and lots of different researchers, do try to give good quality feedback. And this is especially important in clinical, organisational, or some scholastic settings, where the research might be part of a broader study or individual treatment-efficacy programs, and where much more personalised feedback can be used. But this poses the question of the day that I'm hoping we can answer: how can we make participants actually want to complete our study, and care about how they respond, without incurring large costs and without personally having to debrief each participant in person? So how can we do that? I'd like to welcome everyone to All Things Shiny. I'd like to introduce you to the R Shiny platform.
And this is a lovely Shiny app. If you look to the left side of the window there, you can see "Hello Shiny" and a slider input. In real time, the user can change the slider input and, based on some coding in the back end, the histogram there to the right will change instantly. In this instance, the person can choose how many bins they want the histogram to have; this is sent back to R, which changes the histogram and sends it back to the user. So Shiny is an R package that makes it easy to build interactive web apps straight from within R, using RStudio. You can host standalone apps or embed them in a web page, and they can be indexed or non-indexed: available for everyone and the public to search and find, or, if you're using them to explore some data or engage with something within your lab or research group, non-indexed, so people can't find them in Google. You can embed them in R Markdown documents, for example in PDF or HTML, or, if you want to get more complicated and more fancy, you can embed them within what they call a dashboard, on larger servers and so on. If you want to extend these further, you can make them quite complicated: you can include CSS themes, HTML widgets, and JavaScript actions. Really, you're only limited by what kind of input the user puts into the program and what kind of outputs you want the user to receive. So we had this idea, and we thought: is there a way we could adapt this to allow participants to get custom feedback on their survey results in real time? So welcome to our world of Shiny-based surveys. To achieve this, we used that simple idea and that open-source platform, R, to build a customised website that provides feedback. This is our website on two-dimensional Machiavellianism.
And for those people who have had to listen to my many talks on Machiavellianism, it's the willingness to exploit other people for the greater good, and the view that other people are out to exploit you as well; it's a dog-eat-dog world, so it's better to get them first. You can see here we have some information about Machiavellianism. I'd really like to thank Boris Bizumic here at the ANU, but also Todd Williams at Grand Valley State University and Martin Sellbom, who's now over at the University of Otago, who really helped with this project. On the left-hand side, you can see different tabs that we put within our welcome page: information about Machiavellianism, more information, and a GitHub tab, so you can download all the code for the website and adapt it for your own website. We also put up some other resources there, different questions and guides on how to do different kinds of research, for people to access. But the most important thing for today's talk is the "test yourself" tab. When participants click the "test yourself" tab, they're presented with lots of inputs, Likert scales in this case, which then communicate with R, which calculates their results and gives them the feedback. And this is what it looks like. Here we have our awesome, keen users. They access our Shiny website and fill out the survey. That data then goes to R, and this is where things split. Using the googlesheets4 package, R stores that data in a Google Sheet on our Google Drive, so we can keep our data for later use in research or whatever we want to use it for. But it also sends the output back to the user. And this is what the feedback looks like. Here you can see a few simple normal curves, which originally compared each participant's scores to a normative sample; but now we've told R to just extract the website data.
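As a rough sketch of that storage step (not our exact code), assuming the googlesheets4 package is installed and already authorised via gs4_auth(), and with a placeholder sheet URL and placeholder column layout:

```r
# Sketch of the googlesheets4 flow described above; the URL and the
# shape of "answers" are placeholders, not the app's real values.
library(googlesheets4)

sheet_url <- "https://docs.google.com/spreadsheets/d/your-sheet-id"

# Append one participant's responses to the Google Sheet as a new row
save_response <- function(answers) {
  sheet_append(sheet_url, as.data.frame(answers))
}

# Read the whole response pool back, so each new participant is
# compared against everyone who has completed the survey so far
load_norms <- function() {
  read_sheet(sheet_url)
}
```

In the app, save_response() would be called when the participant submits, and load_norms() would supply the growing comparison sample for the feedback plots.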
So as new people complete the survey, that data pool gets bigger, and each new person gets compared to it. This is using the BBC News theme as an idea of how to display information, and you can see here that they get three different pieces of feedback on how their scores relate to the overall sample. So we might ask: is this actually fantastic? Are people interested in this? Do people want to learn about themselves? Are they keen to get feedback from this kind of psychological research? As of yesterday, 9.9 thousand unique users had used the website, which resulted in 12,000 different sessions, so some people are completing the survey more than once. On the bottom here, you can see usage over time, from the beginning of the year up until now, and you can note there are some flat periods, but also two big spikes in the data. This is where ANU Media was a fantastic asset to have, and Rachel Curtis there; you should absolutely reach out to her if you have any need to advertise your research or want to do any media work. ANU Media did the first piece there, "How Machiavellian are you?", which was released in April. Then later, in late June or early July, Sana Qadar on All in the Mind on the ABC did another bit of coverage as well. Each of those produced the two big spikes in respondents. It does look like it has petered out in comparison; however, we're still getting maybe 150 to 200 respondents every two weeks or so. So there are still lots of people out there who are very interested. But then we might ask: who is interested? Where are these people? Who are these people? This is our data extracted from Google Analytics. You can see that, obviously, given the ABC's and ANU Media's coverage, the majority of people are coming from Australia, but we're still getting people from North America, South America, and really all over the world, with a few exceptions here and there.
So it looks like there is a big appetite out there for people to learn about themselves and to be more engaged in research if they are provided with some kind of feedback. So who are these respondents? Roughly 55% of these people have been male, which is fantastic for a psychological sample, given that, at least in my experience doing surveys, especially with undergrads, samples can have a very high majority of females, so it's great to see a more even gender balance. It's no surprise that the people completing this, given it was advertised through the ABC and ANU Media here in the ACT, economically lean towards a larger government, based on some of the questions we asked, and are also socially progressive, being more willing to value that people can think and do as they wish. In terms of the age of our respondents, the main peak is around the late 20s or around 30; these are the people who are interested in this kind of research. But one of the really nice things is that we still get quite a lot of respondents in the 50s, 60s, 70s, and even 80s age range. You may also be asking how Machiavellian these respondents are. We talked before about Machiavellianism as this tendency to rationalise exploiting other people for the greater good. Are the people who want to find out about themselves more Machiavellian or less Machiavellian; is there something unique about who they are, or who they believe they are? In fact, what we find is a nice normal distribution. The distribution of people taking the survey looks like it reflects the general population, with most people in the middle of the latent scale, giving us a really nice spread of people from all walks of life. But the next question is: how good is the data? Is it good quality? We're not paying people for it in any way.
It's completely based on their own initiative and their own interest. Well, we can compare our findings to the broader public or to previous research that we've done. If you look at the internal consistencies of the scales that people are completing on the website, in previous research we've found that they range in the .75 to .85 band, with views a bit lower, because both subscales are only six items each, and tactics a bit stronger. In the table there, you can see the results from our data: they fell in the middle to upper end of that range, with good estimates of internal consistency. We have alpha and omega, depending on your statistical leanings, and also fairly strong average inter-item correlations. So it looks like these scales are working how they're intended to work, and they're reliable. But let's look at a stricter test of the data coming in: confirmatory factor analysis. For those people who are interested in CFA, you can see that for the website data, on the top right there, the fit indices are all quite strong, with CFI above 0.95, SRMR at 0.07, and RMSEA below 0.06. In comparison, if we put this against our previously collected online data from MTurk and Prolific, the responses on our current website actually look stronger. And similarly, when we administered the same survey to a matched university sample at the University of Otago in New Zealand, the fit indices from our website were much stronger. We have no data on why this might be, but our theory is that when people are genuinely interested in learning about themselves, rather than in the payout, they're more likely to take their time and complete the survey with a high quality of responses. To the future and beyond: so where to from here?
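As an aside for anyone wanting to run these checks on their own data: the reliability and CFA analyses described here can be done with standard packages. A minimal sketch, assuming psych and lavaan are installed; the item names (tac1-tac6, view1-view6) and the simulated data frame are purely illustrative stand-ins, not the study's data:

```r
library(psych)    # for alpha()
library(lavaan)   # for cfa() and fitMeasures()

# Simulated stand-in for the website responses; item names are hypothetical
set.seed(1)
dat <- as.data.frame(matrix(sample(1:5, 200 * 12, replace = TRUE), ncol = 12))
names(dat) <- c(paste0("tac", 1:6), paste0("view", 1:6))

# Internal consistency of one six-item subscale (omega() works similarly)
alpha(dat[, paste0("tac", 1:6)])$total$raw_alpha

# Two-factor CFA: tactics and views as correlated latent factors
model <- '
  tactics =~ tac1 + tac2 + tac3 + tac4 + tac5 + tac6
  views   =~ view1 + view2 + view3 + view4 + view5 + view6
'
fit <- cfa(model, data = dat)
fitMeasures(fit, c("cfi", "srmr", "rmsea"))
```

On real scale data you would read off CFI, SRMR, and RMSEA from the last line and compare them against the usual cut-offs mentioned in the talk (CFI above .95, SRMR and RMSEA below .08 and .06).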
Well, given the success of this early platform, we're now going to try something a bit bigger: much more of a multi-scale survey. Previously it was pretty much just the Machiavellianism scale, but now we're going to try a range of different questions in a much longer survey to see how it goes. On the left here you can see the feedback for the Big Five personality traits, the ubiquitous model for understanding normal human individual differences and variation; participants get feedback on all five dimensions. On the right-hand side there is the Need to Belong scale: they can see whether their inclination or desire to be accepted by other people is higher or lower than the average we're comparing them to. The other benefit available through this platform is that the feedback doesn't have to be fleeting. We can have people download their own printout, as you can see here. This is in HTML, so they can go through and select their feedback for each of the scales they completed, along with information and further readings on each, remembering that this is completely customised to however they scored on the survey, and they can quite easily download it. We can also have it emailed out to them if they want, all through the Shiny platform. Now, I know that lots of us aren't engaged in survey-based research, and you may be thinking: how can I make my research Shiny as well? Here's a platform that we built to monitor psychologists' mental health and wellbeing. Clinicians can fill this out, enter their ID, and over time graph their wellbeing, their burnout, and also their secondary or vicarious trauma. So they can see whether they're trending up or trending down, and put self-care in place in advance.
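The ID-based tracking just described can be sketched in base R alone: filter the stored responses down to one clinician's ID and plot their scores over time. The column names and values here are assumptions for illustration, not the app's real schema:

```r
# Sketch of the wellbeing-tracking logic; column names are hypothetical.
# In the real app, "responses" would come from the Google Sheet backend.
responses <- data.frame(
  id      = c("c01", "c01", "c01", "c02"),
  week    = c(1, 2, 3, 1),
  burnout = c(2.1, 2.8, 3.4, 1.5)
)

# Everything this clinician has entered so far
mine <- responses[responses$id == "c01", ]

# Trend over time: a rising line suggests putting self-care in place early
plot(mine$week, mine$burnout, type = "b",
     xlab = "Week", ylab = "Burnout score", main = "Burnout over time")
```

Inside a Shiny app, the ID would come from a textInput and the plot would sit inside renderPlot, but the filtering-and-plotting core is the same.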
Once again, this is exactly the same platform and exactly the same techniques as the survey, with information being stored in a Google Sheet; when people enter their ID, it extracts all the previous times they've completed the survey and puts them into the Shiny platform. We've also started using this platform for teaching statistics, where students can play around with different distributions, different techniques, or different analyses, and see the results change in real time. We can also engage, as you can see, in other very essential monitoring tasks, for a bit of fun. I put this together last weekend: I attempted my first meat smoke, and I wanted to measure the temperature of the meat over time. You can see there the shoulder piece of meat on one thermometer, the neck piece of meat, and the chamber temperature. There's a dip around 6 or 7am, where it got really cold and I had fallen asleep; but at the same time, I learned lots about smoking meat. The reason I bring this up is that it shows the flexibility of what we can use these platforms for. But I know what a lot of you are thinking: I've heard it's difficult, it sounds complicated, there are words I don't know, do I really have to learn to code in R? Let's check this out. Here's our simple app from before. On the left-hand side we had the user input, which in this case is a slider scale, and on the right-hand side we have the histogram, which is the output; as the user changes the number of bins, the histogram changes based on exactly what they input. So here's all the code you need to do that. On the left-hand side, you can see the server. The server is the R side of things, and it's told to render a plot; to render that plot, it's told to draw a histogram and to make the number of bins whatever the user inputs.
The right-hand side dictates the user interface. You can see the title there, "Hello RSP". Halfway down, you can see that it's told to put in a slider input, and below that, in the main panel, the plot output: that's our histogram. So that is all the code the app requires. Fortunately, that isn't too much code, and I think it's more than palatable for everyone who's interested in this kind of research, especially breaking it apart and just changing the code. However, some assembly is required. There are some basics you need in order to change the code, but most of it is quite understandable, or you can work it out simply from the apps that are there; you can simply change the code to your needs. There are also fantastic Shiny tutorials and help communities. RStudio will host five apps completely for free, so you can have five of those online at any point in time, and that limit only applies to live apps: as soon as you no longer need one of these surveys or studies for the moment, you can put it to sleep, open a different one, and then wake it back up at a later date if needed. Deployment all happens straight from RStudio with a simple click, and it's good to go. If you are interested in doing surveys and incorporating this kind of feedback into your research, I have uploaded the skeleton code, the outline of all the code required for the survey website, to GitHub (you can see the logo there) and also to the Open Science Framework. The full skeleton code is pretty much plug-and-play, good to go, and please let me know if you need any help with implementing the code in your research. So there are obviously pros and cons of commercial platforms versus Shiny. Commercial platforms are very easy to use: you just put in the questions that you want.
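For reference, the minimal app walked through a moment ago looks roughly like this in full; this is a sketch based on the standard RStudio Shiny template, with "Hello RSP" swapped in as the title, so the exact code on the slide may differ slightly:

```r
library(shiny)

# User interface: a title, a slider input, and a plot output
ui <- fluidPage(
  titlePanel("Hello RSP"),
  sidebarLayout(
    sidebarPanel(
      sliderInput("bins", "Number of bins:", min = 1, max = 50, value = 30)
    ),
    mainPanel(plotOutput("distPlot"))
  )
)

# Server: the R side, which re-renders the histogram whenever
# the user moves the slider
server <- function(input, output) {
  output$distPlot <- renderPlot({
    x    <- faithful$waiting
    bins <- seq(min(x), max(x), length.out = input$bins + 1)
    hist(x, breaks = bins, col = "steelblue", border = "white")
  })
}

app <- shinyApp(ui = ui, server = server)
# runApp(app)  # launch locally; deploy to shinyapps.io via rsconnect
```

Everything else in the survey website is an elaboration of this pattern: more inputs, and feedback text and plots in place of the single histogram.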
Participants come extremely quickly; as I said before, you can often get your sample even in a day. It's also very familiar to us all. The cons are that it's often quite costly, you're limited to the platform, you can't integrate it within larger websites or give participants more information about the research (you can only administer the survey), and there's obviously no feedback, so you're relying on word of mouth, prizes, or remuneration in some way to get people to do your survey. On the Shiny side of things, it's amazingly customisable. Anything you can think of doing, with different input styles, different output styles, whatever you want to do, is doable; the only limit is your imagination. We can have feedback and intrinsic incentives, so people complete the survey because they actually want to know their scores, not because they want to get paid. We can also integrate it, as with the Machiavellianism website, within a broader scheme of things: more information about your research, more information about your lab, other links, and other things they can do. The con is obviously going to be a steeper learning curve. It's going to take a bit of time to get used to the code, but once you do, and once you have your own skeleton code similar to what we posted online, you can really take it and run. You have to recruit your own participants, but you can even ask participants within other frameworks, like MTurk, Qualtrics, or Prolific, to do your Shiny survey as opposed to a Qualtrics or Google survey page. And it can also be scary; it's always hard learning new things. But luckily there are fantastic materials available on the Shiny website.
As I said before, we uploaded the skeleton code for the website, which is completely open source and freely available on the Open Science Framework and GitHub. If you want to give it a go yourself, you can always just try the scale at MachiavellianismScale.com, and I'm always here and always willing to help with your research, or to help you shine like you already do. Thank you so much for listening today, and I'd love to hear any questions that you might have. I might hand it back to Michaela, if that's OK, or to Christian; sorry, or Michaela. So we've just got a question from Julia: "Fantastic, Conal, thank you. Could this be used clinically? For example, for routine outcome monitoring and feedback?" Thanks, Julia. Yep, it absolutely can be used for anything you can think of. You saw before our clinician wellbeing platform, which we're using to monitor exactly that; at least, that website we've built is open source, and it's not part of any survey at this point, it's really out there for clinicians to use. But in terms of clinical research, you could absolutely use this to get people to monitor their own mental health, or to get feedback on how they're going throughout a treatment. The other idea would be something to share with the clinician: people could fill it out, and then the client and the clinician could discuss the results, or it could give the clinician information about the therapeutic alliance, or about outcome measures, or potentially even things that the client or participant doesn't want to discuss in sessions. The only difficulty, at least with the way this is set up at the moment, is that everything's done through Google Sheets, so you might have to think about the level of privacy required.
Google Sheets may be fine, or you could anonymise the responses and participants in some way; you can absolutely do that. But you might have to think about an SQL database or something similar if you want something much more secure. There are SQL back ends already built, and the packages are already designed for this, if that's the way you want to go. Thank you, Conal. We have another question as well. Michaela, we might go to Michael; we can unmute Michael and he might be able to ask his question. I believe that's OK, Michael. Yeah, so hi. Thanks for that, Conal, really great. It's exactly what I want to do for our prejudice census that we've been running since the beginning of the year: turn it into a version two, so people can get feedback. But we use a lot of open-ended questions, where we ask people to type in their experiences. Can the system handle that? The question for me, in understanding that, would be: what kind of feedback would you like to give? Good question, fair enough. We do have quantitative scale data, so we'd probably give feedback on the scale data; we wouldn't do content coding through the system, because I don't have any idea how to do that. But the major focus is actually people's written subjective texts that they're giving us, so I don't want to get rid of that. No, no, absolutely. I guess I'm trying to give a very broad overview today, but within reason, if you can think it, you can do it. The way this platform works is that you can select any kind of input you want: there's text input, there's video input, anything you can think of; I think at this stage there are 20 or 30 different kinds of user inputs you can use. What happens is that input gets sent to your server end, so to R, and then you can do whatever you want with that data.
So you can send some of it back as feedback and store the rest, or send it all back, or store it all. For that new study that we're about to launch in India, what we're doing is actually getting R to calculate all of our computed variables, get rid of all the poor responses, and then only send us the clean data, or even send us a report itself on everything we're doing; you can absolutely do that. In terms of storing the text responses, you can just store them in the Google Sheet but only send back whatever you want to send back. And if in the future there's an R package that can code qualitative responses, then you could even get the R server to run the qualitative analysis on the data, if that can happen on an R platform, and send that back as well. So you can do whatever you want. You can even have two platforms: one that sends the feedback to the respondents, and another that you log into that sends you a different kind of feedback. For example, with this data, we had an admin login, a little admin tab where you could download the data with a click, or where I told it to show different plots and statistics, so I could log in there whenever I wanted and just see that. Great. So in the feedback, I could say "you wrote this" and just present it back, and for our scale data, "here's the distribution of previous respondents and this is where you fall"; I could do both of those? Yeah. So let's look at this. This is the downloadable report, and you can have it on the website. You can see our text here, and that text can be just raw text, or you can insert values.
So without even hard-coding it in the text, you could say 'you scored 20' — inserting the user's data on that scale — or 'when we asked you about this, this is what you said', pasting their text response in. And it works exactly the same way for this downloadable report. Everything you see here is what I put in myself: I wrote all this text, and then I just said, for example, here, put a histogram and plot what they got. And at the bottom here: you got a score of 4; this puts you higher than 66.65% of your peers. You could also have confidence intervals or whatever you wanted. On the website we ask whether people want to be compared to men, to women, or to everyone, so here I've told it to print 'these results are compared to everyone, just as you requested' — and it would say women or men as requested as well. Right, OK, that's fantastic. I just have a follow-up, kind of following on from Julia's question as well: if we use Google Sheets, does Google own the data? That's a great question, and I'm actually not sure on that front — it's something we'd like to know. These studies have all passed through ethics. It would be possible to do a workaround where all you have is an Excel sheet in the root folder that gets updated, so the data doesn't actually leave or go to Google. The only difficulty there, and the reason I didn't go with that approach for this platform, is that you may run into trouble if two people submit at the same time — the benefit of Google Sheets is that it allows multiple users at the same time, where an Excel sheet wouldn't. But I'm sure there's a way we could do that as well. The alternative is a little bit more difficult, but maybe I could help you, or Jamie could: it's just to have an SQL server.
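A sketch of that database alternative — SQLite locks the file during each write, so two simultaneous respondents won't collide the way they might with a plain spreadsheet (the table and column names here are assumptions):

```r
# Append one respondent's answers to a local SQLite database instead of a
# Google Sheet, so the data never leaves your own server.
library(DBI)
library(RSQLite)

save_response <- function(response, db_path = "responses.sqlite") {
  con <- dbConnect(RSQLite::SQLite(), db_path)
  on.exit(dbDisconnect(con))  # always close the connection, even on error
  dbWriteTable(con, "responses", response, append = TRUE)
}

# e.g. save_response(data.frame(id = "p001", q1 = 4, text = "free response"))
```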
I'm sure the ANU will actually have a database, and you could try and link that in at the back end. Right. Great, thanks. Fantastic — I'll let someone else ask a question. Thank you, great presentation. We have a question from Cassidy, and we also have a raised hand from Erin. So maybe Cassidy, if you have a microphone, we could go to you first and then to Erin after that. Hi, Connell — I hope you can hear me. Great presentation. I was just wondering, I guess from an ethical standpoint, whether there are issues with giving feedback, particularly for clinical traits or clinical disorders — things like eating disorders, depression, anxiety, those sorts of things — and how you handle that, I guess. Great, thanks — that's a lovely question, Cassidy, and it's definitely something that requires a lot of thought. The downside here is that, whereas in a clinical setting the benefit is that you can actually talk and work through the feedback with your participants, in this case you can't: they're almost left alone, to their own devices, with the feedback. So you do have to be very, very careful with the feedback that you give. On the website and in the feedback form, we really stress that it's not clinical data, that there are fairly large confidence intervals, and that it's really there just to learn a little bit about yourself — and we're also really careful with the way we word things. As you can see on the screen at the moment, this feedback is still not finished, because we really want to be careful, especially with these last two here — the Experiences in Close Relationships scale and the Need to Belong scale. We're not quite sure how we can give this feedback in a responsible way, or whether we should give this kind of feedback at all. So these are really good questions to think about.
But definitely, when wording things, be really careful to emphasise that it's not clinical diagnostic feedback, and that it's really there just to learn a little bit about yourself. Yeah, OK, thanks, Connell. Thanks for that. OK, Erin, you should be able to unmute now. I think Erin came in and then disappeared — maybe we'll come back to Erin. Ah, and she's back in. Wonderful, great to see you, Erin. OK, can you hear me OK, Connell? Perfect. OK, beautiful. Hey, so my question was really the same as Cassidy's, but the work you're doing is really fascinating — obviously opening lots of interesting doors in terms of the way we communicate with the broader community about the work we do, so from that perspective, really, really cool stuff. The question that came to my mind was very similar to what Cassidy said, but even outside the clinical context, right? We might be doing research that is at its early stages, where we don't yet fully understand how reliable our measures are — obviously I'm talking outside your area, for example in an experimental setting or in cognitive psychology. I know you had a slide about memory, and I'm thinking about my own research on giving feedback to people about their own experiences and how that can actually change their memories. So I'm curious about where the boundary condition is for when we should be sharing individualised feedback. What do you think? Have you pondered this? It sounds like you have in the clinical space — any other thoughts when I push you over to the experimental side? So I guess, if I heard you right, you're wondering what the cost of giving feedback would be if people changed their behaviour afterwards? Yeah, I think there is some interesting potential there, right?
So I'm thinking about some of the false-memory work that looks at giving people feedback or other information and how that changes their memories and behaviours. You can think about that in a really positive way — you could produce positive changes — but it could work in a negative way too. So yeah, I'm just curious. It's a similar sort of question to the one Cassidy asked you, and maybe your answer is the same. Yeah, well, it's interesting — when you say that, what comes to my mind is horoscopes and astrology: the kind of thing where you read your horoscope in the morning with your email, and then you go out and think, fantastic, today's going to be a good day, so you run off and do all these fantastic things and the day is fantastic — not because of the horoscope, but because you made it fantastic. That's actually a fantastic question about the feedback, and it's something we haven't thought about in too much detail going forward. What we try to do when we express the feedback, at least on the website, is give confidence intervals and those kinds of things, and really reiterate that these things do change and that this isn't hard clinical feedback. But it does make me wonder whether that would be something interesting to look at in follow-up research: you could give people unique identifiers and say, we gave you this feedback — has anything changed? On the ethics side of things, that might actually be really important research, especially if we start to do this more commonly. If we're giving this out, we really do need to know how people are using the information coming back. Yeah, that's really interesting — even thinking about asking people at follow-up, what do you think your feedback meant? What do these things mean? Yeah, very cool stuff.
I'll let someone else have a go. Even a follow-up question around 'do you remember your feedback?' would be awesome as well — and it would be worth checking whether people even look at the report. Yeah. OK, thanks, Erin. We have time for other questions if people want to type them in or raise their hand. Hi, Connell. One more question I had was around the cost of getting people to participate. Would I be correct in saying that it's free for the researcher if they use a platform like this? They can literally advertise it and market it as finding out more about yourself, and that way you don't use avenues like Facebook or MTurk where you pay — not the participants, but the platforms. Thanks, Cassie, that's a great question. One of the real powers of R is that it is open source and completely free, and that's always the way it's going to be. A while ago, that's where R diverged from other languages: the R community wanted to make everything completely free and open source, and that's the way it remains today. RStudio, the more corporate side of R, makes it free to the public, but if you want the more high-end options or individual management — say for a large business, which we wouldn't need here — then you can pay. What that means is that for us, and whenever you want to use it, it's all completely free: the Shiny package and RStudio are completely free to use, and shinyapps.io will host five websites of yours completely free at any point in time. What that does mean is that you can put apps to sleep: say you ran five studies and you want to run two more but you're not using the old ones any more, you can delete the old ones or put them to sleep on the server and then open the new ones. Within R and RStudio, deploying is really just one click.
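The one-click deploy also has a console equivalent in the rsconnect package (the account name, token and secret below are placeholders you'd copy from your own shinyapps.io dashboard):

```r
# Link your shinyapps.io account once, then push the app folder.
library(rsconnect)

rsconnect::setAccountInfo(
  name   = "your-account",  # placeholder
  token  = "TOKEN",         # placeholder
  secret = "SECRET"         # placeholder
)
rsconnect::deployApp(appDir = "survey-app")  # folder containing app.R
```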
So once you link it up to your account, in the top right-hand corner you can just click to deploy, and then it's on the server end of things as well. It's a little bit harder if you want to make it indexable — if you want a robots.txt in there so that it appears in a main Google search. There are ways you can do that; it's a little bit harder, and it's much easier just to pay, but there are ways you can do it for free, like redirecting from a different website. And if you don't want to make it indexable — so the Machiavellianism scale, as you can see there, isn't indexable — then it's all free, and you can just click on that and go to the survey whenever you want. If it is something that you think people would be interested in, I'd suggest you absolutely touch base with Rachel Curtis at ANU Media, who was really fantastic to me. You can see there the advertising that went out — 'How Machiavellian are you?' — and that was during the great toilet-paper panic, so the marketing team ran with this idea of either grabbing all the toilet paper and fending for yourself, or taking only one pack and letting other people have some. And then, as part of that, Sana Qadar at All in the Mind actually read it and did a short piece on it as well, and you can see that was responsible for the majority of the responses there. There are also other big websites — what comes to mind is YourMorals.org, where, I can't remember how many people have completed it, but they did some marketing in newspapers and some other avenues and have had tens of thousands of people complete their surveys. So it's absolutely worth trying to find some way to market it. But as I said earlier, we're still getting responses — I think when I checked, 138 over the last two weeks.
So we're still getting this trickle — now that it's out there, people are completing it. But yes. OK, thanks, Connell. So, Neema has his hand up. Hello, can you hear me? Yep, hi, Neema. Hi, thank you for the presentation. I was wondering: if I wanted to, let's say, conduct research on social class, based on your experience, to what extent do you think I could get access to people who come from lower-class, middle-class or upper-middle-class backgrounds and so on? Is there any information about that? I'm actually not quite sure there's an easy way to do that, but what comes to mind is that it's more about how you advertise and how you market it. Once it's online, it's just a simple web applet — just a website that people can go to, and they can do it on their phones. In fact, I think it looks much nicer on the phone, because it already renders everything for the phone. It would just be about how you're able to communicate that this research is available, and that it's interesting and important, to the people in your target group. Depending on who that is, that might be through different channels or word of mouth or anything. What we've found here is that once enough people know about it, it kind of maintains itself, because there are still people clicking on and doing it. So I think it would just be about how you communicate it to your target market. OK, thank you. Any other final questions from people? I might ask one before we wrap up. I'd be really curious, Connell — this branches a little bit from what both Cassidy and Erin were asking — what would be your immediate reaction on things you wouldn't use this platform for?
You sort of mentioned clinical — are there particular areas or particular things where you think the feedback would not be appropriate, or where this wouldn't be ideal? Thanks, Christian. I definitely think, as Erin quite nicely pointed out before, it's anything that you think people are going to make important life decisions on, anything in any way, shape or form clinical, or maybe even things that feed into forming a sense of self or understanding the self. You're basically leaving people alone, to their own devices, to make sense of this themselves, and it's very rare that clinical feedback, or really any kind of feedback, would be given in that sort of isolation. So my suggestion might be, if you have people who get different scores, to have different kinds of feedback for them, or even just to be really careful about the kind of information you're putting in there — definitely avoiding anything that seems really certain about them, like 'this is definitely who you are'. We all know from the reliability of measures that you could take a measure several different times and get slightly different scores, so really be sure to insert those kinds of caveats. I guess the risk is anything where people might be left alone with the results and misconstrue them. What also comes to my mind is that if you were going to use this in more of a clinical setting, it could highlight certain things and say 'please bring this into your next session', or potentially you could have it send a report to your clinician or to someone else at the back end — that would be quite easy to do as well. Awesome, thank you very much, Connell. I think that's pretty much it for the day, unless anybody has burning questions they want to send through before we wrap up.
I want to say thank you — I thought that was fascinating, a really interesting topic, and something that's nice for us to be thinking about: engaging with our research more broadly, the ways we can help engage people more in the research that we're doing, and also those ethical questions, which I think are really important for us to be considering. So I wanted to say thank you, Connell, on behalf of the clinical theme but also your colleagues more generally. And we're really excited — we have another presenter, Luisa Tulipski, presenting when we come back after our teaching break, and another group of really exciting presentations for the second part of the semester. So thank you, Connell, thank you to everyone, and we hope you have a lovely rest of the week. See you soon.