Yeah. Thank you very much, Manish. Thank you very much, Edith. Hello, everybody. I enjoyed the earlier discussion; I didn't build any of it into my slides, but linking back to your talk on the 4th of May, I'd be very happy for anybody to contribute now to the topics Edith mentioned. Let me start sharing my slides.

So, I was asked to talk about the tool, and you can see my colleagues' names at the bottom here. We work as a team, and what I'm presenting is the work of the team, with me as project lead.

A little bit of background on me. I have a background in industry: I started off as an area manager, and then worked for nearly 20 years for American Express and for Visa in technology-related roles. While I was at Visa, I wanted to start giving back to students and began guest speaking at the University of Sussex. In 2015 I joined the University of Sussex full-time and stopped working for Visa. I transitioned from industry into academia because I really enjoy the opportunities academia offers to think about problems, and approach them, holistically. And while the tool I'm presenting today is for practical purposes, in reality it's underpinned by research as well, because we want to enhance learning and enhance knowledge. I moved from Sussex to WBS around three years ago, and I work in the area of information systems and management.

Today I'll briefly talk about an AI task-and-finish group that I'm leading. I'm happy to have external members, and we already have some, so if you're interested, you can join the group. Then I'll talk about the tool. And then we'll talk about other things around AI that are also relevant, because AI is one thing, but it has to fit into the broader tools and propositions of higher education. AI is not actually unique; it has just brought things to the forefront. The technology itself is neutral, and it's how we apply it and how we deal with it that makes it distinctive.

My positioning today, and I just wanted to be upfront about this (the image here was produced by DALL-E), is that with my background I of course embrace technology, while being aware of the risks it poses. As you can see at the centre of the group we are creating, we have academic integrity and ethics, and if you Google me, you'll see that some of my research is about AI ethics. So I am aware of the risks, but I still see the many opportunities.

Looking at the current priorities of our task-and-finish group, which you're invited to join: in Strand 1, we ask the bigger questions around AI. What makes humans unique compared to machines? What can humans do that machines can't? That is meant to help us think about how we teach and how we assess. Strand 4 looks at the opportunities for teaching using AI, and we're not looking just at ChatGPT or generative AI; we're looking at AI more broadly. In Strand 2, we look at assessments: how can we develop good AI assessment principles? As was said earlier, assessments still have to be authentic and have to have features where humans are unique and can demonstrate their critical thinking, so that submissions are not just AI-generated. It's about helping colleagues design assessments.
Strand 3, then, is feedback, which is not just the tool I'm talking about today, but feedback more generally. Feedback is always ranked lowly by students, so how can we improve the giving of feedback and create a dialogue between students and AI? All of this is the broader task-and-finish group, and within it sits the tool I was asked to explain to you.

The tool we developed was based on WBS students, and previously Sussex students, telling us that they see academic writing as their main barrier to academic and employment success. To a certain degree ChatGPT now appears to help: if academic writing is seen as your weakness, you see the fluency of ChatGPT and believe your academic writing problem is solved. However, we know that's not the case, and there's still a need for academic writing. Also, as we know, NSS assessment and feedback always scores quite low across the sector, and again we're trying to address this by providing formative feedback. The emphasis is on formative feedback. By giving formative feedback, we hope to democratize education while keeping the principles of meritocracy, in the sense that if a student wants to work hard, we give them a tool to work with. It's about supporting students who actually want to learn. It fits into Warwick Business School's and Warwick's overall missions and strategies.

And students, as you can see on the right, agree with us that using an AI tool such as this is valuable. It's important for me to get student support, because we know from various occasions that students are against using AI in a summative environment. We ran a trial and asked students afterwards how they felt about AI giving them formative feedback rather than a human tutor. As you can see here, the majority (the middle line) have no preference, but there are actually more students who appreciate AI giving them feedback. "You're generating the feedback?" We're coming to this afterwards, yes. Sorry, I'm still framing it. The important bit so far, my message so far, is that I have students on board for this additional feedback using AI.

Is there an audio issue here? Because I can't hear well. Yes, we seem to have lost Isabel. Oh, you're back. I think you've disappeared again. Yeah, I think you're back. I can see I have a weak connection. Can you hear me at the moment? Yeah, we can hear you now. What if I briefly switch to Google Chrome, because I think that might work better, or should I just continue? We can hear you, and my internet is okay. Let's carry on, then; if you drop again, we'll try something else. Okay, good.

So here you see a good mixture of nationalities and genders using the tool. The last point I have as preamble to the tool is that what we hope to achieve is increased peer review and peer assessment, where students work with each other to learn. It's something students do voluntarily: they get feedback, and then they think, let me compare it with somebody else's.

So here's the positioning of the tool. Teaching happens, and students submit a draft essay into a user interface, which at the moment is web-based for us, so only Warwick students have access to it.
It produces an AI report, and then sometimes we offer group tutorials, depending on which module students take part in, or they go straight to their submission.

On the technical side: for the student essay, we use a mixture of statistical features, which are really just calculations. For example, what is the percentage of transition words? Transition words are words like "of course", "moreover", "furthermore", "therefore". Using these, we calculate unique transition words, and it's just a basic calculation, as it is for other such features. For the majority of features we use NLP, and we also have a sentiment analysis feature. All this ends up in a learner-facing report.

How it works at the moment: students upload their essay, we run our tool every two hours, and so within two hours they get feedback. Of course, it could be instant; it's just that we would have to pay more for the compute, so we let it run only every two hours. It's coded in Python, and as I said, we use a mixture of statistical features and deep learning algorithms. We developed it based on code available from open source communities, so really we used code that anybody can access, for example via GitHub, and integrated it into what we think is coherent feedback. We did not use any form of subject-specific labelling or subject-specific supervised learning. So it's what we think is a minimum viable product: on the one hand, students are at the moment not getting any feedback; on the other hand, here they get some feedback, and some feedback is still better than no feedback. That's how students see it as well. Our user interface is web-based, so they log in, they upload, and they get the feedback through the same interface.

One of the features I like a lot is our knowledge graph. You have the information about knowledge graphs on the slide, but how students and I see it: you submit your writing, and you get feedback from somebody who has read your assignment and makes something like a mind map or concept map (I know mind maps and concept maps are slightly different). Our knowledge graph visualizes that somebody actually read through what you wrote and put it into a graph. I really like it; it helps students to see relationships. For example, a biology student might say, "Actually, I misinterpreted this: I wrote that this is a subset of X, but in reality it's the other way around." The knowledge graph demonstrates those things. (A minimal sketch of how such a graph could be built follows below.)

We also use other things, such as argumentative zoning, which is very popular with international students. For those of you who work with students: a lot of students always want to know, what should the percentage of my introduction be, and what should the percentage of my main part and my conclusion be? Here you can see how the AI uses sentences to decide which part you spend most time in. Coming back to dissertations, for example: I saw students who didn't do an awful lot of primary research and had hardly anything on the method either, because they were not that proud of their findings. But through the feedback report, they could see that they had to increase it slightly; after all, it's a dissertation, where the time should be spent on primary research.
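Since the team's code isn't public, here is a purely illustrative sketch of how a concept-map-style knowledge graph could be built from an essay. It assumes spaCy for noun-phrase extraction and networkx for the graph; the file name, threshold and normalization are invented for the example, not taken from the WBS tool.

```python
# Hypothetical sketch: build a simple co-occurrence "knowledge graph" from an essay.
# Assumes: pip install spacy networkx && python -m spacy download en_core_web_sm
import itertools

import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")

def essay_knowledge_graph(text: str, min_count: int = 2) -> nx.Graph:
    """Link noun-phrase concepts that co-occur within the same sentence."""
    doc = nlp(text)
    graph = nx.Graph()
    for sent in doc.sents:
        # Normalize noun chunks to lower-case lemmas as crude "concepts".
        concepts = {chunk.root.lemma_.lower() for chunk in sent.noun_chunks
                    if not chunk.root.is_stop}
        for a, b in itertools.combinations(sorted(concepts), 2):
            # Edge weight counts how often two concepts appear together.
            w = graph.get_edge_data(a, b, {"weight": 0})["weight"] + 1
            graph.add_edge(a, b, weight=w)
    # Keep only relationships seen at least `min_count` times.
    weak = [(a, b) for a, b, d in graph.edges(data=True) if d["weight"] < min_count]
    graph.remove_edges_from(weak)
    graph.remove_nodes_from(list(nx.isolates(graph)))
    return graph

g = essay_knowledge_graph(open("draft_essay.txt").read())
print(sorted(g.degree, key=lambda kv: -kv[1])[:10])  # most connected concepts
```

A graph like this can then be rendered as the bubble diagram students see, with node size reflecting how connected a concept is.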
The writing scores on the right show how good students are in different categories: grammar conventions, sentence fluency, word choice, organization, and ideas and content. It's a score from zero to five, and for each criterion we explain what we mean by it and how they could possibly improve on it. At the bottom, coming from a business school, we use the CABS ranking, where journals are ranked, so students see what quality of journals they are using. This only works for business school students, and we make it clear in our disclaimers that non-business-school students should ignore this section; but business school students will find it useful to see the rating of the journals they are referencing. Afterwards, I can show you the actual reports.

But let's briefly discuss the research angles we take. It's all covered by a research ethics approval, so in our Warwick systems it's registered as part of a research project, and I've just written a teaching case on it. We're thinking about the technical contribution; we're also thinking about how we designed it, as a design approach. And lastly, we have one PhD student who focuses just on the ethics of this project. As a reminder, we used the EU AI ethics framework to underpin everything we did: we checked against all of its criteria. While being aware that a checklist might not be the most appropriate way to assess ethics (and this is what the PhD is about, that you can't fully assess it with a framework such as this), it still helps as a starting point to ensure we are trying to provide an ethical solution.

We are not the only ones doing something like this. You will have heard of Turnitin Draft Coach, and there are other commercial tools, such as Grammarly and Writefull. Other universities also do many interesting things on the topic of AI, and we're not talking here just about writing using ChatGPT, but about generally developing tools using AI. And then, of course, there are things linked to ChatGPT that we are also thinking about: we have something called stylometry, where we look at authorship authentication, again based on statistical rules.

Unless there are questions at this point, I would briefly share the live report. Let me just take this off. Are there any questions so far? Well, there are none in the chat, but people are welcome to take the mic, if you're happy with that. Okay, so there is a question in the chat now.

I can't judge on Studiosity; I don't know Studiosity in detail. And Grammarly, of course, is very good; this is where it makes sense to actually look at the report. We are doing more than Grammarly, because we have other sections. What we do on grammar is perhaps not as user-friendly as what Grammarly does; however, it's actually very robust, so we spot things that Grammarly, at least the free version, does not (you may be talking about the enhanced version). What is important for me to point out is that the free version is the one we want to compare ourselves with, because after all we provide a tool that is free for students.
So we don't want to do otherwise; I think it's not fair to oblige students to pay for something like Grammarly.

Our data protection is interesting, because students submit anonymously and the essay doesn't go out to any form of external service, so there are no Turnitin-style issues. The big thing we avoid in data protection is that when they use the interface, they don't have to disclose any information: no personal information is disclosed at any point. And we make a point that the moment they have submitted a new report, we erase the previous report, and vice versa. So they are submitting their essays, and, it's interesting, are there any other concerns you would have?

Yeah, hi, thanks for taking the time to answer this one in a little more detail. We're obviously concerned, a bit longer term and at scale, about students submitting information which might be confidential client information, if they're doing business reports based on clients that we work with, for example. We want to give them feedback on those business reports, but they're disclosing confidential information which would expose the client. So our longer-term concern about submitting directly through the OpenAI interface to get feedback is that we're training the LLM, and that data is being held to train the LLM, which in theory means it could be exposed at some point. So we're investigating connecting to GPT through the API. And literally a couple of days ago I noticed that you can now turn off chat history, which they claim gives you privacy over your data submissions. I'm interested in your approach to this and what your thoughts are on it, really.

Yeah, for us, this is it. We keep saying to students: this is your interface, and you don't need to use it. But the moment you submit, you need to be aware that we have access to your submission. That's just how it is: the data scientists have access to the submissions. So, coming back to confidential information, it's not that the data scientist is going to read the reports, but they do have access if they wanted to, and we point out who has access. We don't know the students' personal names, but we do have access to the text, and we erase reports, so we don't keep them. This is where it's maybe quite difficult to know: we have our disclaimers, and we believe that is fine, but of course we are not legal experts, so it's quite difficult to decide whether we are fulfilling everything. But as far as we are concerned: first, it's the students' choice whether they do it. Second, we don't collect any of their information; they can be completely anonymous. Third, we don't keep the essay they submitted, and we keep their feedback report only until the next feedback report they request is uploaded. Like I said, I'm happy to use this as a discussion point with others, to see what other aspects we have to consider. The next question.

I think the concern here is not what the institution is doing in terms of holding on to data, and so on and so forth. The concern is what the AI company is doing with that data.
If we are submitting, for example, information about a brand new product that is coming to market, as a report from one of our clients, then in theory, if someone asked the LLM whether that company is bringing any new product to market, it would have been trained to know that. There's a confidentiality issue here around training the AI with new information.

But we don't use an AI company. I agree with you, and actually this is what makes our product special: it's in-house. We don't use an AI company; there's no AI company involved. And I fully agree with you: we ask all those questions about ChatGPT too, and we don't know what they do with the data.

Can I step in? I'd like to add here: if there is an NDA type of situation, I'm sure the student who's uploading this to an AI could be pre-warned not to put anything that is under NDA into the system. If you want feedback on general work, that can be used; but if you signed an NDA, obviously you should not be disclosing it. You could put that kind of warning even in your system, Isabel, because you said the data analysts or data scientists would have access to such data. I hope that answers your question. We can move on perhaps to some of the other questions as well.

Yeah. The next one is numerical submissions. No, it wouldn't work; we don't calculate anything, and we don't verify numerical submissions. I believe Birmingham built a very good tool, which they've actually shown me, if you're aware of it: basically, if you're marking numerical work, the AI learns from your marking, and if you have big chunks of marking to do, it learns from you and then continues on its own.

Going back to the next question, on summative assessments: no, I wouldn't want to use it for that, because it would raise many issues with students as well. This is the reason I've always stayed away from any form of marking, or giving too much indication. We say to students this is good or not that good, but we don't give them any indication of what the mark is going to be, and we don't even want to. And I don't think our in-house tool is good enough to even start doing summative assessment, because human markers can see things that our tool doesn't; it's a completely different approach. So I wouldn't even want to start with summative assessments, and I think students wouldn't want it either.

Next one. Well, of course, students can submit as much AI-generated material to us as they want to. We actually wanted to test (we haven't done it yet) how essays we generate ourselves would fare on our tool. So yes, we don't know whether they do it. But even then, I would say: if they've generated it using generative AI, then at least they still learn how to improve on it. Data protection we've covered, okay.

And then the next one: yes, AI detectors are very inconsistent, so again, I don't trust a detector either, exactly. We use stylometry, where we train on previous assignments from the same person, then pass in something new, and the tool says whether it was the same person as before. We haven't rolled that out yet, and that's a different tool, not the one I'm describing here. (A rough illustrative sketch of the idea follows.)
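To make the "statistical rules" idea concrete, here is my own hypothetical sketch of authorship verification, not the team's implementation: build a style profile from a student's previous assignments using function-word rates and average sentence length, then compare a new submission against it. The feature list and threshold are invented for the example.

```python
# Hypothetical stylometry sketch: authorship verification from simple statistics.
import math
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "was",
                  "it", "for", "with", "as", "but", "however", "therefore"]

def style_vector(text: str) -> list[float]:
    """Represent a text by function-word rates plus average sentence length."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    rates = [counts[w] / max(len(words), 1) for w in FUNCTION_WORDS]
    avg_sentence_len = len(words) / max(len(sentences), 1)
    return rates + [avg_sentence_len / 100.0]  # crude scaling

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def same_author(previous_essays: list[str], new_essay: str,
                threshold: float = 0.98) -> bool:
    """Average the author's past style vectors; compare the new text to them."""
    profile = [sum(col) / len(previous_essays)
               for col in zip(*map(style_vector, previous_essays))]
    return cosine(profile, style_vector(new_essay)) >= threshold
```

In practice a verifier like this would need calibration on real cohorts before any threshold is trustworthy, which may be one reason the team hasn't rolled theirs out yet.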
So, if there are no other questions, should I share the actual report and we look at it together? Yes, please, I think we're done with the questions.

So I'm sharing my screen now. I can't see this screen anymore, so you need to tell me if there's anything wrong. I don't think we can see it yet; it's only your camera at the minute. Okay, let me share this window instead. Now we can start to see something. We are seeing your Blackboard Collaborate screen; perhaps minimize that and select the one you would like us to see. If you click that. It's already on. But we can't see an example screen yet. Ah, yes: "deep learning analysis on content and ideas". Exactly, that's better.

So we have a welcome page, which I'm not showing you here. It gives students a long, long disclaimer, because we also want to be careful about the advice students take from our tool. Then we come to the actual tool. Initially we had structured it differently: following academic feedback, we had a section which talked about comprehension and critical thinking. However, because the tool isn't that strong in those areas, we deleted them and went just with these.

Part one is content analysis. It's mainly a word cloud, which shows the type of words students use. It also covers stages of negation, which is an AI feature that indicates a little bit of critical thinking, because we know that stages of negation are quite informative for critical thinking. Then comes what I explained before, our knowledge graph, which plays back to students what they actually wrote about. You can zoom into it, make it bigger, and look at each of the bubbles in more depth.

Part two is academic writing. This is what I showed you before: content, organization, word choice, sentence fluency and grammar conventions. I have English as an additional language, and I never make it to five on grammar, as much as I try. Then we have vocabulary usage, where we come back to features like the unique transition words I mentioned before, which are very much statistical features. And there is AI-generated feedback using, for example, the Flesch reading ease readability score. Not sure if you've heard of it; it's quite a common score. Academic writers should have a score between 15 and 40: if the score is below 15, the essay might be too difficult to read, and if it's higher than 40, it might not have enough academic language. The issue, which I hear from colleagues all the time, is: if it's a very technical paper, would students then get too low a score, and should they then change it? This is where we don't want to tread on anybody's toes: it is up to the lecturers to explain to students what they would expect for the individual assignment. (A minimal sketch of these two statistical checks follows.)
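For illustration only, here is a reconstruction of the two statistical checks just described, assuming a fixed transition-word list (the real list isn't published) and the textstat package for the Flesch reading ease score, using the 15-to-40 band mentioned in the talk. The file name and messages are invented.

```python
# Hypothetical sketch of two statistical features from the report:
# unique transition words and the Flesch reading ease band.
# Assumes: pip install textstat
import re

import textstat

TRANSITION_WORDS = {"moreover", "furthermore", "therefore", "however",
                    "consequently", "nevertheless", "additionally", "thus"}

def transition_word_stats(text: str) -> dict:
    """Percentage of transition words and the set of unique ones used."""
    words = re.findall(r"[a-z]+", text.lower())
    used = [w for w in words if w in TRANSITION_WORDS]
    return {
        "percentage": 100 * len(used) / max(len(words), 1),
        "unique": sorted(set(used)),
    }

def readability_feedback(text: str) -> str:
    """Apply the 15-40 academic band to the Flesch reading ease score."""
    score = textstat.flesch_reading_ease(text)
    if score < 15:
        return f"{score:.0f}: the essay might be too difficult to read."
    if score > 40:
        return f"{score:.0f}: the essay might not use enough academic language."
    return f"{score:.0f}: within the typical academic range (15-40)."

essay = open("draft_essay.txt").read()
print(transition_word_stats(essay))
print(readability_feedback(essay))
```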
Then there are comments on sentence fluency and, again, writing conventions, and the argumentative zoning I explained before, where we look at each paragraph to decide whether it is introduction, main body or conclusion. You can then compare with your peers, and see what each of them does. There is no perfect pie, and that's really what we say all the time as well: it depends on the assignment, but it gives students something to compare with their peers, to discuss with them, and to critique each other's work. Then there are some comments on general organization, which again are AI-generated.

Part three is our grammar check, which comes closest to the free version of Grammarly. We also look at sentence length: sentences over 64 words should be reviewed by students, with 64 seen as the appropriate maximum length of a sentence. There are still some issues with our tool here: for example, it doesn't recognize certain kinds of separators, and we cannot do anything about that. This is the difference when you do something in-house versus buying it commercially: there are things that would take too many resources to change. So we have to live with our tool not reliably picking up whether something is a true sentence or some other form of separation. We keep debating whether to drop this section, but we keep it because some students find it useful.

Finally, there's the Harvard reference check. In this case it's not a business school text, so there were no CABS-ranked journals in it, but you remember this is where it compares against the CABS journal rankings.

The fourth part is our sentiment analysis and discourse analysis. The interesting thing here is that this example was based on my own writing. You may have heard of the PFHEA: you apply for this Principal Fellowship and write loads of positive things about yourself, so the tool picks up the positive sentences. Then I have a friendly reviewer, and the friendly reviewer's comments come up as the most negative sentences. The reason I like to show this is that it demonstrates the analysis works: it understands that my friendly reviewer is questioning things and asking things, and is not as positive as I am about my own achievements. We also have discourse analysis, where a score above 2.2 is seen as a good paragraph and lower than 1.8 might need improvement; we then pick out the highest- and lowest-scoring parts of your writing, where the flow is perhaps not as good. (A small sketch of the sentence-length and sentiment checks follows after this exchange.)

Isabel, are you still here? Yes, I'm back. So the idea here is that this is a tool developed in-house, and it should give you inspiration about things you can do in-house rather than going with an edtech firm. It's all based, as I said, on publicly available code. Our data scientist, a PhD student, did the combination of the codes, but it's all publicly available code that we used to develop this tool, and that's why I was asked to present on it. Any questions at this point?

Just on that point: if it was publicly available code, have you then made the code for this available publicly in return? No, we haven't, because after all our time went into the combination of the codes.
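Purely as an illustration of parts three and four (the project's actual components aren't public), here is a sketch that flags over-long sentences against the 64-word guideline and surfaces the most positive and most negative sentences, using NLTK's VADER model as a stand-in sentiment analyser. The file name and output format are invented.

```python
# Hypothetical sketch: sentence-length flags plus most positive/negative sentences.
# Assumes: pip install nltk
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from nltk.tokenize import sent_tokenize

nltk.download("punkt", quiet=True)           # sentence tokenizer model
nltk.download("vader_lexicon", quiet=True)   # VADER sentiment lexicon

MAX_WORDS = 64  # review threshold mentioned in the talk

def review_sentences(text: str) -> None:
    sentences = sent_tokenize(text)
    sia = SentimentIntensityAnalyzer()

    # Flag sentences longer than the 64-word guideline for review.
    for s in sentences:
        n = len(s.split())
        if n > MAX_WORDS:
            print(f"REVIEW ({n} words): {s[:80]}...")

    # Rank sentences by VADER compound score, a crude stand-in for the
    # report's most positive / most negative sentences.
    scored = sorted(sentences, key=lambda s: sia.polarity_scores(s)["compound"])
    print("Most negative:", scored[0])
    print("Most positive:", scored[-1])

review_sentences(open("draft_essay.txt").read())
```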
So it's not available ready-made to download. The individual components are available, but you cannot find us on GitHub and just use the same tool in your own organization, because we are still using it for testing and research purposes. That's fine, thank you.

Isabel, did you get any feedback so far from students about the tool, the level of advice they receive, and how they're then able to improve their writing?

Absolutely. Students like the grammar section most; it's the all-time favourite, because it's so easy: you just get the suggestions. The next thing they like is the knowledge graph, because it shows them the concepts they've written about, whether it makes sense, whether it was indeed what they meant to write about. For everything else, those who get a tutorial on it take more out of it than those who work on their own; the more peer work you do, the more time you spend on improving the assignment. Every student who participates is asked to submit feedback, and it's generally positive. A couple of students, of course, say they would want to see more concrete suggestions for how to improve, and that's where we argue that it's actually meant to be a tool for learning rather than just telling them the answers. This comes back to the grammar section: what they like there is "this is wrong, write this instead", that one-to-one transferable learning; with all the other sections, they have to think about how they want to change things. Thank you.

And what about your academic colleagues? How have they received the tool? How supportive are they?

Well, I would say it's polarized. There are colleagues who want their students to use it, and, coming back to ChatGPT, it's a bit like asking how positive colleagues are about ChatGPT: there are colleagues who think ChatGPT is amazing, and there are those who are against it. It's very polarized there as well, isn't it? So it's the same. Thank you.

Just going back to the students: have you seen any difference in terms of student profile and how they welcome the feedback, maybe undergraduate versus postgraduate students? Is there a profile that starts to emerge in terms of who really benefits and makes the most of the tool?

So we look at uplift. We don't know which students participate by name, but we know several things. Students who participate and then give us their feedback are voluntarily asked to also give their demographic characteristics so we can track them, and that seems fair across the board; there was no trend either way. And from certain tests we run, we know that students who participate get an uplift in their real submission; I think the uplift we calculated was around 6%. I needed to calculate it because colleagues always ask whether there is an uplift, and yes, there is one, so I can confirm that. However, I don't entirely believe in the uplift figure, because I think students who use the tool are more engaged to start with.
So we don't know the true uplift: I believe that only students who want to do well use the tool and then improve with it, but we don't really know how much they would have achieved without the tool, because we don't monitor that. So yes, we have seen an uplift, and it seems fair across demographic characteristics; however, it's very difficult to give a true figure, because other factors come into play. Thank you. I can't see any more questions. Oh, yes, there is one: exactly, exactly as Rebecca says, there's correlation but not causality, absolutely. Yeah, thank you.

So, we would like to make sure that we finish a bit earlier than the hour, as colleagues usually join their next session or have time for a bit of a break. Would you like to make some closing remarks? Well, I still have a couple of slides in my presentation. Yes, go for it then, thank you. Let me go back to the slides.

We can see the slides; however, we can't hear you. Yeah, we seem to have lost you, Isabel. Still no good. So it looks like the technical issues are taking over this session. We can see the slides moving, Isabel, but unfortunately we can't hear your comments. We see there are a number of events you're going to be hosting, on the 17th of May and the 14th of June, and we have the email addresses and website addresses if colleagues want to follow that up.

So I'd like to thank you, Isabel, on behalf of everyone here today; it's been a very interesting conversation. Thank you also to the colleagues who joined the call, and for your questions. Thank you from us at the Old South group, and we hope to see some of you next Thursday at our JDI session. Thank you, everyone. Thank you all for coming. I'm going to stop the recording now.