So impact is a huge word. Some of you may be familiar with it, some of you less so. The idea is simply to ask yourself: what am I changing when I do this? Whatever action you take in public engagement, it will probably have an impact. Maybe a huge one, maybe a tiny one. It may touch only one person, or two million people, depending on your action. It may be a deep one, meaning a person completely changes their whole life, or it can be a very shallow one, where people simply become aware of one new word, and that is also a kind of impact. So the question we ask ourselves when we think about impact is: what am I changing in the world, in people, in society, in research right now? To explore this idea of impact a little, I will ask you to do one more activity on the mural when we speak about evaluation. So, first of all, what is evaluation? When we talk about evaluation in public engagement, we can refer to two things that are quite different in their goals, and sometimes in the methods we use as well. The first one is formative evaluation. Formative evaluation is typically what you do to improve your action. You very often do it even before you start your activity. So it is not something you do afterwards to reflect on the action, but before, to help you build it, or during the action, to help you improve it. Typically, I did a formative evaluation at the very beginning of this training session. Can you identify what it was? You can write it in the chat. Did you go through a formative evaluation action, or at least part of something that could look like one, at the very beginning of this session? No? What was the first task I gave you? I asked you: when I say evaluation, what words come into your mind? And you wrote some words on the mural, such as improving, reflecting, investigating.
And by doing this, I am not only starting to engage you with the training, I am also getting a glance at whether you have any idea of what evaluation may mean. Maybe you are going to tell me: for me, evaluation means being at school and receiving grades. And I know that this is perhaps not quite the meaning I am going to use, so I have to clarify it. So basically, I did a kind of formative evaluation, very symbolically. I did not do a full evaluation, but just by asking, before we have a moment to discuss evaluation: what words and ideas come to mind when we say evaluation? So this could be a formative evaluation. Typically, say you have an action about climate change. You are going to take some time with some people who would typically be part of your audience, and you are going to ask them: when we say climate change, which words come into your mind? What memories does it bring? What does it refer to in your mind? So that you know which links and which frames are already present in the heads of your visitors, of your audiences. And typically, you could build something like a mind map, like the one you are seeing now, where you would map with these people what they already know or think about climate change. You will then have a first map that you can use to discuss this and to build your own action in a way that is more relevant and more tailored to your audience. Then we have a second kind, which is summative evaluation. Summative evaluation is the kind of evaluation we do almost systematically when we do a public engagement action, and it is very often required by the funders. And if you are a funder or a supporter of research, please do require some evaluation, even a very small one. Summative evaluation is to document your action and to assess your activity after it has been delivered.
Not necessarily to assess whether it was good or bad, but at least to be able to reflect upon it and build lessons out of it. We will mostly talk here about summative evaluation, although a large part of what we will see can be applied to formative evaluation as well. So, should you evaluate? Obviously I am going to say yes, you should. You should evaluate your action, and the first reason, I think, is simply to clarify your aims and objectives. Even before getting to the, let's say, real reasons for evaluation, engaging in an evaluation framework and building an evaluation plan will force you to clarify a lot of things about your action. Suppose you say: I just want to make a game about my research and have this game played by teenagers. You make this, you start doing it, that's great. If before you do it, or while you are doing it, you start building an evaluation plan, you will be forced to ask yourself: okay, what are my objectives here? What do I want to change? What is the long-term impact I want to contribute to? And that will absolutely change the way you build your game. Very often, if you do this, your game will be much more tailored to have the right impact. If you build the evaluation plan only after you have built your action, just before implementing it, very often you will see there are little discrepancies: things will not fit perfectly together. And that means your action may not be perfectly tailored to the impact you intended. But back to evaluation. So I really, really encourage you to build the evaluation framework and plan at the same time as the action itself, and to use it as a way to build your action properly. Obviously, the goal of evaluation is not only to help you develop your action, but also to help you improve your own practices afterwards and reflect on them, to document your action, and to provide evidence for funders and supporters, but also for other researchers, so that you can share it widely.
You can demonstrate the value, the benefits and the impact of your activity. And, I would say one of the most important elements, you can share the lessons learned with others. That means it is extremely important, whenever you do a public engagement action, to evaluate it and share what happened with your fellow researchers, so as to build a culture of research, to be able to reflect with others, and to get support from them or support them in turn. So one way to define evaluation, or at least summative evaluation, would be to say that summative evaluation is just asking yourself: what happened? What just happened? And this is a good question to ask yourself after any public engagement action. Personally, when I evaluate an action that I did not lead, and I am meeting the public for the first time, having not seen them before, it is a very nice question to start a discussion: you have been through this, what just happened? And they tell you about what they experienced, what they perceived as having happened. And that is sometimes quite different from what you intended. That does not mean it is bad; it just means it is good to be aware of what we are actually creating. To plan your evaluation, there are several steps. The first thing you need to define is your general aim. This is the very big picture of the change you want your action to bring about. This general aim is then translated into objectives. Objectives are clear things you want to reach in order to say the aim has been somewhat achieved. Obviously, you will have to frame the evaluation as you framed the public engagement action itself, knowing the topic, the audience and the partners, as we saw in the second session. And you are going to ask yourself evaluation questions to know whether the objectives have been reached.
Now, to answer those questions, you will need to collect data and to analyze it through a methodology. And finally, you will have a report, which can be a 200-page, extremely documented report, or just one sheet of paper with a few notes, depending on the level of detail you want to give, and also on what you will do with this report, who will read it, and how you will share it. First of all, the aims and objectives. The aim is at the same time a very simple thing and not always an easy thing to frame, as it is simply: what do you want to achieve? And it is quite a nice question when you are building a public engagement action. I am going to give a talk on the radio; I am going to share my research on social media. What do I want to achieve? Why do I want to share my research on social media at all? What will it change? And then you translate this into objectives. Well, if I share my research on social media, maybe it is because I want to create a culture of research in my field. Say I am doing research in molecular biology and I want to create a culture of research on social media in that field. The objectives then become: has my research been seen on social media? Have people engaged with my research? Have they commented? Have they posted other things? Have I been quoted by other accounts? Has it created discussions and debates? And we will try to make these objectives as specific as possible, and measurable, so as to know whether they have been reached, or half reached, or 30% reached. Are they achievable, so that they are not just utopian ideas that can never be reached? Are they relevant, obviously? And are they time-defined, so that we are aware of the timeframe? This is not specific to public engagement, so I will go quite quickly here; these are the usual qualities of project management objectives. Now, let's go more specifically into evaluation.
For the evaluation questions, you are first going to ask yourself: what do I want to know about the outputs? So what are the outputs? The outputs are simply the elements produced: the games, the events, the talk you gave on the radio. These are the things that happen. So what is their quality? What was their reach? Did they reach a lot of people? Who did they reach? Did they reach only educated people between 30 and 60 years old? Did they reach mostly young girls between 10 and 15 years old? So what was their reach, not only in terms of numbers but also, if possible, in terms of groups? And what is the general quality of these outputs? Here on the right, you can see an element coming from the QUEST European project, which ended recently. It is an H2020 project that built quality indicators for good science communication. I will send you the link as well, if you would like to download the whole deliverable, but here you have elements of all its indicators, so you can see whether your outputs are good in terms of science communication. Are they trustworthy? On what science and what knowledge are they based? Are they engaging? Are they clear and coherent? Is there some interaction? And obviously, are they connecting with society? Can society relate to them? Are they targeted? Are they impactful? We can discuss this in more detail if you would like, but that is the first question you can ask yourself, about the outputs. And this corresponds to the first of the three boards we filled in earlier: we would like our events and science communication to be, so as to create, so as to contribute to. The second one, so as to create, would be what we usually call the outcomes, which are quite different from the outputs. The outcomes are not the events themselves that you have run, not the book itself that you have written. The outcomes are the benefits, the effects, the changes that were immediately created.
So did your book create a new awareness of something? Did you create a new understanding? Did you lead to a change in the perception of who researchers are, or of that element linked to your research, or of that part of society? Did you lead to a change of attitude, to a change of behavior, so that people are now going to act differently in their daily life? Did you create a new capacity? People now know how to build something, just like the young lady you can see here in the picture. She now knows how to build a kind of box and prototype some things, and maybe it also gave her a sense of empowerment, in terms of confidence, of the way she perceives herself, of how she trusts herself or how she relates to research, for example, so that she now feels empowered to do things she did not dare to do before. So these would be the outcomes. And the last aspect, of course, the third box that we discussed before, is the long-term impact. These are the long-term changes: the transformation of society, of research, or of the relationships between research and society that you are contributing to. And obviously you are not achieving this all by yourself through your public engagement action alone; you are contributing to it. For example, you want to foster science literacy, or you want to strengthen public trust in science. You are not going to change this all by yourself, but your action will contribute to this long-term, wide impact. Are you going to support your city becoming carbon neutral? Are you going to develop a scientific culture on social networks? Are you going to make research more relevant and responsive to social needs? Once again, these are very, very wide impacts that you can only contribute to, but it is extremely important for you to be able to link your action to outcomes, and then these outcomes to impact. Why is it important?
It is important, first of all, for, let's say, funders and supporters. Funders will often fund your action because they are funding several other actions, and if all these actions contribute to the same wider impact, that is very coherent for them, or because it links to the funder's mission. So looking for those links is quite important. And let's apply this right away. You have access to the chat right now. Can you write in the chat what the output of this training session is? You are participating right now in a training on public engagement for researchers and supporters of research, within the GRACE project. What is the output here? Can you write it in the chat? The video recording of each session is definitely an output. All right, I would say the first output is simply this training you are participating in, the event we are having right now. The second output would definitely be the recording of what has happened. Another output would be the report that might be prepared out of this, absolutely. Another output will be a resource sheet that I will send you after this session, where you will have links to all the various topics we have discussed, so you can look for more detailed reports and other resources. All of these are outputs: things that are created, in themselves. Now, what do you think is the outcome of this training session? I would say that Jessica has already written one of the outcomes, and Jovita another one. So one of the outcomes is new knowledge. I do not think I created new knowledge by myself in these sessions, but I would say that we did maybe create a new awareness of what public engagement is, maybe even a new understanding of the various types of public engagement action, or a new understanding of what evaluation of public engagement is.
We also created maybe new skills: maybe now you have some basic skills in how to evaluate or how to build a public engagement action. We created an awareness of public engagement. There was a knowledge transfer, that is absolutely true. And now, what is the wider impact? We already have one from Maline here in the chat: more public engagement activities. So one of the wider impacts I could claim, if I were to write a report on this, would be to say: I designed this action hoping to have more public engagement activities done by researchers in Europe. Obviously this is just one action among many towards that, but I am contributing to it. What else do you think this training session is trying to have as a wide impact? We already saw one from Maline, more public engagement activity. Oh, transformation of the researcher-public relationship. Great, so this is also a wider impact: we would like the relationship between researchers and the public in Europe to be transformed. Probably, if I were to write it down, I would be a little more specific about what kind of transformation I want. Do I want it to become awful? Do I want it to become hateful? Or do I want it to become closer? What kind of transformation? But that is exactly a wide impact we could be contributing to right now: transformation of the relationship between researchers and the public. We could also say that we are contributing to a new culture of research: in the past, research used to focus basically on publishing academic articles from your work, and now we are trying to transform what the culture of research is, towards a new culture of research that embeds public engagement. We can have, yes, more effective collaborations between the public and researchers. We are contributing to more transparent research. Absolutely, Mimi. Thank you very much.
So I think you have identified very relevant ones here. You can see the outputs are very clear: they are the things we are creating, the objects, the events that are happening, the reports written, the recordings made. The outcomes: we created maybe a moment of shared discussion around public engagement; we also created a new awareness and a new understanding of several elements. In a few minutes I am going to ask you to create plans for public engagement, so we created a bit more than awareness: maybe I am even going to change some of your behaviors, because maybe you are going to engage in public engagement in the next few weeks. And last, with all this we are contributing to wider impacts: transformation of the relationship between researchers and audiences, contributing to the transparency of research, and fostering and strengthening public engagement activities by researchers in Europe. Great. So these three elements are, I would say, the first three things you have to identify when you are planning your public engagement evaluation: what are the outputs, what are the outcomes, what are the impacts? And then you can ask yourself: how can I know? Did I produce the outputs? Are they of good quality? Did the outcomes happen? Did we create this new awareness? Or maybe at the end of this evaluation you will say: at the end of this training session, I do not know at all what public engagement is, I do not have a clue anymore. So maybe it was not productive in creating this new awareness. And last, did it contribute, and that is the most difficult thing to evaluate, of course, did it contribute in some way to this wide, long-term impact? Let's come back to the presentation. There are very usual evaluation questions for public engagement that we very often ask. The very first thing we almost always ask ourselves when we evaluate is: what is the reach? What are the numbers?
How many people read our book? How many people came to the event? Who were the people involved? What were their age groups, for example, or were specific groups present? I also wrote "people reached" and "people involved", and these can be quite different. By "people reached" I mean people who were exposed to the material: people listening to the talk, listening to the radio, seeing my show. "People involved" usually means they have been involved in a stronger way, that they have participated in the action. It is not just reach; reach would be people simply exposed to a post on social media, whereas involved people actually took part in an activity. Then: what is their general experience? We can ask about satisfaction, about emotions and feelings. Just a word about this. Satisfaction is something we very often, if not always, ask about, and to be honest it is not always very useful. If I ask you, are you satisfied with this training, and you tell me yes, I will be very happy, but will I know much about what happened? Not much. If you tell me no, I will be very unhappy, but will I know much? Not much either. So it is something you can ask, but it usually will not give you much insight. Whereas if I ask you, what emotions did you feel, what feelings did you have, that might give me much more precise information. We also wonder whether we created a new awareness of some things, whether we changed the perception of researchers, or the perception of the research topic, or of your daily life, whether we created a new understanding or new knowledge. And there are also questions like: what do people remember of it? Does it resonate with other experiences? Because your public engagement action is never alone; it is always linked to other elements. And does it change people's behavior, individually or collectively?
So these are the usual questions we are going to ask ourselves when we evaluate public engagement. For data collection, the first thing I have to say, as we always do, concerns the distinction between quantitative and qualitative data. Some of you may already be familiar with this. Thank you, yes: platforms like SurveyMonkey are definitely also very good, and can be very relevant if people have access to them. Quantitative data would be numbers, ratings, or responses to factual questions. Did you like the event? How would you rate the event: excellent, very good, so-so, not really nice, or absolutely awful? You have people rating the event, you gather their answers, and you can do statistics with them. Quantitative data is great for getting a lot of responses: you can send the questionnaire to every participant, do statistics with the answers, and that gives you quite an important insight about your event. And it can be very representative, as you can have a lot of answers. The drawback is that you do not know what actually happened in people's minds. People tell you whether they liked it or not, they give you a rating, they give you numbers, but what actually happened in their minds and in their hearts, what happened to them in all its complexity, you usually do not know. So when you want to improve using quantitative data, you usually find out whether you need to improve, and sometimes in which area, but not precisely what happened, what the issue was, and what needs improvement in detail. For that, you will turn to qualitative data, which are responses to open questions, discussions, sometimes even artworks that can give you qualitative data.
Here you will understand what is happening with a lot more depth, but it usually requires a lot of time to get responses from only a limited number of people. Typically, qualitative data would come from an interview. You interview someone, you ask them questions, they answer and discuss with you. You discuss a lot, you spend half an hour or an hour doing the interview, then you spend some time analyzing what they said, and that takes a lot of time for only a few people interviewed. In a way this is less representative than the quantitative approach, but then you really understand what happened in much more detail. So, ideally, you will try to mix both techniques if you can: a quite representative, broad overview, and then an understanding, with a few people, of what actually happened. That is the ideal. You cannot always do this, because evaluation can obviously be constrained in terms of time and resources, and in that case you will just do one of the two, but it is great if you can have both. An element I have to mention when we speak about evaluation: always be ethical. That means, first of all, treating participants with respect. If someone says they do not want to answer a question in your interview, please respect it. You never know what people are actually living through and what their experience is. Sometimes they have very good reasons not to answer a specific question, or to ask for a specific arrangement. So be extremely respectful when you are evaluating. Be fully honest and transparent about what you are evaluating and why you are evaluating it, so that people do not take part in an action they would consider contrary to their beliefs, opinions or values, for example. The question of ownership is very important: always ask permission to record or to collect data.
And usually, if you make a recording of an event where you will also record the participants, that is great for evaluation purposes, because you can come back to the recording and see what was happening to people. But always, obviously, ask for permission. For this you have very classical consent forms. Make sure people are giving their consent, make sure they are able to not give their consent and still participate in the event, and that they do not feel pushed into giving it. Be honest about the constraints influencing your decisions, and do not leap to conclusions without evidence. This is a matter of integrity: when you use the data, be very, very honest about what the data can help you say, and about what it does not support as a conclusion. And the last one, which is extremely important, is confidentiality. If you are taking pictures, filming, collecting any data, or recording interviews with people, please protect the data and keep the recordings in a safe place. There may be elements of their lives that they tell you about and do not want shared widely or accessed by anyone, so always keep the data in a non-accessible repository. All right, I will give you a few examples of tools now, just three to begin with. You already mentioned questionnaires. The questionnaire is the most classical, most usual evaluation tool, the first one we always jump to because it is the most obvious. It is also the most boring one, to be honest, and the one people hate filling in, so it is not necessarily the best one. Just a couple of reflections on this. We love questionnaires because we can ask our questions directly: if we want to know whether it was fun, we can ask people, was it fun, and they say yes or no. So it seems very straightforward.
But this is often an illusion, I find, because very often, when people see a questionnaire, first of all many of them cannot be bothered to answer it. They do not want to spend more time just answering a questionnaire, especially a long one, so they often do not answer part of it. And the ones who found the event boring are certainly not going to make themselves even more bored by filling in a questionnaire to say so. So usually the only ones who answer are the ones who found it fun, and you will get plenty of answers, all telling you your event was fun. So the questionnaire is not always the best tool; be aware of this kind of bias. By using a questionnaire, I am only getting the answers of people who are happy to spend ten more minutes on the event, so who are not already bored. And when I hand out a questionnaire and ask, was the event fun, people might also be tempted to write yes just to please me and to flatter me, because they want to be nice to me and they are nice people. But that is not the answer I want; I want to know what actually happened. So questionnaires are always possible, but they have drawbacks. If you do use questionnaires, I encourage you, first of all, to make them extremely short, and by extremely I mean as short as possible. The one-question questionnaire almost never happens, but it would be the ideal. Just have a few questions. If you can keep it under five questions, I think that is best. More than five questions becomes a real task to complete, and I do not think that is very kind to your audience. It is possible, I have done it several times, we all do it, but I would encourage you not to go beyond five questions.
If you can, work on the graphic design of the questionnaire. The one you see in the picture is designed for children, so that children see right away that it is something for them and not a much more formal, usual questionnaire. And on the right, you can see a kind of flower. Imagine that at the end of your event, rather than handing out a questionnaire, you just put up some posters, and one of the posters asks: did you enjoy it? This event was super fun, nice, not bad, rather boring, extremely boring. People have little round colored sticky dots, and they just put their sticky dot on the answer they find appropriate. So just on their way out of the event, they put a sticky dot on the flower, and they are actually answering your questionnaire, but in a much more engaging, much nicer way. You will get many more responses, and probably not only more data but also better-quality data; such responses are also often more honest. Let's move now to interviews; you have mentioned them, and I just want to say a few words about them. For semi-structured interviews, you will often just pick a few participants, maybe two or three, and have a discussion with them, either as a focus group, the three of them and you, or individually, one to one, one after the other. I want to stress a few things regarding interviews. In semi-structured interviews, you are looking for qualitative data, which means what you are looking for is people speaking. You want them to speak as much as possible, and you will usually prepare some interview questions. We speak of semi-structured interviews because we prepare a few open questions, four or five, and ask them: so, what happened? What was the event like for you? What feelings did you have? How do you perceive climate change now? And people will speak.
Our goal is to have them speak as much as possible, which means the questions have to be open. We do not ask questions that can be answered with yes or no, or with just one word. We ask questions like: what happened? How do you perceive climate change after this conference? People have to build sentences to answer. We try to make these questions very simple, so that people do not have to struggle to understand them, but are gently obliged to speak to us a little in order to answer. And usually we want them to speak a bit more. So typically you ask: how did you experience it? How was it for you? And someone tells you: well, I felt quite awkward. A very usual trick we all use is to repeat the last sentence of the person we are interviewing, with a question mark at the end. Someone tells you, well, I felt quite awkward during the event, and you say: you felt quite awkward? And you just wait, and usually they will start elaborating. This is very useful, and I urge you to use it as well in your personal discussions, in your family, and in your professional discussions in your teams. Whenever there is a statement where you feel everything has not been said, just repeating the last words with a question mark is a way to gently invite the person to say more. Whereas if I said, oh, why did you feel awkward?, that is a little more difficult to answer. Or, did you really feel awkward?, that is much more directive and can even be a bit judgmental. We try not to be judgmental at all, and that is why we often just repeat the last words, or simply say: oh yes? Oh, that's interesting, can you tell me more? Just these little things make people speak, and that is how we will gather as much data as possible.
The questions you prepare for interviews are questions to start with. What we are looking for is not so much the first answer as the discussion that comes afterwards, where people will often tell you a bit more and be a bit more intimate. The very first words are often words to please you. They will say: oh yes, the event was great, that was great. And you say: oh, that was great? Yes, yes, that was great. Although there was this and that... And that is when you gather the interesting bits.

The last thing I wanted to stress is that you can also do evaluation without words, and that can be important for some specific audiences. You can have people speak or answer questionnaires, but you can also have people draw, or, here again with words, write on sticky notes placed on a board or on an object. If you have an object which is representative of the research, for example if you talk about climate change, you can have a globe of the Earth, and people write a sticky note and put it on the globe, which is a bit more meaningful than putting it on a board. I remember one time I was doing an activity about new technology. We were in a room with screens and technologies of virtual reality and augmented reality. At the end, I asked participants to write a question on a Post-it. I said: after you have discovered this new technology, I would like you to ask a question about it, about what it is, about what we can do with it in the future. They had just spent half an hour exploring this technology, so they knew a lot, but I asked them: what question would you ask about it? Write it on a Post-it and put the Post-it on the technology, on the object. So at the beginning we had a shiny new screen, an object of technology.
And at the end, we had the screen and this object covered with Post-it notes questioning it. This can be quite engaging as well, and once again, it's a nice way to collect data. If you don't want to use words at all, instead of Post-its you can also ask for drawings. One technique I like very much, although it is quite intense in terms of time, is collage. It's nice because people don't have to draw; some people are a bit afraid to draw, they feel clumsy, they feel they don't know how. For a collage, you just give people a pile of magazines, lots of magazines of different kinds so that they have different kinds of photos inside. You give them some markers, in case they want to draw or add some elements, you give them glue and scissors, and you ask them to make a collage of what happened, or of how they feel, or of something related to your evaluation question. They can even add captions with the markers, indicating some words if they want, but they can also leave it just as it is. Then you look at the collage and you simply ask people to tell you what their collage is. It's a nice way to move a bit away from words, so that people can just feel: what did I feel? They look at the magazines, and usually they don't know what they feel, they don't know what happened. They look at the magazines and suddenly they see a forest, an image of the sky, an image of someone who is happy or someone who is miserable, and they start adding these to their paper and organizing them, and it makes sense for them little by little, progressively. At the end, they show you their picture. This is part of the data. They may also present their picture with a few words, and that is part of the data you collect as well. It's a very, very nice way.
Once again, it takes some time, but it's a very nice way to collect data, to get an insight into the experience of participants in your event, and to get some feedback. There are many, many more methods. One of the resources you will find in the resource sheet is the Public Engagement Evaluation Toolkit developed by Queen Mary University of London. For evaluating public engagement, a lot of toolkits, guides, and handbooks have been developed: there is a nice one from the NCCPE, and several from British universities and other universities in Europe. This one, developed by Queen Mary University, is quite nice; it comes in two booklets, and the second one focuses mostly on tools. You see here a wide variety of tools that you can explore more in depth in this report. It will give you, once again, more ideas about how to collect data in ways that are not just sending out a questionnaire, that can be more engaging but also easier for you to handle, and that give you other kinds of information.

The last thing you will do, obviously, is analyze the data. You will do this in a three-step process. First of all, you try to notice and collect. Noticing is quite important; once again, we come back to the idea of observation, but also, when you have gathered your data, noticing what is happening when you look at it at first glance. Does it tell you something already? Do you already notice a pattern, or something specific that happened? Then you try to sort and think. Are there some clusters? Lots of people saying it was great, lots of people saying it was not great, lots of people saying it changed them in this way. And obviously you will use your critical thinking to interpret the data and say: all right, we can see that it changed people's perception in that way, but they don't seem ready to change their behavior.
Or: this worked and this didn't. Or: that part worked very, very well, but that part didn't seem to have any impact, so how can we change it? This can be done in a very structured way or in a quite loose way. Obviously, analyzing the data takes time. You will also need to come back constantly to your objectives and to your evaluation questions. What do I want to get answers to? Where in this data do I find answers to my evaluation questions? Does this data tell me whether or not I have reached my objectives? Are there patterns, or data to group together? This is the idea of coding, which, as an aside, is quite technical, and I don't think you will do any real coding at that stage. More simply, if you have gathered some Post-it notes, just as we did in the activities here, group them if some are similar or form clusters, and see what the main emerging clusters are. You can simply print the text that has been written and highlight the key points you see. Always find some representative quotes. Nice quotes are great because they also bring a little more sensitive content, so try to find quotes that are representative but also quite nice to put in your report. Be critical in your interpretation, obviously, and be reflective. The ultimate goal of all evaluation is to ask: what happened, what worked well, what didn't work well, and what would I do differently if I were to start again? And that serves not only me, for my next action, but also other people when I share it with them, so that it can support other researchers doing public engagement.

All right, we still have a bit of time, and I would like to use it to ask you, yes, just checking in: what are your plans for the future? How do you plan to add public engagement to your research?
If you are a researcher, or if you are a supporter of research, how do you plan to add public engagement to your support? Do you plan to give a talk? Do you plan to launch a call for public engagement actions, or to add a public engagement condition to some support you are giving to researchers? Or do you want to bring public engagement into your lectures? Great. I would like you to reflect on this, and then, and we will do it right away, to reflect on the very first steps you have to take. So not the whole plan of everything you have to do to get there, but: what is the first step? And the other question I was wondering about is: with whom do you need to speak about it? Do you need to speak about it with a supermarket that will be a partner? With your supervisor, or with a collaborator? With the possible audience? With whom will you have the first conversations about this?