Hi everyone, thanks so much for joining us today. My name's Katie Joyner. I'm the Patient Advocacy Program Manager at MPE. We're really excited to have this webinar today to talk about evidence-based patient advocacy. And I will go ahead and introduce Yann so that we can get started right on time and make the most of the afternoon. So thank you so much to Yann Geisler, who will be presenting today. We're really excited to have him talk about evidence-based patient advocacy and how to apply the principles of evidence-based patient advocacy to the European Cancer Summit, which will take place tomorrow and Thursday. Yann has been a patient advocate since 2001 and has co-founded a number of patient advocacy organizations, including the CML Advocates Network and WECAN, which I'm sure a number of you are familiar with here. He's a member of the European Cancer Organization's Patient Advisory Committee and he is also CEO of Patvocates. For those who didn't have a chance to review the email we sent, just a few notes about housekeeping: this webinar will take place in two sessions today. The first will be on evidence-based patient advocacy and the second will be on applying those concepts to the summit tomorrow and Thursday. So if you are not planning to attend the European Cancer Summit, I'm not sure how applicable that session will be for you, and if it's not of relevance, you are welcome to leave the webinar during the short break in between. So we'll go from about now to 3.30 in the afternoon on this topic, then we'll have a short break for those who need to leave, and then we'll go into the session on the European Cancer Summit. So without further ado, I'll turn it over to Yann.

Thank you so much, Katie, for your introduction, and thank you for all your help also during the preparation of this session. I'm delighted to have you all with us. 35 people have turned up, which is great.
I hope you're all safe in these uncertain times, and the advantage of having a virtual European Cancer Organization Summit is also that we can actually open up this training to a larger group: not only the Myeloma Patients Europe Advocate Development Program participants, but a couple of other advocates who have joined us this afternoon. So welcome everyone. This is actually done as a webinar, so there are some specific things to think about in how we run it, because of course I want to give you the chance to ask questions in between and interrupt me, which in a virtual meeting is not so easy. But before I do that, just to outline what we're going to cover today. The first part, as Katie said, is about evidence-based advocacy, more in theory and a bit in practice. So why do we need evidence when we do our patient advocacy work? What is evidence-based advocacy? Then I'm going to speak about some examples of evidence-based advocacy, where some of the European umbrella organizations have actually used it in research, regulatory and health policy work, some learnings that we had, and a bit about methodology in evidence-based advocacy. Of note, this is about advocacy. This is not about generating scientific evidence. It's about generating evidence that supports our work as advocates. The second part will specifically cover the European Cancer Organization Summit's agenda. So I have a couple of slides about the agenda as such, and I will take you through session by session with some thoughts about why these topics matter to us as a patient community and how we could use evidence to support our argumentation on those kinds of topics. So that's the plan. Just some house rules on Zoom. We're using the webinar format, so you can't use your microphone until I let you do that, and that's why I'm monitoring the chat, which you can find at the bottom of your screen.
I'm also going to monitor the Q&A, the question and answer section, where you can post questions and I can decide to answer them now, answer them later, or answer them in writing. And I'm going to use the raise hand function. You can find that when you go into the participant list and scroll down to the very end of the long list of names. There's a small button that looks different on every Zoom version: sometimes it's a blue hand, sometimes it's a button called raise hand. If you open that (yes, thanks William, you tried it), then I'm going to see that you have your hand raised and I can either give you the microphone or respond to a question you've posted in chat. So feel free to ask questions while we're doing this. As I said, I'm going to look at the list so I can see your hands up. So without further ado, I actually want to step into the topic of evidence-based advocacy. Why are we advocates? Of course, the core thing that we do as patient advocates is support patients and their families affected by a challenging disease like cancer to make the right choices for themselves. But we also want to help clinicians to give the best service possible to their patients and to prevent bad and outdated practice, which, even though healthcare systems tend to say it's not the case, we know happens all over the place. We want to influence regulators and payers to make sure they're basing their decisions on our preferences, on what patients really need, and not on what is probably needed from a regulatory or from a budget perspective. They should really know what patients want, and then they can decide whether that fits into their healthcare system, and not the other way around. And we want to tell politicians to do policy for patients, not just about patients. All these things, I think, are self-explanatory for many of you. So again, if we think systematically about patient organizations, there are three core areas that every patient organization works in.
Some focus on patient support: peer-to-peer support, helping patients to inform themselves, giving them support and helping them navigate the healthcare system. Then health policy is working with policy makers and with the different regulators to really make sure they're basing their decisions on patient preferences. And research is really making sure that researchers are doing the things that matter most to us in terms of outcomes and priorities. Some patient organizations focus on patient support, some charities are very focused on research, and some European umbrellas are mainly doing policy in Brussels or in the capitals of the respective member states. Not all organizations are working on all three levels, but in the end evidence is important on all three. And that's one of my favorite cartoons, which Zack, whom some of you know well, CEO of Leukaemia Care and also of the Acute Leukemia Advocates Network, uses quite frequently, because what patients want is often a matter of perspective. If you just look at the left picture, this is how parents might see it, and on the right, that's the perspective of the actual individual, which might look very different and might give a quite different picture of why people like things. What we sometimes tend to do as advocates is say that 'doctor knows best' is not the right approach, so we try 'patient knows best', and sometimes that's also not the right thing. Even though we're angry, even though we're really desperate to change things, and we see misconduct, we see people suffering, we feel this needs to change, slamming our head against the wall might sometimes be a good instrument, but in most cases it's not the best way to do it. Emotions and opinions matter, but if they're supported by data, then they become really powerful. And unfortunately, especially when we talk about cancer, there are often unattractive choices and trade-offs that patients need to make.
That's unfortunately still the norm in most cancers today, where you need to decide between the unknown, because there's no data, the bad, where you know that it's going to be a bad thing, or the ugly. That's the difficult trade-off, and that's where we need to help the different stakeholders, the regulators and all decision makers in healthcare: to make sure that they understand that this difficult trade-off exists and that it's not the same for every patient. Regulators and policy makers try to think in categories. So they say, okay, patients with a specific stage of disease at a specific age might have specific preferences, all of them, which we as patient advocates know is not the case, because the same disease, the same stage of disease and the same age might mean totally different patient preferences, because patients are not the same. And you can see one of the charts on the right side, adapted from Francesco Pignatti from the EMA, where you can actually see that you have different typologies of patients: on the vertical you see estimated harms, on the horizontal you see estimated benefits, and even though it might be the same disease, same stage, same age, you might have patients who are more harm tolerant, so they probably weigh up the benefits differently than the harm intolerant patient. And of course the policy maker tries to make a choice somewhere in the middle between the two, which doesn't mean it's the right choice for the individual patient, and that's what we need to try to convey. And of course, to argue in that case we need to be quite clear about what these different subgroups want. So what does the harm tolerant patient want? What does the harm intolerant patient want?
And do we need to find that kind of middle ground, or might we, which is the judgment that patients and responsible doctors make all the time, try to find the best individual solution? That's difficult for policy makers to take, of course. That's the kind of difficult balance we need to find when we're discussing with all kinds of policy makers, who of course can't have a law or a regulation for every single case. And what is even more important for us is that patient advocates should know their trees and make sure they bark up the right one. We know how often it happens that people complain that the EU should do this or should do that, and if we really look at the reality, we know that, for example, that might be a Member State competence. For example, if you talk about reimbursement of treatment, that's certainly not an EU competence, and the EMA doesn't have a say in that, while for cancer drugs the approval of medicines is usually done by the EMA on the European level. There might be things that are decided by the European Parliament, where the EU has competences, and there might be things that are decided by your Member State parliament. So it's very important to understand that difference, because it doesn't help if you complain about the EU not doing this or that if they're not responsible for it, and vice versa. The same applies to physicians. You could talk about physicians or healthcare professionals in general, but again, when you think about advocacy, what's the right target group? Is it something that a specific clinician, because he's the key opinion leader in the community, might influence? Is it the study group, because you need the leading doctors to agree on something? Or is it a medical society like ESMO or EHA or others, which might think about things more systematically or might be responsible for guidelines.
So when you talk about healthcare professionals, again, there are different stakeholders, and you need to make sure that you're complaining and suggesting to the right group. The third one is companies versus industry associations. There are things that you can do with specific companies: if they have a specific development in a very specific disease, then you can work with a single company. But there might be issues which are more systemic, like, for example, codes of practice. That's a typical thing you would do with EFPIA, or ABPI in the UK, or VFA in Germany, or one of the other industry associations. That's the systemic level. So again, when you complain about industry, or if you want to suggest something to industry, think about whether it's single companies, a group of companies, or the association. And then of course there is disease-specific action. Some things you can achieve within, let's say, osteoporosis, just as an example, but it might be melanoma, myeloma, CML, lymphoma, brain tumors, whatever. There might be things which are specific to the disease, but the European Parliament will never have a CML law or a lymphoma law or anything like that. If you want to address inequalities, or data protection, or things like that, then you need cross-disease joint action. You need to work together with the other umbrella organizations to make a systematic change. And the more diseases the better: when you can prove it's the same problem in lymphoma as it is in CML, as it is in brain tumors, then you can actually address things in a healthcare system. So know what you can do, and also think about what umbrella organizations can do. We tend to say, can't EPF or WECAN or ECPC or any other organization actually fix this? Probably they can't, because they only have a couple of staff, if at all, and they have their limits. They're very good at some things, and they're probably not resourced to do other things.
So again, think about what your organization can do and which is the best-positioned organization that you could partner with to bring a specific thing forward. And then again, science and policy are usually not separate. They usually go hand in hand, because you need data from science to actually take policy decisions, and you need policy to set the right framework for science. But what is really the core is that, first of all, you need to know what you want. What do you want to change? What's the specific thing that you want to address? And then know how to achieve it, so think about how you get there; that's very important. One of the things that I really liked was a presentation we had at our ESO Masterclass in 2018 on, more or less, negotiation tactics, because a lot of what we do in advocacy is actually trying to convince others to understand our issues and hopefully make a difference. And there's this seven-step process which I always find good when thinking about negotiations. You start with defining your outcome and the fallback scenario, because you probably won't get A, so you need to think about what your B is: if you can't get A and you don't have a plan B, you're probably not getting anything. You need, of course, to consider the other party's position: what do they want, and what's their resistance point? So where do they want to counter what you're bringing forward, and why? Then be very clear with your reasoning: what's the benefit of making a difference, what are the risks, and probably address all of them. And, that's why I've marked it in red, actually present evidence and proof to reassure them that the reasoning, the benefits and the risks are clear, that you understand the risks, that you understand the benefits, and that you can clearly say why things should happen. If you have evidence on that, then you can really make an argument.
Then of course you need to make it compelling, so that people understand it and actually want to think about it, and you need to explore the barriers and resistance and address them with empathy. If you're just saying 'you're damn stupid', you're not going to get anywhere. If you try to think about where the resistance is coming from and try to take people and give them a different direction, that's probably better. Then present your case and ask the other party for commitment. Not all of it is based on evidence, but evidence is a big part. That's basically a good guideline for successful negotiations. And that's where we get to evidence-based advocacy. Evidence-based advocacy means that we as patient advocates advocate in a targeted, evidence-based, well-educated and professional manner, and we measure the impacts and outcomes of what we actually do. And look at the three circles, because you're going to see those elements all the time in the examples I'm going to explain later on. The first is targeted advocacy. You try to think about what you want to achieve and how you get there, and you think about which tree you are barking up. Who's the target group? Who do you want to address this to? It's not a good strategy to say you're speaking to everyone; you think about who the decision maker is and what the supporting circles around them are, and then target those. Then, really good data. You need to think about how you support your argument: with numbers, with surveys, with data, with patient preferences, with interviews, with whatever. So think about really good data that would support that argument. And the third circle is the right packaging. Connect it to the target group: if you want to bring something forward to a politician, think about the elevator pitch that you can actually do in two minutes, because otherwise they will stop listening.
If you talk to a scientist, you probably need to bring papers and science and data; you need to be quite well-equipped for that. If you want to talk to lay people, you need to use simple language; you need to break down complexities into something that somebody who's not as involved in advocacy as you are can actually digest. So that's the right packaging. Think about how your target group would perceive what you're trying to say, and how you package it in a way that they can actually understand. That's where the sweet spot in the middle, evidence-based advocacy, sits, and that's what we're trying to improve, because we have been hammering our heads against the wall for years. For the 18 years I've been in advocacy, we've been doing that, but evidence-based advocacy is something we knew was right and that we're trying to do more systematically these days, and that's why we have this webinar today. So, generating evidence: let's talk about how we generate it, because you can't just find most of the data that we need as advocates to convince stakeholders, so we need to generate the evidence ourselves, and we can do it. And what's the challenge? We've increased the awareness, and the Kidney Cancer Coalition, Myeloma Patients Europe, Melanoma Patient Network Europe and the WECAN Academy have addressed evidence-based advocacy for quite a while. However, many organizations are still not equipped, because they don't know: how do I do that? How do I resource that? How do I find my way to run a survey in so many countries, find the right patients to respond, analyze the data, publish it and so on? So many organizations lack the knowledge on how to generate data in a methodologically sound way. Often we don't have the capacity to implement a project like that, because of course this is hard work. We sometimes lack the knowledge of how to use the data in advocacy.
So how do we apply it? How do we package it? How do we bring it forward? Where's the forum where we can use it? And then the knowledge, capability and resources to publish the data. I'm going to speak about this, because we sometimes forget about it and leave great data unpublished, not out in the public. And then of course our own strategy to generate data proactively, and not just when we're being approached by industry, because they have an interest in generating data for a specific purpose. And I think the graph on the right shows it quite well, because we usually think generating data is a linear process, but for us as advocates it often feels like trial and error: we're trying to get from A to B, but we sometimes get stuck and end up at an X. This is why, for example, in WECAN we do this evidence-based advocacy training. We do it today in the Advocate Development Program in a very general way, but we also run an evidence-based advocacy training program over a year, which Zack coordinates. So, thinking practically: that was the theory. Thinking practically, what are the issues, or what kind of data could patient organizations generate? I'm just going to bring in some examples and try to use real-life examples from our community. Adherence to therapy: doctors think patients take their therapies, but we know that in reality they don't, or many don't, for different reasons. Inequalities: we know these kinds of charts where it says a treatment is available in a country, but we know that in reality only a fraction of patients actually get it, for various reasons. Current care patterns, where we know that care is probably different in reality than in the ideal world that is often being shown to us. So we can actually show what the real patterns of care are in a disease and what the real experiences and journeys are.
We know, especially in cancer, that in some diseases patients just go through a series of misdiagnoses until they actually get to care, and then it's anything but, let's say, a linear road to getting the best care possible. Quality of life, burden of disease and daily lived experience are very often omitted, but we know pretty much what patients experience: what it means when they come home from the doctor, when they leave their normal working life or daily life or family life or whatever, and what the real burden of disease in everyday living is. I think we have quite a good feeling for that, and that's where we can provide data; many of our organizations are generating surveys on that topic. Then, the impact of illness on society. At the moment we unfortunately see quite well with COVID what impact, let's say, illnesses can have on society, but what the real impact is, is something that we see pretty much on the ground as patient organizations. Then disease-related outcomes: what are really the outcomes that we're seeing in patients, and also probably the outliers that are not usually on the big charts. And then of course benefit-risk: if we think about what the benefits are, what the risks are, what do patients prefer? And I'm going to speak about that in a moment in a bit more detail. And in the end, if we think about evidence, it's very important that we listen to the evidence. Sometimes we have a preconception, and when we generate evidence we see that our hypothesis is probably not right. So we should be ready to be challenged.
We should be ready for the data we generate to question our position, because we see that very often: we're all patients, we have our own experience, and sometimes we have predispositions in how we personally see things. It's very important that we generate evidence even if it contradicts our own thinking, and then be open to admit this and act on the evidence, even if that would not be our personal preference. Do not use evidence just to back up your existing point of view. Of course you're doing research because you have a hypothesis, but it's important to actually challenge it. So now some examples. I've stratified them into disease mapping, so typical disease-specific issues; inequalities, so inequalities between groups; and health policy. That's probably also important because many of these topics are going to be addressed in one way or another during the summit from tomorrow. So first, disease mapping. Disease mapping is the typical type of evidence generation done by patient advocacy groups, in different forms: for example, patient experience data, data on quality of life, data on the burden of disease, data on the unmet needs where none of the treatments or care patterns actually fulfill what patients really need, patient preferences, or very specific disease-specific issues like adherence. That's more or less the simplest category that we have. You can actually see there are a couple of examples from our community: the CML Advocates Network adherence survey, which I'm going to explain; the Lymphoma Coalition is doing their Global Patient Survey with a lot of data on lymphomas and CLL; in CML, again, we've done one on stopping treatment and deep remission; and the Acute Leukemia Advocates Network has done one on quality of life. I'm going to speak more about some of these examples as we move forward, so that you understand what the patient organizations have really generated.
One of the examples, one of my favorites that I've been using for many years, was data that was developed in myeloma back in 2009. They basically did a big survey of 300 physicians and 260 patients and carers in 43 countries. And one of the questions in the survey was: what are the treatment side effects that have the most negative impact on overall wellbeing? You can see here in this chart that there's a blue bar and a bright orange bar. Blue is physicians, bright orange is patients. On the horizontal you can see the percentage of people who agreed. And you can see that physicians rated hair loss in a much smaller proportion than the patients, where almost 50% actually said that's one of the most negative impacts. Or if you look at numbness of the fingers, neuropathy for example, you see a big disagreement: around 80% of physicians rated neuropathy as probably the worst thing the patient experiences, while you can see that the rate in patients was much lower. And you could see similar things, for example, for respiratory events, so difficulty breathing, where physicians thought this was just a minor problem, while almost one fourth of patients actually thought this was an issue, or one of the most severe issues. I use that chart all the time to demonstrate that asking just physicians only gives you part of the picture, because that's what is being dealt with in the clinic. This is what physicians see, and they very often don't see what really happens when patients come home. It's not that we know better, but we have a different perspective and we see things that are often not visible in the clinic. Just take sexuality, a typical example: almost no patient proactively addresses that with the physician, while we know that in some of the cancers it's really a huge issue.
But we know in our groups we hear about that a lot, and we can actually report a lot to improve the situation for the patients who are affected. Then again, the ALAN, the Acute Leukemia Advocates Network, quality of life survey. This was an example of evidence-based advocacy that was published recently. What ALAN did is run a survey amongst acute leukemia patients, translated into 10 languages. It was a very extensive survey with over 70 questions, to identify factors associated with differences in quality of life in acute leukemia. It was done because ALAN knew that, because of the severity of acute leukemias, which come at very short notice with very severe impact, patients are very often hospitalized within 24 hours after diagnosis, because the disease becomes life-threatening within hours. How do you deal with quality of life then? That's what ALAN tried to find out, and what they did is not only develop the questionnaire professionally, but also incorporate a patient-reported outcome measurement instrument called HM-PRO into the questionnaire, to assess the impact on quality of life in a validated manner, because a validated instrument incorporated into the questionnaire gives the data, let's say, a much higher evidence level. And this was a very, very successful example. It's been published at the American Society of Hematology meeting as a poster, and it's also been presented at EHA, the European Hematology Association congress. I think there was quite a lot of impact in the community, because ALAN was one of the first to really spotlight quality of life issues; given the severity of the disease, everybody is looking very, very much at the clinical side, leaving patients behind when they survive and have to live on with it. Then the CML Advocates Network adherence survey. I mentioned that in our community, that's my community, we actually saw that adherence to therapy is a huge issue.
When we talked to the physicians, they said, well, this is cancer, of course my patients are taking their medicines. While we knew that quite a proportion of patients don't, for various reasons: because they didn't want to take their drugs, because they were taking drug holidays over summer or on Saturdays and Sundays or whatever, or just because they forgot. We knew adherence is a huge issue. Because there was little data about this, we generated our own data with a survey. We recruited 2,500 patients from 63 countries in just three months. We again incorporated a validated adherence instrument, a scale that had been used in breast cancer, diabetes and so on, and we used that in our questionnaire to make sure we had, let's say, a good instrument to measure motivations for non-adherence. The project has been quite impactful, because the data was also presented at the ASH meeting and at the EHA meeting, so big medical congresses. And it led to quite a strong discussion, because the data showed that about one fifth of patients are poorly adherent, so they're not at all taking their drugs as prescribed, while only one third really followed the guidance on how the drug was actually prescribed. And I just want to show one example, because this was funny, when we presented the data at the European Hematology Congress in 2016. This was one of the charts that was presented. From the 63 countries we surveyed, we had mapped the countries with more than 30 respondents. We mapped out the percentage of doses missed on purpose in the last year and the percentage of doses missed accidentally in the last year. And you could see that the countries performed differently according to our responses. And you could see, and that's why I put the arrow here, that Australia was doing quite poorly in terms of accidentally missed doses.
So, unintended non-adherence, and quite poorly also on intended non-adherence. Then, after Giora presented that, one of the key opinion leaders, one of the top CML clinicians, actually came and said there's something wrong with the Australian data, because I know that patients are quite adherent, I know the patients in my center, and I think there must be something wrong. So what we then did was a sub-analysis of just Australia, and we sent it to the clinicians. I think that probably changed the discussion, and also the perspective that things might not be as they seem. That just demonstrates one example of really using evidence in advocacy work, because having that data was extremely powerful. Now another example: I talked earlier about patient preferences. Very basically, what are patient preferences? Patient preferences try to measure the patient's value for a specific component or attribute, in absolute terms or in relation to another attribute. So it's about relative importance and trade-offs. And you might remember the whole thing about the good, the bad and the ugly. If you think about cancer, we very often have this discussion about what you're facing. Is it overall survival, so you're trying to prioritize living a long life? Or do you prioritize a very high-impact, severe treatment, where you get into a critical state but you know that you're out of it after probably a year or so, like with a bone marrow transplant? Or do you go for chronic treatment, where you have low-grade but, let's say, permanent low-grade side effects? That kind of trade-off between overall survival, chronic side effects and high-impact, severe short-term side effects is the kind of choice we often need to make between the bad and the ugly. And basically the EMA acknowledged that they struggled to generate data on patient preferences between those different options without patient input.
So what they did, they ran a collaboration project with Melanoma Patients Network Europe and Myeloma Patients Europe to actually develop a methodology on benefit-risk. And this is basically what you can see in the small triangle here. Even though it was a mixed group, you can actually see the clusters: there might be some patients who are more on the side where they accept, let's say, severe toxicity, others who prioritize overall survival, and others who might prefer chronic side effects. And I think that's been a really interesting project, really helpful in developing methodology and also in stimulating the discussion. And then another chart, adapted from Francesco Pignatti some time ago, demonstrates the kind of difficult trade-off that we are often faced with, and where, let's say, regulators try to find one decision that fits most of the patients. Take these two scenarios: horizontal is the time in months and vertical is the survival probability. Everybody is alive at day zero, and you can see these unfortunate curves. Treatment A, which is in red, means that 50% of patients will be alive after three years, but everyone will be dead after eight years. Treatment B has a much higher impact in the beginning: 85% of patients will not survive the treatment or the disease in the first two years, but 15% have long-term survival. So that's the kind of trade-off that some patients need to decide on for themselves. Would they go for the small but real chance of long-term survival, or would they prefer to live a good three years? If you just look at that graph from a regulatory perspective, drug A might be better because more patients respond longer, or, put differently, the area under the red curve is larger. However, some patients might prefer treatment B because of the real chance of long-term survival.
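The trade-off on that chart can be made concrete with a small calculation. Below is a toy sketch in Python: the two survival curves are assumed to be piecewise linear between the figures quoted above (50% of patients on treatment A alive at 3 years, none at 8; 85% of patients on treatment B dead within 2 years, 15% surviving long term). The curve shapes are an invented simplification, only the anchor percentages come from the talk; the point is to show how the restricted mean survival time, the area under the curve, can favor A even though only B offers long-term survival.

```python
# Toy restricted-mean-survival comparison for the two hypothetical treatments
# discussed above. Curve shapes are assumed (piecewise linear); only the
# anchor percentages come from the talk.

def restricted_mean_survival(times, survival):
    """Area under a piecewise-linear survival curve (trapezoid rule),
    i.e. the mean survival time restricted to the last time point."""
    return sum(
        (survival[i] + survival[i + 1]) / 2 * (times[i + 1] - times[i])
        for i in range(len(times) - 1)
    )

# Time axis in months, horizon 96 months (8 years).
# Treatment A (red): 50% alive at 36 months, everyone dead by 96 months.
rmst_a = restricted_mean_survival([0, 36, 96], [1.0, 0.5, 0.0])
# Treatment B: 85% die within 24 months, 15% are long-term survivors.
rmst_b = restricted_mean_survival([0, 24, 96], [1.0, 0.15, 0.15])

print(rmst_a, rmst_b)  # A: 42.0 months, B: 24.6 months
```

Under these assumed shapes, the area under A's curve (42 months of mean survival over the 8-year window) is larger than B's (about 25 months), which is the regulatory argument for A; the 15% survival plateau on B's curve is what some patients would nonetheless choose.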
And this is again data which we need to bring into the discussions, where we say A and B might both be valid, but for different subgroups. How do patients make choices, how do we support, for example, physicians in counselling patients who face that difficult trade-off, and how do we guide patients in making up their mind whether their preference is A or B? I think data helps a lot here, because it makes you think about the different ways you address these different patient populations. And then again, coming back to the EMA example I outlined, you can actually see there were subgroups in that assessment that Myeloma Patients Europe, Melanoma Patients Network Europe and the EMA actually identified. What they found, for example, is that there was considerable heterogeneity in the patients they saw, but they could see from the data that severe toxicity at the start, so very strong toxicity, was ranked as a greater concern among younger patients, those in work, those looking after dependent family members, and those who had more frequently experienced severe toxicity in the past. So that way you can actually start to think about how you target patient information or specific regulatory decision-making. So, moving to a different field, moving to inequalities. This is, let's say, an evolution where we talk about inequality mapping: looking for subgroups. Which are the groups, for example, with the worst outcomes or the lowest access, and what are the reasons that some groups might be strongly disadvantaged, by demographic factors like country, age, gender or a specific category? There are a couple of examples which the community has generated, and one that you might be aware of, at least those of you in the MPE Advocate Development Program, is the Access Atlas.
And that's quite a leading, pioneering project in the cancer patient community, because what MPE has done is actually to assess, across all the different countries, which access barriers exist. Sometimes we as advocates tend to be too narrow: people try to convince us that the price of the drug is the only factor that is really decisive in the lack of access. But we know that in reality it's different, because there can be so many different reasons. Cost and reimbursement of new treatments, of course, you can see here, is a very important barrier, but there are also slow drug approval processes, lack of clinical trials, lack of referral to specialists, bureaucracy, treatment side effects, lack of supportive treatments, lack of professional training, ethnicity, patient non-compliance. I'm just naming some of them; you can see them all here horizontally, 23 different reasons. All these barriers are actually making it hard for patients to access appropriate care. And what MPE has done is collect that data systematically: collecting healthcare system data, collecting data on specific treatments and their availability, collecting information from the MPE members, and collecting information from physicians and also from industry and regulators, and trying to merge that into a kind of map. The tool that MPE has today is much more sophisticated; this one is already from 2016, but I find it quite compelling. On the vertical axis are the European countries, sorted by gross domestic product and, ultimately, health expenditure. On the horizontal axis you see the different access barriers, and you can see how colorful this map is, which actually shows the inequality and diversity we have in Europe.
Now, of course, you can see much more red at the bottom, with the countries that are economically not as well off as the countries at the top. But you also see green next to red in some neighboring countries, which demonstrates that neighboring countries, even with similar healthcare spending, can be doing very differently in some domains. For example, if you compared France and Germany years ago, we saw about 10% differences in breast cancer survival between France and Germany. And of course it's a justified question for you as advocates to ask: why? If spending is the same, where in healthcare organization is the waste, or the lack of process, or the lack of good conduct, that can explain such a survival difference even though healthcare spending is similar? I think what this MPE map also demonstrates is that there is no single strategy for all countries. You cannot have one European access strategy. You need to think about the different barriers in a country, you need to collect data, and then you need to act on the local level to try to address these kinds of issues. And what the MPE Access Atlas program does is collect the data, but then do individual coaching of their member organizations to build a local strategy and do advocacy on the national level. So pan-European data translating into training on the national level to develop a strategy is quite a compelling approach, especially for the European members. For those in MPE, I can only advise you: look at the Atlas, it's quite a unique project. Of course other organizations are doing similar projects, but I would say MPE has been the pioneer here. Then the third area: health policy. We've talked a lot about disease mapping and treatments and inequalities between countries, but now let's talk about influencing health policy.
And I just want to quote one example; there might be millions, because we are all trying to convince politicians to change the way we organize healthcare. One example I was involved in many years back was when the Clinical Trials Directive was being reviewed, around 2010. That's the law on the European level that provides the framework for how clinical studies are conducted in the European Union. That law actually dated from 2001 and 2004, and it was enormously bureaucratic, and it still is, because it's still in effect today. But there was increasing unrest that we can only do good research if we look at that law and probably revise it. The Commission started consultations in 2008 or 2009, and it was the time when cancer patient advocacy was getting a foothold in Brussels. I was involved at that time, when I was director of ECPC, in looking at the Clinical Trials Directive. And when we went to the Commission hearings where they asked us for opinions, we actually collected data. I'm just quoting one example, from the German Hodgkin Study Group. We asked them: for a large clinical study, what is the effort you need to make in terms of safety reports under the current legislation? And the Hodgkin Study Group said that for one study, with 280 participating clinics and 65 ethics committees, they needed to provide about 100,000 copied pages of documents to ethics reviews and authorities. And for a study conducted in 13 centers, the study group had to provide 35 folders with 12,000 pages. And what we said in those meetings, and we put it in a position paper with all the numbers, was: this is not in the patient interest. We have an interest in safe studies, but more paper doesn't make them safer.
And one of the illustrations we used at that time, which was actually something I saw at a conference: you can see, with all the arrows, the reporting pathways for serious adverse events between the sponsor, the applicant and the marketing authorization holder, and the different authorities like the national authority, the EMA, the member states, investigators and ethics committees. And looking at these circles and arrows, you can see that this doesn't look like a streamlined process. That was basically one of the discussions that, in the revision of the law, led to the new Clinical Trials Regulation, which will come into effect at some point and will actually have much more streamlined reporting mechanisms. The reason it hasn't been implemented yet is that the central database for adverse event reporting and safety reporting, and for the whole clinical trials process, is still not in place, but it will come into effect at some point. It just shows the political process in the background. So, sound methodology. I just want to raise one point. I don't want to raise the bar too high, because a quick survey can also very often help to make your case. But at the same time, the more you move into science, and the more you move into politics, the more they look at how you have generated the data, how representative your sample is, and whether you have, let's say, asked the questions in a way that did not presume the response already. So methodology counts, and even though we don't need to be methodologists, we need to think about this. And what I just want to say is that science and policy involvement must go together, and it's not very difficult. I mean, if others could learn it, I can learn it too.
That's what I often thought when I was talking to scientists: I can just try to learn the methodology, because that's the way to achieve the goals. And we need to remember there is no single solution. It depends on the question that you have, and on what kind of evidence you want to generate. And there's a typical example, if you look at the comic: "Let's see, by a show of hands, who prefers quantitative data over qualitative data." So who prefers numbers over worded responses? And people show their hands. But in the end, when somebody gives a response in words, the teacher says: I should have mentioned you can only use numbers in your response. But you can't respond to everything in numbers. And that's why you need to think about the methodology you're using. That's why it's important, from the start, before you run a survey, to ask: are you asking the right questions? Are you asking the right people? Did you ask enough people to get a proper sample? Is your conclusion what the data actually shows? This is what I said: let the data also contradict you in your opinions. And is it statistically valid? Have you counted in the right way, and do the numbers actually add up? So it's important to think about methodology. And just to share one example from my community, because I mentioned the CML Advocates Network adherence survey. What we did well there is that we really tried to anticipate the issues, where we could be criticized by the clinical community, by the physicians, by other stakeholders, when we presented the data. We wanted to get it right. That's why we ran a pilot in the beginning: we tested our questionnaire with 150 patients from 10 countries from our members, just to see whether the questionnaire works as such, whether we ask the right questions, and whether everything can be understood properly. We did this only in English.
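On the "did you ask enough people" point, a rough back-of-the-envelope check is the standard margin of error for an estimated proportion. The sketch below assumes simple random sampling, which self-selected patient surveys never truly achieve, so treat the result as an optimistic lower bound rather than a guarantee of validity.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from n responses (simple random sampling assumed)."""
    return z * math.sqrt(p * (1 - p) / n)

def sample_size(moe, p=0.5, z=1.96):
    """Responses needed to estimate a proportion to within +/- moe
    at 95% confidence (worst case p = 0.5)."""
    return math.ceil((z / moe) ** 2 * p * (1 - p))

print(round(margin_of_error(2500), 3))  # ~0.02: 2,500 responses give roughly +/-2%
print(sample_size(0.05))                # 385 responses suffice for a +/-5% margin
```

The second figure is why even a modest survey of a few hundred well-chosen respondents can support a solid headline claim, while subgroup analyses (per country, per age band) shrink n and widen the margin quickly.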
Then we contracted a professional market research agency to provide the survey tool, conduct the survey and do the first data analysis. As I said, we used a validated instrument for adherence measurement; we needed to pay a license for that, to generate trustworthy data. And we used our global network, where we recruited 2,500 patients within a relatively short time. Interesting for us was that the best recruiters among our members were not the largest countries by any means, but the countries with the strongest patient organizations. Even small countries with good outreach to the community had far more responses than large countries with, let's say, only a regional group. And we cooperated with four clinical networks, because we also provided the questionnaire offline, through the clinics, with a stamped return envelope, because we wanted to show whether online responses differ from offline responses. And we actually saw that the differences are not large; there are some differences, but not large ones. So people couldn't say that just because you recruited online, your data is not valid because the patients who are not on the internet behave totally differently. We could actually reject that kind of thesis. But where we were really bad is that we didn't have a biostatistician from the beginning. So we have the data on the hard disk, but no one in place to do further analysis after the end of the agency contract. I mentioned the example of Australia; that worked well, but we couldn't do analyses afterwards, simply because we no longer have anyone who can access the data in that way. We would first need to find somebody, and that's not what you can do if you just want to answer a question in a snap. Then we had a pre-publication pitfall: we presented the data at a conference and then submitted the properly analyzed paper to one of the top journals in leukemia.
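The online-versus-offline check mentioned above can be formalized as a two-proportion z-test. The counts below are made up for illustration (the talk doesn't give the actual figures); the point is the mechanics: if |z| stays under roughly 1.96, the observed difference between the two recruitment channels is consistent with chance at the 5% level.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: both groups share the same underlying proportion
    (x = count with the trait of interest, n = group size)."""
    p_pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
    return (x1 / n1 - x2 / n2) / se

# Hypothetical example: 20% poorly adherent among 2,100 online respondents
# vs 22% among 400 paper respondents recruited through clinics.
z = two_proportion_z(420, 2100, 88, 400)
print(round(z, 2))  # about -0.91, well inside +/-1.96: no significant difference
```

With real counts plugged in, a non-significant z is exactly the kind of evidence that lets a group rebut the "your online sample is biased" objection before reviewers raise it.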
And in the end, one of the reviewers rejected it, saying: you presented something else. The data we had presented at the conference contradicted the journal article in some way, so the journal rejected the whole paper because of that. We had no publication strategy in place. It took us three years to get the data published, because we relied on a series of volunteers, one after the other, to help us write the journal article. They were all great people, but in the end they were too busy to really help us get the paper done. We should have thought from the beginning about paying a medical writer to do that for us, instead of hoping to be helped, because we know we're all, let's say, at the limits of our workload. One more thing: we were a victim of our own success. We licensed the adherence measurement instrument, but we only licensed 2,000 uses. We had forgotten about the contract two years later, when we had 2,500 responses. And then the license holder of that instrument sent us their license shark, who was just there to collect money. We said: we're sorry, our mistake, we're happy to pay for the 546 additional licenses at the price we agreed at the time. But they then threatened us with legal action and said they wanted ten times the license fee for the residual 546. They just wanted to cash in. This was not what we anticipated, but we've learned: if you sign these kinds of license contracts, you need to think later on, when you're more successful than you expected, about quickly re-licensing. We never used the qualitative data. We actually had a lot of open-ended questions and free-text fields, but because we collected them from 63 countries in 12 languages, we never really managed to process the qualitative answers.
So, the words and sentences that people provided: in the end, you talk a lot about the quantitative data, about the numbers, which you can analyze easily, but it's very difficult to go back to the written responses that you got. And then, of course, there were no resources for follow-up. Did we really achieve our goals in terms of developing educational tools? I'm not sure. We tried our best, but it wasn't really easy, because we didn't really have a plan for what to do after we got the data out. Great impact on the scientific community, but did we provide enough tools to our own patient community? I'm not sure. That's probably a bit of the price of being a volunteer group. And then finally, before I come to the end of this presentation, I just want to reiterate: publish or perish. If you want to, let's say, stand up in the scientific world, and if you want to talk to scientists, it's important that you don't just have a PowerPoint of the results of your survey, but that you also publish it in a journal, because that gives it a lot more credibility. If it's good data, it gets quoted, and there's a much different regard for what you've accomplished. I know we are advocates, not academics, but for some stakeholders only published material counts, and that's probably what we didn't take seriously enough in the CML Advocates Network from the beginning: to pre-plan how we would actually publish. To think about how we would generate presentations, how we would select the media, how we would find somebody to write the publication for us, how we would write it, who would be co-author, and how we would deal with all the changes the reviewers would ask for. All that is important. And what we in WECAN are doing at the moment is a project together with Envision the Patient, a professional company in the publishing field who have been very instrumental in developing good publishing guidelines and patient engagement in publications.
We're developing a four-step training module for patient advocates on patient involvement in publications, and this will go live early next year and will be open access. So if you want to think about how to plan your publications, there will be guidance coming from WECAN in early next year as well. So that's it for this part of the meeting. I now open the floor for questions, and I can already see, while I'm scanning the chat, that Iris and Yurika have questions. I'll go to Iris first; I will turn on your microphone, so please be ready to ask your question now. Iris, you should be unmuted. Accidental, sorry, there was no question. Ah, okay, all right, thank you. Yurika, do you have a question? I'm not sure this works. Does anyone else have another question? Yann, there's a question in the chat from Zvika, just to define KOL. Ah, that's right, the acronym. Sorry for the acronym. KOL stands for key opinion leader. That's, I would say, the pharma and healthcare-system expression for the leading clinicians or investigators in a specific disease field. So usually KOL, or key opinion leader, is used for the top clinicians in a specific disease area. Then Gilly has a question: are there any tools you know of for free text? I must say I'm not aware of any, but if somebody is, maybe you can post your response in the chat. Now, Yann, can you hear me? Yes, I can hear you, Yurika. Okay, thank you very much, I had to change the microphone. Thank you very much for the presentation; it is one of the best presentations I have heard in months. It was very exhaustive and yet very comprehensible. You were talking about inequalities; you know that this is also my favorite subject. And you were also mentioning a new principle, evidence-based data. This is something we used in the UN; we used to talk about evidence-based management.
However, when we talked about evidence-based management, we also suggested, in connection with a particular problem, what exactly the outcome was supposed to include and how we could address the specific issue in order to change the situation. You were talking about inequalities. I think that in our future actions we should also recommend what exactly we are aiming at. I have mentioned for years that one avenue would be a kind of harmonization: not necessarily to be at the same level as Germany, for example, but to establish some platform, some standards, some decent benchmarks. Whatever we do in order to address this issue, I think we should include that. I have suggested that for years, but I was told that the politicians in Brussels think inside the box and it's difficult to take them out of it. Yet we should be assertive enough to suggest the avenue whereby inequalities can be addressed once and for all. And one last issue: I want to tell you that inequalities do not necessarily exist only between less developed countries and fantastically developed countries such as yours. COVID has demonstrated to all of us that inequalities exist even between countries which are very developed, because what I mean by inequalities is the lack of a standardized approach to specific problems. Particularly when we live in a globalized world, we cannot tackle a problem in our country without prejudicing the individual who tomorrow morning may move to the next country. Thank you very much; that's all I wanted to suggest. No, thank you so much, Yurika. And I mean, I agree. That's why, if you remember the three overlapping circles about, let's say, the right targets, the really good data and the right packaging, the fourth one, which I was just adding in the text, is actually also monitoring what we achieve.
And sometimes we're so enthusiastic about the things that we do that we don't think about reassessing whether we've actually hit the target and achieved our goals. I think that's very important, and I understand that's part of what you're saying. If we talk about inequalities: do we really measure whether whatever we did has actually changed things, has leveled out inequalities, has increased access, has changed the way people receive care, and so on? That kind of measuring and monitoring is extremely important. So thank you so much for that, Yurika. The next one is Ananda. Oops, Ananda. Yeah, can you hear me? Yes. Okay, well, thanks for that talk. I came in a bit late, so hopefully I didn't miss this and am asking a ridiculous question. I actually have two. One of them is around the type of evidence that needs generating. I think we all have a good sense within the patient community of what kind of evidence we need to guide the discussions we're having with stakeholders or to inform our own work. But sometimes that clashes a bit with the kind of evidence that is supported and funded by industry, for instance. You can see that there are specific trends from industry: they support us generating evidence and advocating with that evidence, but only a certain type of evidence, especially the kind that helps before a product hits the market, or when you have to deal with HTA bodies, for instance. Do you have any advice on how to deal with that? Because obviously there's still a need, and I'm not saying that that is not important, but there's also a need for us to generate the evidence that is important for us as a community.
And sometimes we find that when we are then ready to take off and do our research and gather the evidence we need, there's survey fatigue: patients have been surveyed out for the year and you can't do yet another one. So that's one of the questions. The other one is around methodology, because, sorry, I'm a true believer in the fact that not all evidence needs to be generated through a patient preference study, for instance. At the moment it's very fashionable to do patient preference studies; everyone wants to. But carrying out a patient preference study eats up a lot of resources and time, not only from the team, but also in the kind of rigor you need to proofread the questions and make sure that patients aren't upset by the way they're asked. It takes rounds and rounds, so it eats up resources. Is it really necessary, and is that kind of rigor needed? Wouldn't it be sufficient, for some kinds of evidence, to just run a simple SurveyMonkey with five questions and gather the responses you need? Because you're not only going to approach regulators and HTA bodies with that evidence; sometimes it's just for you to test a hypothesis you had, to know with certainty what matters to patients or to certain subgroups. How do we deal with that, and how do we ensure that a simple SurveyMonkey is as well regarded as a rigorous patient preference study, when the quality of the responses you get is equally good, I think at least? No, I mean, thank you for raising these two points. If I just take the second one first, about methodology: what kind of rigor do we need? It's a difficult balance to strike, but it has to do with where you intend to use the data.
I mean, if you want to generate data that in the end goes into the regulatory process, where you want to use the evidence to convince health technology assessment or reimbursement bodies or the EMA or anyone that you have a very different opinion on things and priorities, then you need quite a high level of rigor in the methodology, because otherwise they will use every opportunity to say that your data is not valid enough and doesn't fulfill the evidence requirements. Such a decision has a lot to do with litigation and also with patients' lives and safety, so these decisions are not taken lightly. So: very high requirements on methodology there. But if we did that for every survey we ran, we would never get anything done, because sometimes you can't prepare for two years to generate data, spending a year on planning and half a year on conduct and then analysis. Sometimes a relatively quick survey, which is reasonably representative and has a sound methodology and questionnaire design, can be equally impactful, or even more impactful, because you can generate it more quickly. And I would say that for 80 to 90% of the work that we do as advocates, that kind of evidence level would probably be enough. What you of course need to think about, and that's why we're doing the WECAN training on evidence-based advocacy, is strategic planning: what kind of data do I want to generate? What are the different tools that I could use? What kind of budget do I need if I take on external contractors? And also, where do I get validated questions from?
What was extremely helpful for me, for example, was the WECAN Academy training we had last year, where Madeline Pe from EORTC presented the work EORTC is doing. They have a question library into which they have fed all the PRO instruments that have been generated and validated anywhere, and you can actually compile your questionnaire from already validated questions. Of course, you need somebody who really knows what they're doing in terms of PRO design, to think about whether there are contradictions in there, and things that probably cannot be combined. But there really are tools to do that in a methodologically consistent way, without, let's say, aiming for the highest evidence level, and to generate data relatively quickly which can convince the stakeholders, because you have good evidence from your community. But again, what is very important for us is of course that we have good outreach, because it doesn't help if we just run the study in two Western European countries and then claim that we understand the issues, because we know access is incoherent and healthcare systems work differently. I think the power in some of these things is really that we reach out to the members in very different countries, and that the member organizations actually support that data generation effort by promoting our own surveys, instead of creating survey fatigue by just signing up for other surveys that other parties do. And that probably connects to your first question, Ananda: what do we bet on?
Because, I mean, we all have survey fatigue: we're getting three surveys a week from all these market research agencies that industry collaborates with, because they want evidence on, let's say, patients' needs, or unmet needs, or issues, or experiences patients have, especially when their products are moving towards regulatory decision-making. Sometimes we spend a lot of energy on those, but we have no data ownership and we can't get hold of the data. We might get a summary report if we're lucky, and we don't even know where the data ends up. I think we need a different strategy as patient organizations: to really ask what the evidence is that actually helps us, what evidence would support tackling the main challenges our community has, and then to partner up with different organizations, might be pharma, might be academia, might be others, to generate that kind of data. But we shouldn't just sign up for everything that comes across our desks. I think at the moment that's one of the challenges we have, because of course generating our own data, owning it, and being able to do analysis later on is a harder job than just promoting something somebody else has invented. I don't know whether that answers your two questions, or whether you want to add something yourself, also from the MPE perspective on the Access Atlas? No, I think it does respond. I'm just obviously conscious that there are very different ways to gather evidence, and the Atlas is one of them. You want to ensure that that kind of data is recognized by the outside world, that it at the same time informs whatever you're doing in the organization, for your members and for your patients, and, on the other hand, that the data is up to date, accurate and complete, which I can say is not very easy when you have a program like the Atlas, for instance.
It's extremely complex when it comes to questions like access, because the sources you get that evidence from, some of them, are just not as transparent as you would wish them to be. But, I mean, we're working on it; we're getting there. Thanks for the response. Thank you very much. And then one more question that Gili has asked: are there any GDPR issues in data generation by patient organizations? Of course, that's an important question. GDPR is the data protection regulation on the European level, which was put in place just a year or two ago, I can't remember exactly. It's a pan-European regulation that protects private data; the main intention was actually to counter, let's say, Silicon Valley companies that were collecting data, but also others. And of course there was a lot of debate about research. It's important for us as patient advocates to understand this, because the data protection regulation plays into how clinical trials are run, how data can be used for research, how data can be used outside of research, and also what trial participants actually sign up to in the informed consent documents when joining a study. And this is not just for the professionals: if we as patient advocates collect data from our members, we become data processors, so we of course need to be aware of data protection measures, and of the mechanisms to protect the data and protect privacy. And I would say that, especially in the area of genomics, or in rare diseases, it's very easy to identify single individuals just from the region they responded from, the disease area and their age. So think about the consequences that would have if that information traveled to people who probably do not have good intent. But at the same time, we're very interested in research and we want to drive innovation.
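The re-identification risk just described can be checked mechanically before any dataset is shared: group the records by the quasi-identifiers (region, disease area, age band) and look for groups of size one. This is a minimal k-anonymity sketch with invented example records, not a substitute for proper GDPR advice.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size when records are grouped by the given
    quasi-identifier fields. k == 1 means at least one respondent
    is unique on those fields and may be re-identifiable."""
    groups = Counter(
        tuple(record[field] for field in quasi_identifiers)
        for record in records
    )
    return min(groups.values())

# Invented survey records: in a rare disease, region + age band alone
# can single someone out.
records = [
    {"region": "North", "age_band": "40-49", "disease": "CML"},
    {"region": "North", "age_band": "40-49", "disease": "CML"},
    {"region": "South", "age_band": "70-79", "disease": "CML"},
]
print(k_anonymity(records, ("region", "age_band")))  # 1: the South respondent is unique
```

If k comes out as 1 on the fields you plan to publish, the usual fixes are coarsening the categories (wider age bands, larger regions) or suppressing the unique rows before release.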
So it's a difficult field where, let's say, the professionals look at us as patient advocates and ask: so what do you want? What kind of protection level do you want, how would you support research, and where is the balance on data protection? That's why it's very important for us to be aware of this and to be partners in that debate, to really say where the borderlines are. And, just to tease you a bit, at the very end of the session we're going to have after the break, I'm going to show a bit of data about this. So if you're interested, join the next session; I would say we break now. The next session is really about the summit of the European Cancer Organisation that is starting tomorrow. What I'm going to do there is work you quickly, or less quickly, I don't know, through the agenda of the meeting for the next two days, and just try to raise some of the issues that might be relevant to us as patient advocates. And always with the background thought: if we as patient advocates are part of those debates at the summit and afterwards, how would evidence help us argue what is really important for patients? So I have that in mind. And with that, I would say we go for a nine-minute coffee break or whatever you want to do, and we meet again in this virtual room at half past, with coffee and whatever you need. Then we'll have another hour at most to go through the agenda. So see you in a moment.