I'm going to talk about the lessons learned in the design, development and deployment of an electronic medical record, or EMR, for multidrug-resistant tuberculosis, or MDR-TB. I don't think I need to describe to anybody here what a nightmare MDR-TB treatment is: it's very long, it's very complex, it has lots of side effects, and on top of that it doesn't work very well. With the arrival of two new TB drugs, delamanid and bedaquiline, we finally have hope for a better treatment for MDR-TB. The endTB project strives to find a new, shorter, better and less toxic treatment for MDR-TB. The key to this project is a clinical trial, which is ongoing, but it will be five years before we get those results. So while we're waiting for that, we're putting patients who need treatment now on treatment now, and we're collecting data on them, their outcomes and their adverse events, and we're going to analyse all that data to assess the effectiveness and safety of these drugs.

It's a complicated disease, it's a complicated treatment, and it's a complicated organisation. To get enough patients to have meaningful results, we needed to join forces with our colleagues from Partners In Health, from Interactive Research and Development, four sections of MSF, Epicentre, the Access Campaign and others, all funded by UNITAID. And collectively, we decided to design and build a new electronic medical record.

First of all, we needed to design it, so we needed to know what we needed. We identified some key players, we tried to listen to what everybody needed, and then we tried to describe what we needed. We only wanted everything: we wanted to do clinical management and programmatic management and to use it for research. We first got together in June 2015 with a software developer called ThoughtWorks in India, and with some key people we tried to describe to them what we needed.
But meeting once wasn't enough, so we met weekly to try to get them to understand what our needs were. And we had to detail, in a document called the metadata, exactly what we needed: every variable, every option, every answer that we wanted to have in this electronic medical record.

In August 2015, we started building. Every week, we would meet virtually, the clinicians in Boston, Dubai, Paris and Geneva with ThoughtWorks in India. They would show us what they were developing, we would give feedback, they would update it and develop new things, and the next week we'd carry on with this cycle. It was really a cycle of them building and releasing, and us giving feedback.

By April 2016, we were able to pilot the first version in Armenia and Georgia, in two very well-established programmes. Each organisation had a team of implementers, and those implementing teams discussed a lot together across the organisations. The implementers went to the field to put the EMR in place. Then every six weeks, we would update the implemented versions. They would come to us with a new release, we would test it at headquarters level, we'd give feedback rapidly, and they'd adjust the content. Big no-no: don't change the metadata in the middle, because everybody hates that; it gets very complicated. They would improve it, adding features all the time and fixing any bugs, and then we would remotely upgrade the sites that were already implemented with the new version. This happened every six weeks between April and December 2016.

So what did we end up with? We ended up with a platform that multiple users can use in the field. They can use it for patient management: you have a comprehensive patient summary, which you can see at the bottom there, and you have a monitoring tool that tracks 22 parameters, which you can see at the top there. Both of these can be viewed on a tablet or a mobile phone.
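To make the "metadata" idea concrete, here is a minimal sketch of the kind of entry such a data dictionary pins down before building starts: every variable, its allowed options, and a validation check. This is purely illustrative; the field and option names are hypothetical, not the project's actual dictionary or data model.

```python
# Hypothetical sketch of one entry in an EMR metadata document:
# a coded variable with its agreed list of allowed answers.
from dataclasses import dataclass, field

@dataclass
class Variable:
    name: str                                    # variable identifier used in the EMR
    label: str                                   # question shown to the clinician
    options: list = field(default_factory=list)  # allowed answers, if coded

    def validate(self, answer):
        """Return True if the answer is acceptable for this variable."""
        return not self.options or answer in self.options

# Example entry (illustrative names only)
culture_result = Variable(
    name="culture_result",
    label="Sputum culture result",
    options=["positive", "negative", "contaminated", "not done"],
)

assert culture_result.validate("negative")        # an agreed option passes
assert not culture_result.validate("unknown")     # anything else is rejected
```

The point of agreeing on entries like this up front, and freezing them mid-release, is exactly the "don't change the metadata in the middle" lesson: every screen, report and export depends on these definitions.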
In terms of project monitoring and reporting, there are 28 inbuilt, automatic reports that they can run in the field. And then there's pharmacovigilance monitoring; pharmacovigilance is an important part of the project. Serious adverse events are reported within 24 hours from the field to the pharmacovigilance unit in Geneva, and they're entered into a special pharmacovigilance database. In the field, they enter just the initial information: the term, the date of onset, and the number given by the PV unit. Then every three months, we import the ultimate causality and outcome data from the PV unit, so that we can be sure the data are complete and coherent between the two databases. Every day, an automatic export is generated, and we've already used those exports for multi-centric analyses that we've presented in some forums.

By December 2016, all the MSF sites had been implemented, and now all the endTB sites are implemented. As I said, we upgrade rapidly to each new version; we have a new version we're going to upgrade to next week. It's available in multiple languages, and there is a hotline and a help desk.

Where are we in the cohort? We're about midway. This data is from December 2016, but there are about 1,200 patients included across all the sites. MSF has already reached its targets. Unfortunately, the majority of patients in the world treated with delamanid are treated in MSF projects, but that's not something we should be proud of; it's rather something the rest of the world should be ashamed of.

On end-user feedback, we tried to do a bit of a user questionnaire. We only have feedback from the first two pilot sites, and they're moderately satisfied. But we did identify some training needs and some frustration at field level. It's always difficult to have an EMR that will do patient management, programme monitoring and research, and usually it's at field level that the frustration appears.
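The quarterly pharmacovigilance reconciliation described above can be sketched as a simple merge: the field EMR holds only the initial SAE record (term, date of onset, PV number), and every three months the causality and outcome assessed by the PV unit are merged back in, matched on the PV number. This is a minimal illustrative sketch, not the project's actual code; all field names are assumptions.

```python
# Hypothetical sketch of the quarterly PV-unit merge: update EMR SAE rows
# with causality/outcome from the PV database, and flag any PV numbers
# that cannot be matched so the two databases can be checked for coherence.

def merge_pv_assessments(emr_saes, pv_records):
    """emr_saes: list of dicts with 'pv_number', 'term', 'date_onset'.
    pv_records: dict mapping pv_number -> {'causality': ..., 'outcome': ...}.
    Returns (updated rows, list of unmatched PV numbers)."""
    missing = []
    for sae in emr_saes:
        assessment = pv_records.get(sae["pv_number"])
        if assessment is None:
            missing.append(sae["pv_number"])          # flag for follow-up
        else:
            sae["causality"] = assessment["causality"]
            sae["outcome"] = assessment["outcome"]
    return emr_saes, missing

# Usage with made-up example data
emr = [{"pv_number": "PV-001", "term": "QT prolongation", "date_onset": "2016-05-02"}]
pv = {"PV-001": {"causality": "possible", "outcome": "resolved"}}
updated, missing = merge_pv_assessments(emr, pv)
assert updated[0]["outcome"] == "resolved" and missing == []
```

Matching on the PV-unit-assigned number, rather than on patient details, is what lets the field record stay minimal while the central database holds the full assessment.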
What are the key lessons learned? You really need to identify the key stakeholders, listen to everybody who's going to use it, and also identify some focal people who are going to make the decisions during the development, and really define the roles of who makes which decisions. In MSF this was very well defined, but in the other organisations it was much less well done.

You need to define the scope very early on, otherwise you'll get lost in the wish-list items that come along the way and you'll lose sight of the ultimate goal. It's very easy to get distracted and never get to the ultimate goal.

Prioritise, prioritise, prioritise. You have to prioritise; you can't do everything at once, and time is short. And even if you prioritise and start doing things, you have to re-prioritise, because things always take longer and are more complicated than you think.

The metadata: I can't emphasise enough how much time it takes, and it has to be done early in the project. The people who are going to define these things have to be identified, and they need to be given the time to do it and to reach a consensus. This is a really big lesson learned: it takes more time than any other step, and it has to be done early, because changing it afterwards is painful.

Constant communication and feedback is essential. We'd worked through the user interface together, and that was good, but you still think you understand each other, and then when you see what's been developed you realise there's been miscommunication, as in everything, as the keynote speaker said.

And nothing is impossible. The question isn't "can you do this?". The question is: how long will it take, how much money will it take, how many people will it take, and what risk is there? It's never impossible to do anything; you just have to decide what you want to do first and how much money and time you've got to do it. So where are we now?
We are developing an easier package to implement in non-endTB projects. It's already implemented in Mumbai and will be implemented in Papua New Guinea soon. We're giving ongoing support to the projects and we'll work on some more training. It's an open-source platform, so we're hoping other people will use it and adapt it to their needs. There's a big question about how this platform can communicate with other platforms, such as the hospital one that you'll see in the next presentation, and about who decides that. It's a big problem in MSF: who decides about these things? And of course, improving its utility is, shall we say, never-ending. You can keep going, you can keep developing features, you can keep improving; you just stop when you run out of money. So thanks very much to all the people who were involved, and thank you for listening.

Thank you. Another great example of a highly complex multi-centre study, but also a good example of real-world, real-time evaluation: while the project goes on for five years, you're collecting the data now. So that's awesome, and I look forward to hearing more about that. Any points, questions for clarification? One at the back there on the left.

Thanks, Cathy, it's fascinating. Who enters the data? Is it the clinicians, or do you have data-entry people in the field who do it?

All the MSF sites have data-entry people, so the clinicians fill in a paper form and then the data is entered into the database. There are 200,000 clinical observations in the database; it's quite a lot.

Thanks. Any more questions? Claire, from MSF. I just had a question. For people who are enrolled in the study, obviously they give consent and presumably they're told what happens to their data. But for people outside the study who are receiving the drugs, what are the issues around ownership, use, all of that? And to what extent is that sorted out? Yeah, excellent question.
This is a good question, because I personally think of it this way: the data is analysed as aggregated, anonymous data. We do nothing to the patient; this isn't a clinical trial. We do nothing to the patient that would not be done in programmatic management of MDR-TB, and you're obliged ethically to collect data on what happens to the patients: to record side effects, to know the outcomes, to know that you're running your programmes correctly and that the patients are being treated correctly. So I think it's unethical not to collect the data. This data is collected for all patients, but data will only be published on patients who sign the consent form to be in the study. There are programmatic data, for example on how many patients started treatment, and they will be included in that.

Good. And just to finish, it's highly secure in the sense that any data that comes from the field is anonymous; we never see anything that has a name on it. There's another part of the story about sending samples. For sure, we would never send samples for patients who didn't consent to extra testing, but that's not part of this.