Hi Shorjit, thanks a lot for that great setup. As I said, we're going to switch gears a little bit here, from talking about the DHIS2 design model to talking more about how you're modeling the requirements for your tracker program in country. There's a reason we do this, which I'll go into, but generally we find a lot of people are just trying to transition their paper records into an electronic tracker system, whereas Tracker opens a lot of potential for different ways to use data at different levels of your health program. We'll talk about how the design decisions that you make now, at the early configuration stages, might influence data use down the line. So the outline of the session is building trackers that support data use, and deciding on the most important indicator outputs from your tracker program. In order to do that, we'll be talking about how you might gather different indicator requirements, who to involve, and what to review in that process. This presentation comes from the Tracker Implementation Academy, so we'll just briefly touch on some of its core key concepts. The Tracker Implementation Academy introduced this concept of the tracker house, and this is basically a way of thinking about your own tracker program and the types of requirements that you need to succeed with taking your tracker to a large scale. So there are some foundational things. Some of these might be outside of your control, such as the infrastructure to deliver data over the internet, certain privacy legislation and policies, and of course funding and having the support of key institutions behind you. There are also a number of key considerations that you might need to think about as you're planning the tracker, like hosting and security, whether this will be direct or secondary data entry, and whether you use Android. And then of course, at the top here, we have the real goals of your tracker program.
That's impact on health policy, on clinical or performance improvements at a local level, and of course data use to actually make all of those changes happen. So today we'll be talking a bit about this interplay between the design and configuration process, which you are all now taking part in through this Academy, and this core goal of your trackers, which is improving data use in your health program. We're bringing this up now, at an early stage of the Configuration Academy just after discussing the data model, because we really see that a lot of decisions made upstream, about the types of indicator priorities that you have, really affect program design, and then the types of data that are available to use downstream. So six to twelve months after you've launched your tracker program, when you're reviewing the data in your system, you might be seeing gaps that you can't go back and course-correct at that point. For example, upstream you make the decision: we need our program to capture X. This might be: we need to capture TB clinical visits and laboratory results. So you might just say we need two repeatable program stages, because these different encounters are capturing the same kinds of events. Then downstream, twelve months down the line, you might be saying: actually, it would be really interesting to know the delay between when the lab request is made at the facility level and when the lab result is actually returned and reported. But then you say, well, we can't actually do that, because these are two repeatable stages and we haven't thought of a way to link the laboratory event to the clinical event.
And so then you have to do some fancy workarounds with SQL views or API queries to get these data, which might have been very simple to collect in the first place had we thought about this requirement initially. So maybe some of you are thinking: we've already worked with aggregate data sets, we've already worked with event programs before, so what exactly makes tracker analytics different, and why bring this up here? It's really because the patient's longitudinal record, this enrollment data that you're collecting, is a new dimension for data analysis. Now you have multiple events for a single program stage; you have their entire health record over time. Maybe you want to connect different programs together for some interesting, more nuanced analyses. So this really opens a lot of opportunities for different types of analyses. You also have more end users: you probably have different people who are key stakeholders in these data, whether that's a healthcare provider at the facility level entering the data directly, or maybe the patients themselves receiving messages based on their health record through program notifications. So you'll have a lot more stakeholders, and probably more demands for specific types of data analysis as well. This is a real challenge, but it's also a great opportunity to reimagine data use in your own project context. You're not just doing the same old aggregate reporting forms; you have a lot of opportunities to introduce useful analyses into your routine health information system. But in order to do this, it means we really need to actively plan for data use. That's defining each indicator that you really want in clear, precise terms, and determining who exactly will use this indicator and when.
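As a rough illustration of the kind of post-hoc workaround just described, here is a minimal sketch of computing lab turnaround outside DHIS2 analytics. The stage names, dates, and payload shape are made-up stand-ins, simplified from what the tracker events API might return, not a real configuration:

```python
# Sketch: computing lab turnaround time from exported event records.
# All stage names and dates below are illustrative placeholders.
from datetime import date
from statistics import median

# Hypothetical events for one enrollment, simplified.
events = [
    {"programStage": "clinicalStage", "occurredAt": "2024-03-01"},
    {"programStage": "labStage",      "occurredAt": "2024-03-08"},
    {"programStage": "clinicalStage", "occurredAt": "2024-05-02"},
    {"programStage": "labStage",      "occurredAt": "2024-05-05"},
]

def turnaround_days(events):
    """Pair each clinical visit with the next lab event; return the gaps in days."""
    parse = date.fromisoformat
    clinical = sorted(parse(e["occurredAt"]) for e in events
                      if e["programStage"] == "clinicalStage")
    lab = sorted(parse(e["occurredAt"]) for e in events
                 if e["programStage"] == "labStage")
    gaps = []
    for visit in clinical:
        later = [d for d in lab if d >= visit]
        if later:
            gaps.append((min(later) - visit).days)
    return gaps

gaps = turnaround_days(events)
print(gaps, median(gaps))  # [7, 3] 5.0
```

Had the requirement been known upfront, the same number could come straight out of a program indicator instead of this kind of export-and-script workaround.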
So it's not just a matter of "we want to know the number of patients," but "we want to know the number of patients on a monthly basis, for a district integrated meeting where different care providers present this and make some decisions." Maybe describe how it would be interpreted for decision-making: we want to know the number of patients so that we can increase resources or change some administrative decisions, or we want to know about linkages between different programs. Then assess indicator feasibility within the tracker design: can you actually make this work with DHIS2 analytics, or possibly some other system? And then prioritize implementation: which indicators are actually most important to consider, and which would be nice to have but are non-essential for starting out your tracker program. So Shorjit just gave a great introduction to the DHIS2 tracker data model. As you now know, there's the aggregate data model, there's the event data model, and there's tracker. Within tracker, you have all of these different events within an individual's enrollment. These are tied to the org unit of the enrollment and the org unit of the event itself. Within each event, you also have different data elements, the specific program stage it is linked with, and the tracked entity instance record. But now I want you to shift slightly: you're not just going to be trying to represent your form in the DHIS2 data model. You also need to think about the data use that you want to see in your program long term. Just as we were going through these IDSR forms in the last exercise, you should also be thinking about what data outputs you want to see from your tracker program, and how you can make sure that you include all the requirements for those analytics outputs in the requirements for your DHIS2 design.
So try to keep both the starting point, the case reporting form that you're working with, and the end point, dashboards or charts or analytics, in mind at the same time. When we start out with these programs, sometimes you'll want to do an indicator requirements assessment. That's starting to come up with a wish list of the different types of indicators that you want to see from your program. And I want you to consider: who is requesting this type of tracker data? Where is the real demand for a tracker program coming from? But also, in practice, who uses the tracker every day, and what are their incentives for working with tracker? Our typical tracker users are going to be mostly the facility staff or M&E officers, whoever is entering these data on a routine basis, and whoever is consuming the dashboards or other analytic outputs. Those are the people who will mostly be using the system. In some of these large-scale trackers, you may have thousands of users, but maybe only a dozen people at the Ministry of Health or on the IT staff managing that program. Yet the typical tracker designers that we see making inputs into how the program should look are overwhelmingly going to be from this top level. They're the people who decide on the types of features that the tracker program is going to have and the types of indicators it needs to collect. This mismatch can sometimes cause problems with actually incentivizing people to enter quality data in a timely and efficient manner. So for the indicator requirements assessment, you might have a list of the different types of stakeholders who want certain data elements or certain indicators within their tracker. Just as one example, you might be talking to a health system manager who is deciding on resources to provide to different hospitals for antenatal care or for maternity wards.
They might say that it's essential to know the number of hospital deliveries per month, and by method, so they can get a sense of the health system. And it would also be great to know the time of day these deliveries happen, so that they know how to staff their hospitals accordingly. But then you might also be talking with the data entry personnel, who say that what they really want is support for clinical decision-making: getting a list of a patient's risk factors observed over the course of pregnancy, and maybe also getting some feedback on how many people were actually getting their blood glucose tests done at the right intervals. These might be people who are actually working at the hospital and want to see line lists of their patients, which ones are critical, and also know how well they are providing care according to guidelines. The problem is that when you're coming up with the indicator requirements, if you don't talk to these data entry personnel who are actually interfacing with the patient and entering the data, then maybe you won't get that type of input on the indicators that are essential to consider when designing your tracker. And so sometimes the nice-to-have indicators are what get prioritized over the data that frontline workers really consider most useful and essential. That brings us to some different approaches that we often see people take to designing a tracker program. Those of you with years of experience working with DHIS2 or other data collection and analysis systems might have encountered projects that fit one of these models. This isn't really normative; this is just how people often approach the problem of introducing DHIS2 tracker in a new context. So sometimes we see a paper-to-screen approach.
Here you really have a limited budget and you're just trying to maximize the efficiency of these routine reports that you have to do, like this IDSR form. So you're basically just trying to get this paper form into digital outputs to improve the efficiency of data collection. This might be quick to get started, but it can be difficult to maintain and scale, because you haven't really considered whether these data elements on paper are actually useful to anyone or need to be collected at all, or how many times or how often each data point needs to be collected. So they may be redundant or misleading, and the design doesn't really match the end user's workflow. But it can be quick to get started and to show a proof of concept. On the right you see a maximal dataset approach, and this is really what happens when you get someone who is more familiar with running something like a Demographic and Health Survey in charge of working with DHIS2 tracker. For them, this is a way to capture as much data as might be useful, to get a better understanding of the client or the patient and the types of services that they might be receiving or lacking. So they default to collecting more information that might be useful later on down the line, in hopes that they can develop some sort of data use strategy around those data. But the end result is that oftentimes, if you provide more data elements in a program than you really need, the end users might become overburdened with collecting so much data, which limits the actual quality of the data if they're skipping over fields that they never use. And you also have privacy concerns, because you're not really using the data that you're collecting on these real people. So what we might recommend, and what we see actually creating the most long-term support and generating quality tracker data, is a more user-centric approach.
This takes an initial time investment upfront, of course, and it requires doing focus groups with your key stakeholders to really talk through how they're currently collecting data and what types of indicators they might like to see for their health program. Maybe you develop the system incrementally and share different prototypes with all of these end users. But that really gets a lot of buy-in from the different clients, and it will probably also do a lot of the work of eliminating unnecessary, redundant questions or just impractical indicators. It also helps you, as DHIS2 configuration experts, to really understand your user stories and your user base well. So, some of the sources we might consider for indicator requirements; in each of these methods you'll probably be using a mixture of all of these different sources for your requirements gathering. The first might be the current aggregate reports, or the current case reporting form. What is the current frequency of routine paper reports? Maybe this is a weekly epidemiological surveillance form. And what are the expected disaggregations? You can see here it's broken down into malaria cases under and over five years of age. And are the existing data currently reported into the aggregate system at high quality? In other words, how much can we expect that these data are actually being reported on a timely basis, and are they actually relevant for the care program? If they're not relevant at all, maybe you don't need to include them in the tracker program that you're designing. Here's an example of an M&E framework from the Ghana Health Services National Malaria Control Center, and we can see that the M&E framework also includes indicators. Maybe we have some process indicators here, and some outputs: the number of people trained, and the number of bed nets procured and distributed.
And then we also see some outcome metrics as well. So maybe your M&E plan could also incorporate tracker data. Maybe you need to go back to the program's national strategy and say: we can actually contribute some data for program monitoring purposes. How would you define these different indicators, so that we can try to produce them in DHIS2 tracker as well? And should they be reported monthly, annually, etc.? Finally, one more source you might think of is more advanced analyses. Maybe some of these couldn't be done in DHIS2 analytics, but you would still want to collect data on them so that you could export them to some other statistical software, or you might want to build some more advanced DHIS2 program indicators to try to assess them. One example would be an assessment of linkages between HIV and TB programs at different district or facility levels. So you could try to assess how many people are enrolled in both the TB and HIV programs, what the gap is between when they're enrolled in each, or how other data elements might be related across the two. Here's a list from the Global Fund's list of recommended analyses by program and topic area. These are all topics you might want to discuss with the health domain expert on your program, to see what a very high-impact analysis with individual-level data could be, and maybe try to work that into your program somehow if it's really essential. And then finally, you might also look at different quality-of-care indicators. If your tracker program does actually track an individual over the course of care in a given health area, for example an antenatal care program, you can actually assess whether this person is receiving care according to the national clinical guidelines they should be following.
So in this example, for tetanus immunization, we see that at the first ANC visit, if the woman has not been vaccinated for tetanus, she should get her first dose at week 28 and her second dose at week 32. So maybe you could actually start capturing tetanus immunization information in your ANC tracker to assess the effectiveness of this quality-of-care metric, or to improve your services as well. Finally, going beyond alignment with program staff really does involve talking with the people who are actually gathering these data about what they might find most useful. There's a huge world of possibilities for what this might include, and it requires talking with the people who provide services on a routine basis to actually ask them what they might need. Maybe for a nutrition program this is the percent change in infant weight since the previous visit, for example; that's calculating, from a previous event, the change in a data element value. You can do cohort analysis for something like ART in an HIV program. You can also do more time-to-event analyses at the CHW level, for example connecting an ANC record's time of delivery with the CHW's postpartum care visit. And you could even do some relationship analytics if you have something like a malaria program, to see the number of new cases that are linked per index case of malaria; that might require using relationships in your analytics and program indicators. In order to actually get to that point, you have to configure the program to meet the requirements to gather that type of data, and it does require talking with all these stakeholders. So you might be saying: how does this actually impact my tracker configuration in real, tangible terms? Think about the discussion we just had on program stages; some of you asked when something should be a repeatable program stage and when it should be two stages.
So again, that depends on the requirements not just of the data entry workflow, but also of the indicators that you will want at the end. Maybe if you actually do want to get the time between two different events, then you can make them two different non-repeatable stages, and you can create data elements or indicators that would track the difference between the two, just as an example. But if you want to compare different data elements within the same event, it's usually better to keep them all within a single stage so that you can actually link the two data elements together. This also gets back to other data element questions, like which ones do you actually want to include, and what data value type: is this a yes/no binary choice, do you want to use an option set, or maybe make it more generic? On top of that, are there data value ranges, warnings, and show/hide rules that should be included for that type of data value? Is there skip logic, just to make sure you have quality data? And what types of relationships might you need across other programs? The same goes for the different access levels. If you want to collect data on community outreach, for example, maybe you should include the community units within your org unit hierarchy as well, and provide a way for CHWs to enter or report data at that level. Okay, the key takeaway, because I know I'm cutting into your break now, is that before building a program, really write down the expected data outputs that you want to see. Program design might impact your analytics options for routine statistics, and it's really important to write down the indicators that you want out of your DHIS2 tracker at this early stage. You should also consider balancing a smooth data entry workflow against all the downstream information needs.
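As a sketch of that two-non-repeatable-stages option: with an enrollment-type program indicator, an expression along these lines could report the gap between the two stages' event dates. The stage names here are illustrative placeholders, not real UIDs from any configuration:

```
d2:daysBetween(PS_EVENTDATE:ClinicalStageUid, PS_EVENTDATE:LabStageUid)
```

Aggregated by org unit and period, an indicator like this would give turnaround times directly in routine analytics, without any SQL view or API workaround.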
So you should be looking into the data that you might need, but if you can't really find a good use for something right now, and you know that it's going to take an extra ten seconds for the data entry clerk or care provider to find that information and enter it, then maybe it's not essential to include in your tracker in the first place, and removing that indicator could improve the workflow. It really is a balance, and it's more of an art than an actual science. But you should also consult all of your national strategy guidelines, project plans, and your stakeholder group to consider the types of information that they might need on a routine basis. And I especially say this to include the information needs of frontline users, not just those who are providing the requirements from the top down. Really ask people how the more efficient data delivery model of DHIS2 tracker will help with their program monitoring on a weekly or daily basis, because now that you're getting these data more or less in real time, you can do much more nimble analyses as well. And then finally, be specific about what these indicators are measuring. Actually write down whether you are counting the number of clinical visits or the number of individuals for this indicator, because sometimes you might see that there is actually a mismatch between what you get in your tracker data and what you get with aggregate data, and a lot of that has to do with definitions that differ between the two. So it's important to be really specific about what you mean by each one. Oh, and that includes, by the way, thinking about your population denominators as well, for all of the indicators that are percentages.
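A tiny sketch of why those two definitions diverge: the same events counted two ways. The identifiers are made up for illustration:

```python
# Sketch: "number of visits" vs "number of individuals" over the same events.
# TEI ids and dates are illustrative placeholders.
visits = [
    {"tei": "tei01", "date": "2024-06-01"},
    {"tei": "tei01", "date": "2024-06-15"},
    {"tei": "tei02", "date": "2024-06-03"},
]

n_visits = len(visits)                           # count of events
n_individuals = len({v["tei"] for v in visits})  # count of distinct tracked entities
print(n_visits, n_individuals)  # 3 2
```

If an aggregate form counts visits while a tracker indicator counts individuals (or vice versa), the two sources will disagree even when both are complete, which is why the definition needs to be written down explicitly.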
Finally, some next steps to consider, perhaps as an exercise during your break: just spend a couple of minutes writing down in a sketchpad which indicators are most important to your project. You can think about this from a few different perspectives: the MOH might really care about X, but I know that the M&E managers or care providers might really care about Y, and those might be a bit different. Then start ranking your must-have indicators and really define them in detail. Also think about how you can deliver some key performance indicators to the people at lower levels who are actually capturing these data: how will you plan to deliver those important indicators not just up to the central level, but down to the facility users or frontline workers as well? And finally, when you say that you really need these indicators, think about what key decisions you actually plan to make based on these tracker data. What is the real end goal for your tracker? Then work backwards from there to see which indicators you need to include, and once you know the indicators you can include, you can start configuring to the appropriate requirements. Okay, that's all I had for the presentation, but I can see if we have any comments in the Slack now, or we can take it to break.