me. So I'm going to try to focus a bit on software design, software principles, and also system implementation principles, as was said. We're going to talk about generic design and extensibility in apps. We're going to talk about software methodology as a strategic choice, and then system and software design principles. We're going to touch on some of the main challenges that I and others see going forward, and then we'll leave it open for discussion. I don't know how much time we have, roughly how much time do we have for this? I mean, it's up to you, Lars, it's your call. When people start to fall asleep we'll stop. I don't need to repeat that this is our chance to pick a bigger brain than ours. So this is also your chance to pass on your legacy to us. So please. Maybe we can take a backup and give it to you. Let's see. Okay, so yeah, there's lots to be said, and we can't fit everything into the minutes we have, but let's talk about a few things. So first of all, generic design has clearly been a success factor for us, and this is nothing new to you guys, but I would say that having a generic design has really been fundamental to the success of DHIS2. What I mean by generic is this: when we started DHIS2 in 2005, we had some difficult years in the beginning without too many resources, and then it started to take off around 2010 in East Africa and India. And when we moved from country to country in East Africa, like Kenya, Tanzania, Rwanda, Uganda, we quickly realized that things were quite similar. There were quite similar challenges, needs, requirements, even users. That triggered us to make things generic, flexible, and configurable, so that we didn't have to reinvent the software for every country.
We saw that there was a lot of potential in building on what we had instead of starting from scratch. In this space, the international development space, we've seen a lot of software being made from scratch for particular projects, for particular countries, and so on; I think some of the US donors have been guilty of that. So instead of starting from scratch in every country, we said, okay, let's try to use the same software in every country. And that, of course, has a lot of benefits. You can take what's there and continuously improve on it, as opposed to starting from scratch and losing all the knowledge every time. It also reduces overall maintenance cost, because there's one code base to maintain. It's clearly a more complex code base, so there's more code to be maintained, but overall it's a lot less maintenance cost to have one piece of software than to have many. And I think it's also important that generic software implicitly transfers a lot of best practices and knowledge of the space. What we realized was that once you think deeply about some problem in one country, and then you move to the next one and see that they have the same problem, then instead of them having to figure it out all over again, you just build on the solution you have from other countries. And that has a cumulative effect: once you've been doing this for 10 or 15 years, you have accumulated a lot of solutions, a lot of knowledge, a lot of best practices for different problems, which you can then implicitly transfer to the next country.
So we see lately that when new countries come on board, and there aren't that many left now that haven't used DHIS2, and they just want to do aggregate reporting, it's quite quick to configure the system, get it up and running, and do things in a nice way, because they're basically building on all the accumulated knowledge, solutions, and best practices from over the years. So I think that has been absolutely fundamental. This isn't a surprise, but it's been really critical. And then, when we talk about generic design, what does that actually mean? What is generic design? I would say, and I think you'll agree, that there are at least two dimensions to generic design, maybe more. One is what we could call context, or use case, or domain. Even though we started within health, and the system was health oriented for the first seven, eight years, we still kept it open ended in terms of domains. We didn't hard code anything to health; we kept it open ended and flexible, and we didn't hard code the word health or patient. Well, we did, but we took it out, and we ended up with tracked entity and tracked entity instance, which are these super convoluted but generic names. And that means we can now use the system across multiple domains. Today we see logistics, education, forestry, agriculture; some of them need a lot of work, but some of them are quite quick and easy to bring on as well. So that is one part: being generic means being applicable across countries, domains, and use cases. But I would also say there is a time dimension to generic design, because by making things configurable, you also allow systems to evolve over time.
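To make the "generic across domains" point concrete, here is a small Python sketch. The shapes are illustrative only, not the real DHIS2 metadata schema, and all names are made up; the point is that one generic "tracked entity" structure can serve health or agriculture purely through configuration, with no code changes:

```python
# Sketch: the same generic structure configured for two different domains.
# Simplified, illustrative shapes -- not the full DHIS2 metadata schema.

def tracked_entity_type(name, attributes):
    """A generic 'thing being tracked' -- nothing health-specific."""
    return {
        "name": name,
        "trackedEntityTypeAttributes": [{"name": a} for a in attributes],
    }

# Health: a person enrolled in, say, an immunization program
person = tracked_entity_type("Person", ["First name", "Date of birth"])

# Agriculture: the identical structure tracks a farm instead
farm = tracked_entity_type("Farm", ["Farm name", "Hectares"])

assert person["name"] == "Person"
assert farm["trackedEntityTypeAttributes"][1]["name"] == "Hectares"
```

The domain lives entirely in the configuration data, which is exactly why the word "patient" could be replaced by "tracked entity" without losing the health use case.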
It's incredible how much software exists in the world that just does the same stuff: data entry, some kind of validation, and dashboards and reports. And everything is hard coded: the domain is hard coded, the variables are hard coded, everything. So every time you want to change something, you have to go back to the developer team. It's incredible how much software like that has been built over the years. What we've done with DHIS2 is to take a step back and generalize that whole notion of data entry, maintenance, validation, and analytics, and make it flexible and configurable. The benefit is that people can then evolve their configuration over time. We know that people add more data sets, they change indicators, there are new pandemics or epidemics, people go from aggregate to Tracker; people are changing the way they do data capture and data management. And mostly they can do that without talking to the developers; they can configure the changes themselves. Not all of it, but most of it can be done without radical changes to the software. I think that has also been tremendously helpful. It's really incredible how self-sustained some of these countries have become without needing the dev team. So that has been highly critical, I would say. Okay, so I want to take a quick step back and talk you through a little bit of the history of how we have been thinking about generic design and extensibility over the years. I would say generic design was there from the beginning, really, but it started to be the key focus around 2010, when we started up in East Africa and also in India, the Indian states, Sierra Leone, South Africa maybe. That was mostly version 1.4, and that was where we really saw the power of generic design.
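The "evolving configuration without developers" idea can be sketched in code: an indicator is just configured data, a numerator and a denominator expression referencing data elements, in the spirit of DHIS2's `#{uid}` expression syntax. The evaluator below is a simplified, hypothetical sketch, not the real DHIS2 expression engine, and the data element names are made up:

```python
import re

def evaluate_expression(expression, data_values):
    """Replace each #{dataElement} reference with its value and evaluate.
    A toy sketch -- a real expression engine parses rather than eval()s."""
    substituted = re.sub(r"#\{(\w+)\}",
                         lambda m: str(data_values[m.group(1)]),
                         expression)
    return eval(substituted)

def indicator_value(indicator, data_values):
    """Compute factor * numerator / denominator, all taken from config."""
    num = evaluate_expression(indicator["numerator"], data_values)
    den = evaluate_expression(indicator["denominator"], data_values)
    return indicator.get("factor", 1) * num / den

# The indicator is pure configuration: a user can change it without
# touching any code, which is the whole point.
anc_coverage = {
    "name": "ANC 1st visit coverage",
    "numerator": "#{anc1Visits}",
    "denominator": "#{expectedPregnancies}",
    "factor": 100,
}
values = {"anc1Visits": 450, "expectedPregnancies": 500}
assert indicator_value(anc_coverage, values) == 90.0
```

Swapping in a new indicator, or a new epidemic's data elements, is a configuration edit rather than a software release.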
So in 2010, we started to think: how can we really generalize this and make it applicable for many countries? In 2010, we didn't even have an API; the API came in 2012. There was no API, there were no custom web apps; everything was Struts, hard coded, very hard to change and extend. Everything was a product, not a platform, I would say. And of course, that made it very hard to extend with new features; extensions basically happened by forking the entire software. We saw a lot of forking, for instance in India, where the Indian team essentially made a fork pretty much every time they moved to a new state. They took the entire DHIS2 code base, forked it, hard coded reports into the system, and added the things they needed: custom UIs, custom import functionality, custom reports, those kinds of things. That was of course a horribly laborious process. They now had 15 DHIS2s to maintain, which was not really maintainable, and not even upgradable. Upgrading becomes a nightmare, because now you have to merge your changes into the master branch, so to speak, and that just gets more and more complicated over time. This is a problem many people have; SAP is famous for this, right? Every time someone upgrades a heavily customized SAP installation, it's a huge job. So then we started to think about extension points. Ola and I had a famous discussion, I remember, smoking a water pipe in Zanzibar I think, about how we could expose some of the things we have in DHIS2 so that other people could build on it. And Morten joined in 2011, and as you know, Morten is an API guru, so he really made a huge impact there and started to build out the API.
In 2012, we had a basic API up and running, with basic metadata and data import and export. Suddenly people could at least start to import and export data; they could build integration jobs, the typical cron scripts and so on. And that helped a lot, because now you could at least do bulk imports from your own desktop and integrate systems in a basic way. Then over the years we invested more in the API; by 2016 a more comprehensive API had come out, and we also started to see the first lightweight web apps being developed. That was the beginning of web app development, and it made it possible for people to build specific UIs without forking. Most of the time, when it came to UI, people didn't have to fork DHIS2 anymore; they could build an app. That was a huge step forward. The problem, of course, was that it was still complicated and time consuming to build apps. You had to figure out everything yourself. People sat down and picked Angular, or jQuery (no, not Vue back then), or whatever framework they knew, and started to build an app. And all the apps looked very different, completely different styles; they didn't really look like DHIS2 apps. Every app had a different structure; it was complete anarchy when it came to structure, and I would say fairly low quality apps, with no consistency and no common design principles. So it was still hard and expensive to build apps, they were hard to maintain, and they broke all the time, because when you changed something in DHIS2, all the apps broke. So that was a step in the right direction, but there were still challenges. Then in 2019, and I think it was around 2018 that Austin joined us, we started to think about the app platform, which I also think was a huge inflection point and a step forward.
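Those early cron-style integration jobs were essentially this: build a bulk payload and POST it to the API. Below is a minimal Python sketch against the `/api/dataValueSets` endpoint (a real DHIS2 endpoint); the server URL and all data element and org unit identifiers are placeholders, and authentication is deliberately left out:

```python
import json
import urllib.request

def data_value_set(org_unit, period, values):
    """Build a payload in the shape accepted by /api/dataValueSets.
    The identifiers used below are placeholders, not real UIDs."""
    return {
        "orgUnit": org_unit,
        "period": period,
        "dataValues": [
            {"dataElement": de, "value": str(v)} for de, v in values.items()
        ],
    }

def build_post(base_url, payload):
    """Prepare the POST request a nightly integration script would send.
    Auth headers are omitted; the request is built but not sent here."""
    return urllib.request.Request(
        f"{base_url}/api/dataValueSets",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

payload = data_value_set("OU_DISTRICT_1", "202401",
                         {"DE_MALARIA_CASES": 120, "DE_BEDNETS": 75})
req = build_post("https://example.org/dhis", payload)
assert req.get_full_url().endswith("/api/dataValueSets")
```

A cron entry then just runs a script like this on a schedule, which is exactly the "basic but hugely useful" integration the early API enabled.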
With the app platform, even in its infancy, we had the beginning of more uniform apps; apps were starting to be built in the same way. It put some restrictions on which frameworks to use, how to build, the structure of the app, the technology, et cetera. And you started to see quality go up, because people were guided in the right direction, and it became cheaper to build apps, because you didn't have to do all the boilerplate stuff and build up all the services and tooling around it. So that moved things in the right direction, but apps were still kind of costly to build and maintain, and they still are. So what we're thinking now, from 2022 and into the future, to address some of these challenges, is essentially to add more extension points to existing apps, and also to build a shared, reusable UI component library, so that app developers can use the app platform and, at some point, take this UI component library and quickly build their own apps out of existing UI components and existing building blocks instead of starting from scratch. Because there's no point in building the org unit hierarchy selector a million times, or all these other selectors a million times; it should be almost like drag and drop. Low code and no code are the new thing, right? We should get to a state where people can almost drag and drop the components they need into place, write some glue code, and have an app, without starting from scratch. The goal is to change the rules a bit and make it easy, affordable, and quick to build custom apps. I think this particularly relates to Tracker. In Tracker we see a lot of custom needs for workflows; there are a lot of special workflows.
Sometimes we see that Tracker Capture is too complicated; there are too many features, not too few. Too many options, too many features. End users sometimes find it hard to use if they just want to do something simple. So what we need there is to make it easy to build custom apps that are cheap and support very specific workflows. Apps that are cheap and also easy to throw away: if the core later supports the workflow, you can throw the app away, and if you need to rebuild it because technology has changed, you can throw it away too. You don't need a huge budget and a big dev team, just some skill. So trying to change the game a bit by making app development cheaper and faster is key. The other thing that's going to be key in the future is better support for custom backend microservices. We see that a lot of the workloads we have are typically backend-oriented processing, such as data integration, data processing, heavy analytics jobs and so on. And we very often see that, as the saying goes, if you have a hammer, everything looks like a nail: if a web developer is tasked with something, he will go and build a web app, even when it shouldn't have been a web app. So what we're trying to do now is also to allow, in an easy way, backend extensions, so that people can easily build extensions and solutions that fit on the server. Lars, Lars, sorry. Just one question while you're still on this. Not to derail you, but on making apps generic and sustainable: what about the maintenance cost of these very easy to build custom apps, when there can be so many of them precisely because they are so easy to build? That's a good point, Chris. So we're getting a lot of apps now, and the question is how we make that sustainable.
And the thinking is, and I know this is ambitious, that if we can have a really good UI component library with ready-made components that the core team can hopefully maintain, then that should reduce the level of effort for building an app. Because instead of having to develop all those components yourself, you can just take them and put them together to become an app. Combined with the app platform and docs and everything, that should reduce the cost of building an app. And if an app is very cheap and quick to build, you don't really have to worry so much about maintenance. If you made a $200,000 investment in an app, you don't want it to go away next year. But if you spent $5,000 or $2,000 on building an app, then it doesn't really matter that much if you have to build it again in two years. And you don't really have to share it with five other countries or NGOs, because it's cheap. So that is just another way of attacking this problem. You don't really need maintenance, you say? Yeah, you reduce the need. You don't take it away, but you reduce the need for maintenance long-term. That is the idea. Good question. All right. So that is some of the thinking we have on the team now, and I think if we can pull off these two things, it will be a huge win for DHIS2. All right, switching gears. I also want to touch a little bit on software methodology. When I say software methodology, I mean things like: what type of development process do we have? What type of releasing? What type of planning? How do we really think about how we build software? And my point here is that many people will come and tell you that something is the best. Someone will tell you that you need three-year plans. Scrum is the best methodology, you have to use that. All code should always be thoroughly tested. Quarterly releases are the best.
People will come up with these very dogmatic opinions about what is best to do. But what I, and we, have learned over the years is that how you build software is really a strategic choice that should align with the strategic interests of your organization and the product. And it also relates to the phase the platform is in. So what do I mean by this? These are some of the things you should make a strategic decision on. Planning: do we have very long-term planning, or short iterations? Do we react to user feedback, develop something quickly, be very responsive, push something out the door? Or do we say, okay, we have a three-year plan, this is the plan, and we're going to stick with it? Release cycles: do we have long release cycles, six-monthly or yearly releases, or continuous releases where we just keep releasing all the time? What about testing: how much effort do we spend on testing and QA versus building new things? And do we favor high stability over rapid change? Because you can't really have both: if you want a very stable system, you can't change it all the time. Some people say you need to be agile, you need to change all the time, but the problem is that if you have a very big code base and you keep changing it all the time, it's going to be a horrible mess over time. So these are things you need to think about in terms of which phase the platform you're trying to build is in. So let's look back at what we have done with DHIS2 over the years. In the beginning, we were in a kind of start-up phase. We only had a few developers, and we didn't really have many users: some in India, some in Sierra Leone, and we were starting up in Kenya and so on.
We were just trying to make it work in Kenya, basically. And I remember that in Kenya, Morten and I were there a lot, and we basically just traveled around by car to the districts. We talked to the district officers in the morning, we drove in the evening, we coded and designed at night, and the next morning we pushed straight into production. No testing, or very little testing, no CI pipelines, no releases, nothing. The latest version of the code went straight into production, and that was what was really running in Kenya for some time. Now, people could say, okay, that's bad, you need testing, you need a release cadence and all that. But I would argue that when you are in the start-up phase of a product, you should really focus on building the right solution. That is what you should obsess about: building the right thing. That is the only thing that matters. If you don't have any users, it doesn't matter if you have a million tests or the perfect CI pipeline. You can have the best release cadence and scaffolding and testing and everything, but if you don't have any users, nobody's going to use it. So in the beginning we really focused on rapid prototyping. We had frequent or even no releases, meaning the latest code was the release: continuous releases. We tried to listen to users all the time; the developers were sitting with the users. We tried to be responsive, we had short iterations, we pushed things out quickly, we didn't really wait, and we didn't have any committees: let's make a decision and move forward. And we prioritized building features over testing and stability; we just said there are going to be some bugs.
And at the time, I would say that was the right thing, because it allowed us to change quickly and understand what we needed to build. We got the data model pretty much right, we got the workflows pretty much right, and that was, I think, the right thing to do at the time. Then after some years, we entered what I would call the growth phase. Now it started to take off; many countries in Africa started to use it, India and so on. In this phase, you have to start balancing new features against stability and not breaking existing users and clients. It's important to focus on making the platform useful for many, to focus on scale: get a lot of people in a lot of countries to find value in your platform. We attacked that with generic design, trying to make it generic, flexible, open ended. I would still say that in this phase, which was roughly 2012 to 2016, there were a lot of breaking changes. We still changed the data model in radical ways. We had a lot of people complaining that we broke the API, especially the data model: oh, you broke the API, now we have to rebuild our app, and so on. But I would say that was actually the right thing to do, because the reality is that change gets more expensive later. The more users, countries, clients, and consumers you have, the harder and more expensive change gets. So you need to be able to change the system to build the right thing, because you need to iterate and react to user feedback, and you should make those changes as soon as possible. Don't wait until it's too late, because then you're stuck. I think now it is too late to make radical changes to the data model; the downstream effects would be too huge.
So in the growth phase it's a balance: you should still allow yourself to break things, but slow down a little and focus on stability and making people happy. And over the last three or four years, when we have become the most widely adopted platform of this kind globally, 90 countries and so on, it's of course much more about stability. When it comes to the core platform now, it's much more about stability and less about radical change, because now people want stability. They don't want bugs, they don't want things to break. I think most of them are happy with the features they have, and they want us to focus on stability. We balance that by pushing innovation through apps and extensions: we focus a lot on app frameworks, the app platform, extensibility, and APIs, and we allow for innovation in our own apps. We have wonderful new apps like line listing, capture, the new data entry app and so on, bringing innovation and new things without really changing the core, the backend, and the APIs. And we also have the community contributing innovation, local innovations in countries, by building their own apps, which don't break the core at all because they don't touch what other countries are using. So the model is: stability in the core platform, and innovation, prototyping, and responsiveness happening through extensions and apps. I think that is the right model. So those were a couple of thoughts on methodology in relation to the phase your product is in and how it should evolve. Any super quick questions there? So, moving on.
So from methodology, and we're touching on many things today, let's talk a little bit about system design principles. Now I'm not really talking about the software per se, but about how you design a DHIS2 system. And this is of course not something I've come up with; all of you here, and the researchers and everyone, came up with this, and I'm just trying to summarize a little. I would say that one of the principles that has really led to success for us is the top-down, national-scale-coverage approach before focusing on low-level, detailed data. We always try to say: let's get national scale before we go too deep into any vertical or specialized data, disease, or programmatic area. Because we've seen over the years that in international development there's been this pilotitis, as we call it. There's this famous diagram from Uganda showing something like a hundred different applications, all kinds of mobile apps and web apps for particular diseases, each running in only a couple of districts, run by some NGO or other. It's a complete nightmare to maintain, and it doesn't really add any value to the national ministry of health, which is trying to keep a national-level overview of the country for planning, resource allocation, and monitoring. So what set us apart from those hundred other apps, which probably look cooler than DHIS2, is that we focused on national scale coverage before going too deep. We did that for many years. Now, with Tracker, we're going more into detailed data, individual data, patient workflows and so on, though it's worth remembering that a lot is still on paper with retrospective data entry.
But we're now taking gradual steps toward becoming more of an individual-level system. Still, I think this principle has been critical for us, and we should not abandon it: focus on national scale coverage before going too deep. And I think Rebecca will agree that you need a solid backbone, a solid platform to build on, before you do Tracker, before you go into the individual, specialized data streams. Getting the basics right, aggregate data, maybe events, national-level scale, solid configuration, a good team, all of that is critical before moving into the fancy stuff. Next, use of participatory design. Of course that's critical; you know that we always try to listen to the user base and base the design on real user feedback, not just sit in some meeting room in an ivory tower and make up requirements. I'm not going to name names, but we know some organizations that like to dream up fancy architectures and tell countries, you need this. We take it the other way around: let's listen to what the countries need and what the users say, and base the design on that. Next, high flexibility. I've talked about that already: make it configurable, but also not too configurable; finding the right level of flexibility is critical. It's more of an art; I'll come back to that. Next, integrate the essential data sources. That has also been a key to success. Look at the essential data sources in the country and integrate them into DHIS2, either by setting them up within DHIS2 as a data set or program, or by connecting them through some kind of integration.
We know that 10 years ago, when you went to a country, there was always a system for immunization, a system for HIV, a system for family planning, maternal health; all of these things had their own software, their own forms and so on, sometimes just Excel. What we did was integrate all those data sources so they became data sets and programs inside DHIS2, and that way we managed to bring everything into the same system and do integrated analysis. If there are functional, stable systems in the country, you can of course also use integration to get the essential data sources into DHIS2. But the point is that it's more important to get the high-level essential data than to get one piece of individual, specialized data. To do meaningful analysis in health, you need maternal health, child health, malaria, TB and so on together. So try to get those together first, and get the aggregate data in before looking at individual data. That is where it started. And of course, best practice configurations: I think the metadata packages have also been critical. Flexibility is great, but it comes with the risk of doing something silly; you can get it wrong by giving too much configurability. So the metadata packages we now have, which embody some of the best practices established over the years, are really good: people can either import them or at least be inspired by the best practices out there.
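Importing a metadata package is itself just an API call. As a hedged sketch: DHIS2's `/api/metadata` endpoint accepts an `importMode` parameter (VALIDATE for a dry run, COMMIT to apply), which lets a country check a package against its instance before committing. The package content below is purely illustrative, not a real package:

```python
import json
from urllib.parse import urlencode

def metadata_import_url(base_url, dry_run=True):
    """Build the import URL for the /api/metadata endpoint.
    importMode=VALIDATE does a dry run so a package can be checked
    before anything is written to the database."""
    params = {"importMode": "VALIDATE" if dry_run else "COMMIT"}
    return f"{base_url}/api/metadata?{urlencode(params)}"

# A metadata package is just a JSON document of reusable configuration,
# e.g. data elements and indicators (the names here are made up).
package = {
    "dataElements": [{"name": "Malaria confirmed cases"}],
    "indicators": [{"name": "Malaria incidence per 1000"}],
}

url = metadata_import_url("https://example.org/dhis")
assert "importMode=VALIDATE" in url
body = json.dumps(package)  # serialized as the POST body
```

This is the sense in which packages are "reuse of metadata": the same POST, with a vetted package as the body, bootstraps best-practice configuration in a new country.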
It's the same idea as with the DHIS2 software itself, except now it's reuse of metadata as opposed to reuse of software: you can basically reuse what other people have built in the configuration world instead of starting from scratch. I think all of these things have been highly critical. Lars, Erik von Tirog had a question for you. Erik, about the Tiwana framework. Yes, thanks Lars, I put it in the chat as well. It was a couple of slides back, when we went through the different phases, this one, and especially the way you described it; it sounded almost verbatim like Tiwana to me, the two-by-two matrix of keeping the core stable and having flexibility in the apps. So the question is really: did you discover and work out those principles on your own, or was there, I don't know, theoretical input to this, or experience from other places? How did these insights arrive? Yeah, well, first of all, I would like to say this is not me; this is the team. We have a lot of discussions within the team. But I certainly know that you and Bendik have been writing about this dichotomy between heavyweight and lightweight IT. I read that article from Bendik where he talks about having a stable core, which is heavyweight and slow, and then lightweight IT on the sides, which is innovative and cheap to build, and that is very analogous to what we're doing with apps. It's almost the same thing: we have a stable core, and people build apps. So it's very much a principle that I appreciate is also coming up from the research side. Let's say it's a mix: experiences from the field, and also listening to the research team. Yeah, sounds fair to me.
But if I can just take it forward, because the implication, which you have also repeated here with Tivana, is that you have increasing irreversibility. By now, you're stuck with the core. What then, and this is the blind side of Tivana, with competing platforms which possibly would, again, require later radical change? Is this something you discuss, or are open to, or think about?

It's a very good question. I think this is what's described as the innovator's dilemma, right? The S-curve. The innovator's dilemma states exactly what we talked about now: in the beginning you can be flexible, you can adapt, you can change quickly, and you can be very agile and aggressive. But then, as you get more users, you have to slow down. And that makes you a little bit vulnerable to new people, new organizations, new teams, new products coming out in the field that don't have the drawback of a huge user base. That is a very classic problem that any product domain will face. And I think we have discussed it a little bit; Mike sometimes likes to talk about this. We should probably talk more about it. It's a dangerous thing to say, but I'm actually not so worried about that right now, because when it comes to HMIS health platforms, there almost isn't any competition in that space. There's lots of competition in EMRs and mobile apps, but not in this HMIS space. I don't really see anyone attacking that position in the near future. And even though we have a monolith, we still manage to build new things. We can be flexible. We still put out new features every six months, and there are continuous releases of web apps.
It's not like the code base has reached a level where it's too hard to change. And we spend a lot of time cleaning up the code base, adding tests, refactoring. We had this massive monolith modernization project some years ago that's still ongoing, but in a couple of months we're going to get rid of Struts and move to a pure React-based front end. So we do spend a lot of time modernizing the code base and the architecture, and we'll probably spend more, but I think that helps us stay a little bit agile. But yes, we should also be aware of this and watch the blind sides, like you said. If something radical comes up, whether it's AI or whatever else on the technology side, we could be a little bit vulnerable. So we need to keep an eye on that. That was a very good question. Okay, continuing.

So, moving towards the end: a couple of high-level software design principles. Again, nothing shocking here, and we'll get to some lower-level principles later, but first some high-level ones. I would say one of the success factors is that DHIS2 can facilitate the entire data flow: from data capture and data import, through data management and validation, to analytics and visualization. In other spaces, like the typical private sector, the typical way to do things is that you have one system for data capture, some kind of survey or data collection system, then some kind of data lake in the cloud where you do validation, and then a BI tool or platform that does the visualization, like Power BI or Tableau. But the problem there is that gluing these things together is actually very hard.
If you have disparate data sources coming in as, you know, CSV files, then trying to harmonize and join, as they call it, all those sources into a uniform, integrated data repository that can be used for integrated analysis is actually very complicated. It requires a data engineer, I would say, who knows how to do transformation and cleaning and data lakes, which is not an easy, not a trivial skill. So the fact that we do all three of these things together, and have this design where all the data basically ends up in one table in the database, makes it a little bit hard to get the data in, but once it's in, it's actually quite easy to analyze in an integrated way. And I think countries really appreciate that: they don't have to set up different systems, they don't have to hire a data engineer or data scientist to bring all these data sources together. Once you get the data import process going, once you set up your forms, it's actually very easy to integrate data. So that, I think, has been a key to the success: keeping it simple and avoiding complicated data merging and joining operations.

And generic design, we've talked about that many times. Open source, of course, has been critical. A number of projects and software have gone under because they were very dependent on project financing, so that when the project funding ran out, the software basically ended. DHIS2 still costs money to implement, but the open source license reduces the dependency on time-specific funding. Even when some funding runs out, the country can still keep the lights on and keep the project going.
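To illustrate the single-table idea described above, here is a minimal, hypothetical sketch in Python. This is not the actual DHIS2 schema; the field names and values are invented for illustration. The point is that when every data value, regardless of program, lands in one flat structure keyed by data element, period, and organisation unit, integrated analysis becomes a simple grouping operation with no joins at all.

```python
from collections import defaultdict

# Hypothetical flat "data value" records, mimicking the idea that all
# programs (malaria, immunization, maternal health, ...) share one table
# keyed by data element, period, and organisation unit.
data_values = [
    {"data_element": "Malaria cases",       "period": "202401", "org_unit": "District A", "value": 120},
    {"data_element": "Malaria cases",       "period": "202401", "org_unit": "District B", "value": 95},
    {"data_element": "ANC 1st visit",       "period": "202401", "org_unit": "District A", "value": 310},
    {"data_element": "Measles doses given", "period": "202401", "org_unit": "District A", "value": 240},
]

def aggregate(values, by):
    """Sum values grouped by a single dimension (e.g. 'data_element')."""
    totals = defaultdict(int)
    for v in values:
        totals[v[by]] += v["value"]
    return dict(totals)

# Integrated analysis across programs is just a group-by, no joins needed.
by_element = aggregate(data_values, "data_element")
by_org_unit = aggregate(data_values, "org_unit")
```

Contrast this with the BI-pipeline approach: there, the malaria and immunization numbers would live in separate sources, and someone would first have to reconcile keys across them before any cross-program query could run.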
Interoperability, we've talked about that one too, is of course very key: make it easy to integrate with other systems, because we want to bring in as many data sources as possible. Scalability, I think, has been key: make it work at the national and global level before focusing on details. What I'm trying to say is that it doesn't help to add a feature that makes the software slow, because then you don't really have a system, right? If you add a feature that makes the system three times slower, maybe it's helpful at the individual data level, but it doesn't really help, because now the entire system doesn't work. So focusing on scalability first, performance first, has been key, because now we can use the system at the country level.

And of course, hosting anywhere has also been critical: we support on-premise hosting. Obviously many countries prefer to host in-country and not have the data in the cloud, and then they should be allowed to do that and own their own system, right? Some of our competitors only support managed cloud hosting, and I think that's been a major problem for some of them. Of course, this increases the work we have to do, because supporting multiple versions and on-premise installations is much more complicated than supporting one cloud environment, but it's definitely worth it. It's been key to the success. At the same time, it's of course also possible to host in the cloud, and there's managed hosting as well, so organizations that just want to sign up without hosting themselves can do that. Having this spectrum of hosting options, I think, has been critical to the success. Okay, so, last section, and then there's also the discussion, right?
Just a couple of points on software or product design principles, and here I'm talking more about how to design features and behavior within the system. First of all: real user needs and inputs are the foundation of product design. We talked about this already. Focusing on real user needs is really critical. Don't sit in the meeting room and make up your requirements; base them on real user needs. And having people, like we have, who have spent a lot of time in the field has been critical, because when you're in the field for a long time, you start to get this implicit understanding of what people actually need and what type of users we have. So being close to the users, understanding users, is of course the basis.

This is maybe one of my favorite principles: when we design software, we should try to have people tell us their problem, not their solution. We see this a lot. If you've been part of some of these design sessions, if you talk to users in the field, you'll notice that they always try to tell you their solution instead of their problem. They'll tell you: oh, we need to be able to do this with data elements and download to Excel, and we have to do this and that. They come up with the solution in their mind, based on how they would have designed it with what they know. So when we talk to users, we should focus on the underlying problem and not necessarily the solution people present. Sometimes the solution is good, but often it's not. The users really sit with the knowledge about the problem, but I think software engineers are usually better at devising solutions to those problems. So peeling back the layers a bit and understanding the underlying problem is really critical, instead of jumping on the first solution that comes out.
That takes time and skill, but I think it's absolutely critical: don't just say yes to what people say, but really analyze what they're trying to achieve. Okay, this is a quote, kind of a meta quote, which I also like. It's often portrayed as being said by Henry Ford, the founder of Ford Motors, but it actually wasn't. It's a good quote anyway, so I'll use it anyway: "If I had asked my customers what they wanted, they would have said a faster horse." This is symptomatic of user research: if you ask people what they want, they very much think within the boundaries of their existing knowledge and their existing understanding of the system. If they know what they're trying to do and you ask them what they want, they'll say: I want it to be a bit faster, I want one more option in the pivot tables, I'd like the solution to be this and this. It's hard for end users to really innovate, to come up with something completely new. Now, I'm a strong believer in incremental improvement. The majority of what we do should be incremental improvements: listening to users, making it a little bit better every time. But we also need to be aware that if you do that forever, you might miss some opportunities. There might be new technology, there might be radical new ways of doing things that you miss out on. So we also need to take a step back. And I think what Erik is also alluding to is that every five or ten years or so, there will be a shift in technology, new platforms, new opportunities. We need to watch for that and see if we can leapfrog a bit and make some real innovation, as opposed to only doing incremental improvements. Nothing wrong with those, but every now and then we also need to aim for some radical innovation.
Okay, next one: if you try to please everyone, you won't please anyone. This is also a very important principle. As DHIS2 developers, we get a lot of requests, a ton of requests coming left, right and center. People ask for all kinds of stuff: we need to do this, we need to move into this domain, we need to go into that domain, we need to support this, if you only had this feature we could do that, et cetera. So people ask you to do a lot. But the paradox is that if you say yes to absolutely every request, eventually nobody will actually use your platform. If you try to say yes to everything, if you add a thousand features here and there, your platform becomes so complex that eventually nobody's going to use it. It will be so complicated that people just throw their hands up and say: this is too much for me, I don't want this, I can't use it. So while we do want to be responsive to users, and we want to say yes and listen, it's also important to say no every now and then: I hear you, but we can't do everything. We're not really doing anyone a favor by adding a million features to DHIS2. We need to focus on what we think is most critical for DHIS2, and we shouldn't jump into every little specific domain, specific feature, specific system. We should stay core to what we think is most important to us. And that means saying no. Saying no isn't bad; saying no is actually good for the people who ask, in the long run. That is the paradox, but it's really critical: you're not always doing people a disservice by saying no. You might actually be doing that person a favor.
You should say yes to critical things that really make sense and align with your platform, but say no to things that are on the periphery of what you're trying to do. And then, a quick one: the benefit of a feature should justify the added user interface and code base complexity. This is also about feature prioritization and deciding what to do. When people come and ask us to build a feature, we always think through: what is the added complexity of the UI and the code base for this? Are we making something that's horribly complicated to use and maintain, or not? And my experience over time is that open-ended features give you the most value in the long run. Building things which are not very specific to a particular disease or a particular domain, but which are open-ended and can be used by many, and which are easy to grasp and understand by looking at the UI, those things tend to have the most value over time. There have been a number of requests like: oh, we have to go into this very particular disease, with mosquitoes and classifications and things. But that never really took off. You're only helping a few people, at the cost of increasing the complexity of the UX, the configuration, and the code base. So again: keep it simple, keep it relatively open-ended, so it's easy to use. That tends to give the most value over the long run, at least in my experience.

Okay, a quick word on generic, back to generic. Being generic is very much about abstractions. In my opinion, an abstraction should be generic enough to be useful for many, and also specific enough to be useful at all. What that means is that you really need to strike the right level of abstraction. You can also be too generic, right?
If you build flexibility that nobody needs and nobody really uses, that's bad, because you're adding complexity and making the system harder to use. It doesn't add any value; you just get the added code base complexity and user complexity. And if you make it too generic, it also might not be useful: if it's so generic that nobody knows how to use it, it becomes pointless. So it's really a balance you need to strike between being generic and being a little bit specific. One example is the org unit hierarchy in DHIS2, which is very specific; it's not just a flat list. And some of the things we build are a little bit specific to the way things are done in health. Tracker, especially, is a little bit geared towards maternal health and child health, because those are the majority of the use cases. So you need to support your key use cases well enough, but of course without making it so specific that it becomes impossible to use for other domains as well. Striking the right level of abstraction is really key, and that requires a little bit of thinking, of course.

Okay, and towards the end, this is my favorite quote of all time: start simple. That is always the important thing. When we build things, we should not try to build something very complicated from the get-go. All complex solutions start simple; you have to start somewhere. So instead of trying to guess what people want, you should start simple, build what you know, and get it out. My preferred way of building software is: build what you know is right, release it, get feedback, and repeat. If you have this process where you gather information about what you know you need to build, about what you know people want, you build that, you release it, you get feedback, and you keep repeating that.
If you do that consistently, you greatly reduce the risk of making the wrong choice, of making huge mistakes in software design. And especially now that we have such a widely adopted platform, we can't really afford major mistakes. So we have to understand what we're doing and have some confidence that we're building the right thing. Following this principle, I think, reduces risk and increases the chance of building the right thing.

Just a couple of reasons why we start simple. You shouldn't make too many assumptions: if you build something very complicated from scratch, you're making a lot of assumptions about what people want. Building too much increases the risk of building the wrong thing, and it's a high mental load for developers. And we often see that users don't know what they want until they actually see it. It's hard for people to conceptualize complicated solutions: if you go into a room with five people and describe a complicated solution, there will be five different pictures in people's heads. People really like to look at real software, and then you can iterate and build new things in small pieces. So innovation really comes from prototyping, iterations, and getting feedback, right? Looking at real software really stimulates the mind and makes it easy for people to understand exactly how things work.

Okay. I don't know how we are on time. I think we can move over to the discussion part now. Christian, do you want to take any questions? Do you want to talk about challenges? Maybe take the question that also alludes to the challenges, because, you know, we need to get into the challenges. Yeah. So Knut, can you ask the question yourself?

Yes. So Lars, one thing we have, but that hasn't really been developed very much, is the relationship model. It's kind of been hanging there for years.
But in principle, and I'm not saying we should do this, but in principle it could allow you to have a virtual entity relationship thing where you kind of build your own model, right? I mean, if you have many, many relationships, then it becomes super complex, obviously. So what are your thoughts, because that would possibly allow us to go into more e-government things and do more complicated workflows in principle. But maybe it's stupid. So yeah, what are your thoughts?

Yeah, that's a good question. And maybe this is Mike and Mark's territory, and maybe you should ask the masters. But your point is good: if we had a more built-out relationship model, and we're talking about tracker here, tracked entities and how they relate to each other, then you could also model more complex use cases, right? You could build, like you said, virtually an entity diagram: you can say, here's an entity, here's an entity, and this is how they relate, almost like a database diagram. And you could support a lot by doing that. So I agree. The cost, of course, is that this becomes very, very complex when it comes to using that data. We know that we have struggled a lot to build a good relationship analytics interface, because you can imagine it becomes almost like a graph, like a graph database. For analytics purposes you need to ask questions like: how many people have a relationship with that type of entity, how many siblings, how many people are in that entity, adding filters on the different entities, navigating the graph, and so on. To me it's almost like a graph database, almost like a new product, to do all that stuff. It could be very complicated to get that right and do it well.
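The kind of graph-style relationship queries described above could be sketched, very roughly, like this in Python. This is a toy adjacency-list model, not a DHIS2 API; all entity types, relationship types, and names are invented for illustration. It shows why relationship analytics quickly starts to look like graph traversal rather than a table query.

```python
from collections import defaultdict

# Toy model: tracked entities with a type, plus typed relationships
# between them. All identifiers here are hypothetical.
entities = {
    "p1": "person", "p2": "person", "p3": "person",
    "h1": "household",
}
relationships = [
    ("p1", "member_of", "h1"),
    ("p2", "member_of", "h1"),
    ("p1", "sibling_of", "p2"),
]

# Build an adjacency list so relationship questions become graph traversals.
adjacency = defaultdict(list)
for source, rel_type, target in relationships:
    adjacency[source].append((rel_type, target))

def count_related(entity_type, rel_type, target_type):
    """How many entities of entity_type have a rel_type link to a target_type?"""
    count = 0
    for source, links in adjacency.items():
        if entities[source] != entity_type:
            continue
        if any(r == rel_type and entities[t] == target_type for r, t in links):
            count += 1
    return count

# e.g. how many persons are members of a household?
members = count_related("person", "member_of", "household")
```

Even this toy version hints at the complexity: once you allow filters on each entity type and traversals of arbitrary depth, you are effectively building a graph query engine, which is close to a product in itself.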
I think that's the reason why we haven't moved more on that: it's hard to get right, and it's a lot of work. But I do agree in principle that if we could do something like that, it could really open up new opportunities, if that's what we want, like e-governance, et cetera.

So how do we find out how far we want to push that? How do you think we should approach it?

Well, first of all, I think we should start by collecting some of the user stories, collecting some requirements: what are we trying to do, with some examples of what people would like to support. It's always good to look at something real. So collect some information and background on what we're trying to support; that would be a very good start. And then we could sit with the team and see how feasible it is to support some of this, and how it weighs against existing priorities. I would say that's the best way: engage the tracker team.

Okay, so, some of the challenges, and there are lots of challenges, right? We can open up the discussion now and talk for a long time, as usual. But some of the challenges that I see would be, one: data use. I think data use still isn't where it needs to be, like having people who regularly use data at the district level and facility level to improve outcomes, to improve planning, resource allocation, that kind of stuff. We need to improve, and also document outcomes better, so we can go back to the donors and explain that people are actually making an impact. Because, you know, people are starting to ask: okay, we're making these huge investments, but what is the impact? What are we getting out of it? So focusing on, and maybe even finding new ways to improve, data use is going to be very critical. And that's a multifaceted effort across software, implementations, configuration, training, et cetera.
Another big problem, and I think Ural and Rebecca will agree, is metadata management over time. That is a big problem. Some countries, like South Africa for instance, are very mature: they have review processes every year and keep things super tidy. In other countries we see very little discipline when it comes to metadata management. People just put in new data sets all the time, people don't reuse data elements, people don't reuse category options. So a lot of duplication is created: there are five data elements called Malaria, six category options with almost the same name, et cetera. There's no clear naming convention, no review of which forms should be in there. You end up with a messy database that people don't really know how to use. You hear many people say: I'd like to look at malaria data, but there are five elements called Malaria and none of them has national coverage, so I don't know how to do it. So more discipline, more structure, more governance around metadata management is going to be key. And again, this is a software issue, a training issue, a capacity-building issue.

Then, fragmentation of country systems, and this is interesting. What we see now in many places is a lot of DHIS2 instances popping up in different countries. I think Rwanda is the glaring example, where the Ministry of Health is running a lot of DHIS2 installations, and different agencies, like USAID and CDC, and NGOs and who knows what, also come up with their own DHIS2 instances in parallel. And of course that becomes a nightmare, because people don't really think about interoperability from the beginning. So they end up with different org hierarchies with different codes, different data elements, and so on.
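The duplication problem described above ("five data elements called Malaria") can be screened for with a very simple normalization pass. Here is a hedged Python sketch; in a real instance the element names could come from a metadata export, but the listing and the normalization rule here are invented for illustration, not a DHIS2 feature.

```python
import re
from collections import defaultdict

# Hypothetical data element names as they might accumulate over the years
# in an undisciplined instance.
data_elements = [
    "Malaria cases",
    "malaria cases ",
    "Malaria Cases (new)",
    "ANC 1st visit",
    "MALARIA-CASES",
]

def normalize(name):
    """Crude normalization: lowercase, collapse punctuation and whitespace."""
    return re.sub(r"[^a-z0-9]+", " ", name.lower()).strip()

# Group elements whose normalized names collide: likely duplicates to review.
groups = defaultdict(list)
for name in data_elements:
    groups[normalize(name)].append(name)

duplicates = {key: names for key, names in groups.items() if len(names) > 1}
```

A screen like this only flags candidates for a human review process; deciding which element is the canonical one, and migrating data to it, is exactly the governance work the metadata reviews are meant to do.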
And the integration between them doesn't really work, so bringing all this data together at the end of the day becomes hard. As Jörn likes to say, this is kind of interesting, because in the beginning, in the early days, DHIS2 tried to fix exactly this problem by bringing all these different systems together into one, so we could do integrated analysis. Now people are actually moving us back to the same place, with the same problems, just by using many DHIS2 installations. And that opens up a little bit of space for others, like Sanisys and others, to attack us and say: we can build this beautiful integrated dashboard on top of all your DHIS2 installations, on top of the mess, right? So we need to figure out how to govern how many DHIS2 installations a country should really have. And if they really need many, there should be some kind of higher-level data warehouse on top that can pull in the data sources again, so we can do integrated analysis across all of it, because this tends to grow out of control a little bit, at least in some countries that I've seen.

And the last one, not the last one overall, but the last one on this slide: aligning research and software development. I think we have a lot of potential when it comes to aligning the research team at the University of Oslo with the dev team. I sometimes like to say that the DHIS2 core team probably has the largest R&D department in Norway among tech organizations, but I don't think we're using it a lot. So we need to find a way for some of the research being done to actually feed back into software design and development. And I think we should make research more actionable, more applicable, and easier to understand, so that the software team can really use it. It can be hard for the software team to read some of these very academic papers.
So finding a way for information from the research to flow into the software design would be very good. With that, I'll stop. Jörn had some points that I've put up here, and if you have any questions we can take them; otherwise we can start the discussion.