Okay, welcome everybody. So today we're going to talk a little bit about how to get data out of DHIS2 in general, but also about the integration team we have, with me, Bob, and a few others. We're going to give an introduction to what we've been up to. It's a bit of a repeat from last year, but we also have some additional things, and we even have two use cases that we'll be presenting. One of our presenters isn't here yet but will be coming soon, and hopefully we'll also have our colleague who will present the Indonesia case. Okay. Very quickly, I'm going to give you an overview of some of the APIs available in DHIS2, just as an introduction, then a quick introduction to our Java SDK and what its purpose is. Then we're going to talk a little bit about the Apache Camel component we've made, and a couple of use cases on top of that. These are quite simple examples, but they give you an introduction, and later we will see a more complex example. There are also multiple examples in our GitHub repository if you want to look at those. And of course we're going to talk, as I said, about some FHIR integration we've done in Latin America through PAHO, for the ESAVI work, using a FHIR profile that is similar but not the same: a custom FHIR profile they have made. Then, as I said, our colleague will introduce a system they created in Indonesia, also for doing FHIR integration and a few other things, and he will show that later. And at the end, we're going to talk a little bit about the roadmap for the integration team and what we are planning to do over the next year or two. So this is the team. Bob Jolliffe, sitting there, is what you could call the product lead, or maybe the technical lead; these roles are not set in stone, but that's what we're calling it.
And we have Claude, who is not here but is in Malta doing some coding for us, so he's a full-time integration engineer working for us. And this is what I'm going to present today: the Java SDK, the Apache Camel component, and a few real-world implementations using those. We also have a junior integration engineer. I don't know if you can see him here in the room. Yes, there he is. So he's working elsewhere full-time, but he's also joining us a little bit from time to time. That's very nice; we need more people, definitely. So that's very good. I've just mentioned a couple of people here, but we have this weekly integration call, and maybe some of you are already on it; I think we have about 15 invites set up or something. If you want to be part of that weekly meeting, please just let us know and we can add you to the calendar invite. It's very open. And feel free to send us an email from time to time about upcoming projects, or if you want to know more about what we're doing. There are a couple of links here. One gives you a general overview of the integration work and what we've been doing, with links to a few projects. The other one is about FHIR: the current standing of the work on DHIS2 and FHIR. It gives a bit of a view of the current status of DHIS2 on FHIR, and that's something we have definitely been building on; we will talk a little bit about that later. So, just to mention them, and I'm sure most of you know this already, so there won't necessarily be examples of every one, but these are the three main APIs we have in DHIS2. You have, of course, the metadata API. I've given lots of links there, so once you get the slides you can click on those links and they will take you directly to the documentation.
Of course, metadata is the building block of DHIS2: data elements, organisation units, programs, data sets, and all that. We have a full set of APIs for that, and everything you can do in any of the apps is done through those APIs; the Maintenance app, for example, is only using the same APIs that you could be using. The metadata API supports a few different features. I'm not going to go into all of them now, but we will see a couple of practical examples later when we get data out of DHIS2. The two big ones for metadata are object filtering, for searching for particular objects, say objects with a particular code, or whose name starts with a given string, and so on, and what's called field filtering, which basically allows you to select exactly the payload you want to get out of it. Other than that, we have the aggregate API. The aggregate API is, well, for aggregate data. Whenever you go to the Data Entry app, that's also using the aggregate API. It allows you to send in both single value updates and full data value sets. Sorry, my voice is not great today. It's basically giving you a way of sending in aggregate data. There are links for it there, so you can go and look at it yourself. And feel free to use the email on the first slide; if you have any questions about these APIs, or you're struggling, please just contact us. And of course the last one is the tracker API. That's what a lot of people have been using lately, and it's definitely something you are going to be working with if you're going to do any kind of FHIR integration, for example, since that is more about patient data. And the big thing now is the new tracker importer; actually we've had it for many years.
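To make the object filtering and field filtering mentioned above a bit more concrete, here is a minimal, stdlib-only Java sketch of the kind of query string the metadata API accepts. The filter expression and field list are just illustrative values; in practice the SDK and Camel component build these for you.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class MetadataQuery {
    // Builds a metadata query like:
    //   /api/organisationUnits?filter=name:like:Clinic&fields=id,name,level
    // combining an object filter (which objects) with field filtering (which payload).
    static String orgUnitQuery(String nameContains, String fields) {
        String filter = URLEncoder.encode("name:like:" + nameContains, StandardCharsets.UTF_8);
        return "/api/organisationUnits?filter=" + filter + "&fields=" + fields;
    }

    public static void main(String[] args) {
        System.out.println(orgUnitQuery("Clinic", "id,name,level"));
    }
}
```

Appending this path to your instance's base URL in a browser (while logged in) is enough to try both features by hand.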
We've been calling it the new importer for the last five years, I think, so it's not that new. Okay, just the tracker importer, exactly, the new one. I don't know if you would call the other one legacy, but that's kind of what it is. It's a big deal because it's a completely new API. But luckily the domain objects themselves look more or less the same. There are a few changes here and there, but this should be a relatively simple upgrade job, I think, and it should be a lot quicker, hopefully. So it's a much, much more robust importer. I think we say version 40 is the one we call production ready. Yeah. Anyway, this is important to know, because if you're starting an integration today and you're going to keep it going for a few years, you might be targeting 2.38 today, but you might want to also target version 40. There is not much difference, but what's different from the outside is that the responses are modelled much more like the metadata importer, so you can get more exact errors and so on. That should be very interesting, so definitely something to look into. So, what do we use for DHIS2 integration? What is the stack? You've probably seen this before if you've been to any of our other presentations; we've been using it for a while now and it hasn't really changed. The core of it is Java. As DHIS2 itself is Java, this was a natural fit. We have played around with other things in the past, but we've stuck with Java now. And it's Spring Boot, which is a very common framework these days; if you have done any Java development lately, you have probably heard about Spring Boot.
It takes the Spring Framework and packages it nicely, allowing you to create executable Java applications. And then of course there's Apache Camel, which is an implementation of what's called the enterprise integration patterns. It's basically an integration engine that has, I don't know how many hundreds of components, but it has a lot of components that allow you to talk to Telegram, talk to Twitter, pull data from an FTP server, all kinds of stuff. And that framework is extensible, and this is where our own component comes in: it makes it easier to work with DHIS2, and you'll see that later. So we're just using the framework that Camel gives us to implement what's called an endpoint. We have played around a little bit with mapping languages; I think we're sticking with DataSonnet, at least for now. So either we're doing the mapping in Java or in DataSonnet. DataSonnet is just a language for doing JSON-to-JSON transformations. It's a neat little language. It has some limitations, so sometimes, and actually in what we'll show today, we only use Java. But it definitely is something we have been using; I think for the entire RapidPro project we used only DataSonnet. That's also something you can look into if you want, but there's no requirement from our side for that. We just give you a few building blocks for talking to DHIS2 and doing that a bit more elegantly, and it is something you can add to yourself. And of course Camel has a component for DataSonnet, so it's all neatly packaged together. ActiveMQ is again not a requirement; it's just one of those things that are part of the stack. We used it, I think, in the RapidPro project. I'm not sure, but we have been using it from time to time, and it's a nice little queuing system. It's very easy to embed in Camel, or you can set it up standalone.
And this is actually what we're also using inside DHIS2, if you didn't know: internally we're using ActiveMQ Artemis for audits. Now in 2.40 we also support what's called event hooks, and those again support ActiveMQ, so that's something that's now supported. We're not going to go through that today; I think there may also be a session about that, but it's important to know. So does this look familiar to people who have done Java development? Is it completely fresh, or is it okay? If you are a Java developer, at least, these should be very common things to work with. We do have a little bit of Java coming later. I will try not to do too much of that, because I don't think that's the most interesting part of this; we want to spend some time on the use cases. So what is the Java SDK? Well, it's basically a wrapper around HTTP requests that allows you to talk to DHIS2. It lets you set up basic authentication, so you have a DHIS2 client, and that client will contain the username and password, or maybe the PAT if you're using personal access tokens. And one thing that's kind of neat: as you will see here, it's a bit specific, but we are actually auto-generating the domain model of DHIS2. We're not taking it out of DHIS2 itself; we are regenerating it in our own system with some scripts we have made. That allows you to, instead of creating your own classes for this, just take the class that already exists in the SDK. You can get an organisation unit, and you have the fields there, and you can ask: is name present, is ID present, and so on, and then just get those values. So that helps you a bit. We'll probably change this soon to be version 40 only and not have .1, .2 and so on, because it doesn't really make sense; we're going to clean that up a bit.
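As a rough illustration of the two authentication schemes the client supports, here is a stdlib-only sketch of the headers involved. The username and password are the public DHIS2 demo credentials, and the token value is a placeholder; in practice the SDK builds and attaches these headers for you.

```java
import java.util.Base64;

public class AuthHeaders {
    // Basic auth: base64-encode "username:password", as the client does
    // when configured with a username and password.
    static String basic(String username, String password) {
        String token = Base64.getEncoder()
                .encodeToString((username + ":" + password).getBytes());
        return "Basic " + token;
    }

    // Personal access token: DHIS2 uses the "ApiToken" authorization scheme.
    static String personalAccessToken(String pat) {
        return "ApiToken " + pat;
    }

    public static void main(String[] args) {
        // Demo credentials from the DHIS2 play server; token is a placeholder.
        System.out.println(basic("admin", "district"));
        System.out.println(personalAccessToken("d2pat_example"));
    }
}
```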
And actually, I know Claude is working on, I'm not sure if it's fully finished, but Claude is also working on a way of generating this using the OpenAPI specification. Has anyone seen or heard about OpenAPI? That's something that's coming now in 2.40: we're giving you the full OpenAPI spec for the entire system, auto-generated and also published on a documentation site. So that's very interesting. Before, we were looking into the Java classes themselves, but now we are just taking the OpenAPI specification and generating these domain classes from it. And then we have the Camel component, which is the more interesting part. It uses the Java SDK for all the interactions with DHIS2, and you will see it specifies the actual client; we will see very soon how you create those. It has a few endpoints, or a few operations, you can use. I'm just showing you a couple here where you're getting an object. There are two ways of getting an object. You can get what's called a collection, which gives you an iterable, so if you're getting, for example, organisation units, you can iterate through them one by one instead of getting all of them at once. But if you use the resource operation, you get the full thing, and we will be using a resource today; we will see that later. And of course we have the fields parameter, which maps to the field filtering we already have in DHIS2. And when you use this endpoint and target one of these auto-generated classes, it will just automatically populate them. So I'm going to show you a couple of examples. This will be Java code, of course, so I'm sorry if not everybody will understand it, but I'm just going to quickly show you how to create these clients.
If you have ever used the HAPI FHIR client library, this will look familiar. You basically have a client builder; you create a new client and give it the base URL, including /api, just be aware of that, and then the username and the password. In this case, I just created two clients, one source and one target, and we're going to link them together. All this code is linked from the slides, so it's all available. And we have a very simple definition here: we're linking to the dev server and the latest 2.39. 2.39 will be the target, so in this case it will just take the org units from dev and put them into 2.39. They are the same to start with, but we will make some modifications and add some org units, and you will see how they are reflected in both systems. So, again, this is some Java code, sorry for that. This is how you define routes in Camel. The route that does something interesting is this one. It will be reading organisation units from DHIS2; we're turning off paging so we get all of them, and then we're selecting the specific fields that we want to use. As you see, we're using the resource operation we saw before, and we are targeting organisation units, and we are using the source client, which is the dev server. Then we are unmarshalling into the metadata class, which is just an object that contains all the classes of DHIS2; it has collections for organisation units, for data elements, and all of these things, so it allows you to wrap everything into one metadata package. Feel free to ask questions if there are any. Then we have a very simple target, and we don't really need to do much, because now that we have read these things in, the metadata is already in what we call the body.
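The fields selection used in that route is the server-side field filtering mentioned earlier. As a rough mental model of what the server does with something like fields=id,name, here is a stdlib-only sketch, simplified to top-level fields only (real field filtering also handles nested fields and presets):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FieldFilter {
    // Keeps only the requested top-level fields of an object, mimicking
    // what DHIS2 does server-side for a fields=id,name style parameter.
    static <V> Map<String, V> select(Map<String, V> object, List<String> fields) {
        Map<String, V> out = new LinkedHashMap<>();
        for (String field : fields) {
            if (object.containsKey(field)) {
                out.put(field, object.get(field));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // "ImspTQPwCqd" is the Sierra Leone root org unit in the demo database.
        Map<String, Object> orgUnit = Map.of(
                "id", "ImspTQPwCqd", "name", "Sierra Leone", "level", 1);
        System.out.println(select(orgUnit, List.of("id", "name")));
    }
}
```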
So you can basically write to DHIS2 without giving it anything more; it already understands that the data to send is in the body. Again, we are using the other client, the target. And just to show something, we are also exposing two web endpoints. One is just /ou, which gives you the organisation units directly in the browser, but through Camel: when you hit that endpoint, it will go to the source DHIS2 and then present them to you in the browser. And then we have one where we can trigger a sync. What we do now, when starting that route on startup, is trigger it once just to get an initial sync. Of course, here we could have a timer that did this every hour or so, but in this case we are just exposing it as a trigger instead. There are different ways of doing it; it's up to you, it depends on the use case, or maybe you will have a bash script or a cron job that does the triggering for you. And that's it. It's a very simple example, but it shows, in not many lines, about 50 lines or something, that you can read from DHIS2, reformat the data, send it to another DHIS2, and even expose web endpoints. So this is very simple code, hopefully, but it shows you the power of Camel and what is possible there. So, let me just log in here. Let me show you the two instances: this is the dev one, and this is the 2.39 one. What will happen now is that it will use dev as the source, take all those org units and send them to the 2.39 instance. So let's start it up. Now it's started, and it's doing an initial sync internally. But the org units are the same on both sides, so it isn't really doing much. So let's try to add a new one. Sorry for doing this directly on dev, but that's fine.
These are just the required properties of organisation units: you need the name, the short name and the opening date, and then you save. But of course, this thing is just sitting there, right? You can imagine having it on a server, in a Docker container or an LXC container, where it sits waiting for you to interact with it. What we do now is hit the trigger endpoint. So now it triggers, and what's interesting is that it even gives us a response from DHIS2. Of course, this is not DHIS2 itself; the request is actually going through Camel, talking to DHIS2, and coming back with the import report. You will see it created one, which makes sense, and updated everything else. Now if you go back to the 2.39 instance and refresh, you'll see that we have the test org unit there. Right. So this is how you can very easily automate that sync process. And we will talk a little bit about this on the roadmap, but we are actually going to make this into a synchronisation product at some point, where you can synchronise multiple DHIS2 instances, not only organisation units but also other types of data. So, any questions about this? The code is available, as I said, if you want to have a look at it and play around with it. Yep. Sure. Yes, yes, it's using the DHIS2 client that we created. If we go back to the code... Okay, I just want to repeat the question. The question is: is it going to the database, or is it going through the Web API? Yeah, in this case it's definitely going through the Web API. Of course, you probably don't even want to have this on the same instance where you have DHIS2 running. So this is all using the API; that's why we're setting up the source and the target using HTTP. Yes, we are only using what's already there in DHIS2; there's no magic really happening here.
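The import report shown in the demo ("created one, updated everything else") essentially boils down to a tally of per-object outcomes. As a hedged, stdlib-only sketch of that summary step, where the outcome labels are illustrative rather than the exact DHIS2 report schema:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ImportSummary {
    // Tallies per-object outcomes ("CREATED", "UPDATED", ...) into the kind
    // of stats block an import report presents back to the caller.
    static Map<String, Long> stats(List<String> outcomes) {
        Map<String, Long> counts = new TreeMap<>();
        for (String outcome : outcomes) {
            counts.merge(outcome, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // One new org unit, the rest unchanged and re-sent as updates.
        System.out.println(stats(List.of("CREATED", "UPDATED", "UPDATED", "UPDATED")));
    }
}
```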
It's just a wrapper for HTTP requests, basically. I mean, it does more than that, but at a really simple level you can look at it like that. The same way you would go to your browser and request /api/organisationUnits and set paging=false or a fields parameter, this is basically what it's doing for you behind the scenes. Exactly that. Yeah, definitely. Oh, so Camel of course has components for Postgres and SQL in general, so that's definitely something you can do. Of course, then you probably don't want that remote; you want something living on the same server, because of security and everything, but it's definitely possible. Because DHIS2 is just one thing; maybe you have another database as well, right. And yes, of course, you can imagine pulling from the analytics tables, for example. If you're pulling a lot of data, it might be a lot quicker to just do that through SQL directly, and you can still use Camel: you can wrap the result into a nice object and then send it somewhere else, or whatever you want to do with it. So that's definitely possible. Yeah. So, okay. Sorry, go ahead with the question. So basically, you want to use GraphQL to query DHIS2, is that what you're saying? Yes, the question is: can you set up Camel in a way that it acts as a facade over DHIS2, but provides a GraphQL view? That's definitely possible. I'm pretty sure, Bob, that Camel has GraphQL components. Yeah. So you could do that; I wouldn't be surprised if it's quite similar to this REST setup we have here, just with something similar for GraphQL.
You can basically expose it; you might have to do a bit of processing on the data, because it's not the same format, of course, so you will have to translate it, and you will see an example of that in the next example, where we read from DHIS2 and convert it to the FHIR format. So that's definitely something you can do, and it's a very common thing to do, actually. One of the things we have done in some of our other projects is that we have exposed another web interface, which provides you with FHIR, but is actually pulling from DHIS2. You can do the same with GraphQL, where you expose a GraphQL view. You can even talk to three different instances and serve all their data from one place; that's definitely possible, that's exactly the kind of thing this stack is made for. So it's 100% possible. Let me close down this one. I'm now going to show you a very similar example, but this time we're going to do some mapping of the data. We're going to transform the organisation units into a very simple FHIR representation using Location and Organization, which is the way FHIR represents health facilities and so on. Let's see. Everything here, except for this one part, is the same. It's the same project, the same setup of the source and everything. There are a few small differences, but you can look at those yourself. We're setting up the same thing here: we're doing an initial sync, and then we also expose it through a trigger, as before. So you see that it's very, very much the same. What has really changed is here. You see I do a little bit of ordering and so on, but that's just normal DHIS2; I'm just selecting levels one and two. No paging, but you don't even need paging here because there aren't that many. And then we are reading from DHIS2, the source, the play server, and we're doing the same as before.
I'm actually using the metadata class again. The new thing here is that we're now converting the body. This is basically a way of taking one format and outputting another format, and we will of course look into that class and see how it does it. The rest is pretty standard. It's actually using, so the FHIR people themselves have also created a component for Camel; as I said, there are many, many components, several hundred. Here we are actually sending all of those organisation units to FHIR. We start up a FHIR server in the background, and then we basically send everything there. And of course we do the same thing as before: we are exposing it as an API. This is just a typical FHIR endpoint; I'm calling it /baseR4/bundle, but you could call it whatever you want. And then we have another one for triggering the sync. Given the time, I shouldn't go into too much detail, but I'll show how it's working. I need to find... okay. So this is how you run HAPI FHIR in Docker. It's a very nice way of running it, actually. They have their own HAPI FHIR jar file you can also download, but this is the preferred way of doing it, honestly. So, who knows FHIR here? Is that something people have been looking into? Yeah, I see more hands raised than for Java. So FHIR is kind of the new hot stuff, right? Whenever people talk about health interoperability, FHIR is something that comes up. So before I run it, I will actually show this. As I said, this is more of a sketch of a system, but we do have something new here, and this is what's called a type converter in Camel speak. What we're doing is basically, if you know FHIR and how the JSON payloads are constructed, we're taking the HAPI FHIR client library and we're constructing the objects with it. This is why I like using Java in this case: because we know that the JSON representation will be correct. Right.
Because it's using the HAPI FHIR libraries themselves. You could of course use DataSonnet, and I think we even have examples of that from last year's conference; I think we still have those online, and probably the slides too. But again, depending on your complexity, it might or might not make sense to use another tool. And there are of course more options than DataSonnet; there must be many, many tools out there, you can even use JavaScript if you want. That's really up to you, but in this case I'm using Java. So, not going into the FHIR details, but this basically constructs the entire object, and you end up with a bundle, that's what they call it in FHIR, which contains all the locations and all the organizations, and they are linked together. And when we are adding this, we're doing a PUT and not a POST. That allows you to send the same data multiple times: we are searching for the identifier here, and if it exists, it will just update it, and otherwise create the location. That's how it's handled. And you will see here, I talked about these generated domain objects, right? You will see that this organisation unit is actually using the organisation unit class that we have generated. It allows you to get the code, in this case, and it returns an Optional, so you can just check whether that value is present or not. That allows you to do conditional mapping depending on what's there. Okay, let's run it. Again, it does the same as before: it will run once on startup. So let me just open this up. This is our FHIR server; it's up and running on port 8081. And you can see here, if you go to Location and search, it can be a bit slow sometimes, you'll see there's absolutely nothing here, because I haven't done anything yet. This is a completely empty FHIR server. We'll go back here.
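The Optional-based conditional mapping described here can be sketched without the SDK or HAPI FHIR. In this stdlib-only sketch, the OrgUnit record stands in for the generated SDK class, and the FHIR Location is reduced to a plain map (real Location resources have richer identifier structures), so treat it as a mental model rather than the real converter:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

public class LocationConverter {
    // Stand-in for the auto-generated SDK class, whose getters return Optionals.
    record OrgUnit(String id, String name, Optional<String> code) {}

    // Builds a minimal Location-like structure; the identifier is only set
    // when the org unit actually has a code (conditional mapping).
    static Map<String, Object> toLocation(OrgUnit orgUnit) {
        Map<String, Object> location = new LinkedHashMap<>();
        location.put("resourceType", "Location");
        location.put("id", orgUnit.id());
        location.put("name", orgUnit.name());
        orgUnit.code().ifPresent(code -> location.put("identifier", code));
        return location;
    }

    public static void main(String[] args) {
        // No code on this org unit, so no identifier is emitted.
        System.out.println(toLocation(
                new OrgUnit("ImspTQPwCqd", "Sierra Leone", Optional.empty())));
    }
}
```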
Let's go in here and start it up again. So what's happening now? It's going to DHIS2 and getting those organisation units. And now it's done; it has done the initial sync already. We can just confirm that. Sometimes it takes a while for it to show up. Oh, maybe I didn't do the initial sync here, sorry. Okay, let's just go to the trigger and trigger it. So this time the endpoint is FHIR, and you will see here we have all the locations from DHIS2: Sierra Leone, and the test org unit we made before, and a few other ones. That's levels one and two of DHIS2. So this is very simple, but what we have done is kind of complex, right? We've gone to DHIS2, we've gotten a huge payload of organisation units, we took them into Camel, we converted them to a completely different format, and then we sent them to a FHIR server. A very typical use case, very typical of what you will probably be doing a lot if you're going to use Camel, and this is a very nice way of doing it. This is also our general approach to our FHIR integrations, and we will see it in the PAHO use case as well, which I will switch over to very soon: we are using Java for everything, constructing all the objects in Java, which can get a bit messy, but it definitely can be done. So that's just how it is. Let me close down this one. Okay. So now we have Enzo, and he will present a little bit of background information about the PAHO use case. This is the first of our two use case presentations. Should I practise how to use the microphone first? There we go. Thank you, Morten. You're so popular, you have people sitting on the floor. There are a couple of seats here, I see one there and one up there, if you want to join, also one here, because that is a hard floor, so feel free to move. I'm not going to talk about what Morten has been talking about, because that's why we have him around.
But I'll tell you a little bit about what the PAHO ESAVI use case is. And when I'm talking about ESAVI, I'm talking essentially about AEFI with a focus on the PAHO region. With the new COVID vaccines, things were getting approved a lot quicker, and there was a lot of scepticism amongst the public. So there was a higher focus on vaccine safety and vaccine surveillance, on making sure that these vaccines were safe and good for the population, and on keeping the public at ease that everyone was doing the best they could to look after these things. Already when looking at the manual for how AEFI was processed in the region, we have one very clear initial difference: just the name. In most of the world, we talk about AEFI, adverse events following immunization. And for those of you who don't know, PAHO is the Pan American Health Organization; they are essentially both their own organization looking at health in the Americas, and also the WHO for the Americas. The name for AEFI there is ESAVI: events supposedly attributable to vaccination or immunization. So already in the name we have some differences, and there are a few other differences in the actual guidelines, but mainly they are based on the WHO guidelines. The first thing they did was to go around the different countries and ask what systems were in place for recording and analysing AEFIs. They found that 62% were taking everything on paper and then transcribing it into a spreadsheet. So Excel is king, as usual. But they also found a lot of isolated systems that were quite fragmented, and a few centralized web-based systems out there. In general, it was not looking great. So PAHO started a quite large project to work on this, and one of the components is DHIS2, in particular for those countries that did not already have a system of any kind. They divided the project a little bit in two.
On the one hand, we already have a metadata package from WHO following the general guidelines for AEFI surveillance. So what we did is that we grabbed that package and essentially adapted it to the guidelines from PAHO, right? The 25 core variables that the AEFI package has turned into 33. They also wanted to include standards in the package, which was a whole thing, because you can't really include standards in something you are publishing when a lot of those standards are actually under proprietary licenses, et cetera. So that complicated things a bit. But the main differences from the generic WHO package are those two things and the addition of an investigation stage. The WHO package assumed that when you were reporting an AEFI, the investigation would happen elsewhere and would not be recorded in DHIS2; PAHO expected that to also be recorded there. So that increased the complexity. The investigation stage itself is longer than all of the other stages combined, I think. And after a while, we realized that it's a really big use case for a country to start from, so a lot of countries were a bit hesitant to go straight to generating their own national instance. So they essentially divided it in two. On the one hand, they had active surveillance, which was looking into AEFIs and AESIs, adverse events of special interest, so a lot more data recording. Those were called sentinel deployments: they had a few hospitals in a lot of different countries doing this specialized active surveillance, used as sentinel sites. And in addition, they had passive AEFI surveillance, which is for national deployments, essentially a metadata package based on the PAHO guidance. All right. Now, between the national instance and the sentinel one, here's the main difference.
The main difference is that each notified event gets divided according to whether it's an ESAVI, an adverse event following immunization, or an adverse event of special interest, and then it gets classified at the end, depending on the whole process and what happens in the investigation stage. The national package, the one that countries are required to use, is a bit simpler. It only has three stages, but there's still some complexity there as well.

And this is where it gets interesting, perhaps, for you guys. The idea was that not everyone is going to be using DHIS2, of course. We also have VigiFlow in the mix, which is a program used to send data up to the global repository for vaccine safety. The idea was that countries with a national DHIS2 or other systems would have to be able to combine the data from those sentinel sites and send it all into a PAHO repository, and the idea was to use HL7 FHIR in the middle to make sure that everything was understood there. We still haven't quite gotten there. Right now they're doing imports: they're essentially sending CSVs into their data warehouse. So we're not quite there yet, but we're very close; that's essentially how we have envisioned it going.

Currently, it's going a bit slower than we expected. Most countries think this is very important, of course, but when you look at the list of all the things they have to do, it's perhaps not the most urgent thing to suddenly start working on, especially now that the big drive for COVID vaccination has died down. But there are more and more countries getting involved. Brazil and Paraguay are using it in the sentinel instance, the regional DHIS2 instance that PAHO has. Bolivia has approved their pilot, as well as Suriname, which I didn't put in there. And Honduras has also started working both with the sentinel and with their national instance.
In addition, we have Ecuador, who looked at our metadata, looked at PAHO, and said: we're doing our own thing. So they have their own configuration for ESAVI, which follows the guideline but which they have built themselves within DHIS2. And the rest have, of course, a variety of other things. I think that more or less gives you an idea of what's going on there. I don't know if there are any questions, Morten, or anything I missed. We're good. Excellent.

Right, in this case they are all using individual data, so this is not aggregate data. They are essentially sending the cases directly; they are not sending any type of aggregated indicator into the repository, they're sending each case right there. So the denominators more or less stay the same, because we don't even deal with denominators in the regional instance at all. Each person has a couple of unique identifiers, and that's essentially what's used there, but I don't think that has been an issue we have considered that much. This is tracker data, so one person has a profile and it's all linked to that person. But if the same person has multiple AEFIs, they would be counted as separate AEFIs, because we're looking at the AEFIs there.

And yes, there is a mandate that countries have to sign; they have to agree to do this, of course. It's not like PAHO can force them, but PAHO does have a repository for their data. Or it should be pushing; right now it's more that they're uploading the CSVs. The main thing is that what's recorded in VigiFlow is not necessarily the same as what is being recorded in DHIS2. And that's something a lot of countries have requested: we're going to have to put it in VigiFlow anyway, so bring it back to us, give us an XML that we can import.
And ideally that would happen, but it's just not part of the project scope. So, quite quickly now because of the time, I'm going to show you the current status of the project. What PAHO has done, well, not PAHO, but one of the HL7 organizations in Latin America working with PAHO, is to create what's called a FHIR profile. You can compare it to metadata in the way you structure things, saying this is required, this is required, and so on. And they're using something called a Questionnaire, which is a very basic FHIR type for collecting data, and then you send back a QuestionnaireResponse to that Questionnaire.

The approach we are taking is more or less exactly what we are going through right now. You will see, starting with the route itself, it's a little bit more complex this time, as we are also doing a bit of prefetching. For example, as you know, in tracker data you have the data element and you have the value, but that value is just a code, and you might want the name for that. So you want to reverse it: take the code and convert it back to the name, or maybe a translation. So we're prefetching a few things here that we can use as a reverse lookup table, so you can go back from the code to where you came from. We do this particularly for two things. One is WHODrug, which is basically a huge terminology list of drugs, of course including vaccines. And the other is MedDRA, a classification system for ESAVI cases: if you have a skin rash or a high temperature or something, there will be a code for that in MedDRA. That's why we're prefetching those. Apart from that, you will see something very familiar here if you go down to the actual route that does the fetching.
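The prefetch-and-reverse-lookup idea just described can be sketched in plain, dependency-free Java. This is a hypothetical illustration, not the actual route code, and the option codes below are made up, not real MedDRA or WHODrug entries:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Prefetch a code list once (e.g. a MedDRA or WHODrug option set), then use
// it to translate tracker values (codes) back into display names.
public class CodeLookup {

    // Tiny stand-in for one entry of a prefetched option set.
    record Option(String code, String name) {}

    private final Map<String, String> codeToName = new HashMap<>();

    public CodeLookup(List<Option> prefetchedOptions) {
        for (Option o : prefetchedOptions) {
            codeToName.put(o.code(), o.name());
        }
    }

    /** Returns the display name for a code, or echoes the code if unknown. */
    public String resolve(String code) {
        return codeToName.getOrDefault(code, code);
    }

    public static void main(String[] args) {
        CodeLookup lookup = new CodeLookup(List.of(
                new Option("0000123", "Rash"),
                new Option("0000456", "Pyrexia")));
        System.out.println(lookup.resolve("0000123")); // prints Rash
        System.out.println(lookup.resolve("9999999")); // unknown: prints 9999999
    }
}
```

The real route would populate this map from the DHIS2 option set API at startup and consult it while converting each tracked entity value.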
So we're basically just setting things up again: we have a specific program in DHIS2, and we're saying the organisation unit mode should be ACCESSIBLE, which means: give me whatever I have access to. In this case we just use a page size of one; this demo server doesn't have too much data, so that's why we hard-coded it to be a bit specific. We actually had a connectathon in Bogotá, Colombia, and there were some exercises there, so we're using those exercises as a way of getting the data and transforming it. These are pre-made examples, so this is not real data I'm going to show you. And then we have, of course, the field filtering. We have not been very specific here; we will of course update that to be a bit stricter later on. But again, here we're doing the very simple stuff: the resource is tracked entity instances. Remember, this is the legacy importer, so it uses trackedEntityInstances; in the current one you would use the tracker endpoint instead.

And then we don't do much. In this case we are not using the generated classes; we've been a bit more specific and created some classes using Lombok, which gives us a very quick way of writing DTO classes, so we're just converting the payload into those. And we have a converter body; that's what does the actual work. Then we're saying we want FHIR version R4 out of that, and we are also exposing a QuestionnaireResponse endpoint, so you will see how that works. In this case we could of course also have sent it to a FHIR server, but we don't really have an active FHIR server right now that we could use for this, so we're just exposing it as an API. But that's definitely coming soon. So the start of the converter is quite similar to what you saw before.
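As a rough illustration of the route just described, here is a hypothetical sketch in Camel's XML DSL. The endpoint URIs, the dhis2 endpoint parameters, and the converter target type are assumptions loosely modeled on the DHIS2 Camel component and the camel-fhir data format; the exact option names should be checked against the component documentation:

```xml
<routes xmlns="http://camel.apache.org/schema/spring">
  <route id="esavi-questionnaire-response">
    <!-- Expose an HTTP endpoint; host, port and path are illustrative -->
    <from uri="jetty:http://0.0.0.0:8080/baseR4/QuestionnaireResponse"/>
    <!-- Hypothetical fetch of tracked entity instances for one program,
         org-unit mode ACCESSIBLE, tiny page size for the demo server -->
    <to uri="dhis2://get/resource?path=trackedEntityInstances&amp;client=#dhis2Client"/>
    <!-- Hand the DHIS2 payload to a registered type converter that builds
         a FHIR R4 bundle, then marshal it to FHIR JSON -->
    <convertBodyTo type="org.hl7.fhir.r4.model.Bundle"/>
    <marshal><fhirJson fhirVersion="R4"/></marshal>
  </route>
</routes>
```

The point the sketch tries to make is the talk's own: fetch from DHIS2, convert with a type converter, expose an endpoint, all in a handful of declarative lines.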
This is what does all the magic, if you look at it, but it's basically just wrapping things in a bundle as we did before, with the entire case as a QuestionnaireResponse. You can have one case or many; in this case we just have one, but you can have many, many cases. Okay, I should be done in four minutes, so I'm going to go through this one quickly. There's quite a lot, but the basic building blocks are exactly the same. As before, you create a bundle, except in this case we are not creating Locations and Organizations, we are creating QuestionnaireResponses.

These classes are not something I made; they all come from the HAPI FHIR Java library. It makes it very quick and natural to create them, although some of the names can be a bit of a mouthful. You can see it has some really long ones, QuestionnaireResponseItemComponent, for example, but it's still a nice way of doing it. And it helps you with misspellings and things like that, so you don't have to worry about them, which can be positive or can be negative. But you will still have to deal with the processing of the data yourself.

And of course, in this case the actual questionnaire is in Spanish; that's why we have a little Spanish in this file. It is quite a lot; I should probably split this up at some point. It's over 1,000 lines right now, and it will be much more when it's done, I think. But you see the general approach is the same; that's kind of the point. You're just using the DHIS2 Java Camel component, getting data from DHIS2, using a type converter, and exposing an endpoint. And if I were to send to a FHIR server, it would be one line; that's all you would need to change. So let's start this up; hopefully the service is behaving.
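For orientation before the demo output, a QuestionnaireResponse wrapped in a bundle, as described, has roughly this JSON shape. Every identifier, linkId, and system URI below is made up for illustration; none of it is PAHO's actual profile:

```json
{
  "resourceType": "Bundle",
  "type": "collection",
  "entry": [
    {
      "resource": {
        "resourceType": "QuestionnaireResponse",
        "questionnaire": "Questionnaire/ejemplo-esavi",
        "status": "completed",
        "item": [
          {
            "linkId": "vacunas",
            "text": "Vacunas administradas",
            "answer": [
              { "valueCoding": { "system": "urn:example:whodrug", "code": "0000123" } }
            ]
          }
        ]
      }
    }
  ]
}
```

Each DHIS2 case becomes one QuestionnaireResponse entry; a bundle can carry one or many of them.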
Yes, they're all linked in the presentation. All right, so we have a couple of Postman URLs there. I just want to show you that this is actually the same URL that is being constructed for me inside Java, so there's no magic here; we're just using the API, and this is just normal DHIS2 data. But of course the point was to transform it, and we have now exposed an endpoint here: /baseR4/QuestionnaireResponse. So what is this doing? In the background it will go to DHIS2, get the data into Camel, convert it completely into FHIR-compatible JSON, and then expose that directly as an endpoint. And you see that it works fine. It has all of this generated stuff; it's not something you would normally want to generate yourself, which is why we have this very nice HAPI FHIR library, which generates these things and makes sure they are valid.

So again, this is the approach we're recommending now. This is a very simple way of putting a facade on DHIS2, and you can expose many other APIs if you want. And of course, you could also send it on to a FHIR server, where you can do whatever you want with it; that's definitely a recommended approach. And you're getting the power of Camel, the error handling and everything, for free, which is nice. I think we need to stop there and go over to the next use case, which is Taufik's.

Sure, sure. Is it extractable? Oh, no, no, this is nothing like that. It's just a very simple questionnaire that they have created. It does have some links to terminologies and so on, but it's implicit from the QuestionnaireResponse itself what it is. So no, we have not created those questionnaires ourselves, and I wouldn't call it a profile.
I don't think people would call these questionnaires profiles, but it's basically the format that has been handed to us; that's why it's in Spanish. I would not have selected Spanish if I was creating that questionnaire properly. Okay, Taufik, are you ready?

Hi everyone. Yeah, we want to share the interoperability journey in Indonesia, especially around integration and interoperability. This is the process of what we are doing right now: how to use DHIS2 and FHIR, and how to build the ecosystem in Indonesia. So first, something about DHIS2 in Indonesia. Since 2016 we have implemented DHIS2, especially for aggregate data. You can see the use case in our paper here: how to develop information from data integration into data aggregation and how to use it in a dashboard. In 2019 and 2020 especially, we used DHIS2 and moved to individual data records, meant to escalate the information used for COVID-19 contact tracing, with around 700 active users every day. That was our difficulty with DHIS2: the load became very, very big, and the biggest issue was the cost.

Building on the COVID-19 experience, in 2021 our ministry established a new office, the Digital Transformation Office. I trust you already know this. The direction was to use individual data for all activities, not only information from aggregates. So we had the struggle of how to use DHIS2 for aggregates and how to utilize individual data from many perspectives. DHIS2 is still running currently for malaria: we received support from development partners and established a malaria use case with individual data, and it now feeds the national dashboard, which is currently the aggregate one. So what does that mean?
The implementation around DHIS2 is currently being transformed into SATU SEHAT, as we call it. You can see it at dto.kemkes.go.id, where we share information about the SATU SEHAT implementation. This is the current condition: DHIS2, SATU SEHAT, FHIR and everything must be combined and mixed together. So what is the motivation here, especially from our minister? He wants to see information not only as aggregates, because we have struggled with how to merge and reduce data capture and data collection across many applications. Based on that situation we ran a survey: there are a lot of applications at the national level, around 400 applications collecting data from individual data sources. He hoped that from the beginning of the cycle we could get the individual attributes, individual cases and individual services, and use that information for the aggregates.

So we established something like this, and you can see the most important parts here for Indonesia. Previously, many years ago, we shared information about Indonesia and mentioned the thousands of islands; now you can see the number of currently active healthcare facilities. Health data must come not only from aggregates but from individual records. We established FHIR nationally, and second, we established standardization for medical terminology. We embed ICD-10, ICD-9-CM (clinical modification), SNOMED CT, LOINC and DICOM. We have already succeeded in subscribing to the international SNOMED CT implementation. And we embed services, especially things like health facility master data for patients: we optimize with a master patient index, and on the facility side with a facility registry, health worker registry, pharmacy, device and equipment, health cost and services.
So this is the important part of what we are currently doing. And the next good thing from Indonesia: this is not only about the technicals, but also the regulations we have established. There is, for example, the regulation on medical records. If you remember the one application from Indonesia, PeduliLindungi, the Satu Sehat Mobile app is part of this area. Standardization, research and policy: these are the regulations supporting the implementation. This is that part of Indonesia.

So how do we establish FHIR? In Indonesia we establish FHIR not only for the implementation; we start from the development phase. This is for individual data, as you will see. We implement seven steps in the development phase. The first is to define the use case, because we have limited resources: since COVID-19 we have had only three people with expertise in FHIR. That's why we start by defining the use case. We started with a simple EMR for inpatients and outpatients, and then other use cases from the health programs, for example TB case management, TB supply chain and malaria, and we are moving to the MNH situation.

The second step is to assess the variables. It means we know from the beginning which variables we have to observe: the same attribute or element must serve another application or another program, so we have to assess this. And the third is to create the FHIR profile. We receive a lot of invitations, for example to subscribe to applications for this, but the cost is an issue and the team is only three people, so we use Google Sheets to draft and modify things. And the next step is about technical guidance.
We publish the technical guidance on Simplifier: how to develop, how to use Postman, how to use the web API bundle, and so on. Then there's testing: this is the step where we establish testing and create champions. We have relationships with university research centers in Indonesia, and we calculate how many people need to receive training on SATU SEHAT and FHIR, for example. This is the concept of the champions. And next is the review: we run the cycle again and publish to Simplifier for others. You can see a sample from our Simplifier here; this is the published version. So if a research center or a hospital wants to establish a FHIR instance, they can copy the metadata, and they can also allocate their resources to offer revisions and suggestions to the national team.

So what about HISP Indonesia? To support SATU SEHAT and FHIR in Indonesia, we built an application called JUMPA Doctor. There are a lot of applications in Indonesia, but why do we use JUMPA Doctor here? One reason is that we want to establish DHIS2 for individual data; that's the motivation behind this implementation. JUMPA Doctor has a web dashboard, patients can download the app for Android and iOS, and the medical doctors and healthcare workers use the web dashboard. JUMPA Doctor is based on DHIS2 Tracker, and we established the connection to the SATU SEHAT national implementation, copying the metadata directly from Simplifier. And how do we do this from DHIS2? The method is simple: we make a mapping from DHIS2 to the FHIR resources, for example Patient, Organization, the diagnoses, and other services. And the next part is very important.
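The DHIS2-to-FHIR mapping step just mentioned can be pictured with a minimal, dependency-free sketch. The attribute names ("fullName", "gender") are hypothetical stand-ins, not the real SATU SEHAT metadata, and the plain Map stands in for a real FHIR Patient object:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Copies a tracked entity's attributes into a FHIR-Patient-shaped structure.
public class TrackerToPatient {

    /** Maps tracked entity attributes (by illustrative name) to Patient fields. */
    public static Map<String, Object> toPatient(Map<String, String> attrs) {
        Map<String, Object> patient = new LinkedHashMap<>();
        patient.put("resourceType", "Patient");
        Map<String, Object> name = new LinkedHashMap<>();
        name.put("text", attrs.getOrDefault("fullName", ""));
        patient.put("name", List.of(name));
        patient.put("gender", attrs.getOrDefault("gender", "unknown"));
        return patient;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = Map.of("fullName", "Siti A.", "gender", "female");
        System.out.println(toPatient(attrs));
    }
}
```

A real implementation would use a FHIR library's Patient class and a configurable attribute-to-field mapping rather than hard-coded names, but the shape of the work, attribute in, resource field out, is the same.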
We can share our resources from SATU SEHAT, from the JUMPA Doctor tracker, to other applications. So this is not only DHIS2 to FHIR; we can also replicate to other applications we have in Indonesia. This is the final part for FHIR: the process and how we store it. I think it's over to Morten. This is my last slide.

Yes, the mobile app for the doctors goes to the API directly; we utilize DHIS2 as the core of the mobile app, so we build only the user interface, and the API goes directly to our DHIS2 instance. One thing is that the national FHIR service previously did not allow getting data out, but now, due to ethics and privacy, we want to establish privacy properly. And you can see the address of FHIR for Indonesia; this is the national implementation right now. Thank you.

Thank you, Morten. Thank you, Taufik. This is Morten's slide; it's just a way for us to engage a little bit with you in terms of what our thinking is on how to plan the year ahead, and an opportunity to get some feedback and some critique. It's always great watching Morten give a presentation: he takes the men and women and separates them from the boys and girls, and they leave the room early, so we're left with the serious people. But yeah, this is the way we've been thinking about our roadmap. We want to finalize it in the next week or two, but to highlight things tentatively: the first point is just the boring stuff, taking some of the things we've already been doing and getting everything updated. Some of you have heard about the OpenAPI specification that's been worked on by the DHIS2 core team. One of the benefits of the core team developing this OpenAPI spec is that it opens up lots of possibilities from an integration perspective, in terms of interrogating the API and being a bit smarter about using it.
Then webhooks, or, we're not allowed to say webhooks: event hooks. A new feature in the core, and something people have been asking for forever: how can we listen to what's going on within DHIS2? Think about an API: it just sits there. I like to use the analogy of ice cream. You have all these systems sitting there exposing APIs, but they're passive; you need something that's going to lick the ice cream, and that's usually what your middleware does. If you have no way of actually listening to events, it means you're doing things like polling, checking every five minutes: is there a new org unit now? Is there a new patient enrollment? So it's very useful to have the ability to register yourself as a listener to events that take place within DHIS2. I don't think it replaces polling; sometimes it complements polling, because the thing about webhooks is that they can fail, and sometimes they can't do anything to clean up afterwards. But anyway, this is new functionality in the core, and we want to make it really, really easy for people, if they're using the Apache Camel component for example, to take advantage of the new event hook functionality. The kind of thing you can do: there's a Kafka target, for example, so you can register interest in something like a new org unit being created, or a change in an org unit, and just get that placed onto a queue; then you've decoupled the problem, and it's someone else's concern to decide how they're going to dequeue it. That's a big part of our roadmap.

The next thing coming into core is the route API. Even if we look at our Camel components ourselves, particularly some of the examples Morten showed where he's written a Camel route that actually exposes an API itself.
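As an illustration of the event-hook idea from a moment ago, registering a hook could look something like the JSON below. The exact payload shape, field names, and target options should be verified against the DHIS2 event hooks documentation for your version; treat everything here as an assumption:

```json
{
  "name": "org-unit-changes",
  "disabled": false,
  "source": {
    "path": "organisationUnits",
    "fields": "id,name,lastUpdated"
  },
  "targets": [
    {
      "type": "kafka",
      "brokers": "localhost:9092",
      "topic": "dhis2-org-units"
    }
  ]
}
```

The appeal is exactly the decoupling described above: DHIS2 pushes changes onto the queue, and the consumer decides independently how and when to dequeue them.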
Ideally we want to be able to route all of those APIs through a common kind of gateway. We want to get into OpenHIM territory in a way; that's sort of what OpenHIM does to a certain extent, routing to different back-end components.

Then the integration middleware itself. I should make a point about this: it's kind of opinionated. Not everybody loves Camel; I don't even love Camel. I recognize that it's really, really powerful and you can do some really good stuff with it, but people are also doing really, really good stuff with Node.js, not so much to be honest, and with Python we've seen some; I think we're going to see some examples on Wednesday of work Wahit has done based on a completely different framework, Apache Airflow. There are many frameworks out there, but we just wanted to pick one so that we can build it out in such a way that someone can take it and work with it.

The penny dropped, I guess, about two years ago. We'd had an integration team for years, and we spent a lot of time talking amongst ourselves and to some selected partners. The penny dropped that we don't do most of the integration; most of the integration happens out there, in fact often within our own HISP groups, our own HISP network. So rather than us trying to solve all the integration problems, let's make a really robust toolkit that people can take up and use. Nobody's obliged to take it up, but it's useful for folk to know that it's under active development and that we take full ownership of it. And the nice thing about basing it on something like Apache Camel is that it's not a Mickey Mouse thing: it's used in the financial industry and in airline booking systems; I've seen Camel running in the National Health Service, I think; even some of our friends over in OpenMRS have made an OpenMRS Camel component.
So I'm actually looking forward to getting together with them and joining our camels together. But yeah, we're going to continue building and strengthening the tooling that's there, and we want to build in a few things. What we have at the moment is a really quite interesting code repository, if you're into that kind of thing, of lots of Java code snippets and examples, the idea being that people can take some of those, like Vlad, who wants to go and study examples now, and build their own. I think that's still the intent and the hope, but we also want to build in a bit more functionality that comes out of the box, so it's not just a fine-grained Lego set but there are some big pieces of Lego as well.

One of those functionalities, again a requirement for many years, is the supposedly simple problem of synchronizing organisation units, where you quickly get into facility registry discussions and things like that. We know that, whether we like it or not, people will use DHIS2, particularly the aggregate HMIS, as their de facto facility registry, or even if they have another one, they will feed it from the HMIS. So we need to do a better job of making it easier for people to take systems like that and synchronize org units into other DHIS2 instances, or, if you go some other route like Morten was showing, even convert some of those things into FHIR bundles that systems other than DHIS2 might be able to read.

Providing beans: okay, now we're getting back into the deep stuff. Morten was showing you routes using Java code. Java is a really powerful language for writing these kinds of things, but sometimes it's a little bit hard to customize on site. You might have an integration route which meets 95% of people's functionality, but there's a little 5% that you need to change.
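One way to picture that 95/5 split: the hard logic lives in a Java bean, and the route that calls it is plain declarative text that can be edited on site without recompiling. A hypothetical sketch in Camel's XML DSL, with placeholder URIs and bean names, using standard Camel redelivery options:

```xml
<routes xmlns="http://camel.apache.org/schema/spring">
  <route id="orgunit-sync">
    <!-- Poll every 5 minutes; all endpoint URIs here are placeholders -->
    <from uri="timer:orgUnitSync?period=300000"/>
    <onException>
      <exception>java.io.IOException</exception>
      <!-- Standard Camel redelivery: 5 attempts with exponential back-off -->
      <redeliveryPolicy maximumRedeliveries="5"
                        redeliveryDelay="1000"
                        useExponentialBackOff="true"
                        backOffMultiplier="2"/>
      <handled><constant>true</constant></handled>
      <!-- Park failures for later inspection (a simple dead-letter store) -->
      <to uri="file:data/dead-letter"/>
    </onException>
    <!-- The hard 95% lives in a Java bean; the route text is the editable 5% -->
    <bean ref="orgUnitMapper" method="toFhirBundle"/>
    <to uri="http://target-dhis2.example.org/api/metadata"/>
  </route>
</routes>
```

Someone on site can change the polling period, the target URL, or the retry policy by editing this text, while the mapping bean stays compiled and tested.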
I don't know if you showed any example, I don't think you did, of Camel routes using other domain-specific languages, other DSLs. Java is not the only way to write routes in Camel; it's the most powerful one, but not necessarily the most customizable one. If we expose the particularly hard things as Java beans, then it's possible for people to configure routes using other DSLs: YAML is a popular language for configuring routes, and XML has been there for a long time. The point is that these things are just text; you can declaratively write those routes and customize them without recompiling any code, making use of Java beans which have been declared to do some of the heavy lifting for you. And that was the next point on the slide; I only read one point at a time.

The other thing, again: what you saw was basically scripts running at the back end. These can be really powerful scripts. I don't think we've gone into a lot of the detail of the things you can configure with Camel, like HTTP failure detection, exponential back-off, dead letter queues, very flexible error handling and logging, and dealing with concurrency: you might have one part of the route that works fine and another part that's actually really slow, so you can make that part of the route multithreaded, for example. Lots of powerful things you can do, but it's all just back-end scripts at the end of the day, and we know that in most cases, even with a back-end integration engine, people are going to want some kind of user interface on top of it, to start things, stop things, and examine errors. Again, if people are familiar with OpenHIM, think of some of the kinds of functionality you see there; we're not trying to replicate it exactly, but trying to think through what kind of user interface features people are looking for. We didn't get the chance to show off Hawtio; it's something you can Google, and it's there in the diagram. Hawtio, on top, is a really nice
kind of metrics front end where you can really dig into your Camel context and see things like how many messages per second, how many failures, and you can start and stop things. But it's still a bit of a geeky interface; it's not really a user interface for normal human users.

Okay, and then of course there's FHIR. It's interesting: we put out this call for abstracts and said we were particularly interested in abstracts on integration, interoperability and architecture, and I think we only got one. I reviewed about 43 of them, and there were some really interesting ones, but only one of them mentioned FHIR, and that was Taufik's. That's kind of interesting, because we do know there's significant interest and significant demand out there; it's maybe not coming very directly from the community we're immediately dealing with, but it's somewhere on the periphery. So we've been working on this over the past year or two, or longer; people might remember a couple of years back, around 2019... oh, I'm nearly out of time, I'd better move faster than that.

What are we going to do with FHIR? Basically, trying to do the simple things. We can already do quite a lot of simple things, but again, they're all just snippets and examples. We need to put it together, so that you should be able to download the DHIS2 Camel-based integration engine with all of these things baked into it. Some of this other stuff is related to that. A really interesting thing that's emerged this year is trying to turn the problem around. Sometimes it's quite hard when somebody gives you all of these complicated FHIR profiles, from the WHO SMART Guidelines for example; they can be quite hard to map to DHIS2. But we can do it the other way: we can take whatever's in DHIS2, export that, and make it more easily consumable. And that's me. All right, we're out of time.
We have a minute. Sorry for taking a long time over the roadmap, but that's the roadmap. I'm easy to get hold of if you have comments or suggestions or inputs to it: it's just bob@dhis2.org. We're really interested in whatever thoughts and inputs you have on what you think we should be doing. All right, I was very worried I was going to catch fire putting this thing on my hand, but it seems to be going all right.