Good afternoon, everybody, including Dan — you, sit down. Out. Yeah, that's funny. I remember when I was at university, I had a lecturer who said, if your phone goes off during the lecture, you have to stand up and sing a song. You've been warned. Well, welcome to this session on FHIR. I'm only going to talk very briefly, a couple of small introductory slides, and then we're going to set Morten loose. Sorry, for those of you online, the physical participants have been locked out. Do we need to wedge it open or something? Rewind. Shall we try again? Welcome again. Usually during the annual conference, we've got a mixture of some more health-related sessions and some higher-level management sessions. And every now and again, we do some kind of deep-dive technical session. This one starts off fairly lucid and then is rapidly going to descend into the nuts and bolts. So if that's not your thing, we won't feel offended if you feel you need to walk out. But what we want to talk about, really, is a little bit about where we are within the DHIS2 strategic and tactical thinking around FHIR. And mostly I want to introduce some of the tooling that we've been developing that makes it reasonably easy for people to do all kinds of integration, really, including FHIR. We'll show a few examples of that; Morten will do that. So I think most of you here know what FHIR is. I can never remember exactly — I have to read it out: Fast Healthcare Interoperability Resources. Okay, it's a specification. It's a bit of a moving specification; it's gone through a couple of maturity levels. It's maintained by HL7, so it comes from good heritage, I suppose. It's been gathering support and traction within the healthcare IT industry — particularly in the US, I guess, a bit in Europe, and also things happening elsewhere.
It's been gathering quite a lot of attention within our own traditional communities, if you like — particularly the likes of OpenHIE, which I've been part of for the last 10 years or something, and WHO: many of you will have seen what were the Digital Adaptation Kits and now the SMART Guidelines. Given the DHIS2 footprint in the world — and sometimes I wish we didn't have it, but we do — it's quite important, necessary in fact, that we have some kind of coherent, pragmatic and strategic, and I should add helpful, position regarding FHIR. We've looked at it broadly. We've had long discussions with — who did we talk to? — James Agnew from the HAPI FHIR server. We've looked at what they've done in the NHS in the UK. I had some chats with Epic engineers in the US some years back. Basically, we've got two basic possibilities. We can take Tracker and Tracker's data model — everybody hates the Tracker data model; you only earn the right to hate what you use — we can just get rid of that and replace it underneath with a HAPI FHIR repository. That's not really very likely to happen, for all the same reasons that apply to most of the other legacy systems, be they Epic or the NHS or whatever: there's too much value already being drawn out of Tracker the way it is. Also, it's not just used for health — it's also used for education, forestry management. So the first option is not really on. The other option we have is to build what they call facades. We have DHIS2 and its data model there, and you have something in front which provides a facade, so that from the outside it can read and write FHIR stuff. Over the last six months, I guess, we've mostly developed quite a number of ways of building FHIR facades. That was not really for us to do ourselves — we've developed some tooling, which Morten's going to show you later, which is mainly to help people out there who are doing integration projects. It's not just, in fact, for using with FHIR.
It's for use more generally with integration. But given that we're not likely to do the first one, we need to have much more focus on this — to drive much more discussion in the direction of using FHIR for interoperability, rather than thinking that somehow what's going to happen is that everybody's going to rip out their data models and replace them with FHIR repositories, with FHIR repositories talking to each other. That's like saying everybody's got to use Microsoft Word. It doesn't solve the fundamental interoperability problem of dealing with legacy systems, which have to make facades. This is my opinion, by the way — disclaimer. You can argue with me. So, on the technical approach: some of you who have been around for a bit longer might remember, probably two or three annual conferences back, there was this DHIS2 FHIR adapter. It was created by a very smart German guy by the name of Volker, who was working with us, unfortunately, for quite a short while. Volker left, and it was actually quite hard to maintain. I think the approach that he took was maybe in some ways a little bit too clever. Often what we need is something much, much simpler. But it's not dead. The repository has since been forked and is now officially maintained by ITINordic, which is Ranga's team. Ranga — where's Ranga? Ranga's not here. Okay, it's Adrian — Adrian's standing in for Ranga here from ITINordic. ITINordic is a company started by Ranga, based with one foot in Norway and another foot in Zimbabwe; most of the developers are in Zimbabwe. They had started using the adapter in some projects, so they decided that it was in their interest to carry on with it. The other thing we've noticed quite recently is that the OpenSRP project has also taken that repository and forked it, and I think they're now calling it the OpenSRP DHIS2 connector. That's okay — we're generous, we don't mind. In fact, it's good. It's good that all that work that was done has not been wasted, right?
And that there are folks still interested in doing that. Well, within the DHIS2 integration team — I should say it's basically just me and Morten. We've recently got a full-time integration engineer, Claude, who — I haven't seen him. Claude, I thought you were hiding away somewhere. And we've got quite a lot of people who are in and out, coming from different parts of the DHIS2 ecosystem, whether it be in Oslo, talking to the Tracker team, talking to Rebecca in the packages team, and mostly talking to people in the HISP groups — we're talking much more tomorrow, a little bit, about architecture. I mean, basically, all integration — not some integration, all integration — happens out there in the field. None of it happens here. The best we can do, really, I guess, is to try to provide tooling and approaches and patterns and friends — like our OpenFn friends — that make it easier for people out there in the field who are solving real problems. And that's a bit of what we're going to show you today: using some of this generic tooling to help people do FHIR stuff — and not doing it in the sense of pretending that we're a FHIR repository, because we're not. There are always going to be some limits that result from that. But in fact, we can do quite a lot. Getting data in and out of DHIS2 is very, very possible using FHIR, if the FHIR side has been designed sensibly, let me say — and it's really not so hard. Are you on from here? I don't know. Do I have enough? All right, so that's a brief intro from me about the general approach. Morten is going to show you lots and lots of — well, lots, I don't know, two or three — interesting examples, I hope. And hopefully we've got an hour or so; we've got a little bit of time. Yeah, I think we will have some time for questions at the end. — My demo's gonna take that long. It's going to be a bit of Java code. This is the warning Bob mentioned — before we start getting into that stuff.
Does anybody want to ask me anything before we let him loose? All right, good. Ah, Manzi. Don't ask me anything difficult. Oh, all the acronyms. Yeah, the space is full of acronyms — we even have DHIS2. All right. Okay, FHIR: that's a standard, or an emerging standard, developed within the HL7 community, which is traditionally the kind of consortium building standards in the healthcare sector. OpenHIE is something completely different: OpenHIE is a collaborative community of different projects who are involved with providing different parts of a health information exchange architecture. What was the other one? OpenSRP. I don't know what OpenSRP stands for. It's a project by a team called Ona; they're based in Kenya. They're doing quite a lot of FHIR stuff, quite interesting things. And one of the things we've seen they've done is they've taken our FHIR adapter. Okay. Right, Morten. Do I need to take this off? You have your own. You want to shut me up? — I don't know how to put it on. I think it's just on the top. But it's fine. Okay, so let's get started. The example we're going to show today is maybe not the most interesting one — we've done this in the past. We're going to use something called mCSD as an example. mCSD is basically a profile for exchanging organisation units. So today, we're basically going to take organisation units from DHIS2 and put them into a HAPI FHIR server. But the approach is more interesting than the actual end result. To achieve that, we have kind of developed — or not developed, chosen — a new stack for the integration team, and everything we do going forward is going to be using these components. We're going to use something called Apache Camel. Bob could talk about that for days, I'm sure. We just need to know it's basically the implementation of what are called the enterprise integration patterns. It's been around for — I don't know how long, 15, 20 years — a long, long, long time.
So that's basically the engine of everything: it starts up the integration, it handles routing, it handles decisions, it handles formatting of data and so on and so on. Another piece of software we've selected — and you might recognize this from DHIS2 — is ActiveMQ Artemis, which is already shipped in DHIS2; we're using it for audits. So if you're on a recent version of DHIS2, maybe 2.35 and up, you're already using Artemis behind the scenes, because it starts up embedded in DHIS2 — but you can also externalize it if you want. That's used for queuing, basically: if you have something you can't process right away, you push it onto a queue, and then you have one or more consumers working on that queue. It's a very common pattern when it comes to integration. So Claude, sitting at the back, our new integration engineer, is developing what we call the Java SDK, which is basically a client library for DHIS2. It's quite simple, but it takes away some of the hassle of talking to DHIS2: you just create a DHIS2 client and you start talking to DHIS2, getting resources and so on. We also generated, based on JSON schema, classes for organisation units and data elements and so on, pinned to a specific version of DHIS2. We're not going to look into that too much today, but I just wanted to mention it. There's a link here if you want to know more about the project. It's in heavy development, so there might be some bugs and so on, but it's being worked on. Next, we created our own Camel component. If you've ever used Camel, you know Camel has what are called components for all kinds of stuff: it could be for mail, it could be for Twilio access, it might be for JMS, it might be for HTTP access and so on. It's basically a way of accessing some data.
So for DHIS2, we're saying: give me this resource on the DHIS2 server, for example. And we'll see that soon. Today, we will also be using another Camel component, the Apache Camel FHIR component. That's basically what takes care of sending the end result — after we've fetched the org units and transformed them — to the actual HAPI FHIR server. It's a very nice component; I highly recommend it if you're using FHIR. I think the Camel FHIR component is a very, very nice component. Yeah, this slide is a picture of the same thing I just said, so we don't have to go into that. I think we're just going to jump to some examples. As I said, we're going to focus on a profile that came out of the OpenHIE people, called mCSD. It's, again, just a standardized way of wrapping org units in FHIR. In FHIR, you would normally create something called a profile, which puts constraints on resources. So yeah. We're going to start with something simple: a Java-based converter that takes data from DHIS2, converts it using Java itself, and then pushes it to FHIR. Then we're going to do a DataSonnet-based converter. DataSonnet is a language for doing — well, it can do a few things, but it's basically used for JSON-to-JSON transformations. So it's actually a language; it's based on JSON, and you can have for loops and so on. We will see that soon. And then we will end up with a nice pub/sub example with Artemis — the publish-and-subscribe model, using Artemis as the queue in the middle. So there will be two parts to that one. All the examples can be found on my GitHub. If you want more examples, we do have some on the main DHIS2 repo also. There are not that many right now, but there are a few. I'll probably move my examples in there too at some point, so we can keep them in one location.
The last link is not an example per se, but it's actually something that's undergoing heavy development: our DHIS2-to-RapidPro connector. And it's using everything you're going to see today — it's using Camel, it's using DataSonnet, it's using all the things you will see today. So if you want to see a more real example, and not just a few lines, that's probably the place to go. — Can I just add a little bit? That one doesn't use FHIR, right, the DHIS2-to-RapidPro one. Morten is going to focus on org units, but there are quite a lot of other FHIR resources that we've got examples of. And probably the one that's going to be most interesting — I don't know if we have anyone from Latin America, or from PAHO? Yeah, we've been talking a lot with PAHO over the last six months or so on using FHIR Questionnaire and QuestionnaireResponse. So there is a questionnaire example in that integration examples repository. — But you're not doing it today? — No, we're not showing that today. There's also a patient example, a very simple one that just pulls patients from DHIS2 and puts them into a HAPI FHIR server — very, very basic, it doesn't have any data or anything like that, but it can be a starting point. And again, what we're showing today is just an approach. You can replace the org unit stuff with patients or whatever, and it will be pretty much the same approach. That's kind of the idea. Okay. Let me just open up my things. It's probably a bit small — let me try to zoom in a bit. But it doesn't matter; I can just zoom in on the actual code. So I'm not going to go step by step through everything here — I don't think that makes sense — but I will give you an overview of the general approach. This is not the one... Let's start with this one. This is the first example I mentioned. Trying to zoom a bit. Yeah, that should be okay. Hopefully.
So this is the first example. All it really does is start up a timer — in this case, something we just do once, so it's not a cron job or anything like that; it's just a one-time thing, because it's a demo. Naming the route can be useful, especially if you're going to use something like Hawtio — I don't think I've mentioned it before, but that's a monitoring tool for Camel. Other than that, we're seeing an example of the use of the DHIS2 component, and that's probably the most interesting part here: you'll probably recognize order and paging and fields — that's all from DHIS2, from the field filtering API. We're just setting those as parameters to the component, so it knows to add them when it's doing the actual request. Order is a bit important here: you have to sort by level so you get level one first, then level two, then level three and so on — that way you know all the parent pointers are correct. There's a bug here, but don't worry too much about that right now. What is interesting here — I did mention that we have generated models based on the schema. In this case, we are actually pointing to a specific model, so we're targeting a specific version of DHIS2. You don't have to do it like this, but that's one of the approaches we decided on for this project. That means you can also target 2.38 — if something was added in 2.38, you can just point to the 2.38 organisation unit class instead, and you know it will be correct and you'll get auto-completion and so on for that class. And of course, the last piece is the client. I'm just going to quickly show you how that's set up — it's pretty straightforward. As you see, it's not very far from how you'd define any ordinary API client: you use a client builder, giving the base URL, the username and password and so on. And I think PATs will also be supported. Soon, soon. Okay.
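A side note on that ordering point: sorted by level, every parent is imported before any of its children, so parent references always resolve on the receiving side. Stripped of the SDK and Camel (the `OrgUnit` record below is a made-up stand-in, not the real generated class), the idea is just this:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class OrderByLevel {
    // Hypothetical stand-in for the SDK's generated organisation unit class.
    record OrgUnit(String id, String parentId, int level) {}

    // Sort ascending by level: level 1 first, then 2, then 3, and so on.
    // Any parent is then guaranteed to be sent before its children.
    static List<OrgUnit> parentsFirst(List<OrgUnit> units) {
        List<OrgUnit> sorted = new ArrayList<>(units);
        sorted.sort(Comparator.comparingInt(OrgUnit::level));
        return sorted;
    }

    public static void main(String[] args) {
        List<OrgUnit> units = List.of(
                new OrgUnit("facility", "district", 3),
                new OrgUnit("national", null, 1),
                new OrgUnit("district", "national", 2));
        for (OrgUnit u : parentsFirst(units)) {
            System.out.println(u.level() + " " + u.id());
        }
    }
}
```

In the actual route this is what the `order=level:asc`-style parameter achieves server-side, so no client-side sort is needed; the sketch just shows why the ordering matters.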
So the new personal access tokens that we have now in DHIS2 will also soon be supported here — username and password is probably not what you want going forward. And then we are basically just injecting the client here; that's all. So now the component knows which client it can use, and you can have multiple clients — imagine a source client and a target client if you're going from DHIS2 to DHIS2, for example: the same component, but with different clients, one source and one target. The next thing we do is what's called a split. All that really does is take the list of organisation units and give you them one by one — it's like a for loop, almost. And then we convert them one by one into the mCSD profile; this is basically a Location and an Organization. You can quickly go through that code — I'm not going to walk through all of it; you can have a look at it yourself if you want. It's not much, but it's too much to go through now. Basically, you're setting up a Camel converter. You're saying, okay, the input is an organisation unit — remember, this is the specific version we had as a parameter — and then you have the exchange, where you can get the body and all kinds of stuff if you want. Then we return a Bundle. A Bundle in FHIR is just like an atomic unit, basically, if you want. It doesn't have to be, but in this case we're doing a transaction bundle. What's really important is this bit — this is just so the FHIR component knows which bundle to actually send to the server. After that we invoke the FHIR component. Again, we're setting the client — in this case, of course, the FHIR client. We will get some kind of return; whatever comes back from that transaction is JSON, so we just unmarshal it into FHIR R4, and then we print out the result, basically.
In this case it's all successful, but there might be bugs, so you might want some retry logic, or some exception handling and so on, of course — again, this is a demo, so we don't really bother. — All of these things, like from and to and split and convertBodyTo, those are all Camel primitives. So when you're using Apache Camel, you get code completion on them, and you basically describe your route like that. It can also be rendered in graphical form, so you can see your route as well. Camel provides you with that sort of scaffolding for putting the things together. — Yes. So what remains now is to actually run the example. I'm just going to check that play is actually up — it goes down sometimes, so I just want to verify that, because I'm pulling directly from the play dev server. It seems to be running, so good. And then I will start up a FHIR server. I'm just using the HAPI FHIR CLI — probably not what you want to do in production; there you'd probably want to set up your Tomcat or whatever and run it properly. But in this case we're just using the HAPI FHIR CLI, version 5.6.0, starting up a server now on port 1990. We are using version R4 of the FHIR standard. Version 5 is out, but version 4 is going to be the most used for quite some time to come, and that's also what we're going to base ourselves on. Who knows — at some point we'll probably want to migrate, but that's not happening yet. So I'm just starting that up. This is an empty server; it'll take a little bit of time to start. You can just download this from the website, by the way, and then you just run the jar file and add the parameters. It's also available via apt, I think, and Homebrew for Macs, and a few other things. So it's a very nice and simple way to start up a FHIR server, especially for demos and so on. So I'm just going to refresh the page.
So now we have a FHIR server up and running. Okay. And I created a configuration file — a Spring Boot configuration file — for this one, so you can just have a look. You see, this is my setup; I think I actually changed the port — yes, I changed it to 1990. The rest is not so important right now. I do have Hawtio integration — we might want to look at that quickly after, but right now this is the basics: just using the normal play server admin/district credentials and so on. And we're doing a bit of introspection using JMX — this basically just gets Hawtio to work; we'll show that later. Okay. So let me start up here. We are going to run this thing. Is it big enough? A bit bigger. Nothing too crazy — I'm just building the project, basically. And then you see Spring Boot starting up, and now we're talking to the play dev server. Soon we will see a lot of text coming up: all the results from the FHIR server, from the split. And again, it's like a for loop, so we're doing one org unit at a time. I tried to do it in one big batch, but HAPI FHIR doesn't always seem too happy with that — it seems to be a lot better to do many, many small ones, and it's actually quicker, in a way. So that seems to be the best approach. And you will see now they're all created. But because of the way I define the bundle — I'm using a search parameter on it — if you run it one more time, this won't stop it; you can run it one more time. And this is a nice approach: we have this create-or-update strategy, which is kind of the default. By using a search parameter in the PUT, it will create if the resource doesn't exist and update if it does. So now you see it's giving a 200 OK, and that's basically the update — okay, we don't actually change anything in this case, but still, that's how it works. And we can also just verify: go into the application, search, and you'll see now we have a bunch of Locations.
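What makes that create-or-update behaviour work is FHIR's conditional update: each transaction bundle entry's request uses PUT against a search URL rather than a literal resource id, so the server creates on the first run and updates on subsequent runs. A minimal sketch of such an entry, built as a raw JSON string with no HAPI classes (the identifier system URL and helper name are made up for illustration):

```java
public class ConditionalEntry {
    // Build one transaction-bundle entry that PUTs against a search URL.
    // If no Location matches the identifier, the server creates it;
    // if one matches, the server updates it in place.
    static String entryFor(String system, String ouId, String name) {
        return """
            {
              "resource": { "resourceType": "Location", "name": "%s",
                "identifier": [ { "system": "%s", "value": "%s" } ] },
              "request": { "method": "PUT",
                "url": "Location?identifier=%s|%s" }
            }""".formatted(name, system, ouId, system, ouId);
    }

    public static void main(String[] args) {
        // Hypothetical system URL; a real route would derive it from the
        // DHIS2 base URL, as Morten does in the DataSonnet example.
        System.out.println(entryFor(
                "https://play.dhis2.org/dev/api/organisationUnits",
                "O6uvpzGd5pu", "Bo"));
    }
}
```

Running the same bundle twice is then idempotent: first a 201 Created per entry, then 200 OK, which matches what the demo shows.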
We also have Organizations, and you can just have a look at one. So you see, this is one — okay. But you see, this is just the FHIR resource itself. — Yeah, sure, sure, 100%, yes. You will have some potential issues just because of the metadata. You'll probably need to skip sharing, and if you have attributes, you probably want to sync those first — if not, it's going to fail. So you have a few considerations before you do that. But once those things are in sync, you can definitely do this — you can do a nightly job, or whenever you want, basically. Yeah. And we have something better coming up that you could also use: we're going to create what's called the eventing system inside DHIS2. Basically, what that means is that instead of just getting everything nightly and pushing it into FHIR, you should be able to listen to changes of metadata and then react to those. Maybe it will be part of the DHIS2 component — you just say, listen to DHIS2, I want this type. Then you can do immediate sync; you could potentially do real-time sync. Although for org units that's probably not so important, but you could. — Martin has the mic — if anyone wants to ask a question, we love questions; point to Martin and he'll come with the mic, because there are not so many people on Zoom. — Hi. So, I understand Camel is the way we are talking to the underlying FHIR transport mechanism — Camel gives us the interface, the API, to that? — Camel is the engine that runs everything. Camel has a bunch of components built in, and we have also created one of our own. But Camel is the engine of integration, basically. — Then what is the FHIR component giving you? Why do we need both? — Yeah, as Morten was showing at the start, Camel has a lot of components: components to talk to AWS, components to talk to Kafka, you can talk to Slack — and it has a component to talk to FHIR. So one endpoint in your route would be a FHIR endpoint.
Now, what Claude has made is also a component for talking to DHIS2. What Camel does is give you a way to string those things together, and do things like the split. The initial version of that route just read the org units from DHIS2 and sent them to FHIR; when Morten realized there was a performance issue with the FHIR server, it was very simple in Camel, using its primitives, to just say: well, split this. And it does that. So those are the kinds of things Camel does: it gives you the scaffolding, with a FHIR endpoint on one side and a DHIS2 endpoint on the other. — Yeah, not necessarily. You could do this stuff without OpenHIM. OpenHIM does provide some nice audit stuff in the line. OpenHIM is built with mediators, and mediators can be pretty much anything, so you could build a mediator like this — this thing that Morten has just done, you could register as a mediator within OpenHIM. So they're not mutually exclusive. — We've still got to get off example one. Hopefully a quick one — a little nitpicky, but what about the other attributes that exist in DHIS2, like location coordinates, for example, or shapefiles — are coordinates supported by FHIR? There's an address... — No, it doesn't do the coordinates — it doesn't do polygons, no. But I have another example, in Python, that's using the GeoJSON extension: we basically take the GeoJSON, encode it, and just tell it, this is GeoJSON. — So if I wanted to use this, could it include those out of the box, or would it need work? — Somehow, yes. This, again, is just demos. The next thing we're going to look at is using something called DataSonnet, which is basically a JSON-to-JSON transform. If you don't know Java, it's probably easier to modify than the Java code itself. So you can imagine having a generic org-unit-to-Location/Organization transform, and then you just point it to your DataSonnet file.
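One footnote on that coordinates question: for point geometries specifically, the core FHIR Location resource does have a `position` element with longitude and latitude, so a point can be mapped without any extension — it's only polygons and shapefiles that need something like the GeoJSON extension mentioned above. A minimal stdlib-only sketch of that mapping (the helper name is invented; real code would use HAPI's Location class):

```java
public class PointToPosition {
    // Map a DHIS2 point geometry [lon, lat] onto FHIR Location.position.
    // Note GeoJSON orders coordinates longitude-first, matching the
    // parameter order here; polygons have no core FHIR equivalent.
    static String position(double lon, double lat) {
        return """
            { "position": { "longitude": %s, "latitude": %s } }"""
                .formatted(lon, lat);
    }

    public static void main(String[] args) {
        // Illustrative coordinates only.
        System.out.println(position(-11.3596, 8.1110));
    }
}
```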
So we give you a DataSonnet file that, out of the box, at least gets the job done, and then you can potentially add to it and modify it if you want to use the coordinate field and so on. Yeah, that's definitely possible. Although for the more complex use cases I would still go the Java route, because you have more control — but for simple cases, DataSonnet could definitely do it. Yeah, yeah, yeah. By the way, this is the only example I've built around doing the same thing in different approaches. You still end up with org units, but yeah. Okay. So this one takes a little bit of a different approach — not a huge difference. You will see we're using another Camel component, also built in, called the DataSonnet transform. Again, we can just start from the top. You'll see it's exactly the same as before: you're just getting organisation units from DHIS2. To mix things up, in this case I'm not using the built-in organisation unit class that we also provide — I'm actually using my own class, just to show how that's done. And since we're not using the built-in iterator of the built-in classes, we have to do the split ourselves, because, as you probably know, if you go to /api/organisationUnits there's another level, right? You have the response object, and inside it an organisationUnits array which actually holds all the organisation units. So — it's not showing here, but I can go a bit down — we're doing something very simple called a splitter bean, and we're just returning the array it's actually going to loop over. And you see, again, we're doing a split — again, think about this as a for loop. Since DataSonnet doesn't have knowledge of everything you're doing, we're also setting the base URL; there are certain things you want to set when it comes to identifiers — for example, you want to say this identifier is from this system, so we need the base URL, and I'm just removing the /api part. Then I'm loading a file from the file system called organisationUnit.ds — we'll look at that shortly — and it basically transforms all my organisation units into a map; this is just an object, like in JavaScript, with keys and values. I'm simply converting that map into a string, and I'm deserializing it using the FHIR JSON parser, so now I have an actual Bundle, a real class — because the FHIR component requires actual real classes, not just a string or JSON. And again, we're just getting the result and outputting it. But the interesting thing here is not that; the interesting thing is the DataSonnet file. We don't need to go through all of it — I just want to show you the general approach. As you see, it already looks like JSON, so you can almost start with an example of how you want the output to look, and then you start replacing the static parts with dynamic parts. For example, I told you that you can do the PUT two times: the first time it will create, the second time it will actually update. To make that work, you basically add a pointer here, and you will see there's an object called payload that has all the properties — name, id and so on — and we're just referencing that. This next one — obviously this is not valid JSON — just checks whether the description is there or not: if it's there, it will add the description property; if not, it will leave it out. Same with code: code is not a required field in DHIS2, so we add it conditionally, but id is, so we always add the id, because we always have it. Then we check: is there also a code here? And if there is, we add it. Please be aware that DHIS2 has very few constraints on the code field, but FHIR does have constraints on identifiers. In DHIS2 you can have a code with spaces in it, right? That's not valid in FHIR. You can also have all kinds of signs and ampersands and everything — that's not a valid identifier in FHIR, so you need to be really careful here. This is especially true when it comes to terminologies, option sets and so on: we allow almost anything, as long as it's not more than 50 characters — probably a lot of emojis, also. So yeah, sure. — The interesting difference between this approach and the previous example — and it actually came from discussions with PAHO — is that both of them are just different ways to map, and integration is almost always about mapping at some level. In the first example, the mapping was done in the Java code. That's probably very efficient, but if, in the field, they need to change something — maybe there's an attribute that has changed, or there's a new attribute; in the PAHO case there's a questionnaire, and maybe there's a new questionnaire item — then they can customize this file. With all the other Java stuff in place, it's driven by, essentially, a mapping file. That's the main benefit of the DataSonnet approach: it's got a few drawbacks, but the real benefit is that you can customize it locally without getting involved in programming. — Sorry. — No, that's fine. Let me just show you one more thing and then I think we can move on. Oh, sorry, I don't even have it — I was thinking I was doing a for loop, but we're actually doing it one by one, so in this case there's no for loop, sorry. Again, I will just run it. It's not going to be that interesting — it's going to be the same again — but just to show that it actually works. And again, this is doing almost 100% the same as the Java code; it's just using an external script instead. Oh, I didn't change the port — today I'm just updating the examples so we have them for later. Anyway, it will work as before. Yeah, that's good for now. So I have one last example; let me just close down this one.
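To make the point about code constraints concrete: a FHIR resource id must match `[A-Za-z0-9\-.]{1,64}`, whereas DHIS2 codes are nearly free-form. The sanitisation policy below is an assumed one for illustration (replace disallowed characters with `-` and truncate) — the real mapping rules are up to each project, and lossy sanitisation can collide distinct codes, so you may prefer to reject invalid codes instead:

```java
public class CodeSanitizer {
    // DHIS2 barely constrains codes; FHIR resource ids must match
    // [A-Za-z0-9\-.]{1,64}. This (assumed) policy replaces anything
    // else with '-' and truncates to 64 characters.
    static String toFhirId(String dhis2Code) {
        String cleaned = dhis2Code.replaceAll("[^A-Za-z0-9\\-.]", "-");
        return cleaned.length() > 64 ? cleaned.substring(0, 64) : cleaned;
    }

    public static void main(String[] args) {
        // Spaces and ampersands are legal in a DHIS2 code but not here.
        System.out.println(toFhirId("OU 5 & above"));
    }
}
```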
The last example requires Artemis, which I have here. Again, I am just starting it up. This is just normal Artemis; I created a new instance called duckdemo, nothing special, and I don't even use authentication. It starts up on port 61616, which is the JMS port, and we are going to use JMS for the exchange of data here.

Before we do anything, I will show you the listener before the producer. The listener is very, very simple. Again I am using the first example here, the Java-based converter, but it could of course use DataSonnet as well. It listens to a topic called organisationUnits; you can call it whatever you want, maybe give it a better description, maybe namespace it, but for now we are just calling it that. Whenever a new event arrives on the topic, a new organisation unit, we unmarshal it into an OrganisationUnit, take it as the body just as before, send it to the FHIR client as before, and log the result as before. You can imagine plugging something like our event system in here instead: whenever you click something, an org unit add, delete or update, you could have a small thing like this listening on that topic, sending it not necessarily to FHIR but maybe to another DHIS2.

So let me just start that one up. What you see now when it starts is that nothing actually happens, because there are no events coming in. It is connected to Artemis, just sitting there listening for anything happening. Let me move on and open up the producer; I will also show the code. This is the producer, and what I have done is honestly just take half of the earlier example and put it in here, and the rest of it into the other one, so it is almost the same as before, even the split and everything. The new part here is that I am sending to Artemis using JMS: we are not converting anything, we are not doing anything with the data, we are just sending it straight to JMS, which is our Artemis server. And that's it.

So let me start that up. Actually, let me open two windows: I will kill this one and start the listener again, but in two windows so we can see both at the same time. It is up and running, listening. Now let's start the producer. Again it will just go to the play dev instance, get all the organisation units, and start looping, and you will see it all go through Artemis. Now, you can imagine having not one listener but five listeners for five different DHIS2 instances. If you have a nightly job with one master facility list, for example, you can do one update on the main one but actually update multiple servers, maybe also FHIR, maybe a different system entirely; you just create another Camel component for that and it works exactly the same. So now you have decentralised the whole thing: you split it up into a very, very small route, almost just ten lines of code, plus three lines of code on the other side for listening, and that's it. Now you have this very dynamic setup, and it can be real time if you want. Not yet, but at some point you will be able to make small changes and push them out immediately.

[Aside] They don't see that; I am not sure we can show it off here. No, no, it's fine. I am saying the people online don't see your drawings. It is really quite interesting: in Rwanda they are actually doing something quite similar now, but to solve another big problem:
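The producer/listener decoupling described here (one route publishing org unit events to a topic, several independent consumers) can be sketched without Camel or Artemis. This is a minimal in-memory stand-in, with all names hypothetical, just to show the fan-out shape:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal in-memory stand-in for the Artemis topic: one producer,
// any number of listeners (e.g. five downstream DHIS2 instances).
class Topic<T> {
    private final List<Consumer<T>> listeners = new ArrayList<>();

    void subscribe(Consumer<T> listener) {
        listeners.add(listener);
    }

    void publish(T event) {
        // fan out: every subscriber sees every event
        for (Consumer<T> l : listeners) l.accept(event);
    }
}

public class FanOutDemo {
    public static void main(String[] args) {
        Topic<String> orgUnits = new Topic<>();
        List<String> received = new ArrayList<>();
        // each subscriber plays the role of one listener route
        orgUnits.subscribe(ou -> received.add("fhir:" + ou));
        orgUnits.subscribe(ou -> received.add("dhis2-replica:" + ou));
        orgUnits.publish("OU-1234");
        System.out.println(received); // [fhir:OU-1234, dhis2-replica:OU-1234]
    }
}
```

In the demo the topic is an Artemis JMS topic and each subscriber is its own Camel route, so listeners can be added or removed without touching the producer at all.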
the real-time generation of indicators. How many vaccinations have I done today? We have problems with program indicators at large scale, as you know, but in Rwanda they just look at all the messages coming through on the Artemis queue, pick out the ones that are interesting to them, and then they have the numbers in real time and can put them on a portal or whatever else they do with them. So this is not just about FHIR.

[Question] Two quick questions. One: if you are using this for the kind of synchronisation you talked about, is it possible to make it semi-automatic, meaning that when there is an update, or something new, you can stop and actually ask the user: do you want this update or not? I know from experience that sometimes you get updates that are actually wrong, or new things that turn out to be duplicates, and you don't want that duplicated org unit, as a simple example, to spread into all the other instances. That is question one. The second question: you mentioned that DHIS2 has very few restrictions. I compare it with DHIS 1, where I built in a lot of restrictions: no double spaces, no trailing blanks, no funny characters in data element names, nothing like ampersands that might crash in HTML; I generally blocked all of that. From what I see in the databases I am cleaning, it takes a lot of time, a lot of queries and fiddling, to find, identify and clean these up. Is it possible to use FHIR here? You mentioned that FHIR has a different set of criteria for codes, for instance; are those criteria customisable? If I have 12,000 data elements with duplicates, where one has a double space, would it be possible to send them all over to FHIR, apply a range of customised restrictions that auto-clean most of them, and then send them back, instead of having to spend three weeks trying to find the 2,000 out of the 12,000 that have a problem? I am just asking.

[Answer] I will take the first one first, and yes, that is possible. In 2.38 we have something called metadata change approval. It allows you to send a request saying, okay, this is a new name, for example. You send that as a metadata change request, and then (there is no UI yet) you can go in and accept that metadata change. I think it is only for org units right now, but that fits the case we were showing. Exactly: it allows you to approve changes before they actually go live, so that would definitely work. Any kind of change, at least create and update; I am not sure about deletions, but create and update for sure. There will be a UI for it, hopefully in 2.39. That is definitely coming.

On the second one: in general, the rules in FHIR are not customisable; they are strict requirements. They basically follow the same kind of rules as our UIDs: a code cannot start with a number, it must start with a letter and then have a mix of characters. They can be longer than our UIDs, but they follow the same kind of rules. So I don't think that would solve the problem well. Honestly, I would probably dump it in Excel, as we always do; I think that would be a better approach. I don't know if you have anything to add to that, Bob?

[Bob] It is a little bit off topic. I understand the problem exactly; using FHIR to somehow address that problem is probably not the right way to do it, but we could discuss that afterwards.

[Question] All right. In the HIV space, for USAID, we are looking at FHIR for interoperability between different systems: lab systems, logistics, treatment, testing. In HIV, not having FHIR has always been a challenge, because we are seeing countries doing it the hard way, so this is really good news.
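To make the identifier rules being discussed concrete, here is a small sketch with hypothetical helper names: a DHIS2 UID is eleven characters, a letter followed by letters and digits, while a FHIR resource id allows 1 to 64 characters drawn from letters, digits, hyphen and dot. These patterns are my reading of the two specifications, not code from the demo.

```java
import java.util.regex.Pattern;

// Hypothetical helper contrasting the two rule sets mentioned above.
public class CodeRules {
    // DHIS2 UID: exactly 11 chars, starts with a letter, then letters/digits
    static final Pattern DHIS2_UID = Pattern.compile("[A-Za-z][A-Za-z0-9]{10}");
    // FHIR "id": 1-64 chars of letters, digits, '-' and '.'
    static final Pattern FHIR_ID = Pattern.compile("[A-Za-z0-9\\-\\.]{1,64}");

    public static boolean isDhis2Uid(String s) {
        return s != null && DHIS2_UID.matcher(s).matches();
    }

    public static boolean isFhirId(String s) {
        return s != null && FHIR_ID.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(isDhis2Uid("ImspTQPwCqd")); // true: 11 chars, letter first
        System.out.println(isFhirId("OU-1234"));       // true
        System.out.println(isFhirId("ANC 1st visit")); // false: space not allowed
        System.out.println(isFhirId("A&E"));           // false: ampersand not allowed
    }
}
```

A check like this could run before pushing DHIS2 codes out as FHIR identifiers, flagging the values that would be rejected instead of discovering them one at a time.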
But one thing I want to bring to your attention: we are also looking at third-party FHIR testing as a service. Since you are creating this dynamically through mapping, and you don't have to answer now: if we bring in a third party offering FHIR testing as a service, how would that work in your scheme of things?

[Answer] So whenever you have... I assume that is using some custom profile defined by some of the parties here, right?

[Question] Yeah, exactly.

[Answer] I don't know; I would have to think about that.

[Closing comment] To wind up with a last comment on that: we have seen a lot of places getting into the business of developing FHIR profiles now, Indonesia, we saw, and Ethiopia. It is really, really important that we don't create these things as mathematical abstractions up in heaven and hope that they are somehow going to work in a system. Profile development ideally should be done hand in hand with implementation, like in the early days of TCP/IP: when you create a new variation on the standard, it needs to be demonstrated that it works. We could talk for many days on this, but unfortunately I think our time is up, so thanks, everybody, including those online.