I guess we'll end up with the recording out there, because there are still a lot of people online. So I'll have to do a few extra things, and we'll have to wait a bit so everyone is in the same place. Yeah. So I think we'll get started. We have lots of things on the agenda today. I'm Morten, and I'm going to present a little bit about our integration team and what we've been working on. We also have with us Chandra, who is part of the team, and Morten Svannes, who is kind of hijacked from the security and platform team for part of today's presentation. We will be starting with a little overview of the team itself. We'll be talking about active projects, like the RapidPro integration and the AEFI work. We'll be switching over to demos, the Java demos, but we'll get through those quickly just because we only have limited time today. And then we will have half an hour on OpenID Connect and how that integration works with DHIS2, and specifically we'll be using Keycloak for that demo. As I said, I'm Morten Hansen; there are multiple Mortens in our team, so I'm the Morten H guy. We also have one colleague who's working out of Ireland. He's kind of a product lead and a technical lead, a full-time engineer who has been working for us for about a year now. And we have Chandra and Kevin also with us part-time, supporting the projects. So I'm just going to jump into it. Lately we've kind of switched stacks a bit. Through the years we've been trying many things, from Python to Node.js and different things, and we've settled down, I think, on a good stack that seems to be working well for us. At its core it's Java, which we've been using for a long time. On top of that, we're using Spring Boot, which should hopefully be familiar to you if you have been using Java already.
And we're using Apache Camel as the integration framework itself, which is based on the classic Enterprise Integration Patterns, so you get routing, flow control and so on. And it has a bunch of components for HTTP and many other things, which you will see very soon. For some, not all, of our projects we're also using DataSonnet. It's a nice little language for doing JSON-to-JSON transformations. And ActiveMQ Artemis, which is basically a queuing system: you can put stuff on the queue and take it out again in another route. Again, we're going to see all of that later on. We have been building our own stuff on top of this. So we have our Java SDK that we've been working on. It's still early, early days, but it's kind of good enough to be used. And on top of that, we have built our own Camel components; again, we will see how those work shortly. We also use hawtio a lot for monitoring, but I will show you that later. And there's a bunch of testing stuff also, which I will not get into today, but we have examples of that in the other repositories. So that's basically the main stack. You will see the standard stuff also in the DHIS2 core; the only real difference is that we actually use some of the published metadata packages as part of our integration pipeline. As I said, we have a simple Java SDK. The main thing it provides is basically a client to talk to DHIS2, which takes care of the authentication for you. It might be using API tokens, what's called PATs now, personal access tokens, or it might be using basic auth. Hopefully, in the future, we will build on that and support other protocols also, but right now, those are the things we support. We also generate a full model of DHIS2. For every version of DHIS2, we are actually generating the full model, so you have, I would say, a close approximation to the DHIS2 model.
It's not one-to-one. There are certain things that still need fixes and are not 100% there yet, but it will get better over time. So at some point, you will be able to say: give me organisation units. It's not going to be the OrganisationUnit class from DHIS2 itself, but one that has a lot of utility methods around it that help you communicate with DHIS2, without having to create your own DTOs again and again and again. Although I will be doing exactly that in some of these demos. The link is there, so please feel free to start playing with it. Give us PRs if you find any bugs, and generally just talk to us if you have any issues with it. An even more interesting thing when it comes to integration is that, based on the Java SDK, we also built Camel components. There are two specific Camel components. Again, they take away some of the tedious work of talking to DHIS2, paging through results, going back and forth and all those kinds of things, and give you a nicer model, a nicer abstraction. I will show you the demo soon, but you will see that you can do simple stuff like: give me all the org units from DHIS2. You point to the item type, the full class name, in this case for version 2.38.1, and the path you want, and then the fields you want, so you can start playing with field filtering against DHIS2. Yeah. So I have some demos, but are there any questions before I jump into demo time? No? Then here's the first thing I'll show you. I think we can move this one. Yeah, sorry, we're just reorganizing everything on screen. So again, I will have to be a bit quick here. It's pretty much a standard Spring Boot application, so you will recognize SpringApplication.run and so on. What is new here is that we are building two clients. Again, this is using the Java SDK.
All these examples will be online later, so that's why I'm going through them pretty quickly, but you can go through them later and have a look. Basically, we're setting up our application.properties, where we are pointing to the instances. In this case, we want to synchronize some org units. So we have a master, or a base, a starting point, and then we have a target. There could be multiple targets, but in this case we have a sync between two instances: whenever something is added to demo one, it will also be added to demo two. It's a simple, straightforward example, but it's a very common use case and a very common integration case, right? There might be data elements, there might be category options, there might be option sets, there might be something else, but you want to synchronize metadata between multiple instances. So again, using this properties file, we are building two clients. They are also two Spring beans: one called client target and one called client source. Nothing too fancy there at all. And again, please stop me if you want any more information. Someone asked: is that client part of the Java SDK? Yes, we're just depending on the Java SDK. The Java SDK is here as a dependency, along with the Camel component, which is also driven by the Java SDK itself. So the client class is not something I'm creating in my project; it's something that's pulled in as part of the SDK. So we have two clients configured: one knows it's the source, which is demo one, and the target knows its connection to demo two. The first route we're going to look at is a little bit long, but don't worry, it's pretty straightforward.
So we set up a timer, basically. It's up to you how often you want to run this, every night or whatever; in this case it's every 10 seconds. That's all this really means: at a fixed rate, every 10 seconds, it will run this route that we're going to talk about. And this is just the name of the route itself, so whenever you see something in the log, you know it will be called read-org-units. By default it would get a generated name like route1 or something, so this is just to name it. And then you'll see here what we're trying to do: we are trying to get all the org units from the source, and we are also setting query parameters on that request. I'm not going to go into all the details, but as you probably know, we have this parent hierarchy, so you usually want to order by level. That means you get level one first, then level two, level three and so on, so you don't run into missing-reference issues. In this case, we fetch all of them, which, depending on what you want to do and how many org units you have, might not be a good idea; paging might be better there. The filtering here is just a simple example: we are just going down to level two. In the demo we only have two levels, but if you have a very deep tree, you might not want to sync all levels. And then there are the fields we are interested in. You might want many more, they might be different, you might want translations and so on. In this case we keep it simple and just get the id, the code, the name, short name, description, opening date, and the parent, which you need for the hierarchy to work at all. And in this particular case, we are not using the generated classes.
Instead of the generated classes, we are just using a simple record, to show that you can use what you want; you don't have to use the provided domain classes. This is a very straightforward example: I know exactly what I want, and in this case it's very simple, so I just created a very simple record class. This is Java, right, so it's not dynamic: you need a target DTO class, and in this case you get exactly what you need and nothing else. But again, that's up to you. You can use your own classes or not, depending on the version of your target instance and so on. So the point is: what comes out of the Camel endpoint here is basically a single byte array, a string, just a bunch of text. So what do we do with that? Well, this step is unmarshalling that content, from JSON into this record class. And now you have the organisation units in that class. And now, to make it simple for ourselves, instead of processing what could potentially be 10,000 org units as one block, we're doing a simple split. A split you can basically think of as a for-loop. We take the organisation units using a simple expression that extracts just the org units as a list, and then Camel loops over that list one by one by one. And in this case, we're doing nothing at all with each element, because our target is also DHIS2, so you don't need to transform it much; you just repackage it back into JSON. And in this case, this is an embedded Artemis broker that I decided to start up, so it's an internal queue that's started with your application.
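To make the shape concrete, here is a minimal sketch of the kind of record DTO described above. The class and field names are illustrative, mirroring the fields requested from the API; they are not taken from the SDK:

```java
import java.util.List;

// Hypothetical minimal DTOs for the org-unit sync demo. Java records give
// you exactly the fields asked for in the `fields` parameter and nothing
// else; a JSON library such as Jackson would bind the payload to them.
public class OrgUnitDto {
    record Ref(String id) {}
    record OrgUnit(String id, String code, String name, String shortName,
                   String description, String openingDate, Ref parent) {}
    // Mirrors the top-level {"organisationUnits": [...]} response shape.
    record OrgUnits(List<OrgUnit> organisationUnits) {}

    public static void main(String[] args) {
        OrgUnit country = new OrgUnit("O6uvpzGd5pu", null, "Country", "Country",
                null, "1970-01-01", null);
        OrgUnits page = new OrgUnits(List.of(country));
        // The Camel split step is conceptually this loop: one element at a time.
        for (OrgUnit u : page.organisationUnits()) {
            System.out.println(u.name());
        }
    }
}
```

The loop is only the conceptual equivalent of the split: in the real route, Camel hands each element to the queue instead of printing it.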
So now all it's doing is putting each one on this topic: one by one, every org unit we have, we put on this topic. And that's all that's happening on that side. But we also have the receiving side. And remember, this could have been two separate applications, two different servers or whatever, right? That's why we're splitting it up like this. If you have all of this in the same JVM, you might not always want to use a queue, although it can give you some robustness, especially if that topic is being handled by things that might take some time to process one by one. So it does make sense. So here on the receiving side, remember, we were just sending JSON, basically a text string. So what do we do now? Again, we name the route, because it just makes things a bit nicer. We again unmarshal it. But remember, now we're not receiving the full list of organisation units; we're receiving one organisation unit at a time, because of the split we did before. Then we're doing a little bit of processing. Again, this is just to make it easier for ourselves. Instead of using the organisationUnits endpoint, in this case I'm using the metadata endpoint. And I also want to show you that I'm actually using the classes from the SDK here; sorry, just a second. So the sending side used simple record classes, and the receiving side is using the classes from the SDK; I just want to show you how that works. And we're wrapping it in a metadata wrapper. As you probably know, whenever you're sending to the metadata API endpoint, you have to wrap it in that metadata wrapper; here we just have a list of organisation units. All of this stuff you see here, setOrganisationUnits and so on, is auto-generated for you by the SDK.
So that's something that's just available for you. And after that we're doing this little bit of processing; this could also have been done as a type converter, there are many ways of doing it. This is just a simple way: we're setting and updating the body of the message so Camel knows what it's currently processing. And if you see here, this is the last step: we are posting. Remember, we are posting now to the path metadata, with the body we just built, and with the client. Of course, in this case it's the target: before we had the client source, this is the client target, which is demo two. And I will show you quickly how that actually works. Any questions before I actually show you the demo? Again, all the code will be available. It's currently in a private repository, but it will be made public, maybe tonight or so, and it's linked from the slides. Someone asked about the queue: yeah, Camel makes it just as simple as that, and we don't have to use it. In this case we're even using an embedded Artemis, but that's just for the sake of the demo. And you can use many other protocols, like STOMP and so on, which are also there for Artemis. So that's just a way of showing it to you, basically. Okay, so the next thing I want to show you is how those things work at runtime. What I will show you, from our RapidPro integration, is something called hawtio, which is basically a small web UI, and it works as a UI both for ActiveMQ Artemis and for Camel itself, so you can see the routes and so on. Someone asked whether we had to write this ourselves: no, for that particular example you can just use the SDK. Yes, this is again using the SDK, and even the SDK's domain model; on the receiving side, this is all coming from the SDK.
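The wrapping step described above can be sketched in plain Java. This is only an illustration of the payload shape the metadata endpoint expects, with plain maps standing in for the SDK's generated wrapper classes:

```java
import java.util.List;
import java.util.Map;

// Sketch of the metadata-wrapper idea: the /api/metadata endpoint takes
// lists of objects keyed by their type, e.g. {"organisationUnits": [...]}.
public class MetadataPayload {
    record OrgUnit(String id, String name, String shortName, String openingDate) {}

    static Map<String, Object> wrap(List<OrgUnit> units) {
        // One key per object type; other types could sit alongside this one.
        return Map.of("organisationUnits", units);
    }

    public static void main(String[] args) {
        OrgUnit ou = new OrgUnit("O6uvpzGd5pu", "Country", "Country", "1970-01-01");
        System.out.println(wrap(List.of(ou)).containsKey("organisationUnits")); // true
    }
}
```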
And it doesn't have to be that way; that's up to you. What you do need to know is the version you're working with, right? If you go back far enough, the geometry, for example, would be handled very differently between versions. So you need to know that kind of stuff. But if you have a need to convert, this is the place where you would do those conversions: you could add a converter before the post. Because if you go back long enough, you had, say, the polygon in one column and the actual coordinates in another column, and then we moved to the geometry field, right? So you could potentially handle that as part of the integration, if that's something you need. But you need to know; there's no magic here. This domain model is modelled after the version of the target, so if the version you come from is not compatible, you will need to take care of that yourself. There's no magic here, not yet. So yes, this is the target: we are reading from one side, the source, and then we are writing to the other side using the client target. We have to move on, because time is going to be tight if we don't. I will show you the example so the people online can see this. While we're setting that up, I just want to show you: this is demo one and demo two; I hope you can see this and hear me. I just want to show you what's currently there under organisation units. You'll see there's only one, called Country. And on the other side, because we're also using these instances for another demo, I basically set up the same thing, so we have the same organisation unit on both sides: right now they are in sync. So let's start up our application. And now we are running the Spring Boot project.
All it's really doing is compiling and running it from source. Of course, you could build the jar file; in this case I'm not doing that. Then we just wait for it to come up. Should I make it smaller again? So now it's up and running. There's about a 10-second delay here, plus some delay on the screen, so sorry for that. Anyway, there's not much happening, because obviously there's no difference between the instances, so there's nothing much for it to do. So let's see: in demo one, you create a new org unit, you just call it something. This is a live demo, by the way, so I'm hoping this works. So now we have another org unit at level two. Of course, we now have to wait those 10 seconds. You see, when the screen updates, you will see that in demo two the new org unit is already available. Sorry for the delay; as you see here now, it's already there. In the same way, I can go back to that org unit, rename it and save it, and after a moment it's renamed in demo two as well. I mean, it's a simple example, but it shows some of the things you can do with these kinds of integration scripts. And this is something we're also going to focus a lot on going forward in the integration team in general: supporting these DHIS2-to-DHIS2 integrations. For this kind of thing, of course, you have to handle errors. What if the server is down? What if you want to do partial tree synchronization? What if you don't have read or write access to that instance? There are a lot of things to consider. This is a simple example, but it shows you how it can be done using this approach. Someone asked about deletes: right now, no, and that's because of something else, which we'll get back to soon. If you wanted that, you'd really have to handle removals yourself, and that's a bit more complicated, because we don't really have a good way of detecting removals in DHIS2.
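The limitation described here — that a pure polling sync can see what exists but not why something disappeared — can be sketched with a simple set diff (illustrative code, not part of the SDK):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: diffing org-unit IDs between two instances. The diff tells you
// *what* differs, but a missing ID alone never tells you who deleted it,
// when, or why — which is why event notifications would be a better basis
// for propagating deletes.
public class SyncDiff {
    /** IDs present on the source but not yet created on the target. */
    static Set<String> missingOnTarget(Set<String> source, Set<String> target) {
        Set<String> d = new HashSet<>(source);
        d.removeAll(target);
        return d;
    }

    /** IDs still on the target although they vanished from the source. */
    static Set<String> removedFromSource(Set<String> source, Set<String> target) {
        Set<String> d = new HashSet<>(target);
        d.removeAll(source);
        return d;
    }

    public static void main(String[] args) {
        Set<String> source = Set.of("country", "district");
        Set<String> target = Set.of("country", "district", "oldClinic");
        System.out.println(missingOnTarget(source, target));   // []
        System.out.println(removedFromSource(source, target)); // [oldClinic]
    }
}
```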
So basically, if you remove, say, the sub-unit from instance one, all the script sees now is that we have Country and whatever else still exists; it doesn't do anything more than that. So if we go back here and remove it, nothing happens on the other side. That's kind of the hard part. And in the same way, if I now removed Country, the script wouldn't even complain, because from its point of view nothing has happened. There is something about events, which we'll talk about later; I have to hurry up a bit. But I will show you later that we have a much smarter way of doing this: if you're reacting to creates, updates and deletes, you can get a notification and respond to that notification, which is a much better way of handling it. Because right now, as I said, when you delete something, it's just gone from the input, and you don't know why, or how, or who did it, or really much at all. And that's up to you: right now this is metadata, but there's nothing stopping you from using the same approach with data values, or enrollments, or whatever you want. But remember that, again, this is a quite simple abstraction right now, so when it comes to enrollments and tracker, there are going to be a lot of issues that you'll have to take care of yourself. That's just how it is. Someone asked whether it polls, or whether the source can notify a subscriber of changes. In this case, it just polls; it's just the timer. Ideally, in the future, you could instead just listen to events, for example; that would be the ideal thing. And it's very easy in Camel to set up an HTTP server.
So you'd just consume from that HTTP server endpoint, and the source would push to you to start the route instead of you polling. That's much, much better on big instances. Exactly. Okay, I do have another example, but I think I will skip it now just because of the time. We will talk about Keycloak later; we also have a bit of a Keycloak integration, but we'll show that demo when Morten is talking about OpenID Connect. Other than that, are there any more questions before I move on? Again, the repository is linked from the slides, so you'll be able to get all these examples, and you can also ask me if you want. It's all on GitHub; all the examples today are on GitHub, and they are linked from the slides. So I just want to talk about some of the other projects we've been doing. We've also been working with the WHO UMC to create an integration with VigiFlow and VigiBase. As you probably know, our packaging team in Oslo has basically been creating all these packages for different WHO standards, and one of them is for AEFI, which is adverse event reporting: if you get the COVID vaccine and you have a bad reaction to it, that's usually reported into a global system called VigiFlow, and it ends up in another system called VigiBase. And now we have created a small integration for that which basically allows you to automate that process. It does not currently push directly; direct integration is quite difficult with VigiFlow right now. Hopefully, in the future, you'll be able to do that. They also generally want people to double-check what's coming out; that's what we've been told, so that's what happens right now. But we can schedule a weekly email or a daily email or so on, to basically any kind of inbox, and you'll just get a file with all the new cases, which you can then import into VigiFlow.
And we also have an API for it, where you can query with an optional parameter saying, for example, give me everything from yesterday, or whatever you want. So that's one of the things we've been working on. There are currently multiple countries trying to start this up. We've been working with Maldives, and I hope to see it in a few other countries in Africa as well, so it's being used more and more. I will switch over now to Chandra, so I will stop sharing my screen. Thank you. I'll try to quickly go through this one, because I think Morten has more interesting stuff and surprises for us. So this is another one of those integration cases, but this time we have DHIS2 on one side and something called RapidPro on the other side. And this is going to be the structure in most of the integration cases: we have two systems, and both systems know how to talk in some kind of data representation language, which is JSON in this case. They also know how to use some kind of transfer protocol. In this case, DHIS2 uses HTTP and RapidPro uses HTTP, which makes things much simpler. And in between, to integrate these two components, we have our integration component, which we have developed using the DHIS2 Camel components. We have also used something called DataSonnet, which is kind of like a scripting language that supports transforming a payload from one form to another. In this case it's JSON to JSON, but we could even use DataSonnet to transform from XML to JSON, or JSON to XML; those kinds of things are supported by DataSonnet. So, when it comes to RapidPro: I think almost all of you have seen reality TV programs where you are expected to vote for the participants. You simply SMS some kind of code and then the participant's ID or something.
So the system automatically captures that code and increases the number of votes for that participant. That is kind of the most basic thing that can be implemented with RapidPro. Beyond such basic things, RapidPro gives us the ability to create messaging flows, which are more complex and which present users with questions. When the user sends an SMS response, it automatically captures that response, and then it can decide to present the user with more questions, or end the flow and create one large payload which includes all the responses from the user. For instance, to take a simple example with this event itself: we could create a RapidPro flow to capture the number of participants from each country. A user can initiate that flow with some kind of code; it can be anything. Once the correct SMS arrives, RapidPro initiates the messaging flow and starts asking questions. It can first ask how many participants are from, say, Vietnam, and the user can respond with, let's say, 10 or something. RapidPro registers that in the messaging workflow, so it is basically building a JSON structure including all the responses from the user. Once that response is recorded and it is a valid response, RapidPro moves on to the next question and asks how many participants are from India, and so on. It can go on like that until the flow is completed, and at the end of the flow, RapidPro saves the responses as a single JSON object, or we can even configure RapidPro to call an external webhook. So in the RapidPro-to-DHIS2 integration, what we mainly want to do is capture some community-level aggregated data from RapidPro users and then send it over to DHIS2 in the form of data value sets.
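The question-by-question accumulation described above can be sketched like this. It is a toy stand-in for a RapidPro flow run, with made-up question keys, just to show how the single JSON-like payload builds up:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of a RapidPro-style flow run: each valid SMS answer is stored
// under the current question's key, and the completed run is one payload.
public class FlowRunSketch {
    private final List<String> questions;
    private final Map<String, Integer> answers = new LinkedHashMap<>();
    private int next = 0;

    FlowRunSketch(List<String> questions) { this.questions = questions; }

    /** Records an answer to the current question; returns the next question, or null when done. */
    String answer(int response) {
        answers.put(questions.get(next), response);
        next++;
        return next < questions.size() ? questions.get(next) : null;
    }

    Map<String, Integer> payload() { return answers; }

    public static void main(String[] args) {
        FlowRunSketch run = new FlowRunSketch(
            List.of("participants_vietnam", "participants_india"));
        run.answer(10); // "How many participants from Vietnam?" -> 10
        run.answer(25); // "How many participants from India?"  -> 25
        System.out.println(run.payload()); // {participants_vietnam=10, participants_india=25}
    }
}
```

In the real system this payload is what RapidPro hands to the webhook or exposes via its API when the flow run completes.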
In order to do that, as you already know, your data value sets need to have some kind of organisation unit binding. So we need to know which user is sending these requests to RapidPro, as well as the organisation unit that the user belongs to. To facilitate that, the first functional requirement we identified is synchronizing users. What happens here is that it first fetches users with valid phone numbers from DHIS2; it gets that as a JSON payload, as we have previously seen, and then it transforms that JSON payload into a format that RapidPro understands as a RapidPro contact. So in DHIS2 we have DHIS2 users, and in RapidPro we have RapidPro contacts, and it's basically doing this transformation from DHIS2 users to RapidPro contacts. It also includes the DHIS2 organisation unit ID as an additional field on the RapidPro contact. RapidPro identifies users based on the phone number, so when a message is received, RapidPro knows which DHIS2 user has sent this message, as well as the DHIS2 organisation unit that belongs to that user. And I will come back to broadcast reminders later. The next functional requirement is the main requirement of the entire integration: transferring reports from the RapidPro users, the RapidPro contacts, to the DHIS2 system. When a RapidPro contact sends an SMS, RapidPro first walks that user through the messaging flow and captures all the information that needs to be captured, and it creates one large payload which includes all the responses from the user. And then, if we have configured RapidPro to call a webhook — we have two approaches for doing this — either we can configure a webhook so that it calls one of the webhooks we have exposed through our integration component.
In that case it will transfer all the information as RapidPro gets it, in real time. Or we have another approach, where the integration component periodically calls RapidPro with a flow ID and some kind of timestamp, so that it gets all the new events that appeared after the timestamp specified. Once our integration component receives a message or an event from RapidPro, it can call RapidPro back and ask for information about the user who sent this message. For instance, our integration component now has all the data to create the data value set, but it does not know the organisation unit, so it can go back to RapidPro and ask for the organisation unit that this user belongs to. Then it can finally create the entire payload that should be sent to the data value sets API, and simply call the DHIS2 data value sets API through the DHIS2 Camel components. And then we have broadcast reminders, which is also a functional requirement. If a data value set is about to expire, the integration component can periodically check for expiring data value sets and remind users to enter data if they are behind schedule. There are some non-functional requirements for this integration as well. The first and most important one is reliability: if RapidPro registers a message from the user, and the user completes the flow without any errors, then we should guarantee that it's going to be delivered to DHIS2 and that the data value set gets created. Then there are the common non-functional requirements, security and maintainability, and the integration solution should also be fast enough, because so far we have only discussed the use case where one user sends an SMS, but in a production environment there can be hundreds or thousands of users sending concurrent SMSes, and we're dealing with multiple concurrent flows.
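The second, polling-based approach can be sketched as follows. The endpoint shape and parameter names here are illustrative assumptions, not the exact RapidPro API; the point is only the moving time window:

```java
import java.time.Instant;

// Sketch of periodic polling with a timestamp cursor: each poll asks only
// for flow runs that appeared after the previous successful poll.
public class PollWindow {
    private Instant lastPoll;

    PollWindow(Instant start) { this.lastPoll = start; }

    /** Builds the next poll request and advances the window. */
    String nextRequest(String flowId, Instant now) {
        String url = "https://rapidpro.example/api/runs?flow=" + flowId
                + "&after=" + lastPoll;
        lastPoll = now; // only advance once the request has been built
        return url;
    }

    public static void main(String[] args) {
        PollWindow w = new PollWindow(Instant.parse("2022-06-01T00:00:00Z"));
        System.out.println(w.nextRequest("vaccination-flow",
                Instant.parse("2022-06-01T00:05:00Z")));
    }
}
```

In a real component the cursor should only advance after the fetched events have been safely queued, which is exactly the reliability requirement discussed above.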
So all these accepted messages should be delivered reliably, with acceptable throughput and low latency. The last non-functional requirement we identified is extensibility. This was useful in the Uganda case; by the way, this has been successfully deployed in Uganda. In their setup there's another component between our integration component and DHIS2: they have a homegrown solution which accepts messages from our integration solution, does some kind of auditing and so on, and then delivers those messages to DHIS2. So extensibility is important, so that we can easily switch components: if you don't want to submit messages directly to DHIS2, you should be able to simply change some of the routes and deliver the message to something else. Apart from that, we are also providing management and monitoring tools, and tools for recovery. We are using hawtio for management and monitoring, which is kind of a JMX-based tool that is natively supported by Camel as well. With this tool we can stop and suspend routes if that is required. If there's some kind of maintenance going on, we can simply suspend some of the routes, so we temporarily stop accepting messages on that route. We can also replay failures: if RapidPro has successfully accepted the message, but delivery to DHIS2 has failed, we can simply replay those failures and manually address them if that is possible. And throughout the Camel routes we can add info logs, or debug logs, or logs at any level, and hawtio is capable of collecting those registered logs and providing a nice interface to read them later. It also provides the ability to analyze latency, so we can see which route takes more time.
I mean, if a route is taking a significant amount of time, we can easily debug why that is happening, and things like that. We also have an H2 database for recovery. We use it as the dead letter channel: if something fails, we simply save the entire payload into the H2 database so it can be analysed later and reposted manually if that is required. It also acts as a repository for logs, and because it registers which RapidPro flow the message came from and whether it was delivered and all those things, it can easily be used for debugging as well. Sorry, auditing. Yeah, that's about it. We are not actually sending events to DHIS2 as the same user; we use a common user for sending all the events. We use the user just to identify the organisation unit. No, we don't have to code anything. I mean, there is the option to code as well if that is required; if there is some custom behaviour which is hard to implement, we can code it. But RapidPro provides a user interface where you can drag the components and configure them. Yeah, it's like a dialog. Yeah, RapidPro flows support Facebook messages, WhatsApp messages and all those things, so we can do that as well. Yes, it's available on the DHIS2 GitHub; if you go to the DHIS2 GitHub and search for RapidPro, you will find it. So let's continue. We don't have that much time, so I will try to be quick with the last part. Just a little bit about our approach to FHIR going forward. As you probably know, at some point we created this kind of real end-to-end DHIS2-to-FHIR adapter doing two-way syncing. It didn't really go that well. I mean, it is being used, but it was never deployed anywhere by us or by Oslo itself. So we have switched focus a little bit when it comes to FHIR, and we try to support specific use cases.
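The recovery idea above can be sketched in a few lines. This is not the project's code: it uses an in-memory queue where the real setup persists to H2, and all names are illustrative, but it shows the dead-letter-and-replay pattern.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

// Sketch of the recovery idea: failed deliveries are parked in a dead letter
// store (an in-memory queue here, standing in for the H2 database) so an
// operator can inspect and replay them later. Names are illustrative.
public class DeadLetterStore {

    final Deque<String> parked = new ArrayDeque<>();

    // Try to deliver; on failure, park the payload instead of losing it.
    void deliver(String payload, Consumer<String> target) {
        try {
            target.accept(payload);
        } catch (RuntimeException e) {
            parked.add(payload); // persisted to H2 in the real setup
        }
    }

    // Replay everything that was parked, e.g. once DHIS2 is reachable again.
    void replay(Consumer<String> target) {
        while (!parked.isEmpty()) {
            deliver(parked.poll(), target);
        }
    }
}
```

Camel's own dead letter channel error handler gives you this behaviour declaratively; the sketch is only meant to show what "save the payload, replay it manually" means.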
So again, we are using the same stack we just went through for the RapidPro example, with the org unit sync and everything. Going forward, we will be much more specific when it comes to DHIS2-FHIR integrations: that might be synchronising your locations and your organisations, synchronising your code systems and value sets, and so on. So just very quickly on how that approach looks. We do have some examples already; they are in one of the linked repositories, so feel free to look into that if something interests you. We have also been working a little bit with PAHO, doing some QuestionnaireResponse work, which basically maps DHIS2 events, so that hopefully in the future we will also be able to see these kinds of events as QuestionnaireResponses, basically. Again, there's an example there that you can have a look at. We had a couple of other conferences; one of them was the annual conference in the summer. There are three or four examples there using the SDK, using DataSonnet and a few other things to create bundles. mCSD is basically a profile for organisation units, as simple as that, and the example shows you four ways of implementing that using the SDK, some of them using DataSonnet, some not. So please have a look at that; it's a public repository. All the examples from today will be in this repository, which is currently private but will be made public soon. So please go there if you want to look at any of the demos I have shown you today. There is also the Keycloak demo that will be shown soon. And we do have a new website, which is probably the most important part of this new FHIR strategy. Feel free to go there. We have this new fancy page on dhis2.org, and it goes through a little bit of our standing on FHIR going forward, the technologies, and the profiles we are targeting.
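As a taste of what an mCSD-style sync does, here is a hand-rolled sketch of mapping a DHIS2 organisation unit to a FHIR Organization resource. The repository examples do this with the SDK and DataSonnet rather than string building, and the exact identifier/type mapping an mCSD profile requires is richer than this; the `OrgUnit` record is an assumption for illustration.

```java
// Illustrative sketch only: a DHIS2 organisation unit rendered as a minimal
// FHIR Organization resource, the core of an mCSD-style org unit sync.
// The real examples use the SDK and DataSonnet transformations instead.
public class OrgUnitToFhir {

    record OrgUnit(String id, String name, String code) {}

    static String toOrganization(OrgUnit ou) {
        return String.format(
            "{\"resourceType\":\"Organization\",\"id\":\"%s\"," +
            "\"identifier\":[{\"value\":\"%s\"}],\"name\":\"%s\"}",
            ou.id(), ou.code(), ou.name());
    }

    public static void main(String[] args) {
        System.out.println(toOrganization(
            new OrgUnit("O6uvpzGd5pu", "Bo", "OU_264")));
    }
}
```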
So please have a look at that site, of course. There is also a link to the presentation we did in the summer, which again shows a lot of the same stuff I'm showing you today. Please have a look at that if it's interesting; it goes into a bit more detail regarding how this FHIR side has been used. If you scroll all the way down, you will also see links to more examples that we have made. So this is probably the place to go if you want to know anything about FHIR and DHIS2. We will be updating it going forward: whenever there's a new project we do using FHIR, it will be added there, and whenever there's a new presentation, it will also be added there. It went up just a couple of days ago, so it's fresh. Now a little bit about what's happening in the future with the integration team, and what the planned ideas are. As I said, we are switching gears a little bit and doing something we probably should have done to start with: we are going to focus a lot on DHIS2-to-DHIS2 integrations specifically, as I showed you with the org unit sync stuff. That is going to lead to a product itself, something you can just download and run on your server. You potentially have a source and then multiple targets that you want to synchronise with, potentially with some parameters depending on whether you want to synchronise the full tree or a partial one, and what you actually want to synchronise. And of course you don't necessarily just want org units; you want org unit groups and group sets, and you might have attributes on them. So it's a bit more complicated than what I showed you today; it's a lot more involved than just synchronising some small part of the org unit tree. We want to expand that, maybe to option sets, categories and category options potentially, and build on that until we have a more full-fledged product. But this is, again, a long-term plan.
It's not a six-month thing, more like a year and ongoing in general. Potentially there might be data integration happening there also. Again, the focus on this whole FHIR area makes sense if you have a real FHIR need in your country. Not because some donor is telling you you need FHIR, but if you have an actual system with FHIR and you want DHIS2 to work with it, to make the integration happen, we are very, very happy to support that. The email was at the start of the FHIR section, so just send us an email. Again, please have a FHIR system ready, because a lot of people come to us who just want FHIR: they don't know which profiles they are targeting, they don't have a FHIR system in place, they don't have anything. We want to support real-world use cases, so if you have something, please come to us with that. Another thing that's more and more important is working with the DHIS2 core itself. I'm also working with the core team on supporting this kind of extensibility in DHIS2. As I said, event hooks are the first thing coming; the initial version will be in 2.40. The routes API is another one, allowing you to call out to other systems without having to hard-code anything in the core, with properly set up authentication, encrypted passwords and so on: you just call route X, it goes to another server and gets the data, and that's how you do that. Another way we really want to influence DHIS2 is system identifiers, because how that works currently is not great. We have the code field, a single code, and we don't know what the code represents. We want to expand on that. We might model it a little bit on FHIR, but there are also other identifier systems out there, and we want to look into how we can do that in a better way. And the same for code lists in general: the DHIS2 option sets get the job done, but they're not great.
The codes are missing context, missing relationships between codes, those kinds of things. So that's something we also want to work on with the core team. And maybe the bigger thing: there will be an Integration Academy in March. The exact dates will be announced soon. It's probably happening in Rwanda, mid-March or end of March, but again, the dates will be announced in the usual places. I will use the remaining minutes and then hand over to Morten for the OpenID Connect stuff. Again, I just want to show you that the event hooks stuff is a much more interesting way of doing the org unit sync I showed you today. We are going to focus on webhooks to start with: when something happens in DHIS2, for example you get a new org unit, we will support the target being a webhook, but in the future the target can be Kafka, it can be ActiveMQ, it can be anything we want to support, basically. And that's the idea here: by not tying the source to the target, it can support multiple things. But again, for 2.40 it's going to be metadata and probably only webhooks; we will see what we have time for, of course. Another thing is that we have this internal Artemis queue being used for all of this, and event hooks are not meant as a direct replacement, but potentially in the future we can reuse them for the same things. I'm hoping for it. So I have a very experimental demo; I just want to show you how it's going to work. Hopefully it will work; this is again a live demo. I have a very simple setup. This is nothing that's published yet, by the way, so it's not something you can test out yourself right now. I'm just running a very simple DHIS2 instance locally with the Sierra Leone database, so nothing special. If you go to the event hooks endpoint, you'll see it currently has no hooks at all. Nothing special there; of course, it should start with no hooks.
So let's create the event hook. In this case, oh, this is still very small. So you see the name; in this case the name has something stable like the UID in it, which is always recommended. The path in this case is just `metadata`. You will see that the actual path of the event is going to be `metadata.organisationUnit.` plus the UID of the object being affected, but with this you can just listen to all metadata. There will be many more options here, including field filtering and so on, but right now all you can say is: given this path, I will listen to this path only. And you can have targets. In this case I only have one target, but you can have multiple targets: you can target multiple instances, or you can have one webhook and one Kafka and one something else; it should be flexible. So you just set the type, in this case, of course, webhook. I have the target URL here; I will show you soon how that works. And for auth I'm using an API token. This is not a DHIS2 API token, by the way; this is something that's implemented on the receiving side. So if the target itself needs authentication, we support HTTP basic and API token for that. You can just enter the token here, or you switch the type from API token to HTTP basic, and then it will automatically assume you mean basic authentication, and you have username and password as your fields. Okay, so let me just send that to the system. Hopefully it's okay. Okay, one event hook created. Let's check that we have the hook. All right, so we have it here. Right now the secrets are stored unencrypted; they will be encrypted in the future. Again, this is a starting point.
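For orientation, here is roughly what the registration body shown in the demo looks like. The feature was experimental at the time of the talk, so field names and the endpoint may differ in released versions; the URL, token, and the `/api/eventHooks` path are assumptions for illustration.

```java
// Rough shape of the event hook registration from the demo. The feature was
// experimental, so treat field names as approximate; URL/token are placeholders.
public class EventHookRegistration {

    static String webhookHook(String name, String path, String url, String token) {
        return String.format(
            "{\"name\":\"%s\"," +
            "\"source\":{\"path\":\"%s\"}," +
            "\"targets\":[{\"type\":\"webhook\",\"url\":\"%s\"," +
            "\"auth\":{\"type\":\"api-token\",\"token\":\"%s\"}}]}",
            name, path, url, token);
    }

    public static void main(String[] args) {
        // POSTed to the event hooks endpoint, e.g. /api/eventHooks (assumed path)
        System.out.println(webhookHook("org-unit-hook", "metadata",
            "http://localhost:8081/webhook", "secret-token"));
    }
}
```

Swapping `"type":"api-token"` for `"type":"http-basic"` with username/password fields matches the alternative described above.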
You can also set custom headers: if the receiving side's authentication is not using the standard headers, you can set your own API token header or whatever it might be, so you can add whatever headers you want to that request. Okay, so let's see here. Let me start up the other demo. I have created a small project here. As I said, this is plain Spring Boot 3 with JDK 17, nothing special. I'm just receiving the webhook request, getting the payload, and pulling the Authorization header out of it. Then I'm checking it against a hard-coded API token; if it doesn't match, I return 401 Unauthorized, and otherwise I just print out what we received. What makes this interesting, again for the future, is that you can actually run this as a native binary. One of the very cool things about the new Spring Boot distributions coming out is that you can build the project as a native binary. In this case, I don't even need Java installed; it's a full binary that contains everything you need. This allows you to push it into a Docker image or whatever you want, and it makes a really lightweight container. Okay, it's up and running. As I said, it started in, let's say, a fraction of a second. Just to compare, let me also start it as a normal Spring Boot app, just as a comparison, because this is something that's very interesting. Apache Camel does not support Spring Boot 3 yet, by the way, but it will in the future. It doesn't take much longer in this case, just 1.1 seconds, but you can see the native one is a lot quicker, right? It just starts up immediately. But again, let's use the native one. Okay. So let's go back to Postman, close this down, and create a new org unit.
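The demo receiver is a Spring Boot app, but the token check it performs can be sketched with the JDK's built-in `HttpServer` so the example is self-contained. The token value and `/webhook` path are demo assumptions.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;

// Same idea as the demo receiver, using the JDK's built-in HttpServer instead
// of Spring Boot: compare the Authorization header against a hard-coded API
// token, answer 401 on mismatch, otherwise print the event payload.
public class WebhookReceiver {

    static final String API_TOKEN = "ApiToken secret-token"; // demo-only value

    static int authorize(String authorizationHeader) {
        return API_TOKEN.equals(authorizationHeader) ? 200 : 401;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/webhook", exchange -> {
            String auth = exchange.getRequestHeaders().getFirst("Authorization");
            int status = authorize(auth);
            if (status == 200) {
                try (InputStream body = exchange.getRequestBody()) {
                    System.out.println(new String(body.readAllBytes()));
                }
            }
            exchange.sendResponseHeaders(status, -1); // no response body
            exchange.close();
        });
        server.start();
    }
}
```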
Even though this is the Sierra Leone database, I'm not adding any parent; I'm just keeping it simple. You're going to end up with two roots, but it doesn't really matter for this test case. So I'm sending this in. Okay, org unit created. Perfect. I can go back to my listener. You'll see it was not a 401; the authentication was correct. You'll see the actual path is `metadata.organisationUnit.` plus the UID. Again, this is just a prototype, so how it ends up looking may differ. But it's going to have an operation field, and in this case the operation was create. And then you have the full payload. You probably don't want the full payload, so there will be field filtering support and so on. So again, we go back to Postman and do a little update. Let's just rename it, I don't know, "country two"; it doesn't really matter. And again we send it in, and again it was a 200 OK. Now another event was sent: the same path, because it's the same object, but now the op is update. And maybe the most important one is deletions. So now I'm going to delete the org unit I created. Again, 200 OK. This time you will not get the full payload, but it will say: this is again the path, the operation this time is delete, and this is the object; it's just the ID of the object. So now I can actually react to deletions of organisation units in DHIS2 too. If you are synchronising with an external service, it's not always as easy as just deleting, because obviously in a long-running system you might have data linked to the org unit and so on. But again, that goes back to the complexities of org unit sync. The point is that we now have the option of reacting to deletions of organisation units. Yeah, that was a quick demo. I have to switch over to Morten now. Are there any questions on this before we continue?
But I think we will just move on. Okay, so I will hand it over to Morten. Yeah, so I'm just going to briefly talk about OpenID Connect. OpenID Connect is a provider-agnostic single sign-on solution. It works for an organisation with multiple systems that want to share a central login service; that is basically what we are going to demo today. You can also very easily use it with the big providers: the common "log in with Google" or "log in with Facebook" buttons, that's OpenID Connect. It's often used in countries that already have a national authentication or digital identity system, where the government typically runs a server and some agency takes care of operating that service. For example, KS in Norway implemented a COVID vaccine system on DHIS2, and they used OpenID Connect against their existing government login solution. OpenID Connect has been supported in DHIS2 since around 2.35; I think KS was the first that actually took it into use, so it's been in production for a while now. It's very easy to set up a first minimal integration, so we can demo that now. We also support it in the Android app, which I believe supports this kind of login now. So there are challenges to this as well. The provider only handles authentication, so you still have to do all the role management and handle the session state in DHIS2 yourself, and so on. That's kind of the challenge with a central authentication system. In the vaccine case, for example, they built their own solution on top to solve this: a secure system that also administers these roles, with OpenID Connect underneath.
It's only authentication: there's no authorisation, no role management; the provider just does authentication. So we're looking into that now, maybe providing some tooling that people who want to use this can start from. One challenge you will also see in the demo: when you log out from one of the instances, there's no automatic synchronisation of the sessions on the other instances. There are some links there, and we'll see some examples with Morten. So we're going to do a quick demo now. Morten will show how it's set up. The point is that you're not just doing single sign-on into DHIS2; you set it up for your other systems too. If you have six systems in your country, or two, or however many, you get the same login across all those different systems. So first, the configuration that is necessary. Basically we have three DHIS2 demo instances configured. They're all basically the same, with a few small changes, so I'm just going to show you one of them. Just to show you quickly an example of how it can be configured: you can actually have multiple OpenID Connect providers as well. On the front page of DHIS2 you can have one button for logging in with Keycloak and one button for logging in with Google, at the same time.
So this is the minimal OpenID Connect part: the client ID and the client secret, plus the redirect URL. That's the most important information. The rest is just pointing to the Keycloak server, but all these URLs are actually standardised, in the sense that every provider serves them from a well-known URL, so in principle you could do dynamic registration from that; we don't support that right now. That's the minimum setup. I'm not going to show you exactly how Keycloak is set up; Keycloak is very easy to start in development mode, you start it up like a demo and follow the docs on the website. So we have Keycloak running here, and we're just going to add a user with a username. Here the email is going to be what's called the mapping claim: you're going to have a user on the DHIS2 server that has this email as the mapping value. That's it; there's no role management here, this is just the minimal setup. So now we see this "Sign in with Keycloak" button. This comes from the configuration I showed you; the button is generated from that config. If there were more providers, they would just show up here too. So now: log in with Keycloak. Keycloak also supports, when you're provisioning a user, sending an email so the user has to change the password first; that's one way to do user provisioning. Now we're logged in. So now I can go to the other instance, where I'm logged out already. And in the other instance we get logged in without seeing the login screen in Keycloak. This is the thing: the sessions are not automatically synchronised, but since we're already logged in to Keycloak, it sends us straight through. So that's the demo. Again, the bigger issue is synchronisation of your users; that's the biggest problem.
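For reference, the minimal provider configuration described here lives in `dhis.conf` and looks roughly like the sketch below. This is from memory of the DHIS2 OIDC documentation, so treat the exact key names as approximate and check the official docs; the server URLs and realm name are placeholders.

```properties
# Approximate dhis.conf OIDC provider setup (verify key names in the DHIS2 docs)
oidc.oauth2.login.enabled = on

# Generic provider named "keycloak"; the name appears on the login button
oidc.provider.keycloak.client_id = dhis2-client
oidc.provider.keycloak.client_secret = <secret-from-keycloak>

# Claim used to match the Keycloak user to a DHIS2 user (email in the demo)
oidc.provider.keycloak.mapping_claim = email

# Standard endpoints, served by every provider under well-known URLs
oidc.provider.keycloak.authorization_uri = https://keycloak.example/realms/demo/protocol/openid-connect/auth
oidc.provider.keycloak.token_uri = https://keycloak.example/realms/demo/protocol/openid-connect/token
oidc.provider.keycloak.user_info_uri = https://keycloak.example/realms/demo/protocol/openid-connect/userinfo
oidc.provider.keycloak.jwk_uri = https://keycloak.example/realms/demo/protocol/openid-connect/certs
```

A second provider (e.g. Google) is just another `oidc.provider.<name>.*` group, which is how the multiple login buttons mentioned above appear.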
Yeah, but then we have created the email and the user in DHIS2. Exactly, and then it works. So, we are working on a little bit of tooling around Keycloak and DHIS2, and I'll show you something simple. This is the last thing we're going to do today. I'm just going to go back to the user Morten created here and delete it, because I don't want to do this manually; I think that's boring. So I just delete it, and now it's gone again. So we go back to the instance here; it's still logged in, I guess because the session is not gone yet; that's why. So now in Keycloak there's no test user. Of course, the reason it even worked was that we had this user in all three instances, and that's one problem to handle in itself: the DHIS2 user synchronisation. But now you also have Keycloak, another place where you have to add the same users. That's something we want to help you with a little bit. So we have three users here, but in this case they're just using passwords, so they're not using the external authentication at all. But we do have one user called test user, the same one Morten just created. It has this checkbox ticked: external authentication only. So what do we ideally want to do now? We want to take this DHIS2 server and put this test user into Keycloak automatically. For that we have, again, something that is very much work in progress; it will require a few steps. As Morten said, there will be a write-up on this at some point, but I want to show you now. We're doing the user integration with DHIS2, pulling directly from the demo instance, and again we're using the SDK and the Camel component as we saw before. We have a list of targets; this is the Keycloak instance we're targeting, and then we have the token.
Again, this is something we can probably write up at some point, but it is pretty straightforward to get an API token from Keycloak to make this work; you do need an admin token, otherwise you will get access denied, of course. So this project is quite similar to what we had before with the org unit sync. The big change here (hold on, let me make it bigger again, sorry) is that in this case I'm just running it once; again, you could run this every night or on whatever schedule you want. I make sure that external auth is set, because in my case those are the only users I care about; your case might be different, you do what you want, and you might want other fields on that user as well. What we do care about: the ID (in this case it's not actually being used, but usually it's nice to have the UID), the username of course, the email, which is important, and whether the user is disabled or not, so that if a user gets disabled you keep your list up to date and stop allowing login for them. In this case we're just pulling directly from the users endpoint. Again, we're doing the unmarshalling as before, splitting as before, and then we're sending the user updates one by one over the wire as usual. Then there's the receiving end, which is usually the most interesting end. Pretty much as before, we unmarshal that back into the user class. In this case, since Keycloak has a different domain model than DHIS2, I have created some Lombok classes to model exactly what's required for Keycloak. Again, the demo will be available; you'll see it looks a bit different. The way Keycloak handles, for example, credentials and passwords is a bit different as well, and that's fine; that's why we have to transform it. Again, one of the biggest issues you're going to see is passwords: DHIS2 does not expose the password.
That's a good thing, but Keycloak does require a credential on the receiving side. Ideally we would set up verified emails, so the user has to verify by email on first login. We're not doing that, so we're just setting a static password. We do add a required action, which means that the first time they log in, they will have to change the password immediately. So you could email them some generated password and say, okay, please log in with this one, but then you have to change it. And of course the user can be enabled or not. Again, we're setting the body; this is the new body that Camel cares about. In this case, since we are not targeting DHIS2, we cannot use the DHIS2 component as the receiving endpoint. But the good thing about Camel is that it has components for all kinds of things, including many different HTTP clients. So we're setting the method to POST, because we are posting something to the endpoint. The authorisation uses the bearer token, so we're just using the token we already have in the property file; this has to be authenticated. We're setting the Content-Type, because we're sending JSON. And this is the endpoint we're sending to: the Keycloak users endpoint. Then we just end up logging the result. I'm not sure if I have it open here already. Oh yeah, I do. So again, you can see that we don't have the user yet. And again, it only runs once, because that's the timer I set up in front of it. So when I start the app, you should see something. Hopefully the token hasn't expired. It might have. Yeah, of course: sorry, the token has expired. So that's how it goes, sorry about that. So hopefully now it should be working, unless we have changed something else on the server, which I don't think we have. This is the demo; the end result is that we should have the user. Okay, right, this is Keycloak.
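The payload built on the receiving side can be sketched as below. The structure (a user representation with a `credentials` list and a `requiredActions` list, POSTed to Keycloak's admin API at `/admin/realms/{realm}/users`) matches Keycloak's Admin REST API; in the actual project this is a Lombok-annotated class marshalled by Camel, and the helper name here is made up.

```java
// Sketch of the Keycloak user payload the route builds before POSTing it to
// /admin/realms/{realm}/users with a bearer token. The static password plus
// the UPDATE_PASSWORD required action mirror what the demo does, forcing a
// password change on first login.
public class KeycloakUserPayload {

    static String fromDhis2User(String username, String email, boolean disabled,
                                String staticPassword) {
        return String.format(
            "{\"username\":\"%s\",\"email\":\"%s\",\"enabled\":%b," +
            "\"credentials\":[{\"type\":\"password\",\"value\":\"%s\",\"temporary\":true}]," +
            "\"requiredActions\":[\"UPDATE_PASSWORD\"]}",
            username, email, !disabled, staticPassword);
    }

    public static void main(String[] args) {
        System.out.println(fromDhis2User("testuser", "test@example.org",
            false, "changeme-123"));
    }
}
```

A DHIS2 user flagged as disabled comes through with `"enabled":false`, which is how the "keep your list up to date" point above is enforced on the Keycloak side.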
If it was successful this time, we go back to Keycloak and refresh, and we have the test user. Right, so now we have the test user here. If you go into it, you will see it has credentials and so on. Nothing too exciting, but at least it worked. And now we can go back to one of our DHIS2 instances. We make sure we have that user, with external auth enabled, and then we should be able to sign in with Keycloak. Okay, remember we set this static password, so let's just copy that. And we go back in here, log in as this test user. And remember, because we set the required action, what happens now is: you need to change your password to activate your account. So we change the password to something very secure, like "hello", and now we are logged in again with this user. So the next time, we log out here, and then let's go to a different instance; this is demo four again. And let's just verify that it's still working. And it's working fine. So you get the same result as before; it's just that this time it was a little more automated, in that the Keycloak user was created automatically. So this is a different project from the org unit one; it's just a way of synchronising the users, again using the SDK. Yeah, and this demo will also be available in the same repository. I'm also going to write some tutorials, because there are a few steps involved when it comes to the actual integration: you have to get the token, and the users have to be set up with some authorities and so on. So there will be some guides around that. And yes, it's okay to have a mix: some users can use a password, some can use Keycloak. You're still allowed to. Yeah, that's why, when you're already logged in to Keycloak, it just redirects straight into DHIS2: it verifies with Keycloak that you're logged in already, and then there's no login screen.
In demo four now, you didn't actually have to go in and do anything, right? All you did was click on the button; you still have to click on the button, so it's not fully automatic, but if you go to the instance, you're already logged in. Exactly. It's just because I auto-filled it; that's why it looked like a username and password, but I did just click on the button. Using the username and password would not have been allowed at all for that user. All right, I think that's it. We're a bit over time, sorry for that. Are there any final questions before we end? If not, I think we're done for the day. Just a couple of quick points. Firstly, a big hand to Morten and Chandra as well. Thank you very much. And if anyone online or watching the recording wants to ask any of you questions, what's the best way to get in touch? integration@dhis2.org is one of the easiest. Excellent. Thank you very much. I think we've got lunch now, and we'll be back in 50 minutes.