Good morning, everyone. I think we're going to get started with our session. We're missing a few people, but we have a number of people online, and of course all of you are here as well. It is our last day together, but we still have a lot of interesting sessions that we want to share with you today. In our first session we're going to speak about integration, and I'll let Morten explain a little bit more about the different work that he and his team are performing to support this. I know a lot of you have various integration projects and efforts that you're working on, so I think this will be a very interesting session for you. Good morning, everybody. Today we're going to talk a little bit about what's called the integration team here in Oslo: a little bit about what we've been doing lately, who we are, and what we occupy ourselves with. We have one guy from our team here, Chetra — I don't see him right now, but he's somewhere in here. We also have Morten from our platform team; he will do a short session showing OpenID Connect and how that integrates with DHIS2. So, a quick overview of the team itself. We have four people currently working on this — some part-time, some full-time. Me and Bob, we're kind of leading it.
He's the product lead, I'm the technical lead; we're both more or less 50% or less on this. Then we have a full-time engineer called Kjell from Malta — you should probably meet at some point, hopefully. And then we have Chetra here with us today, who will also do a session on the DHIS2-to-RapidPro integration. We had a bit of a journey when it comes to the technical stack we've been using. If you've been following DHIS2 and integrations for the last five to ten years: we've been using Python, we've been using Node.js, we've been jumping through many hoops to figure out what we should use as our technical stack. We have now landed on something that we feel is quite good and serves our purposes nicely. It also aligns with the DHIS2 core, which is Java-based. So, this is our current stack. It's mainly based on Java, of course: Spring Boot, which is basically the Spring framework put in a nice package, and Camel, which implements what are called the enterprise integration patterns, so it can do all kinds of routing of your messages. It has about 300-plus components, as they call them, so you can listen on topics, on queues, on any kind of pub/sub system, react to that, and, for example, transform the message into something else. On top of that, we have created a DHIS2 client SDK. It's really basic, to be honest, but it gives you the building blocks to talk to DHIS2 using either basic auth or API tokens; it has an easy-to-use client builder, which you will see later, and then you can make that client available to Camel. On top of that SDK we have the Camel component itself, which again you will see a little bit of later. I'm not going to demo today, by the way — I'm trying to keep it a bit less technical than Tuesday's demonstration. On the side here we have a couple of other technologies that you don't have to use, depending on your situation.
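To make the idea of the client SDK concrete, here is a minimal, stdlib-only sketch of the pattern described above: a thin wrapper over an HTTP client, built once with credentials and then reused. The class and method names here are hypothetical illustrations — the real DHIS2 Java SDK's API differs.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical sketch of the idea behind a DHIS2 client SDK:
// a thin wrapper over an HTTP client, configured once, then injected anywhere.
public final class Dhis2Client {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;
    private final String authHeader;

    private Dhis2Client(String baseUrl, String authHeader) {
        this.baseUrl = baseUrl;
        this.authHeader = authHeader;
    }

    // Builder-style factory mirroring the "client builder" mentioned in the talk.
    public static Dhis2Client withBasicAuth(String baseUrl, String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return new Dhis2Client(baseUrl, "Basic " + token);
    }

    // Exposed for illustration: the header the client attaches to every request.
    public String authHeader() {
        return authHeader;
    }

    public String get(String path) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + path))
                .header("Authorization", authHeader)
                .GET()
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) {
        Dhis2Client client = Dhis2Client.withBasicAuth("https://example.org/api", "admin", "district");
        System.out.println(client.authHeader()); // prints: Basic YWRtaW46ZGlzdHJpY3Q=
    }
}
```

The point is only the shape: credentials go in once at build time, and every call then reuses the configured client, which is also what makes it easy to register as a Spring bean and hand to Camel.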
One you might want to use is DataSonnet, which is a JSON-to-JSON transformation language — quite interesting. And then we have ActiveMQ Artemis, which is basically a pub/sub system, a queue: you can push messages into it and then listen to it from other places. All the demos I'll be linking to today are using Artemis, and Chetra will also demonstrate it a little bit later. So again, the Java SDK is meant to be a very simple abstraction over an HTTP client, adding a few DHIS2-specific conveniences. There's a client you can create: you use the client builder, you build it with a certain API token, for example, and then you have it ready. It's similar to, say, a RestTemplate and a RestTemplateBuilder in Spring, and you can use it in much the same way. We don't usually use this directly, though. What we actually end up using is the Camel component. The Camel component is basically our component in the Camel ecosystem to make it easier to get resources, post resources, update and delete them, and all that stuff, through a unified DHIS2 component. It's still very much a work in progress, but it has been used, for example, for a RapidPro integration, and it has been successfully used in multiple integrations. Again, all the examples I'm going to show are using this component.
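For a flavor of what a DataSonnet mapping looks like, here is a small illustrative JSON-to-JSON transformation. The header directives follow DataSonnet's documented style, but this particular mapping and its field names are invented for the example; it needs the DataSonnet runtime to execute.

```jsonnet
/** DataSonnet
version=2.0
output application/json
input payload application/json
*/
// Hypothetical input:  {"first_name": "Amina", "last_name": "Diallo", "tel": "+250700000000"}
// Produces:            {"name": "Amina Diallo", "phoneNumber": "+250700000000"}
{
    name: payload.first_name + " " + payload.last_name,
    phoneNumber: payload.tel
}
```

In a Camel route, a mapping like this typically sits between the endpoint that receives one system's payload and the endpoint that posts the other system's expected structure.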
So this is just a simple example. You can see here I'm using the SDK to create a new client: I'm putting in the API endpoint, and in this case I'm just using basic authentication — I could also use API-token authentication. I'm in the Spring ecosystem, so I can inject it anywhere, and you'll see we do that here. I'll try to keep it not too technical, but in this case you're basically listening to your queue — this is an Artemis queue — so whenever there's something new on the topic you're listening to, we take what was on there, which is just a JSON payload, put it into a Java class, do a little bit of transformation on that class, and then post it to DHIS2. The last line here is probably the most interesting: this is the DHIS2 component — in this case we are posting to the metadata endpoint, using the resource that we already defined, and we use the client that we defined up here, called dhis2Client. Again, this is kind of the beauty of Camel: it allows you to set flows up very easily using Java. You can do much more complicated stuff with this: you can split messages up, you can loop over them, you can do separate processing, you can do conditional flows if you want, and many other things. Now I'm just going to talk about a few of the projects we have been working on — some are products, some are reference integrations. One of them is AEFI, the adverse events following immunization integration. This is all based on the WHO AEFI metadata package in DHIS2. How many of you are using that WHO AEFI package?
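The route just described — listen on a queue, map JSON into a Java class, transform, post to DHIS2 — can be sketched in Camel's Java DSL roughly like this. This is a sketch, not runnable as-is: the queue name, the FacilityEvent class, and the exact `dhis2://` endpoint parameters are illustrative assumptions rather than the component's verbatim syntax, and it assumes the Camel and Jackson dependencies plus a registered `dhis2Client` bean.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

// Illustrative payload class; field names are assumptions for the example.
class FacilityEvent {
    public String orgUnit;
    public String value;
    void normalize() { if (value != null) value = value.trim(); }
}

// Sketch of the route described in the talk.
public class QueueToDhis2Route extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:facility-events")                             // listen on an Artemis queue
            .unmarshal().json(JsonLibrary.Jackson, FacilityEvent.class) // JSON payload -> Java class
            .process(exchange -> {                                     // small transformation step
                FacilityEvent event = exchange.getMessage().getBody(FacilityEvent.class);
                event.normalize();
                exchange.getMessage().setBody(event);
            })
            .marshal().json(JsonLibrary.Jackson)
            // Post to DHIS2 through the Camel DHIS2 component, reusing the
            // client bean ("dhis2Client") registered elsewhere in the application.
            .to("dhis2://post/resource?path=dataValueSets&client=#dhis2Client");
    }
}
```

Splitting, loops, and conditional flows mentioned above would slot into the same chain as additional DSL steps (`split()`, `loop()`, `choice()`).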
Okay — I know there are more, but you all customize it quite a bit. That's also why the integration itself allows you to customize it: out of the box it assumes the standard WHO AEFI metadata package, but you can rename data elements and attributes and so on. We do support MedDRA. MedDRA is the international standard for coding of reactions — if you have a fever above 38, or you have a skin rash, those all have different codes in MedDRA — so we support that. As part of the WHO AEFI package we have about 30 or 40 different reactions to vaccines, and they are all coded with MedDRA. We are still working on WHODrug — it's taking a bit more time than it should, but hopefully by maybe January or February we will have that working too. If you use the basic vaccination package, the vaccines in that list should be linked to WHODrug as well, but we're still negotiating with WHO's UMC about that, because it's a paid package, so we need to have some negotiations. We are also working with UMC on an API integration — again, it's taking a bit more time than it should, but we are working on it, so hopefully that will also be running at some point. In general they want us to verify the data, so what we have done is implement basically an email client on the side of the integration: all the new cases come in — it's XML, you can go through it, you can see the XML format here — then you verify the information and import it directly into VigiFlow, and it's done there. I've also linked a few more materials: the first is just a link to the GitHub repository for the integration itself, and we have some training material and use cases, so feel free to look at that later. Next is FHIR. We've been working with FHIR for many years now. We had this adapter that you probably heard about a few years ago,
which was great in itself, but it didn't really get used by anyone in any case at all. So we are taking a different approach, targeting specific parts of FHIR. For example, exchange of code lists or option sets — that's something very typical between instances. Organisation units are another one: you want to send an organisation unit, or exchange it between multiple instances of DHIS2, or with a FHIR server. That's the general approach going forward: we want to do more and more like that. Then we have the tracked entity instance, especially the patient — there's a Patient profile in FHIR that we want to support, giving you the minimum required for a patient profile; that's also something we have worked on. And also data value sets for aggregate data. A bit more specifically, we have been working with the mCSD standard. mCSD is one of the IHE profiles used in the OpenHIE architecture. Basically, neither the standard nor our integration is really ready yet, but it serves as an example, because in every case you'll probably want to adjust how things are done. What we do support, and what our example does: we pull out organisation units — everything at levels one and two, or the whole tree if you want — and convert that to mCSD. You can see the example here: this is the organisation unit, and you see it has a system identifier and also the code, so it can have multiple identifiers. How many of you are using FHIR today in a live system? Nothing in production — that's typically what you see: most countries are talking about it. So at the point where you have something you need help with, we are very happy to help. But what we've seen is a lot of non-working systems, or system-to-system integration where they want FHIR but none of the systems support FHIR. I think some vendors support the FHIR backend already, so this should become a thing. Again, I wanted
to show you a screenshot. This is Hawtio. It's a very simple UI, but it gives you a full overview of what Camel is doing. You can see the routes — this is the mCSD one I just talked about — and you can stop and start them. It gives you a very nice UI and a very nice overview, and it also allows you to pause things: if you have a timer running every five minutes and it's a particularly busy day, you can stop it and start it again later. The next one is another of these OpenHIE standards, all about sharing of value sets and code systems. What that basically maps to in our domain is option sets and options. We have a few different integrations here, and this is just one of them, but this one actually creates what's called an implementation guide, using what's called FSH. FSH — FHIR Shorthand — is a simple markup language for defining implementation guides, an abstraction over the older, much more verbose format. You can see what we have here: you define the code system — this is the ID, the title, the description and so on — then you just define all your categories, or dimensions, and you see they show up here, and that also gives you a full guide, as you can see at the top. So this is all generated. This is one type of population, right: you have migrant population, refugee, employee worker — and this is something that somebody on the FHIR side will understand, but it's all coming from DHIS2. One thing to mention: as you can see, we're not using the code field — we actually just use the UID from DHIS2 — and that is because you have to be a little bit careful there. Our code field in DHIS2 can contain spaces and all kinds of characters — even emojis, I think — and all kinds of strange
stuff. FHIR does not allow that. So when you're designing your codes for integration, you have to be a little more careful: basically follow the rules of a UID — don't start with a number, don't use spaces, and so on. I don't know how many people here actually design their own codes when they build a big metadata package — do you also put something in the code field, or just leave it blank? There is a standard for that, and following it is going to help you do integration later; it doesn't mean you must follow it exactly, but you should have some restrictions. So, these are just some links to more examples: there's material from the DHIS2 annual conference we had in the summer, and we also showed off the event hooks demo we did the other day. Then we also have a new, flashy website, which I will show very quickly. This is probably going to be the first place to go if you want to do anything FHIR-related with DHIS2. It has a lot of information about FHIR in general, but also how it relates to DHIS2: if you go down a little bit on the page you'll see "DHIS2 and FHIR", which explains the ways you can approach FHIR and how that works. Sorry, some of it is a bit late — it's coming soon, it is coming. In case you're interested in FHIR in general, have a look at that page, and if you have any questions there's a community of practice, and there's also integration@dhis2.org, which gives you an email straight to us — if you have any use cases, just email us directly. All the way down there are links to a bunch of FHIR examples, especially what I linked to already, but going forward there will be a lot more examples here that you can look into. Every time we make a new example, or even a product, around FHIR, we put it on this page; every time we make a slide deck, we
put it on this page. So, very quickly, something very relevant to integration that's not really part of the integration team — I also work on the platform team — and this is something I've been working on for a while now: it's called event hooks. Currently in DHIS2 we have something called the audit log, and most updates go into the audit log if you have enabled it. It is possible to externalize it and listen to it, but it was never meant for that — it was just meant to store audits; that was the whole point of it. So in 2.40 we are making a fully fledged event-hooks system. In this case, as you see, we're creating a webhook, but we will be supporting multiple targets: webhook is one, Kafka will be one, ActiveMQ will be one, and so on — and if you have a specific need you can tell us; we can add new target types quite easily, actually. Again, this format is not final, but you see in this case I'm listening to "metadata", which means everything related to metadata — I don't care if it's a create, update, or delete; I just want everything related to metadata. And if you go down here, this is the client: in this case it just receives a create notice. Here you could potentially take that payload, send it to your MFL or your other systems, and do an update there — in this case it just picks it up, but that's just an example. And you see what I'm listening to is the path "metadata"; the actual path can be, say, metadata, dot, organisationUnit, dot, the ID of the organisation unit — so I could potentially listen to changes to a single org unit if that's what I wanted. It can be very specific. You will also have field filtering on that: in this case you get the full payload, but you can be very specific — I just want the ID — and you'll be able to do it that way very soon. So metadata is the target for 2.40; if time allows, we might
be adding some data events as well — like "analytics started" and "analytics finished" — which can be very important in some cases: if you want a script that does something when analytics is done, that should be possible in 2.40. You're just listening, you notice the analytics run is done, and then you can start your other process and begin pulling out data, because analytics has finished running everything. One last slide before I hand it over to Chetra. The main way we see forward is that we will shift the focus a little bit. We have been focusing on integration with other systems, but we need to be a lot better at integration with ourselves, because more and more we see multiple DHIS2 instances in one country — you might have 3, 4, 5, 6 instances of DHIS2 — and we need to keep them in sync somehow, and that's getting more and more tricky, to be honest. That's something we will be working on. We will be starting with organisation units: there will be a sync product that supports org units, then maybe option sets, then maybe data sets and data elements, and you can go from there depending on what people want — so you can keep all of this stuff in sync, and we will clean things up and make them a bit nicer. Again, FHIR is something that's important for us; if you have a use case, we are very ready to listen. It's very important to know that we are not anti-FHIR just because we don't have many FHIR products — we are very much into FHIR. If you have a real FHIR system in a country that needs integrating, we are very happy to support that, as you can see from our existing FHIR integrations on the website. We will continue building the SDK, and at some point we'll probably add a FHIR layer to it — maybe our own custom FHIR component that helps you do the transformation. But of course FHIR itself is as generic as DHIS2: for example, a Patient can have nothing — no name, no date of birth,
nothing — and that is still a valid Patient. So FHIR itself also requires a profile. That's a little bit important, and people don't always understand it: a FHIR profile has to be configured before that integration can happen. We will also continue working to add capabilities to the platform itself. We are starting with event hooks, as I just showed you — that's going to be a game changer, I think — but we will also be doing an API gateway, maybe in 2.40 but probably 2.41. An API gateway is basically a way of defining your own endpoint and what you should get from that endpoint — you will get more information later — and then you can securely define the tokens to access it. The client doesn't see any of the underlying details; it just goes to the endpoint that you define — your customer, for example, just goes to the endpoint, and the data is pulled from somewhere else. This is of course important if you're doing something like an MPI, for example — a very typical use case. We also plan to update how we look at identifiers. As you know, we have this code field: everything has a code, but you don't know the context of that code — it just goes as a code — and you might have multiple systems that want different codes. That's something we are also looking into supporting, so you can have system-specific codes. That's coming too — it will be 2.41 or something. And also code lists: we have option sets, but option sets are not reusable. In FHIR, for example, you can create different — what they call — code systems; then you have what's called a value set, which is what you actually use: you pull out codes from these code systems and create a list of codes, and then you have a value set, which is exactly what you
care about. We're not going to take that model directly, but we are looking into how we can improve how we do code lists in general. And maybe the bigger announcement: there will be an integration academy coming up in March, in Rwanda. The dates are TBA, but it will be announced the way we usually announce the academies. It's coming up sometime mid-to-late March, probably. I hope to see some of you there — it will be a full week, so we'll have much more time to get into the details of the SDK, the Camel component, everything. And that's my part; the next part is Chetra's. Are there any questions for me before Chetra takes over? All right, Chetra, go ahead. Good morning, everyone. My name is Chetra, and I'm going to walk you through one of the latest integrations we have done using the tooling Morten just mentioned, especially the Camel component for DHIS2. Before moving to the slides, I'd like to mention my colleague Claude, who did most of the heavy lifting on this integration. He's not with us today; hopefully he's joining online. So: the DHIS2-RapidPro integration. The structure is similar to most of the integrations we have to do nowadays. We have two or more independently running systems, and each supports one or more transfer protocols — in this case RapidPro speaks HTTP and DHIS2 speaks HTTP — and each knows how to communicate with a known payload format. So when integrating RapidPro and DHIS2, we have to develop a middleware which communicates with both systems back and forth and exchanges these JSON payloads: it takes a JSON payload from one system, transforms it into the JSON structure known by the other system, and then sends it over the transfer protocol supported by the second system. The communication happens in both directions in this case, which I will explain in the next set of slides. And when
developing this middleware, we have used Camel, based on the DHIS2 Camel component, and in addition to that we are using DataSonnet, a scripting language that can be used to transform a payload from one format to another. DataSonnet supports JSON, XML, and CSV, and it also supports Java objects — if there is a Java object that you want to convert into XML or JSON, that is supported too; whether the object comes from serialized data or somewhere else, DataSonnet is capable of transforming it. This integration is, by the way, available on GitHub, obviously, if you want to try it out. So let me introduce you to RapidPro. I hope most of you have seen, on television — especially reality TV programs — where they ask you to vote for the participants: you have to type some kind of code, then a space, then the participant's ID, and send that message to whatever number they mention. That is the most basic thing we can implement on RapidPro: a messaging workflow that can be designed through the UI itself. Starting from the use case I just mentioned, you can build much more complex message flows involving tens or hundreds of questions. For instance — I'm not sure whether it's visible — if I take this message here, it asks whether you have anything else in your cluster. Once this flow has been initialized by one of the users — by sending the keyword specific to this particular RapidPro flow — it sends this SMS back to the user, and the user then gets the ability to respond yes, no, or something else. If the user responds yes, it continues with the flow and asks the next question; if the user responds no, it just sends a confirmation and ends the flow. So that's how you
build messaging workflows on RapidPro. And what RapidPro does is, while the user is going through this message hierarchy, it captures the user's responses and ultimately creates one large payload which includes all of them; we can then configure RapidPro to send that as a message to an external system, and RapidPro also saves all those responses into its own database by default. Let me go through the functional requirements of this integration. The first and mandatory requirement is synchronizing DHIS2 users and RapidPro contacts. DHIS2 has the concept of users, while RapidPro has the concept of contacts, and in order to receive a message successfully into RapidPro we need a contact predefined, associated with some kind of mobile number. By the way, RapidPro also supports other communication channels, like Facebook, but in this case we are only interested in SMS. So we need a valid contact in RapidPro with a valid phone number. For that, we poll DHIS2 for users with valid phone numbers and then populate RapidPro with a copy of each DHIS2 user. This is mainly done to identify the organisation units of the DHIS2 user, because the ultimate idea of this RapidPro integration is taking aggregate events from the users, in the form of SMS, and populating them back into DHIS2 as data value sets. As you know, when we send a data value set we need to include the organisation unit, so this is how we capture that requirement: we take users with valid phone numbers, we capture the organisation unit each user belongs to, and we save that user into RapidPro as a RapidPro contact, with the DHIS2 organisation unit saved on the contact as an additional parameter. The next requirement is broadcasting reminders. This is done when a data value
set is overdue: a component polls periodically for overdue data value sets and then sends a reminder SMS to all the RapidPro contacts, so that they are reminded to enter data. And the most important requirement is transferring aggregate data — transferring reports, as I mentioned. As I previously explained when describing how RapidPro works, we have two options here: when a flow is completed in RapidPro, we can configure RapidPro to immediately call a webhook and notify about the new event, or our middleware can poll RapidPro periodically, specifying the flow ID, to get the new events available on the RapidPro end. The middleware has been implemented to support both approaches, so you can start it configured to work with either model. What happens is this: when a user sends an SMS, it goes through the messaging flow, and if the flow is successful and RapidPro is configured to notify the middleware through a webhook, RapidPro immediately makes a webhook call to the middleware. At that point, the message goes through the Camel routes and DataSonnet transformations to convert the new event into the data value set payload. At the same time, since we have to plug the organisation unit into the data value set, the middleware calls RapidPro back with the contact ID, gets the organisation unit, and plugs it into the data value set payload, and finally calls DHIS2 to save it. The same applies for polling: instead of waiting for an incoming webhook, the middleware periodically calls RapidPro, takes all the new events, and follows the same procedure I described. While doing this integration, we identified some non-functional requirements. Some of them are common to any integration — for example, we need to be
reliable. That means if we capture an event on RapidPro, and a flow is successfully completed, whatever we captured should reliably land in the DHIS2 database; that is one of the things we have to guarantee. When it comes to webhooks it's a little bit hard to guarantee this reliability, and that's why we introduced the polling method: if something fails, it can re-poll for the same events. Other common non-functional requirements are security and maintainability. One of the most important non-functional requirements is that it should be fast. So far we have considered only a simple example where one user is sending messages to RapidPro, but there can be scenarios where hundreds of thousands of users send messages at the same time, so the middleware component should be fast enough to deliver those messages from source to destination with acceptable throughput and low latency. These integrations should also be extensible, so that we can add more components or more features. For instance, this entire integration has been successfully deployed in Uganda, and it's in production, but their requirement was a little bit different: instead of transferring directly from RapidPro to DHIS2, they have a homegrown solution in between. So at a high level the integration goes from RapidPro to the middleware, then to their homegrown system — which they mostly use for quality control — and that system then publishes messages from its database to DHIS2. To facilitate such requirements, the entire integration solution has to be extensible enough. And beyond adhering to the functional and non-functional requirements, we have provided tools for management, monitoring, and recovery. I think Morten already explained what Hawtio is
capable of. We provide built-in support for Hawtio, which is capable of stopping and pausing routes — really important if the downstream system is under maintenance or something. Let's say the DHIS2 system is offline for maintenance: you can simply log into Hawtio from the browser and stop all the Camel routes which carry data from RapidPro to DHIS2. That's really helpful — you don't have to log into the server and stop the middleware; you can do everything in the browser. It also supports viewing logs, and if there's a failure in a route, it shows how many failures there were and what the cause of each failure was. You can also analyze the latency of a route: if one route is taking minutes to complete, you can log in, identify which route is taking so much time, and take the necessary steps to fix it. The other tool we provide is H2: we have an H2 database which writes to disk, and we use it as the dead-letter store. If a message fails, we write that message to H2, so you can later log into H2 and inspect what's going on — and once you've figured it out, you can manually replay the message, so you don't lose any messages. H2 also saves success logs and failure logs — basically all the logs produced in the RapidPro route are saved to H2 — so it's easy to analyze, and it can be used for debugging purposes, because it records the RapidPro contact who sent the message along with any error. Yeah — so that's all about the RapidPro integration. Over to you, Morten. Any questions?
No questions? Thank you. There's a YouTube video for that: if you search for DHIS2 RapidPro, there's a full end-to-end demo of it.

Good morning everyone, I'm Morten. You can call me Morten senior, since I'm one year older than the other Morten, or Morten security, because I'm also working with security, or just Morten plus my surname. I'm a Java backend developer and I've worked as a contractor for the University of Oslo since 2019. I work on the platform backend team, and I also work on the security team with Bob and Michael and Austin and Jamie and Finn. My background in computer security is mostly from bank and finance; I worked at Norway's biggest bank for several years, and I picked up the security mindset from when I was very young. One of the first things we learned there was that if you plug the computer with the confidential information into the wall, you lose your job the same day. It's very inconvenient to have to take the information out on a hard drive in a suitcase strapped to your hand, but it's very secure. That's what computer security is very much about: convenience versus security. This is my first time here; I was invited partly because I lived in the Philippines for four years, so I'm pretty close to this region. Feel free to reach out if you have any questions; I'm in the +8 time zone. This week has been very inspiring; it's very nice to see people actually using all this stuff, since normally I'm just sitting in my home office working on it, and I'm very grateful to be invited to meet you all.

So I'm going to talk a little bit about OpenID Connect: what it is and what we can do with it. OpenID Connect is an industry standard protocol for authentication and identity. It provides what you've probably heard of as single sign-on. What it basically does is take away the need to manage users separately for multiple instances: if you have a lot of instances with the same users, you can keep them in one central place instead of having to deal with each instance on its own. That's the simple idea behind what we're going to talk about.

This is very often used in countries and organizations, even where they already have existing authentication infrastructure. For example, in Norway the government works together with the banks to maintain an OpenID Connect authentication service, also for commercial actors that want secure, reliable authentication. The Norwegian healthcare system also has its own OpenID Connect compatible authentication service, and that's the one they're actually using for their DHIS2 instances; in fact, when the Norwegian healthcare system implemented the corona tracker system, they were among the first to actually start using OpenID Connect with DHIS2. You've probably seen this when logging in with Google or logging in with Facebook; it's the same technology. It's very common; the standard has been around since 2014, and as I said, it's used all over.

There are also some challenges to using OpenID Connect as a central authentication system, primarily role management and authorization. DHIS2 has a very complicated role system, and OpenID Connect doesn't really support authorization; it's mainly an identity and authentication system. The big organizations have made their own custom solutions to manage, synchronize, and deal with this, so there are many custom solutions out there. Synchronizing users between instances is still necessary, because you have to maintain the roles and authorization, and session synchronization between instances is a challenge as well.

So what we have been trying to do for a while now is to set up Keycloak, a 100% open source OpenID Connect server that is very enterprise-ready. Red Hat is currently maintaining it, and they also offer a supported version. It's been around since 2014, it's very mature, it's very easy to set up, it has a modern user interface, and it's very easy to configure. I'm not going to show you a demo of it today, we did that on Tuesday, but it's a very simple setup; you can typically do it in a couple of hours, just set it up and get started. If you want to run your own OpenID Connect service, Keycloak is a solid choice.

This is basically what you see when you have an OpenID Connect login: a "Sign in with Keycloak" button, though it can say anything, "Sign in with" whoever. Instead of logging in with a username and password on the website, you get redirected to the Keycloak login page, and when you sign in there, it redirects you back to DHIS2. If you want to read more about this and see how other people are using it, there are some links here. That's it; any questions? If you have any, feel free to email me. If your system supports the OpenID Connect protocol, you can start using it: just configure the instance and you're ready to go. DHIS2 supports OpenID Connect, as I said, so feel free to email me if you have any questions about this.

Thanks everyone, that's it for our session today. I just want to say we did have a talk exactly about this on Tuesday that's quite a bit more technical: it has examples of the SDK usage, samples of the Camel component usage, and so on, as demos. If you go to YouTube and search for DHIS2, you should be able to find it quite easily. I'm not going to show it now, but if you go to the DHIS2 channel you'll find the 90-minute talk there, with actual demos of all the stuff we did, including real-time demos of the event flows, the RapidPro integration, the Camel integrations, and so on. So feel free to look at our channel; we are trying to put up all the content we've been doing this week. This was the less technical session and we are a bit short on time, which is fine, but do go to that one if you're interested in Java coding and those kinds of things. We have more real demos there showing the actual things that are functioning, including the Keycloak integration we've been working on, which basically transfers all the OpenID Connect users from one instance into Keycloak.

Other than that, I think we're okay. Are there any last questions for us, for the integration team as a whole, or about anything we'll be working on, any kind of issues? Then I think we are okay for now. Feel free again to contact us; we are very open, so don't be shy. If you have any kind of integration questions, please just send them to us and we will get back to you. We don't have a 24-hour response time like the security team, but we will get back to you. We also have a weekly community call: if you are interested in integrations in general, feel free to join us, and if you want off that list again, just send a request. For the current invite, the timing is probably the best compromise: it's about 5pm here in Asia now, which is around 11 in Norway and 10 in Ireland, depending on the time of year. Again, if you want to be part of those weekly community calls, you're welcome; we have a lot of HISP people there, from Africa, sometimes from Asia, and it's very open, so if you want to join, just tell us. That's it from me. Any last questions? Questions from the people online? That's it then; again, thanks a lot, and have a good day. Relax, relax. More questions? Was it too complicated?
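The redirect login Morten described is the start of the OpenID Connect authorization code flow: the application sends the browser to the provider's authorize endpoint instead of showing its own password form. Here is a minimal sketch of building that request URL; the endpoint, client id, and redirect URI below are made-up placeholders, not real DHIS2 or Keycloak values.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch of step one of the OpenID Connect authorization code flow:
// redirect the browser to the provider's authorize endpoint with the
// standard query parameters. All concrete values here are placeholders.
public class OidcAuthorizeUrl {
    public static String build(String authorizeEndpoint, String clientId,
                               String redirectUri, String state) {
        return authorizeEndpoint
                + "?response_type=code"                    // ask for an authorization code
                + "&scope=" + enc("openid email profile")  // "openid" marks this as OIDC
                + "&client_id=" + enc(clientId)
                + "&redirect_uri=" + enc(redirectUri)      // where the provider sends us back
                + "&state=" + enc(state);                  // echoed back to prevent CSRF
    }

    private static String enc(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }
}
```

After the user authenticates at the provider, the browser comes back to the redirect URI with a one-time code, which the server exchanges for tokens; that second step is omitted here.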
Now we're not live anymore, so we can speak more freely. Again, I showed you the email address, so feel free to send emails there; now you know my email address. Every integration case is going to be different and is going to require a different approach. In Cambodia there was a lot going on and it was a complicated integration, but we made it work, and there will be many different countries with different conditions where it will probably be complicated too. There's very rarely one size fits all in integration; it gets customized locally, and that's very common.

As Morten said, it is very useful, but for certain use cases. If you have one DHIS2, you probably don't need it. Part of the idea with OpenID Connect is that you still have to synchronize the users between the DHIS2 instances, and then you use OpenID Connect for the authentication specifically. So whenever somebody calls you saying "oh, I forgot my password", you don't have to go into three different instances; you have one unified place to handle that, and the password isn't even stored in those places anymore, only in the OpenID Connect provider. That's great, because DHIS2 doesn't expose your password when you do an export of the user: if you're synchronizing users from one place to another, you cannot bring the password along. So having OpenID Connect, or Keycloak in this case, store the password makes it easy to synchronize users across multiple places, and then it works for every single instance. Of course, you still configure your user with different access rights in every instance, which is very important, because one might be Tracker, one might be HMIS, one might be something else like COVID, and you don't want the same user to have access to the same things everywhere. That's still up to you.

For OpenID Connect questions, let me try to share this with you. It sometimes depends on the question, if it's a little bit more sensitive, involving credentials and so on, but in general starting with the Community of Practice is probably better. Let me explain about the community: if you search for it, this is the first link you will come to. Here you can create your own user account, and there are a lot of topics you can go through; we even have a special place for integrations, but if you post anywhere it makes sense, that's fine. I think there are already some OpenID questions here; you can see people asking, and this one is actually solved. Note that this one is about OpenID, and don't confuse OpenID with OpenID Connect: make sure your system supports OpenID Connect. If you've been using OAuth, OpenID and OAuth basically merged together into OpenID Connect; that's what happened, a simplification, and OpenID Connect is the new one. So feel free to write here; Morten is very happy to answer. Anyway, the point is that this is a good place to go for any kind of additional questions in general, for integration and also for stuff like the Keycloak setup; it's your go-to place for those kinds of questions. I hope you all have an account, because this should be your starting point whenever you have an issue, even before you create a Jira issue; we always say create a Jira issue, but feel free to start in the CoP. There's a very nice guy called Dustin who watches over most of these posts, and whenever we don't reply because we are lazy, he will remind us, and then we reply. So feel free to do that; I think it's probably the best place to start. You can also email us directly, but maybe it's someone's holiday, so that doesn't always work.

On security, a couple of things: general security questions you can ask on the CoP, but if you think something is a bug or something critical, send that to security@dhis2.org; that's the place for that. Any more questions about any of the Camel stuff or any of the things we'll be doing? Feel free, and again, feel free to come up to us later; we'll be here, or reach us by email or on the Community of Practice.

Who knows what Camel is? This is not the animal kind of camel. Camel is basically the integration framework, implementing the enterprise integration patterns, that we use; it goes all the way back to the slides I had before. I tried to keep this non-technical, so I haven't put in much detail, but Camel is what allows us to do this synchronization stuff, and that's what the setup is built around. It lets us do all kinds of things using predefined components of the Camel ecosystem, which just makes it a lot easier for us to write integrations. In the links I've been sharing, if you go to this one, there will be examples of org unit sync, Keycloak sync, and a few others, and the one above it uses Camel with the data store that Shasha was talking about and shows how to do that. So please look at the examples and try them yourself if you want to see what you can do.

Going forward, the integration team in general will try to create more and more products, meaning ready-made products that are ready to be implemented, like the RapidPro one. It's all based on what I showed you today: the SDK and the Camel components. So we will be providing products, but we will also be providing the underlying layer so you can build things yourself. As I said, every use case will be different, so it's a very good base for local customization.

[Audience comment:] As we discussed yesterday, we know our health system is fragmented, with many vertical systems, and most of the time we have been integrating by having technical people do one-off script mappings; that has been the approach, and there was no standard way of doing the integration. You saw in Indonesia it was a different procedure every time; now they are trying to standardize, so that anyone who wants to integrate can start from there and see what is available rather than starting to code. Please don't just tell your technical people "I want to integrate that": developers will create their own solution every time, and that has been our biggest challenge. Always check first what tools exist; people are creating many tools, so try to actually use them and your development time will shrink. What this does for us is that, with this tool, instead of starting from scratch, which could take a hundred days, it can be like 20 minutes, and that's also linked to our budget. So make sure to have a look here first, and write it up so people know what already exists; let the team spend their time creating what's missing, so we reduce the time our developers and other people spend. That was the whole idea.

Seeing that we have a little time, I just want to show you one of the demos that's available in the slides: a very typical use case where you have one or many different DHIS2 instances, maybe one we call the master, which is the main source of your org units. I've started up the integration here, which is available in the demo; I'm not going to go through the code, I just want to show you that it's running. They might look the same, sorry, it's a bit late, but it's actually demo one and demo two: two different instances of DHIS2. I have configured the source to be demo one and the target to be demo two; of course, you can also have multiple targets, but this is just an example of a typical thing it can do. Let's go here, create a new org unit, call it "fococ", set the opening date, maybe open today, and then wait a bit; I don't remember the timing now. Oh, I made a spelling mistake, it should be "fococ ABC", so I go back and save it; this is my source. Now go to the target. It's set to 10 seconds, and you can set a much shorter interval too. And now it's been updated: two completely different instances of DHIS2, but a very typical integration case where you are keeping things in sync. There's a lot more to it than that, this is just an example, but we will be making integration products available that help with this: keeping org units in sync across multiple instances given some parameters, only this group or that group and so on, and the same with option sets, the building blocks of DHIS2. That's something we will focus on over the next six months to a year, while we expand our SDK and our Camel components and models. For any kind of integration need, please write us an email and describe your particular system. What often happens is that the requests come from the big donors, who integrate with RapidPro and inspire things at a country level, so we don't hear all the country use cases; we mostly cover use cases from the global level. So please write us an email with your needs; that's what we want to collect and handle.

Since we have five more minutes, I can also show you live what is required to get Keycloak working, because this is what confuses people a bit sometimes. What is required to have Keycloak working across multiple instances is that you have the same user, with the same username, in all the instances. In this case we have selected a user; if we click on that one, you see he has the email address, the name and so on, "external authentication only" is checked, and the mapping value is just that user's email; that part is actually not that important, you can use the username also. When I click "external only", the password field is removed, so this user does not have any password. And in demo 4, if you look, you will see the same user again. You can synchronize this in whatever way you want, but you do have to synchronize it. What is really important is that you have the mapping value, "external only", and the username; those things matter, and you can have different display names.

What does this allow me to do? Well, I log out here, and I log out here, so now we are logged out of both instances. Going back to my user in Keycloak, you see I have the test user again, exactly the same user as in DHIS2. Now the password is controlled in Keycloak, so I can reset it there, set something really secure, and it does not affect DHIS2, because the password isn't stored there. So I go here and click "Sign in with Keycloak"; I know this is going to be a bit confusing because my browser has auto-completed the admin credentials, but let's pretend that's not there, it's not important. I click "Sign in with Keycloak", and if you look at the URL, we are now on our Keycloak instance. The user was called "testuser" with password "ABC123", and we log in. Now we are logged in to DHIS2 as the test user, with all the same permissions as the test user. Of course, in this other instance we are not logged in yet; again, this doesn't affect the other users, they're untouched. But now, because it's the same browser and the same Keycloak session, I just click "Sign in with Keycloak" and I'm in: no password, no anything. If you have multiple instances of DHIS2, it's a very nice way of handling it: if they call you and say "oh, I forgot my password", you don't have to go into those instances, you just go to Keycloak and change the password.

Right now this is all about authentication. Potentially there's more to be done, a much deeper integration, and we'd have to look into a lot of things, but potentially. You are still defining all your users and everything in DHIS2: this particular user can only view data and not create data, and so on; all of that stays in DHIS2, including what data they can download. The only thing Keycloak gives you is that, with multiple different DHIS2 instances, you can log into all of them as exactly the same user: in one DHIS2 you can view, in another you can use the HIV module, but instead of a different username and password for HIV and for everything else, which confuses the end user ("which is my password, this is HIV1", all those things), you have one. And maybe the usefulness is even bigger if you also have other systems, like OpenMRS or anything else that supports OpenID Connect: you can have the same username in multiple systems, not only DHIS2, and they all link back to the same Keycloak, so when users forget their password they go there and reset it, and it's reset everywhere automatically. It might be that they have an online system, other systems that are not DHIS2, and then DHIS2 for aggregate data; at the password level, one person has to log on to these multiple systems with multiple usernames and passwords, while DHIS2 also has password requirements, one capital letter, one digit and so on. It's much better to have this in place, because it helps the end user: they remember one password and can log into all of it.

Any other questions before we finish? If you have any thoughts, we'll take a tea break and be back after; if you have questions then, come find us. Thank you guys, thanks so much, bye bye.
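The org-unit sync demo shown earlier boils down to a periodic diff between the source and target instances. Here is a minimal sketch of that diff step, with made-up names and plain maps (stable id to name) standing in for the DHIS2 metadata API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the sync demo: given the org units of a source and a target
// instance, work out which entries the target is missing or has stale,
// so a periodic job (every 10 seconds in the demo) can push just those.
public class OrgUnitDiff {
    public static List<String> missingOrStaleInTarget(Map<String, String> source,
                                                      Map<String, String> target) {
        List<String> toPush = new ArrayList<>();
        for (Map.Entry<String, String> e : source.entrySet()) {
            if (!e.getValue().equals(target.get(e.getKey()))) {
                toPush.add(e.getKey()); // create or update this org unit in the target
            }
        }
        return toPush;
    }
}
```

In the demo, renaming "fococ" to "fococ ABC" in the source makes exactly one entry differ, so only that org unit is pushed on the next cycle.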
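The Keycloak demo above rests on one idea: each instance keeps its own user record, marked external-auth-only with no local password, and an incoming OpenID Connect login is matched to that record via a mapping claim (the email here, though the username works too). A minimal sketch with made-up types, not DHIS2 internals:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of the external-auth mapping from the demo: the local user
// record has no password field at all; authentication is delegated to
// the provider, and the verified claim is looked up locally.
public class ExternalAuthDirectory {

    public static class User {
        final String username;
        final String mappingValue; // e.g. the email claim from the ID token
        User(String username, String mappingValue) {
            this.username = username;
            this.mappingValue = mappingValue;
        }
        public String username() { return username; }
    }

    private final Map<String, User> byMappingValue = new HashMap<>();

    /** Registers an external-auth-only user; note: no password is stored. */
    public void add(String username, String mappingValue) {
        byMappingValue.put(mappingValue, new User(username, mappingValue));
    }

    /** Resolves a claim from an already-verified ID token to the local user. */
    public Optional<User> resolve(String claim) {
        return Optional.ofNullable(byMappingValue.get(claim));
    }
}
```

Because every instance maps the same claim to its own copy of the user, one Keycloak login works everywhere, while each instance still enforces its own roles and access rights.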