So they can hear me on Zoom? Loud and clear. Okay, great. Hi everyone. We don't have a ton of people in the room here, and there are a number of people online as well, so we're going to keep it kind of informal and casual. We'll talk about containers today — DHIS2 in containers and in more high-availability environments. But think about your use cases and what you would like to discuss, because we're a pretty small group, so we can talk about things a little more informally. We do have a couple of presentations today, and we have Felipe joining us from Chile online. I'm not sure what time it is there — Felipe, what time is it over there? Seven in the morning? Thank you for getting up early for us. Good morning.

Yeah, so I'll just give a quick intro. Probably everybody joining this session is already fairly familiar with what we mean when we say containers, but I threw together a few slides for those who aren't. There's a lot more that could have been added, so this is very high level and probably some details are missing, but when I say containers, I mean a lightweight way to virtualize, basically, a process on a Linux machine. It's lighter weight than a full virtual machine, and LXC is the basis for a lot of the systems that we are familiar with, such as Docker and others — it's basically a kernel-level way to isolate different systems within a Linux machine. We've got a few more people joining here, which is great. And Docker comes up quite a bit, because that's kind of the hip way to do containers — it's been around for a little while now.
And there are a number of other tools for building, running and managing containers. Docker is one of those; LXD, which is by Canonical, is another; and Podman, which is by Red Hat, I believe, is another. Those are all more or less equivalent — they have some differences, but basically they're tools for building and deploying those LXC-style containers. And who's Docker by? Docker is by Docker — Docker is the organization. There are a number of details in here as well; there's been an evolution for a lot of these tools over time. The main difference I would say between Docker and LXD — and maybe Bob can back me up on this or add some nuance — is that Docker is focused on virtualizing an application, whereas LXD is more about lightweight virtual machines. I've read that description online a few times; you can use them both for both to some extent, but that's kind of the ethos each is approaching things with. Would you agree with that, more or less? Yeah. Exactly. And Podman is the more recent of those three. It's billed as an alternative to Docker, so it's basically a drop-in replacement, more or less, with obviously some differences and a different organization running it.

Then I have container orchestration on the right side here, and I just listed a couple — again, there are many others. Docker Compose is fairly simple, but it's a way to describe, in a single file, a setup or topology of multiple containers that then get deployed and connected automatically — in this case, in Docker.
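As a rough sketch of what that single file looks like — the service names, images and ports here are hypothetical, not an official topology:

```yaml
# docker-compose.yml — hypothetical two-service topology
services:
  web:
    image: tomcat:9-jre11        # the application container
    ports:
      - "8080:8080"              # expose Tomcat to the host
    depends_on:
      - db
  db:
    image: postgres:15           # the database container
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # persist data outside the container
volumes:
  db-data:
```

Running `docker compose up` then starts both containers on a shared private network, where `web` can reach the database simply by the hostname `db`.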
And so this is what we talk about when we say orchestration: you have more than one container working together to provide the full service of an application, or to solve a use case. You can then take that to the next level with much more sophisticated ways to do that orchestration. Docker Compose only works on a single machine, but if you want to spread things out over multiple machines — for redundancy and scalability and all these different things — you have to start doing more sophisticated work around scheduling: when containers start up and stop, how you distribute those containers over all of the different nodes that are talking to each other, the latency between the nodes, scaling services up and down based on the demand and load coming in. Fault tolerance is a thing too — if errors start happening, how can you roll things back, how can you do upgrades in a seamless way? There are lots of things that container orchestration can provide for you, and there are tools that do that. One of those is Kubernetes; another is HashiCorp Nomad — HashiCorp has a few open-source tools for this. It's a little bit different from Kubernetes, but it has a similar approach in terms of managing services spread across different nodes; in that case it doesn't actually have to be containers. There are many other examples of these, but Kubernetes is by far the most well known. So I just wanted to give a very high-level intro to these concepts for those of you who aren't familiar. I wrote this slide up in a few minutes, so there's probably a lot more I could expand on or go into if we wanted to deep dive, and there are others who are much more expert at this than I am, who could probably correct half the things I have on this slide already.
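To make the scheduling and scaling ideas concrete, here is a minimal sketch of a Kubernetes Deployment — the names and image are made up for illustration:

```yaml
# deployment.yaml — hypothetical Deployment with three replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # the scheduler keeps three copies running across nodes
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: tomcat:9-jre11
          ports:
            - containerPort: 8080
```

If a node dies or a container crashes, Kubernetes restarts or reschedules pods until the declared three replicas are running again — that's the fault-tolerance story in a nutshell.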
But just to start off: does anybody have any initial questions, things to add, or things you think are missing from this very high-level "what are containers" overview? Online, feel free to speak up as well.

Yeah. So, for those online, the question was: on the right side here, if you were to set up a horizontally scaled deployment of DHIS2, are those the tools that you would use? And the answer is, you can use those tools for that, for sure. One thing is that Docker is, again, focused on application-level encapsulation. You have basically one process running in one Docker container, and if that process is DHIS2 running in Tomcat, for example, and you want to have two Tomcats, you need two different Docker containers. Once you have more than one Docker container, you need the things on the right — something to orchestrate those together — and your DHIS2 deployment now becomes two or three or ten things working together. They have to know how to talk to each other; you have to map network and disk and all these different things. Usually all of these have a concept of a private network for those services to talk to each other, and then certain ports and certain network adapters to talk to the outside world. So you usually have some sort of proxy management, where the requests coming in are routed to the services internally and the responses are routed back. All of that happens on the right here; the left is much lower level, where you're just encapsulating the individual process or service. Docker Compose is only on a single node, yes. They used to have Docker Swarm — it still exists, but it's been kind of deprecated.
Swarm was about basically a swarm of nodes that you could roll things out over like that. I don't believe you can do Docker Compose across nodes — I don't know if anybody has experience with that, but I don't think it's currently supported. I'm sure Docker has plans to move into that space again, or to expand on it; there's a whole bunch of history there too — I think Docker Swarm got sold to another company, I don't know, it's a whole thing. Yes. So these ones at the top here — Docker Compose is single-node only. There's Kubernetes and Nomad, and there are others as well. What's another one? Mesos, I think — Apache Mesos. There are several ways you can spread things out over multiple servers. Okay, any other questions or comments? I'm sure when Felipe gives his presentation we'll learn a bit more about the right side of this.

Okay, and this one I'll actually turn over to Bob at some point to take a look at, but I wanted to talk about a few of the things that we, as the UiO/DHIS2 team, do related to containers — there's more than this, but this is a starting point. One of those is the server tools that Bob and Tito and the server support team have put together, basically to automate and standardize the way you can deploy a secure, performant and well-designed set of services to run DHIS2, and that uses LXD and containers under the hood. It's basically containers for running DHIS2 and the different services associated with it, with some isolation between those services and from the outside environment. I'm sure Bob could expand quite a bit on that, and there are lots more details — you can see all of the Ansible playbooks that run these different containers at that GitHub link.
Bob, do you want to add anything to that, just to start off? No? Okay. Another one: we produce official Docker images with all of our releases, all of our builds of DHIS2. DHIS2 version 40 just came out last month, and there's an image on Docker Hub. You can go to hub.docker.com, find the dhis2 organization and the core repository, and you can see official builds of our DHIS2 releases. We also publish development builds and canary builds there — PRs are built and published there too — but those are alongside the official releases packaged up as Docker images. We don't yet recommend those for production; we've been trying to be a little conservative there, because there are things we want to make sure are in place before we strongly recommend them for any production implementation. There are people who have already tried this and are maybe using Docker in production. So we're working on things like security hardening — I know this is something that Bob and Michael, who's in the back, and Phil and I have had a few conversations about, and we'd like to do more of it. Because we're producing these Docker images, but they're kind of frozen in time, and as security issues arise in some of the underlying images the containers are based on, we want a mechanism for updating even older versions of DHIS2 in response, and otherwise just optimizing those images for production use — performance optimization, security optimization, lots of things like that. And this is part of an overall push towards becoming a little more native in the container world.
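For reference, pulling one of those images looks roughly like this — the exact tag is an assumption, so check the dhis2/core repository on hub.docker.com for the tags that actually exist:

```shell
# Pull an official DHIS2 image from Docker Hub and run it locally
docker pull dhis2/core:2.40.0
docker run --rm -p 8080:8080 dhis2/core:2.40.0
```

Note that a DHIS2 instance also needs a database and configuration, so on its own this container won't be fully functional — this is just to show where the images live and how they're fetched.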
This includes things like configuration that lives on the file system, which is not exactly the paradigm that a lot of containerized applications use, so there are ways we could improve how DHIS2 behaves in containerized environments, and we're doing more of that over time. Phil, Bob, Michael — do you want to talk at all about the DHIS2 Docker images? Maybe you have a couple slides later on, I'm not sure. Yeah, I think that's the next one. Okay. So if you have any questions about those, feel free to ask us about the Docker images, and you'll probably be seeing more attention to, and support for, different types of deployment in the future.

And then the last one here is d2 cluster — and, as it says here, not something anybody should be using in production at all, but it's basically a wrapper around Docker Compose. You can also find a standard Docker Compose file for DHIS2 there. It's just a command-line tool for spinning instances up and down and seeding databases in a development environment, so it might be something some people are interested in. We'll also hear about another tool in the presentation from ICT today that I think is quite interesting, in this space of Docker and DHIS2 and development environments.

So, do you want to talk a little bit about infrastructure management? Can you hear me? Okay. Yeah, this slide is just to give a bit of insight into how this affects us, as well as implementations, and you guys too. In our internal infrastructure, we have to host a lot of instances. We use them for demos — you're all probably familiar with the play server, and you might be familiar with the demos server. But in addition to that we have many other servers, or virtual machines, with many instances on them, which we use for testing and for training.
We use them for the online academies — the instances behind those. We use them for development, including different aspects like package development — actual metadata development. All these things have instances that we host behind them, with various requirements: some very close to production-like, others quite different. And because we've got multiple teams using these, we want to make them easy to manage. We've been using Ansible and various other systems and, as I said, running these on virtual machines. But over the last couple of years we've taken a step to try to shift that and take more advantage of containers, in order to support the requirements we have for our teams internally. These are the things we want: a self-service system where people can spin up instances when they need them, temporarily, for many of these activities. And when it's self-service, it needs to be quite intuitive in terms of the interface — start something up, stop it as you need, and so on. You also want it to be fast and responsive — get things running and then get rid of them. I put there "it's built for our workflows", and the point is that, as I say, they're sometimes quite different from production workflows, but there are a lot of overlaps as well — some things are very relevant to production support. We want things to be low maintenance, of course: we want them to take care of themselves, to auto-scale if we need more resources, to auto-restart if issues bring things down, and to clean up — if people forget about instances they don't need anymore, we want them removed from our resource pool, and so on. There are various things like this, and we do want them to be secure — we don't want to compromise our own infrastructure.
So, that was just to introduce that we have all of these same problems internally. We've been building a system — there's an image there of how we're aiming for it to look — and we've got an interface which implements some of this, plus a fairly complete back end based on Kubernetes that manages our instances. We're starting to shift over to using those more and more in our environment. I can demo some of this at some point if necessary. Okay, thanks Phil.

Yeah, as Phil said, this is something we're starting to use a bit more internally, and it also helps us work on developing, like I said, more container-native and more scalable DHIS2 in general. A lot of these tools could be of interest to the community as well, so we want to look into how we can share them, and how we can build them in a way that's not specific to just our use cases. We know there are a lot of other organizations and implementations that build or run multiple instances of DHIS2 — even within a single country you probably want a development, a staging, a test and a production instance, and you want to manage them, manage upgrades, manage databases, and all of these different things. So having a streamlined way to manage those could be quite helpful, and that's something we want to explore. I think this is my last slide here. Yeah, these are for after, but to start off I'll open it up for maybe five minutes of questions, and then ask Felipe to present on the work they've been doing in Chile around scalable and containerized DHIS2. Any questions or comments to start us off with what we've said so far? Sure — just waiting on the microphone. I have a question, when using Docker containers.
How about customization of DHIS2 — if you want to do a customization at the code level, is that possible? I don't think it is, but is it possible to use Docker for that? So, you mean if you customize the DHIS2 WAR file, or instance? In that case, you would not use the official images that we publish, but you can use basically the same mechanism — Docker images are an output of the DHIS2 build. So even if you modify the code, you run your Maven build commands — there are a few additional commands you need to run — and you can get a Docker image out of that. That's another way to generate a Docker image, and you can even customize some of the templates and things like that. It's using Jib to generate those Docker images, so it's part of the build process, out of the box. Basically there's a Maven command that generates a WAR file, and there's another Maven command that just generates a Docker image, and your output is a Docker image you can use.

That being said, another direction we're moving in, to some extent, is making things more modular, so you don't need to fork and rebuild a lot of the pieces of the core. That's something we've been working on for quite some time with modularizing applications — I'll talk more about how we can try to support more service-based architectures for different things within DHIS2. So hopefully, in the long run, moving away from needing to fork and then merge upstream changes and things like that. But that's a bit of a bigger discussion. Thank you. Yeah. And my colleagues in the audience, feel free to jump in if you have any comments on any of these.

Yeah. So I guess I'm just trying to figure out: if I wanted to run DHIS2 on my Mac laptop, is this a good way to do it? Is it going to be a lot slower?
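As a rough sketch of that build flow — the exact goals and flags can differ between DHIS2 versions, and the module path and image name below are assumptions, so treat these commands as illustrative rather than authoritative:

```shell
# Build the WAR file from a (possibly modified) DHIS2 source tree
mvn clean package

# Build a local Docker image with Jib instead of (or in addition to) the WAR.
# jib:dockerBuild is the standard Jib Maven goal for building to the local daemon.
mvn jib:dockerBuild -Djib.to.image=my-org/dhis2-custom:dev
```

The resulting image can then be run, tagged and pushed like any other Docker image.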
I assume it's easier to set up — maybe this is off topic, but what are the pros and cons of that? Tell me if it's off topic and I can ask you about it later. So, I mean, it's not entirely off topic, but it is a little different from a production use case. One advantage of doing that, versus running it natively, is that you have all of the dependencies bundled into a container. Instead of needing a local Tomcat and a local Postgres — and if you want to test on multiple versions of Tomcat, you need multiple versions installed locally, or you run VMs or something — you can put all of that into a single container. That is one of the advantages of containerization, and it's very helpful in the development process too, being able to encapsulate the entire development tool chain in one place. So there definitely is an advantage, and d2 cluster was designed for that — it's used by a lot of our team, as well as other people, to spin up DHIS2 instances for testing pretty easily. The only things you need for it are Docker and npm — well, and Node — and then you can run an instance very quickly. But it's also just Docker Compose under the hood, so you could just use Docker Compose directly. So it's definitely useful in that sense. And that's one of the challenges you run into a lot when you're trying to mutate the configuration or state of any server or system: it gets complicated, whereas Docker and Docker Compose are declarative — you know what is there, you can modify it and deploy a new copy and take down the old one, instead of always mutating state and keeping track of all the dependencies going on in your system. It shouldn't be that much slower, either.
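For reference, spinning up a local instance with d2 cluster looks roughly like this — the version number is an example, and the exact flags may vary between CLI releases:

```shell
# Install the DHIS2 CLI (requires Node/npm and Docker)
npm install -g @dhis2/cli

# Start a local DHIS2 instance, seeding it with a demo database
d2 cluster up 2.40.0 --seed

# ...and tear it down again when you're done
d2 cluster down 2.40.0
```

Under the hood this is just driving a Docker Compose setup, as described above.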
There's a little bit of overhead to containerization, but especially for development, or workloads like that, you shouldn't find any significant performance implication.

I just wanted to comment on the previous question, about people rolling their own images, because there's a bit of a tension there — there are advantages on both sides. Yes. One advantage of rolling your own is that you can package your own customizations. The other thing is the security concern: if there's a sudden vulnerability in your SSL library, or whatever it might be, and you have the ability to roll your own, you can act on it at the implementation level. The downside is that part of the benefit of the official image is that all those combinations of Java version and Tomcat and what have you have been tested. When you roll your own, you lose that benefit and you need to do much more of that support yourself. Yeah — I completely agree with what Bob just said. When I said we're trying to move away from that, I didn't mean rolling your own Docker images, but forking the core and maintaining separate code bases. Absolutely, there can be an advantage to rolling your own Docker image, and it's a fairly light wrapper around DHIS2: you have a WAR file that's then exploded, probably, in the image itself, plus a few of your dependencies, and you can roll that yourself if you'd like. So, good point. We had one more question and then we'll move to Felipe.

Thank you. So, my question is about where you have multiple instances, multiple images. You want to run multiple different countries, and you want to set this up with Docker, or a Docker image — which approach is best for that?
You're running the normal DHIS2 with all the metadata and everything around it — you have the organisation units and all the structures around that — but you want a much quicker way of doing deployments that is production-grade. So you have maybe two or three different countries running separately, but you want them to be able to quickly spin up their own instances of DHIS2 on separate servers. Would it be better to use the Docker images? Currently I've been using d2 cluster, but that's still in development — it's not for production. So, which would be best?

There are a couple of angles to answering that. One is — you mentioned metadata configuration and org units and things like that. What we're talking about here is the processing side, the software itself, not the data. The data would live in a database, and the database might be running in Docker, or in a container, but your actual database data would still live on a disk somewhere — you'd still need a logical database, a physical data disk somewhere, holding it. That can use a Docker volume; there are a few ways to mount disk space into the containers that would run it. There are some tools we're working on in the instance manager Phil demonstrated — managing databases, backing them up, starting a new instance from an existing backup, lots of things like that — which could be relevant, but Docker itself, and containers themselves, won't solve that problem for you. There are ways you might be able to bundle your database as a Docker image or something like that, but I wouldn't recommend it. That being said, a lot of these tools can help.
It's not that one of them is necessarily better than the others, but a lot of these tools could help with the problem you're describing: needing to quickly spin up new instances for a new country or a new person. The instance manager is a good example — at that point you click a button and you have a new DHIS2 instance, with a Postgres database and an nginx gateway, all set up in the configuration you've defined. Maybe you start the database from something that already exists, or maybe you start a new one. You can do some of that even with just Docker Compose, for example — or d2 cluster, which, again, I wouldn't recommend for production; Docker Compose is what d2 cluster uses under the hood. You can define: I want a DHIS2 instance, I want a Postgres instance, I want an nginx gateway, and I want the disk for this Postgres instance mapped to this physical disk that I have. You can set up the forwarding of ports and all of that for your instance. That sits in one file, and you can then run as many copies of it as you like — the images and containers defined in that file can be spun up in new sets as needed, and each set comes up complete and exposed in a certain way. So that's one way to do it — the lightest-weight way. But something like Kubernetes, or a more sophisticated orchestration platform, has richer concepts: you have this set of services, and maybe different scaling requirements for different services within it — maybe you want three copies of DHIS2 running, talking to your database — there are all sorts of ways you can set that up. So you can use all of them in some way; I don't know that there's one specifically I would call the best, because it depends on your requirements.
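That "one file" description could look something like the sketch below — not an official DHIS2 compose file; image tags, paths and ports are all assumptions to be checked against your own setup:

```yaml
# docker-compose.yml — illustrative DHIS2 + Postgres + nginx topology
services:
  web:
    image: dhis2/core:2.40.0          # check Docker Hub for real tags
    volumes:
      - ./dhis.conf:/opt/dhis2/dhis.conf:ro    # DHIS2 config lives on the file system
    depends_on:
      - db
  db:
    image: postgis/postgis:15-3.3     # DHIS2 needs a PostGIS-enabled Postgres
    environment:
      POSTGRES_DB: dhis2
      POSTGRES_PASSWORD: change-me
    volumes:
      - /mnt/dhis2-data:/var/lib/postgresql/data   # map to a physical disk
  gateway:
    image: nginx:stable
    ports:
      - "80:80"                       # only the gateway is exposed to the outside
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - web
```

Each copy of this file you bring up gets its own private network, its own database volume, and a single exposed gateway port.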
I don't know if anybody has another answer or suggestion in that regard. Yeah — the other one is at the top here: you could also just create a new virtual machine and use the Ansible-based server tools to set up a new instance of DHIS2. That's another very repeatable, standardized way to set up DHIS2 instances. Okay. So think about your questions, or things you would like to discuss, as we get a little further into this. I think there are a couple of comments in the chat — Renee, thank you for answering those. But maybe, Felipe, we'll turn it over to you now, so I'll stop my screen share and you can tell us a little about what you've been doing and what's going on in Chile.

Actually, I'm going to try to answer a few of the questions I heard first — they were really good. Are you seeing my screen? No? Not seeing your screen right now. This one? Now we see the slides — but is this your screen, or just the slides? Yeah, the slides. Can you switch to the next slide? Yep, we see it. Okay. Well, I'm connecting from Chile. I'm part of the Ministry of Health, and we are actually pretty new to working with DHIS2 — we started in October last year, and I think we have done a lot since then. Today I'm going to present a bit about the infrastructure we're working on, and why. First, a little about Chile, because I know most people don't know much about it. It's the longest country in the world, and that is a big challenge, because you have different ecosystems and different health facilities across the entire country. If you lay it on top of Norway, as you see on the map, it would stretch from the northern tip of Norway to almost Egypt. That's to give you a sense of how long the country is: we have desert in the north, and we have glaciers in the south.
Well, it's pretty difficult to work here, because you have many different ecosystems and things like that, and we're working with DHIS2 to address these issues. Why did we start with it? Because we needed a system for collecting data from across the whole country. It was actually pretty simple to get working — we started, as everyone does I would say, at the beginning, working on our own computers using Docker on-site. Then we started looking for higher availability, and that's how we moved to a Kubernetes-based infrastructure. Even though you see this diagram now, I'll tell you more about how it works later on, so don't worry if you don't understand it yet. The more important thing I want to tell you is this: you can deploy DHIS2 on local, on-premise servers, as most people do, but I would recommend moving to the cloud if you have the possibility. Why? Because I think it's more secure, and you get higher availability — which matters if you're deploying something that will be used by many people. I've heard a lot that most countries are moving from aggregate data to tracking individual-level data, and that matters for performance too — you're going to need a bigger, better infrastructure.

So, why Kubernetes? Well, you can see here everything that you need to configure, or can configure, with Kubernetes. One thing about the name: Kubernetes comes from Greek, and means helmsman, or pilot — the one who steers the ship. It's the captain that controls, or orchestrates, as was said, everything that happens at the infrastructure level and also at the app level. That helps you deploy an application really fast, deploy infrastructure really fast, and control everything. Everything goes into a container, as was mentioned.
You have a pod running that holds that container, and then you control the infrastructure through Kubernetes. The good thing is that you get high availability by default, because it can work with one pod or many more. Let me go back to the previous one — everything is secure inside a virtual private cloud, in this case. Even though part of it is exposed to the world, everything inside the private network is controlled by you: you decide what can connect to which machine, and how you control the Kubernetes engine. You can also add cybersecurity layers, so you have not only layer-2 and layer-3 firewalls, but can reach layer 7 — which is pretty cool, and a good standard when you're going to handle nominal, patient-level data.

Then, how do we work? In this case we're deploying on Google — I didn't mention that. For working on Google, you need Google Kubernetes Engine, and you deploy your database in another service — in this case it's called Cloud SQL. If you're going to work with more than one workload, or more than one pod, you actually need a Redis instance to share the cache between the machines. And you can do some pretty fun stuff. For instance, what we generally do: we work with the DHIS2 WAR file, as Austin said. We download the file, then do a Docker build with what we need, and push it to Artifact Registry, where all your containers live — you can find similar services elsewhere. And we do some DevOps, where it deploys automatically if you want it to.
And it's going to update all the pods or all the machines without taking down the service for the person who is using it or registering data in DHIS2. That's the most useful thing, I would say. In this case you have nodes, if you don't know much about how it works. A node could be a machine, a server in general, and you are going to have pods inside a node. You can have multiple pods, or you can have one: that would work like a normal deployment, but you can have multiple pods deployed in just one node, or multiple pods across multiple nodes. That makes it simple to scale horizontally, as you mentioned, and you can set some levels to make this expand or contract as you need, or as the application needs. Then it's also going to optimize your costs in general. How does it work? You have some requests, which are the base level of resources you're going to be using, and you have limits on top of that. For instance, when you work with normal servers, you work with one CPU, then you're going to need two CPUs, or three, four, five. In this case you can use millicores, as mentioned, and optimize the service to use 300 millicores or something like that; that is 300 thousandths of a CPU. You can use even smaller fractions of a CPU, and that makes the service you are running even cheaper. You are going to have some deployment descriptors, as you mentioned here, where you can say, well, the database is located in this pod. You can have different services, or different versions of DHIS2, running in different pods at the same time, and they could actually be connecting to the same database running in another pod or in another service. And you can optimize, for each of them, the CPU level that they need. For instance, here you have something that is running with 600 millicores of CPU, not even one.
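The resource model Felipe describes (requests as the guaranteed base, limits as the ceiling, expressed in millicores) is usually written in a Deployment descriptor. A hypothetical sketch, with the name, image path and numbers invented for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dhis2-core            # illustrative name
spec:
  replicas: 2                 # more than one pod gives high availability
  selector:
    matchLabels:
      app: dhis2
  template:
    metadata:
      labels:
        app: dhis2
    spec:
      containers:
        - name: dhis2
          image: my-registry/dhis2-core:40   # hypothetical image in your own registry
          resources:
            requests:
              cpu: "300m"     # 300 millicores, i.e. 0.3 of a CPU, guaranteed
              memory: "2Gi"
            limits:
              cpu: "600m"     # above this the container is throttled, not killed
              memory: "4Gi"   # above this the container is OOM-killed
```

Note the asymmetry Felipe alludes to on the next slide: hitting the CPU limit only slows the container down, while exceeding the memory limit terminates it.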
And they are going to charge you in the cloud for the use of 600 millicores, not one complete core. You also get billed per second of runtime. Then you have these smaller levels, which in this case are measured in millicores. And actually, if there isn't enough CPU for everything you need, for instance, if the node has three CPUs available, because that is what the node has, and workloads requesting 1.5 CPUs are going to run correctly, a third one is going to be throttled, which is the orange color that you have there. But your app is still going to run, even if you have these issues. And that's something really important at the infrastructure level. And for memory, you have the same. You have bytes, kibibytes, mebibytes and gibibytes that you can control as well; those are different from the gigabytes in this case, because those are measured for the disk. And for the CPU, as I mentioned, you have millicores when you have different shares, and you can optimize them as well. The important thing here, and what you can do with this, is, as I told you, that you can have high availability at different levels. You can have multiple pods running, and you can actually update versions pretty fast and simply. For instance, we started working with version 2.38 in October. Then we moved in January to 2.39, and we are now working with version 40 (you know, they changed the numbering). And it was actually pretty simple. The DHIS2 team launched version 40 about a month ago, as Austin said, and we were working with it maybe 48 hours later in our development environment, where we tried everything that we wanted. And a week after that we were already working in production, because we knew that it wouldn't affect the functionality for the user. Maybe for the administrator at our level, but still, it was pretty cool.
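The distinction between the binary memory suffixes Kubernetes uses (Ki, Mi, Gi) and the decimal KB/MB/GB often quoted for disk sizes can be sketched with a small parser. This is a simplified illustration only; Kubernetes' own quantity parser supports more forms than this:

```python
# Kubernetes memory quantities use binary suffixes (Ki, Mi, Gi), which are
# different from the decimal K/M/G usually quoted for disk sizes.
BINARY = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
DECIMAL = {"K": 1000, "M": 1000**2, "G": 1000**3}

def to_bytes(qty: str) -> int:
    """Parse a small subset of Kubernetes-style quantities, e.g. '512Mi' or '2G'."""
    # Check two-letter binary suffixes first so '1Gi' is not read as '1G' + 'i'.
    for suffix, mult in {**BINARY, **DECIMAL}.items():
        if qty.endswith(suffix):
            return int(qty[: -len(suffix)]) * mult
    return int(qty)  # plain bytes

# 1Gi is about 7% larger than 1G, which matters when sizing memory limits.
print(to_bytes("1Gi") - to_bytes("1G"))  # 73741824
```

Mixing the two conventions up when setting a memory limit means requesting roughly 7% less (or more) than intended.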
And the user was pretty happy with this change, at least with the icons; the icons were pretty good for them. Actually, we didn't take down the service, as you sometimes do with other setups or with Docker Compose. And that was pretty cool as well. And what we are doing now, because I'm just telling you what we do with Kubernetes: we are going to share this with everyone, because the community helped us a lot, and we're going to give it back to the community. We are going to share this infrastructure, deployed in Kubernetes, using Terraform. The idea of this is that even though we're deploying it in Google Cloud, at some point we're going to release the scripts, using Terraform, for deploying this easily on AWS, Azure, DigitalOcean, Oracle or whatever you're using. The important thing here is that this infrastructure can be deployed with Terraform, and, well, we did some testing a few days ago: in two and a half minutes you can have the entire infrastructure running, and in a few minutes more you can deploy the workload, and you're going to have cloud-native infrastructure for DHIS2 deployed in a few minutes. And this can manage an entire country's registry, not only at the aggregate level but also at the tracker level. Just for instance, we started working with this because during the pandemic we were challenged: we registered around 33 million PCR records at the nominal level, and we needed this level of infrastructure to handle something like that. Well, and just if you want to collaborate with us or know a bit more, you can contact me, or you can contact the DHIS2 team at the Ministry of Health of Chile. Well, and that's my presentation. If you have any questions, you can ask them. Thank you.

Thank you, Felipe. I don't think you can hear the applause, but there was a lot of applause in the room for you as well. Thank you very much for sharing.
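Since the team's Terraform scripts have not been released yet, here is only a minimal, hypothetical sketch of what standing up a GKE cluster with Terraform can look like. The project name, region and machine sizes are placeholders:

```hcl
# Minimal sketch of a GKE cluster in Terraform. Project, region and machine
# sizes are placeholders; the actual released scripts may differ.
provider "google" {
  project = "my-dhis2-project"
  region  = "southamerica-west1" # the Santiago region mentioned later in the Q&A
}

resource "google_container_cluster" "dhis2" {
  name                     = "dhis2-cluster"
  location                 = "southamerica-west1"
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "dhis2_nodes" {
  name       = "dhis2-nodes"
  cluster    = google_container_cluster.dhis2.name
  location   = google_container_cluster.dhis2.location
  node_count = 2

  node_config {
    machine_type = "e2-standard-4"
  }
}
```

Because the whole topology lives in declarative files like this, `terraform apply` can rebuild it from scratch in minutes, which is the portability point Felipe makes about AWS, Azure and the other providers.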
Does anyone have any quick questions for Felipe? Then we'll turn it over to Nacho to give a quick presentation as well. Very impressive.

Just on the cloud case in general: does it make sense for African countries when there are no AWS or Google cloud regions, except maybe in South Africa? Will you get high availability and performance if you are in East Africa or West Africa, where the closest data center is in Europe?

Yeah, in that case you're going to have some delay. I don't know what the legislation is within African countries, but anyway, you can do the deployment. You're going to have a bit of delay, but it's not going to be much, because you can still deploy in Europe, and it's actually pretty fast. The problem is that you're going to be handling all the information, from South Africa for instance, in Europe. Although they have the GDPR, you need to check whether that complies with your own legislation. Actually, it's a bit difficult, but I would recommend you try it, and if it works for you, well, that's it. At least I can tell you that it's going to give you high availability with a bit of delay, but people are not going to notice it. Actually, I didn't say it, but we have a Google data center here in Santiago, and that's why we started working with it. It's always good to have a data center close to your country, but still, if you use one from another country, for instance, we can use the US-based one, it actually works pretty well. And then it would be something similar. It depends on connectivity as well.

Yeah, just a general comment on that, and I don't know if you have some thoughts on that as well, but generally I would say it's important to test, right, to test the latency to different geographic data center locations.
It's true that there aren't any AWS or Google data centers on the African continent outside of South Africa that I know of, but I think there are plans to open one, potentially in Nairobi or a couple of other places.

An edge server? Oh, sorry, I didn't understand. An edge server, yes. Yeah, there is an edge server, but I think, yeah, it's important, whatever the country requirements are, to test and to see what works. There are also, I think, some advantages to exploring multi-data-center deployments, potentially: if you have one that's local and then another one in the cloud, you can have some redundancy there. And that could be something to explore as well, but it really depends. Do you have something to add on it?

Yeah, thanks, that was really interesting. What struck me when you were talking about the allocation and costing of the resources used is that a lot of these new developments are just like capitalism reshaping itself. But it might make developers start to think a bit differently: now you can put a dollars-and-cents cost on a particular program indicator, for example. That's going to cost you $10 a month, just for that. A program indicator that uses 100% of CPU for five minutes when running analytics every night, that costs you $20. An interesting new way of looking at optimization, for sure.

I think we have five minutes left, so I'd like to turn it over to Nacho. I know he had another presentation that we kind of split between the non-technical and the technical parts, and I wanted to give him a chance to talk about the technical side. So I'll go ahead and give you, actually, it's probably easiest if you use that one. And thank you very much, Felipe, really great presentation. It's really exciting to see all the work that you're doing and sharing with the community around repeatable, high-availability deployments of DHIS2, and I'm excited to work with you more. Okay.
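The costing idea just raised can be made concrete with a little arithmetic. The price per vCPU-hour below is a made-up example for illustration, not a real cloud quote:

```python
# Back-of-envelope monthly cost of reserving a fraction of a CPU, in the
# spirit of "this program indicator costs you X dollars a month".
PRICE_PER_VCPU_HOUR = 0.03  # hypothetical USD per vCPU-hour, not a real quote
HOURS_PER_MONTH = 730       # average hours in a month

def monthly_cpu_cost(millicores: int) -> float:
    """Monthly cost of keeping `millicores` reserved (1000 millicores = 1 vCPU)."""
    return millicores / 1000 * HOURS_PER_MONTH * PRICE_PER_VCPU_HOUR

# A 600-millicore workload is billed for 0.6 of a core, not a whole one.
print(f"{monthly_cpu_cost(600):.2f} USD/month")   # 13.14 USD/month
print(f"{monthly_cpu_cost(1000):.2f} USD/month")  # 21.90 USD/month
```

The point is the ratio, not the absolute numbers: requesting 600 millicores instead of a full core cuts that line of the bill by 40%, which is what makes per-feature costing of things like nightly analytics possible.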
Thanks. Okay. Yeah. Okay. Um, is it working? Nope. Okay, I tried. Yeah, it's working. Okay, so I will be fast, because we don't have too much time. How many minutes? I mean, it's okay. And it's not only me. I'm Nacho, and I'm CEO at EyeSeeTea, and I have, remotely, my colleague Adrian, who is a project manager also at EyeSeeTea. We're here to present d2-docker, which is actually part of a big suite of generic applications; maybe some of you know about them. We're trying to create a community with this. What we try to do is basically generalize the problem of a certain client that we have, in order to make it possible for it to be used by the whole community. Everything that we are building is open source, so please do not hesitate to go to our website, because we have all the explanations there. I'll go fast. Yeah, so basically this is what happened to us: we are also the ones managing the test instances, and we were bothered all the time with "please, can you dump a database", "please, can you test this". As developers, we also needed backends for testing the front ends that we were building on top of those test instances. And so that's why we built a CLI to run them all, d2-docker, trying to help each other. It is nothing but a Python wrapper on top of Docker Compose, and so it's a multi-platform system: you can run it on Mac, you can run it on Linux and on Windows. It's nothing but configuring your Docker Compose in a way that you can run locally, or expose later to the world, a series of Docker containers. Basically, it is helpful for implementers: imagine, for example, you need to create some different variants of some metadata, so why not just expose that metadata in a DHIS2 instance that a certain country can run locally, operate on, and submit modifications that you can work on.
You can also train, even on a certain piece of metadata: you go into the country, and with the server and your own computer you say, okay, please connect to this machine. Every time you run it, it will be reset to the original status it had before you launched the instance, if you don't commit. It works for sysadmins: imagine that you need to create copies of production. How many times has someone asked you for test environments to debug? No longer debugging on production directly: you download locally, you run the instance locally, and then you debug there; afterwards you go to production. You can also use it to copy databases, and automating deployments is much easier by having Docker containers. And it's even for developers, as I was saying: you want to run the CI/CD tests of your applications, so you can run them automatically by using GitHub Actions or whatever. I'm not going to explain that use case in detail, because otherwise I would run out of time, but just to tell you that we are using it in real time in the largest DHIS2 installation in WHO, which now has more than eight departments entering data into it. It's a common platform, and they are using Docker containers for all the environments but production, and they are using this tool for doing that. So you have two ways of running it: you can run it on your machine, a standalone approach, and then you can also have your own Docker registry. I have seen in a previous slide Docker Hub, which is perfectly fine, but you know that you need to pay for Docker Hub. So what we do is use Harbor, which is just another Docker registry. It is open source, you can download it and use it, and we configure it in all of the deployments we test, and then you have your own Docker registry. So running it is as simple as this.
This is literally a screenshot of how you install it on Linux; on Windows we have a wonderful guide in a wiki, and on Mac the wonderful guide is even longer. So it's just executing a command, and then basically this is the structure of your image. There are two things, and that's the only thing you need to retain: one is the core, and the other one is the data. So you can basically change the core and run the same data and applications on top of a different core, to see how your data behaves in a different version. And that's all managed from the command line with this. This is the help message of the command, and you will see that you can easily copy an image in a couple of seconds, you can list the images that you have, you can commit, you can push to your Docker registry, and you can start the image, obviously. And then the very last command that you have there, the d2-docker API one, is to run an application that exposes an API, and that is basically what Adrian is going to tell you about. I'm not going to show you the demo because we don't have time. Please, if you're around...

Yeah, can you hear me? Yeah. Okay, super quickly. So basically I'm going to talk about the d2-docker GUI application, which is basically a wrapper on top of d2-docker, which is what Nacho has just explained. The idea is that we have built a graphical interface to do all these kinds of functions from a web browser. Next. Yeah, so basically the idea is that we have this Harbor; Harbor is something like a Docker repository, the same as you have with source code in GitHub, but in this case Harbor. And then you have the d2-docker application in the middle. The idea is that you can basically fetch a Docker container, with the Docker Compose setup that we have in here, and start an application; by application I mean DHIS2 running with a PostgreSQL database and so on. Next.
So in this Harbor, you can basically have different Docker instances, I'm giving you three examples, and you can have training instances that you want to start on a laptop, on a server, on any computer. So a real case, which is the one I'm going to talk about next, is the emergency one: you can have a skeleton of a production server without data, and with the d2-docker application, which is a web interface, you can basically say, okay, I want to start an emergency or a training container; it will download the server for you and start it with a couple of clicks. You can also do some other actions, like the ones we have there. Next.

Thank you, and we are a little bit over time. I'm really sorry to make you rush, but we have the next group coming in here, so if you could just wrap up quickly, that'd be great. Sorry about that.

Yeah, no worries. So this is the screen to create a container, with only four fields: basically, in the first field you select the container, and then you can say, okay, I want to start it. This is basically the dashboard where you're going to be controlling the different instances that you have in there, and you can execute these actions. Next. And this is the use case: basically the emergency free hospital that we have in use in Ukraine, in Turkey and in Malawi. The idea is that with basically just a single line in the command line, this will start the d2-docker application in the web browser, and you download the emergency free hospital skeleton, you do some customization by pulling the data and the metadata, and the server, actually the laptop, is ready to go. This pulled metadata is the important thing. Next, Nacho. Because basically all these boxes happen in a web browser, so yes, it's just a single line, a single instruction in the command line, and everything else is done from the browser, from this interface.
And then with the pulled metadata you're going to be doing the customization. The customizations have to do with the response: in this case, if you go to Ukraine, for example, in our case we need to have questions regarding the surgeries that are being taken care of; if we go to Malawi, as we did, it's basically more about cholera, and the questions are slightly different. And this, somehow, I don't know if you get the idea, but this was the concept. Thank you.

Thank you, Nacho. Some applause here. This group will also be presenting, with a focus on the use case, in the next session, I believe, right? So thank you very much, and apologies for the rush. And just in general, I think we're going to need more than an hour next year, because I think this is a big topic that a lot of people are interested in. And you might see some more of this in the "What's next in DHIS2" session on Thursday as well. Thank you everyone for joining, and we'll move on to the public portal session, which is next: public dashboards.