Okay. Good morning — almost afternoon. My name is Miguel Zuniga, and I'm from Symantec CPE, which is Cloud Production Engineering. Today I'm going to talk to you about this little topic, which is "Could you please pass me the PaaS?" At the last OpenStack Summit in Vancouver we went really deep into what our plans were, how we were going to architect it, and which technologies we were looking into, so I want to use this time as a follow-up, and also as an introduction to how we're doing things at Symantec.

First of all, let's jump into a really quick agenda so you can see where we're going — those of you who have seen my other talks know that I talk a lot. First, it's going to be creating the PaaS: we'll go through what we needed, which requirements we were looking for, and exactly what our main goal was in creating a PaaS. You'll see that it's focused on a different type of user than the usual ones, the standard IT guys. After we go through the requirements, we'll get into why we did this — first of all, the developer experience. That's pretty much the whole deal of a PaaS. If you look at the bigger PaaS offerings out there, one of the biggest is Heroku, and the appeal is that any developer can go and deploy anything they want. That's the whole point of a platform as a service. Once we've seen that, we'll get into a bit of microservices — what people are actually moving toward, and how it relates to containers. After that, we'll cover which options we looked into.
We basically did a kind of POC and analysis of multiple open source PaaS projects that are out there, and we'll go over what was good about them, what was not good about them, which one we actually picked, and the reasons why. After that we'll go through a not-really-deep architecture of how we're doing the PaaS at Symantec. There will be some diagrams, and you'll see how the developers will be able to deploy stuff without worrying about infrastructure or anything other than the coding itself. After that we'll leave time for questions and answers, and that's pretty much it. It may not look like a lot of content, but you'll see in a second.

So, first of all: creating the PaaS. What was the whole deal of doing this? The reason is that at Symantec we already have a cloud, which is IaaS. But IaaS, as you know, involves doing a lot of things — knowing a lot about administration and all of that. For a software company it's way easier to just hire more developers, and the faster they can develop and automate everything, the faster you get products out the door. So one of the goals is to let the developers deploy easily across any environment. In our vision, the PaaS should be an abstraction layer that developers go to, where they just select where they want to deploy — whether it's a private cloud on top of OpenStack, a public cloud, or bare metal. It doesn't even matter where it is.
They don't even have to know where it is. They just go and decide: okay, I'm going to deploy into a private cloud, or into a public cloud, and that's pretty much it. Once we had that, the other requirement we needed for creating the PaaS was to remove all operational tasks and processes from the development teams. Why? Right now a lot of application teams have developers, sysadmins, IT architects, and so on. If you abstract all of that away, you literally end up with real development teams that don't have to worry about anything else. They don't have to worry about latency, load balancers, networking, or any piece of infrastructure at all. They can just focus on coding, which gives you a lot more agility. And you'll see in a second why it's so important to remove the operational processes: it gives you the agility to deploy a new version of the application on every single commit, and you can see it right away.

The next thing we were looking into is providing scaling, self-healing, metrics, and logs. I'm a developer myself, and what I'm looking for isn't how the network is set up or any of that; I want to know which metrics I'm getting — how many sessions, how many requests per second — and what the logs are going to show. What if my application gets some error?
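(To make the scaling and self-healing piece concrete — this sketch is mine, not something shown in the talk — in terms of the Kubernetes primitives that come up later, a replication controller plus a liveness probe covers both: the `replicas` count is what scaling automation adjusts, and the probe is what triggers self-healing. All the names below are hypothetical.)

```yaml
# Illustrative sketch only (not from the talk): a Kubernetes replication
# controller keeps N copies of an app running (scaling), and a liveness
# probe restarts a container that stops answering (self-healing).
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app              # hypothetical application name
spec:
  replicas: 3               # the number the scaling automation adjusts
  selector:
    app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.internal/my-app:latest  # hypothetical image
        ports:
        - containerPort: 8080
        livenessProbe:       # self-healing: restart the container on failure
          httpGet:
            path: /healthz   # hypothetical health endpoint
            port: 8080
          initialDelaySeconds: 15
```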
These are the four main things we're providing in the PaaS, so the developers keep getting all the data they need out of their application in the different environments. After that, we move into continuous integration and deployment. They don't have to worry about this, because the PaaS already deals with it: it will go ahead and rebuild the application for you and provision the application for you. You don't have to worry about creating packages again, or anything like that. Our main requirement was simply: commit, push, and there you go — you get your new environment.

One of the other things we needed, in order to have all of that automation, is a configuration management or cloud management tool. Why? Let's say you've architected your cloud and everything — how are you going to scale it? The easiest way to increase the availability and capacity of a PaaS is to put it on top of IaaS, and then the fastest way to provision more nodes for your PaaS is to use a configuration management tool — whether it's Puppet, Chef, Ansible, whatever — but you really need to have one in there. The reason is that once you start growing, a developer can go and say: okay, I'm going to deploy this application, and I want to scale it with a threshold between, I don't know, maybe a million requests per second and five hundred thousand requests per second. That has to translate into how many VMs, or how many containers, you're going to use to run that application. Configuring all of that on the fly is really difficult; even with the configuration management tools out there,
you still have to pre-warm the cluster that you're scaling. So it's really important to have something in there to facilitate your operations.

Then there's hybrid cloud — that's another thing we need. Hybrid cloud in a PaaS is kind of a tricky thing, because you might think: okay, I'm going to go hybrid, I'm going to deploy inside my data center and outside. But that doesn't mean you can just click here and everything will work smoothly. Why? Because if you're sitting in one data center and your public cloud is sitting on the other side of the country, there's going to be latency. So hybrid also involves a bit of software architecture — how you architect your application itself. Most of the time that's where microservices come in: you can go hybrid and say, all these services that require low latency I'm going to deploy on the PaaS sitting inside this data center, and this other bunch of services somewhere else, and then you replicate the data between them with a Cassandra cluster or whatever you're using.

Then, which container technology did we require? Docker. Why Docker? Even though CoreOS now has its own containers, and containers have been out there for a long, long time, Docker is the most popular and the easiest for developers to use. They can pretty much follow the same way they're already coding: a git commit creates a new image in the Docker world, so the development process and the process of creating new Docker images are aligned. It's really easy for them to jump into it.
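(As a hedged illustration of the commit-to-image alignment just described — this is not from the talk — the developer's repo only needs a small Dockerfile, and every commit can then produce a new image. The base image and app layout are hypothetical.)

```dockerfile
# Illustrative sketch (not from the talk): one Dockerfile in the repo is
# enough for every git commit to become a new, versioned container image.
# Hypothetical base image for a Django-style app:
FROM python:2.7
WORKDIR /app
# Install dependencies first so Docker can cache this layer between commits
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the rest of the source; this layer changes on every commit
COPY . .
EXPOSE 8080
CMD ["python", "manage.py", "runserver", "0.0.0.0:8080"]
```

A build service can then run something like `docker build -t my-app:<git-sha> .` followed by a `docker push` to the private registry, so each commit maps to a distinct image tag.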
So that was a huge requirement for us. And after that, we had to do multi-cloud. The same way we go hybrid, with multi-cloud you can have multiple pools of clouds — in different regions, inside your private data center, or in different regions of your public cloud, it doesn't matter. The whole idea is that the developer can say: I just made a new commit and pushed it, and the PaaS will automatically deploy the application in all the places you've specified, without having to worry about anything else.

Once we finalized all the requirements, we started looking into why we were doing this, and we came back to the developer experience. That's pretty much what it's all about — that's it. It's not the automation or anything else; it's the developer experience. Why? Because a lot of the time, like I mentioned before, when you have a development team — especially with DevOps — the developers have to go and learn infrastructure. And that's not really the deal. They're there to code. They're not there to create packages or do anything infrastructure-related. When you have a PaaS, you focus on telling the developer: just keep coding and push — that's the only thing. They don't care about anything else. You can just tell them: push here, and here's the URL of your application; go check that it's up and running and working properly. So, as I mentioned before, the main customers of the PaaS environment are developers. They're not admins, they're not architects, they're not DevOps.
All of those guys are probably going to be customers of the IaaS, which is fine — that's not a big deal. But the same way that in software as a service your customer is the end user, in a PaaS your end users are going to be developers. That's the whole point of platform as a service.

Then, provide them with application templates and architectures that are easy to use. Developers may know how to code, but they may not know how to architect an application — especially if you're jumping into microservices, where the split is more business-oriented than application-oriented. But if you provide them with specific application templates — say, a small three-tier application template — they can follow that same template and just modify the little pieces: okay, instead of pulling the code from this git repo, pull it from this other git repo; and this is Go code, or this is Java code, so specify that. You give the templates to the developers pretty much like a form they fill out, and the PaaS takes all those instructions and builds the application based only on the developer's specifications.

Once you have that, you've pretty much reduced the operational complexity. They don't have to worry about anything else; it should be transparent to all the developers. The whole goal is: commit the new change, push it, and see the changes reflected. You can have multiple environments, or multiple applications, on top of the PaaS. You can say: okay, this application template represents my development environment, this one represents my test environment,
and this one is the actual production. So when the developers commit, you can create one repository for each of the different environments, and the PaaS will pull down the new code, build it, merge it — creating the new Docker image, or whatever you're doing in the back — and push it out according to the specifications of the application template that's in place.

But that's not the whole deal. Applications are one part of it, but sometimes you need something a little more persistent, which is why we're providing back-end services. Why? Sometimes the developers will need a RabbitMQ cluster as a message bus, or they need Galera, or they want to use Cassandra or something else. These are back-end services for the developer's application — not the back-end services of the PaaS itself. So we have another layer of automation where a developer can go and select "give me a Galera cluster," and the automation will bring up the Galera cluster, set up everything for them, monitor it, self-heal it, and just give the developer the IP or the URL of the Galera cluster they're going to connect to. That's the main purpose of providing back-end services.

One other thing we're looking into — and something a lot of people struggle with — is making a clear line between stateful and stateless services. Why? Because of the different types of technologies.
With a stateful service, you can set up a cluster, but you need to have it running whether something goes wrong or not — it still has to be running. That's the reason we're pushing a lot of this into back-end services. In the container world you can have it running, but if the container dies and another container comes up and you plug the volume back in, you still have a small piece of downtime on that specific container. So stateless services go easily into the container world, and stateful services go into back-end services, for the same reason: it's easy to go and kill, I don't know, a Docker container running Apache, or your application on Rails or Java, and keep everything else separated on, say, a Galera cluster outside that is always up and running. Unless you lose the whole data center — which would kill the whole Galera cluster, right? — losing a VM won't cause any downtime at all.

So, once we went through why we're doing this — let me see, already 13 minutes, okay — once we went through why we're doing this, and established that the developer is going to be the main customer, we decided to jump into microservices a little bit. Why did we decide to go this way? Because it pretty much aligns with the Docker world and with the technologies I'm going to talk about in a second. So first of all: doing microservices is hard.
You pretty much have to — especially if you're coming from a monolithic application where the application does everything for you — split it apart and build services based on business logic. So instead of my application sending the email, and doing the billing, and doing user management, and all that back and forth, you have to create small departments — the same way you run your business. You have a mail application, then another one that does the accounting, another that does the billing, and some other service that keeps track of the users, and you make them talk to each other.

Jumping into that type of application is easier if you give the developers a small template, like I mentioned at the beginning: here is your template for a microservice, go ahead and put it in place. If you need to create another microservice, use the same template, just point your new code — your actual git URL — at that specific template, and let the PaaS build the application for you. After that, you just make one service communicate with the other. When I talk about services, you can see each one as a small monolithic application tied to a specific business function, grouped together. So inside that microservice you'll have a database, a web server, or whatever it is you're running, and then you create another microservice, which is going to be
oriented to another part of your business, which will be doing some other things, and then you have both of them talk through APIs. That's pretty much what we're trying to move into.

Then, to have all of this up and running, you have to provide persistent back-end services, for the same reason I mentioned before. One of the other things we took into account is the languages we support. Right now in our PaaS we can deploy Django, Perl, Go, Java — Java with multiple versions of web servers and application servers — and Ruby, multiple versions of Ruby, even C. The whole idea is that a developer just says: I'm going to create something new, this is the language I'm going to pick, this is the application template I'm going to use — push it forward and start coding, and let the PaaS do the work.

Keep your Docker images locally, in a private registry. This one is specific to Docker, and I had to put it in there because you have no idea how many things I've found in the public Docker registry — a lot of things that shouldn't be there, because developers just push, and push, and push, and you end up finding a lot of really interesting stuff in there that shouldn't be public. Coming from us: our way of deploying is that we have a private Docker registry in each data center that pulls everything from Docker Hub, and all the PaaS environments and everybody using Docker pull from those internal registries.

Jumping to a really quick diagram, this is what we're going after, and this is the target that all the developers at Symantec should have. You start with your monolithic application — some teams are pretty much at the second stage, in the middle section — but the whole purpose
is to get to the place where each piece is a microservice. You can have one, two, three, many of them, and they end up talking to each other through APIs. This allows the PaaS to move the same type of services back and forth: whether you're deploying your PaaS in a public cloud and then jumping to private, or private and then public, the whole idea is that since it's just an abstraction layer on top of IaaS, it doesn't matter if you're running on AWS, Google Compute Engine, or OpenStack. It will be the same thing over and over, so the same one-click deployment you have inside your private cloud, you can have outside if you wish.

Okay, so now let's jump into the cool stuff: the PaaS technology options. We did around six demos. First of all, Deis. What is this? Deis is an open source PaaS project that's out there. It's really new — open source, completely modeled on Heroku — but it's not there yet. One of the problems is that it uses a CLI, and everything is built into one huge application that runs on the server. There's no real way of scaling and expanding it. It works fine if you're coming from a dev environment where you plan to deploy to Heroku afterwards — it works pretty much the same way — and you can use it to play around, but not to actually go and deploy something into production. It has potential. It has a lot of future.
There's active development on it, but it wasn't something we needed, or something already there that we could actually use.

Then, the big kid on the street: Cloud Foundry. Believe it or not, we said no to it. Why did we say no? First of all, the Docker support. Cloud Foundry decided to add Docker in the Diego release, and if you go look at Diego, it's completely in incubation. They may have been doing platform-as-a-service for a long time, but it's not using the technology we're going after, so that was the reason we didn't go with it — it didn't meet our needs. Cloud Foundry has a lot of the things we were looking for in the requirements, but like I said, our deal breaker was the Docker support. It's supposed to get out of incubation in the next — I don't remember, I think it's the next few months or something. Yesterday? Okay, yeah — well, there you go. The only thing is that for us, we were already working with the other technologies.

After we played with those two, we decided to go into something a bit more Docker-oriented and open source: Kubernetes. That's pretty much where we were around May, at the OpenStack Summit, where we were playing with Kubernetes and the Mesos-plus-Kubernetes project. Back then it was working fine, and it still works fine, but from the developer's point of view there were a lot of things they had to deal with: huge JSON files that created a bunch of pods in Kubernetes, and all the requirements they needed. It wasn't user-friendly at all. It works pretty well — Kubernetes is a really solid project. There are a few bugs.
There's a lot of development going into it, a lot of pushing — they just released a new version, I think it's 1.0.1 — but still, a lot of it is driven by the community itself. The reason we didn't go with it is that if we went with Kubernetes only, we would have had to invest a lot of time into the user experience, which we didn't have back then. So we decided to move into something similar, which is based on Kubernetes: OpenShift.

OpenShift, as you can imagine, is from Red Hat itself. OpenShift version 2 was their own way of doing things, their own stuff. This time they literally took the Kubernetes code, and they keep developing on Kubernetes — a lot of the changes that go into the main Kubernetes project are pull requests that Red Hat, or the community from Red Hat, is pushing. That's really good, because they didn't modify anything in Kubernetes, and if Kubernetes adds something on top, OpenShift just pulls down the new code and adds their API on top of it. It doesn't mix anything, doesn't change anything. Once you deploy OpenShift, you basically have two API endpoints: the Kubernetes API endpoint, which is running everything in Kubernetes itself, and the OpenShift API endpoint, which takes care of the slightly higher-level things — for instance user management, source-to-image, the SDN, and all those pieces Kubernetes doesn't cover. If you go and deploy Kubernetes right now, you still have to deploy Open vSwitch or something else to provide the SDN layer for the PaaS environment. OpenShift already has it in there. And not only that: OpenShift has a way of creating plugins for different SDNs. So we started with the SDN using Open vSwitch — it's called
the OpenShift OVS SDN — and now we're moving to OpenShift with Contrail. The reason we're moving to Contrail is that our IaaS infrastructure runs on Contrail, and we're going to merge both layers: instead of having the IaaS doing SDN with Contrail and then another layer that the PaaS manages on top of that, the whole point is that we're going to merge both of them. So when we deploy the PaaS clusters, we're not using a second SDN layer; we're literally using the hypervisor's Contrail layer to maximize performance. The reason is that even if you go to AWS and deploy Kubernetes itself — whether with the CoreOS project, I don't remember the name of the networking part, or with Open vSwitch — you take a performance hit of about 30 percent, and that's if you know how to configure Open vSwitch properly. If not, you'll probably end up with around 60 percent of performance loss on the network. With Contrail we're going to merge that layer, and we'll see if we can reduce the loss to around 5 percent.

The other thing we have in OpenShift is source-to-image. The developer just commits, and there are really nice hooks that go to GitHub, pull down the code, create a new Docker image, and literally deploy it. And they have users and multi-tenancy. All of this — except for source-to-image — is already inside Kubernetes; users and multi-tenancy are already there, but it's not user-friendly, and you have to wire up a lot of things to make it work. There's also the security context, something extra that Red Hat pushed in, which allows you to define: I want to run this Docker container for this application with user one thousand, and that's it — you don't get to use any other type of user.
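(A minimal sketch of the security-context idea — mine, not shown in the talk — pinning a container to UID 1000. In OpenShift this gets enforced by higher-level policy, but at the pod level it looks roughly like this; all names are hypothetical.)

```yaml
# Illustrative sketch: force the container to run as a fixed, non-root UID.
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # hypothetical name
spec:
  containers:
  - name: my-app
    image: registry.example.internal/my-app:latest  # hypothetical image
    securityContext:
      runAsUser: 1000     # the only UID this container is allowed to use
```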
That's the only user you'll be able to use. The security context also manages the network assignment per project or per container, which allows you to define: okay, if you have a Kubernetes cluster running five different applications, this one belongs to this user and this project, and this one belongs to another. Even though they're running on the same host, it won't allow them to talk to each other. It doesn't even matter if you have Docker running different applications on the same host, because the way Kubernetes runs, everything inside a pod usually talks to itself over localhost; but if you have another pod, it will basically create another loopback connection so those containers can also talk over localhost — which is a security concern. With the security context these guys added, that pretty much goes away.

So those are four of the six options we looked into. And then, unfortunately, I have to give the bad side of this one too — well, it's not a bad side. Murano: it's actually an application catalog. Mirantis created it, but it's literally an application catalog.
It's really good at what it does, but it doesn't give you the user experience we were looking for; it didn't match what we were going after. We decided to move to OpenShift also because Murano isn't actually a PaaS environment, you know? With Murano you still have to have OpenStack. With OpenShift you don't: you can deploy the same configuration set into AWS, into Google Compute Engine — it doesn't matter where you're running. If you have a vanilla OpenStack, you can deploy it there, and then have another vanilla OpenStack for something else — for instance you have Liberty and you have Kilo, and you deploy the PaaS on all of them. It doesn't matter where it is. Whereas with the OpenStack projects, sometimes you have requirements like having to plug into Keystone and other pieces, which is not really something we wanted.

And the last one is Magnum, which is not even there yet — I mean, it's really new for OpenStack. It's something we tried to put in place, but it's more container management than an actual PaaS, a platform as a service. So those were the six we looked at, and that's the reason we went with OpenShift.

The next thing we'll move into — and we can go to questions after this; this is probably the first talk where I'll leave time for questions, if I have time — is the PaaS: what we're doing here. This is a really sketchy setup, pretty much just a sketch of what we're doing. As you can see, this is just a section of the PaaS — I have another two diagrams we're going to look at — and this is how the PaaS works. We have master nodes,
we have infra nodes, and we have nodes for the users. That's pretty much the only thing they're going to look at. The master nodes are in charge of managing the cluster of the platform as a service itself. The infra nodes we'll talk about in a second, and the nodes are basically where the applications run — where the developer's actual application will be running.

After that, we have some requirements we needed: a metrics collector that the developers can push into, and a storage server to provide persistent storage. The persistent storage goes to whatever you want to use — Cinder on OpenStack, or Ceph, or NFS if you want; or EBS on AWS. It doesn't matter which. At the end, the whole purpose of the storage piece is making the API calls to whatever cloud you're running on, plugging the volumes into the nodes that are running the containers, and passing them through as just another Docker volume. And then you have your private registry.

After that, we also have the metrics database, which collects all the metrics not only from the nodes but also at the container level, from the containers themselves. Right now we're collecting CPU, memory, file system, and load, so we can have a really good picture of how much your application is using, based on the Docker metrics we pull out of each of the containers. We have the monitoring section, which allows us to auto-scale — not auto-scale yet, but it will auto-scale based on Kubernetes replicas — and then we have nice dashboards — well, we're going to have nice dashboards — so the users can go and see graphs and things like that.

So in this diagram, most of the magic happens at this layer — the second one, as you can see.
Everything below this is pretty much back-end services for the PaaS itself, but the applications and everything you're running run here — especially on those two rows that say "infra nodes" and "nodes."

So let's talk a little about the infra nodes. What are the infra nodes? They're pretty much just another set of containers providing basic infrastructure services to the developers, so they don't have to worry about anything. They create and manage the load balancer for them, they collect the logs, and they create the Docker images for them. For instance, let's say I do a git push into my repository: the image builder detects the webhook — GitHub basically sends the JSON hook into our PaaS — pulls down the new version, and starts building the new Docker image for it automatically. That image gets stored in the Docker registry we had down there, and if the developer wants to auto-deploy, they just need to click the checkmark to start deploying as soon as a new image gets registered in the system. So it's pretty much continuous delivery in itself: they literally push, the new image gets built, and it starts deploying. You can also tell it: I just want to push and I don't want you to detect anything at all — just remove the hook. Or: I want to push and build the image, but don't deploy it — just remove the deployment hook. So it's up to them whether they want to auto-deploy or not. The infra nodes also collect the container metrics, which I already talked about, and we have the SSL terminator and the DNS resolution. Why?
Because when you create a new application, you should be able to have SSL certificates that you push into it. So on the application template the developer can go and say, this is the cert itself, which is encrypted, or you can pull it from here, and it gets configured on the load balancer automatically to do SSL termination there. Right now the load balancers are pretty much software load balancers, but the community is adding support; they already have Google load balancer support, and they're going to add AWS support and also F5 support, which we're looking forward to. And the DNS resolution: the reason we're doing this is because once you have the application up and running, it will be running on any of those nodes. The user will basically go and say, okay, my app's domain, go there, and the DNS resolution forwards all the traffic to the specific node that is running the container, which is the actual application itself. Once you have all those services in there, it's pretty much transparent to the user. And as you can see, it's not only for web applications; you can have any type of application. You can even tell it to run an email server in there. It doesn't matter; it will basically just run, I don't know, a sendmail or a Postfix container, depending on what type of container you're creating, and it will give you the URL, or actually the domain, which gets auto-resolved. Then you specify on the application template that it listens on port 25, but internally the container is running on port,
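The routing idea above, domain resolves to the load balancer, which forwards to whichever node runs the container, can be pictured with a toy routing table. The domains, node names, and ports below are all made up for illustration:

```python
# Toy stand-in for the DNS-plus-load-balancer routing described above:
# each registered application domain maps to (node, internal port).

ROUTES = {
    "myapp.example.com": ("node-3", 8080),   # a web app container
    "mail.example.com":  ("node-7", 2525),   # e.g. a Postfix container
}

def route(domain):
    """Return the (node, internal port) the load balancer forwards to."""
    if domain not in ROUTES:
        raise KeyError(f"no application registered for {domain}")
    return ROUTES[domain]

print(route("myapp.example.com"))
```

The real system keeps this mapping up to date automatically as containers are scheduled, which is why it stays transparent to the user.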
I don't know, 2525, something like that. Everything is managed automatically by the load balancer, which also works as a router for the internal connectivity of the PaaS itself.

So moving forward, and I'm almost running out of time here, this small diagram is how the user experience works. They go and create a new application template and define the settings in there: how much they want to replicate, like how many instances of their containers they're going to have, whether it's one, two, three, or five; whether they're stateless or not; whether they're using something like a certificate or not; and which image they're going to run. Then the platform as a service asks, is it a stateless application, yes or no? If it's stateless, it goes into OpenShift, which is the container management tool that takes care of everything I talked about a few minutes ago. If not, if it's actually something else, like providing backend services, it uses the automation we have in the cloud to provision, scale, configure, and self-heal the cluster you're asking for. Once you have both of them up and running, and it doesn't have to be one application template for each, you can have one application template for both at the same time, you literally just tell them, okay, connect these to this other IP that the PaaS environment gave me, and there you go, you can access your database or something like that.

So, after we decided to go with this, how does it work? It's pretty much the same: the user creates the push, let's say I'm going to deploy again, commits new code, pushes it, and it hits the image builder. Once the image is built, and if you have the hook set up and running, it goes through the same path. The platform as a service says, okay,
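The template-driven placement decision above amounts to a simple dispatch: stateless workloads go to the container manager, stateful ones to the cloud automation. The template field names here are assumptions for the sketch, not the real schema:

```python
# Sketch of the stateless-vs-stateful placement decision described in
# the talk: stateless -> OpenShift, everything else -> the cloud
# automation that provisions/scales/self-heals a cluster.

def place(template):
    if template.get("stateless", False):
        return "openshift"          # container management handles it
    return "cloud-automation"       # provision a persistent backend cluster

web = {"name": "frontend", "image": "myapp:v2", "replicas": 3,
       "stateless": True}
db = {"name": "galera", "image": "mysql:5.5", "stateless": False}

print(place(web), place(db))
```

One application template can carry both kinds of components; each one is routed independently by this decision.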
I have a new image for something. I'm going to read the actual application template, go through the same workflow, and decide where it's going to run.

If it's a stateful service, that's pretty much something we give them that they cannot touch. You can go and request a new stateful service like a MySQL 5.5, or you can request a specific version of Cassandra, but they don't get to upgrade Cassandra itself. That's something we provide; we tell them, you know what, we have this Cassandra version, or this other Cassandra version, or this version of RabbitMQ. That's something we're basically going to provision for them automatically in the back, even if you want to use Hadoop or something, and we can just literally tell them, this is the new IP, or the new IP and port, that you have to connect your application to.

One of the questions we get most of the time is, how do you define how to connect what to what in there? In the same way, on the application template, all the settings that get built on the fly, for instance which IP the node running the container has, which port, all that kind of stuff, are passed through environment variables to the application itself. So the users don't have to worry about, oh, you know what, I don't know which IP I have to connect to. They can just reference, in their application, APPLICATION_NAME_MYSQL_PORT and APPLICATION_NAME_MYSQL_IP. And that's pretty much how you let users connect to the services they're running without even knowing what exactly it is or where it's actually running. And with that, am I on time? Do I have five minutes? Do I have four? I don't know if I have four. Oh yeah, okay. I'm going to leave the last four minutes for questions and answers.
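From the application's side, the environment-variable wiring looks like this. The `<APP>_MYSQL_IP` / `<APP>_MYSQL_PORT` naming follows the talk; the values are made up for illustration:

```python
import os

# Sketch of the env-var service wiring described above. The PaaS would
# inject variables like these when it starts the container; here we set
# them ourselves with made-up values so the example is self-contained.
os.environ["MYAPP_MYSQL_IP"] = "10.0.3.17"
os.environ["MYAPP_MYSQL_PORT"] = "3306"

def mysql_endpoint(app_name):
    """What the developer's code does: read the injected variables
    instead of hard-coding where the database actually lives."""
    prefix = app_name.upper()
    return (os.environ[f"{prefix}_MYSQL_IP"],
            int(os.environ[f"{prefix}_MYSQL_PORT"]))

print(mysql_endpoint("myapp"))
```

Because the application only ever reads these names, the backing service can move between nodes, clouds, or regions without any code change.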
Thank you.

[Audience question]

Sure. Yeah, exactly. So the thing is that actually doing a PaaS, like you said, involves a lot of different pieces. So one of the things we were trying to decide is, okay, do we grab a PaaS itself, or do we grab something in the backend and build on top of it? We even considered creating it from scratch, creating our own Kubernetes-type thing and all that kind of stuff. But of the things that are out there that we looked into, the biggest one is Cloud Foundry, and that pretty much gives you everything. The other ones were just halfway there. Kubernetes itself, to be honest, is more of a job scheduler than anything else, right? That's pretty much it. The thing is that OpenShift is tweaking it enough and fixing it enough to actually make it look like a PaaS, and it gave us a lot of pluses, especially on the developer experience side that we were looking for. So we're using tools outside that are not in OpenShift. I mean, what's happening is that the PaaS UI itself is basically an AngularJS application doing all the connections back to OpenShift itself, or doing the connections to your cloud tool. The one we're testing right now is Scalr, because it's open source; it's a cloud management tool, and you can define specific farms in there. I had one engineer who figured out how to auto-build a one-click deployment of a MySQL cluster, and then on the PaaS UI you can just tell it to clone that cluster and start it up, and it will basically start it for you and give you all the IPs back, which are sent over to OpenShift.

Yeah, so the whole idea is that on the OpenShift UI, we've already started committing a lot of changes to the UI. So one of
the things we added was pretty much a services section, where they just define a Galera cluster, Hadoop, or whatever it is. They click it, and that makes the backend API call to Scalr, which starts building everything and starts monitoring the farm itself. For us, the farm is just a persistent service for a user. So for instance, if you go into the PaaS and say, I'm going to need a Cassandra cluster on the PaaS, you're just going to see: okay, Cassandra, click, deploy. What's actually happening in the back is that we're talking to Scalr and telling it, okay, grab the farm, the template, or the actual farm of the Cassandra cluster, clone it, and deploy it, and that pretty much gets locked down to that specific user. The lockdown on the Scalr side is just that we lock it so it cannot actually get destroyed, and it cannot scale up or down in whatever way. But in our context, inside the PaaS environment, in our database we record, okay, this user has this specific Scalr deployment ID, which is a Galera cluster itself. What we're doing is that once the deployment is up and running, it gives us the frontend, the FIP; in the case of OpenStack it gives us the floating IP, and that's what we give the user to incorporate into their environment variables, so they can connect it back to their application.

[Audience question]

No. No devops, it's only developers. No. Because the whole purpose we're looking at is that we're going to target developers, not actually devops itself. If we wanted to just serve devops, most likely we'd deploy some other tool, something like the Murano catalog, that they could go and actually use. If it's more of a devops thing they want, they go into the IaaS itself; if it's only development, they go to the PaaS.

[Audience member] You talked, oh, thank you. You talked about the
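The clone-a-farm-per-user flow just described can be sketched with an in-memory catalog. This only mimics the shape of the Scalr calls the talk describes; the template names, lock fields, and floating IP are all illustrative:

```python
# Sketch of the per-user service provisioning flow: clone a template
# farm, lock it to the requesting user, return the floating IP that gets
# wired into the user's environment variables. In-memory stand-in only,
# not the Scalr API.

TEMPLATES = {"cassandra": {"nodes": 3}, "galera": {"nodes": 3}}
FARMS = {}

def provision(service, user):
    if service not in TEMPLATES:
        raise KeyError(f"no template farm for {service}")
    farm_id = f"{service}-{user}"
    FARMS[farm_id] = {
        "template": dict(TEMPLATES[service]),  # clone the template farm
        "owner": user,                         # recorded in the PaaS DB
        "locked": True,                        # user can't destroy/rescale
        "floating_ip": "203.0.113.10",         # returned once it's running
    }
    return farm_id, FARMS[farm_id]["floating_ip"]

print(provision("cassandra", "alice"))
```

The returned floating IP is exactly what would be injected into the owning application's environment variables so it can reach the new cluster.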
services you can still plug into your application, if you deploy a Galera cluster or Elasticsearch or whatever, but you miss two things, I think. Because first, in all of these PaaS platforms, Cloud Foundry or whatever, the community services really suck, because they are not cluster-ready or whatever. So you need a lot of manpower to build new services for your platform, or, as you said, you use external service providers. But then you have another problem, because if you're in Europe, Safe Harbor is cancelled now, and you cannot place data in America, in the US, so you need a European provider, and that is not easy to find. So we have two problems: maybe you find a service provider in Europe, or you build your own services.

So on that one, that's what we're doing. Wait a minute.

[Audience member] We have been hosting, Cloud Foundry based, for one and a half years now, a public platform in Europe, and yes, this is a big thing, because you don't really have options.

Yeah, I know. In our case we are trying to architect the PaaS completely abstracted from everything, you know. That's the reason we went with a cloud management tool: because we're trying to have something we'll be able to deploy anywhere, it doesn't matter where, and still have the same type of automation that we're doing in-house. One of the things is that, you see this type of cluster? It's not just that one cluster that we have for everything. We deploy the same cluster in each of the data centers we have running OpenStack, and it's going to be running in each of the regions sitting on AWS or Google Compute Engine. So when the developers want to define something, they can say, okay, I have this application template, I'm going to roll it out in every single region of the cloud, or everywhere I can deploy. But the thing is, if you build something on a specific service, like you
mentioned, in some place, and out of nowhere you lose it, or they go out of business or whatever, you get stuck. So in our case, for instance, we have the analytics team, which is doing our own Hadoop type of madness, right? That's going to be deployed the same way, whether we're deploying to Europe, or here in Asia, or over there in America. The whole idea is to abstract it and not have any limitations.

[Audience member] Absolutely, absolutely, but to start your own PaaS platform, self-hosted or whatever, as a testing platform or whatever, you don't have the option to really say, okay, please create me a Galera cluster and attach it to my deployed applications. You have to spend a lot of money and a lot of manpower to get really production-ready. So you have two options: you use something like an AWS service or something like that.

Yes. Yeah, I know, and we are aware of that one. The only thing is that, like I said, it really depends on how they want to architect the application. Some of our teams are just going to deploy, I don't know, some pieces into the PaaS, and the rest they'll probably do themselves on the IaaS, because, I don't know, it's more sensitive data or something they want to keep a better eye on. We're pretty much just giving the developers options. Even if they say, okay, we're building on top of the PaaS and doing everything to make development more agile, they can still go and deploy on the IaaS if they want. It's not something you have to do, right? Okay. So that's pretty much it. Awesome. Thank you. Any other questions?

[Audience question] Yeah, the automation tools, what
complementary tools?

Yeah, so for instance, the collection of metrics is done by collectd, and we also have our own in-house-built metrics in the system. The metrics database is InfluxDB. The dashboards are Dashing; we use only open source projects. So that diagram is pretty much a mix of a lot of things running together. For monitoring we're using Zabbix. What else did I put in there? I don't remember what else I had. Monitoring is Zabbix, dashboards are Dashing.io, and also the dashboards from Zabbix itself. The storage is pretty much just a server running NFS, or iSCSI. So that's it; I mean, it's not really a lot of things. Most of everything is centralized in OpenShift.

Okay, any other questions? Oh no, I'm already past time. So, really appreciate it.