Awesome user experience, so yep, thanks. So, my name is Michal, and I'm leading the OpenShift platform management engineering team at Red Hat. I'm not a sales guy, even though I have the enterprise keyboard here, so I won't be selling you anything and you don't have to buy anything.

What I'm going to talk about today is basically why I think we should stop calling OpenShift "OpenShift" and start calling it enterprise Kubernetes, because that's what it essentially is. But before I get there, I'll briefly go through how it all started with containers; you've probably heard these stories in previous talks already. Also, spoiler: this won't be a super technical talk, so if I do get too technical, or you don't understand something, just raise your hand and I will gladly answer your questions.

So let's start with Kubernetes. This is basically the pitch Google threw out when they came up with Kubernetes for the first time: give everyone the power to run agile, reliable, distributed systems at scale. The word "everyone" is very important here. In the past, it wasn't easy to have a distributed system available on your developer laptop, because it takes a lot of machines, networking setup, and configuration to make everything work reliably. You probably have that infrastructure somewhere in your company, but you don't have it on your laptop where you can just play with it. Google itself basically runs only distributed systems to power its services. You probably have a Gmail account, or some other email account; well, every service inside Google runs inside a container, orchestrated by some container orchestration engine. It's not Kubernetes yet; it's called Borg, or Omega, I don't know exactly what they run there now, but it's containers running inside Google.
So let's start talking about containers, and this will actually be quite controversial, so if you want to argue with me, I'm glad to hear why I'm wrong. For me, containers are building blocks for modern applications, or entry points for the services in these distributed systems. Containers are not lightweight virtual machines. If you use a container as a replacement for your virtual machine just because it boots up faster, I think you're doing it wrong. What's the point of using a container then, except isolation? Well, kind of isolation.

I'll extend this to the point that if you're running systemd, or any service manager, inside the container to run your services, maybe you're doing it wrong too. If somebody wants to argue with me about that, I can make my case. There are probably cases where you want to run systemd, or a lightweight systemd, inside a container, but I don't think systemd is in a state yet that allows you to do this well; maybe yes, maybe no. Personally, I hate the idea of having systemd or a service manager inside the container. One reason is that a platform like Kubernetes has to know what state the container, or the process running inside it, is in. If you have a single process there, Kubernetes can read information about that process and decide: is this process stuck, does it have problems, did it crash? Then it can restart the container and rescue it. If you have systemd managing the processes, how does Kubernetes know what to do when a process crashes? It's now up to you as a developer to define the unit file and the rules to restart the process if something goes bad.
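To make that concrete, here is a minimal sketch of a pod running a single process that the platform itself can supervise; the image name and health endpoint are hypothetical examples, not from the talk:

```yaml
# One process per container: the platform, not systemd, supervises it.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  restartPolicy: Always        # Kubernetes restarts the container on crash
  containers:
  - name: web
    image: example/myapp:1.0   # hypothetical image
    livenessProbe:             # how the platform decides the process is stuck
      httpGet:
        path: /healthz         # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```

With a single process and a probe like this, the restart policy lives in the platform, where the scheduler can see it, instead of in a unit file inside the container.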
You're basically moving that responsibility away from the platform, down to the systemd level.

The other thing containers are not good for is making a monolithic application more secure. I've heard this argument many times: I have this big app, I just put it into a container and claim it's secure, and now we're safe because we're running in a container. That may be true to some extent, but I don't think it's really secure; it's still a monolithic app, it's just running inside Linux namespaces. The related argument is using containers to make a monolithic application scale: again, I have my big, fat Java app, I put it into a container, and it will scale because I can just copy the container. That doesn't really fix the scaling problem; you're just copying the application multiple times. You don't get real scaling for a monolithic application that way; you should start thinking about breaking it into microservices or something like that. There was a talk yesterday about how you can do that.

So my rule of thumb is basically: write applications, don't write containers. If you find yourself writing a lot of shell scripts just to make your application work in a container, you should probably start rethinking what your application is. The application should take advantage of what the platform is offering to it. If you look at Kubernetes, for example, one thing the platform can offer you is service discovery. I start my application in a namespace inside Kubernetes, and it can automatically discover what services are available for it. This application requires a database? Fine, it will discover it, because there will be a service with the name database in the namespace. Or it can require some messaging solution and discover that the same way; it can discover whatever services are available.
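A sketch of what such a database service might look like; the `app: postgres` label is a hypothetical example:

```yaml
# A service named "database": applications discover it by this name.
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  selector:
    app: postgres       # hypothetical label carried by the database pods
  ports:
  - port: 5432
```

Containers in the same namespace can resolve the name `database` through cluster DNS, and Kubernetes also injects `DATABASE_SERVICE_HOST` and `DATABASE_SERVICE_PORT` environment variables into pods started after the service exists, so the application can find its database without hard-coded addresses.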
Or say you're writing a Rails application; Rails knows how to work with multiple kinds of databases, you can have PostgreSQL, you can have MySQL, you can have whatever. So it can automatically make decisions depending on what is available in the cluster, in the platform.

The other advantage the platform gives you is auto-scaling. If my application is taking a heavy load, the application itself can tell the platform: hey, scale me, because I'm missing requests, the container is taking heavy load from the internet. You posted something on Facebook and now suddenly everyone is clicking on your link, so your application is taking heavy damage right now. Auto-scaling is really important: the platform can monitor the application metrics, and the application can talk back to the platform, so they work with each other.

The next thing is persistent storage management. Containers are ephemeral, so if you have to write something, you probably want to do it on persistent storage, not inside the container, because once the container is done, or it crashes, everything you stored there is gone; you never get it back. So Kubernetes, or any platform, should offer you some kind of persistent storage management where you can say: mount this volume at this directory, and the application will store its data there.

The next thing, and this is a little controversial too, is configuration and secret management: how do I configure my application? Should the configuration be part of the Docker image, or should I have some option to replace the configuration with my own? The platform should offer you something that manages the configuration for you.
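As a sketch of how a platform can hand both persistent storage and a secret to a pod, keeping both out of the image; all names and values here are hypothetical:

```yaml
# A claim for persistent storage, a secret, and a pod consuming both.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
stringData:                      # stored base64-encoded by the API server
  db-password: s3cret            # hypothetical value
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: web
    image: example/myapp:1.0     # hypothetical image
    env:
    - name: DB_PASSWORD          # secret injected at runtime, not baked in
      valueFrom:
        secretKeyRef:
          name: myapp-secrets
          key: db-password
    volumeMounts:
    - name: data
      mountPath: /var/lib/myapp  # writes here survive container restarts
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myapp-data
```

The image stays generic; the data directory and the password both come from the platform.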
The same goes for secrets: secret management, certificates, keys, an SSH key to clone something, and so on. How do you get those in? Do you want to bake them into the image? That's probably a really bad idea. You somehow want to get the secret from the platform so the application can use it. There are many talks about how secret management should look, whether it's using Vault, or Custodia, or FreeIPA, or whatever, or just plain platform secrets, but it should be managed outside the container image.

The next thing I think is very important for the platform to offer the application is API access to the platform itself. If the container is running in the platform, it should know how to connect to the platform and get useful information: as I said, metrics, the current number of replicas, the current state of the application, a lot of useful things. And in the end, the application itself can deploy to the platform. Instead of having complicated bash scripts that create things, the application can just call back to the API and create them right from inside the container. So you can basically build an installer that runs from the container, taking advantage of the fact that you have API access inside that container.

Okay, so that's the platform. Let's talk about the Kubernetes architecture. As I said, this will be high level, so I'm not going into deep detail, but you can ask me questions. I'll try to be in sync with the latest terminology Kubernetes is using, and I'll also try to be backward compatible and tell you the previous names of the things I'm talking about. The first important piece in the Kubernetes architecture is how it runs: you have a control plane, which was previously called the master.
The master is actually a set of components that today runs on a single master node; that is expected to change in order to support highly available setups, so you can have multiple nodes acting as masters. So what runs on the Kubernetes master? First off, the API server itself. If you're a client, you talk to Kubernetes through a REST API, and that REST API runs on the master; the master has an API server. Every time you talk to the API and create something, the API server stores it in etcd. etcd is the data store for Kubernetes, where it persists the definitions of what the world should look like in the cluster. Then there is a scheduler that basically decides how that will happen: you define what the world should look like, and the scheduler decides where the pods, the containers, will actually run. And then you have the controller manager, which is a process that manages the different controllers. Kubernetes is basically a set of small daemons, controllers, that observe the current state of the world and try to reach the desired one: they reconcile what is currently happening in the cluster with what you defined. There are a lot of controllers in Kubernetes.

So that's the master. Now, where do the containers run? The containers run on nodes. The nodes include the services necessary to run the application containers, and they are managed by the master. The node has a container runtime, which right now can be Docker or rkt, I guess, and they say it can also be Hyper or OCID, and in the future it will be CRI-O. Yesterday Antonio gave a talk about CRI-O; that will be a container runtime where you can just pick what the runtime will actually be.
So it's no longer just Docker; you can pick whatever container runtime you like. The next service running on the node is the kubelet, which works something like a relay agent between the node and the master. The kubelet watches what is going on on the node and relays that information to the master, and in the other direction the master talks to the kubelet, saying: start these containers. The kubelet also reports metrics about the node: what is the CPU usage, the memory usage, the disk usage; it monitors all that kind of stuff and relays it to the master, basically so the scheduler can make wiser decisions about where to place containers. And there is kube-proxy, which serves the services; it's basically a routing or networking component. The pods run in a private network and kube-proxy acts as an agent there: it provides the IP addresses and basically knows how the pods can talk to each other.

So those are the high-level concepts in Kubernetes: you have masters, you have nodes, and then there is what you can create and define in that world. The highest-level resource is the namespace. A namespace provides a scope for the names of your resources. It's also intended for environments with multiple teams, a multi-tenant environment, where users basically see only their own resources, not the resources in other namespaces. For example, if I go into my namespace and create a deployment named nginx, then some other user can create their own namespace and create a deployment named nginx too. If we all worked in one single big namespace, that wouldn't be possible, because we would have a name conflict. So namespaces are something like a sandbox for users, where they can play, basically isolated from the rest of the system.
Then pods. In Kubernetes there is no concept of a container as a resource; a container is not a resource. The way Kubernetes runs containers is that you define a pod, and a pod can have one or multiple containers. Those containers share things: if you have volumes defined, they can share the data on the volumes; they can also share the network namespace, so if one container starts something on localhost, the other container can just access it on localhost on that port, if it knows the port of course; and they can share other resources like IPC.

So why would you want to run more than one container in a pod? Usually people don't do that. Well, in fact Red Hat, or OpenShift, has customers that run something like 40 containers in one pod, and you might think somebody there is crazy; why do they need so many containers in one pod? This comes down to the patterns that were defined by, I think it was Brendan Burns and David Oppenheimer from Google, before they started working on Kubernetes. They basically defined container patterns for how you should use containers in distributed systems; I'd recommend you google the paper and read it. That's basically how they came to the conclusion that the pod should be the minimal resource. Some of the interesting patterns are things like the adapter pattern: I have one container with my application and a second container acting as an adapter for, let's say, messaging, or a database. I can easily change which database or messaging system sits behind it, because the adapter acts as a proxy, translating messages to the messaging system. It can also serve as a cache or something: if the messaging system is being upgraded, the adapter just preserves the messages so they won't be lost.
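A pod with an application container plus an adapter sidecar might be sketched like this; both image names are hypothetical:

```yaml
# Two containers in one pod: they share the network namespace,
# so the app reaches the adapter on localhost.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: app
    image: example/myapp:1.0          # hypothetical app, talks to localhost:6379
  - name: cache-adapter
    image: example/messaging-proxy:1.0  # hypothetical adapter/proxy sidecar
    ports:
    - containerPort: 6379             # the app only ever sees this local port
```

Swapping the messaging backend then means swapping the adapter container, without touching the application container at all.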
Another interesting pattern is leader election. You have your application, and your application requires electing a leader; you have multiple pods connected in some membership cluster, something like MongoDB is a great example. You can have a sidecar container that is responsible just for the election: it collects all the members and does the election, and re-election if one member goes down, and this doesn't affect your application at all. The application doesn't have to know about the election system; it's the sidecar container doing the work to make the election happen. So, to recap: a pod can be one or multiple containers sharing data.

Okay, so I created a pod and it's running a container. It gets some IP address that can change at any time. So how do I make that pod, or the container, useful for the other pods in my namespace; how can they connect to it? The way they connect is through a Service, which is an abstraction that defines a logical set of pods and a policy for how you can access them. What does that mean? A Service is a resource in Kubernetes. You create a Service, you add a selector, and you basically say: every pod that has these labels serves as an endpoint for the Service. When the first pod comes up with labels matching the Service selector, it becomes an endpoint for the Service. The Service has a single IP address that doesn't change, so you can start talking to it, and once the first pod comes up, the Service routes the traffic to that pod. Then you start another pod that also has matching labels, so it becomes the second endpoint, and a third, and so on, while the Service load-balances between them.
So you get load balancing for free, just by creating a Service. Services in Kubernetes are not just simple load balancers; there are different kinds of Services. You can have Services with a cluster IP, you can have headless Services, and so on, but that's the principle of how Services work: there is a single stable endpoint for your application, the other pods can talk to it, and they don't care which pods implement it; the Service defines that in its selector.

So I was saying you have one pod, and then you create another pod, and another; how does that happen? You don't want to do it manually: every time I want to scale up, I create a bunch of pods, and to scale down, I remove those pods. That's not really practical, and admins will not like it at all. So how does Kubernetes replicate pods? There are multiple ways. One way is to define a replication controller, or now a ReplicaSet, which is basically the same thing as a replication controller except it has a more advanced selector, so you can do more magic in selecting which pods it will manage. You basically specify the number of replicas of this pod you want running in the system, and the ReplicaSet ensures that that number of pods is running at any time. If, for example, a pod gets assigned to a node, the node goes down, and the pod is dead, the replication controller automatically schedules a new pod on another node. It always tries to keep the world matching the definition you made. You can also change the replica number at any time; let's say I start with a ReplicaSet that has three replicas and I bump it to ten.
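Such a ReplicaSet might be sketched as follows; the labels and image are hypothetical, and older clusters used the `extensions/v1beta1` API group instead:

```yaml
# Desired state: three replicas of this pod, always.
apiVersion: apps/v1            # extensions/v1beta1 on older Kubernetes
kind: ReplicaSet
metadata:
  name: myapp
spec:
  replicas: 3                  # bump this to 10 and seven more pods appear
  selector:
    matchLabels:
      app: myapp               # manages every pod carrying this label
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: web
        image: example/myapp:1.0
```

Changing `replicas`, for example with `kubectl scale rs myapp --replicas=10`, is all it takes; the controller does the rest.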
What happens is that Kubernetes automatically scales: it creates seven more pods for your application, and the same on the way down.

The other interesting way to replicate pods is a DaemonSet. What a DaemonSet does is ensure that the pod runs on every node in the cluster. If you have a hundred nodes in your cluster, it makes sure there are a hundred pods running, at least one pod on each node. How is that useful? Well, there are many use cases. A simple one that comes to my mind right now is NTP: I want to make sure the time is synchronized in my cluster, that all nodes have the same time. How do you do that; how do you run an NTP client on every single node? Well, you can put it into the host system and manage it outside Kubernetes, or you can just create a DaemonSet that runs the NTP client on every node and keeps the time in sync.

[Audience question] What do you mean, in which order do they start? Well, I would like my time to be set before my applications get started, right? So the question is how you create the DaemonSet and make sure that things like the time are synced before applications start getting created. You create the DaemonSet as an admin, and then you can tell your users: now you can create applications. There is no way to say that users can start creating applications and pods only after the DaemonSet has its pods running; that concept doesn't exist in Kubernetes.

Okay, so a DaemonSet basically ensures that there is one pod running on each node, and that can be used for things like NTP, or logging, or say I want to run tuned, or anything else I want on each node.
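The NTP example might be sketched as a DaemonSet like this; the image is hypothetical, and the host-network and privileged settings are assumptions about what a clock-setting client would need:

```yaml
# One pod per node: keep every node's clock in sync.
apiVersion: apps/v1            # extensions/v1beta1 on older Kubernetes
kind: DaemonSet
metadata:
  name: ntp
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: ntp
  template:
    metadata:
      labels:
        app: ntp
    spec:
      hostNetwork: true        # talk to NTP servers from the host's network
      containers:
      - name: ntp
        image: example/ntp-client:1.0   # hypothetical NTP client image
        securityContext:
          privileged: true     # setting the host clock requires privileges
```

Add a node to the cluster and the DaemonSet controller schedules one of these pods onto it automatically.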
The other way you can replicate pods is a StatefulSet. The former name of StatefulSet was PetSet; they were renamed recently, because one thing that Kubernetes and Google do really well is renaming stuff; they've renamed multiple things after releasing them. So what does a StatefulSet do? When you create a replication controller or ReplicaSet and say I want ten pods, it creates ten pods with random identities. It doesn't preserve the hostname of the pod; if one pod goes down and comes back up, it gets a different IP address, a completely different identity. The other thing is that the replication controller doesn't guarantee that the pods start in order: if I bump the number from five to ten, suddenly five pods are starting in parallel, so you can't guarantee the order. And there are other things. StatefulSets were designed to solve these problems. First of all, the pods created by a StatefulSet have an identity: they have a stable hostname, something like foo-0, foo-1, and so on, counting up as you scale out, so you can rely on those numbers. The other thing they guarantee is that the pods start in sequence: the first pod in the set needs to be ready, then the next replica is started and needs to become ready, and so on. This is extra useful when you're building things like replicated databases, anything that needs an order of creation or has to retain identity, like MongoDB. If you're running a MongoDB cluster, Mongo doesn't really like it when you just randomly add and remove members on the fly. Well, it is supported, but if you start playing with it, you'll shoot yourself in the foot after some time. So that's StatefulSets.
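A minimal StatefulSet sketch for the MongoDB case; the names are illustrative, and the headless Service it references is assumed to exist separately:

```yaml
# Pods get stable identities: mongo-0, mongo-1, mongo-2, started in order.
apiVersion: apps/v1            # apps/v1beta1 on older Kubernetes
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo           # headless Service giving pods stable DNS names
  replicas: 3                  # mongo-1 starts only after mongo-0 is ready
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongod
        image: mongo:3.4
```

If mongo-1 dies and is rescheduled, it comes back as mongo-1 with the same hostname, which is exactly what a replicated database's membership list needs.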
The last thing is the CronJob. It's not really replication, but I'll mention it because Maciej is sitting here and he's the author of the CronJob. What the CronJob makes sure of is that a pod runs at a specified time; it's like a crontab. I can say: run this pod, which is actually a Job, not a bare pod, and a Job is basically a one-time pod, at this time, or every five minutes, or every thirty minutes. How is that useful? You can think about things like pruning, or making a backup of your database every day, or every thirty minutes, or however you like your database. These things are managed by Kubernetes, by the scheduler, so you don't have to run cron inside your containers to do these jobs.

Okay, so that was Kubernetes and what's in the Kubernetes world. Now let's talk about OpenShift, and what I mean by OpenShift equals Kubernetes. Well, if you run OpenShift, you actually run Kubernetes, because OpenShift is just a distribution of Kubernetes. I just heard an analogy from our marketing that OpenShift is something like Fedora distributing the Linux kernel. I don't think I completely agree with that analogy, but that's how you can think about it: Kubernetes is the vanilla thing, and OpenShift basically runs Kubernetes and adds more tools on top of it, to make your life easier. In a moment I'll talk about what those things are. As I said, OpenShift is a distribution of Kubernetes, and by accident we started matching the release numbers from release 1.3 of Kubernetes: Kube 1.3 matches OpenShift Origin 1.3, and so on, 1.4, 1.5, and hopefully Kubernetes 1.6 will match Origin 1.6, depending on how big the rebase will be.
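Back to the CronJob for a moment: a database backup every thirty minutes might be sketched like this. The backup image is hypothetical, and on clusters of that era the API group was `batch/v2alpha1` or `batch/v1beta1` rather than `batch/v1`:

```yaml
# Run a backup Job every 30 minutes, scheduled by the platform.
apiVersion: batch/v1           # batch/v2alpha1 or batch/v1beta1 on older clusters
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "*/30 * * * *"     # standard crontab syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: example/db-backup:1.0   # hypothetical backup image
```

The containers stay free of any cron daemon; the schedule lives in the cluster, next to everything else the scheduler manages.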
That also matches the enterprise numbers: OpenShift Enterprise 3.3 has Kubernetes 1.3, and so on. I'm not promising you this will continue, but so far we've been trying to catch up with every Kubernetes release. What this also means is that if you run OpenShift, you're running a stable version of Kubernetes every time. If you pick OpenShift Origin 1.5, you're running a stable Kubernetes 1.5. By stable I mean that we at Red Hat have QA teams doing all this testing for us; somebody actually tested it and proved it works. We have a lot of test suites on top of Kubernetes to make sure things work together; Kubernetes itself has a very extensive test suite, and OpenShift adds a bit more on top of that. So you can be sure that the Kube you're running will actually work for you.

So what did we in OpenShift add to Kubernetes; what is the sugar? First of all, when we started thinking about OpenShift, OpenShift v2 was developer focused: you have your git repo, and all you care about is having the application in that git repo running in a platform-as-a-service. You don't care about containers, you don't care about gears, you don't care about SELinux, because we managed that for you; it was always there. So the platform should still be developer focused. As a developer, I want good tools to interact with the platform. I don't want to be in the business of building and distributing Docker images; I just care about my source code, nothing else, and I want to have great tools available for that, integration with things like Eclipse and other tooling. And by saying I don't want to distribute Docker images: well, the platform should also give you an integrated Docker registry, so I can actually push Docker images somewhere and run them.
That way I don't have to push them to Docker Hub or create my own Docker registry just for Kubernetes; OpenShift comes with the integrated Docker registry. I also want the ultimate user experience: a really nice, shiny web console I can show to my clients, and a CLI that actually does more than create and edit and a bunch of other commands; a CLI with which I can manage the platform itself if I have to. As a developer, sometimes I want to add a new node just to test whether things work properly. As a sysadmin, I want centralized management of the entire stack using the CLI; I don't want to go to Ansible just to make a node unschedulable, or to evict pods from a node when I'm evacuating it because it's going down, or something like that. I want central management of the entire stack in one place.

And the most important thing: I want it to be multi-tenant. Kubernetes doesn't do a great job of being multi-tenant. If you start Kubernetes, you probably have one single namespace for everything. Users and groups are in Kubernetes, but I think by default they're disabled, so you don't care which user you are; you can turn it on, but there are other things around multi-tenancy that Kubernetes doesn't do very well. That also touches security: as I said, users and groups are there, but they're disabled by default, and there are things you can't express in Kubernetes, like: I don't want this user to see my cron jobs, or I don't want this user to run pods in this namespace, those kinds of things. And production readiness: Kubernetes does a great job releasing, but how do you install it in production? So let's come back and talk about OpenShift concepts, what we actually add in OpenShift.
The first thing is builds. As I said, OpenShift is developer focused; a core OpenShift use case is basically that operators run the hosting platform and provide developers easy-to-use software environments. As a developer, I want something that just runs and works for me; I build this Docker image and it becomes the form of exchange between the developer and the operator. So OpenShift allows you to create builds, in terms of a BuildConfig. The operator basically gives the developer a builder image. Let's say I have a developer working on a Node.js application. As an operator, or administrator, I build the Node.js builder image with the Node.js version I trust and the libraries I trust, and I know exactly what's in there, and I hand it to my developer: use this for building your application. The developer can say, okay, but I want the latest Node.js, and they can negotiate that exchange, but in the end the operator gives the developer the builder image. The developers then use the source-to-image project, so they don't have to write Dockerfiles; that's the main point of doing this. As a developer, you basically have your GitHub account, you push your application code there, and source-to-image in OpenShift combines the builder image with the application source code. And to make that happen seamlessly, builds can be triggered automatically: every time you make a change to the source code on GitHub, a build automatically triggers and runs. The same works for builder changes: if the operator or administrator pushes a newer version of Node.js with the latest security fixes, we automatically rebuild your application and redeploy it, with the security fix.
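That build setup might be expressed as a BuildConfig roughly like this; the repository URL, names, and webhook secret are hypothetical, and newer OpenShift versions use the `build.openshift.io/v1` API group:

```yaml
# Source-to-image build: builder image + git source, no Dockerfile.
apiVersion: v1                 # build.openshift.io/v1 on newer OpenShift
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    git:
      uri: https://github.com/example/myapp.git   # hypothetical repo
  strategy:
    sourceStrategy:            # the source-to-image strategy
      from:
        kind: ImageStreamTag
        name: nodejs:latest    # builder image the operator provides
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest       # where the built image lands
  triggers:
  - type: GitHub               # rebuild on every push to the repo
    github:
      secret: webhook-secret   # hypothetical webhook secret
  - type: ImageChange          # rebuild when the builder image is updated
    imageChange: {}
```

The two triggers implement exactly the story above: a source push rebuilds the app, and a patched builder image rebuilds every application based on it.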
Now, how do we store the images? Kubernetes doesn't have a concept of storing images at all; it doesn't care. For Kubernetes, the image is basically whatever you write in the image field of a pod spec; that's where it ends. In OpenShift we've extended this, because the developers are producing images, so we have to have a place to store them, and we run this integrated Docker registry. But we also want to perform actions every time a developer pushes something to the integrated registry: we want to create a new deployment, or trigger a build, or other things; we want to react when something changes in the system. For that we need to store the history of the image over time, and in OpenShift this concept is called an image stream. An image stream can be used to perform an action when a new image is created or pushed to a Docker registry. Each repository in the integrated Docker registry automatically maps to an image stream in OpenShift: every time you push an image, even manually with docker push into our registry, we automatically update the image objects in OpenShift to reflect the change. And yes, we can track external images as well. It's not as fancy as with the integrated OpenShift registry, because we can't import every second; that would basically be polling Docker Hub, and the Docker people would hate us even more than they hate us now. So what we do is set up an image import: we say this image will be imported every 15 minutes, which is the default, so every 15 minutes we ask Docker Hub whether there's a newer image, and if yes, we import it and update the system. This can be configured, of course, but 15 minutes is the default.
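An image stream tracking an external image might be sketched like this; the Node.js tag is an illustrative choice, and newer OpenShift versions use the `image.openshift.io/v1` API group:

```yaml
# Track an external image and re-import it on a schedule.
apiVersion: v1                 # image.openshift.io/v1 on newer OpenShift
kind: ImageStream
metadata:
  name: nodejs
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: docker.io/library/node:6   # external image to track
    importPolicy:
      scheduled: true          # periodic re-import (every 15 minutes by default)
```

When the import finds a newer image, the stream's tag moves, and anything with an image-change trigger pointing at `nodejs:latest` reacts: builds rebuild, deployments redeploy.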
In Kubernetes — and I didn't mention this earlier on purpose — there is a Deployment resource. Where does it come from? It comes from OpenShift and Red Hat, because we said we need something that transitions from replication controller A to replication controller B. Why does that matter? If you update your image, how are you going to propagate that update through the Kubernetes cluster? With a plain replication controller, you can scale it to zero, update the image, and scale it back to what it was originally — but that's not really a convenient way to update your application. The other way is to create a second replication controller, B, and start scaling the old one down while scaling the new one up; that way you achieve what we call rolling updates, more or less. Or you can define a Deployment resource in Kubernetes, and it will do this automatically for you. When you create a Deployment, you specify a replication controller template, which has a pod template — because templates are the thing, templatize everything — so you go all the way down to the pod. When you change the pod template in a Deployment, Kubernetes computes a hash, and if that hash differs from the hash of the currently running replica set, it automatically deploys: it does the transition from A to B for you. There are different strategies — rolling, recreate, or custom. Rolling and recreate exist in Kubernetes; custom does not. What the custom strategy gives you is a way to customize the entire process: how you want to scale down, how you want to
scale up. You can hook something in there — say, "scale down to 50 percent, then do something, then continue depending on how that something finished." The custom strategy is only available in the OpenShift deployment config. The other thing available only in OpenShift is lifecycle hooks. Maybe you want to run something before the deployment starts — a pre-hook — like backing up the database, or sending a notification before the new version rolls out. Maybe you want to do something in the middle of the deployment — I'm at 50 percent, run some script. And the same applies for post: after the deployment finishes, execute a script, or something that sends me an email saying things were updated. The other thing you probably want is rollback on failure: if something goes wrong while rolling out your application, it should roll back to the previous version rather than leaving you with a broken application. And you want triggers. You will probably trigger on a configuration change — that much is available in the Kubernetes Deployment — but you also want to trigger automatically on an image change: if the image referenced by the deployment config changes, you want to kick off a new deployment. Or you can do it manually, so at any point in time I can just deploy. The other thing in OpenShift is pipelines. Going from build to image to deployment is what I would call a very simple pipeline, and you can automate it completely: you push something to GitHub, that creates a build, the build results in an image, and the image results in a deployment. That's how you can update your application just by pushing changes to GitHub. That's a simple pipeline.
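Pulling those deployment pieces together — a rolling strategy, pre/post lifecycle hooks, and config-change plus image-change triggers — a deployment config could be sketched like this. All names here are hypothetical and the fields shown are only a subset:

```yaml
# Hypothetical OpenShift DeploymentConfig; 'myapp' is a made-up name.
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 2
  strategy:
    type: Rolling
    rollingParams:
      pre:                            # runs before the rollout starts
        failurePolicy: Abort          # abort the rollout if the hook fails
        execNewPod:
          containerName: myapp
          command: ["/bin/sh", "-c", "echo back up the database here"]
      post:                           # runs after the rollout finishes
        failurePolicy: Ignore
        execNewPod:
          containerName: myapp
          command: ["/bin/sh", "-c", "echo send a notification here"]
  triggers:
  - type: ConfigChange                # redeploy when the pod template changes
  - type: ImageChange                 # redeploy when the tracked image changes
    imageChangeParams:
      automatic: true
      containerNames: ["myapp"]
      from:
        kind: ImageStreamTag
        name: myapp:latest
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest           # replaced by the image-change trigger
```

The `Recreate` strategy additionally supports a mid-rollout hook, which matches the "do something at 50 percent" idea from the talk.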
But you can also have a complex pipeline, because it's not that easy all the time — in production I probably don't want to redeploy the master branch on every commit. So you can have a complex pipeline based on stages: I can have a stage in the staging namespace where I deploy my project, see whether everything works, tell QA to test it, and so on, and then move from one stage to another. That usually requires some kind of orchestration, and Kubernetes and OpenShift don't want to be in the business of orchestrating pipelines — we just want to show you what the pipeline looks like, not run the pipelines ourselves. Why? Because Jenkins is really good at pipelines: Jenkins has this pipeline plugin and it just works, so why reinvent something Jenkins already has? We run Jenkins for you, we talk to Jenkins, Jenkins tells us what stage your pipeline is in, and we report that back as build progress. The last core resource I will mention here is templates. What a template does is describe a set of objects as a parameterized list. You can store everything in one big list — all the resources from a project — define parameters, and process the template; when you do, the parameters get substituted into those resources. That way you can distribute your entire stack if you have to: I can give you a template that creates a MongoDB replica cluster, or some application template that defines the deployment configs for the database, the application, and other things. It's a distribution mechanism. Templates are not in Kubernetes — there is a proposal for templates that was accepted in Kubernetes, and of course it will be completely different from what OpenShift has — but it is not implemented yet.
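As a minimal sketch of such a template, with assumed names: it carries one parameterized object and two parameters, one of which is generated when the template is processed.

```yaml
# Hypothetical template: processing it (e.g. with `oc process`) substitutes
# the parameters into the objects list.
apiVersion: v1
kind: Template
metadata:
  name: myapp-template        # made-up name, for illustration
parameters:
- name: APP_NAME
  value: myapp                # default value, can be overridden
- name: DB_PASSWORD
  generate: expression        # generated at processing time
  from: "[a-zA-Z0-9]{16}"
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}         # parameter reference
  spec:
    ports:
    - port: 8080
    selector:
      app: ${APP_NAME}
```

A real application template would typically also carry the deployment config, image stream, and route for the whole stack in the same `objects` list.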
I think Kubernetes is still waiting for somebody to actually do the work. So those were the resources. The other thing is that OpenShift comes with built-in authentication: it has users and groups, and it supports external identity providers such as Keystone, LDAP, basic auth, GitHub, Google, and I don't know how many others. You can configure OpenShift to use GitHub, and every time you log in to OpenShift it redirects you to the GitHub page; you authorize it, come back to OpenShift, and voilà — you have a user account created. So that's identity. Another feature I really like in OpenShift is impersonation: you can act as another user if you want to. You don't have to be a cluster admin all the time; you can act as the admin only when you need to, which is similar to sudo — you can act as a higher-privileged user, if you have access to that, of course. API access in OpenShift is done via OAuth access tokens or client certificates. Then there's authorization. I won't talk too much about this, because the previous presentation basically covered it: you have rules, roles, and bindings. Rules are things like "something can create pods" or "something can create a cron job"; roles are collections of those rules; users and groups can be associated with roles; and bindings are the association between users or groups and a role. As for installation, you can use the Ansible installer: we have a full Ansible installer, it's open source, and you can run it against Amazon, against GCE, against local VMs, against DigitalOcean — it will work without any problem. You can also get an all-in-one VM — a virtual machine you can just pull and run, with OpenShift and everything inside. Or you can use `oc cluster up`, which is our latest and greatest tool.
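The rules/roles/bindings model mentioned a moment ago can be sketched as a role binding. This hypothetical example grants the built-in `edit` role to a user in one project; the user name and project are made up, and the exact field names varied across early OpenShift API versions:

```yaml
# Hypothetical role binding: give user 'alice' the 'edit' role in 'myproject'.
apiVersion: v1
kind: RoleBinding
metadata:
  name: edit-for-alice
  namespace: myproject   # the binding is scoped to this project
roleRef:
  name: edit             # a role = a collection of rules (create pods, ...)
subjects:
- kind: User
  name: alice            # users or groups can be bound to a role
```

Cluster-wide bindings work the same way, just without the project scoping.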
You just run `oc cluster up`: grab the `oc` binary, and it will pull the OpenShift Origin image, check that you have everything on your system, and voilà — you have Kubernetes running in ten seconds. Once you have Kubernetes and OpenShift running, you can create a new project and create a new app; that app will build, we will create a bunch of resources, and you can watch the logs. It will build the application source from GitHub, and when the build finishes you can just go to the app — so in 30 seconds, from nothing, I have my app running: a PHP app with a database. How do you get `oc cluster up`? Go to the releases page on GitHub, download the binary, and run `oc cluster up`. If you have Docker on your system it should work, though you might have to disable the firewall for the Docker daemon. Yeah, demo time — I don't have time for a demo… ten minutes for questions. If there are no questions, I can actually show something. So, thanks. Cool — questions? If you ask me something, or want to see how something works, I can just show it live. Okay, a question: how can you update Origin? So the Ansible installer has a playbook for updating — it's RPMs, so it was basically a `yum install`, and there is a playbook that will renew the certificates, update the configs, and so on. With `oc cluster up`, an update basically means specifying a new image — it pulls the new bits. And if you're just playing with it and you mess it up really badly, there's an uninstall playbook and you can start over. Yeah, the playbooks are really cool. So this is the OpenShift web console I was telling you about, with the ultimate UX experience — a really nice piece of work by Red Hat, because Red Hat is known to suck at UIs, but this one is genuinely nice. So I can create my project — project created — okay, so now I can create
something. So let's say I'm a JavaScript guy: pick Node.js, give it a name and the repository URL, which is some sample app, and create. The application is created, continuous delivery, blah blah — so now the build is running for this application and I can view the log. You can see it's cloning something from GitHub — I don't know how this works with the networking here, but it should work. The other thing you probably noticed is the warning: I don't have health checks defined for my application. So I can add health checks — a readiness probe or a liveness probe. Readiness basically says when the pod should register itself as available to the service, and liveness says whether the application is working properly — if it's not, Kubernetes will shut it down and restart it. Once this build finishes, hopefully, it will automatically kick off the deployment of the application, and the application will deploy. Okay, that was my really quick demo — any questions so far? Yep… can you say that again? I don't know, because I'm not a networking guy, but — sorry, so the question was whether I can have a Kubernetes container with a Macvlan network running in OpenShift. Well, the pod will definitely start; I don't know if it will manage the network, because the network is managed by OpenShift itself — it has OpenShift SDN built in, I guess. Maybe it will work, but I can't tell you for sure; I haven't tried it. The pod will definitely run; I don't know whether it will do anything useful for you. Okay, any other questions? Yep… right now? Oh, this is — right, so this is actually CentOS 7: a machine with plain bare CentOS 7, nothing else, with Docker installed. It should also work on Fedora 25, on RHEL of course, and on Atomic. And no, I'm not using `oc cluster up` here.
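The readiness and liveness probes mentioned during the demo are defined per container in the pod template. A minimal sketch, where the probe paths and port are assumptions, not anything from the demo app:

```yaml
# Hypothetical container spec fragment showing both probe types.
containers:
- name: myapp
  image: myapp:latest
  readinessProbe:          # pod gets service traffic only while this passes
    httpGet:
      path: /healthz/ready # assumed endpoint
      port: 8080
    initialDelaySeconds: 5
  livenessProbe:           # the container is restarted when this keeps failing
    httpGet:
      path: /healthz/live  # assumed endpoint
      port: 8080
    periodSeconds: 10
```

Exec-command and TCP-socket probes are also available for applications that don't expose an HTTP health endpoint.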
This is basically my developer environment; `oc cluster up` will give you the same result. So just to repeat the question: it was about what operating system I'm running OpenShift on. In fact, you should be able to run OpenShift on Ubuntu or Debian too — if it has Docker, you can just grab the `oc` binary and it should work. This version is built from master. Good — any other questions? Okay, well, thank you. No, that's fine, let's take it. So the question: for example, I have a master branch, but I don't deploy master to production — I deploy from some other branch. Can I do that? Yes, you can do it from your branch. And in practice, do you have one cluster where you run QA, staging, and production, or separate ones? That's a good question, and it depends. If you believe everything in the same cluster will work fine, then there's no problem; if you want production physically separated from other things, you can do that too — it doesn't have to affect security. Then you have a single endpoint, the image registry, and promotion goes through the images: an image gets built, it can be signed, and the same image is then promoted to the different environments — from the developer to staging and onward. Personally, I'd rather do it at the level of projects, but it depends; for the very large setups, we don't know yet. Good — thank you very much.