Hello, guys. Please welcome our last but not least speaker in this room today, Peter Schiffer from Red Hat, who is going to tell us something about OpenShift on Google Cloud Platform.

So, welcome everyone. As was said, my name is Peter. I work at Red Hat in the Atomic OpenShift team, specifically in the end-to-end sub-team, where we work mostly on the reference architectures for OpenShift, which is what I will be talking about today. The slides are available on GitHub, if you would like to follow along on your own or check them out later. So let's do this.

Today I will be talking about what we do in our team on the reference architectures: why we are doing it, and mostly how. Now, what is a reference architecture, or what do we want to provide? We would like to provide a highly available, production-quality OpenShift architecture which leverages the native infrastructure of cloud providers. We are working with several cloud providers, specifically those five, and for every cloud provider there is a different reference architecture. All of them are done within my team, and I specifically am working on the Google Cloud Platform architecture.

Now, why are we doing this? I guess mostly because it's fun, and otherwise because you would have to do all of this by yourself. We as Red Hat provide the OpenShift product, which has an installer.
There is a basic and an advanced installer; the advanced installer uses Ansible, and it can pretty neatly deploy OpenShift on something. So the installation of OpenShift itself is pretty well done: it's mostly automatic, and even though OpenShift is a pretty complex application, the installation is pretty easy.

But we have various infrastructure providers. As I said before, there is Amazon Web Services, OpenStack, GCE, Microsoft Azure, and VMware, and every infrastructure has its own specific features, its own ways of doing things. And if you want to deploy a highly available, production-ready OpenShift, you need to configure your infrastructure in some way; because OpenShift is complex, your infrastructure needs to be complex as well.

This is basically the reason why our team was formed almost two years ago: we wanted to fix the issue that it was too complex to deploy OpenShift on some of those providers. So we are working on the reference architecture, which is a kind of guidance; it's really a PDF document which describes the infrastructure of the provider. It shows all the parts which are required to have a highly available, production-ready OpenShift installed, and it guides you step by step through what to do, how to do it, and why.

This document is pretty neat to have, because you don't need to figure out all the stuff by yourself; the providers are really quite complex, and even in our team we worked on the first initial release of the reference architecture for almost a year. So this is great, but it would still leave most of the work to you: you would have to go to the Google Cloud Console and click through all the instances, all the networking; all the configuration you would have to do yourself.
You could use the web interface or the command-line interface, as most providers provide some command-line tools for configuration. To fix this as well, we also provide some code which can do all of this for you automatically. For GCE we currently have a shell script combined with Ansible which, with one command, can deploy all the infrastructure, deploy OpenShift, and do some post-installation configuration tasks for you. Basically, it's done within half an hour or so.

The papers, the reference architectures, are official: they are reviewed by plenty of people and they should be high quality. The code, however, is not officially supported by Red Hat, but we do unofficial support: if there is an issue or a pull request, we review it and so on, but we don't guarantee anything.

All right. This might be the end of the talk, but that wouldn't be so great, so let's make a new agenda and talk about how we provision the infrastructure, about the infrastructure itself, specifically on Google Cloud Platform, how we deploy OpenShift, and how we configure it and do some post-installation tasks.

So, about the infrastructure. Before we try to provision it, let's describe it; let's talk about the stuff we are using in our reference architecture for OpenShift. The simplest, or first, feature I'm going to talk about is Google Cloud DNS, which provides a DNS service for the cloud. A very important thing is that it's scriptable, so you can configure it automatically from the command line. This is one of the cool features of hosted cloud providers, specifically GCE, AWS, and Azure.
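As an illustration of that scriptability, a public zone and a record for the cluster could be driven from a couple of gcloud calls. This is only a sketch: the zone name, domain, and IP address are made up, and the gcloud calls are echoed rather than executed, since they need a real GCP project.

```shell
#!/bin/sh
# Sketch only: zone, domain, and IP are hypothetical. Remove the
# wrapper function to actually run the commands against a project.
gcp() { echo "+ gcloud $*"; }

# Create a managed zone for the cluster's domain.
gcp dns managed-zones create ocp-zone \
    --dns-name="ocp.example.com." \
    --description="Zone for the OpenShift cluster"

# Add an A record for the master endpoint inside a transaction.
gcp dns record-sets transaction start --zone=ocp-zone
gcp dns record-sets transaction add --zone=ocp-zone \
    --name="master.ocp.example.com." --ttl=300 --type=A "203.0.113.10"
gcp dns record-sets transaction execute --zone=ocp-zone
```

The only manual prerequisite, as mentioned in the talk, is pointing the domain at Google's name servers.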
It's pretty good also with OpenStack, but if, for example, you have VMware and you are managing your own infrastructure and you don't have scriptable DNS, it's not possible to configure it automatically; you have to do it manually, because there is no way to make a super-general reference architecture that handles every possible DNS configuration, and we don't want to ship a DNS server as part of the infrastructure, because we know nothing about your environment. With GCE it's pretty cool: the only required thing is that the domain you are going to use needs to point to Google's name servers, and that's it. That is, I think, the one manual task you need to do before deploying the OpenShift infrastructure.

Next, we are using Google Cloud Virtual Networking; without that it wouldn't work, of course. We are using a single network for all instances, but traffic is heavily controlled by firewall rules, which control all ingress and egress for every instance, so every service can communicate only with specific ports on specific instances; we try to be as secure as possible. In GCE, firewall rules require tags: basically, you tag every instance, and the rules are bound to the tags. The firewall rules never need to reference internal IP addresses, because with tags there is no need for them, and if you have multiple instances of the same type, like masters or OpenShift nodes, it's really easy to configure the firewall in a pretty clean way.

We are also using Google Object Storage, where we store the images for the registry. This is supported by OpenShift, so we really just set it up for the OpenShift installer and let it do its thing. And the last, most used part of GCP is Google Compute Engine, which provides virtual machines, disks, and everything related to cloud virtual machines that you would expect. GCE is the most important and biggest part of the reference architecture.
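A tag-scoped firewall rule of the kind described above might be created like this. All the names (network, tags, instance) are hypothetical, and the gcloud calls are echoed rather than executed, since they need a real GCP project.

```shell
#!/bin/sh
# Sketch: "ocp-network", "ocp-master", etc. are made-up names.
# Drop the wrapper function to run the commands for real.
gcp() { echo "+ gcloud $*"; }

# Tag an instance so firewall rules can target it.
gcp compute instances add-tags master-1 \
    --tags=ocp-master --zone=europe-west1-b

# Allow the master API port only on instances carrying the tag.
gcp compute firewall-rules create allow-master-api \
    --network=ocp-network \
    --allow=tcp:8443 \
    --target-tags=ocp-master

# Master-to-master etcd traffic, scoped by source and target tags
# instead of internal IP addresses.
gcp compute firewall-rules create allow-etcd \
    --network=ocp-network \
    --allow=tcp:2379,tcp:2380 \
    --source-tags=ocp-master \
    --target-tags=ocp-master
```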
Now, for the base images of the operating system for every instance, we are using custom images, because even though Google provides RHEL-based images, they don't contain the needed subscriptions, so it's not possible to install OpenShift on them. It is possible to register a Google-provided RHEL instance to your account and attach your subscription, but instances created from the official Google RHEL images are more expensive, so you would end up being double-charged for the one service. So in our reference architecture we use the official RHEL 7 KVM image provided by Red Hat on the Customer Portal, which needs to be downloaded by the customer; it's then used by the shell script that deploys the architecture, so that's the second manual step. We automatically convert and adjust the image so it's accepted by Google Cloud Platform.

Before talking about the other instances used by OpenShift: we use one more instance, called the bastion, which is the only instance with public SSH access; we use it as a proxy to access all the other instances, and we also deploy OpenShift from there. So this instance has two use cases. One is security: no other instance has public SSH access. The second is the installation, where we can easily provide a known environment for deploying OpenShift. We could deploy OpenShift on GCE from the user's workstation, but we don't know what Ansible version is there and so on, so to minimize problems we use a known environment. There will probably be another option later: we would like to ship the installation script with Ansible as a Docker container, so it would be possible to deploy OpenShift from anywhere with a known configuration, but that's still on our to-do list; it's not ready yet.

So, now about the instances. Google provides two neat features. One is the instance template, where you can pre-configure how your instances will look.
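The convert-and-upload step is not spelled out in the talk; the usual GCE custom-image flow looks roughly like the sketch below. The file and bucket names are hypothetical, and the commands are echoed rather than executed, since they need qemu-img, gsutil, and a real GCP project.

```shell
#!/bin/sh
# Sketch: image file and bucket names are made up.
# Remove the wrapper function to run the commands for real.
run() { echo "+ $*"; }

# GCE custom images must be a raw disk named exactly disk.raw,
# packed into a gzipped tarball.
run qemu-img convert -f qcow2 -O raw rhel-guest-image-7.qcow2 disk.raw
run tar -Szcf rhel7-image.tar.gz disk.raw

# Upload the tarball and register it as a GCE image.
run gsutil cp rhel7-image.tar.gz gs://my-ocp-images/
run gcloud compute images create rhel7-ocp \
    --source-uri=gs://my-ocp-images/rhel7-image.tar.gz
```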
In an instance template you can specify the machine type, which covers the number of CPUs and the amount of RAM your instance will have, and also which image will be used to create the instance, how the disks will look, and so on, so you don't need to configure every instance separately. The other feature is instance groups; I'm not aware whether OpenStack has this, and AWS probably doesn't, but it's really nice: an instance group provides a single entity to manage a number of identical instances. To create an instance group, you specify an instance template and how many instances you want; that's the basic configuration, and when you create it, Google Compute Engine will create the required number of instances automatically. It supports health checks, so you can easily get auto-healing, and it's also used for load balancing and so on. For now it's not possible to add additional nodes to an OpenShift cluster deployed from our reference architecture, but later we'll be adding this feature, and we'll be utilizing instance groups: it will be enough to just change the number of instances in an instance group, and the OpenShift cluster will be adjusted automatically. But it doesn't work yet.

Now, probably the most important and hardest part was figuring out networking and load balancing in GCE. From the architecture point of view, if we want to be highly available, we need at least three masters, ideally in different zones. We have two infrastructure nodes, which are like regular OpenShift nodes but run only infrastructure applications, like the router and the registry, and no user applications. And the last part is any number of application nodes, which run the user applications. To make this work, we need to load-balance the traffic for the masters and for the nodes, or routers; only the routers are accessible from the outside network.
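The instance-template and instance-group flow described above can be sketched with gcloud. Names, machine type, and zone are made up for illustration, and the calls are echoed rather than executed, since they need a real GCP project.

```shell
#!/bin/sh
# Sketch: all names and sizes are hypothetical.
# Drop the wrapper function to run the commands for real.
gcp() { echo "+ gcloud $*"; }

# Pre-configure what every application node will look like.
gcp compute instance-templates create ocp-node-template \
    --machine-type=n1-standard-4 \
    --image=rhel7-ocp \
    --boot-disk-size=100GB \
    --tags=ocp-node

# One entity managing three identical instances built from the template.
gcp compute instance-groups managed create ocp-node-group \
    --zone=europe-west1-b \
    --template=ocp-node-template \
    --size=3

# Scaling the cluster later would then be a single resize call.
gcp compute instance-groups managed resize ocp-node-group \
    --zone=europe-west1-b --size=5
```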
Any traffic for an application running in OpenShift has to go through an OpenShift router on the infra nodes. For the masters there are two types of traffic. One is public traffic going from users to the OpenShift console, whether that's browser-based access or the command-line utility; the second kind of traffic is internal to OpenShift, between masters and nodes. OpenShift uses certificate-based authentication, so every node in OpenShift is identified by its own certificate, and the masters can authenticate that certificate and say this is a valid OpenShift node. But those certificates are usually not publicly valid certificates, so if you tried to access the OpenShift masters with those certificates, you would get a warning about an invalid or insecure certificate. To solve this problem we use an SSL proxy in front of the masters for public connections, so there is a single valid certificate facing the public network which the user sees when connecting to the masters.

Then there are two network-based load balancers: one for the masters, for internal traffic, and one for the routers, for public traffic. There are also two registries, running on each infrastructure node, but that service is automatically load-balanced by Kubernetes, so we don't need to care about it. Also, this is the current version of the reference architecture, but in the next version there will be three OpenShift infra nodes, because OpenShift 3.4 introduced a sweet new feature, something like zero-downtime upgrades, and for that feature to work we need three infrastructure nodes so the update can happen without downtime. So that will be changed; we'll just add an additional infrastructure node.

So, let's say we somehow know what we need to load-balance; let's discuss how we will do it. Basically, Google provides four types of load balancing. The first is HTTPS load balancing, which works at the HTTP layer. We originally used that for web access, but we found out that the web console is using
WebSockets, which didn't work that way, so we switched to the SSL proxy. This works pretty fine and it's enough for our needs: there is a single certificate in the cloud platform which is served by Google, the instances don't even know about it, and that's perfectly fine. The two other load-balancing methods are network load balancing and internal load balancing. We use network load balancing for both internal master access and external router access; this is basically because when we were creating the reference architecture, internal load balancing didn't exist yet. We will probably be switching: for internal master access we will probably use internal load balancing instead. The main difference between the two is that network load balancing requires a public IP, which is again fine because we filter all the traffic with the firewall, so there is no real security issue, but internal load balancing will probably be better.

So now you probably think: all right, it's not that bad, it's like four pieces, what can go wrong? Nope. There are a lot of things you need to configure if you want any of this to work. The SSL proxy is a little bit more complicated than network load balancing. Basically, for every load balancer in Google you need something called a forwarding rule, which binds a public IP address to a target proxy or a target pool. The SSL proxy is very similar to HTTPS load balancing: there needs to be some proxy which holds the certificate and terminates the SSL connection, but you can choose whether the connection between the target proxy and the backend services will be encrypted or not. On the master instances we still use HTTPS certificates; they are internal and not publicly valid, but the connection stays encrypted. So the forwarding rule contains an IP address and a target proxy, which represents where the traffic for that IP address should be forwarded.
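Put together, the chain for the master SSL proxy (forwarding rule → target proxy → backend service → instance group) might be assembled like this. All the resource names are hypothetical, and the gcloud calls are echoed rather than executed, since they need a real GCP project.

```shell
#!/bin/sh
# Sketch: resource names are made up.
# Drop the wrapper function to run the commands for real.
gcp() { echo "+ gcloud $*"; }

# Health check so broken masters are taken out of rotation.
gcp compute health-checks create https master-check \
    --port=8443 --request-path=/healthz

# Backend service pointing at the master instance group.
gcp compute backend-services create master-backend \
    --global --protocol=SSL --health-checks=master-check
gcp compute backend-services add-backend master-backend \
    --global --instance-group=ocp-master-group \
    --instance-group-zone=europe-west1-b

# The publicly valid certificate, terminated at the proxy.
gcp compute ssl-certificates create master-cert \
    --certificate=master.crt --private-key=master.key
gcp compute target-ssl-proxies create master-proxy \
    --backend-service=master-backend --ssl-certificates=master-cert

# A *global* IP (SSL proxy needs global, network LB needs regional),
# bound to the proxy by the forwarding rule.
gcp compute addresses create master-ip --global
gcp compute forwarding-rules create master-rule \
    --global --address=master-ip \
    --target-ssl-proxy=master-proxy --ports=443
```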
In the case of the SSL proxy, that needs the configuration of an SSL certificate and of a backend service, which represents the instance groups, or instances themselves, that will accept the traffic; network load balancing doesn't have the proxy, just a pool of instances. But that's still not everything, because you need to think about the level of the resources you are using. Resources in Google Cloud Platform can exist at three levels: global, which is for everything; regional, which is local to a region, where a region is something like Europe, America, Asia, and so on; and zonal, where a zone means, I guess, a data center. So you need to know that for the SSL proxy you need a global IP address, but for network-based load balancing you need a regional IP address, and so on; it's kind of hard to put it all together. There are also, I think, required health checks for the backend services: when a health check fails, the traffic is not forwarded to the broken instance. For now we are not using auto-healing, but that will come later, together with the option of adding additional nodes.

And basically that's it about the infrastructure. Once you configure all of this, you can deploy OpenShift, and compared to the infrastructure it's really easy, if your infrastructure is running. We use the advanced installation for OpenShift, which uses Ansible directly, and you can modify the parameters for Ansible and the advanced installation. Do you all know Ansible? Please put your hand up if you have worked with Ansible. Nice, almost everyone. So I guess you know that there is a feature called dynamic inventory. There are two kinds of inventories, static and dynamic: a static inventory is a text file which contains all the hosts, specified manually, while a dynamic inventory is basically a script which can query something to provide the instances, and groups of instances, based on some specific criteria.
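As a small illustration of the dynamic-inventory idea: Ansible at the time shipped a `gce.py` inventory script, which exposes each GCE tag as a group named `tag_<tag>`. The tag name and playbook path below are illustrative, and the commands are echoed rather than executed, since they need Ansible plus GCE credentials.

```shell
#!/bin/sh
# Sketch: assumes Ansible's gce.py dynamic inventory script and a
# hypothetical "ocp-master" tag. Drop the wrapper to run for real.
run() { echo "+ $*"; }

# Dump everything the inventory script knows about the project as JSON.
run ./gce.py --list

# Ping only the instances tagged ocp-master, via the tag_<tag> group.
run ansible -i gce.py tag_ocp-master -m ping

# The OpenShift advanced installer is then pointed at the same inventory.
run ansible-playbook -i gce.py playbooks/byo/config.yml
```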
Ansible supports a dynamic inventory for GCE, and it can return all instances, and groups of instances, by tag. We appropriately tag every instance as a master, infrastructure node, or application node, so the advanced installation can automatically recognize all the instances and can pretty easily deploy the master services to the masters and so on. So basically that's it: we just provide some configuration file for the installation; there is nothing special about it.

The last part of the reference architecture is the configuration of OpenShift and some post-installation tasks, which are all done with Ansible and which contain some configuration specific to GCE. For example, storage: OpenShift supports GCE persistent volumes, so we just configure that. We slightly adjust the firewalls on the instances, mostly for the health checks; that's not done automatically by the OpenShift installer. We set up some quotas and deploy the registry, which is also a default part of the OpenShift installer. We also do a deployment validation: we try to create a project in the deployed OpenShift, deploy some application, wait for it to come up, and check that it works correctly; then we delete it. So basically, once the script finishes, you have OpenShift installed and you know that it is working; if not, you will see some error messages that you need to debug yourself, but at least you know that something is not working. The validation also tests builds, so the registry and the build service are tested and validated as well.

If you want to try it, for now you need a Google account, which is basically free, and access to the Google Cloud Platform; when you try it for the first time, I think you get something like $300 of credit for the trial, so you don't need to pay just to try it. You also need a Red Hat account, for the Red Hat Enterprise Linux and OpenShift subscriptions, but we will also be working on a version with CentOS and OpenShift Origin; it's not ready yet.
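The deployment validation just described could be reproduced by hand with the `oc` client along these lines. The project name and sample application are invented for illustration, and the commands are echoed rather than executed, since they need a running OpenShift cluster.

```shell
#!/bin/sh
# Sketch: project and app names are hypothetical.
# Drop the wrapper function to run against a real cluster.
run() { echo "+ $*"; }

# Create a throwaway project and deploy a sample app; building from
# source also exercises the build service and the registry.
run oc new-project validation-test
run oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

# Wait for the rollout, then check the pods.
run oc rollout status dc/ruby-ex
run oc get pods

# Clean up once everything looks healthy.
run oc delete project validation-test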
Again, it's something on our to-do list, but we definitely want to provide that option as well. Now, if you have all these accounts, it's really easy. Well, first you need the gcloud utility on the workstation from which you will be running the deployment. It's pretty easy to install and configure: you can install it the famously insecure curl-pipe-to-shell way, or there is also a yum repository, and an apt/PPA repository for Ubuntu, so you can do it that way as well. Once the gcloud utility is installed and configured (gcloud init), you just clone our repository with the reference architecture scripts, change into the directory, configure one file, and run the script.

The configuration file contains plenty of options; most of the reference architecture is configurable, meaning you can rename everything as it's called in the provider, change the sizes of the disks, change the number of instances, and so on. You can also configure the region and zone where you want your OpenShift to run. The deployment script also supports a teardown option, which removes everything that the script created in the cloud, so you can test it, and if something is not working you can remove everything and start the deployment again.

I have four minutes, so let me just show you the configuration file. This is the repository with the reference architecture stuff; all the scripts for all the providers are there, and there are also links to the PDF papers. Microsoft Azure is probably not published yet, but it's very close; the code is already there. The reference architecture for OpenStack is somewhere else, I don't know exactly where, because within that reference architecture they are also working on better integration of OpenStack with OpenShift; I think it's in their repository. The GCE script and configuration are here, and the file looks like this. Basically, this is the most important part: the configuration which you basically need
to adjust. There is the path for the RHEL guest image; the configuration for the RHEL subscription and the Red Hat account; the gcloud project and zone; and the DNS configuration. You can also provide your own HTTPS certificate; if you don't provide any, a self-signed one will be generated automatically, so you can just try it out. The OCP identity providers option configures the authentication service: by default we use the Google identity provider, but you need to specify your own IDs and secrets, and you can basically change it to any authentication, like GitHub, or if you are using some local identity provider service you can use that here. The rest of the file contains defaults; almost everything can be configured, whether it's the size of the instances, the names, the number of instances, and so on. All the firewall rules are also specified here, if you need to modify or adjust them. Basically, that's it. So, do you have any questions?

The question is whether the reference architecture is supported or not. The code, the scripts, are not officially supported; we provide them as guidance and you can use them at your own will. For the paper, I'm not really sure, but I think there is also a note that it's guidance you can take inspiration from. If there is a problem with the reference architecture, you can open a case for it, but there are no SLAs, because we don't have the cloud provider under our control and it can change at any time, so we can't say it's supported.

The next question is about HA for the masters. Yes, we use native HA; before, you had to use HAProxy, but not anymore, and it works pretty fine. The masters are not yet spread over multiple zones on GCE; they are for AWS, but there was a bug in Kubernetes which prevented us from using multiple zones. This is fixed in 3.4, so we will be updating the reference architecture; the
support is almost ready for it, just a couple of variables in the script, but we couldn't use it until 3.4 was out. Any more questions?

The question is about testing, performance, and validation of this reference architecture. We added, and are still adding, continuous integration for the reference architectures, so the scripts are automatically tested every time you push. AWS was first, and I think GCE is already added, or is being added. Performance we don't test ourselves, but we have a sub-team which works on OpenShift performance in general; we work with them, they review our stuff, and we follow their guidance.

Any other questions? The question is about the most mature reference architecture. The reference architecture for AWS was kind of the first one, and most of the other architectures follow it. Every architecture is a little bit different, but I guess the best-integrated architecture is definitely the one for OpenStack, because that's also our product and we can do much more to integrate the services. For example, on OpenStack, OpenShift uses OpenStack networking, so there is no virtual networking on top of virtual networking, because OpenShift has its own SDN, software-defined networking. Basically, whatever the cloud provider offers that OpenShift can use, we try to use: for example the object storage and things like that, and in the end we would like to improve the networking integration as well, but there are a lot of limitations at this time. As for which reference architecture is best: really, if you can have your own OpenStack, that's probably the best solution for your company. For the public providers there are no big differences, but it's true that the reference architecture for AWS gets some features sooner than the others; for example, the adding-nodes feature is already present
in the reference architecture for AWS, and for GCE it will be added later. It really depends; for example, for Microsoft Azure it's possible to upload a file which will be displayed in the web interface, and you just click that you want to deploy it, and the infrastructure is deployed based on that file, from the graphical interface. But again, this is a feature of the cloud provider, and not all of them provide it, so it varies. Any more questions? If not, thank you very much.

That's everything; we have had talks in all the devrooms. The schedule is ready, it's on the web and in the Android app, so check it and go take a look. The party at Fleda starts at half past seven, but if you haven't got a ticket so far, I think you are out of luck.