Right, let's start. My name is Alexander, I work at Pivotal, and this is Chen from VMware. We will talk about using CFCR to run your Kubernetes clusters. This is the most important slide you'll remember from today, because you'll see it on all the presentations. But why are we here? Who here has deployed something using BOSH? Please raise your hands. Who hasn't deployed? I see lots of you have, so I expect all of you agree that BOSH is awesome for deploying things; it's an awesome deployment system. And the question is: is Cloud Foundry Container Runtime just another BOSH deployment, so that if you know how to deploy with BOSH, you can deploy it? And the answer is yes, it's just another BOSH deployment, a BOSH-deployed service that uses some extra BOSH features.

So I wanted to start, because this is the first session of the day, by talking a little bit about why we're even here talking about Kubernetes. What does Kubernetes give you in addition to your existing Cloud Foundry infrastructure? There are a couple of things. The abstraction that Kubernetes builds is fundamentally at a lower level than what you get from the Application Runtime. It deals with pods, which are essentially just colocated containers, and that means you can run essentially any existing Docker container you have. So if you have some functionality that is already easily represented as a Docker container, Kubernetes makes it very easy to run it, and it handles replication and scaling of those containers automatically for you. Kubernetes is also a little bit more tied into the underlying infrastructure: for example, if you have persistent disks in your vSphere infrastructure that you want to use with your workloads, Kubernetes makes that integration easy. And last, Kubernetes supports rolling updates for your workloads. It makes it
very easy for users to access all of these features. So that's why we're here to talk about Kubernetes. But the problem with giving you all this power and these low-level abstractions is that Kubernetes is very difficult to install in a production environment. It's not hard to get started with: probably a lot of you have tried something like minikube, where you have Kubernetes running on your local machine and you don't really need any infrastructure to bring up a cluster at all, and that's really easy to get started with. But if you're talking about running Kubernetes in an enterprise environment or a production environment, then there's a lot more you have to think about, a lot of considerations you wouldn't have in a development environment.

The first thing I'm going to talk about is security, because Kubernetes is kind of meant for you to move fast, to get started very quickly, which means that some of the defaults they've chosen are not the best for production. These are some quotes taken directly out of the Kubernetes documentation. You can see that by default the kubelet is unsecured. So on any worker node you have, if someone were to gain access to that network and were able to reach that node, they could run whatever workload they want on it. You also have to think about things like user authentication. Here's basically a list of the security considerations you have to worry about when bringing up a production Kubernetes cluster. If you want to have actual users, you need to give Kubernetes an external authentication provider. So how are you going to do that? How are you going to get those components talking together?
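As a concrete illustration of the kind of hardening this involves, here is a sketch using the upstream kubelet and kube-apiserver flags; the flag names are real, but the hostnames, paths, and client IDs are placeholder assumptions for your environment:

```shell
# Sketch: closing the unsecured-kubelet default and wiring the API server
# to an external authentication provider. Values are placeholders.

# kubelet: refuse anonymous requests and require a client CA
kubelet \
  --anonymous-auth=false \
  --authorization-mode=Webhook \
  --client-ca-file=/var/lib/kubelet/ca.pem

# kube-apiserver: delegate user authentication to an OIDC provider
# (e.g. UAA or Dex; issuer URL and client ID are assumptions)
kube-apiserver \
  --oidc-issuer-url=https://uaa.example.com/oauth/token \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email
```

In a BOSH-deployed cluster like CFCR, flags like these are rendered for you from release properties rather than set by hand.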
How are you going to tie that into your authorization and governance infrastructure, so that those users can only access the resources they're supposed to? You have to worry about certificate management, because Kubernetes is decentralized and modular, and ideally all the pieces should be communicating using mutual TLS, which means you need to manage certificates. You have to worry about securing the kubelet. You have to worry about securing etcd, which is the brains of this whole operation. You have to worry about credential rotation for your service accounts. And you also have to worry about securing things like the dashboard, which people often forget, and which has led to hackers running cryptocurrency miners in people's infrastructure. None of these problems is really insurmountable; they're just additional factors you have to consider when what you really want is just to get a cluster up and running.

In addition, as I mentioned before, Kubernetes is modular: the pieces are meant to work together, but they can operate without the presence of each other. That means, for example, that the kubelet processes on your worker nodes will pretty much run no matter what you do, and if they're running and you get them talking to your master, you'll be able to schedule a pod onto that worker. But just because you've done that successfully doesn't mean that all of the workload types are going to work for you. It doesn't mean that just because you've deployed a single pod, you'll be able to deploy a DaemonSet or a CronJob, or set up some custom resource in your cluster and expect it to work. These are all things you have to consider, and it makes it a little bit difficult to test, because how can you be confident that you have a conformant cluster? And even if you
have a conformant cluster up front, how do you know that cluster is going to continue operating the way you want after you take down, for example, some of your workers for maintenance? How are you confident that your kubelet will come back up successfully, or that your cluster will survive an upgrade? So again, a lot of things to think about when deploying Kubernetes in production, because it's essentially infrastructure, and that means you have to think about all the ramifications that come with infrastructure.

And last, you throw networking into this picture and it gets a lot more complicated. If you want a resilient cluster, that means you need highly available masters, and Kubernetes gives you nothing built in for master high availability. Which means you need to go and set up your own load balancing solution, or, if you're on some sort of cloud infrastructure, set up an external load balancer. This quote and this diagram are again taken directly from the Kubernetes documentation: this is what they suggest you set up by yourself if you want a resilient, highly available master. That's a lot of work. And then you throw mutual TLS into all of this, and you throw in DNS, and then you have to worry about this situation: if you have that load balancer sitting in front of your masters, your workers need to talk to the master nodes through that load balancer. How are you going to ensure that your masters present an SSL certificate that will be accepted by the workers? What if you need to rotate that certificate over time, or the IP address of your load balancer changes, or its DNS name changes? How are you going to ensure that all these certificates are accepted by both the masters and the workers?
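To make the certificate problem concrete: the master's certificate needs Subject Alternative Names covering every name and address the workers might use to reach it, including the load balancer's. A minimal sketch with openssl (all names and IPs here are placeholder assumptions):

```shell
# Sketch: self-signed master certificate whose SANs cover the internal DNS
# name, the load balancer's DNS name, and its IP, so workers connecting
# through the LB still trust it. Requires OpenSSL 1.1.1+ for -addext.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout master.key -out master.crt \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:master.cfcr.internal,DNS:lb.example.com,IP:10.0.0.10"

# Verify which names the certificate is actually valid for:
openssl x509 -in master.crt -noout -text | grep -A1 'Subject Alternative Name'
```

If the load balancer's IP or DNS name changes, this certificate has to be reissued with the new SANs and redistributed, which is exactly the rotation burden described above; CredHub, which we'll get to, automates that.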
So, like I said, none of this is impossible; it's just a lot to think about. So what's the solution to all this? This is how Pivotal and Google started collaborating on it. BOSH has super features that let us do this with repeatable deployments; with BOSH it's much easier to test everything, and resurrecting VMs and so on are all great features of BOSH. But today I want to talk specifically about two relatively new features, about a year and a half old, that BOSH provides, which allowed us to build Cloud Foundry Container Runtime and to manage multiple clusters.

One of them is BOSH DNS. It allows us to take the load balancer out of the equation: BOSH DNS provides easy master HA. You just specify an alias, and it works kind of like an internal load balancer; if a master goes down, it gets removed from the DNS record. With this you don't need an external load balancer for internal communication. If you need to talk to the cluster externally you can still have one, but you don't need it, and more importantly, the operator doesn't need to configure anything. It works the same for every cluster: it provides discovery of etcd, and discovery of the masters from the workers, and it just works the same way on different clusters.

Additionally, we need to generate certificates, and CredHub really solved all of those problems. You are able to generate certificates for all the components, provide mutual TLS, and generate all the passwords, especially the ones you don't really care about because they're only required for internal communication. It significantly simplifies multi-cluster management, because the operator doesn't need to think about it: they have one manifest and only have to change the deployment name, and that's it. And I will show how that happens right now. So I'll just deploy a CFCR cluster. To be honest, I already deployed it.
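To give a feel for the BOSH DNS alias idea just described, here is a sketch of a manifest addon based on the bosh-dns-aliases release; CFCR's actual manifest may structure this differently, and the deployment and network names are assumptions:

```shell
# Sketch: group every instance of the "master" instance group under one
# internal DNS name, so workers and clients can always reach a healthy
# master without an external load balancer.
cat > dns-alias-snippet.yml <<'EOF'
addons:
- name: bosh-dns-aliases
  jobs:
  - name: aliases
    release: bosh-dns-aliases
    properties:
      aliases:
      - domain: master.cfcr.internal
        targets:
        - query: '*'            # resolve to all healthy instances
          instance_group: master
          deployment: cfcr       # assumed deployment name
          network: default       # assumed network name
          domain: bosh
EOF
```

Because BOSH DNS only returns healthy instances, a failed master simply drops out of the answer set, which is the "internal load balancer" behavior described above.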
So we'll have time to see some other things. I take the sample manifest from kubo-deployment; it's the manifest provided by CFCR. This is a minimal manifest. You can deploy it as-is and it will work fine; if you don't need any cloud features, it will work fine. If you need some integration with the cloud, like attaching disks or load balancers, you need to provide settings for the so-called cloud provider, and that's why I add this additional ops-file. One more thing: colocated errands. There will be a whole session about colocated errands; what you need to know here is that they will speed up your deployment. Then, because I want this cluster to be accessible externally, I will add the load balancer IP to the certificate, add the masters to this load balancer, and provide some variables. Those variables are required to tag the VMs, so the cluster knows that these VMs are part of this cluster and can attach disks and so on, and they are required to talk to GCP. I hit enter and it runs. It will take about one minute, and as I said, the first part of the manifest works for any cloud. When CFCR says they're adding support for different clouds, it just means cloud provider support for those clouds.

Now, every Kubernetes developer or operator expects some base workloads to be running on a vanilla cluster. One of them is kube-dns, which enables service discovery inside the cluster; another is the Kubernetes dashboard, so you can see in a pretty UI what's happening with the cluster; and Heapster for metrics. This is what these errands do, and because I use colocated errands, they run very quickly, because they run right on the master VM. It's already finished; now it's fetching the logs and printing them out. I'll show how it works, just the relevant parts of the manifest: I added one job to the master and removed it as a separate instance group. And this is the ops-file for the load balancer: I just add a VM extension and an IP, just an IP in a variable. Now I want to connect to the cluster.
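Pulling the deploy step just described together, it looks roughly like this; the ops-file paths are assumptions modeled on the kubo-deployment repository layout, and the variable names are placeholders:

```shell
# Sketch of the demo's deploy command: minimal CFCR manifest, plus ops-files
# for the GCP cloud provider, colocated errands, and an external load
# balancer. Paths and variable names are assumptions.
bosh deploy -d cfcr kubo-deployment/manifests/cfcr.yml \
  -o kubo-deployment/manifests/ops-files/iaas/gcp/cloud-provider.yml \
  -o colocated-errands.yml \
  -o add-load-balancer.yml \
  -v deployment_name=cfcr \
  -v project_id=my-gcp-project \
  -v kubernetes_master_host=lb.example.com
```

The base manifest is IaaS-agnostic; only the cloud-provider ops-file and its variables are GCP-specific, which is the point made above about supporting different clouds.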
All my credentials are in CredHub, and I need to get them from CredHub. So here I get the client secret, connect to CredHub — it's connected — get the token that is used to authenticate, get the CA certificate, and set up my kubeconfig. With that kubeconfig I can run kubectl, for example kubectl get nodes, and I can see the cluster is up and running. It's available externally, so I'll switch to my local machine, and I can run kubectl get nodes there as well; it shows the same nodes. So what can I do? I can deploy some workload: kubectl apply with a file with persistent volumes — it creates a disk — then create some deployment, and it's up and running. There's also kubectl proxy: as I said, the cluster has the dashboard, so I can access the dashboard from my local machine. It's loading... okay, it says I need to log in, and I need to provide my kubeconfig file. I provide it, and I can sign in and proceed. So even if your dashboard is exposed, you still need credentials to access it. It's deploying; you can see that I started deploying this guestbook, and something is deployed. If I refresh, it will update... I hope. Yeah, everything is up and running. So the cluster is up and running. I'll go back to the remote VM and run kubectl get service frontend, so I know how to connect to it. And let's go: the guestbook is up and running on a public IP address. So you can — hello, CF Summit — you can connect to it, but please don't, because... don't steal the internet. You can do it afterwards. So: I deployed a cluster, I used CredHub to get credentials, I deployed vanilla CFCR, I added some ops-files just to speed up deploying the errands, and I got access to it.
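The access steps in that demo can be sketched like this; the CredHub credential paths, the kubo-admin-password token scheme, and port 8443 are assumptions modeled on common CFCR setups, and jq is assumed to be installed:

```shell
# Sketch: fetch cluster credentials from CredHub and build a kubeconfig.
# Paths follow the /<director>/<deployment>/<credential> convention.
credhub login
token="$(credhub get -n /bosh/cfcr/kubo-admin-password --output-json | jq -r .value)"
ca_cert="$(credhub get -n /bosh/cfcr/tls-kubernetes --output-json | jq -r .value.ca)"

# Point kubectl at the master and authenticate with the token
kubectl config set-cluster cfcr \
  --server=https://master.cfcr.internal:8443 \
  --certificate-authority=<(printf '%s' "$ca_cert") --embed-certs=true
kubectl config set-credentials cfcr-admin --token="$token"
kubectl config set-context cfcr --cluster=cfcr --user=cfcr-admin
kubectl config use-context cfcr

kubectl get nodes
```

The important property is the one from the talk: the operator never needs to keep these credentials on their machine; they are generated, stored, and rotated in CredHub.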
It's working, but in real life you probably don't want to expose your masters to the internet. You want them available only on the internal network, and you need some kind of jump box to access them. That's what we'll do right now: I'll show several possible patterns you can use to connect to a cluster that is deployed on a purely internal network. So I come back here, and I deploy a second cluster: the same CFCR manifest, the same cloud provider, colocated errands, expose-links (I will explain later why), a renamed deployment, and nothing about a load balancer. So it's not behind a load balancer, it's not available externally, and basically I won't be able to access it at all, except by going in somehow from the internal network. So I run the deploy, and I deploy a jump box separately, with the same BOSH director. While it's fetching logs, I'll show expose-links: it basically exposes the links, making them available across deployments. I also deployed the jump box; I deployed everything in just half an hour, it's not that much. In the manifest of this jump box I have BOSH DNS, to be able to use the same service discovery but from a different deployment. That's why I had those links shared: those links provide DNS discovery using the Kubo DNS aliases, so I can connect to the cluster with the same DNS name from this jump box. Then kubectl is one of the jobs: I wrote this tiny job to show how you can access the cluster from the jump box. And Squid is a simple HTTP proxy, nothing more interesting. So again, the same thing: connect to CredHub, and as you can see it has two sets of credentials, one for the cfcr deployment and another for the cfcr-staging deployment that I just deployed. These were generated automatically; I haven't done anything, and in cfcr-staging there are lots of credentials that I basically don't care about. If I need to, I can rotate any of them, but as an operator I don't need to have them on my computer at all. I'll get the token.
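As an aside on the expose-links mechanism just mentioned: BOSH supports cross-deployment links, where a job in one deployment consumes a link that another deployment provides (the provider must mark the link as shared). A sketch of the consuming side — the job and release names here are hypothetical, invented for illustration:

```shell
# Sketch: a jump box deployment consuming a link exported by the cfcr
# deployment. "kubectl-access" and "jumpbox-extras" are hypothetical names;
# the cross-deployment "consumes: { from, deployment }" shape is real BOSH.
cat > jumpbox-consumes.yml <<'EOF'
instance_groups:
- name: jumpbox
  jobs:
  - name: kubectl-access        # hypothetical job
    release: jumpbox-extras     # hypothetical release
    consumes:
      kube-apiserver:           # hypothetical link name
        from: kube-apiserver
        deployment: cfcr        # the link comes from the other deployment
EOF
```

This is what lets the jump box reuse the cluster's DNS aliases and connection details without duplicating any of them in its own manifest.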
I'll get the CA certificate, and now I'll try to connect with this CA cert and this token to master.cfcr.internal, which is the DNS alias for this cluster. And unfortunately it doesn't work, because I actually need the jump box. So I'll connect to the jump box with the script, get the jump box API address, and through that jump box I'll try to get some information. I got those nodes, as you can see. Then I'll deploy a simple nginx application that I got from CFCR, and just wait for it to finish... and it's deployed. Sometimes it takes time to download images, but I've tried this several times, so all the images are already there; that's why it deployed so quickly. Now I need to wait until the load balancer is up, but unfortunately it still says "ensuring load balancer", so it's still starting, and I need to wait for some time. I wanted to show you how we can access the cluster from this jump box — and maybe for you it won't be a jump box; maybe it will be some VM in your CI, for example a Concourse worker. So I'll just connect to this jump box over SSH. What's important: from it, I can obviously run nslookup on master.cfcr.internal, and it returns three masters; they are advertised. So we have a chain. Then I go to /var/vcap/jobs and run the kubectl job: kubectl get service again, and I can see my nginx is up and running. And this kubectl job is simple: I have a token, I have the certificate authority, and I can even show you the token, because this cluster is not available externally, so you won't be able to connect to it — but we can connect to it through this jump box. Okay, so let's recap a little bit.
I deployed a cluster without external access, deployed a jump box, and showed how you can access the cluster: you can use the HTTPS proxy, or you can create your own job that runs and executes commands on that cluster directly, so developers won't even need to know anything about kubectl, and your workloads can still be made available externally.

That's what we wanted to show you. There are many more things, and there will be more talks on this today and tomorrow. The first important link is the manifest for CFCR — stick to released versions if you want tested versions. The next link is the demo, and then a link to my jump box deployment, so you can see how it works with links. I hope that today you learned that with this approach Kubernetes is relatively easy to deploy, and if you want, you can use it; if you want to deploy it manually, it's much, much harder. Thank you all. And we need you — we, the CFCR team, need you — to help us collaborate better between Cloud Foundry Application Runtime and Cloud Foundry Container Runtime, and to connect other Cloud Foundry services to Container Runtime so it can get better. Thanks. Questions?

The question was whether we allow the user to provide a custom cloud provider, or whether they have to configure it every time. This cloud provider is provided by Kubernetes, and basically the operator needs to configure it: if you want to access the cloud, you need to configure something, you need to have credentials — which are set using cloud-config in BOSH — and you need to set some parameters. In the future, the long, long-term future, what we want is a BOSH cloud provider: we want Kubernetes to talk to BOSH and say "please give me a disk", and BOSH will create the disk and attach it to the node, and it will work; or "BOSH, give me a load balancer". That might potentially happen; there are some notes about this on the BOSH side.
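For now, what the operator configures boils down to something like the Kubernetes GCE cloud-config file, an INI file passed to the components via --cloud-provider=gce --cloud-config=...; the keys below are from the upstream GCE provider, while the values are placeholders (in CFCR they come from the cloud-provider ops-file variables):

```shell
# Sketch of a minimal GCE cloud-provider configuration. Values are
# placeholder assumptions for your project.
cat > gce.conf <<'EOF'
[Global]
project-id = my-gcp-project
network-name = cfcr-network
node-tags = cfcr-worker
EOF
```

The node tags are what let the cluster identify which VMs belong to it, which is what the variables in the earlier deploy were for.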
So in the future you'll use BOSH built-in settings and you won't need to configure this cloud provider at all, but for now, yes, you need to configure it and set some settings. There is a limited set of settings; they're all in the documentation.

Yes, next question. Right now the project does not support it, I think. Theoretically — I haven't tried, and it's not officially supported — it's possible, because of links, if you deploy etcd with BOSH separately: you can get access to that etcd from the master, and it doesn't need to be colocated; it can be external, or it can run on a separate VM. But I don't think it's officially supported, so there might be some issues; you'd have to do it manually and check manually. If something doesn't work, the team will help you, and I suppose they will solve it.

The question was: can you scale to three masters, given that CFCR comes by default with one master? In version 0.16, the last released version, it comes with one master. With version 0.17, which is not released yet, I don't know for sure what the defaults will be, but it's possible to use three masters, and that's what I've done here: from this jump box, if I look again, I have three masters, and it works. But the work on three masters is not finished; the team is still working on some caveats and edge cases they're covering.

Yes, another question: how does the DNS work? We use BOSH DNS. I'm not completely sure how it works internally, but I know that we have DNS daemons that run on the local VM, and clients actually talk to the local VM to get DNS records.
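You can see this local-resolution behavior directly on a BOSH-deployed VM; as a sketch, assuming the conventional bosh-dns listen address of 169.254.0.2 (an assumption worth verifying against your director's configuration):

```shell
# Sketch: on a BOSH-managed VM, bosh-dns answers locally.
# Query the local bosh-dns server for a cluster alias directly:
dig +short master.cfcr.internal @169.254.0.2

# /etc/resolv.conf on the VM points at the local bosh-dns server,
# so a plain lookup resolves the alias too:
nslookup master.cfcr.internal
```

With three masters deployed, a query like this is what returned the three addresses shown in the demo.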
I suppose the director has some information about which VMs are up and which are down in particular deployments, but I don't think there's anything like querying the director at resolution time; I suppose it all runs locally. I don't know the details; if you really want to find out, there will be BOSH office hours after this talk — come and ask Dmitri. I'm not sure if it's well documented right now. Come and ask Dmitri. Yes.

Okay, the question: why do you want to run Kubernetes with Cloud Foundry? There are two parts. One part is why you want to run Kubernetes at all — I'll answer that in a moment. Why do you want to run Kubernetes alongside Cloud Foundry? If you have Cloud Foundry, you probably want your operator to have the same tooling to operate your other deployments, so that's kind of obvious. But why do you want to run Kubernetes? There are several use cases. One of them is disks, persistence: it's supported in Cloud Foundry, but as far as I know Kubernetes has more configuration options for it. Then extensibility, which is a very big part: it's really easy to extend Kubernetes, and some developers are used to Kubernetes and prefer it to Cloud Foundry. There are some legacy workloads that are very hard to run on Cloud Foundry and very easy to run on Kubernetes. There are some patterns, like running sidecars, and, for example, specific routing: if you want specific routing, you have to go to Kubernetes; you can't go into Cloud Foundry and change your routing. UDP: you can do it on Kubernetes; you can try on Cloud Foundry. GPUs: probably with new stemcells we'll be able to run GPU workloads with Kubernetes; I'm not sure what will happen with Cloud Foundry, I don't know. So those are some use cases. Yes, one more question. The question was about IPv6 and persistence: have we run into any issues with IPv6 and persistence? I don't know; as of right now
I'm not part of the CFCR team, and I don't know what the issues with IPv6 are or what the current state is. For persistence, I know that there are many easy ways to shoot yourself in the foot and make it not work and break it. There are so many configurations; we covered most of them, but some of them just don't work, because we don't know about those specific configurations.

Yes, another question. Okay, so the question was: do we want to have an ingress controller embedded in the Cloud Foundry Container Runtime? I don't think there's a push to get it embedded into Cloud Foundry Container Runtime. You can certainly run one on your own: you can deploy it with Kubernetes, and then Kubernetes will handle it and watch that it's alive, with health checks, that it's working. I don't think there's a plan for it to be part of what the Cloud Foundry Container Runtime team will do, but if someone else does it, they will support it. That's the thing — this is why we said we need you; we need you to help us deliver such features. Questions? No more questions. Thank you. I'll be at the Pivotal booth after lunch, so come if you have any questions. Thank you.