Welcome, everyone. Am I live, Jim? Well, I think I am now live. So welcome, everybody. Welcome to the 101 track for Cloud. Our first presentation as part of this introductory 101 track will be "Kubernetes: The Final Frontier," and if you're a Trekkie like myself, I'm sure you're going to be excited. So with that said, let me turn this over to Amanda, who will be presenting this tutorial. Take it away. Kubernetes: the final frontier. This is an introduction to Kubernetes tutorial. The continuing mission: to explore the strange new worlds of microservices, containerization, and their management. To seek out new skills and new adventures. To boldly go where no one has gone before. I'm so happy you're here to join me to learn more in this tutorial, and for being here at Open Source Summit. This is my first time at Open Source Summit, and I'm super excited to be doing this tutorial with you all, especially around Kubernetes. A little bit more about what we're actually going to be talking about: we're going to be doing "Kubernetes: The Final Frontier," and I'm Amanda Moran; you can also follow me there on my Twitter. On our next slide is a video I'd just like to share with you all, to get us all on the same page and excited about learning about Kubernetes. [Video plays] "Kubernetes: the final frontier. This is an introduction to Kubernetes tutorial. The continuing mission: to explore the strange new worlds of microservices, containerization, and their management. To seek out new skills and new adventures. To boldly go where no one has gone before." Alright, awesome. I hope you all got as jazzed up by that as I did. I just love hearing that song; not so much me narrating, but I love that song. Alright, so let's get into it. Enough fun for now, but I hope we do have a lot of fun learning today. So this is the agenda for our tutorial. We're going to do "What is Kubernetes?", right?
This is a very introductory look at Kubernetes, so we're going to talk about that. We're going to talk about why it's so popular, and we're also going to learn about why this should be of interest to you. If you're not a Kubernetes developer right now, it's obviously important to them, but why should it be important to you if maybe you're a developer, or you're in machine learning, data science, or DevOps? Why should you want to learn about it? Then we're going to go over the architecture of Kubernetes, and we're going to talk about what a pod is and what a deployment is. Then we're going to get into the hands-on tutorial portion. We're going to have a little bit of time to install Minikube; that's what we're going to be using, because it's what you can install right on your laptop, and then we can just jump in and start with the hands-on tutorial. Minikube is really awesome for that. We're going to deploy a simple application, then we're going to deploy another simple web application. We're going to learn a little bit about high availability and scalability, and I'm going to point you to some resources where you can learn more after this, because we only have a limited amount of time together, so you're not going to be able to learn everything: where to go next to learn more. So just a little bit about me, to introduce myself to you all. Like I said, I'm Amanda. I'm a Bay Area-based software engineer slash solutions architect; I like to think of myself as really both. I love working with customers and helping them on their journey with whichever distributed system they're using, be it Kubernetes, be it something like Apache Cassandra or Apache Spark. That's where I'm really passionate: helping folks get onboarded with different technologies.
So like I said, I have a background in helping customers with machine learning, analytics, and various different distributed systems. I've worked at a variety of companies, both big and small. I've worked for companies like HP and Teradata, very big companies, and I've worked for a 30-person startup from day one, and also some other small and mid-sized startups as well. I'm actually an Apache committer and a PMC member for Apache Trafodion, which is a SQL-on-HBase solution, a way to query HBase via SQL. But what do I love, other than my passions around helping customers and training and tutorials and doing things like this? Well, I love dogs; I have a corgi. I love Disneyland. I love veggies. I love teaching, training, and helping others, and I love running and exercise. So hopefully now you've learned a little bit about me and my qualifications to be teaching you all in this course. Okay, so one other thing I wanted to mention before we kick off: we're going to take some time to install Minikube together, but before that, you need to have a hypervisor installed. I'm not going to take any break time for us to do this, so you can just kick this off in the background while I'm talking, or if you're watching this video at some future time, you can pause it, go do that, and then unpause, however that's going to work. Yes, you need a hypervisor if you don't already have one installed. Personally, I use VirtualBox. It's just the one I'm most familiar with, and the easiest one to get up and running. It normally takes just two minutes to kick off the download, so just go ahead and take that time now. Kick it off and we'll get going. And like I said, we're going to install Minikube as well, which uses VirtualBox.
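For anyone following along later, the installs described above can be sketched roughly like this. I'm assuming macOS with Homebrew here (the Minikube documentation covers Windows and Linux equivalents), and the driver flag matches current Minikube releases:

```shell
# Install the hypervisor (VirtualBox) and Minikube itself.
# Homebrew on macOS is an assumption; other platforms use their own installers.
brew install --cask virtualbox
brew install minikube

# Start a single-node Kubernetes cluster on the VirtualBox hypervisor.
minikube start --driver=virtualbox

# Sanity check: the node should eventually report Ready.
kubectl get nodes
```

Both installs need network access and a machine that allows virtualization, so kick them off in the background as suggested.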
So if you already have VirtualBox installed, or VMware or something like that, you could go ahead and install Minikube now as well. However that works for you, but you're going to need both, and I will give time for Minikube later. Okay, so I'll give like 20 seconds while everyone's over there frantically clicking around, just a tiny bit of time before I start my lecture here, because I do want us to be able to walk through the tutorial together. Okay, I'm going to imagine that you've all gone over to VirtualBox, you've accepted all the agreements, and now it's properly downloading. Okay, great. So what is Kubernetes? Kubernetes, normally abbreviated as K8s, is an open source system for automatically deploying, scaling, and managing containerized applications. It was donated to the CNCF by Google, and the technology behind it had been developed and used at Google for over 15 years. It was first released in June 2014, so it's now around six years old. Kubernetes is actually a child of a massive project at Google called Borg, which gives you a little understanding of the theming here, of why we went with Star Trek for this tutorial, since the original name was Borg. Borg was very specific to Google, so it couldn't really be open sourced all on its own; they couldn't just take Borg and open source it, because it had too many Google specifics. So a team of engineers worked on removing all the Google specifics and created Kubernetes, which they then donated to the Linux Foundation, and in doing so helped to create the CNCF, the Cloud Native Computing Foundation, within the Linux Foundation. Taking something very proprietary and specific to Google, and then putting together a team that would open source it, really gives us this amazing technology that we have today that we can utilize.
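To make "deploying, scaling, and managing containerized applications" a little more concrete, here is a minimal sketch of what that feels like from the command line once a cluster is up. The deployment name and image here are just illustrative, not something from the slides:

```shell
# Deploy: run a containerized web server under Kubernetes management.
kubectl create deployment hello-web --image=nginx

# Scale: Kubernetes finds room for the extra replicas on its own.
kubectl scale deployment hello-web --replicas=3

# Manage: list the pods it created and keeps alive for you.
kubectl get pods -l app=hello-web
```

We'll do essentially this, step by step, in the hands-on portion.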
So what are some of the benefits of Kubernetes? We went over it at a high level, but what are the benefits? You can run applications anywhere, as long as you have Kubernetes installed, of course. There's easy cluster management. It has service discovery and load balancing, storage management, and automated rollouts and rollbacks. Actually, I was just doing another tutorial a couple of days ago where I got to see the power of those automated rollouts and rollbacks, and it really impressed me. A lot of things about Kubernetes impress me, but with the ability to do those rollbacks so easily, I was kind of floored. Like I said, I've worked with a lot of different systems, and especially with Apache Trafodion on installs and upgrades for customers, and sometimes those can just be a nightmare: you can get stuck in the middle of an upgrade that didn't quite work, and then you're in this bad middle state. But Kubernetes has the ability to roll back with basically one command, and it's very easy. I'm not going to be able to demo any of that for you today, but just a word about something I was super impressed with.
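As a sketch of how simple that rollback is, assuming a deployment named `hello-web` already exists (the name and image tags here are made up for illustration):

```shell
# Roll out a new image version; Kubernetes replaces pods gradually.
kubectl set image deployment/hello-web nginx=nginx:1.19
kubectl rollout status deployment/hello-web

# If the new version misbehaves, one command returns to the previous revision.
kubectl rollout undo deployment/hello-web

# You can also inspect the recorded revisions.
kubectl rollout history deployment/hello-web
```

That `rollout undo` is the one-command rollback being described.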
It has automatic bin packing, placing containers by their resources: it's able to schedule things out and figure out which nodes in your cluster have the ability to take on a particular container, by its size or the amount of resources it needs. It's self-healing, and it's very easy to horizontally scale. We only have a short amount of time together today, but I'd love to show you the ease with which we can create deployments and do self-healing and scalability, so we'll see that over the course of our tutorial; at the very least I can demo those, and we can work together on them. Alright, so let's start talking about why Kubernetes is so popular. Companies are really moving away from monolithic applications. You've probably heard those terms quite a bit, microservices and monolithic, so let's dive into what the difference between the two is. On your left, you see the architecture of the past that we're really used to seeing: large monolithic applications that take years to develop with large teams of experts, and hours to build. Not to code up, I mean, that takes years, but just to run the actual build process. You've made a very simple change to just one little component, and then you have to kick off a build, and it may take two or three hours before you even get that build and can go to test. So it's not easy to make quick changes, because you do have to spend all this time building this very large monolith. Not to mention it's often very hard to make changes, because things are scattered all throughout the code. Sometimes you have clear, nice components that kind of mimic microservices; other times you don't. But what we're seeing here on the right-hand side is the architecture that we're moving toward now and in the future. You see a really simple example of a microservices application. It's really easy to make changes to this architecture, because if you have one simple change to one service, you're able to change that, build just that one service, and then deploy it, and that's all you have to do. When all these services are broken out from a monolith into microservices, it's much easier to test, to change, and to package. And as you can see in this simple architecture drawing, it's so much easier to scale out as well: you can scale out just the services that you actually need to scale out, and not in a complicated way; it's very simple. So let's talk a little bit more about why microservices are something you may want to consider for your application. A microservice is, like we've talked about, essentially a service that does a single task, and that's really all it does. Because of that, you can have rapid development, especially on those services; for your application as a whole, it helps you rapidly develop as you spread that work among different people who are dedicated to those different microservices. You have the ability to swap out components of an architecture with ease, which we talked about a bit before. It's easier to automate CI/CD pipelines because of this, again because the build process and test process are much simpler when you can simply make a small change and just build and test that. It gives you a lot of flexibility and ease to change course. For me, a prime example of how easy it is to swap out components: say you have a web application that connects to a database. With a monolith, what if you want to try a new
database for this application? With microservices, you can easily change the driver and the API in the one microservice that makes the call to the database, and you'll be able to connect to the new database service. That's really all you need to do, and you're pretty much good to go. You're not going to have to grep through all the code in a huge monolithic repo trying to figure out which places the application actually connects to the database, and where it writes, where it inserts. With microservices, it's very clear which services do what and where you need to change things. You can probably hear in my voice that I may have had to do that from time to time, grepping through a giant code base just trying to find where these connections to the database are; with a microservices architecture, that is greatly reduced. Microservices are also very easy to containerize, with a containerization service like Docker, containerd, etc., and Kubernetes is really the best place to manage those containers; it's kind of been the winner there. Anything you containerize with Docker or the other container runtimes, you want to run on Kubernetes and not manage those containers yourself; this gives you the platform to do that. So again, why is Kubernetes so popular? There are many ways to install and run it. You can manage your own Kubernetes cluster, or you can... just one second here, slight technical difficulty... okay, sorry about that. So there are multiple ways to install and run your Kubernetes cluster: you can manage your own, or you can use many of the cloud offerings. The first I'd point to is the Kubernetes tutorial that I did with Udacity, which I link here in the slide notes, so when you get the slides you'll see it there: the course from Udacity that I took on microservices and Kubernetes. They used Google Kubernetes Engine (GKE) on Google Cloud for the tutorial, and within minutes I had a usable Kubernetes cluster where I could start deploying my application and my pods. We're going to see basically the same thing with Minikube, but it's nice to know you have those options. You can do it in the cloud very simply, or you can install something like Minikube. The difference between Minikube and using the cloud: with the cloud, you can see how easily even your simple testing can be deployed into production. With Minikube, there's no production; you'd have to take what you built and move it off to either your cloud provider or a bare-metal managed Kubernetes cluster, wherever you're at, at your company. Kubernetes as a project has a three-month release cycle. Now, I'm not quite sure of the release cycle on those cloud providers, or how quickly they pick up the latest version of Kubernetes, so I actually don't know which versions of Kubernetes those different cloud providers are on. That's something you'd want to investigate: if you need to use a particular version of Kubernetes, or you want to be on the latest and greatest, check with the cloud provider you're using to make sure it aligns with what you need. Alright, so again, why is Kubernetes so popular? Kubernetes has a very active and supportive community. Many different companies contribute to the project, so even though it was open sourced and donated by Google, Google is not the only contributor to the project, or even the main contributor anymore, I think, because there are so many companies contributing. You can see the popularity of the project from the GitHub stats here, and how active the project is: there are over 2,500 contributors. That's just really impressive for an open source project like this, and it also shows what an inclusive community it is, the fact that you have so many different companies and people coming in and contributing. Everyone is really welcome to contribute and collaborate within this environment, even folks coming in who want to learn more and help out, doing a code review or a test or anything like that. You're able to be mentored by a team of expert Kubernetes folks in this community, so it's really a great opportunity to learn, contribute, and honestly even get a bit of mentorship through the process. There are also special interest groups (SIGs) that meet regularly, I think either weekly or bi-weekly, where if something in this community is very interesting to you, you can meet up and talk it over with folks from other companies. There are also different meetups and conferences, so it's a very active community where there's just a lot of learning to be had. Alright, so why should I learn about Kubernetes? If you work in DevOps or infrastructure, this is really a no-brainer, right? This is definitely where things are moving in the future, and it's a platform that you're either going to have to manage directly, or it may be something that you have to connect to from another system, where you have to be aware of the different things that need to be considered when doing that. Honestly, this is something you definitely want to start learning about, especially if you're in this space. It's a very hot technology, and that always gets
people really excited, right? It's nice for the resume, right? But what about developers and data scientists? You may be thinking, infrastructure and deploying infrastructure and containers, that's not really what I do, right? I build machine learning models, that's what I do. But honestly, the infrastructure will affect how you work, so there are some things you have to take into consideration when using something like Kubernetes, because it will affect how you build your models. For example, with storage: if you are building or using something expecting that you're going to have persistent storage, or that a container will always be up and live so you can just store information locally, with Kubernetes that may not be the case. The container can come up and then it can die, and if you had just written to local storage, that's all wiped clean once that container is gone. So you may want to think about saving to a persistent store, things like that; things you have to keep in mind that you wouldn't normally if you weren't using Kubernetes. Also, any hard-coding of values or hard-coding of paths, those may be things you want to keep in mind. And like I said, because your containers are not necessarily persistent, if you bring in a new package in the container you're currently working with, that's fine to help build your models, but if that container dies and another one comes up, it's not going to have that package, and if you start running something that depends on it, that may be a problem. Those are just a few words to get you understanding that even if your job doesn't revolve around Kubernetes, if the platform you're using is Kubernetes-based, there are a few things you need to keep in mind and consider, and I think this class is going to help you do that. Now remember, this is just an introduction, so there's so much more to learn. We're just touching on a few high-level topics and doing some hands-on exploration, because the best way to learn, honestly, is by doing. But we will have a little bit of lecture here, so let's start diving into the Kubernetes architecture, with a little bit of terminology. First we're going to talk about the control plane. The control plane has the kube-apiserver, the kube-controller-manager, the kube-scheduler, the cloud-controller-manager, and etcd, which is a key-value database. And then each node has a kubelet, a kube-proxy, and a container runtime. Here's a nice graphic of the Kubernetes architecture that gives you an idea of the different services we just discussed. Like I mentioned before, I do have a background in distributed systems, for me personally mostly in databases, so with the terminology I'm used to, if I come to a new database I haven't worked with before, even if there's new terminology, it's really easy for me to get caught up and understand, okay, that relates to that, and this relates to that. For me, Kubernetes was a whole new world of new terminology, so honestly it was a little bit daunting at first. If you're feeling that way, don't get discouraged: the more you surround yourself with the technology and the community, the easier it gets, and the terminology really starts to make sense. The more you learn about it, you'll figure out that the terminology is actually very self-descriptive, with very distinct names, so we're lucky in that. So like I said, first we're going to discuss the architecture of Kubernetes and the elements that are going to matter to you
in bringing up your own Kubernetes cluster, especially if you're in DevOps. That's all around the control plane; users of Kubernetes clusters don't really touch the control plane much, other than through the workflow that I'll show you here in just a minute. Like I said, it's broken into the two large pieces we're seeing here: the green is the control plane, and then the individual nodes. We're going to go over the basics of each one. This is a really nice graphic, like I mentioned, that I found on Wikipedia. Each cluster is going to have one or more nodes, anywhere from 10 to 100 to thousands of nodes, and the control plane is made up of multiple nodes as well, could be anywhere from 3 to 10 or anything of that nature. The worker nodes are where the containers are going to run: your control plane is where it's all being managed, and your nodes are where your applications, your containers, your deployments, and your pods will run. Applications are actually not scheduled on the control plane; it's its own separate set of machines. So let's dig in a bit to each one of these components. I like this graphic, which, as you can see in the photo credit, I got from the Kubernetes documentation, because it shows each one of these services, and it actually shows them replicated as well, so there's really no single point of failure here. For me, especially having worked with distributed systems so long, that's always something I look out for, because I'm worried about my customers and I want to make sure there's not a single point of failure. You can also clearly see in this graphic, versus the other one, that etcd is the key-value database; it's that nice cylinder that we're used to seeing for databases in an architecture diagram. And here you can see how the different services interact a little more clearly than in that other diagram. You can see that the API server gets calls from the outside world (not shown here, because I kind of cut it off), and it then interacts with the scheduler, the cloud-controller-manager, the kube-controller-manager, and etcd. So let's dive into each one of those. First, let's look at the API server. The API server is essentially responsible for the Kubernetes API, and it's going to be how you interact with the Kubernetes cluster. We'll use a command-line tool called kubectl, sometimes pronounced "kube cuddle," to interact with the cluster. The kube-apiserver, as I detailed before and as you can see in that graphic (it's really small), interacts with etcd, the scheduler, and the controllers. The kube-apiserver is the service that interacts with the outside world, and like I said, it's what we'll be using, with the tool kubectl, to deploy our pods and application containers. Next, we're going to talk about the kube-controller-manager. It runs multiple controllers: the node controller, the replication controller, and the service account and token controllers. It manages the health and status of the cluster, and it communicates with the API server to perform actions on the cluster. The node controller, for example, keeps an eye on all the nodes in the cluster and reports back their health, to make sure that if something is going to be deployed on node one, node one is actually alive and healthy. The replication controller makes sure all the replicas for the pods are there and up to date: if you need three replicas of something, it makes sure you have three replicas, because if a node has gone down and you lost one of your replicas, it's going to make sure that replica gets booted up on another node. This controller will actually come in handy in our hands-on lab here in a bit. So like I said, this is
really useful when a node fails. We won't be able to show a node failing in our lab, but you will get an idea of it; I'll show you in a bit. The service account controller creates default accounts and API access tokens for namespaces. I don't think we're going to touch much on namespaces, but namespaces are basically a way to organize your Kubernetes cluster and give access to particular people, users, service accounts, and things like that. Next is the scheduler. The scheduler is for scheduling pods onto the cluster: the pods, the deployments, the resources. It's going to figure out essentially what resources are needed for your application to actually run, be it CPU, memory, etc., and then it's going to schedule based on that. It also takes affinity into account: if you have affinity rules in your configuration file, for example a particular label, say I have CPU versus GPU nodes and a label that says this is a TensorFlow job, it needs to run on a GPU, the scheduler is going to take that into account and make sure that pod is scheduled on the correct node. It's also going to help with things like data locality: if you have data in a particular place, it wants to make sure you're scheduled properly for that. It uses different algorithms to do that placement; there's a default scheduler that does a combination, like we talked about, of filtering and scoring to figure this out. You can also write your own scheduler with your own algorithm, and you can actually plug that in, because Kubernetes has that microservices architecture even within itself: you can rewrite things and plug them in easily. With the scheduler, you could even go more basic than the default scheduler, which does all this supply-and-demand work for you: within your pod spec you can just select, I want this to go to node one. It's probably not a great idea; I can't really think of a good instance where you'd want to just specify the node for your application, because that node could go down, and then that pod wouldn't be able to be scheduled anymore. But you do have that option if you need it, that basic level of scheduling where you just tell it which node to deploy on. It gives you an idea of the range, from super basic to something super advanced that you create yourself. Alright, and next on to our cloud-controller-manager. This is a nice feature: like when I was talking about taking that Udacity course that connected to Google Cloud, it uses the cloud-controller-manager to deploy things via that cloud service's APIs. Again, you only have this in the cloud, whether you're on AWS, Google Cloud, or Azure; you're not going to be able to utilize it on-prem, or, for example, with Minikube. You'll see later that with Minikube, we have to use a slightly different command than you normally would in Kubernetes to get a load balancer going, so we can access the website we're going to deploy from the outside world. We'll have to use Minikube's way of creating a load balancer, whereas if you were on the cloud, you could use these cloud APIs, which would automatically deploy a load balancer for you. Okay, so let's take a second and talk about etcd. etcd is a consistent key-value database, actually consistent, not eventually consistent; if you're familiar with the CAP theorem, it chooses consistency over availability. So it stores all the activity on the
cluster so it's combined with the API server to actually perform the actions that you're going to perform on the cluster so it's all stored there and the API server will use a watch API on at CD to do that monitoring to see what it needs to do next so a request will come and it'll get stored in at CD for example like with a pod that needs three replicas so that information will get stored there the API server will be notified of this and it will check how many pods are running and so it's going to find in this little example it's going to find that only two pods are running it will then send a request to the scheduler to schedule an additional pod to match the request so it'll make sure that what's stored in at CD which was that a pod needs three replicas is then perpetuated out into the cluster so we've learned about the control plane which each of these components will be within the control plane and so we've kind of understand them I think at the right amount of depth for getting our applications running on Kubernetes and honestly it'll also be helpful to have that understanding of the control plane even if you're an application developer because it'll be helpful when you're troubleshooting issues and trying to work with the teams to try to figure out is it something wrong with my application or is it something wrong with the cluster so let's take into a bit more about the node architecture here so there's three elements of the node architecture the kubelet, the kube proxy and the container runtime so with the kubelet there's going to be one per node you're going to have a process or an agent that runs like I said on each kubernetes node it uses pod specifications to understand which pods and containers should actually run it monitors the pod for its that it's responsible for so right it's only managing itself it's one per node and it manages itself and the control plane and the kubelet work very closely together now this graphic is a little bit small so I 
Apologies that this graphic is a little bit small, but you can see that the API server in the control plane is interacting directly with those kubelets.

Then we have kube-proxy. kube-proxy is the network proxy that runs on each node. We need a way for the nodes to talk to each other, right? We have a distributed system, so how are these nodes going to communicate? kube-proxy uses network rules to allow for that, so pods can talk to each other inside the cluster and traffic can come in from outside the cluster and reach these nodes as well. It also edits the iptables rules on each node.

From there we have the container runtime. The containers live within the pods and package up our application, so a container runtime must be installed on each node. It doesn't have to be Docker. Docker is what we're going to use in our example, but you can also use containerd or CRI-O, two other popular container runtimes. So, like I said, Kubernetes supports multiple runtimes, not just Docker.

All right, let's start talking more about workloads and pods. We've now transitioned away from the architecture of Kubernetes; this is still high-level, but it's getting into what we're actually going to deploy in our hands-on lab. I've used this term with you multiple times now, and I apologize if I hadn't defined it previously, but now you'll know: what exactly is a pod? Pods are a group of one or more containers, which you can see in this graphic. Each node will have multiple pods scheduled on it, and within those pods there will be one or several containers; in this example, two containers per pod. The whole idea is that the containers within a pod are working together for your application, so they're grouped together in one pod to be easily portable and to communicate easily with each other.

For example, for a web application you might have one container that's your front-end service, actually hosting your static or interactive website, and another container that's your back-end database, serving up information to the front end or taking in transactions from a customer on an interactive website. You'd have those together in the same pod so they can easily communicate with each other.

So why is this called a pod? A pod is actually a group of whales, as you can see in the photo there. And as you know, Docker, who coined the term, has a very cute whale as their logo, so of course a pod, a group of whales, became a group of containers.

All right, more on what a pod is. It's really the smallest unit within Kubernetes that's managed by Kubernetes, and any containers run outside of Kubernetes are not managed by Kubernetes. If you do a kubectl ("kube cuddle") run of a pod, that pod is managed by Kubernetes; but if instead you do a docker run of a Docker image, it will not be. It would still be up and running on your machine, but you would not get all the benefits of easy scalability and management of those pods and containers. Pods are configured by a YAML file, which pulls a container image from a source, as you can kind of see here in this YAML file. We're going to look at YAML files more in a bit, so don't worry too much about this graphic.

Again, as I said on the last slide, containers in a pod are always co-located. Each pod has a unique internal IP address, so pods know how to communicate and interact with each other.
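The pod idea sketched above can be written down as a manifest. Here is a minimal, hypothetical pod spec with two co-located containers; the names and images are invented for illustration and are not part of the tutorial repo:

```yaml
# Hypothetical two-container pod: both containers are always scheduled
# onto the same node, share the pod's IP, and talk over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-and-cache
spec:
  containers:
    - name: frontend
      image: nginx        # serves the website
      ports:
        - containerPort: 80
    - name: cache
      image: redis        # reachable from frontend at localhost:6379
```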
To put that precisely: each pod has a unique IP so that pods can talk to each other over that IP (and to other services), while containers talk to each other via localhost. The containers don't need that internal IP to reach each other; within a pod, they just talk over localhost. All containers within a pod can also use a common storage volume, which is nice: if you write something out to a file from one container, another container can read that file into whatever it's doing. This is managed by the kube-apiserver and the controllers (there's a little typo on the slide there). We'll take an even better look at these YAML files during the hands-on lab.

Okay, a little bit about controllers. What is a controller? A controller is a control loop that manages the environment: it never terminates and continues to monitor the situation. The kube-controller-manager helps with managing each of these controller types; it oversees all the different kinds of controllers. A controller works on the current state and gets you to the desired state: if we have two replicas and I want three, it helps you get there.

Really, think of an oven sensor; there I have a picture of an Easy-Bake Oven. Think about an oven: you set it to a particular setting, like 350 degrees, and you wait until the oven is ready to put in your cookies. The sensors inside the oven are checking the temperature and changing the state. Either they need to add more heat because it's still trying to preheat up to 350 degrees, or it notifies you that the preheat is ready, probably with a beep or a light turning off. These are control loops: they just loop and keep checking state as they go. Kubernetes has many controllers, and you can even write custom controllers yourself, but we're just going to focus on the Deployment controller in this example.

So what is a Deployment? A Deployment is a set of pods plus the actions you want to happen for those pods. The Deployment sets a desired state for your pods. Things such as deleting and adding pods can all be expressed in your Deployment: adding a replica set and how many replicas you want, or whether a pod should be restarted. If your pod goes down, do you want it restarted, or is it just a batch job that runs once and shouldn't be restarted after it quits? These are all elements you can put into your Deployment spec, and the ability to easily update and upgrade is there within the Deployment as well.

Okay, great. We've had a bit of lecture; we have some understanding of the Kubernetes architecture and of the different elements we're going to work with, such as pods, Deployments, and ReplicaSets, and now it's time for our hands-on lab. I'd like everyone to go to this GitHub: github.com/AmandaMoran, and then go to the Open Source Summit repo I have there. I'm going to share my screen for all of you.

All right, here I am on the GitHub page. This tutorial is a nice README that we're going to go through, and the very first step is to install Minikube. I'm going to give everyone about three minutes to get Minikube installed and up and running (it doesn't take very long) so we can walk through this together. There are some really fantastic docs linked on this page. The Kubernetes docs, if I haven't talked about them enough, are really fantastic: a really fantastic guide to installing
Minikube. They're also fantastic for just understanding the Kubernetes architecture and what you can do with it, and for helping and guiding you, so I would strongly suggest taking a look at the Kubernetes docs after this talk. For now, though, they're just going to help us install Minikube. I want everyone to click on that link and install Minikube; you can install it on Linux, Mac, or Windows. And like I said before, a reminder: you will need a hypervisor installed, and that might take some time. Minikube itself installs pretty quickly, but the hypervisor might take a little while, so hopefully you've already done that. Personally I use VirtualBox; I just find it the easiest. So I'm going to give everyone about three minutes to walk through the quick Minikube install, and then we'll get started here together.

I'm just going to click here to show you all. Like I said, these are really intuitive docs; they give you installation instructions for Linux, Mac, and Windows, and it's really straightforward. Don't worry about confirming the installation, because we'll do that together. Hopefully everyone is finding it as easy to install as I did. And I will say, even though virtually connecting with everyone at this wonderful conference is amazing, I do wish we were all there in person, because it would be so fun to just come around and meet you all, help you with this, and all learn from each other and work together. But without that, this is a good second best.

Okay, great. Hopefully you all have it installed now, or it's just about to finish, so let's get started. I'm actually not going to start my Minikube cluster, because I already have one started... actually, you know what, I'll just try to start it and see what happens; I think I'm going to get an error. We want to do minikube start, and if you don't name your cluster, I think it just gives it the default cluster name, because you can have multiple Minikube Kubernetes clusters running at one time. I called this one "final," like our talk, the final frontier. The default VM driver is VirtualBox, last I checked, but I just like to be more specific and define it here: the VM driver is VirtualBox. I think I had an issue once where I tried to use VMware and it didn't really work in this particular case, and then things got a little confused when I went back, so now I'm just more specific and it all seems to go fine.

So actually it's not throwing an error, which is cool; it must just be starting fresh. When I run the start here, it's going to download some packages, get an IP address for us, configure Docker to make sure we have that as our runtime, and make sure it launches. Awesome. Okay: "kubectl is now configured to use final." Right, the command-line tool: you need to make sure it's pointing to the cluster you want it pointing to. Your kubectl could be pointing to another Minikube instance; in this particular case we want to make sure it's connected to "final" and not "default" or any other cluster we started. We also want to make sure it doesn't connect to, say, our production cluster, because we don't want to be deleting services, namespaces, or deployments in production.

Okay, great, we're all started. Now, as you see here in the tutorial, we can do a minikube status. Wonderful: our host is up and running, our kubelet is up and running, our API server and all these different elements we talked about before are up and running, and our kubectl is correctly configured, pointing to our Minikube instance at this IP. That is wonderful. If we were on a proper Kubernetes cluster and not Minikube, we could also use this command here: kubectl get
componentstatuses. In this particular case, because we're on Minikube, this command doesn't work, but I did want to show it to you so you can use it on a proper Kubernetes cluster: it reports on the scheduler, the controller manager, and etcd. Here, just because we're not on a proper Kubernetes cluster and are using Minikube, it doesn't show up, but it's a good command to know.

Okay, let's go through and create our first pod. We want to run this command here, and I'll walk through it; let me just quickly copy it and clear my screen. We're going to use kubectl run, and we're going to call our pod "hello" (run is a command that you can use only with pods). Then we use --image=amoran06/hello-friends, which pulls the hello-friends image from my public Docker repository. Go ahead and hit enter, and you quickly see that pod/hello has been created. Yours may take a tiny bit more time, because you have to pull the image fresh where I already have it pulled and cached, but it shouldn't take long at all. This is a simple container that prints out a message to all my new friends here at the Open Source Summit, but when I deployed my container I didn't get any kind of message, so let's make sure our pod is up and running: kubectl get pods. Okay, great: my pod, named hello, is creating now and is about 35 seconds old, and I can keep checking on it. Right, my pod has been created, and now it's completed. So let's take a look at our log file: we can do kubectl logs on our pod named hello. Awesome, and we see what that pod was doing, which was outputting a message to the log files: "Hello Open Source Summit, I'm so happy to be teaching you the basics of Kubernetes, and live long and prosper."

So we were able to get our first pod up and running on our Kubernetes cluster. What we can also do now is kubectl delete pods hello, which deletes that pod since we're done using it. It's nice to get everything all cleaned up.

All right, let's get into deploying a website; I just want to check that everything's going well there. First and foremost, we need to create a YAML file. That's the configuration file I've been talking about, so here's how we're going to create it. Let me copy this command, clear my screen, and then we'll walk through it. We're going to do kubectl create deployment, and that deployment is going to be named webapp. In this particular case we're not creating just a pod, we're creating a Deployment, and a Deployment, as we talked about before, may have multiple pod replicas, a restart strategy, and so on. We're going to pull an image called amoran06/picard-tips. And here's how we're going to create the YAML file, which is really the key: instead of creating the Deployment and having it run automatically, we just create the YAML file without running anything yet. We add --dry-run=client (just run a dry run of this on the client, which is what we're doing here), then -o yaml, and we put that output into a file called webapp.yaml.

I use vim, I love vim, so let's open webapp.yaml and take a look inside, and double-check that it looks like what we have here in the example, which it does, which is great. We see our apiVersion, then the kind; in this particular case, if we had done the same command when running that hello pod instead of a Deployment, we would see Pod there. We're going to see some metadata and the labels on our Deployment, which in this particular case is called webapp.
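For reference, the file that --dry-run=client -o yaml generates looks roughly like this. This is trimmed for readability, the exact fields vary a bit by kubectl version, and the image name here is my best reading of the Docker Hub handle used in the talk:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webapp
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - image: amoran06/picard-tips
          name: picard-tips
```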
Then we get into the spec: we see our replicas, more around our labels, and our template (again with some more labels), and then another spec for the containers. There we see the image we're going to pull, amoran06/picard-tips, and the name of that container. Our Deployment has a different name than our container: the Deployment is called webapp and the container is called picard-tips.

I'm going to keep this open, because we actually need to edit this YAML file. In the next step we're going to add a container port to our YAML configuration. This is a really important step, as the web app we've created runs on port 5000, and we need to make sure that's exposed to Kubernetes. If you see here in the example, we come down to our container section, because that's where we need to expose the port, and we add ports: and then - containerPort: 5000. Let me move this to the side. YAML is very picky about indentation and all those things, so you have to make sure you get it right; that's why I like to copy it directly rather than trying to remember it and getting it wrong. So we add the ports entry with containerPort 5000 to expose the port, and then I save my file.

Okay, wonderful. I've created my YAML file and edited it to add that container port to expose it, so now let's get it started. Now that we've generated the YAML, we do kubectl apply -f (f for file, right) webapp.yaml, and we should see "deployment.apps/webapp created." If I do kubectl get deployments, I should see webapp up and running. Wonderful. And if I do kubectl get pods, I should also see a pod that's been created, because a Deployment is a wrapper around our pod, and it's up and running, which is wonderful.

Okay, next thing we need to do. That's not all, right? Now that our website is running, we want to expose it to the outside world so folks can actually look at it. We're going to need to expose our deployment by using a load balancer, which will need an external IP so we can reach the website in combination with port 8080. This is the command we're going to use; let me copy it and walk through it a bit. We do kubectl expose deployment webapp (that's the name of our deployment) with --type=LoadBalancer, --port=8080, and --target-port=5000, the port of our container that's been exposed. We run that, and it should be exposed. If I do kubectl get services: wonderful, you can see my webapp now has a LoadBalancer. In this particular case it has not been granted an external IP, and that's because we're on Minikube; that's what we have to do next. Like I said, we can also do kubectl get pods,services (let me clear this) to see both our services and our pods.

Okay, so since we're on Minikube, we actually need it to create that load balancer for us, because we're not on a managed service that can deploy a load balancer on our behalf. Minikube has a really cool command for that: minikube service webapp. We're going to run that, and hopefully it pops up our web page. Live demo, everyone! In the past when I've run this, what should happen is that my browser pops up, an external IP is automatically granted, and then the web page should appear.
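Incidentally, the kubectl expose command we ran is roughly equivalent to applying a Service manifest like this one. This is a sketch of the shape, not a file from the tutorial repo:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: LoadBalancer   # asks the environment for an external IP
  selector:
    app: webapp        # routes to pods carrying this label
  ports:
    - port: 8080       # port the Service listens on
      targetPort: 5000 # containerPort we exposed in webapp.yaml
```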
Now, for some reason it isn't popping up. Let's see if maybe I already have one running; maybe that's why. A little live troubleshooting is always fun. Okay, I think those are the ones that just started, so let's try it again... okay, well, it's thinking about it. Let's kill this for now, and let me just take another quick look here. Yeah, interesting. Okay, well, here's what should happen, and hopefully it's happening for you, because something's funky on my machine right now: what you should see pop up in your local browser is a static web page with all these awesome Picard tips. It's just a static web page with Picard tips; there's actually a Twitter account that tweets these just about every day, different management tips as if they were from Jean-Luc Picard. Here's a great one: "The respect you show your crew is a major factor in determining how you feel about their work." It's always good tips; I read them just about every day. I'm very disappointed that my web app is not popping up, but hopefully yours is. That's what happens with live demos.

Okay, let's move on to creating a highly available application. If we do kubectl get pods, we'll see that we only have one webapp pod; we saw that already. So let's say our Picard management tips site starts getting a lot of traffic. It's super popular; how are we going to scale up and make sure we have a highly available app? That's where pod replication comes into play. Let's open our YAML file back up, and we're going to come back to this replicas field. Mind you, this is not the only way to add replicas (there's actually a command-line way to add them too), but I wanted to show you a few things here. So what we're going to do is edit our YAML file to have three replicas, save it, and then take a look at our pods.

Okay, so we see we still have one pod. Well, actually that makes sense, because we haven't applied this new YAML file yet. If you might have been expecting to see three, it won't just pick up the change automatically. So what I want us to do is kubectl delete deploy webapp (let me make sure I get it right) and then kubectl delete service webapp, because that's the service we created when we exposed the load balancer. If we do a get pods and services, we'll see that they're terminating. Okay, great, so now we do another kubectl apply -f webapp.yaml and it should be created. So let's take a look. Awesome: you see we have one pod that's still terminating, and it should be gone here in just a moment... yeah, there, it's gone. And now when you do a get pods, you're going to see that we actually have three pods up and running.

Another thing I want to show you: if I do kubectl delete pods with the name of one of these pods and delete it, then by the time this deletes and we do a get pods, we may see a couple of things. Let's give it a second and see what we see. It's taking a little while to delete... okay, there we go. As you can see here, if you look at the age: even though that pod got deleted, Kubernetes automatically spun up another pod. And honestly, that's one of the many features and powers of Kubernetes, right? You didn't have to do anything. A pod got deleted, and automatically a replacement is up and running, with very little downtime for your users.

So let's try to add our service again for our deployment, and let's try minikube service webapp again. Let me see if it'll launch. I would really love it if it did; I would love that so much... nope. Okay, all right, well, if that's happening to you, that's something really fun to go and debug; hopefully it's not.
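The pod-replacement behavior we just watched (delete a pod, and a fresh one with a new name and age appears) can be sketched as a toy control loop. This is illustrative Python only; the class and pod names are invented, and the real ReplicaSet controller does this through the API server:

```python
# Toy sketch of the self-healing behavior we just saw: when a pod is
# deleted, a ReplicaSet-style control loop immediately creates a
# replacement so the running count matches spec.replicas again.
import itertools

class ToyReplicaSet:
    def __init__(self, replicas):
        self.replicas = replicas
        self._ids = itertools.count(1)
        self.pods = [self._new_pod() for _ in range(replicas)]

    def _new_pod(self):
        return f"webapp-{next(self._ids)}"

    def delete_pod(self, name):
        self.pods.remove(name)
        self.reconcile()          # the control loop notices and reacts

    def reconcile(self):
        while len(self.pods) < self.replicas:
            self.pods.append(self._new_pod())  # replacement gets a new name

rs = ToyReplicaSet(replicas=3)
victim = rs.pods[0]
rs.delete_pod(victim)
print(len(rs.pods))        # still 3: a fresh pod replaced the deleted one
print(victim in rs.pods)   # False: the replacement has a new name (and age)
```

This is also why the ages differed in kubectl get pods: the replacement is a brand-new pod, not the old one restarted.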
Hopefully it's working for you. It's worked for me in the past, and I'm not quite sure why it isn't working for me right now. Okay, so that was verifying that the replicas work as intended: we got our pods, we deleted a pod, and then we got our pods again, and we might have seen a couple of things. We might have seen one pod terminating and another starting, or, as we saw, all of them running but with different ages since they'd been active for different amounts of time.

So, just as a recap of what we've learned in this tutorial: we've learned how to install a local instance of Kubernetes with Minikube, how to run a pod from a Docker image, how to create and run a Deployment, how to create a service and run a load balancer with Minikube, and how to make that service highly available with replicas. We've boldly gone where no one has gone before... okay, well, maybe many people have gone before, but we all learned something new, which is awesome.

I want to give some credit here to the wonderful Kubernetes tutorial docs, and also the wonderful docker-curriculum docs. I'm not a web developer, and if you could see my little website, I actually got that code from the developers over at docker-curriculum: they had a similar app and I just tweaked it to be Picard tips. Originally it was cat GIFs, which is probably a little more fun. It was really a lifesaver, so I didn't have to figure out how to write a web app all on my own. Thanks to them; that was awesome.

Alrighty. So now that we've gone through our hands-on tutorial, we've learned about Kubernetes: why you should use it, why it's important, the architecture. But you really need to continue learning, because this was only a little more than an hour and a half, so definitely keep going. I would highly recommend reading the Kubernetes docs. They are excellent and very clear, and the folks who work on those work very hard. I help a lot with documentation in the various places I have worked, and it is a challenging job, so they work really hard on that and I appreciate it.

Also, there's the link to the Udacity course: it's a free course introducing scalable microservices and Kubernetes. It's really great, probably takes around six hours to get through, and has really good information. Another great book is Kubernetes in Action; I've started reading it and am almost done, and it's really good. Very dense, good information, with just about everything you need to know. I would also recommend the Linux Foundation training: they have an Introduction to Kubernetes course that's also free. If you want to get a certificate for it, I think you do have to pay a small fee, but the class itself is totally free and really great.

Also, as I've mentioned previously, become a part of the Kubernetes community, right? Join a meetup, head over there, and learn more from the folks who are there to chat with you. Make a pull request, keep attending conferences, keep learning about Kubernetes and other distributed systems and other things by the Linux Foundation. It's really great. Answer questions on different forums, or on Stack Overflow, or on the mailing lists: if somebody has a question you know the answer to, jump in and help out; it's really appreciated. Help contribute to the docs; that's always highly encouraged. It's not just code, right? There are other things you can do to contribute and become a part of the community, and really, everyone is welcome; nothing is too big or too small to be a part of this community.

There are some references from my talk today, and I just want to thank you all so much for coming to my tutorial. I hope you learned a lot, but I hope it's also
just the beginning of your journey. Thank you very much, and I'll be here in real life to answer some of your questions. Thank you!

All right, the video lagged a little, but I think it's all okay. So yes, that was a recording I did, and now you're hearing me live. A lot of you commented about the bug at the end of the tutorial, and you were absolutely correct. What I have in the README is correct: I needed the -p final flag, because we started up a cluster called "final," and when we run our commands we need to specify it, because you can have multiple Minikube Kubernetes clusters running at the same time. Oh, okay, hopefully you all can hear me; I wanted to double-check that you could and that I'm unmuted. But like I said, you can have many Kubernetes clusters running on your Minikube at the same time, so when you run a minikube command you need to specify which one you're connecting to and running the command on. My brain forgot about that during the recording, which is why I wasn't able to get the page pulled up, but hopefully you all were, and I have the correct instructions in the README.

It looks like you all have some questions here, so let me scroll through the comments. There are a couple of comments about Minikube maybe not working for you, or not working on various platforms. Sorry; it's always a bummer doing installation-type things during a tutorial, because everybody's computer is different, everybody's laptop is different, and it never quite seems to run smoothly for everyone. Hopefully a majority of you were able to do it; if you weren't, apologies. Hopefully you were at least able to follow along, and then maybe on your personal laptop or something like that you can run it, and it'll be pretty straightforward.

Okay, let's see. Someone asked (okay, this is a good question): "minikube equals kubectl, are they equivalent?" No. Minikube is just a nice way of running a Kubernetes cluster on your laptop; it's just a local installation of Kubernetes, right? I think earlier in the talk I mentioned that when I took the Udacity course, they used Google Cloud Platform, and they used... GKE? I never remember the acronym; it doesn't matter. It's Google's managed Kubernetes service. From there you could use that, and you'd basically set up the tutorial exactly the same way, except for nothing with regards to Minikube. Minikube is just running the Kubernetes infrastructure for you, so you run separate commands for Minikube: minikube start actually starts my Kubernetes cluster, and then kubectl run actually starts a pod on that Kubernetes cluster. Hopefully that's a little more clear. It's not as crystal clear here, because we're setting up the Kubernetes instance on our laptop as opposed to connecting to an external Kubernetes cluster, where it would be much more obvious. So it is a little bit confusing here, but once you get the hang of it, it's easier to understand.

Oh, somebody mentioned I have a typo somewhere. Totally; I always have typos. "Why is it necessary to delete the deploy and the service? I was able to run kubectl apply -f with replicas 3 in the YAML file and get 3 replicas." Yep, you can do it that way as well: you can edit the number of replicas in your YAML file and run it from there; you don't necessarily have to delete anything to do that. I was just kind of showing how you can clean up and then redo it. There's also another command that I didn't go into here that you could look up, where you wouldn't even need to edit your YAML file; you'd just run a kubectl command. I can't remember the exact command off the top of my head (I think it's edit replica or something like that), and then you can edit
the replicas just there on the fly and it'll deploy additional replicas good question could you share the repository used for this tutorial so that's under my github so if you go to github and then you go to Amanda and I think it's also on the conference website as well but if you go to github and then Amanda Moran A-M-A-N-D-A-M-O-R-A-N and then you should find a folder that's open source summit and click on that and then you'll see that the kubernetes final frontier because actually I have another talk tomorrow where I'll have some slides posted in that repository as well so you'll see two folders one for kubernetes final frontier and the other for a database I have tomorrow and then and then from there you click in there you can find it okay let's see oh someone said thank you you're very welcome someone says a demo work for them hooray where can we get the slides you can actually get them in the github repository as well I will be posting them there I generally post a pdf of my slide decks in my github repository as well there may be also a way that that the conference is sharing them as well and so there would be that as available as well but you could also pick up a pdf from my github I should actually post that probably just right after I get off this conference that's a good question I'm not quite sure sorry someone asked about finding the video I'm sure it will be posted I'm not quite sure where at this time but I'm sure they'll have like following the emails where the conference organizers let us all know all that information okay someone else figured out my recording bug where I couldn't get it installed okay oh good question about would you okay so if you want to run a database and you want to have it running on Kubernetes would you I suggest running a database container in each of the pods or would you rather use a central database pod for them all that's an interesting question that you definitely want to it depends on which type of database you're 
using and how you're going to be syncing that data, so I would do a lot more investigation around that and around exactly what kind of database you're using. There are some really good tutorials out there on Cassandra and Kubernetes; I would definitely advise googling that. Some of my former colleagues have actually been working really hard on that, so I would look into it. It will help you figure out the architecture even if you're not using Cassandra per se, because maybe you don't need a NoSQL database, but it will still help guide you. But yeah, definitely research more into that. Okay, someone pointed out that GKE is Google Kubernetes Engine. Good job, good find. Someone asked about alternatives to Minikube. I would look up alternatives to Minikube; I'm not aware of anything else that you can run locally on your laptop. It's pretty easy to deploy a Kubernetes instance in the cloud. Now, of course, that costs money, whereas running something on your laptop costs no money, but most of us can get credits and things like that, and this tutorial takes about 30 minutes, so it wouldn't take that many credits. So if you can't get it running on your laptop, you could consider just using something in the cloud with credits. Oh, somebody mentioned they might have had an issue with the demo because I didn't mention installing kubectl, or "kube cuddle." That's a good question and a good observation. I assumed it came with Minikube, that it just downloaded it for you; that was my assumption. Now, I obviously already have it installed on my laptop, so it would be really easy for me not to know whether it came with Minikube or not. I thought it came with it; if it didn't, then yes, that is a bug, and I would need to make sure folks knew that they need to install kubectl. So that's a good
find. Question about Minikube having the controller and nodes all in one: yeah, it's basically running everything locally on your laptop. That's a good question: how does kubectl know how to connect to the Minikube cluster instead of, for example, your production cluster? Yeah, that's a very good question, and actually something I was dealing with just the other day. When you do that minikube start, it actually writes to your ~/.kube/config file and points it at the Minikube cluster; it basically puts that entry there first, so when you run kubectl it points to that Minikube cluster first. Then, to connect to your production cluster, I'm not quite sure, because everyone does that a little bit differently, but I would make sure your kubeconfig file has it as well, and you might need to do some kind of startup or init step to reconnect to your production one. Good question, good question. Why does Kubernetes need a hypervisor? Very good question: Kubernetes itself does not need a hypervisor. When you have a Kubernetes cluster that you've installed in the cloud or on prem, you don't need VirtualBox or VMware or anything like that; it's just Minikube that needs one. Yeah, good question. Excellent question: can corgis get into Disneyland? No, it's so sad. Good question, though. Oh, good comment here: Docker Desktop is an alternative to Minikube. Yeah, great observation. All right, well, thank you all, that was fun. Good questions. I hope that was helpful, I hope that was a good introduction. I know it's just an introduction, right? There's a lot more to cover. So yeah, I hope that was helpful, hope you enjoyed the tutorial, hope there weren't too many bugs along the way, and thank you very much. Enjoy the rest of the conference!
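
[Editor's note: a minimal sketch of the two replica-scaling approaches discussed in the Q&A, assuming a deployment named "web" already exists; the name and filename are placeholders, not from the tutorial repo.]

```shell
# Option 1: set the replica count in the Deployment manifest and re-apply it.
# (Only the relevant YAML fields are shown, as a comment:)
#   spec:
#     replicas: 3
kubectl apply -f deployment.yaml

# Option 2: scale on the fly, without editing the YAML at all.
kubectl scale deployment web --replicas=3

# Verify that three pods come up.
kubectl get pods
```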
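[Editor's note: for the question about how kubectl picks the Minikube cluster over a production one, a sketch of context switching via ~/.kube/config; the "production" context name is hypothetical and assumes that cluster is already in your kubeconfig.]

```shell
# List all contexts known to kubectl; the current one is marked with '*'.
kubectl config get-contexts

# Show which context kubectl is currently using (after minikube start,
# this is "minikube").
kubectl config current-context

# Switch to the (hypothetical) production context...
kubectl config use-context production

# ...and back to the local Minikube cluster.
kubectl config use-context minikube
```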
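[Editor's note: for the observation that kubectl may not ship with Minikube, a quick way to check, assuming a reasonably recent Minikube release where the "minikube kubectl" subcommand is available.]

```shell
# Check whether a standalone kubectl is on the PATH.
kubectl version --client

# If it isn't, Minikube can download and proxy a matching kubectl for you.
minikube kubectl -- version --client
```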