to be talking to you a bit about. I want to share with you my journey here today, in a way. I submitted my talk to EuroPython about four weeks ago, and I was about here when I saw the email saying my talk had been accepted. So EuroPython had called my bluff and I needed to prepare a talk; I'd work on it while I was there. I went to a concert in Borlund and I was looking around for an event there — and if you get the opportunity to go one day, you should, because it's a great one. It was interesting to learn that this was an ambition for Borlund, and also quite distressing to see. So: okay Dave, don't be too dejected, don't be too upset — why don't you go on a little drive? I'll try and do that. So I hired myself one of these, and I started here. By the way, if any of you have seen the 1985 film The Goonies, the early Steven Spielberg film — it's awesome. There were some not-so-cool things too, like the regional music. And I was a bit techie, so I had one of those. And then I had one of those. And I had one of those as well. And then I really thought: no, it's time to get to the point — the fact is, it didn't suit me at the time, so let me get back on track. I started work on a product that was a systems-monitoring solution. We had a pivot at the company, and I took the ageing technology from our monitor and turned it into a general-purpose monitoring solution. We built the product up over a decade and, to be completely honest, it was never that great, because there's a general problem with monitoring. Today we deliver the software as a service, and we chose Google Compute Engine and GKE — GKE is the hosted Kubernetes implementation that Google provides. And we're quite keen on dogfooding: we wanted to use our own solution to monitor our own stuff.
And also we needed to provision monitoring instances for customers who were signing up to the SaaS, so that they could actually, you know, click a button and a monitoring instance would be deployed on Kubernetes. So we needed to interact with the Kubernetes API, and we're all Python people, so we had to do that with Python. That's why we wrote kube. Now, it's important to say that there are other Python API wrappers out there, and I'd strongly encourage you to go off and look at them so you can see how rubbish they are and how good ours is. No, seriously — go and have a look; they might be a better fit for you. And if you do end up choosing one of those instead of kube, it'd be really great if you could come back and tell us what you thought was better, because that would be interesting to know, and we could try and fix anything we've got going on in ours. We wrote kube the way we wrote it because we wanted to abstract away from what's a somewhat moving target, as far as the Kubernetes API goes. It's also got some idiosyncrasies that you don't necessarily need to be exposed to when interacting with Kubernetes via its API, so we wanted to create an opinionated version of the API that made things a little more palatable for its users. We also wanted a clean watch interface — the watch interface allows you to get notifications of changes to resources within Kubernetes — and the other offerings that were about at the time (some have come and gone since we started) didn't do that very well. We wanted something Pythonic, and we didn't want to use code generation from Swagger, which is a theme in some of these alternatives. But take a look and take your choice. So, I hate it when speakers say "oh, hands up, do you know this, do you know that", but just a very quick show of hands: who here has been exposed to Kubernetes? Okay, so about a third of you.
I'll try and race through this because time's at a premium. So Kubernetes is about orchestrating Docker containers, essentially — but not just Docker containers. Docker grew out of earlier container technology: if any of you have been exposed to Solaris and Solaris Zones, it's a similar kind of concept. What it gives you is basically an immutable deployment component. It's easy to author, there's a runtime that runs on many platforms, and that immutable deployment component underpins DevOps practices and continuous deployment. Across multiple nodes, though, it's hard to manage Docker containers in the raw, especially for scale and resilience, and that's why control planes like Kubernetes and Docker Swarm came about. Google were already doing this: Google had a system called Borg, which uses LXC containers and manages them across their estate. It allowed Google to scale developer productivity and the number of services they offer to their customers, internally and externally, without a corresponding increase in operational overhead. So it was obviously a useful technology. Kubernetes has had an amazing amount of momentum behind it, and it's interesting how many "competitors", in quotes, have actually got behind Kubernetes. Where initially they sought to position themselves as direct competitors, they seem since to have acknowledged that they each have a particular sweet spot — maybe it's cluster management, or managing containers at very, very large scale — and they've all sought to accommodate Kubernetes in their offerings and in their space. The only offering that hasn't really done that, I suppose, is Docker Swarm, because that is trying to do exactly the same thing. So how does it work? There you go. Happy with that? Excellent. Yeah, it's not a very helpful kind of diagram, and it would take forever to explain — time I haven't got.
So I just want to go through some key concepts. In Kubernetes we have the idea of a cluster, which is a single homogeneous cluster of nodes — compute resource. Watch out for a thing called "Ubernetes", which is an attempt to federate multiple Kubernetes clusters, so you can have multiple clusters with different shapes of resource running inside them. A node is some resource where pods are scheduled, and pods are the smallest unit of scheduling; they run the actual containers. That's Docker, but not exclusively Docker — there's also support coming for rkt containers. The containers run inside the pods, and pods are the things that the Kubernetes system schedules. Then there's the concept of replica sets: a replica set is a specification that defines pods and how many replicas of those pods there need to be, for scale and resilience amongst other things. There are also services. Services target pods and expose their capabilities at the edge of the Kubernetes cluster. So you can think of the actual Docker containers as, I suppose, nano-services, the pods as microservices, and the service definitions as providing actual services for a consumer. Labels are an interesting thing, and we'll be looking at those very quickly: they're key-value pairs associated with resources within Kubernetes, and they allow the scheduler — and you — to organize the objects within it. There's lots of other stuff we could talk about, but that's probably enough to get us through the next part of the talk. Now, some key concepts for kube — we need to get the terminology straight right at the beginning. Principally, the Kubernetes API defines kinds and resources. A kind is the name of an object schema; essentially, it's a resource type.
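To make the label idea concrete, here's a minimal sketch of label selection in plain Python. The pod dicts and the `select()` helper are hypothetical illustrations for this talk, not part of Kubernetes or of kube itself:

```python
# Illustrative only: how key-value label selection works conceptually.
# The dicts and select() are hypothetical, not Kubernetes or kube APIs.

def select(resources, selector):
    """Return the resources whose labels contain every selector pair."""
    return [r for r in resources
            if all(r["labels"].get(k) == v for k, v in selector.items())]

pods = [
    {"name": "web-1", "labels": {"run": "service-demo", "tier": "web"}},
    {"name": "web-2", "labels": {"run": "service-demo", "tier": "web"}},
    {"name": "db-1",  "labels": {"run": "service-demo", "tier": "db"}},
]

# A service selecting tier=web would target only the first two pods.
web_pods = select(pods, {"tier": "web"})
```

This is essentially how a service definition targets pods: any resource carrying all of the selector's key-value pairs is matched.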
And a resource is a representation of a system entity that's sent to or retrieved from the API as JSON over HTTP. There are two types of resources: collections and elements. These are Kubernetes terms I'm using, but they map quite nicely onto kube. So, for example, a pod is a Pod resource, whereas nodes is a NodeList resource — a collection of nodes. Try and bear that in mind. Additionally, it's really important to understand the separation of specification and status in Kubernetes. When an API update is made, the specification of the resource you're updating is changed, and that change is available immediately — it's almost an atomic operation. But over time, Kubernetes will work to bring the status of the resource whose specification has changed up towards that specification. The system drives towards the most recent spec, and that makes the behaviour of Kubernetes level-based, not edge-based, which is quite a nice feature. Okay, so now the tricky bit. I'm going to open a terminal window — bear with me for a moment while I try to mirror this display. Yay, that works. Okay, cool. What I've got running here is — yes, sorry — ooh, I can make it bigger; I should have tried that before. Now, say when. Is that good? Sure, okay. So I've got a single-node Kubernetes cluster running on my MacBook Air. By the way, if anyone's interested in knowing how to do that, just come and see me at the stand and we can have a chat about it — it's quite cool. Right, so: Python, kube, here we go. What you do is just import kube — spelling it right, hopefully. Remember that America trip, where I was really jet-lagged and up till three o'clock in the morning writing this stuff? So be nice. So: import kube. The key entry point in the kube API is a cluster, so we can say cluster = kube.Cluster() and we create an instance of one of those things. That gives us a cluster object.
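Before going further with the demo: the level-based behaviour from a moment ago can be sketched in a few lines of plain Python. This is a hypothetical illustration, not anything from the kube or Kubernetes APIs — the point is that the controller keeps comparing desired and actual state rather than reacting to individual change events:

```python
# Hypothetical sketch of level-based reconciliation: the spec update is
# immediate, and the system then drives the status toward the latest spec.

def reconcile(spec, status):
    """One reconciliation step: move status one replica toward spec."""
    if status["replicas"] < spec["replicas"]:
        status["replicas"] += 1   # notionally, start a pod
    elif status["replicas"] > spec["replicas"]:
        status["replicas"] -= 1   # notionally, stop a pod

spec = {"replicas": 3}     # the desired state, updated atomically
status = {"replicas": 0}   # the observed state, converging over time

while status["replicas"] != spec["replicas"]:
    reconcile(spec, status)
```

Because each step looks at the current level of both spec and status, a missed notification doesn't matter — the next pass still converges.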
One thing I forgot to mention: when you're interacting with the Kubernetes API, the preferred approach is to run kubectl proxy. What that does is proxy the Kubernetes API from wherever it's running to localhost on your machine. So what I've got here is quite simply kubectl proxy — you can just see it running at the bottom here. If you're running your Python code using kube in a container, then what you'd normally do is have a sidecar container inside your pod: one running the proxy and one running your Python code. The other thing I can do here is specify a URL, if the proxy is running on a non-standard host or port. So we can say something like localhost, port 8001, slash api — something like that — and then we get, obviously, a cluster instance. You can use context managers as well. So you can say something like with kube.Cluster() as k, and then, say, k.nodes — we'll talk about this in a sec. So k.nodes returns a node view, and that will become clearer in a minute. So there are a few ways to create your entry point, the cluster. Now, remember I was talking earlier about collections and elements? Well, they're represented here as views and items. This nodes thing here — because it's plural, you can see it's actually a node list. So that's a collection, and I can iterate over it to get actual items out. So let's have a look at a few of these. We've got cluster.nodes; the cluster object has many of these things — you can look at the documentation, it's all on Read the Docs. It's a work in progress, but the essentials are in there. And we can see things like the cluster's replica sets and namespaces and that kind of stuff. Okay. So we want to get a resource item out of a view. I can say something like rs — because I'm going to get a replica set — from cluster.replicasets, and I can do a .fetch.
And I need to specify the name. Now, something I prepared earlier — this is just kubectl on the command line — I can get the replica sets, and I know I've got one called service-demo. So I'm going to say: get me service-demo. But I have to specify the namespace, so I say namespace='default' — that's the namespace it's running in. You can see how bad my typing is; I shouldn't be allowed to do live demos. And then we look at rs, and look — we've got a replica set item now. This is an actual element, as opposed to a collection, and it's got some attributes associated with it. I can look at some metadata: I can see what the name is, and lo and behold it's service-demo, so I actually asked for the right thing. I can see what namespace it came from, and I can also see what labels are associated with that particular replica set. Sorry — a bit close to the bottom of the screen there. Okay. What's important to remember about resources is that they're versioned: Kubernetes versions all of the resources it returns across the API. And if you remember when I was talking about the separation of spec and status — when you get given a resource item back, it's versioned. We can see here that rs.meta.version is 1561. One thing I forgot to mention — I think I did mention it briefly — is that these collections are actually iterable. So I can do cool things like a list comprehension: I can say [rs for rs in cluster.replicasets], and there's only one, so you get a list with one of those things in it. Or I can build a list saying [node.meta.version for node in cluster.nodes], which gives me a list of the versions of all the nodes in the cluster. There's only one, and at the moment it's at that version. If you keep doing this, eventually you'll see a different version come out.
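Pulling the fetch-and-inspect steps above together, a small helper might look like this. It's a sketch against the kube API as shown in the demo: `describe_replicaset` is a hypothetical name, and `cluster` is assumed to be a kube.Cluster reached through a running kubectl proxy.

```python
# Sketch only: uses the kube calls shown in the demo (view.fetch(),
# item.meta.name/.namespace/.labels).  `cluster` is assumed to be a
# kube.Cluster; describe_replicaset is a hypothetical helper.

def describe_replicaset(cluster, name, namespace="default"):
    """Fetch a ReplicaSet item and summarise its metadata."""
    rs = cluster.replicasets.fetch(name, namespace=namespace)
    return {
        "name": rs.meta.name,
        "namespace": rs.meta.namespace,
        "labels": dict(rs.meta.labels),  # assumes labels act like a mapping
    }
```

With the cluster from the demo, `describe_replicaset(cluster, 'service-demo')` would return the same metadata we just inspected by hand.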
The state of the node resource has changed because something's changed about the node — it's used a slightly different amount of CPU and that's been reported, or whatever. So when you're interacting through kube, you need to make sure you've got the latest version of the object; otherwise you could be looking at stuff that's wrong or out of date. Okay, so back to labels. Let's have a look at the replica set object we had: it's got some labels associated with it — it's got one — and it's effectively a dictionary, so I can do things like look up the run key and get the value service-demo. It is, however, immutable, so you can't mess with it that way. That was a design decision of ours when we were writing kube: we wanted every update to be done explicitly, so you update a label using a set call. Let's add a new one: rs.meta.labels.set('foo', 'bar'). Not predictable at all, is it? Okay, so we get one back. Now let's have a look at rs — see if anyone can spot what's going on here. rs.meta.labels... and it's not there. That's really annoying. Well, that's because I'm actually looking at the old version of rs — the one that was returned from an earlier call, at that version. So what I can do is this: I'll be fancy and use a list comprehension, because I know what I'm expecting — [rs for rs in cluster.replicasets] — and I'll take the first value out of that, and I should get an rs. Now if I look at rs.meta.version, lo and behold, it's a slightly different version. Yay! And if we look at rs.meta.labels, we can see that our foo actually is on there now. So that's really nice. Updating a label is much the same as creating one, so I can do that too. It returned a new replica set item, which I didn't assign to a variable — so I'll do that, look at rs.meta.version, and let's have a look at the labels.
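The re-fetch dance above can be wrapped in a tiny helper. `latest()` is hypothetical, not part of kube; it relies only on what the demo showed — views are iterable, and items carry `.meta` with a name:

```python
# Sketch of the "always re-fetch" pattern: items are immutable snapshots,
# so after a set() call you fetch the resource again rather than expect
# the old item to change.  latest() is a hypothetical helper, not kube API.

def latest(view, name):
    """Return the current item with the given name from an iterable view."""
    for item in view:
        if item.meta.name == name:
            return item
    raise LookupError(name)
```

So `rs = latest(cluster.replicasets, 'service-demo')` would do what the list comprehension in the demo did: hand back the current version of the item.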
And we've got baz set on there. Cool. So, to go on about labels a lot: one, they were easy to demo here, and two, they're a really good way of managing your Kubernetes cluster. If you want to manage how your resources — pods, services, et cetera — are handled, then setting and resetting labels is a good way of doing it. We can also delete labels: rs.meta.labels.delete('foo'). I'm not going to get caught out a second time — I'm actually going to assign that to rs, then look at rs.meta.labels, and foo's gone. Okay, so that's the end of the live-demo bit; let me go back to the slides and briefly talk about some of the features I haven't got time to demo. In the latest version of kube we've got creating and deleting of resources, which actually makes it quite useful: you can go and create and delete pods, replica sets, services, namespaces, all that kind of stuff. It's a simple create call — you give it a JSON specification and it sends the call to Kubernetes. We've also got a watch API implementation which — well, come by the booth and let us show you, because it's really cool. My colleague who wrote that bit actually wrote a blog post about how tricky it was, and he's done you all a great service, because he's insulated you from all the horrors of doing watch support in Python over HTTP. It's neat. There's also — if you remember the fetch call I used to get resource items from collections — a filter capability, so you can filter the returned results on label values, which is also really natty, and I didn't get a chance to show you. Finally, the cluster instance, which is your entry point, has a proxy to the Kubernetes API. So if all else fails and you want to get at the raw API in Python while you're using kube, you just use cluster.proxy to do that.
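For flavour, the label-filtering idea can be sketched stand-alone. kube has its own filter capability whose exact interface isn't shown here; this hypothetical `items_with_label()` just iterates a view, relying only on views being iterable and labels acting like a mapping, as in the demo:

```python
# Hypothetical stand-in for kube's filter capability: select the items
# in an iterable view whose labels map `key` to `value`.

def items_with_label(view, key, value):
    """Return the items in a view carrying the given label value."""
    return [item for item in view
            if item.meta.labels.get(key) == value]
```

This is the same label-matching idea the scheduler and service definitions use, applied client-side.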
Okay, so time for questions. It's on Bitbucket, and I'm really interested to hear whether you all think Bitbucket sucks and it should be on GitHub, or whether it doesn't really matter to people. I've had a sharp intake of breath from some audiences when I've asked them about that — anyway, that's where it is, and I'm happy to move it. Check us out at cobe.io — I'm Cobe's CTO. Follow me on Twitter, because I'm funny. And yeah, I'll take questions if we've got time. Oh, good. Hello. Okay, so the question was whether we're here — yes, we're in the vendor area; we've got one of those little booths on the green and yellow carpet. You'll see that fancy graphic up on a monitor, and we can show you what we've got. It's still in beta, but we can show you where we've got to and you can have a chat with us. My colleague, by the way, is one of the developers on pytest, so some of you may have heard of him anyway. Yes — okay, so the question was whether that version number is the version of Kubernetes. It isn't: it's just an opaque number, and it represents the version of the resource as of the last time the resource changed. Depending on the resource type — the kind — that could be anything. For a node, it could be because some of the node's attributes have changed; for a replica set, it could be because a label was updated. That's when the version number is incremented — not just when you make a call, but whenever Kubernetes itself changes the resource. No — so what I should have said is: don't rely on the version numbers, just always get the latest version of the object. I showed you the version numbers — I was advised not to show people the version numbers, but I thought it was an interesting thing, so I did. Any others? Okay. Come to the booth and ask my colleague, and he'll give you a really good answer. I'd give you an average answer; he'll give you a really good one.
Thanks, guys.