Well, hello and welcome everybody, yet again, to another OpenShift Commons briefing. This week we have Gareth Rushgrove from Puppet with us. I'm really pleased to have him, not just because he's a Puppet expert but because he's one of the wonderfully opinionated, well-researched people that I always love to hear speak, so we're really lucky to have him here, especially straight off DockerCon. So please feel free to pick his brain in the Q&A. We're gonna let Gareth do his presentation first and a couple of little demos; you can ask questions in the chat, and then we'll leave time at the end for Q&A and open it up for a conversation. So without further ado, Gareth, I'm gonna let you get started and we will learn all about Puppet and OpenShift and how to use it with the new Kubernetes architecture.

Okay, so hi everyone. As I just said, I'm gonna do a quick introduction to some Puppet and OpenShift bits and pieces. I'll run through a few slides, then just get into some demos, and then any questions and answers people want to do. Quick introduction: I'm Gareth Rushgrove, one of the senior software engineers at Puppet, so I mainly work on building out Puppet and doing a lot of integration with tools like OpenShift and Kubernetes, and I'm garethr pretty much everywhere on the internet. I'm assuming people are familiar with Puppet, so I'm not gonna go into a lot of the details, but Puppet is a configuration management tool: a DSL-based language with a whole suite of tools for managing all sorts of infrastructure — everything from your switches and storage, through the things that most people do around host-level management, but increasingly there are tools around the container space and the orchestrated space as well.
How this came about, really, was that I've been doing a bunch of integration between Puppet and Kubernetes for a little while. Kubernetes provides that really strong API, that really strong set of primitives: the replication controllers, services, pods, etc. But getting up and running with Kubernetes — at the moment Kubernetes is still a relatively low-level component — and one of the things I found as I was doing that integration was how easy it was to get up and running with OpenShift. So that ended up with OpenShift being the environment I was using to demonstrate and develop against for the Puppet integration. So it's nice to then come full circle and demonstrate a bunch of that back to you all. For those that want to have a look at this afterwards, a good starting point is the actual module. Puppet is mainly extended by writing modules, so none of this integration requires changes to Puppet or requires new versions; there's just a module on the Forge — it's under my namespace at the moment, so garethr/kubernetes — and what it provides is the ability to describe Kubernetes in Puppet code. By which I mean this isn't about installing Kubernetes: if you're using OpenShift you already have Kubernetes — it's already there, exposed, set up, and managed — so it's not about installing and configuring that, though we might look at doing so with OpenShift in the future. It's about what you then do with Kubernetes. It's about, in this example, a trivial pod, but it's also about all of those primitives in the API: the replication controllers, services, deployments, etc. For those that are familiar with Kubernetes already, what you'll notice about the code examples on your screen is that it follows the same format in Puppet. It's just a different DSL.
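As a rough sketch of what was on screen — not verbatim from the module's docs, and the resource values here are illustrative — the kind of trivial pod definition just described might look like this with the garethr/kubernetes module, whose structure deliberately mirrors the Kubernetes API:

```puppet
# A minimal sketch of declaring a Kubernetes pod with the
# garethr/kubernetes module. metadata and spec follow the same
# shape as the equivalent YAML, just expressed as Puppet hashes.
kubernetes_pod { 'sample-pod':
  ensure   => present,
  metadata => {
    namespace => 'default',
    labels    => { 'app' => 'sample' },
  },
  spec     => {
    containers => [{
      name  => 'sample',
      image => 'nginx',
    }],
  },
}
```

Applied with `puppet apply`, the module's provider turns this declaration into the corresponding calls against the Kubernetes API.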
So if you're already familiar with the Kubernetes structure, the Kubernetes API, you're probably already familiar with how you would write that in Puppet, and that was an explicit design decision. In fact there are some tools provided as part of the module, which I'll demonstrate later, that can take a Kubernetes YAML file and generate the Puppet code for you. And in fact, nearly all of the code for the module itself is auto-generated from the Swagger specification, so keeping the module up to date is literally just a matter of regenerating it from the updated Swagger specification. Kubernetes has some really nice, API-driven features that make that sort of integration pretty seamless. So far, though, you might be thinking: well, I could describe it in one broadly-speaking data format, YAML, or I could describe it in the format I'm showing — there's not really much value there. And I'd agree with that. Really the advantage comes because Puppet is a programming language: yes, it can be treated as quite key-value-esque data, but ultimately you can create your own abstractions. For those that are familiar with Puppet, we have defined types and classes, and you can build those, and this is an example of a defined type. This isn't part of the module; it's just something I put together while doing some work around the Kubernetes guestbook example. One of the things you find with taking a purely data-centric approach to Kubernetes configuration — handwriting the YAML files, which are really the wire format for the API — is that you end up with a lot of repetition, precisely because it's the wire format for the API. But with Puppet you can actually abstract end users away from that and provide higher-level interfaces. So one of the things I'm often seeing is that you often end up with a pair, a controller and a service, and often they go hand in hand.
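That controller-and-service pairing could be captured in a defined type along these lines — a hypothetical sketch, not the actual guestbook code; the name `controller_service_pair` and its parameters are illustrative:

```puppet
# Hypothetical abstraction: one resource that expands into a
# replication controller plus the service that fronts it, so end
# users never hand-wire the label selectors twice.
define controller_service_pair (
  String  $image,
  Integer $replicas = 1,
  Integer $port     = 80,
) {
  # One set of labels feeds every resource, so changing it here
  # propagates everywhere it is used.
  $labels = { 'app' => $title }

  kubernetes_replication_controller { $title:
    ensure   => present,
    metadata => { namespace => 'default', labels => $labels },
    spec     => {
      replicas => $replicas,
      selector => $labels,
      template => {
        metadata => { labels => $labels },
        spec     => {
          containers => [{ name => $title, image => $image }],
        },
      },
    },
  }

  kubernetes_service { $title:
    ensure   => present,
    metadata => { namespace => 'default', labels => $labels },
    spec     => {
      ports    => [{ port => $port }],
      selector => $labels,
    },
  }
}
```

A user would then declare `controller_service_pair { 'frontend': image => 'nginx' }` and never touch the repeated selector wiring underneath.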
The API treats them as individual things; however, the user interface to that doesn't necessarily need to, and in Puppet you can just create your own user interface and build your own abstractions using the built-in, standard Puppet tools. So that's a simple example, and I'll show some examples of that as well. For those that want to pick through some of the details and see some of the code, there's a blog post up on the Puppet blog from a month or so ago, all about a bunch of the examples I'm going to show — going from setting up an OpenShift cluster locally using the provided Vagrant machines, to trying out the Puppet code, and getting the configuration in the middle done. And there are more blog posts as well, both on the Puppet blog and on the Kubernetes blog, about the advantages of higher-level interfaces to Kubernetes. Ultimately you can use many different interfaces at the same time for different purposes: Puppet's declarative nature means it's very good for high-level, change-controlled, managed change, whereas something like the graphical user interface in the console is fantastic for seeing the state, knowing what's going on at any given moment. I'm also going to add a quick bonus example that I think is probably relevant to folks here. I've recently done a little bit of work around using Puppet with Atomic, and with Atomic often being used under the hood of OpenShift, I thought that would hopefully be interesting to the same audience as well. All of that's powered by a set of open source images we've pushed to Docker Hub, so I'll talk about that a little bit too. So hopefully that's a reasonable introduction, and people are still here, and I'll hopefully be able to click through to my console and show some demos. Yeah, we're all still here.
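The demo coming up uses a `redis_cluster` defined type; as a hedged sketch of what such a high-level interface can look like (names and internals assumed, not the exact demo code — the real one composes a further controller/service abstraction and a master as well), note the Puppet 4 typed parameter and the single labels variable feeding both resources:

```puppet
# Hypothetical high-level interface: "give me a Redis cluster".
# The Integer type on $size means passing a string or an array
# fails fast with a clear error rather than producing bad API calls.
define redis_cluster (Integer $size = 2) {
  $labels = { 'app' => 'redis', 'cluster' => $title, 'role' => 'slave' }

  kubernetes_replication_controller { "${title}-slave":
    ensure   => present,
    metadata => { namespace => 'default', labels => $labels },
    spec     => {
      replicas => $size,   # bump this and re-apply to scale the cluster
      selector => $labels,
      template => {
        metadata => { labels => $labels },
        spec     => {
          containers => [{ name => 'slave', image => 'redis' }],
        },
      },
    },
  }

  kubernetes_service { "${title}-slave":
    ensure   => present,
    metadata => { namespace => 'default', labels => $labels },
    spec     => {
      ports    => [{ port => 6379 }],
      selector => $labels,
    },
  }
}
```

Usage is then a one-liner — `redis_cluster { 'first': size => 5 }` — and because Puppet is idempotent, re-applying after editing `size` converges the running cluster to the new replica count.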
All right, so the screen will be a bit small, but I'll make it bigger. That looks perfect. Does that look good? That looks really good. Okay, so I'm not going to set up an OpenShift cluster — I've already got one up and running, so I'll let the screen refresh; it sometimes takes a little while. It's just a single-node OpenShift cluster, brand new — 1.1.3 — so there's nothing running so far. There's nothing else there; it's a totally fresh cluster. I've also done the configuration: the Puppet module simply uses the standard kubeconfig, so you just need to put it in the relevant place for Puppet to discover it, and then everything just works. Again, the blog post covers setting things up, so I won't go into that in too much detail. The module ships with a number of different examples, and the one I'm going to look at today is setting up a small Redis cluster. So let me open up a Puppet code file. Here you can see the module has types for basically all the primitives in Kubernetes: we've got a kubernetes_replication_controller type, and here we're setting up the master — we're providing the metadata, we're providing the spec. Again, this is the very low-level part, so you can see there's a lot of duplication, in the same way as in the YAML file. I'll come on to how Puppet can be used to solve that repetition problem shortly, but this demonstrates the lowest-level parts. If I go down you'll see a service, and you'll see another replication controller for the slaves, so we're setting up a master-and-slave-based Redis cluster. So far this follows exactly the same format as the YAML file; in fact, if we had a YAML file that described this, we could run the generate command and that would give us the Puppet code. For the purposes of this I'm going to run a puppet apply. Puppet is often run with an agent and a master — a Puppet server that you connect to, and things call home — but I'm just using a local puppet apply here to demonstrate the types and the providers provided with the Kubernetes module; this would totally work with an agent as well. I'm also going to run it with --test, just so we get a bit more output. What's happening under the hood is that the Puppet module simply reads the Puppet code and converts it into the relevant API calls to Kubernetes on top of OpenShift, and OpenShift does the rest. If we flip through the output you can see it's checking whether a few things exist and creating the things that aren't there — so it creates, there we've seen, a couple of replication controllers and a couple of services — so in theory we should be able to see them now up and running in OpenShift. So we've got the services — we can see redis-master and redis-slave — and we should see a number of pods starting to launch or already running. So far this is very similar to how you might use kubectl to do exactly the same sort of thing with kubectl apply, but one of the things I can do here is use Puppet's idempotent nature. We can keep running the same commands, we can keep running the same code through Puppet, and Puppet doesn't try to recreate things — it doesn't create new things, it simply says, oh, those things already exist. So in a production setup you can have that running constantly, allowing you to change the code if you want to change the system, but also giving you that constant tick, that constant feedback, that your system is still configured exactly how you want it. That has the advantage that if people do have individual command-line access and start doing things — maybe someone, for whatever reason, logs in and changes something unbeknownst to the rest of the team — Puppet would put that back and tell you about it. So if I were to, for example,
delete a service — let's delete that. So the service is gone, and if I run Puppet again, instead of finding it, it should find that it's missing and recreate it. Hopefully — there we go. We see a notice there that the redis-slave service was created, and it should all be back. So again, it's the same pattern of usage as with Puppet on a host managing packages or users or files, and again, in a production setup you'd have all the reporting and all the data going back into something like PuppetDB or another tool. So there are some advantages there over just using the standard YAML files and kubectl, but there's still a lot of syntax — let's see what we can do about that. Let me close that single demo file, and I'll work backwards here as an example. I also have a defined type called redis_cluster; it takes a standard title, in this case 'first', and it takes a size. I'll show how this is implemented in a second, but I'll give it a quick run first. Here I'm applying the folder, which then runs the files in there, and we see very similar output to the previous one: it created two services and two replication controllers. So we should be able to see those, and you can see the ones that are prefixed with 'first'. That setup took a size of two, and if you look here we have a single master and two slaves. Let's play around with that a bit: let's bump the size to five and rerun our Puppet code. Again, some of those definitions don't need to change, but one of them does, and we see the change notice appearing there. The output is a little bit verbose — it contains everything — and it would be nice to format that better for users in a CLI; that's definitely something I'm interested in doing. But the change results are there, and if you had a report on the other side, you'd have something to parse that out of. Anyway, what we should see now is: well, we updated the replication controller; the replication controller then said, I'm running two replicas apparently and I now should have five; and the pods have all launched. We could go the other way too. What's maybe even more interesting is: well, I've created this high-level primitive called redis_cluster, and now I can reuse it. Rather than somebody having to know all of the implementation details, now that we know what a Redis cluster is, we can create lots of them — we've got a high-level interface just by using Puppet as a programming language, and that's the real key to the advantage here, on top of just using data. You can see here that there's now a second master starting to come up, and a few slaves, and you could imagine changing those numbers and adding more, doing that at a very high level. And we'll look at the implementation here. Under the hood, redis_cluster is simply a defined type in Puppet. It takes an argument of the size, but it has a default of two, so you wouldn't even have to provide a size — you could just say, give me a Redis cluster with this name. For those that haven't seen it, Puppet 4 has full typing, so even here I'm saying that this has to be an Integer; if you pass a string or an array, it will give you a nice error. So again, we can create these high-level interfaces with fully typed parameters, which really helps people avoid errors. We've got a controller-service pair — I mentioned this is another abstraction, so there are a few abstractions going on here, but you can keep digging in, and it's only at the bottom layer, only here, where you see the actual low-level Kubernetes types being used. I mentioned before that the Kubernetes wire format is quite repetitive, but ultimately it doesn't have to be, because we have the ability to use variables. So yes, we're passing the labels into the metadata and into the spec, so
everything's wired up correctly, but if I want to change the value of app or the value of role, I change it at the variable level and it propagates through all the places it's used, including multiple different resources — the kubernetes_service and the kubernetes_replication_controller in this case. I'm using replication controllers and services as good high-level examples, but the module supports pretty much all of the different resources in Kubernetes, because, as I mentioned, the code is all generated from the spec. So that's hopefully a good introduction to the Kubernetes module. We could go on and make an entire setup — other high-level types specific to our business problems — and demonstrate it integrating with the rest of the tools, but I'll leave that as a bit of an exercise for people who are interested in following up. So — did you have another demo you wanted to do there? Yeah, I've got one quick further demo as well. Okay, go for it.

So I mentioned briefly that we've just shipped a number of images to the hub containing Puppet software. These are containers with Puppet Server — the open source Puppet Server — and PuppetDB, a number of the dashboards from the community, and a Facter image. One of the advantages of doing that is that we can run on top of something like Atomic, where the unit of software isn't an RPM, the unit of software is a container. Suddenly, by shipping those, we have pretty good Atomic support. I'll just demonstrate that really quickly. I'm not running on an Atomic host, but I have Vagrant running a virtual machine running Atomic, and that hopefully should be up and running. The code for this is actually available in the examples repository for the puppet-in-docker project, so if you want to try this out yourself you can just go along and type vagrant up, but I've already got it up and running for this quick demo. I'll briefly show the Vagrantfile: I'm using the CentOS Atomic host box, I'm giving it a bit more memory so it's a bit snappier, and when the machine comes up there's a provisioner that pulls down the relevant images I'm using in this example and runs them — basically running a Puppet server inside a container on an Atomic host. So you could run an entire Puppet infrastructure this way if you wanted to. I've also set up a couple of provisioners for running Facter and running Puppet, and I'll show those running — these are just shortcuts for running things, as I say. How we're doing this is that the container mounts the volumes from the Atomic host and connects to the host network, so the output here isn't of the container context, it's of the underlying Atomic host. So you can see a bunch of things from the host, not from the container — here we go, the OS: you can see it's saying this is a Red Hat, sort of CentOS, machine, and we'll be adding proper support to Facter for detecting that it's actually an Atomic machine, and then you could do all sorts of other nice things. Because Facter can now detect things about the Atomic host, Puppet can be used to manage things on that underlying host — and that might include starting other containers, or putting some files on the underlying host, or users or firewalls or anything you would normally use Puppet for — and you can now do that on Atomic. Again, the example here is in a GitHub repository, in the puppet-in-docker examples repository under the puppetlabs name. And with that, those are my demos; hopefully they were interesting, and I'm happy to take some questions. That's
great! Can you pop over to that GitHub repo you were just talking about and throw it up on the screen, so that we end on that and people know where to find all these examples? It's pretty mind-boggling, all the ways you can schedule stuff on Kubernetes and OpenShift and Atomic, and I've always had a great love for Puppet, so I'm looking forward to getting to use it some more again. Yeah — the examples repo, with the puppet-in-docker work and the Atomic example in particular, is on the screen. Perfect, perfect. All right, well, I'm opening up the questions. We've had a couple of really basic ones, which I've tried to answer in the chat, about Kubernetes and other things — and, as always when you're talking OpenShift, someone always brings up Ansible — but there are so many tools out there right now that different enterprises are using, and we really just want to make sure that whatever the tool of choice is, it's available for people to use. I think Puppet is one of the widely popular offerings, so I'm really appreciative of getting the chance, and of having you create this module and do this, for all of the Puppet users out there in OpenShift land. It's pretty cool.

Hi — okay, this is Rob. Great, I have another question. I handle ISVs that we have partnered with, and we're directing a lot of them now to OpenShift as a means of integration with our other products. We say, well, we'll get them onto OpenShift, we'll put Fuse on OpenShift, we'll put other things on OpenShift — it's a great place to integrate. What can we do with Puppet to simplify the onboarding of people into Kubernetes? Because often they come in knowing nothing about OpenShift; they've typically gone into the Google or Microsoft or Amazon cloud. What can Puppet with Kubernetes do to make this as painless a process as possible? Because my concern is that we often throw up — it's like, well, to get to our unfamiliar technology you have to learn two or three other technologies first. These guys probably don't know Puppet, they don't know Kubernetes, they don't know OpenShift, and they're just going, look, we just want to get this thing coordinated with one of our other products. What can we do to simplify that? My ask is: how can we make Puppet, Kubernetes, and OpenShift practically invisible to them?

That's a good question. I don't really want to make Puppet invisible to them. I think what it is, is that a lot of people already have the Puppet expertise in-house; they're already using it for their CI/CD workflows, they're already using it to script all kinds of things, because it's wonderful. What we're trying to do is keep them in a familiar world, and then allow them to use the tools they know to move forward into a cloud-native OpenShift and Kubernetes universe.

Well, I was wondering the same thing. I think Puppet's probably not going to be the de facto default entry point for people coming in as you described, but Puppet is in widespread use across lots of organizations, so there's a lot of prior art around the Puppet language. What we're finding is that Puppet works as a universal tool, particularly in organizations where they're running a bunch of AIX stuff, a lot of RHEL stuff, maybe even, should we say, some Windows bits and pieces, and they're now looking towards running OpenShift — Puppet as a universal tool and language across that sort of diverse infrastructure is the sweet spot. But not everyone will have that sort of diversity, and there
are closer, more native Kubernetes tools and native OpenShift tools; I think all of them can and will coexist, mainly around the strength of the Kubernetes API.

Well, here's, I guess, the crux — here would be the partner ask, and I know we can't achieve it, but how can we get as close to it as possible? What they would like is: they go to a website, they upload their app, hit submit, and — magic — it's on OpenShift as an available container, kind of like the old gears, where other people can use it now, and it can be spun up and integrated with everything else. I mean, that's everybody's ask: make me an easy button.

Yes — how close can we get to an easy button? We're pretty close to that easy button right now, actually. There are two prongs to this conversation. One is that we can make OpenShift the easy button for enterprise Kubernetes deployments: if you want to deploy Kubernetes and OpenShift — say you're a customer or an end user in some operations house — there's a real simple set of scripts, and sorry, they're all Ansible, Gareth. That's one path in. The other path in, Rob, that I'd ask you to take a look at now, is OpenShift Online using OpenShift 3 — the new version of OpenShift — which is now up and available as the developer preview. What you're describing is a little bit based on your memories of OpenShift 2, with our cartridges and gears and our marketplace and quickstarts and all of that, where you could really get in quickly, and reuse and save the quickstarts so you could get an app up and running, and anybody who had access to that quickstart could start it really quickly. The same user experience from the web console, with the new release of OpenShift, is there. It's a little more complicated, because there are a few more things you have to put in place in order to make a containerized application — what we call an application — aware of all the other containers in its service.

Right — and I think where we're departing in middleware is that these are not just Java; these are not just the EAR and WAR applications we traditionally see. We're recruiting things like machine-learning companies, big-data analytics, blockchain, others. Often it's not Java — and one of the reasons I'm interested is that I know one of them is an Ethereum engine, so these frequently aren't even written in Java — and so it's: how do we bridge that gap?

As a result, I'd actually ask you, as we have, to take a look at the new online experience, and also take a look at — under OpenShift — can you send me a link for that? Yeah, I will, and a link — take a look at the source-to-image set of tools. Because once people start creating their images and pushing them to private registries — which are often hosted on OpenShift as well — that's how you get your reusable images, which other people within that company or organization who have access to that registry can use and deploy from the OpenShift console. That, I think, is the methodology. And I know we've gotten off track here, but that's how these sessions always go.

We're nearly back to my talk at DockerCon, around the diversity, but also the fairly early stage, of a lot of the container build tools. Like we were saying, the next six to twelve months is going to see a lot of better tooling, improvements to what's there, and hopefully some —

I'm sorry, I didn't mean to get you off topic, but for us this is a big one, because we're steering this way and we're looking for anything that makes their life easier to get there, so that we don't have to
say: okay, you guys are absolute experts in R and every statistical app ever built, but here's how you get into OpenShift and Kubernetes.

Yep. So, as you may or may not know, I'm a Python person, and we have a couple of good write-ups on the blog — I think Graham Dumpleton did them — on doing basically that, not for R but for Python: creating custom source-to-image tooling for Python, to get all the bits that we need. You'll also see a directory in the OpenShift world — I'm trying to think of where in the repo it is exactly, but I'll dig it up and post it with this recording. With source-to-image, you're building the tooling — or someone is, whether it's one of the Python folks on the evangelist crew or one of the engineers — so you can build reusable tooling to turn out as many images as you want, for the R community for example. So you're starting to get the baseline toolchain for build services and continuous integration and deployment of images, centered around tools like S2I and some of the other high-level tools Gareth mentioned for doing all this — it's all Docker content — and pushing them to private registries, which you can then expose organization-wide and set user access to. So there is a whole workflow, and that's probably a topic for yet another briefing sometime soon. It's a good question, and I think the documentation is probably lacking more than the tooling at this point, so we'll have to work on that a bit more. Good question.

There's a couple of other folks on the briefing right now — if there are any other questions, you can toss them in the chat. If not, this is about the normal length that we do our briefings for, so it's pretty good, and great content. I really loved the Atomic demo, so I think we're going to have to push a little bit more content around that as well; I think that's a great use of Atomic and Puppet together. So thanks, Gareth. Thank you! All right, thanks for having me. All right — and there will not be a briefing next week, because it's Red Hat Summit week, so we'll be giving ourselves a break (which is not really a break) and then coming back the following week. So thanks again, everyone have a great week, and hopefully we'll see you all at Red Hat Summit.