Welcome to Expanding Your Spinnaker Pipeline to the Desktop. First, a quick word about Kenzan, the company I work for. We're a consulting company that specializes in application development, among other things you can read on the slide. We do a lot of work with microservices in the cloud, including helping customers with digital transformations. We've been working with the Netflix OSS stack for quite a while now, and we've been a collaboration partner on Spinnaker since before its initial release. In those digital transformations, we come in and help companies go from quarterly waterfall monolith releases to daily, whenever-you-want microservice releases.

A little about this talk: this is a proof of concept. It shows that something is possible and provides some value, but it's not something you would ever want to deploy into production as is. I'll be releasing all the code I've developed as open source in the next couple of days, after I clean up the git repository, so you'll be welcome to hack on it and use it; I think it's pretty nifty. What this is not: it is not secure. I have a plain-text password in there somewhere, because again, this is a proof of concept. It's not automated, there are some manual steps involved, and it's not production ready. So, getting that out of the way.

What's the big idea? The big idea is that when you're deploying software using a pipeline, that pipeline shouldn't start in the cloud, because it really starts when you make that commit, that code change, on your local machine. Last winter I was working with a particular client on one of these digital transformation projects, and we set them up with a development environment and a production environment, all on AWS.
But then the developers wanted to be able to test things locally before they started pushing things out. They asked, okay, how do we bring up a whole stack on our local machines? We were using ECS in AWS, so we obviously couldn't extend that back to the desktop; they had to use Docker Swarm locally. And then the question became: why can't we use Docker Swarm to deploy or test in our production environments? That got me thinking: wouldn't it be nice if we had one single platform to use everywhere, that everybody could deploy to just as they would any other environment, with no differences whatsoever?

So what is that platform? Kubernetes fills that bill. You can run Kubernetes on just about anything and deploy to it the same way, whether it's GCP, bare metal, or Amazon. Who's using Kubernetes today? All right, lots of people. Who knows what Minikube is? Hey, just about the same number of people. And is everybody using Minikube today? A few people, a little less, okay. When people ask me what Kubernetes is, I say it's the new Linux: it's the new operating system for all your things.

So how do we do Kubernetes locally? That's Minikube, which just about everybody here has heard of. It's the Kubernetes distribution for your desktop, and it works just the same: you can use the same kubectl to do anything, you just have to get it up and running. It runs on Windows, Mac, and Linux. So that's where I decided to start with the extension to the desktop.

There are a couple of problems with that, though. Running Kubernetes locally, running anything locally, is not really part of the cloud. It's the same platform, but you're not on the cloud. You cannot form direct connections between something behind a home firewall router and your environment up in GCP or AWS, so you can't connect them directly.
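As a quick sketch, and assuming minikube and kubectl are already installed, the local cluster really does behave like any other Kubernetes target. The image and deployment names here are just illustrative:

```shell
# Start a local single-node Kubernetes cluster (pick the driver for
# your hypervisor; VirtualBox was common at the time of this talk).
minikube start

# The same kubectl workflow you would use against GKE or AWS:
kubectl get nodes
kubectl run hello --image=nginx
kubectl get pods

# The Minikube API server listens on port 8443, which matters for
# the tunneling trick described later in this talk.
kubectl config view --minify
```

These commands require a local hypervisor and cluster, so they are shown here only to make the "same platform everywhere" point concrete.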
So what do we do about that? How do we make this work if we want to extend our pipelines all the way to the desktop? That's when I came up with this wormhole thing, and that's the name of the project. I'm a bit of a sci-fi nerd, so I was thinking Farscape and Stargate, where you have a gateway going back and forth between your cloud cluster and your local cluster. I decided to create a simple pod that runs in the same Kubernetes cluster where I'll be running Spinnaker, and that pod listens for a connection from your local machine running Minikube; then we do a little bit of work with it from there.

Here is the deployment I created, and it's really pretty simple. Like I said, this is a proof of concept, and it's pretty hacky. I'm creating an SSH listener and exposing it as a service, as you can see there, but I'm also exposing one additional port, 8444 in this case. When you connect to the Minikube API, it's on 8443, so we forward 8443 up into our little SSH pod and then expose that as a service, so the rest of the cluster can talk to 8444 and make API calls to the Minikube cluster. And here's how we do that, with a little tool called socat. I initiate the reverse SSH tunnel connection, and you can see this is part of the manual, hard-coded part: that's the IP address of my load balancer up on GCP, and then the IP address of the Minikube API gateway.

So that's all great, and it worked pretty easily, but what about this whole Spinnaker component? What is Spinnaker? Who's heard of Spinnaker, outside of the keynote this morning? Okay, a good number of people. Who here has used Spinnaker? I'm actually surprised that many people have used it. It's a little intimidating to get set up; the people who have tried it are laughing because it's true.
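To make the wormhole pod described above concrete, here is a minimal sketch of what its deployment and service might look like. The names, labels, and image path are hypothetical reconstructions; the key points from the talk are the SSH listener on port 22 and the extra relay port 8444:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-wormhole
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-wormhole
  template:
    metadata:
      labels:
        app: k8s-wormhole
    spec:
      containers:
      - name: wormhole
        image: gcr.io/my-project/k8s-wormhole:latest  # hypothetical image path
        ports:
        - containerPort: 22    # SSH listener for the reverse tunnel
        - containerPort: 8444  # socat relay to the tunneled Minikube API (8443)
---
apiVersion: v1
kind: Service
metadata:
  name: k8s-wormhole
spec:
  type: LoadBalancer          # gives us the public IP the desktop dials into
  selector:
    app: k8s-wormhole
  ports:
  - name: ssh
    port: 22
  - name: minikube-api
    port: 8444
```

The LoadBalancer service is what makes the pod reachable from behind a home router: the desktop dials out to it, so no inbound connection to the laptop is ever needed.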
We've actually made a lot of progress on getting Spinnaker set up over the last couple of years, but it's still not for the faint of heart. There are various codelabs out there and ways to go deploy it, and at the end of the presentation I give a link to a document online that will let you set this up on GCP really quickly and easily. Again, though, it's not going to be a production kind of setup.

Back to the main point of the slide: Spinnaker is a continuous delivery tool, and it's very focused in what it does. Instead of, say, Jenkins pipelines, where you have to construct everything yourself using stages and Groovy-like code, Spinnaker lets you specify an entire pipeline, with all the different things you want to do, in either JSON or YAML, and then connect to any number of cloud providers: Kubernetes, GKE, AWS, Azure, and I think there's now OpenStack support. It's really nice in that you can define these very simple pipelines that then handle all of the actual deployments for you. You can have a single pipeline deploying to five different clouds, or to four different regions of a single cloud, and it's completely transparent, instead of you having to make all those calls yourself.

But how do we configure Spinnaker? This is another one of the pain points, because all of the configuration files are in YAML, and if you're running in an operational model you have all your microservices broken out into separate containers or separate instances. So there's a tool that was developed called Halyard. Halyard supports GKE as well as a local distribution now. I think the development is still underway and it has a little ways to go, but that's what I decided to use, in distributed mode, to configure my Spinnaker instance and add in multiple different cloud providers.
And Kubernetes is a cloud provider; Minikube is a cloud provider in this case. I don't like this view, but those are all the commands I used to configure my Kubernetes cluster using Halyard, and I'll show you those again and walk through them in a minute. Then, running hal deploy apply, it goes through, creates the entire cluster for you, and adds in these configurations.

The reason I wanted to use hal, instead of hard-coding the information about my Minikube cluster, is that I envisioned this being a dynamic thing. What I wanted, and I didn't quite fully automate this part, was to connect into this container that I launched on my Spinnaker cluster, run the hal commands to add the configuration for my particular Minikube cluster, and apply that config change so that the cluster would show up automatically inside Spinnaker. The advantage would be that any developer could add their own Minikube in there automatically, using a naming convention, and then you could start constructing pipelines to test and deploy there, and either add that into an existing pipeline or have it trigger another pipeline, which I think is the best way.

That's another interesting part of Spinnaker: pipelines can trigger pipelines, or include pipelines. So I have a pipeline, dynamically created for my Minikube, which calls my cloud Jenkins. We're offloading all the compute for compiling into Jenkins in the cloud, deploying the result back into my Minikube cluster, and then, once that completes successfully, it can trigger the same standard promotional pipeline that everybody else would be using before a new version of a microservice goes out to the production environment.
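As a rough illustration of that build-deploy-promote flow, a fragment of such a pipeline in Spinnaker's JSON might look something like the following. All names, the Jenkins master, the job, and the account are hypothetical, and Spinnaker's real schema has more required fields; this is only meant to show the shape of stages chained by refIds:

```json
{
  "name": "minikube-build-and-deploy",
  "triggers": [
    { "type": "git", "enabled": true }
  ],
  "stages": [
    {
      "refId": "1",
      "type": "jenkins",
      "name": "Build in cloud Jenkins",
      "master": "cloud-jenkins",
      "job": "my-service-build"
    },
    {
      "refId": "2",
      "requisiteStageRefIds": ["1"],
      "type": "deploy",
      "name": "Deploy to Minikube",
      "account": "minikube-dev-alice"
    },
    {
      "refId": "3",
      "requisiteStageRefIds": ["2"],
      "type": "pipeline",
      "name": "Trigger standard promotion pipeline",
      "pipeline": "standard-promotion"
    }
  ]
}
```

The last stage is the "pipelines trigger pipelines" idea from the talk: each developer's dynamically created Minikube pipeline can hand off to the one shared promotional pipeline.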
This is an example of a pipeline I put together. It doesn't follow that exact model I was just talking about, but it has all the stages in one place. It starts with the Jenkins build, and then you can see, it's hard to read, but it says deploy to Minikube, so it would go do that automatically and then call another Jenkins job to run tests against my locally deployed version of the application. That would of course involve forwarding some more ports back to the local desktop environment so you could access the application, but again, proof of concept. From there it deploys into the test environment and runs your integration tests as another Jenkins job.

I keep mentioning Jenkins because Jenkins really is the Swiss army knife of anything you want to do in continuous delivery, and it does those things very well: run scripts, run applications, you could use it to run Terraform if you wanted to. Those are things that Spinnaker knows it doesn't do and doesn't want to do; again, it's focused very closely on actually executing and orchestrating these pipelines, so Jenkins is an integral part of that. Many people ask me, why do I have to include Jenkins if I have Spinnaker? That's why. And conversely, why do I need Spinnaker if I already have Jenkins? Because it's so easy to create those pipelines in Spinnaker.

Some of the other advantages of Spinnaker, and I can show you a dashboard in a minute, are that it gives you one view of all the different aspects of a pipeline, one place to see the whole thing. If you have a pipeline of pipelines deploying to multiple accounts or executing tests in parallel, you can open up that view, click into each of those pipelines, and see exactly what's going on. You can view the status of your clusters, whether they're instances or in Kubernetes, and see the number of containers or pods or instances in there, as well as load balancer status. You don't need to toggle back and forth between three or four different consoles to get that view of what's going on. And when you do want to get down into those other consoles, you can. With Jenkins, for instance, it provides a link to the exact job number that ran as part of your pipeline, so when something fails you can say, okay, I know it failed in this stage, click on that particular job, drill all the way down, see why, go make a code change, fix it, and it triggers another run of the pipeline and off you go.

So this is where I was going to show a few different things; let's see how badly I can mess this up. All right, here is the Dockerfile I created for this little SSH tunnel pod. I added in a bunch of utilities to help with debugging as I go through the process. Of course we needed the OpenSSH server, and I'm basing it on Ubuntu because that provides the easiest, fastest route to all these utilities. You can see here is that hard-coded password. I wanted to extend this to read in my public key so I could just use that, but this was working fine for the proof of concept. We then add a number of things from Minikube into the pod, for Halyard's benefit: Halyard requires the kubeconfig in order to add the cluster into the Spinnaker configuration, so I'm including the API cert and key and the config file, as well as kubectl.
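A minimal sketch of what that Dockerfile might look like. The package list, file paths, and password are hypothetical reconstructions from the description above, and the hard-coded password is exactly the kind of thing the security disclaimer is about:

```dockerfile
# Proof-of-concept SSH tunnel pod; Ubuntu for easy access to utilities.
FROM ubuntu:16.04

# OpenSSH server plus a few debugging utilities (names illustrative).
RUN apt-get update && apt-get install -y \
        openssh-server socat curl netcat-openbsd dnsutils && \
    mkdir -p /var/run/sshd

# Hard-coded password: fine for a PoC, never for production.
RUN echo 'root:changeme' | chpasswd && \
    sed -i 's/PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config

# Minikube artifacts so Halyard can register the local cluster:
# API cert and key, kubeconfig, and the kubectl binary itself.
COPY apiserver.crt apiserver.key /root/.minikube/
COPY kubeconfig /root/.kube/config
COPY kubectl /usr/local/bin/kubectl

EXPOSE 22 8444
CMD ["/usr/sbin/sshd", "-D"]
```

Reading the public key in at build or run time, as the talk suggests, would replace the chpasswd line and remove the plain-text credential.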
Halyard uses kubectl to execute different commands within Spinnaker, or rather, Spinnaker uses kubectl. Then we're just exposing ports 22 and 8444, and that's about it for the Dockerfile.

So here's the deployment config I was talking about. I built that container and pushed it up into my own GCR repo, calling it k8s-wormhole, and here is the very simple service definition. From there, once we've launched that pod and created that service, we can go ahead and create the reverse tunnel. There's the reverse tunnel, and we can see that we have 8443 on localhost. Then we turn on socat, and that exposes 8444 in the container to the service.

From there, I had to create a Halyard container. I imagine these should be combined; I didn't quite get that far along, but it could easily be done so that Halyard is included within that same pod. I know it's a bit of an anti-pattern to have so much going on in one pod, but it makes things a little easier. I'm using a persistent volume to store the configuration and deploying the standard Halyard stable container from GCR, and that gives us a Halyard container.

These are the commands I ran once I got onto the Halyard container. Here again is more manual setup: I had to add those Minikube keys in there, and I had to do a number of things I haven't really talked about at all in order to get GCE and GCP configured. For those, I just followed the documents on the spinnaker.io website, and while they're not perfect, and I'd love to find time to update them, they do work if you stick with it. I manually started up the Halyard daemon and then ran these different configurations. So this is how easy it is to actually add in cloud providers.
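Putting those pieces together, the desktop side plus the in-pod relay and the Halyard registration might look roughly like this. The IP addresses, account name, and registry name are all hypothetical, and the exact hal flags may vary by Halyard version:

```shell
# From the desktop: reverse SSH tunnel into the wormhole pod via the
# GCP load balancer IP (hypothetical). Remote port 8443 inside the pod
# now reaches the local Minikube API server.
ssh -N -R 8443:192.168.99.100:8443 root@35.200.10.10 &

# Inside the wormhole pod: relay the service-exposed port 8444 to the
# tunneled 8443, so the rest of the cluster can reach the Minikube API.
socat TCP-LISTEN:8444,fork,reuseaddr TCP:127.0.0.1:8443 &

# On the Halyard container: register the Minikube cluster as a
# Spinnaker cloud provider, then apply the change.
hal config provider kubernetes account add minikube-dev-alice \
    --docker-registries my-gcr-registry \
    --kubeconfig-file /root/.kube/config
hal deploy apply
```

These commands need a live GCP cluster, a Minikube VM, and a running Halyard daemon, so they are a walkthrough of the demo rather than something runnable in isolation.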
So for example, we added in a registry here, and then we added in our provider and gave it a name. You have to specify which Docker registries you want to use in conjunction with the Kubernetes account, but you can automate that as well. So: config provider, Kubernetes account, add the name, then you apply the config, and it goes out and updates everything and you're off to the races. Back to the presentation.

The value I see in this is that, again, every developer can have a cloud-native environment locally. You don't have to spend extra money on compute resources up in the cloud; I think just about everybody has 16 gigs or more on their laptops these days, so there's plenty of room to run it there. It also decreases the latency of connecting to the cloud, and it sidesteps any security group rule issues or IAM issues by running locally. And then you have that connected to the cloud, so it appears as one large environment and gives you a whole bunch of visibility. For me, the most interesting part of this was not just getting Spinnaker connected; it was the idea of being able to talk back and forth to a local Kubernetes cluster that you normally wouldn't be able to reach. I love the idea of having no restrictions on where I can run things and where I can access them from. I thought that was the neatest part.

Lessons learned: Kubernetes really can do all the things. It's great as a new platform for use just about anywhere, and I think really soon we're going to see it everywhere, and nobody will really be using instances anymore. At least I hope not, because instances are slow. You can bridge a whole bunch of networks just using simple SSH, and while SSH is simple to use, it's probably not the best; I think there are better ways to create these tunnels that I look forward to exploring further. A note on security: again, this is not secure.
You should not run this in production, and there's the example of where I had that plain-text password. Other things I'd like to explore adding to this include something like Telepresence; if you haven't looked at Telepresence yet, it seems pretty interesting.

So here are a couple of references. The first link is the one for getting Spinnaker up and running on GCP quickly, painlessly, easily. If you want to kick the tires on Spinnaker, I highly recommend using it. As I told my boss, though, he should expect the expense report from my personal GCP account. A couple of other references: the docs on Halyard, as well as the complete command reference for Halyard. There are a lot of different options, and they're fairly consistent, but when you get down into the details of setting up a particular cloud provider it can get a little strange, so that's a really helpful link to have on hand. Want to learn more about us, or Kubernetes, or Spinnaker? This is where you can find us. Thank you. Any questions?

"I installed it yesterday, and when I started looking at the docs I was puzzled; do you have any recommendation among the different paths?" So the question was: he set up Spinnaker and it wasn't working; do I have any recommendations for setting it up and getting it running? Yes, that one link I provided works perfectly. It's this one right here: the Google Cloud solutions document on continuous delivery with Spinnaker and Kubernetes Engine. That's enough to get you acquainted with Spinnaker. We've been meaning to put out a document on how to set up a production Spinnaker for a while; we actually did work on a Terraform setup to get it up and running, too. That's something we'd like to get out and just haven't yet. There are other documents out there, but that's the one I recommend for getting going.

You had a question? Yeah, nope. Slack, yes. There is a Slack channel; there's a Slack group for Spinnaker, and it's pretty active.
I'm not going to pull up my Slack, because there are way too many other things going on in there, but I can't remember the exact name; I think it's spinnakerteam.slack.com. There are a bunch of people on there from all the different contributing companies. Yes. So the question was: there's a V2 that looks like it's coming soon, and do I know when that will be released? No, I haven't personally been involved with that, or even too much with the collaboration lately, but I do know they're making good progress on all the different bugs and fixes, and I imagine V2 is coming along just the same. Nothing else? Thank you again.