Everybody, welcome back to another OpenShift Commons briefing. Today we have the folks from Armory, who are experts in Spinnaker. They're going to give us a deep dive on Spinnaker and the Spinnaker Operator, tell us how to deploy it, and all kinds of good stuff. We have with us Germán Múzquiz and Lee Faust. I'm going to let them introduce themselves, and we'll have live Q&A at the end. Thank you very much for being here today.

Hello, my name is Germán Múzquiz. I have been working at Armory for the last few years, and before that in the telecom, entertainment, and technology industries, most recently in DevOps roles.

And my name is Lee Faust. I'm a field CTO at Armory. I've got over 25 years of IT experience supporting a lot of different open source communities, along with a lot of experience in enterprise development and delivery across multiple verticals. I'm also an ex-Red Hatter: I worked at Red Hat between 2006 and 2009 on the Centennial Campus behind NC State, on the Red Hat Exchange and Red Hat Network. So, Germán, do you want to go to the next slide?

We're here to talk a little bit about Spinnaker. For those of you who don't know, Spinnaker is an open source continuous delivery platform originally built by Netflix, which used it to build and deploy applications into its AWS cloud environment. Inside that stack, Netflix used a number of other open source technologies: Java Spring, HashiCorp Packer, and a lot of the other tools you would see in a common DevOps stack today. The core technology that Spinnaker formed around is a component called Clouddriver. Clouddriver supports all of the major clouds, lets us work with Kubernetes, and lets us do things like defining custom CRDs for deployments, as you would do with OpenShift today. It's backed by an open source community, and that community is very strong today: Armory and a number of other companies contribute openly, through their employees' contributions, back into the community as a whole. We have also built some proprietary capabilities on top, but because of the way the application is architected, we don't fork the underlying code base; we build enhancements using Kubernetes-native technologies, deploying additional pods and integrating into the overall pipelining architecture. Because it's a target-agnostic delivery platform, we can focus our energies by giving the APIs back to the cloud vendors and letting them build their best practices into their own environments: the Google cloud driver was written by Google engineers, the AWS one by AWS engineers, and the Azure one by Azure engineers. Spinnaker lets DevOps teams use guardrails instead of gates. A lot of the customers we talk to today feel very comfortable giving direct access to dev, QA, and maybe even staging, but they need formal approval processes for deploying anything into production.
What Spinnaker allows us to do is support everything from rolling upgrades to red/black, blue/green, canary analysis, and Highlander strategies, letting people deploy into environments with the proper security constraints that any enterprise needs today. This includes things like setting up manual judgments or deploying only inside a specific window. Spinnaker was built for resiliency, safety, and speed; it's basically a golden path to production. There are a lot of capabilities that we'll show at the end around using the Spring Expression Language to minimize the number of pipelines you need to create, so you can reuse pipelines for specific application types. You could define a golden path to production for every React front end, or one for your Spring Boot applications. You can also do it by technology type, whether you're deploying to VMs or to containers. It's a pluggable and extensible platform; we'll talk more about this later. The plugin framework is relatively new, and we would love to see the OpenShift community contribute to it to build a tighter integration between Spinnaker and OpenShift, since the two platforms work in conjunction at a lot of joint customers. Germán, can we have the next slide, please?

As we mentioned, Spinnaker is a next-gen software delivery orchestration tool. Behind it we've got Netflix, Google, Amazon, Pivotal, Oracle, Microsoft, a lot of the major cloud providers, and internationally Huawei, Tencent, and Alibaba. There are over 560 individual contributors, over 160,000 GitHub events, and over 25,000 commits, and that's only in the last year. It's a thriving ecosystem of 9,700 engineers across hundreds of different enterprises, and it's continuously growing. Next slide, please.

For those of you who aren't familiar with the CNCF DevStats pages, you can get to the Spinnaker DevStats page through the captain's log, which is linked here, and we'll share this slide presentation afterwards. There you'll see the continued growth of Spinnaker across a lot of organizations. When you compare it against other continuous delivery tooling that exists on GitHub, you'll see the numbers of repositories, pull requests, and forks showing how vibrant the community is and the contributions companies are making back into the open source project. Next slide, please.

The average number of contributions from companies besides Netflix and Google, who are the largest, was more than two and a half times higher in the second half of 2019 than the average from December 2015 to January 2020. It's just amazing to us to see the number of people who are either starting to use Spinnaker in production environments or treating it as a platform they can leverage internally to build more useful tooling, as a lot of companies move toward a self-service model.
As app devs want to self-service their deployments, what we're seeing is that providing people with the right amount of information just in time, as those events occur, lets app dev teams move a lot faster than they did before. Next slide, please.

So, new in 2020, some of the things we've been working on with the community: we've developed a Spinnaker Operator, which Germán is going to talk about today and which makes it easier to manage Spinnaker itself. We've also built a plugin framework; some of the work we're doing with the Netflix team is focused on building a lean core while creating an extensible platform. For people who want to use Spinnaker in production, you can imagine that if your environment is only Kubernetes, you can shrink the Spinnaker deployment down to only the pieces required to do Kubernetes deployments. Another thing we built over the last year is something called Minnaker. Anybody who has done development on an application built as microservices, with direct dependencies between the APIs across all of those services, knows you can end up in a situation where the routing between those services is very difficult to manage. So we built Minnaker, which lets you stand up a VM and clone just the repository of the service you want to extend; using our plugin framework, with code that hooks in at the plugin points, you can set up your development environment very quickly. There's also some very exciting work coming out of joint teams working with Netflix around something called declarative delivery. If you look inside the Spinnaker GitHub organization, you'll see a repository called Keel, where there's a lot of very interesting work around defining environments and constraints, moving us beyond the pipeline technologies that exist today. You define the application targets and the constraints that need to hold per environment, and Spinnaker automatically determines the best way to get the applications into those environments based on those constraints. And there are many more exciting things to come in 2020. For those of you who are excited about getting started with everything we've talked about so far, I'm going to turn it over to Germán and let him talk about the operator he's been working on, which will let you get up and running with Spinnaker very quickly. Germán, over to you.

Alright. One of the things that we wanted to improve about Spinnaker is how you deploy it to a Kubernetes cluster. We built the Spinnaker Operator, which creates a CRD on the Kubernetes cluster giving you a custom Kubernetes kind, the SpinnakerService kind. You write a manifest file describing the state you want Spinnaker to have — the configuration of Spinnaker — and you apply this manifest to the cluster. The operator is a pod running inside the cluster that takes this SpinnakerService object and creates the respective deployment objects, service objects, secrets, and so on for Spinnaker to run in the cluster.
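For readers following along, here is a minimal sketch of what such a SpinnakerService manifest might look like, based on the shape documented for the open source operator; the version, bucket, and names are placeholders, not values from this briefing:

```yaml
# Minimal SpinnakerService sketch (apiVersion/kind follow the OSS operator docs;
# the version and bucket values are placeholders)
apiVersion: spinnaker.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
  namespace: spinnaker
spec:
  spinnakerConfig:
    config:
      version: 1.20.5                   # Spinnaker release the operator should deploy
      persistentStorage:
        persistentStoreType: s3
        s3:
          bucket: my-spinnaker-bucket   # placeholder bucket for application/pipeline storage
          rootFolder: front50
```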
Using the operator, you can use standard tools like kubectl, oc, Helm, or Kustomize to deploy Spinnaker, instead of the custom tooling we're going to look at now. For instance, there is Halyard, the official deployment tool for Spinnaker. When you use Halyard, keep in mind that you need to run it somewhere: usually a virtual machine, a pod inside the same cluster, or some other machine. There are some key points to keep in mind when deploying Spinnaker with Halyard. One is that Halyard internally uses the kubectl command line to deploy Spinnaker, which means you need a kubeconfig file for the target cluster where you're going to deploy. Another key point is that the Spinnaker configuration files need to live on the same machine as Halyard, which raises the question: how do you do GitOps with Halyard? You would need custom tooling or automation for mounting these Spinnaker config files into the Halyard machine. Also, to deploy Spinnaker there's the hal CLI — a special command to tell Halyard to deploy Spinnaker — and you would again need to build custom tooling around the hal CLI if you want to automate deployments of Spinnaker itself. The last point to consider is that Halyard performs some validations on the fly. For instance, say you want to add a new Docker registry to Spinnaker: Halyard tries to connect to the registry to validate that the credentials work. What does this mean? It means you need to grant access from the machine where Halyard is running to your Docker registry. If you're running Halyard inside the same cluster as a pod, there's no problem, because you already need that access from the cluster; but if you're running Halyard on a separate machine, it can impose some security restrictions.

Then we have another way of installing Spinnaker into Kubernetes: the Helm charts. The Helm charts are contributed by the community, and basically they mount Halyard as a pod inside the same cluster where Spinnaker runs. This has the same considerations as before: Halyard uses kubectl, so you need a kubeconfig file for deploying Spinnaker to the cluster; you need the Spinnaker configuration files mounted into that pod, so if you want to automate deployment of Spinnaker there has to be some way to update those configuration files inside the pod; and you still need to use the hal CLI. The validations run from inside the cluster, which is an improvement, but now you have the configuration of Spinnaker split across two places: the native Spinnaker configuration files on one side and the actual Helm charts on the other. So if you want to make changes, you have the hal CLI and then you have Helm, and two different sets of configuration files to maintain.

So what we did at Armory is create the Spinnaker Operator. Here the Spinnaker configuration files are included directly in the SpinnakerService manifest file, defined by a CRD, and you can apply this manifest to your Kubernetes cluster using standard tooling. That's an improvement, because you don't need kubectl or a kubeconfig file to deploy Spinnaker — you can use pure oc. If you want to apply this manifest, you can do an oc login and then an oc apply.
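As a rough sketch of that workflow (the file paths and names here are illustrative; the actual manifests are published in the operator's GitHub repository):

```bash
# Hedged sketch: installing the operator and applying a SpinnakerService
# manifest with standard tooling (paths and URLs are placeholders)
oc login https://api.my-cluster.example.com:6443   # authenticate with oc
oc apply -f deploy/operator/cluster/               # install the operator manifests
oc apply -f spinnakerservice.yml                   # apply the SpinnakerService instance
oc -n spinnaker get spinnakerservice               # watch the operator roll Spinnaker out
```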
The operator takes this SpinnakerService manifest and creates the respective Spinnaker pods, services, secrets, and so on. Another consideration is that the configuration now lives outside the cluster: you no longer need to mount configuration files into a pod or onto the Halyard machine, which makes automation easier. If you want to do GitOps, you can store your SpinnakerService manifest files in source control, without any secrets, and then have a process that downloads those files from source control and applies them to the cluster. There's no new CLI like the hal CLI in the previous solutions, and the validations run inside the cluster, so we keep that advantage of the Helm charts.

So what is it like to work with the Spinnaker Operator? First, to install the operator, we publish the manifest files on GitHub. They are not published to OperatorHub yet, but that's on the roadmap. Once the operator is installed, you create or modify your SpinnakerService manifest, and for this you need to know how to configure Spinnaker: there are a lot of configuration settings for integrating with Kubernetes accounts, with AWS, with Docker registries, and so on, and we provide documentation for these configuration fields. You can store the manifests in source control; the secrets can be stored somewhere else — in S3 buckets, in GCS, in Vault — and with the Spinnaker Operator you can also store secrets in Kubernetes, as Kubernetes secrets. That is not possible with Halyard or the Helm charts. Finally, it's just a matter of applying the manifests.

So what are some of the things the operator gives you? You use standard tooling: as I said before, instead of a custom CLI like hal, you can use kubectl or oc. The validations run at apply time: for instance, if you add a Docker registry but your credentials are wrong, the oc apply will return an error, and the invalid configuration will be rejected before Spinnaker's configuration is modified. Then there's automation: the operator can automate some tasks. For instance, to expose Spinnaker you need the public-facing API URL of Spinnaker set in the configuration — Spinnaker needs to know the URL used to access it. Normally, using Halyard, you would install Spinnaker first, then create some load balancers, and once you have those URLs, update the configuration with them. With the operator, in the case of exposing Spinnaker with service load balancers, this is automated: the operator first deploys Spinnaker and the service load balancers, waits for the public URLs of those load balancers to be generated, and then automatically updates the configuration so Spinnaker knows where it's exposed. And finally, as I said before, one feature of the operator is that you can store Spinnaker secrets as Kubernetes secrets — your Jenkins credentials or your GitHub token can all be stored as Kubernetes secrets — and the operator takes care of reading those secrets and supplying them to the services.
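As an illustration of that last point, here is a hedged sketch of how a Kubernetes secret can be referenced from the manifest; the `encrypted:k8s!n:<secret>!k:<key>` syntax follows the operator's documentation, while the Jenkins master, secret, and key names are placeholders:

```yaml
# Hedged sketch: referencing a Kubernetes secret from SpinnakerService config
# (secret and key names are placeholders)
spec:
  spinnakerConfig:
    config:
      ci:
        jenkins:
          enabled: true
          masters:
            - name: my-jenkins                # placeholder Jenkins master name
              address: https://jenkins.example.com
              username: deploy-bot
              # the operator reads key "jenkins-password" from secret "spin-secrets":
              password: encrypted:k8s!n:spin-secrets!k:jenkins-password
```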
Now let's look at an example of migrating from Halyard to the operator. With Halyard, you have a set of configuration files. The main one is the .hal/config file, and there are supplementary files stored in subdirectories, like the profile override files. If you want to migrate from Halyard to the operator, it's just a matter of copying the configuration from those files into the SpinnakerService YAML. Finally, if you want more information about the operator, there are some links here; as we said, this deck will be shared. Now it's demo time — I'm going to hand it over to Lee.

Hey, Germán, can you go back a few slides to the Gardening Days slide? Just one of the slides we skipped over. For those of you out there who want to use Spinnaker in your environments with OpenShift and would like to learn more from the community: starting July 16th through the 23rd, we will be holding what we call Gardening Days. One of the great things about these Gardening Days is that you have access to the AWS teams, the Netflix teams, the Armory teams, and other people inside the community to help you build new things that tie directly into Spinnaker itself. We have a list of things that we can start building together. The last time we did this, we had a company called Pulumi participate — for those of you doing infrastructure as code, Pulumi is a great company to look at; they do things a little differently than defining everything in HCL as with Terraform, or doing things in Ruby. So you can come in and work with us over these seven days, and we would love to have you be part of that community.

To get you even more excited about coming to Gardening Days, let me actually show you what Spinnaker looks like. Spinnaker itself follows a very simple object model: projects have applications, and underneath applications we have our infrastructure and our delivery. From the infrastructure view you have access to all the things that Germán just talked about. Spinnaker follows a service-account type model, so all of the work you do in your environments is performed on behalf of a service account: when you deploy something new — because this is continuous delivery — it's not acting as me as an individual, it's acting through the service accounts you've defined through the operator. When we go in and look at the infrastructure that's been defined, we can see the regions where things have been deployed, the accounts being used, and the states of everything that's been deployed. Here we've got an example with a Kubernetes cluster, and one of the things our customers love about this capability inside Spinnaker is that these are things you would normally need kubectl access for: you would have to define a user account, give them a kubeconfig, have them install the kubectl CLI, and then they'd have to learn all the different kubectl commands to figure out how to get at this data. Here, sitting on top of the deployment itself, we can work right off the manifest: we can scale it out, we can undo a rollout, or we can do a rolling restart.
Undoing a rollout is basically just going back in time: we can choose a previous revision, give a reason why we're backing it out, and go ahead and redeploy an old version of our application. This is really powerful, again going back to giving developers the ability to self-service inside the environment. When we look at the pods themselves — this was deployed as a replica set — we can go in and look at the replica set itself, delete the entire deployment, look at the metadata about this particular deployment, and see its health status. Not only that, but we can drill all the way down to the pod itself, and from the pod we can even look at the logs. So here, for this particular application, I can see that I've got an exception, and I can read the exception right here; one of the things I don't have to worry about is knowing which namespace it was deployed to and running my kubectl commands.

Now, how do we actually get things into this environment? We have our Delivery option up at the top, and delivery can follow a number of different models. I'm going to switch back to my project, because a lot of people struggle with the notion of what delivery actually means. Delivery is broken up into three steps: build, provision, and deploy. There's a service inside Spinnaker called Igor that lets us connect Spinnaker to our continuous integration solutions, so we can hook this up to our Jenkins servers, to Drone, to Concourse CI, to Wercker — there are a number of different CI solutions you can connect through. And again, the way you set up the connection to those different solutions is all through that Spinnaker Operator model.

Now, when we look at how we configure this: every pipeline that's created starts with a configuration, and the configuration is broken up into multiple parts. The first part is what artifacts, if any, we expect to exist for this pipeline to execute successfully. Here you can see we've got a GitOps model where we're making sure that the manifest we have for Kubernetes exists inside our GitHub repository; we also have things like our service defined inside that GitOps model. And then, just like any good GitOps setup, we trigger this off of some hook. We have an automated trigger that says: coming from Git, we're going to look at a repository type of GitHub, then choose the organization, the project (which is basically your repository), and the branch we want to listen on — see the sketch just below. Along with the trigger, we also have permissions configured, because a lot of our enterprise customers have compliance and auditing requirements around what executes. So here we can define the teams that are allowed to execute this particular pipeline.
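Going back to the trigger setup for a moment, here is a hedged sketch of how that trigger and expected-artifact configuration look in a pipeline definition (field names follow Spinnaker's pipeline JSON, rendered as YAML; the org, repo, branch, and file names are placeholders):

```yaml
# Hedged sketch of a pipeline's Git trigger and expected artifact
# (values are placeholders)
triggers:
  - type: git
    source: github
    project: my-org            # GitHub organization
    slug: my-repo              # repository name
    branch: main               # branch to listen on
    enabled: true
expectedArtifacts:
  - displayName: k8s-manifest
    matchArtifact:
      type: github/file
      name: manifests/deployment.yml
```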
When you log in, you can see up at the top that I'm currently authenticated, and I'm sitting in the SE managers team, which allows me to trigger this manually if I need to. If it came in through an automated trigger, it executes as anybody coming from the SE managers group or the sales engineers group; what that means is that if there's a manual judgment that needs to take place in this particular pipeline, the only people who can approve or execute on those judgments have to come from one of these two teams.

Once we've triggered a pipeline, a lot of this work is driven from metadata. Down at the bottom we have the option to set up parameters. You could define parameter values yourself if you're doing something like a manual deployment, choosing or typing in a value, but a better way to get those values is from some other system in your DevOps tool chain. So here I can set up a build-and-push stage: I've got my Jenkins servers configured, so I'm going to use the Jenkins API to call a specific job. When that job executes, you can tell Jenkins to create a file in the workspace for the output of what happened; that way, instead of having to dig through tons and tons of pages of logs, I can just create a JSON file or a set of key/value pairs that lets me grab the values from that particular build, and those properties become available in any of the stages after this one in my pipeline.

Once we've done a build, the next thing we probably need to do is provision some infrastructure. The first thing we could do, again in that GitOps model, is choose the account we're going to connect to and the namespace, and then grab the manifest — we could take this from text, or we could grab it from one of those artifacts we defined in the beginning. Those artifacts would then be used to provision that particular load balancer inside our Kubernetes cluster. We can also use the data from a particular deployment event to go and get information: if I go out and grab information from a resource that was previously defined, I can get the value of a particular load balancer. So for those of you evaluating Istio or other mesh-type solutions, we can build up a mesh network automatically through your continuous delivery process by mapping the metadata, as long as we know the dependencies of the deployments. If I deploy my back-end API services and then have my web front end connecting to them, I can map those dynamically during the deployment process. And the last thing we can do is provisioning itself: here we provision an S3 bucket, which you could do either with our custom Terraform integration or as a run job, possibly executing something like Ansible.

Now, if we go in and look at what came from the earlier stages, I'm going to show you how we can use the metadata that comes out of them. This is done with something called SpEL, the Spring Expression Language; a sketch of the kind of expressions involved follows below.
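As a rough illustration of the kind of expressions Lee walks through next (the stage names mirror the demo, but the exact context paths are illustrative, not copied from the demo pipeline):

```yaml
# Hedged SpEL sketch inside a deploy manifest (context paths illustrative):
# a hostname captured by a prior "Find Controller" stage would be read as
#   ${#stage("Find Controller")["context"]["hostName"]}
# and a Jenkins build number from "Build and Push" can become the image tag:
spec:
  containers:
    - name: app
      image: registry.example.com/myapp:${#stage("Build and Push")["context"]["buildInfo"]["number"]}
```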
With SpEL here, we're grabbing information from a previous stage: we've got a stage called Find Controller, and we just walk the JSON path until we get to the host name — the host name stored inside Spinnaker is the name coming back from AWS. The other thing we can do is look at information that came back from Jenkins: here we look at the Build and Push stage, get the build info, and grab the build number, and we can use that as the tag we want in our deployment.

Now, when we want to go and deploy this particular item: normally you would do this as an artifact, something that exists in your Git repository, but for demo purposes we want to show you what this looks like from inside Spinnaker. This is the manifest you would define — we have our Deployment kind, and if you were using any custom CRDs for OpenShift, you could use those in here as well. We have a number of customers today using Spinnaker with their OpenShift environments, and we'd be more than happy to talk to anybody about how we enabled those capabilities in their environments. Down at the bottom, we grab the API URL coming back from the parameters we defined before and take the value from that; we also grab the image name, and you can see we're getting the tag from that expression. With this information in place, this will create a deployment.

If we go back and look at an execution: here, if I look at all of the artifacts, we can make sure we've got our GitOps file, our Docker image, our service definition, and an artifact that's a web page mapped back to the GitHub hash. We can also see the execution details of a particular pipeline directly inside the web UI. If we look at the provision-S3-bucket stage, we'll see that it's actually coming from a different pipeline; if I go to that pipeline, I can see the console output, and this is the Terraform execution that happened on my behalf.

Now, one of the things a lot of people don't like about the way a lot of provisioning tasks happen with infrastructure as code is that they have to go into their ticketing system — something like ServiceNow or Atlassian Service Desk — open a ticket, somebody provisions something on their behalf, and then they get an email notification when the task is complete. Unfortunately, there are ways to get access to those resources outside of your continuous delivery tool: somebody could go in and modify the S3 settings, and nobody really knows until you dig through all the auditing to find out who actually made the change. With Spinnaker, we can enforce the best practices every time a new deployment takes place.
If somebody were to go in and, let's say, change the ACL on my S3 bucket to public-write, I would actually see that I'm going to set it back to public-read. So I can tighten my cloud security model by enforcing that certain infrastructure is created at the time of my deployment, rather than splitting things across two teams, where one team uses ticketing and does things out of band while the app dev teams go off expecting those things to exist. What happens otherwise is that if the app dev teams deploy and something wasn't set up properly, and they're using APIs to connect to those resources, the deployment overall can fail, a lot of things get lost in translation, and it's really hard for the teams to communicate about why something happened and went wrong. This lets everybody stay on task inside a single web UI and know what everybody else is doing.

With this work deployed, we can build some really interesting pipelines. I'm going to go back to the back-end piece and look at this particular pipeline, which is really intriguing. We trigger our deployment and deploy to dev: we deploy our front-end load balancer and our pod on the back end. Then we run a Jenkins job, already defined from previous work, that uses something like a Selenium runner to connect to the endpoint of the load balancer and run a smoke test — we use the Jenkins API to call a job based on a parameter passed in from Spinnaker. Then, based on the success of the smoke test, we automatically deploy to stage. When we deploy to stage, we can update our ServiceNow ticket or our Atlassian Service Desk ticket and automatically note that the smoke test has completed, so you have visibility and traceability in the systems you've already got in place and know when things have been marked successful. Once we've deployed to stage, we can run a full QA suite, and as part of that, one of the things we want to make sure we do is run what's called a canary analysis.

Now, canaries are very interesting to a lot of companies, but a lot of them don't know how to do them very well, so we're going to distill it down into a few easy approaches to canary analysis, and it all depends on the amount of traffic being generated for your application. If you're a website like GitHub or Atlassian or AWS, you can do what we call traffic shadowing and generate enough traffic in a very short amount of time to know whether your application is performing at the same level with the latest code changes. If your application doesn't generate that much traffic, then we need to deploy an equal baseline and an equal experiment — a canary is an experiment that we measure against. And again, going back to the operator model that Germán talked about, you would configure Spinnaker up front to be able to talk to your APM vendor.
These are things like Prometheus; it could be Datadog, it could be New Relic — there are a number of different APM vendors supported out of the box that we can connect our canary analysis to. Now, when you configure your canaries — I'm going to go down and look at the canary analysis — you can see here that we're deploying our baseline service, we're deploying our canary, and we're deploying a prod image. Then we run the canary analysis down here, where we do something a lot of organizations will call either a bake or a soak: the application sits in that environment for some predefined amount of time. Depending on how much traffic you can generate, you can define this in hours, in minutes, or any combination of the two. You can also delay the start — if you've got something like a cache to warm up, you can set a delay and wait, say, 15 minutes before the canary analysis begins. Then we specify our intervals and our steps: how often we want to go to our APM vendor and pull the data we store inside Spinnaker, and how often we want to evaluate that data against a canary config, which is defined here. From that canary config, we're going to evaluate against these two individual deployments that have taken place.

If I go in and look at my canary config, I see three metrics we have defined: one for CPU, one for RAM, and one for IO. And down at the bottom we have something called a judge. When people talk about doing canary analysis, one of the first stumbling blocks is that we don't really know what data we should actually be comparing. So Google and Netflix got together and, based on the deployments into their own environments, took the best practices they use with their app dev teams and built a custom judge that you can use out of the box. You can also define your own judges however you'd like. The Netflix judge works off of the Mann-Whitney U test: if you picture a traditional bell curve of all the data points that have been collected, it removes the outliers and only measures against the data that falls within the normal bounds under that curve. Then, from the data being compared, we can specify weightings: maybe our application is CPU-constrained or IO-constrained, so we can set group weights to say that compute is more important to us, or IO is more important to us — it really depends on the workloads you're deploying. When we look at this, a report gets generated for a particular canary analysis; a sketch of what such a config looks like follows below.
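For reference, here is a hedged sketch of what such a canary config might look like (the shape loosely follows Kayenta's canary-config model; the metric names, groups, and weights are illustrative, and a real config would also carry per-metric queries):

```yaml
# Hedged sketch of a Kayenta canary config (values illustrative)
name: demo-canary-config
judge:
  name: NetflixACAJudge-v1.0     # the out-of-the-box judge mentioned above
metrics:
  - name: cpu
    groups: [compute]
  - name: ram
    groups: [compute]
  - name: io
    groups: [io]
classifier:
  groupWeights:                  # weight compute vs. IO to match your workload
    compute: 70
    io: 30
```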
When we look at the data, we can go down and see that the standard deviation between the two different deployments is very close. The only flag we have here, based on the requirements defined in our canary config, is that our RAM usage has increased by 2.5%, so we mark that as high. There are settings for how a canary analysis treats these things — you can run two different types of analysis. You can look at any one of the individual data points and say: this went above 80% — when my memory utilization spiked, I want to fail this canary analysis completely. The other option is to measure against the average across the entire canary run: if you're doing an evaluation over a 24-hour period, you can do the analysis as an average across the whole run and evaluate off of that. And those results come back into the pipelines. If we go back and find this particular canary run in the application, we can see, down at the bottom of the canary execution, when it actually executed. Our canary analysis here shows 100% at a high level, and each time this canary was analyzed we had a 100% success rate. You can also go in and look at the specific charts of the data points for this particular run, and see what time it executed.

All of these things make Spinnaker a very powerful platform to design your continuous delivery solutions on. And as we build more and more plugins — we're building Kayenta as a service for canary, and we're starting to build plugin points for Kayenta — we'll be able to plug in things like Splunk or Logstash so we can create judges around error rates and dig deeper into homegrown solutions you may have developed internally. That way you can truly use this to plug into whatever frameworks already exist in your infrastructure.

So, we've got about eight minutes left. We've given you a pretty good overview of how to get up and running with the Spinnaker Operator, and we've shown you some examples that we hope have piqued your interest in using this with some of your OpenShift deployments. What I'd love to do now is open it up to any questions we can answer here in real time — or feel free to reach out to us on Slack, or email us at our armory.io email addresses.

All right, Lee, thank you very much. I'm going to get you to pop back into your slide deck so that people can find the links to get a hold of you and the resources you have. There's one question that's just come in: how does Spinnaker compare with OpenShift's Tekton pipelines? And in the CD Foundation there are a number of tools that do CD, and they all have their benefits and value propositions.
So maybe you could speak a little bit to that and tell us your point of view.

Sure. Tekton is a great project, and we actually work with a lot of the engineers at Google, IBM, and Red Hat who are working on Tekton. One of the things that's really important to understand is that Spinnaker not only supports your OpenShift deployments, it supports deployments outside of OpenShift as well. You may have some legacy deployments — deploying to, say, vCenter, or an EC2 image, or maybe Lambdas — and a lot of those things are just emerging inside Tekton today with their catalog of different resources. One of the most difficult gaps people run into with Tekton is that if you look at what's slated for the v1 release, there are a number of things around providing back-out strategies, doing canary analysis, and feeding that analysis back into your deployments. And the web UI that exists for Tekton is really focused on defining the pipelines themselves, how they were created, and their execution state; it doesn't show you the kind of details we have inside Spinnaker, where you can drill in and get a lot of the data back. We believe that eventually there will be a plugin, because the orchestration engine inside Spinnaker is built off of a lot of the Spring technologies — it uses Quartz for scheduling and other components for the tasks that run underneath a specific stage — whereas Tekton itself uses the Kubernetes scheduler. Depending on your Kubernetes environment and how your scheduler is configured, if you're doing a lot of deployments you can actually overwhelm that scheduler. We've seen that with some of our customers: when you scale OpenShift or Kubernetes to the point where you've got hundreds of clusters, the management cluster where the scheduler runs ends up with a backlog, things get queued up, and sometimes they just fail completely. So there's a lot of work ahead — I know the Tekton community will get there, and Kubernetes and some of those things are still evolving toward deployments of that size and scale — but those are the things that are really important to our customers today. And like I said, we want to work with the Tekton community in the future so that our plugins and their plugins can be used in the same way.

All right. Especially within the CD Foundation, what we are seeing is a lot of collaboration between all of the different CD projects out there, so I think you'll see a lot of good insights going back and forth between both sides. We've talked quite a lot about the Spinnaker Operator, and I think — Akshay, if I remember the conversation earlier, you were one of the people who worked on the operator itself; he's also in the chat here. So I wonder, what's the roadmap for the operator in terms of being available for use on OpenShift?

Sure.
So, I'm actually the product manager for the operator, and we are looking to get Red Hat certified for OpenShift towards the end of this month or in August. We have done a lot of extensive usage of the operator on Kubernetes, and only a little bit so far on OpenShift. We are aware that there are a couple of things that don't work on the OpenShift platform, and we'll be looking to get those fixed as part of our certification process. Germán, do you want to talk a little bit about the specific things that we still need to work on for the OpenShift platform?

Yeah. Today, if you follow the quick-start instructions, the operator won't start, because of some permission issues in the file system of the operator pod. This can be fixed just by changing some of the configuration, so you can run the operator on OpenShift today, but it's not an out-of-the-box process yet. We'll work to make it out of the box and to fix any issues that arise on OpenShift.

Awesome. And like I said, we're looking to get certified in August, so it should then work out of the box just like with regular Kubernetes deployments.

Perfect. That's actually really great news, and probably perfect timing, as we're going into KubeCon in mid-August, so we'll be able to revisit all of that and maybe get a demo of it running on OpenShift via the operator. So today, if you wanted to run it on OpenShift besides the operator — I have a little confusion here — are people running it on OpenShift using Halyard now, or are we awaiting the operator?

Well, today people can use Halyard to install Spinnaker into standard Kubernetes clusters, and I'm not aware of the issues that may arise on OpenShift specifically when you use Halyard.

Diane, I can let you know that I actually have quite a few customers that are using OpenShift with Spinnaker today, and we're continuing to gather enhancements — outside of the operator, which helps you manage Spinnaker itself — around how we deploy into OpenShift. That's where we would love to have the OpenShift community come to our Gardening Days and give us feedback on how they actually use OpenShift in their production environments, so we can tune and optimize some of the plugins we're building today and make sure we're meeting customer demand.

And that might be a great way to end here today. Besides which, I wanted to say how much I love the gardening metaphor for the hackathon. If you want to skip back to that Gardening Days slide, Lee — there. I really highly encourage people to participate in this, especially folks from OpenShift who are already using Spinnaker, to come here, and then maybe we can move forward with the operator and get more feedback on it. This is a great opportunity, and any time someone's willing to help through a hackathon, we should definitely take advantage of it across the OpenShift community. And when you do have the operator working and certified for OpenShift, we'll definitely have you come back and give a demo of deploying it live. That would be another awesome opportunity for us to get more information, and to get you more visibility and feedback from our community. And we'll have to see what comes from the feedback and where else we can find some other opportunities to collaborate. So, we're at the end of our hour here, and I'm not seeing any other questions coming in.
I'll just check the chats one more time. Lee, you and Germán really did an awesome job setting us up here, so I really thank you for taking the time to do this deep dive into Spinnaker and getting me — and hopefully the rest of the community — up to speed. I look forward to hearing more from you in the future, and hopefully we'll get that Armory Spinnaker Operator onto OperatorHub.io as well.