Welcome everyone to theCUBE's presentation of the AWS Startup Showcase, Open Cloud Innovations. This is season two of the ongoing series. We're covering exciting startups in the AWS ecosystem and talking about the open source community. I'm your host, Dave Nicholson, and I'm delighted today to have two guests from Weaveworks: Steve George, COO of Weaveworks, and Steve Waterworth, technical marketing engineer at Weaveworks. Welcome, gentlemen, how are you?

Very well, thanks.

Very well, thanks very much.

So Steve G, what's the relationship with AWS? This is the AWS Startup Showcase. How do Weaveworks and AWS interact?

Yeah, sure. So AWS is an investor in Weaveworks, and we collaborate really closely around EKS and some specific EKS tooling. In the early days of Kubernetes, when AWS was working on EKS, the Elastic Kubernetes Service, we started working on the command line interface for EKS itself. Through that partnership we've been working closely with the EKS team for a long period of time, helping them to build the CLI and make sure that users in the community find EKS really easy to use. And that brought us together with the AWS team working on GitOps and thinking about how to deploy applications and clusters using this GitOps approach. We've built that into the EKS CLI, which is an open source tool; it's a project on GitHub, so everybody can get involved with it, use it, contribute to it. We love hearing user feedback about how to help teams take advantage of the elastic nature of Kubernetes as simply and easily as possible.

Well, it's great to have you. Before we get into the specifics around what Weaveworks is doing in this area that we're about to discuss, let's talk about this concept of GitOps. Some of us may have gotten too deep into a Netflix series and not realized that we've moved on from the world of DevOps or DevSecOps and the like. Explain where GitOps fits into this evolution.

Yeah, sure.
So really, GitOps is an instantiation, a version of DevOps, and it fits within the idea that, particularly in the Kubernetes world, we have a model in Kubernetes which tells us exactly what we want to deploy. So what we're talking about is using Git as a way of recording what we want to be in the runtime environment, and then telling Kubernetes, from the configuration that is stored in Git, exactly what we want to deploy. In a sense it's very much aligned with DevOps, because we know we want to bring teams together and help them to deploy their applications, their clusters, their environments. And with GitOps, we have a specific set of tools that we can use, and obviously what's nice about Git is that it's very much a developer tool; lots and lots of developers, the vast majority, use it. So what we're trying to do is bring those operational processes into the way that developers work, really bringing DevOps to that generation through that specific tool.

So Steve G, let's continue down this thread a little bit. Why is this added wrinkle necessary? If right now in my organization we have developers who consider themselves to be DevOps folks, and we give them Amazon gift cards each month and we say, hey, it's a world of serverless, no code, low code, lights-out data centers, go out and deploy your code, everything should be fine. What's the problem with that model, and how does GitOps come in and address it?

Right, I think there's a couple of things. For individual developers, one of the big challenges is that when you watch development teams deploying applications and running them, you watch them switching between all these different tabs and services and systems that they're using. GitOps has a real advantage for developers because they're already sitting in Git. They're already using their familiar tooling. And so by bringing operations within their developer tooling, you're giving them that familiarity.
So that's one advantage for developers. And then for operations staff, one of the things it does is centralize where all of this configuration is kept. And then you can use things like templating, and some of the things we're going to be talking about today, to make sure that you automate and go quickly, but in a way which is reliable and secure and stable. So it's really helping to bring that run-fast-but-don't-break-things ethos to how we deploy and run applications in the cloud.

So Steve W, let's start talking about where Weaveworks comes into the picture. What's your perspective?

So, yeah, Weaveworks has an engine, a set of software, that enables this to happen. You can think of it as a constant reconciliation engine. You've got your declared state, your desired state, in Git; this is where all your YAML for all your Kubernetes hangs out. And then you have an agent that's running inside Kubernetes, the Weaveworks GitOps agent, and it's constantly comparing the desired state in Git with the actual state, which is what's running in Kubernetes. So then, as a developer or an operator, you want to make a change: you push a change into Git, the reconciliation loop runs and says, all right, what we've got in Git does not match what we've got in Kubernetes, therefore I will create or destroy resources as needed. But it also works the other way. If someone directly accesses Kubernetes and makes a change, then the next time that reconciliation loop runs, it's automatically reverted back to that single source of truth in Git. So your Kubernetes cluster doesn't get any configuration drift; it's always configured as you desire it to be configured. And as Steve George has already said, from a developer or engineer point of view, it's easy to use. They're just using Git, as they always have done and continue to do. There's nothing new to learn, no change to working practices.
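The reconciliation loop described here can be sketched in a few lines. This is a minimal illustration of the idea only, not Weaveworks code: desired state comes from Git, actual state from the cluster, and the agent computes the create/update/delete actions that converge one toward the other. All names and data here are hypothetical.

```python
# Hypothetical sketch of a GitOps reconciliation loop: compare the desired
# state (declared in Git) with the actual state (running in the cluster)
# and emit the actions needed to converge them.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            # Changes made directly to the cluster are reverted:
            # Git remains the single source of truth.
            actions.append(("delete", name, None))
    return actions

desired = {"web": {"image": "web:1.2", "replicas": 3}}
actual = {"web": {"image": "web:1.2", "replicas": 5},   # drifted by hand
          "debug-pod": {"image": "busybox"}}            # created out of band
actions = reconcile(desired, actual)
print(actions)
```

Running the loop here both corrects the hand-edited replica count and removes the out-of-band pod, which is the "no configuration drift" property described above.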
I just push code into Git, magic happens.

So Steve W, a little deeper dive on that. When we hear ops, a lot of us start thinking specifically in terms of infrastructure, especially since infrastructure, once deployed and left out there, costs you money even when it's idle. So anytime there's an ops component to the discussion, cost and resource management come into play. You mentioned this idea of not letting things drift from a template. What are those templates based on? Is this primarily an infrastructure discussion, or are we talking about the code itself, outside of the infrastructure discussion?

It's predominantly around the infrastructure. What you're managing in Git, as far as Kubernetes is concerned, is all those deployment files and services and horizontal pod autoscalers, all those Kubernetes entities. Typically, the source code for your application, be it in Java, Node.js, whatever it is you happen to be writing it in, is in a separate repository. You typically don't combine the two. So you've got one repository, basically, for building your containers; your CI will run off that and ultimately push a container into a registry somewhere. Then you have a separate repo, your config repo, which declares what version of the containers you're going to run, how many you're going to run, how the services are bound to those containers, et cetera.

Yeah, that makes sense. Steve G, talk to us about this concept of trusted application delivery with GitOps. Frankly, it's what led to the prior question. When you think about trusted application delivery, where is that intertwining between what we think of as the application code and the code that is creating the infrastructure? So what is trusted application delivery?
Sure, so with GitOps we have the ability to deploy the infrastructure components, and then we also define the application containers that are going to be deployed into that environment. And I think this is a really interesting question, because some teams will associate all of the services that an application needs with an application team, and sometimes teams will deploy sort of horizontal infrastructure which all application teams' services then take advantage of. Either way, you can define that within your GitOps configuration. Now, when you start deploying at speed, particularly when you have multiple different teams doing these sorts of deployments, one of the questions that starts to come up will be from the security team, or someone who's thinking about, well, what happens if we make a deployment which is accidentally incorrect, or if there is a security issue in one of those dependencies and we need to get a new version deployed as quickly as possible. And so in the GitOps pipeline, one of the things that we can do is put in various checkpoints to check that policy is being followed correctly. Are we deploying the right number of applications, the right configuration of an application? Does that application follow certain standards that the enterprise has set down? And that's what we talk about when we talk about trusted policy and trusted delivery, because really what we're thinking about here is enabling the development teams to go as quickly as possible with their new deployments, but protecting them with automated guardrails. So making sure that they can go fast, but they're not going to do anything which destroys the reliability of the application platform.

You've mentioned reliability and kind of alluded to scalability in the application environment. What about looking at this from the security perspective? There have been some recent, pretty well-publicized breaches.
Not a lot of senior executives in enterprises understand that a very high percentage of the code their businesses run on is coming out of the open source community, where developers and maintainers are, to a certain degree, what they would consider to be volunteers. That can be a scary thing. So talk about why an enterprise struggles today with security and policy and governance. And I toss this out to Steve W or Steve George; answer appropriately.

I'll try the high level, and Steve W can give more of the technical detail. I'll say that when I talk to enterprise customers, there are two areas of concern. One area of concern is that we're in an environment with DevOps, where we started this conversation, of trying to help teams go as quickly as possible. But there are many instances where teams accidentally do things that nonetheless are a security issue: they deploy something manually into an environment, they forget about it, and that's something which is wrong. So this kind of policy-as-code pipeline helps, ensuring that everything goes through a set of standards, and it really helps teams. That's why we call it developer guardrails, because this is about helping the development team by providing automation around the outside that helps them go faster and relieves them of that mental concern of whether they've made any mistakes or errors. So that's one form. And then the other form is where you were going, David, which is really around security: dependencies within software, a whole supply chain of concern. What we can do there, again, is have a set of standard scanners and policy checking which ensures that everything is checked before it goes into the environment, and that really helps to make sure that there are no security issues in the runtime deployment. Steve, anything I missed there?

Yeah, I'll just go a little deeper on the technology there.
So essentially we have a library of policies which gets you started; you can modify those policies or write your own, the library is there just to get you going. As a change is made, then typically via, say, a GitHub Action, the policy engine kicks in and checks all those deployment files, all the YAML for Kubernetes, and looks for things that are outside policy. And if that's the case, then the action will fail, and that will show up on the pull request. So things like: are your containers coming from trusted sources? You're not just pulling in some random container from a public registry; you're actually using a trusted registry. Things like: are containers running as root, or are they running in privileged mode? Which, again, could be a security issue. But it's not just about security; it can also be about coding standards. Are the containers correctly annotated? Is the deployment correctly annotated? Does it have all the annotation fields that we require for our coding standards? And it can also be about reliability. Does the deployment have the health checks defined? Does it have a suitable replica count? So if you say you want a rolling update, you'll actually get a rolling update; you can't do a rolling update with only one replica. So you can have all these sorts of checks and guards in there. And then finally, there's an admission controller that runs inside Kubernetes. So if someone does try to squeeze through and do something a little naughty and go directly to the cluster, it's not going to happen, because that admission controller is going to say, hey, no, that's a policy violation, I'm not letting that in. So it really just stops developers making mistakes.
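The kinds of checks described here can be sketched as simple predicates over a deployment manifest. This is an illustrative toy only, not the Weaveworks/Magalix policy engine; the manifest shape, field names, and registry URL are all assumptions for the example.

```python
# Hypothetical policy checks over a simplified deployment manifest (a dict
# standing in for parsed Kubernetes YAML), covering the four categories
# mentioned above: trusted sources, root/privileged containers, required
# annotations, and rolling-update replica counts.

TRUSTED_REGISTRY = "registry.example.com"  # assumed trusted source

def check_policies(manifest: dict) -> list:
    violations = []
    spec = manifest.get("spec", {})
    # Security: only pull images from the trusted registry.
    if not spec.get("image", "").startswith(TRUSTED_REGISTRY):
        violations.append("image not from trusted registry")
    # Security: containers must not run as root or privileged.
    if spec.get("runAsRoot") or spec.get("privileged"):
        violations.append("container runs as root or privileged")
    # Coding standards: required annotation fields must be present.
    if "team" not in manifest.get("annotations", {}):
        violations.append("missing required 'team' annotation")
    # Reliability: a rolling update needs more than one replica.
    if spec.get("strategy") == "RollingUpdate" and spec.get("replicas", 1) < 2:
        violations.append("rolling update requires at least 2 replicas")
    return violations

bad = {"annotations": {},
       "spec": {"image": "docker.io/random:latest",
                "strategy": "RollingUpdate", "replicas": 1}}
violations = check_policies(bad)
print(violations)
```

In the pipeline described above, a non-empty violation list is what would fail the GitHub Action and surface on the pull request; the same checks enforced by an admission controller would reject the change at the cluster.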
I know, I've done development and deployed things into Kubernetes without getting the config quite right, and then it falls flat on its face and you're sitting there scratching your head. With the policy checks, that wouldn't happen, because you would try to put something in that has a slightly iffy configuration and it would spit it straight back out at you.

So obviously you have some sort of policy engine that you're relying on. What is the user experience like? I mean, is this a screen reminiscent of The Matrix, with non-readable characters streaming down that only another machine can understand? What does this look like to the operator?

Yeah, sure. So we have a console, a web console, where developers and operators can use a set of predefined policies. That's the starting point: we have a set of recommendations there, and policies that you can just attach to your deployments. A set of recommendations about different AWS resources, EKS deployment types, different sets of standards that your enterprise might be following. So that's one way of doing it. And then you can take those policies and start customizing them to your needs. And by using GitOps, what we're aiming for here is to bring both the application configuration and the environment configuration, as we talked about earlier, all within Git, and we're adding these policies within Git as well. So advanced users will have everything that they need together in a single unit of change: your application, your definitions of how you want to run this application service, and the policies that you want it to follow, all together in Git. And then, when there is some sort of policy violation at the other end of the pipeline, people can see where this policy is being violated and how it was violated. And then, for a set of those, we try to automate, by showing a pull request for the user about how they can fix this policy violation.
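The automated-fix idea just described can be illustrated as a mapping from a known violation to a patched configuration, which is what a suggested pull request would carry. This is purely a hypothetical sketch of the concept, not the actual Weaveworks console behavior; the violation strings and manifest fields are invented for the example.

```python
# Illustrative sketch: for well-known policy violations, propose a patched
# copy of the manifest that an automated pull request could offer back to
# the developer. The original manifest is left untouched.

def suggest_fix(manifest: dict, violation: str) -> dict:
    """Return a patched copy of the manifest for known violations."""
    fixed = {**manifest, "spec": dict(manifest.get("spec", {}))}
    if violation == "rolling update requires at least 2 replicas":
        fixed["spec"]["replicas"] = 2
    elif violation == "container runs as root or privileged":
        fixed["spec"]["runAsRoot"] = False
        fixed["spec"]["privileged"] = False
    return fixed

manifest = {"spec": {"strategy": "RollingUpdate", "replicas": 1}}
patched = suggest_fix(manifest, "rolling update requires at least 2 replicas")
print(patched["spec"]["replicas"])  # 2
```

Because the fix is itself just a change to the declarative config in Git, merging the suggested pull request flows through the same reconciliation pipeline as any other change.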
So we try to make it as simple as possible, because with many of these sorts of violations, if you're a busy developer, they'll be minor configuration details and you just want to fix them really quickly.

So Steve W, is that what the Magalix policy engine is?

Yes, that's the Magalix policy engine. It's a SaaS-based service; it holds the actual policy engine and your library of policies. So when your GitHub Action runs, it essentially makes a call across with the configuration, does the check, and spits out any violation errors if there are any.

So folks in this community really like to try things before they deploy them. Is there an opportunity for people to get a demo of this, to get their hands on it? What's the best way to do that?

Oh, the best way to do it is to have a play with it. As an engineer, I just love getting my hands dirty with these sorts of things. So yeah, you can go to the Magalix website and get a 30-day free trial. You can spin yourself up a little test cluster and have a play.

So what's coming next? We had DevOps, and then DevSecOps, and now GitOps. What's next? Are we going to go back to all infrastructure on-premises all the time, back to waterfall, hot tub time machine? What's the prediction?

Well, I think the thing that you set out right at the start actually is the prediction. The difference between infrastructure and applications is steadily going away as we try to be more dynamic in the way that we deploy. And for us with GitOps, when we talk about operations, there's a lot of depth to what we mean by operations. So I think there are lots of areas to explore in how to bring operations into developer tooling with GitOps. That's certainly, I think, where Weaveworks will be focusing.

Well, as an old infrastructure guy myself, I see this as a vindication, because infrastructure still matters, kids.
And we need sophisticated ways to make sure that the proper infrastructure is applied. People are shocked to learn that even serverless application environments involve servers. I tell my 14-year-old son this regularly; he doesn't believe it, but it is what it is. Steve W, any final thoughts on this whole move towards GitOps, and specifically the Weaveworks secret sauce and superpower?

Yeah, as Steve already said, it's all about going as quickly as possible, but without tripping up. Being able to run fast without tripping over the shoelaces you forgot to tie up. That's what the automation brings: it allows you to go quickly and does lots of things for you. And yeah, we try to stop you shooting yourself in the foot as you go.

Well, it's been fantastic talking to both of you today. For the audience's sake, I'm in California, and we have a gentleman in France and a gentleman in the UK; the wonders of modern technology never cease. Thanks again, Steve George from Weaveworks. Thanks for coming on theCUBE for the AWS Startup Showcase. And to the rest of us, keep it right here for more action on theCUBE, your leader in tech coverage.