Welcome, everyone, to the Brigade On-Demand webinar. Brigade provides an event-driven scripting platform built on top of Kubernetes. I'm Kent Rancourt, one of the project maintainers. If you've followed our project at all, you may have seen content similar to this before. And if that's the case, I'd recommend sticking around regardless. It's an exciting time for Brigade right now. As of this recording, it was just days ago that we cut the first release candidate of Brigade 2. Most likely, by the time you view this, Brigade 2 has already become generally available. So in some ways, this is our first presentation and demo that shows off the finished product. Brigade has an interesting history, which I'll touch on briefly. What eventually became Brigade v1 was born at a Deis offsite in 2016. The entire team was assembled in Denver, and one exercise we all went through that day was a shark-tank-style game wherein teams pitched ideas to the senior leadership, and the best idea would get resources assigned to it. Well, Brigade was the big winner that day, but interestingly, Helm, which we regard as our sister project, was born from a similar exercise at the previous year's offsite. Helm and Brigade both were products of an interesting thought experiment that I'm going to share with you. It begins with the comparison of Kubernetes to an operating system. So let's begin by asking ourselves, what exactly is an operating system? Well, it's a program that, after being initially loaded into a device, manages all the other programs. Pretty simple, really. So in what sense is Kubernetes an operating system? Well, an operating system manages processes on a device. Kubernetes manages containers in a cluster. So if Kubernetes can be likened to an operating system, it's useful to ask ourselves: what are some features and functionality that we take for granted in a traditional operating system that are conspicuously absent from Kubernetes?
In 2015, Kubernetes lacked a package manager, so Helm was born. Well, something else that we take for granted in a traditional operating system is the presence of at least one and often multiple scripting environments. Traditional shell scripting is flow control that wraps the execution of processes but is not opinionated about what the processes do or how they run. Well, containers are really just a layer of isolation around your processes. So what would scripting look like in a cluster? Well, cluster shell scripting would be flow control that wraps the execution of containers but is not opinionated about what the containers do or how they run. That's where Brigade comes in. Brigade offers asynchronous event-driven scripting. It can be used to chain together containers to create complex workflows. And it uses Kubernetes as a workload execution substrate. That is to say, Kubernetes is an implementation detail and users do not have to interact with Kubernetes directly. Brigade is currently a CNCF sandbox project. So before we get too much farther, what is Brigade really good for? Essentially, anything you'd like to automate that can be triggered by some event. Examples may include CI and CD, GitOps, ChatOps, nightly code scans or security audits, report generation, and much more. So what is an event? It's something that originates from an external system, indicates that something has occurred and it enters Brigade via event gateways. And here's what a graphical depiction of that may look like. To the left, we see external systems that send their own sort of events, webhooks for instance, to a gateway that converts inbound events to Brigade events and uses the Brigade API to drop those onto Brigade's internal event bus. So here are some example gateways that we have already created. There are several others besides these that are generally pretty easy to build. 
So you can expect us to be creating more and you can pretty easily create your own if you'd like to trigger your scripts with events from an external system we don't already integrate with. Even a proprietary one. This is what a YAML representation of an event would look like. They're really JSON over the wire, but this is a bit easier to read. I also want to point out that even though this may have a striking similarity to a Kubernetes manifest, this is not a Kubernetes manifest. You can see that every event has a source. That field denotes where the event came from. In this case, a GitHub gateway. Every event has a type. Gateways may emit many different types of events, and generally they would map pretty directly onto events sent by the external system and received by that gateway. This event indicates that someone starred a repository in GitHub. Gateways should always document the events they emit. An event may optionally have qualifiers. If you're familiar with labels in Kubernetes, qualifiers are similar, but a little bit different. They specifically provide more context about the event. So here, someone starred a repository, but which repository? In all likelihood, no one subscribed to this event is interested in knowing when a repository was starred. It's more likely one may want to know when some specific repository was starred. So who subscribes to events and how? Well, that is where projects come in. Projects are user-defined, they subscribe to events, and they define workers to handle those events. So more succinctly, a project simply pairs event subscriptions with worker configuration, directions for handling each subscribed event. Workers are pretty simple. They either embed a bit of JavaScript or TypeScript, or include Git coordinates where such a script can be found. Workers are executed in their own container. Workers can handle an event directly, usually when the handling is simple and easily accomplished with just JavaScript or TypeScript. 
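To make that concrete, here is a minimal sketch of the kind of event representation just described. The field names follow the description above (source, type, qualifiers), but the apiVersion value, the type name, and the qualifier key are illustrative assumptions rather than values taken from the GitHub gateway's actual documentation.

```yaml
# Illustrative sketch of a Brigade event; the type name and
# qualifier key are assumptions for illustration only.
apiVersion: brigade.sh/v2
kind: Event
source: brigade.sh/github         # which gateway emitted the event
type: star:created                # what occurred (illustrative name)
qualifiers:
  repo: example-org/example-repo  # which repository was starred
```

A project subscribed with a matching source, type, and qualifiers would have this event dispatched to its worker.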
For more complex situations, workers can coordinate multiple jobs, each of which runs in its own container. We'll come back to that in a few slides. Here's an example of a project with a worker that has directly embedded a simple script. You can see that the project subscribes to all events that indicate someone has starred a specific GitHub repository. The embedded script responds simply by writing some logs to the console. When you embed a chunk of script in YAML, however, you lose a lot of the niceties that we usually take for granted, syntax highlighting especially. So for more complex scripts, it makes more sense to reference a script that is stored elsewhere. Like this. This alternative to the previous example would handle the event using a script retrieved from the krancour/stargazers example repository on GitHub. Now, not all events can easily be handled with JavaScript or TypeScript alone. For instance, suppose you want to execute a battery of tests when someone opens a PR against a particular project, but the project in question is implemented in Go. This is where jobs come in. Jobs are optionally created by workers to handle discrete tasks. They can be executed serially or concurrently, subject to scheduling constraints. Jobs run in their own containers. They can be based on any OCI image, and sidecar containers are also supported. Here's an example that builds on our previous ones. It's a more complex script, but it spawns two trivial jobs and executes them in sequence. Each of these jobs will execute, one after the other, in its own container. So finally, it's demo time. One thing I am not going to show you in this demo is the installation process because, frankly, it's uninteresting.
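As a rough sketch of what a script that spawns two sequential jobs might look like, here is a hedged example using the brigadier library. The event source and type, image names, and job names are illustrative assumptions, and the API shape is from memory; consult the brigadier documentation for the authoritative interface.

```typescript
import { events, Job } from "@brigadecore/brigadier"

// Illustrative handler: when the subscribed event arrives, spawn two
// trivial jobs and execute them in sequence, each in its own container.
events.on("brigade.sh/github", "star:created", async event => {
  const first = new Job("first-job", "debian:latest", event)
  first.primaryContainer.command = ["echo"]
  first.primaryContainer.arguments = ["Running the first job"]

  const second = new Job("second-job", "debian:latest", event)
  second.primaryContainer.command = ["echo"]
  second.primaryContainer.arguments = ["Running the second job"]

  await first.run() // the second job starts only after the first completes
  await second.run()
})

events.process()
```

This script only runs inside a Brigade worker (or locally against brigadier's stubs, as shown later in the demo), so it is a sketch rather than something runnable in isolation.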
Like most other things that are built on top of Kubernetes, Brigade is installed using Helm, and the default configuration is well tuned for a local, non-production installation, so you can be up and running and kicking the tires in just minutes. I'm actually going to utilize the Brigade team's own Brigade instance for today's demo. For our demo today, we're going to be creating a hello-world project. It wouldn't be too hard to start from scratch, but it might actually be too easy to start out by copying an existing project definition. So what we're going to do is split the difference between those two. The first thing I'll do is make a new directory for our sample project. Now, I'll simply type brig init and give the project a name. You can see this created a number of files under a new .brigade directory. The .brigade directory is just a way of keeping everything Brigade-related separate from everything else in your project. Remember that Brigade scripts use JavaScript or TypeScript, and we won't touch on it until later in the demo, but you can actually use a package.json file to enumerate third-party dependencies for your scripts. Supposing that you are using Brigade to add some automation to an existing project that was itself implemented in JavaScript or TypeScript, having everything Brigade-related in a separate directory allows you to have separate package.json files for your main project and for your Brigade script. Again, we'll come back to this later in the demo. In the list of things that brig init did, you can also see that it created a project definition, some notes, a file for secrets, and, because we didn't already have a .gitignore file, it created one for us and added a couple entries to it. Specifically, if we should check this project into source control in the future, there are certain things we wouldn't want tracked.
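The scaffolding step looks roughly like the following. The exact flag used to name the project, and the exact set of generated files, are from memory and may differ between versions of the brig CLI.

```shell
mkdir hello-brigade && cd hello-brigade

# Scaffold a new Brigade project; the flag name for the project
# identifier is an assumption and may vary by CLI version.
brig init --id hello-brigade

# Inspect what was generated; expect a project definition, a script,
# a secrets file, and some notes (file names may vary).
ls .brigade
```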
One of those things is whatever npm modules our Brigade script uses, because it's simply not convention to track those for any JavaScript or TypeScript project. That's in no way Brigade-specific. Also, we see that our secrets aren't being tracked, and that also makes sense because one typically does not check sensitive information into source control. You can also see some notes were created. Those notes spell out your next steps. We're not going to bother looking at them right now because you've got me instead. Next, we're going to take a look at the project definition file that brig init created for us. At the very top, we see a comment that points to a JSON schema. I'm not sure how widely known this is, but YAML is a superset of JSON, so you can actually use JSON schemas to validate YAML documents. I've got a VS Code extension installed that recognizes this comment and will provide me with context help based on that schema. Note that the comment isn't specific to the extension in any way. Rather, it's specific to the YAML language server. So although I haven't tried it, I'm given to believe that you can get the same context help in any editor or IDE that integrates with the YAML language server. Now we'll look at the main body of the project definition. Recall that a project fundamentally pairs event subscriptions with worker configuration that describes how to handle those events. We can see that at work here in this file. We're subscribing to events of type exec that come from a source called brigade.sh/cli. We're going to come back to that in a minute or two. We can also see a workerTemplate section that describes our worker. Under there, we see defaultConfigFiles. When your worker launches, the full contents of whatever you put in this section will be mounted into the worker container. The worker always looks for a script named either brigade.js or brigade.ts.
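Putting those pieces together, a project definition along these lines captures what's being described. This is a sketch from memory rather than the file brig init actually generates; field names are believed correct for Brigade 2 but should be checked against the published JSON schema referenced at the top of the generated file.

```yaml
# Rough sketch of a Brigade 2 project definition (not the literal
# generated file); validate against the referenced JSON schema.
apiVersion: brigade.sh/v2
kind: Project
metadata:
  id: hello-brigade
description: A hello-world project
spec:
  eventSubscriptions:
    - source: brigade.sh/cli   # events created manually via the brig CLI
      types:
        - exec
  workerTemplate:
    logLevel: DEBUG
    defaultConfigFiles:        # mounted into the worker container
      brigade.ts: |
        import { events } from "@brigadecore/brigadier"
        events.on("brigade.sh/cli", "exec", async () => {
          console.log("Hello, Brigade!")
        })
        events.process()
```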
So by putting a brigade.ts file here in our project definition, we've effectively embedded our script in the project definition. You can put other files here as well, but we want to keep things simple for right now. Now, the mere existence of this project definition file doesn't accomplish anything unless we upload it to Brigade. Although this is not a Kubernetes manifest, you can think of it in similar terms. If you defined a secret, a deployment, or some other sort of Kubernetes resource with a chunk of YAML, Kubernetes still wouldn't know about it until you ran kubectl apply and passed it the file. Now, I'm already logged into my team's own Brigade API server. I'm going to list the projects that we have there so you can see that none are already named hello-brigade. You can also see these projects all correspond to our most active repositories, and you might surmise that we're mostly using Brigade for CI and CD. I really want to emphasize that Brigade is generally useful for anything you want to automate. CI and CD just happen to be specialized cases of automation, and they're needs we had that we were able to quite easily fill using our own product. So let's go ahead and upload our project definition to Brigade. Now, if we re-list our projects, we can see hello-brigade is in the list. We can also retrieve that project directly. Similar to kubectl, we can also ask for the output in a different format if we'd like to see more details. So I'll ask to see a YAML representation of the project. You can see that for the most part this looks just like what we uploaded. I do want to call attention, however, to a field that wasn't present when we uploaded that project definition. Something that Brigade added in for us. And that's this Kubernetes namespace field. Brigade automatically creates a new namespace for every project. Brigade is really designed for end users who are not necessarily Kubernetes experts, and it takes great pains to fully abstract end users away from Kubernetes.
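The upload-and-inspect sequence just described would look something like this. Flag spellings are from memory and may differ slightly between brig CLI versions.

```shell
# Upload the project definition to the Brigade API server.
brig project create --file .brigade/project.yaml

# Re-list projects to confirm hello-brigade now appears.
brig project list

# Retrieve the one project, asking for YAML to see full detail,
# including the Kubernetes namespace field Brigade added for us.
brig project get --id hello-brigade --output yaml
```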
Inevitably, however, people who are Kubernetes experts are going to want to push Brigade a little harder and do things like automate maintenance operations on the cluster itself. For this sort of person, supposing they have direct access to the underlying cluster, knowing the namespace that Brigade created for the project permits them to seek out the Kubernetes service accounts in that namespace that are used by workers and jobs and amend their permissions, so that they can do things within the Kubernetes cluster that a worker or job cannot usually do. This is obviously extremely advanced usage, and we're not going to discuss it any further now, but I thought it was important to highlight that this capability exists. At this point, Brigade knows about our new project, and we should probably send an appropriate event to trigger the worker. Now, in the real world, events usually come from a gateway of some sort that receives events from external systems and transforms them into Brigade events. Gateway setup can be a bit involved, and not for any Brigade-specific reason, but just because there's often a lot of setup that needs to be done on the external system, GitHub or Slack for instance, to make it send events to your gateway. All of our gateways do have comprehensive documentation that covers whatever external setup needs to be performed, but if we took the time to set something like that up right now, it would really derail our demo. If you look back at the project definition, you'll see we subscribe to events of type exec that come from a source called brigade.sh/cli. Well, primarily for testing and demo purposes, this type of event is trivial to create manually from the CLI. So instead of setting up a gateway, that's what we're going to do. Adding the --follow flag is a bit of a convenience that will wait until the worker starts and then begin streaming its logs to our console.
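Creating the event manually looks roughly like this; the flag names are from memory and worth checking against the brig CLI's help output.

```shell
# Manually create an exec event (source brigade.sh/cli) for our
# project, then wait for the worker and stream its logs.
brig event create --project hello-brigade --follow
```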
Because we had debug logging enabled, we see quite a bit of information about what exactly the worker is doing. The most important thing is that it launched a hello job that also ran to completion. Now that we've seen Brigade handle an event for the first time, let's look at some other commands for managing events. We can list all events. This is probably an overwhelming amount of information, so we can filter by project. Here we see our event and its status. If we retrieve that one event by its ID, we will see even more detail. Included in these details, we see the names and statuses of our jobs. Something else useful we can do is access the logs from the worker that handled the event. This may not seem like a big deal. We already saw these logs earlier as the event was being handled. But the Kubernetes pod that ran the worker is long gone by this point, and yet the logs have been preserved. This is because we run Fluentd on every node to collect logs from all workers and jobs and forward them to our database. We can also access the logs from individual jobs by using the --job or -j flag. There are other interesting event-related commands. For instance, you can cancel or delete events. We're going to skip over those to talk about something more interesting. Let's revisit our project definition and its embedded brigade.ts script. Let's say that over time our needs change and we need to start doing something a little more involved with this script. It probably won't be long before we're sick of working with TypeScript that's embedded in a YAML document and offers us no syntax highlighting or context help. This is actually pretty easy to deal with. So the first thing I'm going to do is copy the embedded script from our project definition. Now I'm going to create a brigade.ts file and paste in our script. Immediately, I've got all my syntax highlighting and such. But you can see that a Brigade-specific import that I'm using, the brigadier library, does not resolve.
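The event-management commands just walked through look roughly like the following; `<event-id>` is a placeholder for the ID returned when the event was created, and exact flag spellings may vary by CLI version.

```shell
brig event list                           # all events (can be a lot)
brig event list --project hello-brigade   # filter down to our project
brig event get --id <event-id>            # full detail, incl. job statuses
brig event logs --id <event-id>           # worker logs, preserved even
                                          # after the worker pod is gone
brig event logs --id <event-id> --job hello  # logs from a single job
```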
So I cannot get context help for that module. This is pretty easy to fix using conventional techniques that should be familiar to anyone who's done much with JavaScript or TypeScript. We can use the JavaScript package manager of our choice. Brigade supports both npm and yarn. So here I'll run yarn init from inside the .brigade directory. This creates a minimal package.json file for me. Now I'll run yarn add to install the brigadier library. This takes care of our problem, and now we'll easily get full context help for the brigadier library. Apart from the context help, resolving that library also means that we're now able to compile and execute our script locally. This can be a convenient thing to do for testing purposes, and it's important to note that this won't actually integrate with the Brigade API server in any way. You see, the brigadier library is actually just interfaces and some stub implementations of those interfaces. At runtime, our worker substitutes an alternative implementation of those interfaces to enable communication with the Brigade API server. If you run your script locally, you can develop a sense of whether it works as you expected without any consequences. Now, because my script is TypeScript, I do have to compile it first. If your script were JavaScript, you could skip this step. And now we can run it. There are ways to control the source and type of the dummy event, and even the payload, using environment variables, but they're documented, and so we're going to move on. We've successfully separated our script from the project definition. Let's revisit that definition briefly, remove the embedded script, and tell it to go looking elsewhere to find it. This is where our context help comes in handy. Now let's run git init and add all the files we've created. Before we do this, I'm going to remove the compiled brigade.js file. I don't actually want to include that in source control. I'd rather have the worker compile my TypeScript for me.
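The local development workflow described above amounts to something like this. It assumes the TypeScript compiler is available on the PATH; nothing here talks to a real Brigade API server, since the local run uses brigadier's stub implementations.

```shell
cd .brigade
yarn init --yes                    # create a minimal package.json
yarn add @brigadecore/brigadier    # resolve the brigadier import

# Compile the TypeScript (skip this step for plain JavaScript),
# then run the script locally against brigadier's stubs.
tsc brigade.ts
node brigade.js
```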
We can see all of the files that we're about to commit, and they do not include the secrets or npm modules. Now I'm going to add a remote for a repository that I created ahead of time on GitHub. Now we'll push our files to that remote. Last, let's submit the updated project definition to Brigade. Now we'll manually create another event. Note that the debug output from the worker is a little different this time around. It tells you exactly what it's doing, which includes using the package.json and yarn.lock we provided to resolve dependencies. You even see where it substitutes an alternative implementation of the brigadier interfaces. We'll briefly touch on a few other useful things now. The next thing we'll do is demonstrate that you can use whatever third-party dependencies you like in your brigade.js or brigade.ts script. So here we'll add and use the unique-names-generator module. Now we're going to incorporate this module into our script. Right here, in the same go, I'm going to show you how to chain multiple jobs together to create more interesting workflows. I also want to demonstrate that jobs can be based on any OCI image that you would like. So instead of basing this second job on the debian:latest image, we'll use the alpine:latest image. Here we'll chain these jobs together. Last but not least, I'm going to show you how to add secrets to your project and make use of them. This is one way to do it. You can set secrets one at a time. An alternative method is to put your secrets in a YAML file. You can upload the entire file at once. Here's how you use secrets within your script. You can see our context help is really coming in handy. Now, note that what I've done here is for demonstration purposes only. It's a really bad idea to allow secrets to bleed into your logs as I've done here. Now, there have been no changes to our project definition, so there's no need to update that. But we have made changes to our script, so we should probably push those up to GitHub.
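A sketch of what the resulting script might look like follows. The secret key name (someSecret), the unique-names-generator usage, and the `event.project.secrets` access pattern are assumptions from memory rather than the demo's literal code, so treat this as illustrative only.

```typescript
import { events, Job } from "@brigadecore/brigadier"
// Third-party dependency added via yarn add unique-names-generator.
import { uniqueNamesGenerator, names } from "unique-names-generator"

events.on("brigade.sh/cli", "exec", async event => {
  // Job 1: greet a randomly generated name, running on debian:latest.
  const randomName = uniqueNamesGenerator({ dictionaries: [names] })
  const hello = new Job("hello", "debian:latest", event)
  hello.primaryContainer.command = ["echo"]
  hello.primaryContainer.arguments = [`Hello, ${randomName}!`]

  // Job 2: greet a project secret, running on alpine:latest instead.
  // Demo only: echoing a secret lets it bleed into your logs.
  const helloSecret = new Job("hello-secret", "alpine:latest", event)
  helloSecret.primaryContainer.command = ["echo"]
  helloSecret.primaryContainer.arguments = [
    `Hello, ${event.project.secrets.someSecret}!`,
  ]

  // Chain the jobs: the second starts only after the first completes.
  await hello.run()
  await helloSecret.run()
})

events.process()
```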
One last time, we'll trigger an event manually and see what happens. This time around, we see that two jobs ran in sequence. If we look at the logs for the individual jobs, we can see the first job said hello to a unique name that we generated using our third-party dependency. And we can see for the second job that it said hello to the secret we passed in. With that, we're pretty much done. Remember, I've been using my team's own Brigade instance for this demo, so we'll just clean up after ourselves by deleting the project. Deleting the project also removes all associated events, logs, and Kubernetes resources, and that concludes the demo. Really, up to this point, we've discussed just the basics. Brigade has a lot of other tricks up its sleeve. Here are some important features we haven't touched on. We have Git integration, so it's easy to build and test source code. We have shared volumes, so it's easy to pass artifacts from one job to the next. We have configurable limits, so you can configure the maximum number of workers that may execute concurrently or the maximum number of jobs that may execute concurrently. This is important because even the largest Kubernetes cluster cannot possibly execute an infinite amount of work concurrently. We have fair scheduling. That is to say, each project has its own work queue, and we pseudo-randomly select events from the head of each one. So if one project receives a large volume of events and another project receives an event afterwards, the second project does not have to wait for all of the first project's events to be cleared. We also have several user authentication options that include any OpenID Connect identity provider and GitHub. I especially want to zero in on that. Recall that Brigade is built to be useful for teams where not everyone is a Kubernetes expert, and some may not even have credentials for directly accessing a Kubernetes cluster. This introduces a requirement to authenticate to Brigade's APIs in some other way.
We didn't want to build user management from scratch, so we opted to build integration with third-party identity providers instead. Our favorite is GitHub. It's actually very easy to configure a Brigade installation to delegate authentication to GitHub and to limit authentication to users belonging to GitHub orgs of your choice. Something we're proud of is that we haven't just built Brigade. We've been developing an entire ecosystem around it. Here are some gateways and other peripherals that we've been working on. Gateways include Azure Container Registry, Bitbucket, CloudEvents 1.0, Docker Hub, GitHub, and Slack, and there are more to come. We also provide monitoring in the form of Brigade Metrics. We have an offering for chaos engineering, something called the Brigade Noisy Neighbor, that can apply a lot of load to your Brigade installation. And we also offer a couple of official SDKs, with a third and possibly a fourth in the works. Because it's a question we're asked frequently, I want to briefly touch on how Brigade differs from Argo. And let me say off the bat that we love Argo. We don't think we're better than them. We think we're different from them. Brigade and Argo address similar use cases, and both model workflows as assemblies of containers, but they approach the problem space differently. Argo emphasizes workflows. Events are an add-on. Argo is also fully Kubernetes-native. Workflows are Kubernetes resources. Compare that to Brigade, which emphasizes events. Events do not describe workflows. They only indicate that something has occurred. Subscribed projects can handle events as they each see fit. Brigade also uses Kubernetes only as an implementation detail. Workflows are scripted, and tooling fully abstracts Kubernetes, so teams can build automation that executes in Kubernetes without being experts and without every team member having direct access to the cluster.
So the bottom line is there's a lot of overlap between Brigade and Argo, but they take very different approaches to similar problems. So when would you choose one over the other? To a large extent, it may be a matter of personal preference, but other factors are likely to enter the equation as well. What exactly are you building? Argo is, in my opinion, in many ways lower-level than Brigade. It's probably a good starting point if you're building your own product that has a need to build and submit workflows for execution. Brigade is a little more oriented around end users and, in particular, end users who are not necessarily Kubernetes experts. So if you're a team that's looking to automate a lot of your own tasks quickly and the team doesn't have a lot of Kubernetes experience on the bench, Brigade may be what you're looking for. Last, we're quickly going to talk about Brigade's architecture, for two reasons. First, I think this will, to a large extent, demystify Brigade. Sometimes it's useful to look behind the curtain and see there's really no magic at work. Second, I hope this will be a useful primer for anyone who might be interested in making contributions to the project. Toward the top of this diagram, we see external systems. These could be things like GitHub or Slack. They send events to gateways, which convert the events to Brigade events and utilize the Brigade API to enqueue events on a message bus. And the CLI permits end users to interact with the API as well. The API server's main role is to locate subscribers for each event, create a discrete copy for each subscriber, and add it to both the database and the message bus. On the other end of the message bus is the scheduler, which monitors for capacity and allocates it to workers and jobs. The observer component watches running workers and jobs in the substrate and reports their statuses back to the API server.
Last but not least, a logging agent runs on every substrate node and forwards worker and job logs to the database for safekeeping. This diagram is the same as the previous one, but superimposes the specific technologies and protocols that Brigade utilizes. The API is a RESTful one served over HTTPS. Our database is MongoDB, and our message bus is ActiveMQ Artemis, the next generation of Apache ActiveMQ. The API server and scheduler both talk to the message bus using the AMQP 1.0 protocol. We use Fluentd to collect and forward logs, and last, but certainly not least, Kubernetes is our workload execution substrate. Thank you, everyone, for your time today. I'm going to leave you with a few resources for getting started with Brigade and for locating and engaging with the maintainers and other community members. So here's our website, some getting-started documentation, our main repo on GitHub, our Twitter handle, and our Slack channel, which is part of the Kubernetes Slack. I especially want to highlight the quick start as being your next step for trying this out for yourself. And I also want to emphasize that you should totally come talk to us on Slack. We love our community, and we're really interested in hearing from you. Thanks again for viewing this webinar, and have a wonderful day.