Hi there, my name is Liz Rice. I run the open source engineering team at Aqua Security, where we build tools to help enterprises secure their cloud native deployments. And I'm here with Daniel Pacak.

Hi everyone, I'm a software engineer at Aqua, taking care of our open source projects, including Starboard.

Today we want to share with you some background on what Starboard is, describe some of the design decisions that we've made as we've been developing it, and then talk a bit about the way we see Starboard developing in the future.

So first of all, the motivation behind Starboard. Meet our friend Dave Loper. He uses Kubernetes, and to do so he uses tools like kubectl, and perhaps a dashboard like Octant or another kind of IDE-style interface that uses the Kubernetes API to access the cluster and manipulate resources within it. Now, if Dave is also interested in security, today he has to learn to use a variety of different security tools. They all have different interfaces, and they generate output in different formats: maybe some of it is in HTML reports and some is in JSON files. The idea behind Starboard is to bring all these disparate tools into the world of Kubernetes. So we've created the Starboard CLI, which provides a familiar kubectl plugin interface that Dave can use to run security tools, and it creates the reports in the form of Kubernetes custom resources that Dave can access using kubectl and the dashboard. So over to Daniel to show us the CLI in action.

Right, so I'm going to demonstrate how we can run kube-bench as a security tool via the Starboard CLI. What I have here is a local kind cluster with four nodes, three of them worker nodes. We also have to install the Starboard binary; here I'm installing it from the krew index. Then we have to initialize Starboard. It's a one-time command that creates all the CRD definitions, service accounts, and a config map. With that, we can trigger the kube-bench scanner. Since we have four nodes, we spawn four Kubernetes jobs, which run the benchmarks on each node. Once it's done, we can list the results. As you can see, for each node we have all the checks that were run, and you can see that there are some failures.

For those who are more comfortable with graphical user interfaces, we also developed a Starboard plugin for Octant. This plugin allows us to navigate to the list of nodes, select a particular node, and open a CIS Kubernetes Benchmark tab that we contributed, which shows the same information as you saw in the terminal.

So the CLI allows our friend Dave to run security tools manually through kubectl starboard. The next step is to automate this using an operator. So we've created the Starboard Operator; at the moment it just runs vulnerability scanning on workload resources. The operator watches for new pods starting in the Kubernetes cluster, and it runs a vulnerability scanning tool. We have a couple of scanners already implemented, and we'll talk about that later. Having run the scan job, the operator writes a VulnerabilityReport custom resource.
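To make that loop a little more concrete, here is a minimal sketch, assuming controller-runtime, of what such a reconciler could look like. This is illustrative only, not Starboard's actual code: the helper functions are hypothetical, and we've keyed it on replica sets, which is where the reports end up, as we'll explain later.

```go
package sketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// WorkloadReconciler reacts to workload changes and makes sure an
// up-to-date vulnerability report exists for each workload.
type WorkloadReconciler struct {
	client.Client
}

func (r *WorkloadReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var rs appsv1.ReplicaSet
	if err := r.Get(ctx, req.NamespacedName, &rs); err != nil {
		// The workload may already be gone; nothing to scan then.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	if hasFreshReport(&rs) {
		return ctrl.Result{}, nil
	}
	// Schedule a scan job (in the operator's own namespace, as discussed
	// later in the talk) and let a later reconciliation persist the report.
	return ctrl.Result{}, scheduleScanJob(ctx, r.Client, &rs)
}

// hasFreshReport and scheduleScanJob are hypothetical stand-ins for the
// real report lookup and job creation logic.
func hasFreshReport(rs *appsv1.ReplicaSet) bool { return false }

func scheduleScanJob(ctx context.Context, c client.Client, rs *appsv1.ReplicaSet) error {
	return nil
}
```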
So over to Daniel to show us the operator in action.

Yeah, so let's take WordPress as a sample application. What you can see is a deployment descriptor, but this time I'm going to use Octant's Apply YAML feature to create this deployment. Now, if we switch to the namespace in which we run the operator itself, we can see that in that same namespace it immediately spawns a scan job. The job is labeled, so we can figure out which workload it is scanning, and it's related to the active replica set of the WordPress deployment. If everything is fine, this job is automatically cleaned up, and the operator creates a vulnerability report and associates it with this replica set. We can also go and look at the descriptor of the scan job's pod: there is an init container, there is a WordPress container, and that's the command that we are running. Since Octant automatically refreshes the UI, we can see the job has completed. Switching back to the default namespace, where WordPress runs, this is where we made the vulnerability information available. There is a status card component where the plugin displays the stats, and if you want to drill down, there is a vulnerabilities tab with the list of vulnerability items.

The operator will also act on a rolling update event. So if we bump up the version of the deployment, to see whether the newer version has fewer critical vulnerabilities, we will see, in the same way, another scan job in the operator's namespace. It uses a different set of labels, because the active replica set for the application has changed. If everything goes well, you will see the report updated here. For now, it is still scanning... and here we are: the latest version has only two critical vulnerabilities. Again, we can drill down and see all the details.

Thank you, Daniel. So now you've seen Starboard in action, let's talk a little bit about some of the design decisions that we've taken while building it. You've seen how security report information is created and associated with particular resources, and we're trying to generalize handling different types of security report associated with different types of resource. As you saw, we're using labels on the security-related resource to identify which Kubernetes resource it relates to, so we can use label selectors to extract the right security information for a particular Kubernetes resource. But we're also using an owner reference, and the good thing about this is that when the owning resource gets deleted, the associated security resource that it owns also gets deleted. We simply don't have to worry about garbage collection, because Kubernetes takes care of it for us.
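As a rough sketch of that pattern, assuming controller-runtime, it could look something like this. The label keys and the helper function are made up for illustration, and Starboard's real label keys may differ.

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// attachReport labels a report so it can be found with a label selector,
// and sets an owner reference so Kubernetes garbage-collects the report
// together with the workload it describes.
func attachReport(report metav1.Object, owner *appsv1.ReplicaSet, scheme *runtime.Scheme) error {
	report.SetLabels(map[string]string{
		// Illustrative keys only.
		"starboard.resource.kind": "ReplicaSet",
		"starboard.resource.name": owner.Name,
	})
	// Deleting the ReplicaSet now cascades to the report; no manual
	// garbage collection needed.
	return controllerutil.SetControllerReference(owner, report, scheme)
}
```

With the labels in place, finding the reports for a given workload is a plain label-selector query, and the owner reference gives us the cascading delete for free.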
Deciding to use this owner reference approach actually settled a whole other design decision for us. We'd been going back and forth on whether it would be useful to maintain historical security reports. By using this owner reference, we know we're going to be cleaning up security resources when the associated resource is deleted, so we can't hold on to historical information. And actually that makes a lot of sense, because Kubernetes isn't intended to hold a historical database of things that have happened in the cluster in the past, so there's no reason why it should hold on to security reports from the past either. There are other options for that: log the information and store it elsewhere, outside of the cluster.

We also had to make a decision about the name we give to each of the security report resources. Initially we thought of using a UUID, because it would definitely be unique, but it's a random string that's meaningless to humans. The alternative would be a deterministic name, but while we were still thinking about historical reports, we worried that it would be more complex. We could solve the problem, but making those names unique would involve doing something like including a timestamp if we wanted to keep them human-readable. So we initially leaned towards the simple solution, which was just to use UUIDs and know that they would be unique. Once we decided that we're not going to store historical reports, and that there will only ever be one security-related custom resource for a particular resource, the whole naming issue stopped being so complicated. We could use deterministic names, because concatenating the resource type and its name gives us an ID that is unique within the namespace and meaningful to humans.

But it also had a useful implication for the implementation, right, Daniel? Yes. Since we are using the controller-runtime library as part of the Starboard Operator code base, which provides a pretty advanced Kubernetes client with its own cache, deterministic names solved the problem of duplicate vulnerability reports that we happened to create from time to time, because we were not leveraging that caching mechanism. So eventually, a deterministic resource name not only reads better in the CLI, it also improves the reliability of the operator's implementation.
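As a purely hypothetical illustration of that naming scheme (the exact format Starboard uses may differ):

```go
package sketch

import "strings"

// reportName builds a deterministic, human-readable name for a report:
// the workload kind concatenated with the workload name is unique within
// a namespace.
func reportName(kind, name string) string {
	// e.g. reportName("ReplicaSet", "wordpress-7b5d8f6c4")
	//      -> "replicaset-wordpress-7b5d8f6c4"
	return strings.ToLower(kind) + "-" + name
}
```

Because the name is stable, repeated reconciliations create or update the same object rather than piling up new ones, which is exactly the duplicate-report problem Daniel mentioned.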
When we first spoke with customers about Starboard, something they brought up very quickly was role-based access control. Generally speaking, Dave should only have access to the security information related to the resources that he has access to. So putting those reports in the same namespace as the resources can make our RBAC configuration pretty straightforward.

But as you saw, Starboard needs to run jobs, and we had to decide what namespace to run those jobs in. We decided to do that in a separate namespace for Starboard, for a couple of reasons. First of all, it means Starboard doesn't need permission to run workloads in your application namespaces, and that's better from the point of view of the principle of least privilege: we want to give everything as limited permissions as possible, so Starboard can only create these jobs in its own namespace. The other advantage is that when Dave is looking at the applications running in his namespace, it's not cluttered up with the occasional scan jobs that the Starboard Operator automatically creates. Now, for the CLI, the scan jobs run in the starboard namespace, and for the operator they run in whatever namespace you're running the operator in, right? Yes. Notice also that not every resource built into Kubernetes is scoped to a namespace. There are nodes, and you saw in the demo that we run kube-bench for each node and then associate a report with that node. So we also distinguish between namespaced and cluster-scoped custom security resources: a vulnerability report is namespaced, whereas the kube-bench report or kube-hunter report we already mentioned is cluster-scoped.

So we've seen that in general we're associating a security report with a resource, and when it comes to running workloads, we actually had a few options to consider. Remember that we're trying to make it easy for Dave Loper to find out the security information about his workloads, his running applications. Now, if he runs an unmanaged pod, it's pretty straightforward: the vulnerability report is associated with that unmanaged pod. But if we're talking about deployments, there could be multiple instances of each pod, and if we were to associate vulnerability reports with those pods, we'd have duplication. Those reports can actually be pretty large, so it could turn into a practical storage issue to store vulnerability reports at this level, and we decided not to do that. An alternative might be to associate the vulnerability report with the deployment. After all, that's the resource Dave is typically going to be manipulating. But there is a problem with this: there isn't always a single replica set per deployment, and if we have multiple replica sets, they may have different images in their pod specs, so we'd need different vulnerability reports for the different images they refer to. So the conclusion is that we need to hold vulnerability reports associated with replica sets. Come to think of it, unmanaged pods can also be replaced with a pod of the same name but a different image, so for that reason we include a label with a hash of the pod spec, so that we can tell if the security report for a particular pod is out of date.
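Here is a minimal sketch of that hashing idea, using a plain FNV hash over the serialized spec; Starboard's actual hash function may differ.

```go
package sketch

import (
	"encoding/json"
	"fmt"
	"hash/fnv"

	corev1 "k8s.io/api/core/v1"
)

// podSpecHash derives a short fingerprint of a pod spec. Storing it as a
// label on the report lets us detect that a pod of the same name now runs
// a different spec, meaning the old report is stale.
func podSpecHash(spec corev1.PodSpec) (string, error) {
	raw, err := json.Marshal(spec)
	if err != nil {
		return "", err
	}
	h := fnv.New64a()
	h.Write(raw) // hash the serialized form of the spec
	return fmt.Sprintf("%x", h.Sum64()), nil
}
```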
Even if we're storing these vulnerability reports per replica set, from Dave's point of view he's probably interested in querying them in relation to his deployment. So we've made it easy to traverse the hierarchy: you can make a query at the deployment level, and it looks up the vulnerability information from the associated replica set. Over to Daniel to show us the hierarchy.

All right, so just to show you the current reports that we have in our cluster: we scanned all the nodes with kube-bench, and as you can see, we set the owner reference for each report to point to a kind worker node. And thanks to this great kubectl tree plugin, we can see the whole hierarchy of objects. It's even more interesting when we display it for the WordPress application. Since we did a rolling update, we have two replica sets, one of them active, as you can see from the ready status here, and we also created two vulnerability reports, each linked back to its replica set. And just to show you what we mean by traversing the hierarchy of objects: remember that even though we don't have a report associated with the deployment object, we do display this information, because we can programmatically traverse the tree and show all the vulnerability information right in the user interface.

So we've talked about how we envisage Starboard being extensible and being used to show security reports generated by multiple different tools. Now let's talk a bit about what you would need to do if you wanted to add support for a new security tool in Starboard. Over to you, Dan.

So here you can see a quick explanation of how Starboard schedules jobs and how you, or a security vendor, can contribute your own vulnerability scanner. It's not a plug-and-play experience yet, but in general we believe that we can support other tools out of the box. We have this reconciliation loop that is constantly watching deployments or pods as they are created; Starboard then takes care of scheduling a job, adding all the labels that we explained, and eventually persisting the report. What we expect from the vendor is to implement a simple interface that provides a pod template spec. As a prerequisite, we expect that your vulnerability scanner is containerized, so it can give us its image and build up the command that is used for scanning. In the case of Trivy, which is the default vulnerability scanner in Starboard, the command is pretty simple: it's basically just an image reference. Then there is the Trivy output converter. This is a piece of code that reads the logs of the completed pod and transforms the Trivy model into the Starboard model, which in this case is a vulnerability report. The report has its own schema, but we can't know the model of the third-party tool, so these two bits have to be implemented separately. If we move to the next slide, you will see a simple interface called VulnerabilityScanner with those two callbacks. One is GetPodTemplateSpec; that's where you specify your image reference, the actual command, and all the args. The second method is a callback that parses the stream of logs coming from the pod.
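In code, the interface Daniel describes could look roughly like this; the method names here are paraphrased from the talk rather than copied from the Starboard repository.

```go
package sketch

import (
	"io"

	corev1 "k8s.io/api/core/v1"
)

// VulnerabilityReport stands in for Starboard's custom resource model.
type VulnerabilityReport struct {
	// ...
}

// VulnerabilityScanner is what a scanner vendor implements: how to run
// the containerized scanner, and how to turn its log output into a report.
type VulnerabilityScanner interface {
	// GetPodTemplateSpec returns the scanner's image reference plus the
	// command and args used to scan the given image.
	GetPodTemplateSpec(imageRef string) (corev1.PodTemplateSpec, error)
	// ParseVulnerabilityReport reads the stream of logs from the completed
	// scan pod and maps the tool's output to Starboard's report schema.
	ParseVulnerabilityReport(imageRef string, logs io.Reader) (VulnerabilityReport, error)
}
```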
If you intend to reuse all the custom security resources that we defined as part of Starboard, you can also swap out, for example, Polaris, which is used by default as the configuration audit tool. As long as you comply with the schema of the config audit report, you can reuse, for example, the Octant plugin; you don't have to build your own dashboard or anything like that. Just feed in the data from your tool and we will visualize it for you. The same applies to other dashboards and IDEs for which we don't have plugins yet. And since CRDs are a common thing in Kubernetes, and since we added additional printer columns, even the default UI for displaying custom resources in Lens allows you to quickly spot that a given application has 91 critical vulnerabilities. Of course, we could wrap that in a plugin, but it's already useful, and you can also see here all the labels and naming conventions that we were talking about.

So today Starboard is extensible: it's possible to add support for new security tools, but you do need to write some code. Where we would love to get to in the future is the ability to add new security tools through configuration alone. Imagine that it's really a simple case of telling the operator: I want to watch a certain type of resource, and when there are changes in that resource, I want to call a particular security tool, and it will generate custom resources of this particular type. Adding support for a new type of security tool would then be a case of creating a new custom resource definition and adding the configuration for that new tool to the definition. That's the vision of where we want to get to with pluggability; today it is a little bit more complicated, and you do need to write a bit of code, but this is the ultimate goal. We would also be very keen to hear feedback on the custom resource definitions that we've created so far. We hope they're flexible enough to plug in other, alternative tools, but we're very open to hearing feedback.

The last thing we wanted to talk about in terms of the future of Starboard is helping Dave Loper ask the question: what are the most important security issues in my cluster, or in the particular namespace I care about? We want to get to a point where we can summarize the most important issues from across these different types of security report, and use Starboard to make it very easy for Dave to find out what security issues he really needs to worry about.

So if that sounds interesting to you, you can download and run Starboard from GitHub; it's also available on Artifact Hub and OperatorHub. We would love for you to get involved. Do come and check out the Starboard repository on GitHub, get in touch with us, or reach out to us here at the conference. We'd love to hear from you. Thank you very much. Thank you.