Hello. Welcome to my session today. Today, we're going to be taking a look at securing the continuous integration and delivery infrastructure for the Tinkerbell project. My name is David McKay and I am a Senior Developer Advocate for Equinix Metal. I am also a CNCF Ambassador and I do quite a lot of live streaming. You can find my streams at rawkode.live. My goal with streaming is to provide educational resources for all of us to learn the vast cloud native landscape together. And today I want to introduce you to Tinkerbell, a bare metal provisioning system open sourced by Equinix Metal, now a CNCF sandbox project that aims to solve some challenges that have been difficult for a long time. And that is commoditising bare metal, which is no easy task. So before we dive right in, let me just shrink myself down.

So Tinkerbell isn't your run-of-the-mill project. When we're working directly with metal, there are a lot of things that are a little bit harder than working with a virtual machine. First, Tinkerbell has to run an in-memory operating system that can handle partitioning disks, encrypting disks, and writing and installing operating systems through container-based workflows. There are multiple microservices that are responsible for understanding which bare metal devices are coming online through MAC address identification. There is iPXE for booting operating systems and streaming them over the network. And of course there's IP address management too: DHCP. And while you can use traditional CI systems, you definitely have to provide your own runners.

So in order to build out our continuous integration and delivery system for the Tinkerbell project, we need access to some metal. Personally, I work for Equinix Metal, and Equinix Metal do donate a substantial amount of infrastructure and inventory to the Cloud Native Computing Foundation for projects like this. My go-to tool for spinning up brand new machines on any cloud provider of choice is Pulumi. And there are a few reasons I want to talk about Pulumi for this session. One, Pulumi very graciously gave us free access to Pulumi Cloud for the Tinkerbell project. This comes with a whole bunch of benefits. From the security side, it meant that we could commoditize access through their RBAC system. It also meant we could take advantage of their secrets management as well. Something that Pulumi does really well is allow us to really adhere to and adopt GitOps by having everything that we need, including secrets, in the repository and pushed. They are of course encrypted using the Pulumi Cloud backend. However, if you want to use Pulumi and you want to stick to open source, you can use any cloud KMS as a backend as well. This is our actual production stack file, which is open source on github.com. You can see we have our AWS credentials here and we also have our Equinix Metal credentials.

Next, we can't just spin up metal and magically do something, right? We have to go through some provisioning stage, so we need some software on the devices too. My go-to tool for this is SaltStack. Again, focusing on the security reasons for why I'm using SaltStack: one, there's no SSH as a transport protocol. SaltStack uses ZeroMQ-based messaging to pass messages, which the minions subscribe to from the master before executing those tasks.
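To give a flavour of how Pulumi and SaltStack meet, here is a minimal sketch of what such a Pulumi program can look like. To be clear, this is not the Tinkerbell project's actual program (that lives in the infrastructure repository I'll show shortly); it assumes the @pulumi/equinix-metal provider, and the config keys projectId, saltMasterAddress, and saltMasterFingerprint are hypothetical names. The idea is simply: read values from the encrypted stack config, then hand the new machine a cloud-init payload that bootstraps the Salt minion on first boot.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as metal from "@pulumi/equinix-metal";

const config = new pulumi.Config();

// Plain values live in the stack file as-is; secrets are stored
// encrypted by the Pulumi Cloud backend and only decrypted at runtime.
const projectId = config.require("projectId");                            // hypothetical key
const saltMaster = config.require("saltMasterAddress");                   // hypothetical key
const masterFingerprint = config.requireSecret("saltMasterFingerprint");  // hypothetical secret

// requireSecret returns an Output<string>, so the cloud-init payload is
// built with pulumi.interpolate rather than plain string concatenation.
// cloud-init's salt_minion module points the new machine at the master,
// and master_finger pins the master's key so the minion won't talk to
// an impostor.
const userData = pulumi.interpolate`#cloud-config
salt_minion:
  conf:
    master: ${saltMaster}
    master_finger: ${masterFingerprint}
`;

// A single CI runner on Equinix Metal, bootstrapped as a Salt minion.
const runner = new metal.Device("runner-0", {
    hostname: "runner-0",
    plan: "c3.small.x86",
    facilities: ["sv15"],
    operatingSystem: "ubuntu_20_04",
    billingCycle: "hourly",
    projectId: projectId,
    userData: userData,
});

export const runnerIp = runner.accessPublicIpv4;
```

With something like that in place, a pulumi up provisions the machine and the minion checks in with the master shortly after first boot; everything else is Salt's job from there.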
Salt also has a concept of pillars, which allows us to have secret information available on the Salt master node and selectively distribute the keys that we want to each individual machine, or minion, based on a whole bunch of grains and parameters. One thing about the messaging system here is that it simplifies all of our network policies. The minions only have to be able to speak to the Salt master. We're not opening up ports for the Salt master to reach all of our minions, and that's a big win too. And the way that we're provisioning SaltStack on these machines is by leveraging the Pulumi secret store: writing the secrets that we need to cloud-init, and then they become available to Salt. And the secrets being stored in pillars means that we can selectively distribute them based on grain data to each of the minions. So the minions only get the secrets that we allow them to see.

And it's worth pointing out that, while Tinkerbell was open sourced by Equinix Metal and a majority of the team comes from Equinix Metal, it is a CNCF sandbox project. This means that we're using hardware not on our Equinix Metal accounts but on our CNCF accounts. It also means that any maintainer or contributor, regardless of where they are employed, should be able to have the same amount of access. We want to protect against the bus factor, of course, as well. So we need to commoditize the access to the machines. And for that I'm falling back on one of my other favorite tools, Teleport. Teleport allows us to disable OpenSSH. We don't need to rely on giving people access to the machines by reaching out and getting their SSH keys or scraping them from GitHub. We don't need to add everybody to the project on Equinix Metal. We can use Teleport's SSH server, which is backed by GitHub SSO, and restrict access to these machines based on a group that we create within the Tinkerbell organization on GitHub. So in order to give people access to the runners or to the Salt master itself, we just have to add them to a group on GitHub. And that's pretty cool. We're keeping Teleport secure by only allowing private IPv4 access for other nodes to join the cluster. And again, the tokens are all stored in the Pulumi store, encrypted by Pulumi, and distributed via cloud-init and SaltStack to the runners as needed; there's a short sketch of that flow below, after this walkthrough.

So what does that all look like? Okay, so first, you can see all of the code to provision this infrastructure and the applications running on top of the machines at github.com/tinkerbell/infrastructure. We have the Pulumi directory, which is responsible for running the Pulumi app, provisioning the bare metal, writing everything that we need to cloud-init, and bootstrapping the Salt setup. From there, Salt takes over and installs everything else that we need on itself and the runner devices. Using Pulumi Cloud, we have access to see when Pulumi commands were run against a stack. We can just click on the Tinkerbell infrastructure production stack. We can see the outputs. We can see the configuration used, including secrets, although they are nicely obfuscated. And what else is cool is that we have the activity view that shows us every time someone ran the Pulumi stack, so we've got really good visibility into when any of these secrets were accessed, the state was changed, and nodes were spun up. Now, in order to get access to the machines, we can browse to teleport.tinkerbell.org. There is only one option to log in, and that is through GitHub.
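Before we watch the login, here is the sketch promised above of how a Teleport join token can travel from the Pulumi secret store to a runner. Again, this is illustrative rather than the project's actual code: the config keys teleportJoinToken and teleportAuthServer are hypothetical, and the Teleport node config shown is a deliberately minimal one that runs only the SSH service and joins over the private address.

```typescript
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();

// The node join token is a Pulumi secret: encrypted at rest in the
// stack file, only decrypted while the program runs.
const joinToken = config.requireSecret("teleportJoinToken");  // hypothetical key
const authServer = config.require("teleportAuthServer");      // private IPv4 of the auth server

// cloud-init writes a minimal Teleport node config and starts the
// service (assuming Salt has already installed Teleport and its
// systemd unit). Only the SSH service runs on the runners; auth and
// proxy live elsewhere, reachable over the private network only.
export const teleportUserData = pulumi.interpolate`#cloud-config
write_files:
  - path: /etc/teleport.yaml
    content: |
      teleport:
        auth_token: ${joinToken}
        auth_servers: ["${authServer}:3025"]
      ssh_service:
        enabled: yes
      auth_service:
        enabled: no
      proxy_service:
        enabled: no
runcmd:
  - systemctl enable --now teleport
`;
```

In practice this payload would be merged into the cloud-init user data from the earlier sketch when the runner devices are created, so one pulumi up hands each machine both its Salt minion config and its Teleport join token.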
So, back in Teleport: I click the magic button and I now have access to all of the machines within the infrastructure. I think what is really cool about Teleport as our means of SSH access is that we have the ability to see active sessions, and in fact we can even join them while they're in progress and see what people are typing or doing. And the sessions are also recorded, but let's take a look at that. We can jump onto our Salt master and I can just run a nice simple Salt command to ensure that all of my devices are online. Back over here, you can click on active sessions, and we can see that I have an SSH session in progress and I have a join button. If I type echo hello, I can see it in both of my terminals. Very, very cool. Let's end both of these sessions. Refresh, and that will end in just a moment. Hopefully it shows up here... and now our session is gone. We can go to our audit log, where we can see that sessions were started, we can see the single sign-on from GitHub, we can see that someone joined a session, and we can see that the user disconnected. We can come back here and click play on our recorded session and see all of those commands that were executed. We have the salt test.ping, followed by our echo. Thank you for watching this session. I hope you get as much value out of Pulumi, SaltStack, and Teleport as I, the Tinkerbell project, and Equinix Metal do. Have a great day.