I'm Jeremy from Sailplane AI, and I'm here to talk about pilots for platform engineering. Platforms are the APIs and tools that let software engineers self-serve their applications. Pilots are AIs that build and operate the platform for you. At Sailplane we wanted to see if this was feasible: can we create pilots that build the platform for you and operate the platform for you? By building the platform, we mean using generative AI to implement the APIs and tools that comprise your platform. And by operating the platform, we mean creating AIs that autonomously diagnose and resolve problems for you.

Let's start by talking about building the platform. If our platform consists of a set of custom resources defining those APIs, then implementing those APIs means implementing the corresponding controllers. We wanted to see if we could implement those controllers for you using autonomous code generation and testing. Here's how that works. It's an iterative process: we prompt the model to implement the controller, then try to build and test the code; we collect any errors and feed them back to the model, asking it to fix them; we then merge the fixes into the existing code and repeat the process. By iterating this way, we can produce functional code. A sketch of this loop is shown below.

Here's an example: creating a vector database custom resource. Vector databases are becoming increasingly important for a new class of generative AI applications, and as with a lot of databases, there are many low-level details that should be the purview of the platform team, not the application engineers. So we want to create a custom resource, like the one illustrated here, that exposes only the knobs application engineers should have to worry about; a hypothetical sketch of such an API definition is also shown below. Here's an example of a pilot building that controller for you. On the left we have the input, which is the API definition for this custom resource, and on the right we have the generated code produced by this iterative process. We're really excited about this, because the bottleneck to creating bespoke platforms is being able to implement those controllers, and we think generative AI can solve that problem.
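To make the build loop concrete, here is a minimal Go sketch of the generate-build-test cycle described above. It is an illustration under stated assumptions, not Sailplane's implementation: the `generateController` helper stands in for an LLM call, `go test` stands in for the build-and-test step, and the `controller.go` output path and the ten-iteration cap are arbitrary.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// generateController is a hypothetical stand-in for an LLM call: it prompts
// the model with the API definition, the code so far, and any build or test
// errors, and returns a new version of the controller source.
func generateController(apiDefinition, priorCode, buildErrors string) string {
	// ... call a model API here ...
	return priorCode
}

// buildAndTest compiles and tests the generated controller, returning the
// combined output so it can be fed back to the model.
func buildAndTest() (string, error) {
	out, err := exec.Command("go", "test", "./...").CombinedOutput()
	return string(out), err
}

func main() {
	apiDefinition := "..." // the custom resource's API definition
	code, buildErrors := "", ""

	for i := 0; i < 10; i++ {
		// 1. Prompt the model to implement (or fix) the controller.
		code = generateController(apiDefinition, code, buildErrors)

		// 2. Merge the generated code into the working tree.
		if err := os.WriteFile("controller.go", []byte(code), 0o644); err != nil {
			panic(err)
		}

		// 3. Try to build and test it; stop when everything passes.
		out, err := buildAndTest()
		if err == nil {
			fmt.Println("controller builds and passes its tests")
			return
		}

		// 4. Otherwise collect the errors, feed them back, and repeat.
		buildErrors = out
	}
	fmt.Println("gave up: code still failing after 10 iterations")
}
```

The key design point is that the loop never asks the model to be right on the first try; the build and test errors are the feedback signal that drives each refinement.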
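And here is a hypothetical, kubebuilder-style Go sketch of what a vector database API definition like the one on the slide might look like. The type and field names here are illustrative assumptions, not the ones from the talk; the point is that the spec exposes only high-level knobs.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// VectorDatabaseSpec exposes only the knobs application engineers should
// have to think about; lower-level details (storage classes, sharding,
// index tuning) stay with the platform team's generated controller.
type VectorDatabaseSpec struct {
	// Dimensions is the dimensionality of the vectors to be stored.
	Dimensions int32 `json:"dimensions"`
	// Replicas is the number of database replicas to run.
	Replicas int32 `json:"replicas,omitempty"`
	// StorageGB is the storage requested for the index.
	StorageGB int32 `json:"storageGB,omitempty"`
}

// VectorDatabaseStatus reports whether the database is ready to use.
type VectorDatabaseStatus struct {
	Ready    bool   `json:"ready"`
	Endpoint string `json:"endpoint,omitempty"`
}

// VectorDatabase is the custom resource the platform exposes; the pilot's
// job is to generate the controller that reconciles it.
type VectorDatabase struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   VectorDatabaseSpec   `json:"spec,omitempty"`
	Status VectorDatabaseStatus `json:"status,omitempty"`
}
```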
Let's switch gears and talk about operating the platform. The reality is that if we deploy any application on the platform, we're going to have problems, and we're going to have to debug and troubleshoot them. That's typically an iterative and tedious process of running various CLI commands, like kubectl, to try to identify the root cause and then resolve the problem. So we were really excited to see if we could solve that problem using AI.

One of the problems we hear from a lot of platform teams is that they're overburdened trying to support their users. Here's a typical example. A software engineer deploys their application, tries to access the endpoint, and gets an error. They complain to the platform team, who investigate, and upon investigating realize that it's user error: in this case, the team misconfigured their service selector, and it doesn't match their pods. We were very excited to see if we could solve this with AI, using a technique called automatic scientific debugging, which we've illustrated here.

Here's how it works. We ask the model to come up with a hypothesis for the problem, and as part of that, we ask it to tell us how we could collect an observation that would confirm or refute the hypothesis; this will typically be a CLI command. We then execute that CLI command to collect the observation, feed it back to the AI, and ask it to evaluate its original hypothesis in light of this observation. The AI then either suggests a new hypothesis to refine the diagnosis, or draws a conclusion if it's able to. A sketch of this loop appears below.

Here's how that plays out for the previous network-accessibility problem. The first hypothesis the model comes up with is that the service is misconfigured, and that we should run kubectl describe to test it. When we feed that observation into the AI, it concludes that the service is indeed misconfigured, because there are no endpoints listed. Next, the AI generates a new hypothesis: there's a problem with the pods, and we should run kubectl get pods to debug it. When we feed that observation back in, the model correctly concludes that there's a problem because no pods match the label selector from the service.

We're really excited about this, because a major problem with self-service is that application engineers don't know how to troubleshoot their own problems; that requires a lot of low-level knowledge they don't have and aren't supposed to be exposed to. We think this demonstrates that you can solve that with AI.

So if you're excited about pilots for platform engineering, we invite you to come explore this with us. Come talk to us here at KubeCon, or sign up using the QR code. Thank you very much.
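Here is a minimal Go sketch of the hypothesis-observe-evaluate loop described above. The `hypothesis` struct, the `askModel` helper, and the five-round cap are assumptions standing in for a real LLM call and its response parsing; only the kubectl commands the model might suggest (describe, get pods) come from the talk.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hypothesis pairs a suspected root cause with a CLI command that would
// confirm or refute it, or carries a final conclusion.
type hypothesis struct {
	Cause     string // e.g. "the service selector matches no pods"
	Command   string // e.g. "kubectl describe service my-svc" (hypothetical)
	Concluded bool   // true once the model is confident in a root cause
}

// askModel is a hypothetical stand-in for an LLM call: given the problem
// and the hypothesis/observation history so far, it returns either a new
// hypothesis with a command to test it, or a conclusion.
func askModel(problem string, history []string) hypothesis {
	// ... prompt a model and parse its reply ...
	return hypothesis{Concluded: true, Cause: "unknown"}
}

func main() {
	problem := "requests to the service's endpoint fail"
	var history []string

	for i := 0; i < 5; i++ {
		// 1. Ask the model for a hypothesis plus a command whose output
		//    would confirm or refute it.
		h := askModel(problem, history)
		if h.Concluded {
			fmt.Println("root cause:", h.Cause)
			return
		}

		// 2. Execute the command to collect the observation. In practice
		//    this should be restricted to read-only verbs such as
		//    "get" and "describe".
		args := strings.Fields(h.Command)
		out, _ := exec.Command(args[0], args[1:]...).CombinedOutput()

		// 3. Feed the observation back so the model can confirm, refute,
		//    or refine its hypothesis on the next round.
		history = append(history,
			"hypothesis: "+h.Cause+"\nobservation:\n"+string(out))
	}
	fmt.Println("no conclusion after 5 rounds")
}
```

On the selector-mismatch example from the talk, such a loop would first suggest describing the service (observing no endpoints), then listing the pods (observing none matching the selector), and conclude from those two observations.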