Welcome. This presentation is titled Unlocking New Platform Experiences with Open Interfaces, and I'm here with Thomas to talk a little bit about different tools that you can use if you're building platforms: how do you combine them together, and how can you face some interesting challenges? Let's get started. Let's go right into it; we will not waste time. So, challenges that you will face when you're building platforms, or dealing with a bunch of teams working with Kubernetes clusters and infrastructure and all that stuff. Let's start with the first challenge. How do you get people in new teams up to speed faster with all these tools? It gets complicated because, again, the amount of tools that you will need is pretty large, and each tool has been designed with different use cases in mind, so you need to learn all of them in order to be effective. So we need to do something about it, right? The second challenge is that you're building distributed systems, so how can you reduce the cognitive load on developers building these more complicated systems that need to be resilient from the get-go and that need to scale in some way or another? Because your business will do well, so you need to be able to scale up and down. And finally, how do you run and operate this in your production environments? How do you do promotion across environments, and how do you observe your applications and figure out what's going on when things are not working as expected? Let's start with a very simple application. I presented this application to developers earlier this year with the Dagger folks. I don't know if you know them, but what we showed there is that even from very, very simple architectures, things get complicated really, really fast.
So this is an application that was published by the Docker folks to show how to containerize applications using different languages and different infrastructure, right? A simple application to cast votes and then see the results once the data is aggregated by the worker service. In this case, we're using Java, C#, and Go, so a very polyglot environment. All of these things will need to be containerized and deployed into a Kubernetes cluster. But then you also have the infrastructure layer that you need to take care of, and we'll take a look into how to deal with that as well. Let's take a look at the application. The application is pretty simple and it looks like this, right? You can cast votes and you can actually see the results changing. So pretty simple stuff, very basic. Again, we want to show a simple application working, and also the architecture around that application. We'll use it to exemplify the challenges that we mentioned at the beginning: the onboarding side, the distributed system patterns and tools that you can use around that space, and then the operations, you know, running in production. So my name is Mauricio Salatino. I'm a software engineer. I work for a company called Diagrid, where we are doing, you know, Dapr, and I'm also working on the Knative project. I wrote a book titled Platform Engineering on Kubernetes. I was giving some copies away for the book club, so feel free to check it out. I'm involved with different open source projects, in the CNCF space and outside too. So if you're interested in these topics, please check the book out. Yeah, and I'm Thomas Vitale. I work at Systematic, a Danish software company. I'm a software engineer and CNCF ambassador. I'm really passionate about anything cloud native or Java related, and I combined these passions of mine and wrote a book recently.
It's called Cloud Native Spring in Action. I'm also a big supporter of open source technologies, contributing as much as I can, both in the cloud native ecosystem and in the Java space. So, let's get started and analyze a bit more the challenges that we face, in particular on the developer side when onboarding a new project, because there's always a risk of introducing more complexity. So, for example, how do development teams bootstrap a new project? How much time will it take to go from initial idea to some prototype? We need to consider that. How much time are they going to spend before they are able to write the first line of code implementing business logic? How much boilerplate is there to deal with during the onboarding process? Yes, good question. And then an important question the platform team should ask themselves is: do we want to enforce Kubernetes for local development? And if we do, how are we going to set it up? Will it be a local Kubernetes cluster running on the developer machine? Will it be a remote cluster that each developer gets access to? And how about multi-tenancy? We need to consider this part, because it will have a huge impact on the developer experience. And then, of course, we need to containerize applications at some point. How are we going to do that? Are we going to use some Dockerfiles? Or are we going to control that centrally from the platform team, in case we need to patch some security vulnerabilities, for example? There are challenges there as well. Next step, we need to configure the deployment, and it will be different on the local environment, especially if we use Kubernetes also locally, and in production it will be, again, very different. How much detail from Kubernetes are we exposing to developers? That's another important challenge we need to face when we design this end-to-end experience on top of Kubernetes.
And finally, when we put all these pieces of the puzzle together, we should evaluate how much cognitive load we are adding to the developers that are using the platform. If it's too much, then we have to iterate on our design and start over, and try to fix those areas that are badly affecting the developer experience. Yeah, it's kind of funny, because we are here, we all know Kubernetes, right? And we all went through that learning curve where we just understand how it works and then we can use it. But think about new people coming fresh out of college and all the stuff that they need to learn. It's kind of insane. So thinking about how to reduce that is quite important. Think about our very simple application: it's just three simple services in a distributed system. How much work will be required from developers to run this locally? Yeah. So that's one of the questions we'll try to answer today. So the next challenge. The next one is a little bit closer to developers, and also to infra, actually. It's: how do you decouple infrastructure from applications? The main idea here is that no matter what the application looks like, it will always be storing data or exchanging events. And in this case, we are storing data in two different components, Redis and PostgreSQL. Both require clients or drivers to be able to connect from the application, right? Imagine: in this case, we have the vote application in Java and the worker application in C#. They are both using a Redis client, but those are two different versions of the Redis client, written in different languages, right? The fact that we need to connect to Redis basically means that we are coupling these components together in a way.
Like now, we need to make sure that the Redis client works with the version of the Redis instance that is running, and the Redis client in the worker, written in a different language, also needs to work with that specific version. If we want to upgrade Redis, maybe we will need to re-release these two applications, and then the lifecycle of these things gets much more complicated. The same happens when you configure the clients themselves to connect to an instance: configuring the Java client versus the C# client might be different. The defaults might be different, so these applications might behave differently when things go wrong. If the Redis instance is misbehaving for some reason, or the Redis clients are configured in different ways, they will behave differently, and then debugging and troubleshooting that takes a lot of time. Most of the time, it's developers' time, right? Yeah, exactly. The next thing around building these kinds of architectures: we already mentioned the dimension of versions in the infrastructure, but there's also where that infrastructure lives. Where are these instances running? Are they remotely located? Are they managed services? How do I access them when I'm just doing my development tasks? Do I have Docker Compose to run them? And what happens if my Docker Compose has different versions of these components than the ones that I'm using in production? Then we are introducing all these mismatches that we need to deal with at some point, right? So, yeah, in general, even for simple applications, we are running infrastructure; applications don't run alone. It's not our code alone. We need to run other components. There is coupling between the environment where these components are running and the application code.
And again, same versions, using clients that are written in different languages, can be the main cause of disruptions and troubleshooting issues that people will need to spend a lot of time dealing with. Event-driven scenarios: this is very common if you're building distributed systems, again, exchanging events across systems. It's a pretty common pattern, and if you're building platforms, you actually need to enable developers with certain tools to enable these scenarios. When you build event-driven applications, the idea is that you can extend the overall application functionality by emitting events and consuming events from different services in a very decoupled way. But as you can see here, we also need a RabbitMQ client, using RabbitMQ to emit events every time that we cast a vote. And that, again, just takes us to that scenario where we are coupling all the producers and consumers, in a way, because they all need to have the RabbitMQ client configured in the same way and upgraded if we are upgrading the RabbitMQ instance. So, again, it's not a very, very strong coupling, but it's a coupling in the end, and if you want to upgrade RabbitMQ, you will need to release all these services. Sometimes there are cases where you don't have a client available for a piece of infrastructure, and then you just need to start rewriting services and all that kind of stuff. Event-driven patterns and interactions are pretty common in the cloud-native space, and I think that we should basically help teams to build more applications like that. But, again, there is some coupling there between producers and consumers, and misconfigurations will happen.
One thing that I want to mention about message brokers and that event-driven space is that, again, when I think about the cognitive load of people learning how to change our applications, this simple application should be pretty easy to change by someone who is starting with software engineering. But it's actually not, if you are using something like Kafka, where you need to learn how Kafka works, how you need to configure it, and how to use it. The APIs are not as simple as just publish a message, consume a message. They are around that space, but it's not something that you will learn in a single day. And finally, complex service orchestrations. The more complex your application is, the more you will need to orchestrate things against different services, and this is not just calling three services in a sequence. Maybe I need to call an internal service that requires certain certificates or credentials, and then I need to wait for a person to tell me that I can proceed with something, and that person might be on holiday, so I just need to wait for that to happen for a long period of time. And then maybe I need to connect to an external system that charges me for every transaction that I do, so I don't want to be spamming that service and spending a lot of money doing so. Those complex orchestrations are usually hard, and you need to start figuring out which libraries will help you deal with those kinds of things: you know, retry libraries in different languages that, again, can be configured differently. And how are you going to handle errors consistently across different languages? That's also a common problem that I've seen. So, yep, the same thing that I've just said: it's not as simple as calling a system. Sometimes we need custom retries, circuit breakers, or domain-specific hooks where you can inject specific logic in an easy way.
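To make the retry idea above concrete, here is a minimal sketch of what a retry-with-backoff helper does. This is a hypothetical helper written for illustration, not code from the demo or from any specific retry library; every language's retry library implements some variant of this loop, which is exactly why defaults drifting apart across polyglot services causes inconsistent behavior.

```java
import java.util.function.Supplier;

// Hypothetical retry helper, for illustration only.
public class Retry {
    public static <T> T withBackoff(Supplier<T> call, int maxAttempts, long initialDelayMillis) {
        long delay = initialDelayMillis;
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();                      // happy path: return the result
            } catch (RuntimeException e) {
                last = e;                               // remember the last failure
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(delay);            // wait before retrying
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw new RuntimeException(ie);
                    }
                    delay *= 2;                         // exponential backoff
                }
            }
        }
        throw last;                                     // all attempts exhausted
    }
}
```

The point of the talk stands out here: whether `maxAttempts` defaults to 3 or 5, and whether the backoff is exponential or fixed, is exactly the kind of per-language configuration drift that makes two services behave differently when the same downstream dependency misbehaves.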
We need to deal with long-running things. These interactions, these orchestrations, can span different days, months, or years, and we need a way to compensate when things go wrong. Maybe we are calling a system that is making a debit from my bank account, and then, for some reason, we need to undo things. So we need the right mechanisms to implement these operations. Yeah, and once we are done architecting the distributed system, we need to go to production, of course. Otherwise, our application will not produce any value until it's there, right? So we need to address other challenges. How do we roll out new deployments safely? Do we have a backup option? Can we roll back deployments? Do we need something like progressive delivery, or maybe some canary deployments, to slowly test the new version of the application with just a small amount of users? And then, how about autoscaling? Because there's both a cost-optimization point of view, but also an environmentally friendly aspect, to scaling applications to zero, and that's actually pretty easy to do, scaling to zero. The challenging part is scaling from zero, because the platform needs to support that somehow. It needs to intercept the request and spin up a new instance of the application, and the application also needs to be designed in a way that can support processing the request immediately, without waiting too much time during startup before it's actually active and ready to process a new request coming in. And then, all the configuration specific to the application: where do we place that? Is it a developer responsibility, or is it something the platform can support? So, are we building an application-aware platform, or are we forcing applications to be platform-aware? Then, we need to infer the state of the application from its outputs; we can't go to production without observability, right?
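The compensation idea mentioned above (undoing a debit when a later step fails) can be sketched in a few lines. This is a plain-Java illustration of the saga-style pattern, not a Dapr API: every completed step registers an "undo" action, and if a later step fails, the undos run in reverse order.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative compensation tracker, not from any specific framework.
public class Compensations {
    private final Deque<Runnable> undos = new ArrayDeque<>();

    // Record the undo action for a step that just completed successfully.
    public void ran(Runnable undo) {
        undos.push(undo);
    }

    // Something failed: run every recorded undo, newest first.
    public void rollback() {
        while (!undos.isEmpty()) {
            undos.pop().run();
        }
    }
}
```

For example: step one debits an account and registers a credit as its compensation; if step two throws, calling `rollback()` restores the balance. Workflow engines add durability and timers on top, but this is the core mechanism.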
So we need to have the right signals in place, collect them, and make them consumable in a way that can help troubleshooting and visualizing what is going on in the platform and in the application. And then this platform needs to be managed and operated somehow, so we need to consider that aspect as well. So, we're going to show some solutions for all this, but perhaps we can have a little fun now. Let's have some fun. Let's make it interactive so we don't fall asleep. Yeah. Let's do it. So, if you scan that with your phone, you are going to be using the app that we showed before. So, try to scan that and let's play a bit. We have some prizes at the end. I don't want to spoil it. Yes. Also, we need to settle the debate with the app to find out: is it more cats or more dogs? That's the question. I think we're going to lose this. I don't know. Keep voting, keep voting. Cats and dogs floating all over the place. It's really raining cats and dogs, right? There you go. That's insane. So, what's going on there? A lot of events. A lot of events going on, yeah, exactly. Oh, it's a tie. Ouch. It's very, very close. And you can keep voting, of course. So, this is nice. Yes, it is. So, basically, this application is running on Kubernetes, of course. It's running on Google Cloud. We made it public. And we are actually using a bunch of tools to solve some of the challenges that we mentioned before. Please don't close the application; you have, like, a hash on top, and we will use that later on in the presentation. But I just want to keep showing the things that are running on Google Cloud. Let's switch back to the presentation. Should we unfold what's going on under the hood? Yes. Let's talk about that. Let's talk a little bit about the tools that we are using. So, that's what the architecture looks like, right? Yeah.
That's a lot of things going on there. That's a very, very high-level view of the things that we are using. We're using a bunch of open-source projects, CNCF projects, to get this up and running. Again, it's running on Google Cloud. Let's break it down. We want to unlock new platform experiences, and the first challenge we mentioned was around onboarding new projects. So, first of all, we want to bootstrap a new project, and there are different ways of doing that, but we want to minimize the time from a development team getting a new idea to develop until they are able to write the first line of code implementing business logic. And, of course, a popular way of doing that is with Backstage software templates. So, we have a couple of Java application templates, already integrated with Dapr and based on the Spring Boot framework. A development team can go there, bootstrap a new application, and focus immediately on the business logic. Yeah. If you have the C# team or the Go team having similar templates, you can make it really, really easy, and you define some golden paths for the development teams, so we can accelerate the onboarding. Then we need to containerize these applications, and Dockerfiles have some downsides, so perhaps we want to use Cloud Native Buildpacks. From a platform perspective, we can centralize all the rules for how we containerize applications, and developers get a nice experience, whatever language the project uses, and you can make it part of your pipelines. Running on Kubernetes, you can use the Kubernetes-native implementation, which is called kpack, so you can actually have a centralized build service. And as a platform team, you can centralize, for example, rolling out new patch updates, because one of the capabilities of Cloud Native Buildpacks is being able to rebase the image.
So, without having to recompile or talk to all the development teams, you can centrally replace just the bottom layer of the image. Maybe there's a vulnerability in the operating system layer, and you can fix it centrally. But there are other ways of doing that, of course. Because if we say one option is not requiring Kubernetes to run locally for development teams, we still need to provide the integrations. We talked about Postgres, RabbitMQ; we have different integrations. So a great tool for dealing with that is Testcontainers. With Testcontainers, you have support for different languages again, because we really want to establish a polyglot design. Testcontainers supports Java, Go, Ruby, C#, and you can make the management of all these integrations and services that the application needs part of the application lifecycle, both at development time and for integration testing. Another tool that can help, if you're working with functions specifically, is Knative Functions, which actually combines different tools to provide an end-to-end experience. It has a bootstrapping capability, so instead of using Backstage, you can use Knative Functions pointing at a template and you get a new project bootstrapped. Under the hood, it also uses Cloud Native Buildpacks to containerize applications, and for deployment it uses Knative Serving. So you get one entry point to deal with the entire lifecycle of the application, which is pretty good for functions. And then we can also have a better experience when we work with pipelines, with, for example, Dagger, which lets you implement pipelines using normal programming languages. I wonder how many people were in the AppDeveloperCon session that we did the other day. Basically, we were showing how to automate this same application with some Dagger pipelines for local development experiences, so pipelines that you can run on your laptop and in CI. It was pretty interesting.
If you didn't see that presentation, just look for the recording; I think that was covered there. OK, so at this point, we have the onboarding phase done. Developers are building their application. But I guess we still need to address the challenge of how we deal with state in a distributed system. Dealing with state is important. I think this is kind of like the core of the cloud-native distributed-systems challenges. We spent a lot of time at the beginning of the presentation talking about this separation between infrastructure and application code, and I believe that this separation will help teams go faster. And for that, we are basically using APIs. That's what the Dapr project is about. The Dapr project gives you that separation of concerns between the infrastructure that you're using and the application code. From the application point of view, you have APIs to interact with the infrastructure. Do you want to send a message? Do you want to store some data? You use those APIs instead of interacting directly with Redis, PostgreSQL, or RabbitMQ. Having APIs, of course, as you might know, is really important also because then you can have different implementations. Which takes me to the second image, where, if we are running on Google Cloud, maybe we want to use Google Pub/Sub instead of RabbitMQ, because it's a managed service. So I don't need to run it myself; I can just use the managed service. If I can pay for it, I might use that. The same with Redis and Google Cloud Memorystore. It's the same API, so I can actually replace the implementations and move my application across environments without changing the code. And the Dapr project has different implementations for different providers, so you can use DynamoDB in AWS to store data instead of using PostgreSQL.
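The swap-the-backend idea described above can be sketched in plain Java. This is an illustration of the principle behind Dapr's state-store abstraction, with hypothetical names (it is not the Dapr SDK, which talks to a sidecar): the application codes against one interface, and the platform wires in the concrete backend per environment.

```java
import java.util.HashMap;
import java.util.Map;

// One interface the application depends on; backends are swappable.
interface StateStore {
    void save(String key, String value);
    String get(String key);
}

// Stand-in for a Redis-backed store in one environment...
class RedisLikeStore implements StateStore {
    private final Map<String, String> data = new HashMap<>();
    public void save(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
}

// ...and for a managed cloud store in another. The app never notices.
class ManagedCloudStore implements StateStore {
    private final Map<String, String> data = new HashMap<>();
    public void save(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
}

// Application code: no Redis client, no cloud SDK, just the interface.
class VoteApp {
    private final StateStore store;
    VoteApp(StateStore store) { this.store = store; }   // implementation injected
    void vote(String voter, String animal) { store.save(voter, animal); }
    String votedFor(String voter) { return store.get(voter); }
}
```

With Dapr, the "injection" happens outside the process entirely: the sidecar exposes the API, and a component configuration picks Redis, Memorystore, DynamoDB, and so on, so moving across environments really is a configuration change rather than a code change.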
Again, using managed services and having this API that allows your applications to move across environments without changes, I think it's a good thing to do. And one more thing: because we have APIs there, that gives us the possibility of tapping into observability of how our applications are accessing this infrastructure. So you can extract data and see it in Grafana, something that we will do later on. But when we talk about state and integrations, there are some very well-known cloud-native patterns that we can start relying on out of the box. Something that I mentioned: when you see cats and dogs flying around the screen, those are basically messages being sent to RabbitMQ, and the dashboard is consuming those messages. This is a very common thing to do, where you want to store some state and then emit an event. This is a very common thing that developers will do, and we need to make sure that it's easier for developers. But something very common in this scenario is that you might want to only send the message if you are guaranteed that the information was stored, in this case in Redis. The moment that I commit the information in Redis, I want to emit a message. If I cannot commit the information in Redis, I don't want to emit that message. We want to have some sort of control over that. And if you take a look into the Dapr APIs that I'm using in the application, we can quickly check what that looks like. So I have here the saveState API. These are basically the Dapr APIs, and I'm using the Dapr client to access them from a Java application. I don't know if you can see that. So basically what I'm doing here is calling the saveState operation, and then, if the messaging side is available, I will publish an event. That's what a developer will usually do. This doesn't give any kind of transactional guarantee.
It just waits for something to be saved and then sends the message. What I can do with Dapr is also use transactions. So I have a way to list operations that will be executed in the same transaction, no matter which infrastructure I'm using down below for messaging or for storing state. So it's building on functionality and common patterns that we know. And as you can see here, I just have a list of transactional operations that I want to execute. I want to create a new state, basically store the vote into persistent storage; in this case, it's Redis. And then I will execute that transaction against the configured state store that I'm using. Something that you are not seeing here is that I'm emitting an event. Because, again, the Dapr project gives you that functionality of implementing common patterns, what you can do there is configure this to happen at the infrastructure level. So instead of pushing the developer to codify in code when to send events, I can configure Dapr to say: every time that a state is stored, no matter the persistent store, I want to emit an event, into a queue in this case. And that's quite interesting with the Java application specifically, because we are using RabbitMQ, and RabbitMQ actually doesn't come with that kind of transactional support for a pattern like Saga, where we want to save some state and also publish a message. So we're not just getting a better experience with Dapr; actually, we managed to do something that is not possible with the plain RabbitMQ client in this case, and get a transactional guarantee. Exactly. And that takes us to the next slide, where that's what we want to achieve. And the next one, basically, is the name of the pattern. It's called the Outbox Pattern.
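The difference between "save, then hope the publish works" and the transactional behavior described above can be shown with a tiny in-memory sketch. This illustrates the guarantee the pattern provides; it is not the Dapr implementation (Dapr does this in the sidecar, against real state stores and brokers): the state change and the outgoing event either both happen or neither does.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative in-memory store with "save state + emit event" as one unit.
public class TransactionalStore {
    private final Map<String, String> state = new HashMap<>();
    final List<String> outbox = new ArrayList<>();   // events to be delivered

    public synchronized void saveAndPublish(String key, String value, String event) {
        if (key == null) {
            // The save is rejected, so no event is emitted either.
            throw new IllegalArgumentException("invalid state key");
        }
        // Only reached when the save is valid: both effects happen together.
        state.put(key, value);
        outbox.add(event);
    }

    public String get(String key) {
        return state.get(key);
    }
}
```

Contrast this with the naive sequence from the previous code sample: there, a crash between `saveState` and `publishEvent` leaves a vote stored but never announced, which is exactly the dual-write problem the outbox pattern exists to remove.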
And you can just look for it in the Dapr documentation, because it's pretty much that simple: you transactionally store some state, and it will automatically produce a message for you, without complicating the code with transactions and more complicated stuff. So that's one thing. The other thing is that once you can do these basic functionalities, like storing state and sending and consuming events, the next step is what we discussed before: more complex service orchestrations. For example: let's pick a winner between cats and dogs, and then ask the audience to see if the winner is around. But we don't want to be waiting there forever, so we can have a timer to say: if the winner is not in the audience, then we will just do something else. If the winner is in the audience, then we can give the winner a prize. And if the winner doesn't actually come to pick up the prize, then we might need to pick another winner, because we have an uncooperative winner. So yeah. And then, finally, we need to coordinate with an external service, for example, saving the winner into the Hall of Fame, which might be stored in a different cloud provider like Azure, right? We want to do all that complex coordination, and we can do that, again, with Dapr Workflows, which is one of the newest additions to the framework and gives you a programmatic way to basically codify those interactions. I don't have time to show all the implementation. Again, this is Java, but it can be implemented in any language, using common APIs. And we have here a workflow calling activities, in this case a pick-a-winner activity, and then waiting for events. You can see how easy it is to say: I'm waiting for this event, but with a timer for five minutes, right? If it's not coming in five minutes, then I have a way to say: well, now I need to go and do something else, just do plan B, for example, or whatever you want to do.
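The "wait for an event, but with a timer" step described above can be sketched in plain Java with a future and a timeout. This is an illustration of the control flow, not the Dapr Workflow API (which additionally makes this durable across restarts): if the external event doesn't arrive in time, fall back to plan B.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative "wait for external event or time out" step.
public class AwaitWinner {
    public static String awaitOrPlanB(CompletableFuture<String> winnerShowedUp,
                                      long timeoutMillis, String planB) {
        try {
            // Block until the event arrives or the timer fires.
            return winnerShowedUp.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return planB;                    // timer fired first: pick another winner
        } catch (Exception e) {
            throw new RuntimeException(e);   // interrupted or failed upstream
        }
    }
}
```

A workflow engine gives you the same shape of code but with durability: the timer can be five minutes or five days, the process can restart in between, and the orchestration resumes where it left off.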
Again, very friendly for developers, because they will have an API to create these complex interactions. Let me switch back to the demo here. There you go. Yeah, so let's talk about how we can solve some challenges when going to production. First of all, we want to autoscale our application. All of you were voting, so we're going to see in a minute if we actually needed to autoscale the application, or if one replica was enough to sustain all the voting. We're using Knative Serving for doing that, so we can do autoscaling, scaling to zero, and, most importantly, scaling from zero. And then we have applications that, of course, need to be ready to run in a serverless environment. So, for example, with Java, we can use GraalVM native compilation in order to get instant startup time and reduced memory consumption. Then we can use KEDA, because Knative Serving provides autoscaling for HTTP-based requests; if we want to do the same for event-driven calls or transactions, then we can use KEDA. We can pair them together ourselves, or we can even use a higher-level project like OpenFunction that brings them together and provides an overall function runtime and lifecycle management. If you don't know about OpenFunction, it's a very interesting project that basically combines Knative Serving, KEDA, Dapr, buildpacks, and a bunch of other things just to provide that functions experience. If you are really into functions, you should take a look. It's not easy, because, again, it's combining a lot of projects together, so you need to understand how it works, but I think it's a pretty good thing to look into. Yeah. And then we have the platform. In this case, we use Carvel to package each capability as an OCI artifact, and then everything is bundled as one big package that can be installed in any environment with one command.
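The Knative Serving autoscaling decision mentioned above can be reduced to a back-of-the-envelope formula: desired replicas is roughly the observed concurrency divided by the per-replica concurrency target, rounded up. This is a simplified sketch for intuition only; real Knative also does windowed averaging, panic mode, and scale bounds.

```java
// Simplified concurrency-based autoscaling math, for intuition only.
public class Autoscaler {
    public static int desiredReplicas(double observedConcurrency, double targetPerReplica) {
        if (observedConcurrency <= 0) {
            return 0;   // no traffic: scale to zero
        }
        // Enough replicas so each one stays at or below its concurrency target.
        return (int) Math.ceil(observedConcurrency / targetPerReplica);
    }
}
```

So with a target of 10 concurrent requests per replica, 7 in-flight requests need one pod, 25 need three, and zero traffic lets the service scale to zero, which is why the hard part is the activator path that catches the next request and scales back from zero.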
We have Flux for doing continuous deployment of all our application workloads and to manage all the different data services as well. And then OpenTelemetry, because observability is really, really important; we can't go to production without observability. OpenTelemetry provides unified APIs and a protocol with support across different languages, so again, we want to ensure this polyglot experience across the entire lifecycle of an application. And maybe we can have a look. Let's see what happens when you vote. I can take a picture. There you go. Thank you. So I'm really curious. All right, first of all, let's see about Knative. We have the vote service, and we had some concurrency here in the last, let's say, 30 minutes. So that's pretty nice. We actually didn't need to scale: we got more concurrency, as you can see in this area, but one pod was still enough. We actually did some load testing earlier, though. We used Ddosify to, well, DDoS our own application, and that was really cool. At some point, we had more than 1,000 pods, and we got the cluster autoscaling as well. So that was pretty neat. Well, that was just because we did the load test, I guess. There you go. Yes. And then, of course, we are using Dapr, so we are abstracting all these APIs, but under the hood we still have a concrete implementation. For the message broker, we are using RabbitMQ. So let's see if it's actually using RabbitMQ, right? I can go and check in RabbitMQ: we have a dashboard here, and we can see messages incoming and outgoing from RabbitMQ. So from a developer perspective, I'm only dealing with the Dapr client and sending and consuming messages in a neutral way from the application, and then, under the hood, the platform team would pick the implementation for that type of service underlying Dapr.
It could be RabbitMQ. It could be Kafka. And for me, it's actually convenient, because I know RabbitMQ. I'm familiar with it. I can run it. But then if you switched to Kafka, maybe I would have problems. So actually, using Dapr helps me: I don't have to learn all the details of how Kafka works internally. Yeah. And I think that this is pretty important, at least from a platform engineering perspective: we have observability at all the layers, right? At the application layer, looking at how the application is using infrastructure in a generic way, but also at the infrastructure layer, with all the dashboards for the specific tools that we are using. So it might be time to wrap up. Yeah. So yeah, I think the main thing that we wanted to show in this presentation is that APIs are really important. If you are building platforms, choosing the right APIs is really a good way to protect your platform investments, right? You need to figure out where you need to wrap everything up with your own domain-specific interfaces. And when you are doing that, looking into standards and open source projects will help you a lot, right? People in the open source communities, here in the CNCF, are spending a lot of time solving common challenges across the entire industry, and those standards and APIs are being created. So if you're interested in joining these movements, please join the specific working groups. We need to speed up the onboarding processes that we have nowadays with all these tools and this whole ecosystem. So always aim for reducing cognitive load and try to improve the integrations between the tools. We have used a lot of tools here, and I think that because we knew the tools, we were efficient enough. But for people that don't know those tools, you need to make the process simpler. Look for out-of-the-box cloud-native patterns, right?
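The broker swap described here is a platform-side change only: the application keeps calling the generic Dapr pub/sub API, and the platform team edits a Dapr Component. A sketch of both variants, with illustrative connection details:

```yaml
# Hypothetical Dapr pub/sub component backed by RabbitMQ
# (credentials and hostnames are placeholder assumptions).
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.rabbitmq
  version: v1
  metadata:
    - name: connectionString
      value: "amqp://guest:guest@rabbitmq:5672"
---
# Swapping to Kafka: same component name, so the application code
# using the Dapr client does not change at all.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    - name: brokers
      value: "kafka:9092"
    - name: consumerGroup
      value: "worker"
    - name: authType
      value: "none"
```

Because both components are registered under the name `pubsub`, the developer's publish and subscribe calls are identical in either case; only the `spec.type` and its broker-specific metadata differ.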
Again, if you're building very complex distributed applications, you need to have the right tools in place so you don't push developers to solve these common challenges themselves. And finally, smooth operations: I guess that means having observability all over the place, so you can troubleshoot when things go wrong. And I don't know if you want to add something else there. Yeah, so I think we tested a few things from the distributed system. We tested storing state and reading state. We also tested exchanging messages, but we didn't test the service orchestration workflow. Maybe we should test that. Maybe we can test that. And I have two books here. Let's see how the cats and dogs are doing. Yeah, you have the code on your phone if you kept the window open. Yeah, so if you have it open there, I can see more votes coming. But what I'm going to do here, if it works, there is a button there just to choose a winner. I don't know the results. What are the results? I can't even see my mouse. Yeah, it's a lot of cats and dogs. Dogs are winning. The counter is not working. Apologies for that. I thought it was working. It's not. But look at that. That's pretty wild. OK. I don't know if we'll be able to settle this. No, I don't think we can settle it. No, we cannot settle that. So let's start with cats. I will give you that. So let's pick a winner. This is where the demo can go really, really wrong. But, drum roll, there you go. Let's see. Let's wait. There you go. We have a winner. So is the winner in the audience? Let's see. Is the winner in the audience? Yeah, there it is. We have a winner. Where is it? Yes, yes. You need to come up to pick your book. So let's mark it: the winner is in the audience. The winner gets a book. He's coming. I will say that he got the book, right? Yes, if he's coming, that's good. He will get the book. Yeah, let's pick a second one. And we can do the dogs one. So drum roll again. There you go. There you go. Another winner. It's in the audience.
Is it in the audience? There we have it. There you go. It kind of worked. I'm amazed. And that was the service orchestration. Thank you very much, guys. So yes, books. And that's it? Yes. Let's close this up. There you go. Yeah, thank you very much for joining. Mauricio's talk tomorrow will be the Knative Functions talk, so if you're interested in learning more about that, feel free to join, and we'll be around for questions. And with the QR code, you can check out the source code as well. Thank you. Have a great day. Good stuff.