All right, welcome to our talk, "Kubernetes Wonderland: Adventures in Platform Building." Today we'll be talking about platforms: why we need them, and how we can provide developers and other teams with what they need to do their work. We're also going to look at how different teams share not only practices but also tooling, and how we can go beyond automation to provide different experiences to different teams. So join us as we go down this rabbit hole into Platform Wonderland.

First, I'll introduce myself. My name is Alexa Griffith, and I'm a software engineer at Bloomberg. I work on the inference team, which handles model deployment for machine learning workflows. I work a lot with the open source project KServe, which covers this inference stage. My manager, Dan Sun, is the co-founder of KServe, and my team is heavily involved in the KServe open source community.

My name is Mauricio Salatino, also known as Salaboy. I work for a company called Diagrid, where we help people run cloud-native applications in production with the right tools. I'm also heavily involved in two different communities: the Knative community, which we will be talking about today, and the Dapr community, where we will also be showcasing a couple of things. I'm super happy to be here. It's my first time in China, and it's a pleasure. I've been seeing a lot about platform engineering during these last two days, so I think I'm in the right place, and we will be talking about platforms, tools, and the CNCF ecosystem.

I've been a little bit busy lately trying to finish this book about platform engineering on top of Kubernetes. I'm super happy that it's almost done; it's being printed right now, so I will have a physical copy with me pretty soon. I'm even happier because it's going to be translated into Chinese in early 2024 via Epubit, so you can check it out there. This book basically takes you on a platform-building journey. It's focused mostly on developers, not on data scientists and machine learning. It covers a bunch of different CNCF tools, like Dapr, of course, and Knative, but also Argo CD, Crossplane, and Tekton, and shows how you combine them to actually build a platform on top of Kubernetes. The book covers the reasons why you should be looking into different tools, and even though I mention these specific tools, it's written in a way that lets you swap them out for different options if you want to. All the hands-on exercises are hosted in a repository called platforms-on-k8s, which contains more than 20 hands-on tutorials that you can follow on your own laptop. If you're interested, check it out. And I'm super happy to say that a community member translated all these tutorials into Chinese in less than a week, which is unbelievable to me. Amazing. So a big clap for him, because I know he's in the room. There he is. Fantastic contribution. This is all about the power of community: we can translate things, we can reuse things, and I'm more than happy with that. Thank you very much for those contributions.
When you go on this journey of adopting Kubernetes and then building platforms on top of it, it actually feels like an adventure, and you can have a complicated time and face a lot of challenges if you aren't pragmatic and don't have a process around it. One of the main things I see companies struggling with is scaling up their teams' expertise. If you adopt Kubernetes and expect every developer in your organization to learn Kubernetes in order to run the software they're building, it will get complicated. If you're a small company, maybe you can achieve that, but in a large company you will struggle. Sharing knowledge is pretty hard, and waiting for people to learn things that maybe they don't even want to learn is a complicated challenge to tackle. If you are adding tools, and we will be mentioning a bunch of tools in this presentation, you will need to spend time learning those specific tools, and some of them are too low-level for people who are building business applications. So if, as a platform team or a platform initiative, we can avoid making the lives of our teams more complicated, we are moving in the right direction. It also happens a lot that companies evaluating different projects or products to solve specific challenges face decision paralysis: there are too many options for doing a single thing, and they get blocked deciding which one to use for their platform. So you need a process in place to evaluate tools, look at what they provide, and look at how healthy the communities around them are in order to make a decision.

You will see that when I talk about platforms, I love to focus on APIs, self-service APIs, because I believe the platform movement around Kubernetes and the cloud-native space should be more and more focused on APIs, and on how we can use the things we know about APIs to build better platforms. When I talk about platforms in the book, I try to stay very practical. I've seen presentations here that describe the platform white paper from the CNCF, which is pretty good, but I try to go into very hands-on experiences. I want to show people how these platforms work and how these tools work. I tend to use a very simple diagram like this, where we have a platform that might be running in one cluster or in multiple clusters, hosting a lot of different tools that are configured specifically to provide behaviors for the platform. Then there is a platform API (and I will keep saying APIs during this presentation, platform APIs in particular) that abstracts away all the complexity and all the selection of tools we are running in these clusters. We use these APIs to hide which tools we are using, so that people consuming the platform APIs don't need to know anything about those tools. We can enable them to learn some bits, but we use the APIs to define the contracts between the teams and the things that the platform will do. The example I use in the book, linked there in the chapter 6 tutorial, describes a topology where the platform can provision new development environments for different teams.
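To make that self-service idea concrete, here is a minimal sketch of what such a platform API could look like. This is hypothetical: the API group, kind, and fields are invented for illustration and are not the exact resource used in the book.

```yaml
# Hypothetical platform API: the team asks for an environment,
# not for clusters, namespaces, or Helm charts.
apiVersion: platform.example.com/v1alpha1
kind: Environment
metadata:
  name: team-a-dev
spec:
  type: development   # the platform decides which cluster and tools back this
  installInfra: true  # also provision the databases/brokers the app needs
```

The development team applies this one resource and gets connection details back; everything behind it, which cloud, which tools, stays a platform team decision.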
I use different open source tools to do that, but the important thing here is that the application development team requesting a new development environment doesn't need to know anything about Kubernetes or about the tools being used to provision it. That can happen on different cloud providers, locally, or in the same cluster; it doesn't really matter where that cluster is. The only thing the development team is interested in is getting a new development environment and being able to connect to it so they can do their work, because they are just modifying an application in that environment. If we can go one step further and also install the application or service the developer is trying to modify, plus the infrastructure that service needs, even better. We are facilitating the task: we provide an API, a dashboard, or a tool where they can go and say, "give me a new development environment," the platform provisions it for them, and they connect to it. If you take a look at that tutorial, you will see how to implement all these steps in your own cluster, locally on your machine if you want. Remember, it's just an example, but it highlights how to combine the tools and how to provide these experiences for different teams that want to achieve different things. You could be implementing a platform not for development purposes but for data scientists, for example, and that's something Alexa will cover in a bit.

When you start building platforms, as I mentioned before, I put a lot of focus on APIs and contracts between different teams, and there are a bunch of common patterns that companies implement to make things easier. Kubernetes is all about deploying and running workloads, but on top of Kubernetes you can build, for example, containers-as-a-service platforms, where you give a container to the platform and the platform runs it for you. You don't need to worry about clusters, nodes, machines, or VMs; the only thing you need to care about is creating a container and handing it to the platform. There are cloud provider offerings, like Google Cloud Run or AWS App Runner, that do basically that: you give them the container and they run it for you, and Kubernetes is abstracted away. You can go one step further and build functions-as-a-service platforms, where the platform takes care of packaging your function source code and running it on the compute you have available. Alibaba Function Compute, Google Cloud Functions, or AWS Lambda are pretty common examples. But when you are in Kubernetes land, you need to build this yourself, and for that I usually recommend choosing existing open source tools instead of building your own abstraction layers and mechanisms. When you go in the direction of containers-as-a-service or functions-as-a-service, you are solving the runtime side of things: how we run our software, and maybe what programming model we want to use to create these containers and connect them together. But something we are not solving is how this is going to interact with the entire environment.
If we have databases or message brokers, or if we need access to secrets, configuration files, or documents, how do we connect our functions or containers to all this infrastructure without being tied to that infrastructure itself? If we are running on Alibaba Cloud and we connect to a very specific database in Alibaba Cloud, how do we make our application a little bit more portable by using standards and open APIs?

Here is the same information in a different view. If you are adopting plain Kubernetes, you have all the choices on how to run your platform, and the interfaces will probably be Kubernetes resources. If you want to deploy things to Kubernetes, you will need to create a Deployment YAML file, a Service resource, and an Ingress resource to route traffic to that workload. At that level, you can do whatever you want; you can use any Kubernetes resource to implement your platform. But it will take teams a long time to understand all these concepts and apply them correctly. If you build a containers-as-a-service platform, then you ask for a container as the input, and you as the platform team create all those resources in the way that works best for your company and your workloads. If you go to functions-as-a-service, then the platform does more and the developers do less, but you as the platform team are in charge of implementing all the mechanisms to take the function's code, build it, deploy it, and run it at scale.

So let's take a look at some tools, and I want to start with Knative, because it gives you that contract that goes one level above standard Kubernetes resources. How many people here know Knative? I've seen it several times at this conference, which is pretty good. Knative is one of those things that becomes fundamental if you're building a platform, because it provides a lot of functionality out of the box. It gives you a standard contract for a containers-as-a-service experience, where the only thing you provide is the container image you want to run. You create a very simple YAML file, you send it to a Kubernetes cluster where Knative is installed, and Knative runs the container for you and gives you a URL so you can interact with the service out of the box. It's a pretty simple experience: you don't need Deployments, you don't need Services, you don't need Ingresses. You create this simple YAML file that basically specifies the container, you send it to Kubernetes, and Kubernetes gives you back a URL where you can start interacting with the service. Knative also implements scale-to-zero automatically, out of the box: if the service is not being used, Knative will downscale it to save money and use resources more efficiently. As you can see there, there's also a traffic section where you can specify traffic rules for more advanced traffic management capabilities.
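To make the contract concrete, a minimal Knative Service looks roughly like this. The name and image are placeholders, and the traffic block is the kind of rule I just mentioned:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                               # placeholder name
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/example/hello:v2   # the only required input: a container image
  traffic:
    - revisionName: hello-00001             # keep 90% of traffic on a previous revision
      percent: 90
    - latestRevision: true                  # send 10% to the newest revision
      percent: 10
```

You apply this with kubectl, and the URL comes back in the resource's status; Deployments, Services, and autoscaling are all created for you behind the scenes.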
Knative also manages to abstract away how complicated all the traffic management and networking is in order to achieve this, and that's done using another project called Istio. Kubernetes allows you to choose different networking layers, but at the same time, as we see here in the contract, you don't really need to know Istio, or even how the Kubernetes Deployment and Service resources and all these other things work, in order to have something running inside Kubernetes, or to do more advanced traffic splitting, A/B testing, or different release strategies. Istio provides a service mesh inside Kubernetes that brings mutual TLS, so security, and observability to your workloads, and it provides an advanced networking layer where you can define how traffic will be routed, maybe to different versions of the same application. That's pretty interesting, but the main point of this slide is to make sure you understand that by using Knative, you are abstracting away the complexity of learning Istio, and you are using Knative building blocks to implement whatever you're looking to implement.

On top of that, the Knative community is also working on Knative Functions, which is basically another abstraction on top of Knative itself that tries to remove the need for the developer to know about Kubernetes at all. You use the func CLI, a command-line tool that lets you create a function scaffold, and then you just go and add your business logic to it. Then, by just running func deploy, it uses a tool called Buildpacks to create the container image for that function and deploy it into the Kubernetes cluster where Knative is installed. For a developer using this tool, the experience we are aiming for is that they don't even need to understand Kubernetes or Knative: they create a function, they code in their IDE or editor, for Go in this case, and then they just run func deploy and the function is running inside the cluster. I strongly recommend that you watch the video "Knative Functions: from source to service," which shows what the experience is like with the latest version of this framework.

And I didn't want to let this pass, because I've seen a lot of OpenFunction here at this conference, and big kudos to that community, because they are basically building this functions-as-a-service experience on top of Kubernetes. They are gluing a lot of different tools together to provide it out of the box. You install OpenFunction and you get the entire framework for getting functions up and running in your Kubernetes clusters, without actually knowing about all the resources and all the more complicated things you would otherwise need to learn about Kubernetes. So kudos to this community, this is quite amazing, and I'm looking forward to contributing to it.
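To give an idea of that developer flow with the func CLI, it looks roughly like this. The generated func.yaml below is a simplified sketch; exact fields vary by version, and the registry is a placeholder:

```yaml
# Scaffold a Go function:  func create -l go hello
# Build and deploy it:     func deploy --registry docker.io/<your-user>
#
# func.yaml, generated in the project folder (simplified):
name: hello
runtime: go
registry: docker.io/your-user
image: docker.io/your-user/hello:latest
```

The developer only edits the function's source file; Buildpacks turns it into a container image, and Knative runs it and hands back a URL.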
Just to finish up a little bit on the development side of building platforms: one of the things you need to focus on in your journey of creating platforms is how you tackle the differences between environments. How do you build applications that can be moved across different environments, while reducing and minimizing the changes to your applications when you do that? Imagine that you have an application running in your development environment, and it's connecting to a database. In order for an application to connect to a database, you will need some kind of driver or client. If you want to connect to Kafka, a message broker, you will need the Kafka client for Go, or for Java, or for whatever language you're using. When you move that to your production environment, you need to make sure the driver you are including inside your service, container, or function is actually compatible with the managed database you are running on your cloud provider. That introduces a lot of friction between the things you have running in your development environment and the things you run in production, and it requires you to retest all the functionality to make sure the drivers are compatible. Also, when you are running on a cloud provider, you don't have much control over how that database gets updated, so you just need to keep testing.

One way of reducing this friction and providing a standardized experience across different environments is to introduce a standardized API layer: application-level APIs that allow you to access the environment and the things you have running there, like message brokers, databases, email services, documents, or whatever your application interacts with. Your application developers can interact with these APIs without worrying about the implementations that will be used in different environments. Building this standardized API layer inside your company is painful, it will take you a lot of time, and it's actually not adding any business value besides portability across environments and cloud providers. Thankfully, we have the Dapr project, which provides implementations of these APIs, so your applications can focus on consuming APIs instead of interacting directly with components in the infrastructure. If you take a look at the Dapr building block APIs, you will see that we provide APIs for storing and retrieving state, for emitting and consuming messages, and for consuming configuration and secrets. We now also provide abstractions for building workflows, as well as helpers and ways to declaratively define resiliency policies across service-to-service interactions. There are a bunch of blog posts linked down there (we will share the slides later) that take you through how this tool works with other Kubernetes tools, and there is a tutorial for this in the repository as well.

If you want to add Dapr to the experience, and you're already using Knative or normal Deployments, the only thing you need to do to give your application access to these APIs is add a couple of annotations to it. Then Dapr, which is installed in the Kubernetes cluster, will inject a sidecar that provides the implementation of these APIs. This is how it looks: your service at this point doesn't require any driver to connect to the infrastructure; it interacts with the Dapr sidecar using HTTP or gRPC requests. So your application developers can just use the tools they already know to make these requests and interact with the infrastructure.
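Concretely, the sidecar injection is driven by a couple of annotations, and the platform team wires up the actual infrastructure with Dapr Component resources. The app id, port, and Redis details below are placeholders for illustration:

```yaml
# On the workload (Deployment pod template or Knative Service template):
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "agenda-service"   # placeholder app id
  dapr.io/app-port: "8080"
---
# Platform-side wiring: a state store backed by Redis in this environment.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis                  # swap for a cloud key/value store in production
  version: v1
  metadata:
    - name: redisHost
      value: redis-master:6379
```

The application then talks to local endpoints like http://localhost:3500/v1.0/state/statestore over plain HTTP, with no Redis client among its dependencies; changing the Component is enough to move between environments.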
This gives you two benefits. One: the application developer only needs to know how to make HTTP or gRPC requests to this API, which is local and running close to the application. Two: the platform team can define how to connect and wire up all the infrastructure available to the application, so they can have different configurations for different environments and make sure the things the application needs are present wherever it runs. From the development point of view, I've actually seen these tools being used together; at this conference, looking at OpenFunction, I've seen them used in conjunction, so it makes a lot of sense to start thinking about which core components you add to your platforms. But when you look at data scientists, they are doing different things. We will be sharing some tools here, but it's a different challenge and a different adventure as well.

Exactly. So in the next part of our adventure, we're going to talk about machine learning on Kubernetes. Developing AI models with machine learning workflows can also benefit from a set of standard APIs, and developers and platform teams can use open source tooling to provide these and help speed up development. You may notice that we use some of the same underlying tools Mauricio mentioned, in addition to some abstractions that are specifically designed for machine learning workflow development.

First, I'm going to introduce the model development lifecycle, in case anyone is not familiar with it. A platform team needs to provide business-specific tools for their end users to be able to train and deploy models, and a lot of these steps are non-linear. Developers need to experiment with their data and explore it, they need to train their model and evaluate it, and finally they need to deploy that model to production and monitor it, and then it may start all over again. To serve this lifecycle, a data science platform team needs these types of offerings. For data access and exploration, you may use something like Jupyter Notebooks. For model training, you need, as a platform team, to support users trying out different ML frameworks like TensorFlow, PyTorch, DeepSpeed, and MPI. Then there should be tools for managing these experiments, like a UI and the ability to get metrics. Finally, there should be platform support for model deployment to production, which is also known as inference.

Yeah, and in this diagram here you can start noticing the parallelism, right? Developers will use IDEs and their usual tooling for developing applications, but then they will need to access credentials and secrets to connect to different infrastructure. And if we start thinking about how those applications are going to move to the production environment, maybe through a staging or testing environment, how can we enable teams to do these different release strategies? How can we enable developer teams to do A/B testing in the same way that data scientists test their models? I think that's important, and the same tools can be used there. Yeah, I think there's a lot of advantage in platform teams coming together and trying to use the same principles and the same underlying infrastructure to give users a better experience and to provide better support around those tools.
So first I'm going to talk a little bit about our training platform offerings. Our training platform offers secure integration with GitHub and storage, connected with either our abstraction of the Kubeflow training operator or a Jupyter Notebook. The training team chose two tools to offer our developers, one for experimentation and one for training, and we built managed support around them. Our managed Jupyter Notebooks allow users to explore and visualize their data and experiment with new model training ideas. We create managed notebooks that let users easily run Spark interactively, use a host of Python libraries, and access a variety of compute resources like CPU, GPU, or high-speed local storage. We also offer model training via a CRD abstraction of the Kubeflow training operator: a workload designed for model training frameworks like PyTorch, DeepSpeed, MPI, TensorFlow, and more. This is the only workload type on our data science platform where you can do distributed training for large models and access high-performance compute resources, and you can also integrate with other things we offer, like Argo orchestration. I want to note that by giving developers exactly two options here, we're able to build strong support and a strong offering around them, and within each option users can still choose among a host of libraries and frameworks. I say this to highlight how making a choice that limits what developers can use can actually be beneficial when you have strong support for the choices you make, which in turn opens up a lot of other options for them.

This is the full training workflow; it just has a few other components. We also enforce using Buildpacks, similar to what Mauricio mentioned for the application development lifecycle, to standardize and control the security of images. This ensures that we have a repeatable workload, as we use Buildpacks to freeze code and build our dependencies. We also provide Artifactory to store the built images, plus secure input and output model storage to use for these training jobs. It's important that a platform team creates standard solutions for every step of the development lifecycle.

I think it's important to mention here that Buildpacks will help you in that journey. If you haven't looked into that project, I strongly recommend you do so, because it takes away two things. The first is the need for writing Dockerfiles: developers can write Dockerfiles that are insecure and difficult to maintain, and they probably don't even have the time or focus to get the right Dockerfile in place. But it also gives the platform team a way to say, okay, this company has certain policies and requirements, and we will encode those into our own buildpack. We will transform source code into container images in a way that is managed by the platform team, instead of defined by every developer on every team.
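For reference, the kind of resource our CRD abstraction sits on top of, an upstream-style Kubeflow PyTorchJob, looks roughly like this. The image and replica counts are placeholders:

```yaml
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: train-example
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: registry.example.com/train:latest  # image produced by Buildpacks
              resources:
                limits:
                  nvidia.com/gpu: 1                     # high-performance compute
    Worker:
      replicas: 2                                       # scale out for distributed training
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: registry.example.com/train:latest
```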
But what we actually see in production is that it's quite complicated to keep an inference service running and serving. Maybe you have REST or gRPC and some type of load balancer receiving requests. You probably want to take the data in the requests and pre-process it, and take the output and post-process it, and you might want a feature store. You have the model input that goes to the service where the model is running, and you probably want to scale that service to handle the request load. You want to make sure you have a secure model store from which you can download the model and run it in the inference service. And among all this, of course, you want scalability across all these processes, you want observability at each step of the way, and you want it to be reproducible, portable, and definitely secure as well. These things require a lot of different components and expertise to set up. Yeah, that's not easy at all.

KServe is the standard API that we use to serve inference. KServe is a highly scalable, standards-based, cloud-native model inference platform, running on Kubernetes, for trusted AI, that encapsulates the complexity of deploying models to production. That's kind of a lot, so basically: KServe provides a simple way to deploy an inference service that contains all of these components. As the previous slide showed, there are a lot of configurations and resources that need to be set up to run an inference service in production. By offering KServe as our standard API for inference workloads, we abstract away many of the details and resources a developer would need to understand in order to have an inference service running in production. They don't need to understand how to set up scaling, traffic management, and much more. The user only needs a short YAML, which I'll show in a couple of slides, to get started, but KServe is highly configurable, so the user can choose to configure as much as they want or need. KServe has many out-of-the-box components, like scale-to-zero, metrics, and request-based autoscaling, and it allows developers to easily plug in many other features, like explainers, Kafka streaming, batching, and more.

One thing we keep talking about is standard APIs, and one thing that's really nice about KServe is that it has a standard open inference protocol. This means that no matter which serving runtime you use, among the many that it supports, the requests you use to talk to the service are standardized across every supported runtime, making it very easy to switch serving runtimes without any code changes. You can also create your own custom serving runtime that implements the open inference protocol, and no matter what, it will respond on the same endpoints.

So this is what a simple KServe service looks like. It's pretty simple, but I'll point out the kind: InferenceService. We have a storage URI, which says where to download the model from, and you can see we're using out-of-the-box PyTorch, declared as our model format. We will have two pods running: from this one YAML, it will spin up a transformer pod, which handles pre- and post-processing of the data, and the predictor pod, which hosts the model and serves the requests. One thing I want to note is that you apply this to the cluster and you get a URL back, just like Mauricio mentioned.
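That YAML looks roughly like this. The name, bucket, and transformer image are placeholders:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-model                      # placeholder name
spec:
  transformer:                        # optional pod for pre- and post-processing
    containers:
      - name: kserve-container
        image: registry.example.com/my-transformer:latest
  predictor:                          # pod that downloads and serves the model
    model:
      modelFormat:
        name: pytorch                 # out-of-the-box PyTorch support
      storageUri: gs://example-bucket/models/my-model
```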
And I think this is another great example of this abstraction level. In this case, KServe users creating this InferenceService don't need to know anything about Knative or Istio. It encapsulates all the knowledge you would otherwise need to install and configure Knative and Istio and create traffic rules for the model, putting the main user first, and in this case that user wants to serve models; that's why transformers and predictors are the concepts in this API. Yes. And Knative and Istio aren't strictly required for running KServe, but we run them in production, and they have a lot of good benefits and features for running in production, like autoscaling and traffic management, for example.

So as a platform team, in addition to the APIs we offer, and Mauricio touched on this a bit, we also offer standardized features like namespace isolation, resource utilization, and the ability to deploy to multiple environments, and these are standardized across our platform teams. Also, to note, our developer UI is our main product: it's where users go to develop and debug their services. With these standard APIs and standard features, it's way easier to build a unified user experience for the machine learning lifecycle.

Yeah, there are two things to highlight here. One is the tooling you will give your teams to use, maybe dashboards, maybe something like Backstage. I will repeat again: APIs come first. If you get your APIs right, then building UIs or dashboards on top for teams to consume will be easier. Focusing on APIs and on golden paths, which usually refers to the paved way to move your models or your applications from development to production, is extremely important, and then figuring out which teams are going to be interacting with these tools and APIs is also important. If you have data scientists, you will need a set of contracts, maybe using something like KServe, or maybe abstracting KServe away might be the answer. And if you have developers who cannot spend time learning about Kubernetes, how would you create a contract for them? How would you optimize the journey from writing code to making sure the platform team and the operations team can move those workloads into your production environment, in front of customers?

Because we are very practical people, we created a demo. We won't have enough time to show all of it, but it's a tutorial you can follow on GitHub, in a repository called kubecon-china-2023. The repository takes you on the journey of installing all these tools and making sure they all work together, so you can install an application that is using Knative and Dapr and connecting to a bunch of infrastructure, while at the same time using KServe to host an inference model that you can interact with. I have the demo running here on my computer, and again, I don't have a lot of time, but I wanted to run this simple command, so let's see if it works: getting the Knative services. I don't know if you can see that; the output is pretty big, but basically what I'm doing here is asking for a contract, in this case the Knative Service, which all the pieces of my application are using. I will just make it a little bit smaller so we can understand what this is.
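The command is just "kubectl get ksvc". Trimmed output looks along these lines; the service names and URLs here are illustrative, not the exact ones from the demo:

```
$ kubectl get ksvc
NAME                    URL                                                 READY
frontend                http://frontend.default.example.com                True
agenda-service          http://agenda-service.default.example.com          True
notifications-service   http://notifications-service.default.example.com   True
sentiment-model         http://sentiment-model.default.example.com         True
```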
So what I'm seeing here is that I'm running four Knative Services. I have the URL for accessing each of these services, including the front end, and at the same time, at the end here, I have a model that I can access through its URL. So as a developer, I can consume this inference model just by making an HTTP request to the standard APIs that Alexa mentioned before, and all the traffic routing, how I reach that model, is already solved by KServe. If I click here, you can see the application running in my browser, and this application is connecting to a bunch of infrastructure. If I list all the pods running for my application, you will see that I have my application services plus Kafka, PostgreSQL, and Redis. You will also notice that each of my application services is running multiple containers inside the same pod. One is the queue-proxy for Knative, for autoscaling. One of the services has been downscaled because it's not being used, so there is no pod running for it, which is kind of nice. And then there is another container running for Dapr, injecting those application-level APIs that the applications can use. If you look at the agenda service in the source code, or at the notifications service or the front-end service, you will see that these services are consuming events via Kafka, but they don't have any dependency on Kafka itself. And the same goes for Redis: the agenda service is storing key-value pairs in Redis, but it doesn't have any source-code dependency on it. So you can move this application across different environments, maybe connect it to a cloud provider's specific key-value store, without changing any of the application source code. You will also notice here... Yeah, the predictor pod also has two containers in it, and one of them is also the queue-proxy; same thing, it helps with autoscaling. Yeah, it will actually autoscale the model if it's getting a lot of HTTP requests, right? Yes.

So again, the same concepts. In this case, we are using the Knative Service resource to illustrate that different teams can use the same concepts and can all query them in the same way, which basically means that if you have any Kubernetes dashboard, you will be able to explore and browse all these resources.

And unfortunately, we are running out of time, so let's talk a little bit about the takeaways. I would say that one of the main takeaways from this session is, of course, focus on APIs. It's the second point on the slide, but I will start with it, because APIs are the thing you should be focusing on if you're building platforms. By having the right self-service APIs, you will enable teams to go faster and hide away the complexity of the tools you chose to implement the platform behaviors. And your platform team will need to use software development skills to actually build these abstractions and glue together tools that maybe were not designed to work together, but that you need to work together for your platform. Yeah, and I think you'll find that a lot of the same principles can be applied across different types of teams, between development and data science, for example.
And of course, adopting open source solutions requires expertise, and when you are choosing, if you can rely on open standards to decide which tool best fits, that will help you make the decision. Thank you very much. You can follow us on Twitter; if you have questions, we will be around, so feel free to reach out. Don't be shy, please. Yeah, and we'll post these slides, and we have some references; we will share the links. And again, if you have any questions, please feel free to reach out. Thank you. All right, we did it. We don't have time for questions here, but you can come up and ask us; we will be around. Cool. Thank you.