Thank you very much, we can begin. The title of our presentation is Building a Functions-as-a-Service Platform. I think Roland's introduction to the topic was great and perfect, because we don't have too much time to talk about the push to build platforms and how we see companies actually doing this in real life. So, to start with this presentation... yeah, the host already introduced us, so we can just move on to the topic of our presentation. First of all, I'm really happy to have Victor here again, because he's coming from a different community, but he has been working with Knative for a long time. I remember you doing presentations about Knative. Not as much as you, but yes. Kinda, kinda. But for pretty much the same amount of time. So in this presentation, we wanted to talk a little bit about the need for building platforms on top of Kubernetes, in this case more specifically functions-as-a-service platforms; the challenges of building these platforms nowadays; and the challenge that large organizations face when they need to train a lot of teams to understand how Kubernetes works, and how we can reduce that learning curve a little bit. Then we are going to do a short demo showing the platform interaction and how it should work. Yep. Yeah, so building platforms is, I feel, kind of the main focus of this year and the next, and, for the tenth time, right? We've been focused on building platforms, I think, since mainframes. But anyway, with Kubernetes things changed drastically, and building these kinds of platforms became much more feasible than it was before. And so what is happening right now is that platforms still have their place here.
But more and more companies are really focused on building their own platforms, tailored or made for what they have, whatever that something is. And that's where Kubernetes comes in, as an extensible API with a scheduler, reconciliation, drift detection, and all the good things that we have, right? And ultimately, the goal we are trying to accomplish is to reduce the learning curve that developers would normally need to go through in order to be self-sufficient. That is probably the closest explanation of why we are so focused on internal developer platforms: simplify the lifecycle of applications, so that application developers are self-sufficient. Now, I already talked about cognitive load, but essentially, in a nutshell, it's all about building new APIs that will self-serve the rest of the company, those application developers, which means that all the people on the right side, operations, SREs, DevOps, and so on, are now really focused on building that platform. That's it. And I would say that the main idea here is to make sure that the right teams have the right tools to do their work, right? And I think that's really important, without forcing developers to learn and understand how Kubernetes works, because for large organizations, providing that training and making sure that everybody understands how all these things work is pretty complicated. Just going back a little bit to this slide: we're going to be talking about platforms on top of Kubernetes, and usually this involves multiple Kubernetes clusters. We are not talking about a single Kubernetes cluster; we are talking about different Kubernetes clusters that need to be managed, where you need to install tools on top of them, and you need to make sure that you offer those tools to application development teams so they can use them. And that's quite a lot of work.
So that's why what we are seeing is that more and more large companies are building these functions-as-a-service platforms internally, based on Kubernetes. You go to the CNCF landscape, take a look at all the projects that are there, and then you start combining these tools in order to provide a specific developer experience; in this case, combining them to provide that functions-as-a-service experience. There is a book called Team Topologies, which talks about the organizational change that companies need to go through in order to start thinking about platforms. And one of the main points in that book is the idea of having platform teams that take care of bridging the gap between the infrastructure that is needed to do the work and the application teams that need to consume that infrastructure. For our purposes, of course, that infrastructure means Kubernetes clusters and the tools installed on top of those clusters. The application development teams can then just use an API to interact with whatever platform was created for them, and just do their work. They can focus on building features or fixing bugs instead of worrying about what they need to install in their Kubernetes clusters. So there are different challenges. When you start building platforms, it gets complicated. Yeah, yeah, so the challenges are essentially trying to figure out how all those tools work, with all those formats, and join them together into something that is usable by people who do not dedicate eight hours every single day to exploring the CNCF landscape, right? The landscape is amazing. We get all the tools and so on and so forth.
But for a person whose job is not really to monitor the landscape, it's close to impossible to stay on top of all that. So the challenge is really: how can we simplify that landscape? How can we bring it to a level that is easy for application developers to consume, let's say? Yeah, and also, the landscape is not static, right? It's continuously evolving. Projects are incubating every day; you just get more and more projects, and projects change, right? So you need specialized teams that take care of integrating all these tools as they evolve, and of putting them behind an API so the application teams can consume them. Yeah, so if we talk about expectations, the requirements of such a platform, we can probably split them into the needs or tasks of two groups of people. One would be application developers, and the other set of tasks would be for the platform team. The requirements for application developers are straightforward and easy: they need a place to code, run tests, and do all the things they're doing constantly, every single day. They need to be able to do that effectively, without interrupting the work of anybody else in their team, or of other teams, and so on and so forth. And this is now the most important thing: it needs to be extremely, extremely easy. I think that many of us in this room right now are almost not aware of how complex things are for the rest of the world, or the company, right? And then if you jump to the platform team, their main job is really to figure out which tools we are going to use, how we are going to encapsulate those tools, and how to hide them, so that we can reduce their complexity and make them accessible to everybody else. Yeah, and that's pretty much the gist of it, right?
We need to have this separation between who is consuming the platform and who is creating the platform, the ones creating all these tools and putting everything together so developers can actually focus on coding. And that's why we wanted to do a quick demo here. It's going to be super quick and super simple, but we will try to explain what's happening behind the scenes. Remember that the main idea here is that it needs to be simple for application developers. So what we're going to do here is switch to the terminal. Is this big enough? Yes? Kinda? Okay. So I will use kubectl apply. Let's do that, team A. Just by running this command, I am sending a request to the Kubernetes API, right? There you go, it replies back saying, okay, you have a new environment that was created. So what I just applied is this file. Let me know if you can see that. Yeah. Yeah, good. That file. Essentially, what you see here is a custom resource, right? It's something that does not come out of the box from any tool. It's something that Mauricio created as a way to encapsulate what it means, for a specific team, for a specific company, to have, in this case, an environment. So it's a new resource definition called Environment, where you can choose through labels what type of environment you want to have. In this case, it's a development environment. We could have other environments, like production or staging, depending on the implementation, and also on what will happen besides that. And here, to keep it simple, we only have one parameter saying: hey, do you want a database in that environment or not? In this case, database: yes. So this is the interface. This is what the end user sees and can use to manage their own environments. Perfect. And you might be wondering why I am requesting an environment.
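The applied file isn't shown in full in the transcript, but a minimal sketch of that kind of custom resource could look like the following. The API group, resource name, and exact field layout here are assumptions; only the label-based environment type and the single database flag are described in the talk:

```yaml
# Hypothetical environment request, in the spirit of the demo.
# "platform.example.org" and the field names are placeholders.
apiVersion: platform.example.org/v1alpha1
kind: Environment
metadata:
  name: team-a-dev
  labels:
    type: development        # selects which kind of environment to create
spec:
  parameters:
    database: true           # "do you want a database in this environment?"
```

A developer would submit this with a single `kubectl apply -f team-a-dev.yaml`, which matches the one-command interaction shown in the demo.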
And the answer to that is that I've created a very weird application that I want to show you. This is probably the weirdest application that I've created in my entire life. It's an application that will play with our feelings, and we are all adults here. My main language is definitely not English; Victor, your native language is not English either. So we are not fully responsible for what this is going to do, because it's going to use a British dictionary to create random phrases. It can go very wrong, but we will try to go fast. The idea of this application is that when I click start, a function gets called, and we fetch some input from the dictionary. Watershed, FrostKara, all right. How does that make you feel? Happy or sad? You can judge. And if we say sad, you can see down there that we have the evaluations. We have a lot of zeros here; the application might not be doing well, because it's just generating things that make people feel sad. In order to store these evaluations, we are using Redis, but it could be any database of your choice. So the application is pretty simple: we make a request, we go to a function that fetches a random set of words from an external source, and then we can make an evaluation that is stored in a database. Pretty simple. The idea here is just to reflect the kind of normal applications that you're going to be building. This is kind of our production environment, right? And now the team that requested the environment before is being asked to improve this, so we can make people a little bit happier with the words we are generating. Again, the architecture of the application is pretty simple. We have users. We have a backend service that receives all the requests. We have a database that stores all the evaluations.
We have a function that is in charge of fetching all these random words, processing them, and putting them back into the frontend. And that's pretty much it. So if I want to make a change to this application, again, I want to make sure I have an environment with all these things already set up for me to work on, without, of course, disrupting my production environment. So when I ran the command before, I created a development environment that contains all these pieces in a separate space. And how is that working? That's what we are going to look at now. So yeah, what we're doing here, at least for this demo, is combining Crossplane, which allows us to create those abstractions that are used by the end users; vcluster, so that we can have as many environments as needed without really spinning up a new cluster, which is extremely useful for development environments; Knative Serving and Functions, all the good things; and at the end of the day, also Argo CD, which synchronizes things between Git and the cluster. So eventually, Git becomes the interaction interface for developers, rather than the cluster itself. Yes, exactly. Again, all these tools can be combined in different ways, and this is just an example of how you can combine tools to provide a developer experience. We don't have much time to show all the angles, but there is a readme file where you can run this demo and read a little bit more about it, which we will share later. So, do you want to explain a little bit more about what Crossplane is and what it does? Yeah, I mean, I'm not going into the details of Crossplane, at least not right now, just a very quick recap. There are two important things about Crossplane. First, when we install providers, like the AWS provider, it gives us all the custom resources, with all the controllers, that are used to manage resources one-to-one in some provider, right?
AWS, Azure, VMware, and so on. And then the other, and this is now the more important, part of Crossplane is creating Compositions. Those are the abstractions that allow us to define new services, with new custom resource definitions and controllers that will do something, right? And that something, in our example, is: hey, how about we create an environment, and hide the complexity of what an environment means for us inside those Compositions? Exactly. So in order to achieve that, what Victor is saying is that we have created two different files. One is the resource definition that represents this interface for application developers to use, in this case the Environment resource definition. And I can show you that pretty quickly. It's in the crossplane directory here: environment resource definition, there you go. And this is just like a CRD, right? Actually, it doesn't have anything that is Crossplane-specific; it's just a plain Kubernetes extension. Yeah, I mean, more or less. There are a couple of Crossplane-specific things. But what really, really matters here is the OpenAPI schema at the bottom that says: hey, this is the custom resource definition that we want to define, with its parameters. In this case it's extremely simple; it can get as complicated as you need, right? Here there is only one property, called database. Yeah, exactly. So pretty simple. Remember, this is the interface that your teams will see, right? And you can even create user interfaces that generate these resources, if you don't want to give teams direct access to the Kubernetes API. So the next step in Crossplane is to create Crossplane Compositions. And that's the other file, which is much larger, but we wanted to quickly cover that composition for development environments, right? We now have an interface, and we can use labels; as we saw before, we have a label, type: development, to request our development environments.
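In Crossplane terms, a definition file like the one described is a CompositeResourceDefinition whose OpenAPI schema carries the parameters. The group name and exact schema below are assumptions, but the shape, a single boolean `database` parameter at the bottom of the schema, matches what is described in the talk:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: environments.platform.example.org   # hypothetical group
spec:
  group: platform.example.org
  names:
    kind: Environment
    plural: environments
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                parameters:
                  type: object
                  properties:
                    database:
                      type: boolean   # the single "do you want a database?" flag
```

Apart from the `apiextensions.crossplane.io` wrapper, the interesting part really is just that OpenAPI schema, which is why it reads like a plain Kubernetes CRD.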
But we can create different Compositions to create different resources when we want, like for a staging environment, or maybe a QA environment, or even multiple production environments. We can have these kinds of recipes for different environments, and then just use labels to choose which kind of environment we want. So this is a very long YAML file, let's say, but it actually defines what happens when we request a new environment, right? There is a relationship here with the resource definition we just mentioned. Every time we create one of these resources, Crossplane will go ahead and create all the other resources that are listed there. Yeah, and which resources those are depends from one case to another, right? We can mix and match: you want something in AWS, maybe something in Azure, maybe something in Kubernetes, maybe you want to create a GitHub repository. It's really up to you to design what that service, and that specific implementation of the service, will do. In this case, I think we are creating a vcluster and installing it based on Mauricio's image, and a few other things, right? Patching it, combining different resources into one coherent group. Yeah, so this Composition, more specifically, is creating a vcluster, which is a very interesting project; I can talk just a little bit about it. But again, the main purpose of mentioning vcluster here is the fact that, again, we are combining different tools, and which tools you use doesn't really matter. vcluster gives you a very simple way of creating a new API server inside a namespace, which allows users to access that API server without the need to share the same API server between all the teams. That gives you isolation, and it also makes you feel that you are interacting with your own Kubernetes cluster. Are you all right? Good, good, good.
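A Composition along those lines can be sketched as below. The names are assumptions (the talk only says the composition creates a vcluster, among other things); the `type: development` label is what the label on the Environment request selects against, and the vcluster itself can plausibly be created through Crossplane's Helm provider, since vcluster installs as a plain Helm chart:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: development.environments.platform.example.org  # hypothetical name
  labels:
    type: development     # matched by the label on the Environment request
spec:
  compositeTypeRef:
    apiVersion: platform.example.org/v1alpha1
    kind: Environment
  resources:
    - name: vcluster
      base:
        # One plausible implementation: a Helm Release managed by
        # crossplane-contrib/provider-helm that installs the vcluster chart.
        apiVersion: helm.crossplane.io/v1beta1
        kind: Release
        spec:
          forProvider:
            chart:
              name: vcluster
              repository: https://charts.loft.sh
            namespace: team-a
```

Swapping this development Composition for a staging or production one is then just a matter of publishing another Composition with a different `type` label, while the Environment interface stays the same.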
And the interesting thing about vcluster is that in order to create a new virtual cluster, you just install a Helm chart in your Kubernetes cluster, which is kind of interesting. You don't need to have anything pre-installed in the Kubernetes cluster; you just install a Helm chart, and that creates a new vcluster with its own API server, and then you can interact with that API server. So when I created an environment before, the Crossplane Composition created one of these vcluster instances, and as soon as the cluster was created, Crossplane could detect that the cluster was ready, get the credentials to interact with it, and deploy a copy of our application into that environment. So by the end, when we have our environment, we also have an application instance running in there. Let's check if the environment is ready, because sometimes it takes a bit of time. It never works on the first try. It never works, but again, because this is just a Kubernetes resource, we can now list our environments. We can see that the environment is there, that there is a database being created, and that the environment is ready; it was created nine minutes ago. It's a live demo, so it can fail, of course. Because we are using vcluster, we now need to connect to this cluster. We have created an environment, but we haven't connected to it yet; we are still connected to our platform API. And because it's a vcluster, we can use vcluster connect, a command that fetches the credentials, the tokens to access the Kubernetes API server, and connects to it. As soon as we are connected, we are in our isolated cluster. We can do whatever we want inside this cluster, but as I mentioned before, the Crossplane Composition also installed the application to run here.
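The developer-facing sequence just described fits in roughly three commands. The file, resource, and namespace names are placeholders, and the flags assume the standard vcluster CLI; this is a sketch of the flow, not the demo's exact invocation:

```shell
# 1. Request a new development environment through the platform API
kubectl apply -f team-a-dev.yaml

# 2. Watch the custom resource until the composition reports it ready
kubectl get environments

# 3. Fetch credentials for the virtual cluster and point kubectl at it
vcluster connect team-a-dev --namespace team-a
```

Everything between steps 1 and 2, provisioning the vcluster, the database, and the application copy, happens inside the Composition, which is exactly the complexity the platform team has hidden.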
So if I list, for example, all the Knative services, because I'm using Knative to deploy these things, sorry for the crazy overflow there, long URLs, we can see that I have a new URL here. Clicking it, this is the instance of the application that is now running inside my development environment. As a development team, I can go and start changing this application in any way I want, because I will not disrupt my production environment. I will just click start here to see if it's working or not. It seems to be kind of working, and as you can see, the evaluations being stored and fetched from the database are not the same ones that we have here, right? It's a separate database instance. So we have automated the creation of the environment, the installation of the application, the creation of the database, and, more importantly, connecting all these things together so developers can work, all from the single command that requested this environment. Then we connected using vcluster, because in this case we are using vcluster to create the cluster, but with Crossplane we could create clusters in Google Cloud, AWS, or Azure, and then we would just need to use their tools to connect to those environments, if you want to. Yeah, yeah, I mean, the whole point of it is to have it serve as a wrapper that wraps the services, the tools that you need, and all the different resources into something coherent. Exactly. And the main idea, just to take it to the second part of the demo, is that we want to change our application a little bit, more specifically this function that is in charge of fetching input from the British dictionary, and make it a little bit happier, let's say, because it's pretty depressing right now.
So in order to do that, I will do what any developer would do: clone the repository with the function's source code, create a branch, start making some changes, and then deploy that function to our development environment, right? So I will do git checkout team A; that's a branch where I already have some changes for this function. I will go to the process function directory here; this is where my function lives. Let me do that again. As you can see, this is just a Go function that was created using func, the command-line tool for Knative Functions, to generate a very basic HTTP function. I've already made the changes, so I will not open this in my IDE or code editor; I will just use func deploy to deploy this to my development environment. Knative Functions, if you are not aware of this project: Evan mentioned it today; the project itself is not new, but it is newly part of the core Knative projects. It allows you to create these functions based on templates, if you want, and you can create functions in any programming language, right? And I think that, for me, that's the really powerful part of it. By using func you can create and scaffold the function structure, and you can also run func deploy to deploy these functions into a Kubernetes cluster. The main idea here is that we are not pushing developers to write their own Dockerfiles, or even YAML files, to do this deployment. So for development iterations, these kinds of interactions where we want to make changes and try things out, this is kind of great. I just created a function and deployed it into my developer environment without writing any Dockerfile or YAML file. And as you can see at the end of func deploy, you get a URL for the function, right?
But we don't actually want to see the raw output of the function; we want to see the change in the application running in my development environment. So I can go here and refresh my application in the development environment. Again, this is production; this is the development environment. And if I click start again, if things work... yes, look at that. Now we have emojis. So, definitely happier. I don't know about "pyromaniacs", but "so sad", "fun", and, well, let's be careful with that one. But yes, we have now implemented a change in our development environment. And I think that func and all these tools are pretty good for improving the developer experience, just making it fast. And again, the platform here has enabled developers, this team in particular, to work with the functions abstraction instead of working with all the Kubernetes constructs, like reading deployments or services, and needing to understand all these tools. And that takes us to something that I really wanted to talk about here. Victor, give me two seconds and we will wrap this up, because we're running out of time. Exactly. So one problem that we have seen teams facing with this kind of structure, where you have a platform cluster that creates environments, no matter whether you're using vcluster, as in this case, to create virtual clusters, or creating environments in cloud providers, is that in each of these clusters you will need a Knative installation in order to use Knative in that cluster, right? So every time you want to use Knative, you need to install Knative into that cluster. And that might become a problem, because then you have tons of different Knative installations that you need to maintain and actually run.
That's where vcluster plugins come into the picture, and I guess this is another great example of different communities contributing. vcluster offers the possibility of installing plugins, so no matter how many vclusters you create, you can configure them all to share the same host installation of Knative. So every time I created a new environment here and ran my application using Knative inside that vcluster, I didn't install Knative inside the virtual cluster itself; these vclusters are sharing the Knative installation that is in the main Kubernetes cluster. And again, I see this pattern becoming more and more important, and I think Ross also mentioned this idea of having a shared control plane for installing and running all these tools while maintaining the isolation between the clusters. So, do you want to talk a little bit about the last part, which is the production side? Yeah, I mean, essentially... oh, we have only one minute, so I will not talk about it much, except to say that the main difference is that although those different environments are truly different, one using vcluster, another that could be GKE or EKS or AKS or something else completely, all of that is transparent from the user's perspective, right? Users do not necessarily need to know that this happens to be vcluster and that happens to be GKE; all of that is abstracted away from the users, it's really an operational detail. Yeah, and when you go to production, maybe you have a different pattern, right? Maybe you are using Argo CD in that case, and you are not using func deploy to deploy your function to your production environment. That would be a little bit crazy.
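An Argo CD-driven production rollout like the one just mentioned usually hinges on an Application resource pointing at the Git repository that holds the rendered manifests. This is a generic sketch with placeholder repository, path, and application names, not the demo's configuration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: feelings-app-production     # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/feelings-app-config  # placeholder repo
    targetRevision: main
    path: environments/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true        # delete resources removed from Git
      selfHeal: true     # revert manual drift in the cluster
```

With this in place, pushing the function's configuration to the monitored repository is what promotes it to production, rather than any direct interaction with the cluster.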
So one of the things we are working on in the Knative Functions project is to make sure that we also tie back into this more GitOps-style approach, where we can export the configuration of the function and send it to a repository that is monitored by Argo CD, and then sync that to our production environments. And because we probably have just 30 seconds, I wanted to sum it up a little bit and stop right after that. Just stop. It says stop, yes, okay. So, quickly: companies are building platforms. Knative plays really nicely with this idea of building platforms, and people are using Knative when they build them. The call to action for the Knative community is that we should go out to the rest of the CNCF ecosystem and start integrating with these projects a little bit better. There are tons of examples. If you're interested in this, please get in touch and I will be more than happy to guide you. Victor, thank you very much. Thank you. Thank you, folks.