Hello, everyone. Welcome to Cloud Native Live, where we dive deep into the code behind Cloud Native. I'm Annie, I'm a CNCF ambassador, and I will be your host tonight. Every week, we bring a new set of presenters to showcase how to work with Cloud Native technology. They will build things, they will break things, and they will answer all of your questions, so you can join us every Wednesday to watch live. This week, we have Thomas here to talk about one spec to rule them all. A very fun title — I'm really looking forward to this. And as always, this is an official live stream of the CNCF, and as such, it is subject to the CNCF code of conduct. So, please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all of your fellow participants as well as the presenters. Perfect. With that done, I'll hand it over to Thomas to kick off today's presentation.

Thank you very much, and thank you, everyone, for joining. So, yes, I'm here today to talk about an open source project named Score. Score aims to improve the developer experience and reduce cognitive load on developers, and it was born out of the platform engineering movement. So, before we address Score, I'd like to talk a bit about platform engineering and where Score fits into that. So, I shall share my screen. Have I got the entire screen? Here we go. Can we see what's on my screen? Yes, now we see the content perfectly. Cool.

So, I'm just going to go through a short whiteboard exercise on platform engineering, and we're talking about platform engineering from quite a high level here. Platform engineering really is the discipline of a platform team providing a product to internal developers in an organisation so that ultimately software can be delivered faster, more sustainably, and at higher quality, all while developers focus on the work that they actually need to do and the work that they want to do, which is writing application code to develop applications that provide business outcomes. So, in platform engineering, we have these two distinct teams: the platform engineering team, who are creating the platform for the developers to consume, and the development team, who are consuming that platform and writing their application code.

On the platform engineering team's side, what do they care about? Ultimately, they care about enabling developers to do their job efficiently. Within that: reducing ticket operations, so rather than an operations team being reactive to every single request on an ad hoc basis, really defining best-practice infrastructure to some golden standards, designing the toolchain and CI pipelines, and embedding logging, observability, and all those things that we need to run an application at scale in production. So, some of the things that a platform engineering team may have: a CI pipeline that they have designed, and some infrastructure code to actually go and deploy the relevant infrastructure.

There's a request to zoom in a bit, if that's possible. I can indeed, and then I can move around. So, this might work — first try, trying this on a webinar, to be fair, so apologies if it's a bit sketchy. Okay, is that okay? Can people understand that? It is, I think it's better now. I can see it now — and audience, if you still have trouble, let us know and we can zoom in maybe a bit more. But so far, I think it works well. Cool.
Cool, cool, cool. So, yeah, the platform engineering team, they have infrastructure code to go and deploy infrastructure, and really they're in the business of making these higher-level architecture decisions, with people within that team actually writing the infrastructure code to best practice, to a golden standard. Okay. And then we have the development team over here. And what do they do? Pretty obvious, I suppose: they write application code. And really, do they care about the infrastructure? Do they care about the implementation of that infrastructure? They really just care about writing the application code and getting their app to production, right?

So, to actually live in this world where platform engineering teams are creating this architecture and development teams are just writing the code, we need a platform, right? So, bring in the platform. And this is an example of what a good-practice platform could look like. Be warned, this might not go 100% smoothly. Okay, so let's put this in the middle: the platform. So, we've got this platform that the development team actually interact with, and through which the platform engineering team ultimately make resources available to developers. This is a very high-level look at a platform, but how could this work?

The first thing is that the platform team are actually going to deploy infrastructure to a cloud provider, and that could be using something like Terraform, for example. I'm trying to — oh, here we go. So, they would go deploy to a cloud provider; that might be Google Cloud. And that's all fine on its own — we've got these resources running there — but we don't want developers to access those resources directly as such, or to be too aware of those resources. If it's Kubernetes, do we want them accessing kubectl and writing deployment manifests? Probably not; it doesn't really fit the ideology of what we're trying to do here. So, platform engineers will make those resources available to the platform via something like a resource definition — a pointer to the actual resource that gives access to that resource. So, they may create a Kubernetes cluster in Google Cloud, a GKE cluster, and make it available to the platform. They may also create a Postgres database, for example. And then they give these resources — or, you know, these pointers to the resources — tags or criteria that define which environment they can be used in. So, a Kubernetes cluster for the staging environment and a database for the staging environment, in this example that I'm giving here.

And then we have developers who are just writing their app code, building their app code into a container, and pushing that to a registry. So, they push the container to the registry, and then they actually need to define how that workload runs. And this is where Score comes in. Score is a workload specification, so it's a smallish piece of a larger puzzle — I'm just explaining some of the context behind the larger puzzle here. Building app code into a Docker container and pushing it to a registry is reasonably well defined, and then we get to the workload specification. So, the platform may have an object that we call an application, and we need to configure that application object with specific workloads. All right.
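Before moving on to the workload side: the resource definitions just mentioned aren't part of the Score spec itself — they're a platform-side concept, and their exact shape depends entirely on whichever platform or orchestrator is in use. Purely as an illustration of the idea, one might look roughly like this; every field name here is hypothetical:

```yaml
# Hypothetical resource definitions, as a platform team might register them.
# The shape is illustrative only; real platforms each have their own format.
resource-definitions:
  - id: gke-staging
    type: k8s-cluster          # what developers ask for, by type
    criteria:
      env: staging             # tag restricting which environment may use it
    target:
      provider: google-cloud
      cluster: staging-gke     # pointer to the actual GKE cluster
  - id: postgres-staging
    type: postgres
    criteria:
      env: staging
    target:
      provider: google-cloud
      instance: staging-postgres
```

At deployment time, the platform matches a workload's declared resource types against these criteria and wires the real endpoints and credentials in.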
So, a typical application might have a handful of workloads — say between five and ten — to build out the entire application, and each workload needs a specification. So, in the same pipeline that builds their app code, developers push this specification to the platform, and that specification contains information about what the workload is, in terms of the container image, and the resources it actually depends on. So, if we just type out a rough specification here, we might say image, then the name of the image, and then resources. And let me just zoom in a bit, because you're probably not going to see this on the screen — there, zooming like this. So, you may say image, the name of the image, then resources — and rather than worrying about the implementation of that resource, where that resource actually lives, we just say: I need Postgres. I need some storage. And the fact that this workload is going to go on Kubernetes anyway is taken care of. So, we just declare the resources that we need, and we've already tagged this application object within the platform as belonging to the staging environment.

Okay. So, once I've got my workload specification defined, and once I have my application code built and pushed to the image registry, we need to go and deploy that application. At deployment time, using something like a platform orchestrator product, the orchestrator would say: okay, this is the staging environment, so it knows that we need to push this image and run it on the Kubernetes cluster in the staging environment. It also sees that we need Postgres and some storage, so it looks through the available resource definitions for a Postgres database in the relevant environment, picks those up, and attaches them to the workload. The idea behind this is that, as a developer, I don't really know where that database lives and I don't know about its configuration, but I do know, and have confidence, that the platform engineering team have set it up to absolute best practices and company golden standards that conform to all regulations and compliance — and I just focus on writing my code. Okay. So, are there any questions at this point?

There may be questions. No, nothing at the moment, but there was a comment — "nice", someone said — so it's really nice to hear that people are enjoying the content so far. Oh, there's a comment now, I see, so we can maybe tackle it. I don't see a question mark, but maybe we can get your take on it. Someone is saying: my humble experience in a platform engineering team has let me conclude the following — don't give many options to devs. For example, auto-inject resource limits based on historical metrics from monitoring systems, and smart-enable autoscaling where required; reducing the inputs from devs increases reliability and consistency. Yeah, I absolutely agree with all of that; I think that's a great comment. It's actually probably a very nice segue into Score, and into looking specifically at this workload specification, because Score is really about reducing the inputs that a developer has to give — only exposing the things that they care about and that they should know about, rather than all the complexity of infrastructure.
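The rough specification typed out on the whiteboard boils down to just this — a whiteboard-level sketch, not the full Score schema, with an illustrative image name:

```yaml
# Whiteboard sketch: declare what the workload needs,
# not where or how it runs.
image: registry.example.com/backend:1.0.0   # illustrative image name
resources:
  db:
    type: postgres    # "I need Postgres"
  storage:
    type: volume      # "I need some storage"
```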
So, I will move on to that right now. We're looking specifically at Score here, and at some of the problems that the developers we have spoken with have told us about. First, they have a high level of cognitive load. This often comes from switching between different tools within different environments, or different teams having different pipeline tools — lots of switching is not productive. Then there's inconsistency between environments, too: if you're developing locally using Docker Compose and then in production you're using full-blown Kubernetes — multi-cluster, even multi-cloud — those are very different environments, and you can see inconsistencies there. And then there's infrastructure management, which is quite problematic for developers; I've not spoken to many developers who enjoy managing infrastructure. And the infrastructure management piece creates higher cognitive load as well.

What the Score development team and product team believe today is that most development that happens is very infrastructure-centric. Developers are really invested in, and have to build deep knowledge of, the tooling and the platforms that run their workloads. They have to think about configuring each environment very specifically. Taking that example again of developing locally with Docker Compose and running Kubernetes in production, you have to think a lot about environment-specific configuration, and then platform-specific configuration as well. If we're using different platforms for different applications and different teams, there's a lot of consideration that goes into that.

So, what Score is advocating is a workload-centric development model. Ultimately it's the workload that creates the value — all those workloads together create the application, and the application is creating the value. Infrastructure is super important — if we didn't have infrastructure, where would we run our workload? — but it is ultimately a byproduct of a workload, of an application. So, Score is really aiming to help developers, and to help platform teams help developers, focus on the workloads rather than the whole tech stack and all of the complexity that goes into that tech stack to run the application. Workload-centric development: just focusing on the workload.

So, the spec is a single source of truth across different platforms and across different environments, and it's really tightly scoped as well. This is a nod to the comment that was made in the chat: only exposing the inputs that a developer cares about and needs, and hiding those inputs that could create confusion, create extra cognitive load, and create production instances that are unfavourable for our application and our customers. And it's a completely declarative approach to infrastructure management. The context we're talking about here is declaring what we need and thus being given that. Obviously there are declarative tools for infrastructure as code, but this is looking at it from the developer's side: I declare what I need, I get given that, and I don't care where it runs, I don't care where it is, I don't care too much about the underlying configuration.

So, a little bit about the tool itself, at quite a high level. There are a few components. There's the Score specification, which is the spec that can be run across multiple different implementations, where an implementation relates to a specific platform or a specific piece of technology.
So, at the moment we have score-compose, we have score-helm, and there's score-humanitec for the Humanitec platform. You have this single Score specification, you run it against an implementation, and it spits out a platform-specific configuration file. So I could be deploying to multiple technologies, multiple types of platforms, but I only need to write one file, and I don't have to care about all of the pieces of configuration or the nuances of each platform's configuration format. I'll stop just to see if there are any comments or questions.

Not at the moment, but keep them coming, people. I know we did get confirmation at some point that the zooming did the trick, so we don't need to worry about that. So, that was good. Oh, cool.

And rather than looking at this on slides, we're actually going to look at it in a demonstration. So, I'll skip to a demo now. I'm using this sandbox environment, and I will give everyone access to it at the end of the session as well, so you can play around to your heart's content. Okay, let's look at a Score file first. Let me set out the situation before I go rushing in to looking at code on the screen. The situation is: I am developing locally using Docker Compose, and then I need to deploy to Kubernetes using Helm. So, we're going to leverage two of the Score implementations that are available today, score-compose and score-helm. This is maybe a simplified situation, using just the existing implementations that we already have — there are other tools out there that can tackle this type of problem just between Compose and Helm — but Score is supposed to be very pluggable, and it is very pluggable: you can write your own implementations for your own platforms. We're just giving an example here.

So, I have a Score file called — let's actually make this a bit bigger now; I'm sure there'll be complaints or suggestions otherwise. I've got this backend Score file, and I'm just going to run through it and explain it. The Score spec is versioned, so you can use different versions, and there's metadata, where we can simply give this a name — this is our backend workload. Then there's a list of containers, or a container: we've got a simple image here, and then we can issue commands to that container, arguments, etc. And then a list of variables as well that we want to inject into the container. Now, as you can see, we're not explicitly declaring the variable values; we have placeholders in there. The idea being that a good platform should be able to resolve these placeholders and actually inject the correct values.

So, if we quickly go back to this very basic drawing: the idea is that when we actually do the deployment of this application, and it's deployed onto Kubernetes — I don't know what's going on with that drawing — and we've specified Postgres in this instance, the platform will take the configuration from the database resource. So, as a developer, I do not need to work with credentials, and I do not need to understand the endpoints; that's all resolved by the platform and injected into the workload. That's something I didn't talk about too much when I presented this wonderful whiteboard: it's the platform that needs to be able to implement that.
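Reconstructed from that walkthrough, the top half of the backend Score file looks roughly like this. The image, command, and echo text are illustrative stand-ins, but the `${resources...}` placeholder convention is the real Score mechanism for values the platform resolves:

```yaml
apiVersion: score.dev/v1b1
metadata:
  name: backend

containers:
  backend:
    image: busybox   # illustrative image
    command: ["/bin/sh"]
    args: ["-c", "echo Hey Jimmy, connecting to the DB: $CONNECTION_STRING"]
    variables:
      # Placeholders, not literal values: the implementation or platform
      # resolves these from the matched database resource at deploy time.
      CONNECTION_STRING: "postgresql://${resources.db.username}:${resources.db.password}@${resources.db.host}:${resources.db.port}/${resources.db.name}"
```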
So, we can look here, and then we start declaring resources. In this instance, we can declare a set of environment variables that we want to inject, but in general, the variables that we use in, say, a database connection string would be resolved and injected via the platform. Here I've said: I need a database; my workload needs a database; the type of database is Postgres. And then, in this instance, I've just given it some default values that can be injected into the configuration. And I've defined another resource here, a resource dependency, which is the front end — we're sort of dependent on this front-end workload to build out our application. And in the front end, I've just got a basic Alpine image that serves over port 80.

So, I will execute this — I assume there aren't any questions, because I probably would have been asked. The idea is that wherever I run this file, my workload is going to be consistent, whatever platform it's on; there's nothing specific to any type of platform within this definition. We'll be using the score-compose context first — there's a CLI tool named score-compose — and after that, we'll be using the same file in the score-helm context. And hopefully, we should have exactly the same outcome. We will have the same outcome.

When I share this sandbox with you, you can follow through with the guides on the right, if you're interested. I find this useful when I'm presenting as well, because I can just copy and paste, and there's some text in there that explains what's going on. One important point: we've got this Score specification here, but we also actually need a database. This is just Docker running locally; there is no platform in this instance, so we need to supply the database ourselves, and we've done that with a database definition in a Compose file. We could have actually —

Audience question coming in. So, we have a question: can you dive a bit more into how you'd handle secrets? Yeah, that's a good question. Score itself doesn't really handle secrets, or referencing secrets. On this connection string here, this would all be secret information — the DB password, for instance. That's something that would be injected by the platform. We can hint at some of this and show a little bit of it, but in this platform sketch here — which has a lot of things missing, naturally — there would be secrets management components of the platform, and the platform would be able to inject secrets into the workload. Hopefully that makes sense. Yes.
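To make that concrete, here's roughly what the resources section of the same backend file looks like, with the connection-string values left for the platform to resolve. The property shapes with defaults follow the early drafts of the spec and should be treated as illustrative; secret material is never written into the file:

```yaml
# Continuing the backend Score file: declare what the workload depends on,
# not where it lives or how it's configured.
resources:
  env:
    type: environment      # environment values supplied from outside
  db:
    type: postgres
    properties:
      host:
        default: localhost # local fallback only; a real platform injects
      port:                #   the matched database's actual values
        default: "5432"
      name: {}
      username: {}
      password: {}         # secret: resolved via the platform's secrets
                           #   management, never hard-coded here
```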
Then we had another question: are we implementing here a Docker Compose-compatible interface that targets Kubernetes? So, if I understand the question correctly, we can target any platform. There's the specification, which has a schema, and which is represented like this when it's in YAML. And the idea is that — if I go back to this slide here — anyone is free to write their own implementation that targets a specific platform. You may have your own custom-built platform from your platform engineering team, and you want a standardised way for developers to develop locally and then actually address your platform, or convert to Helm. So, the interface, the experience, is about having a single specification and outputting to a platform-specific configuration file without having to understand that platform-specific configuration file — you only need to understand one specification. Hopefully that answers the question, but if it does not, then please follow up; I'll be happy to.

And then there was actually another question from the same asker: how are we managing multi-environments here, or is a third party like Argo CD still required? So, again, that comes down more to the platform to manage. You may have something like Argo CD, you may have a platform orchestrator such as Humanitec, or you may have a custom-built platform. But if I look at it from this context — so, I'm trying to, there we go, if I have some cloning capability; this is just not going well; what I was trying to do is, actually, if I copy that out, there we go, I didn't realise it wasn't there as one object — if we cloned the application and then gave it the tag prod, the actual workload specification is going to look exactly the same. We don't want the workload specification to look any different. The platform orchestrator will assign different resources: for the prod application or the prod deployment, it would give us the prod instance of the database and inject the relevant configuration needed to connect to that database. The actual workload definition should look the same; we really don't want those to vary between environments. Hopefully that answers the question.

Great. And if anyone wants to ask a follow-up question, of course, the chat is open and ready. But, yeah, now we can, I think, move on. Cool, cool. Thank you for the questions — I appreciate it; it's always good to have some interaction.

So, what we're going to do now is translate these Score files to Compose files, and I'll show the translations here. We've got this backend score.yaml, and it's now been translated to a backend compose.yaml, and we've got the front-end compose.yaml as well. Score doesn't actually execute those files; it creates a translation, and then you execute those files against the platform that you need. And, again, these are simple examples — score-compose and score-helm were among the first iterations we had. Naturally, platforms that provide a lot more would be more complex, and their configuration files would look a lot more complex as well.

So, we're going to start the workloads now, and what we want to see is the workload running and the connection string echoed out — that's what the workload does: it echoes out the connection string to the database. And there it is: "Hey Jimmy" — whoever Jimmy is — "connecting to the DB", with the connection string there. In this instance, we have just passed in the default values that we set in the Score file, but on a more advanced platform that uses a platform orchestrator, a platform product, these would be retrieved from the resource, or from a secrets manager, and injected in. That's for the platform to implement rather than Score; Score is agnostic to the platform — it is just a specification. So, we've got that. Hopefully that makes sense: we've used a Score file to define some workloads onto Docker.
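For anyone following along in the sandbox, the commands behind this step look roughly like the sketch below. It assumes the early `run` syntax of the score-compose and score-helm CLIs and illustrative file names; check each tool's README for the current flags:

```sh
# Translate the Score files into Compose files (nothing is deployed yet)
score-compose run -f backend.score.yaml -o backend.compose.yaml
score-compose run -f frontend.score.yaml -o frontend.compose.yaml

# Run the generated files locally; Compose, not Score, executes them
docker compose -f backend.compose.yaml -f frontend.compose.yaml up

# The very same Score file feeds the Helm path shown next
score-helm run -f backend.score.yaml -o backend.values.yaml
```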
Now, we're going to take that exact same file, and we are going to deploy via Helm to Kubernetes. So, if I come here — these are the exact same files — and translate them into the editor: within the Helm implementation, we translate to a set of values files, so we've got the front-end values and the back-end values. And then we've got a reference workload Helm chart that these values are fed into, basically; that's how the implementation was built — we've got this Helm chart. And now we're actually going to run the workloads, just doing a helm install. This is a GKE cluster, so if I do a kubectl get pods, you can see that those are coming up on GKE now. And then, to output some more in-depth information, we're going to tail the logs of the back-end container. And as we can see, we've come to the same conclusion: "Hey Jimmy", we're connecting to the database, and the database connection string is being echoed out. So, hopefully, you can see how we can use a single file such as this, written in YAML, to define a workload and run that workload across any platform that supports a codified way of defining workloads. That's all we need.

So, yeah, we are looking at developing more contexts — more integrations with other platforms, be that cloud services or more bespoke platforms as well. And you're absolutely free to contribute your own context. We have a GitHub organization, and the spec itself lives in there, along with information on how to contribute and things like that. I'll post this in the chat afterwards. And then we've got some of the specific implementations here: score-helm, score-humanitec, and score-compose have already been built, so you're free to look at the code for those. And yeah, have a play around.

Perfect. If you can get the links to the site, we can get them into the chat for everyone to access. And I think this is the perfect time for audience members to start typing away all of their questions as well.

We only released this at the back end of last year, really, so it's still an early-stage project, and we're looking for contributors as well. So, if you're looking to get involved in a new, exciting open source project and can contribute, then please reach out.

Perfect. So, is the GitHub page that you linked in the chat the right place to find the contact info? Where should people be looking to reach out to you? So, I've just posted another link, score.dev — I believe there is a contact-us page there. Or you can create a discussion, or even just send us a pull request on GitHub. But there's definitely contact information on score.dev, which is the website for the Score project. And also, in the chat — I don't know if you've pasted this yet — there's a link for the sandbox. If you follow that link, you get access to the sandbox; it's up for an hour, you can play around, and it will be up for the next week. So, yeah, feel free to have a play with that, and send your feedback our way. Perfect.
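For reference while exploring the sandbox, the sort of values file the Helm implementation generates looks roughly like the sketch below — the field names depend entirely on the reference workload chart and are illustrative here, not the actual score-helm output:

```yaml
# backend.values.yaml (illustrative shape only)
workload:
  name: backend
containers:
  backend:
    image: busybox
    command: ["/bin/sh"]
    env:
      CONNECTION_STRING: postgresql://user:password@localhost:5432/backend-db
```

Feeding it into a chart is then a plain `helm install backend ./workload-chart --values backend.values.yaml`, where the chart path is whatever reference chart your implementation targets.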
And now, while we wait for our backend to send the links to everyone via the chat, you have an audience question: how easy is it to write my own converter from Score to, say, my company's own format? So, it is a pluggable system, and we've tried to make it as simple as possible. It depends on the complexity of your internal specification. But if you want to talk further about that and get some of the development team involved, or the product managers from the Score project, we'd be happy to look at that further.

Great. And the asker actually continued with: I've developed a YAML spec for my Kubernetes ops. Okay, interesting — so maybe trying to achieve the same things there as well, to a certain extent. And that's what has been noticed: there are lots of these different specifications that are not transferable between platforms. So, I think there's a lot of value in having a more standardised specification, one that can be applicable to different teams or different units within an organization, and across organizations as well — you wouldn't have to learn a new specification if you changed jobs and they're using the same spec, right? Simple.

Perfect. And we're getting the links into the chat right now — they are at least on the YouTube side, so no worries there. And then there was another question, from Pren: I guess the score-helm chart is a boilerplate, and the intelligence to deploy the workload as a Deployment, StatefulSet, or Job is defined somewhere? Yes — there's essentially a boilerplate Helm chart that we feed values into. Obviously, you can reference your own boilerplate that is more suitable for you as well, but yeah, those are the fundamentals of it.

Perfect. And I hope the links are visible on the LinkedIn side too. If someone can't see the links: it's essentially the GitHub site for Score — you can get to mainly anything from there — and score.dev is the web page, so there you can find a lot of the material as well, if people can't see the links. Yeah, the links have not been posted, is that right? Was it just me? They're at least on the YouTube side; they are there. Yeah. Cool. Sorry, no worries.

And then there's another question: is there an integration with tools such as Tilt, used for local development? There is not currently — I don't know Tilt personally, either. But that is maybe a fantastic project for yourself! And yeah, if there's enough interest in a specific platform or deployment target, then we'll look into developing for that as well. Yeah, makes sense — that's always the good part about open source: if you want something, you can also start building it together. Yeah, flesh out the basics of it and then get some help from the wider community. There's starting to be quite a big community built around Score now, which is cool — a lot of interest, at least, anyway. So even if you just flesh something out and then others contribute to it, that'd be cool.

Perfect. Well, there was someone commenting that they've used Tilt, but not much — it's niche, as far as they're concerned, at least. Great that there's a discussion going on in the chat as well; I like seeing that. That's great. Yeah. Is there anything else that you want to bubble up here while people might be typing their questions in? A final call, I think, for questions, if anyone's still wondering something — or is there anything else that you want to share, Thomas? Nothing else that I'm going to share — I've spoken for the last 40 minutes straight, so I'll give everyone a break. If there are any final questions, I'm happy to answer, of course. Yeah. Let's see if anything comes in.
I have a question while we see if anyone is still typing away: is there any kind of sneak peek that you can give on what the future roadmap looks like for Score? Is there anything happening, and what does it look like? There are a couple of different implementations being considered. I won't commit to anything right now — it's not my place — but, obviously, we're evaluating what would be most beneficial to the community, which tools that we need to interact with are the most popular, and then moving on from there. The ultimate aim of Score — I've probably said it in other words before — is really to provide this definitive single specification for deploying, or defining, a workload. That is the ultimate aim, and it may take many, many different integrations with other systems to be able to get there, but we're looking to go that way.

Perfect. Actually, the discussion in the chat continues a bit: with Tilt, apparently, it's absolutely amazing — the best dev experience when it comes to getting set up. I love the discussion going on there; that's nice to see. Then Dennis said "thank you Thomas and Annie", which is lovely to hear. Thank you all for attending, and thank you everyone for the amazing questions. But since there are no new questions, I think we're going to start wrapping up.

Perfect. Thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have a session about one spec to rule them all. We really loved the interaction and questions from the audience, as well as the discussion within the chat among the audience — that's always great to see. And as always, we will bring you the latest code behind Cloud Native every Wednesday. In the coming weeks, we have more great sessions coming up: for example, next week, we have a session showcasing new features and capabilities in Kyverno. So, thank you for joining us today, and see you all in the coming weeks. Thank you, everyone. Bye.