Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie, I'm a CNCF ambassador, and I'll be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technology. They build things, they break things, and they answer all of your questions, so you can join us every Wednesday to watch live. This week we have Thomas here to talk with us about solving config drift across environments with SCORE. Very exciting. Today we're going to take questions at intervals throughout the presentation and the demo, so whenever you feel like asking a question, just pop it into the chat and we'll get to it at a natural pause in the presentation or the demo where we can discuss it a bit.

As always, this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct. So please don't add anything to the chat or questions that would be in violation of that code of conduct; basically, please be respectful of all of your fellow participants as well as the presenters. And I'll hand it over to Thomas right now to kick off today's presentation.

Thank you very much. I'm excited to be speaking here today. I'm going to show a couple of slides, and then I'm going to get on to the more interesting stuff, which is the demonstration. So let's get going. Can we see the screen yet? There we go, now we have the screen visible as well. "One spec to rule them all." So yeah, I'm Thomas Harris. I work as a customer success engineer, but today I'm going to be speaking about an open source project that I'm involved with called SCORE. SCORE is built by developers, for developers, and aims to improve the developer experience.
Ultimately, it looks to provide a single source of truth for how a workload should be run on any container orchestration platform, whether that's Docker Compose, whether that's Kubernetes, whether that's tooling like Helm to deploy to Kubernetes, or maybe a bespoke platform API, which is now becoming more common.

So let's take a step back and look at some of the problems that SCORE is trying to solve. The first one is cognitive load. Now, cognitive load is a wide subject, and within it we're specifically looking at context switching: developers switching between different tools and different workflows creates high cognitive load and impacts their daily work. It's a noticeable problem in any environment.

The second is inconsistency between environments. If I'm developing locally, I'm using something like Docker Compose to verify my workload. Once I'm happy with that, I need to go and deploy to a production system. I may be using Helm to deploy to Kubernetes; I may be deploying to ECS or Google Cloud Run. This creates inconsistencies between environments. You get config drift and problems with replicating the workload from local development to production.

Then another problem is infrastructure management. I've not spoken to any developers who actually want to manage infrastructure. They need infrastructure; it's a necessity to run their workloads. But actually managing that infrastructure requires specific knowledge of cloud providers and cloud architecture design, and it also creates cognitive load when we have to think about managing infrastructure.

So SCORE promotes what we're calling workload-centric development, versus the infrastructure-centric development which is pretty common today. What SCORE is looking to do, and what we're moving towards, is providing a single source of truth for your workload.
It's intended to be the definitive reference for how a workload should be run, regardless of environment, regardless of platform. It's a spec, and the spec is tightly scoped: it only asks for inputs that the developer is concerned with, and it shields the developer from the complexities of infrastructure and from the complexities of container orchestration. In regard to infrastructure management, it's very declarative. SCORE allows you to just define which infrastructure you need, which infrastructure your workload relies on, and it's up to the platform and the platform team to implement that infrastructure and make it available to the workload.

At a very high level, this is the implementation of SCORE. We have a few components. You have the SCORE specification, which is really the key thing here: it specifies how a workload should be run. Then you have a SCORE implementation, which is ultimately a CLI tool that translates the specification into a platform-specific configuration file. To be clear, this is a very early project; we only released publicly at the start of November. So we have a couple of implementations so far: one of those is Docker Compose, one of those is Helm. And it's very extensible, so the community is able to write specific implementations for their platform, whether those are implementations for the public cloud providers or platform-specific implementations for your own custom platform API.

So I shall move on to the demonstration and not do any more slides. Are there any questions at this point before I start looking at some code?

Not so far, but we're looking forward to those. So if anyone has any questions, you can send them in.

Cool. So before I dive straight in, let me give a brief scenario. I've been developing locally. I've built a Docker image, a container image. I've tested that locally, and I'm happy with it.
Now I need to run that on more production-grade infrastructure. I'm using Docker locally here; as you can see, I've got nothing running yet. We'll get to that. And then my production infrastructure could be something like GKE. I've got access to a GKE cluster as well, just a single node, as this is a demonstration.

So as a developer, I need to understand the implementation of Docker Compose and write Docker Compose files. When I want to transfer that workload to production-grade infrastructure, I also have to understand how to write Helm files, as an example. And as the organization I work for adopts more technologies, maybe a platform-specific API, I have to understand how to run that workload on those different platforms. What SCORE is doing is providing a single specification that can be translated for any of those platforms.

So let's take a quick look at an example SCORE file. This is an example that just uses a simple image, purely for demonstration purposes. But let's take a closer look at this file. What I'm saying on line one is that I have an API version, so the SCORE spec is versioned, which is pretty important and pretty standard nowadays as well. Then I'm giving the workload a name, specifying a container, or a group of containers, and a specific image. So far, so good. I supply some command-line arguments to the workload; in this instance, that's just something that keeps our workload running.

This is the point where it gets more interesting. So far this is just a definition saying "I need to run this container image." But I don't know of many actual production-grade workloads that don't rely on external resources. So if we quickly flip down to the resources section starting on line 13 and look at line 20, there's an entry that says "db".
So, a database. But rather than me worrying about the specific implementation of that database, where that database runs, whether it runs as a local container if I'm developing locally or as a fully managed service like Cloud SQL on Google Cloud, I'm not worried about the implementation. I just need an endpoint that is a Postgres database. So I just declare a database resource of type Postgres, and it's up to the platform to actually implement and provide that database.

Moving up, looking at the variables again, I've got this connection string. This workload needs to connect to a database, so it needs a connection string to speak to that database. As you can see, I have not declared anything explicitly. I've built up a connection string that will be resolved at runtime. And this is super important as well, because should I even know that connection string? Should I hard-code that connection string? Absolutely not, it shouldn't be hard-coded, and for me to hard-code it, I'd need to see it.

Could you zoom in a bit? Someone is wanting to see it a bit closer.

Okay, I can. Is that okay?

I think that's better, yeah.

Cool. So rather than explicitly declaring the connection string, we're just putting in placeholders. We're referring to the actual database resource, and these placeholders will be resolved when the workload actually gets deployed on the relevant platform. This is part of the platform engineering ideology that we're calling dynamic configuration management: when the workload is deployed, the configuration gets resolved, and then it is dynamically updated as those values change, which they should, for security reasons. It's likely that a platform team would implement dynamic credentials that rotate on a frequent basis. The workload specification understands that and is able to resolve them.
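The SCORE file walked through above might look roughly like this. This is a hypothetical reconstruction based on the description in the demo, not the exact file shown; the workload name, image, and variable name are assumptions, and the field layout follows the score.dev/v1b1 spec:

```yaml
# Hypothetical score.yaml sketched from the demo's description.
apiVersion: score.dev/v1b1
metadata:
  name: backend
containers:
  backend:
    image: busybox                 # a simple image, for demonstration only
    command: ["/bin/sh"]
    # Keeps the workload running and prints its environment,
    # including the resolved connection string.
    args: ["-c", "while true; do env; sleep 5; done"]
    variables:
      # Nothing is hard-coded: the placeholders refer to the db resource
      # and are resolved by the platform at deploy time.
      CONNECTION_STRING: "postgresql://${resources.db.username}:${resources.db.password}@${resources.db.host}:${resources.db.port}/${resources.db.name}"
resources:
  # Only the need is declared; whether Postgres runs as a local
  # container or as Cloud SQL is the platform's concern.
  db:
    type: postgres
```

The developer-facing surface is deliberately small: an image, a command, some variables, and a declared dependency, with everything infrastructure-specific left to the platform.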
So this file will be consistent across different platforms. In this instance, I'm going to run it on Docker Compose, and then I'm actually going to move it into production, which will be a Kubernetes cluster deployed via Helm. The first thing I need to do is translate the file into something platform-specific. So I'm going to translate this backend file; I'm just going to copy and paste the command now. I actually need to be in the correct directory to do this.

Okay, so I've translated that file and output it to a platform-specific file. That's the backend. I've also got a frontend that is similar, to simulate a production-type workload. Let's take a closer look at those two files. You can now see that I've got a backend Compose file and a frontend Compose file. The backend Compose file is the SCORE spec translated into the Docker Compose specification.

Now I actually need to bring the workload up. If you're wondering how the database gets brought up: typically, in production environments, that is down to a platform team to implement. In this instance, I've just run a database locally, which is fine for a developer; just bring up a local database. When we move into production, it becomes more difficult for a developer to define that, and it's something a platform team would implement.

Okay, so I'm going to run the DB, just as a Compose file, then the backend, which I showed you, and the frontend as well. Now we're actually bringing those workloads up. All I've had to do is specify all of this in a single SCORE file and translate it into something platform-specific. So we're just going through the motions of bringing this up now. If we look at the bottom, I know there's a lot going on here, but these are just the logs from bringing these workloads up.
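The translate-then-run steps above would look roughly like this. This is a sketch: the file names are assumptions, and the `score-compose` flags may differ between versions of the tool:

```shell
# Translate each SCORE file into a platform-specific Compose file
score-compose run -f backend.yaml -o backend-compose.yaml
score-compose run -f frontend.yaml -o frontend-compose.yaml

# The local database Compose file is hand-written here;
# in production, provisioning it is the platform team's job.
docker compose -f db-compose.yaml \
               -f backend-compose.yaml \
               -f frontend-compose.yaml up
```

The developer only ever edits the SCORE files; the Compose files are generated output.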
We can see the log output here: the workloads are connecting to the database, and we've actually resolved the connection string. So we've got a frontend and a backend, and we're resolving the connection string so we can speak to the database.

On its own, if I'm just deploying to Docker Compose, this might not seem very useful. Why would I write a different file when I can write Docker Compose directly? But in reality, production-grade workloads do not run under Docker Compose, so we need to translate them for a more production-grade platform.

In this next example, let me get out of this and stop all these workloads. I'm now going to deploy to Kubernetes via Helm. The same file, the exact same file, I'm going to apply to Kubernetes. I need to do the translation, so rather than using score-compose, I'm going to use score-helm, and it translates the frontend and outputs a Helm values file. We don't output a Helm chart specifically, but I will show you how that's handled. So I translate the frontend to a values file, then translate the backend to a values file as well. Now, if we go in here, I've got these backend values and these frontend values that can be used with Helm.

As mentioned, we don't translate to a Helm chart. What we have is a reference chart that the values can be fed into, so we can install that chart. Obviously, you have the ability to create your own reference charts, but we have an example here in the score-spec Helm repo. Now that I've done that, I can do a Helm install. This installs directly to the Kubernetes cluster that I just showed you I have access to via kubectl. Now let's go and look at that deployment. Let's go and verify that the deployment exists, in case you don't believe me. There's a lot going on on this screen, I appreciate that.
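A sketch of the Helm path described above. The release names and file names are assumptions, and the chart repo URL and chart name are my best reading of the score-spec GitHub organization at the time, so treat them as illustrative:

```shell
# Translate the same SCORE files, this time into Helm values files
score-helm run -f backend.yaml -o backend-values.yaml
score-helm run -f frontend.yaml -o frontend-values.yaml

# Feed the generated values into the reference chart and install
helm repo add score-helm-charts https://score-spec.github.io/score-helm-charts
helm install backend score-helm-charts/workload --values backend-values.yaml
helm install frontend score-helm-charts/workload --values frontend-values.yaml

# Verify the deployment on the cluster
kubectl get services
kubectl get pods
kubectl logs deployment/backend
```

The point is that no Kubernetes manifests or chart templates are written by hand; the reference chart plus generated values cover the translation.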
If I run "get service", I can see that the service is up. If I run "get pods", I can see that I have these pods running. And if I tail the logs as well, we can see the connection string. So there we are.

In the same way that I deployed my workload before, I used the initial SCORE spec, which is here — oh, sorry, not there, here — that's the backend example. I deployed this workload to Docker Compose, and then I deployed it to Kubernetes using the same file. And I've got the same outcome: the workload is running, the frontend is speaking to the backend, they can communicate with each other, and we can speak to the database. If we look at the connection string that we've echoed out, we can see that it has been resolved, the inputs have been passed in, and everything is successful.

This is just a simple example, of course. There may be other tooling out there that can translate a single file to Docker Compose and to Helm. But what we're really looking at here, and the vision, is to support the platform engineering ideology: the ability for platform engineers to define resources to best practice, or as some may call them, golden paths, and for developers to take a workload-centric view of their development. That means they can concentrate on writing code, pushing that code to an environment, and having that same specification be consistent across all environments.

In future, we're looking at many different implementations. We have an extensible implementation system where the community is able to write their own plugins. So we're really excited about the future of SCORE and how it's going to help developers improve their experience in their day-to-day work. That's a very quick overview and a quick demo. Are there any questions?

Yeah, there was a question from Mo, which was: does SCORE have a GitHub repo? It does indeed, yes. I shall post it in the chat, one second.
Are there any other questions?

Not so far, but to the audience: this would be the perfect spot to ask questions, so if you have any, send them in.

Cool, I shall post this in the chat now. I think this is just private to yourself, but there's the score.dev website, and then I'll bring up the repo as well.

There we go, and attendees on the YouTube side have the link now as well.

There we go. That's the specific repository that relates to the spec, and if you take a step back in that repository's organization, you will find the specific implementations that we've developed so far.

Yeah, so there we have the link; I've put it in the chat. And then there's a question: how do you compare config drift between two environments?

If I understand the question correctly: config drift happens when a developer uses one type of platform or technology to deploy their workload, and then, when they need to deploy to another type of platform, they need to redefine that workload. They have to resolve and understand the differences between the two specifications, or the platform-specific implementations. They probably have the best intentions, but naturally, having to understand many different platforms and many different specifications means that the config becomes different between those environments. And there's also the point of updating the config: if I make a change in one environment, I need to reflect that in the other environment. What SCORE is aiming to do, and is actually achieving in this demo as we've seen, is create a single specification that is consistent across those environments. I hope that answers the question; please follow up if there are any further questions on that.

Yeah, great, perfect. Rita, who asked, if you have any extra questions, just send them in and we can get answers to those as well. Yeah, did you have anything else you wanted to show as far as the demo goes?
I don't have anything specific today, but what I will say is that we're going to share the link to the sandbox environment that I just showed there, to give viewers access. It will only be available for about a week at the moment, so we're going to keep that up, let people play with it if they're interested, and we're happy to take feedback. So please, everyone, reach out — my details are available — or just make comments or open pull requests on the GitHub repo.

Perfect, should I share the link now?

Absolutely.

There we go, I'm sharing it with people here as well, so they can see it within the stream. So people can go there and play around themselves, and it will be available for about a week.

Yeah, we'll keep it up for a week initially, yeah.

Perfect. And if they want to reach out with questions, do you have a Slack channel that people can join, or should they reach out in the CNCF Slack, or would the GitHub repo be the best spot?

We do — there's a specific Slack community dedicated to SCORE, so I will just find that. I wasn't prepared to share it, but I'll find it right now. "Community: join us on Slack." So I think this works for everyone. Could you test that now? Does it take you straight to the Slack community?

It works for me at least, so I can post that one as well. Perfect.

There we go. Yeah, please join us on Slack. Ask any questions, make any suggestions. If you want to get involved in contributing, or if you have any ideas, then reach out there, and the SCORE team will be happy to take those.

Perfect. And while we see if anyone else is typing away a question — send them in soon — I have a few questions of my own. The first, simple one: if someone's now super excited to learn more about the topic, and they're going to play around with the environment and everything, what should they do after that? What's the next step? What should they be learning if they want to do an even deeper dive into the topic?
Do you have any recommendations?

Well, I'd say the first thing is: tell us about your experience in the SCORE Slack community. Tell us what you need to learn and what you'd like to know more about. We'd be happy to provide many more materials on platform engineering in general and some guidance on where to go next.

Okay, perfect. A very customized approach, I love it. Yeah, sounds good. And then the one thing that I'm always interested in: do you have any sneak peeks or anything you can share about the future of SCORE? Is there anything coming up? What's on the roadmap?

So we have a very clearly defined vision, and that vision is to provide this single workload specification that's consistent across platforms, solving the problems that we set out at the start of the presentation. There are a few implementations on the internal roadmap that we're speaking of, but we'd really be interested to hear from the wider community which implementations are important to them, which platforms they're using most, and where their pain points are, so we can see whether SCORE can help to alleviate them.

Perfect, so clearly the Slack community is the place to be, to discuss and take things further and really see where the future takes SCORE as well. Perfect. I don't think there are any new questions coming in, so we can start wrapping up soon. Thomas, do you have any final words or anything you want to finish off with?

No, just thank you to the people who have joined. I really appreciate you taking time out of your busy days, and yeah, if you get the chance, test the hands-on sandbox environment and reach out.

Perfect, and here is the sandbox environment link again, if you didn't catch it the first time; it's also in the chat as well.
But perfect, thank you so much for coming on, doing such a great demo, and being so community-minded, getting discussions started with everyone. That's always lovely to see. Thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have a session about solving config drift across environments with SCORE. I really loved the questions that we got from the audience, and I hope everyone jumps in and tries out the environment in the next week. We have a bit of a holiday break coming up for Cloud Native Live, but we will be back in January with more great sessions, so stay tuned for those. Thank you for joining us today, and see you at the next Cloud Native Live.

Thank you for having me. Take care, all.