We are happy to see you here. We're going to talk about simplified inner and outer cloud-native developer loops, about developer experience, and about some tools. And we're going to show you live demos of how we think the inner loop and outer loop could work for some environments. I hope you're going to enjoy it. Why are we talking about this at all? Because we want to talk about productivity. Developer time is a very, very expensive resource, and the more productive you are, the better it is for the business. You save money, which is good. We all would like to save money — and to make more money and improve the margins and everything. So how do we waste time? In an abstract sense, one of the biggest bottlenecks is doing things again and again, but slightly differently each time: we have to solve the same problem over and over, but we can't reuse the existing solutions, we have to change small things, and we end up with tooling that doesn't fit us, that maybe we're not fully embracing. And that is not great. I think that's called reinventing the wheel. Yes — reinventing the wheel is one of the biggest time sinks. So today we'd like to look at the workflows of a typical DevOps pipeline, your software development cycle in a loop from code to production, see how some tools interact, and see how you can enable better application developer experiences without sacrificing your needs and interests in production. Let's do it. My name is Oleg Šelajev. I work at Docker as a developer relations person, and I work on the Testcontainers project, which is what we're going to use for the demos later as well. You can find me as @shelajev almost universally online. And if you are into that sort of thing, check out my YouTube channel — I do short videos with people. Are there any more fun facts in those videos?
Maybe I'll check them out. Hi, everyone. Thanks so much for coming. My name is Alice Gibbons. I am head of customer success at Diagrid. If you haven't heard of us, we build tools and APIs for developers building distributed systems. And we are big supporters of the Dapr project, which I'm going to talk about today, and about how it enables productivity. So, as Oleg mentioned, how do we enable developer productivity? Developer time equals money. You all saw the slide; you all believe it. Maybe you don't. Essentially, we ran a survey of over 150 developers and figured out whether they were Dapr users or not. And 95% of them told us that Dapr saves developer time. So this is me telling you that Dapr will save you money. This was a wide range of people we surveyed, from large enterprises all the way down to small and medium companies, across different industry verticals as well — fintechs, and a few mid-sized retail companies. Exactly how much time it saves you is debatable, as you can see on this slide, but we had stakeholders saying that around 30% of developer time was saved by adopting Dapr in their code. Yeah, so that's something I'm going to talk about today. And today we talk about Dapr not just to introduce you to Dapr, but as an example of a production cloud-native tool that you would use in your development loop, and we're going to see how it impacts our application developer experience as well. Exactly. During this talk, I'm going to be the person on the production side, and Oleg over here is going to be on the developer side. Perfect. And we are going to work together in harmony. Hopefully. OK, so what is Dapr? I'm going to do a quick introduction just to get us all on the same page here. If you haven't heard of the project, it stands for the Distributed Application Runtime.
Essentially, it is a set of APIs that you can access from your code to build distributed systems. And you're like, OK, what's a distributed system? We have a number of API specifications for creating common patterns in applications: things like workflow, message publish and subscribe, and synchronous communication via service invocation. And there is a huge number of users running Dapr in production today. OK, so you're like, that's cool — what does it actually look like? You can see up here, we have a number of these APIs that you can access directly from your code using your language of choice. We have a number of SDKs within the project, so you can build the Dapr client natively into your code, or you can literally use HTTP and gRPC clients in your language of choice to reach these APIs. There are also a number of observability, security, and resiliency features built in at that application layer. So if you want to do a little bit of shifting left and move some of your security features onto the application side, one of the things you can do is take advantage of the Dapr security features there. This is hosted on really any cloud or edge infrastructure. We see over 95% of people hosting this on Kubernetes — and hey, this is KubeCon — but there are ways to run this on virtual machines as well. And yeah, this runs as a sidecar model; I didn't mention that. Access to infrastructure is enabled through what are called Dapr components. The Dapr component model lets your code access infrastructure through the sidecar, and it abstracts the infrastructure-specific code and libraries away from your applications, making them super duper modular, super duper composable. And you can also switch the platform that you're running on.
So, for instance, you can see a huge number of infrastructure services on here. For publish and subscribe, maybe you're using Azure Service Bus, maybe you're using Redis, maybe you're using Kafka. Dapr doesn't care. You can just swap out what's called the Dapr component and keep your code exactly the same, running from development into production. As I mentioned, Dapr typically runs as a sidecar pattern. Your application code reaches out over localhost to these APIs — again, your choice whether that's HTTP or gRPC. So you can see on the slide here we have this localhost endpoint, which exposes our Dapr sidecar. This is effectively my service invocation endpoint, where I am invoking another service via the Dapr API. Or maybe I'm publishing a message to my message broker — all things you might want to do while building a typical distributed system. This is also really important for moving from development to production, because your actual URLs do not change, right? You run the Dapr sidecar locally and reach out over localhost, and then you move to production, run it as a sidecar, and still call over localhost. To illustrate our end-to-end inner and outer developer loops, we're gonna use this thing called the Pizza Service. Everyone loves pizza. Essentially, we have a pizza store where I'm gonna place orders for a couple of pizzas, and then it has multiple composable services: a kitchen service, because how are pizzas made? They're made in the kitchen. And a delivery service to actually deliver that pizza to our end users. This is a pretty standard, simple distributed system, right? We have a couple of things going on. We have our Redis key-value store where we're saving those orders.
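The sidecar contract described above can be made concrete. Here is a minimal sketch of the localhost URLs an application builds to talk to its Dapr sidecar — 3500 is Dapr's default HTTP port, and the app and pub/sub names here are illustrative, not taken from the demo repository:

```java
// Sketch: the localhost URLs an app uses to reach its Dapr sidecar.
// "pizza-kitchen" and "pizzapubsub" are illustrative names.
public class DaprUrls {
    static final int DAPR_HTTP_PORT = 3500; // Dapr's default HTTP port

    // Synchronous service invocation: call a method on another Dapr app.
    static String invokeUrl(String appId, String method) {
        return String.format("http://localhost:%d/v1.0/invoke/%s/method/%s",
                DAPR_HTTP_PORT, appId, method);
    }

    // Publish an event to a topic on the configured message broker.
    static String publishUrl(String pubsubName, String topic) {
        return String.format("http://localhost:%d/v1.0/publish/%s/%s",
                DAPR_HTTP_PORT, pubsubName, topic);
    }

    public static void main(String[] args) {
        System.out.println(invokeUrl("pizza-kitchen", "prepare"));
        System.out.println(publishUrl("pizzapubsub", "order-events"));
    }
}
```

Because these URLs never change between laptop and cluster, the application code is identical in both environments — only the component wiring behind the sidecar differs.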
We also have a message broker. We publish messages from our kitchen and delivery services to let our customers know, hey, what's going on with my pizza delivery? If anyone's ever seen the Domino's Pizza Tracker — think that, but a super low-quality version of it. What's cool about this: if I rename the boxes here, it can be any application. It could be your bank service. It could be your movie streaming catalog. It could be anything. This is your template for an architecture. Exactly. And in one of these services, you typically take dependencies on your infrastructure directly within your code, right? You can see in my pizza store service here, I have both the Kafka and the Redis client, because I am not only saving my orders in my state store, I'm also receiving message broker events to tell my customer, hey, what's going on with my pizza order? So this is the pizza service as you would typically develop it. With Dapr, a lot of these dependencies are removed. You can actually remove the Kafka and Redis SDKs from your code, put in the Dapr client — the Dapr SDK — and reach out directly to the Dapr APIs via the sidecar. We also have a number of features built in here around observability, security, and resiliency. And from an operational perspective — since I'm on the operational team — all we really have to do as the platform team is install Dapr into the cluster and then wire up the available infrastructure. Platform teams really like Dapr, because what they can do is create those Dapr component files and then hand them over to the developers with all the necessary specifications for connecting to the underlying infrastructure. All right, sounds great — no need to reinvent the wheel there. Okay, yeah, no reinventing the wheel. So should we check this out?
Yeah, yeah. So we have the pizza application, and of course it's an actual application running in a Kubernetes cluster somewhere — in a cloud, or locally; Alice knows where it runs. There are three components. The main one is the pizza store application. It's a Java application. How many of you speak Java? Very good — we were not fully sure, but it's good that almost all of you read Java naturally. Amazing, thank you so much. So here's what we have in the code. I'm just gonna walk you through a tiny bit of code here, and then I'm gonna show what this looks like running in production. We're importing our Dapr client at the top here — this is how we're gonna reach our Dapr APIs from code. And where am I using this Dapr client? Let's check it out: in the state store method. Essentially, we're saving the orders that come in for our pizzas into the state store as key-value pairs. You can see I'm instantiating an instance of the Dapr client and then actually reaching out to my state store. All I really have in code is this state store name. It's an environment variable, and it corresponds to my Dapr component file under the hood, which in this case is a Redis key-value store. But again, the nice thing about Dapr is that it's entirely abstracted away from my code. So if I do a Ctrl-F and search for Redis, you can see I'm not getting any results. No Redis in my code. On the right — yes. So in production, if I want to switch my database from Redis to, say, Postgres, I can do that without touching the code. Sure can. Very good. I love that. Amazing. And you can see on the right here I have my component file. The key thing is this metadata name, which is kvstore.
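Under the hood, a state save like the one described is a plain HTTP POST to the sidecar. Here is a sketch of the URL and payload shape — with hand-rolled JSON for brevity; a real application would use the Dapr SDK or a JSON library, and the order fields here are made up:

```java
// Sketch: the raw HTTP shape behind a Dapr state save. The Dapr client
// effectively POSTs this JSON array to the sidecar's state endpoint.
// "kvstore" matches the component name; the order value is illustrative.
public class StateSave {
    static String stateUrl(String storeName) {
        return "http://localhost:3500/v1.0/state/" + storeName;
    }

    // Dapr's state API takes an array of {key, value} entries,
    // so several keys can be saved in one request.
    static String statePayload(String key, String valueJson) {
        return "[{\"key\":\"" + key + "\",\"value\":" + valueJson + "}]";
    }

    public static void main(String[] args) {
        String url = stateUrl("kvstore");
        String body = statePayload("order-123", "{\"pizza\":\"vegan-pepperoni\"}");
        // A real app would now send this with java.net.http.HttpClient.
        System.out.println("POST " + url + " " + body);
    }
}
```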
And I have a couple of metadata entries that point at wherever that Redis cluster lives. In this case I've deployed it as a Helm chart on my cluster, but it could be a hosted Redis in a cloud, or a container running on your local machine. Again, the only thing that has to change going from development to production is these Dapr component files. So that's how this is wired up from a Dapr perspective. If I hop over into Kubernetes now, I can check out what I have running. This is a pretty standard Kubernetes cluster. I have a couple of namespaces here. You can see I have my dapr-system namespace — that's the Dapr control plane that I keep up and running for my Dapr sidecars. I also have Kafka and Redis, which are my infrastructure providers, and I care about those because I'm on the platform operations team. If I look at my pods, one of the things I'm gonna notice is those three services I was talking about earlier: my pizza, kitchen, and delivery services. And you'll notice a couple of these have multiple containers running. Let's look at the pizza kitchen deployment. On the pizza kitchen deployment, I have a couple of containers running: one is my Dapr sidecar, and the other is my pizza kitchen container image. These two are super important, because again, that's what allows the connection to those Dapr APIs from within my code. Okay. And then, last but not least, to prove everything is running, I also have my Dapr components, which are again those component CRDs — custom resource definitions — that reach out to my infrastructure. Okay. Does this work? I don't know. That's a great question. Okay, so what does this look like? So here is our pizza store.
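A component file like the one on the slide looks roughly like this — a sketch assuming a Redis installed via a Helm chart; the host and secret names will differ in your cluster:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kvstore          # the name the application code refers to
spec:
  type: state.redis      # swap this (and the metadata below) to change stores
  version: v1
  metadata:
  - name: redisHost
    value: redis-master.redis.svc.cluster.local:6379
  - name: redisPassword
    secretKeyRef:
      name: redis
      key: redis-password
```

Swapping Redis for, say, Postgres means changing `type` and the connection metadata here; the application keeps calling the same state API under the name `kvstore`, untouched.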
We have — thank you — we have our KubeCon pizza store, and it was just lunch, but I'm still hungry, and I'm a vegetarian, sorry Oleg. So I'm gonna get this vegan pepperoni and an iced tea. Very good, I support it. So I'm gonna place the order, and I'm gonna get a bunch of events coming into the UI here. Yay, it's working in production, fantastic. So you can see these events coming through to our UI. Again, this is wired up through Dapr via publish/subscribe messaging, and you can see where these events are coming from: the kitchen service, the pizza store service, and that delivery service — and hey, you know, it should be arriving any minute now. Yeah. So this demo application is, of course, public; it sits on GitHub, so if you want to tinker with it yourself, you can do that very, very easily. So we do this and it works in Kubernetes. It works in my production, because why not? Yeah, I have my Kubernetes cluster, I have my things. So what about the inner loop of our development process? As an application developer, do I like this setup? Production always wins — if something has to be done for production, as a developer I can't really just say no, right? If my production system uses Dapr, I need to accommodate that. And that means I need to leverage, learn, and use the same tools I would use in production to deal with my local setup. As an application developer, that very often doesn't bring me a lot of joy or a lot of productivity, because I need to learn new tools that I might not be familiar with. I need to deal with the complexity of running a Kubernetes cluster. I need to understand how to install things. I need to understand the sidecar model for these particular things. I need to understand how to debug it when stuff goes wrong — and it will. And I will bang my head against the wood like a woodpecker.
And then I also need some sort of resemblance of this setup to run locally on my machine when I develop my applications. And I need the same setup in the continuous integration environment, because I'm going to run tests there, and that setup also has to work with my application. So I need Kubernetes clusters in my CI. That is the complexity that the production choices bring to me as a developer. And it's not ideal, because it complicates your development process. It complicates your onboarding: everyone on your team needs to figure out how to do this, and new people joining your projects need to figure out how to do this. In general, there are very many concerns in this scenario that are not application-development-specific. Can we do better? That's the question. We pondered it, we tried, and we figured out a different setup. The main premise of that setup is that my production systems are environment-centric, and the tools for working with production systems are all environment-centric too. In production, I work with clusters. I work with deployments; I roll out deployments. The application is a mere detail there, because I work with infrastructure and so on. So if I, as a developer, need to understand this whole picture, there are a lot of moving building blocks. But as an application developer, do I actually work on all of that? Absolutely not. More often than not, what I actually work on as an application developer is my applications. So my local development setup and my processes are very application-centric: I want to work with a single application, run the tests for that application, implement changes in a particular application, pass it to CI, have CI build a new version of it, and then it flows into the outer loop of development naturally. But I don't want to deal with Kubernetes clusters.
I mean, I can, because I'm a professional, but if I could avoid it, maybe things would be a little bit easier for me. And this is where we can try and build it better. Developers love APIs. We know how to work with APIs; we know how to request things. If you are building any sort of developer-oriented tools, if you give developers an API, they can build things for themselves. If you give them the building blocks of a solution, developers will say: we are very, very good at building things, right? We do this for a living. So: Dapr has an API, which is very, very great, so it's easy to use in your development. And we would also like an API-first solution for development itself. This is where Testcontainers enters the picture. How many of you know Testcontainers? Very good. How many of you knew Dapr? Oh, Testcontainers wins. Very good — application developers in the room. Right. So, for people who are not familiar: Testcontainers is a family of open source libraries that give you a programmatic API for doing things with containers. It gives you an API to configure containers, manage their lifecycle, configure services in containers, and perform operations with containers — all in your favorite programming language, because there are multiple implementations. We target pre-production use cases, and we try to build in things that are excellent for integration tests and local development setups, to nudge you onto the good paths, so that the easy things are also the more correct things. You can still do the less correct things, because this is just a generic API, but they're a little bit harder, and the golden path doesn't lead there. Testcontainers gives you a large ecosystem of modules, which are predefined abstractions — and that's easy to provide, because these are just libraries in your favorite programming language.
So, as an end user, you can run particular technologies in containers — databases, message brokers, or Dapr — without actually knowing how to run them yourself using the low-level container configuration API. We give you an API to configure things easily, manage the lifecycle, and also convenience APIs to wire your application to those services in containers very, very naturally. In a nutshell, there are implementations in different languages, and it lets your tests, or your application lifecycle, create the environments your application wants to run in — which is exactly what we want here, right? We want to create a complex environment with Dapr sidecars, potentially a Redis, potentially a Kafka service, potentially other tools, in our local environment or in CI, without dealing with third-party-provided Kubernetes clusters or anything like that. This is what Testcontainers does really, really well. Here's an example of how I would run a Postgres database from my Java code. I specify that I want a PostgreSQL container abstraction, specify that I want to initialize it with a particular schema for the given Docker image, start it, and then get the JDBC URL — the connection URL — which I pass to my application so it knows where to find the database. Testcontainers takes care of the resource cleanup after the fact, so you don't need to clean up manually. Even if something goes wrong — your tests or your application process crashes and burns and the Docker environment where the containers run is abandoned — we will still do the cleanup, to ensure you have a repeatable environment. You can run your tests or your application again and again and again, you always get an ephemeral environment, and you never connect to some stale instance of a cluster running in a cloud somewhere with a Dapr system component that's been installed for 319 days.
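The Postgres example described above reads roughly like this in Testcontainers for Java — a sketch; running it requires a Docker environment, and `schema.sql` is an illustrative classpath resource:

```java
// Sketch of the slide's example: an ephemeral Postgres for tests.
import org.testcontainers.containers.PostgreSQLContainer;

public class PostgresExample {
    public static void main(String[] args) {
        try (PostgreSQLContainer<?> postgres =
                 new PostgreSQLContainer<>("postgres:16-alpine")
                     .withInitScript("schema.sql")) { // apply the schema on startup
            postgres.start();
            // The mapped port is random, so always ask the container
            // for the real connection URL and pass it to the app.
            String jdbcUrl = postgres.getJdbcUrl();
            System.out.println(jdbcUrl);
        } // try-with-resources stops the container; the Ryuk sidecar
          // cleans up even if the JVM crashes mid-test
    }
}
```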
I didn't think you'd notice that. Nobody notices that. And instead you get this fresh environment. So, as an application developer, it takes away a lot of concerns about how to operationalize — yes, how to run operations for your integration tests. You can do the same thing in, for example, Go — the base primitives of the abstractions are the same across all Testcontainers implementations. I can say: run a container that is a Postgres, initialize it in the appropriate way, wait until a certain log line appears in the container, and then get the connection string that I can pass to my tests or my application to connect to that particular Postgres. It's a very flexible approach. Can we see it? Yeah, we can see it. We will not see the Postgres, because this session is not about Postgres. This session is about Dapr. So, the Testcontainers team offers a whole experience and multiple tools for you: there are the open source libraries, there's a free desktop application, and there's a cloud backend if you want to run containers in the cloud instead of locally in your Docker environment. So, this is how we do it. Yeah, let's look at the pizza application. What am I doing? Pizza application, come here. Pizza, right? This is the same pizza application — the pizza store application — and if we look at the store method... store, store, store. This is the code that Alice showed you as well, right? It's exactly the same application. It comes from the same repository; I just have it locally. So, now, to run this application — because it depends on Dapr being available in its environment — I need to provide it with a Dapr system somehow. How do I do that? I run Dapr in a container, and I do that with Testcontainers. So — this is a Java application.
There are very good integrations between Testcontainers and the frameworks in the Java ecosystem, but you can do the same in any language; for a particular project, it's just a couple of lines of code that say: if we're in development, run the containers; in production, don't run containers, right? So, I have my pizza store application entry point under my test sources, and it runs my actual pizza store application but augments it with the configuration for containers. And in my containers configuration, what do I see? I see a Dapr container. The Dapr container implementation comes from a library — a jar file that I add to my project — and it encodes how to run Dapr in a container. As an end user, an end developer, I don't need to know all those details, because the community has already provided us with the implementation. As an end user, I just need to say: give me a Dapr container, here's my application name, and the application is gonna run on port 8080. I don't just have the application call the Dapr "sidecar" — which is not a sidecar here, but a container — I also expose my application back to the Dapr service. So if I need to receive some messages, they also flow back into my application: when I'm receiving those pizza events, my pizza store receives them that way; when the kitchen is done with a pizza, the event can flow back. I expose my host machine to Testcontainers on the port where my application runs. And I also configure my application with where the Dapr service is running — it runs in a Docker container, its configuration can be dynamic, it might not be on localhost, it might run on a remote Docker daemon — so I need to programmatically tell my application: that is where your Dapr is running. The same way I would do it for a database: look, application, here's your database. So, right, perfect. I do that.
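A sketch of that containers configuration, based on the community Dapr module for Testcontainers — exact artifact and method names may differ between module versions, and running it requires a Docker environment:

```java
// Sketch: run Dapr in a container and point it back at the app on
// the host, so pub/sub events can flow both ways.
import org.testcontainers.Testcontainers;
import io.dapr.testcontainers.DaprContainer;

public class DaprDevEnvironment {
    public static void main(String[] args) {
        // Let the Dapr container reach the app running on the host machine.
        Testcontainers.exposeHostPorts(8080);

        try (DaprContainer dapr = new DaprContainer("daprio/daprd:latest")
                 .withAppName("pizza-store")
                 .withAppPort(8080)
                 .withAppChannelAddress("host.testcontainers.internal")) {
            dapr.start();
            // Tell the application where its Dapr actually is — it may be
            // on a remote Docker daemon, so never assume localhost.
            System.setProperty("dapr.http.endpoint", dapr.getHttpEndpoint());
            // ... start the Spring application here ...
        }
    }
}
```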
I also create a mock of my kitchen service, because I don't want to run more than a single service here — I want to ensure that if something breaks in my tests, it's only because my actual application is at fault, so I can fix it. So I can run my application here. Can I run this? I think it runs. It runs. Testcontainers finds my Docker environment, pulls the containers if necessary, and starts them. Here, we can see that we're pulling the latest Dapr image. I'm using Testcontainers Cloud — I just closed my machine, and it got a new, isolated Docker VM in the cloud. So it's pulling the images for a second here, but it will start them, and then my application will run normally. And as an application developer, I don't need to know the details of Dapr. What's more, neither does anyone on my project — it's all in the code. The whole configuration is in the code, so it goes into version control, and everyone who pulls it gets the same thing. To run the application in the right environment, they just need to run it. They don't need to mess with configuration or understand any details. Anyone can do this. Anyone can be an application developer nowadays, which is a good thing, right? The learning curve is getting flatter. So this is how I did it. And I can do the same for the tests: my tests use the same containers configuration, and they can be high-level integration tests that exercise my application the way I would use it in production, and check what I expect. The test here is rather simplistic, but, well, we are not testing the pizza application logic here, right? We are trying to figure out whether the environment works. You can see that I have my three containers there: my mock for the kitchen service, my Dapr, and my SSHD container, which is what exposes my host. And it just works, just like that. Amazing.
I think I can run this, and we'll see the blue checkmarks. Lovely checkmarks. So now we have this local experience that I, as an application developer, love, because this is what I would otherwise do manually as well, right? It's what I'd do for other applications that aren't built on cloud-native technologies, that don't depend on Dapr or Kubernetes services. I'm just a happy bunny with this. So what's next? Amazing. I mean, that was a great — oh, that was your demo — that was a great example of our local developer experience, with Dapr and Testcontainers working together, and of being able to stand up all those dependencies as a developer. That's amazing for me. And it actually works, right? The application is actually running. Local inner dev loop: check. Okay, but you know, there's an 8 in our DevOps loop, even though it got nudged sideways a bit. It's still an 8, and we have to make these loops work harmoniously together, right? So I've shown this application running in production; Oleg has shown it running locally. Now we want to do a full end-to-end — push, deploy, test — and see how this all works together. Yeah. So how do we do that? How do we enable this working-together? We need to figure out the plan and the release steps of this DevOps pipeline. We're going to introduce a change, and we're going to be responsible when changing things: we're not going to just throw it into production and let other people deal with it; we're going to release it properly, behind a feature flag. And for this we're going to use OpenFeature. OpenFeature is an open source project that standardizes feature flags. It allows you to use a standard API, and then any implementation of a flag provider behind it.
So in this case, we're actually using flagd, but it allows you to take advantage of really any flag evaluation provider you want. Yeah. So for now, we're going to try OpenFeature and flagd, and we're going to release to production. Right. And in my local experience, I'm going to use it the same way: I'm going to run flagd in my Testcontainers environment — in my Docker environment, using Testcontainers — and we're going to introduce a change and see how it works. This is the most dynamic part of the demo, so please continue breathing. Don't be super stressed, but we're going to do some coding here. I've already added the OpenFeature libraries to the project, and I'll add the configuration for my OpenFeature beans to my Spring application. They get some configuration from my properties; I enable the OpenFeature bean that connects to my flagd flag provider, and then I create the OpenFeature API client. Now, in my application, at the very top, what I can do is say: give me the OpenFeature client. And then I have a very simplistic bit of code that says: look at the flag called v2-enabled and pass it into the property. Right. And here we're going to check a value and essentially change a color. Because, you know, I want to test things out — I'm the production lady, and I want to test out a couple of different things in production here, but I don't want to put extra onus on my developers by pushing them through an entirely new cycle of development. Maybe I want to do A/B testing; maybe I want to show this application to different folks and see how they respond. Right. So I get this feature flag, I inject my OpenFeature client into my pizza application, right? And so everything will compile. Will it work if I run it locally? Who thinks it will work?
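The flag check itself can be sketched with the OpenFeature Java SDK and the flagd provider — a sketch, assuming the `dev.openfeature:sdk` artifact and the contrib flagd provider; `v2-enabled` is the flag key used in this demo:

```java
// Sketch: evaluate the v2-enabled flag via OpenFeature + flagd.
import dev.openfeature.sdk.Client;
import dev.openfeature.sdk.OpenFeatureAPI;
import dev.openfeature.contrib.providers.flagd.FlagdProvider;

public class FlagCheck {
    public static void main(String[] args) {
        OpenFeatureAPI api = OpenFeatureAPI.getInstance();
        // Talks to flagd over gRPC; the host/port come from configuration
        // when flagd runs in a container on a dynamic port.
        api.setProviderAndWait(new FlagdProvider());

        Client client = api.getClient();
        // Falls back to false if the flag or the provider is unavailable.
        boolean v2Enabled = client.getBooleanValue("v2-enabled", false);
        System.out.println("v2-enabled = " + v2Enabled);
    }
}
```

Because the code only depends on the standard OpenFeature API, swapping flagd for a different evaluation provider later means changing the provider wiring, not the flag checks.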
Who thinks it will crash and not work? It will not work, of course, because there is no flagd provider. It will be like: where's my flagd? I need my flags, I can't work. We need our flag evaluation provider. So I do the same thing I did with all the other containers — I can just, just get this. This is the best part of development: development by uncommenting code. It almost never fails. So, in my containers configuration, I say I would like a new generic container for the flagd Docker image. And then I expose the protocol port, I override the command, and I pass my configuration into the container as a string — that will be my flag definition. So here is v2-enabled, with the options true and false, and currently it's true. So when I run this — let me run my application again — I think it will pull and run my flagd as well, on top of everything, if we didn't forget any changes, and then I can see my application. There — it's pulling that flagd latest container. I see it right in there as a dependency. Yeah, oh, yeah, yeah, yeah. It pulled the container. It started the container. The container is being configured. My Spring application picks up the configuration. So when I go to my pizza store, you can see it's gloriously green. Wow. So I'll finish my loop here by just building, say, a Docker image of my container with the changes, which I push into the registry, and then the ops part of the team can pick it up and deploy it into production. Awesome. And since we only have two minutes, I'm gonna show this really quickly in production. Essentially, as you can see, what Oleg did is run our flagd provider locally. We also have this deployed on Kubernetes with the OpenFeature Operator. So this allows us to provide our different flags via CRDs.
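The flagd container described above can be sketched as a plain `GenericContainer`, with the flag definition passed in as a string — the image name, port, and flag format follow the flagd documentation, and running this requires a Docker environment:

```java
// Sketch: flagd in a Testcontainers GenericContainer, fed an inline
// flag definition. The v2-enabled flag matches the demo's flag key.
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.images.builder.Transferable;

public class FlagdContainerExample {
    static final String FLAGS = """
        {
          "flags": {
            "v2-enabled": {
              "state": "ENABLED",
              "variants": { "on": true, "off": false },
              "defaultVariant": "on"
            }
          }
        }
        """;

    public static void main(String[] args) {
        try (GenericContainer<?> flagd =
                 new GenericContainer<>("ghcr.io/open-feature/flagd:latest")
                     .withExposedPorts(8013) // flagd's gRPC evaluation port
                     .withCopyToContainer(Transferable.of(FLAGS),
                                          "/etc/flagd/flags.json")
                     .withCommand("start", "--uri", "file:/etc/flagd/flags.json")) {
            flagd.start();
            // Feed the dynamic host/port into the FlagdProvider configuration.
            System.out.printf("flagd at %s:%d%n",
                    flagd.getHost(), flagd.getMappedPort(8013));
        }
    }
}
```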
And if I pop open my OpenFeature configuration, you'll notice it's the very same flag configuration that Oleg ran locally, but wrapped in this FeatureFlag CRD that's specific to OpenFeature. And we have this flagd source — this just tells us, hey, we're providing these flags from Kubernetes in this case, and these are our pizza flags. This is wired into our pizza store with a couple of annotations, so our pizza store service — the one we want to change — is pulling from these two CRDs. And essentially that allows us to make these changes live and see them get picked up. So, this was false. Let's update this guy to true and see if we can actually change that color. I'm gonna apply that guy — all right, configured — and then I'm gonna head back over to the UI. What do we think? Yellow or green? Green. Yay. Woo-hoo. Okay, awesome. And that is what we wanted to show. So, just wrapping up here: we showed an end-to-end developer loop. We talked about consistent APIs, both from the developer perspective with Dapr, and the APIs that Testcontainers gives you for your local dev experience. Simple inner and outer development loops. And then we gated those changes with OpenFeature and feature flagging, using best practices in production. Yeah, I think we showed a simple but nice iteration of the whole development loop into production. We made sure that my application developer experience is what I usually do, and like, and know how to work with. And we saved developer time. Yeah, hopefully. And it was all enabled by the combination of these projects, so you can build similar things yourself. And there are a number of things you can read online about this, and you can try the application yourself. Yes, try it out yourself. We have it published online.
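The CRD version of that flag configuration looks roughly like this — a sketch following the OpenFeature Operator's `FeatureFlag` resource; the resource name is illustrative:

```yaml
apiVersion: core.openfeature.dev/v1beta1
kind: FeatureFlag
metadata:
  name: pizza-flags
spec:
  flagSpec:
    flags:
      v2-enabled:
        state: ENABLED
        variants:
          "on": true
          "off": false
        defaultVariant: "off"   # flip to "on" and apply to change the color
```

Editing `defaultVariant` and running `kubectl apply -f pizza-flags.yaml` pushes the change live, with no redeploy of the pizza store service.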
And if you have any other questions about this session — we're just out of time right now — please come up and ask us afterwards, or come find us at the Docker and Diagrid booths this week. Thanks so much for coming. Thank you. KubeCon asks us to ask for feedback: if you have great feedback, please fill in the form via the QR code. If you think we can improve something, just come tell us personally. Thank you.