Hi, everyone. My name is Kendall Roden, and I'm so excited to be here today for Dapr Day 2024 to talk all things Dapr integration. I hope you've enjoyed the Dapr Day experience so far. You've heard a lot about how Dapr is being used by end users to power amazing experiences within their applications, and I'm really excited. Putting this together was an adventure for me, diving into all of the progress Dapr has made and all of the really cool innovations happening in the Dapr space, both on the product side and within the open source ecosystem. For a little context, as I mentioned, I'm a product manager at a company called Diagrid. Diagrid was founded by the co-creators of the Dapr project, Mark Fussell and Yaron Schneider, who I believe kicked off Dapr Day today. Before that, I worked at Microsoft as a product manager on the Azure Container Apps platform, so I'm super excited to shed some light on the great work being done at Microsoft, at Diagrid, and across the broader Dapr community. So let's go ahead and dive in. A really good place for us to start today is to acknowledge the incredible Dapr growth over the past several years. Dapr was created in 2019, contributed to the CNCF a couple of years later, and has now become the 10th largest CNCF project, with over 3,000 contributors. This continues to reiterate that the Dapr value proposition is extremely clear and that Dapr continues to solidify itself as an API standard for building distributed applications. In addition to the value proposition strengthening and the user base growing, existing users are also using Dapr for more intensive workloads. For example, if we take a look at some of the CNCF case studies, we can see that HDFC Bank executes over 750 million transactions monthly on a system that depends on Dapr. Same with DeFacto, right?
27.5 million events being published every 100 hours, and they're using Dapr to do it. So they're obviously running Dapr at extremely high scale, and there are new considerations that come into play when operating Dapr at that level. Another consideration is that, with this growth, new products and enabling technologies are also coming out to make Dapr easier to use. There are new deployment models, not just on Kubernetes but also outside of Kubernetes, that are helping bring Dapr to a broader developer audience so they can get the very clear benefit that existing Dapr users enjoy today. I also want to highlight, if you haven't already checked it out, the State of Dapr report from 2023. As a quick summary, this data lends well to the points I was just making. Over 150 individuals who use Dapr on a daily basis were surveyed, and it was very clear that Dapr users see a lot of value in this API abstraction. If you take a closer look at the survey, you'll see that swappability of components and vendor neutrality were highlighted as major benefits. There were also things users disliked, and a lot of that was around troubleshooting and around running on Kubernetes, which can be challenging to manage, as well as deploying to non-Kubernetes environments. You can see that evidenced in the data: troubleshooting was the top dislike for 50% of respondents, and a number of others found it hard to deploy to both Kubernetes and non-Kubernetes environments. And I do love the 13 people who said they don't dislike anything, right? I've never really seen someone love a product that much, but that just re-emphasizes how great Dapr is and how much progress the project has made.
And then, evidenced in the previous point, Dapr users also indicated the need for more operational support. Better observability was called out as valuable by 74% of respondents, along with features around visualization, best-practice recommendations, and even automatic certificate rotation. Managing and rotating certificates within Dapr can prove challenging: the Dapr certificates are self-signed and expire after a year, which can be a point of outage for users if they don't put in place a system for rotating those certificates before expiration. Now I think it's a good time to start diving into some of the nitty-gritty. Traditionally, when Dapr is installed into a Kubernetes cluster, the Dapr control plane is in charge of injecting the daprd sidecar into applications based on whether the Dapr annotation is present on the deployment, which is what we're mostly familiar with. While this is still the default strategy, some use cases require a different approach, and that can be enabled by Dapr Shared, which the open source community is currently working on. You can find it in the Dapr sandbox on GitHub; it was initially called Dapr Ambient. Essentially, the goal is that you can deploy Dapr applications with daprd running as a DaemonSet or a Deployment on Kubernetes instead of being injected into the application pod. Here are the two deployment models supported by Dapr Shared: the DaemonSet model, where you deploy a single Dapr sidecar instance per app ID on each node, and the Deployment model, where you have a single instance of that deployment. And keep in mind that if you're running on a large cluster, that deployment could be placed on a different node than the application workload itself, which can introduce some level of latency.
So there are some trade-offs between the Dapr Shared model and the sidecar model; you can look at the blog post I just referenced for more detail on those trade-offs. A good example of when you might want to use Dapr Shared is when you want to decouple the lifecycle of the application workload from the Dapr APIs. This lends itself well to function models like Knative and OpenFunction, where you might want to keep the Dapr sidecar available because it's waiting to respond to, for example, events coming off a pub/sub broker while the workload itself is idle. You don't necessarily want to scale the two together; instead, you want to keep the Dapr sidecar's asynchronous capabilities available while the application workload is idle and scaled to zero. So now I want to briefly cover two serverless open source projects, OpenFunction and Knative, and highlight a little bit about how Dapr integrates and works with these platforms. OpenFunction provides an end-to-end solution for building a functions-as-a-service platform on top of Kubernetes: everything from building the function, containerizing it, serving and scaling it on Kubernetes, to the eventing aspect of the functions themselves. As you can see here, it supports an asynchronous runtime using Dapr and KEDA for scaling based on events, as well as synchronous runtimes, one being the KEDA HTTP add-on and another being Knative. And what's interesting here is that because OpenFunction was built on Dapr from day one, it eliminated the need for the OpenFunction team to reimplement connectivity to the various infrastructure services for each and every language supported within their functions.
For example, think about Java alone: if Java functions need to publish or subscribe to messages on a given message broker, the team would have to pull in a ton of SDKs to ensure Java can talk to Kafka, NATS, Service Bus, and so on. Instead, that entire process was built on top of Dapr, which is pretty awesome, and that abstracts the backends away from the functions themselves, as we can see here. So Dapr is supported in OpenFunction, and Dapr powers their bindings, pub/sub, and state management. Another really interesting thing is that within OpenFunction, they identified that the sidecar-injection-per-function model wasn't working super well; it wasn't optimized for a scale-to-zero scenario. So what they did, several versions ago (this has been around for a while), is create a proxy service: essentially a single Dapr service per function, so that each time the function scaled and a new instance was created, it would use the same sidecar. That allowed them to really reduce the cold start and function startup time. It's also a good example of how Dapr Shared already has a concrete use case, because OpenFunction had effectively solved this problem themselves. And then when it comes to Knative, it's a bit less of an opinionated platform, so if you need more flexibility, some people gravitate towards it. Knative supports the native Dapr model, where you can annotate a particular function running on Kubernetes, and if Dapr is enabled, the function can make use of the Dapr APIs from the application code.
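To make that broker abstraction concrete: whatever the underlying broker, Dapr delivers each message to the subscriber as a CloudEvents 1.0 envelope over HTTP, so the function-side code stays broker-agnostic. Here's a minimal stdlib-only sketch of unwrapping such an envelope (the sample values are illustrative):

```python
import json

def handle_cloudevent(body: bytes) -> dict:
    """Extract the application payload from a Dapr pub/sub CloudEvent."""
    event = json.loads(body)
    # Dapr wraps the published payload in a CloudEvents 1.0 envelope;
    # the business data lives under the "data" field, and Dapr adds
    # routing metadata like "topic" and "pubsubname".
    return {
        "topic": event.get("topic"),
        "pubsub": event.get("pubsubname"),
        "order": event.get("data"),
    }

sample = json.dumps({
    "specversion": "1.0",
    "type": "com.dapr.event.sent",
    "topic": "orders",
    "pubsubname": "order-pub-sub",
    "data": {"orderId": 1},
}).encode()
print(handle_cloudevent(sample)["order"])  # {'orderId': 1}
```

Swap Kafka for NATS or Service Bus underneath and this handler doesn't change; that's the whole point of the abstraction.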
A good example of a project that could benefit from Dapr Shared is Knative, because now you can focus on a combination of Dapr Shared, the Knative runtime, and KEDA for scaling, and build your own platform using these primitives. So now that we've talked about some of the open source function-related projects and how they integrate with Dapr, I want to talk a little bit about some hosting options that also provide Dapr capabilities but abstract away even more of the infrastructure and operations overhead. A good example of this is Azure Container Apps. Azure Container Apps allows you to create containers that scale all the way down to zero, and it completely manages and abstracts away Dapr while still allowing you to use the Dapr APIs. If you've never used Container Apps before: instead of deploying to a Kubernetes environment, you get an abstracted concept called a Container Apps environment. There is no access to the Kubernetes API, so it's really providing a developer experience on top of Kubernetes. You create what's called a container app, which is akin to a pod or a deployment in Kubernetes, and then you provide Dapr configuration on the ARM or Bicep deployment template to enable Dapr. When you enable Dapr on a container app, the Container Apps platform completely handles the sidecar injection, the Dapr control plane deployment, the upgrades, the application rollouts, you name it; everything from an operations perspective around Dapr is completely managed for you. And a lot of the concepts are very similar to the open source ones. For example, there's the concept of components, which you can deploy at the environment level to be shared by multiple container apps, and all of the APIs are accessible via the existing Dapr SDKs.
Another really nice thing is that KEDA is also integrated into the Container Apps platform, so you can scale both your application and the Dapr sidecar based on events. And similar to OpenFunction, the Container Apps platform has a built-in proxy to ensure the Dapr sidecar can scale when new events arrive, even if your workload has scaled down to zero. Another really cool update, which happened after I left the team a year ago, is that the Container Apps environment now hosts not only container apps and jobs, but also Azure Functions. If you're familiar with Azure, you can now deploy the Azure Functions primitive into an existing Container Apps environment. So you can have a container app, which is a traditional application hosted in a container, talking to a containerized Azure Function with that opinionated Functions programming model, and you can use the Dapr extension for Azure Functions to let Functions and container apps integrate with each other using these managed Dapr sidecars. If you haven't tried that out yet, I highly recommend it; pretty cool stuff happening on the Azure side. Another thing I want to highlight within the hosted services space is something very near and dear to my heart, because it's the product I'm the PM for. We recently launched the Diagrid Catalyst product here at Diagrid. It's essentially a set of unified APIs for messaging, data, and workflow, powered by the Dapr open source project. Just to draw a parallel to the traditional Dapr deployment model: traditionally, your code is hosted on Kubernetes, you access the sidecar APIs over HTTP and gRPC, and you use components to connect to backing resources. Looking at Catalyst, you can now host your code absolutely anywhere. I know at first that sounds like, what do you mean, anywhere?
And I mean anywhere, right? Azure Container Apps created a hosted Dapr experience specifically within a Container Apps environment, but Catalyst abstracts away Dapr management for users calling from any platform. That could be AWS App Runner, GCP Cloud Run, Azure Container Apps, Azure Container Instances, you name it. Your code can be hosted anywhere, because the sidecar is now hosted in Diagrid Catalyst. Within our product, we have a concept called an app ID, which is very true to the open source Dapr project. For each application running externally, you create a remote hosted app ID, and through that app ID you can connect to the Dapr APIs, still over HTTP and gRPC, using the existing Dapr SDKs and getting a lot of the same cross-cutting concerns handled for you. Today, we provide five of the 10 APIs available in Dapr open source. On top of that, in addition to the connection experience, which is basically our version of Dapr components, we also provide the ability to connect to and use Diagrid infrastructure. For people who are really looking for a serverless experience, not just on the sidecar side but also on the infrastructure side, we provide serverless, scalable infrastructure solutions that require very little overhead and very little operations to deploy and manage. So let's take a look at a Diagrid Catalyst demo to get you all excited about what we've been working on. I'll show you how to convert the simple pub/sub example from the Dapr quickstarts to use Catalyst. Here we can see the checkout service publishes a message using the pub/sub component. Then we have a declarative subscription, which ensures all messages delivered to the orders topic on the pub/sub broker are delivered to the order processor on the /orders route. When we're using Dapr, what we traditionally do is use a multi-app run file to launch both the applications and the Dapr sidecars locally.
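Under the hood, that publish call from the checkout service is just an HTTP POST to the sidecar's pub/sub endpoint, which is why the same code works against a local daprd or a remote Catalyst-hosted sidecar. A stdlib-only sketch, assuming the default local sidecar HTTP port 3500 and a component named `pubsub` (both illustrative):

```python
import json
import urllib.request

def publish_url(base: str, pubsub: str, topic: str) -> str:
    """Dapr pub/sub publish endpoint: /v1.0/publish/{pubsub}/{topic}."""
    return f"{base}/v1.0/publish/{pubsub}/{topic}"

def publish_order(order: dict,
                  base: str = "http://localhost:3500",
                  pubsub: str = "pubsub",
                  topic: str = "orders") -> None:
    """POST the order to a daprd sidecar (requires one to be running)."""
    req = urllib.request.Request(
        publish_url(base, pubsub, topic),
        data=json.dumps(order).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

print(publish_url("http://localhost:3500", "pubsub", "orders"))
```

Pointing `base` at a remote endpoint instead of localhost is essentially the switch the demo is about to make.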
We can see here the expected output: the orders are being published and also being received. Now let's see Catalyst in action. First, we'll create a Catalyst project and deploy a managed Catalyst pub/sub broker, which ensures we can easily use the pub/sub API for this application. Next, we'll get the project details, where we can see the HTTP and gRPC endpoints that will be used to connect the Dapr SDK to the remotely hosted sidecars. In Catalyst, the sidecars are called app IDs, so we'll create one app ID for the checkout service and one for the order processor. Catalyst makes it easy for Dapr developers to get up and running by allowing them to import existing Dapr manifests, like subscriptions and components, into Catalyst subscriptions and connections. Here we can see we used the existing Dapr subscription to create a Catalyst subscription. If we take a look at the project and all of the resources we've deployed, we'll see two app IDs, a pub/sub connection representing a component, and a pub/sub broker, which is managed infrastructure. Last but not least, we have a declarative subscription, which makes sure the order processor will receive messages sent to the broker. With the Catalyst resources in place, we can scaffold a dev config file, which ensures the Dapr SDK has all of the environment variables set to connect to the Catalyst project and the remote sidecars we deployed. We can also set the application port for the order processor, to ensure any messages sent to the order-processor app ID are forwarded from the remote hosted sidecar to the local machine on port 7006. Lastly, we can set the application run commands to ensure both of our applications are launched at the same time and the app ID connections are established.
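The scaffolded dev config works because recent Dapr SDKs read their endpoint and credentials from environment variables such as `DAPR_HTTP_ENDPOINT` and `DAPR_API_TOKEN`. A small sketch of resolving those by hand; the endpoint URL shown is hypothetical, and treat the exact variable set as an assumption to verify against your SDK version:

```python
import os

def sidecar_config() -> dict:
    """Resolve the sidecar endpoint and auth header from environment variables."""
    endpoint = os.environ.get("DAPR_HTTP_ENDPOINT", "http://localhost:3500")
    headers = {}
    token = os.environ.get("DAPR_API_TOKEN")
    if token:
        # Dapr API token authentication travels in this request header.
        headers["dapr-api-token"] = token
    return {"endpoint": endpoint, "headers": headers}

# Hypothetical values of the kind a scaffolded dev config might export:
os.environ["DAPR_HTTP_ENDPOINT"] = "https://http-myproject.example.test"
os.environ["DAPR_API_TOKEN"] = "example-token"
print(sidecar_config()["endpoint"])
```

With the variables unset, the same code falls back to the local sidecar, which is what makes the "same code, different deployment model" story work.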
Finally, we can run our Diagrid dev config file using the dev start command, launching both of our applications and connecting the order-processor app ID to our local machine. We can see that without Dapr running locally, we were still able to use the exact same code by connecting to the remote hosted sidecars, unlocking a new deployment model, as both of these services could be running virtually anywhere. We'll take a quick peek at the Catalyst portal, where we can see all of the resources we deployed to our project: both of our app IDs, our pub/sub broker, our managed connection, and our pub/sub subscription. If you wanted to make use of an existing service instead of a Diagrid-managed service, you could create a connection. Connections are very similar to Dapr components, and you can make the selection of your choice to connect to the backing broker, in this case for the pub/sub API. There are also capabilities like the API Explorer, which lets you see both the requests from the checkout service to the sidecar hosted in Catalyst and the outgoing event sending that message through to the local /orders route. We can also use the call graph, which provides insight into all of the applications we currently have a Catalyst app ID for and the connections they make use of. You can explore these Catalyst features and many others by signing up for our early access program. I would love for you to check it out and see some of what we've been working on, including managed workflows, which is pretty exciting. So now that we've talked a little bit about hosting options, I want to focus in on developer tooling. One of the really neat tools that has come out of Microsoft recently is .NET Aspire. I personally was really impressed when I tried out .NET Aspire. It makes it super easy for you to get up and running quickly, building microservices in .NET.
Essentially, Aspire allows you to create multiple projects that you coordinate and orchestrate across. You can inject dependencies that are used and shared across these different services, and you can create or consume a variety of NuGet packages, which are best-practice wrappers around a variety of infrastructure services, very similar to what Dapr has provided with the Dapr component model. There's also additional tooling for VS Code and the .NET CLI that helps you get up and running and scaffold sample projects with .NET Aspire. There was recently a really great presentation on the Dapr community call showing how you can integrate Dapr with .NET Aspire. I thought I'd throw together a quick demo so you can see this for yourself, but I encourage you to watch the Dapr community call that was hosted on, I believe, February 6th or 7th of 2024 and see what it's all about. We're now in the Aspire-with-Dapr sample, where we can see a web application that's going to make a Dapr service invocation request to a backend API. The backend API returns a weather forecast to the front end, which we can then check on the user interface. The service defaults project is used for the shared dependencies across both of these .NET projects, and the app host, which is used with any Aspire solution, will essentially add both of these projects to the launch criteria, along with two Dapr sidecars, one for each application. Then we'll do a dotnet run to start that app host project. As you can see in the output, we're able to navigate to an Aspire dashboard. This should look somewhat familiar if you've ever played around with Project Tye; we can see both of the Dapr sidecar executables along with both of the projects we launched. We can also navigate to the endpoint and see that the service invocation request is successful.
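That service invocation request is, at the HTTP level, just a call to the local sidecar's invoke endpoint; Dapr resolves the target app ID and forwards the request. A tiny sketch of the URL shape involved, with the app ID and method names chosen to match this weather-forecast scenario (treat them as illustrative, and the port as the default sidecar HTTP port):

```python
def invoke_url(base: str, app_id: str, method: str) -> str:
    """Dapr service invocation endpoint for calling another app by ID."""
    return f"{base}/v1.0/invoke/{app_id}/method/{method}"

# The front end would GET this URL on its own sidecar; Dapr resolves the
# "backend" app ID and forwards the request to that application.
print(invoke_url("http://localhost:3500", "backend", "weatherforecast"))
```

The same shape applies whether the caller is a .NET Aspire project, a container app, or plain code; only the SDK wrapping it differs.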
In addition, we can see logs for the Dapr sidecar alongside the application workload, and then metrics for particular resources and even trace logs, which is quite impressive, if you ask me. So now you've had a quick look at .NET Aspire. I believe there was also a Dapr Day 2024 session on .NET Aspire, so I recommend checking that out for a deeper dive into this awesome innovation. Another project I think is worth checking out is the Radius project. This is an open source project that came out of the Microsoft incubations team, which is the team Dapr was initially created by, so it's pretty cool to see additional innovation happening in the cloud native space from this team. I actually think they have some pretty good slides, which I'll show you now, and then we'll dive into a demo to make it a little more real. If we take a look at this Radius example, we can see two pretty basic services, a front-end container and a back-end container, and both of them use some sort of state store or database. If you're using Radius, you'll have the concept of a Radius environment; let's say that environment is local development, for example. The compute our applications run on is a generic Kubernetes distribution, and the recipes we have for local development will run Redis and SQL as Docker containers. Now let's say we swap that environment out. We don't have to change our application code, but can instead use a managed Kubernetes platform on Azure for our compute, and then we have access to recipes that will provision Redis in Azure or a managed SQL database in Azure.
And if we wanted to swap over to an AWS provider or environment, for example, our application code would run on Kubernetes along with the Radius control plane, and we'd have access to a new set of recipes to provision AWS-native resources. A final thing I want to add before the demo is that Dapr is built into Radius, which is why it's in this presentation today; it makes it easy for developers to add state stores, pub/sub brokers, and more to their applications, especially if you're already a Dapr user. So let's dive into the demo and see what this looks like. To start, I'm going to initialize Radius on a Kubernetes cluster; in this case it's a local kind cluster running on my machine. I'm creating a new Radius environment, a default environment, and it's going to include that local dev recipe pack. Next, we can confirm the control plane was deployed to the radius-system namespace, and then we can look at the resources we'll be deploying via infrastructure as code. Here we're taking a quick look at the Bicep template that will create the resources on the cluster. We can see we have an application called Dapr Radius, composed of two resources. There's the backend resource, which has an extensions section that will inject a Dapr sidecar with an app ID of backend, and a connections section, which specifies a connection called orders that references a state store to be deployed by Radius. If we look at that state store resource, it's actually going to create a Dapr state store for us, using that default local dev recipe, which is Redis running in a container. We have a second resource that's part of the application, called frontend. It's going to use an app ID through Dapr to find the backend application, and it will also have a Dapr sidecar injected, with an app ID of frontend.
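The orders connection on the backend maps to Dapr's state management API, where a save is an HTTP POST of key/value items to the sidecar's `/v1.0/state/{store}` endpoint. A stdlib-only sketch of building that request (the store and key names are illustrative, chosen to match this demo):

```python
import json

def save_state_request(store: str, key: str, value: dict) -> tuple[str, bytes]:
    """Build the path and JSON body for a Dapr state save call."""
    path = f"/v1.0/state/{store}"
    # The state API accepts a list of {key, value} items in a single call,
    # so multiple keys can be saved in one request.
    body = json.dumps([{"key": key, "value": value}]).encode()
    return path, body

path, body = save_state_request("orders", "order-1", {"orderId": 1})
print(path)  # /v1.0/state/orders
```

Whichever recipe backs the store, Redis in a container locally or a managed service in the cloud, this request stays the same; that's the swap Radius environments enable.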
Lastly, we do a rad run on that Bicep file, and we'll see it went out and created the frontend, the backend, and that state store. If we go to our front end, we can put in a few orders, and based on that Kubernetes port-forward we're able to see the logs of the backend service, where we can see the order was persisted in the state store. Another thing we can do is run some Radius commands to look at the Dapr Radius application and see all of the connections being used; we can see that state store connection and the resources backing it, the Dapr component and the Redis deployment. So I think now is a good time to segue into Dapr at scale and talk a little more about operations for Kubernetes-based Dapr applications. There are learnings from users running Dapr at high scale, and consistent challenges they've had to overcome: certificate rotation, keeping up with the latest version of Dapr, managing this across multiple Kubernetes clusters, and getting to the bottom of potential issues from a metrics and logs perspective. Conductor is a product created by Diagrid to operate these Dapr-enabled cloud native applications running across multiple Kubernetes clusters. For the sake of time, we're going to keep moving, but I do want to call out that there's a 30-day free trial of Conductor where you can onboard a cluster for free and get all of the Conductor insights that help you understand your Dapr installation. So if you want to try that out, definitely do. Now I'm going to show you a quick Conductor demo using the Radius application we deployed on our cluster in the previous demo. We're now dropped into the Conductor console, where we can onboard a new cluster connection.
We provide a unique name for the cluster connection and the Kubernetes distribution of the target cluster, which in this case is kind, which falls under "other". I'm using the non-production cluster type. I also want to install Dapr: while Dapr already exists in my cluster, I can choose the latest version and have Conductor take over management and upgrade to it. I'm also choosing to have the Conductor agent roll out all of my applications once the upgrade has happened, so the sidecars receive the latest version. Certificate management is provided, but because this is a non-production scenario, I'm going to skip over it and just leave automatic upgrade of the agent selected. A command is then generated that we can copy and run against our target cluster, which deploys the Conductor agent and establishes connectivity from the cluster to Conductor. I'm going to hit the Radius application we deployed with a bit of load to generate some metrics, logs, and movement within the cluster that we can then visualize in Conductor. Back in Conductor, we can see that the cluster was successfully onboarded. We get a quick glance at the Dapr control plane status, the Conductor agent status, some deeper insights, and we can see the Dapr version was successfully upgraded. We can also see configuration information about the desired Dapr settings we want enforced on the connected cluster, and we can click into the cluster to get more insights via the cluster dashboard. The cluster dashboard provides not only cluster insights but also application insights and component insights. That means bubbling up things like high-severity advisories based on recommended best practices, and surfacing any issues found in the logs or metrics within the cluster.
There's also an app visualization, where you can see the Dapr-enabled apps, the way they interact, and the Dapr components they make use of. If you're running Dapr in production, I highly recommend checking out the 30-day free trial of Conductor Enterprise to see additional insights at the application and component levels. Well, that's about it; we're almost out of time. I hope you learned a lot, and I hope you enjoyed this. I did want to call out and highlight just a few more projects and technologies I recommend keeping on your radar. One is the CNCF project Crossplane, which works really well with Dapr: it allows you to create, provision, and configure cloud-based resources using the Kubernetes API, and it's really simple to integrate Dapr into that model, so I highly recommend checking it out. If you haven't heard of these other projects, they're other areas of investment from a Dapr perspective: Dagger, which is really in the DevOps space, and Testcontainers, which started with Java developers. Salaboy, who works within the Dapr community and is also one of my colleagues at Diagrid, has done a lot of really great work on Dapr integration with Testcontainers. Last but not least, there's a ton more out there, new opportunities and new areas for Dapr to continue innovating and integrating, and I hope you feel more confident in understanding the broader Dapr landscape. If you have any questions, please don't hesitate to reach out: I'm @KendallRoden on X, or you can email me at kendall@diagrid.io. Definitely reach out. If you have any interest in Catalyst or Conductor, we would be happy to help.