Hey, everyone. I'm Donovan Brown. And I'm Jessica Dean. Today we're going to be talking about how to make writing microservices easier with an open source project called Dapr. We will start by covering the landscape of microservice development and how to get started. Then we will show how to extend an application by adding additional functionality. Finally, we will cover how to debug and deploy your Dapr-ized apps, all while employing DevOps best practices.

Thanks, Jessica. Don't worry, Jessica will be back later, but for now let's talk about the landscape of development. No matter what industry we are in, writing software has common patterns. We reduce costs by only using the resources we need and scaling out when additional resources are required. Applications are being broken down into units of functionality, each exposed as a service. We want to focus on adding business value, not managing infrastructure, so we leverage serverless platforms and use DevOps best practices. And the number of programming languages never seems to stop growing, and we should use the best language for the task at hand.

On the surface, microservice development seems simple. It is a micro, or small, service, not a giant scary monolith. However, there are limited tools and runtimes to support distributed development, and many of the frameworks are tightly coupled to a language or development stack. A typical microservice application might look like this: several small, purpose-built services all talking to each other. Some of the services also talk to external resources to store state or ingest events. Getting an application like this up and running, and keeping it running, has proven to be challenging. To address these challenges, the cloud-native community has developed Dapr, which is a set of building blocks to ease distributed application development. The goals are to take years of best practices and experience and make them easy to use from any language and on any stack.
It is driven by standards and provides a consistent, portable, and open API that can be extended and used on any cloud and on the edge. Dapr is community-driven and completely vendor-neutral. At its core, Dapr is a set of building blocks including service-to-service invocation, state management, pub/sub, bindings, and secrets management, with support for the actor model and observability out of the box. We can use as much or as little as we like. If we have already solved service-to-service invocation but need help with state management, we can just use the state management building block. Or if we are starting from nothing, we can use it all, allowing us to focus on adding business value. Any language that can send an HTTP or gRPC request can use these building blocks.

Not only can applications that leverage Dapr use any language, they can be run on any infrastructure. Dapr is implemented using the common sidecar pattern, where a second process runs alongside your main application. The main application uses HTTP or gRPC requests to call into the Dapr sidecar to take advantage of the desired building blocks. Here's an example application with two services, A and B. Notice each of them also has a Dapr sidecar. When service A wants to call service B, Dapr does all the arduous work of discovering where service B is and securing communication between the services.

This slide also shows the pluggability of the component model. Without having to change our code, we can change where our state and secrets are stored, where events are published, and even where our telemetry is sent. When developing locally, we can use a Redis container, then switch to a different backing service when we move to the cloud. The components are the magic behind Dapr. Switching from Redis to Cosmos DB, or from RabbitMQ to Azure Service Bus, requires no code changes. We just update our configuration and our code stays the same.
If we do not see a component we need, they are all open source and we can add whatever we need by submitting a pull request. At this point, let's focus on the different building blocks and demonstrate how to use them. Let's begin with service-to-service invocation, which lets one service call another. When one service wants to call another service, it must know its address, its port, and the number of instances that are running. The service could be running anywhere. Dapr takes care of locating and securely calling the service. With a POST request, we can ask Dapr to invoke the new order method on service A, and Dapr takes care of the rest.

Observability gives us tracing across all the services in our application. With Dapr, we can configure where we want our telemetry sent. This slide shows the same information visualized in Application Insights and Zipkin. As a developer, we don't have to write any special code; by running our application with Dapr, we get this for free.

What I'd like to do now is a demo to show you how to get started with Dapr. People told me, do not do a .NET demo. When I asked why, they said you have to use an open source language that works on macOS, Windows, and Linux. Okay, got it. Open source, works on macOS, Windows, and Linux. So I decided to do a .NET demo. For those of you that don't know, .NET is fully open source and works perfectly on macOS, Windows, and Linux. Let's go.

First, I want to show you the docs site at docs.dapr.io so you know how to get started. Simply click Getting Started, and then click Install the Dapr CLI. This page will show you instructions for installing on macOS, Windows, and Linux. Now that you know how to get started, let's see some code. I'm going to switch over to Visual Studio Code, where I'm inside an empty folder. To get started, we're going to create a new web API using the dotnet new command. What's cool is we can Dapr-ize an application without changing a single line of code.
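To make the invocation building block concrete, here is a minimal sketch of the URL shape the sidecar exposes for it. Port 3500 is Dapr's default HTTP port, and the app id and method name below are illustrative, not taken from the demo:

```python
# Sketch of the Dapr v1.0 service-invocation URL. The app id and method
# names are illustrative; 3500 is the default Dapr HTTP port.
DAPR_HTTP_PORT = 3500

def invoke_url(app_id: str, method: str) -> str:
    """Build the local sidecar URL that invokes `method` on service `app_id`."""
    return f"http://localhost:{DAPR_HTTP_PORT}/v1.0/invoke/{app_id}/method/{method}"

# A POST to this URL asks the sidecar to locate the target service,
# secure the call, and invoke the method for us:
print(invoke_url("servicea", "neworder"))
# http://localhost:3500/v1.0/invoke/servicea/method/neworder
```

The caller never needs to know the target's address, port, or instance count; only its Dapr app id and the method name.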
We can start to use Dapr immediately by running our application with the dapr run command. With this command, we are giving the service a name, telling Dapr what port the app runs on, what port we want to access Dapr on, and how to start our application. Dapr will build and start our application, and then launch the sidecar and wire everything up. Switching to a web browser, we can still access the application on port 5000. This allows us to incrementally adopt Dapr, because all existing clients can still access the application on its original port.

The real power starts when we have a client that uses Dapr to access the service. If we access the service via Dapr, the results are the same. But notice, we are now using Dapr on port 3500. This will never change, even if we change the address of the backing service or the number of instances. This is saying, hey Dapr, we need you to do something for us. We don't know the address of the other application. We don't know where it is on the network. We don't even know how many instances there are. But we know its name is MyApp, and we need you to go invoke the weather forecast method. Dapr locates MyApp and securely calls the weather forecast method for us.

But wait, there's more. Because we're using Dapr, we are getting observability for free. Switching over to Zipkin, we can query all the telemetry Dapr was automatically collecting for us. We can see when calls were made, how long they took, and what services were involved. We can drill in and see detailed information. Without Dapr, we would have to write code to get this information. This information is invaluable when you need to debug a problem, and we get it for free just by running our application with Dapr. Now I'm going to stop the application and return to the slides to discuss more building blocks. Next, let's examine state management, which allows us to create long-running, stateful applications.
With a POST request, we can store a key-value pair in any state store. To retrieve the value, we use a GET request with the key. Because the components are pluggable, we can switch from Redis to Cosmos DB without changing our code. Publish and subscribe is a powerful way to build a scalable, distributed system that reacts to events. Dapr supports both publishing events and subscribing to them. The application does not have to take any hard dependency on a particular implementation or SDK. We can swap out Redis in this example for any supported pub/sub component. Dapr also supports input bindings, which enable our code to be called when a specific condition is true. For example, call our code whenever a tweet is sent that contains a specific string. Output bindings allow our code to write to external systems without taking any hard dependencies. Finally, secrets management gives us a consistent way to access connection strings and other sensitive information.

Now let's return to the code. We are going to pick up right where we left off and start using some of the other building blocks. To do that, we add the Dapr package to our application. This is an optional step, as the Dapr functionality can be accessed via the built-in HTTP client of any language. Adding a Dapr SDK just makes using Dapr feel more natural in some scenarios. We are going to add three lines to the Startup.cs file that will enable dependency injection and pub/sub support. First, we add a call to AddDapr on line 29 to enable dependency injection of the DaprClient class into our code. Next, we instruct our code to use the CloudEvents standard, which is a specification for describing event data. This middleware will unwrap requests that use the CloudEvents structured format so receiving methods can read the event payload directly. The final line of code adds a Dapr subscribe endpoint that allows us to subscribe to events with attributes.
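Since every building block is just an HTTP call to the sidecar, a rough sketch of the v1.0 URL shapes may help; the store, topic, and secret names below are placeholders, not the demo's:

```python
# Illustrative Dapr v1.0 HTTP API paths for the building blocks above.
# All names (statestore, pubsub, etc.) are placeholders.
BASE = "http://localhost:3500/v1.0"

def save_state_url(store: str) -> str:
    # POST a JSON array of {"key": ..., "value": ...} pairs to this URL
    return f"{BASE}/state/{store}"

def get_state_url(store: str, key: str) -> str:
    # GET returns the value stored under the key
    return f"{BASE}/state/{store}/{key}"

def publish_url(pubsub: str, topic: str) -> str:
    # POST the event payload to publish it on the topic
    return f"{BASE}/publish/{pubsub}/{topic}"

def secret_url(store: str, name: str) -> str:
    # GET returns the named secret from the secret store
    return f"{BASE}/secrets/{store}/{name}"

print(get_state_url("statestore", "weather"))
```

Swapping Redis for Cosmos DB, or Redis pub/sub for Service Bus, changes none of these paths; only the component configuration behind them.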
With just three lines of code, we have everything in place to turn this into a stateful, event-driven service. Let's add some new functionality to this service using Dapr. Out of the box, this code generates random weather data. In a real app, this data might come from events published from weather capture systems. So we're going to add code that gets run for each event and stores the data.

Let's open the WeatherForecastController class and add a Post method to receive the events. We don't need a special route, so we can delete this code. We're going to accept the original WeatherForecast business object and return it when we're done. To have this code called each time an event is published, we can use the Topic attribute. This tells Dapr to watch the component named pubsub, and when events are sent to the new topic, call this code with the data. We get the name pubsub from the component definition. When we initialized Dapr, it created a default pubsub and statestore component. Let's open the pubsub component. This component is named pubsub, which is why we used it in the attribute. On line six, it shows this is configured to use Redis. Later, we will move our code to the cloud by defining new components. We also have a component named statestore, which we will be using next.

Let's close these and return to the code. Thanks to the call to AddDapr in Startup.cs, we can use dependency injection to get a DaprClient. With the DaprClient, we can start using the building blocks. Calling SaveStateAsync, we can store our WeatherForecast object in our statestore. The first parameter is the name of the component, the second parameter is the key, and the final parameter is the data. That is all the code it takes to respond to an event and store it. Instead of returning random data, we want to return the data we just stored in the statestore. To demonstrate incremental adoption of Dapr, we will not change the return type of this method.
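For reference, the default Redis-backed pubsub component that dapr init generates looks roughly like this; exact values vary by machine, so treat it as a sketch:

```yaml
# Sketch of the default pubsub component created by `dapr init`.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub            # the name referenced by the Topic attribute
spec:
  type: pubsub.redis      # the Redis configuration shown on line six
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```

The statestore component has the same shape, with a type of state.redis and its own metadata.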
As we did before, we will use dependency injection to get a DaprClient. Next, we replace the code and return a list of WeatherForecast objects from our statestore. To retrieve state, we can call GetStateAsync, passing in the component name and key. Because this method is not async, we just return the result and we're done.

We should test the code to make sure it works as expected, so let's set breakpoints on lines 40 and 31. Let's generate the assets to build and debug our code. Next, we use the Dapr extension to take our launch configuration and start our Dapr sidecar whenever we press F5 to debug. We will use the .NET Core Launch configuration. Let's name our service MyApp, and we can keep the default port of 5000. What this did was create a new configuration, based off the .NET Core Launch configuration, that can start and stop Dapr for us. Let's select the new configuration and look at it. Notice this configuration has a pre-launch task that starts Dapr. Because our service has no UI, we can remove the lines that start a browser. We can also see there is a post-debug task that stops Dapr for us.

Let's close this file and press F5 to start debugging our code. In the terminal, the extension is running commands to start our application and the sidecar with the debugger attached. While the code starts, let's open an HTTP file with requests we can use to test our code. The first request uses Dapr to post an event to the new topic of the pubsub component. Sending this request, we hit our first breakpoint. We can have a great F5 experience even with a Dapr-ized application. Let's press F5 to continue and send the GET request to read from the state store. Just as before, sending the request, we hit our other breakpoint so we can step through our code. Pressing F5, we see the output is the weather forecast we stored. Now that we know the code works, let's stop it and move our code to use resources in the cloud.
Because we're using Dapr, moving from on-prem to the cloud is as easy as defining new components. In the Azure components folder are definitions that use Azure Service Bus for pub/sub and Azure Storage for our state store. Notice the names of the components are the same as our original components, so our code does not have to change. We just need to tell Dapr to use the components in this folder instead of the defaults. We do that in the tasks.json file that was created by the Dapr extension. We just add a line that tells Dapr to use the Azure components instead.

With that change in place, we can press F5 to start debugging. When the code starts this time, it will point at the components in Azure. We can use the same HTTP file, pointing at localhost and port 3500, but our components are now in Azure. Let's start by running the GET request first, which should return null, as we are no longer pointing at the Redis running locally. Sending the request, we hit our breakpoint as before. Pressing F5, we get null back as expected. When we send the POST request to publish our event to the new topic, our other breakpoint is hit. Finally, let's run the GET request again to confirm our weather forecast was saved. The weather forecast returned was read from Azure Storage. Let's jump over to the Azure portal to see the value in our storage account. Using the Storage Explorer, we can navigate to the weather table and see the item. This demo shows how easy Dapr makes it to turn any service into a stateful, event-driven service and move it from your dev machine to the cloud.

Awesome. Thanks, Donovan. What I want to do now is build on what Donovan has shown you. I want to switch over to a more real-world microservice application and show you how you can continue to take advantage of what Dapr gives you out of the box, and add another tool to your tool chest to help make developing microservices even easier.
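As a sketch of what that swap looks like: the Azure definition keeps the same metadata.name, and only the type and connection details change (the connection string below is a placeholder):

```yaml
# Hypothetical Azure Service Bus pubsub component; the name is still
# pubsub, so the application code does not change at all.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.azure.servicebus   # was pubsub.redis locally
  version: v1
  metadata:
  - name: connectionString
    value: "<your-service-bus-connection-string>"
```

The state store component gets the same treatment, keeping the name statestore while pointing at Azure Storage.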
We have a simple microservice application designed to process and grade the sentiment of tweets. There are three main microservice APIs. You have your Twitter handler, which we call the provider API; your sentiment processor, or the processor API; and the tweet viewer, or the viewer API. All three of these components connect to a Twitter binding using Dapr, instead of having to go fetch the Twitter SDKs or figure out what the Twitter API even looks like. We also store state inside of Azure, but you could store it in Redis, MySQL, or wherever you prefer. We also have pub/sub so the data can move freely between our different microservices. The best part is, Dapr simplifies all of this.

Now, let's complicate this just a little. What happens if you need to debug one microservice in the context of your larger application, or in the context of abstracted services running in Kubernetes? Have you ever wished you could just take your computer, put it in the cloud, and debug just as you would locally, especially if you're only debugging one small part of your app? I want to demo something called Bridge to Kubernetes, which is going to let you do exactly that.

We've all been there. We've all made changes that we were 100% confident in. We've linted, we've tested, and all that jazz. Our local checks passed, we commit, push, and open a PR. Our PR workflow fires off and everything is green. Approvals happen and we hit merge. A new workflow fires off and successfully deploys out to a development environment. But at some point in the process, after we have merged our changes but before those changes deployed to production, we noticed something broke. Oh, gosh. Uh-oh. Did we just merge broken code into our main branch? Wouldn't it be great if we could visualize our changes prior to merging to main? It turns out we can. You'll notice I have a PR open and I've utilized a GitHub Actions task to add a comment to my pull request.
Between the comment bot and Kubernetes, I can take advantage of things like role-based access control and namespaces to create an isolated deployment environment for my pull request in my development cluster. Let's take a look at these changes now. Uh, okay. That's odd. I'm supposed to see tweets here with sentiment scores. I'm going to have to debug this. But the problem is I only made a change to a backend API, my Twitter handler, or what I call my provider.

I'm going to flip over to my preferred editor of choice, Visual Studio Code. You can see that I already have my provider code up. Now, if I want to debug this locally, I know I could stand everything up, all three services, connect to our state store and pub/sub, and get everything to work. But honestly, I don't have time for all that. When our PR workflow fired off, our app was deployed into a separate, dedicated namespace for these changes only. This allows me, as a developer, to use that namespace as a kind of sandbox for my development. I'm going to, in essence, put my computer in the cloud so I can debug in real time in an isolated fashion.

Now, you may be wondering, how exactly are you going to do that? I am so glad you asked. I installed an extension called Bridge to Kubernetes that allows me to replace the provider code currently running in Kubernetes in my private namespace with code on my system. The best part is I don't have to deal with Helm charts or Dockerfiles to do it. Once I have the extension installed on my system, it's very simple to use. All I have to do is hit Command-Shift-P because I'm on a Mac; if you're on Windows, you would hit Ctrl-Shift-P. From there, search for Bridge to Kubernetes. I'm going to opt to configure. Now, as long as I have access to the Kubernetes cluster I'm working in and the namespace I've deployed to, this extension is going to connect over to my application currently running in that Kubernetes cluster.
I'm going to choose the service I want to redirect traffic for. Again, this is going to allow me to replace the broken service with code that's living on my system, so I can debug natively. You'll notice we have several different services here that end in -dapr. All of these services have a Dapr sidecar, but I'm still able to debug my individual application with a native F5 experience without interfering with Dapr whatsoever. I'm going to choose my provider service, since that's the one I want to debug. And I know my application is listening on port 3001, because I can see that right here on line 12.

Next, I'm going to choose Launch Program. This is going to launch the same configuration that I would use to debug locally, but it's going to launch it with an additional task, and that additional task is what connects to my Kubernetes cluster. I'm going to say that I want to redirect all incoming requests to my machine, including those from other developers. The reason why I'm okay with this in this instance is that this is my own isolated namespace that I'm working in for my pull request. You would never want to redirect traffic in a production environment, or anywhere you could take down whatever you're currently serving to your end users.

Now, what that did is update my launch configuration with a new configuration that allows me to debug using Kubernetes. You saw a similar example of that in Donovan's demo, where he showed you his launch config with Dapr. Only now I can keep my previous debug configurations and add another for connecting to Kubernetes. Now I can go over to my debugger, make sure that my Launch Program with Kubernetes configuration is selected, and hit F5. From there, the Bridge to Kubernetes extension is going to do all the magic for me. It's going to redirect the provider-dapr service to my computer, and a better way to visualize this is exactly what I've said several times now.
I am literally putting my computer, right now, in that namespace in my Kubernetes cluster in the cloud. This application, as we saw earlier on our slides, has Dapr components, our viewer API, our processor service, and our provider, which is the only one I need to debug. Let's set a breakpoint right here on line 66. Great. I'm going to go ahead and refresh our webpage to force the processor to begin looking for tweets.

Oh, boy. This is awkward. Demo fail. I'm not hitting my breakpoint. I'm going to check Zipkin and see if there are any tweets that can be traced. Nope. No dice. Something else must be wrong. Let's take a look at the... Oh, gosh. Turns out I cannot spell. I spelled tweets wrong. All right. I'm going to fix that. I'm going to restart my debugger. Bam. I just hit my breakpoint. All right. I'm going to go ahead and step through. We should see a tweet show up on our viewer. Awesome. And I can see that I even have access to IntelliSense, which means I can see the full body of the request right here in Visual Studio Code, just as I would if I were running all of these services locally without Kubernetes, Helm, or Docker.

All right. I'm going to remove the breakpoint, and then I'm going to check Zipkin again real quick. Yep. I can now trace the tweets being processed as well. By the way, thanks to Dapr, I get this observability and logging through Zipkin all for free, right out of the box. I can track every single one of these tweets from the moment it enters our application, to every microservice inside our application, all the way to being shown in our viewer. I knew something was wrong when Zipkin didn't have anything to trace. Of course not. Tweets weren't even hitting our provider service, which is the beginning of the path each tweet has to take through our application. Now, thanks to Bridge to Kubernetes, I am 100% confident this is going to work. Let's get this checked in, and I'm going to stop my debugger. All right.
I'm going to head on over to GitHub and check to see what's happening with our pull request. Awesome. It started our pull request workflow. Now, while that's running, I'm going to take a look at a previously completed pull request so I can see the steps this workflow runs through. You can see that we start with the obvious: we check out code from our main branch. Then we log in with the Azure CLI, because that happens to be where our components live. I have my Azure Storage, my secret stores, my Service Bus, and of course my Kubernetes cluster. I'm making sure that, as a part of this pull request, we deploy our infrastructure using Bicep, because after I merge this PR, all of that infrastructure is going to get cleaned up, and this way I'm not paying for anything I don't need.

Now, here's what's also cool. It's going to install Dapr if it's not already installed, and to do this, it's using a native GitHub Actions setup-dapr task. Next, I'm going to build and push that single provider image. I'm not touching any other component or image or anything, okay? I'm going to generate a valid namespace for use with this pull request. This is the name that I'm going to use for my isolated environment, or namespace, on this development cluster. Finally, notice I can use Helm just like I normally would. Only this time, I'm using it to deploy a Dapr app. I'm going to install this PR using my charts directory, and that's going to deploy out any dependencies that it needs, but it's only going to update, again, the service and image for the provider API. The last little bit of magic is adding a comment, or a dash of confidence, as I like to call it, to the PR with a link to go view these changes. Again, this saves us from accidentally checking broken code into our main branch. Let's flip back over to our pull request and take a look at the conversation.
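A rough outline of such a PR workflow might look like this; the action versions, chart path, and naming scheme below are assumptions, not taken from the demo:

```yaml
# Illustrative PR workflow outline; names, versions, and paths are placeholders.
name: pr-environment
on: [pull_request]
jobs:
  deploy-pr-environment:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: azure/login@v1
      with:
        creds: ${{ secrets.AZURE_CREDENTIALS }}
    - name: Install the Dapr CLI
      uses: dapr/setup-dapr@v1
    - name: Deploy the PR into an isolated namespace
      run: |
        NS="pr-${{ github.event.number }}"
        kubectl create namespace "$NS" --dry-run=client -o yaml | kubectl apply -f -
        helm upgrade --install "provider-$NS" ./charts --namespace "$NS"
```

Deriving the namespace from the PR number keeps each pull request's environment isolated and easy to clean up after merge.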
Yep, sure enough, I can now see that I have a new comment with the same exact link as earlier. I'm going to go ahead and click that. Fantastic. I can see that I have tweets properly coming in now, and I'm going to do a quick sanity check and run a query in Zipkin. Perfect, everything looks good. This is working exactly as expected. Bridge to Kubernetes and Dapr have made building and debugging microservices so incredibly easy.

Thanks, Jessica. Great job. The community momentum around Dapr is incredible. We released 1.0 in February, and it was submitted to the CNCF in March. It is also great to see the support from organizations around the world. If you'd like to join the community, you can use any of these links. You can also follow us on social media and check out the Dapr eBook. Thank you so much for spending this time with us, and we hope to see you building incredible cloud-native applications with Dapr.