Welcome everyone to our session on implementing Dapr in an existing environment. Today we'll take you through how we used Dapr to improve an existing event-driven microservice landscape at AgroFirm. We'll discuss the choices we made, the ways Dapr has helped us, and give you some pointers on how Dapr may help you improve your own landscape.

First of all, I want to give a quick introduction to our company. We are IT consultants at InfoSupport, a leading provider of IT consulting and software development. We specialize in delivering custom solutions for customers, including software development, data management, cloud services, and more. With a strong focus on innovation and collaboration, we help organizations achieve their digital transformation goals and stay ahead in today's rapidly evolving technology landscape.

Next, an introduction to AgroFirm, one of our customers. AgroFirm is an agricultural cooperative specializing in providing practical solutions and support for farmers. With a focus on precision agriculture and sustainability, AgroFirm offers services ranging from animal nutrition to crop solutions. About a year and a half ago, we took over a project at AgroFirm where the main goal was to implement e-commerce applications and a self-service portal to replace the old systems. The entire platform consists of multiple customer-facing applications and about 100 microservices. To accomplish this, we decided to utilize Dapr. In today's session, we'll give you insight into the work we have done and the choices we've made, and show you how Dapr has helped us implement an event-driven microservice architecture.

A quick introduction about us: joining me today is Mika Kozbek, and my name is Stan Ilota. We both started at InfoSupport about three and a half years ago and started at AgroFirm about a year and a half ago. During that year and a half, we've gained about one year of experience with Dapr.

What are we going to talk about today? First, we'll tell you why we chose Dapr: what led up to us choosing it, and which benefits drew us towards it. Next, we'll talk about how we implemented Dapr. We'll cover how we used the Dapr Helm chart in our Kubernetes cluster, which input and output bindings we use with multiple messaging solutions, how we implemented the state store using Redis, and how we manage our secrets using managed identities and the secret store component. Lastly, we'll give you some takeaways on what we've learned and what might help you.

First, then, some background on why we chose Dapr. When we joined the project about a year and a half ago, development had already been underway for a couple of years. During the initial implementation, a previous team had decided on Akka.NET, a .NET implementation of the Akka actor framework. While Akka.NET is a powerful tool that certainly has its use cases, a lack of governance and a lot of over-engineering caused the landscape to become overly complex, prone to errors, and hard to navigate and govern. Early on, we decided to overhaul a lot of these services, eventually hoping to replace them, and most of the existing landscape, with new components. To accomplish this, we would need to rebuild a lot of existing integrations that were heavily dependent on Akka.NET, as well as Akka.NET-specific integrations, logic, and interfaces.
Aside from that, we wanted to keep our options open and have the possibility to replace existing message brokers or integrations with alternatives. Dapr would give us the ability to focus on implementing the actual business logic of our application without having to implement best practices for every existing component in our landscape ourselves.

To give you an idea of what our landscape used to look like: in this example, you can see the services required to place an order in the shop. During this process, a total of 12 services were involved, using multiple events. We made a total of three API calls, which we sent through Azure API Management to SAP. Apart from that, we exposed a webhook to SAP to get updates on any changes to products. During the process, we also used external integrations like SendBrit to send emails to users and SignalR to asynchronously update clients about changes to orders. This process was far too complex for what we needed to accomplish, so we decided to replace a big part of these services, combining them into fewer, bigger services with a more logical separation; at the same time, we wanted to step away from Akka.NET. We mainly used Dapr for service-to-service communication and caching via Redis, but we might utilize more of Dapr in the future.

All right. Next, I'd like to talk to you about how we deployed Dapr on our Azure Kubernetes environment. At AgroFirm, we use AKS to run and deploy our sites and services. Of course, to run Dapr on this environment, we also need to install all of the basic building blocks for Dapr. To do this, we decided to use the Dapr Helm chart. This is a Helm chart officially provided by Dapr which, as you can see here, can be found in the GitHub repository. It provides you with the standard control-plane components, like the placement server, the sidecar injector, and Dapr Sentry for security. We decided to go with the Helm chart because of our existing CI/CD workflow: we already use Argo CD, an open-source tool for GitOps deployments, to deploy all of our sites and services with Helm charts, and the Dapr Helm chart fit pretty well with that way of working.

Getting the Dapr Helm chart to work with Argo CD was quite easy; only two minor changes were necessary to get it to run properly. It turned out that Argo CD was quite aggressive in killing the Dapr services if they didn't start up within a certain amount of time. As you can see here at the top, we had to increase the timeouts for Dapr's liveness probe and readiness probe so that it wouldn't be killed before it even had a chance to start. Secondly, our Argo CD instance didn't allow applications to run as root, which the Dapr placement service did by default, so we also had to turn that off with the setting here at the bottom.
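As a reference, here is a minimal sketch of what those two overrides can look like in a values file for the Dapr Helm chart. The exact key names follow the chart's documented options and may differ between chart versions, and the numbers are illustrative, so verify them against the version you deploy:

```yaml
# values.yaml overrides for the Dapr Helm chart (a sketch; check key
# names against your chart version)
dapr_placement:
  runAsNonRoot: true          # don't run the placement service as root
  livenessProbe:
    initialDelaySeconds: 30   # give the service more time before the probe fires
    failureThreshold: 10
  readinessProbe:
    initialDelaySeconds: 30
    failureThreshold: 10
```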
This is the way we deploy Dapr to our Kubernetes environment. If, however, you are also using AKS and want to run Dapr, I would highly recommend looking into the Dapr AKS extension, an extension for AKS that lets you achieve the same thing as the Helm chart. It's a different option; we at AgroFirm chose the Helm chart because, as I said, it fit our way of deployment, but also due to technical limitations. The extension is definitely a very viable option, though.

Next up, I want to talk to you about the state store component. We used this to bring Redis into our application landscape. We wanted to use Redis as a cache, especially in the services handling our shopping carts. We often have to retrieve those, they're quite big, we want to add items to them, update them, and show them to users, and getting them from our relational database took quite a bit of time. So we decided to implement Redis as a cache to speed that up, and we used Dapr to enable it.

What is the state store component? For those who don't know, the state store component allows you to do CRUD operations against a whole host of state management tools: think Cosmos DB, Azure Blob Storage, MongoDB, and in our case Redis. It also allows you to use transactions on those operations, which is quite nice. One of the benefits for us was that we didn't have to learn any fancy Redis SDKs or best practices; we could just use the Dapr client, which let us do everything we needed. We only needed CRUD operations, so that helped us out. If you do need more complex operations, the state store component might not have you covered, so take a look at the documentation for that.

How did we implement it? First of all, we created a new component using YAML, which is quite easy. We specified the Redis state store type, then gave the host of our Redis instance running on Azure, using the default URL together with the name of our service. Then there is a password, which is a secret key reference we retrieve from Azure Key Vault; I'll show you a bit later how we set that up. It's quite convenient that we don't have to handle any secrets or references to secrets ourselves; we can just rely on Dapr to do that for us.

Next, we built the actual implementation of the Redis cache. We used a repository pattern, as known from DDD. Basically, we check in the Dapr state store whether we can retrieve a certain value. If that value is not available, we just return null; if it is available, we deserialize it to the object we need, and later on we can use it in our repository. In the repository, we specify whether we want to use the cache, which we do by default. If so, we call the method you saw earlier to retrieve a value. If the value is not null, we can immediately return it from our cache. If it is null, we retrieve it from our normal DbContext, and when we do, we also make sure to save it to the cache. So say I'm in our frontend, I close it, and I open it again: I don't have to fetch everything from the database again, because from then on it will be in my cache. This makes it quite easy to implement caching without having to use any specific SDKs; you can just rely on Dapr.
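For reference, here is roughly what the component described above looks like. The component name, host, and secret names are illustrative, but the structure follows Dapr's Redis state store documentation:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore                 # illustrative component name
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: ourservice.redis.cache.windows.net:6380  # hypothetical Azure Redis host
  - name: redisPassword
    secretKeyRef:
      name: redis-password         # resolved from the secret store below
      key: redis-password
auth:
  secretStore: azurekeyvault       # the Key Vault secret store shown later
```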
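And here is a minimal sketch of the cache-aside repository just described, using the Dapr .NET SDK. The type and member names (ShoppingCartRepository, ShoppingCart, an EF Core ShopDbContext with a ShoppingCarts set) are illustrative, not AgroFirm's actual code:

```csharp
using Dapr.Client;

// Cache-aside repository: try the Dapr state store first, fall back to
// the relational database, and populate the cache on a miss.
public class ShoppingCartRepository
{
    private const string StoreName = "statestore";  // Dapr component name
    private readonly DaprClient _dapr;
    private readonly ShopDbContext _db;             // hypothetical EF Core context

    public ShoppingCartRepository(DaprClient dapr, ShopDbContext db)
    {
        _dapr = dapr;
        _db = db;
    }

    public async Task<ShoppingCart?> GetAsync(string cartId, bool useCache = true)
    {
        if (useCache)
        {
            // GetStateAsync returns default (null) when the key is absent,
            // and deserializes the stored value for us when it's present.
            var cached = await _dapr.GetStateAsync<ShoppingCart>(StoreName, cartId);
            if (cached is not null)
                return cached;
        }

        // Cache miss: read from the normal DbContext...
        var cart = await _db.ShoppingCarts.FindAsync(cartId);

        // ...and save it to the cache so the next read is fast.
        if (cart is not null)
            await _dapr.SaveStateAsync(StoreName, cartId, cart);

        return cart;
    }
}
```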
All right, that was the state store component. Now I'd like to talk to you about the Dapr bindings that we use. In this simple diagram, I'll show you the current setup at AgroFirm for sending and receiving events. As you can see on the left, we have Service A, which sends an event to Event Grid. Event Grid then forwards this event to all of the Azure Service Bus queues that are subscribed to it, and via those queues the events end up at, in this case, Service B and Service C.

Those are the two Azure resources we use, and for them we use two Dapr bindings: an input binding for Azure Service Bus, and an output binding for Azure Event Grid. Let's first have a look at the input binding. It's quite a simple component. We use the Azure Service Bus queues binding type (bindings.azure.servicebusqueues) to connect to the Service Bus, for which we provide the name of the queue and the namespace of the Service Bus. As you can see, this component contains no credentials and no connection strings. That is because we use Azure managed identities to connect to Service Bus, which, as you might imagine, is really useful: you don't have to worry about any credentials sitting in your code or anywhere else; it's all handled by Azure and Dapr.

Here is a simple code example of how we actually receive the events. The input binding forwards any event it receives to this HTTP endpoint; the two are matched based on the name, in this case just the queue name. We receive the event as the HTTP POST body, and from there on we can handle it as we see fit.

Next, let's have a quick look at the output binding. As said before, this connects to Azure Event Grid. In this case, Event Grid did not support managed identities, so we actually had to use the Event Grid access key and endpoint. However, as with the state store component, we used the secret store to get the Event Grid key, so we don't store it in the code. Let's also have a quick look at how we send events: we use the Dapr client and the generic invoke-binding action, passing the operation "create". This tells the component that it has to send an event to Event Grid. Then, as a parameter, we pass it a list of events.

The main advantage of sending and receiving events with Dapr is that we are basically free to change our sending and receiving technology at any point in time. Currently, as shown in the diagram, we use Event Grid and Service Bus, but because we use these bindings, we can easily swap those technologies. If at any point we decide to use RabbitMQ, for example, we can switch with almost no changes to the code. The only small change would be in this little piece of code, because Event Grid wants events in a specific format, the Event Grid schema, and that is something we provide here. For another event technology we'd have to change that up a little, but for the most part the code stays the same. So if we switch in the future, as we expect will happen at AgroFirm, we'll be ready for it.
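For reference, here is a sketch of an input binding component along those lines. The binding and queue names are illustrative, and the metadata keys follow Dapr's Azure Service Bus queues binding documentation; with managed identities, the namespace takes the place of a connection string:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: orders-queue              # binding name; must match the HTTP route
spec:
  type: bindings.azure.servicebusqueues
  version: v1
  metadata:
  - name: queueName
    value: orders                 # hypothetical queue
  - name: namespaceName           # managed identity: no connection string needed
    value: agrofirm-bus.servicebus.windows.net
```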
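The receiving side can then be as small as this sketch, an ASP.NET Core minimal API where the route name matches the component name above; OrderEvent and its shape are made up for illustration:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Dapr delivers events from the input binding as an HTTP POST to a route
// named after the binding component ("orders-queue" in the sketch above).
app.MapPost("/orders-queue", (OrderEvent evt) =>
{
    // Handle the event as we see fit, e.g. kick off order processing.
    Console.WriteLine($"Received event for order {evt.OrderId}");
    return Results.Ok();   // a 2xx response acknowledges the message
});

app.Run();

// Illustrative payload type; the real shape depends on the publisher.
public record OrderEvent(string OrderId);
```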
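The output binding component looks similar. Here is a sketch along the lines described, with the access key pulled through the secret store rather than stored inline; the names and endpoint are illustrative:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: eventgrid                  # binding name used from the Dapr client
spec:
  type: bindings.azure.eventgrid
  version: v1
  metadata:
  - name: accessKey                # no managed identity support here, so a key
    secretKeyRef:
      name: eventgrid-access-key   # hypothetical Key Vault secret
      key: eventgrid-access-key
  - name: topicEndpoint
    value: https://agrofirm-topic.westeurope-1.eventgrid.azure.net/api/events
auth:
  secretStore: azurekeyvault
```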
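And sending through it with the generic invoke-binding call looks roughly like this; the payload follows the Event Grid event schema mentioned above, and the field values are made up:

```csharp
using Dapr.Client;

var dapr = new DaprClientBuilder().Build();

// Event Grid expects events in its own schema, so we shape them here;
// this is the one spot to touch when swapping the underlying technology.
var events = new[]
{
    new
    {
        id = Guid.NewGuid().ToString(),
        eventType = "Shop.OrderPlaced",
        subject = "orders/1234",
        eventTime = DateTime.UtcNow,
        dataVersion = "1.0",
        data = new { orderId = "1234" }
    }
};

// "eventgrid" is the output binding component; "create" is the operation.
await dapr.InvokeBindingAsync("eventgrid", "create", events);
```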
Lastly, I want to talk to you about secret management. We've shown you how we implement our components, but we also want to do that in a secure manner, without passing secrets or connection strings around all over the place, which would increase risk. So we'll show you how we implemented that in our environment, and hopefully it'll help you do the same.

First of all, we use Azure managed identities, a resource in Azure that manages an identity for you. You can give it roles for certain resources in Azure, and then you can couple it either to other resources in Azure or, for example using Azure Workload Identity, to the services inside your cluster. By doing this, you also enable Dapr to use those managed identities: it can pick up, out of the box, that a managed identity is available on the service it's running on, and it will use that to authenticate to services unless you specify a different method, for example a connection string.

To show how that works, first here are the roles I assigned to my managed identity. I have a Service Bus topic and a few queues it's authorized for, as well as Event Grid and App Configuration. Next, I created an access policy on my Key Vault saying it can get and list all the secrets in the vault, using the application ID and object ID of the managed identity I'm going to use in my service, the same one I showed before with the roles. Then I can specify a component in Dapr where I say: I want a secret store, it has to be of type Key Vault, and here is the name of my Key Vault. Because I already assigned the access policy, Dapr can immediately pick up that it has this managed identity available, with the access policy underneath, so it will retrieve the secrets from the vault and make them available to all the other components and applications that want to use them.

To show an example of this, let's go back to the state store I showed before. At the bottom there is a section about authentication where we can specify a secret store, with the same name I gave my secret store over here, in this case the Azure Key Vault one. From there on, for any secret I need, for example the password for Redis, I can provide a secret key reference instead of a normal string, which is the name of my secret in the Key Vault, in this case the Redis password. That enables me to use secrets to authenticate to resources without ever having to know the actual secrets.

You might also want to use those secrets inside your application code, and you can do that quite easily too. In our .NET code, we add the Dapr client using the SDK, and then on the configuration we call the extension method AddDaprSecretStore with the name of the secret store, in this case the Azure Key Vault one, and an instance of the Dapr client. This enables our application to use those secrets everywhere. Do be aware that on startup you might have some issues using these secrets: if you retrieve secrets from Key Vault before the sidecar is available to serve them, you can run into a situation where neither your sidecar nor your application is healthy, and that can cause your application to crash, so be aware of that.
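For reference, here is a sketch of that Key Vault secret store component. With a managed identity and an access policy in place, the vault name is essentially all the component needs; the names are illustrative:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault        # referenced by other components via auth.secretStore
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: agrofirm-kv       # hypothetical vault; the managed identity on the
                             # pod supplies the credentials
```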
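And a sketch of wiring that into .NET configuration, using the Dapr.Extensions.Configuration package; the secret store name matches the component above:

```csharp
using Dapr.Client;
using Dapr.Extensions.Configuration;

var builder = WebApplication.CreateBuilder(args);

// Load secrets from the Dapr secret store into IConfiguration, so the rest
// of the application can read them like any other configuration value.
var daprClient = new DaprClientBuilder().Build();
builder.Configuration.AddDaprSecretStore("azurekeyvault", daprClient);

var app = builder.Build();
app.Run();
```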
Okay, that was a lot of information, so let's recap. At the start of this presentation, we saw the old situation at AgroFirm, with a lot of complexity and very little manageability, after which we implemented Dapr. For us as developers, the key takeaways were, first, that it became quite a lot easier for new developers to get started on projects at AgroFirm, the reason being that there is only one framework developers have to learn, Dapr, to integrate with almost any external service we have. Secondly, Dapr allows us to be very agile in our choice of technology: if at any point we would like to change any of the underlying infrastructure technologies, we can do so really easily, with almost no changes to the code base.

So with that, I'd like to thank you for your time. If you have any questions, feel free to ask them right now; we're happy to answer. And if you think of anything later, please contact us at our email addresses or on LinkedIn. Thank you.