Hello everyone, welcome to my session Dapr Unleashed: Accelerating Microservice Development. First of all, I would like to thank everyone behind Dapr Days 2024. I think it's a great event and I'm really honored to be part of this selection of great sessions. I'm really looking forward to following up on all the sessions because I think they're very relevant to the topic. My session is about the practical implementation and application of Dapr in real projects. I would like to give a developer's perspective on how to use Dapr in a real development team, and how Dapr can help that team produce reliable projects faster and, of course, with better quality. It's very difficult to explore the whole journey in 30 minutes, so I will focus on the most important aspects from my perspective. Of course, you are very welcome to reach out with more questions and discussion. Before I dive in, I would like to introduce myself. I am Miroslav Janowski, Technical Director at INE Group. INE Group is a Nordic champion in providing IT solutions for the industrial world. As a technical director, I'm responsible for cloud, web and mobile development. I hold an engineering doctorate from the Eindhoven University of Technology. I have almost 17 years of experience with .NET, a love-and-hate relationship, though much more love than hate. I've had different titles in my software engineering career, starting as a junior engineer, then technical manager, and now technical director. Okay, now that you know me, we can start. The agenda for my session is the following. First, I will introduce a case study, a real project that has been simplified for the purpose of this presentation. This project originally started without Dapr, and due to our curiosity and passion for software engineering, we explored Dapr and its possibilities.
Once you are introduced to the case study, I will explore the non-Dapr version of the project and then I will Dapr-ize it. Along the way, I will show the C# source code and point only to the most important code snippets from the Dapr perspective. Finally, I will elaborate on what went well from my team's perspective and what our next steps are. The case study: build an AI-supported service for EdTech assignment evaluation. For one of our favorite customers, we had to build an AI-supported service for formative assessment of students' products and homework, which also gives suggestions for further development. Our customer is an EdTech startup that wants to completely change the education process by establishing creative and engaging learning for a better world. The customer has an ecosystem of several different web applications for creativity and collaboration. With these applications, the students can create different types of content: videos, books, mind maps, and so on. And of course, they can collaborate on this content interactively. INE Group is a technology partner for this customer and we are responsible for their accelerated agile development. Very recently, inspired by the great success of GPT, the customer wanted to create an AI-driven module for formative assessment of student products, which also gives suggestions for their further development. Very often, the students have no inspiration and they get stuck in their creativity. So we would like to use GPT and AI to inspire them so they can be more creative and more productive. The idea is that the students create content of various types. Basically, they can use all the different products from the startup to create audio, video, mind maps, text, and so on. Then they can ask GPT for feedback or for further development of ideas.
For that purpose, we have to transform everything the student has produced into text, then create a prompt and send it to GPT. Since this is a startup, we know that everything must be production-ready from day one. That means at least scalability, resilience, and security have to be treated with the same priority as the functional requirements. Therefore we went for a microservice architecture from the very beginning. We identified three microservices, or three microservice applications. The first one is the ingestion service. The ingestion service is responsible for receiving requests from students and teachers. It has one single HTTP endpoint, and when a request is received containing all the multimedia content (text, audio, video, and so on), it is stored into a common prompt storage. Then the request is sent by reference to the next microservice, via a queue of course. The transformation service is the one in the middle. It is responsible for transforming the request by converting everything non-text into text. This service uses a combination of various cognitive services as well as domain knowledge. Once the request has been transformed, the final result is stored into the common prompt storage again, and also sent further by reference to the next microservice. The final microservice is the extraction service, and it is responsible for creating a prompt, including domain knowledge, and sending that prompt to the GPT service. Once the result is ready, it is stored into the common prompt storage and forwarded back to the requester. One comment about the common prompt storage: you might argue that this is not a valid microservice architecture, because all three microservices share the same database, or the same data storage.
In this case, that is not really a valid argument, because we use a serverless prompt storage, Cosmos DB, a technology with autoscale that can support, in theory, infinite scalability and avoid the noisy-neighbor effect. So even if there is a noisy microservice (ingestion, transformation, or extraction), it doesn't really matter, because Cosmos DB handles that out of the box. This is the current architecture implemented in Azure, and as shown now, it runs in production at this moment. You can see we have the three microservice applications implemented in .NET, and they run in Docker containers using Azure container services. We have Azure Service Bus queues between them and Cosmos DB as the common storage. Of course, there are many more services in use, like the cognitive services, OpenAI, Application Insights, Key Vault, Azure Monitor, and so on. Before we dive into the non-Dapr version source code, I want to emphasize some important points for this session. Namely, in the non-Dapr version, all the microservices must know how to connect to Azure Key Vault, how to read from and write to Azure Cosmos DB, and how to send and receive requests via Azure Service Bus queues. Basically, all three services have to have these three pieces of know-how in order to complete their tasks. This common knowledge manifests as boilerplate source code that is dragged along through the application lifecycle. As such, it requires continuous maintenance, updates, and tests, otherwise it will become technical debt. Okay, now let's see the non-Dapr version source code.
One note here: for the purpose of this session, I have all the projects living in a single repository, while the production version of the real project has a multi-repo organization, where each microservice lives in its own repository and has its own CI/CD pipelines. Okay, in the non-Dapr version, we have five projects. There is the API project, which is the ingestion service, responsible for receiving requests. We have the transformation service, we have the extraction service, and we have two additional projects: the domain model project, which holds all the domain entities we need for this solution, and the common service, which implements the common services needed across the microservice applications. In the common service, we have the implementation of how to store values in and read values from Cosmos DB. And we have a queue service that is responsible for sending items into the Azure Service Bus queue. Importantly, this project is shared among all three microservice applications, and it is the one that holds the knowledge of how to use these third-party services. Okay, the API project has only one controller, and what it does is wait for new requests. When a new request comes in, it executes the internal prompt service, and that service adds some logs, of course, then stores the request via the storage service and sends the request into the queue. Then we have the transformation service. The transformation service is a console application that continuously listens for new changes on the queue, and whenever there is a new item on the queue, it transforms it.
So it gets the queue message, reads all the details from the storage service, does some magic, of course, then updates the logs and state, stores everything back to the storage, and sends it to the next queue in the pipeline. And of course, we have the extraction service, which is pretty similar. It has the extraction service implementation in it: it waits for a queue message, and whenever there is a new queue message, it gets all the details from the storage, executes an OpenAI call, updates the logs and state, and of course stores the final result into the storage service again. As you can see, all three services have the initialization and injection of the common service, and that is implemented here. In the API, we have these two methods, RegisterStorageService and RegisterQueueService. What they do is create a new Key Vault client, read the Cosmos DB connection string, create a new database connection, and inject that into a storage service. It is very similar with the queues: again, we read the Key Vault name, create a Key Vault client, read the Service Bus connection string, and create the queue service. So we end up with around 30 lines of code used just for creating and injecting the common services. It's pretty much the same with the other two microservices. In the transformation service, we have the same case: InitializeAsync, which is again around 30 lines of code used for creating the common services and their injection. And of course, we have the same thing in the extraction service. One thing that I want to note here is the docker-compose file. The docker-compose file is quite simple, because we have three microservices, represented by three Dockerfiles, and these three are wired up in the docker-compose file.
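To make the boilerplate concrete, here is a minimal sketch of what such a registration method can look like. The method, class, and secret names are assumptions for illustration, not the project's actual code.

```csharp
// Hypothetical sketch of the ~30 lines of non-Dapr wiring each microservice carries.
// Assumes the Azure SDK packages Azure.Identity, Azure.Security.KeyVault.Secrets
// and Microsoft.Azure.Cosmos; all names and secret keys are illustrative.
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;
using Microsoft.Azure.Cosmos;
using Microsoft.Extensions.DependencyInjection;

public sealed class StorageService
{
    private readonly Container _container;
    public StorageService(Container container) => _container = container;
    // ... read/write methods against Cosmos DB ...
}

public static class CommonServiceRegistration
{
    public static async Task RegisterStorageServiceAsync(
        IServiceCollection services, string keyVaultUri)
    {
        // 1. Connect to Azure Key Vault.
        var secrets = new SecretClient(new Uri(keyVaultUri), new DefaultAzureCredential());

        // 2. Read the Cosmos DB connection string from the vault.
        KeyVaultSecret secret = await secrets.GetSecretAsync("cosmosdb-connection-string");

        // 3. Create the Cosmos DB client and wrap it in the shared storage service.
        var cosmos = new CosmosClient(secret.Value);
        var container = cosmos.GetContainer("prompts", "items");
        services.AddSingleton(new StorageService(container));
    }
}
```

A near-identical RegisterQueueService does the same dance for the Service Bus connection string, which is why every microservice drags along roughly 30 lines of this wiring.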
Okay, so that was the non-Dapr version source code. The next step is to Dapr-ize this solution. So let's do it. Before I jump into the source code, I would like to elaborate on the steps I'm going to take. In the non-Dapr version, as you have already seen, all three microservices have to carry the implementation of the common services, the services that implement talking to Cosmos DB and to the Azure Service Bus queues. In the Dapr version, we will get rid of the common services. That is the first and most important benefit of the Dapr-ization: we remove the common services that implement how to talk to Cosmos DB and how to send and receive requests on the Service Bus queue. Instead, we get a Dapr sidecar container next to each of our three microservices. So the Dapr-ization starts with one big deal: we don't need the common services anymore. Technically, in source code terms, that means one project less, no service initialization, and no injection, so no more of those 30 lines of injection that we had to copy from one microservice to another. And of course, no third-party SDK dependencies: our solution will no longer depend on the SDKs for Cosmos DB and Azure Service Bus. We reduce the source code, but on the other side, we have to increase the configuration: we introduce the Dapr components, and we get a more complex docker-compose file, at least for local development. When I was explaining the Dapr transition to the developers on my team, I put it this way: we basically replace source code with configuration. That source code is pretty much boilerplate that we still have to maintain, and if we don't, it becomes technical debt. With Dapr, we get rid of that boilerplate source code that can become a problem after some time.
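The configuration side of that trade-off, a Dapr sidecar per microservice in local docker-compose, can be sketched roughly like this. Service names, ports, and paths are assumptions for illustration, not the project's actual compose file.

```yaml
# Hypothetical docker-compose sketch: each app service gets a daprd sidecar,
# so three microservices become six compose services for local development.
version: "3.9"
services:
  ingestion:
    build: ./Ingestion
    ports:
      - "5000:5000"

  ingestion-dapr:
    image: daprio/daprd:latest
    command:
      - "./daprd"
      - "--app-id"
      - "ingestion"
      - "--app-port"
      - "5000"
      - "--resources-path"
      - "/components"
    volumes:
      - ./components/:/components            # shared Dapr component files
    network_mode: "service:ingestion"        # sidecar shares the app's network
    depends_on:
      - ingestion

  # transformation + transformation-dapr and extraction + extraction-dapr
  # follow the same pattern, for six services in total.
```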
And we move that complexity into configuration and config files. I mentioned that this is running in Azure and that we use Azure as our cloud provider, so all these services are implemented with Azure. Our three microservice applications each have a Dapr sidecar running next to them that knows how to communicate with Key Vault, how to read and write items to Cosmos DB, and how to send requests to the Service Bus queue. So now let's see the Dapr version of the source code. As I already mentioned, we don't have the common service anymore, and that is one big win. We have removed the queue service and the storage service, and we have also removed their dependencies. I will now open the API, the ingestion service, and open the dependency injection in the Program.cs file. We removed all those 30-something lines of code where we create new instances of the queue service and the storage service and inject them, and instead we have only one single line: AddDaprClient. Going further, in the prompt service, instead of injecting the queue service and the storage service, we inject only the Dapr client. When we process new prompts, there is no big difference in the number of source code lines, but there is a difference in how we save prompts and how we send them to the next queue in the pipeline. Okay, another big transformation is in the type of the project. In the non-Dapr version, the extraction service and the transformation service are console applications. In the Dapr version, they are APIs. Why? Because now the Dapr sidecar container takes over the responsibility of continuously listening for new items on the queue.
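As a sketch of what that single registration line and the DaprClient calls can look like, assuming the Dapr .NET SDK (Dapr.AspNetCore package); the component names, topic name, and Prompt type here are illustrative assumptions:

```csharp
// Hypothetical sketch of the Dapr-ized wiring. "promptstore", "promptpubsub"
// and "transform" are illustrative component/topic names, not the real ones.
using Dapr.Client;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprClient();   // the one line replacing ~30 lines of wiring
builder.Services.AddControllers().AddDapr();
builder.Services.AddSingleton<PromptService>();

var app = builder.Build();
app.MapControllers();
app.Run();

public sealed record Prompt(string Id, string Type, string Content);

// The prompt service now talks to the sidecar instead of the old common services.
public sealed class PromptService
{
    private readonly DaprClient _dapr;
    public PromptService(DaprClient dapr) => _dapr = dapr;

    public async Task ProcessAsync(Prompt prompt)
    {
        // Save to the Cosmos DB-backed state store; the partitionKey metadata
        // maps to the Cosmos DB partition key (here: the prompt type).
        await _dapr.SaveStateAsync(
            "promptstore", prompt.Id, prompt,
            metadata: new Dictionary<string, string> { ["partitionKey"] = prompt.Type });

        // Publish by reference to the next queue in the pipeline.
        await _dapr.PublishEventAsync("promptpubsub", "transform", prompt.Id);
    }
}
```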
Whenever there is a new item in the queue, the sidecar makes an HTTP call to our application to inform us that there is a new item. So the transformation service is now a web API solution, and it has a prompt controller. Whenever there is a new item in the queue, the Dapr sidecar container detects that item and sends it via HTTP to our prompt controller. The definition here says that this controller will accept new items from the pub/sub component, from the transform queue. Whenever we have a new item there, it is sent to the transformation service. The transformation service itself is also simplified, though not too much, because the difference is in the injection: we don't inject the queue service and the storage service, we inject the Dapr client. Of course, the other difference is that we don't call the storage service and the queue service; we call the Dapr client to save state and to publish events. The same goes for the extraction service, so I will skip that elaboration. What I do want to elaborate on is the docker-compose file. As I mentioned, we have a more complex docker-compose file. Why? Because for each of our services, we now have to register the sidecar, the Dapr container. So instead of three, we now have six services registered in the docker-compose file. Again, this is for local development; for cloud deployment, there are different configurations that are not part of this session. I would like to emphasize a couple of things about the components. I really like the simplicity and configurability of the components. They are very simple to use and work out of the box. This is the Azure Key Vault component. And, for instance, in the pub/sub component, this is how we use the secret store.
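A minimal sketch of such a subscription controller, assuming the Dapr .NET SDK's [Topic] attribute; the pub/sub name, topic, route, and TransformationService are illustrative assumptions:

```csharp
// Hypothetical sketch of the transformation service's prompt controller.
// Assumes Dapr.AspNetCore; the sidecar POSTs each queue item to this endpoint.
using Dapr;
using Microsoft.AspNetCore.Mvc;

public sealed class TransformationService
{
    public Task TransformAsync(string promptId) => Task.CompletedTask; // stub
}

[ApiController]
public class PromptController : ControllerBase
{
    // Subscribe to the "transform" queue on the "promptpubsub" component;
    // Dapr delivers each new item as an HTTP POST to this action.
    [Topic("promptpubsub", "transform")]
    [HttpPost("/prompts/transform")]
    public async Task<IActionResult> OnNewPrompt(
        [FromBody] string promptId,
        [FromServices] TransformationService transformer)
    {
        await transformer.TransformAsync(promptId);
        return Ok(); // a 200 tells the sidecar the message was processed
    }
}
```

Note that the host also has to call app.MapSubscribeHandler() so the sidecar can discover the subscriptions declared by the [Topic] attributes.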
So we tell it to use this secret store, and we say that the connection string for this queue is a secret key reference into the secret store. This is very elegant and very simple to use. And of course, this is the prompt store, the Cosmos DB configuration. One thing that I want to emphasize here is the keyPrefix feature. With the keyPrefix feature, we were able to reuse the same prompt store, and also the same items that we store in it, from all three microservice applications. The keyPrefix determines whether we share the items across all the microservices or not. What we wanted to achieve is that our prompts from the ingestion service have to be available and reachable for the extraction service and the transformation service as well. By using this feature, we enabled that in a very elegant way. One more thing that I want to mention here is how Dapr utilizes the metadata, specifically for the partition key. The partition key is a very important thing to know, especially when you design a new Cosmos DB domain model. Dapr enables you, through the metadata, to set the proper partition key in the SaveState call. This is very good and very efficient. In our case, we decided to use the type as the partition key: when we get a new prompt, we take the prompt type, and when we call SaveState, we set the partition key metadata to that type. That's how we store the new items into the storage service. So that was the source code overview of the Dapr version. Because of my R&D background, I really like to get deeper into the numbers every now and then. So I did static source code analysis on both projects, with Dapr and without Dapr.
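The component files described above can be sketched roughly like this; the component names, vault, account, and database values are assumptions for illustration, not the project's actual configuration.

```yaml
# Hypothetical Dapr component files; all names and values are illustrative.

# Secret store backed by Azure Key Vault.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "my-prompt-vault"
---
# Pub/sub backed by Azure Service Bus; the connection string is not inlined
# but pulled from the secret store via secretKeyRef.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: promptpubsub
spec:
  type: pubsub.azure.servicebus.queues
  version: v1
  metadata:
  - name: connectionString
    secretKeyRef:
      name: servicebus-connection-string
      key: servicebus-connection-string
auth:
  secretStore: azurekeyvault
---
# State store backed by Cosmos DB. keyPrefix "none" drops the default
# app-id prefix, so all three microservices can read each other's items.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: promptstore
spec:
  type: state.azure.cosmosdb
  version: v1
  metadata:
  - name: url
    value: "https://my-account.documents.azure.com:443/"
  - name: masterKey
    secretKeyRef:
      name: cosmosdb-key
      key: cosmosdb-key
  - name: database
    value: "prompts"
  - name: collection
    value: "items"
  - name: keyPrefix
    value: "none"
auth:
  secretStore: azurekeyvault
```

The per-request partition key is then supplied as SaveState metadata under the documented "partitionKey" key of the Cosmos DB state store.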
I was following three parameters: maintainability index, cyclomatic complexity, and lines of executable code. On the maintainability index, we did not get a significant change; we got pretty much the same numbers. Why is that? Because these projects don't have the business logic inside them, so it's quite clean and pristine source code. On the cyclomatic complexity, we got around a 10% reduction, which is quite good and I'm really happy with that. I think the main reason is that we removed the dependency injection for the common services. And the most important thing is the reduction in the lines of executable source code. We got around a 40% reduction for this project, which is a great number, but of course this project is not the real project and does not have all the business logic implemented in the real project. For the real project, we get around a 22% reduction in the lines of executable source code. But in that solution, we don't use Dapr, for instance, for the AI component, for the communication with the OpenAI service. Now I know that there is a Dapr component for that, so with more services and more components coming to Dapr, I expect even more reduction on real projects. So what went well? First of all, the reduction in source code; for me and for my team, that means less technical debt, or at least less source code to maintain. We achieved accelerated development with a minimal learning curve. Of course, the developers have to understand what is going on, how we are going to utilize Dapr, and the whole logic, but once they get that, it's easy to continue. I'm really happy with Dapr because it's a set of the best engineering patterns and practices. The sidecar pattern alone is something very well established in our world.
There is a simple and elegant secrets implementation, leveraging technology-specific features like the Cosmos DB partition key and the keyPrefix. What I also like is that the abstraction is on the source code level, not on the architectural level. What I mean by this is that if you decide to use, for instance, Cosmos DB like we did, you still have to be familiar with Cosmos DB and know how to design a database in it; you just don't have to write the plumbing source code. So you get less source code, but you still have to be aware of how to use the technology you select. That is expected, but I still really like how it's implemented. What's next? As I mentioned, first of all, we will explore more non-functional requirements like resilience, observability, and security before we decide to move to production with the Dapr version of the project. There are new components coming to the ecosystem, like the OpenAI component, that we would like to utilize and benefit from. I'm curious to combine Dapr with Radius and Aspire, and there is already a session at this conference about Radius and Dapr, which I'm really happy about. And of course, I would like to explore the cloud deployment options. One thing we did was try Azure Container Services, and we noticed that there is a difference in the component format, so we would like to see how that will go. If you have questions, feel free to reach out to me, either through LinkedIn or through email. The source code is on my GitHub. Feel free to reach out, as I said. Thank you very much.