OK. Hello everyone. Thank you very much for coming along to this talk on Dapr, the Distributed Application Runtime. I'm going to explain what Dapr is, show you how you can follow along, give you some examples of Dapr actually in use, and then we have some demos as well. This whole talk is actually being run through a Jupyter notebook, which means that if you want to run the code samples, they're mixed in. So you can scan the QR code, or you can fork the repository later, and I will get started. At the end there will be some time for questions, and we'll see how the demos go. For the demos you will need Docker installed. We've probably passed the point with the Wi-Fi of this venue where you can install Docker, so if you don't have it, they unfortunately won't work. But yeah, that QR code will work whenever I'm not covering it with text; unfortunately that's one of the limitations, my CSS isn't that great. You'll also need Dapr. There are instructions there to install Dapr on Linux, on Windows, or on a Mac. And if you are following along in the notebook, you can just push Ctrl-Enter or Cmd-Enter to run any of the commands and code bits. I already have Dapr installed, so I'm going to skip through that. While that's getting set up, and while everyone else has a chance to install and run it on your own local device: I'm the head of Open Technologies at Avanade. Avanade is a consultancy that focuses on Microsoft and open source. I'm also the working group chair for open source at the Green Software Foundation, and I'm here with OpenUK as well, as their Blueprint Sustainability Officer. So that said, that's the typical little spiel at the start of these things. So why Dapr? What is Dapr? And what problems is Dapr actually trying to solve? I will be saying Dapr a lot during this talk. Dapr stands for Distributed Application Runtime.
And when you think about building applications, particularly as a full-stack developer (probably less so for a number of the embedded developers among us), we're trying to scale applications that are cheap to run in clouds, and a lot of the hyperscalers do not make that easy to do. We're trying to be really flexible and efficient with our applications: rather than building massive monoliths, we're trying to build a number of different microservices. Many of the people that we work with, whether clients or in-house, don't want to be locked into one cloud. They might have an on-premise data centre. They might use AWS, Google, Microsoft. But a lot of our developers actually don't want to be focused on learning infrastructure, unlike some of the CNCF folks here, perhaps. They want to be focused on building applications that have functionality, that can actually do things. There's a trend towards serverless platforms, things like AWS Lambda, Azure Functions, Cloudflare Workers. But the problem with these is they all use their own bespoke bindings. The way in which they work is very different from one platform to another. So it's very easy to get locked into one particular cloud vendor's way of doing things, which makes it hard for you to move your applications from one cloud to another. On top of that, when we're building applications today, we tend to use a lot of different languages and frameworks. Your machine learning folks might be using R, they might be using Python, they might be using Julia. Your front-end folks are probably using a combination of TypeScript and JavaScript; that could be Angular, that could be React. Your back-end folks could be using C#, Go, Rust, Python, or PHP if they're feeling a little bit risky. So there are a lot of languages and frameworks that our teams are trying to use. And teams don't necessarily want to have to learn every language under the sun to be able to communicate between teams to get things working.
Again, developers tend to want to be building applications. So you can look at a typical microservices application (and I'm continuously covering that QR code). You have your front end. You'll want to be storing stuff in your database. You might want to be queuing invoicing and shipping. You might have bills to reconcile. You might have a payment system from Stripe or something else to integrate with. You've got the checkout. So you've got all kinds of different systems that you are trying to integrate with. And on top of that, you've got all of the other things that I talked about before: your machine learning services, your Python, your R. I can't take credit for this image, by the way; this image was made by the Dapr team themselves, and where that's the case, I've cited it at the bottom of the screen. So all of these applications run on various different languages; that's one of the points that I'm trying to hammer home here. Developers have limited tools to build applications that work across multiple clouds. Obviously cloud vendors tend to want you to remain on their platform. A lot of these runtimes have limited language support. If I'm using Azure Functions, they're not that great at supporting Go, and you have to roll your own bindings. On AWS it's a similar story for other languages. Again, there's no real incentive for a cloud provider to give you that portability. So this is where Dapr comes in. Dapr is a CNCF project from the Cloud Native Computing Foundation. It was originally contributed by Microsoft. It's been within the CNCF since November 2021. It's got tons of contributors actually working on it and joining it. The documentation is really, really good. Some of the code that we're going to go through today is from the quick start, some of it I've written myself, and some of it is dodgy research prototype code, which I'm still hoping works on stage.
It's got a Discord as well, like all the coolest open source projects, so you can join and talk with folks. Dapr itself is focused on allowing you to write microservices that are portable, that work with any major language that you're familiar with; you can either make a gRPC call or call an HTTP API. The idea is each team writes their microservices in the languages with which they are familiar. So one team could do Go, another team could do Python. Obviously, teams still need to have discipline when they're picking languages, because you don't want every single person to be picking a different language, but teams that focus on a particular discipline can stick to the language that is best suited for that use case. What Dapr does is provide a series of capabilities and components that are hot swappable. Swappable, I should say; if you hot swapped it, you would actually have some problems. I don't know why I said that. So Dapr provides service-to-service invocation, so one part of an application can call another part of the application. It provides state management, so you could use SQL Server on Azure, you could use Redis, you could use PostgreSQL or MySQL, and the idea is that your application doesn't need to know the specifics of the implementation of the database that you're using. The same goes for publish and subscribe, for the services that you're using, for the bindings. All your application needs to know is that Dapr is exposing a capability. It calls that capability, and it's up to you to shift your application wherever you best want to run it. And should you move your application from Azure to AWS to your on-premise server, the application will keep working. The application only cares that it has access to these capabilities. It's also got observability: it can use OpenTelemetry, Azure Application Insights, or your observability provider of choice. And it can integrate with most configuration services and tools.
It's already natively supported by Azure, AWS, and Google. Alibaba is on there because they're a major contributor, although I know they don't operate as much here. Kubernetes has a number of different plug-ins and extensions. And on virtual or physical machines you can deploy your own cluster, which is part of what I will be doing today on a single machine. So Dapr provides APIs, I guess, from the left side of your application through to the right, and we will be testing out some of these very shortly. We can get configuration within the application. We can get state from your database of choice. We can call methods in different microservices. We can publish and subscribe to a queue. We can retrieve secrets. We can store our secrets. The idea is that Dapr is more than a service mesh, because it's not a networking layer focused on infrastructure. It's a developer-focused service that allows you to connect different parts of your application. It does work with a service mesh, so you could use Istio or another service mesh of choice to provide the networking component. But it's focused on developers trying to build microservices. That's why it will seem very similar to a service mesh in parts. The key thing about Dapr is it plays nicely with these other services. It's a CNCF project; that's what it's designed to do. If you saw Liz Rice's talk in the keynote, she talked a lot about eBPF. Dapr itself runs in a sidecar model, and I think this is probably one of the use cases where eBPF wouldn't be a good fit, because if you want to run on multiple different clouds, you're probably not going to be injecting into the kernel of every cluster that you're running on. So Dapr is a perfect example of when the sidecar model makes sense. You have your application, and then next to each application you have a Dapr sidecar that provides the components and capabilities for what you want to do with Dapr. So I can call the API directly through HTTP.
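As a sketch of what talking to the sidecar looks like over HTTP: every sidecar exposes the same versioned API on its local port, so the shape of the call never changes between clouds. The app ID `checkout` and method name `orders` here are hypothetical examples, and this assumes a sidecar on the default HTTP port 3500.

```shell
# Service invocation through the Dapr sidecar's HTTP API:
# ask my local sidecar to call the "orders" method on whichever
# service is registered under the app ID "checkout". Dapr handles
# discovery and routing; my code only knows this one URL shape.
curl http://localhost:3500/v1.0/invoke/checkout/method/orders
```

The same capability is reachable via gRPC or through the language SDKs, which wrap exactly this API.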
I can do a remote procedure call. But I can also, through any of the many SDKs that Dapr provides, call out from code to these different services: state, publish, secrets, et cetera. This is what I'm saying: these capabilities are not tightly coupled. Across, I think, nine different capabilities, there are 97 components available. In terms of state stores, we've got AWS DynamoDB, Cosmos DB, Firebase, and Redis. For pub/sub, we've got Azure Service Bus and, again, Redis. In terms of bindings, I can integrate with Azure Functions, so your Azure Functions can actually run Dapr as their tooling of choice rather than you having to use the built-in Azure functionality. I can link with Azure Storage and GCP. I can use Twilio, so if I get a text, I can automatically process something. I can trigger an event, so someone tweets and my service is kicked off. But, again, the key thing is I can still move my application from one cloud provider to another, so I'm not locked into, say, using a Logic App or a low-code tool like Zapier. I've got all the different secret stores that I mentioned, and then observability. So, again, before we get into the demos: on the left-hand side we have Dapr. Dapr is integrated here with an Azure Key Vault. It's linked with an Azure Service Bus; that's what's providing the queue in this instance. And then, as items are added to the queue, the different services can be called and bound. So, that said, it's time to actually try Dapr and get some demos working. The first few I'm very confident in; the last one was blocked by my corporate firewall this morning, so we'll see if that one works. So, yeah, let's get through the testing. The first thing you want to do, having downloaded the notebook and got Docker set up (I'm assuming you installed the Dapr CLI at the start), is check that Dapr is installed; I've pushed Cmd-Enter to do that.
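To make the hot-swapping concrete, here's a sketch of what component definitions look like. Both files below declare a pub/sub component named `pubsub`; swapping the Redis-backed one for the Azure Service Bus one changes no application code, only this YAML. The exact component type strings and metadata field names are from my recollection of the Dapr docs, so treat them as illustrative.

```yaml
# Local development: Redis-backed pub/sub
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub          # the name the application code refers to
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
---
# Production on Azure: same component name, different backend
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub          # unchanged, so the app is none the wiser
spec:
  type: pubsub.azure.servicebus
  version: v1
  metadata:
  - name: connectionString
    value: "<Service Bus connection string>"
```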
The notebook has run the standard blurb there. So Dapr is running. If you haven't previously run Dapr, you'll want to run dapr init. What that will do is make sure that you have the latest runtime binary running on your machine. I'm also going to make sure I've still got Docker running. There we are. Cool. And what this will do is also set up some default components. If you were running Dapr in production, you'd probably want to have your own state stores configured; you wouldn't want to just be using a local Redis container running on your own machine. But this will set up some components which will allow you to run through this part, which is just from the Dapr quick start. I'm going to check the Dapr version. Again, if you do the quick start online on the Dapr site, this is the same content. So, yep, I've got Dapr installed. I know the version. I'm going to make sure that the container is running locally; I hope to see Redis here. And you'll see my default component configuration should be listed here. So I'm going to run a blank application using the default components from dapr init. I'm going to run this in a separate terminal rather than the notebook, because the notebook would need to interact with the command and would block otherwise. So I've just run dapr run. I've named the application ID my app, and I've asked it to run on HTTP port 3500. You can see that in the terminal. The Dapr sidecar is up and running. There's a blank application; there's actually no code in there, so we just have the sidecar. We'd see any logs appearing there if needed. So I'm going to save a key-value pair. The quick start itself usually uses the value of Bruce Wayne. I very much don't feel like Batman today, so I've just put my name in there. What I am going to do is call curl and post in JSON. So I'm posting that same JSON.
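The command sequence for this setup step looks roughly like this; the app ID `myapp` is just the name I chose on stage, and the rest is standard Dapr CLI usage.

```shell
# One-time setup: pull the Dapr runtime and support containers
# (including a local Redis) and write default component definitions
# to ~/.dapr/components
dapr init

# Confirm the CLI and runtime versions
dapr --version

# Check the support containers are up; expect to see dapr_redis here
docker ps

# Start a sidecar with no application code behind it, exposing the
# Dapr HTTP API on port 3500
dapr run --app-id myapp --dapr-http-port 3500
```

These demo commands need Docker and the Dapr CLI installed, which is why the notebook asks for both up front.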
You can see there the content type, application/json. I've got the same array there with the key and the value of my name. I'm calling the server locally, so I'm calling the Dapr sidecar, which I asked to run on port 3500. I'm calling version 1 of the API, the state capability, and the state store. I can see that that particular block has run. There's no output returned. So I'm going to run the next command. I'm going to call the state store directly; I'm going to ask for the item that I stored under the key name, and what's in there would be my name. So if I change that... I'm going to edit this now. It's the same as using MongoDB or Azure Table Storage; it abstracts away all the different implementations there. I've got Fergus here, sat in the audience. I'm going to call curl, and I'm going to get the value. And fingers crossed, we get Fergus's name back. It's still running. Yep, there we go: Fergus. But again, I didn't have to know any particular vagaries of the cloud storage that I'm using. I mean, at this scale, it's probably not much different from using Azure Table Storage or something else. If you're doing much larger things that represent the whole state of a particular application (and I'll show you a better example of that in a second), it becomes quite powerful. So I'm going to call the Redis CLI and actually see all the other things that I have in there from my testing previously. I'll ask for all the keys. I've actually been using this for some research projects; it's probably quite small, but I've been working on a 2D browser-based game using Dapr. So I've just thrown the world state into a particular component. I've got an application that serves the front end, I've got an application for the back end, and I've got something that models a new player. And I'm going to get the same value from Redis.
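Written out, those two calls look like this. `statestore` is the default component name that dapr init creates; the key and value are just what I used on stage.

```shell
# Save a key-value pair through the sidecar's state API. The sidecar
# forwards this to whichever state store component is registered
# under the name "statestore" (the local Redis container by default).
curl -X POST \
  -H "Content-Type: application/json" \
  -d '[{"key": "name", "value": "Fergus"}]' \
  http://localhost:3500/v1.0/state/statestore

# Read the value back by key; the application never talks to Redis
# directly, so swapping the backing store changes nothing here.
curl http://localhost:3500/v1.0/state/statestore/name
```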
And again, I can get that same data, and you can see in this case (it's very small at the bottom left-hand corner there) that I've gone into Redis, because that's the default component, and I can see that I've got Fergus out of that. I'm now going to delete Fergus, because we don't need him anymore; delete him from the data. And that's literally what I've just done. I've had my application, I've called a state storage service, I've saved a key-value pair, but the application didn't know in any way that this is being stored in Redis. Any state store would work for this. There are some other interesting components. I've got input bindings here. I mentioned that you could have your service configured to listen to Twitter. Whenever a particular tweet with certain keywords or from a certain account takes place, it can call my application; in this case it will, on port 8000, call the new tweet endpoint, because I've bound it, and I can have my application process events. I mean, that's very common, very easy to do in a cloud already. But again, the point here is it is portable. You are not locked into whatever cloud provider you are using. So now we're going to try and explore a bigger example, and I've linked out to the repository. This is the one that just crashed on me, so let's see how this goes. I'm going to close this Dapr sidecar that I was running. I'm going to open up my terminal. I'm going to try and run all the containers. That one's working. Excuse me while I get this running. You can see my most recent search was about the firewall on my Mac, which was the problem I was having this morning. Okay. Instead, I'll just see if I can run the backend service, and I'll explain what is happening here. So I have those different applications here running on the left-hand side. I've got my front-end folder, which is an application that serves up a compiled, bundled TypeScript Next.js application.
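Deleting the key and peeking under the hood look like this. `dapr_redis` is the container name dapr init uses for its local Redis, and the key format is how I understand Dapr namespaces state, so verify against your own setup.

```shell
# Delete the value through the same portable state API
curl -X DELETE http://localhost:3500/v1.0/state/statestore/name

# Peek inside the default Redis container to see how Dapr lays out
# state: keys are prefixed with the app ID, e.g. "myapp||name"
docker exec -it dapr_redis redis-cli KEYS '*'
```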
I have my server here, which is written in Go. My server starts on port 5000 by default, because I haven't specified it. My Go application starts a mux router, and it basically says that whenever someone calls the new player API endpoint, call the function that publishes to its topic. I add my file handlers (using the Dapr SDK... let me see if I can make this any bigger). I add my file handlers because Dapr plays well with the existing web server, and when the application starts, I publish to a particular topic to say that the server is initialising, and I've subscribed to the topic because I want to know about any changes of state. So my application is using the Go SDK for Dapr, and what it's able to do is subscribe to topics, and whenever a particular topic is published to, route it to a different part of my application. So I've got a WebSocket implementation here, and I've got my front end. I have another application here which is basically modelling what a player can do. So if a particular browser has a player that's wandering around the place, all those events are streamed to this particular application so that they can be processed. I have another application that is storing the state of the world, and in the session notes for this I will actually link out to this running, because I think just looking through code isn't quite as good as seeing a person move around. But I think the key thing here, again, is that I'm just setting up a Dapr client using my default implementation, and I could run this on any cloud, I could run this on premise, I could shift it from place to place. I actually pushed this to Azure Container Apps, which has native support for Dapr, and it scales up and down. I've got my Azure Container Apps deployment there, but I could actually move it to AWS, I could move it to Google, and my browser-based game is going to keep going. And that was very quick, because my whole demo failed, which is embarrassing.
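Even without the game running, you can exercise the same pub/sub path from the CLI. This is a sketch: the pub/sub component name `pubsub`, the topic `newplayer`, and the app ID `server` are stand-ins for whatever the game's components actually declare.

```shell
# Publish an event as if a new player had joined. Any service whose
# sidecar has subscribed to this topic will have its handler invoked,
# the same way the Go server's subscription gets triggered.
dapr publish --publish-app-id server \
  --pubsub pubsub \
  --topic newplayer \
  --data '{"id": "player-1", "x": 0, "y": 0}'
```

Because the subscription is declared against the component name rather than a broker, the same command works whether `pubsub` is backed by local Redis or by Azure Service Bus.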
I'd hoped to touch on how the multi-cloud shifting works, and observability, but I'm hoping you still got a sense of how powerful Dapr actually is. You can get started yourself at dapr.io, and with that I'm going to open the floor up to questions. Please ask me anything you want to know about Dapr and I'll try my best to answer. If there are no questions, then that is me. You can fork that repository; there are examples in there of running applications in .NET, in Go, and others. Cheers, folks! Thank you.