All right, let's go ahead and get started. I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, "Dapr: Lego for Microservices." I'm Karen Chu, community manager at Microsoft and CNCF Ambassador. I'll be moderating today's webinar, and we'd like to welcome our presenter, Mark Chmarny, Principal Program Manager at Microsoft. Before we get started, just a few housekeeping items. During the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop your questions in there and we'll get through as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of the Code of Conduct; basically, please just be respectful of all your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. And with that, I will hand it over to Mark to kick off today's presentation.

Thank you, Karen. Hopefully everybody can see the screen. In today's session, we're going to do a quick overview of the project drivers, what drove us to create Dapr. We'll review the key capabilities Dapr has, do a short demo of service composition using Dapr in an event-processing application, cover the project status and roadmap at the end, and hopefully leave about 10 or 15 minutes at the end for Q&A. So let's get going. Before I go into the actual overview of Dapr, I want to review some of the things that drove us to create it and what it's trying to solve. There are many different reasons, but I think they boil down to three things.
I think too often nowadays, step one for developers in writing any new application is deciding where that application is going to run, because moving it later to a different platform will often require a complete rewrite. The other thing is that these total or complete platforms do help, and they solve a lot of different problems holistically, but because you have to buy in 100% to using them, they tend to evolve more slowly and over time drift away from what's considered to be a modern architecture. And as a result, the third point: I've spent the last year talking to probably 30 or 40 different companies and the developers in those companies, and for the average developer, the gap between what the existing application looks like and what's considered a modern application in the community is getting larger and larger. It's not uncommon to find companies that have, I don't know, 1,500 to 2,000 applications they want to modernize, and they want to do it all in one single move. So these are the things that set up the context for Dapr and the type of problems we would like to solve for developers.

So let's go into Dapr. First of all, just a short overview. Dapr stands for Distributed Application Runtime. Yes, we've abbreviated three words into four letters. I know, it was hard, but we've done it. In plain English, Dapr is really an event-driven, portable runtime that helps developers build distributed applications regardless of where they run them: bare metal, cloud, edge devices, and I mean actual edge devices like a Raspberry Pi or something. The runtime is independent of that. It's an open-source project hosted on GitHub under the MIT license, and it has a very open governance.
We recently tweeted and wrote about the transition to open governance and the commitment to a vendor-neutral foundation for Dapr. Throughout the presentation, you're going to see some QR codes posted here with direct links to places where you can learn more about the thing I'm talking about at that given time. In this case, it's a link to the blog post I just mentioned. So that's what those QR codes stand for.

All right, a little overview of where we are with Dapr today, less than a year in. It was October 2019 when we announced Dapr. We've had 11 releases; as of this morning, I'm happy to say, I think we've landed the eleventh major release of Dapr, so v0.11. There's a decent number of image pulls, which signals real-life usage. This is not just something that's out there sitting idle; people are using this in real life. We have 70-plus different components that cover pretty much the entire data and messaging spectrum of the CNCF landscape that you often see, and more about those components in a minute. And there's a growing number of contributors, which validates the sense of a broad community and helps us drive the direction of the project as a consensus of the overall community. And of course, the all-important stars that we all know are critical to any open-source project. I'm being sarcastic here.

So what are some of the principles behind Dapr? First of all, no limitations with regard to language or framework. If a new language comes up tomorrow, something like, I don't know, I can't even venture to think what that would be, we should be able to run it. If you can start it as a process, we should be able to run it with Dapr. The other thing is maximum emphasis on the reuse of those building blocks, as we refer to them.
Again, more about those in a minute, but they're à la carte, opt-in mechanisms that you can use to bring in capabilities that are common in modern distributed applications. When the available building blocks don't meet your needs, or where you want to expand the functionality of a building block, Dapr provides a facility for you to add capabilities or expand or change the implementation, which is very important if you want to allow people to scale and grow their application with their demand. The core of Dapr is also this well-documented set of APIs that provides parity across multiple protocols and runtimes. So gRPC or HTTP, it doesn't matter: the shape of the API and the functionality are the same even though the protocols are different. And it runs, like I said before, great on Kubernetes as well as on any other infrastructure, where it can be a standalone process. We've done this on bare metal, we've done it on a Raspberry Pi, in a customer's data center; wherever you can get Kubernetes, for example, it will do. It also supports multiple architectures, so Intel and ARM, on macOS, Linux, and Windows; you can get the readily available builds of Dapr or just compile it yourself for those targets. And the last thing is that Dapr really tries to meet developers where they are. The investment you've made in learning a particular language, Java, .NET, Python, whatever that may be, we want to make sure that you feel at home in a natural, idiomatic way ("idiomatic" is just a fancy term for natural to you) that should make you effective on day one. There are no courses required, there is no certification; use it incrementally, start small, and build from there. So hopefully that gives you the background on where Dapr is going.
Logically, and we'll go deeper into each one of these, Dapr at runtime exposes those APIs, like I was saying, over HTTP and gRPC. These APIs give you access to the most common usage patterns we've seen in distributed applications. We generalized them into something that can be consistent for you as a developer, yet flexible underneath, so you can plug in a specific implementation; we'll talk about those in a second too. We refer to these as building blocks, and you will hear me saying "building block" throughout the presentation. Combined, they're like an open programming model: you can start today very small, and then as you move between environments or change your mind, you apply the changes through configuration without rebuilding your application. So hopefully we never again hear about somebody needing to move 1,500 applications to a new platform in one fell swoop.

More on those building blocks, going deeper. These blocks are independent; there's no dependency between any of them. So let's go through them one by one. Service-to-service invocation is basically a reverse-proxy-like API for communication between the multiple services within your application. If you're building microservices, you're building some number of different services inside something that's considered to be one application, and for you to be able to dynamically discover those services and connect to them, that's where Dapr comes in. The other common thing we see pretty much everywhere is state management. Think about durable key/value stores, or object stores like S3 or GCS, that give you put/get verbs for your state. The shape of those APIs is pretty consistent. Yes, they have different functional capabilities, but to a large degree we can give you a very consistent API for managing the state in your application.
With regard to pub/sub, this is all about messaging within an application. There are, you know, different perspectives on how microservices should communicate: whether it should be direct service-to-service invocation, or whether it should be asynchronous through pub/sub. Dapr provides you both. So if you're looking for asynchronous messaging between your applications, you can use the pub/sub that's built in, and you can plug in the specific implementation underneath. If you want to do a fan-out or fan-in pattern, it's super easy to do that. And there are at-least-once delivery semantics, so you don't have to worry about events being lost.

Resource bindings are one of those building blocks that allow us to extend the functionality of Dapr consistently. Think of bindings as connectors to the outside world, to resources that are not inside your runtime. You can trigger your code based on external events when they happen outside of your cluster or your environment, and you can send data from your application to outside resources.

Actors, and I will go through each of these a little deeper, but to give a quick summary here: an actor is very much an independent unit of distributed state and single-threaded compute. If you're thinking of something that's going to require high density, that's a great category for actors. There are some concerns you have to be aware of around the single-threaded architecture, but we'll talk about that in a second. And observability is very much about automatic insight into what your application is doing.
So: capturing the call graph of all the invocations across Dapr; the telemetry, metrics on how your applications are responding, how many times, how long it took; and tracing of invocations across multiple services. If you're talking about microservices, you probably have a call stack that involves four, five, ten, or more microservices, and being able to look at them as a unit helps you debug issues and understand the bottlenecks. And then we have secrets. We've all heard about these leaking all the time. What Dapr is trying to do is give you a very opaque "get" API for secret management that can be backed by a very robust set of secret management solutions: HashiCorp Vault, Google Cloud KMS, and many others. And each one of those building blocks has any number of implementations underneath it, which means you can choose, per service or per application, the optimal implementation for those building blocks.

So let's talk about the architecture. Regardless of whether you run on Kubernetes, bare metal, or a VM, Dapr uses the notion of a sidecar. And yes, I know, "sidecar" is often correlated with Kubernetes, but in this case Dapr uses the sidecar approach regardless of where you're running. The app calls into the Dapr sidecar, and Dapr executes the functionality on behalf of your application. This helps lower utilization, offloads I/O, and in some cases can actually improve the performance of the application. All service invocations are encrypted over mTLS with automatic certificate rotation, including situations where you upgrade Dapr itself. And with the release we announced earlier today, we also added a specific identity for service invocation and granular access controls, including OPA policies, if you like Rego.
Yes, there's overlap with many of the meshes out there, as you've probably already realized. If you really, really are committed to using Istio or Linkerd, you can just disable the mTLS and use Dapr in conjunction with those meshes.

So how do I get one of those magical sidecars that does everything regardless of where I run? Well, if you're running on Kubernetes, it's as simple as decorating a deployment with a few annotations. I've bolded that text over here. There are many annotations you can add, but really only two are required: you say "enable Dapr," and then you also tell Dapr the ID by which you want this application to be addressed, which comes into play for service invocation and many other things. Dapr will automatically inject a sidecar into the pod, so it will look very much like the experience you had when developing with Dapr on a local machine; in fact, your application won't know anything is different. In standalone mode, if you're running outside of Kubernetes or on any other infrastructure, you can run it using the dapr run command. Self-hosted mode, as we refer to it, uses dapr run, and the process can be anything, as I've shown a few examples of here. It could be invoked directly through a runtime, so go or node or dotnet or whatever that might be, or it could be an executable that's already compiled to machine code. There's a lot of flexibility there.

With regard to Dapr on Kubernetes, I think this is a good one to go a little deeper on, given CNCF and Kubernetes. We try to keep Dapr as light as possible, so there are really only two CRDs. There are four system pods, although you can bring them down to about two if you're not using some of the features. The first one is the sidecar injector; this is what checks for the annotations and injects the sidecar into your pod.
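To make the annotations concrete, here is a minimal sketch of a decorated deployment. The annotation names follow the current Dapr documentation (early 0.x releases used slightly different names), and the app name, ID, image, and port are made up for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tweet-processor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tweet-processor
  template:
    metadata:
      labels:
        app: tweet-processor
      annotations:
        dapr.io/enabled: "true"            # required: ask the injector for a sidecar
        dapr.io/app-id: "tweet-processor"  # required: the ID used for service invocation
        dapr.io/app-port: "8080"           # optional: the port your app listens on
    spec:
      containers:
      - name: tweet-processor
        image: example/tweet-processor:latest
```

Only the annotations differ from a plain deployment; the rest of the manifest is whatever you would deploy anyway.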
Sentry generates certificates for the sidecars and implements the rotation strategy. The operator tracks deployments and deals with things like resource discovery and the component registry. And actor placement deals with finding where the actors are located across the Dapr instances and providing metadata around that. More about actors in a minute, because they can be rehydrated all over the place, depending on where Dapr decides the optimal placement is. There are also health APIs and liveness probes that you would normally see in any kind of Kubernetes deployment.

So let's go into those building blocks a little deeper and show them in real-world-like examples. Starting with key/value state management: it is a distributed object store for your applications. What that means is that it outlives the session of the application. If your application goes out of scope, it can be restarted somewhere else and have access to the same content, the same state. It has concurrency configuration options per operation, so you can do first-write or last-write wins if you have concerns around concurrency. It has similar configuration for consistency, whether it's strong or eventual, and a configurable retry policy, which is super granular: it's not only for the entire service, it can be applied per retry, per operation. We also support bulk and transactional operations for situations where you want to save or retrieve a number of records. The backing state store is totally up to you; Dapr supports, I think when I checked this morning, about a dozen of those.
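The per-operation options described above show up directly in the body you POST to the sidecar's state endpoint (by default `http://localhost:3500/v1.0/state/<store-name>`). A small Go sketch of that payload follows; the keys and values are hypothetical, and the option strings ("first-write"/"last-write", "strong"/"eventual") follow the Dapr HTTP state API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// StateOptions carries the per-operation options the talk mentions:
// concurrency ("first-write" / "last-write") and consistency
// ("strong" / "eventual"). Field names follow the Dapr HTTP state API.
type StateOptions struct {
	Concurrency string `json:"concurrency,omitempty"`
	Consistency string `json:"consistency,omitempty"`
}

// StateItem is one entry in a bulk save posted to the state endpoint.
type StateItem struct {
	Key     string        `json:"key"`
	Value   interface{}   `json:"value"`
	Options *StateOptions `json:"options,omitempty"`
}

// bulkSavePayload builds a two-record bulk save, one record with its
// own options and one without, showing the per-operation granularity.
func bulkSavePayload() string {
	payload := []StateItem{
		{Key: "tweet-1", Value: "hello", Options: &StateOptions{Concurrency: "first-write", Consistency: "strong"}},
		{Key: "tweet-2", Value: "world"},
	}
	b, _ := json.Marshal(payload)
	return string(b)
}

func main() {
	fmt.Println(bulkSavePayload())
}
```

The point is that the options ride along with each record, so two saves in the same request can use different concurrency and consistency settings.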
So etcd, Redis, Cassandra, GCP Cloud Firestore, AWS DynamoDB; there are a number of those, and depending on what you prefer in your environment you might use one or another, but the API exposed for using state in your application is consistent.

So how do you configure that? You will see this pattern throughout: we use this notion of components. Any number of components can implement a particular building block. In this case, we see a component; I've given it a name here, but it could frankly be anything. It's a component of type "state," and in this case it's MongoDB. Dapr provides these configuration options for any of those components we highlighted before, 70 or so. What it gives you access to is some metadata that you can use to configure the actual service itself, including the database and other things. Once configured and registered in Dapr, it becomes available in your Dapr API, so you can post to it by identifying the specific component and save the data, in this case, to a particular collection in the Mongo database. The metadata is unique to the specific store: this Mongo component has metadata that's specific to Mongo, and if you were using etcd there would be different parameters. And you'll see the other options we provide.

Moving on to service discovery and invocation. Dapr, like I said, is a reverse proxy for your invocations that allows you to locate services and invoke them using their assigned ID. I forget what ID we used in the previous example, but if, say, "my-service" is the ID of the application, you can start invoking it from all your other services within your application, and Dapr will make sure you find the right instance. All of that is dynamic.
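A sketch of what such a state component might look like for MongoDB. The component name, host, database, and collection are made up for illustration, and the metadata keys are taken from the Dapr component reference (check the docs for the exact names your version supports):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: tweet-store          # arbitrary name; you reference it in the state API
spec:
  type: state.mongodb        # swap this type (and the metadata) to change backends
  metadata:
  - name: host
    value: "mongo.default.svc.cluster.local:27017"
  - name: databaseName
    value: "dapr"
  - name: collectionName
    value: "tweets"
```

With this registered, your application would save state by posting to the sidecar at `/v1.0/state/tweet-store`, and only this file changes if you later move to a different store.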
There is no configuration for this one, because as you start services within Dapr, they are all automatically registered and the registry is managed for you. It works in Kubernetes; it works even across namespaces, so you can cross the namespace boundary, and you can invoke over HTTP or gRPC. Regardless of what your application is running, in this example we have an HTTP application calling into a gRPC application. The API is consistent for your application; Dapr does the protocol translation, gRPC to HTTP or HTTP to gRPC, all behind the scenes. Your applications don't have to be aware of what the target is using. Like I said before, invocations are automatically retried at the call level, which is more granular than what you normally get from the meshes. All traffic between apps is over mTLS. It's automatic, so there is zero downtime even during the certificate rotation process and upgrades. And because Dapr also provides X.509 certificates, we can do SPIFFE identity across clusters for service invocations. So if, for example, you have one instance of Dapr running on GCP and another one on AWS, you can reliably and securely connect them and have them invoke each other. Dapr also has options for policy through annotations, which hides a lot of the complexity, as well as a proper OPA implementation for Rego. So if you prefer writing applications in Rego, sorry, policies in Rego, you can do that. All of that comes with automatic telemetry. You don't have to create spans, because Dapr has awareness of how your applications invoke each other. The parent trace ID is automatically injected, and you start seeing the benefits; I'm going to show you this during the demo later on. Another popular building block is pub/sub. It really allows microservices to communicate with each other.
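The shape of the invocation API is worth seeing once: your code only ever talks to its own local sidecar and names the target by its Dapr app ID. A minimal Go sketch, assuming the default sidecar HTTP port of 3500; the app ID and method names are hypothetical:

```go
package main

import "fmt"

// buildInvokeURL shows the shape of Dapr's HTTP service-invocation API.
// The caller addresses its own sidecar on localhost; discovery, mTLS,
// retries, and protocol translation happen behind that URL.
func buildInvokeURL(daprPort int, appID, method string) string {
	return fmt.Sprintf("http://localhost:%d/v1.0/invoke/%s/method/%s", daprPort, appID, method)
}

func main() {
	// e.g. the tweet processor calling the sentiment scorer from the demo
	fmt.Println(buildInvokeURL(3500, "sentiment-scorer", "sentiment"))
}
```

In a real service you would issue an ordinary HTTP request against that URL; no service-discovery client or target address ever appears in your code.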
So the publisher has no knowledge of who the consumer is, and the consumer has no idea who published the data. Dapr uses CNCF CloudEvents as an envelope, so it wraps all those events for you, and it provides at-least-once guarantees for your publishers. Some of the common pub/sub implementations inside of Dapr: for open source, it'll be things like Redis, NATS, Kafka, RabbitMQ, Hazelcast, and I'm probably missing a bunch of others. For cloud service providers: for Azure it's Service Bus or Event Hubs, for GCP it's Pub/Sub, for AWS it's SQS, and there are a couple of others for each of those. Whichever you want to use, Dapr gives you the ability to scope them, so you can limit which applications can use each component, and which topics, for example.

Again, very much like we've seen before with state: exactly the same kind of component, but this time instead of "state" we say "pubsub." In this case I'm using Redis. This is inside the cluster, so we're calling into the Redis namespace with a fully qualified name. The other thing you're going to see here, and I'm going to talk a little more about secrets later, is that I didn't include the password for Redis, just a reference to it. We'll talk about how Dapr lets you do that both programmatically and inside configuration. Once registered inside Dapr, you get the ability to post to these topics, and consumers can subscribe to them. There are two different ways to create subscriptions inside Dapr. You can do it programmatically, where Dapr queries your app on a well-known endpoint for its subscriptions, which is good for dynamic use cases where you want to respond with a specific configuration.
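A sketch of the Redis pub/sub component being described, including the secret reference in place of the password. The component name, host, and secret name are assumptions for illustration; the `secretKeyRef` shape follows the Dapr component docs:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: tweets-pubsub
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    # fully qualified name, reaching across namespaces inside the cluster
    value: "redis-master.redis.svc.cluster.local:6379"
  - name: redisPassword
    secretKeyRef:        # a reference, not the secret itself
      name: redis
      key: redis-password
```

Because only a reference appears here, this file is safe to check into a repository; Dapr resolves the secret from the configured secret store at runtime.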
And the app responds with an array of subscriptions, where you tell Dapr which component and topic you want to subscribe to, and what URL you want Dapr to send the data to. That's great if you have awareness of Dapr when you're writing the application. For situations where you already have applications that are aware of CloudEvents, or are expecting CloudEvents, you can use the declarative manner, which Dapr provides a CRD for, that lets you create subscriptions through configuration. So your application has zero awareness of Dapr. It just says, "I know CloudEvents, give me CloudEvents," and Dapr will send those to your application.

Bindings, like I said, are a way to extend the functionality of Dapr, and there are so many of these out there. I'm just going to list a few; I feel like I keep highlighting a couple of them, but I'm trying to create some variety. We have two different flavors of bindings: there are input and output bindings, and the output binding can be bidirectional, and I'll explain what that means. For input, you can think of bindings as triggers: something comes from outside and triggers your code, maybe bringing some data, maybe not. Kafka and all the eventing systems out there are good uses for bindings, but it can also be APIs like Twilio or Twitter and so forth. It really removes a lot of complexity from your application, which no longer has to have the drivers or SDKs inside your code, or do the polling. Basically, your application says, "I don't know how this was configured, but give me that event and I will do something with it." Your application can be gRPC or HTTP; it doesn't matter, that event will be delivered to you. And switching between these bindings at runtime is really as easy as changing configuration and, in some cases, relaunching the application. Dapr handles a lot of the retries and failure recovery for you.
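Going back to the declarative subscription model for a moment, a sketch of the CRD being described. The names, topic, route, and scope are hypothetical; the field names follow the v1alpha1 Subscription shape in the Dapr docs:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: scored-tweets-sub
spec:
  pubsubname: tweets-pubsub   # which pub/sub component to subscribe through
  topic: scored-tweets        # which topic to subscribe to
  route: /tweets              # endpoint in your app that receives the CloudEvents
scopes:
- viewer                      # optionally limit which app IDs get this subscription
```

The subscribing application only has to serve plain CloudEvents on `/tweets`; it never learns that Dapr, or this file, exists.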
Again, none of that do you have to write in your application. It keeps your code very lean and allows you to change your mind post-deployment. Similarly, an output binding allows you to invoke the outside world from your code. Just like with the others, we have a number of these for different systems and services. In fact, just a couple of days ago, somebody PR'd an iOS notification binding, so you can send events directly to your phone if you're using iOS. Just like with input bindings, your code is free of SDKs, you can easily switch between bindings at runtime through configuration, and all the retries are handled for you.

So regardless of whether you're using the input or output binding approach, the configuration using a component is exactly the same. At this point you've seen me show three different YAMLs that differ very little. They only differ in some metadata that's specific to, in this case for example, Kafka, which gives you the ability to configure it; but the actual notion of configuring your bindings inside Dapr is super consistent and easy, regardless of whether you're running in Kubernetes or on-prem. That same file will work. So in this case, we have a Kafka binding that defines a specific topic. There could be any number of them; for simplicity of the demo, I just have one, which is going to be used on the input. And for output, we have also limited which topic can be used to send data out of the application. We support consumer groups, and there are any number of other variables that Kafka provides, but that is the extent of it. Then for input, with the name of the component being "my-kafka" in this case, there just has to be a route in your application called "my-kafka," and Dapr will post to that.
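A sketch of the Kafka binding component being walked through, covering both the input topic and the output topic. The broker address, topics, and consumer group are made up, and the metadata keys are taken from the Dapr Kafka binding reference (exact keys may vary by version):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-kafka            # input events arrive at the /my-kafka route in your app
spec:
  type: bindings.kafka
  metadata:
  - name: brokers
    value: "kafka:9092"
  - name: topics            # topic(s) that trigger your code (input)
    value: "ingest"
  - name: publishTopic      # topic used when your code invokes the output binding
    value: "processed"
  - name: consumerGroup
    value: "group1"
```

Output invocations go through the consistent bindings API on the sidecar (`/v1.0/bindings/my-kafka`), so swapping Kafka for another system is a change to this file, not to your code.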
Dapr will also check beforehand, with an OPTIONS call, to make sure that you actually have the route, so it doesn't start flooding you with data before you're ready. And for calling from your code out to the outside world, you just use the consistent bindings API with that same component, and Dapr maps all of that for you.

We touched on secrets a little before, but secrets are a generally hard problem, and we've all heard about leaked credentials out there. Dapr provides you with an API that is consistent regardless of which backend you use. I have logos for four of those up there; there are other options too. Those systems back Dapr's API behind the scenes, so you can manage the rotation strategy and all of that over there, and then Dapr gives you this consistent API that you can use from within your code. So "get my-secret," for example, will return the secret. But you can also use this inside configuration, and we'll talk about that in a second. In this case, we're using a HashiCorp Vault secret store. We've configured it with a few configuration options; there are a number of these, and I'm just showing a few. You can now invoke the Dapr secrets API for a specific password, and you will get its value back. But you can also use this inside configuration, and what Dapr will do in that case is substitute, kind of inline, the secrets for you. So your configuration is free of secrets: you can check it into a repository, and as you deploy it to different environments, your entire solution is configurable at runtime.

Observability is kind of a built-in building block. There's some configuration for it, but a lot happens automatically, so I want to talk about a few of those things.
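A sketch of what a Vault-backed secret store component might look like. The component name and addresses are hypothetical, and the metadata keys here are assumptions based on the Dapr HashiCorp Vault reference; check the docs for the exact names your version supports:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-vault
spec:
  type: secretstores.hashicorp.vault
  metadata:
  - name: vaultAddr
    value: "https://vault.example.com:8200"
  - name: vaultTokenMountPath
    value: "/vault/token"        # where the Vault token is mounted in the pod
```

Once registered, a GET against the sidecar at `/v1.0/secrets/my-vault/<secret-name>` returns the secret, and other component files can point at it with `secretKeyRef` instead of embedding credentials.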
First of all, there's a ton of metrics that Dapr gives you visibility into: measured values and counts as time series. It monitors the behavior of the sidecar itself as well as of your application. By default it uses Prometheus and Grafana, and there are a number of options to switch to CSP-specific alternatives like, for example, Azure Monitor. Similarly with distributed tracing: it profiles and monitors the Dapr system services and the application. This is important for microservices, because if you don't have distributed tracing, you really don't know how things work together. It helps you identify bottlenecks, and it helps you identify issues and failures. And because it's a mesh-like architecture, it gives you these distributed traces automatically across the entire stack: service invocation, bindings, state, whatever that may be, you get those traces automatically. I will show you later, but all of that is by default available in Zipkin if you're deploying an open-source-centric solution; in a CSP environment you can additionally use Application Insights on Azure, for example. And similarly with logs: you get Fluentd, Elasticsearch, and Kibana, and you can substitute some of those pieces and ship your logs somewhere else. Dapr injects a bunch of metadata into your logs, so you get the type, the hostname, the component name, app IDs, addresses, and a bunch of other things that give you more context about what actually happened.

Right, now actors. This building block, like I said, is an object-oriented programming model, like Akka and Orleans, that provides a durable framework for hosting your actors. An actor is a self-contained unit of code that you deliver to Dapr, with both state and compute, and Dapr manages the life cycle of that actor. It's best for use cases with minimal I/O, because it's single-threaded.
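Tracing is one of the few observability pieces with explicit configuration. A sketch of the Configuration resource that points the sidecars at Zipkin; the sampling rate, endpoint, and name are illustrative, and the field layout follows the current Dapr docs (earlier 0.x releases configured exporters differently):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing-config
spec:
  tracing:
    samplingRate: "1"    # sample every call; lower this in production
    zipkin:
      endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```

Applications reference a configuration like this via their sidecar settings; the spans themselves are created and correlated by Dapr without code changes.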
So if you start blocking, you will basically block the entire actor. But there are a lot of benefits people recognize through super high density; I think the numbers I've seen are thousands of actors within a single pod, and obviously this can scale horizontally. Dapr manages the state of your actors and offloads them when they're not used; all of that is configurable. It can rehydrate them somewhere else with the right state. But because the actor model is a function of the runtime itself, Dapr supports actors only in Java, .NET, and Python. Every other building block I've told you about up to now is 100% across all the different languages.

So at this point you're saying, okay, this is great, but there are a lot of URLs and gRPC endpoints; have we not moved past that? Well, yes, we have. You can always use the raw API if you want to, and in some languages that's kind of a first-class citizen; a REST API in Node.js is something you're already used to. But we also provide five SDKs that the Dapr project currently manages, and there are a number of others, I know there's a Rust one, a C++ one, and a bunch of other ones, that the community manages. So if one is missing that you would like to see, we would love for you to contribute and help work on it. They give you access to the same API, and when we go into the demo in a minute, I'm going to show you how we can leverage that to simplify the application. There are also integrations into some of the frameworks. For example, with Azure Functions, regardless of whether you're running on AKS or another cluster, you have access to those functions, and if you're into the model of programming only at the function level, you can integrate that very easily into Dapr and bind the configuration options for different state and pub/sub very easily. Similarly with Logic Apps, Spring Boot, ASP.NET Core, and others. All right.
I think at this point, let's go into the demo. So what I'm going to do is... actually, no, let's do a slide first and walk very quickly through the demo so we know where we are. We're going to do a way over-engineered application. There's probably a much simpler way of doing this, but to show the capability, I will show three or four different components inside of Dapr. So first of all, we're going to use the binding for Twitter to create a subscription for a specific stream of tweets. We're going to consume that within the application and then persist each one of those tweets into a Mongo database. Then we're going to add a sentiment analysis API as another Dapr service, which is going to use service-to-service invocation to score the sentiment of each one of those tweets. So we're going to find out if they're negative, positive, neutral, or sometimes mixed. When we score those, the tweet processor will also publish them onto a topic. And eventually we'll bring in a UI application that will show those tweets in a UI. Don't get excited: my UI foo is super weak, so it's going to be a very rudimentary application, but it will allow you to see how we can subscribe to events and stream them, in this case over WebSockets, to the UI. All right. So let's now go to the application. A couple of things I want to show here, and obviously there are a lot of moving parts, but the first thing is the component for Twitter. You see that even though I'm running on a local machine, I'm using secrets. That's because in this case, I'm using one of those developer-friendly, quick-and-dirty kinds of secret stores, which is a file store. So in this case, my secret store for Dapr is defined as a file that I'm hosting on my machine. You can use environment variables as well. And on a local machine, it allows you to just use those secrets.
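As an illustration, a local file secret store plus a Twitter binding component that reads a credential from it might look roughly like this. The component names, file path, and secret keys below are hypothetical placeholders rather than the demo's exact files, and a real Twitter binding would need the full set of consumer/access credentials:

```yaml
# Developer-only secret store backed by a JSON file on disk.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: local-secrets
spec:
  type: secretstores.local.file
  metadata:
  - name: secretsFile
    value: secrets.json
---
# Twitter input binding pulling its token from the secret store above.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: tweets
spec:
  type: bindings.twitter
  metadata:
  - name: accessToken
    secretKeyRef:
      name: accessToken
      key: accessToken
auth:
  secretStore: local-secrets
```

The nice part is that only the `auth.secretStore` reference changes when you move to Kubernetes and its native secrets; the application code stays the same.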
Later on, when we move to something like Kubernetes and I show you how to deploy that, you will actually use Kubernetes secrets, because that's the optimal way of doing it there. The other thing we're going to do is define a state store. In this case, I'm going to use Redis for state, as well as for pub/sub, so nothing changes there. Super easy for your local development. And we're going to run that. The other thing to point out here is that I'm actually using the Dapr SDK. This is for Go; the very same principle would apply for every single other language out there. I just can't write pretty much anything but Go, and even that is pretty weak. You will see that I'm creating a service. The part I want to focus on is that creating a subscription with a binding handler is as simple as a method on the client. So you say, I want to subscribe to the tweets (remember, we defined "tweets" as the component name), and then just handle them. And the handler is super simple. It doesn't do anything other than publish to a topic that is, again, set up through configuration. So this is as exciting as YAML can get. Let's go to Go. First, what I'm going to do is launch the viewer locally. You will see that Dapr went through its startup logging here. It started the application, it told me that the HTTP API is on this port and the gRPC API is on this port (both are configurable), and it also gives you a nice check mark here: you're good to go locally. And pretty much the same thing works for every single application. What we're going to do next is start a gRPC application on a specific port. This is going to be the sentiment scoring application. And we're going to point to where the configurations are. Again, I'm just using Go itself directly. Similar. I'm on a Mac, so the Mac asked me to confirm that I allow that to happen. And exactly the same thing for the tweet processor.
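The shape of that binding handler can be sketched independently of the Dapr SDK itself: take the raw event delivered by the binding and hand it straight off to pub/sub. In this sketch the topic name is hypothetical and the publish step is stubbed as a function value standing in for the Dapr client's publish call; it is an illustration of the pattern, not the demo's actual code.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// tweetHandler mirrors the demo's pattern: it does nothing with the
// incoming binding event except re-publish it to a configured topic.
// publish stands in for the Dapr client's pub/sub publish call.
func tweetHandler(event []byte, topic string, publish func(topic string, data []byte) error) error {
	// Sanity-check that the event is well-formed JSON before forwarding.
	if !json.Valid(event) {
		return fmt.Errorf("invalid tweet payload")
	}
	return publish(topic, event)
}

func main() {
	var published string
	publish := func(topic string, data []byte) error {
		published = fmt.Sprintf("%s: %s", topic, data)
		return nil
	}
	// Hypothetical topic name for illustration.
	if err := tweetHandler([]byte(`{"text":"hello dapr"}`), "processed", publish); err != nil {
		panic(err)
	}
	fmt.Println(published)
}
```

With the real SDK, the only extra step is registering this handler against the binding component's name, which is the one-liner the speaker is pointing at.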
There are some old tweets in here. And we're going to start the tweet provider. What's going to happen here is that this is an HTTP one; you see the same thing. I used the term "football", and since we are recording this, I'm actually not going to open that UI, because I'm embarrassed by what could be out there. Instead I'm going to open one for the term "dapr" that I'm already running on a server. This is deployed on Kubernetes, and I'm going to come back later and show you how I've done that. You will see a number of tweets, and if you tweet (and please be nice, don't say something that will embarrass me), whatever you tweet will automatically come in here. For each one of those, we can see the score: for example, I was very excited about a specific piece of support, so I tweeted about it, and the tweet was identified as positive. Joe Beda actually saw that too and jumped on it. We see some people that were maybe just retweeting, so it's hard to tell if they were excited about it or not, but the sentiment is there; that's the function of that application. The one thing I want to show you is that in the entire code, there was no mention of any tracing, right? Our code was very simplistic. But what Dapr was able to do is actually create a map of these services. For each one of the services, we can drill down and identify what was happening. We can switch to logs. We can see metrics of the entire system, in this case Dapr, and the performance characteristics of each one of those as it changes. So it's completely transparent to the developer, who just writes the logic of the application; they don't have to worry about the actual plumbing. Are we almost out of time? So I'm going to switch quickly to this. Actually, let's see if somebody posted something. Okay. Nobody did. All right. Oh, Brandon did. So let's go back.
The entire demo is 100% reproducible, and it walks you through each one of the steps. You can go to this short link or just scan the QR code. It will walk you through the creation of the cluster, if you don't have one, as well as configuring the different components and deploying the application. Let's talk about integration very quickly... or even skip that. A couple of things the project is doing right now. We have a stable set of APIs with this new release; we've made some changes to the APIs, so at this point the API is stable. We've delivered access control and service identity, like I was already talking about. We've also done a security audit with a CNCF-certified company and published the results, so you can go to GitHub and find the reports on the security work that was performed. We have also announced the project's transition to open-source governance, a vendor-neutral way of looking at this project that makes sure it is sustainable over time. So what's next? By the end of the year, we're looking to ship a release candidate of 1.0. This is based on the feedback we've gotten from customers and users using this in the real world. We definitely want to start focusing more on addressing the friction from real-world use cases. As more and more people take this to production deployments, they feed us information, and we're making it the highest priority to make sure those deployments are successful. There is also a fair amount of infrastructure work going on behind the project. If you're going to be sustainable as an open-source project, you need to make sure that you have good performance testing and automation. We have a lot of that too, but we want to make sure that it's all accessible to the general community that's going to be working on Dapr.
We're also going to start seeding the technical steering committee: reaching out to the outside community, evaluating, and identifying who the right people are to be on it. Then we're very much focused on ensuring readiness for production-grade workloads, which means paying a little more attention to the operator and surfacing some of the metrics that a traditional enterprise or large-scale deployment operator would want to see, as the focus has definitely been on the developer so far. All right, in closing: dapr.io is a good starting point for pretty much anything on Dapr. The project itself, like I said, is hosted on GitHub, at github.com/dapr. There is chat on Gitter, as well as Twitter monitoring going on, so we're looking forward to hearing from you. There are a few videos about Dapr that I've collected into a playlist that you can access. And if you can't find anything else, or if you need any other information, I've provided my email address, which might not be the wisest thing in a recorded video, but I'm looking forward to hearing from you. Karen? Cool. Awesome. Thank you for that great presentation. We now have some time for questions. If you have a question that you'd like to ask, a reminder to please drop it in the Q&A tab at the bottom of your screen, and we will get through as many as we can. Right now there are a few questions. The first one is: is Dapr production-ready? If not, any idea when it can reach production-ready status? We have a project starting this month and we're thinking of using Dapr for it. Yeah, so like with any open-source project, you're working on the zero-dot releases, right? And it's kind of assumed, I think, that that's probably not production-grade. But what happens? Customers actually deploy this in production, so we have a few customers that went to production with Dapr.
I would say it depends on the use case. If you're going to be running a monitoring system for a nuclear power station, I would probably wait a little longer, but if you're looking at some kind of monitoring application with the ability to reprocess the data if you find something, it's definitely ready for that. So yeah, I think we're getting very close there. I think RC1 is definitely intended to be production-grade. Awesome. The next question is: are you seeing demand for streaming gRPC connections in addition to unary gRPC sessions? Yeah, we do get that question. We do see people asking for streaming support. I think we're trying to understand the use case a little more, rather than just the technology, and what that would look like in a generic API. So if you're interested in providing context for how this would help in your use case, we would love to know, because that's what Dapr drives this functionality by: what is the pattern, and how can we help developers in this case? Great. Next question: what are the main differences between Dapr and CloudState from Lightbend? Oh, so the Lightbend team has actually worked with the Dapr team on CloudState, and CloudState is one of the supported components for state inside of Dapr. So I would say, with regards to state, it's one of the options inside of Dapr. Sorry, I should have mentioned that CloudState was there. Cool. Next question: are there any plans to support distributed transactions across multiple microservices? Oh, I would love to know more about this. So service invocation in a cluster can give you some guarantees of transactions, depending on what you do. But I would want to understand a little more what that looks like if we're talking about a secondary or tertiary service invocation and having some guarantees around that. I think right now that's not supported, or that's not an option.
I think you can accomplish similar things through pub/sub, just by virtue of retries, redelivering until you reject a particular message. But I would love to know more about it. If you have a chance to open an issue inside of Dapr, I'd love to hear about that context. He followed up saying something such as the saga pattern. Cool. I have to admit I'm not familiar with the saga pattern; we'll look it up, though. Ping me on email or Twitter and I'd love to talk to you more. Okay, and Stuart said maybe it's "saga", and they said okay, they'll ping you. Cool. If anyone has any more questions, please drop them in. By the way, on this transactional thing, there are probably a few members of the Dapr community who are a lot more knowledgeable than I am in this area and are probably cringing right now saying, what is Mark saying? Please post it in Gitter as a topic and we'd love to have a conversation on that. All right, last call for questions. All right, well, let's go ahead and wrap up. Thank you, Mark, for a great presentation and Q&A. That is all the time we have for questions today, and thank you everyone for joining us. The webinar recording and slides will be online later today, and we are looking forward to seeing you at a future CNCF webinar. Have a great day. Thanks. Thank you.