So in this session I'm going to talk to you about simplifying application integration with event-driven architecture. The goal is for you to understand what event-driven architecture is, what events are, and how we at Microsoft think about events. We're working very closely with SAP and with ASAPIO; we are working with SAP on integrating our platforms more tightly, and we're also collaborating in the Cloud Native Computing Foundation on standardization around eventing, which I'll get to towards the end of the session. I want to start with a story where I'm engaged in a different capacity: I work for Microsoft as the lead architect for messaging services, but I'm also on the digital advisory board of a football club, Borussia Mönchengladbach. There we've done an event-driven integration, and it's very interesting and illustrative of how event-driven integration works. Over the last 18 months we've been doing work to integrate Borussia's new ticketing portal with the ERP system that Borussia already has in place. If you're familiar with football, and certainly if you're in Europe, you will know Borussia Mönchengladbach, so I don't have to say much more about the club. What we did was integrate a new ticketing solution, and that solution is provided by Eventim. Eventim is a major provider of ticketing services; an international counterpart might be Ticketmaster. The exchange of information runs completely in an event-driven way. Whenever a new account is created in the ERP system, and the account might be created in the ERP system directly or inside the shop, we raise a "customer created" event. That event is then reacted to by transferring the record into the ticketing system, so that the ticketing system has that information.
In the ticket portal you can also show up and create a new account, and if you do, that is also synchronized through a "customer created" or "address created" event; changes are synchronized into the ERP system in the same way. All of that happens under the covers using an event bus that is driven by Azure Functions. Whenever there are ticket sales, those sales are also announced as events from the ticketing system; we catch those events and transfer them into the ERP system. What happens here is that the ticket portal has an API which is effectively reactive: they expose a series of events through an open API and ask you, as the partner, to implement those APIs. So they effectively have an eventing system with an assumed interface that they call, and whenever state changes inside their system, whenever an activity happens, they announce it by calling webhooks. That's effectively the same thing I'm going to talk about for our platform, and it's a practical example of how these things are implemented.
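A minimal sketch of what such a webhook receiver might look like on the partner side. The event type name, payload fields, and handler are hypothetical, not the actual Eventim interface; in the real integration the handler body would be an Azure Function calling the ERP.

```python
import json

# Registry mapping announced event types to handler functions.
HANDLERS = {}

def on(event_type):
    """Register a handler function for one event type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on("CustomerCreated")
def sync_customer_to_erp(data):
    # Hypothetical: the real integration would call the ERP API here.
    return f"ERP: customer {data['customerId']} synchronized"

def handle_webhook(body):
    """Entry point the partner exposes; the portal calls it on state changes."""
    event = json.loads(body)
    handler = HANDLERS.get(event["type"])
    # Unknown event types are acknowledged but ignored, so the portal
    # can add new event types without breaking existing partners.
    return handler(event["data"]) if handler else "ignored"
```

The key property is the inversion: the portal defines the interface and calls out on every state change; the partner only decides which of the announced facts it cares about.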
Whenever tickets are sold, they're also announced to the access control system in the stadium. That means when you sell a ticket, the accounting information for the ticket is made known to the ERP, and with that same event the ticket is also transferred into the access control system, so that the ticket is known to the gates at the stadium and you can pass through. Whenever there is any change to the state of a ticket, tickets can also be revoked, that change is likewise communicated as an event to the access control system, which will then revoke the ticket. So all of this integration between the ticketing, access control, and ERP systems is driven by events, driven by state changes, without any of those systems being really tailored to each other, but really using the extensibility mechanisms that exist on them to drive the integration. We have very similar scenarios in many of our customer use cases, and I just picked out a few. Kolibri Games is using Event Hubs, so that's streaming information, to take click streams, take everything that happens inside their games, and move it through Event Hubs and then through Stream Analytics to understand what the players are doing.
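The fan-out described above, one "ticket sold" fact reaching both the ERP and the access control system without the publisher knowing about either, can be sketched with a toy in-memory event bus. The event type and subscriber behaviors are illustrative, not the actual systems:

```python
from collections import defaultdict

class EventBus:
    """Toy publish/subscribe bus: publishers announce facts,
    any number of subscribers react independently."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, data):
        # Deliver the event to every subscriber of this type;
        # zero subscribers is also perfectly valid.
        return [handler(data) for handler in self._subscribers[event_type]]

bus = EventBus()
# Two independent consumers of the same fact:
bus.subscribe("ticket.sold", lambda t: f"ERP booked revenue for {t['ticketId']}")
bus.subscribe("ticket.sold", lambda t: f"gate now admits {t['ticketId']}")

results = bus.publish("ticket.sold", {"ticketId": "T-1001"})
```

The publisher's code never changes when a third consumer is added; that is the extensibility property the talk keeps coming back to.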
The Halo game, another example, has been our closest ally for Event Hubs since the early days, and still is. Halo moves everything to do with a player's career, everything a player does in a game, everything that happens on a multiplayer map, as events into the back end, where it is evaluated; it's core to all of the cheat detection and banning functionality on the back end. In retail, eventing is very important as well, because there, again, every activity matters: any time you navigate, any time you look at an item, ASOS runs this through a real-time personalization engine, which then influences what's served to you on the next page load. So everything you click, everything you look at, has real-time influence on how your information is tailored, and all of that runs through eventing, in that case through Event Hubs. That's another example of event series. And then we have lots of IoT scenarios; I picked out the telematics example from Bridgestone, which is about tire pressure and tire sensors and allows for management of tire quality across large commercial fleets. In the slides there are links to all of those case studies. All of those are examples of various forms of eventing: the first one, the Borussia case, was about discrete events, and the others were more about event series.

As a mental model for the things I just showed you, when we break this down into concepts, there are signals, events, streams, and jobs. A signal is, broadly speaking, the capture of an occurrence: something that happens inside a software system or in the real world. That can be something like "a new address record has been created", "an invoice has been written", or the temperature reading of a sensor; a simple statement of fact, likely associated with some information about that fact. The signal occurs somewhere, and we will want to make it known to others. Making it known to others is an event: the event packages the signal in an appropriate, interoperable format and sends it out via some kind of event bus or sending infrastructure so that someone else can pick it up. Typically, the party sending those events is called the publisher, and a party that is interested in those events and wants to receive them is a subscriber. An event stream is a chronological sequence of events that relate to the same context. If you have a series of events relating to, say, a security, it's often useful to have a sequence so you understand the order of those events. It's very obvious when you think about information that comes from IoT devices like temperature sensors or any kind of industrial sensors, because there you will want to observe trends rather than act on one particular state change. A job is not an event. A job is a task that needs to be performed by some party, and preferably just once. The reason I mention this here is that events and jobs are often confused, and the infrastructures for jobs and events are often confused, and they're not the same. Events have the specific characteristic that they are a statement of fact about something that happened in the past; a job is something that you want a particular party to do in the future.
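The event/job distinction can be made concrete as two different data shapes. This is a sketch with illustrative field names, not a platform API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Event:
    """A statement of fact about something that already happened.
    It is immutable, and zero or more subscribers may react to it."""
    type: str          # e.g. "invoice.written"
    source: str        # the context in which the signal occurred
    data: dict
    time: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Job:
    """An instruction for work to be done in the future, preferably
    exactly once, by one designated worker pulled from a queue."""
    task: str          # e.g. "send-invoice-email"
    payload: dict
    attempts: int = 0  # a queue broker tracks and retries deliveries

fact = Event(type="invoice.written", source="/erp/billing",
             data={"invoiceId": "4711"})
work = Job(task="send-invoice-email", payload={"invoiceId": "4711"})
```

Note the asymmetry: the event is frozen (the past doesn't change) and carries no notion of delivery state, while the job is mutable precisely because the broker must track attempts until exactly one worker completes it.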
Signals can be, as I said, of many different kinds: a calendar appointment reminder, a calendar entry created, the mouse has been moved, a sales lead has been added, an inventory item has been added. Effectively, when you look at your own software systems, at the solutions you're building, you have a ton of different state changes. Whenever an object is manipulated, you have an opportunity to raise an event about it and have another system act on that. As you saw with the football case, the fact that a ticket has been sold has two effects: the access control system fetches it and the ERP system fetches it, and each of those parties does something different with it; but the raising system did not have to know about either of those activities and didn't have to instruct those systems to do anything. It simply made the fact known.

What events do, as I said, is put signals into context, and context is also what I talked about with streams. You have various kinds of sensors, say in a building, and you structure your events, you structure the context for those events, and when you raise those events you raise them inside that context; you give that context along with them. Based on that context, based on these paths effectively, you're also able to partition those spaces, which means you can handle not just one building but thousands, tens of thousands, hundreds of thousands of buildings with all their sensors in one infrastructure, because you keep related events together using their contexts but parallelize the event handling across the many contexts that are unrelated to each other. That's effectively the magic we use in these super high-scale systems: we keep each context together, but we allow parallelization across contexts.

There are different kinds of events that we're handling. One is a direct signal, packaged into an event, that is immediately actionable: when you have a smoke sensor and it raises a fire alarm, you take action. You raise this, you broadcast it, and then several things happen: the alarm on the floor goes on, you initiate a call to the fire brigade, you cause a bunch of activities to happen. It's not predetermined what will happen, but you signal the fire alarm and things happen immediately. If you have a temperature sensor and it goes from 20.9 Celsius to 21 Celsius, and that is the threshold at which you start regulating your climate control system, it's probably not wise to take action on that particular observation. For environmental observations, for any observation of the physical world, you will typically not react to point readings; rather, you project those point readings over the time axis, calculate an average or a median or some other statistical aggregate, and act on that. When the median crosses the threshold, that may then again raise an event that you act on. So you can think of signals that are created from observations, using some kind of stream analytics tool, as derivative functions, like derivative functions in mathematical calculus.

And of course, when you organize these events in streams, you can answer all kinds of interesting questions if you collect them in a system that organizes those events after arrival. You can tell by the occupancy sensor whether there are currently people in a room; you can tell which rooms are unoccupied in the building; you can tell what the air quality is on a floor, because you can do sensor fusion over a number of environmental sensors and calculate it. And if you find these derivative signals, say there's unexpected occupancy detected in a unit, you may want to alert security. These are all things you can derive from the basic notion of systems that talk about their state changes. The building example is illustrative because it's something you can all relate to, but generally, when your systems speak about their own state changes, you can start extending them; you can build logic that is based on those state changes.

The kinds of infrastructures we're offering, and not only us but also the competition, fall into four basic categories. There's the discrete event router, which is for these immediately actionable events: we have Azure Event Grid, our competition has EventBridge, and in the Kubernetes world there's the Knative platform with Knative Eventing. All of those are examples of discrete event routers. What they do is collect events and push them out, for instance to a webhook, but they can also target different delivery infrastructures like queues and event stream engines. They're there for events which are independent of each other, like the fire alarm. A queue / pub-sub broker is also often used to transport events, but it's typically there for jobs, which means it's how you route instructions: when you want to have a job done, and done reliably, you use a queue / pub-sub broker. So when someone tries to sell you, for instance, the Apache Kafka broker, and we're going to get there in a moment, as a queue broker, that's just false, because there are
different infrastructures for those things. Queues are for jobs; event stream engines and event routers are for events. That is a very clear and important architectural distinction. An event stream engine takes a number of related events, an event stream as I explained, and lets you stream them at very high velocity. The biggest single Event Hub we operate in Azure Event Hubs transports over four gigabytes per second day to day, and we have over 10 trillion transactions on Event Hubs every day. An event stream aggregator sits between two event stream engines, or at the tail end of an event stream engine, looks at those events, and aggregates them. It's the engine that looks for signals in the data stream: it finds the averages, it finds the medians, and out of the input event stream it derives the respective signals and emits them again as events, whether into another event stream engine, into a queue / pub-sub broker, or into a discrete event router. All four of those elements belong together in an event-driven architecture.

And to clear up that misconception again, this is mostly about the difference between queuing systems and things like Apache Kafka: event streaming is not "modern" and queues are not "traditional". They are all patterns of state-of-the-art messaging infrastructure, and most customers use both. For Azure I can say that all of the top 500 Azure customers use our messaging infrastructures, and most of those customers use several of these infrastructure elements together, because they have different use cases: some requiring Event Grid, some requiring Service Bus, some requiring Event Hubs. Using those infrastructures together in a single solution is something we advise, so you should not frame it as choosing one of those infrastructures over the other; you will typically use them together. The same advice would be given to you by someone working for Amazon and AWS; they have the same split of infrastructures for the exact same reason. Typically, you have direct interaction between applications that is pre-planned: you're assigning jobs, you want those jobs done, and you get some feedback. That's direct, typically commands and requests, and it runs through RPC or through queues. And you have extensibility, which is done through eventing: discrete events, event streams, and so on. Think back to my football example: that's how you build the ticketing system, which can raise events about its state changes and then be extended.

What's also important to understand is that the event stream engines are built to be super lean, meaning they are optimized for low latency. The reason is that in very many business contexts, real-time data is most valuable when it's fresh. We have scenarios, for instance in finance, where that is extreme. In the NASDAQ market, for example, you pay enormous amounts of money for a fresh market data feed, and the market data for algorithmic trading is only really interesting within the first two, three, four seconds; as soon as the data is 15 minutes old, it's worthless. The only thing that really makes the data valuable is its freshness, and that's true for very many other events we see in our customers' systems. So low latency is super important for many of those scenarios, and that's why we build these infrastructures, Event Hubs for instance, but also Event Grid, primarily to "catch rain", as I call it: catch all those events, organize them on disk, and make them available for consumption as quickly as possible, so that you can analyze, visualize, and react to them and create derivatives within a very short time. That's why, for instance, Event Hubs Premium in our product now has an end-to-end latency, meaning event in on one side, event out on the other, consistently under 15 milliseconds, replicated across three data centers, availability zones and so on, because minimal time from ingress to consumption is really important for many of our customers.

But we also think that these event log stores, Event Hubs in particular in our case, are not necessarily the best long-term store, because they really only index along the time axis. So what you typically do is take those event streams and store them in a database. If you want to access those streams after a while, you go into that stream store, which might be Cosmos DB or SQL in our case, or might be MySQL or DynamoDB on AWS, any of those kinds of databases, query by your search criteria first, and then find the events serialized there in an appropriate data structure. All in all, we think of these integrations that run through events as a bridge between very many different parts of the system. In Azure, for instance, we have Azure Logic Apps, an integration broker with adapters for 400 different applications, SaaS applications, and platform elements in Azure and elsewhere, which can raise events; we have the IoT platform, which can catch events; and all of those use standardized events to route through the infrastructure, through the event router and the event streams, and make the events available for consumption in the various services provided in Azure.
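The "time-axis only" limitation and the projection into a keyed store can be sketched in a few lines. Here the log stands in for an event stream partition and a plain dict stands in for the database; the field names are illustrative:

```python
# An event log is append-only and ordered by time, so finding all events
# for one device would mean scanning the whole log every time.
log = [
    {"time": 1, "deviceId": "sensor-7", "temperature": 20.9},
    {"time": 2, "deviceId": "sensor-3", "temperature": 19.5},
    {"time": 3, "deviceId": "sensor-7", "temperature": 21.0},
]

# The projection a consumer runs continuously: copy events out of the log
# into a store keyed by the attribute you will later search on.
by_device = {}
for event in log:
    by_device.setdefault(event["deviceId"], []).append(event)

# "Query by your search criteria first": now a direct lookup, not a scan.
history = by_device["sensor-7"]
```

In production the projection target would be a real database (Cosmos DB, SQL, and so on), but the shape of the pattern is the same: the stream engine is the fast front door, the database is the queryable long-term store.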
What we find super important in that context is that we make those events available in standardized form. There is often a communication path from a device, but also from an application, that runs through a predefined protocol, with multiple other infrastructures in the middle: you might start with MQTT, then forward the data using Kafka, then store the information somewhere, re-serialize it, and forward it using AMQP. What's important is that none of the information in those events, which are key to driving the logic of your applications, gets lost. So what we've done in the Cloud Native Computing Foundation, together with the likes of IBM, Google, AWS, PayPal, and many other companies, is define CloudEvents. CloudEvents is a standard for events: it defines what an event is and then binds those events to serialization formats, AMQP, Avro, and JSON, but also to transport protocols: AMQP, HTTP, NATS, Kafka, and MQTT. The goal is that if you send a CloudEvent using any of these encodings and over any of these protocols, and you read the CloudEvent back, you have the same event, the same semantics, preserved through all the header mappings and so on, and you can then route it further through some other infrastructure. So we have a lossless model for expressing events, and a single model for how events can be handled and dispatched in a system. The current work we're doing in CloudEvents, after having released CloudEvents two years ago, is a schema registry for the payload schemas of events; an event catalog for cataloging which events are available and making them discoverable, so you can more easily find the endpoints that raise events and subscribe to them; and a common subscription API to standardize how you can actually ask for events to be delivered to your endpoints. That work is in various stages of progress: there are specifications for all of those things already, and for the schema registry and the subscription APIs there are also already products which implement those drafts.

So overall, from our side at Microsoft, we have a very rich platform: Event Grid as the discrete event platform, Event Hubs for streaming, Service Bus for processing jobs, backed by the schema registry I just mentioned, and then a number of other services specialized for integration with IoT or with websites and so on, all working together so you can build your event-driven solutions. The focus of many of the following talks will likely be on discrete events, where the focus is on Event Grid, and Event Grid is also the focus of the integration work we're doing these days with SAP. With that, thank you very much for your attention. We have a lot of things to offer, and I'll be happy to answer further questions if you have them; I'll stick around for a little while to check the Q&A window.
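To make the CloudEvents format discussed above concrete, here is what one of the ticketing events could look like in the CloudEvents JSON structured encoding. The attributes "specversion", "id", "source", and "type" are the required context attributes of the specification; the event type name and data fields here are illustrative, not from the actual integration:

```python
import json

event = {
    "specversion": "1.0",                    # CloudEvents spec version
    "id": "b7c9a3e0-0001",                   # unique per source
    "source": "/ticketing/portal",           # context the event came from
    "type": "com.example.ticket.sold",       # illustrative event type
    "time": "2021-06-15T12:00:00Z",
    "datacontenttype": "application/json",
    "data": {"ticketId": "T-1001", "customerId": "42"},
}

wire = json.dumps(event)     # what travels over HTTP, Kafka, MQTT, ...
received = json.loads(wire)  # read back on the other side

# The lossless property: the same event, the same semantics, round-tripped.
assert received == event
```

The same context attributes map onto AMQP properties, Kafka headers, or MQTT properties in the respective protocol bindings, which is what keeps the event intact as it hops across infrastructures.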