I think we've got everyone we're going to get, so good morning, OpenStack. I'm glad to see that at least some people are dedicated enough to show up for the first session after StackCity, so good on you. I'm Ryan from Red Hat, and this is my colleague, Victoria. We're going to be talking about Zaqar for microservices, notifications inside of OpenStack, and Internet of Things applications. If you're interested in any of these things, we have Zaqar work sessions, and there are notifications work sessions happening throughout the conference, so if this interests you, we hope to see you there. This talk is broken down into a few parts. In the first half, we'll look ahead into Newton and beyond for Zaqar, cover messaging patterns and the different ways you can use Zaqar for work queues and other things, and give an overview of the history of Zaqar, how it works, and why you would use Zaqar instead of, say, AMQP. In the second half, we'll dive into use case examples: how Heat already uses Zaqar for notifications and how it's looking at using it in the future, where Zaqar fits into the user experience of production OpenStack, and an Internet of Things example dealing with devices sending back data and receiving commands, taking advantage of those messaging patterns we start with. And now I'll hand it over to Victoria. Again, thank you all very much for coming; after the party yesterday, we weren't expecting too many of you. So let's start with what Zaqar is. How many of you know what Zaqar is? Raise your hands if you know Zaqar. OK, one, two, three people. OK, so this brief introduction is needed; we knew we were going to need it. So what is Zaqar? Here we have the mission statement. Of course, I'm not going to read it to you; I'm going to give you the short version.
Zaqar is the messaging and notifications service built for OpenStack and by OpenStack. The main goal of Zaqar is to connect applications running on the cloud, and OpenStack itself. There are several messaging alternatives, but Zaqar has been built specifically to work on the cloud; it is not like other messaging queues out there that were not built with this use case in mind. So what is Zaqar not? Zaqar is not a replacement for RabbitMQ or Qpid. We get this question a lot. There may be some cases in which the use cases for Zaqar and RabbitMQ or Qpid overlap, but being a replacement for one of these is not the main goal of Zaqar. If you are familiar with the OpenStack architecture, you will know that we use RabbitMQ or Qpid for communication between components, and it is highly inadvisable to use Zaqar for this. Zaqar has been built with web developers in mind, and we provide a set of tools that are not suited to every kind of inter-component communication. We have also gotten several questions about the data structure we use. Zaqar is not a queuing service. We use the concept of a queue, but it is just a way of handling resources; it does not respect FIFO by default. You can enable that, but it is not what you get out of the box. And finally, it is not an email service. It is not an SMTP implementation, and you are not supposed to use it like that. So let's get a brief overview of the features we support. Zaqar is a multi-tenant component. We have integration with Keystone, and we use Keystone project IDs to isolate resources. This means that you are the only one who has access to the messages you send to your queue. It is also a component-based application: you have different alternatives for storage backends and transport protocols that you can choose depending on your use case.
And if there is a storage backend or a transport mechanism you need that is not yet supported in Zaqar, you can easily develop it. We also support a variety of messaging patterns, including task distribution, point-to-point communication, and broadcasting. Zaqar also has built-in notifications: you can subscribe to a queue in Zaqar and get notified every time a new message comes in. We guarantee at-least-once delivery. This is something you have to take into account when developing your application, because there is a possibility that you will get a message twice. In messaging you can have either at-least-once delivery or at-most-once delivery, and we considered that the better option between the two was: OK, let's accept redundant messages, but let's avoid losing messages. And finally, Zaqar is horizontally scalable. We have a feature called pools that allows you to have different entry points and storage locations for all the data you are handling. Now, let me introduce some use cases. I'm not going to spend too much time on this slide, given that Ryan has some more interesting use cases to introduce later on. So let's look briefly at each of them. Perhaps the most basic one is messaging for the cloud. Imagine a scenario in which you are a web developer and you want to deploy your web application in a distributed manner, and of course you want the components of your application to communicate with each other. If you don't have Zaqar, then the way to get this working is to launch a new instance, deploy a messaging solution there, configure the network, and take care of all the burden of the configuration options you have to handle there. And you have to do this several times if you have different applications with different messaging needs or message patterns to follow.
If you had Zaqar, the only thing you would need is to know the Zaqar endpoint and make your application communicate using that endpoint. It is also a good way to develop self-healing applications. Imagine a scenario in which you have your application on several nodes and one of the nodes goes down. In the usual case, you would need to resend all the messages manually and pray that when that node comes back up it gets all the messages it was expecting and everything works smoothly. Using Zaqar, the only thing the node has to do is pick up the next item in the queue and keep moving on with whatever it was doing. Zaqar is also very helpful when you have to handle large amounts of data. It allows you to load-balance the data your application is receiving and avoids overloading your application with a lot of data to process. And the final one is guest agent communication. This use case arrived recently with the appearance of new platform-as-a-service projects in OpenStack. I don't know if you are familiar with Trove, for instance, which is database as a service, or Sahara, Octavia, and Manila. These projects all have a characteristic in common: they all need guest agents running within the cloud, and they need those guest agents to communicate with the control plane. You could use Zaqar to communicate between the control plane and these guest agents, and also to surface events to the final user, as in the case of using Zaqar with Horizon. OK, so now let's talk a bit about how Zaqar is architected. Currently our code has three layers. One is the transport, where we have support for WSGI, WebSocket, webhooks, and email. All clients communicate directly with the transport of your choice, and all requests are forwarded to the API, which in turn sends the data to be stored in the storage backend of your choice.
We have different storage options for you to choose from, including Redis and MongoDB, and we also have support for SQL, but we strongly advise that it not be used to store message data; it should only be used in the control plane. The storage backend you choose will, of course, depend on your particular use case. But we have a new feature called flavors that allows you to pin the queue you are going to use to the storage backend that fits your needs, whether that is, for instance, high throughput or high availability. We tag these storage backends: Redis, for instance, is tagged as in-memory, and MongoDB has persistence support. So this is something you can use as well, depending on your kind of application. Now, let's give a brief overview of how Zaqar has evolved. Zaqar used to be named Marconi; it's a long story. It started with support for MongoDB, SQLAlchemy, and Redis for the data plane. We noticed that transactions were not handled very well in MySQL, so in Kilo we stopped supporting it for the data plane and decided to keep SQLAlchemy for management only. We started implementing beta support for WebSocket in Kilo, which became fully available later on, in Liberty. And the latest addition, in Mitaka, is full support for WebSocket: you can transmit plain messages and binary messages using MessagePack in this version, in order to save some resources in transmission. So now, what is next in Zaqar's future? These are the things we will be discussing in the design sessions we have this week. We are aiming for Zaqar to support notifications within OpenStack. Today, when you deploy an automation script that uses Nova or Cinder or any other component of OpenStack, you need to keep polling, and you have several users polling the same endpoints.
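The flavors idea just described, routing a queue to a storage pool with the capabilities you need, can be sketched conceptually. This is an illustration only, not Zaqar code; the pool names, capability tags, and flavor names here are all made up:

```python
# Conceptual sketch of Zaqar "flavors": operators register storage pools
# tagged with capabilities, a flavor names the capabilities it requires,
# and a queue created with that flavor lands on a matching pool.
# All names below are invented for illustration.

POOLS = {
    "redis-1": {"capabilities": {"in-memory", "high-throughput"}},
    "mongo-1": {"capabilities": {"persistent", "durable"}},
}

FLAVORS = {
    "fast":    {"requires": {"high-throughput"}},
    "durable": {"requires": {"persistent"}},
}

def pool_for(flavor_name):
    """Pick the first pool whose capabilities satisfy the flavor."""
    required = FLAVORS[flavor_name]["requires"]
    for name, pool in POOLS.items():
        if required <= pool["capabilities"]:  # set containment check
            return name
    raise LookupError("no pool satisfies flavor %r" % flavor_name)
```

So a queue created with the hypothetical "fast" flavor would be placed on the in-memory Redis pool, and one created with "durable" on the persistent MongoDB pool.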
The overhead that produces would be minimized using Zaqar's WebSocket transport: you would only need to listen for notifications, avoiding all that polling and all that wasted resource usage. We are also aiming at a standardized format for the messages we handle, so we can treat notifications as a real API, with backward compatibility and so on. Messages are written not only by people but also by your applications, so we are looking forward to starting work on that. We are also working on the dead-letter queue concept from messaging: a place for failed or undelivered messages. This would allow us to stop getting so many requests resending the same messages over and over again; you would just have this queue that you can access to restore whatever you need to restore. And the final one is an addition to the subscription endpoint, in which we will add confirmation when you subscribe to a queue. Now let's look briefly at the message life cycle. This will help illustrate the use case that Ryan is going to introduce after this. Imagine a scenario in which you have, say, a web app processing something in the background; for instance, your home banking. You will have a sender and a worker. The sender will post a job, let's say with ID 123, and the worker will be listening on that queue to get that job and process it. Once it receives the job, we have a feature called claiming: your worker has to claim the message in order to make it invisible to the rest of the workers you have working there. And once you have processed the message, you delete it. You have to do this in order to avoid processing duplicated messages. Claims have an expiration time, so if that expiration time passes and you haven't deleted your message, the message becomes available for processing again after that.
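The post, claim, delete lifecycle just described can be sketched with a small in-memory toy queue. This is an illustration of the semantics only, not the Zaqar API:

```python
import itertools
import time

class ToyQueue:
    """Minimal sketch of Zaqar's message lifecycle: post a message,
    claim it (hiding it from other workers for a TTL), then delete it
    to acknowledge. If the claim TTL expires before the delete, the
    message becomes claimable again, which is why at-least-once
    delivery can hand you the same message twice."""

    _ids = itertools.count(1)

    def __init__(self):
        self._messages = {}  # message id -> body
        self._claims = {}    # message id -> claim expiry timestamp

    def post(self, body):
        msg_id = next(self._ids)
        self._messages[msg_id] = body
        return msg_id

    def claim(self, ttl):
        """Claim every message that is unclaimed or whose claim expired."""
        now = time.time()
        claimed = []
        for msg_id, body in self._messages.items():
            if self._claims.get(msg_id, 0) <= now:
                self._claims[msg_id] = now + ttl
                claimed.append((msg_id, body))
        return claimed

    def delete(self, msg_id):
        """Acknowledge: remove the message so it is never redelivered."""
        self._messages.pop(msg_id, None)
        self._claims.pop(msg_id, None)

q = ToyQueue()
q.post({"job": 123})             # the sender posts job 123
jobs = q.claim(ttl=60)           # worker A claims it
assert q.claim(ttl=60) == []     # worker B sees nothing while it is claimed
for msg_id, body in jobs:
    q.delete(msg_id)             # ack, or it reappears when the claim expires
```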
And you run the risk of getting redundant data. So you delete the message, and you continue processing whatever you have to process. As I was saying, all our resources have expiration times: we have a TTL parameter for subscriptions, claims, and messages, since we consider that keeping stale data is worse than having no data. So this is something you have to take into account when developing your applications to interact with Zaqar. OK, now Ryan is going to introduce you to some use cases that have been going on in the last release of Zaqar. All yours. So first we're going to talk about a couple of other messaging patterns, and then we're going to dive into how Zaqar is being used in Heat, and is going to be used in other OpenStack services, to provide user notifications, notifications between services, and just a better user experience. The new features in WebSockets have made it easier to provide notifications to clients, whether that's a command-line client or a web browser client; Horizon, looking at you. When you create a server in the Horizon web UI, if you want to wait for that server to complete, you're refreshing that page, or you're going to get a coffee and coming back, and maybe wasting some time there. No commentary. In a world where you use Zaqar for notifications, your Horizon dashboard would hook up to WebSockets, and over that WebSocket connection it would receive notifications when your Nova resource is ready, when your Heat stack is ready, when your Glance image is ready to be used. And for other things, like if you need a notification for a very long-running job and you're going to leave your computer, email notifications might be what you need, because you can get those on your phone; you can get them anywhere you need them. Or for legacy systems that don't have many options for getting data in, email might be one ingest mechanism.
And for more modern services that have webhook support, Zaqar supports webhooks and will call out to any other service. That can be a service outside of OpenStack; it can be your CI system, if that has webhook support. And inside of OpenStack, there are a lot of services that cooperate. Heat is a great example, because it cooperates with just about every OpenStack service, so right now it has to poll every OpenStack service. If those services used Zaqar or a similar technology as a notification bus, it would reduce the amount of work Heat has to do, and it would reduce the load that Heat puts on those other services. In this situation, you would use a longer time-to-live. As Vicky mentioned, you can adjust that based on your use case. In the notifications case earlier, you might have notifications live for only a few minutes, because learning that your Nova server from three weeks ago is ready is not all that helpful if you haven't opened the Horizon dashboard lately and you get it three weeks late. So you would have a shorter TTL for that kind of message, whereas in service collaboration you want to be sure that a service that needs a message, if it goes offline, can still get it from Zaqar when it comes back, whether it was down for maintenance, an upgrade, or something else unforeseen. The use case that exists right now for Zaqar is Heat. Inside of Heat, you can provide a Zaqar queue, and Heat will use it while it's creating your stack. How many people have used Heat, actually? OK, so pretty good. Heat collects a group of resources that you declare into what's called a stack. You hand Heat the specification for a stack in what's called a stack create, and Heat will look at all those resources, create all of them, eventually have them all created, and tell you that the stack is complete. Until Zaqar support was added, it didn't have a way to push that notification to you.
Without that, you would have to either keep refreshing your Horizon dashboard or keep running the stack status command on the command line, and neither of those is a very good option. So with Zaqar, Heat creates the stack, and then for every resource it creates, it sends a Zaqar event saying what resource it is, what the new status of the resource is, its physical ID, and any extra status information that exists for that resource. And as a listener to that, over WebSockets, by polling Zaqar, or by getting those notifications over email, you can react to those events, either by saying, yay, my stack is done, or by programmatically registering those resources with a load balancer, a monitoring system, your inventory system, or your chargeback billing system, if you have that inside your organization. Getting all of that is actually as easy as adding just four lines to your environment.yaml file. What this defines is an event sink, which is anywhere Heat is able to send an event, and you can have as many of these as you want on a stack. You specify how long the message lives in the TTL, the type, which is the Zaqar queue, and the target for the messages, which is your queue name. In the future, Heat could use the same process it's using for notifications for hooks. If you're familiar with breakpoints in your IDE, where you set a spot at any line of code and execution stops right there and lets you examine the state of things, that's what hooks provide at the cloud level for Heat. So in Heat, you specify a breakpoint that happens, say, after a certain resource is created, and at that point Heat will wait for user intervention and for you to tell it that it's okay to continue. This is useful if you're debugging a Heat template and you don't know why it doesn't work, or if there's some manual action that needs to happen before Heat can continue creating the rest of the resources.
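The four-line environment.yaml addition mentioned a moment ago would look something like this. The `event_sinks` and `zaqar-queue` keys reflect our reading of Heat's environment file syntax; the queue name and TTL are example values you would set yourself:

```yaml
event_sinks:
  - type: zaqar-queue
    target: my-stack-events   # the Zaqar queue name to send events to
    ttl: 14400                # how long each event message lives, in seconds
```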
And with Zaqar, you wouldn't have to keep asking Heat: have you hit any breakpoints yet? Have you hit any breakpoints yet? You would listen on a Zaqar queue, and when you were done, you would send a completion message to that same Zaqar queue, and Heat would get that notification pushed to it, making your overall time for debugging Heat stacks, or doing any kind of manual-intervention upgrade, much faster. The other use case we're going to talk about is Zaqar in an Internet of Things context. I'm sure everyone attended the keynotes and saw both Volkswagen and Smart Cities talking about having different sensors, and different commands being sent back out to smart devices, to save energy or save time, or increase the life of those devices, or just have better insight into how climate control and things like that are used in your city. With the Internet of Things, you have these devices that are out there. You can't take them back; someone has bought them and they've gone home. And that device has to be able to talk to your service in a way you can trust the client with, because if I bought a smart car, a smart thermostat, a smart anything, I have the ability to take it apart. It's in my house; I have a toolbox downstairs. So I can take it apart, I can dissect it, I can look at everything inside that device, including the computer, and including the data that is on the computer in that device. What Zaqar's signed-URL feature allows you to do is give very specific, granular permissions for sending or receiving data from Zaqar, and those signed URLs come with a time-to-live, just like all the other Zaqar resources we've been talking about. So your device can renew these credentials, but if it doesn't renew them, you don't have keys floating around that just grant access to anyone.
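A request for one of these signed URLs might be built like this. The field names (`paths`, `methods`, `expires`) and the `share` endpoint follow our reading of Zaqar's v2 API, so verify them against your Zaqar version before relying on them:

```python
import json
from datetime import datetime, timedelta, timezone

def share_request_body(paths, methods, lifetime_hours):
    """Sketch of the JSON body for requesting a pre-signed URL from
    Zaqar (POST /v2/queues/<queue>/share, per our reading of the API).
    Restricting `methods` is what makes a grant write-only or read-only."""
    expires = datetime.now(timezone.utc) + timedelta(hours=lifetime_hours)
    return json.dumps({
        "paths": paths,        # e.g. ["messages"]
        "methods": methods,    # e.g. ["POST"] for a write-only grant
        "expires": expires.isoformat(),
    })

# A grant that lets a device post telemetry but never read the queue,
# expiring in 24 hours so the device must renew its credentials:
write_only = share_request_body(["messages"], ["POST"], lifetime_hours=24)
```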
There are use cases for both write-only and read-only queues. Say you have a metrics system that's sending in telemetry data from your smart car, like your GPS coordinates, so you can track them, track mileage, and just keep an eye on the mechanical systems in there. You don't want every car to be able to see the data from every other car. This is a great case for a write-only queue: you have one queue where all of the metrics data collects, and each car has the ability to write to that queue but not the ability to read from it. So you don't have any privacy concerns about device owners being able to see unprocessed data from other devices that they don't have permission to see. On the other side, read-only queues could be for commands, or for information like: there's a software update, so when you're connected to Wi-Fi, go over the network, get that update, and flash it onto your firmware. So that's where signed URLs are a really good fit for any untrusted endpoint, whether that's inside something like a Trove service or an untrusted physical endpoint living in someone's house. And as I mentioned, sensor data actually presents some unique challenges, because sensor data is high volume. If you're GE or some other device manufacturer, you can sell 5,000 devices just like that. And if you've got 5,000 devices in the wild that are checking in every minute or every five minutes, that gets to be a pretty large number of requests as you add more and more devices, and Zaqar is built to scale out with that. As we mentioned, we have different storage backends and the ability to add more Zaqar servers to receive messages, so you can balance the load from all of your devices and all of your customers across as many Zaqar servers as you need to handle it.
So Zaqar has the high-volume data covered. And for metrics data, it's usually not a big deal if a device doesn't check in: it can be out of cell service, it could be turned off, it could be out of batteries, or your application could be under maintenance. If you miss that five minutes of telemetry, or however long your downtime window is, that's actually okay, and you can keep your storage requirements in Zaqar low by giving all of this data a very short TTL. So if it's not picked up within a few minutes, it doesn't fill up your disk and raise your storage costs. The example application we're going to talk about today is a climate control system. The demo in the keynote was actually also a climate control demo, but in this application we were imagining a company that has a series of offices, maybe spread over the globe, each with different zones in its climate control system. The company wants visibility into how climate control is being used across the organization, and it wants some automated systems that can turn the climate control off or turn it down late at night, during off-peak hours when power is cheaper, or for whatever business reason there might be. And of course it's better for the planet, because you're using less climate control, so less energy. This is just a diagram of the architecture. We have HTTP connections for pushing, because all the devices you see on the left there, the mobile app, the thermostat, and the web interface, aren't necessarily persistently connected. So someone might open the mobile app on their phone and send out a command, and it will go out to the Zaqar server, which sends that message. Then Zaqar, over WebSockets, will push that information down to the air conditioning units, and the air conditioning units can use that same WebSocket connection, so it's very efficient.
They can use the binary format to reduce compute requirements, and the WebSocket remaining open means you don't have to keep reconnecting and opening new TCP and HTTP connections. So when someone does change something, and this is from a thermostat on a wall, they set the temperature in centigrade, because OpenStack is an international project even though we are in America. Those notifications go out to the AC units, which receive them and run them through whatever compute they need to do. And when someone goes to check the temperature, on the thermostat, the mobile app, or the web interface, they'll grab the latest metric data straight from Zaqar. They don't actually have to hit your server, because you can use the JavaScript library locally in that web interface to hit Zaqar and get the latest telemetry data from those AC units, so people can see why they feel like they need a sweatshirt so badly. Being able to have this kind of visibility is also great if you want to add automation on top. So this might only be a first step in developing your climate control system: first replace the thermostat, get your devices online, get your data online, look at your historical trends, and then start adding automation that might look at the weather in advance and adjust climate control ahead of time, so that people aren't over-adjusting the thermostat and wasting energy, or adjust it so that more of your climate control happens at off-peak times, so you're saving money on energy, which is also great. And so now we're going to open it up for Q&A. Since this is streamed, we have two microphones over there, so if you do have questions about Zaqar or about any of the examples in the presentation, we would love to hear them, and thank you very much. You can hit both of us up on Twitter, and I'll be posting the slides later. So hey, this is a question about the transports and which backends are supported.
I tried to use Redis with the WebSockets and I hit an issue; is that officially supported? Yes, it should be officially supported, so we would like you to file a bug for that, because the idea is that every layer we have, the transport layer, then the API, and then below that the storage layers, should all mix and match: you can use WSGI against Redis on our v1 API, you can use WebSocket against MongoDB on the v2 API, and that should all work. Cool. Thank you. Thank you. File the issue you found, and maybe we can help you debug it. Good question. Does Zaqar support Redis Cluster or Redis Sentinel, or both? We don't actually dictate how you set up your Redis cluster. In Zaqar we use the Redis connection string, and because of how Redis clustering works, that's all we need. So it's the operator's job to decide whether they want to set up Redis clustering, or even have Redis persist to disk, but our default is to just use standalone Redis as an in-memory store. Thank you. All right, well, thank you, everybody. We'll be around in the hallway if you want to chat, and we're in #openstack-zaqar on Freenode. Thank you, everybody. Thank you.