Right, so we'll start with Yeela Kaplan, who's going to be talking about Marconi, the OpenStack Queuing and Notification Service. Yeela is a software engineer on the Cloud team at Red Hat. Thank you. So thank you. So I'm Yeela. I'm both nervous and excited to be here. I'm working for Red Hat, mostly around virtualization. I've been contributing mostly around the storage area, and to OpenStack through the Marconi project. So today I want to talk to you about Marconi, which is a queuing and notification service for OpenStack. What we're going to talk about today is: why do we need a messaging service for OpenStack? What is Marconi? I want to give you a high-level overview of what Marconi actually looks like, and I will also give you some use cases for how you can deploy Marconi and use it in your own cloud environment. So let's talk a bit about the project's history. The project doesn't really have a long history; it's pretty young. It was started in early 2013 by Rackspace and Red Hat, and it was incubated into OpenStack during the Icehouse development cycle. It is currently production-ready, and we really hope it will get into OpenStack in the next release, Juno. So first of all, before we start talking about Marconi, I want us to talk a bit about OpenStack. Here you can see a picture of the OpenStack services. OpenStack is basically a set of services running on distributed machines throughout your infrastructure. They are pretty much independent of one another, but they also need to talk to one another, and currently they use a centralized message broker to do that. What we want to do with Marconi is introduce an alternative to this message broker for cases where the broker is just not good enough or not secure enough for your use case. So as I said, we have a lot of independent services in OpenStack, and there are also a lot of messaging technologies, which means different languages.
And we want to have one unified way for the services to talk to one another. So basically we have a missing piece in OpenStack, and we want a couple of things. First of all, we want a queuing service for OpenStack. We also want notifications as a service, meaning a service can publish messages to other services in OpenStack. We also want a really lightweight messaging API: one unified API for integrating services that is really simple to use and adds no extra cost to your infrastructure when you deploy it. So how are we going to do that? You're probably wondering if I'm talking about just yet another message broker, because we have so many; why would you need another one? There's also a really nice joke about it; you probably know it. So I want to reassure you that you don't have to worry. Marconi is not aiming to replace any existing messaging technologies; it aims to sit on top of existing messaging technologies and use them. Also, Marconi is not aiming to be a task manager. We have Celery, which is a distributed task manager; Marconi is able to talk to Celery, and we want Celery to be able to sit on top of Marconi and use it. Also, Marconi is not a queue provisioning service. It will not install any of the underlying technologies for you; you will still have to install and deploy everything yourself. You will still use the messaging technologies you are using today, but you will use Marconi on top of them. So now that we know what Marconi is not, let's talk about what Marconi is. Marconi is a really simple and lightweight RESTful data API, and what it aims to do is unify your existing messaging technologies. What do I mean by unify?
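To make "a really lightweight messaging API" concrete, here is a small sketch of what posting messages over a Marconi-style REST API could look like. This is illustrative only: the base URL, the exact endpoint layout, and the header names are assumptions for the example, not the authoritative API. The request is built as plain data rather than sent, so there is nothing to deploy.

```python
import json
import uuid

# Hypothetical endpoint for the example; a real deployment would use its own host.
BASE_URL = "http://marconi.example.com:8888/v1"

def post_messages_request(queue, bodies, ttl=300):
    """Build (but do not send) an HTTP request that posts messages to a queue.

    Each message carries an application-defined body plus a TTL, matching the
    talk's description of messages as the core resource.
    """
    return {
        "method": "POST",
        "url": "{}/queues/{}/messages".format(BASE_URL, queue),
        "headers": {
            "Client-ID": str(uuid.uuid4()),      # each client identifies itself
            "Content-Type": "application/json",
        },
        "body": json.dumps([{"ttl": ttl, "body": b} for b in bodies]),
    }

req = post_messages_request("build-events", [{"event": "instance.create.end"}])
print(req["method"], req["url"])
```

The point is that a producer only deals with queues, messages, and TTLs over HTTP; whatever broker or database sits underneath is invisible at this level.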
It means not just sitting on top of them, but also that you can use different messaging technologies in your infrastructure, in the same deployment, at the same time. So what we aim to do is create an open-source alternative to SQS and SNS. Do you know these services from Amazon? Okay, so SQS and SNS are both Amazon services: SQS is a simple queuing service and SNS is a simple notification service. The first one provides you with queues and the second one provides notifications, and they are two separate services. What we aim to do with Marconi is supply these two capabilities in one single service. And we aim for Marconi to be used by applications running on the OpenStack cloud. So now I want to tell you about a few really exciting use cases we have for Marconi. The first one is to deploy an SQS-like service using Marconi: you will have your own queuing service and you will be able to sell queues to your users. The rest of the use cases are aimed at the OpenStack services themselves. The first is Horizon notifications. When you start an instance, whether it fails or succeeds, Horizon gets a notification. You are able to do that today, but Horizon is just polling; we want it to be able to receive notifications, and it will be able to do that using Marconi. We also have the Ceilometer service, which generates events and statistics based on notifications it gets from the other OpenStack services. Currently it does that using the centralized message broker of OpenStack. Sorry. So it is using the AMQP message broker, and it uses a really low-level API. What we aim to do is give it a higher-level API, so it sits on top of the messaging technology. And we want to allow guest agent intercommunication, meaning you have a guest agent running inside an instance, and if you have a failure, you want to be able to communicate with the other guest agents or with services in OpenStack.
Currently you are able to do that using the message broker, but in this case it's just not secure enough, so we want to introduce an alternative. So now I want us to move to a high-level overview of what Marconi actually looks like. Marconi's architecture is pretty simple. It's composed of three layers: the transport, the API, and the storage. The transport is the actual protocol that the clients talk to. The API exposes the Marconi resources and the actions you can perform on them through that protocol. And the storage is the actual messaging technology that Marconi talks to, the underlying messaging technologies we're using. A really important thing I want to mention about Marconi's architecture is that it's composable: you can play with it, much as if it were Lego bricks. It is plugin-based in order to allow you to do this, and each plugin has to conform to a well-defined API. So you just choose the transport and the storage you want to use and pick the suitable plugin. On top of the transport layer we have the authentication middleware, which is actually provided by a third party. Currently we support two authentication methods: Keystone and Basic HTTP. With Keystone, you basically get all the multi-tenancy features it already provides, so you will be able to have multiple tenants and multiple projects under the same Marconi deployment at the same time. Obviously, if you want to, you can write your own authentication method; you can just write a plugin for Marconi to use it. On the transport layer, we currently support our production protocol, which is HTTP; that is what we use. We also target TCP for the Juno release. On the API layer, we expose Marconi's resources. Messages are the main and most important resource in Marconi; that is actually what Marconi is all about. It's about delivering messages.
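The "Lego bricks" composability described above usually surfaces as configuration: you name the transport plugin and the storage plugin, and each plugin reads its own section. The sketch below is illustrative only; the section and option names are assumptions based on the pluggable design described in the talk, not a verbatim marconi.conf.

```ini
; marconi.conf -- illustrative sketch, option names are assumptions
[drivers]
transport = wsgi        ; HTTP, the production transport described in the talk
storage = mongodb       ; pluggable storage backend

[drivers:storage:mongodb]
uri = mongodb://localhost:27017
```

Swapping the storage backend would then mean changing the `storage` option and its backend section, while clients keep talking to the same HTTP API.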
So these messages can be posted, read, and claimed from a queue, which is a logical entity. When a worker claims messages from a queue, the other workers cannot process those messages while the claim is active. Also, you can configure all of your messages, queues, and claims in terms of TTL. The storage layer is the actual messaging technology you're using, which you deploy yourself in your infrastructure. Currently we support two storage backends: the first one is MongoDB and the second one is SQLAlchemy. We have both of these plugins already. SQLAlchemy is not really recommended for a production environment; it's not really good for queuing systems because of its performance. What we use and what we recommend for production is the MongoDB plugin. It is currently used in the Rackspace cloud service. We also target having Redis support for the OpenStack Juno release. So MongoDB will allow you to have fully durable, really persistent queues, and on the other hand you will have Redis, which will give you in-memory support for your queues; so if your application needs really high throughput, it will allow you that. So let's move on to how you can use and deploy Marconi in your own infrastructure. We have two ways to do that. The first one is using a single storage cluster: you have multiple Marconi nodes running in parallel in your infrastructure on top of a single storage cluster. This storage can be whatever you want, whether it's MongoDB or Redis or whatever fits your application's needs; you just have to write the plugin for Marconi, and you choose it according to your application's needs. Another way to deploy Marconi is using storage pools. This means that, just like before, you have multiple Marconi nodes running in parallel, but instead of sitting on top of only one storage cluster, they sit on top of multiple independent clusters.
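The claim semantics above (a claimed message is temporarily invisible to other workers, and the claim itself has a TTL) can be modeled in a few lines. This is a toy in-memory model to show the behavior, not Marconi's actual implementation; all names here are made up for the sketch.

```python
import time

class TinyQueue:
    """Toy model of claim semantics: claims hide messages from other workers."""

    def __init__(self):
        self._messages = []   # dicts: {"body", "claimed_until", "expires"}

    def post(self, body, ttl=300):
        # Every message carries a TTL, after which it is no longer delivered.
        self._messages.append({
            "body": body,
            "claimed_until": 0.0,
            "expires": time.time() + ttl,
        })

    def claim(self, limit=10, claim_ttl=30):
        """Claim up to `limit` unclaimed, unexpired messages for one worker."""
        now = time.time()
        claimed = []
        for msg in self._messages:
            if msg["claimed_until"] <= now and msg["expires"] > now \
                    and len(claimed) < limit:
                msg["claimed_until"] = now + claim_ttl  # hide from other workers
                claimed.append(msg["body"])
        return claimed

q = TinyQueue()
q.post({"task": "resize"})
first_worker = q.claim()    # gets the message
second_worker = q.claim()   # sees nothing while the claim is active
```

If the first worker crashes, the claim TTL expires and the message becomes claimable again, which is what makes claims safer than simply deleting a message on read.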
So you can choose these clusters to be whatever you want, and you can configure Marconi to talk to them on a per-queue basis. This means that if you want your queue to be fully durable, you place it on a MongoDB storage cluster, and if you want it to be in memory and have high throughput, you use a Redis storage cluster. So let's move on to the really great things about Marconi. The first is that, obviously, it's open source, and we all here love open source. It gives you a really simple unified API, which means you can use multiple messaging technologies. It also provides guaranteed FIFO for your queues, which is something that SQS does not provide. Obviously this feature depends on your underlying storage, but currently all of the storage backends we support offer guaranteed FIFO, and since Marconi is configured on a per-queue basis, you can make sure that your queue will be FIFO. We also provide storage pools, which allow you to use different messaging technologies at the same time and control the throughput of your applications. Another really important thing about Marconi is that it's really easy to scale: as long as you have enough nodes in your infrastructure, you can have as many Marconi nodes as you want. And as we discussed earlier with the use cases, we target Marconi to be used by OpenStack services, and since it's really lightweight and easy to install and plays nicely with everything else you have in your infrastructure, it just fits in your stack. So before we close, I want us to talk a bit about our plans for the future, our roadmap for the Juno release. Something we're really excited about having is queue flavors, which are just like the Nova compute flavors that let you configure your instances: queue flavors will let you configure your queues.
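The per-queue routing between pools described above is, at its core, a mapping from queue names to storage clusters. Here is a minimal sketch of that idea; the pool names, URIs, and queue names are all made up for illustration and are not how Marconi actually stores its pool catalog.

```python
# Per-queue pool routing: durable queues go to a MongoDB pool,
# high-throughput queues to a Redis pool.
POOLS = {
    "durable": "mongodb://mongo1.example.com:27017",
    "fast": "redis://redis1.example.com:6379",
}

QUEUE_POLICY = {
    "billing-events": "durable",   # must never lose a message
    "metrics-stream": "fast",      # high throughput, in-memory is acceptable
}

def storage_uri_for(queue_name, default="durable"):
    """Pick the storage cluster backing a given queue."""
    return POOLS[QUEUE_POLICY.get(queue_name, default)]

print(storage_uri_for("metrics-stream"))
```

The queue flavors planned for Juno would effectively let users pick an entry like `"durable"` or `"fast"` by name, with Marconi resolving it to a concrete pool behind the scenes.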
So you will just be able to create a queue and choose a flavor, whether you want your queue to be fully durable or in memory; you choose a flavor and Marconi chooses the right storage cluster for you. Also, we aim to have live migration of queues. In case you have a really read-heavy or write-heavy queue that is basically just killing your storage pool, you will want to be able to migrate it to another storage pool without any downtime for your application. Another thing we aim to do, which we already discussed, is having Redis support, which will give us in-memory queues. And further in the future, we aim to have AMQP support. We have had a lot of discussion about this topic upstream; it was also planned for the Juno release, but we had some problems. So if you have some knowledge around messaging and AMQP and unifying technologies, we'd really love to hear from you and for you to join the discussion upstream. So after this talk, I really hope you got a clear view of what Marconi is and what it actually aims to do. If you're already using Marconi, we'd really love to hear your feedback, and if you have some new and interesting use cases after this talk, we'd love to hear about them. You can contact us. So that's it. Do you have any questions? Any questions at all? Okay. Well, thank you very much. Thank you. Here we go. Yeah. So currently, Marconi is used in production in the Rackspace cloud service. We're not really familiar with any other production environments, but it is production-ready, so you are welcome to install it and try it. Hi. So while I understand that Marconi is supposed to be a core building block of OpenStack and to meet some special requirements, what else makes it any different from other queuing solutions? Because you started your talk by saying it's not a competitor to, let's say, AMQP or Kafka or other messaging technologies.
So how is it not yet another queuing and notification service in competition with AMQP and Kafka and so on? So it is a queuing and notification service; it is not another message broker. That's what I mean by that. It basically gives you another level of isolation on top of your message broker, so your application can use a really simple, high-level API and not deal with the underlying messaging technology's API. If you're currently using an AMQP broker and you decide it's not good enough for your application's needs and you want to change it, you can just deploy another messaging technology; let's say, for instance, you install MongoDB because you want a fully durable queue. You deploy the new technology and you don't have to change any of your application's code. Well, with AMQP being an open protocol, I can use RabbitMQ or whatever other service, which uses whatever it wants as storage, with whatever reliability I need from that software. I know that Rabbit is the most popular one, but not the only one. Yes, so you will be able to use AMQP and also other technologies. That's the purpose. Okay. Hi. So when you say that Marconi could be used to let different top-of-the-stack services talk together, does that mean we'll have a new driver in oslo.messaging? Actually, I don't really have a short answer for that, so I'd really like for us to talk about it later. Okay. Thank you very much. Any more questions? Okay. Thank you very much. Okay. Thank you.