Okay, we're going to go ahead and get started, if you want to take your seats.

My name is Kurt Griffiths. I'm an architect at Rackspace Hosting, and I work on the Marconi project, which is the codename for OpenStack's message queuing service program. Today we'd like to give you a brief overview of the project. We're going to walk through the API and also talk a little bit about the architecture. We've also got a live demo prepared for you so you can see Marconi in action, and we'll finish up with some frequently asked questions. Then we'll have some time, of course, for y'all to ask us whatever's on your mind.

In the late 1800s, a young Italian by the name of Marconi began to play around with radio electronics in the attic of his family's home. He would go on, along with a few other visionaries, to revolutionize the way the world communicated. Wireless had two key advantages over the alternatives. First, it allowed the sender of the message and the receiver to be loosely coupled. Second, it afforded a wide variety of communication patterns; it was very flexible, and you could apply it to a lot of different problems. With the Marconi project, we hope to bring these same kinds of advantages to cloud applications communicating through the internet. We envision Marconi applying to a variety of application domains: anything from e-commerce to software as a service to social applications.

So let me take you through some of the operations you can do with the API. These are the things a cloud application developer would use in order to do some of the things we just mentioned. But before I get into the details, it's helpful to understand that the Marconi API is based on two fundamental concepts. First, you have a message feed. You can kind of think of this like an RSS or Atom feed, where you can post messages and list them. As you list messages, other people can see them as well.
So you're not hiding them from anyone. There's also the notion of a claim, and a claim is where I can grab a batch of messages and they're mine. You would use this for job queuing, where you have workers who are processing things and you only want one worker at a time to be working on a single message.

So those are the basic concepts, and we've actually put both of these in the same API. That enables hybrid messaging patterns. For example, you might be processing billing invoices, maybe converting them to PDF files. You can have an auditor process that just watches the feed of messages fly by and logs them, so if you ever have a problem, or a customer calls and says, "Hey, I missed my invoice," you can go back and track that. That's just one example of how you can use this hybrid model.

To make this a little more concrete, out of those concepts the API has a notion of queues (you can think of these as topics), and then of course we have messages, and then claims, which I already mentioned. So these are pretty straightforward. Now, each one of these has different kinds of operations: a lot of what you would expect, and maybe a few things that you don't. So let me take a couple of minutes and walk through the different operations you can do on each resource.

First up we have queues. You can of course create a queue. You can list queues, which can be useful for auto-discovery, for example. You can certainly delete them. One interesting thing you can do with a queue is set metadata on it. Going back to the invoicing example, you could put a template into that queue metadata, and then your worker could pull that out and use it to construct the PDF file that you would then send to your customer. And then you can also get statistics for a queue: you can find out how many messages there are total, how many are claimed versus free, and you can find out the oldest and the newest message. We think this is going
to be really useful for auto-scaling, as well as just getting better insight into how your queues are behaving and whether or not they're getting backed up.

So here's just a quick example of how you create a queue. You can see I'm able to choose the name of that queue, and going back to the earlier example, I'm going to call it "invoices". Here I'm going to set that template I talked about before; this is just a very simple example. And then of course I can get that back: my worker would need to get that template out of the queue metadata, so here's an example of how you might do that. Of course, I can always list those queues, and if I pass an optional query parameter, detailed=true, I can get the metadata there as well. This can be useful if I'm trying to auto-discover what kind of queues I'm dealing with, or you can imagine this being useful for Horizon or some other kind of control surface. And then of course you can delete a queue.

For message operations, again, it's kind of the usual suspects. You can post messages, you can get a single message, and you can list messages, much as you would with an Atom or RSS feed, with pagination. And of course you can delete messages. I'll just show you briefly what a message looks like. You set a time to live, which controls the lifetime of that message; here I'm setting it to five minutes. And then I'm setting an arbitrary payload; as long as it's valid JSON,
you can submit whatever you like, so it's up to the application to interpret it. Here I'm just setting some information you might find useful in generating an invoice. There's a quick example of posting that and getting back the link to the posted message. And here's an example of how I might list those messages. One thing I'll call out: as you can see, the pagination is there, using a RESTful design. We've tried really hard to make this a clean, modern HTTP API.

All right, our last one is things you can do with claims. Of course you can create a claim. When you create a claim, it goes to the head of the queue and finds a certain number of messages; so if I ask for five messages, it'll find the first five messages that haven't been claimed by anyone yet and return them to me. I can always read a claim directly. I can refresh a claim, because claims can actually expire. And I can also delete claimed messages, et cetera.

Really quickly, here's how this looks. I'm creating a claim, and I'm saying I want it to live for one minute; I think I can process all the messages there within a minute. That's useful because if my worker crashes, the system will automatically release that claim and the messages can be reclaimed by another worker. Here I'm passing a limit of two, so I'm asking for a maximum of two messages. Once I get those messages, I'll do my processing; in the example, I'll create that PDF file, maybe email it out, and once I'm done, I'll delete the message so that it doesn't end up getting processed by a different worker. You can see there where I'm passing the claim ID; that ensures that only the worker that created the claim is able to delete the message.

Okay, so that was a very quick walkthrough. If you want to dig into the details, go to our wiki page. We are planning to do some proper user documentation, but right now everything's just kind of on the wiki.
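The claim lifecycle just described (claim a batch with a TTL, process, delete with the claim ID, automatic release when the claim expires) can be modeled in a few lines. This is an illustrative in-memory sketch of the semantics, not Marconi's implementation:

```python
import itertools
import time

class ToyQueue:
    """In-memory sketch of Marconi-style claim semantics (illustrative only)."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.messages = {}  # msg_id -> {"body": ..., "claim": claim_id or None}
        self.claims = {}    # claim_id -> expiry time (epoch seconds)

    def post(self, body):
        msg_id = next(self._ids)
        self.messages[msg_id] = {"body": body, "claim": None}
        return msg_id

    def _unclaimed(self, msg):
        # A message is free if it was never claimed, or if its claim has
        # expired; this is what lets another worker pick up after a crash.
        cid = msg["claim"]
        return cid is None or self.claims[cid] <= time.time()

    def claim(self, ttl, limit):
        claim_id = f"claim-{next(self._ids)}"
        self.claims[claim_id] = time.time() + ttl
        grabbed = []
        for msg_id, msg in self.messages.items():  # head of the queue first
            if self._unclaimed(msg):
                msg["claim"] = claim_id
                grabbed.append(msg_id)
                if len(grabbed) == limit:
                    break
        return claim_id, grabbed

    def delete(self, msg_id, claim_id):
        # Only the holder of the claim may delete the message.
        if self.messages[msg_id]["claim"] != claim_id:
            raise PermissionError("message claimed by another worker")
        del self.messages[msg_id]
```

For example, claiming with ttl=60 and limit=2 against three posted messages grabs the first two; a second claim picks up the third; and deleting requires the matching claim ID, just as in the talk.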
With that, I'd like to invite Flavio Percoco up from Red Hat. He's going to discuss the architecture and some of the deployment options that you have with Marconi.

Hello. Thanks, Kurt, for the introduction. So yeah, as he said, I work for Red Hat. I'm an engineer there, mostly working on the storage side, but I'm also working on Marconi as a core developer. Anyway, I will now go through the whole architecture. I'll give you a really quick introduction to Marconi's architecture and how we built this whole thing.

This is a very high-level overview of what we have right now. Basically, it's split into three different layers that we can play with as if they were Lego bricks. The first layer is the transport layer. We created this because we wanted people to be able to deploy Marconi and use different transports with it. So those of you who don't like using HTTP for messaging can use some TCP transport and talk to Marconi through TCP, or use ZeroMQ as a way to send messages to Marconi, and Marconi will then process those messages and do something useful with them.

We then have the API layer, which is where we define the whole API. This is mostly a logical representation of how Marconi works; we will split this into three different packages in the next few months, but the API and the transport layer are in the same package right now. The API layer is where the API is defined: it's basically a couple of classes and specs that represent the whole Marconi API, and it allows you to have validations and pipelines, and a way to expose the API through different transports. When you start Marconi and it loads the transport, the transport will read the whole API spec from the API layer, create the endpoints, and expose those endpoints
dynamically. It will also allow us, at some point, to have extensions, so that people can create different endpoints and plugins based on their own needs that may not make sense to have inside the code base.

Then we have the storage layer; that's where the whole persistence of messages happens. Right now we have a NoSQL back end, which is based on MongoDB, and an SQL back end, which is based on SQLite. We are replacing the SQLite one with a SQLAlchemy-based one, so we'll have support for MySQL, PostgreSQL, SQLite, and whatever else SQLAlchemy can support. In the next couple of months, we'll also be working on the first AMQP back end for Marconi.

So basically the storage layer allows you to play around with which back end you want to use and which back end you want to have in your infrastructure. One of the philosophies we have in Marconi is that we don't want to be invasive. We want you to be able to deploy Marconi with whatever you are already using. We don't want you to get a new storage system, get a new broker, and have to learn that broker and yet another new API. You already know how to deploy, let's say, Qpid; you already know how to have HA with it and how to distribute it in your infrastructure. We just want to give you a nice API on top of those technologies that you can develop things on, and then make them portable at the same time.

So that's basically what the architecture of Marconi looks like. It's very simple; it's very straightforward. There's no magic in there.
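The pluggable storage layer Flavio describes boils down to a small driver contract that each back end implements. Here is a toy sketch of that idea; the method names and signatures are made up for illustration and are not the project's actual interface:

```python
import abc

class StorageDriver(abc.ABC):
    """Hypothetical shape of a storage driver contract; method names here
    are illustrative, not Marconi's real interface."""

    @abc.abstractmethod
    def create_queue(self, name, metadata):
        ...

    @abc.abstractmethod
    def post_messages(self, queue, messages):
        ...

    @abc.abstractmethod
    def claim_messages(self, queue, ttl, limit):
        ...

class InMemoryDriver(StorageDriver):
    """A trivial driver; swapping this for a MongoDB- or SQLAlchemy-backed
    class is the kind of substitution the storage layer is meant to allow."""

    def __init__(self):
        self.queues = {}

    def create_queue(self, name, metadata):
        self.queues[name] = {"metadata": metadata, "messages": []}

    def post_messages(self, queue, messages):
        self.queues[queue]["messages"].extend(messages)

    def claim_messages(self, queue, ttl, limit):
        # ttl is ignored in this toy; a real driver would track claim expiry
        return self.queues[queue]["messages"][:limit]
```

The transport and API layers only ever talk to the abstract contract, which is why a deployment can point Marconi at whatever storage the operator already runs.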
There are just interfaces. I know that word is ugly, but there are some interfaces there that you can use to create your own transport, or create a back end for your own technology, or for the one that's missing in there.

I'll now go through some ways of deploying Marconi. Just like the API, deploying Marconi is very simple; like I said, it's like playing with Lego bricks. You can pick whatever you want, tell Marconi where it is, and Marconi will talk to that storage, or expose the transport you choose.

The first way to deploy Marconi, the simplest one, is single storage. You have a Marconi instance, or several Marconi instances that you can scale out horizontally, all talking to one single storage cluster. You have your MongoDB cluster, your MySQL cluster, or whatever you're using; you point Marconi to that cluster, and you scale the cluster as much as you want.

The second way to deploy Marconi is to have several different back ends deployed at the same time. Basically, Marconi has this really cool feature called partitions that we introduced. It allows you not just to scale, for example, your AMQP broker by having different clusters of it and saying these queues will go to cluster A and this other list of queues will go to cluster B; it also allows you to have different back ends at the same time. You can deploy, say, a Redis cluster along with a MySQL cluster along with a RabbitMQ cluster, based on your needs. It's more work for DevOps and the whole deployment process, but you may find it useful. One use case for that is having all your non-persistent messages go to Redis, for example: messages that you don't care about losing. Let's say Redis fails, or whatever; you don't care about those messages.
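The partition idea ("these queues go to cluster A, those go to cluster B") is essentially a routing table kept by the operator. A minimal sketch, where the cluster names and queue assignments are all made up for illustration:

```python
# Hypothetical operator-side partition map: each partition names a cluster
# and the set of queues routed to it. Nothing here is Marconi's real config.
PARTITIONS = [
    {"cluster": "redis-a", "queues": {"metrics", "events"}},   # fast, loss OK
    {"cluster": "mysql-b", "queues": {"invoices", "orders"}},  # durable
]

def cluster_for(queue, default="mysql-b"):
    """Route a queue to the cluster its partition names, else the default."""
    for part in PARTITIONS:
        if queue in part["queues"]:
            return part["cluster"]
    return default
```

With a table like this, throwaway metric streams land on the fast cluster while invoices land on the durable one, which is exactly the mixed-back-end deployment being described.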
So you have these non-persistent messages there that have really high performance, because Redis is fast, and then if you do care about certain messages, you can move them to some persistent back end like MySQL, whatever you choose. So it basically allows you to do many things.

And then you can always scale Marconi horizontally: you can start many instances and put an haproxy on top of them, which is basically what's in this slide here. We have split the admin API from the public API, so you can bind the admin API to some private network, to some private interface, and then expose the whole public API. The admin API basically does whatever the public API does, plus it has some admin endpoints that you can use to monitor your Marconi instance and what's happening in your infrastructure. And then, like I said, you can have an haproxy on top of those public APIs and balance things, and if you need to scale more, you can start more Marconi instances, tell them where the storage is, and they will find it and talk to it.

So basically you have two areas to scale. If you want to make Marconi scale, you have to scale the Marconi instances, and you also have to scale the back ends. If you already know how to scale those back ends, you just have to start more Marconi instances and you're done.

And with that, which was very, very quick, I'll introduce you to Alan, who's an engineering director at Rackspace. He's going to demo Marconi for you and explain what they have done in their cloud.

Yeah, my name is Alan. I'm the director of engineering for Rackspace's Atlanta office. We're the team that has been responsible for taking the Marconi API from OpenStack and actually creating a cloud queues product. So that's what Marconi is known as in Rackspace.
It's a product called Cloud Queues. What we've done is basically connect Marconi up and give it a presence on our standard control panel. Over here on this side we actually have the Queues tab. Rackspace has six data centers, so just for the purposes of this demo, I've created a queue in each of those six data centers. We have queues in Dallas, Hong Kong, Northern Virginia, London, Chicago, and Sydney; we use three-letter airport codes to designate our data centers.

We've also tied this into our software development kits at Rackspace, with client bindings for Python, Ruby, Java, and .NET at this time. So I'm going to do a real simple demo, just to run some messages through. This is actually in GA; we announced the production release on Monday.

What I've done is basically create a queue in every data center, and I also have a server running in every data center. If you look over here, I've got servers that go by the same names. So what's going to happen is I'm going to send, say, a hundred messages... oops, my SSH connection here broke, I'm sorry. All right, so now we're back.

So I'm going to run it. This is just a simple little utility I wrote to make things easy. I want to send 100 messages to London. What this is going to do is basically put 100 messages into the London queue, and I have these servers set up so that each one pulls messages from its queue and sends them to the next queue in the chain. So basically we're going to send a hundred messages around the world. We just connected to London; if we go back to the queues view here, we should see some of these messages start to run through. There are eleven messages in flight in London. These are going to go really fast.
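The demo's around-the-world relay is easy to sketch: each data center's worker claims messages from its own queue, records the visit, and posts them to the next queue in the ring. In this toy version the claim and delete steps are collapsed into a single pop, and the airport-style codes are assumed from the talk:

```python
# The six demo queues, keyed by airport-style code, in chain order:
# London -> N. Virginia -> Chicago -> Dallas -> Sydney -> Hong Kong.
CHAIN = ["LON", "IAD", "ORD", "DFW", "SYD", "HKG"]
queues = {dc: [] for dc in CHAIN}

def worker_step(dc):
    """One pass of a demo worker: take each message from this data center's
    queue, record the visit, and forward it to the next queue in the chain
    (claim + delete are collapsed into a single pop here)."""
    nxt = CHAIN[(CHAIN.index(dc) + 1) % len(CHAIN)]
    while queues[dc]:
        msg = queues[dc].pop(0)
        msg["visited"].append(dc)
        if nxt != CHAIN[0]:  # stop forwarding once the loop closes
            queues[nxt].append(msg)

message = {"visited": []}
queues["LON"].append(message)
for dc in CHAIN:
    worker_step(dc)
# message["visited"] now lists every data center in chain order
```

In the real demo each hop is a claim against one Cloud Queues endpoint followed by a post to the next region's endpoint, with a hundred messages in flight at once.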
So if I go back here: basically we're going from London to Northern Virginia to Chicago to Dallas to Sydney, and we should end up in Hong Kong. Looks like we've got 69. I'm not going to wait for all of those to show up; they should get there eventually. But let's just go ahead and take a look at what's in one of these. Python... okay, so we have all hundred now. Show message... so here's the first message in Hong Kong, HKG, and there we go. That's the content of the first message in the queue. Basically, every time it visited a data center, the server there posted a message that said "I was here" and forwarded it to the next queue.

So that's a quick little demo. I think we're going to come up and answer some questions. Kurt's got some frequently asked questions to walk y'all through, and then we'll take questions. Come on back up.

All right, so as you saw, Marconi is real; it's ready for production, and we are continuing to work on it and refine it, so we're really excited for the future there. And just a big thank you to the folks at Red Hat, and the interns who have been helping out, and all the folks at Rackspace, for getting the project to this point.

I'll bring this up here. Before we get to the Q&A session, I just wanted to cover a few questions that a lot of people ask us, and then we're happy to take some general questions from you. The first one we get asked a lot: are we going to be able to plug Marconi into our existing RabbitMQ infrastructure? As Flavio said, we're trying to work with whatever you already have deployed, whatever you already have expertise in. We actually had a design session on this earlier today, and we're looking at planning that for our version 2 API.
So that'll be coming later next year. It looks like we want to do this for a back end; we're not sure if an AMQP transport would be useful, so we'd love to hear your use cases. Come talk to us and let us know what you would like there. And of course, if you have engineers who would like to contribute to the project, we're always happy to help them and take their patches.

Another question we get is the obvious one: how does this compare to what Amazon offers? Amazon has its SQS and SNS services, and we are targeting similar workloads to what they have. One difference, though, is that we have a unified API. SQS and SNS are two completely separate services and APIs; with Marconi, we're trying to tackle those in the same API, but in a more flexible manner. So instead of saying you have to do work queuing this way, and then if you want to send email messages out when something happens you do it that way, we want to give you more of a flexible, creative platform and see what you can come up with.

Just a couple of quick differences: we actually do have guaranteed first-in, first-out for a single producer, and once-and-only-once delivery, which, at least with SQS, Amazon does not guarantee. As Flavio mentioned, that's probably going to depend on what kind of back ends you deploy, but for some of our drivers we will have that available for people who would like it. And of course, since it's open source, and you saw that architecture.
It's very customizable.

Just a couple more questions we get. People wonder how mature it is; you saw that we're actually running this in production at Rackspace today, and we have a growing community of contributors and users. I know Flavio has been really active in the broader community, going to meetups and conferences and talking about Marconi, and we're getting more and more contributors outside Rackspace and Red Hat, so we're really excited about that. We've also got an intern from the GNOME Outreach Program for Women who's going to be doing some work for us on our API. So we're moving along, we're growing, and we're excited about the future.

We have some things coming up. We're going to polish up our 1.0 API over the next few months and release a 1.1, so that by the time we're integrated with Icehouse, everyone has a nice clean API to start out with, because we envision that API being around for a number of years. We are looking at additional storage drivers; right now, as Flavio said, we have a MongoDB driver and just a basic SQLite driver for testing, but we're looking at all kinds of other options there, for example Redis or AMQP, for different types of workloads. We're also getting ready to merge in storage sharding, which allows you to do application-layer sharding: I can have multiple Mongo clusters or MySQL clusters, and I can distribute queues across them, and that will allow us to scale to massive proportions, which is really important when you're running a public cloud.

Just a few other things: on the notifications front, we're looking at adding a few more things that will make SNS-type workloads possible, connecting messages and alerts to things like text messages or push alerts for phones, or sending out emails. And then we just have a few other things going on; we're working with the Keystone team to add message signing and some other neat stuff. So we invite you to get involved.
Let us know what you want, and help us design the future of Marconi. All right.

Yes. Yeah, let's get Alan and Flavio up here. There we go.

Currently we have AMQP, as such, in RabbitMQ; we usually call it a protocol, and you call it storage. It does support calling, casting, those kinds of topics, and all that. So, are you externalizing it? That's one question. And second, if you externalize it, can I do whatever I want to do within OpenStack, if I want to create another independent module like the one you have created? You will have your own queue based on AMQP right now, I assume.

Okay, yeah, there's a little confusion on that, because the support that we're talking about for AMQP is actually using it as kind of a persistence layer, which is a little strange, because you usually think of a database or something for that. We are actually providing a general abstraction over other queuing protocols. It's kind of up in the air whether we would do an AMQP transport, because it doesn't cleanly map to the semantics that we have defined today. But we'd love to talk about that and find out your use cases. Does that make sense?

Oh, I see. Yes, there are some other benefits besides the API or the protocol you use to talk to Marconi. With Marconi you have portability that you maybe won't have if you want to change from one broker to another broker, or move your application from one point to another point. What Marconi gives you is actually cross-cloud portability: you can take the same application you used to talk to one OpenStack cloud, one OpenStack infrastructure environment, and make it talk to another one where Marconi is deployed as well. So it's not just about a protocol. We can support a protocol.
There's a way to do that. We have transport plugins, so you can create a transport for AMQP if you want, and you can create a transport for, I don't know, S3, because you may want to use both to talk to Marconi. Then you just make it interpret that stuff, pass it to the API, and go down the whole chain. But it's not just about that. There are ways to make it even more portable, because if you already have an application that talks to an AMQP broker, having an AMQP transport will give you portability from that model, from that application, to Marconi.

Yeah, I mean, at some point you've got to scope it. One of the major things you get is a clean, modern HTTP API that works with non-persistent connections. One of the things you have trouble with with AMQP is going across network boundaries, going through firewalls. Also, for scaling, it's extremely difficult to get, say, a hundred thousand clients talking to a single AMQP cluster, but you could do that with Marconi using the HTTP transport. So like I said, I don't know if it makes sense to have an AMQP transport, because at that point maybe you might as well just deploy something like Qpid or RabbitMQ. But we are still figuring that out, so that's great feedback. Thank you. Other questions?

I have two questions. The first one is that you told us you are able to shard between different back ends, and I understood the use case where, for example, some queues need to be persistent and some don't. My question is, and what wasn't clear to me, is how you do the sharding. Is it the role of the Marconi server to shard, or is this something that the user has to explicitly specify? That's something that's a bit unclear.
Okay, let me clarify that for you. The sharding is really not something we necessarily want to surface to the user. There are two uses for sharding; I think this will help clear it up. One is simply to scale horizontally: if I want to have a hundred MongoDB replica sets, for example, and I want to just keep growing, I can use the sharding feature to distribute queues across all of those back-end storages. So I can just keep adding and adding and adding; that's one thing.

Now, it's almost a side effect, but it does open the possibility to have, like we said, heterogeneous back ends. So I could maybe have a pool of Redis clusters and a pool of MongoDB, and then the user can specify what kind of a queue they want. They don't know that it's going to Redis or MySQL; they just say, "I want super fast, but I don't care so much about FIFO or persistence," and okay, we put that on Redis. Or, "I want super high durability, and I'm okay with some of the trade-offs," so we'll put that on the MongoDB cluster.

So that was my question, yeah: is the kind of back end exposed to the user? So he could, let's say, state which kind of back end he wants, and then leave the implementation up to Marconi, as you said. Do I have any way to specify, when creating a queue, which kind of "super fast" back end I want, that kind of thing?

That's not in the API today, but I can imagine a sort of queue-flavor concept in the future, where when I create a queue I can say I want this flavor.

I just wanted to add something to what Kurt said. The whole sharding stuff in Marconi does not mean that Marconi will split your data between two different back ends. When you create a queue in a back end, the queue will exist in that back end, just there. The messages going to that queue won't be split across several back ends, because we wouldn't be able to guarantee FIFO otherwise, right?
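These answers can be summarized in a few lines of code: a queue is placed whole in exactly one back end (which is what preserves per-queue FIFO), and the hypothetical "flavor" would be the only thing the user ever names. A sketch, with all names made up; nothing here is in the API today:

```python
# Sketch of the hypothetical 'queue flavor' idea discussed above.
# The user names a flavor at creation time; the operator maps flavors to
# back ends. Each queue lives entirely in one back end.
FLAVORS = {"fast": "redis", "durable": "mongodb"}  # made-up mapping
_placement = {}  # queue name -> back end, fixed once at creation

def create_queue(name, flavor="durable"):
    backend = FLAVORS[flavor]
    _placement[name] = backend  # the whole queue lands on one back end
    return backend

def backend_for(name):
    # Every message for this queue is routed to the same back end,
    # so per-queue FIFO guarantees are never split across stores.
    return _placement[name]
```

The user only ever sees "fast" or "durable"; which store actually backs each flavor stays an operator decision, matching the answer above.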
So the whole idea is: if your storage supports sharding, that's something you handle on the storage side. Since MongoDB has sharding, if you want to shard MongoDB, you do that at the MongoDB level. For Marconi, there are separate clusters that it can talk to, and you will simply say, these queues go here, and the other queues go on the other side.

Yeah, so the application-layer sharding is happening at the queue level, not the message level. You can use the native storage sharding if you want to do message sharding.

Got a question over here. I was wondering what languages you support; do you have client bindings that you ship?

So yeah, right now we're working on a Python library; obviously the first one is the most complete one. There's support for queues, and there's support for messages already, and we're working on the claim part; those patches should land in the next couple of weeks. There's also some hacking on a Haskell library, but that doesn't have much working yet; there's some work going on there. I hope there will be more libraries very soon. Rackspace, as part of its SDKs, like Alan mentioned, does support Java, .NET, Ruby, and Python right now. So hopefully we can contribute some of that. I believe they should work with generic OpenStack clouds; they're not necessarily Rackspace-specific. And we hope the community will contribute more there, for sure. Other questions?

How easy is it to jump on the project? Do we have to deal with things like DevStack, or is it easier?

Let's see, how easy is it to jump on the project? It's definitely one of the simpler projects in OpenStack; you can get up to speed very quickly with the code. We have a fairly small team right now. We're very active in the IRC channel, you can see that up there, and we have a good time there.
So, do you want to talk about the experience, maybe, of our intern and how she's doing?

Sure. I think the first thing you have to figure out is where you want to work in Marconi. It's not that if you are working on the transport layer you won't be working on the back-end layer; you still have to know the whole thing, but we try to keep people focused on some areas, so we don't mess up with conflicting patches, and we can keep the knowledge of what we're doing a little more isolated. But we still talk about everything in the IRC channel anyway.

Once you do that, you can jump on Launchpad and start grabbing some bugs. There are tons of bugs, because we create bugs even for the smallest things; we want to remember them, and we want to have low-hanging fruit for people jumping into the project. That's basically what we do: we go there and file things, and if we haven't fixed them, a good way to jump into the project is to go in there, pick one of those bugs, and start fixing it. Another good way to jump into the project is to start working on the client side, I mean the client library for Python. It's there; it's still missing some pieces, like claims, but it will definitely give you the whole picture of the project. Once you know how the project looks and what the API looks like, it will be straightforward to get into the project and start coding on it.

So, we're out of time; we've got one more question here.

Do I understand correctly that Marconi will be used to collect the notifications that are generated by the different OpenStack components? That is the idea, isn't it?

So, the idea is not to replace what OpenStack is using right now for messaging; it's not to replace the RabbitMQ instances or the Qpid instances OpenStack itself is using. It is possible.
But that's not the idea. The idea is to have a queuing service for OpenStack. We will have integration with Ceilometer and Heat and Horizon, because those projects want to use Marconi for notifications and for queuing, but that's not what it will be used for by the whole infrastructure. The Ceilometer project and those folks are interested in surfacing some notifications to users. So we are very focused on cloud users, the application developers.

All right, thanks everybody.