Cool. Well, I'm Jeremy Klein. You can try pronouncing my name, but I really don't worry about it. That's fine. It's a French thing. We'll be talking about messaging today. Thanks for waking up this early and coming here on a Saturday on top of that. So thank you. So yeah, this is who we are. We already introduced ourselves. Basically, until recently, both of us were on the Fedora infrastructure team, and right now he's left the boat, I'd say, but we're still working on that. I'm still hanging on. I can't escape. So I still do Fedora infrastructure stuff. Yeah, I would describe myself as a reluctant messaging enthusiast. So yeah, that's what we do. Yep. I'm also maintaining the mailing list stack for Fedora — so Mailman 3, HyperKitty, Postorius, that stuff. And I've been involved with the messaging system that we use since last summer, I would say. What are we going to talk about? Well, I don't know if you want to take the first one. Yeah, yeah. So this is an introduction to messaging. We're going to talk about why you would want to send messages. I'll give you some advice on how to pick a protocol — maybe not the right protocol the first time, or the second time, but I'll tell you about all your options. I'll tell you a little bit about how to design a hopefully good message, and how to fix it when you don't do that right, because you won't do that right. We didn't do it right several times. Then we'll talk a little bit about reliability and what your options are there. And then we'll spend some time talking about what Fedora did, because we've been doing this for a couple of years now and we've had a lot of problems. So we're going to tell you about those problems, and then you can not make those mistakes and make interesting new mistakes instead. So that's the outline. Let's start with this. Yeah, so why would you want to send messages?
The basic answer is that you want to offload your problems. You've got a lot of machines and one machine can't do it all. So you want to do your work somewhere else, and so you send messages around. It's a good way to separate out your problems, though you do have to deal with network partitions sometimes. So, things like making work queues: if you've got a lot of CPU-intensive work but you want to stay responsive, you can queue up that work and then do it in a distributed fashion. Microservices are really hot right now, and those all have to talk to each other in various ways. Or things like remote procedure calls — those are old and well-known and useful. The way we specifically use it is to stop other people from telling us about their problems. We do this with a publish/subscribe approach. So applications publish events that happen, and then other applications can freely listen to those events and do whatever they want. This is great because we've got a lot of people working on various different applications and they don't have to work so closely together. It's a little bit of decoupling. And then we also have the community: they can listen to all these messages and do whatever it is they want to do. The problem is that offloading your problems is a problem. It's actually pretty hard. Communication is hard. Doing it reliably is very difficult. There's a lot of uncertainty, but there is certainly latency. So you have to deal with all the things that come with using a network. And all those things must come to an end at some point: network connections, hardware, the universe. So there are failures all over the place, and you have to at least be aware of them. You can choose not to deal with them, but that is also a choice. So it's actually kind of a difficult problem. And there are a lot of messaging protocols out there. There's a huge list — AMQP and so on — I'm not going to read them all.
You could just do all your messaging by opening a TCP socket and writing back and forth between those sockets. But it turns out that's pretty complicated and there are a lot of things you need to account for. So people thought a lot about that and built all these protocols on top of such things, and I'd advocate that you use other people's work. I think the most important thing to consider is the reliability of the messages you send. You can break it down into a couple of different classes. The first is at most once: you may get the message or you may not get the message, but if you do get it, you'll only get it once. If you take this approach, your messages will definitely get lost, and you should plan for that, even if your plan is to not do anything about it. The second approach is at least once: there's some detection in there to see whether the recipient got the message, and it will keep resending until the consumer acknowledges it — until the consumer says, "I did what I wanted to do and I'm done with this message." The problem there is that you'll get duplicate messages and you'll have to deal with that in some way. You could do deduplication or be idempotent, but you're going to have to think about it. Ideally you could get the message exactly one time, and everything is perfect and you don't have any problems — but that turns out to be pretty hard. And the last option is that you get the message zero or more times, which is kind of the top two together; I don't think you really want that, but you could have it if you wanted, so that covers all the options. Each protocol gives you one or more of these, so if your messaging needs to be really reliable, that's going to drive your protocol choice a little bit.
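These delivery classes can be sketched with a toy Python simulation. The function names and the "channel" model here are ours, not from any real protocol library — it just makes the loss-versus-duplicates trade-off concrete:

```python
# At most once: fire and forget. If the channel drops the message,
# the producer never finds out and the message is simply gone.
def publish_at_most_once(message, channel_up, inbox):
    if channel_up:
        inbox.append(message)

# At least once: resend until an acknowledgement comes back. If a
# delivery succeeds but the *ack* is lost, the producer resends and
# the consumer sees a duplicate.
def publish_at_least_once(message, attempts, inbox):
    """attempts: one (delivered, ack_returned) pair per send attempt."""
    for delivered, ack_returned in attempts:
        if delivered:
            inbox.append(message)
            if ack_returned:
                return  # producer saw the ack, stop resending

inbox = []
publish_at_most_once("build.complete", channel_up=False, inbox=inbox)
print(inbox)  # [] -- the message was silently lost

inbox = []
# First attempt is delivered but the ack is lost; the retry succeeds.
publish_at_least_once("build.complete",
                      attempts=[(True, False), (True, True)],
                      inbox=inbox)
print(inbox)  # ['build.complete', 'build.complete'] -- a duplicate
```

The duplicate in the second run is exactly what a consumer has to be prepared to handle under at-least-once delivery.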
And I thought about listing all the various protocols and what they offer you, but it would be a lot of text on the slide. So my recommendation would be: if you're going to do something like this, carefully read through your options, read the specs — it's not super fun, but really know what you're getting into, because you're going to have to live with that choice for a while. Before we jump to the next slide, I just wanted to note something: if you have questions, you're free to ask them during the session — interrupt us, that's fine, and we'll try to answer them. Yeah, definitely. So the second thing to consider when you choose a protocol is probably its maturity. How long has it been around? Have people used it? If it's brand new and it's used in one place and there's one implementation of it, it could be fun to play with, but if you're thinking about putting it in production somewhere and relying on it, I would be a little hesitant to do something like that. Is it a standard? Are things going to change under you? And I guess the big thing is: do you have client support? A lot of messaging protocols involve a server, typically called a broker, so you're going to need to be able to talk to that broker, and typically there are client libraries. If you're dealing with more than one language, you need to make sure that every language you might be dealing with has good support for the protocol. And you might start using new languages, so if it's well-supported across languages, that's probably a good thing too. And then, like I said earlier, if there's only one implementation and it's a little flaky, that's what you're stuck with. Definitely look around and see what the reliability of those servers is, whether they're proven in production, how long they've been around — just normal software choice stuff, I think. And one more thing to consider is performance.
All those various protocols have different performance characteristics. I think we all agree that more performance is good, but a lot of times performance comes at a price, and you may not need it. Just as an example — we'll touch on it more — Fedora opted to go with ZeroMQ, and one of the big features of ZeroMQ is that it's fast. That's great, but we're not sending a million messages a second. We're not sending 10,000 messages a second. So the performance of the protocol is not as important to us, it turns out, as we thought. Maybe if we had spent some time estimating how many messages we would send, both in terms of average rate and bursts, how big the message bodies are — how much data we're really sending around — and how many participants in the network we had, we might have made a different choice. So definitely think about the performance needs of your application before you make a choice there as well. And that's a good question — there's not a nice little map that says "I'm sending this many messages a second, pick this protocol." Different implementations of the same protocol will perform differently. I would recommend doing some benchmarks just to get started: deploy the server, run simple tests, just kind of see. That gives you a chance to see how it works and what it's capable of. For example, we're using AMQP now, and the first thing I did was set it up on a Raspberry Pi just to see what the Raspberry Pi could do, and it turned out to be orders of magnitude more than what we send. Now, we had the advantage of knowing how many messages we send, because we've done it — we've made a lot of mistakes, and we've recorded all those mistakes so we could see how much we've done. So we know a little bit more about our use case than I think we did when we first started.
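A first benchmark really can be that simple. Here's the shape of a crude throughput test in Python — the `send` callable is a stand-in for whatever client library you're evaluating, and the body fields are just invented to be typical-sized:

```python
import json
import time

def measure_throughput(send, n=10_000):
    """Time n sends of a typical-sized body and return messages/second.
    Pass in a real client's publish function to measure it end to end;
    with a no-op transport it only measures serialization overhead."""
    body = {"name": "kernel", "version": "5.0", "release": "1.fc30"}
    start = time.perf_counter()
    for _ in range(n):
        send(json.dumps(body))
    elapsed = time.perf_counter() - start
    return n / elapsed

# With a do-nothing transport this gives an upper bound; a real broker
# on modest hardware will be slower, which is exactly what you want
# to compare against your estimated message rate.
rate = measure_throughput(lambda raw: None)
print(f"{rate:,.0f} messages/second")
```

Compare that number against your own estimated average and burst rates before worrying about any broker's headline throughput.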
So it's not a great answer, I guess, but I would say go out and look — a lot of people have written blog posts about their setups. On the projects' websites there are usually use cases, and sometimes they show how many messages they managed to get through with a given instance and setup. They're going to show you the best they can do, and you can evaluate from that: if the best one implementation can do is a million messages per second, and the best another can do is a billion, then you know one is maybe designed more for performance than the other. They're definitely going to claim the maximum throughput, so that's a good way to gauge: if you set it up perfectly and did a lot of tweaking and thought a lot about your design, that could be your performance. So that's also a good way to compare things. So here are just some broker names. You can get a lot of this stuff run as a service — I looked a little bit at what the offerings are; I know Amazon will run ActiveMQ for you, for instance — or you can run it yourself. RabbitMQ is a popular broker. It does many protocols — a lot of brokers do multiple protocols — I think it does AMQP, STOMP, MQTT. So the broker you choose gives you a little flexibility as well, if you want to try a couple of different protocols and see what features you like from each one. ActiveMQ does even more than RabbitMQ; I think it does anything you could ever want. ejabberd is actually an XMPP server, but it also does MQTT. And you could use XMPP to coordinate messages and do pub/sub — actually, I know that several projects have done that in the past. So even if a protocol is designed more for chat, or used more for chat, it's still messaging. I don't know that I would recommend doing that, but you could.
And then there are things like Kafka, which I don't know a ton about, but I do know that it's aimed more at high performance and sacrifices a lot of the flexibility and expected features, I think, of message brokers. So if there are questions about that, I can't help you much there. There are talks about Kafka — there was one yesterday, I don't know if you went to it, and I think there's one today too, maybe one on Sunday. So if you want to hear about it, some people here work on it. Yeah. Cool. And with all these brokers there's, again, reliability. The protocol makes certain reliability promises, but individual brokers, of course, have different ways to deploy them. And you definitely want to think about what your deployment is going to look like. Is it going to be highly available? Do you need it to be highly available, or is downtime okay every once in a while? A lot of times the performance is tied to how it's deployed as well. So — and I made these slides with a little bit of Fedora infrastructure in mind — we use ZeroMQ at the moment, and it doesn't have a broker. But it gives you a lot of tools so you can kind of build your own broker, if you choose not to use one. I was not around when this decision was first made, but the reason given was that a broker introduces a point of failure. And that's true, it does introduce a point of failure. But it's only one point of failure, which may sound like a bad thing until you have hundreds of points of failure. And then sometimes those points of failure fail, and you have to find which one before you can find out why it's failing. And this has really caused us a ton of trouble. So yeah, you're going to have a point of failure, and you're going to have to think about that, but at least you only have one. And a lot of brokers let you mitigate the risks of that one point of failure.
We're doing RabbitMQ, and you can deploy it in a cluster and do fancy high-availability things and failover and hot upgrades and all that stuff. So definitely think about what your reliability and uptime requirements are when you choose a broker as well. Another thing to note about going brokerless, like with ZeroMQ: even if you don't have a broker, you still have to solve all the problems the broker would have solved for you. And it turns out that's hard. The people who make the brokers do a lot of work. So that's a little bit of an intro to the infrastructure you might need to send these messages. The next big topic I was going to talk about is how to design a good message. I think in Fedora we have a lot of messages, and very few of them are in the good camp. So the first thing you have to do, if you're going to send these messages — a lot of these protocols just have you send some binary something — is pick a serialization format. There are a lot of options out there; I've listed some of them here. Again, there are pros and cons to all of them. You can do JSON; it's human-readable-ish, I guess. There are ways to write schemas for it — and we'll talk more about schemas later, but you definitely want those. I know that now. Other options are XML with XML Schema, if you like XML — I don't know if people do. And there are things like protocol buffers. Those are not really human-readable most of the time — there are some caveats there — and they also come with schemas. But the important thing is to pick one that meets your needs. If you need to send a lot of messages and bandwidth is important, protocol buffers might be a better choice than something text-based like JSON or XML. So, when we started in Fedora — and still now — we had no message schemas. Schemas are really nice because you know what's in the message, and you can validate the message when it arrives.
The upshot of not having them is that people change the schema all the time, and then the message isn't what you think it's going to be. And if your program is not well written, it will crash. And if it is well written, it will still crash, but tell you more about why it crashed. So definitely, definitely use message schemas. How you do this is going to depend on how you decide to serialize your messages. The schemas should definitely have some sort of version to them. We're tying our message schemas to topics, and that's kind of an AMQP thing — it's not necessarily AMQP-specific, it depends on the protocol, but most things do it. When you send a message in a pub/sub system, you typically add a topic to it, and this is the way people who are interested in the message hear about it. So maybe I'm publishing messages about builds and I say "build completed" and that's my topic. And then other people are interested in doing something when a build is completed, so they just subscribe to those messages, and that's all they'll get. So: have some way to version your schemas. Keep them organized. Enforce them on both sides. When you publish a message, you should check to make sure that what you published is correct. Depending on the serialization format, there are different ways to do that. But this is really nice for developers, because — especially depending on how you're generating those messages — you'll catch it very quickly when you mistakenly change the schema. That happens a lot. I think all of us in Fedora infrastructure know that you shouldn't change the messages, but if they're ultimately being generated from something like a database schema, and you write a migration for your database and forget about the message, all of a sudden it's broken and you don't notice until it's in production. And you don't notice it's broken until things don't happen when you expect them to happen, because of the many points of failure, and then you have to track all that down.
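As a sketch of "enforce it on both sides", here is a hand-rolled check in Python. A real deployment would use JSON Schema or similar rather than this, and the `build.complete` fields are invented for illustration:

```python
import json

# Hypothetical v1 schema for a "build.complete" message: each required
# field and the type it must have.
BUILD_COMPLETE_V1 = {"name": str, "version": str, "release": str}

class SchemaError(Exception):
    pass

def validate(body, schema):
    for field, ftype in schema.items():
        if field not in body:
            raise SchemaError(f"missing field: {field}")
        if not isinstance(body[field], ftype):
            raise SchemaError(f"bad type for field: {field}")

def publish(topic, body, schema):
    validate(body, schema)            # catch mistakes before they leave
    return topic, json.dumps(body)    # stand-in for the real send

def consume(raw, schema):
    body = json.loads(raw)
    validate(body, schema)            # and again on arrival
    return body

topic, raw = publish("build.complete.v1",
                     {"name": "kernel", "version": "5.0", "release": "1"},
                     BUILD_COMPLETE_V1)
consume(raw, BUILD_COMPLETE_V1)       # round-trips cleanly

try:
    # A publisher whose database migration dropped a field fails here,
    # in development, instead of crashing consumers in production.
    publish("build.complete.v1", {"name": "kernel"}, BUILD_COMPLETE_V1)
except SchemaError as e:
    print(e)  # missing field: version
```

The point is where the failure surfaces: at publish time on the developer's machine, not downstream in someone else's consumer.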
So definitely check it when you send it, check it when you receive it, check it all the time. And let's see here. Yeah — when you're modifying the schema and you want to migrate people from your older schema to your newer schema, a possible way to do that is to publish both at the same time during a window while you migrate your clients. And that's what we're deciding to do with fedora-messaging. The topic we publish on has a version. For example, I'm sending build.complete.v1, and if I want to change that schema because I want to change the organization inside it, I would publish to build.complete.v2 — and I would also still publish the older one. So if your client has subscribed to the older one, nothing changes; if it's subscribed to the new one, it gets the new one. You don't need an if clause somewhere in your code to handle it — you just get what you expect to get. Yeah, it turns out that once you deploy these messages, you will want to change them, and so you have to come up with a way to migrate, as he said. Right now we don't have a migration strategy — it's either break things, or try to deploy everything at exactly the same time. But different people are working on all these different applications and publishers, and you don't even know who's listening to those messages. So things are complicated and it's difficult. It's better to come up with a solid plan for how you're going to move from one message type to the next, and to know that you will have to do that. Also, it's nice if the schema implementation you use is cross-language and cross-platform, because one of the points of messaging is to connect different applications, which may be written in different languages. In Fedora we have a lot of Python, but we also have PHP. We also have Java. We also probably have other things — maybe Ruby. Yeah, it could be.
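A minimal sketch of that dual-publish migration window in Python — the v1-to-v2 reshaping and the `send` callable are just illustrative, not any real API:

```python
def to_v2(v1_body):
    # Hypothetical reorganization: suppose v2 nests the package fields.
    return {"package": {"name": v1_body["name"],
                        "version": v1_body["version"]}}

def publish_build_complete(v1_body, send):
    """During the migration window, publish the same event under both
    topic versions, so old and new subscribers each get their format
    without any version-sniffing code on the consumer side."""
    send("build.complete.v1", v1_body)
    send("build.complete.v2", to_v2(v1_body))

sent = []
publish_build_complete({"name": "kernel", "version": "5.0"},
                       send=lambda topic, body: sent.append((topic, body)))
for topic, body in sent:
    print(topic, body)
# build.complete.v1 {'name': 'kernel', 'version': '5.0'}
# build.complete.v2 {'package': {'name': 'kernel', 'version': '5.0'}}
```

Once the last v1 subscriber has migrated, the publisher simply drops the first `send` call.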
And then there's everybody outside of the infrastructure who listens to these messages — who knows what they're using, but they probably want to be able to understand the messages. So: JSON Schema, which is standard and implemented in different languages. XML has XML Schema, and protocol buffers come with schemas. Yeah, protobuf will spit out bindings for a bunch of languages. The important thing is that you're going to have to deal with that. So, what do you want to avoid when you're designing messages? In fedora-messaging we send JSON, so it's objects that are nested somewhat. You want to avoid nesting too deeply, otherwise it's going to be complex for the people who read your messages to get to the information they want. Try to avoid information redundancy too. For example, if you're dumping from a REST API — you sometimes see in REST APIs that you have nested objects, where you can get some object at the base level and others that are linked to it, because it's dumping from a database schema that's not normalized. So you can have the same information in multiple places, and that can cause problems in the end. Also be careful about dumping your SQL schemas without really thinking about it. Some of our apps do that: for example, they subscribe to a database change in their ORM and send a message each time there's a change. And they build the message by basically serializing their database object and sending it to the message bus. Of course, as soon as you make a migration, you break the message you send — or you send a different message. So you need to be more careful with that, because otherwise clients will not see that you have changed your schema, and clients will break. Yeah, that's an important thing. Consumers try to handle schemas that are not organized exactly as they expect, but you can't really rely on that. Some will crash, and that's not good.
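The ORM-dump problem can be seen in a few lines of Python (the row fields and function names here are invented for illustration):

```python
# Anti-pattern: serialize whatever the database row happens to contain.
# The next migration silently changes the wire format for everyone.
def message_from_row_fragile(row):
    return dict(row)

# Better: name every field that goes on the wire, so a new or renamed
# column can't leak into (or vanish from) the message without a
# deliberate change to this function -- which is where a schema check
# would then catch the mistake.
def message_from_row(row):
    return {"name": row["name"], "version": row["version"]}

# A migration adds an internal column...
row = {"name": "kernel", "version": "5.0", "internal_flag": True}
print(message_from_row_fragile(row))  # internal_flag leaks to consumers
print(message_from_row(row))          # wire format unchanged
```

Decoupling the wire format from the database schema is what makes migrations safe: the database can change freely as long as this one explicit mapping still produces the published schema.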
Using a schema on the developer side also helps you with that. And if, on the consuming side, the message doesn't match your schema, you can choose to do a lot of things, but typically we throw those messages out and send emails to people. But yeah, you have to think about what happens, because you're probably still going to mess something up and something's going to get to production. Nothing ever works, so I think that's the takeaway. Good, because we're going to talk about reliability now. So, sometimes the bus crashes. These are the different models we were talking about earlier, and I'll start with the at-most-once model. In the at-most-once model, you know you're not going to get duplicates of your messages, but sometimes you won't get the messages at all. There's no acknowledgement between the producer and the consumer, and that might be fine in some cases. I'm thinking of, for example, collecting metrics about your network infrastructure, like CPU load or disk space: you're collecting them regularly, like every five minutes, and if you lose one, it's not really too bad — losing one data point doesn't matter much, and you'll get an updated value five minutes later. So you might want to make that trade-off and allow yourself to lose messages. Yeah, that gets you a little more performance by not checking things, which helps if you need to send a lot of messages, things like that. So: there can be network failures, and you will lose your message, much like with UDP. The receiver can crash while the message is arriving — that happens, especially if you change your schema. The application start-up sequence can also bite you: for example, in ZeroMQ, the receiver connects to the producer to get the messages.
But if you're rebooting your producer, the receiver will try to reconnect periodically, and it might well reconnect only after you've published another message. So that happens. Yeah — I'm repeating what you said for the recording. The comment was that ZeroMQ has a lot of — I'll summarize it by saying ZeroMQ has a lot of caveats and problems, and you should definitely be aware of all those things before you decide to use it. And also, the receiver can crash, things can happen, but the producer will never know that something bad happened. It just sends the message, and if something goes wrong on the receiver side, it will never hear about it. So there's no API to retry your messages if something goes bad. There are solutions to this problem that you can build. You can have a heartbeat system to detect network cuts — or being cut off by a firewall, for example. A heartbeat system is basically sending dummy messages back and forth among the group of machines that participate in your network. With that, you can detect network failures faster, and maybe buffer your messages in that case. You can also build a retry system into the producing and consuming libraries. For example, you can publish a message and store it somewhere, which could be local or remote. Then when consumers restart, they can look into that store and get the messages that were published while they were down. That could work. But those local stores can have failures — you can run out of disk space, for example; it happens. The remote stores can crash. They can also fail to receive the message, because they're not exempt from failure either; they might very well lose the message that was addressed to them. So, I mean, it could work, but it's a bit fragile, and it's also very network- and computing-heavy, because you're sending messages — it's a push model — but you're also polling.
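A sketch of that producer-side retry store in Python — hedged heavily: the class and `transport` callable are invented for illustration, and the outbox itself can fail for all the reasons just described:

```python
import collections

class RetryingPublisher:
    """Keep every message in a local outbox until the transport reports
    an acknowledgement, and resend the backlog on each flush. The
    `transport` callable stands in for the real network send and
    returns True once the message was acknowledged."""

    def __init__(self, transport):
        self.transport = transport
        self.outbox = collections.OrderedDict()
        self.next_id = 0

    def publish(self, body):
        self.outbox[self.next_id] = body   # store before sending
        self.next_id += 1
        self.flush()

    def flush(self):
        for msg_id, body in list(self.outbox.items()):
            if self.transport(msg_id, body):
                del self.outbox[msg_id]    # acked, safe to forget

# Simulate a network that drops the first attempt.
calls = []
def flaky_transport(msg_id, body):
    calls.append(msg_id)
    return len(calls) > 1    # fails once, then succeeds

p = RetryingPublisher(flaky_transport)
p.publish("build.complete")
print(len(p.outbox))   # 1 -- still stored after the failed send
p.flush()
print(len(p.outbox))   # 0 -- the retry got through
```

Note what this sketch leaves out: persisting the outbox across crashes, bounding its size, and surviving disk failure — which is exactly the work a broker does for you.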
Yeah, and this is basically reinventing brokers — without a lot of thought. That's kind of what we did. Don't do that. Yeah — the comment, for the recording, was that if something says it's "lightweight," raise your eyebrows. It turns out people throw that word around a lot, and often you don't want lightweight, because there are a lot of features that come with that heaviness. There's a slide later where I run one of these heavy brokers on a Raspberry Pi, which is not exactly enterprise-grade hardware, and it could certainly handle all of Fedora infrastructure's messaging needs. So, another model, which is the at-least-once model. The trade-off here is that you know you're going to get the message, but you might get duplicates. Basically it's based on producer and consumer acknowledgements. When the producer sends a message to the broker, the broker will say, "Yep, I got this." And when you get a message from the broker, you have to say, "Yep, I got the message." In this case you'll know that the message has gone through. There's a feature in RabbitMQ — well, in AMQP actually, I think it's AMQP — called message durability, in which the broker guarantees that the message has been written to disk before sending you the acknowledgement. So you know that if the broker crashes, it will pick the message up when it comes up again. There's also another feature of RabbitMQ called mandatory messages. It's a bit that you flip when you send a message; if you activate it, the broker will tell you if the message was not routable — if it couldn't deliver it to anything. For example, you send a message that no one is subscribed to: with the mandatory bit set on that message, the broker will tell you, "I couldn't do anything with this." Otherwise it will just drop it. That may be important for some messages, but not for all.
So you have a problem, which is that you might get duplicates. For example, if there's a problem in the network before the broker sends you an acknowledgement, your producer might very well think that the message was not received and try again. That might be a temporary network failure, and you're going to end up with duplicates on the consumer side. If your consumer crashes before doing something with the message, it will not send the acknowledgement to the broker, so the broker will keep that message in its queue and you will get it again the next time you start. So there again, you'll have to deal with duplicates on the consumer side. Basically, you have to detect duplicates, or you have to make sure that you are idempotent — which means you can perform your actions as many times as you want and the result will not be different. The good thing is that it's always on the consumer side, because both problems cause the consumer to receive duplicate messages; it doesn't have to be handled on the producer side. Another model would be the exactly-once model. The exactly-once model means that you'll get each message exactly once: you will never lose a message, and you will never get duplicates either. But there you're hitting a problem that is similar to the CAP theorem in databases, I don't know if you know it. The CAP theorem says you can have consistency, availability, and partition tolerance, but you can only have two of those — you can never have all three. So when you have a distributed system, it's very hard to have consistency and to never lose things. Yeah. So with an exactly-once model, at some point you have to claim ownership of the message as a consumer. And once you've claimed ownership of the message, lightning could strike that computer and it explodes, and now you've lost that message.
So building an exactly-once model is hard, and it's better, in my opinion, just to deal with duplicates. Sometimes you can get what you need — yeah, if you try, sometimes. And we also know that you now have that song stuck in your head for the whole day. That was deliberate. So, on the broker side, with RabbitMQ at least — but it's very common among brokers — you can do clustering. Clustering means having different servers sharing the load, or having high availability between those servers. RabbitMQ has an equal-peers model, which means it's not a leader-follower system: all the brokers can be addressed at the same time. This is a great way to spread out your load. If you read the docs, you can see how best to design your messaging system to use your hardware most effectively. It allows you to have different message queues on different hosts, and there are a lot of things you can do. So although it is a single point of failure, it's a pretty resilient single point of failure. With RabbitMQ, the cluster has to be on the same LAN if you want to do that, because the nodes exchange messages among themselves and the link has to be reasonably good. But you can also do distributed architectures — you can have brokers in different data centers, for example. There are two plugins in RabbitMQ, called Federation and Shovel, which let you move your messages from one broker, or one cluster of brokers, to another cluster of brokers. Yeah. What about performance now? We've measured that Fedora sends on average half a message per second, I would say. Yeah. And the peaks go to several messages per second. RabbitMQ, when we tested it, showed it can sustain a throughput of 800 messages per second on a Raspberry Pi — and that's with default settings.
And also with message durability enabled, which means it's written to disk — an SD card, and I did not spend good money on that SD card. And there are huge installs of RabbitMQ out there, for example. Here are some nice graphs to prove our point. You can see the publish rate and the disk writes. This is actually a three-node cluster installation. I didn't have three Raspberry Pis, but I did have three equally cheap boards — not all the same model, but all single-board computers, not high-performance things. And the queues are mirrored to all three of them. So we're getting 728 messages a second being published, and it's writing all the messages to all three disks. And that is slightly larger than 0.66 messages per second. Yeah. So we'll talk briefly about the Fedora infrastructure, because we have like five minutes. We've kind of talked about it already, so we'll go through this quickly. We have a lot of high-level services — build systems, issue tracking, all that stuff, package distribution — and they all talk to each other in various ways and interface with each other. In the beginning we started fedmsg — the current implementation, which uses ZeroMQ — with something called Fedora Badges, which lets users earn badges when they do stuff in the Fedora infrastructure, like build a package or comment on updates and all that. So it really wasn't anything very critical: if your message was lost, you could very well go to a Fedora infrastructure member and say, "I was there at that event, I would like to get that badge. There was a failure, it's your fault, please give me the badge. Please give me the badge." Yeah. Although this does take up valuable time — there are not a lot of Fedora infrastructure people — and these people do notice when the messages go missing.
I've seen a lot of people say: I did my hundredth build and I didn't get my badge — and they are counting — but some messages got lost, and then somebody has to investigate that. It takes time. So reliability would have saved us a lot of time, actually. Yeah. Well, and the messages were stored in a database too, so we could recover them. But then the needs changed and we had more use cases. For example, we wanted to do centralized notifications: instead of having all our applications send email, we would send a fedmsg and one application would be responsible for distributing email, or IRC messages. Or building and testing packages, relying on the bus, on fedmsg, for that. There were also applications that decided to distribute processing queues using fedmsg. That requires a bit more reliability than badges. Basically this was a system where every app was talking to every app, and messages were going every which way. Yeah. So it was referred to as a mesh — I don't know where that spaghetti picture went, but yeah, fedmsg. This ended up being hundreds of points of failure, really just a spaghetti monster that was very difficult to deal with. Still is very difficult to deal with. So yeah, we had problems due to the startup sequence when apps were restarted. We talked about that before: with direct connections you need to assign a different port for every instance of your applications, and that was a pain. The data format changed all the time. And there was no API to handle it: when you want to publish a fedmsg, you just say fedmsg.publish, and that's all. You never know what happens after that. So I'll give you an example with a package update. If you have a Fedora user that wants to publish an update to a package, they will do a build in Koji. Then Koji will send a fedmsg.
The dashed arrows here are fedmsgs. Koji will send a fedmsg to a CI pipeline that will test the build. Of course, if that message gets lost: no testing. Nope. Then the CI pipeline will put the results of the tests in ResultsDB, which is a database for test results. If that message gets lost, yeah, that's a problem. So the CI pipeline decided to poll datagrepper, which is the store for all fedmsgs. So there was polling there. Of course, datagrepper might as well have lost the message — that can happen — or this one could have gotten the message but not that one, or the reverse. Anything can happen. So that's bad. Then when the user creates the update in Bodhi, Bodhi will poll ResultsDB. It might very well get the message from ResultsDB, except the Bodhi developer here knows that messages might get lost, so they decided to do polling instead. And Bodhi sends a message to a backend that will do stuff like querying Bugzilla to get information about the bugs, or querying the wiki for the test cases, and that can very well get lost. If it does get lost, in your update UI you have a spinning wheel that waits for the Bugzilla information to be received, and that never happens. Also, when the build is created in Koji, it is signed by this component here, so the RPM is signed. If that message gets lost, the build is not signed. Yeah. And this component, when the build is signed, sends a message to the backend to say: my RPM is signed, you can compose the repository. And of course, Bodhi does not want to create a repository with unsigned packages, so if that gets lost, it never composes. So the Bodhi developer decided to poll the database here to get that information as well. Basically you have this sort of push-pull hybrid that just sits waiting and doesn't always work. So of course, it ends up being what you expect in that kind of situation. Which is what we're getting to right now. Yeah.
So we had to go through this whole process that we talked so confidently about, and we made some choices. Here are our choices. We went with AMQP. We're using RabbitMQ. We have a three-node cluster. We use durability; queues are mirrored across hosts. We use JSON serialization. We didn't really have a choice with that, no matter what — I think it's an acceptable choice — but we wanted to be compatible with our old messaging stuff, which does JSON. And we're adding JSON Schema on top of that, so we have to track down all the people who are sending messages and beg them to write schemas. So, three-node cluster. You can configure RabbitMQ however you want: you can mirror queues across a couple of nodes, all the nodes, two nodes, whatever you want. We're mirroring them to all three. They're all durable; they all get written to disk. And the nice thing about this is that we have a publish-subscribe virtual host — RabbitMQ lets you run lots of separate virtual hosts — so if other applications want to use the broker for things like work queues (I don't know how many people do Python in here, but Celery is a popular one for that), we can reuse it and just have one deployment that the admins manage. So we're out of time and we have a couple more slides, I suppose, so I think we can... Well, I think it's only 9:45, so we might have more time than we think. Okay, we're out of time. Okay, yeah, sorry. So I'll go very quickly through the rest: basically that's the format we decided on as our JSON Schema, we talked about that. The slides are going to be available online, so if you want to read through the rest, that's possible. And right now, the software library is published, we have bridges between fedmsg and the new system in production, and some apps are starting to migrate, so we'll get there. And I think that's almost the end.
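Layering a schema on top of JSON just means every message body gets checked against a declared shape before anyone trusts it. A minimal stdlib-only checker in the spirit of JSON Schema — a real deployment would use a proper JSON Schema validator, and the field names here are invented for illustration:

```python
# Minimal schema check in the spirit of JSON Schema; field names are
# invented for illustration.
SCHEMA = {
    "required": {"agent": str, "build_id": int},
}

def validate(body, schema=SCHEMA):
    """Return a list of problems; an empty list means the body conforms."""
    errors = []
    for field, expected_type in schema["required"].items():
        if field not in body:
            errors.append(f"missing field: {field}")
        elif not isinstance(body[field], expected_type):
            errors.append(f"{field} should be {expected_type.__name__}")
    return errors

good = {"agent": "jcline", "build_id": 100}
bad = {"agent": "jcline", "build_id": "100"}
print(validate(good))  # → []
print(validate(bad))   # → ['build_id should be int']
```

The point of begging publishers for schemas is exactly this: consumers can reject or quarantine malformed messages up front, instead of breaking later when "the data format changed all the time."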
Yeah, that's what we have. If you have questions, we're going to stay in this room for a couple of minutes, so feel free to stay here and ask us questions. Thank you.