Hi everyone. I'm Sayan Chaudhary, a Python and Django developer, and I'm a Fedora contributor: I contribute to projects in the Fedora Infrastructure team. The Fedora Infrastructure team has a project called fedmsg, the federated message bus. It was originally the "Fedora message bus", but the name was later changed to "federated message bus" when the Debian folks started using it as well, because the problem we built this piece of software for was the same problem they were facing.

So let's start with what fedmsg is. fedmsg is a simple Python package: it sends messages to and from applications. An application sends a message to a virtual message bus, and the other applications connected to that bus can receive it. It's as simple as that.

Now, the problem in the Fedora team was that there were plenty of disconnected components. Suppose somebody applies to the Fedora Ambassadors team: a person then had to manually go and create a Bugzilla ticket for the applicant, and this got painful when a lot of people were sending ambassador requests. Similar problems existed all over the Fedora infrastructure. So the team decided to build something real-time that could solve these problems.

fedmsg is built on top of ZeroMQ, and a key point is that it has no central broker, so there is no single point of failure. Let me explain how this architecture was decided. Fedora has critical changes happening all the time. Suppose a build is happening: a package has been updated, so an RPM is created for that package.
Based on that RPM, the package maintainer gets a notification, and the other package maintainers who are subscribed to that list get notified as well. If a particular message does not get delivered, it becomes a problem for the package maintainers to continue their work.

This is what the traditional AMQP structure looks like: there is a producer, a central broker to which the producer sends messages, and a consumer which actually consumes those messages. The problem with this is: suppose in our case the broker goes down, what happens to the messages the producer is sending? A lot of things would get disconnected and fall out of place.

So instead we went with plain ZeroMQ. If you have worked with ZeroMQ, there is something known as the ZeroMQ PUB/SUB pattern, and we built fedmsg on top of that pattern. The producer binds to a particular port, and the consumer simply connects to that port and starts consuming messages. That is the basic architecture of the project.

Now let me show you how the complete Fedora infrastructure is laid out. We have various applications: Trac, where all the tickets are maintained; Fedocal, for maintaining meetings and so on, just like a calendar; Tagger; Askbot, the Q&A support forum; and Bodhi, where the package maintainers submit their updates. Whenever any activity happens, a message about it is sent over to the message bus. The line you see in the center is the bus; that is the port everything connects to. And on that bus we have a number of consumers that are constantly consuming and processing those messages. For example, we have a fedmsg IRC feed.
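To make that PUB/SUB architecture concrete, here is a minimal sketch using PyZMQ (this assumes the `pyzmq` package is installed; the topic and port are illustrative, not fedmsg's real ones):

```python
import time
import zmq

context = zmq.Context()

# Producer side: bind a PUB socket to a port, as a fedmsg producer does.
pub = context.socket(zmq.PUB)
port = pub.bind_to_random_port("tcp://127.0.0.1")

# Consumer side: connect a SUB socket to that port and subscribe to a
# topic prefix; ZeroMQ filters messages by prefix match.
sub = context.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:%d" % port)
sub.setsockopt_string(zmq.SUBSCRIBE, "org.example")
sub.setsockopt(zmq.RCVTIMEO, 2000)  # don't block forever if nothing arrives

# Give the subscription a moment to propagate (the "slow joiner" problem).
time.sleep(0.5)

# Publish a multipart message: topic frame first, body second.
pub.send_multipart([b"org.example.demo.event", b'{"msg": "hello"}'])

topic, body = sub.recv_multipart()
print(topic.decode())  # org.example.demo.event
```

Note there is no broker in the middle: producer and consumer talk directly, which is exactly the trade-off discussed in the Q&A at the end of this talk.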
If you connect to Freenode and join #fedora-fedmsg, you can see the messages coming in every minute. Then there is the badges system: Fedora has something called Fedora Badges, where you get a badge for certain activities, say writing a blog post. This is all automated: when somebody publishes a blog post, a fedmsg message goes to the bus, and the consumer, fedbadges, consumes it and checks whether a badge can be awarded to that person.

So this is the basic architecture of the fedmsg project: the producer sends a message on a particular topic, and the consumer subscribes to a particular topic. A topic is structured as org.fedoraproject.ENV.category.object.event.

What is ENV? ENV is the environment. You can set up environments for your staging machine, your production machine, or your local system, and name them accordingly: dev, stg, prod, or any other name you want. The consumer will consume messages based on that environment.

Then we have the category: the category is the application sending the message. If Koji is sending the message, the category is koji; if Bodhi is sending it, the category is bodhi.

Then we have the object: which object in that application the activity happened on, something like a user or a tag. Let me give you an example. We are building a project called Autocloud, which does image testing. There we name the object "image", because the thing we are doing the work on is an image.
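Topic assembly under this scheme can be sketched with a small helper. This function is hypothetical, not part of fedmsg itself; it just strings together the org.fedoraproject.ENV.category.object.event pieces (including the optional sub-object slot):

```python
def build_topic(category, object_, event, subobject=None,
                prefix="org.fedoraproject", env="prod"):
    """Assemble a fedmsg-style topic string:
    <prefix>.<env>.<category>.<object>[.<subobject>].<event>
    (illustrative helper, not part of the fedmsg package)."""
    parts = [prefix, env, category, object_]
    if subobject:
        parts.append(subobject)
    parts.append(event)
    return ".".join(parts)

print(build_topic("bodhi", "update", "comment"))
# org.fedoraproject.prod.bodhi.update.comment
```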
Then there is the sub-object, which is just more detail about the object; it's optional, you can use it or ignore it. And then we have the event: the action being performed, such as update, create, test.running, test.failed, and so on.

These are a few of the topics we have; you can view all of them in the fedmsg docs at fedmsg.readthedocs.org. A quick example: Askbot is the category sending the message, the post is the object, and the operation happening is an edit, so the topic comes out as askbot.post.edit. Similarly, bodhi.update.comment. There's a list of example topics on that site that you can subscribe to.

Now, if you want to set up fedmsg on your machine, this is the command for Fedora: sudo dnf install fedmsg. When you install fedmsg, you will see files created in /etc/fedmsg.d/ containing a list of endpoints. The messages that Fedora's producers publish are publicly subscribable, so anybody can subscribe to them and create applications out of them. I will show a list of example applications that were created out of this bus; anybody who wants to start contributing to Fedora can simply use these messages to create meaningful projects. If you connect to tcp://hub.fedoraproject.org:9940, the publicly subscribable endpoint we have, with a ZeroMQ SUB socket, you will be able to get all the messages through it. We also ship commands inside the fedmsg package so that you can consume messages directly without writing the full ZeroMQ code yourself. This is the structure of the file I was talking about.
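Files under /etc/fedmsg.d/ are plain Python modules that each define a `config` dict. A minimal sketch of such a file follows; the service names, ports, and values here are illustrative, not Fedora's actual configuration:

```python
# Illustrative /etc/fedmsg.d/-style configuration module.
config = dict(
    # Map each publishing service ("name.hostname") to the ZeroMQ
    # endpoints it is allowed to bind.
    endpoints={
        "relay_outbound": ["tcp://127.0.0.1:4001"],
        "myapp.localhost": [
            "tcp://127.0.0.1:3000",
            "tcp://127.0.0.1:3001",
        ],
    },
    # The ENV segment used when building topics: dev, stg, or prod.
    environment="dev",
)
```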
It's a dictionary with an endpoints map, and when you consume messages, this is the port they will be coming from. So how do you publish messages? Before you can publish, you have to make an entry in this endpoints file; I will show a demo of how to add an endpoint. After adding it, you can start publishing messages with a particular topic. Here, what I am doing is echoing "hello world" to fedmsg-logger. If you subscribe to that endpoint, you will find those messages arriving at your consumer.

Now suppose you don't want to do it from the terminal, but rather write Python code. This is the code: you import the fedmsg package, and there is a fedmsg.publish method which publishes the message to the fedmsg bus. Here you tell fedmsg that the topic is "testing" and pass the message along; there are other options you can set as well. This is how you write Python code for publishing messages.

If you want to start consuming the messages you are publishing to the fedmsg bus, there is a command called fedmsg-tail with a number of options, like --really-pretty and --cowsay; the most used is --really-pretty. You will get all the messages as JSON in your terminal, showing in real time what is being published. The same goes for Python code: there is a method called fedmsg.tail_messages which yields tuples of (name, endpoint, topic, message), so you can do things based on those variables.
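As a sketch of what you might do with those tuples, here is a small handler; the topic prefix and message fields are illustrative, and the tail loop itself is shown in comments because it needs a live bus:

```python
def handle(name, endpoint, topic, msg):
    """Process one (name, endpoint, topic, msg) tuple of the kind a
    tail-style consumer yields. Returns the update title for Bodhi
    update activity, None for everything else (fields illustrative)."""
    if topic.startswith("org.fedoraproject.prod.bodhi.update."):
        return msg.get("msg", {}).get("title")
    return None

# With fedmsg installed and a live bus, the loop would look roughly like:
#   import fedmsg
#   for name, endpoint, topic, msg in fedmsg.tail_messages():
#       result = handle(name, endpoint, topic, msg)
```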
After consuming those messages, you will want to process them and get meaningful data out of them. Now, when you start consuming messages this way, it's just a Python program running in the foreground. If you want to run it in daemon mode instead, we have something called fedmsg.consumers.FedmsgConsumer, Twisted-based code that runs as a daemon and keeps receiving messages. Here we have the topic set to everything under org.fedoraproject: the consumer gets all the messages with that topic prefix, and you override the consume method to do whatever data manipulation you want on each message. In this example I am simply pretty-printing the message. So those are the three basic ways of consuming messages.

Now let me show you what has been built on top of fedmsg in the Fedora infrastructure. We have Koji Stalk, which, whenever a package gets built for the primary architecture, picks up the fedmsg message and rebuilds and tests the package on the secondary architectures. Then we have a FAS-to-Trac service, which handles the problem I was talking about: it listens for ambassador applications and creates the tickets automatically. The most recent project built on top of fedmsg is Anitya. Anitya is a tool that checks whether a project has made a new release, and when a project has a new release, the next project, the-new-hotness, notifies the package maintainers that there has been a new release and their package needs updating.
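Going back to the daemon-mode consumer for a moment, the pattern is "declare a topic prefix, override consume()". This toy stand-in mirrors that shape in plain Python; the real class is fedmsg.consumers.FedmsgConsumer running under the fedmsg-hub daemon, so everything below is illustrative:

```python
import pprint

class SketchConsumer:
    """Toy version of the hub consumer pattern (not the real fedmsg
    class): subclasses declare a topic prefix and override consume()."""
    topic = "org.fedoraproject"

    def matches(self, msg_topic):
        # The hub only hands a consumer the messages matching its prefix.
        return msg_topic.startswith(self.topic)

    def consume(self, message):
        raise NotImplementedError

class PrettyPrinter(SketchConsumer):
    """Like the example in the talk: just pretty-print each message."""
    topic = "org.fedoraproject.prod"

    def consume(self, message):
        pprint.pprint(message)
        return message["topic"]
```

The projects described next (Koji Stalk, fedbadges, datanommer, and so on) are all, at heart, consumers of this shape with more interesting consume() bodies.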
Then we have fedmsg-notify: suppose you are working on a project and want to be notified when a particular package has been updated, or when a particular image has been tested. You can install the fedmsg-notify project, which is GNOME-based, and you will get desktop notifications for that particular tag. Then there is Fedora News, a simple webpage that shows you the latest things happening in Fedora; it gets its data from the planet and similar sources. fedmsg-notify can be configured through the desktop notification settings and the GNOME Shell extension we have: you can turn notifications on or off for things like IRC meetings, or be notified when a particular person blogs.

Then we have a project called datanommer, which stores all the messages in a database. Every project that consumes messages runs under a daemon called fedmsg-hub, and what datanommer does under fedmsg-hub is store those messages in the database. Using that database we have built a REST API, so if you want the data from the past, you can hit the API endpoint and get all the old messages; the project that provides that JSON API is Datagrepper. Then we have a project called This Week in Fedora, which lists all the activity happening in Fedora: what commits were pushed, how active the contributors are, how active the ambassadors are. You can get to know all of that
stuff in the This Week in Fedora project. Then we have the Fedora Badges project, which has a fedbadges daemon that receives fedmsg data and checks whether a particular person should be awarded a badge. And finally there is a project called Fedora Notifications: using the fedmsg data, you can get notified in different ways. There is a desktop version, fedmsg-notify, and then there is a separate project, FMN, which delivers the data to IRC, Android, and email. You can configure it: this is the web page where you can build different filters for the data you want, this is for the IRC backend, and then there are the email and Android backends as well. This is all built on the basis of the fedmsg-hub project, so if you want to write your own projects, you can directly create a daemon and start consuming those messages. Like I told you, projects like This Week in Fedora were built over just a weekend. During FUDCon I was working on getting all the fedmsg messages over to Telegram, so you can just go and hack on this data and build something good out of it.

I told you about a project I was working on called Autocloud. Since the internet is not working, I cannot show the actual logs coming in, but in the Autocloud project, autocloud is the category and the object is image: the primary job of Autocloud is to test an image and give results for it. With the action "aborted", the topic comes out as autocloud.image.aborted. Now, if you look at the code for the fedmsg hub: this is a fedmsg daemon, and I have subscribed
to a topic called org.fedoraproject.prod.buildsys.task.state.change. What it does is watch the build system for any state change that happens. When I consume those messages, I check for a particular message, and based on that I trigger off the image tests. And here you can see that I am publishing messages to my own fedmsg endpoint: I specify the topic as image.queued, and the modname, that is, the category, in a method I created called publish_to_fedmsg; the extra data I am sending goes into the message body. So this is my code for publishing those messages: I have the topic that is passed in, the modname autocloud, which is the name of the category, and the message I am sending over to the fedmsg bus. And based on this data, people build things. This is the point of my talk: we are really active, we push everything to fedmsg, and based on those fedmsg messages we build meaningful projects.

If you want to contribute to Fedora, come to the #fedora-infra IRC channel and ping anybody there. If you go to the Fedora Infra GitHub repos, you will find all the projects I mentioned, with the same code.

One thing I missed: here you can see that I have created endpoints according to my needs in order to publish, because in the Fedora infrastructure you cannot directly publish to the Fedora bus; you can only subscribe to it. For this project, I got access to the infrastructure and got my code deployed so it could publish. So we basically maintain
a flat file that lists which applications are allowed to push messages onto fedmsg. But if you want to build small projects out of it, real-time apps, you can build them directly with the fedmsg package: install it via pip and build things with it. One thing you do have to do is create endpoints based on the category name. Here, this is the name of the module I am using, separated from the hostname, and here I specify the ports I will be publishing on: there are two ports I publish my messages to, and the consumer will consume messages from those ports. If you want to know more about fedmsg, you can go through this presentation, go over to fedmsg.readthedocs.org, and check how you can integrate fedmsg with a project to build real-time apps. So that was the end. Do you have any questions?

Q: ZeroMQ is used for transient messages; shouldn't this be running in a more persistent way?

A: With ZeroMQ, whenever the consumer connects, it starts getting the messages, and we never really found a problem with messages getting lost; if you go to the IRC channel and watch the number of messages being published, there is no failover issue in practice. We also have a project called fedmsg-replay, which was actually built by the Debian folks, and if you need it, it can replay messages back onto the fedmsg bus.

Q: A very basic question on your first slide. You said you didn't go with the broker-based architecture; you went with ZeroMQ, which doesn't have a broker. The whole purpose of having a broker in a message queue is to handle the case where either the producer or the consumer is not alive at the point
of sending or receiving: the broker is the one that stores the message and redelivers it to the producer or consumer when it comes back up. You lose that if you don't use a broker architecture. With your architecture, if the consumer is not available, I'm sure the producer is not going to keep the messages until infinity, so at some point your consumer can miss messages.

A: In the current architecture, yes: if the consumer goes down, it loses the messages for that period of time. But the problem the other way is that if the broker stores the messages and the broker goes down, we lose those messages as well.

Q: For that you have HA for the broker, or you can have a cluster of systems acting as the broker and do load balancing across multiple systems.

A: But here the thing is that you can directly subscribe to the bus, and like I told him, we have fedmsg-replay, which can replay the messages if you want them when you come back.

Q: That is kind of acting as a broker, then.

A: Yes, but if you connect to the bus directly, the consumer consumes messages from the point it starts.

Q: I understand your point, but my basic question is: why not go with the broker method? Why go with ZeroMQ? I'm sure there was a lot of discussion on the list about why this and why not that; I'm just trying to figure out the reason you went with the broker-less architecture.

A: That was the primary reason: what if the broker goes down and the messages are lost?

Q: There are a lot of solutions around that if you just Google.

A: Yes, but in that case, if the broker goes down, there is no chance of getting those messages.

Q: You can set up a cluster. For example, I'm coming from an OpenStack background; OpenStack uses AMQP,
and OpenStack works at a humongous scale; AMQP works very well in OpenStack. My question is: you said that for publishing you had to get some special authorization, right? Is there anything already baked in that does authentication, or is it hard-coded in a centralized deployment? How exactly is authentication handled?

A: If you are working on your personal system, or installing it for your own project, no authorization is required. But inside the Fedora infrastructure, not everybody can publish, because the messages are critical: we build a lot of things on top of them. So in that case we block publishing: at the level of hub.fedoraproject.org we restrict that part. If you publish without access, you will not find your message coming through while tailing the bus.

Q: I just wanted to know how exactly you differentiate that this particular producer has write access and this one doesn't. How do you differentiate that?

A: I don't know much about that part, but basically, once your project is set up, they do a security check, and they write scripts and do the blocking based on that. Any more questions?