in the laptop, right? Good, awesome. So we'll start off the day. Here we will be having the Open Event solutions track. I hope every one of you is awake and had your rest and your breakfast, because I might fall asleep while talking, so please wake me up if I do.

So, Open Event Server. What is Open Event Server? Maybe you're wondering, "I haven't heard of it." How many of you registered for this event via the eventyay.com website? Anyone? Okay, you have. And how many of you checked the schedule for this conference there? Yeah, many. You might have seen the schedule printed outside on the doors as well. All of those are part of this Open Event Server project. Open Event Server is the open source project which powers all these things. Or rather, it's the Open Event ecosystem which powers everything, with Open Event Server being the back end part of it.

What I'm going to talk about is decoupling and demystifying. What you see live right now is our legacy code, because the code we are working on right now is still in the alpha phase; we are testing and developing on it. Why did we move from this legacy code to the next one? That's what I am going to talk about. And if you want to contribute to this project, because the best part about an open source project is that anyone can contribute and we can make it better together, maybe this talk will make that a little easier for you.

A little introduction about myself. My name is Saptak Sengupta. I'm an open source contributor. I have been contributing to many organizations, like jQuery and the Freedom of the Press Foundation, but one of them is FOSSASIA. I started with FOSSASIA and have been contributing there for around three years now.
I am a core developer of the Open Event project. Right now I'm working at Zomato, and I also mentor and coach in RGSoC and Google Code-in, and this year I will be mentoring in Google Summer of Code as well.

So what is Open Event, and why open source? That's the first point. If you attended the first day's talk on AI by Michael Christian, you heard him say that there are many artificially intelligent assistants, but none which is open source. So why can't we make an open source solution? The same idea was behind Open Event. We have many event management sites, such as Eventbrite and Eventnook, but we didn't have any open source one. And see, we are organizing FOSSASIA, a completely open source event where every talk is based on open source, so we should have an open source event management site to host an open source event, right? That's where the idea of Open Event came up.

Not only can you host your event on eventyay, but the benefit of an open source project is that you can deploy it on your own server and use it for your own events, and make your event as good as you want. Also, the best part about open source like Open Event is that if you have trouble, you can just ping us and create an issue. And if you want, you can even push your own solution; someone will review it and merge it. That's something you can't do with other kinds of event management sites. If tomorrow you feel that a closed source event management site doesn't provide a solution you want, you won't be able to add it. But Open Event allows that: you can create an issue, we will look into it, and we will build it.

First we'll talk about the Open Event Server legacy code, which is behind the current website, eventyay.com.
Many of you might have seen this homepage. This website is a completely tightly coupled website. What do I mean by tightly coupled? The architecture we had in our legacy code was tightly coupled: we used the Python Flask package, which many of you might have heard of, and we didn't have a separate back end and front end. We used the Flask package both for templating and for the back end purposes. Most of the work was done in the back end itself: the data fetching, the data processing, everything. In the front end, we mainly did the CSS, the design, and the templating, how we show things. Apart from that, we only had the schedule app, which was completely front end heavy; otherwise, most things were done in the back end.

The problem was, if you wanted to modify something in the front end, we needed to change a lot of code in the back end as well, because that's how the data was being fetched and shown. If we wanted to show something else, we had to change everything in the back end too. That's the problem with tight coupling. Tightly coupled looked something like this: the black part being the back end, a jumbled mess of code, and the front end being just the red connections into that black part. That's a little difficult to deal with, right?

This was the architecture we had in production on eventyay.com. We were using PostgreSQL, Redis, and Celery, because we have some queued processes, like downloading the speaker CSVs or session CSVs for the event organizers. We also allow you to import a complete event from JSON files, or export an event completely to JSON files, so that maybe you want your own front end, right?
Maybe you don't want our front end to be used. In that case, you can download the entire JSON files and build your front end on top of them. For all that queuing work, we have Redis and Celery. Then we have jQuery, Lodash, Bootstrap, and so on for the front end. As you can see, most of these are CSS and design tools. There is no MVC architecture or anything like that in the front end; whatever data we get from the back end, we just show on the website in a nice way.

You can compare it with this Jenga structure. Why did I bring up this image? It's a nice structure, it looks good, but every part of it depends on the other parts. You pull out one part and the entire structure might collapse. So though it's beautiful, it's not a very reliable structure for the long run.

That's when we came up with the idea of Open Event Server decoupled. Last Google Summer of Code, our project was to decouple the entire structure: have a separate back end which serves only the data in REST form. When you have a REST API, you can make as many API calls as you want and get the data you need for the front end, then implement an MVC structure which shows that data. So now we have something like this. I'm not sure if it's visible, but the upper part is the API documentation, and this is how the front end looks now. The front end is now purely front end: it just calls the API and uses whatever data it gets.

So why decouple? As I said, the front end and back end work independently. Before, if I wanted to make a change in the front end, I had to make it in the back end as well. Now, for developers, it might be that I'm a JavaScript developer and I don't have much idea about Python,
or maybe I'm just an HTML and CSS person and I don't want to touch Postgres in the back end. The problem was, even to add one feature, I had to implement it all the way from the Postgres level, then Python, then JavaScript, then HTML and CSS, and whatever else comes in. Now, with decoupling, I can say: I need this kind of data. If a REST API already provides it, well and good. If not, I ask for an endpoint which provides it. Then I just focus on the front end: I have the JSON data, I parse it, and I show it.

The same goes for the back end folks. We don't need to know how things are implemented on the JavaScript or HTML and CSS side. What we know is: if you make a POST request to this endpoint, I will insert the data in this way, with these permissions. Or, if you make a GET request to my endpoint, I will give you this kind of data. That's all I care about. I don't need to know how you show it on the website; I just need to serve the data properly, with the proper permissions. That makes working a lot easier.

That's what I mean by separation of concerns. The back end can concentrate on the permission system, how we deal with the database, optimizing the database queries, how many workers we need to serve all this. If there is a background or queued process, I can handle it completely in the back end; I don't need to worry about it in the front end. The front end, in turn, can concentrate on making the UI and UX better, like whether to show a loader, instead of worrying about how the data comes out of the database. That's a complete separation of concerns. And we can also use different tools, right?
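Before moving on: the endpoint contract I just described, POST inserts data and GET returns it, can be sketched minimally in plain Python. This is only an illustration of the contract; the store and function names here are invented, and the real server of course speaks HTTP via Flask.

```python
import json

_events = {}   # in-memory stand-in for the database
_next_id = 1

def handle_post(body: str) -> str:
    """Insert the event described by the JSON body; return it with its new id."""
    global _next_id
    event = json.loads(body)
    event["id"] = _next_id
    _events[_next_id] = event
    _next_id += 1
    return json.dumps(event)

def handle_get(event_id: int) -> str:
    """Return the stored event as JSON, which is all a front end ever sees."""
    return json.dumps(_events[event_id])

created = json.loads(handle_post('{"name": "FOSSASIA Summit"}'))
fetched = json.loads(handle_get(created["id"]))
print(fetched["name"])
```

A front end only ever touches `handle_post` and `handle_get`; it never sees `_events`, which is the whole point of the separation.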
When the front end is separate from the back end, if I think a certain tool is good for the front end, I can just use it. We couldn't do that before, because we were using Flask, so we had to check whether the tool was compatible with Flask and would play well with it. Now I don't care; I have a separate front end, and the choice doesn't depend on the back end. Similarly for the back end: today we are using Flask, and if tomorrow we decide to move to Django, or even Java or PHP, we don't need to worry about breaking the front end, because after all, we just have to serve the data. So the tool choices are completely decoupled.

That's what we have right now, what we are working on and developing, and most probably when you come next year, you will see it live. The back end and the front end are completely separate. The back end just serves the data and handles permissions and the database, while the front end implements the entire presentation: it decides how to show things, where to show them, what features we want, whether there is a dashboard, everything. So it looks like this: well, maybe there are two jumbles of code now, but one jumble doesn't need to worry about the other. The only thing each needs to know is, okay, this needs to work, I need the data, that's all. I don't need to know what's going on inside the black box; I don't even care.

So this is the architecture now. We are still using PostgreSQL as the database. For serving the REST API endpoints, we use Flask-REST-JSONAPI, which follows the JSON:API specification; we have a talk on that after this one. Then we have SQLAlchemy for all the ORM-based database operations, so we don't write raw SQL queries as such.
We use SQLAlchemy so we can do object-relational mapping. Then we still have Redis and Celery for queuing purposes like importing, exporting, and sending emails. Suppose I want all the speakers to get an email before the event starts; we can queue all those jobs using Redis and Celery. As you can see, this is completely separate now: Flask-REST-JSONAPI serves the data, and then we have a front end site. The front end now uses Ember.js with Semantic UI, and the front end does the routing. It takes care of the entire routing, where everything goes; it's kind of a single-page system. The back end doesn't need to worry about routing. It doesn't need to think, "this page means I have to send this data." The front end takes care of that and calls the APIs based on the page it is on.

As you can see, it's a lot more balanced now: there's about as much front end as back end. The reason we can do this is the browsers we have today. Maybe five years back this wasn't really possible, because browsers couldn't handle so much JavaScript. Right now it's perfectly fine; there are a lot of JavaScript-heavy websites and tools being made, so this works perfectly. That's why we can have a completely balanced front end and back end.

I compare it with something like this: a plug and a socket are completely separate. I don't want to know how the socket gives me power; it gives me power, and that's all I care about. Similarly, the socket doesn't care what gets plugged into it; if something gets plugged in, it will work, because it's supplying power. That's it.

So, the advantages of having independent stacks: we can develop on completely independent stacks.
Ember.js has nothing to do with the Flask-REST-JSONAPI side, and so on: independent development. FOSSASIA runs a lot of coding programs, like Codeheat and Google Summer of Code, and we also participate in Google Code-in. The development is pretty much independent now: the front end people can concentrate just on the front end, and the back end people just on the back end. You don't need to be a jack of all trades to contribute to our projects anymore. A developer can concentrate on one skill; they don't have to learn Python to get started with the front end. That's one of the big advantages, because we always want more and more developers so that we can keep making the projects better. That's the spirit of open source, right? The more developers, the more they can contribute and improve it.

So these are the API server components; this is exactly how it looks under the hood. Firstly, we have Flask-REST-JSONAPI. After this talk, Shubham Padia will be talking about Flask-REST-JSONAPI in more detail. Flask-REST-JSONAPI has roughly four components. There's a data layer: it takes data from the database and provides it to the resource managers. The resource managers maintain things like what happens on a POST request, what happens on a GET request, and whether there is a relationship between two endpoints or two tables. Then there's a data abstraction part, because obviously we don't want all the data to be visible: which data do we expose, based on the permissions? From the back end's perspective, a regular user sees quite different data than an event organizer does.
That's where the data abstraction takes place, and then we have routing. Routing here is just for the endpoints: we have endpoints which serve, say, all the users, a particular user's details, an event, or creating an event. That's all the routing we have. And then we have Celery which, as I said, is for queuing purposes.

A lot of times we have seen people face a little difficulty understanding the code structure. We have this architecture, but what exactly do you need to know while contributing to the project? This is the architecture, cool, I got it, but where is this architecture in the code, right? When I talk about PostgreSQL and the SQLAlchemy database models, those all live in the models folder. Like in pretty much every Flask or Django project, we have a models folder, and in it we have all the SQLAlchemy models used by the endpoints. We have an events model, for example, and sessions, users, speakers. In a model, we specify all the fields we'll have, and which field depends on some other database field. We define what string representation or JSON response a record gives. And we define what the fields get initialized with: say there is a created-at field, for when the event is created; do we initialize it with the current time and date, or something else? All those things are maintained in the model. It's pretty much like the Flask or Django models you write.

Then we have the Flask-REST-JSONAPI code, which lives entirely in the api folder of the code structure.
It has all the resource managers and the permissions. We have different kinds of permissions, as you can imagine in an event management system. Firstly, there are the super admins, who can bring people in, change user roles, see the sales, and all that. Then we have the event organizer permission; an event organizer can look after everything for their event.

We have almost every kind of feature in our event management system. There's a scheduling app: after you have accepted all the sessions you want in your event, you open the scheduling app, drag and drop, and that's it, you get your schedule. We can see the ticket sales: how many have been sold, how many are pending, which country the buyers are from, and every other detail. We can attach custom forms to the tickets as well; say for FOSSASIA we need some different kinds of fields, so we add those fields and get them in the attendees' data too. Then we have the speakers and sessions part: we can see all the speakers' details and accept speaker sessions, and we can add sponsors and so on. All of these need endpoints, which are managed by Flask-REST-JSONAPI, so we have all the APIs and the permissions there.

Then we have the queued processes, as I said: import, export, email, and downloading of CSV files. For FOSSASIA, there are around 210 speakers and a similar number of sessions. The problem is, if I generate the session CSV for 210 entries in one request, it will obviously run into a gateway timeout; we can't wait that long. So we queue the process, and there might be even more speakers at other events.
We also handle race conditions, so that if two downloads are queued together, they don't race while writing into the CSV file. So we queue the processes, and once the download is done, we send an email to the event organizer, and it's also shown on the website.

And then we have the tests, because unit tests are very important. Unit tests and integration tests are a very important part of the code structure: we don't want implementing a new feature to break all the other features, right? And not only do we have unit tests and integration tests, we also have a documentation test. We want to make sure that if I implement a new endpoint, it gets documented, and if I change something in the documentation, that's exactly what the API endpoint does as well.

For permissions, we use JSON Web Tokens. JWT is quite a common choice for permission systems; it helps us with the authentication part. Every request you make, whether a POST or a GET, that needs authentication must carry a JWT. There are public endpoints as well, of course: if I want to see all the upcoming events, I don't need any authentication, that should be visible to everyone. But where authentication is needed, you have to send the JWT token. The user doesn't need to worry about this; it's the front end which does. If the front end makes a request to an endpoint which needs a JWT, it has to send the token each time.

A JWT mainly has three parts: the header, the payload, and the signature. The payload carries the data you want to send, mainly about the user you are authenticating; the user data you have goes into the payload in JSON form.
Then, in the signature, we have a hash over the combination of the header, the payload, and a secret salt, which is how the JWT is signed. In our case, the JWT expires after 24 hours. You can use different algorithms in a JWT for signing the token; we use HMAC SHA-256. Since we expire the JWT after every 24 hours, the front end has to obtain a new token each day. It's a bit like the TOTP systems used in many places: GitHub's 2FA, for example, uses TOTP codes which expire after 30 seconds or so.

Secondly, our API doesn't return plain JSON data; it returns JSON:API specification data. There is a difference. It's still JSON format, with objects and arrays and all that, but plain JSON can be anything which starts with a curly brace and has anything inside it. JSON:API has certain specifications associated with it: you need to mention the resource type, the specification version, and what relationships the resource is associated with. We follow the JSON:API specification completely. So when you send a request, you can't set the content type to plain JSON; you have to set it to application/vnd.api+json, where VND stands for vendor, so it's the vendor-specific "API plus JSON" media type. Following JSON:API rather than just JSON helps keep every endpoint consistent: it doesn't happen that users gives me one format and sessions a completely different one. Every endpoint serves data in the same shape.

So that's a brief introduction into the code.
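To make the header, payload, and signature structure concrete, here is a minimal HS256 (HMAC SHA-256) JWT built with only the standard library. This is a sketch of the mechanism, not the server's actual auth code; the secret and the claims are made up, and a real deployment would use a vetted library such as PyJWT.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use base64url encoding with the '=' padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_jwt(token: str, secret: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload = json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))
    if payload["exp"] < time.time():
        raise ValueError("token expired")  # the 24-hour expiry check described above
    return payload

secret = b"not-a-real-secret"
token = make_jwt({"sub": "user-42", "exp": time.time() + 24 * 3600}, secret)
print(verify_jwt(token, secret)["sub"])
```

The `exp` claim is what enforces the 24-hour lifetime: verification fails once the timestamp passes, so the front end must fetch a fresh token.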
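As a rough sketch of the JSON:API shape just described, a single-resource response body looks something like this. The attribute and relationship names here are invented for illustration, not Open Event's real schema; only the top-level structure (type, id, attributes, relationships) is fixed by the specification.

```python
import json

# Hypothetical single-resource document following the JSON:API document shape.
doc = {
    "data": {
        "type": "events",
        "id": "1",
        "attributes": {"name": "FOSSASIA Summit", "starts-at": "2018-03-22"},
        "relationships": {
            "sessions": {"links": {"related": "/v1/events/1/sessions"}}
        },
    },
    "jsonapi": {"version": "1.0"},
}

body = json.dumps(doc)
# Every endpoint serves this same top-level shape, so a client can rely on it:
assert set(json.loads(body)["data"]) == {"type", "id", "attributes", "relationships"}
```

This uniformity is what the talk means by every endpoint serving data "in the same way": a client that can parse one resource can parse them all.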
You don't need to know the code in detail, but this is how the API code looks. This is actually an example of creating an attendee in the system: when you create a new attendee, this is the class which gets called, and then before_post and so on get executed. As you can see, if it's visible at the back, it extends ResourceList. These are all parts of the resource manager layer I was talking about. ResourceList means it works at the list level: when you make a POST request, you don't deal with one specific attendee, you insert into the whole attendees table. GET and POST requests at the list level are managed by ResourceList.

The resource managers also provide hooks like before_post and after_post, so that maybe I want to tweak the data a little while it's being created in the database, or tweak the data I return on a GET request. All those things are handled in this before_post part. In this case, we have a required-relationship check. What this means is that to create an attendee, they must have bought a ticket for that event: the attendee must be related to some ticket and some event, otherwise I won't create the attendee. We make sure this check passes, and only then do we process the POST request.

In decorators, we mention the permissions. As you can understand, decorators is a list of the Python decorators applied before executing these functions. jwt_required just says that this endpoint needs JWT authentication, otherwise the request won't go through. We also specify explicitly which methods an endpoint supports; this one supports only POST.
If you make a PUT, PATCH, or DELETE request to it, it won't work. We also declare which schema it deals with; the schemas correspond to the models. So this one deals with an attendee schema, and in the data layer we give the model we are serving and the DB session we are using. That's an example of how the API code looks in our system.

Then we have the API routes. We have to declare that this endpoint maps to this particular class, and that's what is done in the API routes. This is the api.route call for FAQs; if you want FAQs in your event, you can have that as well. FaqTypeListPost is a class just like the attendee list class I showed. The first route says that if you make a request to the faq-types endpoint, it calls FaqTypeListPost. The middle argument is the name we give to this route, so that we can refer to the route from other functions; from the browser, or for accessing the endpoint, you use the URL, and this is the class that gets called. That's basically how the routes are written. All the routes have to be written explicitly for every endpoint we create, including the relationship routes. As you can see, there is a relationship route as well: there are different FAQ types, and every FAQ belongs to an FAQ type, so FAQ types have a relationship with FAQs. Those are the relationship routes we mention.

Then we have the models. This is an example of a model, like I was telling you about. We specify the table name first, then we have all these fields: id, name, slug, events, event topics. These are the fields we need to fill up, and this is how we initialize them. It's pretty much a Flask or Django model.
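The resource manager pattern described above, declared methods, a decorators tuple, and a before_post hook, can be mimicked in plain Python. This is a simplified stand-in for illustration only; the class, the token check, and all the names are invented and are not Flask-REST-JSONAPI's real internals.

```python
import functools

def jwt_required(view):
    # Simplified stand-in; a real implementation would verify a signed token.
    @functools.wraps(view)
    def wrapper(self, request):
        if request.get("jwt") != "valid-token":
            return {"status": 401, "error": "JWT authentication required"}
        return view(self, request)
    return wrapper

class AttendeeListPost:
    # Supported methods and decorators are declared on the class,
    # and hooks like before_post run ahead of the insert.
    methods = ("POST",)
    decorators = (jwt_required,)

    def before_post(self, request):
        # The "required relationship" check: an attendee needs a ticket and an event.
        data = request["data"]
        if "ticket" not in data or "event" not in data:
            raise ValueError("attendee requires a ticket and an event relationship")

    @jwt_required
    def post(self, request):
        self.before_post(request)
        return {"status": 201, "data": request["data"]}

resource = AttendeeListPost()
ok = resource.post({"jwt": "valid-token", "data": {"ticket": 7, "event": 1}})
denied = resource.post({"jwt": None, "data": {}})
print(ok["status"], denied["status"])
```

Note how the unauthenticated request is rejected before before_post ever runs, which is exactly the ordering the decorators tuple buys you.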
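Stripped of SQLAlchemy, the shape of such a model can be sketched in plain Python. The field names here are guesses based on the slide description, and a real Open Event model would be a SQLAlchemy declarative class; only the pattern (table name first, fields, initializer defaults, string representation) is what's being illustrated.

```python
from datetime import datetime, timezone

class EventTopic:
    """Plain-Python sketch of a model: table name, fields, and initializer defaults."""

    __tablename__ = "event_topics"   # the table name comes first, as on the slide

    def __init__(self, name, slug, created_at=None):
        self.name = name
        self.slug = slug
        # Initialize created-at with the current time unless one is supplied.
        self.created_at = created_at or datetime.now(timezone.utc)

    def __repr__(self):
        # The string representation a record gives when printed or logged.
        return f"<EventTopic {self.slug}>"

topic = EventTopic(name="Open Source", slug="open-source")
print(repr(topic))
```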
Then we have Celery. We create a Celery task; this is the example code for sending an email. We have a send-email-over-SMTP task, which mentions the hostname, username, port, and so on that we are using. This whole function runs in Celery in the background, so it doesn't interfere with any other requests.

Then, what do we do in the front end? We have created a Celery task for email, import, export, or whatever, but how do we know that the task has been completed? When do we show it, or send an email, after the task finishes? We use something called polling, which is a common computer science technique: you pick an interval, and after every interval you make a request and check whether you have got a result yet. When we create a Celery task, Celery provides a task URL. When you make a GET request to that task URL, it returns the state: if the task has failed, it returns failed; if it's still working, it says pending; and if it has succeeded, it also returns the data that was supposed to be produced. That's the technique we follow to know a task is complete.

Then, coming to testing. First, Dredd; and Dredd is not the fictional character, it's a library. What Dredd does is verify that whatever I'm writing in the API, and whatever data it's serving, is mentioned in the documentation as well: it ensures the documentation and the API comply with each other. For writing the documentation, we use API Blueprint. Then, for unit tests, we use unittest, the Python library, to make sure all the helper functions do what they're supposed to do. That's what the unit tests do, right?
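The polling idea and the unit testing idea can be sketched together: a small polling helper and a unittest case for it. The task states mirror Celery's vocabulary, but the helper, the fake status endpoint, and the CSV result are all invented for illustration; a real client would sleep between HTTP requests to the task URL.

```python
import itertools
import unittest

def poll_until_done(get_status, max_attempts=10):
    """Poll the task-status callable until it reports success or failure."""
    for _ in range(max_attempts):
        status = get_status()          # in real life: a GET request to the task URL
        if status["state"] in ("SUCCESS", "FAILURE"):
            return status
        # time.sleep(interval) would go here between real requests
    raise TimeoutError("task did not finish in time")

class TestPolling(unittest.TestCase):
    def test_returns_result_once_task_succeeds(self):
        # Fake endpoint: PENDING twice, then SUCCESS with the CSV result.
        states = itertools.chain(
            [{"state": "PENDING"}, {"state": "PENDING"}],
            itertools.repeat({"state": "SUCCESS", "result": "speakers.csv"}),
        )
        status = poll_until_done(lambda: next(states))
        self.assertEqual(status["result"], "speakers.csv")

    def test_times_out_if_task_never_finishes(self):
        with self.assertRaises(TimeoutError):
            poll_until_done(lambda: {"state": "PENDING"}, max_attempts=3)

unittest.main(argv=["polling-sketch"], exit=False, verbosity=0)
```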
We ensure that each function does exactly what we want it to do, so things don't break when we change them. All these unit tests run in CI; we have Travis CI on our repository. Whenever someone makes a pull request or a commit, all the unit tests need to pass, which also means they have to include documentation every time they add a new feature. So if you read the documentation, you can be sure it complies completely with the REST endpoints; otherwise the change wouldn't have been merged. That's what this ensures.

These are some links you might want to note down, and I will share the slides after the talk. You can find all the details I've covered at these links: the legacy code, the server code, the front end, and so on. These parts will be covered separately by the other speakers after this talk. That's all from my side. Thanks. Any questions?

[Audience question about SEO for the JavaScript front end.]

Yeah. So the front end is now a typical JavaScript application. The only problem I've found with that is how Google indexes it. Google will index it at some point, but you're saying that while it will index the entire website, it won't index all the specific pages, right? That can be managed in JavaScript as well: you can do your SEO so that Google indexes what you want, and you still handle it while routing. Maybe that is an open issue in our repository; it's a good point, but it can be handled in JavaScript applications.

[Audience follow-up: will Google follow the routes in your JavaScript application?]

No, Google won't follow the JavaScript routes.
But yeah, for a particular route, say a speaker route, we can set the SEO metadata we want to serve at that particular link. You're saying that the robots.txt parsing and all that won't happen; yeah, that's true, that won't happen. That's a fair point. And if we don't want certain pages indexed, we can handle that too; it means we don't really require all the specific pages to be indexed. So yeah. Okay.

Thank you very much, Saptak. A round of applause, please. Okay, so actually we have four talks, and as we go into more detail, there will also be new questions. And so we will also be...