So, hello and welcome. I don't know how many of you know me, but in case you don't, here is a little bit of background about myself. I'm a software developer at Rackspace, a company in the United States. If you know me, it's probably because of the Flask Mega-Tutorial, a series of articles that I wrote on my blog a few years ago that are somewhat popular. I also wrote the O'Reilly book on Flask. These days I mostly work on APIs and microservices. As for my language of choice, probably 90% of the time I'm coding Python, but sometimes I have no way to avoid writing JavaScript, so I do, and in a previous life I also worked a lot with C++. You have my Twitter, my blog, which has a lot of articles about Python and Flask in particular, and my GitHub page, where you can find a number of open source projects that I wrote. Before I start, I also wanted to mention the other two places where I'm going to be available at this conference. Tomorrow morning there's a training session, a three-hour session on this specific topic of microservices. And then I'm also going to do a help desk on Thursday; if you have any questions about Flask, or web development, or any of my open source projects, I'll be more than happy to talk to you one-on-one.

Okay, so basically this is going to be an introductory talk on microservices, probably intermediate level, or maybe for a beginner who is ready to jump into this pool of "intermediate" that nobody knows exactly what it is. That's basically the target audience. I'm going to start with a set of analogies. This is something that actually happened to me: I have a friend who was taking a software development class. He's not a developer.
It was an elective class that he took in college, and he had to write a game of Pong. He had a bug, and he came to ask me for help, and the code he showed me looked pretty much like this: the whole game in one function. He called pong() and that was the game, and he had a bug in the way the ball moved. So to help him, I had to go and understand pretty much the whole game; it was very hard for me to figure out what was going on. So I used the opportunity to show him a better way to structure the game. I told him: you should do something like this, where you have a bunch of functions, each function does one thing, and when you need to do something bigger than what a single function does, you have a function that calls other functions, and all the functions together achieve the big thing that is the whole game. I'm sure we're all going to agree that the version on the right is much better than the one on the left. So basically, what we're saying is that short, focused functions are better than long functions that do a lot of things.

Now, what happens if we go one level above? This is an example from my book, the Flask book. The book teaches you how to build a Flask application, and when you reach the end of chapter 6 you end up with an application that does quite a lot, but basically the whole Python logic is in this one module, hello.py. You have a bunch of templates and a bunch of static files, but the whole Python side is in one file. Then you get to chapter 7, which is the one that teaches you how to structure your application so that it's more maintainable. It goes into something like this, where you have a starter script, manage.py.
The configuration is in its own module, the application is in a package, and the tests are in a different package. If you keep looking, the application has a couple of modules for specific things: the database models are in one module, email support is in another. There's this "main" folder, which is another sub-package; it's what in Flask we call a blueprint, which for those of you familiar with Django is roughly the same as a Django application. And if you keep looking, inside that blueprint you have errors, forms, and views. So basically, once again, same thing as with functions, now with modules: we're saying that small, focused modules are better than large modules that do a lot of things.

Now, what happens if we take this to an even higher level and talk about services, about web applications? Can we translate this? This is an example of a big web application. It's an application that actually exists; it's on my GitHub. I used this application to teach a class, to demonstrate that Flask can scale. Basically, the idea with this application is that you have these two green boxes: Flack is the name of the application, and then the Celery worker is one or more workers that run asynchronous functions. The idea is that you can run any number of Flack instances and any number of Celery workers, and you scale those two according to your needs. Then you have a database that both use, and there's a message queue that's used by Celery and also used to allow the Celery workers to push notifications to the client through WebSocket. So anyway, this is probably a somewhat advanced application, very scalable, but it's one code base. Can we apply the same logic that we applied to functions and modules to this? In my opinion, the answer is yes. Okay, but first I should tell you about the problems with this. One problem that we have is that the code is a single code base.
It's very hard to test. All the parts are basically coupled. You have code that deals with users; in this particular case, Flack, this example, is a chat application, so you have code that deals with users and code that deals with messages all mixed up, and you don't even realize that you have coupling between those two potentially separate functions of the application. If you are working with a team and you need to introduce a new member to the team, that person is going to have a hard time trying to figure out how to become productive, because they'll have to understand a lot of things before you let them participate, to reduce the risk of them breaking things. In particular, when you use Celery, something I find I don't like that much is that if you need to upgrade, even though in this case we have two separate services, the main service and the Celery workers, they all come from the same code base, so there's no way to upgrade them separately. You have to stop both, or if you have many instances of each, you need to stop all of them, then do the upgrade, upgrade the database, whatever else you need to do, and then start them again. So basically, you have to take your application down for the upgrade. Also, if you have a problem and the application crashes, then basically the whole application crashes, and the site goes down until you figure out what's going on and you can restart it. Scaling becomes difficult too. Imagine, in the case of a chat application like this, you have a module that deals with users and a module that deals with messages. Very likely, there's going to be more activity on the messages side than on the users side, right? You're going to have a bunch of regular users that are already registered, and they're going to be chatting, sending a lot of messages. So if you find that you need to scale your application, you're going to be scaling the whole service.
You're going to have the necessary number of instances to satisfy the load for messages, but then you're also going to have a lot of instances that can deal with users who are not there; you're going to be over-scaling on the users side, and there's no way to have finer control. The only control you have is basically Celery versus the rest, the main service. Also consider the case where, let's say, this is an old application, you wrote it in Python 2, and now you're interested in moving to Python 3. That's probably going to give you a headache, because it's all or nothing: you're going to have to upgrade the whole application in one go. All of these are problems that are typical of what we call monoliths, these big applications that are built from a single code base.

Now, this is also a real application. It's the same application, also on my GitHub, converted to this idea of microservices. You can probably guess that the idea is basically to write smaller services, and the services then talk among themselves to achieve the whole function of the application. In this case, you can see that we went from two green boxes to five, and Celery is not there anymore. So we have five services. At the bottom you can see the client UI; this is a service that serves the application that runs in the browser. For this particular case I wrote it in Python; in most cases I'm going to guess this would be a Node application, and that's totally fine, because they're independent services, and you can write each service with the best technology for that service. So we have a client UI; we have a tokens service, which is basically authentication; messages and users are separate services; and then we have the Socket.IO service, which is the WebSocket push notification module. So they're all separate in this case.
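To give a sense of how small each of these pieces can be, here is a sketch of a miniature token-style service. The routes, the token format, and the in-memory storage are illustrative assumptions, not the actual API of the project; a real service would persist tokens in its own database and actually authenticate the caller.

```python
# A miniature token-style microservice, small enough to read in one
# screen. Routes and storage are hypothetical, for illustration only.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Toy storage; a real service would use its own database.
tokens = {}

@app.route('/tokens', methods=['POST'])
def issue_token():
    """Issue a token for a user (authentication omitted for brevity)."""
    user = request.get_json()['user']
    token = 'token-for-' + user        # a real service would generate one
    tokens[token] = user
    return jsonify({'token': token}), 201

@app.route('/tokens/<token>', methods=['GET'])
def verify_token(token):
    """Let other services check whether a token is valid."""
    if token not in tokens:
        return jsonify({'error': 'invalid token'}), 404
    return jsonify({'user': tokens[token]})
```

Because the service owns all of its state behind a small HTTP API, any other service can verify a token without ever touching the token storage directly.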
These are all Flask applications. Some of them are so small that you can open them on your screen and see the whole code, and the ones where you can't are probably two screens, no more than that; they're all fairly small. You can see that we went from two orange boxes (we had a database and a message queue in the monolithic case) to four. We still have the message queue, which serves the same function: it helps the services communicate among themselves. But then we have three databases: a database for the messages service, a database for the users service, and a database for the tokens service, which stores revoked tokens. And then what else do we have? We have a new box, a blue box, called the service registry. I'm going to talk about that more later, but basically this is a very efficient database that keeps track of all the services that are running. It knows what's running, and it communicates with the load balancer, so that the load balancer knows what the services are. Now, these five green boxes are all independently scalable.
So now, if I have more load on the messages side, I can run more instances of messages and keep users at one or two instances, for example.

So, I talked about the disadvantages of the monolith, and all of those now translate into benefits when we're doing microservices. The code complexity is greatly reduced: each service, as I said, is a very small Flask application, reminiscent of the hello-world type applications you see in documentation. They're actually very simple to code and very simple to maintain. Because we're forced to keep things separate, it's less likely that we're going to introduce bugs due to coupling. The users service has no way to access, for example, the messages database directly; it needs to talk to the messages service. So messages will have a public API that it exposes to clients or to other services, and users will do the same thing. Basically, that's a decoupled design, by force: this architecture promotes a decoupled design, and that helps create programs with fewer bugs. Now, the case of the new member on the team that you want to make productive as soon as possible becomes really easy, because you can put that person to work on one of your simplest services. Like in the case of Pong: if you have the code structured with functions and I need to fix how the ball moves, I don't need to learn how the players move or how the collisions happen; all I need to do is go to the function that moves the ball. This is the same thing: you can put a new developer to work on the tokens service, for example; the tokens service is very simple, and right away they can start being productive. You can even let a new person create a new service, because each one is a very simple application. One of the things that I find most exciting is that you can upgrade like the big guys do, without going
down. We never find out when Facebook, Twitter, etc. deploy upgrades, because they do it while running, and we can do the same with this. I'm going to show you an example of that later; you probably don't believe me, but give me the benefit of the doubt and I'll show you in a little bit. If you have a problem with a service that crashes, or has bugs, or whatever, that's going to affect a small part of your application; the rest of the application will continue to work. Well, unless you're unlucky and your tokens service goes down, which basically means that nobody will be able to authenticate. But if you have a big application with lots of services and one minor service goes down, the rest of the application will continue to work. So it's a partial failure, not the complete failure you get in the case of a monolith. I mentioned that you can scale the services individually and adapt to the load. And finally, also very important, you can choose the best technology stack for each service. They don't all need to be written in Python 2 or Python 3; each can be written with the best tool. In the example of going from Python 2 to Python 3: if you started this application in Python 2, you could migrate services one by one to Python 3, and as long as the communication mechanism between your services is standard (HTTP, for example), everything will continue to work, and you can do a gradual upgrade to the new technology. Likewise, if you need to write a new service, and for some reason you find that Go or Node or Ruby is a better choice, that's absolutely no problem: you can build that service in a different technology, and it doesn't really matter. Of course, it's not all roses and benefits.
There are some problems too. One problem that I see: I've emphasized the fact that things become simpler, and that is really true, but not entirely. The complexity doesn't go away completely. If you look at the diagram, the complexity goes into the arrows: it migrates from inside the green boxes into the arrows, and now you have a web of connections that sometimes gets pretty crazy. You have to make sure, for example, that you don't have cyclic links, where a client calls service A, A calls service B, and B eventually ends up calling service A again; you may need to look for inefficiencies of that sort. I suggested, by showing the boxes, that each service has its own database. So, something that people like me, who like relational databases a lot, suffer from: being unable to do joins, because now each service has its own database. If you need to do a join, in this example between users and messages, you have to do it in the application. There's no way to use SQL, because they're two different databases, and one service cannot access the database of another; and we don't want that anyway, because we want to keep things separate and be able to upgrade the services separately. Deployments are hard, and the DevOps people will tell you that it's just job security for them, but there are so many moving pieces that, yeah, sometimes it requires a full-time job to keep things going when you have this type of architecture. And then finally, you have this pinball effect. Each service does small things, so when the client requests a complex action, that may require, and usually requires, the request to pinball through different services. The entry service could be messages; messages may need to talk to tokens to verify the authentication; it may need to talk to users to get user information; and it may need to talk to Socket.IO to push a notification to the client.
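Coming back to the joins point for a moment: with users and messages in separate databases, the join has to happen in application code. Here is a minimal sketch of that pattern, where the two fetch functions are stand-ins for the HTTP calls the real services would make (their names and the data shapes are assumptions for illustration):

```python
# Application-side "join" between a users service and a messages
# service. fetch_messages() and fetch_users_by_id() stand in for
# HTTP requests to the two services; here they return canned data.

def fetch_messages():
    # In a real system: GET the messages service's public API.
    return [
        {"id": 1, "user_id": 7, "text": "hello"},
        {"id": 2, "user_id": 8, "text": "hi there"},
    ]

def fetch_users_by_id(user_ids):
    # In a real system: a batched request to the users service.
    users = {7: {"id": 7, "nickname": "alice"},
             8: {"id": 8, "nickname": "bob"}}
    return {uid: users[uid] for uid in user_ids if uid in users}

def messages_with_authors():
    """Join messages to their authors in application code."""
    messages = fetch_messages()
    authors = fetch_users_by_id({m["user_id"] for m in messages})
    return [{**m, "author": authors[m["user_id"]]["nickname"]}
            for m in messages]
```

Note that the second call is batched: one request for all the user ids in the page of messages, rather than one request per message, which keeps the pinball effect under control.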
So basically, it becomes less efficient, and you have to keep that in mind too: response times for the client are not going to be as good as when you have a single code base.

So you may wonder, how do you go about transforming, or converting, or refactoring a monolithic application into microservices? Unfortunately, that's pretty hard, but basically there are three main strategies. The one that's probably the easiest is to say: okay, what I have so far I'm going to keep, I'm not going to worry about that, but anything new that I start building from now on I'm going to build using small services, microservices. That's the easiest strategy, though not the greatest one, because you still have a monolith that you need to grandfather into your microservices platform. Another option would be to start with the big service and incorporate it, but then over time start breaking parts of that big application away into small services. Eventually, over time, you're going to end up with a pure microservices architecture, but there's going to be a potentially long transition period during which you'll be working with a hybrid. That's probably what most people do when they do this. And then finally, you can use the line-in-the-sand approach and say: okay, I'm going to refactor this application into microservices, starting today. It may take you a week, or two weeks, or a month, but when you're done, you have a complete application that's fully microservices-enabled.

Something that's important: I see a lot of projects that say, okay, I'm going to do microservices, and all they do is start writing services. That's probably, I would say, 50% of the equation. You also need to have a platform.
That is, a platform that's suitable for microservices to live on. Basically, if we look at the diagram: the load balancer and the service registry are very important components that need to be in place. Even if you have a monolith that you are transitioning into microservices, you'll have to figure out a way to incorporate that monolith into the platform that allows microservices to exist. I haven't described yet what that means, but before I get into a little more theory, I'm going to do a demo; that way, if I run out of time, at least I got the fun part done.

Okay, so this is the Microflack application. I'm going to show you, let's see. It's basically a chat application. Stay there, please, so I can go to another tab. You can see that things look a bit beat up; that's probably people playing with this example before. So now I can create another user. Pretty simple stuff. But let's look under the hood a little bit. Whoa, what's this? This is an open source load balancer, the yellow box that you saw on the left of the diagram. It's called HAProxy. If you don't know this one, you probably know nginx; maybe you know Traefik, which is another one that's becoming popular these days. All these tools do is, basically, you tell them where your services are, and then you have clients connect to this thing, and it acts as a sort of switchboard, a traffic control, that shares all the requests that come from clients among all your instances. Now, this screen is super busy, and I'm not going to explain everything, because it's irrelevant to our purposes here. But if you look at it, there are six sections; the top section basically shows you status about this tool itself, HAProxy, listening to requests.
We're going to ignore that. There are five more sections, and these five sections are for the five services that we have, the five green boxes. You can see that messages is running three instances; we're running version 4 of the messages service, and we have three of them, so when requests come to this load balancer, they're going to be assigned to one of these three, and basically HAProxy makes sure that all three stay more or less equally busy. We have one instance of socketio, we have two of tokens, and then one of ui and one of users, so you can see that I'm scaling them independently.

This is all running in a Vagrant virtual machine, so I'm going to log in to show you some fun stuff. What I should say is that you're probably used to hearing microservices associated with platforms like Kubernetes, that type of thing, which you can certainly use. I'm not using that right now. I'm a Flask guy; I like simple stuff. So this is all built using bash and a little bit of Python. This platform doesn't use any professional-grade microservices platform: I have HAProxy, I have a service registry, and then a little bit of bash. For example, I have a bash script that runs a new service. I can say, for example, let's run users. This is going to run a new container; this is all based on Docker containers. So I'm running a second users instance, and you're going to see it in a little bit. HAProxy, please refresh and show it to me... there you are. The way HAProxy refreshes is a little bit clunky; it blanks the screen, but that doesn't mean it goes down, it's just the web panel that's a bit clunky. Anyway, you can see at the bottom that now I have two users instances. All I did was run that script. I can show you: okay, this output looks awful, but somewhere in here, at the top, you have the new users
You have the new this is the new users Container that I started so just by starting the container The container itself talks to the service registry, which is this database that that knows about everything and then The service registry knows about it and that that gets communicated to H a proxy So H a proxy puts that service online immediately now I'm gonna just be nasty here and I'm going to Kill that guy So the moment I stop it H a proxy is gonna notice that something's going not right. So it's gonna Blacklist that that service so immediately, you know, and any requests are coming. They're gonna go to the good one They're the other one, right and in a few more seconds since this isn't coming back, then it's gonna go away completely So this is one cool thing that you can do with microservices super easy You start and stop things I Mentioned before that upgrades are really fun and very efficient so As soon as this this red guy goes away. I'm going to show you that I have another another bash script That's that's going to upgrade messages. I have three there you go have three messages instances They're running before now on this instance. I have already here a version five That I'm about to deploy so I'm gonna say MF upgrade roll messages and Pay attention to what happens now So you are going to see a v5 messages come up please so So one v5 and now one v4 is going down Another v5 is gonna come and another v4 is gonna go down So so basically as you see there's always at least three That are running so we never stop we never have we never have less than three, which is what we intended that are running Eventually, you know the three v4s are going to be killed and they're going to be replaced by the three v5s We have one more that needs to go down There you go So that that's how you do an upgrade without stopping it. 
So I think at this point, if I look at these two clients, they're still running. Somehow I got two bulbs there, but other than that, these are still connected; they never lost the connection. So people using this service will never notice that you're doing an upgrade. Okay, so that was the demo.

So, in the time I have left... okay, not much. Yeah, five minutes. In five minutes I'm going to try to rush through this and describe the pieces that make up this build. So, we have a load balancer; I mentioned that I'm using HAProxy. Basically, having a load balancer when you're doing microservices is a must. You can try to not use a load balancer, but you really lose a lot of benefits. You basically get to do very simple rolling upgrades: I showed you a bash script that can do a rolling upgrade without going down, and that's only possible because we have a load balancer that supports this architecture. You can do A/B testing, blue-green deployments, all these cool things you hear the very popular companies, the Facebooks and the Netflixes, talking about; they do the Chaos Monkey, all those things you can do, and the load balancer is the main piece that supports them. So it's super important that you have one in place before you start doing this. The service registry is a database that is usually designed to be highly available.
You run multiple redundant instances of it, and it's basically super fast: it caches things in memory so that the queries you do are very quick. Basically, all the services connect to the service registry when they start, so they register themselves, and then if a service dies... the details depend on the system, but usually the registration has a TTL, so the service has to keep refreshing it, saying "hey, I'm still here, hey, I'm still here". When it stops saying that, the service registry will remove it, and then it will immediately talk to the load balancer and remove it from the load balancer as well. So basically, that's the whole magic. And if you want to know, I didn't put it here, but the registry that I'm using is another open source project; it's called etcd. It's very simple; it's actually the one that Kubernetes uses as well.

Containers are a big part of this. Usually you see that microservices platforms are always built with containers, simply because it makes things much easier. The container provides a layer of isolation, a little better than plain processes; for example, it allows you to work with virtualized network ports. All the instances of the five services that you've seen in the example are running on port 5000, which is the Flask default, and then Docker takes care of mapping that to some other port that I don't even care about. I don't care what port it is; for me, writing the services, it's port 5000 every time, and that's really nice. You can do that yourself without Docker, but it makes things a lot more difficult. And of course, if you're using different technologies, having containers also makes sure that you don't have collisions between conflicting dependencies. Then we have the orange boxes, your storage services. Usually that's your database, but your service registry can also be considered
storage, and your message queue too, all of those. For all of those, for a production platform, you will look for something that's highly available, something that you can make clusters of. So typically, if you're using MySQL, for example, you will look at Galera, which is a clustering solution, or Aurora if you're on AWS; all those things that basically make the service very reliable, instead of running one instance that, if it dies, takes the whole thing down. The same thing for queues and so on.

And then you have your applications. These are the green boxes, and these applications are stateless. (I need to rush.) They are stateless, and that is what allows me to start and stop these services: they have no data in them; they use the storage services to store data. This allows me to start them and kill them, and it doesn't really matter; they are all basically disposable. I can scale them horizontally for free: I can run as many as I want, and basically the more I run, the more load I can handle on that service. And I think I'm going to skip this one in the spirit of saving time.
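The registration-with-TTL mechanism described earlier can be modeled in a few lines. This is a toy in-memory version, just to illustrate the expiry logic; the real registry here is etcd, which implements this with leases, and the `now` parameter stands in for the wall clock so the behavior is easy to follow:

```python
# Toy model of a service registry with TTL-based expiry: a service
# registers with a time-to-live and must keep sending heartbeats;
# when the heartbeats stop, the entry expires and the service is
# dropped (and, in the real platform, removed from the load balancer).

class ServiceRegistry:
    def __init__(self, ttl):
        self.ttl = ttl
        self.entries = {}              # service name -> expiry time

    def register(self, name, now):
        self.entries[name] = now + self.ttl

    refresh = register                 # a heartbeat is just a re-register

    def alive(self, now):
        """Expire stale entries and return the live service names."""
        self.entries = {n: exp for n, exp in self.entries.items()
                        if exp > now}
        return sorted(self.entries)
```

A service that keeps refreshing stays listed; one that crashes simply stops heartbeating and ages out, with no explicit deregistration step needed.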
I already talked about this. Basically, the lifecycle of a microservice is that it starts and talks to the service registry, and then when it dies it stops talking to the service registry, and that translates into the load balancer removing it from its configuration. And then, finally, we have service-to-service communication, which is the mechanism by which the services talk among themselves. In this example I'm using HTTP for internal communication. There are many projects that decide to use HTTP only for client communication into the project, and internally use different mechanisms, which is totally fine, as long as it's a mechanism that doesn't restrict your choice of technologies. So you could pick a standard RPC mechanism, for example; that would be a very good way to do more efficient, less chatty communication than HTTP.

So, I'm going to give you a link to the slides, but if you want to try this example yourself, it runs in a Vagrant virtual machine. These are the instructions and the requirements; you can run this application and play with it like I did here. And tomorrow I'm going to talk about this in more detail, if you want to learn how this was built. You can see the slides. Thank you.

Thank you very much for this nice presentation. We do have time for questions.

Thanks for the presentation. My question is: can there be a difference between running two instances of the same microservice in two Docker containers on the same machine, versus running just one instance in one Docker container? Can it help with load balancing, or are there other cases where it can help?
Well, I'm not sure I understand your question, but typically you will not run... let's say you have three instances. You are probably going to run them on different hosts, and the load balancer is going to be in front of all of them, and the reason is that if a host goes down, you don't want all your instances to go down with it. So there is usually no case where you would run two instances of a microservice in Docker containers on the same machine.

I'm sorry, I can't hear you well. I know you're asking about something specific, but...

Yeah, I mean, can there be a case where we have two microservices in two Docker containers on the same machine? Because they would be different processes. Is there a case where it could help with load balancing?

It's totally fine; you mean, if there are collisions between the two containers? No? I mean, it would not really make sense in any case; what makes more sense is to use different hosts.

Okay, thank you, that was my question.

I mean, it's mainly for resilience, right? You don't want a Docker host to go down and take all your instances of a service with it. You would even use different data centers, if you are doing this for real.

Okay, thank you.

Any more questions?

Hi, thank you very much. I was wondering, what's the best practice for a rolling upgrade in microservices, like you just did, when one service is changing the data schema and the other one isn't?

Yeah, right. I didn't expect to have time to talk about this, but I'm glad you asked. There are some rules. When you make upgrades to the database, and that's not the only case, but when you make upgrades to the database, they always need to be backwards compatible. The same thing with the API.
So, if your service exports an API that other services and clients use, and you make changes to it, they cannot be breaking changes. In my example here, when I went from v4 to v5 of a service, I needed to make sure that I could have v4 and v5 running at the same time, using the same database. So usually, if you need to make significant changes to your database, you need to make them in stages; you cannot make them in one go. You cannot remove a column, for example, but you can deprecate a column: you deploy the upgrade, and once you're fully upgraded, and only when you have made sure that no instances are using that column, you remove it in a second step. It's more difficult, yeah. It's a pain, actually.

Okay, one more question.

Very interesting talk, thank you. I've got one question: if you split up your big application into many that keep their data in different databases, how do you keep your database backups consistent?

Basically, the services are independent, so you have to back up everything separately. Each backup can be on its own schedule; they don't need to be done at the same time or anything.

Okay, so you have to split up your transactions, so that what has to be in one transaction has to be in the same database, then? Okay, thank you.

Okay, we have time for one last question.

Hello. If you're upgrading your database: you were saying that the instances of the microservices shouldn't have any data, and that they can be stopped at any time. But if you're upgrading the database, how do you
maintain the data, since the services shouldn't be holding it?

Well, yeah, basically that's the same question as before: it's difficult. You have to design your database upgrades so that they don't break the existing version of the application, the existing version of the instances. It typically requires a multi-stage approach to make significant changes, and usually you try to avoid making big changes: usually you add stuff, but you never remove or rename, because those changes are expensive to deploy, because of all the effort and the risk of breaking something. So, typically, you start thinking... probably now, if you've never done it, it sounds crazy, and I was in that same position a couple of years ago, but then you get used to it, and you start designing your database so that you don't have to make breaking changes so often. Of course, if you need to make a breaking change, then you can make an exception and say: okay, this time I'm going to stop everything, upgrade, and start everything again, and then there's some downtime. You can go that route too. Another option is to use NoSQL databases, and, you know... I'm old school, I'm traditionally trained, I like relational databases, but I have to agree that for this type of architecture NoSQL makes a lot of sense, because it's a lot easier: there's no schema to deal with.

Okay. Thank you very much. Thank you.