Thanks. Good afternoon. My name is Max, and today I'm going to talk about why we think you should stop trying to glue your services together and import lymph instead. To get out of the way first who we are: we're Delivery Hero, an online food ordering holding. What online food ordering is all about is basically a simple concept, and I guess there's no one around who's really unfamiliar with it: you get hungry, you go online to one of our web pages, you search for restaurants that deliver to where you are, you compile your order, you pay online, and then you wait for the food to be delivered. So basically it's like e-commerce, but with grumpy customers by definition. The fulfillment part is also interesting, because food needs to be delivered quickly; that's something you need to take into account. We operate in 34 countries, and we're headquartered in Berlin. This is our mascot, the hero, hence Delivery Hero. Just a quick show of hands: who of you attended the talk by Matt about Nameko before lunch? That's a fairly good amount, because there are a few things that we're not going to talk about, but which Matt nicely introduced. We're not going to talk about what services really are, as opposed to monoliths, or why you should or shouldn't go with a service-oriented approach and how this helps you; neither are we going to talk about cloud stuff, or Docker, or whether you should call them services or microservices, or how micro is micro. What we are going to talk about today is lymph. Lymph is a framework for writing Python services, and to start with I'd like to justify a little why we wrote another framework, because usually developers say, hey, there's something out there already. In this case, there wasn't. Once we have that out of the way, we're going to get our hands dirty with a live demo (fingers crossed that it works all right), and that's basically the main section of this talk.
Afterwards, we'll briefly look under the hood of lymph and at what other features are there which we don't touch on today, briefly touch on things like Nameko and so on, give you a little outlook, and then hopefully there's time for Q&A at the end. To be fair: if you want to go over things in your own time, this entire introduction is available as an article at import-lymph.link. Everything is written down there, in even more detail, and you will find the exact same examples and services that we're talking about today, so you can try things for yourself. There's a Vagrant box set up that you can use, which we'll use later too, or just to debrief yourself on what we talked about today. So why did we write another framework? That's pretty simple. Roughly two years ago, we were in the situation where we said: we want services, in Python, and not worry. Let's assume that our decision to go with services was right. We were running a big jungle of a monolith, basically a lot of spaghetti code of the legacy variety, the kind that no one really likes; therefore, the idea of going with services became increasingly attractive to us. We wanted to stick with Python, because usually people say, hey, if you do services, the idea is that you don't have to worry about which language you run with; but as we like to do Python, every developer should be able to stay as productive as they are. And if we hadn't stuck with Python, well, then I couldn't be here today to talk about it, so that's good. And we didn't want to worry. That means: back then, there was nothing that really helped you a lot when you wanted to run and operate services. So we wanted services, in Python, and not worry. The first two are easily ticked off; the third one wasn't, and therefore we came up with lymph. We had certain expectations, though.
Running and testing your services should be as easy as possible. You should not have to worry about glue: as an author or operator of a service, you should not have to worry about how to register your services, how to run them, or how to configure them; you should not have to worry about any of that glue code at all. Configuration should be simple and flexible: you should get a lot out of your configuration files without having to write a lot of code to parse and deal with them. Ideally, you should be able to take the same service and run it on your local machine, your lab environment, staging, live, possibly in another country, simply by configuring it differently. Scaling, naturally, should be easy: if you need more resources, you just throw more instances into your cluster, yet the client code should be totally unaware of this. We wanted the possibility to speak RPC rather than HTTP between services, and to be able to emit events so that we can communicate asynchronously; but we also wanted to easily expose HTTP APIs. And last but not least, if you want to introduce a new service, there should be as little boilerplate as possible, yet a fair amount of scaffolding that helps you structure your stuff nicely. What we came up with is lymph. You can find it at lymph.io, and the idea is that it satisfies all of these requirements. So, I repeat myself here: it's a framework for Python services. By default, lymph depends on RabbitMQ as the event system and ZooKeeper for the service registry. One more quick show of hands: who knows what RabbitMQ is? Good. ZooKeeper? Fair enough. ZooKeeper is a distributed key-value store, and that's what we do service registry with; we'll find out about that later. So here comes the scary part. People say that I should not animate my slides, that I should not show code on slides,
and that neither should you ever, ever, ever do a live demo, because it will go horribly wrong. So I'm going to show you code, it's animated, and we're going to do a live demo, so there's nothing that could possibly go wrong. To begin with, and to jump into the thick of it, we're going to write services and increasingly introduce new ones to see how they interact with each other; we're going to run them and play around with them to explore the tooling that lymph brings with it. We start with the most sophisticated hello-world example you could think of: a greeting service. It's funny, because Matt used basically the same example; that was not planned, but it's funny though. You give this greeting service a name, and it's supposed to return a greeting for that name. To begin with, we need two files: the implementation of our service, naturally in a .py file, and a configuration file in a .yml file (sorry, that's YAML). For our service, we start with import lymph, and this is basically where this talk lives up to its claim. We want to define a service called greeting, and we do so by inheriting from lymph's Interface. Like I said, we want to expose one method as its interface; it's called greet, it takes a name, and we can expose it easily by decorating it with lymph's rpc decorator. Inside this method, we simply print the name we received, saying hi to it; we emit an event to let the world know that we just greeted someone; and lastly, we return the greeting. The configuration file is rather straightforward as well: we have to tell lymph which interfaces to run and where they are located on the Python path, because lymph imports them at runtime to bring up an instance of the service.
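As a sketch, the greeting service just described might look like this. The real code would simply `import lymph` and subclass `lymph.Interface`; since lymph (and its RabbitMQ/ZooKeeper dependencies) may not be installed, tiny stand-ins below mimic just enough of the API, as described in the talk, for the sketch to run on its own.

```python
# Stand-ins for the relevant bits of the lymph API (an assumption, so
# this sketch runs without lymph, RabbitMQ or ZooKeeper installed).

class Interface:
    """Stand-in for lymph.Interface."""
    def __init__(self):
        self.emitted = []  # the real framework publishes to RabbitMQ

    def emit(self, event_type, body):
        self.emitted.append((event_type, body))

def rpc():
    """Stand-in for the rpc decorator: marks a method as exposed via RPC."""
    def decorator(method):
        method.rpc = True
        return method
    return decorator

class Greeting(Interface):
    @rpc()
    def greet(self, name):
        print('Saying hi to %s' % name)       # visible in the service pane
        self.emit('greeted', {'name': name})  # let the world know
        return 'Hi, %s!' % name               # returned to the caller

service = Greeting()
print(service.greet('EuroPython'))
```

In the real service, the decorated method is callable over RPC and the emitted event goes to the event system; here the stand-in just records it so the sketch stays self-contained.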
So let's get our hands dirty. We'll do this within a prepared Vagrant box, which is readily accessible for everyone at import-lymph.link. It provisions via Ansible, and it has ZooKeeper and RabbitMQ running inside; to make things even more accessible, there are prepared tmux sessions which you can easily fire up, which then start services in panes that are nicely labeled using the toilet command. And this is what we're going to do now: let's go there, run our services, and play around with them. You don't see the topmost line of my shell, which is confusing, but you should see everything from now on. So we're in the box now; it greets us very friendly with import lymph. There's a tmux session prepared, and we fire it up with `mux start greeting`. What you see now is two panes, one of which is running an instance of the greeting service: on the right-hand side you can see that we run `lymph instance` and point it to the configuration file, so lymph knows which interface to run. On the left-hand side we simply have a shell, and this is where we'll explore the tooling that lymph comes with. To begin with, let's say we don't know anything about lymph at all, so it should tell us about the commands that are available. `lymph list` does so. That's a whole lot of text; don't worry, you don't have to read them one by one, we'll explore things bit by bit. Now let's say we have no clue whether any services are running at all, or which ones there are. `lymph discover` lets us discover services, and indeed it tells us there's one instance of the greeting service running, as expected. Let's continue to play dumb and say we don't know anything about this service, and I want to get to know its interface: `lymph inspect greeting` should inform us about this service's interface.
And this is more than expected, actually: the topmost method that you see is the greet method, the one we've just implemented ourselves, and below that you see four built-in methods which you get by inheriting from lymph's Interface. So let's exercise this service. We can do so by issuing `lymph request greeting.greet` and supplying the request body, which needs to be valid JSON. Talking and typing at the same time is hard; I'll greet you guys, EuroPython. What we expect to happen now: the request should hit the instance of the greeting service, which is supposed to print something, and we should receive the greeting in the response. Fingers crossed this works... and it did. On the right-hand side you see that it said "saying hi to EuroPython", and we received the response on our end as expected. That's very nice. On to the next service. The greeting service emits an event every time we greet someone; this is something we haven't seen yet, because it emits events but there's no service that consumes them. So let's write the service that consumes these events, and, creative as we are, we're going to call it the listen service. Once more we need two files, one where we implement the service and one where we configure it. We start with import lymph, and we define the service by subclassing lymph's Interface, calling it listen. Like I said, every time an event of type greeted occurs, we want to consume it, and we want this method to be invoked. It's called on_greeted, it receives the event that has been emitted, and all it does is take the name from the event body and print that somebody greeted that name. The configuration is just as straightforward as before: we tell lymph that it's supposed to run one interface, the listen interface, and we point it to where this is located on the Python path so that it can be imported at runtime.
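Following that description, each service's YAML file only has to name the interface to run and its import path. A sketch, with file, module and class names assumed:

```yaml
# greeting.yml -- run one interface, imported at runtime
# from the module greeting on the Python path
interfaces:
    Greeting:
        class: greeting:Greeting

# listen.yml looks the same, pointing at listen:Listen
```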
To see how they interact, let's run them in combination: I'm firing up the tmux session for that, and we're doing a leap of faith here, because we're not only running one instance of each service; in this case we're running two instances of the greeting service and one of the listen service. But let's assure ourselves that they have registered correctly with our service registry. `lymph discover` should tell us, and as you can see, indeed there are two instances of the greeting service and one of the listen service. Now, the listen service is supposed to consume certain events, so let's assert whether that really is the case. We can emit an event of type greeted with lymph, and we have to provide a body, which once more needs to be JSON; the name is EuroPython. What we're expecting to see: once we emit it, the listen instance is supposed to consume it, and it needs to print something. In fact, it consumed (you can see this down here) the event that was emitted before, when we were requesting the greeting instance, so we're expecting to see it print again now. Very nice, it printed as expected. So let's request a greeting and see whether they're correctly working together. Once we send this request, we expect it to be handled by one of the greeting instances, which should print something and return to us, and the listen service should print once more. And in fact, the second greeting instance handled it. Now, if we repeatedly issue this request, the requests should be randomly distributed over the greeting instances, and fingers crossed... yes, this worked: the topmost one handled it, and then the other one. Very good, this seems to work as expected. But it wouldn't be 2015 if we were not talking about web services, so let's expose the functionality that we have within our service cluster, which is the bleeding-edge greeting service, via an HTTP interface. What we need to do is write a web service, and once more we start
out by implementing it in Python and configuring it afterwards. In this case, we import the WebServiceInterface from lymph's web services module, and we'll also need some of Werkzeug's tooling to deal with URL mappings and, since we also want to return a response, its Response class. Business as usual: we define our web service by inheriting from WebServiceInterface. We want to expose one URL, /greet, supposed to be handled by the greet method, which receives the request; we expect the name to be in the query string. When we receive the request, we pick the name from the query string, we print that we're about to greet someone, we invoke the greet method of one of the greeting instances (this is RPC, basically), and in the end we return the greeting in the response. We also need to configure it, and as our web service is supposed to listen on a port, we have to include this in the configuration; that's the only bit that differs from the two configuration files we've looked at before. So let's run everything together. What we see here is one instance of each service running: web, greeting and listen. And since old habits die hard, let's make sure that they have all registered correctly. `lymph discover` should tell us, and indeed there's one instance of each service. Let's exercise the web service now and see whether they actually work in combination as they should. We're listening on port 4080, and the name goes into the query string, EuroPython once more. Once we issue this request, we expect to receive the greeting in the response, and all instances should print something, in order to validate that they were actually being spoken to. Let's issue this request... and in fact, all instances printed something, and we received the greeting in the response: it says hi EuroPython over here. But there's one thing that you might see already: the more services you run, the more complicated local development becomes.
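A sketch of the web service just exercised. The real code imports WebServiceInterface from lymph and the routing and Response classes from Werkzeug; the minimal stand-ins below (an assumption, so the sketch runs standalone) only mimic the dispatch enough to show the shape.

```python
# Stand-ins for the Werkzeug/lymph pieces used by the web service (assumed):

class Rule:
    def __init__(self, path, endpoint):
        self.path, self.endpoint = path, endpoint

class Map:
    def __init__(self, rules):
        self.rules = rules

class Response:
    def __init__(self, body):
        self.body = body

class WebServiceInterface:
    """Stand-in: routes a path to the handler named by the matching Rule."""
    def dispatch(self, path, request):
        for rule in self.url_map.rules:
            if rule.path == path:
                return getattr(self, rule.endpoint)(request)

class Request:
    """Stand-in carrying the parsed query string as a dict."""
    def __init__(self, args):
        self.args = args

class Web(WebServiceInterface):
    url_map = Map([Rule('/greet', endpoint='greet')])

    def greet(self, request):
        name = request.args['name']        # name from the query string
        print('About to greet %s' % name)
        # In the real service this line is an RPC call to one of the
        # greeting instances; here we inline the greeting it would return.
        greeting = 'Hi, %s!' % name
        return Response(greeting)

response = Web().dispatch('/greet', Request({'name': 'EuroPython'}))
print(response.body)
```

The only extra bit of configuration, per the talk, is the port the web service listens on; everything else looks like the other two services.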
You need more shells to run the instances, and if you want to run several instances of one service, you need to run them in several shells; that becomes rather painful, and it has become rather painful here already, because we want to run three services, so we need three shells. But there is lymph to the rescue: it comes with its own development server, the `lymph node` command. To get its leverage, within the directory where we want to run our development there needs to be a configuration file called .lymph.yml, and in there we configure the services that we want to run, and how many instances of each. This is highlighting the important sections: to configure instances, you basically tell it how to bring up an instance, and how often. We run two web service instances, three greeting service instances and four instances of the listen service. And in the last section, since we have two instances of our web service running and they listen on a port, we have to configure that port as a shared socket. So let's bring up our node. You won't only see the node in the top right pane; below that you also see `lymph tail`, with which you can subscribe to the logs of any service. In this case we subscribe to the web, greeting and listen services, and it will print all log statements it receives from the instances. Let's make sure that everything registered correctly, because there's no output in the lymph node pane yet; `lymph discover` should tell us that we have indeed two, three and four instances of the services running, respectively. And let's hit our service cluster as before, on localhost: /greet, the name goes into the query string, EuroPython once more. What we expect to happen once we issue this request: it should be handled by the instances, and we should see three print statements in sequence in the node pane, with plenty of log output below.
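A sketch of what that .lymph.yml might contain, per the talk: which instances to bring up, how many of each, and the shared socket for the two web instances. The key names here are assumptions for illustration; the authoritative schema is in the lymph docs.

```yaml
instances:
    web:
        command: lymph instance --config=conf/web.yml
        numprocesses: 2
    greeting:
        command: lymph instance --config=conf/greeting.yml
        numprocesses: 3
    listen:
        command: lymph instance --config=conf/listen.yml
        numprocesses: 4

# both web instances bind the same port, so it is
# configured as a shared socket (key names assumed)
sockets:
    port_4080:
        port: 4080
```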
Fingers crossed this works... very nice, it did. You see three print statements up here; it almost reads like a little haiku: about to greet EuroPython, saying hi to EuroPython, somebody greeted EuroPython. The response looks good as well, we can see the greeting has been returned as expected, and we see plenty of, almost confusing, log output below. But now consider that your instances might be distributed over any number of machines. If you want to debug something, or follow the logs and get information from them, it's hard to tell which log statements belong to each other. How can I relate them to a request? They may well belong to the same request, but the log statements come from several different machines. Lymph lets you overcome this problem with a trace ID. Whenever a request enters the cluster and does not have a trace ID assigned yet, lymph assigns one to the request, and this trace ID is handed forward with every RPC request and every event that is emitted; and whenever something is logged, it is logged with it. You can see here that we hit the web service and it returned a header called X-Trace-Id; that's where it included the trace ID. And allow me to use iTerm's search-and-highlight function: you can see the trace ID appearing in the logs properly, and within your own time you can assure yourself that it really logs correctly with the trace ID, so we can correlate all the log statements via the trace ID. Very good: I managed to successfully get through the demo part and nothing broke. So let's just briefly reason about the communication patterns which we've just observed. I think I went a little too far with animating stuff, but hopefully it's entertaining. We started with two, three and four instances of these services running, respectively, and we issued an HTTP request; it was handled by one of our
web instances; it printed something, and then we wanted to invoke the greet method of one of the greeting services via RPC. What happens behind the scenes is that we consult our service registry, which is ZooKeeper by default, and we ask it for all the instances of the greeting service; then we pick one at random to send the request to, and in this case it was the lowest one. The request is sent over, the instance printed something, emitted the event to our event system, which is RabbitMQ by default, and returned the response; then we had nice output on the shell. And on a possibly entirely different timeline, one of the listen instances consumed the event, by getting it from the queue, and printed, naturally. So we see there's RPC available, which follows the request-reply pattern and is synchronous communication, and we're also emitting events, which is the pub/sub pattern and asynchronous communication. As you've seen (I've jumped slides here already), exactly one instance of the listen service will consume each event. However, there are situations where you'd like to inform every instance of a service that something occurred. All we need to do, as you can see on the lower left, is decorate the method which is supposed to consume the events as usual, but say that it's a broadcast. What happens instead is that when we emit the event, we're publishing to four queues in this case, and it is then consumed four times; as a repercussion, we would naturally have seen four print statements. So these are the communication patterns which are available with lymph. But what else does lymph come shipped with, what's in the box? Like I mentioned already, lymph manages your configuration files, so you can get a lot out of your configuration with very little code. It provides a testing framework, so that you can unit test your services, following the fashion of: if I invoke this RPC method, is an event emitted as expected? Or you can run several services together and exercise them.
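The request path recapped above (consult the registry for all instances, pick one at random, and hand the trace ID forward with every hop) can be sketched in plain Python; the registry dict and addresses are made up for illustration:

```python
import random
import uuid

# Toy stand-in for the ZooKeeper-backed registry: service name -> instances.
registry = {
    'greeting': ['10.0.0.1:41000', '10.0.0.2:41000', '10.0.0.3:41000'],
}

def send_rpc(service, method, body, trace_id=None):
    # A request entering the cluster without a trace ID gets one assigned...
    if trace_id is None:
        trace_id = str(uuid.uuid4())
    # ...one instance is picked at random to handle the request...
    instance = random.choice(registry[service])
    # ...and the trace ID is handed forward, so every log line this request
    # causes, on whatever machine, can be correlated later.
    print('[trace_id=%s] %s.%s(%r) -> %s'
          % (trace_id, service, method, body, instance))
    return trace_id

trace_id = send_rpc('greeting', 'greet', {'name': 'EuroPython'})
# follow-up calls on behalf of the same request reuse the same trace ID
send_rpc('greeting', 'greet', {'name': 'EuroPython'}, trace_id=trace_id)
```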
Its dependencies are basically pluggable: you could exchange ZooKeeper for something else and do service registry differently, and you could use not RabbitMQ but something else, like Kafka, for instance. There are service hooks: when your service starts, you possibly want to set the stage for it, like provisioning a database connection, and once it shuts down, that's supposed to be cleaned up; there are service hooks for this. Lymph allows you to do futures: classic RPC is usually blocking, but possibly you're not interested in the reply from the service, or you're interested in it later, so you can defer the call with a future. Lymph already collects a good amount of metrics when it runs your service, and it exposes them; but you can also collect custom metrics. For instance, if your service talks to a third-party API and every now and then a request times out, and you want to keep track of how often this happens, that's something you can easily do. You can also write your own plugins for lymph: there are even more hooks that you can plug into to get custom code executed whenever something interesting happens, and out of the box there is a New Relic and a Sentry plugin for lymph. The CLI interface is easily extendable: a colleague wrote lymph-top, which is basically like top, but for lymph services. And you can handle remote errors, get shells on remote services, and so on; there's a whole lot more. How lymph works under the hood: anything that is supposed to be sent over the wire is serialized with MessagePack, which is, so their claim goes, like JSON but a little smaller and a little faster. RPC is dealt with via ZeroMQ; like I said, service registry by default happens via ZooKeeper, and the event system is RabbitMQ. Every lymph service instance is a Python process, and it handles
requests and events within greenlets; this is what we do with gevent. And for everything that is web or HTTP, we use Werkzeug's tooling. Since some of you attended the Nameko talk already: as for things out there that are similar to what lymph is, there's one thing that needs to be mentioned of course, and that's Nameko. Nameko does a lot of things very, very similarly to how lymph does them, and naturally does certain things differently, but it's very nice, and if you haven't attended the talk, do have a look at Nameko. There are also other things out there which don't try to solve the big picture like Nameko or lymph do, but supply solutions for niche problems, like zerorpc and other stuff; with those you would still have to provide a good amount of glue code yourself, which is what both Nameko and lymph try to avoid. As for what we have in mind for the future: we want to have a little ecosystem of libraries for writing special-purpose services easily. We have lymph-storage in mind; lymph-monitor, which collects all the metrics from the other services and stores them wherever, or does with them whatever you want; and lymph-flow, where the idea is basically to write business process engines which deal with your business processes and manage your entities; and there's more to come. To sum things up: if you can remember this one thing, that lymph is a framework for writing services in Python, I think I have been successful today. You can find out more at lymph.io, and naturally it's open source; your contribution is very welcome. You can find the docs at Read the Docs, everything is linked at lymph.io, and it's all written down in more detail, following the same narrative as today, at import-lymph.link; that's where you find all the examples, that's where the Vagrant box is, that's basically where you can go and play around with lymph. And last but not least, if you are a Spanish speaker and you'd like to hear this talk again later
this week in Spanish, then my colleague Castillo will give the same talk in Spanish. I had to learn this by heart: it's import lymph. I don't know whether I made a good effort, but I see you're nodding, very good, so that worked. And here comes the shameless plug that goes with every talk: we're hiring. If you're interested in working with us in Berlin, and you want to work with lymph and find that interesting, you've possibly seen this flyer in your attendee bag already; feel free to reach out at deliveryhero.com, or see us at our table in the hall. We've brought goodies, and most importantly, gummy bears. Thank you, and thanks to the organizers, of course. Questions? Maybe you can just shout, and then I'll repeat the questions so everyone can hear.

So, your first question: well, theoretically it's possible to talk to lymph services from elsewhere, but you'd have to do it yourself; you would basically have to re-implement the protocol exactly within the other language. And your last question, I didn't get it. So your question was whether you can use lymph node: the idea behind lymph node is to be used in development; it's not meant to replace how you run things in production, the idea is to help you run stuff locally. And it doesn't have to be lymph with lymph node: you basically just supply a command, for instance to run your Redis server; you can include that there as well, and it runs it for you.

Hi, just a quick question: which versions of Python do you support, and if you support Python 3, why didn't you use something like aiohttp?

To answer your question: yes, both Python 2 and 3 are supported. I don't know with which Python 3 version it starts; I know that we've had a little trouble with this in the past, but it's supposed to support Python 3 as well. And your other question, aiohttp: I don't know about that.

Sorry, so about message versioning: do you support that, or is that something somebody would have to do on top of your messages? For example, if you run two different versions of a service in a cluster.

If you're running two different versions of a service, there's nothing that deals with that out of the box right now; you would have to deal with it yourself. I mean, it depends on whether the interface is backwards compatible, but if you want to run two different versions right now, you would have to run two different services, or just extend the interface.

Thank you for a great presentation, and it worked out; it's amazing to see such polished software which promises a lot. But what's in your backlog, what issues are you working on, what's your roadmap for developing it further?

The idea is to let these special-purpose libraries, as I'd like to call them, further mature, and then release them as open source at some point. But right now the idea is to simply make lymph more stable; we're going to run it in production anyway, so it will naturally grow and mature in the future.

I don't know if I got it right, but I understood that you're using ZeroMQ for handling the RPC calls and RabbitMQ for handling the events. Have you considered using RabbitMQ for both and getting rid of one extra dependency, or did you experiment and it didn't work? For instance, Nameko does RPC via RabbitMQ as well.

For us, it was a design decision not to do RPC over something persistent like RabbitMQ.

Actually, my question was going the other way around: is it also possible to replace RabbitMQ with ZeroMQ for the pub/sub?

Yeah, definitely.

You said it is pluggable, but is it already implemented, or is it something...

I was expecting a question, therefore I prepared something. This is the part of the .lymph.yml which I actually didn't want to show, because it's a little confusing; however, what you can see here is where things are actually pluggable: you could provide another class which does registry or handles events. This is what it looks like by default; you can write your own backends for either. And I see everyone's eyes narrowing, so yes, it is confusing.

Just one more quick question: I've seen in the YAML files, at least in the simple examples, just names of services and class paths. So would it be possible to have a decorator that defines the name, get rid of the YAML file, and launch the instance just by providing the path to the class?

Well, in theory, yes, that's possible, but the idea behind this is that you could group several interfaces together and run them as one service, and I think this is just more flexible, because then you don't start to mix things up: this way, you have everything in your configuration, and that's where it is. Thank you. Cheers, thanks guys.