Thank you. Welcome everybody, thanks for coming. This talk is about microservices and Nameko, which is an open-source library that you can use to write them in Python. My name is Matt Bennett. I'm the head of platform engineering at a company that's currently still in stealth mode, so I can't talk about it while I'm on camera. And previously I was a senior engineer at One Fine Stay, which is where Nameko was born. So how many people in the room know the phrase microservices? Okay, that's quite a lot. And how many of you knew about it two years ago? Not that many. So microservices is the hot new buzzword, and suddenly they seem to be everywhere. So quick history lesson. In March 2014, Martin Fowler and James Lewis published this paper, Microservices, which I think is considered to be the seminal paper on the topic. So I highly recommend it. It's very accessible. It's not very long, and there's a lot of information in there. And it's also very recent. So they didn't invent the term microservices, but they gave it a concrete definition and really propelled it into our vocabulary. So at One Fine Stay, we discovered this paper when it was published and realized that it described what we'd been building for some time. And that was really exciting, because suddenly we had a common language with which to share ideas about this stuff. So for the uninitiated, what are microservices, or more correctly, what is the microservice architecture? So this is Martin Fowler's definition: it's an approach to developing a single application as a suite of small services, crucially each running in its own process and communicating with lightweight mechanisms. So I think it's helpful to contrast microservices with a monolith, which is probably your default way of building an application: as a single process. So your typical Django site is a good example. 
You would probably compartmentalize your logic into different apps, in Django parlance, but ultimately they would run in the same process and memory space as each other. Whereas in microservices, your apps become entirely separate programs. So in essence, this is good old-fashioned decoupling and encapsulation, but applied at the process level. And so what this forces you to do is consider the boundaries of the services, or the seams that run through your application. A common response to the hype around microservices is "you should be doing this anyway, that's just good design", which is true, but with microservices you can't be lazy and do, say, a cross-component import, because it's not there to import. And there are other benefits to using separate processes as well. So some reasons for adopting microservices: the primary reason for adopting any software architecture is scale, or rather maintainability at scale. So I don't mean scale in terms of serving hundreds of millions of requests a second, but rather in the complexity of the problem that you're trying to solve and the team that is charged with solving it. So there's an analogy for this that Alan Kay, who is the inventor of Smalltalk and object-oriented programming, used in a 1997 keynote, which I watched a video of, because I was 13 in 1997. And it goes like this. So if somebody asked you to build a doghouse out of wooden planks and nails, you'd probably be able to do a reasonably good job, a reasonably sound structure. But if they asked you to then scale that up to 100 times the size using the same equipment and tools, you couldn't do it. The thing would collapse under its own weight. And so when society started building massive structures like cathedrals, we used stone arches to support the weight of the structure. And I had this light bulb moment where I realised that the etymology of the word architecture is literally the application of arches. 
So how can microservices help you achieve maintainability at scale? We've already said it's about decoupling and encapsulation, but what else? So as separate programmes, they're independently deployable, which means you can have separate release cycles and separate deployment processes for each part of your application. So the Guardian newspaper have written about how they've embraced microservices, and it's allowed them to start using continuous delivery and iterate very quickly on one part of the application, without putting some slower-moving, more legacy, or more risky parts at risk. Separate programmes are also independently scalable. So now I am talking about serving hundreds of millions of requests a second. So to scale a monolith, you have no choice but to replicate it and deploy another instance. You have to replicate the whole thing. But microservices are much more granular and therefore more composable. So if you have a service that is very highly CPU-bound, you can deploy more of those across more CPUs without having to drag along the rest of your application as well. And there's also a freedom of technology. So being good Pythonistas, I'm sure we all really want to use Python 3 where we can. But sometimes we get stuck using an old library that hasn't been updated, and therefore we can't; we're stuck on Python 2. In a monolith, you have to use the lowest common denominator. But microservices are individually free to use the most suitable interpreter for them. So Py2, Py3, PyPy: it's up to you. I perhaps shouldn't say this too loudly at a Python conference, but this extends to your choice of language as well. So if you want to experiment with something functional like Haskell or Erlang, you can write a service in that language. Now forgive the circular reference on this one, but microservices are not monolithic. So outside the realms of software architecture, a monolith is something that's big and imposing and impenetrable. 
I'm thinking of the monolith from 2001: A Space Odyssey, obviously. Whereas microservices are small and nimble and quick. They have a smaller code base, which means it's easier to bring a new developer on board and have them understand the whole thing. There's a lower cognitive overhead to understanding how it works, which is inherently more maintainable. And then there's Conway's Law. So how many people have heard of Conway's Law? Yeah, a few. So this is something that ThoughtWorks talk about a lot. In 1968, a chap called Melvin Conway said this: organisations which design systems are constrained to produce designs which are copies of the communication structures of those organisations. Because it was 1968; it seems like there are no new ideas in software architecture. So if you have your regular three-tiered web application, you have a database layer, an application logic layer, and a user interface layer. And you likely employ specialists that work in those areas. So I've worked in a team like this. And as a member of the middle tier, I would be able to talk to my application developer peers every day, and it would be really easy for us to communicate. But then when we went to speak to the UI folks, we used, you know, a subtly different language. And there was this layer of friction that meant that we made mistakes and it was harder for us to communicate with them. That's Conway's Law in action. So what ThoughtWorks recommend instead is that you build small multidisciplinary teams, and then you separate them based on the natural divisions that exist within the organisation that you're serving. And as a result you get an application that better reflects the organisation, rather than these somewhat arbitrary technical boundaries. So these wonderful benefits are all well and good. But what does it cost? Well, it's kind of a grown-up architecture. There are a lot of things that you have to have in place before you can make it work for you. 
If you want to avoid the architectural doghouse, that is. So there's a DevOps overhead. If you're increasing by 10 or 20 or 100 times the number of things that need to be built and deployed and looked after, that's a massive burden for an operations team. And the only way to cope, really, is to leverage automation: for your tests, for your deployment, for your machine management. And another insight from ThoughtWorks is that microservices are a post-continuous-delivery architecture. And what they mean is that it's enabled by automation; without automating your tests and deployment and machine management, this burden would make microservices completely infeasible. So I think this is why microservices all of a sudden seem to surround us. It's the same good ideas of decoupling and encapsulation, but with this new dimension being enabled by DevOps tech. So as well as the DevOps overhead, you also have to embrace the domain in which you're operating. So you could argue that for a sufficiently complex application you should really be doing this anyway, but I've certainly worked places that didn't. So what I mean by domain knowledge is that you have to really understand the business requirements, i.e. the problem that you are trying to solve for your organisation. And you have to do that so that you know where to draw the lines between your services, how to divide your application up. You can't just build a web app and then tack things on as they become apparent. Microservices forces you to do this up front. And then there's the decentralised aspect. So you no longer have a single source of truth like the traditional database layer. You have to relinquish ACID guarantees and instead embrace BASE, which stands for Basically Available, Soft state, Eventually consistent, which is a really awkward backronym but a good chemistry joke. What this means is that you can't apply transactions across calls to multiple services. 
You have to apply it in one place and then wait for those changes to be propagated and reflected in all the other places. That's eventual consistency. So at One Fine Stay we made a mistake in this realm. We built an abstract calendaring service that handled the calendaring data and took calls from several other services. And because the calling service and the calendaring service were separate, we couldn't apply transactions across them, which is kind of a rookie error, really. But what ended up happening was the calling service would call the calendaring service and write something to the calendar, which would succeed or fail. And if it succeeded, the calling service would then do something else. And if that something else failed, we had to catch it explicitly and then call the calendaring service to say, please undo the thing that we've just done to you. And we couldn't just roll back a transaction to achieve that. So that's an unnecessary hoop that we forced ourselves to jump through. And of course there's also the race condition: while the calendar contains something that we later remove, something else can look at the calendar and see that it's full when it should actually be free. So the decentralised aspect means you have to think about these things. And you have to be aware that you are introducing complexity. So a collection of microservices is fundamentally more complex than a monolith. There are more moving parts, and those moving parts are connected by a network, which is inherently less reliable than in-memory calls. So in a complex system, failures rarely happen for exactly one reason. It's usually a cumulative effect of various soft failures adding up. So your network slows down in one area of your infrastructure, which causes a backlog of requests, which combined with a recent code change means you're writing more to disk, which means that you run out of disk space. 
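The undo dance just described can be sketched in plain Python. The service classes, method names and booking data here are hypothetical stand-ins, not code from the talk; they only show why a compensating action is needed when there is no cross-service transaction to roll back.

```python
# The calendaring story above, as a stdlib sketch: CalendarService and
# PaymentService are hypothetical stand-ins, not code from the talk.

class CalendarService:
    """Owns the calendar data; other services call into it."""
    def __init__(self):
        self.entries = {}
        self._next_id = 0

    def add(self, booking):
        self._next_id += 1
        self.entries[self._next_id] = booking
        return self._next_id

    def remove(self, entry_id):
        del self.entries[entry_id]

class PaymentService:
    """Stand-in for the 'something else' step that can fail."""
    def charge(self, booking):
        raise RuntimeError("card declined")

def book(calendar, payments, booking):
    entry_id = calendar.add(booking)        # write to the calendar first
    try:
        payments.charge(booking)            # the later step fails...
    except Exception:
        calendar.remove(entry_id)           # ...so compensate explicitly
        raise                               # no transaction to roll back

calendar = CalendarService()
try:
    book(calendar, PaymentService(), "weekend booking")
except RuntimeError:
    pass

# In the window between add() and remove(), another service could have seen
# the calendar as full: the race condition mentioned in the talk.
print(calendar.entries)   # {}
```

Note that the compensation leaves a window where other readers observe intermediate state, which is exactly the race condition the talk points out.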
And it's only when you get to the fourth or the fifth or the nth soft failure that you actually fall over. To mitigate this, you need monitoring and telemetry, and you need analysis of the data that that produces, so that when something goes wrong you can figure out what caused it, or preferably you figure it out before it goes wrong. So by now you may be asking yourself whether microservices are right for you, and if so, here are some questions to consider. Is your code base large enough that no one person understands it? Are your dev and release cycles slow because of chains of dependent changes that need to be made? Do your tests take forever to run? If so, you might be fighting a monolith. And if that's the case, are you ready to support a distributed system? Are you leveraging automation for your tests, deployment and machine management? Do you have sufficient monitoring and analysis in place to figure out what's going on inside it? So if your answers to these questions are yes and no respectively, then fear not. Maybe you can build a multi-lith. So this is a term that I came up with yesterday, and so I'm not sure whether it will stick, but it serves the purpose for this presentation. There is a sliding scale between tens or hundreds of microservices at one end and a single monolith at the other, and this is a continuous spectrum. So you may choose to augment your existing monolith with one or two satellite microservices: the multi-lith. And this way you get some of the benefits, like you could choose to use a different interpreter, or you could try out CD, without most of the cost. So assuming we're all emboldened and ready to embrace microservices or a multi-lith, let me talk about Nameko. So it's an open-source Apache 2 project, and it's a framework that is designed for writing microservices. 
We named it after the Japanese mushroom, which grows in clusters like this, and we thought it kind of looked like microservices, with many individuals making up the larger thing. So I asked a botanist friend of mine why they grew like that, and he shrugged and said, because there's not much room. True story. So there are a couple of important concepts that I need to introduce to explain some of the design principles in Nameko. There are entry points, which are how you interact with a service. So this is how you request something from it or otherwise get it to do something. Entry points are the interface or boundary of a service. And there are dependencies, which are how the service talks to something external that it may want to communicate with. So for example, a database or another service. So if we jump into some code: I put the code in the following examples in a repo on GitHub, so you can grab them later if you want. A Nameko service is written as a Python class. So it has a name, which is declared with the name attribute, and it has some methods that encapsulate the business logic of the service. And then the methods are exposed by entry points. So this HTTP decorator here will call the greet method if you make a GET request to that URL. So if I expand this example slightly, let's pretend for a minute that string formatting is really expensive and we want to cache the greetings rather than generate them every time. I've also switched out the entry point. So now it's a remote procedure call implementation as opposed to HTTP. So the first thing to notice is that the business logic of the method is unchanged by switching out the entry point. You know, we've added logic to deal with the cache, but it's entirely isolated from anything to do with HTTP or RPC. So in other words, it's a declarative change that has no impact on the procedural code in the method. And the second thing to point out is that the cache is added as a dependency. 
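As a rough stdlib-only sketch of the pattern described here (Nameko's real decorators are more involved, and the route string is just an illustration), an entry point can be modelled as a decorator that annotates a method without touching its body, so the transport can be swapped declaratively:

```python
# Stdlib sketch of the idea above: an entry point is a declarative
# decorator, so swapping HTTP for RPC leaves the method body untouched.
# This only shows the shape, not Nameko's actual implementation.

def entrypoint(kind, *meta):
    """Mark a method as exposed via some transport, without changing it."""
    def decorator(method):
        method.__entrypoint__ = (kind,) + meta
        return method
    return decorator

class GreetingService:
    name = "greeting_service"

    @entrypoint("http", "GET", "/greet/<name>")
    def greet(self, name):
        # Business logic: isolated from whichever transport invokes it.
        return "Hello, {}!".format(name)

svc = GreetingService()
print(svc.greet("EuroPython"))        # Hello, EuroPython!
print(svc.greet.__entrypoint__)       # ('http', 'GET', '/greet/<name>')
```

Changing the decorator line to, say, `@entrypoint("rpc")` would leave `greet` itself completely unchanged, which is the point being made in the talk.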
So this line here, cache = CacheClient(), is the declaration of the dependency. So dependencies are special in Nameko. You declare them on your service class like this, but the class-level attribute is different to the instance-level attribute that the method sees when it executes. And that's because the dependency provider, which is our declaration, injects the instance-level attribute at runtime. So if we hacked our method here to print these two attributes when it runs, we'll see that they're different. So the first one here, the top one, CacheClient, is our dependency provider, and the second one is actually an instance of a memcache client object, and that's what the dependency provider injected. So using dependency injection like this means that only the relevant interface gets exposed to the service method and the service developer. All the plumbing of managing a connection pool or handling reconnections is nicely hidden away inside the dependency provider. So this emphasis on entry points and dependencies also makes Nameko very extensible. All entry points and dependency providers are implemented as extensions to Nameko, even the ones that we ship with the library, which we include so that it's useful out of the box. But the intention is that you're free to, and encouraged to, build your own, or maybe through the wonders of open source somebody will have already built it for you. So this is the list of built-in extensions. So the RPC decorator that we saw earlier is an AMQP-based RPC implementation that gives you a request-response type call over a message bus. There's also a publish-subscribe implementation that gives you asynchronous messaging over AMQP, and there's a timer for cron-like things, and there's experimental WebSocket support. So I think it's worth explaining why we have this AMQP stuff in here. So HTTP is a natural starting place for microservices. There are a lot of great lightweight web frameworks out there. 
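The class-level versus instance-level trick just described can be illustrated with a plain Python descriptor. `FakeCacheClient` and the injection logic are simplifications invented for this sketch, not Nameko's actual dependency-provider machinery, but they show how the declaration and the injected object can be different things:

```python
# The class-level vs instance-level attribute trick, shown with a plain
# Python descriptor. FakeCacheClient and this injection logic are
# hypothetical simplifications, not Nameko's real implementation.

class FakeCacheClient:
    """Hypothetical stand-in for a memcache client."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

class CacheClient:
    """Dependency provider: the declaration lives on the class, and a
    client object is injected when accessed on an instance."""
    def __get__(self, instance, owner):
        if instance is None:
            return self                       # class access: the provider
        if "_injected" not in instance.__dict__:
            instance._injected = FakeCacheClient()
        return instance._injected             # instance access: the client

class GreetingService:
    name = "greeting_service"
    cache = CacheClient()                     # the declaration from the talk

print(type(GreetingService.cache).__name__)    # CacheClient
print(type(GreetingService().cache).__name__)  # FakeCacheClient
```

Printing the attribute at class level and at instance level gives two different objects, which is exactly the behaviour the talk demonstrates.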
There's great tooling around API exploration and caching, and HTTP is ubiquitous. And you're probably going to need HTTP on the outside of your services so that clients can interact with them. But for service-to-service interaction inside your cluster of microservices, where you control both sides, you probably want something other than HTTP. In particular, pub-sub is a killer app for microservices. There are all kinds of patterns for distributed systems that rely on asynchronous messaging with fan-out capabilities and so on. AMQP is really great for that. So that's why we include it out of the box. But you don't have to use it. There are also some really great test helpers in Nameko. So we've already seen how injecting dependencies keeps the service interface clean and simple. But it also makes it really easy to pluck those dependencies out during testing. So in this snippet here, we're using a helper called the worker factory, which is really useful when unit testing services. So you pass it your service class, and it gives you back an instance of that service, but with its dependencies replaced by mock objects. So you don't need a real memcache server, and you can exercise your methods by calling them and then verifying that the mocks get called appropriately. The worker factory also has another mode of operation where you can instead provide an alternative dependency. So in this case here, we're providing an alternative dependency, and we're using the mockcache library, which for this test has a much nicer interface. We don't need to set up the return value or anything like that. So there are other helpers in Nameko that do this kind of thing for integration testing, letting you run services with mocked-out dependencies or disabled entry points, so that you can limit the scope of the service interaction to the things that you actually want to test. 
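A stripped-down imitation of what the worker factory does can be written with `unittest.mock`. Nameko ships a real helper for this; the `DependencyProvider` marker class, the service and this `worker_factory` function are all simplified stand-ins that only mimic the behaviour described in the talk:

```python
# A sketch of the worker-factory idea using unittest.mock: build a service
# instance whose declared dependencies are replaced by mocks. This is a
# hypothetical imitation, not Nameko's real testing helper.

from unittest.mock import MagicMock

class DependencyProvider:
    """Marker base class for dependency declarations."""

class CacheClient(DependencyProvider):
    pass

class GreetingService:
    name = "greeting_service"
    cache = CacheClient()

    def greet(self, name):
        greeting = self.cache.get(name)
        if greeting is None:
            greeting = "Hello, {}!".format(name)
            self.cache.set(name, greeting)
        return greeting

def worker_factory(service_cls, **overrides):
    """Instantiate the service with each declared dependency replaced by a
    MagicMock, or by an alternative dependency passed as an override."""
    service = service_cls.__new__(service_cls)
    for attr, value in vars(service_cls).items():
        if isinstance(value, DependencyProvider):
            setattr(service, attr, overrides.get(attr, MagicMock()))
    return service

service = worker_factory(GreetingService)
service.cache.get.return_value = None      # simulate a cache miss
assert service.greet("EuroPython") == "Hello, EuroPython!"
service.cache.set.assert_called_once_with("EuroPython", "Hello, EuroPython!")
```

No memcache server is needed: the test exercises the method and then verifies the mock was called appropriately, which is the workflow the talk describes.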
So to summarise: in the microservices architecture, you split your application into services which run as their own processes. And this is a way to achieve maintainability at scale, so that you can build cathedrals of software. And it comes with a host of other benefits, too, like freedom of technology, decoupled release cycles, even team structure, if you want, for each component part. It's a grown-up architecture. You're building a complex distributed system, which means you need to automate your DevOps, you need to monitor it, and you need to analyse the results of that monitoring. And you need overall to be aware that you are building a distributed system, with all of those distributed trade-offs. But you can also adopt incrementally, by adding one or two satellite microservices to your existing stack. And if you want to go on this microservices adventure, there's an open-source library that can help you with it. It's made for writing services and encourages you to write clean, highly testable code. There are several built-in extensions, so it's useful out of the box, but it's designed to be extended to your use case. So if you want to know more, read the docs, fork the repo. And with that, thank you very much. Thanks for the talk. We finished a bit early, so there's lots of time for questions. My question is: there seems to be an implication that you migrate from a monolithic to a more microservice kind of architecture. But is it a good idea to actually start doing microservices from the beginning? Is it a sound idea when starting something small? That's a bold move, I would say. So in that paper that Martin Fowler published, he talks all about microservices and then right at the end says, but you probably shouldn't start with microservices. I think it probably depends on your prior experience, what your roadmap is, whether you're starting from a blank slate or not. It's kind of a trade-off. Thanks for the good presentation. 
Is there a big enough open-source project that you can look at as a real-world example? Yeah, that's a good question. So Nameko is heavily in use at One Fine Stay, which is closed source. And there are a number of other smaller London startups that are starting to use it. I don't think there are any public open-source applications that are using it yet. Hi. It seems to me that Lymph, which is going to be presented this afternoon, is very, very much similar. Do you, I mean, maybe it's a crazy idea, but why not try to take the best ideas of both and build something similar? So I'm excited to talk to the guys from Lymph later. So we... Oh, hi. Hi. We've had some email exchanges about stuff, and EuroPython is our opportunity to get together and talk about sharing some ideas. So I have a secondary question, which is more technical. I looked at the API and the documentation this morning, and it seemed pretty nice, but there's one thing that's missing that the simple XML-RPC server in Python provides: the possibility to do introspection on the methods. You want to know what arguments are expected by a certain method; you want to have access to the docstring of the method. And I couldn't find in the code or in the documentation if there was a way to do this with Nameko. Well, the entry point decorators don't mutate the service methods. So you should be able to take your service class, look at it as a regular class, and look at the docstrings of the methods that you've implemented in it. Yeah, you import your service class and inspect that. But you don't want to import the service class on the client side, because sometimes the service code will depend on many things, like, I don't know, database interfaces and whatnot. So on the client, you really do not want the service code. So you want to be introspecting on the client side, like at runtime? 
Yeah, typically what the SimpleXMLRPCServer allows you to do with the service's system.listMethods, for instance, or whatnot. I mean, this is really useful to develop a client independently of the service. So one thing that we have bounced around for a while is the possibility of a client library, where from your service you can export something that the client can then interact with. And otherwise you're talking about shipping schemas over the wire, which is also a possibility; I think that's how XML-RPC does it. I would put that in the category of fun extensions that you can add. Nameko is actually quite a young library, certainly as an open-source project that's being promoted. So there's a whole bunch of possibilities like this that I hope we get to. I'd like to ask if there are any ongoing efforts to make, say, a Kafka interface as a message bus? So, again, this is part of the extensibility. So at One Fine Stay we used AMQP very heavily, and so the built-in things we built because we needed them, and then we shipped them with the library because we think they're useful. But, yeah, using an alternative message bus, using ZeroMQ or any alternative communication mechanism, falls squarely into the category of: this is an extension, let's build an entry point for it. And I hope that that's what happens. Okay, so basically, are you building right now anything beyond what's been on the slides? Yes. Yeah? Okay. It's not Kafka. It's not Kafka. Sorry. Hi, thank you for the talk. Just a basic question. So when you go for a microservices architecture, you need to be sure in advance that two services will never need to share memory in the future. Otherwise, it can be quite a large amount of work to merge them together, isn't it? Sure, yeah. Thank you. Hi, thanks for the talk. It looks like one of the hardest things to do is transactions. Do you have any suggestions on how to approach the problem? Not really. 
Transactions are a wonderful thing that we have got used to, and you don't lose the ability to have atomic transactions in microservices, but it's within the scope of one individual service. So that's why dividing your application up is difficult: you need to make these decisions about where to put the boundaries, so that you can have atomic transactions in the places where it matters and fall back to eventual consistency for other things. How you doing, Matt? Great talk. I'm sure there's a few of us here in the room who are working on monoliths. Do you have any suggestions on how to approach, say, refactoring one into microservices, and what to watch out for? Go for the multi-lith. So this is exactly what happened at One Fine Stay. We built this Django app, which is still our front-end, and it accumulated all this logic about bookings and payments and financial stuff, and it just became unwieldy. So the journey started with: let's take a piece that doesn't yet exist, that we know is going to be really hard to add into this gigantic code base, and let's just build that as a separate thing. So a good candidate for the first microservice is a new thing that you need to do. Make that separate, and then maybe you can identify another segment within your app that's reasonably decoupled already, and then you can move that out. You can't really answer that question in anything other than the abstract. Hello. Are there any situations where you wouldn't recommend using microservices? Any situations where you wouldn't recommend it? Wouldn't recommend. Because you cannot use it for everything, can you? So you probably don't need it if your developer team is two or three people strong. You definitely shouldn't do it if you're not prepared to support the distributed system aspect of it. If you don't have automation in place for your DevOps, it's kind of a big commitment. 
So you only really want to start going down this road when you know that you've got the relevant things in place, because otherwise you can come unstuck pretty quickly. Hello. Have you tried using Nameko on a platform as a service? That's one thing. And how do you approach configuration? I see that you declare a service, but where is the service's configuration taken from? So on platforms as a service: I think there's too much recursion there for me to get my head around what we're offering. But let me come back to that. On the config stuff: so I didn't show it, but you can provide, on the command line, a YAML file that contains your config, and then dependencies have access to that config. I excluded it for simplicity, but you would probably specify the config key to look up, say, the memcache location when you declare your dependency, and then it would know to go and read that element of the config file. So if you look at the code on GitHub, I've done that. Do you maybe have an environment variable parser for 12-factor apps that are configured through environment variables? No, not yet, but... Last question. How do you run your services, like with Gunicorn or with...? So there's a command-line interface in Nameko, and then we just run that with Supervisor. Hi, so I've already been using Nameko, and I was wondering if there's any interest in doing a sprint this weekend on Nameko. Yeah, totally. So I fly out on Saturday night, but all of Saturday works. Okay, I'm up for that as well. Cool. I always wonder how micro the microservices should be. It's a bit unfair to compare only against a monolith, and as you said before, it's just good architecture if you have components. 
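The config lookup described in that answer might look roughly like this. `CONFIG`, the `MEMCACHE_URI` key and the `setup()` method are all hypothetical stand-ins for the parsed YAML file and Nameko's real config handling:

```python
# Rough sketch of config lookup by key, as described above. CONFIG stands
# in for the parsed YAML file passed on the command line; the key name and
# the setup() method are hypothetical, not Nameko's real API.

CONFIG = {"MEMCACHE_URI": "127.0.0.1:11211"}   # e.g. loaded from config.yaml

class CacheClient:
    """Dependency declaration that names the config key to read."""
    def __init__(self, config_key="MEMCACHE_URI"):
        self.config_key = config_key

    def setup(self, config):
        # The framework would call something like this at startup,
        # handing each dependency the parsed config.
        self.uri = config[self.config_key]
        return self

provider = CacheClient().setup(CONFIG)
print(provider.uri)   # 127.0.0.1:11211
```

The point is that the service method never sees the config plumbing; only the dependency declaration knows which key to read.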
So you said you should do microservices if a single person cannot keep the whole code in his mind, but, well, you could split it up into medium-sized services that still fit into a human brain, as opposed to having basically every route of your Django configuration as its own service. I don't know where the limit is. So that doesn't seem like a good idea. The term microservices is actually kind of unhelpful, because it implies a size, which I don't think really applies. At One Fine Stay, we perhaps didn't make all of the decisions correctly about how to divide our application up, but we had some services that were minuscule, just a couple of methods, and we had others that could only just fit in somebody's brain, thousands of lines. So it's an unhelpful classification. It's very unlikely that you'd end up with lots of services that are all the same size. So, yeah, picking the granularity, deciding where to draw the lines, that's the hard bit, really. You've talked about built-in extensions. Can I extend and write my own extension, like if I want to support another protocol? Yeah. So what was your example right at the end? What was the suggestion right at the end? So yes, you can. You absolutely can. Entry points are harder to write because there's a bit more machinery, but dependencies are pretty easy. So if you want to talk to a different type of database, for example, that's easy. If you want to put a message in SQS, that's a relatively easy thing to do. Does anybody have any more questions? There's one. Yeah, you mentioned some of the key points, or the hard points, when you go into microservices and DevOps. One of the things you said you have to have in place is really good monitoring. Have you considered adding some support in Nameko for some sort of approach to monitoring? Right. So the thing we used at One Fine Stay, which worked extremely well, was LogStash and Elasticsearch. 
So every time an entry point fired, we would dispatch a message, stick it on a queue that would be ingested by LogStash and put into Elasticsearch, and then we used Kibana to explore the data. So you could see which methods got called, and then through the call stack you could see which methods called them, what the arguments were, how long they took, and what size the payload was, and you could build all sorts of cool graphs so that you can see spikes and explore them. And that worked really well. So that didn't get open-sourced before I changed jobs. So I'm currently in the process of re-implementing it, and that will become one of the first open-source extensions. I look forward to it. Thank you. All right. If there are no more questions, please thank Matt.
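The per-entry-point telemetry described in this final answer can be sketched as a decorator that records a structured event for each call. The `EVENTS` list and the field names are hypothetical stand-ins for the real queue feeding LogStash and Elasticsearch:

```python
# Sketch of per-call telemetry as described above: wrap each entry point,
# record method name, duration and payload size, and (in the real setup)
# ship the event via a queue to LogStash. EVENTS and the field names here
# are hypothetical stand-ins.

import json
import time

EVENTS = []   # stand-in for the message queue feeding LogStash

def instrumented(method):
    """Record a structured event for every call to the wrapped method."""
    def wrapper(*args, **kwargs):
        start = time.time()
        result = method(*args, **kwargs)
        EVENTS.append(json.dumps({
            "method": method.__name__,
            "duration_s": round(time.time() - start, 6),
            "payload_bytes": len(repr(result)),
        }))
        return result
    return wrapper

@instrumented
def greet(name):
    return "Hello, {}!".format(name)

greet("EuroPython")
print(json.loads(EVENTS[0])["method"])   # greet
```

From events like these you can reconstruct which methods were called, how long they took and how big the payloads were, which is the data the Kibana dashboards in the talk were built on.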