Okay, so let's start. Welcome everybody, and thank you for coming. Today we will talk about migrating an old legacy application to a more modern architecture, like service-oriented architecture.

First, a few words about me. I'm Mathieu Gilot, backend developer at Optima, a small startup in Brussels, growing fast. I'm an open-source enthusiast and I enjoy my work; I like my job. I think we are very lucky to be software engineers, because it's a never-ending learning process. When I'm not working, I like to travel, and I like tractors, because tractors are cool.

So what will we talk about today? This presentation is not about parallel migration, that is, building another application alongside your existing one and switching over once it reaches a certain point. It's about refactoring a live application. The main purpose is to make your application easier to work with: more maintainable, more testable, easier to understand when you onboard juniors or new people, and able to scale in a better way. That last point is the most important one today, I think.

So let's have a look at the plan. First, some words about the concepts of service-oriented architecture. Then we will see tools that will help you in the migration, like different refactoring workflows, and best practices that will help you build scalable services for your brand-new application. And then we will go through some quick examples of migration; I'm in the middle of a migration right now, so I have plenty of examples.

Okay, so first: service-oriented architecture. Has everyone here heard the term? Okay, good. The concepts of SOA: business value has priority over technical strategy. Strategic goals have priority over project-specific benefits. Interoperability is better than custom integration. Shared services are better than specific development. This one is very important: flexibility has priority over optimization. That is a very hard point to get a non-technical manager to accept, because if you say "okay, I don't care about optimization anymore", they will just say no. But if you have small, single-responsibility, agnostic services each doing one task, you will not need to over-optimize your application anymore; you just let each service scale at its own rhythm. And finally, evolutionary refinement is better than initial perfection.

If you take all these concepts, the same idea is behind them: work for the long term. If you want to maintain your code, you have to work for the long term, so respect these concepts.

And we also have principles. Agnostic services: single responsibility. Abstraction: services are black boxes for every other part of your application. Statelessness: services should be stateless. Composability is very important: you can compose a service from a set of other agnostic services, mainly by using the dependency injection pattern. Reusability: services should be reused across every part of your code. And encapsulation. Encapsulation is a very practical thing for a migration.
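Roughly, encapsulation during a migration looks like this. A minimal sketch, with hypothetical names (LegacyUserManager stands for the kind of old global singleton described next):

```php
<?php
namespace App\Service;

// \LegacyUserManager is a hypothetical old class: no namespace, a
// singleton fetched via getInstance() from hundreds of call sites.
// This thin wrapper is the "dummy service" registered in the DI
// container, so new code depends on UserService, not on the singleton.
class UserService
{
    public function loadUser(int $id): array
    {
        // For now we only delegate. When the legacy code is replaced,
        // we change this one class instead of every call site.
        return \LegacyUserManager::getInstance()->loadUser($id);
    }
}
```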
It's okay: you have legacy code and it's impossible to turn it into a clean service, but you still want to be able to interact with it through your dependency injection container, for instance. So you just create a dummy service and make it a wrapper around your old code.

I have an example of that. This class is very old legacy code: there is no namespace, nothing modern. It's a singleton, so it can be called thousands of times everywhere in your code base. If you want to move the usage of this thing to something else, you're doomed, because you would have to change it everywhere. So one good practice to start a migration is to create a service that just wraps the whole thing. It becomes much easier to replace it over time, smoothly; you don't have to replace all the code in one go.

Then, the concepts behind services. Services are agnostic pieces of code. One service should do only what it has to do, and carry only the context it needs to perform its action. Nothing more, nothing less. All dependencies have to be injected into your service, which makes it very easy to test, because it's just input and output; you can do functional testing on a service very easily.

And composability: play Lego with your application. Here, for instance, I have a database service, maybe a database abstraction layer, and a caching service. And into this one, the authentication service, I inject the database, caching, and logger services, and just play with all the small pieces of code I have in my framework. That's how small agnostic services, dedicated to specific tasks, are implemented.

Now, to implement these new small services, we'll have a look at some workflows and best practices you can put in place in your daily work. We'll talk about different types of refactoring. Refactoring: I think we all do it every day. One pattern I enjoy a lot is TDD, so we'll see how to do it. And we will distinguish between different kinds of refactoring tasks, called opportunistic refactoring and large-scale refactoring.

So what is refactoring? Martin Fowler says it's a change made to the internal structure of your software that makes it easier to maintain and to understand, but does not change its behavior. And that's what is very important in refactoring: don't do two things at a time. Don't develop a feature and do a refactoring at the same time; you will get into big trouble if you do. That's why Martin Fowler, again, describes it as two hats. When you work on a refactoring task, you have, say, a helmet on your head. When you work on feature implementation, you wear a sombrero, for instance. And the principle is: never wear two hats at the same time. If you work on a refactoring task, work only on the refactoring task, and at the end you switch hats and go back to implementing features.

So here is how to do it with TDD, for instance. TDD is a very nice way to code and to produce something that stays maintainable and free of regressions. First, you write tests. You write tests and you define the needs and targets of your new service using assertions, failing assertions. All your tests will be red, but you will have foreseen every possible usage scenario of your new service. Then you make the tests pass, and at this stage you don't focus on the design of your code.
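The first step, sketched with PHPUnit. SlugGenerator is a hypothetical service that does not exist yet, so everything is red (the tests error until the class exists, then fail until the behavior is right):

```php
<?php
use PHPUnit\Framework\TestCase;

// Red phase: pin down the wanted behavior with failing assertions
// before writing any implementation. Foresee the edge cases now.
final class SlugGeneratorTest extends TestCase
{
    public function testBasicTitleIsSlugified(): void
    {
        $generator = new SlugGenerator();

        $this->assertSame('hello-world', $generator->slugify('Hello World'));
    }

    public function testEdgeCasesAreForeseen(): void
    {
        $generator = new SlugGenerator();

        $this->assertSame('cafe-creme', $generator->slugify('Café  crème!'));
        $this->assertSame('', $generator->slugify('   '));
    }
}
```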
You just get the work done, to the end. That's the development of the feature, so that's the sombrero. And after that, you can focus on the design of your code. Make it nice. Make it easy to read. Make it commented, with every best practice. The refactoring will be very easy to do, because all your tests are already green, so you will not break anything. You're covered, provided you foresaw all the possible scenarios while writing the assertions in the first step. So that's a very good practice.

Then we have what is called opportunistic refactoring. I don't like it, I don't do it, but it exists, and maybe it will be a very good tool for you. Because there is no silver bullet: we have a lot of different patterns and tools, every application is different from the next, so some tools may be very good for you and not for me. It's based on the Boy Scout rule, which says: always leave the code in a better state than you found it when you started working on it. It's cleaning up code as you work in it. You implement a new feature, and at the same time you do small refactoring tasks. That's why I don't like it; I prefer to split the two. But if you have a lot of self-discipline, you can do it.

Comprehension refactoring is exactly the same thing, but with a different trigger. You see a big class, or a method of hundreds of lines of code, and you spend, say, two hours just building the map in your head, understanding how it works, getting the full picture of the thing. If you had to do that mental work, put the result of it into the code, so the next person working on this thing doesn't have to spend two hours on the same understanding process. They will just read the comprehension you put into the code: comments, or the code split into small methods in a more logical way. The only refactoring task here is to move your comprehension into the code. It can be writing comments, splitting methods, injecting dependencies, any good practice you want.

So opportunistic refactoring covers these two kinds: refactoring while you work on implementing your feature. It's a good move if it's a very simple fix, and if it makes the feature you're working on easier to implement. But it's better done on a stable code base, and only if it requires less than some threshold that sounds good to you, as a percentage of the time to develop the feature. If you spend less than, say, 5% of your time on the small refactoring task, maybe it's worth doing. If it's half of the time, you probably have to split it out. And again: never wear two hats at the same time. You need a lot of self-discipline to do this; if you want to try, maybe buy some real hats and actually switch them.

Then we have preparatory refactoring. I prefer this one to opportunistic refactoring. You refactor the code base before adding new functionality. You say: okay, I have a new feature to implement in this part of the code, and I know it's really shitty. So you estimate how much time it will take to refactor just the parts that are useful for implementing this feature, plus the implementation on the refactored code. If that takes less than implementing the feature on the shitty code, yes, it's a good move.
If it takes a lot of time, maybe you have to split it: write compatibility tests (I will explain compatibility tests later), do the refactoring in a dedicated branch, put it in production, check that you don't have regressions, and then start the development of the new feature.

And then we have the last one, large-scale refactoring. You have to fix a large area of problematic code. It happens in every company, on every project. It's a good practice to do it regularly: twice a year, once a year, every three months, it depends on your workflow. And the more you work with a quality approach, by which I mean using litter-pickup refactoring, TDD, or planned refactoring in your daily workflow, the less often large-scale refactoring should be needed. If you keep finding just as much code to refactor in each planned refactoring, it means you really have to put litter-pickup or planned refactoring into your everyday workflow: you have to do more continuous refactoring.

Large-scale refactoring also covers replacing a big part of your legacy framework. Okay, I want to replace this old module with a vendor library; I want to move from one API to another one. That's a long-term task, and for that I recommend using branch by abstraction to reduce the risk. Do people know about branch by abstraction? No? Okay, I'll explain quickly.

You start by implementing an abstraction layer on top of the functionality. Usually you have one piece of functionality and many clients, which are modules of your code using this functionality. So you implement an abstraction layer on top of the old module, then switch the code of all the clients, in all the other parts of your application, over to the abstraction layer. Then you develop the new system and you put it behind the same abstraction layer, using compatibility tests to check that it behaves the same way. Maybe you will need to put an adapter on top of your abstraction layer, but it will work in the end. And then you don't have to change all the wiring: you just remove the old system, and the clients keep using your abstraction layer. You can even leave the abstraction in place, because maybe one day you will have to move again. There is never too much abstraction.

Next, best practices to implement your new agnostic services. We'll have a word about decoupling, SOLID, test automation, and monitoring: good patterns, good practices, good tools.

Decoupling: try to apply the Law of Demeter. The Law of Demeter says: don't talk to strangers, talk only to your close friends. I will show an example. SOLID principles: you know SOLID, five principles that are very useful for breaking coupling in code: single responsibility, open/closed, Liskov substitution, interface segregation, and dependency inversion. Google them, have a look, use them. And use dependency injection, so you can easily mock your services and write unit tests. You can also use events to decouple, like an event dispatcher. I don't like it very much, but it's a good tool. Or even an event bus, which centralizes asynchronous tasks somewhere; that's a very good thing, because you can plug a lot of things onto your event bus, like monitoring and statistics. If you want to move to a microservice environment, an event bus is a very good tool.
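Here is the Law of Demeter example I promised, as a small self-contained sketch with toy Car, Wheel, and Tire classes (all hypothetical):

```php
<?php
// Toy classes, just enough to show the coupling.
class Tire {}

class Wheel
{
    private ?Tire $tire = null;

    public function removeTire(): void { $this->tire = null; }
    public function setTire(Tire $tire): void { $this->tire = $tire; }
}

class Car
{
    public function __construct(private Wheel $wheel) {}

    public function getWheel(): Wheel { return $this->wheel; }
}

// Breaking the Law of Demeter: we take a Car as parameter but reach
// through it to call methods on its Wheel, a friend of a friend.
function changeTire(Car $car, Tire $newTire): void
{
    $car->getWheel()->removeTire();
    $car->getWheel()->setTire($newTire);
}

// Respecting it: take the Wheel itself and talk only one level deep.
function changeTireOnWheel(Wheel $wheel, Tire $newTire): void
{
    $wheel->removeTire();
    $wheel->setTire($newTire);
}
```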
So here, you break the law, because you talk to a stranger, not only to your direct friends. You have a changeTire method that takes a car and a new tire, and you do car, get wheel, remove tire. This breaks the Law of Demeter, because you take an object as a parameter and call a method on a dependency of that object. You could talk directly to that object instead: pass a wheel here, not a car, so you only ever talk one level deep. It's not mandatory, but it helps reduce coupling.

And that's also why I don't like events so much. If you use events, you have to be very strict about how you dispatch them. Because if you're not, you will lose visibility, and it will end in a state where you start to be in trouble, for performance and for comprehension. You can obfuscate your code very easily with events if you don't have a very strict way of dispatching.

Now, automated testing. All new agnostic services must be covered by tests; for me, that's mandatory. Unit testing just tests an algorithm. It's context-agnostic, so it should not interact with any context: no network, no database, no dependencies. That's why you have to design your services with dependency injection: you can mock the dependencies and write unit tests. That's a good practice. A really good unit test is very hard to write, and you have to foresee all possible scenarios.

Then you have integration and functional testing. With unit testing, you have a lot of small pieces of code and you just test that each algorithm works. With integration testing, you test that all these modules interact well together. And functional testing is the same idea, but from an end-to-end view. For instance, you want to test a job. You're not context-agnostic here: you can interact with the database, because it's a functional test. So you can take a snapshot of your database, hydrate it with fixtures, run your job, and make some queries at the end to check: okay, this action was done, that was done. So you can interact with context in integration and functional testing.

So don't do only unit tests, and don't do only integration tests: you need both. And there's a very nice illustration of unit tests passing with no integration tests: each part opens and closes fine on its own, so the basic algorithm is okay, but the interaction between the modules doesn't work.

We also have end-user testing, also called acceptance testing. You access the application as an end user would, and you test what's displayed. Typically, in PHP, you have tools like Selenium to do that. Very useful tests. And I recommend using a specification description language; you may have heard of BDD, behavior-driven development, with Behat or Cucumber or something similar. Then it's not the developer who writes the test scenarios, it's the tester, the QA team. And for me, that's a good practice.

And if we speak about tests, we have to speak about code coverage. Never make code coverage a target; it will only make your code base worse. Never do it, because it has no correlation with code quality. I recommend you go to Martin Fowler's blog and read the article about assertion-free testing, which is the true story of a company delivering code with 100% coverage, but with no assertions. A very good idea, right? So don't make coverage a target, but it's a good tool.
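To see why coverage alone proves nothing, here is a deliberately bad sketch of assertion-free testing (InvoiceCalculator is hypothetical):

```php
<?php
use PHPUnit\Framework\TestCase;

// This "test" executes InvoiceCalculator, so its lines count as
// covered, but it checks nothing about the result. A completely
// broken calculator still gets through: 100% green, 0% verified.
// (PHPUnit will at most flag it as "risky" for having no assertions.)
final class InvoiceCalculatorTest extends TestCase
{
    public function testTotalsAreCovered(): void
    {
        $calculator = new InvoiceCalculator();

        $calculator->total([100, 200, -50]); // result ignored
    }
}
```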
Again: it must not be a target, but it's still useful. Focus on the risky code. You need 100% coverage on risky code, and for me, risky code is code that can delete user data, or code that can end in an uncaught exception (that second one is really a design problem). Or a tricky algorithm you don't understand very well, maybe because it's old, maybe because it was badly implemented, I don't know. That's the risky code; put your tests there first. And about the 10% figure: if you cover 10% of your code, the risky part, with 100% of the possible scenarios, your tests will be way more efficient than 100% coverage with only 10% of the scenarios. So never focus on coverage for its own sake.

And monitoring, which your new services should also have. Profile each method and monitor response times all the time, to identify the bottleneck parts of your application. What's very important for me in monitoring is to make a distinction between IO and processing. Don't monitor IO and processing in the same metric; they are not the same thing. And IO, for me, is anything that blocks the PHP thread. The PHP thread, because PHP is single-threaded. A database call, for instance, is IO for PHP: you go out of the PHP virtual machine, you wait for something, and then the script goes back to its work.

You can use a lot of tools to monitor. For profiling, XHProf is very efficient, for memory checks too. And Pinba. Who knows Pinba here? Nobody, I knew it. Pinba is an amazing monitoring tool. It's really crazy: you can monitor your production in real time without losing performance. So have a look at it. I wrote an article about it, and there's a link at the end of the talk.

So now let's look at the tools for the migration itself. That was best practices for your new services; now, the tools to move from your old system to the new one. Compatibility tests are a tool I like very much. You also have to gather some indicators, just to see how fast you're going, and it will make your manager happy. And you have to use monitoring to check that you don't have performance regressions. You will have some performance regressions during a migration; that's normal. You have two systems cohabiting, and they don't work the same way. Typically, legacy code mainly works with IDs, while the new design works with objects. So you have two things working differently in the same context, and you will have a bit of a performance hit during the migration. Monitor it, and define your own thresholds: okay, for me 10% of loss is acceptable; no, maybe not. It depends how you want to work.

So what's a compatibility test? A compatibility test just ensures you don't break compatibility between the old system and the new one. It's a short-lived test: you call the old system, you call the new system, you assert that they return the same thing for the same context, and when you kill the old system, you remove the compatibility test. It's just for the switch.

I have an example here. That's the old legacy code, the same service as before; it uses IDs. And here I have the new thing, which uses objects. This one is a very easy compatibility test to write, because the two things do the same job: load an object. So I can easily assert that the objects are equal. Sometimes it's harder, for instance if one side only gives you an ID, and you'll have to write helpers in your tests. So compatibility tests can be hard to write. But you don't have to make them nice.
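Something like this is enough: a short-lived sketch, with hypothetical names, that dies with the migration:

```php
<?php
use PHPUnit\Framework\TestCase;

// Hypothetical: LegacyArticleManager works with IDs and arrays, the
// new ArticleRepository works with objects. Assert both paths return
// the same article for the same input; delete this test once the old
// system is gone.
final class ArticleCompatibilityTest extends TestCase
{
    public function testOldAndNewSystemsAgree(): void
    {
        $legacy  = \LegacyArticleManager::getInstance()->loadArticle(42);
        $article = $this->newRepository()->find(42);

        $this->assertSame($legacy['id'],    $article->getId());
        $this->assertSame($legacy['title'], $article->getTitle());
    }

    private function newRepository(): ArticleRepository
    {
        // Dirty wiring is fine here: this is throwaway code by design.
        return new ArticleRepository(new \PDO('sqlite::memory:'));
    }
}
```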
For once, you're allowed to write dirty code, so do it, enjoy it. It's not production code, so that's okay.

Then the CRAP index is an indicator you can use. CRAP stands for Change Risk Anti-Patterns; it's a change risk analysis and prediction index, based on coverage and complexity. The usual formula is CRAP(m) = comp(m)^2 * (1 - cov(m))^3 + comp(m), so it grows quickly with cyclomatic complexity and shrinks as coverage rises. If a method has a high CRAP index, it means it's a big risk for your project: low coverage and high complexity. And in PHP, if you run the coverage tooling, PHPUnit's coverage reports for instance, you get the CRAP index of every method, every class, everything. You can just migrate and watch how fast it decreases; that's a pretty good metric to follow the migration. And it makes it easier to identify the parts of your code with the highest risk, so maybe the parts you have to migrate first.

Progression metrics: keep them very simple. Honestly, they're useless; they're just there to make managers happy, non-technical managers. You report a percentage of migration done, you give them dummy data, and maybe we don't really care. But a migration is a company decision: everyone has to agree on the process, so you have to sell the idea of the migration. A non-technical manager will not see the benefit of the migration, because you, as developers, know that performance is a feature, and he doesn't. So sell your idea. A migration is not easy to get accepted by non-technical managers, so insist on it, and maybe give them some simple numbers.

And monitoring is very important to track performance regressions. I usually use anomaly detection and alerting to spot regressions. When I replace something, I have alerts, in Slack or in mail or HipChat, whatever you use, whatever you need, that say: oh, this is 10% slower than before the release, and that's not normal.

Now let's put all this together and try to kill the monolith. How do you spot bad code that is very easy to migrate? Usually, it's when the compatibility test is very easy to write, so an old system and a new system doing almost the same thing. And when it's not used in too many places, because otherwise it will be very hard to deploy, with conflicts, if you have to change code everywhere; we will see something to avoid that. When compatibility tests are fast to implement, it will be an easy task. You write your new service with all dependencies injected, so you can unit test it and so on. You write the compatibility test. You replace the users of the old code with your brand-new service. And for each method you migrate, your global complexity decreases, because you have single responsibility now, and the coverage increases, so the CRAP index naturally goes down after every migrated method.

Then we have tasks that are more complicated. For instance, you want to migrate something, but it's called statically everywhere. It's coupled with every part of your code: you have, I don't know, hundreds of files calling a static method that you want to remove. You would have to change all of them. So what I do here, to make deployment easier: I implement the agnostic service, I write the compatibility test, and then I inject the new service into the old method I want to kill. I have an example here. This is my method to kill, in the legacy code.
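Sketched with hypothetical names, the trick looks like this. First, the legacy method delegates to the new service:

```php
<?php
// Inside the hypothetical legacy class: the method to kill now only
// delegates to the new agnostic service. The compatibility test is
// what guarantees the behavior stays the same.
class LegacyArticleManager
{
    private ArticleRepository $articleRepository;   // the new service
    private \Psr\Log\LoggerInterface $legacyLogger; // dedicated Monolog channel

    public function loadArticle($id): array
    {
        // Log every call, so the remaining legacy call sites are easy
        // to spot and replace, release after release.
        $this->legacyLogger->info('legacy loadArticle() called', ['id' => $id]);

        return $this->articleRepository->find((int) $id)->toArray();
    }
}
```

And the opposite direction, which comes up in a moment: the new repository temporarily also receives the old do-everything manager, for the logic that hasn't been migrated yet:

```php
<?php
// Hypothetical FooManager is the big legacy file that did everything.
// The new single-responsibility repository keeps a handle on it only
// for the not-yet-migrated logic; that dependency is the next target.
class ArticleRepository
{
    public function __construct(
        private \PDO $connection,
        private \FooManager $fooManager, // temporary
    ) {
    }

    public function find(int $id): Article
    {
        $stmt = $this->connection->prepare('SELECT * FROM article WHERE id = ?');
        $stmt->execute([$id]);

        // Permission filtering is still legacy logic; delegate for now.
        // Once it's migrated, this call and $fooManager both go away.
        return $this->fooManager->applyPermissions(Article::fromRow($stmt->fetch()));
    }
}
```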
And inside it, I just use the new service. As I have a compatibility test, and I have foreseen all possible scenarios, I know I will not break anything, so I can just replace things smoothly. I ship a release with this in the code, and maybe nothing uses it anywhere yet. Then, in other releases, I just replace the calls to the old thing. A client, some part of your code base, stops calling the old method and calls the new service directly. And when you're done, you can just remove the old method. I also put in some logs: I'm able to track what is still using the legacy code, and I use a dedicated Monolog channel for that, so I can easily detect: okay, this part of the code still uses the legacy thing, and that's not normal for me.

And we can do it exactly the opposite way: you can inject your old manager into your new service. Why? Maybe the method you're trying to kill has dependencies on other methods of the old class. So what I do here: this is my new service, a repository. It does one thing, interaction with the database, so single responsibility, we're okay. And I inject the old manager, which was a big legacy file that did almost everything. If some methods I have moved still depend on other methods of the old manager, I don't have to replace everything in one deployment. I can remove things smoothly, and those remaining dependencies become my next target. When there are no dependencies left, I remove the old manager from my new service, and I'm good to go.

So keep in mind: small steps. Small steps, because you will have a lot of releases to do during the migration, and you will have a lot of conflicts. So make each step small, but every step should finish in a stable state. That's very important, and it totally makes sense. All your new services have to be tested. Your code coverage: don't make it a target, it will increase naturally. And start where it hurts. That's very important too. When you work on a legacy application, you know what the worst part is. Okay, it will be very difficult to do. Do it first, because it will make the team confident in the migration process. They will say: okay, this guy migrated this file, this service which was the worst thing we had, with dependencies everywhere, so we can migrate everything. So start where it hurts. Maybe do two or three easy tasks first, just to set up the process and have a good plan, and then start with a big one. After that, the migration is launched, and it will be clear in the mind of every developer on the team.

So far we have seen the patterns of service-oriented architecture applied inside the architecture of your application. But originally, SOA sits at an architectural level. The unit is not a service in your framework: it's still a single responsibility, but it's a micro-application, and you have a lot of micro-applications talking together in an ecosystem. Small web services: single responsibility, easy to test, so mostly end-user tests through an API. You try to keep a consistent communication protocol in your ecosystem; that's very important, and it's impossible to do completely, so try as much as you can. And you have to use API-first architecture for every one of your web services; that's pretty straightforward.

And you can start from within your framework. You now have some agnostic services in your framework, and you can reimplement the logic of one of these services in a small application dedicated to performing that one task.
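If that service were search, for example, the wrapper step described next could look roughly like this (endpoint, client wiring, and names are all assumptions):

```php
<?php
namespace App\Service;

// The agnostic SearchService keeps its public API, but its body
// becomes a thin wrapper around the standalone search
// micro-application, called over HTTP (PSR-17/PSR-18 interfaces).
class SearchService
{
    public function __construct(
        private \Psr\Http\Client\ClientInterface $httpClient,
        private \Psr\Http\Message\RequestFactoryInterface $requestFactory,
        private string $searchApiUrl, // e.g. 'http://search.internal/v1'
    ) {
    }

    /** @return array<int, array<string, mixed>> matching documents */
    public function search(string $query): array
    {
        $request = $this->requestFactory->createRequest(
            'GET',
            $this->searchApiUrl . '/search?q=' . rawurlencode($query)
        );

        // Callers don't know, and don't care, that search lives elsewhere now.
        $response = $this->httpClient->sendRequest($request);

        return json_decode((string) $response->getBody(), true);
    }
}
```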
And then your service just becomes a wrapper that calls your new micro-application. Maybe you won't need this everywhere, but I have examples to explain it. That's a very good way to kill your old framework: you push code out into agnostic services, and then out of the framework entirely, and we will see how to kill the rest.

So that's your monolith, a big monolithic application. You have backends; we don't care about the names, those are just tools. And you have a set of features implemented in your application. Why is it bad? You have coupling everywhere, and it scales at one rhythm, which is the rhythm of your application as a whole. In terms of scaling, that's not optimal at all.

As an intermediate state, you take some of the services out. For me, the best example here is search, because search is resource-consuming. Usually it's data consolidated from one storage into another, or from different storages into a central Elasticsearch, or whatever you want to use for your search. So you can very easily reimplement it in another application that your main application calls through an API. Your old search service becomes an agnostic service in your application, and then this service becomes a wrapper around your new micro-application, which is standalone and does only search. For search, you can do it in Go, you can use a multi-threaded language, use what you want, do it in PHP if you want. But search never scales at the same rhythm as the rest of the application. It depends on your application, but search usually has a big scaling need of its own.

SSO is also a very good example, because with SSO you authenticate users and share authentication between a lot of different units in your ecosystem. Your main application can use the authentication, but another application can use the same one, because it's now a standalone application that you just call through an API. And step by step, you move parts of the code out into agnostic services with single responsibilities. Maybe you will have some services, like jobs, that will have to share some storage with your main application; that's not a problem.

After that, when you're close to the end, you will have this kind of thing, with what we call system microservices. System microservices interact with the world: with backends, with databases, with external APIs. You gather data from the Facebook API, from a database, from a lot of different sources, and you write microservices on top. So these are dedicated small APIs. Then what remains of your framework sits here, and you can also build what we call experience APIs. Experience APIs are just there to reformat the output of your framework for specific clients: an Android or iOS application, a website, a brand-new API, a graph API, whatever. Experience APIs are, I think, a very useful thing.

What's very important here: try to keep the communication between your services coherent. Use a standard: REST, JSON-RPC, RAML for describing your APIs, or messaging over something like RabbitMQ. You can use what you want, but use something coherent across your ecosystem. I said before that full consistency is impossible, but go as far as you can. And be careful with communication; it's the same problem as with events. So usually, what I do is put an event bus here, which centralizes all communication and just dispatches.
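A minimal sketch of that centralized bus idea (all names are hypothetical, and the Redis transport is just one possible choice):

```php
<?php
namespace App\Bus;

// Services publish events to one central place instead of calling each
// other directly; monitoring and statistics can subscribe to the same
// stream.
interface EventBus
{
    /** @param array<string, mixed> $payload */
    public function publish(string $eventName, array $payload): void;
}

// One possible adapter: push events onto a queue (here Redis via the
// phpredis extension) for asynchronous consumers.
final class QueueEventBus implements EventBus
{
    public function __construct(private \Redis $redis, private string $queue = 'events')
    {
    }

    public function publish(string $eventName, array $payload): void
    {
        $this->redis->rPush($this->queue, json_encode([
            'name'       => $eventName,
            'payload'    => $payload,
            'emitted_at' => time(),
        ]));
    }
}

// Usage: a service announces what happened, without knowing who listens.
// $bus->publish('document.indexed', ['id' => 42]);
```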
And after that, if you want to kill your framework at this layer too, you will just have what we call orchestration APIs. Their only job is to make all your small services communicate and interact smoothly, and you can use caching and a lot of other things there. So yeah, that's the design I like.

But what you have to keep from this talk is: there is no silver bullet. Maybe you don't need any of the tools I presented here, I don't know. Maybe you need all of them, maybe just a part. There are a lot of very good patterns, best practices, and tools on the market, but every application is different. So have a look at how everyone else is working, and take what fits best with your application and your needs. If you have one thing to remember from this talk, it's that there is no silver bullet. And I had another point on that, but I don't remember it.

So, if you would like to work on this kind of thing and you're in Brussels, come join us. We need help, really. And now I'll share some resources. That's Martin Fowler's blog: go read everything. It will take you a while, there is a really crazy amount of stuff, but it's very interesting. MuleSoft, about this kind of architecture: MuleSoft is a software company designing microservice ecosystems, and they share very good practices; I saw them at the MuleSoft Summit. Some things about unit testing. And that's the blog article I wrote about Pinba. Have a look at Pinba; I will try to do another presentation on this tool, because I think it would be very good for people to know it.

So, I don't know if you have questions?

"I have one. At the beginning of your talk, you were talking about TDD, and you said that at first you should write every test, and only after that implement the application."

Yes, that was on purpose, to exaggerate it a bit. The point is just: don't focus on design. Not "do it in a bad way", just don't focus on design yet. You focus on the design of your code when all your tests are green. The design of your code, I mean; focus on the functionality first. When you start to work on something, don't try to make it nice in the first iteration, because you will lose a lot of time, and maybe you won't have foreseen some edge case and you will have to rewrite everything. So it's a good practice to go straight to the point, and then refactor and make it nice at the end. It doesn't mean make it messy at first; it just means don't focus on it too much. Anyone else? Okay. Yes?

"When you spoke about the database service, for instance, on your way to microservices, and assuming that the different services will use the database for their own stuff: does that mean that the database service will become a super-thing that really just moves the queries around as more services use it?"

Usually, for me, a micro-application that accesses storage should be very, very small. It doesn't have to handle a lot of business logic, and if you have business logic on top of your data, then you write another micro-service that will just handle this logic. For me, the Redis API, for instance, is a built-in micro-application: you don't have to code it, just use it.

"If the database service is a black box, and other black boxes need this database service, do you put in another black box?"
Sorry, I didn't quite get that...

"Every micro-service is a black box. But the database service itself is a black box, and other black boxes need this service. How do they interact with it?"

Okay, so the question is: if the database service is a black box, and the other services are also black boxes, how is the interaction done? I think you have to centralize all your business logic in a dedicated orchestrator, which will just make the communication easy between all the other black boxes, because it's just the business logic of how you will use your data for specific clients; there is context switching involved, I think. So maybe you just need an orchestrator, and if you like orchestrators, I recommend having a look at Erlang, because it's a very efficient language for that. Honestly, I haven't found a really good solution so far. If you're looking for best practices (the question was about best practices for keeping coherent communication between small services), what I would recommend is to dig into MuleSoft's engineering blog; Netflix's engineering blog also has very good practices for that. But I never found something 100% satisfying for my use case. Combine best practices from other people, and make something that fits your needs. Sorry, I don't have a definitive answer for that.

Just a quick note: if you attended any talks here, feel free to provide some feedback through Joind.in. It's always nice for the speakers to hear what you thought, and it also helps conference organisers know whether they should invite that speaker again. Thank you for attending, and see you next year.