Oberstein, who's going to be speaking about scaling microservices with Crossbar.io. So hello, yeah, my name is Tobias. I'm giving a talk about scaling microservices, or microservices in general with Crossbar.io, but specifically scaling microservices. The presentation is online, so there will be a lot of links inside, but you don't have to write them down, just open it. And yeah, well, my name is Tobias. You can find me on Twitter and various social channels under that nick, and that's my email. If you want to give me feedback or have further questions, don't hesitate, just contact me via email or Twitter. My background is in math and statistics. I've been programming since I was 12, which is quite a long time ago, in a lot of different languages. The ones I regularly use nowadays are C, C++, SQL and JavaScript, the other ones less, and of course my preferred language these days is Python. There's one language I still want to learn, and that's Rust, that's still missing in my repertoire, because it's an interesting language. But other than that, I'm spending most of my time in Python. Apart from programming, I'm founder of Tavendo and Crossbar.io, a startup, and we do messaging servers. We're also open source and community supporters: we started a lot of open source libraries, and Crossbar.io itself is also open source. Regarding the startup, we have a commercial open source business model, the usual thing: the software is all open source, but if you want commercial support for Crossbar.io, you can get that from us. That's pretty much standard in enterprise software; you know that business model. Well, I mentioned that we started a lot of open source libraries. One of them is Autobahn, you've probably heard of it; it was the first WebSocket implementation in Python. You can use it to write WebSocket clients as well as servers.
So you can use it at the pure WebSocket level, and it also implements the Web Application Messaging Protocol (WAMP), which we will be using for microservices; I'll talk about that later. Autobahn supports two asynchronous networking frameworks under the hood, Twisted and asyncio, and runs on Python 2 and 3. So if you're a Twisted guy, you can use it in a Twisted application, but you can also use it in an asyncio application. And Autobahn is used more and more. Its biggest user, probably from the deployment side, is Firefox: they use it for their push service, a browser feature, and they're currently serving something like 80 million connections using Autobahn and want to ramp that up to the whole user base. That was quite a thrill. I'm in conversation with them, because it's a difference if you have 1,000 WebSocket connections versus 80 million; there are new effects, new challenges, like keeping them all alive and stuff like this. It will also be the basis for Buildbot Nine, a continuous integration system also written in Python. And it's used in Django Channels: Django is based on WSGI, which is blocking, and they want some more real-time features, so that's also a user. Then the Web Application Messaging Protocol: that was also started by us, but it's an open ecosystem, an open protocol. We're just one implementer nowadays; there are third-party implementations of the protocol. And the story is: it runs natively over WebSocket, which is real-time, bi-directional and so on, but quite low-level. So we figured we needed something more abstract for writing applications. What the WAMP protocol implements is remote procedure calls and publish & subscribe; we'll get back to that later. Well, yes, Crossbar.io is a WAMP router.
WAMP is a routed protocol, so you need a router. There are different routers; Crossbar.io is our WAMP router, but there are different WAMP routers, so you're not locked into our stuff. That's what I meant by an open ecosystem: from the beginning we said we don't want to lock users into our stuff, so there are different implementations and you can use a different one as well. And well, there are a lot of links in the presentation. Then we have a mini demo with code, which is kind of stripped down, that's the second one, and then we have a bigger application. We tried to finish it by today; it's not totally finished, but a lot of code is already working, so you can have a look there. It's all on GitHub, so just give it a try. OK, just very briefly: what are microservices, what's the story about that? I won't go much into detail, just a quick intro. This is something you don't want to have: what you usually get with a monolithic application is that you end up with a big pile of spaghetti, sooner or later. It usually starts small and everything is great and works, but after a couple of months or years it piles up, and in the end it looks like this. So what's the deal? The deal is about taming complexity, that's the overarching story. Why do you want microservices? Because you want to tame complexity, to not get sunk in some spaghetti hell. That breaks down into divide and conquer: you want smaller parts which are more manageable. One thing should have only one responsibility, so not shoveling everything into one piece where one piece does everything; you want separation of concerns. And then you want decoupling, that's a pretty important point: you want your parts contained, and then basically minimize the coupling between the components.
So that's the approach to taming complexity. And of course this has a long history; the theme of taming complexity and splitting things down is age-old, and there were different technologies trying to do that: CORBA, DCOM, and SOAP, which is probably the worst of all. I've done C++ with CORBA, for example, which is not nice; you have to be a masochist to survive that, basically, and in the end it just doesn't work, it's too complex. And we'll also see why REST over HTTP isn't really an answer, or at least not a complete answer, for microservices. If you think microservices, most people say: OK, let's just use HTTP and REST, that works, that's good enough. But we'll see why that isn't enough. So we'll look at how you approach microservices and an application based on an example, and the example is the traveling salesman problem. We'll look at the traveling salesman problem as an application, how to break it down into microservices, what the challenges are and how you would do that. Just very quickly, the traveling salesman problem, what's that? Imagine a salesman who has to visit a couple of cities. The big fat ring is the starting point, and the task for the salesman is to visit each city exactly once and then come back to the starting point. And the task is: find the route which is shortest. The salesman wants to travel fast, wants to minimize the length of the round trip through the cities. So this would be one solution, but not the shortest path. With that few cities it's obvious what the shortest path would be, probably this one. Now the problem is: how do you find that solution computationally, and how do you find it when the number of cities grows? Because the problem is that the number of possible routes grows pretty fast.
It grows exponentially. Already for a few dozen cities the count is a huge number; I can't even read how many digits, a lot of digits. So exhaustively looking at each possible route isn't practically possible; we need something better. Just as a wrap-up: the traveling salesman problem is a combinatorial optimization problem, and the search space for a solution is exponentially large. And there's no deterministic, closed-form solution to that problem; there are problems with closed-form solutions, but the traveling salesman problem is not one of them. So we'll look at one way to solve it, just very briefly: simulated annealing. If you have a search space, here in one dimension, that's the x-axis, and you have a cost function that gives the cost for a particular solution in the search space, you're basically looking at an energy surface or cost surface. The problem is that you don't want to get stuck: if you try, for example, gradient descent, you pretty much get stuck in a local minimum, and simulated annealing tries to avoid that, with a clever heuristic to look through the search space and search for the best solution. So it tries to avoid getting stuck in a local minimum. And how do you do that?
We don't need to understand the details of the algorithm, but there is an important point: there's a 'repeat m times' loop, and that's the important point, because we use that to scale out the microservice. We will have a compute microservice component, and we want to scale that out, and to be able to do that, we need a place in the algorithm where we can split up the problem into sub-problems and then distribute part of the problem to each instance of the compute microservice. If you're interested, it works like this: you start with an initial temperature. In annealing, the idea is that if you have a melted material and cool it down slowly, it will get into the minimum energy state; the algorithm is just a transfer of that idea. So you start with an initial temperature and an initial state, which can be just a random route, and then you repeat until you've reached the end temperature. Starting from your current solution, you perturb that solution a little bit, for example you swap two cities in the order in which they are visited, and then you look at the modified route again: is the cost lower? If it's lower, you take the new route. And if it's higher, you nevertheless take the new route, even though the cost is higher, with a certain probability that depends on the temperature. That's the trick, because that avoids getting stuck in a local minimum early in the run.
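As a rough sketch of that loop (this is not the speaker's actual code; the cooling schedule, parameter values and function names are made-up choices for illustration), simulated annealing for TSP could look like this in Python:

```python
import math
import random

def route_length(route, dist):
    """Total length of a closed tour over the cities in 'route'."""
    return sum(dist[route[i]][route[(i + 1) % len(route)]]
               for i in range(len(route)))

def anneal(dist, t_start=10.0, t_end=0.01, cooling=0.95, m=100):
    """Simulated annealing for TSP; 'dist' is a symmetric distance matrix."""
    n = len(dist)
    route = list(range(n))
    random.shuffle(route)            # initial state: a random route
    cost = route_length(route, dist)
    t = t_start
    while t > t_end:
        for _ in range(m):           # the 'repeat m times' loop
            # perturb: swap two cities in the visiting order
            i, j = random.sample(range(n), 2)
            new_route = route[:]
            new_route[i], new_route[j] = new_route[j], new_route[i]
            new_cost = route_length(new_route, dist)
            # accept better routes always; accept worse ones with a
            # temperature-dependent probability (escapes local minima)
            if (new_cost < cost or
                    random.random() < math.exp((cost - new_cost) / t)):
                route, cost = new_route, new_cost
        t *= cooling                 # cool down slowly
    return route, cost
```

The inner 'repeat m times' loop is exactly the part that can later be handed out to multiple compute instances, each annealing within its own subspace.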
Unless the temperature is zero, there's always a non-zero probability of taking a worse route, and that gets you out of the local minimum. But that's just an example; the important point is that we have a 'repeat m times' part in the algorithm which we can use to distribute work to multiple instances of the compute service. Well, why don't we just run it on one core, and that's enough? The answer is simple: because there's a limit rooted in physics; we just don't have infinitely fast single-core machines, so we have to use multiple cores. How would we use multiple cores for that problem, in an application solving the traveling salesman problem? We could use some IPC mechanisms, but that's not what we're talking about, because those are mechanisms just for the compute problem. We're talking about using microservices also for the compute part, also for scaling the compute part. So now the TSP app: how does it look? We have an application where the input is the problem plus a compute time budget, and as output we just want the best route found within that compute time budget. The app has to do basically three things: it needs to control the overall search, the orchestration of the search; it has to have a user interface, because we want to be able to look at the route and control the parameters; and it needs the compute core, multiple instances of the compute core. Splitting that up into a microservice application would probably look like this: we have now split up the big monolith into different parts, and the different parts are the user interface microservice, the compute parts, and the orchestrator. This is the TSP app from a microservice angle, in a microservice architecture. Now, these components need to talk to each other. For example, the orchestrator, which controls the overall optimization, needs to call into the
instances of the compute microservice for sub-parts of the search space: compute me your best route within your subspace of the search. Then it wants to get back the result, the best route for that sub-problem, that subspace of the search space. So we need something to call from the orchestrator microservice into the compute service and get back a result. That's pretty straightforward. But we also want to have something like events. Like the compute service saying: what's my current load on the machine running this instance of the compute microservice? We probably want to see what the CPU load currently is, how many routes per second are processed by that particular instance of the compute microservice. So we need something different, which is events: we want to distribute information to the other parts. The orchestrator probably wants to track the CPU loads of all the instances of the compute microservice, and we want to show that in the user interface as well. So we need two patterns, remote procedure calls and publish & subscribe, ideally in one protocol, in one technology, because that makes it easier. And one way is using the Web Application Messaging Protocol, because, as we figured initially, we want something more abstract than raw WebSocket, and we need those two messaging patterns in one protocol. So how does it work? We have app components, components of a microservice (I'll use the terms synonymously), and those initially connect to Crossbar.io, which is the WAMP router. Code-wise, that looks like this.
This is the Twisted variant of how you establish sessions from the components, from the microservices, to Crossbar.io. I won't go much into detail, but in the end you get a session object, which is `self`, and then you have those actions for the two messaging patterns: for remote procedure calls, and for publish & subscribe, for publishing and receiving events. That's just the kind of boilerplate which you need to establish a session, and then to actually run a session you have this boilerplate. I won't go into details, but you can see there's a `ws` in the URL the application runner connects to; that means it runs over WebSocket. OK, so pattern number one, publish & subscribe. The idea is you have an abstract namespace of URIs, where the names are topics. The decoupling between the publisher and the subscriber side happens via that namespace, via those URIs. The publisher publishes to the abstract topic; Crossbar.io, the WAMP router, knows who is subscribed to that topic and can distribute the events. So two app components could subscribe, like the UI subscribing to 'on CPU load change', and the backend, the orchestrator, to 'on CPU load change' as well. Then the compute component can publish when the CPU load changes, or periodically, each second for example, publish its CPU load, and the subscribers would receive it. The point is that neither needs to know about the other; we have that decoupling. The publisher side doesn't need to know who subscribed or where the subscribers are; they could be behind NATted networks and behind firewalls and so on. So we have that decoupling for that pattern. Oh, OK, I missed that slide: that's the distribution of the event to the actual subscribers. Code-wise it looks like this, pretty easy. You have an event handler, `on_hello`, that should be fired when you receive an event on your subscription, and to subscribe you just say `session.subscribe`.
You say what your event handler is and your URI; that's pretty straightforward. A subscription can basically fail for only one reason: you're not allowed to subscribe. There are authorization mechanisms in Crossbar.io where you can control in a fine-grained way which role is allowed to subscribe and who is allowed to publish. Yeah, well, publish looks like this: you basically also call publish on the session, so you can publish your data. And how does a remote procedure call look? This is again decoupled via the namespace: URIs decouple the caller and the callee side. A callee says: I provide a procedure, and it's callable under this URI. Then the caller can call the procedure under that URI, but neither needs to know where the other one physically resides; again we have decoupling. It looks like this: the component registers; when a call comes in, Crossbar.io knows who has registered the procedure and can forward the call to the callee. The callee produces a result, and the result is then shuffled back to the original caller. So again we have decoupling between caller and callee, which is a big difference to REST over HTTP, because with REST over HTTP the caller needs to know the host name and the port number it's calling. So there you have a coupling to your deployment infrastructure, a coupling from your application code to your deployment infrastructure, and you don't want that. So we transferred the decoupling pattern from publish & subscribe to remote procedure calls as well. Register looks like this; I need to speed up a little, I think. Code-wise you can have a look at the presentation online. A call can be done like this.
It's pretty straightforward: `session.call`. The only difference, basically, to a direct function call is the `yield` there, which you should recognize as asynchronous code. It's not a synchronous in-process call; it's out-of-process, it does network stuff, but it's asynchronous, hence the `yield`. In newer Python, 3.5, you could have `await`; this is just Python 2-compatible code, but in Python 3.5 there would be an `await` at that place. And you can of course combine those actions: you can call a procedure in an event handler, and then publish events from a registered procedure when it's called, and stuff like this. You can combine them to create more complex interactions. Then shared registrations: that's the feature we will be using for scaling out microservices. Normally a procedure can only be registered once; the second registration gets an 'already registered' error. But in Crossbar.io we have a feature called shared registrations, which allows the same procedure to be registered by multiple instances of a service or component. How is that usable? It's usable because Crossbar.io can then, for example, implement hot standby for you: you can have all calls routed to the primary component until that component fails, and then all further calls are transparently routed to the hot-standby component, and the caller isn't even aware of that.
It's totally transparent. So you get hot failover for microservices; that's one use. And the other one, for our problem here, is scaling: you can have calls routed to different components, multiple instances of the component, in a round-robin fashion. That allows you to scale, because you can run those instances on different machines, and again it's completely transparent for the caller side. The caller side isn't even aware of the fact that there are multiple instances on the callee side; it's just transparent. Shared registrations aren't complex at all; it's just a single option that you have to give on the register call when registering the procedure: invoke 'roundrobin'. We have different invocation policies (round robin, random, single, first, last); we'll be using round robin, which is simple and straightforward to understand. And then there's another feature, because normally a component can take in an arbitrary number of calls. If you have a Python component that's single-threaded, it doesn't make sense to send it 100 calls in parallel; if you make it multi-threaded, or you have a component which is multi-threaded, then it probably can take many calls in parallel. But you should be able to control that concurrency, otherwise your component just gets overwhelmed by incoming calls or invocations. So that's another feature which is pretty much necessary for practical use: max concurrency. A callee can register with a max concurrency, saying: I'm able to process that many concurrent calls. So again, that's pretty easy.
It's just another option during register: you give the concurrency you're able to serve, and Crossbar.io will note that and never send more than that many calls concurrently to your component. That way you can prevent overload. OK, I'm too slow, sorry. OK, the TSP app: this is what the architecture looks like then. They all connect to Crossbar.io, and of course you can have those components written in different languages (we support more than 12 languages), and then you can combine those components, place them on different boxes, physical machines. So you can have the orchestrator on machine one, two different compute instances on the next machine, and so on. Summary: yes, microservices are a new answer to an old problem. We've seen that with CORBA and everything else, but we think this is a new answer to the old problem, a better answer than before. We've seen those two interaction patterns, remote procedure calls and publish & subscribe, which we think are pretty much always, or most often, necessary in practical applications, and we've seen how that is made easy by Crossbar.io and Autobahn. Scaling and hot standby are offered by the router, in particular by Crossbar.io; WAMP routers currently differ by feature set, and Crossbar.io is the most advanced one, so not every WAMP router supports that, currently it's the only one. Yeah, we also have a prize draw with two IoT starter kits; we're pretty much into the Internet of Things, and we've got two of these you can win. If you take the survey, you don't have to provide your email; we don't want your personal data, just feedback. So please visit these links, take part in the survey, and you can win these. OK, that's my talk. Any questions? Q: Yes, so is Crossbar.io like a black box that handles everything, or do you deploy it yourself? And if you deploy it yourself, does it have multiple nodes,
does it do high availability if one node fails? A: You can deploy it on your own premises. It's open source; you can just download it and deploy it on your bare metal, in your cloud, on AWS. We also have Docker support, so you can just `docker run crossbar`, basically, and you're up and running. But that's right, it should be looked at as a black box. You can also install it with `pip install crossbar`, but you should look at it like Apache or nginx, as a black box system. We're still working on the scale-out part of the routing core itself; that is in the works, and I can't tell you more than that right now. Q: Thank you for your talk. How do we manage to use a database with Crossbar.io? We need database interaction; does it have to be asynchronous? A: Well, you can have the database interaction in your components: you can have a component which just uses a database driver to talk to your database, or whatever. We will also have, as an upcoming thing, a PostgreSQL connector which will allow you to use WAMP right in the database. You can, for example, call a PostgreSQL stored procedure like any other WAMP procedure, so you can call from JavaScript directly into a stored procedure in PostgreSQL, and you can publish WAMP events from a trigger in PostgreSQL. So no, it doesn't have to be asynchronous in your component; you could create a component and let it be synchronous. If you're referring to the fact that database libraries are usually synchronous: there are ways to work around that. PostgreSQL, for example, also has asynchronous database drivers; others, like Oracle with cx_Oracle, are synchronous, but you can run them.
You have to run them on a background thread then, otherwise they block your main thread, which is doing the asynchronous networking. So there are ways to handle that. Q: Thank you for the talk. My question is: your scenario is really nice, because it's very easy, just talking between microservices, but I'm afraid to use these products, because when I need to do authorization, let's say my clients need to get some of the data, it's scary: how do I pass my tokens from the user to, let's say, Autobahn or Crossbar.io, and what when, let's say, the authorization changes? A: Well, those are probably different aspects. We have authentication mechanisms built into Crossbar.io, and we also have extension points in Crossbar.io where you can hook into the authentication phase, pretty much by just implementing a WAMP component again, which is then called during authentication, so you can plug into a proprietary authentication system. And then we have role-based authorization, which means you can control in a fine-grained way who is allowed to subscribe and who is allowed to publish. For example, a sensor in an IoT application could be allowed to publish but not to subscribe, not even to its own topic. So you can have fine-grained authorization of those actions. But there are many aspects, like you mentioned, like encryption or security, depending on what direction you mean. Q: A nice example: a user can see different instances; say one user can see only two instances, and I have a topic named 'instances' in Crossbar.io, let's say. How do I send a user only his instances? A: Yeah, you can either use the role-based authorization mechanism, where the specific user is authenticated under a specific role, and only under that role is he able to receive the events he should be able to receive; or you can use what's called exclude/eligible.
That's an advanced feature; we can probably talk after this. You can control, even for an individual publication, down to the session level, who should receive that event: so not all the subscribers which are authorized and subscribed, but even a subset. You can use exclude/eligible for that, but let's talk afterwards; it's probably better to show that in code. You can pretty much control in a fine-grained way who is allowed to do what and who should get which event, so there's a lot of stuff inside; we're pretty paranoid on the security side. I hope that answers it. Q: Hey, thanks for your talk. One question: you mentioned the routing; you say the main feature is the decoupling, and I see that, but it also seems to me that you now introduce a very tight coupling to Crossbar.io, to a third component, and solving that in the HTTP case would involve something like a load balancer, or something like SmartStack. So how would you respond to that? Do you think this is more of a convenience, that it's easier to set up and get everything running, or do you have a different point of view? A: I have a different point of view, because at the application level Crossbar.io is invisible from your code. It's only visible in the initial connection establishment, but that's a couple of lines of boilerplate somewhere in your application; the rest of your microservice is totally unaware of the fact that there's an intermediary. And regarding your point about HTTP REST with load balancing: then you're pretty much reinventing what we did, because then the load balancer needs to know where your REST endpoints are, and when they change, when one machine goes down, the load balancer needs to be updated.
There's a whole category of startups doing API management for REST-based microservices, a whole category which, from our point of view, is doing it the wrong way. But of course that's up to you. I think you'd be starting down the road of reinventing the stuff we do: the decoupling will be based on your load balancer, your nginx or whatever, but then you have to manage that, and it needs to know where your REST endpoints are. And the other problem, we've not talked about that, is that it's all open ports: all your microservices are basically web services with open, listening ports, so you have a pretty big attack surface. With Crossbar.io you only have outgoing connections from the components, from the microservices; there are no listening ports. And that's not only a point on the security side, it's also one on the networking side: if your microservice is behind a NAT, it's not reachable from outside, so REST doesn't work. Sure, you can open ports with UPnP or whatever, but that's security hell. With Crossbar.io your app component, your microservice, can sit behind a NAT and it doesn't matter, because there's only one outgoing connection. So there are advantages on the security and networking side. Q: Would you normally deploy it behind nginx, or is Crossbar.io the first thing you'd hit? A: Sorry, I didn't catch that. Would you normally deploy Crossbar.io behind nginx? You can, of course; many people do that, deploying it behind nginx for basically serving static web assets from nginx, for the caching, and then just reverse-proxying the WebSocket connections to Crossbar.io. But I would say, we personally, we ourselves, just run plain Crossbar.io.
There's a web server built in; we've done benchmarking, and it scales on 40 cores to 600,000 web requests per second and can shuffle more than 10 gigabits per second of HTTP response traffic. So we don't have a need for nginx. If you're Facebook, then probably yes, you need nginx, and nginx will be faster at purely serving static web assets, but for our use cases you pretty much don't need it. Q: OK, I'm sorry, thank you. I saw on your website, via a link which seems not to work now, that you have a lot of projects with Crossbar.io in embedded environments or something like this? A: Well, the Internet of Things, Industry 4.0: that's the most important user base for us, because there's a big wave coming, and there you have inherently distributed applications. If you're writing something like a holiday planner, you can still decide whether it's a monolith or a microservice app, because in the end it will run in the data center and only there. But if you have an Internet of Things application with different locations, moving vehicles, data center backends, mobile devices and whatever, then that's inherently distributed already, so there isn't a choice between monolith and microservices, because it's inherently distributed already. And we have a lot of feedback and uptake in that Internet of Things area. Users of Crossbar.io are either saying 'I want to make my web user interface real-time', that's one user; another one says 'OK, I run a Bitcoin exchange and want to have some real-time stuff'; but the biggest, or most, interest from our point of view is the Internet of Things, so that's pretty much a focus for us. [Host] OK, I see there are still people with questions left. I would invite you to go afterwards, during lunch or right now or whenever, grab this guy and ask your questions, and we're going to get to the next speaker. Thank you, Tobias.