My name is Xing. I work for Autodesk. Today I'm presenting this topic. Let's say it's more of an idea or a thought experiment than anything with practical substance in it. And if you're looking for any clever code in it, you will probably be disappointed, because there isn't any. OK, so let me get started. The topic is functional microservices. This idea came to me while I was working on a couple of microservice projects, because in those projects I found that microservices didn't deliver the things that had been promised to us. So let me start with this slide. This is me. I'm an SDET. I call myself a polyglot developer because I'm comfortable coding in many different languages. I'm a super fan of Haskell, even though I've been stuck in the beginner phase for many years. What is an SDET? An SDET is a software developer in test. That's it. So, to start: in the beginning we had the monoliths, and we are all aware of the problems associated with monoliths. A typical monolithic system is really huge and very complicated. Nobody knows where anything is. Any time you make a little change, you have to redeploy everything, and any little change has the potential to bring down the entire system. Then came microservices. We turned to microservices for solutions to those problems. But unfortunately, my experience with microservices is that the same mistakes get made all over again, just in a different fashion, in a way that is actually harder to debug and harder to diagnose. It's the same spaghetti after all. Here you've got the same things. Say two services read and write into a shared data store; then everything breaks. A service can have many, many dependencies, so a little change in one microservice somewhere can have a cascading effect on other parts that seem unrelated. And of course, there are unaccounted-for side effects.
Again, in your microservice you can just call whatever other service you want, and so basically every mistake made in monoliths can be found in microservice projects as well. That led me to think: can functional programming help at all? Because we know that in functional programming there is a focus on writing your functions as pure as possible, and a tendency to isolate side effects to the places they belong. I wondered if I could bring those learnings from functional programming into microservices. Specifically, can we model a microservice as a function? If we take that view, then every service should be like this: you send in a request, the service does something, and it gives you back a response. But this service can only be a function if it always gives back the same response for the same request, every time. Essentially, a service is not always pure. There are roughly two kinds of non-purity. A service can have internal state, because its internal state can affect the response it gives back for a seemingly identical request. And it can have external effects. An external effect means your service may not have the answer at hand, so it needs to reach out to another service to fetch the answer before it can respond. This side effect means it has to call somebody else to get something back. So to make your service pure in a functional way, there are certain ways to deal with these two kinds of side effects. First, we look at the internal state. How do we handle internal state? We begin from here. With internal state in place, the service won't be pure, because for the same request it could give a different response depending on its state. An easy way to solve this is to bring the state in as part of the input to your service. So we can see it like this: the service just takes an extra argument called the state.
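As a minimal sketch of the idea above (all names here are illustrative, not from the talk), a service handler with its state passed in explicitly is just a pure function of (state, request):

```python
# Sketch of "state as an explicit input": instead of hiding state
# inside the service, the handler is a pure function
# (state, request) -> (new_state, response).

def counter_service(state: int, request: str) -> tuple[int, str]:
    """Pure handler: the same (state, request) always yields
    the same (new_state, response)."""
    if request == "+1":
        return state + 1, "ok"
    if request == "total?":
        return state, str(state)
    return state, "unknown request"

# Calling it twice with the same inputs gives identical outputs:
assert counter_service(2, "total?") == (2, "2")
assert counter_service(2, "total?") == (2, "2")
```

Because the state is an argument rather than hidden inside the service, the handler can be tested with plain equality checks, with no setup or teardown.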
But the state is, how do I say it? OK, let me just go ahead. We can actually see the state as a reduce over the previous incoming requests: it's built up from your previous requests. There's a thing called CQRS. I guess we know what it stands for: Command Query Responsibility Segregation. What it essentially means is that every request coming into your service can be one of two things. It can be a command. A command is something sent to your service that doesn't expect a response; the service builds up its state from these commands. Or it can be a query, which is something that doesn't change the internal state of your service. It's just asking a question, and the service computes an answer from its internal state and the query itself. Now with this in mind, we get this: whenever your service is asked a question, that's a query, and it can work out the response from the query itself, its initial state, and all the commands it received previously. This is how we model the internal state if we want to view our service as a pure function. At this moment, any questions? Are you following? Yes, please.

Audience: Do we keep the state, or the list of commands?

Sorry, the state. In the second line there's a reduced state, right? This one, this state. Yes. Basically, to model the internal state, if we want to see the service as a pure function, then this state must become an actual parameter to the service. And here it explains how the state is actually built up: it's built up from this reduce.

Audience: From the last slide, I understand where the query comes from; it comes from an external source. And I understand where the initial state comes from; it's part of the definition, it's internal. But the commands...
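The "state as a reduce over previous commands" idea can be sketched like this (the function names are my own, for illustration):

```python
from functools import reduce

# The state is a left fold of an apply function over all commands
# received so far, starting from an initial state. Queries read
# the state but never change it.

def apply_command(state: int, command: int) -> int:
    """Fold one command into the state (here: just add it)."""
    return state + command

def answer_query(state: int, query: str) -> str:
    """Pure query handler: computed from state and query only."""
    return str(state) if query == "total?" else "unknown"

initial = 0
commands = [1, 1, 1]                       # commands build up the state
state = reduce(apply_command, commands, initial)
assert answer_query(state, "total?") == "3"
```

This is exactly the CQRS split described above: commands flow into the reduce, queries flow into a pure lookup over its result.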
Audience: Where does the list of commands come from?

OK, here's the thing. When you have the service, it receives many, many requests, and each request is either a command or a query. Command requests don't expect a response; those commands have the service build up its internal state.

Audience: But today I send a command, tomorrow I send another command, and the next day I send a query. Until I send my query, nothing needs to happen, right? But at the point I send my query, how will the service know about the commands I sent yesterday and the day before?

Think of the service as having a little database of its own. Every command goes into the service, and it keeps those states. Or you can transform those commands in some way and store them locally as the service's own state. That's how.

OK, so next. Basically we see the service like this: we have a series of commands and queries coming into the service, and it gives out a series of responses. Seen this way, the service is actually just a stream transformer: it takes an input stream and produces an output stream. We can see a little made-up example of how this works. Say we have a service called counter. This is the input stream, and this is the output stream. The input is the command +1, +1, +1, then the query: what's the number? When the service receives these commands, it builds up its internal state by adding everything together, so here the answer is 3. Then you do another two, and the answer is 5. OK, that's how to handle internal states. Any more questions before we move to the next part? OK. Sometimes our service needs to reach out to the external world to interact with other services. To model this, the term I made up is called effect-for-response.
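The counter example above can be written directly as a stream transformer, a function from an input stream to an output stream (again, a made-up sketch, not code from the talk):

```python
from typing import Iterable, Iterator

def counter(requests: Iterable[str]) -> Iterator[str]:
    """The made-up 'counter' service as a stream transformer:
    commands ('+1') build up state and emit nothing,
    queries ('total?') emit a response from the current state."""
    state = 0
    for req in requests:
        if req == "+1":
            state += 1          # command: update state, no response
        elif req == "total?":
            yield str(state)    # query: answer from current state

# Input stream: +1, +1, +1, query, +1, +1, query
out = list(counter(["+1", "+1", "+1", "total?", "+1", "+1", "total?"]))
assert out == ["3", "5"]
```

The output stream contains one item per query, matching the talk's example: three increments give the answer 3, two more give 5.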
The idea is that the service itself doesn't actually perform any of those external requests, any of those external effects. What it can do is return a response that includes the effects it intends to perform. Are you with me? So the service doesn't actually perform any of those effects, but somebody has to, and I call this somebody the boundary. You can think of our nice and pure service as living inside a room. In this room, everything is provided to the service so it can remain pure. But somebody needs to do the dirty work, and that somebody is the boundary the service lives behind. The boundary works like this. When a user makes a request, the boundary captures the request and makes another request into the service. The service says: hey, to answer this request there's something I don't know yet; you need to ask another service for a piece of information. So instead of reaching out to that external service directly, the service just returns a description of the request it wants performed. We call that an effect. The boundary sees this and says: OK, you don't have the answer yet, so I have nothing to give back to the user, but I'm going to help you get the answer from the external service. Then the boundary pushes (remember: command, query, command, query) another command back into the service, and can ask the question again: do you have the answer now? The service can then say: OK, now I have the response. The boundary says: OK, now you have a complete response, I can send it back to the user. In this way, the boundary helps the service remain pure. Let's see another made-up example. The little square in the middle is the boundary; this is the input stream of the service, and this is the output. You can see the user here. What this service does is just sum up some random numbers.
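One way to sketch this "effect as data in the response" idea (the types and names here are my own assumptions, not from the talk) is to have the pure service return either a finished answer or a description of the effect it needs performed:

```python
from dataclasses import dataclass

# The pure service never performs I/O; it returns plain data that
# either carries a finished answer or describes an effect it wants
# the boundary to run on its behalf.

@dataclass
class Done:
    answer: str

@dataclass
class NeedEffect:
    effect: str          # which external effect to perform
    count: int           # how many times (e.g. how many randoms)

def total_service(state: list[int], request: str):
    """Pure handler: state is the numbers received so far."""
    if request == "+random":
        return NeedEffect("random", 1)     # describe, don't perform
    if request == "total?":
        return Done(str(sum(state)))
    return Done("unknown request")

assert total_service([5, 18], "total?") == Done("23")
assert total_service([], "+random") == NeedEffect("random", 1)
```

Because the response is plain data, the service stays a pure function of its inputs, and you can assert on effects in tests without ever running them.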
So the user can send in requests: add a random number, add a random number, add a random number, and then ask the question: what's the total at this moment? The boundary sends those into the service as commands: add a random number, add a random number, add a random number, and asks the question: what's the total? At this moment the service doesn't have the total, because it only knows that it needs three random numbers to be generated. So its response just says: hey, I don't have the answer yet; please give me three random numbers. The boundary then reaches out to the random-number service asking for those numbers. The numbers come back to the boundary, and the boundary feeds them back into the input stream of the service. Now you've got the plus 5, plus 18, and so on. Ask for the total again, and the total is there. Questions? Yes, please.

Audience: How is the request for the three random numbers connected to the result? Say you have two requests for random numbers; the random numbers you asked for come back, and then there's another request. How does the service coordinate that?

Let's say it has only received two of them; it doesn't have the third yet. At this moment, if you ask for the total, it's just going to return a response similar to this one, but instead of asking for three random numbers it will ask for one, because it already has two. Its internal state recognizes: I have two random numbers fulfilled, and I still need the third one.

Audience: OK, so if you have two requests, each asking for three random numbers, coming in at the same time, would each just get a random assortment of the six random numbers?

Ah, I see your point. For this service, because we have this input stream, every item in the stream has its own place.
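The whole loop, with the pure service inside and the impure boundary around it, could be sketched roughly like this (a hypothetical implementation under my own assumptions about the protocol, not the talk's actual code):

```python
import random

def service(state, request):
    """Pure core: '+random' records a pending number, 'add N' folds a
    delivered number into the total, 'total?' answers only when no
    randoms are still pending; otherwise it asks for the rest."""
    pending = state["pending"]
    if request == "+random":
        return {**state, "pending": pending + 1}, None
    if request.startswith("add "):
        n = int(request[4:])
        return {"total": state["total"] + n, "pending": pending - 1}, None
    if request == "total?":
        if pending > 0:
            return state, ("effect", pending)   # "give me N randoms"
        return state, ("answer", state["total"])
    return state, None

def boundary(user_requests):
    """Impure shell: performs the effects the service asks for
    (generating random numbers), feeds the results back in as
    commands, then re-asks the query."""
    state = {"total": 0, "pending": 0}
    answers = []
    for req in user_requests:
        state, resp = service(state, req)
        while resp and resp[0] == "effect":
            for _ in range(resp[1]):            # run the requested effect
                n = random.randint(1, 10)
                state, _ = service(state, f"add {n}")
            state, resp = service(state, req)   # ask the query again
        if resp and resp[0] == "answer":
            answers.append(resp[1])
    return answers

(total,) = boundary(["+random", "+random", "+random", "total?"])
assert 3 <= total <= 30   # sum of three randoms drawn from [1, 10]
```

All the randomness lives in `boundary`; `service` can be unit-tested exhaustively because it is a pure function of its state and the current request.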
So it's not possible for two requests to, let's say, arrive at the same time; the service always processes items one by one. Yeah, so that's it. So basically, with the things we've just talked about, we can now see services as functions. That means for every service, we can write down this formula: we can list out its inputs and its outputs, and given some input, we can predict what the output should be. This, to me, is particularly attractive, because I'm an SDET, and systems that look like this are the easiest to test. And we can also see the services as stream transformers: essentially, a service receives a stream of incoming events or requests and produces another stream of responses. Taking this view, it becomes easy to compose services together to act as a bigger service that still displays the same characteristics. The benefits we get from this: number one, a crystal-clear API contract, which not only helps me as a tester but also helps all the consumers of the service. There will be no surprises from hidden side effects, because all side effects, whether internal or external, must be modeled and reflected in the formula we just saw. By following this model, there will be no shared mutable state, because all communication must go through the input and output streams. And as a natural consequence, the services become functionally composable, which opens a lot of doors. So yeah, that's all for this talk. Any questions, please? Yes, please.

Audience: Two questions. The first one is regarding the first part, where you try to eliminate the internal state. Isn't that the same pattern as event sourcing?

Yes, that's where I borrowed the idea from.

Audience: OK. And the second part, where you're trying to eliminate the side effects from external dependencies...
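The composability claim above is easy to see concretely: if each service is a stream transformer (an iterable in, an iterable out), then composing two services is just ordinary function composition. A tiny illustrative sketch, with made-up services:

```python
# Two toy "services" as stream transformers: one doubles each item,
# one emits a running sum. Composing them yields a bigger service
# with the same stream-in, stream-out shape.

def double(stream):
    for x in stream:
        yield x * 2

def running_sum(stream):
    total = 0
    for x in stream:
        total += x
        yield total

def composed(stream):
    """The composed 'bigger service': double, then running sum."""
    return running_sum(double(stream))

assert list(composed([1, 2, 3])) == [2, 6, 12]
```

The composed pipeline is itself a stream transformer, so it can be tested, reasoned about, and composed further in exactly the same way as its parts.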
Audience: So if you deploy this example application, with this architecture, how do you deploy the container?

The container? You mean the boundary? In most cases, those boundaries will take the form of, let's say, other services. You could have something here as your boundary, taking care of the actual user requests and responses, and then you can have those pure services underneath it, with the boundary orchestrating all of them together. You can take it in this form.

Audience: OK. Is that still a microservice?

That one is probably not a microservice, not a service in the sense we've been talking about here. But I'll say this: somebody has to do the dirty work. What we can do is limit that dirty work to a few places, so that the vast majority of your system remains pure. Yes, please.

Audience: Coming back to the earlier question. I think you're making the assumption that everything is processed sequentially and arrives just as you sent it. But you know the network is not reliable. You can send message 1 and message 2, and first receive message 2, and then message 1.

Oh yes, I know. When we think about the service as a stream transformer, transforming streams is kind of a solved problem. There are many academic papers and libraries out there to help you do this; I'm sure those problems are already solved in one of those places. Say there could be missing events, or there could be duplicates. One way of solving this, for example, is to borrow the idea from TCP connections. The way TCP works, if we receive 1, 2, 3, and then 5, it's going to wait for a while for number 4; it won't deliver number 5 to the downstream application before it receives 4. There are ways to solve those problems.
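The TCP-style answer above can be sketched as a reordering buffer placed in front of the stream transformer (a simplified illustration, assuming sequence numbers start at 0; real TCP also handles retransmission and timeouts, which this omits):

```python
# Sketch of a TCP-style reordering stage: hold out-of-order items
# until the gap before them is filled, and drop duplicates, so the
# downstream service sees a clean, in-order stream.

def reorder(stream):
    expected = 0
    held = {}                             # out-of-order items by seq
    for seq, item in stream:
        if seq < expected or seq in held:
            continue                      # duplicate: drop it
        held[seq] = item
        while expected in held:           # deliver any ready run
            yield held.pop(expected)
            expected += 1

# Item 4 arrives before 3 and is held back; the duplicate 1 is dropped.
arrivals = [(0, "a"), (1, "b"), (2, "c"), (4, "e"), (1, "b"), (3, "d")]
assert list(reorder(arrivals)) == ["a", "b", "c", "d", "e"]
```

Because `reorder` is itself a stream transformer, it composes with the pure service exactly like any other stage in the pipeline.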
And just by modeling the service as a stream transformer and applying those existing solutions, I'd say those problems can pretty much be solved.

Audience: Can that also be implemented in a functional way?

Sorry? Ah. Probably not, I'd say, in that case.

Audience: So there's still some mutation going on?

Oh, yes. A completely pure program is not useful; someone has to do the dirty work. It's just where and how that dirty work is done that matters. If that's it, thank you.