And now I'd like to welcome the first speaker for our Java sessions, Mary Grygleski. Welcome, Mary — show us what you've prepared for us, the good stuff.

Hello, Anna. Hello, everybody. Welcome to my talk, and thank you again to Red Hat and everybody on your developer advocate team — Anna, Edson, Burr, everybody. Thanks a lot. I'm all excited today, and really, really thankful.

The stage is yours.

OK, thanks. I'm going to share my screen now, folks — thank you for your patience with me; I just got brought on, so let me get set up. I'm assuming everybody can see my screen now; if you can't, please let me know. Let me check the chat too, just to make sure — I wasn't sure which chat, but I think I'm switched to the correct one now. Thank you, everybody.

I know I don't have a tremendous amount of time to talk on a topic that can actually be quite involved — it's not simple if you've worked with it. So here we are. My title is "Exploring Stateful Microservices in the Cloud-Native World." When you look at the words "microservices" and "cloud-native," it always seems to be about stateless, so I'm going against the current, so to speak, by talking about stateful. I'll be doing some introductory, conceptual material too, so bear with me if you're already familiar with the subject matter — I want to make sure everybody comes in on the same page.

So, thank you. Again, I'm a developer advocate at IBM, and very happy to be here today. Who am I? I'm on the WebSphere Liberty team, and I'm actually more on the open source side.
Primarily I'm doing Open Liberty, MicroProfile, Jakarta EE — all of these wonderful open source Java tools that let you build complex enterprise Java applications. I've been with IBM for three years, and I joined the WebSphere Liberty team about six months ago. Prior to that I was with our developer ecosystem group, focusing just on Java — in fact, doing mostly reactive work. Today I'll touch a little bit on reactive too, because whenever we talk about microservices and statefulness, reactive comes into play, and it's dear to my heart.

I also have over 25 years of software development and engineering experience prior to becoming an advocate. To give you an idea, I was on the app server development team at Sybase for seven years, and after that I was with various companies in Chicago working more at the application level, doing production delivery. So I understand very well, so to speak, the pains and the triumphs of working in an IT shop. I'm also a very active community builder — I'm the president of the Chicago Java Users Group, and I'd like to invite you to join my JUG from anywhere; right now we're all digital.

OK, so that's me. Let me start. The topic is stateful microservices, and you may be wondering: microservices — isn't it all about stateless, especially in the cloud-native world? That's exactly the thing. But let me start from the beginning in case you're new to this topic, because it's a bit less talked about, I'd say, and yet it's such an important one.
Let me use a fun illustration: Finding Nemo. If you have little children especially, or if you're into cartoons and animation, it's a great story. Why am I using it? Because I want to illustrate what stateful versus stateless means in computing. As you know, the story goes that Nemo, a little fish, was lost, and his dad Marlin — the bigger orange fish — went looking for him. Along the way Marlin ran into another fish, the blue one, Dory. Now, Dory is a very forgetful fish, but she's happy-go-lucky. So maybe you're getting the point: when Dory sees a school of jellyfish coming in, she says, oh, so pretty, let me jump on them — and Marlin says, wait, wait, jellyfish can sting. As you can see, Dory represents the stateless side of computing while Marlin is the stateful side. And as you can also see, there are advantages to both — but we live in a very stateful world. So let's take a look.

Computing life used to be simpler. Stateless computing — what is it? Essentially, it's a communication protocol that doesn't retain any session information. That means data travels through your application runtime, but the state of the data doesn't get recorded between transactions, because the point is just to get something done without caring about all the context around it. As a result, the architecture, design, and implementation are much simpler, and if you want to scale the system, it's actually relatively straightforward.
It's just a matter of spinning up another component — in a cloud-native environment, another pod or worker node. You don't worry about setting up any state. As a result it's a lot faster, scaling is easier, and recoverability is easier too: if there's a failure, you just spin up another component. It's like the Dory fish — happy-go-lucky, getting things done quicker.

But realistically, we live in a stateful world. Imagine forgetting the birthday of a loved one — especially your significant other. How would he or she respond? Not well. We live in a world where we need to keep track of things; we need to maintain context. Stateful computing, then, is a communication protocol that does retain session information: the state of the data gets recorded at every step, across all transactions along the way. As a result, the architecture, design, and implementation are all more complex. Scaling the system is harder: when you add another component to serve more requests, you need to make sure that component gets initialized to the same state as the rest of the components in the cluster, and that takes more time. So performance-wise it's also not as efficient as the stateless way of doing things. Recoverability, likewise: to recover from a system failure, everything needs to get initialized back to the same state, so everything just takes longer. Yet we can't avoid it — it's very important. So how do we do it in a cloud-native world?
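To make the Dory-versus-Marlin contrast concrete, here is a tiny sketch of my own (not from the talk): a stateless handler computes everything from its input, so any replica can answer, while a stateful one keeps per-client context that a fresh replica would not have.

```java
import java.util.HashMap;
import java.util.Map;

public class StatefulVsStateless {

    // Stateless: the reply depends only on the request. Any replica of this
    // handler gives the same answer, so replicas are interchangeable.
    static String statelessGreet(String name) {
        return "Hello, " + name + "!";
    }

    // Stateful: the reply also depends on remembered context (a visit count).
    // A freshly spun-up replica without this map would answer differently.
    private final Map<String, Integer> visits = new HashMap<>();

    String statefulGreet(String name) {
        int n = visits.merge(name, 1, Integer::sum);
        return "Hello, " + name + "! Visit #" + n;
    }

    public static void main(String[] args) {
        StatefulVsStateless svc = new StatefulVsStateless();
        System.out.println(statelessGreet("Dory"));      // same every time
        System.out.println(svc.statefulGreet("Marlin")); // visit #1
        System.out.println(svc.statefulGreet("Marlin")); // remembers: visit #2
    }
}
```

The scaling argument above follows directly: the stateless method can run anywhere, while the stateful one only works if the `visits` map travels with it.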
But let's first look at statefulness in the pre-cloud days — how was state handled before the cloud? Take client-server systems. Let's not go back to the very, very early days, but to, say, the 90s, when client-server was coming up: stateful database systems on the server side. Database-style transactions are common there. If you're using a relational database, you can write your code — Java in this case, though obviously any language works — so that you let the database handle the transaction. When you open a connection to the database and don't say otherwise explicitly, you're letting the database engine control your transaction: any time you do something — even a select, or an update or insert — it starts an implicit internal transaction. If everything goes right, the database takes care of committing your results; however, if there's a problem, it rolls back. So that's how it can be handled by the database.

Now, that's fine for an application using one single database. But what about systems where data is stored in multiple databases across your local network? And not only databases from the same vendor — you may actually have databases made by different vendors. You can have IBM DB2, Oracle, Sybase — which, actually, is where I used to work.
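To make the commit/rollback idea concrete, here is a toy in-memory sketch of my own (not real JDBC — just the semantics described above): every change goes into a pending buffer, a commit makes it durable, and a rollback discards it.

```java
import java.util.HashMap;
import java.util.Map;

// Toy "database" that buffers writes until commit, and discards them on
// rollback -- mimicking the transactional semantics a database applies.
public class ToyTransaction {
    private final Map<String, String> committed = new HashMap<>();
    private final Map<String, String> pending = new HashMap<>();

    public void put(String key, String value) { pending.put(key, value); }

    // Reads see pending changes first (the transaction's own view).
    public String get(String key) {
        return pending.containsKey(key) ? pending.get(key) : committed.get(key);
    }

    public void commit()   { committed.putAll(pending); pending.clear(); }
    public void rollback() { pending.clear(); }

    public static void main(String[] args) {
        ToyTransaction db = new ToyTransaction();
        db.put("order-1", "CREATED");
        db.commit();                           // change becomes durable
        db.put("order-1", "SHIPPED");
        db.rollback();                         // problem -- discard the update
        System.out.println(db.get("order-1")); // prints CREATED
    }
}
```

In real JDBC the same idea appears as `setAutoCommit(false)` followed by `commit()` or `rollback()` on the connection; the point here is only the all-or-nothing behavior.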
So let's say you're in a situation where that type of ecosystem exists — multiple distributed databases. What do we do? There's two-phase commit, a protocol that's still being used today, and in fact it's quite robust. So, quickly: what is two-phase commit? Essentially, it's a commit protocol that takes two phases, and it requires a coordinator, called the 2PC coordinator. What does this coordinator do? Using the same example, say I have DB2, Oracle, and Sybase. The first phase is like a vote: my coordinator asks DB2, are you done? What is your status? DB2 does its thing — a bunch of inserts and updates, whatever it is — comes back, and if it's all good, tells the coordinator: I'm good. The coordinator gets the result from DB2 and moves on to Oracle: same thing — are you done? — and if it's all good, collects the vote. Then Sybase: all good, collect the vote. Now the coordinator looks at all three, and it's all good, so in the second phase it goes back and says: DB2, commit; Oracle, commit; Sybase, commit.

That obviously works, and it seems like it should be pretty good — you're making use of a coordinator (not so much an orchestrator as a coordinator) making sure things are good. However, the problem is that the 2PC protocol is pretty inefficient — and it also makes an assumption that isn't correct, which is the network.
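The voting and commit phases just described can be sketched as a minimal toy (my own in-memory version — real 2PC implementations, such as XA transaction managers, also handle write-ahead logging and recovery):

```java
import java.util.Arrays;
import java.util.List;

// Toy two-phase commit: a coordinator first collects votes from every
// participant ("prepare"), then commits only if ALL voted yes.
public class TwoPhaseCommit {

    // Stands in for one resource manager (DB2, Oracle, Sybase...).
    static class Participant {
        final String name;
        final boolean voteYes;   // whether this participant can commit
        String state = "WORKING";

        Participant(String name, boolean voteYes) {
            this.name = name;
            this.voteYes = voteYes;
        }
        boolean prepare() { state = "PREPARED"; return voteYes; }
        void commit()     { state = "COMMITTED"; }
        void rollback()   { state = "ROLLED_BACK"; }
    }

    static boolean coordinate(List<Participant> ps) {
        boolean allYes = true;
        for (Participant p : ps) {            // phase 1: collect the votes
            if (!p.prepare()) { allYes = false; break; }
        }
        for (Participant p : ps) {            // phase 2: all-or-nothing
            if (allYes) p.commit(); else p.rollback();
        }
        return allYes;
    }

    public static void main(String[] args) {
        List<Participant> ps = Arrays.asList(
            new Participant("DB2", true),
            new Participant("Oracle", true),
            new Participant("Sybase", false)); // one "no" vote...
        System.out.println(coordinate(ps));    // ...aborts the whole thing
        ps.forEach(p -> System.out.println(p.name + ": " + p.state));
    }
}
```

Note how sequential and synchronous the coordinator loop is — exactly the inefficiency discussed next: every participant sits idle waiting for the coordinator's instruction.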
In distributed systems we all assume the network is reliable, but in reality, as we know, the network is very unreliable. There can be latency, traffic issues, or some problem where you can't even connect. In that case the 2PC coordinator could just hang: I'm on another system, asking DB2 all these things, and if my network is down, my 2PC basically...

Mary, sorry to interrupt you — there seems to be some echo on your transmission. Can you check that you don't have another tab open with Hopin, in case you're hearing yourself?

OK — I have it open, but not connected to the audio. Is it better? I'm so sorry, let me take a look. I don't have anything open in my other browser; in fact, I restarted my browser before coming in.

Do you have headphones or something?

I have headphones too, yes, I can use them — that will be better. Is it bad all of a sudden?

I think it's bad now, but it wasn't in the beginning — it's been growing.

OK, I'm so sorry. I've never had this happen. Can you hear me OK now?

Now it's even worse, actually.

Oh, really? Let me try one thing. I'm so sorry. OK — is this better?

Yes. No echo.

That's why we do it live. I think I know why: I was using an aggregate audio device rather than my headset, and the system may have a problem when I aggregate. OK, but that's good now. Sorry about that, folks, and thank you for your patience with me.
OK, let me resume. Thank you — and sorry, I wasn't paying attention to the chat; I saw something, but OK. Thanks a lot.

All right, let me go back to 2PC — two-phase commit. I'll be a little fast, but I think you get the idea that there's a disadvantage to 2PC: if your network is reliable and everything is good, it will be efficient — maybe. But you can have problems. It's a very synchronous protocol: as you saw, my coordinator needs to ask one database, get back the result, and go on to the next one, and in the meantime the rest of the databases are waiting for instructions — they can't do anything until the coordinator tells them what to do. In that kind of scenario it's not very efficient. But the thing is, it works, and it's something we need to be aware of. So that's 2PC, and it is still being used today — don't get me wrong. We'll also examine another approach to this type of distributed transaction work in a couple of slides.

OK, I'll be a little quick. Now let's look at Java — how does enterprise Java handle it? Right now it's Jakarta EE, which used to be Java Enterprise Edition, or Java EE — and even before that, J2EE. Java EE, or Jakarta EE, has EJB: Enterprise JavaBeans. There are session beans and also entity beans. There are stateless EJBs, which we won't talk about now, and there are also stateful session EJBs. With a session bean, the state is kept only as long as the session is in place; however, if the session goes away, you basically lose your data.
The way to handle that is with entity beans — which, obviously, complicates things — where you connect with your database or some other external storage that persists your data. So those are entity beans. For HTTP connections, servlets have the HTTP session, which is responsible for keeping the state of the data across requests in your HTTP session.

Then let's look at the client side. On the client side we have things like caching of server responses, and there's cookie-based authentication. With cookies, as we know, the cookie carries the session ID to the server — it basically presents itself and says, hey, this is my session ID. The server does all the validating, and if it's all good, authenticates you; then the client goes back and fetches the actual payload. And then there's the newer token-based authentication, such as JSON Web Tokens. That's more efficient, because the token carries its claims in the payload along with the request, so you don't need to make an extra trip. These are just examples of how statefulness is preserved in a living system.

All right, I've spent enough time on this — and I had those audio problems — so let's now take a look at stateful microservices in cloud-native environments. You might be scratching your head: isn't cloud-native all about stateless containers? I won't go into all the details, but let's touch on it quickly. Cloud-native — what is it? It's an overarching approach.
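To illustrate the token idea, here is a tiny JWT-style sketch using only the JDK (my own illustration of the shape of the mechanism — use a real JWT library in practice): the state travels inside a signed token, so the server can verify it without a server-side session lookup.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Toy JWT-style token: header.payload.signature, each part Base64URL-encoded,
// signed with HMAC-SHA256 so the server can verify it statelessly.
public class MiniToken {
    private static final Base64.Encoder ENC = Base64.getUrlEncoder().withoutPadding();

    static String sign(String payload, String secret) throws Exception {
        String header = ENC.encodeToString("{\"alg\":\"HS256\"}".getBytes(StandardCharsets.UTF_8));
        String body = ENC.encodeToString(payload.getBytes(StandardCharsets.UTF_8));
        return header + "." + body + "." + hmac(header + "." + body, secret);
    }

    // Valid only if the signature matches -- any tampering with the payload
    // changes the HMAC, so no server-side session store is needed.
    static boolean verify(String token, String secret) throws Exception {
        String[] parts = token.split("\\.");
        return parts.length == 3 && hmac(parts[0] + "." + parts[1], secret).equals(parts[2]);
    }

    private static String hmac(String data, String secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return ENC.encodeToString(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        String token = sign("{\"user\":\"dory\",\"cart\":3}", "server-secret");
        System.out.println(verify(token, "server-secret"));        // true
        System.out.println(verify(token + "x", "server-secret"));  // false -- tampered
    }
}
```

Contrast this with the cookie approach: there, the session ID is just a key, and the actual state lives in a server-side store that every replica must be able to reach.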
Cloud-native is a term that, if we go back through some history, was coined by Netflix — early adopters of all these cloud-native things — because they wanted to leverage cloud infrastructure to build systems that are highly available, very scalable, and performant. Those three goals also bring up something dear to my heart, reactive — which is slightly different, and which I'll talk about in a little bit. These are the goals of doing things the cloud-native way: being lightweight, very efficient, performant, and so on.

Cloud-native is also guided by the twelve-factor app principles, a methodology drafted by developers at Heroku. Essentially it's a set of guidelines and best practices for portable and resilient applications that are well suited to cloud environments. One of the factors indicates the need for self-contained services, which are to be deployed as stateless processes — and so far, the microservices architecture is the one that can satisfy this requirement. The nice thing is that it's a set of guidelines: it doesn't enforce the tools and libraries the applications must use, but provides solid concepts the applications should follow. Here's a quick listing I put together that summarizes the twelve factors. I won't go into all of the details, but I want to point out number six, processes: execute the app as one or more stateless processes — very much the microservices approach. And there's also number nine, disposability.
Disposability means maximizing robustness with fast startup and graceful shutdown — again, really the microservices space. So how do we preserve state across sessions, transactions, and network boundaries? Let's visit a couple of areas.

First, techniques and mechanisms we still use today. Caching: for example, the JCache specification in Java, with providers you can use, such as Hazelcast — I have a quick example about session persistence that I'll point you to shortly. There are also database-style transactions, which are still in place; cookies, which are all still being used; sessions, where we make use of the HTTP session to carry the state of the data; and tokens like JWT.

Then let's move into cloud-native infrastructure. We can't avoid talking about Kubernetes — and of course there's OpenShift, which takes Kubernetes up another step. So let's quickly look at the relevant features in Kubernetes and OpenShift. First, there's the concept of leader election: in any distributed system you need algorithms that help with selecting a leader, because if you think about it, with all these replicated pods and worker nodes running together, somebody has to make decisions — so you hold an election and select a leader. You could use Apache ZooKeeper, for example, but Kubernetes has built-in leader election via etcd, so you can rely on that rather than doing something extra.
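As a toy illustration of the leader-election idea (my own sketch — not the Raft-based protocol etcd actually uses): as long as every replica applies the same deterministic rule to the same membership view, they all agree on one leader, and re-elect when the leader disappears.

```java
import java.util.Set;
import java.util.TreeSet;

// Toy leader election: every replica applies the same deterministic rule
// ("lowest live node ID wins"), so they all agree without a middleman.
// Real systems (etcd/Raft, ZooKeeper) add terms, quorums, and leases.
public class ToyLeaderElection {
    private final Set<String> liveNodes = new TreeSet<>(); // kept sorted by ID

    void join(String nodeId) { liveNodes.add(nodeId); }
    void fail(String nodeId) { liveNodes.remove(nodeId); }

    // The current leader: lowest live node ID, or null if the cluster is empty.
    String leader() {
        return liveNodes.isEmpty() ? null : liveNodes.iterator().next();
    }

    public static void main(String[] args) {
        ToyLeaderElection cluster = new ToyLeaderElection();
        cluster.join("node-b");
        cluster.join("node-a");
        cluster.join("node-c");
        System.out.println(cluster.leader()); // node-a
        cluster.fail("node-a");               // leader dies...
        System.out.println(cluster.leader()); // ...node-b takes over
    }
}
```

The hard part real systems solve, which this sketch skips, is agreeing on the membership view itself when the network is unreliable — the same network assumption discussed earlier with 2PC.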
So you can use that without any external leader election algorithm.

Kubernetes also has StatefulSets. As the name already implies, a StatefulSet is a native Kubernetes feature that helps you manage the statefulness of your workloads. It's an infrastructure-level capability rather than an application-level one, but it helps keep statefulness at the infrastructure level: it manages your data and your versioning, and — especially in any system, whether test or production, where you're doing frequent upgrades — it manages your components going through that whole lifecycle for you. So that's StatefulSets.

Then there are persistent volumes, which work together with persistent volume claims — requests for storage. It actually gets quite complex, and I won't go into the details; the Red Hat OpenShift folks are very familiar with this, and somebody can explain it in much deeper detail. But at least we know that Kubernetes has persistent volumes, persistent volume claims to request storage, and storage classes — its own way of managing all of these volumes and mapping them onto the actual, complicated settings of your whole cluster.

And then in Kubernetes there's also the concept of session affinity, or sticky sessions — I'm using a sticky-cookie picture here. The idea is that in your cluster you have many different nodes that can do the job.
But the best way to do it is to keep the same component that serviced a client servicing that same client again, so you avoid any re-initialization. So there's this concept of a sticky session, which handles the session in a cookie-like way, but at the Kubernetes infrastructure level. OK, so that's that.

Now let's get into the programming side of things, at the Java level. What can we do? There are programming design patterns you can utilize, and first and foremost, the most famous is the saga pattern. A saga, outside of the computing context, is a long story — and that's the idea: you can have a transaction that spans a long time, or spans multiple components, in a highly distributed manner. So, quickly: saga is interesting because it's not the same as a traditional database transaction. If a problem occurs, you don't call it a rollback — there's no such concept as rollback, although the effect can be similar, because you're trying to restore the state of your components to what it was before the transaction happened. Instead, it's called a forward strategy: if anything goes wrong, you call upon your compensation modules to compensate for what's been done and restore things. That's the idea — a quick nutshell of what saga is.

There are two ways of coordinating a saga. One is choreography, which is the event-driven way.
In that model you don't make use of an orchestrator — a middleman directing traffic; instead, components react in a self-initiating, event-driven way. The other way is orchestration.

Now let me go through this quickly — I know I've been talking a lot. (And by the way, this is Chicago — hello from Chicago!) I've actually come up with a design that I'll be implementing, hopefully soon when I can find the time: a choreography, event-driven saga. I'm using an order processor to illustrate how we can do it. I'm working with Open Liberty and MicroProfile, and MicroProfile has a new feature, Long Running Actions (LRA), that came out just in May — I have resources for you if you're interested. As you can see, an order comes in — say via Kafka, event-driven — which triggers the order microservice to get busy. The inventory check and the credit check are subscribed to my "order created" event, so once that event is published, those two get to work. In the meantime, there are also events feeding back to the order service, so the order can keep updating its status as well. Now, if everything is good — the inventory is checked and I have the product that's being ordered, and the credit is checked and the customer is good — then I tell the payment processor: you can get to work now.
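The flow just described can be sketched as a toy in-process choreography (my own illustration — no Kafka or MicroProfile LRA here, just a plain-Java event bus to show the pattern: services react to events and publish new ones, with no central orchestrator):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy choreography saga: services subscribe to events on a bus and publish
// follow-up events; nobody is in charge. A real system would use Kafka topics.
public class ChoreographySaga {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();
    final List<String> log = new ArrayList<>(); // what happened, in order

    void subscribe(String event, Consumer<String> handler) {
        subscribers.computeIfAbsent(event, k -> new ArrayList<>()).add(handler);
    }

    void publish(String event, String orderId) {
        log.add(event + ":" + orderId);
        subscribers.getOrDefault(event, List.of()).forEach(h -> h.accept(orderId));
    }

    public static void main(String[] args) {
        ChoreographySaga bus = new ChoreographySaga();
        // Inventory and credit both react to OrderCreated...
        bus.subscribe("OrderCreated", id -> bus.publish("InventoryChecked", id));
        bus.subscribe("OrderCreated", id -> bus.publish("CreditChecked", id));
        // ...and payment reacts once credit is checked (simplified: a real
        // service would wait for BOTH checks before charging).
        bus.subscribe("CreditChecked", id -> bus.publish("OrderPaid", id));
        bus.subscribe("OrderPaid", id -> bus.publish("ShipmentRequested", id));
        // Compensation, the saga's "forward strategy": a failure event triggers
        // a compensating action instead of a database-style rollback.
        bus.subscribe("PaymentFailed", id -> bus.publish("InventoryReleased", id));

        bus.publish("OrderCreated", "order-1");
        bus.log.forEach(System.out::println);
    }
}
```

Notice there is no coordinator anywhere: the causal chain emerges purely from who subscribes to what, which is both the appeal and the debugging challenge of choreography.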
The payment processor then goes off and works with the payment authorizer — a third party, all of that. When it's all good, its status changes, the "order paid" event gets generated, the order service marks itself paid, and that triggers the shipment processor to go forward.

This is a simple way of illustrating a usage example; as you can see, it can get very, very complicated. The thing is, in using this approach I also need to write compensators — components that go hand in hand with the actions. And that's actually a bit of the controversy about using compensators: it means I'm tightly coupling each compensator to a certain action along the way. I'll leave the compensator approach for another talk. But that gives you a quick example of how you can do this with the saga pattern.

OK, back to my slides — that was very quick, and I realize my time is running out. So, quickly: I mentioned Long Running Actions and the saga interaction pattern. How can you do it? Again, in MicroProfile we have LRA, and I have information for you to check into.

Now, what about reactive, quickly? Reactive systems — I pointed out responsiveness earlier. The four tenets of reactive — responsive, message-driven, elastic, and resilient — essentially satisfy the cloud-native goals too.
But there are differences, because in reactive systems we deal with data as streams, versus the traditional imperative way, which isn't necessarily streams — although if you get into Project Loom, it will be different; it will change the whole course, and that will be interesting to see. I just wanted to point out that a reactive approach in your microservices, together with an event-driven way of doing things, can all go hand in hand.

OK, I may not have as much time for the code example as I was hoping — and I'm sorry again about my audio — but I want to point out that you can go to our interactive lab sessions for Open Liberty. Everybody, you can visit Open Liberty — this is the Open Liberty link. It's been exciting for me to start working on it, because it really is a cloud-native-ready, very flexible framework and runtime. I'll also quickly point out that underneath the hood, the runtime operates on Eclipse OpenJ9. OpenJ9 was IBM's clean-room implementation of the JVM — J9 — which has been donated to the Eclipse Foundation, and it's now packaged up, at no cost and with no extra baggage, as the IBM Semeru runtime. I suggest you look into it; Open Liberty already works with it. Open Liberty also works very well with MicroProfile, which is Jakarta EE compatible. In MicroProfile you can find all the features you need — for example Kafka, gRPC, GraphQL, all of these things — and LRA, which is what we were talking about. So I suggest you take a look into it.
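To make the "data as streams" point concrete, here is a small example using the JDK's built-in reactive-streams support (`java.util.concurrent.Flow`, available since Java 9) — a sketch of the publish/subscribe-with-backpressure shape, not of any Open Liberty or MicroProfile API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Data-as-streams with the JDK's Flow API: a publisher emits items
// asynchronously, and a subscriber pulls them with backpressure (request(n)).
public class FlowDemo {

    static List<String> collect(List<String> items) throws InterruptedException {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);              // backpressure: one at a time
                }
                public void onNext(String item) {
                    received.add(item);
                    subscription.request(1);   // ask for the next item
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete()         { done.countDown(); }
            });
            items.forEach(publisher::submit);
        } // close() signals onComplete once pending items are delivered
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect(List.of("order-1", "order-2", "order-3")));
    }
}
```

The subscriber asking for one item at a time is the key difference from imperative iteration: the consumer, not the producer, sets the pace — which is what lets reactive systems stay responsive under load.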
And I know I only have two minutes. So, to take home: you can go to the Open Liberty guides, and under persistence you'll find what I was going to show — caching HTTP session data using JCache and Hazelcast. You can get hands-on after this; the guide will explain how you can leverage Hazelcast, which in this case implements JCache, for session persistence. OK, I realize I only have one minute, so let me quickly go back to my slides. That's the JCache and Hazelcast example, and there's also another example of mine that runs a stateful Open Liberty application in Kubernetes. I'll share the slides with the organizers, with resources and links — please visit them: specifically the saga design pattern, MicroProfile LRA 1.0, and a nice blog post written earlier this year. There are links to Open Liberty, MicroProfile, and Jakarta EE, and other IBM resources you can visit — there's also "Programming with Java on IBM Cloud," which explains how to get started with cloud-native Java. I'm also live streaming on Twitch every Wednesday — in fact, this afternoon's session will be about Kafka. There are other talks too, plus our expert TV and meetups, and my Chicago Java Users Group, if you'd like to join. And there's a free IBM Cloud account, and we support OpenShift.

And with that, this comes to a close — I think I'm right on time. Thank you very much. If you'd like to stay in touch, please connect with me on Discord — that's my Discord link; I'd like to continue the conversation there — and follow me on Twitter and everywhere else. Share with me what you're working on.
Thank you so much. Thank you.