So before we dig in, let me tell you more about the payments market. The payments market is really gigantic. If you look at the companies that are there, there are lots of them, more than 500. And if you're buying something in an online shop, you can pay for your goods using all those payment methods: you can use your Visa card, or you can use some kind of wallet solution. In total, there are more than 200 payment methods out there. So what does the process of online shopping look like? First you navigate to your online shop, you put your items into the shopping cart, and at some point, when you're done, you submit it. After you submit the shopping cart, you're probably redirected to a web page for online payment, and you probably need to fill out some details, like your name, your credit card number, the expiration date, and then you click the pay button. After that, you get a notification that everything is okay, your order is being processed, and after a couple of days you get the package. What you probably don't know is that behind the scenes, when you submit your cart, the online shop sends an order to a payment organization. The order says that you bought, say, t-shirts for 65 bucks, and the shop expects a payment for them. After you submit the payment form, the payment company does the authorization: basically, it locks the money on your credit card so you can no longer spend it. It's still there, it's just locked. After the authorization is done, there are two separate processes that happen concurrently. The first one is anti-money laundering, and the second one is anti-fraud. Anti-money laundering makes sure you're not supporting, for example, a terrorist organization, whereas anti-fraud makes sure that you're allowed to pay with this card, for example, that it has not been stolen. And if everything is okay, the payment company does the capture phase. This is the phase where money actually flows from one account into another. At any point in time you can do a refund, which just means sending the money back: you ordered some t-shirts, we don't have them, here's your money back. And you can also issue a void authorization, which means cancelling the transaction. Have you heard about this man? This is Frank Abagnale. Have you seen the movie called Catch Me If You Can, with Leonardo DiCaprio? Yes? Basically, this is the story of this man. He claimed to have had eight identities, including a lawyer and a pilot. In the 1960s he stole over two and a half million dollars from US banks using forged checks, so he was a check frauder. He was sentenced to prison, he escaped twice, but in total he spent about five years in prison. He currently works with the FBI, and he helped track down other check frauders. How ironic is that? Once he started a legal job, he earned more than ten million dollars. So this is the irony, right? You need to steal two and a half to earn ten legally. And he said that what he did in his youth is hundreds of times easier today: technology breeds crime. And I think that's very true.
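Since we'll refer to those phases throughout the talk, here is a minimal sketch of the payment lifecycle as a Java type. This is purely illustrative; the names are mine, not from any real payment API.

```java
// Hypothetical sketch of the payment lifecycle described above.
// All names are illustrative, not taken from a real payment API.
public enum PaymentPhase {
    ORDER_SUBMITTED, // the shop sends the order to the payment organization
    AUTHORIZED,      // funds are locked on the card; AML and anti-fraud checks run concurrently
    CAPTURED,        // money actually flows from one account to another
    REFUNDED,        // captured money is sent back to the customer
    VOIDED           // the authorization is cancelled before capture
}
```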
If a thief somehow gets hold of your credit card, it takes a minute to spend the funds on it. It's very easy. So let me tell you how to build anti-fraud software. A very small, very simple one, but it has all the bits and pieces that are in real anti-fraud software. We're going to use OpenShift to deploy everything. There will be a transaction repository; this is an Infinispan cluster. There will be transaction data in it: cardholder data, something like your name, the credit card expiration date, the masked credit card number. You cannot store the full credit card number; you need to mask part of it. There will be a MySQL database, for example for user data. This might be a recurring user; perhaps he already bought something in the past, so we can verify him more quickly. And we will build an anti-fraud app. The anti-fraud app will query the Infinispan cluster, pull the transactions, query the user data, try to match everything, and send the scoring back. Because anti-fraud software is all about scoring: you take a transaction in, do all the evaluations you need, and you return a scoring, a number. I really like using an onion model for creating software. This is Uncle Bob's interpretation; the original model was made by Alistair Cockburn, and it's called the hexagonal model. However, I like the idea. In the center of your service is your model, your ideal view of the world. You build services and use cases around it, and your integration bits are external, let's say. This makes the dependencies very clear: a model cannot depend on any service around it, which actually makes sense, while a service can depend on the model. So let me tell you very shortly about the technology we are going to use. OpenShift is a cloud platform made by Red Hat, based on Kubernetes and Docker. We'll use it as our deployment platform. We will be using Spring Boot, a very nice project that I view as a combination of three things. The first one is configuration: Spring Boot looks at your classpath and tries to figure out what your configuration is, what to run to get your application moving. The second part is packaging: Spring Boot offers a Maven plugin which takes all of your dependencies and packages them into a single jar. And the third one is dependency management: Spring Boot starters are basically a set of BOMs, bill-of-materials files, and they already contain predefined versions of the dependencies you're using. For example, if you use the Infinispan Spring Boot starter, it will get you the latest stable version of Infinispan. And the third piece is Infinispan itself. It is an elastic and dynamic in-memory data store with key-value semantics, so from the API point of view it's basically a map. You can deploy Infinispan along with your app, in embedded mode as we call it, together with your war file or inside your jar, or you can connect to a remote cluster. Now, what's pretty neat is that even though Infinispan is an in-memory data store, it can persist data to any other store that you like. It can be a relational database, or it can be Cassandra, whatever. We have lots of integration bits: for example Camel, and we have CDI integration. Infinispan is also a drop-in replacement for memcached, and we have C++ and C# clients as well. There are a lot of features around querying the data; however, I'm not touching that part.
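To illustrate that dependency rule with a tiny sketch (all the names here are mine, just for illustration): the model type depends on nothing, a service may depend on the model, and integration code sits on the outside.

```java
import java.math.BigDecimal;

// Sketch of the onion/hexagonal dependency rule; all names are illustrative.
public class OnionSketch {

    // Innermost layer: the model, the ideal view of the world.
    // It depends on nothing else.
    record Transaction(String id, BigDecimal amount) {}

    // Middle layer: a service. It may depend on the model, never the reverse.
    interface ScoringService {
        int score(Transaction transaction);
    }

    // Outermost layer (REST controllers, Infinispan and JDBC adapters...)
    // implements and calls the services, so every dependency points inward.
}
```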
We did massive refactoring in Infinispan 9, and there will probably be lots and lots of presentations about it, so I don't want to steal their thunder. All right, let's write some code, shall we? The first thing we need to do is spin up a local OpenShift cluster. In order to do this, you need to download the OpenShift client tools for your environment. Once you download and install them, you will have the oc command on your machine. Can you see this from the back? Okay. So in order to spin up the local OpenShift cluster, all you need to do is type oc cluster up. I prepared a small script for bootstrapping the infrastructure. So let's wait a bit. Currently it's booting up; it's installing the registry, the router. Here it is. Okay. So let me initialize the infrastructure and tell you what's going to be there. The first thing is a MySQL database and a service for it. Basically, Kubernetes and OpenShift deploy applications using so-called pods; those are Linux containers. And you can create services, which act as load balancers for them. They also create a DNS entry; we will need the DNS entry later on, in a moment. So we create a MySQL pod and a service, and we also create an Infinispan deployment, which will create our pods. And here's the tricky part that I want to show you: we need to tell Infinispan to use Kubernetes discovery. Infinispan is based on JGroups as its clustering service. JGroups needs a protocol for exchanging data and also a protocol for discovery. Here we are using the Kubernetes JGroups stack, which uses KUBE_PING underneath. And in order to get KUBE_PING to work, you just need to set an environment variable and get the namespace using the downward API. So everything is up now; let's check it out. Everything is running. Let's have a look at the transactions. We have a couple of transactions. So we've started all the infrastructure that I showed you. Now, I would like to show you the way I like to do development with OpenShift. Maybe there are better ways, but perhaps you will like it. The first thing is that I'm running a local cluster, so the IPs of all the pods deployed on OpenShift are reachable from my IDE. All I need to do is list all the pods and get their IP addresses. So let's have a look. We have MySQL, and I'll start to use it in my application. This is the main configuration file for Spring Boot; it's called application.properties. What I'm saying here is that we are using the Infinispan Spring Boot starter. By default, it uses the hotrod-client.properties file; this properties file is used in all our manuals. However, you can override that: you can tell the starter that the properties are not stored in hotrod-client.properties but in application.properties. So right here I'm just setting the MySQL IP address, and the same applies to the Infinispan Hot Rod endpoint. So now we need to figure out how to connect to this remote Infinispan cluster. This is pretty easy, actually. Here's our application, the anti-fraud service, and here's our core. The core part consists of a model, and all of it is very simple. It has query data, the anti-fraud query data: we just put everything in it, cardholder data, transaction info, and possibly the user's info, if it exists. And we get anti-fraud response data back. The anti-fraud response, as I told you, is just a transaction ID and a scoring. Nothing more.
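Just to make that response type concrete, here is a hypothetical sketch; the class and field names are my guesses at what the demo uses.

```java
// Hypothetical sketch of the anti-fraud response described above:
// just a transaction ID and a numeric scoring, nothing more.
public class AntiFraudResponse {

    private final String transactionId;
    private final int scoring; // higher means more suspicious

    public AntiFraudResponse(String transactionId, int scoring) {
        this.transactionId = transactionId;
        this.scoring = scoring;
    }

    public String getTransactionId() { return transactionId; }

    public int getScoring() { return scoring; }
}
```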
So the anti-fraud software consists of two services. The first one is the processor. It basically takes all the anti-fraud rules, goes through them one by one, and sums up the scoring. Nothing fancy; these are simple rules, and we have a couple of them. We have a card expiration date rule, for example, or a transaction amount rule. The transaction amount rule is pretty simple: it takes the transaction amount and checks if it's more than 100. If it is, it bumps the scoring up by 50, and if it's not, if it's a really small amount of money, it takes the scoring down. In our example there are three such rules; in real-world anti-fraud systems there are thousands of them. So here's the anti-fraud core of our system, and this is the integration bit, which integrates with the rest of the transaction system. One of the things I'd like to show you is the async transaction receiver. What does it do? Let me scroll down a little bit. It connects to a remote cache: it connects to the remote Infinispan cluster and takes the cache holding the transactions. During init, it loads all the transactions asynchronously, as CompletableFutures, into a queue. All transactions are in the form of a CompletableFuture because when we are loading the whole data set, we have both a value and a key; however, when we are just listening for events on the cluster (here is the notification part), whenever a new transaction arrives we get only a key, so we need to load the value asynchronously, and that's what this part of the code does. So everything is based on CompletableFutures and a blocking queue: we put everything we receive into the queue and let it sit there, and the other parts of the system pick it up and process it. So let's connect to the remote Infinispan cluster. The first thing we need to do is build a remote Infinispan configuration. Let me show you: we return a new ConfigurationBuilder (this is the one from the Hot Rod client) and we add servers. So here it is. Let's have a look. The reason I use IP addresses is that I want to run Spring Boot from my console; I want to make it super easy for development. And as you can see, something is actually receiving some transactions. However, if you have a look, it's bloody slow. Everything happens pretty slowly: we had it running for a couple of seconds and we processed only three transactions. So what can we do? We could use some profiling tools to spot the bottleneck, but there's another, very simple way. Have you heard about aspect-oriented programming? It's a technique that allows you to take a bean and a method on it and invoke some code before the real method is invoked and after it returns a value. So we can write a very small piece of code and measure how long each of the steps took. Let's have a look. I created a very small annotation called @Timed. The @Timed annotation can be applied to methods. And there's an aspect, called Timer, which basically looks through all the code, takes all the methods annotated with @Timed, logs the time before the actual invocation, and computes the amount of time it took to invoke the real method. Let me show you this in action. In our application, we have the timed class.
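Here is a minimal sketch of what such a timing aspect could look like with Spring AOP. The @Timed and Timer names follow the talk, but the code itself is my reconstruction, not the original.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

// Sketch of the @Timed annotation and Timer aspect from the talk;
// the original code may differ in details.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Timed {}

@Aspect
@Component
class Timer {

    // Wraps every method annotated with @Timed: note the start time,
    // invoke the real method, then log how long the invocation took.
    @Around("@annotation(timed)")
    public Object measure(ProceedingJoinPoint pjp, Timed timed) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(pjp.getSignature() + " took " + elapsedMs + " ms");
        }
    }
}
```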
So this is the end-to-end path through our service. As you can see, we get a transaction from the queue, we map it to an anti-fraud query, we invoke the processor, and we send the result back. So let's have a look at how long it takes to invoke all the services: the anti-fraud query, the processor, and the final bit, the sender. Also timed. (From the audience: the one before last should go on the processor, below, on the method. Oh, sorry. In the processor. On the method. Yeah, yeah, exactly.) Let's make it smaller. So it completed, let's see. The processor is pretty long, 300 milliseconds. But this one is very bad: the anti-fraud query method takes two seconds. So what can we do about it? We need to figure out how to optimize it. The problem is with this method; here is the culprit. Wait a moment. But let's assume we cannot modify this code: it's a remote service, and we cannot do anything with it. Spring offers a very good capability called Spring caching. It's a concept in Spring core, and it's another annotation that you can put on top of methods. You're just saying: apply caching to this method. So if Spring sees matching parameters, say a first and last name that is already in the cache, it just returns the cached value without ever calling the method. And caching, of course, needs proper configuration. So let's have a look. The first thing we need to do is enable caching, and that's pretty simple. If we left it like this, Spring would use its default implementation, probably based on Guava. But we want to use Infinispan, right? For this we need an Infinispan embedded cache manager. So we need to configure a cluster with all the caches in a clustered mode; for this demo it doesn't matter much which one, as long as the data is shared across the nodes. And we need to create the cluster, with a cluster name, for caching. So let's get to work. We need to create a GlobalConfigurationBuilder, and then we go into the transport section and set the cluster name. The cluster name is pretty important, because if you run many clusters in production, they will try to form one unified cluster; the cluster name is just a way to distinguish them. We also need a cache configuration, which can be created using a ConfigurationBuilder. Careful, though: it's not the one from the Hot Rod client, it's this one. And we need to set the cache mode. We have a couple of cache modes in Infinispan; what I would recommend is async replication, REPL_ASYNC. It just makes things easier for a demo like this. Then we need a DefaultCacheManager; an embedded cache manager can take a global configuration as well as a cache configuration as parameters. So let's instantiate it: build, and build. We also need to define the two caches that we will use with the @Cacheable annotation. You probably remember we used them in the annotations: the first one will be for user data, and the second one will be for GeoIPs. And we need to build. So we have everything up and running, and all we need to do now is return a SpringEmbeddedCacheManager, which takes a single parameter, the embedded cache manager. We just return it here. I also created a very small bean called CacheInspector, which basically goes through all the caches and prints them out, just to make sure we know what's going on. Let's go back to our pipeline and use our new cache inspector: just before sending out the results, we will print out the cache contents. Okay.
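Putting those steps together, the configuration class could look roughly like this. It's a sketch assuming the infinispan-spring embedded integration; the cluster and cache names ("anti-fraud", "user-data", "geo-ips") are illustrative, and the demo's real code may differ in details.

```java
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.spring.provider.SpringEmbeddedCacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch of the caching setup described above; names are illustrative.
@Configuration
@EnableCaching
public class CachingConfig {

    @Bean
    public SpringEmbeddedCacheManager cacheManager() {
        // Transport plus cluster name: nodes with the same name form one cluster.
        GlobalConfiguration global = new GlobalConfigurationBuilder()
            .transport().defaultTransport().clusterName("anti-fraud")
            .build();

        // Embedded cache configuration (not the Hot Rod client one!),
        // using async replication as recommended in the talk.
        org.infinispan.configuration.cache.Configuration cacheConfig =
            new ConfigurationBuilder()
                .clustering().cacheMode(CacheMode.REPL_ASYNC)
                .build();

        DefaultCacheManager manager = new DefaultCacheManager(global, cacheConfig);
        // Define the two caches used with @Cacheable.
        manager.defineConfiguration("user-data", cacheConfig);
        manager.defineConfiguration("geo-ips", cacheConfig);
        return new SpringEmbeddedCacheManager(manager);
    }
}
```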
Let's have a look. Hmm, an error related to the transport: transport should be configured in the GlobalConfiguration. Now it should be okay; I probably made a typo somewhere. And here is the dump of the cache content. But let's wait a moment and see if the caching really works. As you can see, the last names and first names are in the cache. But we have no idea whether the @Cacheable annotation really works, whether it sped things up. Here we have a transaction taking three seconds; for entries that are already in the cache it should be really quick. Let's do another couple of things. Now, the good thing about Infinispan is that the cache is shared across the instances. So once you create multiple instances of the service, they will form a unified cluster. You can probably see that after a minute or two, the number of entries is increasing really, really fast. Let's see how it went, whether we're lucky or not. As you can see, it's very quick: eight milliseconds. So this one was definitely taken from the cache. But to be honest, there's another method we need to take a look at: the processor. Of course, you could do the same steps using the @Timed annotation, but let me just tell you where the problem is. If you look at the service, the IP rule basically takes the IP from the query and uses a GeoIP service to determine the country. For example, if the user's country is the same as the transaction country, then it's probably not a problem. However, if my card is registered in Poland and there's a transaction from Nigeria, oh, that might be a problem. So let's have a look at the GeoIP service. It's again very simple, and again we cannot modify it. If you look at it, it tries to do some caching itself: it basically takes the key, checks if it's in a cache, and if not, it computes the value and puts it into the cache. Whenever I use Infinispan and I see code like this, I just remove it, because all the do-it-yourself caching is probably worse than using a sophisticated framework. So let's remove it and simplify the code: just get the country, which should probably be based on the IP. And again, let's use the @Cacheable annotation. We need to get back to the integration configuration and use the proper cache name. Spin it up. You can see the nodes have formed a cluster, and you can see there are some entries from the GeoIPs as well. So let's spin it up. When you look at this, we have a lot of entries in the cache, but those transactions are still not very fast. I'm sure that if you look at the code, you can spot an error right here. Do you see it? What might be going on? Why is the processing still so slow? The thing is with CompletableFutures. CompletableFutures are really great for async jobs, but we need to keep an eye on the thread pools they're using. The transaction queue uses an executor right here: the executor service basically takes the data from Infinispan and puts it into the queue. However, all those CompletableFutures are created using the executor service right here, right? So the question is: what happens if you call thenApply on a CompletableFuture that was created with a custom executor? Which thread pool will be used? It turns out the thread pool from the original CompletableFuture is used. So we are taking the executor meant for the asynchronous queue and using it for processing transactions. Well, that's insane, actually.
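For illustration, replacing the do-it-yourself caching with @Cacheable looks roughly like this; GeoIpService, countryFor, and the slow lookup are my stand-ins for the demo's code.

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Sketch: the DIY "check the map, compute, put into the map" code is gone;
// Spring intercepts the call and consults the Infinispan-backed "geo-ips"
// cache. The names are stand-ins for the demo's GeoIP service.
@Service
public class GeoIpService {

    @Cacheable("geo-ips")
    public String countryFor(String ip) {
        // The slow lookup runs only on a cache miss; on subsequent calls
        // with the same ip, the cached country is returned directly.
        return slowCountryLookup(ip);
    }

    private String slowCountryLookup(String ip) {
        // placeholder for the real (unmodifiable) GeoIP resolution
        return "PL";
    }
}
```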
So what you should probably do is create an executor right here; just keep your thread pools close to the code. So I create one, and we need to change all those methods from thenApply to thenApplyAsync, supplying the executor as the last argument. You can even improve it a little further: we don't care about the results here, right? Just fire and forget. So probably all the transactions that arrived are already processed. Those are the tricks you can use to optimize your code very quickly: all you need to do is add Infinispan to your classpath and use the @Cacheable annotation wherever caching is possible, wherever you can cache anything. Now, the nice thing is that all the instances form a cluster, so whenever you add new instances, the cluster becomes bigger. And if you need additional capacity, all you need to do is spin up a few more nodes. And that's it. So now the last piece: how to put this into OpenShift. This is actually very simple. If you look at the configuration file, we use the IP address right here. Of course, in OpenShift you cannot do this; you need to rely on DNS names, and DNS names are bound to the services. So all you need to do is get the services from OpenShift. Here we have mysql right here, and we need the transaction repository; let's put it right here. And finally, we can use the fabric8 Maven plugin to package the app, because it's very easy to package a Spring Boot application: it's only one jar, and that's it. You can use an Alpine image as a base; those are the smallest images for Java. Just put your application in, whatever you want, invoke java -jar, and you're done. So let's do it: run the Maven build with the fabric8 plugin. And as it deploys, let me show you the OpenShift console. The local console is always available at port 8443, and you need to use the credentials developer/developer. We have the whole application right here: you can see the deployed mysql service, the transaction repository, the transaction creator, which I haven't shown you (this is the service that creates all the transactions on the fly), and the user creator. The good thing is that if you need additional capacity, all you need to do is scale up, for example, the transaction repository. It takes a while, but if you look in the logs, there are now two nodes inside the cluster. Again, this is very good for things like transactional systems. For example, imagine that you're processing transactions before Christmas and everybody is buying presents, so you need additional capacity and you scale your cluster up. And once the holidays are over, you scale it down, because you don't want to pay for capacity you don't need. So let's wrap it up. Spring Boot is really good for writing what I call the wiring code: you just take all the bits of your application and glue them together. It's a brilliant framework for doing things like this. It packages everything into a single jar, which has a lot of advantages: you just type java -jar and you're ready to go. It has lots and lots of integrations, so you can integrate it with Infinispan, with whatever you really want. Infinispan is a dynamic and elastic solution which fits perfectly into the cloud: if you need additional capacity, you just add a couple of nodes; if you don't need it, you just remove them.
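To make the thread-pool fix concrete, here is a small self-contained sketch of the difference between thenApply and thenApplyAsync with an explicit executor; the names are illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Self-contained sketch of the fix described above; names are illustrative.
public class ThreadPoolFix {

    public static void main(String[] args) {
        ExecutorService workers = Executors.newFixedThreadPool(4);

        CompletableFuture<String> incoming =
            CompletableFuture.supplyAsync(() -> "tx-1");

        // Plain thenApply may run the processing on whichever thread completed
        // the future (in the talk: Infinispan's notification thread).
        // thenApplyAsync(fn, executor) pins the processing to our own pool.
        incoming.thenApplyAsync(tx -> tx + " scored", workers)
                .thenAccept(result -> System.out.println(
                    Thread.currentThread().getName() + ": " + result))
                .join();

        workers.shutdown();
    }
}
```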
It doesn't use a single master, so you don't need to worry that you've killed the most important node in the cluster, because there is none; it operates using multiple equal nodes. You can use it as a data store, or you can put it along with your application. It's perfect for caching, and together with Spring Boot it makes caching very easy. And finally, OpenShift: it's a very good runtime platform, it's our new runtime, basically. You just take everything and run it. OpenShift has containers as its base technology and does all the housekeeping. You could use the containers alone, without OpenShift, but then you would need to create lots of copies of the servers yourself and remove them yourself; OpenShift does all that housekeeping for you. So that's all.

Audience: Are you going to integrate transactions? Assuming the data needs to be transactional right at the start of the payment, when you actually write to the database, in your case...

Audience: You said that there is no master node, so where is the data stored?

Me: The data is stored across all the nodes in the cluster. If a node goes down, there's a state transfer, some sort of data rehash going on, so the cluster distributes the data again, or some parts of it, just to make sure that, for example, at least two copies of the data are always available. If one copy is lost, it will be redistributed to another node.

Audience: How many nodes do you need, at least, for this kind of setup?

Me: It depends. If you're doing standard clustering, as I know it, you'll probably start with a couple of nodes and expand by a couple more as needed. You would need to count your data, and you probably want two copies of the data inside the cluster, so if one node goes down, you have a backup copy. However, if you're paranoid, you can keep as many copies as you want.

Audience: What are the largest setups, in terms of the number of nodes?

Me: I heard an urban story about 1,000 nodes. I'm not sure if this is true or if it's an urban legend. I think having such a big cluster is a terrible idea, because imagine that you need to upgrade it: you need to bring all the nodes down and replace them with new ones. With that many nodes, it would take you a week.

Audience: Yeah, but if you want to keep the data, that will be quite a process.

Me: Of course.

Audience: And is it rather for smaller objects, or for larger data?

Me: You can store both.

Audience: And what are examples of the largest deployments in terms of data, tens of thousands of entries, or millions?

Me: I would have to check, to be honest; I don't know. I mean, terabytes are realistic. You probably know better, right?

Audience: What are the memory requirements for one node?

Me: We actually created a pretty good guide for sizing the memory required for distributed nodes, so if you need to evaluate it, that's a starting point. Basically, you need to know your data size, and you need to know how many owners you should configure, for example two: do you want a primary owner of the data and one backup, or a primary owner and two backups? Hold on.
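That "how many copies" knob is what Infinispan calls numOwners; a rough sketch, assuming a distributed cache:

```java
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Rough sketch of the "how many copies" setting discussed above: in a
// distributed cache, numOwners(2) keeps a primary owner plus one backup
// copy of every entry, so losing a single node loses no data.
public class OwnersSketch {

    public static Configuration distributedWithBackup() {
        return new ConfigurationBuilder()
            .clustering().cacheMode(CacheMode.DIST_SYNC)
            .hash().numOwners(2)
            .build();
    }
}
```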