Okay. Hi everybody. I was going to start off by thanking Diane for holding this great event, but since we're behind schedule, I'm going to skip that part if that's okay with you.

All right. So, quick introduction. My name is Eero Arvonen. I'm currently working for Suomen Asiakastieto. I've been with Asiakastieto since early 2016. My background is in Java development, and currently I spend my time about 50-50 between doing, you know, software projects and doing other cool stuff like improving our CI/CD pipeline or trying out new technology.

So I'm presenting Suomen Asiakastieto, which is part of Asiakastieto Group alongside a company called UC. We are listed on the Helsinki Stock Exchange. We are among the leading providers of business and consumer information services in the Nordics. Within the Nordic markets, we are a data powerhouse: we are quite good at collecting data and building services that utilize this data. We are also a FinTech company, and we help our finance clients enhance their business opportunities by bringing new data innovations to the market. Our largest customer segments are banking, finance and insurance. In brief, our products and services are primarily used for things like risk management, finance and administration, decision-making, sales and marketing, that type of thing. Our goal is to deliver automated and accurate services for these functions.

Now, why should you listen to me instead of reading emails or Twitter for the next 30 minutes? Here's why. I'm going to tell you why a company like us is interested in, or rather decided to jump into, the open banking space. I'm going to tell you about the product that we've already built and launched. I'm going to demonstrate what it does and talk about the design choices that we put into it. I'm also going to take a look into the future and speculate on what kind of products we're going to see going forward. And finally, I'll show you how I took our open banking product, Account Insight, to the next level by migrating it onto Quarkus, which is a framework for cloud-native Java. And here's a pretty picture our marketing department conjured up.

So after the financial crisis, we've had plenty of brand new regulation in the financial sector. You can see all the acronyms here on the right: there's KYC, there's anti-money laundering, GDPR, and most recently we have PSD2. Now, this new regulation brings new business in two distinct ways. First off, banks have more need for our existing risk management services, because regulation now demands it. And secondly, and this is the case with PSD2 and Account Insight, we are getting access to new data sets, which we can leverage to build better products or new products. Financial institutions are getting the short end of the stick in that this will drive up their costs and might disrupt their business models. So what FinTechs like us need to offer is automated solutions based on these new data sets, via scalable technology. And since we are competing in a highly regulated market, complying with the regulation is key.

So it all kind of started several years ago with us investigating what changes PSD2 would bring and how it would affect our clients and our business with them. After studying it for a couple of years, our clients actually started asking us for solutions: what they should build on top of this new regulation, and whether we could present them with business opportunities.
And now we are at the stage where our version 1.0 has been launched, the first customers are live, and we are starting to build even more stuff on top of it. Next up we are looking to improve the service, connect it to other services and see if we can find some more business cases with it.

So then, a few words about the service we've built, Account Insight. Now we're getting to the part I'm actually qualified to speak on. I would say any product that aims to be a great product has to have some sort of an impact on society, and we did try to build a product that is really great. So let's have a one-slide recap of what it is that we do. As a short summary, Asiakastieto fights against over-indebtedness in the Nordics. So this is what our service does. Okay, this was supposed to come one by one. But basically, we access the bank accounts of a loan applicant, henceforth known as the PSU or payment service user, based on their consent. We process the bank account data, in the context they've chosen, on behalf of the creditor, that is, the one they're applying to for a loan. We deliver actionable facts about the applicant's financial behavior from this data. And we do all this in a compliant way.

So how does bank transaction data create value? How do you create value out of bank transactions? First off, we are currently able to calculate ability to pay and cash flow out of people's bank account data: how much money you bring in versus how much money you spend, and also how much money you have on your account at different times. We also verify income sources and IBANs. According to a poll we did, the number one thing our customers, that is, creditors, want is to be able to reliably verify the reported net income of a loan applicant. And we can also produce all sorts of other results from bank account data, such as whether you're likely to own a vehicle, whether you're likely to own a house, how much you gamble and how many children you have, things like this. My favorite one that we're doing currently is calculating what percentage of your net income you spend within the first 48 hours after your payday. You can see how that might affect your credit rating.

So now I'm jumping to the service design part, so slightly more technical. We decided to run this application as containers on OpenShift. We have a private OpenShift cluster, and that's why we decided to go that way. And based on that, it's quite natural to go with a microservice architecture. We chose Thorntail, also known as WildFly Swarm, for the back end. We've been running EAP at Asiakastieto/UC for several years now, and we kind of saw Thorntail as a natural evolution for monolithic EAP type of stuff. We picked React and Node.js for our front end. We were able to go with the latest Java, so why not, right? Java 11 it is.

And then our product owner asked us to build something she called joints into the service. What I mean by that is that the PSD2 space is, in terms of standards, very unregulated. Each and every bank will have their own API through which you download stuff; stuff will be named differently, different authentication mechanisms, all that. So we have to be able to change this stuff on the fly, and we also need to be able to have partners that aggregate a set of bank APIs in order to harmonize them and make it easy for us, so we don't have to do the job of integrating with every single bank out there. And then other parts of the process should also be detachable.
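To give a feel for what those joints might look like in code, here's a minimal sketch of a swappable bank integration contract. The names here (BankIntegration, ConsentSession, AccountData) are hypothetical, made up for this illustration; this is not our actual code.

```java
// Placeholder data types, just enough for the sketch to compile.
class ConsentSession {
    String sessionId;
    String redirectUrl;  // where to send the PSU for strong customer authentication
}

class AccountData {
    // harmonized accounts and transactions would live here
}

// Hypothetical "joint": every bank or aggregator connector implements the same
// contract, so the rest of the service never sees bank-specific details.
public interface BankIntegration {

    // Which bank this connector knows how to talk to.
    boolean supports(String bankId);

    // Start the consent / strong customer authentication flow at the bank
    // and return what we need to redirect the PSU there.
    ConsentSession initiateConsent(String bankId, String redirectUri);

    // Once the PSU has authorized access, download the account data
    // in our own harmonized format.
    AccountData fetchAccountData(ConsentSession session);
}
```

A connector for an aggregator partner and a connector for a directly integrated bank would then just be two implementations of the same interface, and the rest of the flow wouldn't care which one it's talking to.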
For instance, what if we had a service that just wanted to download the bank account data, but not use the calculation that we do? The calculation has to be a detachable part of the process. Or what if some of our customers already have the bank account data, so they don't need that part; they just need to call the calculation part of the service with the account data they already have. So that also has to be something that can be separated. Now, there's obviously some interplay between all these bullets, right: containers, the microservice architecture, and then the Thorntail thing you actually build the stuff with. So I think our design philosophy aligned quite well with this set of technologies.

And a quick word about how we worked with our clients, the creditors, on this one. We broke new ground at Asiakastieto with the amount of customer interest that this new project received. We had regular workshops with our pilot customers, and they were able to give meaningful input that actually affected how the service turned out. We did Scrum, we did code reviews, that kind of thing. And maybe this is not special for you, but for us it is: we were actually able to release the test environment to our clients before our production was live. That doesn't happen every time.

This is the application architecture of Account Insight. We have a grand total of eight microservices, a database and a cache, and production runs on just about 20 containers. Let's go through the different parties at this point as well. There's our customer, the creditor; that's here on the top left, the bank logo. Under them there's the end user, the PSU, the payment service user. And then on the bottom left there's something called Enfuse, and that's the partner I mentioned earlier that aggregates many, many banks for us through a single API. And just to underline, this is the part that we've built, and it's all running on OpenShift.

So let's go through a little bit of what each microservice does. There's one set of microservices that's responsible for the process flow. There's the top one, called Interface PSD2 REST; that's the process flow from the perspective of our client, the bank. Then there's the PSD2 Client, which controls the process flow from the perspective of the end user. And then there's the PSD2 Orchestrator, which controls the process from the perspective of Asiakastieto. Now if you wanted to change the flow somehow, you can see how this is pretty well decoupled: if we wanted to alter it in any way, we'd just have to edit one, two or three of these services and not the rest of them. So pretty basic microservice stuff, but still, it's been tested and it works.

Then there's the calculation stuff, the mathematics of it, and there are two services for that. There's what we call the rule engine; that contains all the facts that we can derive out of the raw bank account data, so that one is doing just math. And then there's the company matcher. Now, if you ever go into your browser, to your online bank, and look at your account, you're going to see a bunch of transactions. And if you look at the counterparty of the transactions, I don't know what the case is for you, but when I go there, some of the names have nothing to do with the actual company where I've been. For example, if I go to the gas station, it might say something like 04220 Helsinki and not gas station X.
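Just to make that concrete, here's a toy sketch of the kind of counterparty normalization involved. The names (CompanyMatcher, matchCounterparty) and the hard-coded patterns are made up for illustration; our actual matching logic is far more involved.

```java
import java.util.Map;
import java.util.Optional;

// Toy counterparty matcher: maps the raw counterparty text on a transaction,
// which may be a postal code or a terminal id, to a known business.
public class CompanyMatcher {

    // In reality this would be backed by business registers and learned rules,
    // not a hard-coded lookup table.
    private static final Map<String, String> KNOWN_PATTERNS = Map.of(
            "04220 HELSINKI", "Gas Station X Oy",
            "K-MARKET 1234",  "K-Market Oy"
    );

    public Optional<String> matchCounterparty(String rawCounterparty) {
        String normalized = rawCounterparty.trim().toUpperCase();
        return KNOWN_PATTERNS.entrySet().stream()
                .filter(entry -> normalized.contains(entry.getKey()))
                .map(Map.Entry::getValue)
                .findFirst();
    }
}
```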
So mapping the counterparty name to the actual company and the business they do is a huge issue. We built a service that does this matching, and it can now be reused in any other context as well.

And finally, there's the integrations part. Like I mentioned, we might have one or several counterparties when connecting to the banks, and in the current architecture there will be two microservices for each counterparty that we have, one for each direction of communication. So when we are calling the banks, we call through Integration Enfuse or Integration Bank X, and when they do the callback, we receive it at Interface Enfuse. This is how we decouple the bank integrations from the rest of the service. And then there's a mechanism for logging stuff for auditing purposes and that kind of thing. That's not really super interesting, so we're not going to go through it very much.

But I think with this, it's time for the demo. This is going to be a live demo, and I'll be accessing a sandbox that's built on top of a sandbox that's built on top of another sandbox, through VPN, via mobile internet. So just in case something goes wrong, let's all agree to blame somebody else. It's not my fault.

So here we have our testing tool. This is kind of mocking the creditor in the process. Here you can see the initial message that our customer would send, and we're actually going to edit it right now to make the process slightly simpler. So they're going to call one of our endpoints, they're going to get some stuff as a response, and then they're going to forward the loan applicant to our service. And this is what the loan applicant would see on our service. Now, this is in Finnish; I'm sorry, we don't have an English version of it. I don't know why, you can ask the product owner or something. This is for accepting the terms and conditions, and nobody ever reads those, because it's just bank data, it's not important. And then there's bank selection. So we're going to go with, well, this is a Finnish bank, Osuuspankki. Maybe most of you don't know it, but we're going to say our bank is Osuuspankki.

So we're going to be forwarded to do strong customer authentication, two-factor authentication, for the bank. I wonder if I still remember this. And this would be the second factor: now my phone would ring, I'd get a code and I'd input it here. And mind you, these last couple of screens are the bank's screens, so we have no control over them; it's Osuuspankki that controls what the user sees there. But basically they're doing the customer authentication, and the user is going to get a list of their bank accounts here. So we're going to choose these two accounts. These here are the balances; this is not my account, this is a test. So we're going to tell Osuuspankki to grant access to these two accounts, and we're going to click accept. And then it's going to redirect us back to Account Insight, built by us, and we're going to be able to download the data. It's going to take a while, so we can just pass the time by admiring this guy's beard. Okay, it's done already, faster than usual. Here we can review the actual raw data if we want to, so the end user can review the data if they wish. This is just JSON, right? But here are all the transactions they've had for the last 12 months or something. So they can review their own data; this is a GDPR thing, I think.
But the process doesn't end here for our customer, the creditor, right? Because they're going to want the calculated results as well. So we're going to go back to the testing tool and say get latest rule engine results. Boom, yes, it worked. There are plenty of rules here, you could scroll down, there's plenty of stuff here, but I'm going to pick one that I know: largest positive transactions. Name: tilupalvelut. Okay, this is Finnish and it's also misspelled. But basically what our service is telling us is that there's a limited company called tilupalvelut that has been sending us money over the last six months, on average once per month. This is the previous 30 days, this is the 30 days before that, and so on for the last six months. And it's telling us it's been a total of 15,000 over the last six months and a total of 30,000 over the last 12 months. So now, if this person had applied for credit and said they work for tilupalvelut and their net salary is something like, what is that, 2,500 per month, we would be inclined to believe them, right? Okay, so that's the demo, basically. It's in production, we have clients there. Yeah, so let's jump back to the PowerPoint.

Okay, so I'm going to briefly speak about the benefits of working with OpenShift on this one. The setup was really quick for each of these microservices with Thorntail, so the threshold to decouple stuff is very low. We can go from code to our test environment and even into production in less than five minutes. That's pretty good. You never want to do that, though. Containers and microservices lead to freedom of choice in technology: we could do Thorntail, for which I don't think we had a proper runtime environment without OpenShift, and we could do Java 11. We have high availability, easy scaling and automated recovery, which leads to better uptimes and whatnot. Just to summarize, each one of these also has a concrete business benefit. Quick setup, like I said, saves time, and time is money, right? A lower barrier to decoupling stuff leads to higher quality, hopefully. Quick to deploy means faster lead times, a shorter feedback loop, that kind of thing. And the freedom of choice in technology should lead to things like better quality, better productivity and even better security. Also, one thing I've noticed is that when you get to work with current stuff like this, you're going to have motivated coders, because they're able to try out new stuff and keep their skills relevant.

So, a bit of speculation on what's going to happen next with the open banking stuff, as we see it. Basically, these are the things we're talking about building next. Now, the thing I showed you was about a loan applicant getting their credit rating by automated means, right? What if we thought about this the other way around? Currently, Asiakastieto is not saving the data in any way. We get it, we process it, we send it to the creditor and then we discard it, so there's nothing that we have at the end except the audit trail that it happened. What if we anonymized the transactions to make it GDPR compliant and saved them in a database, all of it? Now, once we get a critical mass going, there's going to be a bunch of stuff in there, right? Because if you look at your bank, you're probably going to have, within the last year, like, I don't know, 500 transactions, 1,000 transactions, something like this. So, we anonymize them and we persist them.
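Purely as an illustration of what persisting anonymized transactions could mean, here's a sketch of a record that keeps nothing identifiable. The class and its fields are hypothetical, not a design we have actually committed to.

```java
import java.math.BigDecimal;
import java.time.LocalDate;

// Hypothetical anonymized transaction: no account number, no counterparty name,
// no PSU identity, only what aggregate statistics would need.
// (Getters omitted to keep the sketch short.)
public final class AnonymizedTransaction {

    private final LocalDate bookingDate;
    private final BigDecimal amount;
    private final String merchantCategory;  // e.g. "CLOTHING", via the company matcher
    private final String ageBand;           // e.g. "20-34", coarse enough not to single anyone out
    private final String gender;
    private final String region;            // approximate location only, e.g. "Stockholm"

    public AnonymizedTransaction(LocalDate bookingDate, BigDecimal amount,
                                 String merchantCategory, String ageBand,
                                 String gender, String region) {
        this.bookingDate = bookingDate;
        this.amount = amount;
        this.merchantCategory = merchantCategory;
        this.ageBand = ageBand;
        this.gender = gender;
        this.region = region;
    }
}
```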
But we try and save some relevant information like age, gender and approximate location, again, still keeping it GDPR compliant so we can't single anybody out. And then, with sufficient data, we can draw conclusions like: women aged 20 to 34 prefer H&M to Zara in Stockholm, 77% to 23%, right? So we can do all these cross-cutting things with this big data that we have. We could also say things like, okay, Zara has been gaining market share at a rate of 0.3% since 2015. So this is the kind of stuff we could do. And this is pretty big, because this kind of data could be sold to anyone. It's basically irrefutable and every business needs it: taxis, hotels, grocery stores, I don't know, everybody.

Another initiative would be synergy benefits from simply attaching this to our existing risk evaluation services. We could also take another approach: instead of you going to apply for a loan and the creditor rating you, we could have a platform where consumers come, go through the same process, we rate them, and then they can go to any bank with a pre-calculated statement saying, okay, this is this person's credit rating. So kind of flipping the script around that way. And finally, there are some anti-money laundering opportunities, where automated means like this could help us better track where money goes and through which companies.

Okay, so that was about the future stuff we're seeing, and we're going to change places here again, because it's time to go subatomic. Let's have a show of hands here: who's heard of Quarkus, who's tried Quarkus, and who's working on an actual project that's going to be in production on Quarkus? Okay, and who's already in production with Quarkus? Okay, just me then. So, remember this slide from earlier, this one? It hasn't been accurate for quite a while anymore. Three of those microservices are actually now running Quarkus native, with one of them all the way in production since mid-December.

Now, okay, it's not like everybody's even heard of Quarkus, so better not skip this next slide. What is it and why should you care about it? Well, it's a Kubernetes-native Java framework that's supposed to reduce both the runtime footprint of the Java application itself and its container image footprint. Basically, they've rewritten parts of, for example, the Java EE specifications to be more cloud native. And there are two modes you can run Quarkus in: there's the JVM mode, so a regular JVM, and then you can also compile it into a native executable. And the native executable part, which is the migration I've done, is not painless. It's actually, well, you'll see.

So these are the pain points. Basically, reflection: if you use reflection in Java, you're going to run into some issues. I did it kind of by trial and error, so it took me quite a while to really get that down. There's pretty basic stuff like SSL: if you want your HTTP clients to do HTTPS calls, you're going to have to do some extra stuff. Now, there are guides for this on quarkus.io, but due to our special circumstances we had to do some extra stuff with that, and that was not cool. Also, if you're using web services, so that's JAX-WS, I couldn't get it to work; as far as I know it doesn't work in native mode. So I had to go and change some things within our legacy applications to expose stuff as REST APIs instead of web service APIs.
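To illustrate that reflection point with one concrete example: in native mode, classes that are only reached through reflection, typically JSON-bound DTOs, have to be registered explicitly, for instance with Quarkus' @RegisterForReflection annotation. A minimal sketch with a made-up DTO:

```java
import io.quarkus.runtime.annotations.RegisterForReflection;

// Hypothetical DTO that a JSON library instantiates via reflection.
// Without the annotation, the native image build can strip the class
// metadata and deserialization then fails at runtime.
@RegisterForReflection
public class TransactionDto {
    public String counterpartyName;
    public String amount;
    public String bookingDate;
}
```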
Then there's the fact that the application is booted at build time instead of, you know, at startup. So you boot the application during the native compilation, and that runs into some issues if you're not careful. At Asiakastieto, this is how our configuration management works: during startup we download environment-specific stuff over HTTP. Now, that's not going to work if you do it at build time, because you do your build so that you can run it anywhere, right? So yeah, we had to fix some things, but migrating to Quarkus is good for highlighting the stuff you're doing wrong, so I would definitely recommend it. And then there's logging configuration: the way we had configured our logging wasn't compatible with this, so we had to do some things to get around it. If you want to chat about these pain points, I have them in much more detail, and you can see much more painful expressions on my face discussing them, so just come up to me later and we can have a chat.

So enough with the pain, let's see the gains, right? We are being promised smaller footprints, and I did some performance tests on this, and I'm going to show you the results from two sides: there's the resource utilization part that interests us, and the other is performance, right?

So resource utilization first. Here we have three deployments: the left one is the application as it was, the middle one is Quarkus in JVM mode, and the one on the right is the Quarkus native-compiled version. When we were running the application as it was, its container image size was over one gigabyte, whereas with the JVM version it was pushed down to about half of that, and with the native version the container image size was 200 megabytes. And the crucial thing here is that the native one is cool because there's no Java binary in the container, there's no Java at all, it's a native application, and you don't even need an operating system that's able to run Java, so the image can be stripped down to the very minimum. Then there's the middle column, the tan column: that's the memory usage. Our microservice used to take about one gigabyte of memory after some load. Migrating to Quarkus JVM, we cut that down by like 75% or so, and with the native deployment, where the deployment was restricted to 50 megabytes of RAM, the consumption was down to 41 megabytes. So that's like 20 to 1, that's pretty good. And then the final column is the CPU. Out of the box we had a CPU utilization of 350 millicores; these are just reference numbers, basically. With the JVM version it went down to 150, so that's about a 60% reduction, I suppose. And then going to the native one, it went up slightly to 170, and that's because the memory is so low that it has to constantly garbage collect. So there's actually a space-time trade-off going on: if you give it some more memory, it'll use less CPU. So you can actually optimize for that kind of thing.

People are falling asleep, so let's move forward. So, performance. The first elephant in the slide is the 60-second bar there, isn't it? Thorntail used to take one minute to boot up; migrating to Quarkus JVM, we took that down by 90%, and going native, another 90%. So currently the application starts in about 0.4 seconds. Now, it's not good enough for serverless, but the reason is our configuration management: like I said, it downloads stuff over HTTP during startup, so it's not going to be like 10 milliseconds or whatever. And then there's throughput.
I was actually surprised to see that the throughput went up quite significantly. The throughput was 3.3 calls per second for Thorntail, and I'm talking sequential calls: you make one call, and when it returns, you make the next one, so no parallel calls. The throughput went up by about 50% for the JVM version, and then slid down a bit on the native deployment because of the restricted memory.

So, some thoughts on the native migration. I'll mention that the JVM migration was extremely easy, because from Thorntail to Quarkus the dependency changes mapped quite well, so these comments are for the native version. I think there's something missing here. Okay, so the pain is real with legacy applications. I wouldn't recommend everyone go and migrate all of your legacy apps onto Quarkus native right away, because you're going to be in a world of pain that way. Use it on some fairly modern thing you built last year, or do it on greenfield projects. But the JVM Quarkus thing is definitely for everybody, and I would suggest trying it out. There's stuff I didn't mention, like the testing framework and a development mode that's super useful, and we can chat about that later. And with that, I think I'm going to wrap up. Any questions? Everybody can smell that lunch being piped in here. So thank you very much.

Thank you. It's wonderful to see Quarkus being advocated for. And I also wanted to say: when we ask people to give us their case studies and their stories, we're not asking for the cleaned-up version. We like to hear your pain points and get the feedback. That's really one of the wonderful things about having a community event. We're not trying to sell you anything; we're trying to share the stories, the war stories. So thank you very much for that.