Ken, you want to go next? Thanks. Hi, my name is Ken Chenus. I'm a chief architect at ACI Worldwide. ACI Worldwide is the payments company. You may not have heard of us, but there's a good chance each of you used us today when you checked into your hotel. We process payments and banking for over 5,000 financial institutions, merchants, intermediaries, and billers worldwide. We do about $14 trillion a day in payments through our software, so it's a pretty extensive environment. I'm here to talk to you a little bit about the real-time payments analytics and real-time fraud detection that we were able to achieve on the OpenShift platform.

First I just want to talk a little bit about ACI's universal payments platform. Because we support so many different modes of moving payments from point A to point B, we have a wide variety of software applications. Those applications cover a plethora of areas, including retail payments, merchant payments, bill payments, and so forth. When you pay your card bill, you're probably going through ACI's bill-pay service; when you charge something, you could be using a retail payment authorization service that we run; and when you use an ATM, you're probably going through ACI rails to get money out of the machine. We do all of that. And what's really key to us is fraud detection and payments intelligence. Everything that runs in our cloud environment goes through a centralized universal payments platform hosted in the ACI cloud, a private cloud that's geo-distributed across multiple data centers around the world.

And so we have these challenges that we have to deal with, and one of the biggest is payment latency. When you process a payment, there's very little time to make decisions about what to do with it. In that decision-making window we have to determine whether it's a fraudulent transaction or a good transaction, and what other payment intelligence we want to tag onto that payment.

A little bit about how we do this: there's a tremendous amount of data science that goes on behind the scenes, but ultimately we have to make real-time decisions, because from the time a payment comes in to the time it goes out, we only have about 80 milliseconds to figure out what to do with it. For those of you working in microservice land, or with services in general, you'll recognize that 80 milliseconds is not a tremendous amount of time to do everything we have to do. We start off with machine learning and data science working on data in a big data repository on a Hadoop cluster, and what ultimately comes out of that is features, rules, and models. Those features, rules, and models are what we apply in real time to each transaction flowing through the system. So in parallel with sending the data to the historic repository, so that we can keep doing machine learning on it, we send it to a real-time decision engine where we apply the features, rules, and models in real time and make a decision on what to do with the payment (there's a sketch of that decision step below). We had some challenges with early versions of this product as our payment volume started to increase and fraud detection started to increase in complexity.
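To make that decision step concrete, here's a minimal sketch of how offline-trained features, rules, and models might be applied to a single in-flight transaction. All of the types and names here (Txn, Rule, Model, Decision) are hypothetical illustrations, not ACI's actual interfaces.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical sketch of the real-time decision step: features, rules, and
// models produced by the offline data-science pipeline are applied to each
// in-flight transaction. Names are illustrative, not ACI's APIs.
public class DecisionSketch {

    // A transaction reduced to the features the models were trained on.
    record Txn(String id, Map<String, Double> features) {}

    // Rules come out of the offline pipeline as simple predicates.
    interface Rule extends Predicate<Txn> {}

    // A model scores a transaction; higher means more likely fraudulent.
    interface Model { double score(Txn txn); }

    enum Decision { APPROVE, DECLINE, REVIEW }

    static Decision decide(Txn txn, List<Rule> rules, Model model, double threshold) {
        // Hard rules fire first: any match declines outright.
        for (Rule rule : rules) {
            if (rule.test(txn)) return Decision.DECLINE;
        }
        // Otherwise fall through to the model score.
        double score = model.score(txn);
        return score > threshold ? Decision.REVIEW : Decision.APPROVE;
    }

    public static void main(String[] args) {
        Rule velocityRule = txn -> txn.features().getOrDefault("txns_last_hour", 0.0) > 50;
        Model model = txn -> txn.features().getOrDefault("amount_zscore", 0.0) / 10.0;

        Txn txn = new Txn("t-001", Map.of("txns_last_hour", 3.0, "amount_zscore", 1.2));
        System.out.println(decide(txn, List.of(velocityRule), model, 0.8)); // APPROVE
    }
}
```

The shape matters more than the details: the rules and models arrive as data from the offline pipeline, so swapping in updated ones is a deployment or configuration change rather than a code change.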
And so as we set out to design a next-generation platform for payments, analytics, and fraud detection, we looked to containerization as a vehicle to achieve some of our non-functional requirements around performance, scalability, and latency. We moved to a microservice architecture: we broke the solution up into small microservices that we could then dockerize and deploy on OpenShift. That gives us a tremendous amount of power, because we can scale the environment up and down as needed, and we have very low latency between the microservices within the platform. We moved from a relational database model to a Cassandra cluster for our persistence layer, and of course we still use our Hadoop cluster for all of our machine learning. We used a lot of open source technologies and then extended them to meet ACI's non-functional requirements. We are very stringent about security, and we're very stringent about ensuring that a payment gets from point A to point B without getting lost. We have about a 40-year history in which we've never lost a financial transaction, and that means a lot to ACI.

So here you can see our transactions come in through our universal payments platform; it's kind of like a universal adapter. Then they go into our event receiver. Everything's defined through metadata, so as the data evolves we can just push configuration changes into the environment and it updates automatically. We never have to take downtime, because we can roll service updates through the environment simply by deploying new containers.

A little bit about the real-time analytics and our performance characteristics. Transactions come into an exec, which routes into a microservice running a complex event processor. We can spin up any number of these; during Black Friday weekend we'd probably have six or seven running in parallel. We execute our models the same way, with multiple model executors running in parallel, and we go to the Cassandra data store to retrieve all of the feature information before making a decision (there are sketches of both steps below). We rode through Black Friday weekend, which is basically Black Friday through Cyber Monday, and maintained a 30-millisecond latency for all of our decision making. We were pretty proud to hit that metric on the OpenShift platform.

Our future direction is really where we originally intended to be: running our Cassandra cluster and our Hadoop infrastructure on OpenShift as well. We've encountered some challenges, and we're working with the vendors (Hortonworks, DataStax, and Red Hat) to tie these platforms together, so that we can extend the low latency, scalability, and flexibility we have inside the container platform to what currently runs outside it. That's where we're hoping to get to with the environment. So I'll pass it on.
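Here's a rough sketch of the parallel model-executor step, with the fan-out bounded by the overall decision budget. The names (ModelExecutor, maxScore) and the fallback behavior on a blown budget are assumptions for illustration, written against the plain JDK, not ACI's actual implementation.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative fan-out to several model executors in parallel, with the
// combined result bounded by the decision budget. Names are hypothetical;
// the point is the pattern, not ACI's internals.
public class ParallelScoring {

    interface ModelExecutor { double score(double[] features); }

    static double maxScore(double[] features, List<ModelExecutor> executors,
                           ExecutorService pool, long budgetMillis) {
        // Score with every model concurrently.
        List<CompletableFuture<Double>> futures = executors.stream()
                .map(m -> CompletableFuture.supplyAsync(() -> m.score(features), pool))
                .toList();

        // Wait for all of them, but never longer than the budget; if the
        // budget is blown, fall back to a conservative "review" score.
        return CompletableFuture.allOf(futures.toArray(CompletableFuture[]::new))
                .orTimeout(budgetMillis, TimeUnit.MILLISECONDS)
                .thenApply(v -> futures.stream()
                        .mapToDouble(CompletableFuture::join)
                        .max()
                        .orElse(0.0))
                .exceptionally(timeout -> 1.0)
                .join();
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<ModelExecutor> models = List.of(
                f -> f[0] * 0.01,          // e.g. an amount-based model
                f -> f[1] > 5 ? 0.9 : 0.1  // e.g. a velocity-based model
        );
        System.out.println(maxScore(new double[]{120.0, 2.0}, models, pool, 30));
        pool.shutdown();
    }
}
```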
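And here's a minimal sketch of the feature lookup against the Cassandra persistence layer, using the open-source DataStax Java driver 4.x. The keyspace, table, and column names (fraud.account_features with a map&lt;text, double&gt; features column keyed by account_id) are hypothetical.

```java
import java.net.InetSocketAddress;
import java.time.Duration;
import java.util.Map;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.BoundStatement;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;
import com.datastax.oss.driver.api.core.cql.Row;

// Hypothetical feature lookup against Cassandra. Keyspace, table, and
// column names are illustrative; the driver calls are from the
// open-source DataStax Java driver 4.x.
public class FeatureStore {

    private final CqlSession session;
    private final PreparedStatement selectFeatures;

    public FeatureStore(String host, int port, String datacenter) {
        this.session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress(host, port))
                .withLocalDatacenter(datacenter)
                .build();
        // Prepared once, reused per transaction to keep per-call latency low.
        this.selectFeatures = session.prepare(
                "SELECT features FROM fraud.account_features WHERE account_id = ?");
    }

    /** Returns the feature vector for an account, or null if none is stored. */
    public Map<String, Double> featuresFor(String accountId) {
        BoundStatement bound = selectFeatures.bind(accountId)
                // Keep the read well inside the overall decision budget.
                .setTimeout(Duration.ofMillis(20));
        Row row = session.execute(bound).one();
        return row == null ? null : row.getMap("features", String.class, Double.class);
    }

    public void close() {
        session.close();
    }
}
```

Preparing the statement once and reusing it per transaction keeps per-call overhead low, which matters when the entire decision has to fit in tens of milliseconds.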