Thank you. It is truly a pleasure to be talking to all of you, particularly in an in-person presentation; the last time I did this seems like ages ago. I'm deeply excited about being here to give you a snapshot of where we are at Swift in building a truly enterprise-scale AI platform for financial intelligence and other use cases. Part of my excitement stems from the fact that Swift has a truly unique role in the industry. It is a membership-driven organization funded by close to 11,000 members, and it provides the common payment rails that serve pretty much the entire world across multiple currencies. The richness of the data that exists on the Swift network is, for anyone who has done AI for a long time, deeply exciting. Over the next 25 minutes or so, I'll share with you who Swift is; I'll play a two-minute video shortly. I'll give you a sense, together with Marius, my colleague here from Red Hat, of where we are in developing our enterprise-scale AI platform. And I'll end with a compelling initial use case that we have started working on, anomaly detection for fraud, and some of the exciting new directions we're taking with advanced research labs. With that, let me play this two-minute video to give a sense of who Swift is.

Behind every transaction is a story. Transactions that build businesses, strengthen economies, and improve lives. At Swift, we're making these transactions faster, smarter, better. Delivering new services that increase predictability and speed, reduce friction and costs, and power a more inclusive global economy. Together with our community, we are reimagining how our industry operates, capitalizing on rich data and collaborating with the brightest minds in technology and beyond to unlock a world of new opportunities.
We are transforming the Swift platform to enable instant, frictionless transactions from one account to another anywhere in the world, and give you the capabilities you need to enable bold new possibilities for your customers. Possibilities like quick and easy low-value cross-border payments for small businesses and consumers, payment pre-validation that ensures straight-through processing, end-to-end tracking for securities, and instant and predictive treasury services for corporates. These innovations and more will help our community thrive today and in the future, especially if we develop them together and adopt them as one. That's why we're inviting you to be part of this journey and be part of shaping the future of finance. Faster, smarter, better.

So, to be a little more quantitative, Swift handles about 44.2 million messages per day at peak, and this year it actually hit 50 percent above that, close to 10 billion transactions annually. 11,000-plus financial institutions are on the Swift network across 200 countries, and pretty much every currency you can think of clears over this network. Over GPI, our global payments innovation service, several hundred trillion dollars of transactions happened in 2019 alone. So this is a truly rich platform and a massive network of financial transactions which, if properly used, can lead to an incredible reservoir of data for financial intelligence. At Swift, in partnership with Red Hat, my colleague standing on the stage here with me, with C3.ai, with Kove, a specialized software-defined memory provider, and other partners like Argonne National Lab, we're truly building a platform for enterprise-scale AI and its deployment. Let me turn it over to Marius to give you a little more detail on this platform.

Thanks very much, Chalapathy. First of all, I'm excited to be here as well. Thank you for joining us.
It's my pleasure to speak about how we worked with Swift and helped bring to life the vision of a platform for analyzing financial transactions and bringing value to its members. You've seen the numbers. The key goal here was to be able to tap into the transactional intelligence coming out of those 10 billion transactions and understand, as Chalapathy will discuss later, how to build a fraud detection and anomaly detection solution out of that. Doing so required a different kind of thinking, a different type of platform, a platform with a different kind of capabilities. It had to be an extensible platform that allows multi-tenant operations, which means it had to be able to run on any cloud, on a variety of types of infrastructure, wherever Swift members needed. It had to meet the performance criteria that come with these large machine learning jobs; these large data sets consume a lot of memory. 9.5 billion transactions is a big number, and that's just in a year. So it had to provide this elasticity both in terms of compute and memory. And of course, it had to be a trusted computing environment. Swift members are financial institutions, and financial institutions are subject to very stringent regulatory requirements when it comes to their data. So the question was: how do they tap into all that data while retaining control and remaining compliant with their obligations to their own customers? As Chalapathy mentioned, this came together as a collaboration between three major industry solution providers: Red Hat, with OpenShift, a leader in hybrid cloud solutions; Kove, a leader in software-defined memory; and C3.ai, which provided the AI platform capabilities. Let me talk a little bit about how those pieces work together and what enabled this platform to function.
At the heart of it, Red Hat's contribution with OpenShift was to put containerization and automation at the core of the platform. Containerization enabled Swift to tap into the power of bare-metal deployment while retaining capabilities such as flexibility of deployment, self-healing, health monitoring, seamless management of workloads, and workload portability. Its hybrid cloud capabilities also mean the solution can in the future be deployed on other types of infrastructure: public, private, or at the edge. And when you think about automation, it's not just automation of operations; it's also automation of deployment. This is where containerization and automation work together to enable this solution and the multi-tenancy we're talking about. The fact that OpenShift integrates seamlessly with Kove, one of our partners, means that Swift can run very large machine learning jobs and can scale both horizontally and vertically while retaining all the benefits of running containers; think of terabytes of memory for a single job, with all that flexibility, without incurring a massive cost in, for example, super-expensive hardware. Automation also doesn't stop at running things; it is a key part of security. It means that security and privacy controls, which are a key element of the solution, are embedded into the platform, making it possible for Swift members to trust this platform with their data while retaining control. And of course, containerization and operate-first principles enable an AI platform like C3.ai to run on this platform together with the rest of the software that Swift runs.
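To make the "automation of deployment" idea a bit more concrete, here is a minimal, purely hypothetical sketch (none of these names or manifests come from Swift or Red Hat): the same containerized workload is templated once and rendered per tenant and per target infrastructure, which is the kind of mechanism that makes multi-tenant, infrastructure-agnostic deployment repeatable.

```python
# Hedged illustration: templating one containerized workload for many tenants
# and infrastructures. All names (anomaly-detection, bank-a, infra-type) are
# hypothetical, not taken from the actual Swift/Red Hat deployment.
from string import Template

DEPLOYMENT_TEMPLATE = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: anomaly-detection
  namespace: $tenant
spec:
  replicas: $replicas
  template:
    spec:
      nodeSelector:
        infra-type: $infra   # e.g. bare-metal, private-cloud, public-cloud
""")

def render(tenant: str, infra: str, replicas: int) -> str:
    """Render the same workload manifest for a given tenant and infrastructure."""
    return DEPLOYMENT_TEMPLATE.substitute(tenant=tenant, infra=infra, replicas=replicas)

# The same workload, deployed per tenant on whatever infrastructure they run.
for tenant, infra in [("bank-a", "bare-metal"), ("bank-b", "private-cloud")]:
    print(render(tenant, infra, replicas=3))
```

The point of the sketch is only that deployment becomes data (a template plus parameters) rather than a manual procedure, which is what lets the workload move between on-prem, private cloud, and public cloud.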
To sum it up, and this is probably one of the most interesting things here, all of this working together enables a machine learning workflow like the one at the top: one that brings together all the personas working on the solution, brings solutions faster from the minds of the data scientists into production, deploys them, and facilitates their monitoring. That kind of agility makes it possible to expand this in the future from transactional intelligence to operational intelligence to customer intelligence. And with that, I'll hand it over to Chalapathy to talk about where this is going from here.

Thank you, Marius. So one of the key design points in taking a platform like C3.ai, containerizing it on Red Hat OpenShift, and adding the capabilities of software-defined memory for high performance, as Marius pointed out, is to create a level of automation that allows us to easily deploy this on multiple infrastructures. Swift is an on-prem deployment today. So we could take another institution which is also on-prem, and this is a journey to cloud for most financial institutions, and very simply, with a great deal of automation, deploy this infrastructure there. Another bank or financial institution could have a private cloud installation, and similarly we could do the same. Or we could have a lot of public information, unstructured data from Thomson Reuters or some other source on the public cloud, that carries signatures related to a particular outcome we are after. Agnostic to the type of infrastructure, with the automation that we are providing through this containerization, this workload can easily move to any such environment. Now, why is this important? There are a number of multi-bank initiatives today that are trying to utilize data across financial institutions to drive highly accurate anomaly detection for fraud and similar use cases.
There's a tri-bank initiative in the UK, for example, and similar ones in the Netherlands and Japan. These are all based on federated data architectures, intending to use things like multi-party computation and/or homomorphic encryption. What this platform will enable is truly federated model sharing, which in our view is a highly secure and privacy-preserving way of deploying federation. Imagine one scenario where we develop a highly accurate anomaly detection model on the rich Swift data. We move it over to a financial institution to specialize it, customizing it on proprietary data that they are not willing to share with the rest of the community, to improve performance for themselves; we then bring that capability back and use it for other institutions in an iterative fashion. TensorFlow Federated from Google has built a fantastic architecture to do this across all of our smartphones, delivering highly accurate capabilities on your respective phones. We believe the platform we are building enables this on hybrid multi-cloud in unprecedented ways. Now, another design point, and I think this is a big theme of this conference, is ensuring that anything we build at enterprise scale is built responsibly. What does that mean? Responsible AI, in our view, has five key dimensions. One is clearly accuracy of prediction: it does what it's supposed to do, better than rule-based systems. Second, the results are explainable, so that an institution clearly knows why a certain outcome occurred and what the key drivers of that outcome were. Third, fairness: to the extent possible given the training data we have, we are eliminating bias in the solution and ensuring a proper level of inclusiveness. Fourth, there's enough auditability in the system that, when needed, there is enough provenance to go back to the model and to the data that was used, to debug why a certain situation arose.
And lastly, as Marius pointed out, these had better be highly secure and privacy-preserving, ensuring that the data, versions of which we have to keep, is maintained in a highly privacy-preserving way. Now, a couple of examples. We've started, as I said, on an anomaly detection problem, which is an underpinning technology for detecting institutional fraud; we call it Payment Controls, a product in Swift today. Here I show the richness of the data: a sender, a beneficiary, and the corridor, which could have multiple hops, over which the payment is sent. We've taken a subset of the data, roughly about 200,000 interactions: the top 10 senders and about 8,000 beneficiaries, a large number of sender-beneficiary-currency triplets, and 120-plus currencies represented. Relative to the current rule-based implementation, which is the product today, we've done some early work on machine learning, data-driven approaches to improve upon it. With an XGBoost-type model, we are seeing close to a 200 percent improvement on the F1 measure, which, as all of you know, is the harmonic mean of precision and recall, a single measure used to evaluate machine learning systems. This is just the beginning. We are partnering with an advanced Department of Energy lab, Argonne National Laboratory, operated with the University of Chicago, which will be home to the first two-exaflop machine, Aurora; I think it will go live sometime early next year. Remember, an exaflop is a billion gigaflops, so this is massive compute power, and some of these labs were home to IBM's fastest machines, Summit and Sierra, 200-petaflop machines, earlier on. Our intent here is to really exploit the richness of the data on the Swift network. We are trying to model two attributes: one, the graphical nature of the network.
In this representation, called temporal graph networks, we look at each institution as a node and the traffic between them as edges. In addition, two, we model the temporal nature of the network: the beauty of temporal graph networks is that they can model the time-varying nature of the traffic on the edges over time. One key element of anomaly detection is the history of anomalies: time to the previous anomaly, or the time between anomalies, are significant features. In this particular formulation, without going into too much technical detail, there is a notion of a memory unit that allows you, in an almost pseudo-Markovian way, to represent long histories, so that you can capture features like the ones I just described, such as time to the previous anomalous transaction. Second, there's an encoder that produces what are called embeddings, for both nodes and edges. And there's a decoder, a traditional multilayer perceptron, which then trains for the anomalies on this network in a supervised fashion. Our early results use the area under the curve as a comprehensive measure, meaning the area under the ROC curve, which measures the trade-off between false positives and true positives. We are seeing significant improvements, from about 0.8 to 0.9, by varying the number of embeddings and the attention, that is, the span of history that we're looking at. So what we have described for you today is three things.
One, the richness of the Swift network: the number of institutions that operate on it, the dollar value that flows over it, the number of currencies, and the richness of the corridors. Two, the potential for creating high-performance, highly accurate solutions for transaction intelligence, like anomaly detection for fraud, using the enterprise-scale AI platform that we built with our partners C3.ai, Red Hat with OpenShift, and Kove. And three, our partnership with advanced R&D labs like Argonne to create what I call anomaly detection at line speed, which lets us attach a risk stratification score to every transaction over time that can be consumed by these financial institutions. Thank you.
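To make the idea of a memory unit and "time since the previous anomaly" features concrete, here is a minimal pure-Python sketch. This is an illustration under stated assumptions, not Swift's implementation: the institution names, the transaction stream, and the `CorridorMemory` class are all hypothetical, and a real temporal graph network would learn node and edge embeddings rather than use hand-crafted features like these.

```python
# Hedged sketch: a streaming scorer keeping a small per-corridor "memory"
# (time since the last anomalous transaction on a sender->beneficiary edge),
# in the spirit of the memory unit of the temporal graph network formulation.
from collections import defaultdict

class CorridorMemory:
    def __init__(self):
        self.last_anomaly_time = {}     # (sender, beneficiary) -> last anomaly timestamp
        self.volume = defaultdict(int)  # transactions seen per corridor

    def features(self, sender, beneficiary, t):
        """History-based features for a transaction on this corridor at time t."""
        edge = (sender, beneficiary)
        last = self.last_anomaly_time.get(edge)
        return {
            "time_since_last_anomaly": (t - last) if last is not None else float("inf"),
            "corridor_volume": self.volume[edge],
        }

    def update(self, sender, beneficiary, t, is_anomaly):
        """Fold the observed transaction (and its label) into the memory."""
        edge = (sender, beneficiary)
        self.volume[edge] += 1
        if is_anomaly:
            self.last_anomaly_time[edge] = t

# Hypothetical stream: (timestamp, sender, beneficiary, labeled_anomaly)
stream = [
    (1, "BANKA", "BANKB", False),
    (2, "BANKA", "BANKB", True),
    (5, "BANKA", "BANKB", False),
]

mem = CorridorMemory()
for t, s, b, y in stream:
    feats = mem.features(s, b, t)  # these would feed the decoder/MLP
    mem.update(s, b, t, y)

print(feats)  # features for the last transaction: anomaly was 3 ticks ago
```

In the actual formulation described above, this role is played by a learned memory state per node/edge, with an encoder producing embeddings and an attention span controlling how much history is considered; the sketch only shows why keeping per-edge state makes "time since the previous anomaly" cheap to compute at line speed.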