Thanks, Juliana. And thanks, everyone, for attending our session on applying AI to key challenges in financial services. Today we're going to see two demos of how Red Hat can help customers build AI-powered solutions. My name is Marius Bogovici. I'm a Chief Solutions Architect for Financial Services at Red Hat, and I work with the top 30 financial institutions in the US and Canada, particularly on digital transformation and on helping them build AI-based solutions. I'm joined today by my colleague, Sadhana Nandakumar. I'll let her take a second to introduce herself.

Thanks, Marius. Hi, this is Sadhana Nandakumar, and I'm a Senior Solutions Architect at Red Hat. I work alongside Marius, specializing in the application development portfolio, and I help create solutions for our banking customers. Glad to be here today. Over to you, Marius.

Thanks, Sadhana. Once again, thanks, everyone. We're going to spend a little time setting up the problem and then dive into the two demos we'd like to show you. Let me quickly recap the context of this conversation. You've probably heard that AI is redefining success for financial institutions. What does that mean? Banks have traditionally relied on economies of scale, on physical footprint, and on relationship exclusivity to keep their customer base. The larger you were, the more likely you were to retain customers and to succeed. But emerging technologies, and AI in particular, have changed that dynamic. Instead of the scale of your assets, what matters is the efficiency with which you use your data. Instead of providing services at scale, it's how well you serve your customers and how well you understand what they need. In fact, one of the demos you will see today addresses exactly that space. Understanding this and applying artificial intelligence cleverly creates differentiators, either by helping financial institutions work more efficiently or by helping them capture and define new ways of doing business. The operational aspects of interactions become commoditized and more efficient, while new business models and a new understanding of the customer create new opportunities for banks. And that is the main question: how can enterprises be more efficient, and how can they do better? The starting point, and the key to our demo today, is that building models and having the right algorithms for the business is definitely a success criterion and is extremely important. But as you can see in this diagram, adapted from the well-known paper "Hidden Technical Debt in Machine Learning Systems," the model is just a small part of the whole system. Intelligent solutions require collecting the right data and verifying it, but also the ability to build and deploy models rapidly at scale and make them work well with the rest of the application. That is essentially the product of interdisciplinary work, the collaboration of multiple teams: data engineers who collect the data, transform it, and prepare it;
data scientists who analyze it; application developers who take the models the data scientists have developed and incorporate them into business solutions, which is ultimately the goal of the exercise; and, last but not least, the infrastructure engineers needed to operate these models at scale. For that to happen, we need a way to apply the workflow you see at the top, from setting goals to gathering and preparing data, developing the machine learning model, and all the other activities, in a consistent manner that brings everyone onto the same page. This is where Red Hat's portfolio can help, not only in building the solutions, but in creating a platform that allows these different teams to collaborate and that covers every aspect of the process: easy access to infrastructure, building models using containerized solutions, and running machine learning software tools. The reflection of this concept is the Open Data Hub architecture, a reference open source architecture put together by Red Hat as a community project. It illustrates how a combination of open source software, running on top of a container orchestration platform like OpenShift, can help build complex AI solutions and help the different personas that are part of this lifecycle interact with each other.

What you're going to see today in the first demo is how this platform comes together to solve a problem, from the data analysis all the way to models running in production. We're going to see a data scientist doing their work in Jupyter Notebooks, creating the algorithms that train the models. We're going to see a CI/CD process using OpenShift Pipelines that takes the result of that work from Git and deploys it as a running service using OpenShift Serverless. And finally, we're going to see how that model can be monitored. The problem domain we're tackling is fraud prevention. So let's go to the demo and take a quick look at what that process entails.

We have the Open Data Hub operator already installed on OpenShift, and I'm going to be working in the user space for user 2. When user 2 logs in, they have access to a number of components. In this case, I'm going to open the Open Data Hub dashboard and launch JupyterHub. JupyterHub allows me to create running Jupyter Notebooks that will be the environment in which the data scientists do their data exploration and work. This is very important: by using containers and a container orchestration platform, I get a more efficient environment. Instead of wondering how to provision the proper libraries on my laptop or how to get access to resources like GPUs, I can defer all that work to OpenShift and, for example, use a container image that already has all the libraries loaded in. Now that I've launched JupyterHub, I can start the different activities as a data scientist. In this case, we're using a GitHub repository that already has the notebooks in it.
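As a quick aside, a first cell in one of these pre-built notebook environments might simply verify what the image provides. This is a minimal sketch; the library list and the GPU check are assumptions about the image contents, not its actual definition.

```python
# Minimal sketch: confirm the libraries baked into the notebook image and
# check whether a GPU was allocated to this notebook pod by OpenShift.
# The package list below is an assumption, not the real image manifest.
import importlib.metadata as md
import shutil

for pkg in ["pandas", "numpy", "scikit-learn"]:
    try:
        print(f"{pkg}: {md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed in this image")

# nvidia-smi is typically present in GPU-enabled images; its absence here
# simply means no GPU was requested for this notebook.
print("GPU runtime available:", shutil.which("nvidia-smi") is not None)
```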
But you can imagine the data scientist coming in, building these different notebooks, and codifying their findings to share with the rest of the team. I can start by doing the exploratory data analysis to understand how the data is shaped and what the structure of the different transactions, legitimate and fraudulent, looks like. I can understand their distribution and start thinking about what features or characteristics these transactions have. Once I start exploring this data, cleaning it, and categorizing the different features, I end up with a combination of notes and Python code that implement the algorithms, for example for extracting features and processing the data. Finally, once the features have been identified, I can use them to train a model. In this example we have two models, a logistic regression and a random forest, to train against the data.

Now, the key piece is that so far, everything here exists in the Git repository. It is algorithms, data, and information that someone will have to put together in order to build and run this as a service. So the next step, once everything is defined, is to take those notebooks, execute them, and create actual running services. What I have here is an OpenShift pipeline that takes these notebooks from their GitHub repository and turns them into a container image that can be deployed as a service to run model inference. Let's start the pipeline run. It will run for a while, so I will just get it started and show you a few of the options. For example, I have a builder image that does the build, a list of notebooks containing the code our data scientist has developed, and two other parameters: a base model image that has the libraries, and the model image, which is the target where the containerized image of the service will be built, along with the source for the build process. When I start this process, it will run for about 10 minutes, and we don't have time for that today, so I will just show you the final result of the pipeline, which looks something like this. When it finished, it actually built the model. By looking at the logs, I can see that it has applied the code that was in the notebooks to train this model with the data provided to the build, and it has turned it into a container image. In this case, the container image has been deployed as a serverless service in my application. This makes it easy for developers to use the result of the data scientists' work: they just need to know where the code is and have the abstraction that runs the service. It also makes it easy for data scientists to hand over their work to developers, knowing that there's a process that takes everything they did, turns it into a container image, and runs it as a service. And indeed, I can see that the result of this is an OpenShift Serverless service that runs here and can serve inference.
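Stepping back to the training notebooks for a moment, a minimal sketch of training the two models named above, a logistic regression and a random forest, might look like the following. The dataset path, feature columns, and label name are placeholders rather than the demo's actual data.

```python
# Minimal sketch: train the two model types mentioned in the demo
# (logistic regression and random forest) on a labeled transaction set.
# "transactions.csv" and the "is_fraud" label column are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("transactions.csv")          # engineered features + label
X = df.drop(columns=["is_fraud"])
y = df["is_fraud"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

for name, model in [
    ("logistic_regression", LogisticRegression(max_iter=1000)),
    ("random_forest", RandomForestClassifier(n_estimators=100, random_state=42)),
]:
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```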
Let me do one more quick thing to show you how this works. In this notebook, I have a way to interact with the service you see here, using its URL, and to make calls that evaluate various transactions. So I'm just going to run this real quick, and you can see, as we go through the notebook, the result of the evaluation as well as a few other tests. These tests are important because they cover the last part of what we want to show, which is monitoring: everything the service does is published as metrics to Prometheus. I can use Prometheus here to evaluate the different prediction results and, for example, to monitor the distribution of legitimate versus fraudulent transactions. If you follow this space, you know that, generally speaking, fraudulent and legitimate transactions have a typical distribution: I expect far more legitimate than fraudulent transactions. So when the ratio starts to shift, it might be a case where the model has drifted and is no longer applicable. By monitoring the data in Prometheus, I can see whether my model is still applicable, or whether I need to retrain or reevaluate it. This concludes our first demo. To recap, you've seen how a data scientist can use JupyterLab running on OpenShift to create data science environments that can allocate expensive resources like memory, CPU, and GPUs, and work in a centralized environment; how they can share the results of their work with other development teams; how they can turn this into a running model using a CI/CD tool; and how a deployed service can be monitored using Prometheus. Next, I'm going to hand it over to Sadhana for the second part of this demo.

Thanks, Marius. While we are switching screens, I just wanted to set some context. Essentially, we're going to talk about another use case where AI plays a key role. Personalization is a topic of discussion across every industry today, and in the post-pandemic era we are in, it is more relevant than ever for businesses to make sure that distance does not mean a discount on the services the customer deserves. While the first use case spoke about fraud, the second one is about improving overall customer satisfaction by providing a better digital experience to customers. So let's keep going here. Across the research happening around the globe, one thing is evident: when you provide the right level of service to customers at the right time, that differentiates your brand and significantly increases customer loyalty. Essentially, what we are trying to do here is bring in the effectiveness provided by AI and combine it with the business user's best judgment to create what we call AI-powered business decisions. The reason I say AI-powered is that you are getting the intelligence from the historical data and the predictive data, but you still give the steering wheel to your business users so that they can react to change when they need to.
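Before the second demo, here is a rough sketch of the two interactions Marius just described: calling the deployed inference endpoint and querying Prometheus for the fraud-to-legitimate ratio used as a drift signal. The service URL, request payload, and metric names are assumptions, not the ones actually exposed by the demo service.

```python
# Sketch: score a transaction against the serverless inference service and
# check the fraud ratio in Prometheus. URLs, payload shape, and the
# "predictions_total" metric name are hypothetical placeholders.
import requests

INFERENCE_URL = "https://fraud-model.example.apps.cluster.local/predict"
PROMETHEUS_URL = "https://prometheus.example.apps.cluster.local"

transaction = {"amount": 1250.0, "merchant_category": "airline", "hour": 23}
resp = requests.post(INFERENCE_URL, json=transaction, timeout=10)
print("prediction:", resp.json())

# Ratio of transactions flagged as fraud over the last hour; a shifting
# ratio can be an early sign of model drift.
query = (
    'sum(increase(predictions_total{result="fraud"}[1h])) / '
    'sum(increase(predictions_total[1h]))'
)
metrics = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10
).json()
print("fraud ratio (last hour):", metrics["data"]["result"])
```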
So as much as the AI is providing them with the right path forward, they also have the discretion to determine an alternate flow if needed. The use case we'll look at today is providing the best offers to retail banking customers based on their past purchases, the likelihood of a customer accepting an offer of a particular type, and the business user's discretion. When you're thinking about a solution of this nature, something that's critical is traceability into the whole process. At a very high-level logical view, all of the customer's interactions, the various events, come in through different channels into an event stream. Each event is then analyzed by that AI-powered decisioning component I spoke about, which determines the best offers to extend to the customer. Typically, in a use case of this nature, multiple teams are involved: the data engineering and data science teams look at the overall production data and come up with patterns and predictions, while the intelligent business application provides the service to the customer in real time. As soon as an event occurs, it looks at the behavior patterns of the customer and responds in real time, providing that offer right when they actually need it and when they are best suited to accept it.

Here's a high-level view of the architecture. The event stream forms the backbone of this architecture, and as events come in, as I mentioned, they are pulled in by a decision automation component. Behind that, there is sophistication with respect to data: every customer has a customer 360-degree profile, which is a combination of data spread across the organization. The ability to pull in that information, along with a predictive profile, and provide it in an abstracted way is critical in identifying the right patterns of behavior and the right offer for the customer. With that overview, let's look at this in action. I just want to call out that all of the data science pieces Marius walked through in the first demo are applicable here as well: the same data science experience, the same way of creating the notebooks, the same flexibility in deployment.

In this use case, you have a banking customer, Sarah, who is a platinum card customer, and her spending profile shows she has predominantly made airline purchases. With that profile, let's go into an airline booking webpage and quickly make a purchase. As soon as the purchase is performed, you can see that an airline transaction event is put into the event stream. This event is then acted upon by our decisioning component, which generates an offer on the fly, in real time. When we go back and refresh the offers section, you can see that she has received an offer to upgrade to an airline card. Now, how did this happen? To better understand that, I have a decision model represented here in a standard notation, a graphical notation that is easier for business users to understand.
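Going back to the purchase event that was just published: a rough sketch of putting such an event onto a Kafka-style event stream (for example, AMQ Streams) might look like the following. The broker address, topic name, and event fields are hypothetical placeholders, not the demo's actual schema.

```python
# Sketch: publish an airline purchase event to the event stream backbone.
# Assumes a Kafka-compatible broker; topic, bootstrap address, and field
# names are illustrative only.
import json
from datetime import datetime, timezone

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="my-cluster-kafka-bootstrap:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {
    "customer_id": "89920",
    "type": "card_purchase",
    "merchant_category": "airline",
    "amount": 420.00,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# The decisioning component consumes from this topic and reacts in real time.
producer.send("customer-transactions", value=event)
producer.flush()
```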
So essentially we're combining several different factors relevant to the customer's profile and pulling in a customer segmentation model, which is a machine learning model. You're combining profile information, historical information, and predictive information, all of which then determines the offer. In this case, we had an airline purchase from a customer who is a platinum customer, and her purchase history shows she has predominantly made airline purchases in the past, and hence that offer was extended to her. Now, to better understand this, we spoke about the aspect of traceability, which is why I think it's important to talk about how you can use the event stream to do analytics. This again feeds back into the data science loop to make sure that the models are constantly being made better and better. Let's filter by this particular user, user 89920, and you can clearly see that this person was given this offer because of their status, their predictive profile, and their historical profile. Essentially, you get traceability into the decisions being made, which provides better control with respect to data privacy regulations and makes it much more seamless to release offers or do marketing based on customer data. I think this sums up the overall architecture here. You can see that several products from our portfolio come together to create a solution of this nature, and the open data science concepts that Marius spoke about at the beginning of this conversation add that data science glue to the rest of the application development stack, making sure the intelligence can be put to use where it is needed most. So with that, I'm going to hand it back to Marius for some final thoughts.
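As a closing illustration of the decisioning step Sadhana described, here is a highly simplified Python stand-in for the offer decision: it combines the card tier, a mocked segmentation result, and the incoming purchase event. The real demo uses a graphical decision model and a separate machine learning segmentation model, so the segments, thresholds, and offer names below are purely illustrative assumptions.

```python
# Simplified stand-in for the offer decision model: combine profile,
# predictive (segmentation), and event data to pick an offer. All rules,
# segments, and offer names are illustrative, not the demo's actual logic.
from dataclasses import dataclass
from typing import Optional


@dataclass
class CustomerProfile:
    customer_id: str
    card_tier: str              # e.g. "platinum", "gold"
    airline_spend_ratio: float  # share of historical spend on airlines


def segment(profile: CustomerProfile) -> str:
    # Placeholder for the ML customer segmentation model.
    return "frequent_flyer" if profile.airline_spend_ratio > 0.5 else "general"


def decide_offer(profile: CustomerProfile, event: dict) -> Optional[str]:
    # Business rules combining profile, predictive, and event information.
    if (
        event.get("merchant_category") == "airline"
        and profile.card_tier == "platinum"
        and segment(profile) == "frequent_flyer"
    ):
        return "airline_card_upgrade"
    return None


sarah = CustomerProfile("89920", "platinum", airline_spend_ratio=0.7)
print(decide_offer(sarah, {"merchant_category": "airline", "amount": 420.0}))
```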