Welcome back to theCUBE's coverage here at re:Invent. We are in our 11th season of covering re:Invent. I'm John Furrier, host of theCUBE. Dave Vellante is at the analyst session. We had a set down on the show floor yesterday, and we're here four days covering re:Invent. The big story has been generative AI, but really the transformation of how businesses are now going to apply their data to solve business problems, but also change the interface to how people engage with applications, and also spur on a tsunami, a feeding frenzy of developer activity with large language models and foundation models. In this segment, we've got a featured guest from Mastercard: Manu Thapur, who's the CTO at Mastercard, and Drew Jenkins, AWS Alliance Leader for Persistent. Congratulations on your success, called out by the CEO on stage yesterday for Persistent saving 68% of developer productivity. It was 28%. Okay, 68% would be great, but it was 28%. And yeah, we're very happy to see that. And that's a real payback for the work you guys have been doing. I will get to that later. Manu, thanks for coming on, CTO of Mastercard, a well-known financial services company. You've been doing big data for a long time, right? Yes. And now AI, and generative AI in particular, highlights legit next-level capabilities that are coming online if you have the data. Absolutely. Take a minute to explain where you guys are at right now. Yeah, so I'm CTO for some of the value-added services at Mastercard. By that I mean, when you use a credit card, the transaction goes through the Mastercard network, but it also gets scored for probability of fraud using AI. We've been doing this for a very long time, a number of years, and on an ongoing basis about 2% of transactions are denied, which ends up saving merchants billions of dollars. We generate revenue that way, and it's growing very well. In fact, it's the fastest-growing portion of Mastercard's revenue.
Now with generative AI coming in last year, that has captured the imagination of a lot of people. All of us have used it, tried it in the form of ChatGPT. But some of the fundamental concepts of large language models, which can be used for language generation, can also be used for code generation. We're using that, looking at that, to improve developer productivity, of course. But we're also looking at ways to extend generative AI to improve the rest of the business. You know, it's interesting, and I really appreciate what you guys do. I know how hard it has been to do data at that scale. But generative AI, I mean, OpenAI and ChatGPT, that's a chatbot interface on data. Okay, so it's great, I like it. But what it did was it educated everybody. That's right. Like, this is new, it's magic. Streaming words on a screen that help me write, a Copilot kind of vibe. I think that educated the mainstream to an expectation, and there's this shift from SaaS, which is deterministic, to generative AI, which is non-deterministic. So there's a shift in culture and also a tech stack impact, which we saw on stage. You guys were on this early; I interviewed you on this last year. This is a change, it's now gone mainstream. What is your view on this at re:Invent now? Because last year there were just whispers of this; it wasn't even on the main stage. Yeah, that's true. When you say mainstream, it makes me think of a major premise of the keynote yesterday with the announcement of those three tiers for GenAI. To me, that helps to provide structure that I think is much needed. It helps to demystify it by showing the different tiers, the different layers, and where the AWS products and services for GenAI fit into them.
And what I found, or at least my opinion, is that it's a really interesting dichotomy, because it's adding structure, but at the same time that structure is based on options and flexibility, which is right up AWS's alley. It's very smart. And even how they started the keynote with storage. I mean, who reinvents storage? But S3 Express One Zone is a direct illustration of how you can build around S3 to make it go faster and reduce the cost. Now data is part of it. So, back to you, Manu. Big data, you've been doing it, but now it's changed. The usability is going to be faster: lower latency on packets, but also lower latency on answers. So you have two latency dimensions now. This is a new phenomenon. Yeah, absolutely right. We're deployed on AWS, and some of the things that have been barriers, especially for financial services companies going to the cloud, are primarily latency, throughput, and security. Those are the big hurdles that any financial services company needs to address before it can successfully go to the cloud. And if we consider our use case, the response has to come back within tens of milliseconds, because it's while the transaction is being approved that we have to do all the computation, do the calculation, and return the result while the customer is waiting, either at an online store or a physical store. So the big challenge was how do we do this at scale, with the latency required of us and the SLA we provide the customer. The good part was we were able to work very closely with AWS Professional Services to make that happen. We brought them in early in the game and re-architected the entire system from a monolithic system to more of a microservices-based system, which is elastic and cloud-native. That partnership was very, very successful, and we've since extended it. When did that journey start? What year was that?
That was almost three years back. Okay, got it. So you're still on that journey? We're still on that journey. We have deployed the application worldwide in multiple regions, and throughout that journey we've not only strengthened the partnership with AWS, but we brought in new partners like Persistent to help us get better at delivering, improve our velocity, and improve the speed with which we deploy software onto the cloud. What's the relationship with Persistent that you guys have? When did you bring them in, and what was their role? So Persistent was brought in, I would say, close to a year back to help develop our software. We are a product development company, and the good part about Persistent is that their focus is on product development, not being a generic system integrator, if you will. That matches our needs very well, so we were able to successfully partner with them and deliver results that are much better than some of the partners we were using earlier. Drew, talk about this. It's obviously a competitive advantage for Persistent, but you're starting to see the levels of providers out there from a professional services standpoint. Coding is huge, right? And knowing the tools and the platform also matters. How important is that for companies out there who are probably either already mandating a GenAI strategy, or doing bottoms-up data engineering, or, I'd say, replumbing? Whatever you want to call it: refactoring, resetting, rebuilding. I mean, we saw yesterday you can move a thousand Java apps in two days, and soon .NET to Linux, and to me in the keynote that was just like, okay, what's next, schema changes? Full schema change. I mean, this is where we are. Well, it is a big differentiator for us. We say we have engineering DNA, or we're born digital, right? And what that means is, with our focus on those capabilities, as it has been since the very beginning, we're really able to transform at scale.
We're able to provide customization at scale and help customers drive the outcomes they're looking to drive. Then you throw in our collaboration with AWS on CodeWhisperer, for example, and our expertise there, and that just pours gas on the fire when it comes to driving more efficiencies. And it basically accelerates the creative intellectual capital of the people. It certainly does. Humans plus AI is greater than AI, as we always say on theCUBE. It's true, it's true. What are the use cases that you're seeing being enabled now? Talk about some of the new things that are popping up, because the trend we're seeing on theCUBE as we're reporting is the GenAI goodness that's happening. There's low-hanging fruit: put a wrapper around something, data's lying around, as we were saying last night, and you turn that data into some value, either because you can do it faster, or through this new kind of reasoning, what I call native AI, new capabilities that emerge, which could be aggregation of data. What use cases are you seeing, from things you've instantly moved on to things you're looking at building that wouldn't have been possible? Yeah, so certainly the excitement in the industry is around using generative AI for a number of use cases, and some of them are low-hanging fruit. We spoke about developer productivity. There's customer experience: making it more personalized for customers, making better recommendations. And I think one of the interesting use cases, more for the financial domain than other domains, is to do the same thing for transactions as has been done for languages. If we look at how large language models work, they basically take a sequence of words and predict the next word, and once that's predicted, that sequence is taken and the next word is predicted. Now one can move from there to taking a sequence of transactions and then predicting the next transaction.
And if the actual transaction is far away, in terms of vector space, from the predicted transaction, we know that it has some probability of fraud. So applying the same techniques in other domain areas is, I think, going to be a growing area for the whole industry and the whole community, and I think that's one of the reasons why there's so much excitement in the space. I'm smiling because we love these new ideas that are actually possible now. They're attainable, but I have to flip the coin to the other side and talk about how the bad guys have the tools too, right? So how do you guys look at the security equation? Because right now, in this arms race of value, I can move fast, but so can the bad guys. And I'm not going to bring AGI into the conversation, because I don't think that's really the right conversation, it's not ready yet, but that is a big possibility; say zero-day exploits are being figured out. I mean, the hackers are using the tools too. How do you guys handle that? What's the mindset? What are some of the things you see on the other side of the security coin? No, that's absolutely right, because there's everything from deepfakes to impersonate somebody and steal their identity, to better ways for the bad guys to penetrate systems. So it's a very diverse set of new attacks that are coming, and will be coming, our way. In some sense it reminds me of this ongoing arms race, the Spy vs. Spy comics. You're always trying to outdo the other one. But... This makes it fun. Yeah, sure. Stressful. So we've made some acquisitions in that space also, in terms of making the security better. All our value-added services are focused on cyber and intelligence. And that's going to be a growing area and a very important area for us to continue to invest in, and make acquisitions in, in order to tilt that arms race in favor of the good guys versus the bad guys.
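As a rough illustration of the transaction-prediction idea Manu describes above, here is a minimal sketch. Everything in it is a hypothetical stand-in, not Mastercard's actual system: the toy three-feature embedding and the mean-of-history "predictor" substitute for a trained sequence model, but the shape of the technique is the same, embed each transaction as a vector, predict the next one from recent history, and treat the distance between the predicted and actual transaction as a fraud signal.

```python
import math

def embed(txn):
    # Hypothetical embedding: log-scaled amount, merchant category, hour of day.
    return [math.log1p(txn["amount"]), txn["category"] / 10.0, txn["hour"] / 24.0]

def predict_next(history):
    # Stand-in for a trained sequence model: the mean of the recent
    # transaction embeddings. A real system would use a learned predictor.
    vecs = [embed(t) for t in history]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def fraud_score(history, actual):
    # The farther the actual transaction lands from the predicted one in
    # vector space, the higher the fraud signal.
    return cosine_distance(predict_next(history), embed(actual))

history = [
    {"amount": 42.0, "category": 5, "hour": 12},
    {"amount": 38.5, "category": 5, "hour": 13},
    {"amount": 45.0, "category": 5, "hour": 18},
]
typical = {"amount": 40.0, "category": 5, "hour": 14}
unusual = {"amount": 9500.0, "category": 1, "hour": 3}

print(fraud_score(history, typical) < fraud_score(history, unusual))  # True
```

In a production setting the score would feed a threshold or downstream model that approves or denies the transaction within the tens-of-milliseconds window mentioned earlier; this sketch only shows the distance-based signal itself.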
And I've been a big advocate of the good guys winning with the tools that are out there. The question I have for you is about team makeups, how you put development teams together. I know you've got the partnership with Persistent. Data engineering is a big topic we've been having on theCUBE all week. Platform engineering has been happening for over a year; SRE, and we all know DevOps has happened, cloud has happened, okay. But platform engineering is the new term for setting up the platforms, whether it's a Kubernetes cluster, standing up more clusters and inference clusters. Data engineering is a concept we've been talking about for about a year. It's becoming sharper and more focused as data becomes the key architectural design piece to feed the developers, who want to shift left with data, or have more data in the pipeline where they can manage data policy at the point of coding. Whether it's code assistants or Q or whatever, we're going to see more of that embedded in. How do you look at the team makeup at Mastercard when you look at how to put that future team together? Same game, but maybe new formations, new plays. Yeah, so data is fundamentally the key to everything, right? The actual code that the model runs is not that large in terms of lines of code, but it's the data that makes the difference. And the fortunate part is that Mastercard has a huge amount of data worldwide that helps us get better and learn better, and machine learning is all about having good data and being able to leverage it successfully. So throughout the organization it's a fundamental core concept that we use, and in addition to that, we've also established at the global level a special data and analytics organization that focuses on leveraging not just Mastercard data but our customers' data too, and building solutions that are then very valuable for the end customer.
So Mastercard's revenue streams come from three sources. One is of course the network. The second is value-added services, and the third is data and analytics, and it's a huge focus for us. Drew, as we wind down the segment here, first of all, thank you for sharing those insights into your company's plans and how you approach it. Persistent, you guys have a great case here. We're starting to see AI take on hard problems, not just writing copy for marketing, right? You see that obviously in the demos. But with Q and all these things coming out, the hard AI problems are coming. Talk about what you guys do at this scale with other customers. What's the Persistent story now with this GenAI wave coming? You've got the experience coming into this wave, so that's going to give you an advantage. What are you guys doing now with customers, and Mastercard in particular? How's this relationship developing? Well, we do a lot of communicating with our customers. It's our collaboration with AWS, though, that is really helping us to tackle these new problems that are coming on. And what I mean by that is, the stakeholders from our CTO's office and the appropriate stakeholders at AWS have regular conversations, a regular cadence around GenAI, and through those forums we're able to come to them with things we're hearing from our customers. We're always seeking the voice of the customer. We use that information to let the folks at AWS know what's coming, what we're hearing. They react appropriately in terms of feature changes and developments and things like that.
But where the rubber meets the road is that we're trying to drive experimentation with our customers when it comes to GenAI. What I mean by that is, we just signed a strategic collaboration agreement with AWS specific to generative AI, and what that means is they are providing increased investment in Persistent in exchange for us making commitments to grow that business, and part of that is funding proofs of concept for customers like Mastercard and others. So we're really encouraging that. We want to plant those seeds. We want to help customers deploy their use cases faster, so they can drive the outcomes that they're looking for faster. POCs: we're back to the good motions of having proofs of concept, getting that experimentation going, iterating through it, and continuing. Final question for both of you guys as we wrap, because I want to get this out there. The role of inference is huge. We've said on theCUBE, and we heard this at KubeCon, that inference is the killer app in this whole GenAI thing, because you train stuff, but it's inference that gets to the value: low-latency answers that get you to a starting point to be creative or solve a problem faster, not waiting for a response and prompting again. We see a latency speed game in getting more answers faster, whether it's fraud here or something there. What's your view on inference? Do you agree it's the killer app? It's the key? Absolutely, absolutely, because what I described, what we deployed on AWS, is all the inferencing side. We actually developed the model and trained the model in our data centers. And once the model has been developed, trained, and tested, and we've done all the evaluations of which model is the best, then we take that and deploy it on AWS for inferencing. And that's the part that is most critical for us to deploy on AWS because of the number of benefits we get. One of those, as an example, is that many countries now have on-soil requirements.
So we have to be in a region and keep the data in that region. Being on AWS, we can leverage the data centers that AWS has throughout the world. And it gives us the latency, it gives us the throughput. So inferencing is what we have on AWS, and the rest, the training and the model creation, is still in our data centers. We could unpack that for another hour. Manu, Drew, thanks for coming on theCUBE. Really appreciate it. Thank you very much. And good to see you again. And congratulations on the callout in the keynote by Adam Selipsky and the success you guys have had with your coding productivity. Appreciate it. Thank you very much. theCUBE coverage here on the ground, on location. We've got our Palo Alto studio with the live stream. Back to you guys in Palo Alto. We'll be back with more on-location coverage after this short break.