Hello everyone, welcome to theCUBE's coverage of High Performance 2023, covering everything in HPC, machine learning, AI, high-performance analytics, and quantum computing, part of our ISC 2023 event coverage. This segment is Dell, NVIDIA, and financial services. We've got three great guests here on the power panel: Andrew Lou, product manager with Dell; Prabhu Ramamoorthy, who handles customer, partner, and developer relations at NVIDIA; and Peter Nabicht, president of STAC, the Securities Technology Analysis Center. Gentlemen, thanks for coming on. This is a super power panel. Thanks for joining me.

Thanks, John.

So first question: how is HPC being used in financial services? Which use cases are you seeing as most interesting right now for financial companies?

Currently we see HPC being used for important applications like credit valuation adjustment. It's being used for risk pricing and price discovery in capital markets; that's what we call quantitative finance, in HPC parlance. Customers are doing large-scale Monte Carlo simulations, as well as doing risk valuations in real time, so they can get ahead of the competition. And where we're seeing this go is that people want to do more of this combined with artificial intelligence and other technologies. For example, they want to use large language models for trading signals and build their own algos in quantitative finance.

And we all know high-performance trading has always been the edge. Every millisecond, every nanosecond counts. Now AI is coming into the mix. We're seeing a lot of action there. We're going to come back to that. Peter, what's STAC? Tell us about the company and what you guys do. You're in the middle of all the action. How did it get started?

Yeah, STAC's main goal is to improve technology discovery and assessment for the finance industry. And we do that in two ways.
We do that through dialogue, but we also do it through research. And our research is guided by the STAC Benchmark Council, which is over 50 of the leading technology vendors, as well as 400-plus financial firms. The way STAC got started was that it was really expensive for people to do technology evaluation. In fact, when STAC was started, I was an early member; I was CTO at a trading firm, and I had all of my high-priced engineers doing nothing but evaluating technology and running these bake-offs. And the idea here is that we all have similar workloads, workloads that are exactly what we do on a daily basis or a great proxy for that. So STAC came along and helped bring together the Benchmark Council to define those workloads so that we could do apples-to-apples comparisons of different technology stacks: how they solve the problems, how quickly they do it, how much throughput they can get, and how efficiently they do it. And now, 15 years later, that's what we do in a variety of areas, including HPC and AI.

Real quick, while we have you, a segue there on the market: what are you seeing now? Obviously the edge is so important, getting that technical edge, and those benchmarks. How frequently is the benchmarking being done? What are some of the current state-of-the-art criteria you're seeing out in the market right now with all this change?

Yeah, I think the most important thing we're seeing right now is that analytics, which has always been considered not real-time, not part of trading, but more back-testing, historical research, the things that are outside of the trade-critical path, is starting to merge with that trade-critical path. People need to do more real-time HPC, more real-time analysis of data.
And so, where we used to talk in nanoseconds for electronic trading, and we would talk milliseconds or seconds for analytics, those are merging together. And it's not because the real-time side is getting slower; it's because the analytics has to get faster and faster and faster.

You nailed it. This is so exciting. That's why I love this power panel, because in the old days, old school, it was about moving packets around as fast as possible. Now it's data, and the right data, mixing together. We're going to get into that. Andrew, Dell's been a player in financial services; you can go into any data center and see Dell servers everywhere. You guys have been in this game for a while. What are you doing now for financial companies? What successes are you seeing right now?

Yeah, we are heavily investing in the financial services industry. We've been working closely with NVIDIA and STAC; Peter and Prabhu have been great partners to us. We've been listening to our customers, discovering what they want in their products and solutions. And we've been putting together NVIDIA products like the A100 GPU with PowerEdge servers like the XE8545, which we launched last November as the Dell Validated Design for risk assessment. And that, and Peter can explain more on this, was designed to measure system performance against Monte Carlo simulations. That system was designed by our engineers, put together, and tested to help clients have a baseline understanding of where our systems function best.

You know, Peter, I want to come back to you. We've been doing theCUBE for 13 years, and I can tell you, go back five years or so, all the marketing people said it's all about the solutions, not about the hardware; don't talk speeds and feeds, don't confuse anyone. I'll tell you right now, people want to know what the speeds and feeds are, because they're impacting the solutions.
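As a rough illustration of the Monte Carlo risk workloads mentioned above, here is a minimal sketch: pricing a European call option under geometric Brownian motion. All parameters are hypothetical and the model is deliberately simplified; production CVA and Greeks calculations of the kind benchmarked in STAC-A2 are far more involved.

```python
import numpy as np

def monte_carlo_price(s0, strike, rate, vol, maturity, n_paths=100_000, seed=42):
    """Price a European call under geometric Brownian motion by Monte Carlo."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal asset price under the risk-neutral measure.
    st = s0 * np.exp((rate - 0.5 * vol**2) * maturity + vol * np.sqrt(maturity) * z)
    payoff = np.maximum(st - strike, 0.0)
    # Discount the average payoff back to today.
    return np.exp(-rate * maturity) * payoff.mean()

price = monte_carlo_price(s0=100.0, strike=105.0, rate=0.03, vol=0.2, maturity=1.0)
print(round(price, 2))
```

Scaling this from one instrument to millions of paths across whole portfolios, with Greeks and counterparty exposure on top, is what drives the GPU demand the panel is describing.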
All right, you're seeing more discussions around silicon, chips, performance. Now they want to know: is performance the baseline? This is the new thing. No one wants to run their app, or their workload, on slower infrastructure. So you've done a lot of benchmark testing with Dell. What's the current state of the art in terms of benchmark standards? What do people need to look at? What's your take on this?

Yeah, I definitely agree with you: it is about speeds and feeds, it is about speed. I witnessed that shift over the last five years too, but I can tell you that in capital markets and financial services, it never went away. Speed and throughput are the ante; they're table stakes for everything that's done in capital markets. And what we've seen, and it's been great to see Dell over the last couple of years, and NVIDIA as well, really step up here, is them giving financial firms the data they need to make decisions on the solutions they're looking to procure. In fact, the thing they care most about, what we see a lot of financial firms asking, is: how fast can it crank through the data I have? How fast can it do these workloads? Because even though they're not real-time, even if they're offline, they still have to get done between close and open. And between close and open isn't overnight; it's an hour. You've got an hour. So people are really concerned about how much data they can get through. And they also want to know whether they can do it efficiently, because the data center space they're in is very limited, and the power is very limited. And I think that's one of the things Dell was pleased about with their last audit. In their last audit of STAC-A2, which is our HPC benchmark, they had the highest space efficiency, the fastest cold times in the baseline Greeks benchmarks, and the fastest cold time in the large Greeks benchmarks.
And just so people don't get confused about what cold means: there are kind of two ways to do HPC in a cluster. One is you have all of your code already loaded, primed, and ready to go. That's hot, and that makes sense if you're only doing one task with your system. But if you're doing many tasks with your system and constantly changing out what that task is, you're starting from a place of cold. And that's where Dell shined in their last benchmark, with the cold times and with the space efficiency.

I love that point about the close-to-open window on Wall Street. I talked to a CIO and they said every minute's worth millions of dollars if I can get an extra minute. That's how important time is. This is really cool. Prabhu, talk about the NVIDIA part of this, because you guys have a big presence in finance; the GPUs are hot, everyone wants NVIDIA, your software is booming. What are you doing right now with Dell? Talk about your solution and your relationship with Dell.

Thank you, John. NVIDIA is in the top 10 in the US stock market by market cap, and this is all due to AI leadership. We have a close partnership with Dell, where we launch these products together, and we are essentially positioned for success. We play in three areas. One is quantitative finance, as Peter mentioned, where we do all the risk calculations. Clients need real workloads, and they vary; the benchmarks provide a way for clients to test actual workloads like credit valuation adjustment, CVA. In addition to that, we also have an area around data movement, ETL, extract, transform, load, where NVIDIA, along with Dell, serves that space. And obviously, we live in this age of AI where everybody has heard about ChatGPT, so NVIDIA provides such solutions in the AI market.
We call these neural nets because this is how the brain functions, hence the term neural nets. And we have been leading the pack here, where every client, every financial institution, hedge fund, and bank is building large language models. These are essentially models for their own internal purposes, where they're using them to build trading signals. So they're actually using them to generate trading signals at the speed of light. You need to know when Silicon Valley Bank is not doing well, when First Republic Bank is not doing well. We've had big companies like Credit Suisse hit with market volatility and runs. And NVIDIA provides that AI layer with which you can do all three solutions: you can do quantitative finance, you can move your data and do machine learning, and you can do neural nets, like ChatGPT. And essentially we cover all three of these workloads, so the client gets the maximum bang for the buck. And who would not want such hardware, where Dell along with NVIDIA provides a solution that can be used for both algo trading in capital markets as well as large language models and generative AI? That's where we find ourselves: at this intersection where clients are building all these solutions, mixing all these models, and essentially building the platform of the future.

This is great. This is the conversation where we get to the next-gen advancements in the financial industry. Peter, you mentioned this earlier: packets were the first real innovation, the speed of the packet, the speed of light, getting closer to the transactional value in trading, high-frequency trading; all that generation happened. Now we're in a new era where the data matters.
This is where data processing, and having access to the data, is really a whole nother level on top of the frequency, the signals, these actionable points that you can take. Prabhu was just mentioning that. This is the new thing. It's not necessarily new, but it's gettable now. It's mainstream. What's the impact to the financial industry? I won't say democratization, everyone's using that word, but it is kind of being democratized, and the financial industry will capitalize on it first. What are they going to do with it? What's the trend? What are they going to do with this new advancement, HPC intersecting AI, which is more compute, more power, more speed, with AI? This is the real confluence.

Yeah, it's really interesting, John, because the more HPC power you have, the more quantitative analytics you can do, the more research you can do, and the sooner you can get through ideas. There's never been a shortage of quantitative ideas on Wall Street or anywhere else. The problem has been being able to validate those ideas and test them. So the faster people can validate and test them, the faster they can get through the 99 bad ideas and get to the profitable one. Now, that doesn't mean there's going to be one profitable trade out there dominating. Speed is interesting, because only one company can be the fastest; only 10 can be the top 10. But the more quantitative strategies there are in the market, the more companies can compete, and also the more diversity of liquidity in the marketplace, which actually makes for a safer, more stable market. So what we actually see is that diversification of strategy, which is being led by quantitative strategy, by the ability to do this HPC, means we're going to have interactions in the market for different reasons, which makes for a healthier market.
And for everybody at home who's maybe not in finance, who is trading on their own or worried about their 401(k) or something, it actually gives you better, tighter markets, so that you're getting better prices when you get in and out of positions.

Yeah, and I love that. I would like to add to that. On top of it, we live in a world where market data alone doesn't get you any advantage. Finance has always been behavioral, motivated by psychology. And so what we get here is a new area in finance called alternative data. Companies are able to generate data at a consistent speed, turn that data into signals, and trade based on those signals. So we live in this new world where you're able to generate new data at the speed of light, and, working along with Dell, you are able to generate new signals that tell you where to invest. These are being driven by such models, and they are being used by downstream algorithmic models. And there you go: you can make alpha, which is what finance really wants, you want to make excess returns, and you want beta, enhanced index returns. And we are in that stage now, where Dell has been a great partner.

This is a great segue. This is a really important point, because as Peter was saying, you get more speed, more insights at scale. Now you also have the other end of the spectrum: misinformation. So back to Peter's point about the human factor, at the end of the day, it's the opportunity. It could be a day trade, it could be the next hedge fund; who knows, if they get the right signals? It's going to be a really interesting game, and again, it's going to come back down to scaling the intellectual capital of the financial professional or team, which brings us back full circle to the intelligence aspect of it.
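The alpha and beta mentioned here can be estimated with a simple linear regression of strategy returns on market returns: the slope is beta (market exposure) and the intercept is alpha (excess return). The return series below are synthetic and purely illustrative, not real market data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical daily returns: a market index, and a strategy that is
# built (by construction) with beta ~1.2 and a small positive alpha.
market = rng.normal(0.0004, 0.01, 250)
strategy = 0.0002 + 1.2 * market + rng.normal(0.0, 0.005, 250)

# Regress strategy on market: slope = beta, intercept = alpha.
beta, alpha = np.polyfit(market, strategy, 1)
print(round(beta, 2), round(alpha, 5))
```

With 250 observations the fitted beta lands near the 1.2 used to generate the data; on real return series the same regression is the standard first-pass decomposition of a strategy into market exposure and excess return.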
You get the speed of the network, the speed of the compute, and access to the right data, or the wrong data; that's got to be figured out. These are new concepts because it's at scale. Now, AI has been done before. What's the big pivot here? What's the factor for these companies? If you could look at it as an industry, what's the most important thing people are paying attention to right now as they put their toe in the water, or jump fully in, to this new world?

Thank you, John. For us, it's basically large language models. Every client wants to build this platform; either you're in business or you're out of business. For example, Credit Suisse, which is a major bank, was hit with market volatility. We have seen that clients who do not have such technologies are going out of business. And it's not only about making profits: you want to make sure that you manage your downside risk, and you also want to make sure that you're able to capitalize on market opportunities and watch out for your own risk areas. We call this using large language models and NLP for early warning indicators. And this intersection of HPC and AI is helping us build that.

Peter, what's your take on it?

John, sorry, go ahead.

Oh yeah, large language models are red hot right now, and they're the talk across all industries, not just capital markets and finance. But there's another interesting trend that I'm seeing as well. It's a little counter to large language models; I'm not saying those are going away, it's in parallel with them. Large language models are massive training jobs that just go on and on; they need to get done and you need a lot of power to get through them. But the other trend I'm seeing in finance is with smaller quantitative models: when you're backtesting over years, oftentimes they're training and retraining daily.
So if you're backtesting over 10 years of data, you're retraining many, many models for every day of that backtested history, which means that instead of going very deep training one model, you need the ability to scale broadly, training hundreds in parallel over thousands of days of data and updating with every new day. So we're seeing this massive need for scale in the financial services industry in order to do the training, because they're training so many models. And then once they've trained a model, they need to test it on a vast amount of data to make sure it's not just an anomaly that was successful one day and blows up the next. So it's massive training and then massive backtesting off of that training. And that's really driving a need for high-performance compute.

Andrew, you're in the business: you've got to get access to the data, tune it, operationalize it, then tune it again and let it run. I mean, this is more compute. You're in the middle of it.

100%. My colleagues have been talking about the hot new trends, and of course we're paying attention to those, but we also don't want to get distracted: there's always a consistent baseline of need for HPC within finance on the less, for lack of a better term, sexy workloads, such as banks and insurance companies. How are people pricing loans, the risk of default, et cetera? And then Prabhu mentioned behavioral: how does an AI machine or entity start to factor personality into decision making for these banks, not just based on hard metrics and data? How do you actually quantify some of the softer stuff that goes into that? And that's what we're doing with our hardware. We've got four-way and eight-way boxes coming out; they're in the works, et cetera.
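Stepping back to Peter's backtesting point: the retrain-every-day loop he describes can be sketched as a toy walk-forward harness, one (train, test) pair per day of history. The price series and the "model" here (a one-line momentum sign rule trained on a rolling window) are hypothetical stand-ins; a real shop would be fitting hundreds of genuine models in parallel at each step, which is exactly the scale-out pressure he is describing.

```python
import numpy as np

def walk_forward_backtest(prices, train_window=60):
    """Retrain a tiny model each day on a rolling window, then test it
    on the next, unseen day's return: one (train, predict) pair per day."""
    returns = np.diff(prices) / prices[:-1]
    pnl = []
    for t in range(train_window, len(returns) - 1):
        window = returns[t - train_window:t]
        # "Model": trade in the direction of the window's mean return.
        signal = np.sign(window.mean())
        # Out-of-sample test on the next day.
        pnl.append(signal * returns[t])
    return np.array(pnl)

rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.01, 500))
daily_pnl = walk_forward_backtest(prices)
print(len(daily_pnl), round(daily_pnl.sum(), 4))
```

Even this toy version makes the compute shape clear: the work grows with (days of history) x (models per day), and each day's retrain is independent, so it parallelizes broadly rather than deeply.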
We've got other solutions that are going to be launched later this year, again, thanks to NVIDIA and STAC. Of course, Peter and STAC are an objective company; they have to maintain that objectivity. But Prabhu and I are working to figure out how we maximize performance on everything, how we get the best out of everything we do. And that's why we exist.

Andrew, you brought up a really good point. You've got to refresh the speeds and feeds and get that new hardware in there, because here's what we just brought up in that last segment: everything's sexy now. Look at the bank failures that were mentioned. Those small areas matter; they could have been identified with the signaling. Every workload now is part of that system. This is where I think it gets interesting, because what was once a not-sexy, small little workload could have a huge impact when you start to think about the scale of the data. I mean, that's partly why those banks failed: they just weren't paying attention. They tried their best, they were trying hard, they just missed it. It was a complete miss. Caught completely flat.

Well, it's funny how these things happen in life, right? I know we're not in the fashion industry, but it's like '90s fashion coming back in vogue now. It's all a big cycle, like Peter can tell you.

Well, it never went away, in my book. Hardware always matters. It just got abstracted away with cloud; cloud and DevOps abstracted away all that under-the-covers stuff, but it still happened. There were still servers. Now you're looking at silicon-to-app, supercomputing, super cloud, super apps coming. We see this clearly. And I think financial services is a tell sign; it's always been an indicator. So for final thoughts: what's that indicator, guys?
If each of you can share your thoughts: what's the key tell sign right now that financial services are telegraphing to the marketplace? What would it be?

John, I can definitely add here. I'm a quant by background; I've worked on all the use cases. I'm a CFA, a chartered financial analyst. And I've never seen an era where quant is so much in play. You could identify crypto companies such as FTX failing, and you need to combine both the HPC quant part and the AI part; otherwise you're going to continue to see more banks not doing well and risk not being managed. So we are in this era where the intersection has already happened. It's not in the future, it's now. Firms that have been able to capitalize on it are doing well, and we look to continue this partnership with NVIDIA and Dell to take these solutions to clients. We have been seeing demand from pretty much every client in the market.

That's awesome. Great, totally agree. Andrew, what's your take on the tell sign from the financial services world that the future is here?

Well, you know, I'm going to take a step back and answer that by saying that it really goes back to what you were saying: government regulations, or the lack thereof, and how that's going to influence the wider ecosystem. Because what we're seeing with the bank failures is that the regulations are starting to kick back up, which will obviously drive demand for better HPC capabilities. To Peter's point, the more data you can crunch, the better you can assess the risks, and therefore the better you can generate returns for your clients, manage any downsides that might occur, et cetera. So I think the indicators are going to come from where we see the regulations trending.

Awesome. Peter, your thoughts, final word.

Yeah, what we see is, you know, for a long time we talked about full-stack developers.
What I'm starting to see now is what I think of as full-compute developers. We live in a world of heterogeneous compute, and that doesn't mean we have a slew of different types of compute and then take a job and run it homogeneously on one type. It means we break our algorithms up, run each piece on whatever compute is best for that section of the algorithm, and then bring those answers together. And where I'm seeing that really driven home is financial institutions looking for developers who can program to a variety of compute; that's why I call them full-compute developers. They're truly taking advantage of heterogeneous offerings, and I think that's something we see Dell and NVIDIA stepping into.

I love that line: full-compute developers. They've got to be crunching the data. We're at a whole nother level, next gen here. Prabhu, thank you. Andrew, Peter, thanks for coming on. Great segment on the financial services impact. Dell, NVIDIA, and financial services: this is theCUBE's coverage of ISC 2023. Thanks for watching.

Thank you.