Hi, I'm Peter Burris, and welcome to a special digital community event brought to you by Wikibon and theCUBE, sponsored by Dell EMC. Today we're going to spend quite some time talking about some of the trends in the relationship between hardware and AI. Specifically, we're seeing a number of companies do some masterful work incorporating new technologies to simplify the infrastructure required to take full advantage of AI options and possibilities. Now, at the end of this series of conversations, we're going to run a crowd chat, which will be your opportunity to engage your peers and thought leaders from Dell EMC and from Wikibon and SiliconANGLE, and have a broader conversation about what it means to be better at doing AI, more successful, improving time to value, et cetera. So wait till the very end for that. All right, let's get it kicked off. Tom Burns is my first guest, and he is the Senior Vice President and General Manager of Networking Solutions at Dell EMC. Tom, it's great to have you back again. Welcome back to theCUBE. Thank you very much. It's great to be here. So Tom, this is going to be a very exciting conversation we're going to have, and it's going to be about AI. So when you go out and talk to customers, what are you hearing as they describe their needs, their wants, their aspirations as they pertain to AI? Yeah, you know, Pete, we've always been looking at this as this whole digital transformation. Some studies say that about 70% of enterprises today are looking at how to take advantage of the digital transformation that's occurring. In fact, you're probably familiar with the Dell 2030 survey, where we went out and talked to about 400 different companies of very different sizes, and they're looking at all these connected devices, at edge computing, and all the various changes that are happening from a technology standpoint. And certainly AI is one of the hottest areas.
There was a report, I think, that was cosponsored by ServiceNow. Over 62% of the CIOs in the Fortune 500 are looking at AI as far as managing their business in the future, and it's really about user outcomes. It's about how they improve their businesses, their operations, their processes, their decision-making, using the capability of compute coming down from a cost perspective and the number of connected devices exploding, bringing more and more data to their companies that they can use, analyze, and put to use cases that really make a difference in their business. But while these use cases make a difference in their business, they're also often a lot more complex. We have this little bromide that we use: the first 50 years of computing were about known process, unknown technology. We're now entering an era where we know a little bit more about the technology (it's going to be cloud-like), but we don't know what the processes are, because we're engaging directly with customers or partners in much more complex domains. That suggests a lot of things. How are customers dealing with that new level of complexity, and where are they looking to simplify? You actually nailed it on the head. What's happening in our customers' environments is they're hiring these data scientists to really look at this data. And instead of analyzing the data that's being collected, they're spending more time worried about the infrastructure, building the components, and looking at allocations of capacity in order to make these data scientists productive. And really, what we're trying to do is help them get through that particular hurdle. So you have the data scientists that are frustrated because they're waiting for the IT department to help them set up and scale the capacity and infrastructure that they need in order to do their job.
And then you've got the IT departments that are very frustrated because they don't know how to manage all this infrastructure. So the question around whether to go to the cloud or remain on-prem, all of these are things that our customers continue to be challenged with. Now the ideal would be that you can have a cloud experience, but have the data reside where it most naturally resides, given physics, given cost, given bandwidth limitations, given regulatory regimes, et cetera. So how are you at Dell EMC helping to provide that sense of an experience based on what the workload is and where the data resides, as opposed to some other set of infrastructure choices? Well, that's the exciting part: we're getting ready to announce a new solution called the Ready Solution for AI. And what we've been doing is working with our customers over the last several years, looking at these challenges around infrastructure, the data analytics, the connected devices, but giving them an experience that's real-time, not letting them worry about how am I going to set this up, or management, and so forth. So we're introducing the Ready Solution for AI, which really focuses on three things. One is to simplify the AI process. The second is to ensure that we give them deep and real-time analytics. And lastly, to provide them the level of expertise that they need in a partner in order to make those tools and that information useful to their business. Now, we want to not only provide AI to the business, but we also want to start utilizing some of these advanced technologies directly in the infrastructure elements themselves to make it more simple. Is that a big feature of what the Ready Solution for AI is? Absolutely. As I said, one of the key value propositions is around making AI simple. We are experts at building infrastructure. We have IP around compute, storage, networking, InfiniBand, the things that are capable of putting this infrastructure together.
So we've tested that based upon customers' input, using traditional data analytics libraries and tool sets that the data scientists are going to use, already pre-tested and certified. And then we're bringing this to them in a way that allows them, through a service provisioning portal, to basically set up and get to work much faster. With the previous tools that were available out there, some from our competition, there were 15, 20, 25 different steps just to log on, just to get enough automation or enough capability in order to get the information they need, the infrastructure allocated for this big data analytics. Through this service portal, we've actually gotten it down to about five clicks, with a very user-friendly GUI, no CLI required, and basically interacting with the tools that they're used to immediately, rather than gating at stage three and then getting them to work in stage four and stage five. So they're not worried about the infrastructure, not worried about capacity or whether it's going to work. They are basically one, two, three, four clicks away, and they're up and working on the analytics that everyone wants them to work on, and heaven knows these guys are not cheap. So you're talking about the data scientists. Presumably when you're saying they're not worried about all those things, they're also not worried about when the IT department can get around to doing it. So this gives them the opportunity to self-provision. Have I got that right? That's correct. They don't need IT to come in and set up the network, to do the CLI for the provisioning, to make sure that there are enough VMs or workloads properly scheduled in order to give them the capacity they need. They basically are set with the preset platform. Again, let's think about what Dell EMC is really working towards, and that's becoming the infrastructure provider.
We believe that the silos of server, storage, and networking are being eliminated, that companies want a platform on which they can enable those capabilities. So you're absolutely right. The part about simplifying the process is really giving the data scientists the tools they need to provision the infrastructure they need very quickly. And so that means that the IT group can actually start acting more like a DevOps organization, as opposed to specialists in one or another technology. Correct, and we've also given them the capability through the usual automation and configuration tools that they're used to, coming from some of our software partners such as Cloudera. So in other words, you still want the IT department involved, making sure that the infrastructure is meeting the requirements of the users. They're giving them what they want, but we're simplifying the tools and processes from the IT standpoint as well. Now, we've done a lot of research into what happened in the big data world, which is now likely to happen again in the AI world. And a lot of the problems that companies had with big data was that they conflated, or confused, the objectives and outcome of a big data project with just getting the infrastructure to work, and they often walked away because they failed to get the infrastructure to work. So it sounds as though what you're doing is trying to take the infrastructure out of the equation, while at the same time going back to the customer and saying, wherever you want this workload to run, you're going to get the same experience regardless. Correct, but we're going to give them an improved experience as well, because of the products that we've put together in this particular solution: our compute, our scale-out NAS solution from a storage perspective, and our partnership with Mellanox for InfiniBand or Ethernet switching capability. We're going to give them deeper insights and faster insights.
The performance and scalability of this particular platform is tremendous. We believe, in certain benchmark studies based upon the ResNet-50 benchmark, we perform anywhere between two and a half to almost three times faster than the competition. In addition, from a storage standpoint, for all of these workloads, all of the various characteristics that happen, you need a ton of IOPS. And there's no one in the industry that has the IOPS performance that we have with our all-flash Isilon product. The capabilities that we have there, we believe, are somewhere around nine times that of the competition. Again, scale-out performance while simplifying the overall architecture. Tom Burns, Senior Vice President and General Manager of Networking Solutions at Dell EMC. Thanks for being on theCUBE. Thank you very much. So there were some great points there about this new class of technology that dramatically simplifies how hardware can be deployed to improve the overall productivity and performance of AI solutions. But let's take a look at a product demo. Every week, more customers are telling us they know AI is possible for them, but they don't know where to start. Much of the recent progress in AI has been fueled by open source software, so it's tempting to think that do-it-yourself is the right way to go: get some how-to references from the web and start building out your own distributed deep learning platform. But it takes a lot of time and effort to create an enterprise-class AI platform with automation for deployment, management, and monitoring. There was no easy solution for that, until now. Instead of putting the burden of do-it-yourself on your already limited staff, consider Dell EMC Ready Solutions for AI. Ready Solutions are complete software and hardware stacks, pre-tested and validated with the most popular open source AI frameworks and libraries.
Our professional services, with proven AI expertise, will have the solution up and running in days and ready for data scientists to start working in weeks. Data scientists will find the Dell EMC Data Science Provisioning Portal a welcome change for managing their own hardware and software environments. The portal lets data scientists acquire hardware resources from the cluster and customize their software environment with packages and libraries tested for compatibility with all dependencies. Data scientists can choose between JupyterHub notebooks for interactive work and terminal sessions for large-scale neural networks. These neural networks run across a high-performance cluster of PowerEdge servers with Intel Xeon Scalable processors and scale-out Isilon storage that delivers up to 18 times the throughput of its closest all-flash competitor. IT pros will find that AI is simplified, as Bright Cluster Manager monitors your cluster for configuration drift down to the server BIOS, using exclusive integration with Dell EMC's OpenManage APIs for PowerEdge. This solution provides comprehensive metrics, along with automatic health checks that keep an eye on the cluster and will alert you when there's trouble. Ready Solutions for AI are the only platforms that keep both data center professionals and data scientists productive and getting along. IT operations are simplified, and that produces a more consistent experience for everyone. Data scientists get a customizable, high-performance deep learning service experience that can eliminate monthly charges spent on public cloud by keeping your data under your control. It's always great to see the product videos, but Tom Burns mentioned something earlier. He talked about the expansive expertise that Dell EMC has in bringing together advanced hardware and advanced software into simpler solutions that can liberate business value for customers, especially around AI.
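The configuration-drift monitoring described in the demo, comparing each server's live settings against a known-good baseline and alerting on any divergence, can be sketched in a few lines. This is a hypothetical illustration of the general idea only; the setting names are invented, and the actual Bright Cluster Manager and OpenManage integration works at a much deeper level.

```python
# Hypothetical sketch of configuration-drift detection: compare a
# server's reported BIOS settings against a "golden" baseline and
# report every key that has drifted. The setting names below are
# invented for illustration, not real OpenManage attributes.

GOLDEN_BIOS = {
    "hyper_threading": "enabled",
    "power_profile": "performance",
    "numa_interleave": "disabled",
}

def find_drift(current: dict, baseline: dict = GOLDEN_BIOS) -> dict:
    """Return {setting: (expected, actual)} for every drifted key."""
    return {
        key: (expected, current.get(key))
        for key, expected in baseline.items()
        if current.get(key) != expected
    }

# A node whose power profile was changed by hand:
node = {"hyper_threading": "enabled",
        "power_profile": "balanced",
        "numa_interleave": "disabled"}

print(find_drift(node))  # {'power_profile': ('performance', 'balanced')}
```

A real monitor would poll each node on a schedule and raise an alert whenever `find_drift` returns a non-empty result.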
And so to really test that out, we sent Jeff Frick, who's the general manager and host of theCUBE, down to the bowels of Dell EMC's operations in Austin, Texas. Jeff visited the Dell EMC HPC and AI Innovation Lab and met with Garima Kochhar, who's a technical staff senior principal engineer. Let's hear what Jeff learned. And we're excited to have with us our next guest. She's Garima Kochhar. She's on the technical staff and a senior principal engineer at Dell EMC. Welcome. Thank you. From my perspective, what's kind of changing in the landscape from high performance computing, which has been around for a long time, into more of the AI and machine learning and deep learning we hear about much more in a business context today? High performance computing has applicability across a broad range of industries, so not just national labs and supercomputers, but the commercial space as well. And in our lab, we've done a lot of that work in the last several years. And then the deep learning algorithms, those have also been around for decades. But what we're finding right now is that the algorithms and the hardware, the technologies available, have hit that perfect point, along with industry's interest and the amount of data we have, to make it more of what we would call mainstream. So you can build kind of optimum solutions, but ultimately you want to build industry solutions, and then, even as a subset of that, you invite customers in to optimize for their particular workflow or their particular business case, which may not match the perfect benchmark spec at all, right? That's exactly right. And so that's the reason this lab is set up for customer access, because we do the standard benchmarking, but you want to see: what is my experience with this? How does my code work?
And it allows us to learn from our customers, of course, and it allows them to get comfortable with Dell technologies, to work directly with the engineers and the experts, so that we can be their true partners and trusted advisors and help them advance their research, their science, their business goals. So you guys build the whole rack out, right? Not just the fun shiny new toys. Yeah, you're right. So typically when something fails, it fails spectacularly, right? So I'm sure you've heard horror stories where there was equipment on the dock and it wouldn't fit in the elevator, or things like that, right? So there are lots of other teams that handle the logistics piece of it, and of course Dell's really good at this. But even within the lab, when you walk around, you'll see our racks are set up with power meters, so we do power measurements. Whatever best practices and tuning we come up with, we feed that into our factory. So if you buy a solution, say targeted for HPC, it would come with different BIOS tuning options than, say, a regular Oracle database workload. We have this integration into our software deployment methods. So when you have racks and racks of equipment, or one rack of equipment, or maybe even three servers, and you're doing an installation, all the pieces are baked in already and everything is seamless and easy to operate. So our idea is, the more that we can do in building integrated solutions that are simple to use and performant, the less time our customers and their technical computing and IT departments have to spend worrying about the equipment, and the more they can focus on their unique and specific use case. Right. Do you guys have a services arm as well? Yeah, we're an engineering lab, which is why it's really messy, right?
Like, if you look at the racks, if you look at the work we do, we're a working lab, we're an engineering lab, we're a product development lab. And of course we have a support arm, we have a services arm, and sometimes, if we're working with net new technologies, we conduct training in the lab for our services and support people. But we're an engineering organization, and so when customers come into the lab and work with us, they work with it from an engineering point of view, not from a pre-sales point of view or a services point of view. What's the benefit of having the experience in this broader set of applications as you apply it to some of the newer, more exciting things around AI, machine learning, deep learning? Right, so the fact that we are a shared lab, right? The bulk of this lab is high performance computing and AI, but there are lots of other technologies and solutions we work on over here, and there are other labs in the building where we have colleagues as well. The first thing is that the technology building blocks for several of these solutions are similar, right? So when you're looking at storage arrays, when you're looking at Linux kernels, when you're looking at network cards or solid state drives or NVMe, several of the building block technologies are similar. And so when we find interoperability issues, and you would think that there would never be any problems, that you'd throw all these things together and they'd always work, right? Of course. Right, so when you sometimes, rarely, find an interoperability issue, that issue can affect multiple solutions, and so we share those best practices, because we engineers sit next to each other and we discuss things with each other; we're part of the larger organization. Similarly, when you find tuning options and nuances and parameters for performance or for energy efficiency, those also apply across different domains.
So while you might think of Oracle as something that was done years ago, with every iteration of technology there's new learning, and that applies broadly to anybody using enterprise infrastructure. Right. What gets you excited? What are some of the things where you think, I'm so excited that we can now apply this horsepower to some of these problems out there? Right, so that's a really good point, right? Because most of the time, when you're trying to describe what you do, it's hard to make everybody understand the value of it. With deep technology, it's hard to explain what the actual value is. And so a lot of what we're doing in terms of exascale, it's to move the human body of knowledge forward, to grow the science happening in each country, moving that forward. And that's kind of at the higher end, when you talk about national labs and defense, and everybody understands that needs to be done. But when you find that your social media is doing some face recognition, everybody experiences that and everybody sees that. And when you're trying to describe, well, we're all talking about driverless cars, or we're all talking about, oh, it took me so long because I had this insurance claim and then I had to get an appointment with the appraiser and they had to come in. I mean, those are actual real-world use cases where some of these technologies are going to apply. So even in industries where you didn't think of them as being leading edge on the technical forefront in terms of IT infrastructure and digital transformation, in every one of these places you're going to have an impact from what you do. Whether it's drug discovery, or next-generation gene sequencing, or designing the next car, pick your favorite car, or when you're flying in an aircraft, the engineers who were designing the engine and the blades and the rotors for that craft were using technologies that you work with.
And so now it's everywhere. Everywhere you go, and we talked about 5G and IoT and edge computing, I mean, we all work on this collectively. So it's our world. Okay, so the last question before I let you go: just having the resources to bear, in terms of being in your position to do the work, when you've got the massive resources of Dell behind you, the merger with EMC, all the subset brands, Isilon, so many brands, how does that help you do your job better? What does that let you do here in this lab that probably a lot of other people can't do? Yeah, exactly. So when you're building complex solutions, there's no one company that makes every single piece of it. But the tighter that things work together, the better they work together, and that comes directly through all the technologies that we have under the Dell Technologies umbrella and with Dell EMC, and because of our super close relationships with our partners, which allows us to build these solutions that are painless for our customers and our users. And so that's the advantage we bring, this lab and our company. All right, Garima, well, thank you for taking a few minutes. Your passion shines through. Thank you. I really liked hearing about what Dell EMC is doing in their innovation labs down in Austin, Texas, but it all comes together for the customer. And so the last segment that we want to bring you here is a great one: Nick Curcuru, who's the Vice President of Big Data Analytics at MasterCard, is here to talk about how some of these technologies are coming together to speed value and realize the potential of AI at MasterCard. Nick, welcome to theCUBE. Thank you for letting me be here. So, MasterCard. Tell us a little bit about what's going on at MasterCard.
There's a lot going on at MasterCard, but I think the most exciting thing we're doing at MasterCard right now is with artificial intelligence: how we're using artificial intelligence to allow a seamless transition when someone is actually doing a transaction, while also bringing a level of security to our customers, our banks, and the people who use MasterCard. So, AI to improve engagement, provide a better experience. But that's a pretty broad range of things. Specifically, when you think about how AI can be applied, what are you looking to do, especially early on? Well, let's take a look at our core business, which is being able to make sure that we can secure a payment, right? So at this particular point, we're applying AI to biometrics, but not just a fingerprint or facial recognition, but actually how you interact with your device. So you think of the internet of things, and you're sitting back saying, I'm swiping my mobile device, or this is how I interact with a keyboard; those are all key signatures. And with NuData, a company we've just acquired, we're taking that capability to create a profile and make that part of your signature. So it's beyond just a fingerprint, beyond facial recognition; it's actually how you're interacting, so that we know it's you. So there are a lot of different potential sources of information that you can utilize, but AI is still a relatively young technology in practice. And one of the big issues for a lot of our clients is, how do you get time to value? So take us through, if you would, a little bit about some of the challenges that MasterCard, and anybody, would face trying to get to that time to value. Well, what you're really looking for is a good partner to be with when you're doing artificial intelligence. Because again, at that particular point, you're trying to get to scale.
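The behavioral-biometric "signature" Nick describes, knowing it's you from how you type or swipe rather than from a fingerprint, can be illustrated with a toy model: summarize a user's typing rhythm at enrollment, then score how far a new session deviates from that profile. This is a deliberately minimal sketch; real systems such as NuData's use far richer, proprietary features.

```python
# Toy sketch of behavioral biometrics from typing rhythm: reduce a
# user's inter-keystroke intervals (milliseconds) to a profile, then
# score how far a new session deviates from it. Illustration only;
# production systems use many more signals than timing alone.
from statistics import mean, stdev

def profile(intervals_ms):
    """Enrollment: summarize observed inter-key intervals as (mean, stdev)."""
    return mean(intervals_ms), stdev(intervals_ms)

def deviation_score(profile_stats, session_ms):
    """Mean absolute z-score of a new session against the stored profile."""
    mu, sigma = profile_stats
    return mean(abs(x - mu) / sigma for x in session_ms)

enrolled = profile([110, 95, 120, 105, 100, 115])    # the genuine user
same_user = deviation_score(enrolled, [108, 112, 98])
impostor = deviation_score(enrolled, [60, 210, 45])

print(same_user < impostor)  # True: the impostor's rhythm deviates more
```

In practice a threshold on the score (tuned against false accepts and false rejects) would decide whether the session looks like the enrolled user.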
For us, it's always about scale. How can we grow this across 220 countries? You know, we're 165 million transactions per hour, right? So what we're looking for is a partner who also has that ability to scale, a partner who has a global presence, who's learning. So that's the first step. That's going to help you with your time to value. The other part is actually sitting back and using those particular partners to bring the expertise that they're learning to combine with yours. It's no longer just silos. So when we talk about artificial intelligence, how can we be learning from each other? Those open source systems that are out there, how do we learn from that community? It's that community that allows you to get there. Again, those that are trying to do it on their own, trying to do it by themselves, they're not going to get to the point where they need to be. In other words, instead of a six-month time to value, it's going to take them years. We're trying to accelerate that and say, how can we get those algorithms operating for us the way we need them, to provide the experiences that people want, quickly? And that's with good partners. Now, 165 million transactions per hour, and it's only likely to go up over the course of the next few years. That creates an operational challenge. AI is associated with a probabilistic set of behaviors, as opposed to categorical ones: a little bit more difficult to test, a little bit more difficult to verify. How is the introduction of some of these AI technologies impacting the way you think about operations at MasterCard? Well, for the operations, when you take a look at it, there are three components, right? There's right there on the edge, where someone's interacting, where they actually do the transaction. Then we have what we look at as a core; that core is basically where you're learning, right? And then there's what we call the deep learning component of it.
So for us, it's how we move what we need to have in the core and what we need to have on the edge. The question for us always is, we want that algorithm to be smart. What three to four things do we need that algorithm to be looking for, within that artificial intelligence, so that it never needs to go back into the core and retrieve something? Whether that's your fingerprint, your biometrics, how you're interacting with that machine, to say, yes, that's you, yes, we want that transaction to go through; or no, stop it before it even begins. It's that interaction, on an operational basis, that we always have a dynamic tension with. But it's how we get from the edge to the core, and it's understanding what we need it to do. So we're breaking apart what we have to have for that intelligence to be able to create a decision for us. That's how we're trying to manage it, as well as, of course, the hardware that goes with it and the tools that we need in order to make that happen. Let me get at the hardware just a little bit, because historically different applications have put pressure on different components within a stack. One of the observations that we've made is that the transition from spinning disk to flash allows companies like MasterCard to move from just persisting data to actually delivering data much more rapidly. What kind of new pressures do these AI technologies put on storage? Well, they put a tremendous pressure on it, because that's actually, again, the next tension or dynamic that you have to play with. What do you want to have on disk? What do you need flash to do? If you look at some people, everyone's like, oh, flash will take over everything. It's like, no, there's a reason for each to exist. It's understanding what that reason is, and understanding, hey, I need this to happen in sub-seconds, nanoseconds even, as I've heard the term. That's what you're asking flash to do.
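The edge-versus-core split Nick describes, where the edge model decides on a handful of locally available signals and only reaches back to the core when it's unsure, can be sketched as a simple scoring function. The feature names, weights, and thresholds below are invented for illustration; they are not MasterCard's actual decisioning logic.

```python
# Illustrative sketch of edge decisioning: score a transaction on a few
# signals available at the edge, and only escalate to the core when the
# result is ambiguous. All features and thresholds here are invented.

def edge_decision(txn):
    """Return 'approve', 'decline', or 'escalate_to_core'."""
    score = 0.0
    if txn["biometric_match"] < 0.5:      # typing/swipe rhythm mismatch
        score += 0.6
    if txn["amount"] > 5000:              # unusually large purchase
        score += 0.3
    if txn["new_device"]:                 # first time seen on this device
        score += 0.2

    if score >= 0.6:
        return "decline"                  # confident fraud: stop it now
    if score <= 0.2:
        return "approve"                  # confident genuine: no round trip
    return "escalate_to_core"             # ambiguous: ask the deeper model

print(edge_decision({"biometric_match": 0.9, "amount": 40,
                     "new_device": False}))  # approve
```

The point of the sketch is the structure: most transactions resolve at the edge in one pass, and only the ambiguous middle band pays the latency cost of a trip to the core.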
When you want deep learning, then I want it on disk. I want to be taking all those millions or billions of transactions that we're going to see and learning from them, all the ways that people will be trying to attack me, right? The bad guys, how am I learning from everything that I'm seeing? That can sit there on disk and continue to run. That's the deep learning. The flash is when I want to create a seamless transaction with a customer or a consumer, or from a business to a business. I need to have that decision now. I need to know it is you who is trying to purchase something with my mobile device, or through the internet, or actually swiping or dipping my card in that particular machine at a merchant. That's when we're looking at how we use flash. So you're looking at perhaps using older technologies, or different classes of technologies, for some of the training elements, but really moving to flash for the inferencing piece, where you've got to deliver the real-time effort right now. And that's the experience. That's what you're looking for, and that's where you want to be able to make those distinctions. Because again, it's no longer one or the other; it's how they interact. And again, when you look at your partners, the question now is, how are they interacting? Has this been done at scale somewhere else? Can you help me understand how I need to deploy this so that I can reduce my time to value? Which is very, very important to create that seamless, frictionless transaction we want our consumers to have. So Nick, you've talked about how you want to work with companies that demonstrate that they have expertise, because you can't do it on your own; companies that are capable of providing the scale that you need.
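Nick's disk-versus-flash split, with latency-critical inference reads going to flash and bulk deep-learning scans of historical transactions staying on spinning disk, amounts to a small routing rule. The tier names and the latency budget below are illustrative assumptions, not a Dell EMC or MasterCard configuration.

```python
# Sketch of the storage-tiering logic described above: real-time
# inference goes to flash; sequential training scans go to disk.
# The 10 ms budget is an assumed cutoff for illustration.

def choose_tier(workload: str, latency_budget_ms: float) -> str:
    """Route a storage workload to 'flash' or 'disk'."""
    if workload == "inference" or latency_budget_ms < 10:
        return "flash"   # real-time decision at the point of sale
    return "disk"        # deep-learning scans tolerate higher latency

print(choose_tier("inference", 5))         # flash
print(choose_tier("training_scan", 5000))  # disk
```

The design choice mirrors Nick's point that it is never one or the other: both tiers exist, and the workload's latency requirement, not fashion, decides which one serves it.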
It suggests, just as we talked about how AI is placing pressure on different parts of the technology stack, that it's got to also be putting pressure on the traditional relationships you have with technology suppliers. What are you looking for in suppliers when you think about these new classes of applications? For us, it's: do you have the scale that we're looking at? Have you done this before, at global scale? In many cases you can have five guys in a garage who can do great things, but where has it been tested? And when we say tested, it's not just, hey, we did this in a pilot; it's got to be robust. So that's one thing you're looking for. You're also looking for a partner who can bring us additional information that we don't have ourselves. In many cases, that partner is almost an adjunct part of your team; they are your bench strength. That's what we're looking for. What expertise do you have that we may not? What are you seeing, especially on the technology front, that we're not privy to? The new chips that are coming out, the new ways we should be handling storage, the new ways the applications are interacting with it, we want to know that from you. Because again, there's a competition for talent, and we're looking for a partner who has that talent and will bring it to us, so that we don't have to search for it, especially at scale. Nick Curcuru, MasterCard, thanks for being on theCUBE. Thank you for having me. So there you have a great example of what a leading company is doing to take full advantage of the possibilities of AI by utilizing infrastructure that gets the job done simpler, faster, and better. So let's imagine for a second how it might affect your life. Well, here's your opportunity.
We're now gonna move into the crowd chat part of the event. And this is your chance to ask peers questions, provide your insights, tell your war stories, ultimately to interact with thought leaders about what it means to get ready for AI. Once again, I'm Peter Burris. Thanks for watching. Now let's jump into the crowd chat.