Edge computing is projected to be a multi-trillion dollar business. You know, it's hard to really pinpoint the size of this market, let alone fathom the potential of bringing software, compute, storage, AI, and automation to the edge and connecting all that to clouds and on-prem systems. But what is the edge? Is it factories, is it oil rigs, airplanes, windmills, shipping containers, buildings, homes, race cars? Well, yes, and so much more. And what about the data? For decades, we've talked about the data explosion. I mean, it's mind-boggling, but guess what? We're going to look back in 10 years and laugh at what we thought was a lot of data in 2020. Perhaps the best way to think about edge is not as a place, but as a question: when is the most logical opportunity to process the data? And maybe it's the first opportunity to do so, where it can be decrypted and analyzed at very low latency, that defines the edge. And so by locating compute as close as possible to the sources of data to reduce latency and maximize your ability to get insights and return them to users quickly, maybe that's where the value lies. Hello everyone, and welcome to this CUBE Conversation. My name is Dave Vellante, and with me to noodle on these topics is Omer Asad, VP and GM of primary storage and data management services at HPE. Hello, Omer, welcome to the program. Thanks, Dave, thank you so much. Pleasure to be here. Yeah, great to see you again. So how do you see the edge in the broader market shaping up? Dave, I think that's a super important question, and I think your ideas are quite aligned with how we think about it. I personally think that as enterprises are accelerating their digitization and their asset and data collection, they're trying, especially in a distributed enterprise, to get closer to their customers. They're trying to minimize the latency to their customers.
So if you look across industries, manufacturing, which has distributed factories all over the place, is going through a lot of factory transformations where they're digitizing their factories. That means a lot more data is now being generated within their factories. A lot of robot automation is going on, and that requires a lot of compute power to go out to those particular factories, which are going to generate their data out there. We've got insurance companies and banks that are acquiring, interviewing, and gathering more customers out at the edge. For that, they need a lot more distributed processing out at the edge. What we've seen across analysts is a common consensus that more than 50% of an enterprise's data, especially if they operate globally around the world, is going to be generated out at the edge. What does that mean? New data is generated at the edge, but it needs to be stored and it needs to be processed. Data which is not required needs to be thrown away or classified as not important. And then it needs to be moved for DR purposes, either to a central data center or just to another site. So overall, in order to give the best possible experience for manufacturing, retail, you know, especially in distributed enterprises, people are generating more and more data-centric assets out at the edge. And that's what we see in the industry. Yeah, we're definitely aligned on that. Some great points. So now, okay, you think about all this diversity. What's the right architecture for these deployments, multi-site deployments, ROBO, edge? How do you look at that? Well, excellent question, Dave. So, you know, obviously every customer that we talk to wants simplicity, and no pun intended, because SimpliVity is synonymous with a simple edge-centric architecture, right? So let's take a few examples. You've got large global retailers.
They have hundreds of global retail stores around the world that are generating data, that are producing data. Then you've got insurance companies. Then you've got banks. So when you look at a distributed enterprise, how do you deploy equipment out at the edge in a manner that is simple to deploy, easy to lifecycle, and easy to mobilize? What are some of the challenges that these customers deal with? You don't want to send a lot of IT staff out there, because that adds cost. You don't want to have islands of data and islands of storage in remote sites, because that adds a lot of state outside of the data center that needs to be protected. And then, last but not least, how do you push newer, lifecycle-managed applications out at the edge in a very simple-to-deploy manner? And how do you protect all this data at the edge? So the right architecture, in my opinion, needs to be extremely simple to deploy: storage, compute, and networking out towards the edge in a hyper-converged environment. Let's agree upon that; it's a very simple-to-deploy model. But then comes: how do you deploy applications on top of that? How do you manage those applications? How do you back them up towards the data center? All of this keeping in mind that it has to be as zero-touch as possible. We at HPE believe that it needs to be extremely simple. Just give me two cables, a network cable and a power cable. Fire it up, connect it to the network, push its state from the data center, and back up its state from the edge back into the data center. Extremely simple. It's got to be simple, because you've got so many challenges. You've got physics to deal with, you've got latency, you've got RPO and RTO. What happens if something goes wrong? You've got to be able to recover quickly. So that's great, thank you for that. Now, you guys have hard news. What is new from HPE in this space?
From a deployment perspective, HPE SimpliVity adoption is just exploding, especially as distributed enterprises adopt it as their standardized edge architecture, right? It's an HCI box; it's got storage, compute, and networking all in one. But now, not only can you deploy applications all from your standard vCenter interface from a data center, what we have added is the ability to back up to the cloud right from the edge. You can also back up all the way back to your core data center. All of the backup policies are fully automated and implemented in the distributed file system that is the heart and soul of the SimpliVity installation. In addition to that, customers now do not have to buy any third-party software: backup is fully integrated in the architecture, and it's WAN-efficient. In addition to that, now you can back up straight to the cloud, or you can back up to a central high-end backup repository in your data center. And last but not least, we have a lot of customers that are pushing the limits in their application transformation. So not only were we previously one of the leading VMware deployments out at edge sites, now we've also added both stateful and stateless container orchestration, as well as data protection capabilities for containerized applications out at the edge. So we have a lot of customers that are now deploying containers, rapid manufacturing containers, to process data out at remote sites. And that allows us to not only protect those stateful applications, but back them up into the central data center. I saw in that chart that there was a line on no egress fees. That's a pain point for a lot of the CIOs that I talk to; they grit their teeth at those fees. Can you comment on that? Excellent, excellent question. I'm so glad you brought that up and picked up on that.
So along with SimpliVity, we have the whole GreenLake as-a-service offering as well, right? What that means, Dave, is that we can literally provide our customers edge as a service. And when you complement that with Aruba wireless infrastructure that goes at the edge, and the hyper-converged SimpliVity infrastructure that goes at the edge, you know, one of the things that was missing with cloud backups is that every time you back up to the cloud, which is a great thing by the way, anytime you restore from the cloud, there is that egress fee, right? So as a result, as part of the GreenLake offering, we now have a cloud backup service natively offered by HPE, which is included in your HPE SimpliVity edge-as-a-service offering. So now not only can you back up into the cloud from your edge sites, but you can also restore back without any egress fees from HPE's data protection service. You can either restore it back to your data center or restore it back out at the edge site. And because the infrastructure is so easy to deploy and centrally lifecycle-managed, it's very mobile. So if you want to deploy and recover to a different site, you could also do that. Nice. Hey, Omer, can you double-click a little bit on some of the use cases that customers are choosing SimpliVity for, particularly at the edge? And maybe talk about why they're choosing HPE. One of the major use cases that we see, Dave, is obviously easy to deploy and easy to manage in a standardized form factor, right? A lot of these customers, like, for example, we have a large retailer with hundreds of stores across the US, right? Now you can't send service staff to each of these stores, and their data center is essentially just a closet for these guys, right? So how do you have a standardized deployment?
So, a standardized deployment from the data center, which you can literally push out; you connect a network cable and a power cable and you're up and running. And then automated backup, elimination of backup state at the edge, and DR from the edge sites into the data center. So that's one of the big use cases: to rapidly deploy new stores, bring them up in a standardized configuration from a hardware and a software perspective, and the ability to back up and recover that instantly. That's one large use case. The second use case that we see actually refers to a comment that you made in your opener, Dave, which is that a lot of these customers are generating a lot of their data at the edge. This is robotic automation that is going up in manufacturing sites. These are racing teams that are out at the edge doing post-processing of their cars' data. At the same time, there are disaster recovery use cases, where you have camp sites and local agencies that go out there for humanitarian work, and they move from one site to the other. It's a very, very mobile architecture that they need. So those are just a few cases where we're deployed. There's a lot of data collection and there's a lot of mobility involved in these environments. So you need to be quick to set up, quick to back up, quick to recover, and then you're on to your next move. You seem pretty pumped up about this new innovation, and why not? I am, especially because it has been thought through with the edge in mind, and the edge has to be mobile. It has to be simple. And especially as we have lived through this pandemic, which I hope we see the tail end of in 2021, or 2022 at the latest, one of the most common use cases that we saw, and this was an accidental discovery, is that a lot of the retail sites could not go out to service their stores, because mobility is limited in these strange times that we live in.
So from a central vCenter, you're able to deploy applications, you're able to recover applications. And a lot of our customers said, hey, I don't have enough space in my data center to back up to; do you have another option? So then we rolled out this update release to SimpliVity where, from the edge site, you can now directly back up to our backup service, which is offered on a consumption basis to the customers, and they can recover it anywhere they want. Fantastic. Omer, thanks so much for coming on the program today. It's a pleasure, Dave. Thank you. All right, awesome to see you. Now, let's hear from Red Bull Racing, an HPE customer that's actually using SimpliVity at the edge. The countdown really begins when the checkered flag drops on a Sunday. It's always about this race to manufacture the next design, to make it more adapted to the next circuit. [inaudible] If we can't manufacture the next components in time, all that will be wasted. Okay, we're back with Matt Cadieux, who is the CIO of Red Bull Racing. Matt, it's good to see you again. Yeah, great to see you, Dave. Hey, we're going to dig into a real-world example of using data at the edge, in near real time, to gain insights that really lead to competitive advantage. But first, Matt, tell us a little bit about Red Bull Racing and your role there. Sure. So I'm the CIO at Red Bull Racing, and we're based in Milton Keynes in the UK. The main job for us is to design a race car, to manufacture the race car, and then to race it around the world. So as CIO, the IT team needs to develop the applications used for design, manufacture, and racing. We also need to supply all the underlying infrastructure and also manage security. So it's a really interesting environment that's all about speed. This season, we have 23 races, and we need to tear the car apart and rebuild it to a unique configuration for every individual race.
And we're also designing and making components targeted for races. So 23 immovable deadlines, and this big evolving prototype, the car, to manage. But we're also improving all of our tools and methods and software that we use to design, make, and race the car. So we have a big can-do attitude in the company around continuous improvement. And the expectations are that we continue to make the car faster, that we're winning races, that we improve our methods in the factory and our tools. And so for IT, it's really unique in that we can be part of that journey and provide a better service. It's also a big challenge to provide that service and to give the business the agility it needs. So my job is really to make sure we have the right staff, the right partners, and the right technical platforms so we can live up to expectations. Matt, that teardown and rebuild for 23 races, is that because each track has its own unique signature that you have to tune to, or are there other factors involved there? Yeah, exactly. Every track has a different shape. Some have lots of straights, some have lots of curves, and lots are in between. The track surface is very different, as is the impact that has on tires. The temperature and the climate are very different. Some are hilly. Some have big curves that affect the dynamics of the car. So, in order to win, you need to micromanage everything and optimize it for any given race track. Talk about some of the key drivers in your business and some of the key apps that give you competitive advantage to help you win races. Yeah, so in our business, everything is all about speed. The car obviously needs to be fast, but also all of our business operations need to be fast. We need to be able to design a car, and it's all done in the virtual world, but the virtual simulations and designs need to correlate to what happens in the real world.
So all of that requires a lot of expertise to develop the simulations and the algorithms, and to have all the underlying infrastructure that runs them quickly and reliably. In manufacturing, we have cost caps and financial controls by regulation, so we need to be super efficient and control material and resources. ERP and MES systems are running and helping us do that. And at the race track itself, again it's about speed: we have hundreds of decisions to make on a Friday and Saturday as we're fine-tuning the final configuration of the car, and here again, we rely on simulations and analytics to help do that. And then during the race, we have split seconds, literally seconds, to alter our race strategy if an event happens. So if there's an accident and a safety car comes out, or the weather changes, we revise our tactics, and we're running Monte Carlo simulations, for example, and using experienced engineers with simulations to make a data-driven decision, and hopefully a better and faster one than our competitors. All of that needs IT to work at a very high level. Yeah, it's interesting. I mean, as a layperson, historically, when I think about technology and car racing, of course I think about the mechanical aspects of a self-propelled vehicle, the electronics and the like, but not necessarily the data. But the data's always been there, hasn't it? I mean, maybe in the form of tribal knowledge, if you're somebody who knows the track and where the hills are, and experience and gut feel. But today you're digitizing it and you're processing it in close to real time. It's amazing. That's exactly right. Yeah, the car is instrumented with sensors, we post-process the data, we're doing video image analysis, and we're looking at our car and our competitors' cars. So there's a huge amount of very complicated models that we're using to optimize our performance and to continuously improve our car. Yeah, the data and the applications that leverage it are really key, and it's a critical success factor for us.
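To make the Monte Carlo idea above a little more concrete, here is a minimal sketch of a pit-or-stay decision made by simulating the rest of the race many times and comparing expected race times. Every number in it (lap time, tyre gain, pit-lane loss, safety-car odds) is invented for illustration; a real team's models would layer in tyre degradation curves, traffic, and live weather, but the decision loop has the same shape:

```python
import random

def simulate_race_time(pit_now, laps_left, rng):
    """One Monte Carlo rollout of the remaining race, in seconds.

    All parameters here are hypothetical illustration values, not real F1 data.
    """
    lap_time = 90.0          # baseline lap time on worn tyres
    fresh_tyre_gain = 1.2    # seconds per lap saved on fresh tyres
    pit_loss = 22.0          # time lost driving through the pit lane
    total = pit_loss if pit_now else 0.0
    for _ in range(laps_left):
        # A safety car occasionally neutralises the tyre advantage for a lap.
        safety_car = rng.random() < 0.03
        gain = fresh_tyre_gain if (pit_now and not safety_car) else 0.0
        total += lap_time - gain
    return total

def expected_time(pit_now, laps_left=20, runs=5000):
    """Average race time over many rollouts of one strategy."""
    rng = random.Random(42)  # fixed seed so the comparison is reproducible
    return sum(simulate_race_time(pit_now, laps_left, rng) for _ in range(runs)) / runs

pit = expected_time(True)
stay = expected_time(False)
print(f"pit now: {pit:.1f}s, stay out: {stay:.1f}s -> "
      f"{'PIT' if pit < stay else 'STAY OUT'}")
```

The point is the structure: when an event happens mid-race, you re-run thousands of cheap rollouts per candidate strategy and pick the one with the better expected outcome, which is why trackside compute latency matters so much.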
So let's talk about your data center at the track, if you will, if I can call it that. Paint a picture for us. What does that look like? So we have to send a lot of equipment to the track, at the edge. And even though we have a really great wide area network link back to the factory, and there are cloud resources, a lot of the tracks are very old. You don't have hardened infrastructure. You don't have ducts that protect cabling, for example, and you can lose connectivity to remote locations. So the applications we need to operate the car and to make really critical decisions all need to be at the edge, where the car operates. Historically we had three racks of equipment, legacy infrastructure, and it was really hard to manage and to make changes. It was too inflexible. There were multiple panes of glass. And it was too slow; it didn't run our applications quickly. It was also too heavy and took up too much space when you're cramped into a garage with lots of environmental constraints. So we introduced hyperconvergence into the factory and saw a lot of great benefits. And when it came time to refresh our infrastructure at the track, we stepped back and said there's a lot smarter way of operating: we can get rid of all this slow, inflexible, expensive legacy and introduce hyperconvergence. And we saw really excellent benefits from doing that. We saw about a 3x speed-up for a lot of our applications. So here, where we're post-processing data and we have to make decisions about race strategy, time is of the essence, and a 3x reduction in processing time really matters. We were also able to go from three racks of equipment down to two, and the storage efficiency of the HPE SimpliVity platform, with 20-to-one ratios, allowed us to eliminate a rack. That actually saved $100,000 a year in freight costs by shipping less equipment. Then there are things like backup; mistakes happen, sometimes a user makes a mistake.
So, for example, a race engineer could load the wrong data map into one of our simulations, and we could restore that VDI through SimpliVity backup in 90 seconds. This enables engineers to focus on the car and make better decisions without having downtime. And we send two IT guys to every race. They're managing 60 users in a really diverse environment, juggling a lot of balls, and having a simple management platform like HPE SimpliVity allows them to be very effective and to work quickly. So all of those benefits were a huge step forward relative to the legacy infrastructure that we used to run at the edge. Yeah, so you had the nice Petri dish in the factory. So it sounds like your goals are, obviously, number one, the KPI is speed, to help shave seconds all the time, but also cost, just the simplicity of setting up the infrastructure. Yeah, that's exactly right. It's speed, speed, speed. We want applications to absolutely fly, to get to actionable results quicker, to get answers from our simulations quicker. The other area where speed's really critical is that our applications are also evolving prototypes. The models are getting bigger, the simulations are getting bigger, and they need more and more resources, and being able to spin up resources and provision things without being a bottleneck is a big challenge. SimpliVity gives us the means of doing that. So did you consider any other options, or, because you had the factory knowledge, was HCI very clearly the option? What did you look at? Yeah, so we have over five years of experience in the factory, and we eliminated all of our legacy infrastructure five years ago. And the benefits I've described at the track, we saw those in the factory. At the track, we have a three-year operational life cycle for our equipment. 2017 was the last year we had legacy, and as we were building for 2018, it was obvious that hyperconverged was the right technology to introduce.
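The 20-to-one storage efficiency Matt describes comes from data deduplication (plus compression). As a rough intuition for where such ratios come from, here's a minimal sketch of fixed-size, content-addressed chunking, where repeated nightly backups of mostly unchanged data only consume capacity for the chunks that actually differ. The workload below is synthetic and the scheme is deliberately naive; production platforms use far more sophisticated inline dedup and compression:

```python
import hashlib
import random

def dedupe_ratio(data, chunk_size=4096):
    """Ratio of logical bytes to unique (post-dedup) bytes stored."""
    store = {}  # chunk hash -> chunk; only unique chunks consume capacity
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        store[hashlib.sha256(chunk).hexdigest()] = chunk
    unique_bytes = sum(len(c) for c in store.values())
    return len(data) / unique_bytes

# Twenty nightly backups of a 1 MiB dataset where only the last 16 bytes change:
base = random.Random(0).randbytes(1024 * 1024)
backups = b"".join(base[:-16] + i.to_bytes(16, "big") for i in range(20))
ratio = dedupe_ratio(backups)
print(f"{ratio:.1f}:1")  # close to 20:1 on this synthetic workload
```

With twenty near-identical copies, almost every chunk hashes to one already in the store, so the physical footprint stays near one copy's worth, which is exactly why retaining many backup generations at a space-constrained edge site becomes practical.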
And we'd had years of experience in the factory already. And the benefits that we see with hyperconverged actually matter even more at the edge, because our operations are so much more pressurized and time is even more of the essence. So speeding everything up at the really pointy end of our business was critical. It was an obvious choice. So why SimpliVity? Why'd you choose HPE SimpliVity? Yeah, so when we first heard about hyperconvergence, way back, in the factory we had a legacy infrastructure that was overly complicated, too slow, too inflexible, too expensive. And we stepped back and said there has to be a smarter way of operating. We went out and challenged our technology partners, and we learned about hyperconvergence. We didn't know if the hype was real or not, so we underwent some POCs and benchmarking, and the POCs were really impressive. We saw all these speed and agility benefits, and for our use cases HPE was the clear winner in the benchmarks. So based on that, we made an initial investment in the factory: we moved about 150 VMs and 150 VDIs onto it. And then, as we've seen all the benefits, we've successively invested, and we now have an estate in the factory of about 800 VMs and about 400 VDIs. So it's been a great platform, and it's allowed us to really push boundaries and give the business the service it expects. So was the time in which you were able to go from data to insight to recommendation compressed? You kind of indicated that, but... So we offload telemetry from the car and we post-process it, and that post-processing really is very time consuming. We went from eight or nine minutes for some of the simulations down to just two minutes.
So we saw big, big reductions in time, and ultimately that meant an engineer could understand what the car was doing in a practice session, recommend a tweak to the configuration or setup of it, and just get more actionable insight quicker, and it ultimately helps get a better car quicker. Such a great example. How are you guys feeling about the season, Matt? What's the team's sentiment? I think we're optimistic. We have a new driver lineup: Max Verstappen carries on with the team, and Sergio Perez joins the team. So we're really excited about this year, and we want to go and win races. That's great, Matt. Good luck this season and going forward, and thanks so much for coming back on theCUBE. Really appreciate it. It's my pleasure. Great talking to you again. Okay, now we're going to bring back Omer for a quick summary, so keep it right there. Without having solutions from HPE, we can't drive those processes: CFD, aerodynamics, vehicle dynamics, the simulations. Being software-defined, we can bring new apps into play. We can bring new VMs, storage, networking; all of that can be added. This is a hugely beneficial partnership for us. We're able to be at the cutting edge of technology in a highly stressed environment; there is no bigger challenge than Formula One. Okay, we're back with Omer. Hey, what did you think about that interview with Matt? Great. I have to tell you, I'm a big Formula One fan, and they are one of my favorite customers. So obviously, one of the biggest use cases, as you saw for Red Bull Racing, is trackside deployments. There are now 22 races in a season. These guys are jumping from one city to the next. They've got to pack up, move to the next city, and set up the infrastructure very, very quickly. An average Formula One car is running with 1,000-plus sensors on it. That is generating a ton of data trackside that needs to be collected very quickly and processed very quickly.
And then sometimes, believe it or not, snapshots of this data need to be sent to the Red Bull factory back at the data center. What does this all need? It needs reliability. It needs compute power in a very short form factor, and it needs agility: quick to set up, quick to go, quick to recover. And then, in post-processing, they need CPU density so they can pack more VMs out at the edge to be able to do that processing. And we accomplish all of that for the Red Bull Racing guys in basically 2U: two SimpliVity nodes that are running trackside and moving with them from one race to the next. And every time those SimpliVity nodes connect up to the data center, over a satellite uplink, they're backing up to their data center. They're sending snapshots of data back to the data center. Essentially, it makes their job a whole lot easier, where they can focus on racing and not on troubleshooting virtual machines. Red Bull Racing and HPE SimpliVity: a great example. It's agile, it's cost-efficient, and it shows real impact. Thank you very much, Omer. I really appreciate those summary comments. Thank you, Dave. Really appreciate it. All right, and thank you for watching. This is Dave Vellante for theCUBE.