From Denver, Colorado, it's theCUBE, covering Supercomputing 17, brought to you by Intel. Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at Supercomputing 2017 in Denver, Colorado, talking about big, big iron. We're talking about space and new frontiers, black holes, mapping the brain. That's all fine and dandy, but we're going to have a little bit more fun in this next segment. We're excited to have our next guest, Bernie Spang. He's a VP of software-defined infrastructure for IBM, and his buddy and guest, Wayne Clanfield, HPC manager for Red Bull Racing. And for those of you that don't know, that's not the pickup trucks. It's not the guy jumping out of space. This is the Formula One racing team, the fastest, most advanced race cars in the world. So gentlemen, first off, welcome. Thank you, Jeff. So what is a race car company doing here at a Supercomputing conference? Obviously we're very interested in high-performance computing. So traditionally, we've used a wind tunnel to do our external aerodynamics. HPC allows us to do many, many more design iterations of the car, so we can actually kind of get more iterations of the designs out there and make the car go faster, much quicker. So that's great. So you're not limited to how many times you can get it in the wind tunnel and the time you have in the wind tunnel. I'm sure there are all types of restrictions, costs, and otherwise. There are lots of restrictions on both the wind tunnel and on HPC usage. So with HPC, we're limited to 25 teraflops, which isn't many teraflops. Only 25 teraflops? That's all, that's all. And Bernie, how did IBM get involved in Formula One racing? Well, I mean, our Spectrum Computing offerings are about virtualizing clusters to optimize the efficiency and the performance of the workloads. So our Spectrum LSF offering is used by manufacturers and designers to get ultimate efficiency out of the infrastructure. 
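For context on the 25-teraflop cap Wayne mentions: a cluster's peak double-precision throughput is roughly nodes × cores per node × clock speed × FLOPs per cycle. A quick back-of-the-envelope sketch (the hardware numbers below are purely illustrative, not Red Bull Racing's actual configuration):

```python
# Back-of-the-envelope peak throughput for a small CFD cluster.
# All hardware numbers here are illustrative assumptions.

def peak_teraflops(nodes, cores_per_node, clock_ghz, flops_per_cycle):
    """Peak TFLOPS = nodes * cores * clock (GHz) * FLOPs/cycle / 1000."""
    return nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0

# e.g. 40 nodes, each with dual 8-core 2.4 GHz CPUs doing 8 FLOPs/cycle (AVX FMA):
tf = peak_teraflops(nodes=40, cores_per_node=16, clock_ghz=2.4, flops_per_cycle=8)
print(f"{tf:.1f} TFLOPS")  # 12.3 TFLOPS, comfortably under a 25 TFLOPS cap
```

Under a hard regulatory cap like this, adding hardware stops being an option, which is why the conversation keeps coming back to squeezing efficiency out of the capacity a team is allowed.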
So with the Formula One restrictions on the teraflops, you want to get as much work through that system as efficiently as you can. And that's where Spectrum Computing comes in. That's great. So again, back to the simulation. So not only can you do simulations because you've got the capacity, but then you can customize it, as you said, I think, before we turned on the cameras, for specific tracks, specific race conditions, all types of variables that you couldn't do very easily in a traditional wind tunnel. Yes, obviously it takes a lot longer to actually kind of develop, create, and rapid-prototype the models and get them in the wind tunnel and actually test them. And it's obviously much more expensive. So by having an HPC facility, we can actually kind of do the design simulations in a virtual environment. So what's been kind of the aha from that? Is it just simply more, better, faster data? Is there some other kind of transformational thing that you absorbed as a team when you started doing this type of simulation versus just physical simulation in a wind tunnel? We started using HPC and computational fluid dynamics in anger about 12 years ago. Traditionally it started out as a complementary tool to the wind tunnel, but now, with the advances in HPC technology and software from IBM, it's actually beginning to overtake the wind tunnel. So it's actually kind of driving the way we design the car these days. That's great. So Bernie, working with super high-end performance, where everything is really optimized to get that car to go a little bit faster, just a little bit faster. Pretty exciting space to work in. There are a lot of other great applications, aerospace, genomics, this and that, but this is kind of a fun thing you can actually put your hands on. It's definitely fun. It's definitely fun being with the Red Bull Racing team and with our clients when we brief them there. 
But we have commercial clients in automotive design, aeronautics, semiconductor manufacturing, where getting every bit of efficiency and performance out of their infrastructure is also important. Maybe they're not limited by rules, but they're limited by money and the ability to invest, and their ability to get more out of the environment gives them a competitive advantage as well. And really what's interesting about racing and a lot of sports is you get to witness the competition. We don't get to witness the competition between big companies day to day. You're not kind of watching it in those little micro instances. The good thing is you get to learn a lot from such a focused, relatively small team as Red Bull Racing that you can apply to other things. So what are some of the learnings, as you've gotten to work with them, that you've taken back? Well, certainly they push the performance of the environment and they push us, which is a great thing for us and for our other clients who benefit. But one of the things I think really stands out is the culture there: the entire team, no matter what their role and function, from the driver on down to everybody else, is focused on winning races and winning championships. And that team view of getting every bit of performance out of everything everybody does all the time really opened our thinking to being broader than just the scheduling of the IT infrastructure. It's also about making the design team more productive and taking steps out of the process, and anything we can do there, inclusive of the storage management and the data management over time. So it's not just the compute environment, it's also the virtualized storage environment. And just massive amounts of storage. 
You said, not only are you running and generating, I'm just going to say boatloads because I'm not sure which version of the flops you're going to use, but also you've got historical data and you have result data, and you've got models that need to be tweaked and continually upgraded so that you do better the following race. Exactly, we're generating petabytes of data a year, and I think one of the issues which is probably different from most industries is that our workflows are incredibly complex. So we have up to 200 discrete job steps for each workflow to actually kind of produce a simulation. This is where the IBM Spectrum product range actually helps us do that efficiently. If you imagine an aerodynamics engineer trying to manually manage 200 individual job steps, it just wouldn't happen very efficiently. So this is where Spectrum Scale actually kind of helps us do that. So you mentioned it briefly, Bernie, but just a little bit more specifically, what are some of the other industries that you guys are showcasing that are leveraging the power of Spectrum to basically win their races? Yeah, so we talked about the infrastructure and manufacturing, or industrial clients, but also financial services. So I think risk analytics and financial models are an important area. Also healthcare and life sciences. So molecular biology, finding new drugs. So you talk about the competition and who wins, right? Genomics research and the advances there. Again, you need a system and an infrastructure that can chew through vast amounts of data, both the performance and the compute, as well as the long-term management, with cost efficiency, of huge volumes of data. And then you need that virtualized cluster so that you can run multiple workloads many times with an infrastructure that's running at 80%, 90% efficiency. You can't afford to have silos of clusters. 
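The 200-step workflows Wayne describes form a dependency graph: each job step can only start once the steps it depends on have finished. A minimal sketch in Python of that ordering problem (the step names and dependencies are invented for illustration; in practice a scheduler such as IBM Spectrum LSF resolves and dispatches the steps automatically):

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Hypothetical fragment of a CFD workflow: each step maps to the steps
# it depends on. Real workflows here run to ~200 such steps.
workflow = {
    "mesh":      [],
    "decompose": ["mesh"],
    "solve":     ["decompose"],
    "postproc":  ["solve"],
    "report":    ["postproc"],
}

def run_order(steps):
    """Return one execution order that respects every dependency."""
    return list(TopologicalSorter(steps).static_order())

print(run_order(workflow))  # ['mesh', 'decompose', 'solve', 'postproc', 'report']
```

With five steps a human can eyeball the order; with 200 interdependent steps per simulation, and many simulations in flight, automated scheduling is the only way it happens efficiently.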
We're seeing clients that don't have this cluster virtualization software have cluster creep. Just like in the early days we had server sprawl, with a different app on a different server, and we needed to virtualize the servers, well, now we're seeing cluster creep with Hadoop clusters and Spark clusters and machine learning and deep learning clusters, as well as the traditional HPC workload. So what Spectrum Computing does is virtualize that shared cluster environment so that you can run all these different kinds of workloads and drive up the efficiency of the environment. Because efficiency is really the key, right? You've got to have efficiency; that's really where Cloud got its start, you know, kind of eating into the traditional space, right? There's a lot of inefficient stuff out there, so you've got to use your resources efficiently. That's correct. It's way too competitive. Correct, and we're also seeing inefficiencies in the use of Cloud. Yeah, absolutely. So one of the features that we've added to Spectrum Computing recently is automated dynamic cloud bursting, right? So we have clients who say that, you know, they've got their scientists or their design engineers spinning up clusters in the Cloud to run workloads and then leaving the servers running, and they're paying the bill. So we built an automation where we push the workload and the data over to the Cloud, start the servers, run the workload, and when the workload's done, spin down the servers and bring the data back to the user, and it's very cost-effective that way. It's pretty funny. Everyone often talks about the spin-up, but then they forget to talk about the spin-down. Well, that's where the cost savings is, exactly. All right, so final words, Wayne. You know, as you look forward, there's a whole lot of technology in Formula One racing. You know, kind of what's next? 
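The bursting pattern Bernie outlines, push the work and data out, run, then always spin down and pull results back, can be sketched as a simple lifecycle wrapper. All four callables below are hypothetical stand-ins for a cloud provider's API, not IBM's actual interface:

```python
def burst_to_cloud(workload, data, provision, submit, fetch_results, teardown):
    """Run a workload on temporary cloud servers, guaranteeing spin-down."""
    cluster = provision()                      # spin up the servers
    try:
        job = submit(cluster, workload, data)  # push workload and data out
        return fetch_results(job)              # bring the results back
    finally:
        teardown(cluster)                      # always spin down: the cost savings

# Demo with mock callables standing in for a real provider API:
log = []
result = burst_to_cloud(
    "cfd_run", "mesh_data",
    provision=lambda: "cluster-1",
    submit=lambda c, w, d: f"{c}:{w}",
    fetch_results=lambda job: f"results of {job}",
    teardown=lambda c: log.append(f"{c} down"),
)
print(result)  # results of cluster-1:cfd_run
print(log)     # ['cluster-1 down']
```

The `try`/`finally` is the point of the sketch: even if the workload fails, the servers come down, so nobody keeps paying for an idle cluster.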
Where do you guys go next in terms of trying to get another edge in Formula One racing, specifically? I mean, I'm hoping they kind of reduce the restrictions on HPC so we can actually start using CFD and the software IBM provides in a serious manner, so we can actually start pushing the technologies way beyond where they are at the moment. It's really interesting that that is a restriction, right? You think of plates and the size of the engine and these types of things as the rule restrictions, but they're actually restricting your use of high-performance computing based on compute capacity. They're trying to save money, basically, but it's great. So whether it's a rule or, you know, your shareholders, everybody's trying to save money. All right, so Bernie, what are you looking at? So 2017's coming to an end, it's hard for me to say that, as you look forward to 2018. What are some of your priorities for 2018? Well, the really important thing, and we're hearing it at this conference, talking with the analysts and with the clients, is that the next generation of HPC and analytics is what we're calling machine learning, deep learning, cognitive, AI, whatever you want to call it. That's just the new generation of this workload, and our Spectrum Conductor offering and our new Deep Learning Impact capability automate the training of deep learning models so that you can more quickly get to an accurate model, like in hours or minutes, not days or weeks. That's going to be a huge breakthrough. And based on our early client experience this year, I think 2018 is going to be a breakout year for putting that to work in commercial, you know, enterprise use cases. All right, well, I'll look forward to the briefing a year from now; hopefully it's super good in 2018. All right, Bernie, Wayne, thanks for taking a few minutes out of your day, appreciate it. You're welcome, thank you. 
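On the "accurate model in hours, not days" point: one core idea behind automating training is stopping as soon as a quality target is hit rather than burning a fixed time budget. A toy sketch of that loop (the `evaluate` callable is a stand-in for a real train-and-validate cycle, not the Deep Learning Impact API):

```python
def train_until(target_acc, max_epochs, evaluate):
    """Run training epochs until accuracy reaches target_acc or budget runs out."""
    acc = 0.0
    for epoch in range(1, max_epochs + 1):
        acc = evaluate(epoch)       # one train-and-validate cycle
        if acc >= target_acc:
            return epoch, acc       # stop early: hours or minutes, not days
    return max_epochs, acc          # budget exhausted

# Toy model whose validation accuracy climbs 10 points per epoch:
print(train_until(0.9, 100, lambda epoch: epoch / 10))  # (9, 0.9)
```

Automated tooling in this space layers more on top, tuning hyperparameters and monitoring convergence, but the payoff is the same: compute stops as soon as the model is good enough.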
All right, he's Bernie, he's Wayne, I'm Jeff Frick. We're talking Formula One and Red Bull Racing here at Supercomputing 2017. Thanks for watching.