The history of high-performance computing is unique and storied. You know, it's generally accepted that the first true supercomputer was shipped in the mid-1960s by Control Data Corporation, CDC, designed by an engineering team led by Seymour Cray, the father of supercomputing. He left CDC in the 70s to start his own company, of course, carrying his own name. Now that company, Cray, became the market leader in the 70s and the 80s, and then the decade of the 80s saw attempts to bring new designs, such as massively parallel systems, to reach new heights of performance and efficiency. Supercomputing design was one of the most challenging fields, and a number of really brilliant engineers became kind of quasi-famous in their little industry. In addition to Cray himself, there was Steve Chen, who worked for Cray and then went out to start his own companies; Danny Hillis of Thinking Machines; Steve Frank of Kendall Square Research; and Steve Wallach, who tried to build a mini supercomputer at Convex. These new entrants all failed for the most part, because the market at the time just wasn't large enough and the economics of these systems really weren't that attractive. Now, the late 80s and the 90s saw big Japanese companies like NEC and Fujitsu entering the fray, and governments around the world began to invest heavily in these systems to solve societal problems and make their nations more competitive. And as we entered the 21st century, we saw the coming of petascale computing, with China actually cracking the TOP500 list of high-performance computing. And today we're now entering the exascale era, with systems that can complete a billion billion calculations per second, or 10 to the 18th power. Astounding. And today the high-performance computing market generates north of $30 billion annually and is growing in the high single digits.
Supercomputers solve the world's hardest problems in things like simulation, life sciences, weather, energy exploration, aerospace, astronomy, the automotive industry, and many other high-value examples. And supercomputers are expensive. You know, the highest-performing supercomputers used to cost tens of millions of dollars, maybe $30 million. We've seen that steadily rise to over $200 million, and today we're even seeing systems that cost more than half a billion dollars, even into the low billions when you include all the surrounding data center infrastructure and cooling required. The US, China, Japan, and EU countries, as well as the UK, are all investing heavily to keep their countries competitive, and no price seems to be too high. Now, there are five megatrends going on in HPC today, in addition to the massive rising cost we just talked about. One, systems are becoming more distributed and less monolithic. Two, the power of these systems is increasing dramatically, both in terms of processor performance and energy consumption. The x86 dominates processor shipments today and will probably continue to do so. IBM's Power has some presence, but Arm is growing very rapidly, and NVIDIA with GPUs is becoming a major player as AI comes in. We'll talk about that in a minute. And both the EU and China are developing their own processors. We're seeing massive densities, with hundreds of thousands of cores that are being liquid cooled with novel phase-change technology. The third big trend is AI, which of course is still in the early stages, but it's being combined with ever larger and massive, massive data sets to attack new problems and accelerate research in dozens of industries. Now the fourth big trend: HPC in the cloud reached critical mass at the end of the last decade, and all of the major hyperscalers are providing HPC-as-a-service capability.
Now finally, quantum computing is often talked about and predicted to become more stable by the end of the decade and crack new dimensions in computing. The EU has even announced hybrid QC, with the goal of having a stable system in the second half of this decade, most likely around 2027 or 2028.

Welcome to theCUBE's preview of SC22, the big supercomputing show, which takes place the week of November 13th in Dallas. theCUBE is going to be there. Dave Nicholson will be one of the co-hosts and joins me now to talk about trends in HPC and what to look for at the show. Dave, welcome, good to see you.

Hey, good to see you too, Dave.

All right, you heard my narrative up front, Dave. You've got a technical background, CTO chops. What did I miss? What are the major trends that you're seeing?

I don't think you really missed anything. I think it's just a question of double-clicking on some of the things that you brought up. You know, if you look back historically, supercomputing was sort of relegated to things like weather prediction and nuclear weapons modeling, and these systems would live in places like Lawrence Livermore Labs or Los Alamos. Today, that requirement for cutting-edge, leading-edge, highest-performing supercomputing technology is bleeding into the enterprise, driven by AI and ML, artificial intelligence and machine learning. So when we think about the conversations we're going to have and the coverage we're going to do of the SC22 event, a lot of it is going to be looking under the covers, seeing what kinds of architectural things contribute to these capabilities moving forward, and asking a whole bunch of questions.

Yeah, so there's this sort of theory that the world is moving beyond compute centricity toward connectivity centricity. We've talked about that, you and I, in the past. Is that a factor in the HPC world? How is it impacting supercomputing design?
Well, if you're designing an island that's the tip of the spear and doesn't have to offer any level of interoperability or compatibility with anything else in the compute world, then connectivity is important simply from a speeds-and-feeds perspective: lowest-latency connectivity between nodes and things like that. But as we sort of democratize supercomputing to a degree, as it moves from solely the purview of academia into truly ubiquitous architecture leveraged by enterprises, you start asking the question: hey, wouldn't it be kind of cool if we could have this hooked up into our Ethernet networks? And so that's a whole interesting subject to explore, because with things like RDMA over Converged Ethernet, RoCE, you now have the ability to have these supercomputing capabilities directly accessible by enterprise computing. So that level of detail, opening up the box and looking at the NICs or the storage cards that are in the box, is actually critically important. And as an old-school hardware knuckle-dragger myself, I am super excited to see what the cutting edge holds right now.

Yeah, when you look at the SC22 website, I mean, they're covering all kinds of different areas. They've got parallel clustered systems, AI, storage, servers, system software, application software, security. I mean, HPC is no longer this niche. It really touches virtually every industry, or most industries anyway, and is really driving new advancements in society and research, solving some of the world's hardest problems. So what are some of the topics that you want to cover at SC22?

Well, I touched on some of them. I really want to ask people questions about this idea of HPC moving from just academia into the enterprise. And the question of, does that mean there are architectural concerns that people have that might not be the same as the concerns that someone in academia or in a lab environment would have?
And by the way, just for a little historical context, I can't help it: I just went through the upgrade from the iPhone 12 to the iPhone 14. This has got one terabyte of storage in it. One terabyte of storage. In 1997, I helped build a one-terabyte NAS system that a government defense contractor purchased for almost $2 million. $2 million. This was, I don't even know, $9.99 a month extra on my cell phone bill. We had a team of seven people who were going to manage that one terabyte of storage. So similarly, when we talk about where we are from a supercompute resource perspective, if you consider it historically, it's absolutely insane. I'm going to be asking people about, of course, what's going on today, but also the near future. Now, what can we expect? What is the sort of singularity that needs to occur where natural language processing across all of the world's languages exists in a perfect way? Do we have the compute power now? What's the interface between software and hardware? But really, this is going to be an opportunity that's a little bit unique in terms of the things we typically cover, because this is a lot about cracking open the box, the server box, and looking at what's inside and carefully considering all of the components.

You know, Dave, I'm looking at the exhibitor floor. It's like everybody is here: NASA, Microsoft, IBM, Dell, Intel, HPE, AWS, all the hyperscale guys, WekaIO, Pure Storage, companies I've never heard of. It's like hundreds and hundreds of exhibitors: NVIDIA, Oracle, Penguin Solutions. I mean, just on and on and on. Google, of course, has a presence there. theCUBE has a major presence; we've got a 20-by-20 booth. So it's really, as I say, to your point, HPC is going mainstream. You know, I think a lot of times we think of HPC, supercomputing, as just sort of off in the eclectic, far-off corner.
But really, when you think about big data, when you think about AI, a lot of the advancements that occur in HPC will trickle through and go mainstream in commercial environments. And I suspect that's why there are so many companies here that are really relevant to the commercial market as well.

Yeah, this is like the Formula One of computing. So, you know, if you're a motorsports nerd, you know that F1 is the pinnacle of the sport. SC22, this is where everybody wants to be. Another little historical reference that comes to mind: there was a time, I think in the early 2000s, when Unisys partnered with Intel and Microsoft to come up with, I think it was the ES7000, which was supposed to be a sort of Intel mainframe. It was an early attempt to use, and I don't say this in a derogatory way, commodity resources to create something really, really powerful. Here we are 20 years later, and we are absolutely smack in the middle of that. You mentioned, you know, the focus on x86 architecture, but all of the other components that the silicon manufacturers bring to bear, companies like Broadcom and NVIDIA, et al., they're all contributing components to this mix, in addition to, of course, the microprocessor folks like AMD and Intel and others. So, yeah, this is a big-time nerd fest. Lots of academics will still be there. Supercomputing.org, this kind of loose affiliation that's been running the SC events for years, has a major focus, major hooks into academia. They're bringing in legit computer scientists to this event. This is all cutting-edge stuff.

Yeah, so, like you said, it's gonna be a lot of techies there, a very technical computing audience, of course. At the same time, we expect there's gonna be a fair amount, as they say, of crossover. And so I'm excited to see what the coverage looks like: yourself, John Furrier, Savannah, I think even Paul Gillin is gonna attend the show, because I believe we're gonna be there three days.
So, we're doing a lot of editorial. Dell is an anchor sponsor, so we really appreciate them providing funding so we can have this community event and bring people on. So if you are interested...

Dave, I just have to comment on that point. I think that's indicative of where this world is moving. When you have Dell so directly involved in something like this, it's an indication that this is moving out of just the realm of academia and in the direction of the enterprise, because as we know, they tend to ruthlessly drive down the cost of things. And so I think that's an interesting indication right there.

Yeah, as do the cloud guys. So again, this is mainstream. So if you're interested, if you've got something interesting to talk about, if you have market research, you're an analyst, you're an influencer in this community, you've got technical chops, maybe you've got an interesting startup, you can contact David, David.Nicholson at siliconangle.com; John Furrier is John at siliconangle.com; or David.Valante at siliconangle.com. I'd be happy to listen to your pitch and see if we can fit you onto the program. So, really excited. It's the week of November 13th. I think November 13th is a Sunday, so I believe David will be broadcasting Tuesday, Wednesday, Thursday. Really excited. I'll give you the last word here, Dave.

No, I'm not embarrassed to admit that I'm really, really excited about this. It's cutting-edge stuff. And I'm really gonna be exploring this question of where it fits in the world of AI and ML. I think that's really gonna be the center of what I'm seeking to understand while I'm there.

All right, Dave Nicholson, thanks for your time. theCUBE at SC22, don't miss it. Go to theCUBE.net, go to siliconangle.com for all the news. This is Dave Valante for theCUBE, and for Dave Nicholson, thanks for watching, and we'll see you in Dallas.