Welcome everyone to the very first Purdue Engineering Distinguished Lecture of the 2020-2021 academic year. My name is Arvind Raman; I'm the Executive Associate Dean for Faculty and Staff here in the college. The Distinguished Lecture Series started in 2018 to invite world-renowned faculty and professionals to Purdue Engineering to engage in thought-provoking conversations with faculty and students about the grand challenges in their fields and the opportunities there as well. Besides participating in an interactive panel, which concluded just a few minutes ago, the distinguished lecturers also present a lecture to a broad audience of faculty, graduate students, and undergraduate students. Today's talk is about cooling technologies for data centers: challenges and opportunities. It is my distinct honor and pleasure to introduce, as our Purdue Engineering Distinguished Lecturer for today, Professor Dereje Agonafer, who is the Presidential Distinguished Professor of Mechanical and Aerospace Engineering at the University of Texas at Arlington, where he heads two centers: he is the site director of an NSF IUCRC on energy-efficient systems and director of an electronics packaging center. Dr. Agonafer received his PhD at Howard University and then worked for 15 years at IBM. In 1991, his work was recognized with the IBM Outstanding Technical Achievement Award in appreciation of his computer-aided thermal modeling. Since joining UT Arlington in 1999, he has graduated 230 graduate students, a record for the university, maybe a record for any university, including 25 PhDs, and he currently advises 16 PhD and 13 master's students. His former students are making significant contributions at many technology companies such as Facebook, Intel, 3M, Microsoft, and Amazon. His new initiative, which I think we'll hear about in this talk, is a new center called RAMPS, the Center for Reliability Assessment in Micro and Power Electronic Systems, for which he has received significant funding, new equipment and lab space, and faculty lines to work with him. For his contributions he has received many awards, and I can only pick a few; it would take too much time to go through all of them. He has been honored with the 2014 NSBE Golden Torch Award and the 2019 ASME Heat Transfer Memorial Award, among others. In 2020, he received Howard University's Charter Day Award for Distinguished Postgraduate Achievement as a research engineer. Also in 2020, he received the SEMI-THERM Lifetime Achievement Award in recognition of significant contributions to the field of electronics thermal management. He is a fellow of the National Academy of Inventors, the American Association for the Advancement of Science, and the American Society of Mechanical Engineers. In 2019, he was elected to the U.S. National Academy of Engineering. Without further ado, please welcome Dr. Agonafer for his distinguished lecture. Over to you, Dereje.

Thank you. Thank you, Dr. Raman. I had a great day today; I met many of my colleagues. I started out, I think, meeting with Dr. Ganesh Subbarayan and Dr. Issam Mudawar, people I have known for a long time. Then Dr. Anil Bajaj, the former department head, my good friend Dr. Jay Gore, and another good friend, Dr. Ajay Malshe, and then finally the current acting dean, Dr. Mark Lundstrom, who reminded me about thermodynamics: let's not just talk about efficiency.
Let's talk about exergy destruction, about maximizing the total useful work; let's look at it from an exergy point of view. That was very nice. So today I'll talk about cooling technologies for data centers: challenges and opportunities. This talk was preceded by a panel with my great friends. Oh, my video is not on; let me turn it on. Dr. Ashish Gupta from Intel and Dr. Madhusudan Iyengar from Google, and several colleagues I've met from Purdue; we talked a little bit about data centers. In particular, I'll talk about the challenges and the opportunities. Some of this is really back to the future and some is certainly new. So the outline will be: background; free cooling and evaporative cooling, which some might say have been around for a while, but yes, they are still extremely important; liquid cooling, meaning indirect liquid cooling using cold plates; immersion cooling; and finally, what is the impact of new packaging technologies, in particular heterogeneous integration, on data center thermal management and reliability?

So what is a data center? It's really a purpose-built infrastructure or facility that houses IT equipment such as servers and storage so that you can have access to data. We have all known how important that is, and at no time has it been more important than during the current pandemic, where we're always clicking on our phones to access data. And no two facilities are the same. You can have facilities that are very small; I have two experimental data centers that are 625 square feet each, and a few miles away Facebook has an 800,000-square-foot data center facility, a little bigger than ours. So what are the types of facilities, and what are the shifting trends? Amazon, Google, Facebook, and so on care a great deal about energy efficiency; energy efficiency is important when you have what we call a hyperscale facility, and these are purpose built. Then you have finance companies like Citigroup and Bank of America and others, for whom reliability is extremely important: you cannot fail, you want 99.999% availability, so that's critical, and cost maybe not so much. Then we have colocation companies like Digital Realty, which sell floor space so that you can put your computing there; it's really a colocation facility. Now, by 2025 it's predicted that hyperscale infrastructure buyers like Google, Amazon, and so on will consume 50% or more of all server and storage infrastructure deployed, and the growth of hyperscale facilities is very significant; I'll talk about it in a minute.

So here's a microprocessor trend. If you look at the number of transistors, this is Moore's law: it continues to double. Now the frequency is a different story. The doubling cadence used to be every year back in the sixties, then every two years, maybe now every four to five years, but the number of transistors for a given area continues to double. Single-thread performance, though, not so much; it is starting to flatten out. And the frequency since roughly 2004 or so has been fairly flat, probably between three and five gigahertz, and it continues to be that. And the typical thermal design power has really started flattening out.
The reason is that we have this thing called Dennard scaling, which is really the physics behind Moore's law. Power density is the voltage times the current divided by the area, and all of those quantities used to scale each generation by roughly 0.7, one over the square root of two. But then the voltage stopped scaling like that, more like 0.8 or so. Therefore, for a given footprint, the power density starts going up unless we limit the power. That's really what happened: you can see the typical power being limited after 2004 or so. Then you can see the number of cores. How do you get performance? You increase the number of cores significantly: 4, 8, 16, and so on. You also look at the nodes, GPUs at 7 and 12 nanometers; I know that companies like TSMC have actually gone significantly lower, to 5 nm, and even talk about 3 nm. But when you start looking at that, the issue is the cost: if you compare the cost with the previous technology, after 2004 or so the cost starts becoming significantly higher. Part of Moore's law is that the transistor count doubles every two years, or 18 months, whatever the case; the other part is that the cost continues to go down. So you got a freebie: the cost per transistor goes down and you're getting more transistors, so you can do more work. That's not the case anymore, and that's really something we have to pay attention to and discuss more.

Now, energy demand and data centers. As you can see, the energy demand continues to grow. This was what was predicted around 2010: that the annual electricity use was going to grow exponentially. But what happened is that around 2010 operators started using improved management, and there was a huge hyperscale shift between 2010 and 2018; hyperscale significantly brings down the electricity use. So the combination of best practices and hyperscale gives you real leverage on the electricity use. We have to continue doing that, but that prediction did not necessarily come true; if you look at some papers from the early 2000s it was doomsday, but a lot of people in the thermal area have done a lot of good work to reduce that. Now, if you look at the pie chart of where the electricity goes, significant electricity goes to cooling, storage, network, and servers, and we really want to minimize the cooling part as much as we can. And if you compare 2010, when traditional data centers dominated and cloud and hyperscale not so much, with 2018, there has been a significant increase in hyperscale and cloud, and the efficiencies have improved significantly. This is just to show the global energy and storage use from 2010 to 2018: installed storage capacity increased significantly, everything is increasing, except that we're doing a good job on metrics like PUE, so energy efficiency has gotten better, but the server demand continues to rise.
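To make the Dennard-scaling point from the top of this discussion a bit more concrete, here is the standard back-of-envelope form; the scaling factors below are just the textbook values, not numbers taken from the talk's slides.

```latex
% Dynamic power: P = C V^2 f.  Under classical Dennard scaling, dimensions,
% voltage, and capacitance all shrink by k ~ 0.7 per node while f grows by 1/k:
\[
  \frac{P}{A} \;\longrightarrow\; \frac{(kC)(kV)^2(f/k)}{(kL)^2}
  \;=\; \frac{k^2\,C V^2 f}{k^2\,A} \;=\; \frac{P}{A}.
\]
% If the voltage instead scales only by ~0.8 per node (as noted in the talk),
% the new power density is (0.8/0.7)^2 ~ 1.3x the old one each generation, so
% either the power density climbs or the total chip power must be capped --
% which is what the flat TDP trend after roughly 2004 reflects.
```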
So the path forward is to extend the historical energy-efficiency gains: widespread adoption of innovative efficiency measures to maximize infrastructure efficiency, development of new technology to manage change, better modeling capabilities for decision makers, and open sharing of reliable data at the global level.

So now let me start talking about some of the cooling strategies. I'll start with free cooling. Free cooling is really this: you have a data center, and you just bring the outside air into the servers without conditioning its temperature or relative humidity. Obviously, if you're going to do that 100% of the time, you have to be in the right climate, because the servers are specified, from a reliability point of view, with an envelope, and the relative humidity and dry-bulb temperature have to stay inside that envelope. However, depending on where you are, you might be in that envelope 90% of the time, so you can do free cooling, and the rest of the time you can go a little outside it; in other words, the dry-bulb temperature and relative humidity can go outside the zone, and that's okay as long as it's for a limited amount of time. Large enterprises use free cooling but then couple it with evaporative cooling. To complement free cooling with evaporative cooling, you use a spray, and subsequently cooling media, where you use the latent heat of vaporization to significantly reduce the need for compressors, for CRAC and CRAH units (computer room air conditioning and air handling units). The cooling-equipment suppliers often provide large indirect evaporative units. The nice thing about those is that, as you might have heard the panelists discuss, outside air can bring contamination: if you just bring in the outside air without doing anything, you can have contamination issues where some of the copper and silver and so on can be oxidized, subsequently leading to reliability issues. You have to be concerned about that, but with indirect evaporative cooling you don't, because the air that's being cooled is never in contact with the outside environment at all. So here is a typical direct evaporative cooling arrangement. You bring in the outside air and potentially you mix it; the outside air could be cold, for example, so you can mix it with the hot exhaust air. Then you have a filter wall and a misting system, so you use the latent heat of vaporization directly to cool the air and then supply that cold air. This is a typical misting system that was used at Facebook; eventually this evolved into cooling media. And this is a rigid media, as opposed to the misting you just saw: water flows through the cooling media, air blows through it, and you get the latent heat of vaporization.
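As a rough back-of-envelope for the direct evaporative cooling media just described, the supply temperature is usually estimated from a wet-bulb effectiveness; the numbers below are purely illustrative assumptions, not data from the Facebook or sponsor installations mentioned in the talk.

```latex
% Direct evaporative cooling: the supply dry-bulb temperature approaches the
% wet-bulb temperature, limited by the media's saturation effectiveness eps.
\[
  T_{\mathrm{supply}} \;=\; T_{\mathrm{db,in}} \;-\;
  \varepsilon\,\bigl(T_{\mathrm{db,in}} - T_{\mathrm{wb,in}}\bigr)
\]
% Illustrative numbers only: outside air at 35 C dry bulb / 20 C wet bulb and
% a media effectiveness of 0.85 gives 35 - 0.85(35 - 20) = 22.3 C, inside the
% 18-27 C recommended window with no compressor-based cooling at all.
```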
So the challenges and opportunities, for the end user or operator: you have to have control, and you have to be able to confirm that the servers are meeting the criteria; the servers being sold by Dell or IBM and so on specify that the incoming air, its relative humidity and dry-bulb temperature, should be in a recommended zone. How do I verify that? Still, the widespread adoption of evaporative cooling and airside economization provides immense potential for continued energy savings, and it's being used extensively. So, the outline of some of what we've done: we have done extensive modeling work in this area on direct and indirect evaporative cooling. A key thing there is really the cooling media: how do you characterize it, how do you optimize it, do you split the cooling media vertically or horizontally? There's a lot of experimental work that needs to be done for that. This is funded by the National Science Foundation, and in this particular case the collaborator was Binghamton University, with Dr. Bahgat Sammakia, a colleague of mine, and the industry members that sponsor it; this is funded by NSF along with Facebook, Future Facilities, and Mestex. It is direct and indirect evaporative cooling of IT parts. So here is a psychrometric chart, which is really a two-dimensional representation of the state of moist air; in other words, you assume a third property is fixed. If you use the Gibbs phase rule you need three thermodynamic properties, but once you fix one, the pressure, you need two. So here is the 2011 version of the psychrometric chart. What it's saying is: if the air coming into the servers is somehow in this recommended zone, you're good. But that recommended zone was between 18 and 27 degrees C, and that's pretty tight. Data center operators said, look, we need better than that; IBM, Dell, and so on, can you provide equipment where we can really raise the inlet temperature? And that was done, so now you have this allowable envelope, where the inlet temperature can be up to 45 degrees, not necessarily the entire time, but it allows you to bring in air up to 45 degrees C, which means almost no need, most of the time, for any mechanical cooling. And subsequently, if you look at the difference, the allowable relative humidity has been reduced significantly, down to about 8%. So this is something that we follow; one chart is in British units and the other is metric. These are the thermal guidelines, and again, a copy of this is supplied. So the goal of this project is to provide best practices for using direct and indirect evaporative cooling techniques, develop deeper insight, and provide unbiased guidance on implementing, operating, and maintaining evaporative cooling systems. Keep in mind that a huge percentage of installations still use the direct and indirect evaporative heat exchangers we talked about. It's still, to a certain extent, liquid, because we have evaporation; but when we say we're going to try to migrate to direct liquid cooling, like cold plates and so on, it does not mean we're going to make that migration right away. It's going to take some time, but we're being pushed to do it.
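As a minimal sketch of the kind of envelope check an operator's control loop might run against the recommended and allowable zones just discussed: the dry-bulb limits below come from the numbers quoted in the talk (18 to 27 C recommended, up to 45 C allowable, relative humidity down to about 8%), but the remaining thresholds are placeholders; the real ASHRAE envelopes are also bounded by dew point, so treat this strictly as illustration.

```python
# Illustrative check of server inlet conditions against a simplified
# "recommended" envelope and a wider "allowable" envelope. Dry-bulb limits
# follow the figures quoted in the talk; the humidity bands are placeholders.
# Real ASHRAE classes also bound dew point; this is a teaching sketch only.

def classify_inlet(t_db_c: float, rh_pct: float) -> str:
    """Return 'recommended', 'allowable', or 'out of envelope'."""
    if 18.0 <= t_db_c <= 27.0 and 30.0 <= rh_pct <= 60.0:
        return "recommended"
    if 5.0 <= t_db_c <= 45.0 and 8.0 <= rh_pct <= 90.0:
        return "allowable"   # acceptable for limited excursions only
    return "out of envelope"

if __name__ == "__main__":
    for t, rh in [(22, 45), (38, 20), (48, 10)]:
        print(f"{t} C, {rh}% RH -> {classify_inlet(t, rh)}")
```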
So here are some of the tests. We've done a lot of work on the cooling media and on indirect evaporative cooling as such, and we're actually building a unique heat exchanger, which I'll come back to. This is airside economization. If you look at this chart: if the outside air happens to have a dry-bulb temperature and relative humidity anywhere in this region, you can do direct evaporative cooling and get into the recommended zone. If you are somewhere over here, you need to do indirect evaporative cooling to get anywhere in here, or you can go all the way down here, or you can do it in two steps: you cool indirectly to reach somewhere in this zone and then do direct evaporative cooling. A lot of work needed to be done, so we teamed up with a company called Mestex, one of our sponsors; they've been sponsoring our work for several years and we want to thank them; as it happens, we probably had a meeting with them today. So we built this IT pod; this is where the electronics is, and we have a cooling tower, cooling media for direct evaporative cooling, as well as indirect evaporative cooling. We've been running this for about five years with very few reliability issues. So: evaporative efficiency versus maintenance; we've been doing a lot of work on this. Once again, the cooling media is critical: how do you split it, vertically or horizontally? And then also, how do you control the inlet? For the data center simulation we used a program called 6Sigma by Future Facilities, a company we work very closely with. In addition, we're also developing neural networks: using this IT pod we have generated a tremendous amount of data, so that we can develop artificial neural networks and have real-time, in-situ modeling to control the inlet and outlet conditions, the inlet conditions of the evaporative cooler. And this is how we test the evaporative cooling media, with wireless relative-humidity and dry-bulb-temperature sensors and so on. We are also currently working with a company called CommScope, looking at free cooling for their 5G towers; they're a very good customer of ours and have been participating in our work since day one, since 2011. This is some of the work; I can't go into too much detail, but there is proactive control and scheduling of data center cooling using neural networks. As I said, we developed this neural network using the test data we have: we do CFD, and then we create this artificial neural network model. And this is the CFD model of our test data center; I won't go into too much detail about the models we create. So the summary and future work: CFD-simulation-based training. The CFD simulation takes quite a bit of time but it's fairly accurate, and it then guides us in developing these ANN models. We showed that predictive, data-driven models have huge potential in optimizing the sequence of operations, and the future work is to determine the subset of input conditions to build a training data set from CFD simulations, based on the outside-air temperature patterns.
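To illustrate the CFD-trained neural-network idea just described, here is a minimal sketch of a surrogate model: CFD runs supply (operating condition) to (server inlet temperature) samples, and a small network learns that map so it can answer queries fast enough to sit inside a control loop. The feature names, network size, and synthetic data are assumptions made for the sketch, not the actual model used in the IT-pod work.

```python
# Minimal sketch of a CFD-trained surrogate for data-center cooling control.
# X: operating conditions sampled in CFD (outside-air temperature, supply-fan
#    fraction, evaporative-media wetting); y: predicted server inlet temperature.
# All names, sizes, and the synthetic data are illustrative placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform([10.0, 0.5, 0.0], [40.0, 1.0, 1.0], size=(500, 3))  # stand-in for CFD cases
y = 0.6 * X[:, 0] - 8.0 * X[:, 1] - 5.0 * X[:, 2] + 20.0            # stand-in for CFD output

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X_tr, y_tr)

# Once trained, the surrogate answers "what inlet temperature do I get for this
# setting?" almost instantly, which the full CFD model (minutes to hours per
# run) never could -- that is what enables proactive, real-time control.
print("held-out R^2:", round(surrogate.score(X_te, y_te), 3))
```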
And now, another thing is the development of heat exchangers; this is very important. It's a project one of my PhD students, Ashwin Siddharth, is involved in, along with a couple of master's students, and we're again collaborating with Binghamton. This is experimental testing to investigate the change in effectiveness of an air-to-air heat exchanger, which we are actually building, and to develop a compact model. This is the indirect airside economizer unit, and the commissioning is currently in progress. The summary: a comprehensive guide to promote widespread adoption of direct and indirect evaporative cooling implemented with airside economization, better modeling for proactive control strategies, and the design and commissioning of direct and indirect evaporative heat exchanger air handling units.

Now, liquid cooling. One of my students, who is currently doing an internship with NVIDIA, did some of this work with Cisco servers. This is really a hybrid air- and liquid-cooled system. When you look at the hierarchy of cooling solutions, you have single phase: Google, IBM, Lenovo. Keep in mind IBM has been doing liquid cooling forever, since the early 80s; in fact they have been doing 100% liquid cooling of racks on the order of 200 kilowatts and more, so when people say they're concerned about water, remember IBM has been doing this work forever, and Lenovo has these systems, and there is the Open Compute Project. And then you have two-phase systems, from IBM and others, as well. So this is some of the work we've done. This is a Cisco server that is hybrid cooled: indirect liquid cooling with a cold plate, as well as fans; we actually use fans to cool some of the other components. We were able to improve the efficiency significantly and showed that we can reduce the number of fans by about 40%; the system had been put together without optimization in mind. We've also looked, in collaboration with Facebook, at centralized versus distributed cooling systems, and we were able to show that the centralized cooling system performed significantly better. That work was done by Manasa Sahini; she now has her PhD and is with Intel. And this is bench-top liquid-cooling work on a hybrid-cooled server; once again, as I've said, we were able to show significant reductions in power using these systems. This is a comparative study of air and liquid cooling, again done by one of my students; the publications are there if you're interested. The future work is, instead of just looking at a single server, we actually received a donation from Cisco of an entire liquid-cooled rack with some 30 servers, and we're going to look at optimization at the rack level. This is something the same student is currently working on: dynamic liquid cooling. As I said earlier, when you start looking at multi-core systems, you really start seeing non-uniformity in the power distribution, which also leads to non-uniformity in temperature. And that is an issue, because from a cooling point of view you have to cool for the worst case, the highest temperature.
What we have developed here at UTA is a cold plate we call a dynamic cold plate. This is being led by one of my PhD students. Dynamic cooling can be used both at the rack level and at the chip level, so I'll discuss both. At the rack level, we have a flow control device: you sense the temperature and, based on the temperature, you control the flow, so you don't have to size for the maximum temperature and push the same flow through every server. In this setup, which simulates a rack, you can actually reduce the amount of flow going into the rack significantly. So this is the rack-level control strategy, developed to control the pump based on either temperature or pressure drop. And then we have an active flow control device that opens up; it basically controls the resistance: if it's fully open you have very little resistance, and it sweeps from zero to 90 degrees. The FCD is divided broadly into three parts. This is its fabrication, and we are very, very cost conscious: as Madhu said earlier, the TCO is always a consideration, so one of the things I told my students is that it has to cost less than a dollar to fabricate, and the FCD currently costs less than a dollar. And this shows the flow rate versus the angle: we can control the flow based on the angle, so for a given LPM, for example, I can change the angle and control how much flow goes through this FCD device. Using this FCD and dynamic cooling, we were able to show a 64% pumping-power saving. This is the schematic representation of the experimental setup; in fact, one of the master's students defended just a couple of days ago on this particular work. This is his final experimental setup at the rack level, and this is the single-server experimental test vehicle assembly. And then we show, you can see here, that in experiment one the flow rate is the same for every server, the percent flow-rate change is all the same, whereas in experiment two we change the flow rate, and you can see that the temperature changes accordingly. So what we were able to show, and I'll just go to the conclusion on this, is the design and development of a novel FCD, made with 3D manufacturing, that costs less than a dollar, and a 64% reduction in pumping power. This is rack-level work; we plan to implement it at the data center level as well. And now, the chip level: a microprocessor or GPU, whatever it is, has a non-uniform power distribution. So instead of just having a cold plate with serpentine channels, where the flow comes in one side, turns around, and goes out the other, or with microchannels, we divide it up into a number of segments, four segments for example, and depending on the temperature, we check the temperature right at the cold plate and change the flow rate going into the different sections. This is what we call a dynamic cold plate at the chip level. The final design uses bimetallic strips, and it works really well.
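Below is a minimal sketch of the temperature-based, rack-level control logic described above: read each server's temperature and open or close its flow control device accordingly, so hot servers get more flow and lightly loaded ones are throttled. The setpoint, gain, and angle limits are hypothetical values chosen for the sketch; the actual controller in the lab may differ.

```python
# Illustrative proportional controller for per-server flow control devices
# (FCDs) on a liquid-cooled rack. Hotter servers get a wider valve angle
# (more flow); cooler ones are throttled, which is where the pumping-power
# savings come from. Setpoint, gain, and limits are made up for this sketch.

SETPOINT_C = 45.0      # hypothetical target component temperature
GAIN_DEG_PER_C = 6.0   # hypothetical valve degrees per degree C of error

def fcd_angle(measured_temp_c: float, current_angle_deg: float) -> float:
    """Return the new FCD opening angle (0 = nearly closed, 90 = fully open)."""
    error = measured_temp_c - SETPOINT_C
    new_angle = current_angle_deg + GAIN_DEG_PER_C * error
    return max(0.0, min(90.0, new_angle))

# Example: three servers in a rack with different loads.
angles = {"server_1": 45.0, "server_2": 45.0, "server_3": 45.0}
temps = {"server_1": 52.0, "server_2": 44.0, "server_3": 38.0}
for name, t in temps.items():
    angles[name] = fcd_angle(t, angles[name])
print(angles)  # the hot server opens toward 90; lightly loaded ones throttle down
```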
That was a lot of work, I know; people watching this will say, you went through all that work in 20 seconds. Yes, that's what we do. But here's the punchline: this is the normal cold plate, without the dynamic arrangement, and this is the dynamic cold plate. With the dynamic cold plate you reduce the delta T across the chip to 6.93 degrees C, versus 15 degrees C without it, and furthermore the maximum temperature here is 39 degrees versus 44. Future work is implementation of the novel flow control device at the rack level with a control strategy for pumping-power savings, because at the end of the day it's not just the thermal resistance; it's really the product of the pressure drop and the flow rate, the pumping power, that we have to be concerned about. And then the integration of the dynamic cold plate with dynamic cooling at the rack level, so you integrate both the chip and rack levels, and the savings are even more significant.

Immersion cooling. We have these rising power densities in CPUs and GPUs; I've already talked about Dennard scaling, and there is the need to improve power usage effectiveness. Power usage effectiveness is the total power that you use, the IT power plus all the other power, for cooling and so on, divided by the IT power; you really want it to be one, because the only power you want to supply is to the IT, but that's not really the case. So here are the computational demands, GPU versus CPU, and the thermal design power trends. I have something like 400-some watts there; I heard Ravi's talk recently at ITherm with numbers around 400, but these numbers are all very difficult to get out of industry, so let's just say "I heard," and that's good enough. And then we have high-power applications and typical ambient applications. In terms of cooling efficiency, cooling becomes a very significant percentage of the overall energy, when you really just want to spend it on the servers. The average PUE is pretty far up there; this is the worst case, and the average is something like 1.8 or 1.9. Again, I define PUE as the total power, the IT power plus the cooling and everything else, divided by the IT power. Then there are the heat-transfer limitations of different cooling technologies: you have air cooling with forced convection, then dielectric liquids. I'm not sure if Professor Mudawar is in the audience; he will probably challenge me, because he's got some crazy numbers all the time and continues to produce them. Then water forced convection and water boiling, spray cooling, water spray cooling, and these are some of the heat-transfer limitations. And these are some of the major players using immersion cooling. In immersion cooling you have the IT equipment, you have an inert dielectric liquid, and you have a heat exchanger, a CDU, and a cooling tower to take the heat load out. There is also two-phase immersion cooling; for the high-power CPUs, obviously, you want to use two-phase if you can. Some say it may be a little more complicated, but there's a lot of work in that area as well. So, again, the GPU versus CPU trend.
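For reference, here are the two figures of merit used repeatedly above, PUE and pumping power, written out; the example numbers are only to show the arithmetic, not measurements from any particular facility.

```latex
% Power usage effectiveness (ideal value = 1):
\[
  \mathrm{PUE} \;=\;
  \frac{P_{\mathrm{IT}} + P_{\mathrm{cooling}} + P_{\mathrm{other}}}{P_{\mathrm{IT}}}
  \qquad\text{e.g. } \frac{1.0\,\mathrm{MW} + 0.8\,\mathrm{MW}}{1.0\,\mathrm{MW}} = 1.8 .
\]
% Hydraulic pumping power: pressure drop times volumetric flow rate (over the
% pump efficiency). This is why throttling the flow to lightly loaded servers
% saves power even when the loop's thermal resistance is unchanged:
\[
  P_{\mathrm{pump}} \;=\; \frac{\Delta p\,\dot{V}}{\eta_{\mathrm{pump}}} .
\]
```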
So, these are some of the advantages of immersion cooling. You can see that with air cooling, in the case of a loss of cooling, you only have a little time before you start burning chips; you start having serious problems when all the fans stop. With immersion you have significantly more time, about 30 minutes. The noise level is also significantly reduced, and we're talking about very low flow rates, a few LPM, whereas with air cooling the flow can be significant. And reliability: the liquid protects the IT devices from the environment; we talked earlier about contamination, and with immersion we don't have to be concerned about that, even though there are some other reliability issues. So, challenges and opportunities: fluid selection is important, and coolant-material compatibility is very, very important. In fact, you can get excellent heat-transfer performance, but the reliability is certainly a concern; and then there are the current and potential markets. Some of our research: this is work done by one of my former students, Rich Eiland, and his colleague John Fernandes and others; Rich now has his PhD and is at Dell. This is immersion cooling using oil. We were able to go all the way to almost 50 degrees C, and keep in mind one of the advantages: as you raise the temperature, the properties actually get better, because the dynamic viscosity starts decreasing, so the pumping power actually decreases as you increase the temperature; there are real advantages to going to high temperature. We did some of this work for an Open Compute server, the Winterfell server from Facebook. We also tested a rugged server for a company called LCS. It's a lot of reliability work, several hours in environmental chambers, to see what happens if we start raising the inlet temperature; obviously we can control that using an environmental chamber. Material compatibility: we do a lot of testing. This is not a lot of fun, it's a lot of hard work, but it is really the bottleneck right now. If you're interested in doing immersion cooling and you ask a company why they are not adopting it, they're going to say reliability, and usually it's material compatibility. Interestingly, that was also the issue way back at IBM: when IBM came up with the liquid-encapsulated module with FC-72, there were issues related to materials, and that is really why they migrated to liquid cooling with a cold plate. So this is a lot of the work we've done in reliability testing: dust and particulates, the solder balls; we had to cross-section to see what happens to the interconnects. And we have done a lot of experimental and CFD analysis of 1U servers. One of the advantages, because of the fin efficiencies you get with liquid, is that you don't really need those tall heat sinks, so you can go from a 2U server to a 1U server, which means you can almost double, or at least increase by one and a half times, the number of servers you have for the same square footage. So immersion cooling has a lot of advantages; plus, you don't need to worry about a lot of space between the racks, because you don't need perforated tiles and so on. And this is the form factor study we did.
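On the fin-efficiency point above (why immersion lets you drop from 2U to 1U heat sinks), the standard single-fin result is enough to see it; the symbols below are the textbook ones, not values from the servers that were tested.

```latex
% Efficiency of a straight fin of length L, perimeter P, cross-section A_c,
% conductivity k, in a coolant with heat-transfer coefficient h:
\[
  \eta_{\mathrm{fin}} = \frac{\tanh(mL)}{mL},
  \qquad m = \sqrt{\frac{hP}{kA_c}} .
\]
% A dielectric liquid raises h by an order of magnitude or more over air, so
% mL for a tall air-cooled fin becomes large and its efficiency collapses;
% short, dense fins recover the surface area, which is why 1U-height sinks
% become sufficient once the server is immersed.
```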
Then there's also, again, minimum extinguishing concentration, fire hazards; I'll just pass by that. And there's future work there too. Now let me take maybe two minutes on the heterogeneously integrated circuit thermal challenges and reliability. On performance, this is actually from John Hennessy, who used to be the president of Stanford University: look at what's happening to performance, it's starting to flatten out. So how do you gain performance? And with Moore's law, you can see that the time between technology nodes is getting longer. This is actually from Dr. Lisa Su, the CEO of AMD; she presented it a couple of years ago at the ERI summit in Detroit. Cost is also an issue. This is heterogeneous integration. Today you have an SoC-type device where everything is together, but now you say, look, I'm going to do heterogeneous integration where I have one particular die at, say, 10 nanometers and another at a different node; with a monolithic SoC you can't do that, all the silicon has to be in the same technology. With heterogeneous integration you can optimize each piece for what you want to do. You can do this on a silicon substrate, or you can use EMIB, Intel's equivalent of that, in 2.5D, and you can also go 3D; but as you go to 3D it becomes very challenging, especially in terms of cooling, because the heat from some of these devices, depending on where the ultimate heat sink is, has trouble getting out. In fact, we have a patent with one of my students on using thermoelectrics for cooling 3D packaging. These next two slides I got from Ravi Mahajan; I called him to make sure I could use them, and he actually sent me his recent award presentation. This is what we call heterogeneous integration: you take the various optimized IP pieces and then stitch them together using different technologies, in this case Intel's EMIB technology. And here is the message he left us, only two or three weeks ago when he gave that keynote, for some of you young people: we need better TIM (thermal interface) materials, a 10x reduction over the next decade, extremely important; I'm sure Carol is paying attention to that. Also dual-purpose TIMs as warpage-control solutions, so it's not just the TIM material; with the TIM we also have to deal with warpage. I think that big SoC device and the warpage associated with it also came up earlier, so by definition it's not just thermal, it's thermo-mechanical and materials. And research in liquid cooling, including immersion; I talked about that, and I can tell you that with immersion it's not just heat transfer, it's reliability. New materials and cooling technologies to improve heat conduction, improved methodologies including an increased focus on transient response, and then co-design. I know it's about the end of the lecture, maybe a minute or so left, but I want to give credit and remind you of Mike Ellsworth, who recently passed away. He was a young guy I used to mentor, but I can tell you he had over 200 patents; he was a liquid-cooling guru. And then someone that everyone knows, Avi Bar-Cohen, also recently passed away. He was really big on co-design; he was with ARPA and DARPA and so on.
At DARPA he pushed co-design a lot. Co-design: it used to be that thermal people would worry, hey, you're pushing us downstream; you want to be in the upstream phase of design, so that you can have the materials people, the architecture people, and everyone looking together at the design up front. That's what we call co-design. In fact, a lot of codes nowadays allow for co-design, programs like ANSYS and so on. And then, I am forming a center called RAMPS; I'll only say that it's for thermal and reliability issues related to heterogeneously integrated systems. And these are a few recent students, from the last couple of years: one got his PhD and is now at 3M, another has his PhD and is now at Facebook, one just finished and is at Tesla, another got his PhD and is now a postdoc with me, and so on; that's where people go. And this is an older picture from when we went to Silicon Valley; for some reason I'm biased toward Ethiopian restaurants, so this is an Ethiopian restaurant. One of them is John Fernandes, who is at Facebook with his PhD. And one of my great current students, who will be finishing up soon, Ashwin Siddharth, is here. Okay, I'll stop here. Sorry.

Thank you so much, Dereje, for a very, very exciting talk. We do have a few questions; I'm going to start down the line here, and the rest of you on the call, please feel free to add questions at this point. The first question is regarding the potential and opportunities for immersion cooling technology; that question was actually posted just prior to your slides on immersion cooling, so you may have addressed some of it, but the question is a broad-based SWOT: if you were to do a SWOT today of the strengths, weaknesses, opportunities, and threats of immersion cooling technologies, what would your words of advice be?

Reliability, and I can tell you it would have been good to ask the other panelists as well. But there are a lot of opportunities, especially when you start looking at future packaging, because packaging is going to dictate cooling technology, so immersion cooling has lots of opportunities; reliability is the limiting factor. I mean, we've done the cooling part, and we're spending a lot of time looking at reliability of both the active and passive devices and so on. So, yes, reliability.

Great, thank you; that question was from WD Energy. A second question, from a good friend; let me see. The question is: as demand for data centers is anticipated to grow rather fast in the coming years, is there any consideration for waste-heat recovery from a liquid medium, by way of expansion, to generate power to feed back to data centers or the grid?

I'll quickly answer that: there's a lot of work on that, especially in Europe, which is conducive to it. We talked about it a little during the panel discussion, but there's a lot of work, including in my group; I'd ask that individual to send me an email, because if I get started on that here, it could be a long discussion.

I do want to give those you mentioned in your talk who are attending today a chance to ask questions, since you mentioned their names. Let me see who is still on the call, if you want to ask a question.
You spoke about some of Professor Mudawar's work, so if he's still on the call, please feel free to jump in.

Yeah, I would love to hear his comments. I want to know what kind of heat fluxes you're getting; they tell me you've started migrating more into aerospace applications, but I would like to hear what he says. And Ashish is actually one of the other panelists; Ashish or Carol, if you'd like to ask me questions, feel free to jump in.

Yeah, I think one question on immersion is: what would you like industry to do? What kind of relationship between industry and academia should exist in the coming years related to immersion?

I think I tried to make the case that immersion cooling is really important; I don't need to lecture you on device trends. I think migrating to immersion is probably easier, maybe, than some of the other engineering technologies, but what we need to do is really put students to work in the labs looking at reliability issues, so that this prejudice does not persist. There's no question about it; I mentioned that we work with LCS and with other companies as well, currently, looking at reliability. So: reliability issues. And I also want to mention, Ashish, that a couple of decades ago you had the IBM research labs and the other corporate labs and so on; those are now changing and becoming cost centers, so academia is now expected to go up to TRL 2-3 and beyond. Funding should not be for two to three years but a little bit more; a longer-term, four-year plan or so would be really great. I know there's this push for three-year funding because your PhD students finish in three years, but you really need a longer term so that people can build capability; space is at a premium. Right now, as Ashish knows, we're working on liquid cooling with a significant amount of money just to look at that, but the space is limited, and we want to be able to leverage it. So I think it would be good for industry to come in and say, look, here are some of our challenges for the future. I always tell industry people that we'll give you three times as much as you give us; if not, don't come to us. So fund us.

That's great. Thanks, Ashish. I don't know if Carol wanted to add anything.

Yes, I do have a question, thank you very much. So, Dereje, in the earlier discussion and in your presentation you talked about the vast amount of collaboration that has to happen: undergraduates, graduate students, working with professionals. How would you like to see the community of universities start collaborating better and more effectively together, so that, I don't know, it will be more fun, but I think we'll also do a better job?

Yeah, your point is well taken. I think we should all check our egos at the door. For example, just listening to you on this panel, which was so great, all the things you brought up about materials issues: I would have to be born again to pick up some of the knowledge base that you have. But we can get together quickly; we could get people like Madhu and Ashish and so on to have a panel discussion about how we can actually do this. NSF does IUCRCs and ERCs and other things, but in addition to that I think industry should be involved in saying: let us set a direction.
We want packaging, we want materials, all of this involved, and we would like all the universities to work together. I think it's a great idea; one school just cannot do it by itself, and a lot of times we're forced to do it by ourselves, sometimes because of the ego issue. So I certainly would love to work with you.

Thanks, Carol. We have one more question, from Maria Bay: from your experience, which commercial software is easy to use and recommended for CFD data center cooling analysis?

Are you kidding me, you think I'll answer something like that? Well, here's the answer: I like them all. We use ANSYS Icepak, you know, Fluent; we use Future Facilities 6Sigma; we use FloTHERM. And if you ask me to rank them, I rank them all A-plus. Let me tell you something: otherwise, as soon as I get off this call... I happen to be a preferred kind of customer, so they do a lot of things for us, and the last thing I want is for my students to call me and say, you know, after that panel, your license was suspended. So I will never answer that question. Sorry.

While we wait, we have time for maybe a couple more questions, but in the meantime I had a question that teases off a little on the computational side. With the increase in logic cores and transistors that you showed, the challenge you brought up is really this increase in energy density and heat flux: cooling capacity has to keep up with all those increases, and PUE has to keep coming down. So, broadly speaking, as you see this gap between the increasing need for cooling capacity and the ability of cooling systems to catch up, how do you close it? I learned from your talk that there are two big places where the gap can be closed. One is your own modus operandi, if you will, which seems to be driven by computational predictive models as the pathway forward, the enabler: data being used to improve the cooling of data centers, closing the loop in that sense. You also brought up the issue of reliability over and over again, so there's that last mile: great ideas are there, but they haven't made it, really, because of reliability issues. So besides computational predictive models and reliability, which could help address the gap, what else is there? Where can we squeeze more of this gap away and approach what's needed in the coming years?

Yeah, good point. Interestingly, I had a meeting with your acting dean, Mark Lundstrom, and he said, why are we talking about just efficiency? We need to talk about exergy-based efficiencies, or equivalently entropy generation. That's one thing. But I also think we need to know where the technology is going. You did see the time between technology nodes increasing; in the meantime, you've got to get performance. What's going to dictate that is packaging, and therefore it is people like Ashish's group, Madhu Iyengar, and so on who can tell us: when are you going to start migrating to these things? Because you don't want to get caught by surprise; this is going to happen.
For example, we've been talking about liquid cooling for a long, long time, but liquid cooling is being implemented only very narrowly. So I think at the academic level we need to anticipate the technology direction. If the next node is going to take, say, five years, well, Intel is now dealing with 10 nanometers and will live there for a little while, but they are also looking at very exciting technology like Lakefield, with Foveros as the packaging. Are we, are you guys, prepared to look at those kinds of trends? That's really where academia can come in; I don't think academia is looking that far out from that point of view, not every group anyway.

Very good. So we should call Ashish a little more frequently. Yeah, that's great. Dereje, I think we're at time here; we've actually gone 10 minutes over. On behalf of the School of Mechanical Engineering, which is sponsoring the distinguished lecture today, and the College of Engineering, we'd like to really thank you for spending time with us in such difficult and abnormal circumstances, and for enlightening us with your wisdom and describing the great research opportunities and grand challenges in this space. So thank you, and thank you everyone for joining. The recording of this talk will be available on the distinguished lecture website, as will the panel. Thank you, everyone. Have a great day. See you.