It's my pleasure to introduce the Purdue Engineering Distinguished Lecture Series, where we invite world-renowned faculty and professionals from around the world to share their experience and their thoughts about the grand challenges and opportunities in their fields. The series started in 2018 under our Dean of Engineering, Dr. Mung Chiang. And to introduce our speaker, I will invite Dr. Mung Chiang to the stage. Mung Chiang is the Executive Vice President of Purdue University for Strategic Initiatives. He is the John A. Edwardson Dean of the College of Engineering. And he's also the Roscoe H. George Distinguished Professor of Electrical and Computer Engineering. So please, let us give a warm welcome to Mung.

Thank you so much, Dimitri. Good afternoon, everyone. Thank you for joining us at the Elmore Family School of Electrical and Computer Engineering, the host of the first in-person Purdue Engineering Distinguished Lecture. We have an outstanding speaker today, but first I want to thank all of you for joining us, including those who are lining up for Starbucks. One great use of the caffeine you're about to consume is to listen to the latest in semiconductors. As we all know, semiconductor chips, these microelectronic chips: maybe a year ago, if you told people, "I cannot get chips," they would say, well, go to Walmart, go to Target, go somewhere, and you can find all kinds of varieties and flavors of chips. And now, of course, we all know what it means that people cannot get chips. It is damaging our national security, our job security, our economic security. And people are wondering: how can we chart the next decades of Moore's Law, of access to chips, and of advancement in how we design, manufacture, and package chips?

So it's great to see this as we relaunch the in-person version of the Purdue Engineering Distinguished Lectures, and as we continue the journey toward, and in sustaining, the pinnacle of excellence at scale. I have to brag about this: we are the largest top-10 undergraduate engineering program in the United States, now top five, the largest program ever to be ranked top five in any discipline at any time in history. So we're proud of the faculty, students, and staff here. As we do that, we think about semiconductors, microelectronics, as a key pillar in many schools in the College of Engineering. And I'll just brag about two things and then introduce today's distinguished lecture. First is the fact that we have 30-plus outstanding faculty in, yes, the Elmore Family School of ECE, but also in mechanical engineering, chemical engineering, materials science engineering, industrial engineering, and more. These 30-plus strong faculty and their graduate students have received no fewer than nine nationally competitive research centers in semiconductors in just the past four years alone, including three out of six of the SRC-sponsored research centers in the last round of competition, three out of six led by Purdue faculty members. And furthermore, on the workforce and education front, Purdue is the lead institution for the DoD SCALE workforce development program for the country. We are also in the process of rolling out, I think, the first set of degrees and certificates, from undergraduate to master's to certificate, online and residential, across design, manufacturing, and packaging. I think we are the first in the country to do just that.
And as companies, and to some degree governments, put in hundreds of billions of dollars, some estimate $1 trillion in the coming decade around the world, hundreds of billions will be here in the United States. Where do we find the 80,000 to 100,000 new engineers, all kinds of engineers, for the microelectronics industry? Well, today we're going to look at one particularly exciting dimension: test, assembly, and packaging, and advanced packaging, which is technologically changing the equation. This is one of the unique strengths here at Purdue, and it is how that work can be onshored, reshored, and near-shored back to the United States, much more so than in previous years and decades.

It is therefore a particularly exciting honor to introduce today's distinguished lecturer, Ravi Mahajan. Ravi is an Intel Fellow and the leader of Intel's effort in advanced packaging and assembly, the pathfinding program in that important arena. And under Intel's new CEO Pat Gelsinger's leadership, there has been a resurgence of Intel's, and the whole industry's, interest in advanced packaging here in the United States. We are delighted to have the partnership with many parts of Intel and hope today's visit by Ravi will further strengthen that. Ravi received his PhD from Lehigh in 1990 and has been with Intel for 30 years, more or less, now. Over these three decades Ravi has made tremendous impact and contributions, including the silicon bridge, among much other intellectual property and many contributions to R&D. Ravi has also been a strong leader in working with academia, between Intel and universities such as Purdue, creating new programs and new partnership opportunities, and that is more important than ever before in the coming months as the CHIPS Act and other related federal resources are invested in these areas. So Ravi's subject today will be the not-so-quiet revolution: how advanced packaging, in addition to the design and manufacturing of chips, will shape and reshape the semiconductor industry. So with great delight, a warm welcome to Ravi.

Thank you. Thank you. Can you all hear me okay? I always say this when I begin my talk, actually not always, sometimes. From the place at which I started my career at Intel, I have moved about 500 feet, max, except when they decided to renovate our offices, and then they moved me two floors up but brought me back within that 500-foot margin. So to those of you who think about secure careers and long careers, I would highly encourage taking up packaging. The only time you have to move more than 500 feet is to attend to a squealing baby or to travel on vacation. With that incentive, let me start my talk.

So I'm pretty sure there are going to be questions about what this phrase, quiet and not so quiet, means, and I will come to that. But before that, let me flash one paper that was written by John Shalf recently, in which he made two fairly interesting points. The points he made were that as we move forward, computing will be dominated by the ability to interconnect different compute and communication elements, and that interconnects will play a greater and greater role. This in no way diminishes the value of homogeneous integration or Moore's Law, but it makes packaging an extremely good complement to the way we believe the future is going to evolve. Because I work for Intel, I have to flash a legal notice. Those of you who are speed readers, assume that you have read this fast, and I will move on.
If you take nothing else away from this talk, I would ask you to take a few key messages. One is that advances in packaging technology are undeniably a vehicle that will drive compute and communications forward, and they will truly drive it in ways that we have not thought about. I'll show you a few examples of things we have built today that we could not have imagined five years ago. And if there is recognition that HI, heterogeneous integration, is the vehicle to drive advances, the changes we make will no longer be quiet.

Now let me quickly explain what this word quiet means, so I get it out of the way. Packaging has evolved, and there have been tremendous changes, and I have a very simple slide to explain what these changes are. These are groundbreaking, economy-changing changes, but they have always, always flowed under the radar of Moore's Law scaling. That doesn't mean that this field has not evolved; some tremendous technological advances have been made in this field. But we have always done it under the radar. Moving forward, given the level of importance and the level of interest that you see in this field, this is going to come above the surface in a number of ways, and it has already done so. Today we have packaging architectures that provide unprecedented levels of performance in all kinds of architectures: in the laptops you carry, the servers that drive them, the high-performance compute that happens. And to move this forward, there are a number of technical challenges that we all have to look at carefully and master ways to solve. I am firmly of the opinion that industry-academia partnerships are the partnerships that will drive this forward. It will no longer be the case that an industry-academia relationship is just a matter of developing a small building block, a small tool, or a metrology that is brought in; a lot of technological innovation, changes in materials, structures, and performance prediction, will have to be driven through deeply collaborative partnerships.

I know some of you may not have context, so let me flash a very simple picture of what I am talking about when I say packaging. This is what I mean by packaging: a package, on the left, is a 100-millimeter-by-100-millimeter entity on which you have multiple components placed, each individually tested, down to those tiny chips that today are causing so much grief because we do not have many of them available. The scope of the field is tremendous, so let me see if I can explain it in a very simple fly-by sort of example. A wafer arrives at the shores of a packaging factory. That wafer is sorted, which means probes come and touch it on the top and figure out whether something works or doesn't; you X out the parts that don't work. We thin it. Our devices start at 720 microns and can be thinned down to as low as 50 microns when they are utilized. The thinning process itself is something that has to be done flawlessly, and I really mean that: to those of you who study fracture mechanics, there have to be no flaws created by this, so it has to be flawless. After that the die are prepared, or prepped, and cut into pieces. Intel brings something very unique to this party that nobody else has: we are able to test each of these die to create what is known as known good die. As in, before we stick a die on a package, we have to know that it is good. So known good die is something we bring, and it's a unique competency that reduces the overall cost.
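To make the known-good-die economics concrete, here is a minimal sketch of how yield compounds through an assembly flow like the one just described. All the per-step yield numbers are illustrative assumptions, not Intel data; the point is only that overall yield is the product of every step's yield, so each step must be engineered to near-perfection.

```python
# Cumulative yield through a multi-step assembly flow.
# Per-step yields below are hypothetical, for illustration only.
steps = {
    "sort": 0.999,        # wafer probe
    "thin": 0.999,        # thinning from ~720 um toward ~50 um
    "dice": 0.999,        # die prep and singulation
    "die_test": 0.998,    # known-good-die screening
    "attach": 0.998,      # die attach to substrate
    "assemble": 0.999,    # underfill, protection, thermal elements
    "final_test": 0.998,  # functional test of the finished part
}

cumulative = 1.0
for name, y in steps.items():
    cumulative *= y
    print(f"{name:10s} step yield {y:.3f} -> cumulative {cumulative:.4f}")
```

Even with every step near 99.9 percent, seven steps already consume about 1 percent of the incoming die, which is roughly the loss budget behind a 99-plus percent end-to-end target; a single 99 percent step would dominate the losses.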
And if you look at the economics of this whole equation, that is a very, very important aspect. After that comes the main operation, in which a package is assembled on a packaging substrate using all kinds of materials. We put in different protection elements, thermal conduction elements; all kinds of elements are put together and a part is created. This part is then tested for functionality, and when it passes, that goes out as a guarantee that it is going to work. After that it is finished, as in marked for traceability, and we design boards to ensure that the parts we develop are compatible with the boards on which they go. The point of this is that the scope is pretty large. There are all kinds of points of failure here, and the steps have to be very intelligently developed. One of the things I pride myself on, because I work at Intel, is that we produce processes which ensure that 99-plus percent of the die that come into our process go out good. As in, our processes are designed to deliver this level of performance before they go into a high-volume factory. To do this you need a great deal of predictability and understanding of materials and structures.

I said we will not be so quiet, so let me describe what was quiet; for those of you who are curious, that picture actually is titled Quiet Revolutions, and you can Google it if you wish. In the last 30 years that I have been in this industry there have been significant changes. In the 1990s, for instance, for the very first time, organic flip-chip packaging came into being. What happened because of this is that we were able to take a low-expansion-coefficient piece of silicon, stick it on a high-expansion-coefficient organic substrate, and still produce reliable parts. An entire new industry was born. If you look at the way the money and the importance shifted in that time frame, it moved completely from one class of industry to another. And by doing this we offered a lot of advantages: instead of the traditional wire-bond peripheral arrays, we were able to do area arrays. In the early 2000s, people figured that since the interconnect on the back end of silicon was made of copper, why not make bumps out of copper, and slowly but surely we took all the lead out of it. The semiconductor industry can proudly claim to be the first industry that became environmentally friendly by eliminating first lead and then halogen from its components. All kinds of sockets became mainstream, and as the density of the interconnect started going up, we introduced new methods. Fine-pitch flip chip became real, and then technologies that were introduced in the earlier part of the past decade are today in high-volume manufacturing. I'll describe them briefly. Technologies like the silicon interposer and EMIB became mainstream, and a new horizon opened up for product performance. Almost everything that was done here was very well known to the packaging engineers; the intensity of technical thought and work is known to most people in the field, but it was not visible outside. Moving forward, my belief is this will change.

So, before I describe packaging for heterogeneous integration, let me do two things. First, let me emphasize that this is not a new idea. What exactly does heterogeneous integration mean? In a very, very simplistic, high-level sense, heterogeneous integration is when you take different elements, silicon or prepackaged components, have them all tested, and then stick them on a substrate.
It could be a motherboard or a package, and you get higher functionality: the sum is much better than the parts that constitute it. In 2005 I helped make the slides for Dr. Bill Holt, who ran Intel's technology and manufacturing group. Bill Holt was a pretty far thinker; he was pretty strategic in his thought process. So we synthesized the few things he was going to say in his talk at InterPACK into this slide. Bill pointed out that if something can be developed on the same silicon process node, you cannot beat Moore's Law. In terms of its economy, in terms of performance, you just can't beat Moore's Law. There is no other industry where you can double the size in a fixed period of time. The period of time has changed, but the fact that you double the number of transistors has not changed. If, however, you cannot integrate something on the die, then the next best platform is the package. So whenever it is heterogeneous, the package is the best of all of them.

We've used this idea many, many times. We've used the idea of heterogeneous integration, or HI, mainly for time to market. When you cannot design a complex chip, you design its constituent parts and stitch them on a package. They don't perform as well, but you get your purpose done and you get it out very quickly. This is the first example of integration: we integrated SRAMs. Now, most of you who work in this field know that SRAMs can be very easily integrated into a silicon process; you just need a little bit of time. So this idea never made it past the first generation. The first generation it made it; the next generation you got better SRAMs because of Moore's Law shrinking, you got them on the die, and the package version no longer made sense. However, other ideas like memory controllers could be integrated on package, and it made sense to integrate them and optimize them on a different process. And the one that has driven integration much, much more than before is the idea of truly heterogeneous silicon: for instance, DRAM integration on package, where DRAM can be used both for local capacity and for a local high-bandwidth interconnect between different entities on the package. And these are examples of devices that are and have been in the market for a while.

So in time, the value of heterogeneous integration has become more and more clear. I'll just point to a single paragraph taken out of a report that was sent to the White House. This report said that not only semiconductors but the ability to package them is going to dominate how we move forward or progress in the future. And as Dr. Chiang said, the ability of the United States, or of any high-performing government or country, to have these technologies in house and to be able to develop and deploy them is going to be a differentiator, and this will be a key value proposition moving forward.

So why the interest? I already made a case for the interest; let me go into it in a little more detail. For instance, take the example of a very well-known application for heterogeneous integration, a 10-nanometer FPGA die. For that die to perform very well, you need high-capacity memory next to it, and you need to connect it with a link that is able to transmit a certain amount of bandwidth, typically high, at low power. When technologies that deliver that interface started to come to the fore is when these kinds of ideas started becoming mainstream. Today, high-bandwidth memory is available in volume.
It can be interconnected with a very high-bandwidth, low-power interface, and it will continue to evolve as time goes by. The other argument that I made was about optimization. There are some technologies that can be intrinsically built into the same silicon process node; there are some that make a lot of sense to optimize in a different process node. And today the sources of IP are all over the world. Whenever you can optimize IP on a process and integrate it on package is when you can get better functionality. One of the arguments for silicon and the value of Moore's Law was that because you shrunk, you could stay within reticle limits. A typical reticle is 33 millimeters by 26 millimeters. But suppose you want way more silicon than you can capture in a reticle. You could argue, I'll do two reticles, three reticles, four reticles. But how do you guarantee that the adjacent die are all going to work? Much better for you to integrate on package and guarantee yourself that your part works once you have integrated it. And then there are the two arguments for why heterogeneous integration was always popular. One is yield resiliency. When you begin a new process, the yield of that process is low, and as you start building more and learning more about the process, the yield goes up. If you want to guard yourself against failures and improve cost, in the beginning you introduce something on a package and only later integrate it on a die. The other is time to market: the design complexity of designing SoCs is much higher than that of integrating and designing on a package. All these arguments say that heterogeneous integration is here to stay as long as it is truly heterogeneous.

And this is the case: the package is a really, really good compact platform for heterogeneous integration. Why do I say this? I'll show you an example on the left. This was done as part of a DARPA program called CHIPS. In this program, we took an FPGA in the center and we connected it to a high-bandwidth DAC from a separate company, optimized on a different process. The other curious thing you will see here is at the bottom left: there is an optical-electrical converter chip connected to the FPGA using a standardized link called AIB. There are two or three things that are pretty curious about the whole thing. One is that I take a 55-millimeter-by-55-millimeter package and I stick on it a bunch of different die optimized on different processes. I don't even particularly care how they are built. The only thing I care about is that where they attach to the die there is a common interface, and that common interface is publicly available; Intel has made AIB publicly available. Different people can optimize different elements, mix and match, and bring it all together. So there is a lot of value in being able to do this sort of integration on package. This idea, which we first developed working with DARPA, was the genesis of Intel being funded to start a digital factory that does heterogeneous integration, called the SHIP factory: the state-of-the-art heterogeneously integrated packaging factory. It's very simple. Conceptually, you have a package, you have common interfaces, you develop die, different people come in, you make sure they are all secure, you stitch them together, and you have a part. And you can optimize each of them separately. This was the basis for the US government funding Intel to do stuff like this. It addresses the security question I talked about, it addresses the value proposition question, and it allows the opportunity to do plug and play.
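The yield-resiliency and reticle arguments above can be made quantitative with the classic Poisson defect-density model, in which die yield falls off as exp(-D0 * A). The defect density and die areas below are illustrative assumptions, not Intel figures; the sketch shows why splitting near-reticle silicon into tested chiplets is so attractive early in a process's life.

```python
import math

D0 = 0.002      # assumed defect density, defects per mm^2 (early process)
A_mono = 800.0  # near-reticle monolithic die area in mm^2 (illustrative)
n = 4           # the same silicon split into four chiplets
A_chiplet = A_mono / n

y_mono = math.exp(-D0 * A_mono)        # one big die must be defect-free
y_chiplet = math.exp(-D0 * A_chiplet)  # each small die yields far better
y_untested = y_chiplet ** n            # n *untested* die all good: no gain

print(f"monolithic die yield:         {y_mono:.1%}")      # ~20%
print(f"single chiplet yield:         {y_chiplet:.1%}")   # ~67%
print(f"4 untested chiplets all good: {y_untested:.1%}")  # ~20% again
```

Note the last line: assembling untested chiplets recovers nothing, which is exactly why the known-good-die testing described earlier is the competency that makes package-level integration pay off.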
And it's not just Intel. I mean, Intel focuses on CPUs, GPUs, memory controllers, FPGAs, typical high-end, high-performing parts. But in the bigger picture, there is a realization across the industry that heterogeneously integrated components on package will carry a lot of value. There is even an industry roadmap, which is publicly available, where we have started to scope what it takes to describe a path forward for a community consisting of academics, industrial researchers, students, equipment manufacturers, materials developers, the entire supply chain. If you look at the scope of that roadmap, to me it is sometimes remarkable that we managed to get so many people together and write something. It is a beginning, in no way an end, but it describes what we believe is how you can get the entire community to march in the same direction.

So, I talked about the value of the package as a compact platform. If we want to move this technology forward, there are a few things that we will have to work on, and this goes to the root of my talk. We have to make sure that we continue to evolve high-bandwidth, power-efficient links on die and on package, enabled by IO on die. While we do this, we have to make sure that the data that leaves the package is also supported, and there is a diversity of protocols: there is PCIe, there is DDR, there is the CXL spec, and all of these have to be supported. I will show you a couple more that are quite interesting. We have to make sure that while we deliver this bandwidth and change the speeds, we keep data integrity guaranteed, which means you have to be able to deliver noise isolation for both single-ended and differential solutions. Bringing everything together, as almost everybody knows, is going to make things run hot, so you have to figure out how to cool it. And bringing everything together while delivering power to each element individually, so you can control it and not spend a whole bunch of power, is just as important. So the ability to support a bunch of power delivery networks simultaneously and control each of them is going to dictate how successful we are. There is other stuff here that I have not focused on, because I work primarily in the high-performance industry. But the last and by far the most important point in all of this: our ability to develop manufacturing processes, materials, and equipment that are able to turn things around fast, at high yield and high efficiency, will be the differentiator. Make no mistake, as everybody who works in these fields knows, this is a highly, highly, highly competitive world. And in competitive worlds of this nature, our ability to understand each of the constituent elements and to put them together so that we guarantee the performance of the end element will be the differentiator between success and failure. If you look at the top people working in this field, each and every one of these companies fully understands what it takes to deliver stuff like this. And the amount of debate that goes on in this field, both on performance and on economics, is truly astonishing. To those of you who think packaging is just cardboard boxes, I would suggest these are highly intelligent cardboard boxes. The amount of engineering science that goes on in this field is tremendous. I'll focus on only one main aspect, but I'll do a fly-through of a few other aspects.
I'll talk primarily about interconnects, and for the purposes of this talk, let me define what the word interconnect means. If I connect in a plane, I call it 2D; if I connect in the Z direction, I call it 3D. You have two choices: you can send a bunch of data fast through a few interconnects, or you can send a bunch of data slowly through many interconnects. The first I'll call fast and narrow, the second wide and slow. The wide and slow is called parallel; the fast and narrow is called serial. There is sufficient understanding in this field that says if I really do the wide and slow, which means I build complex packages, I'm going to end up getting much better latency, higher bandwidth, and lower power. And that is the basis for why we do advanced packaging.

Now, what does advanced packaging mean? You take a piece of silicon; it has bumps; it sits on a package and connects to pads on the package; then wires fan out, layer by layer, right? So if I say I need to increase the density, I have to reduce the width of the wires, I have to reduce the space between the wires, and in an ideal world I have to cut down the size of the pad. This is a linear metric for escape. The other is an areal metric, the number of bumps per unit area. To increase the number of bumps per unit area, you reduce the bump pitch.

So this is the Intel-centric view of what packaging looks like. In the XY plane, we have many technologies, but the one technology we focus on at the high end is called EMIB. With EMIB, you produce localized interconnects between the die, just where you need them. The second is the Foveros technology, where you stack die and go in the Z plane. And since you can move in X and you can move in Z, why not move in both? That's the third combination: combine EMIB and Foveros and you get an interconnect which is pretty impressive. For time, I will just highlight that even if you were to go out and buy packages today, you can get a really, really wide range of interconnect densities: as low as 30, all the way up to 500, maybe even a thousand, and there are cases where you can go past a thousand. This is available today. There are IO circuits that connect these at high density and deliver high-bandwidth, highly power-efficient performance. And if I look at the vertical interconnect, the interconnect of choice today uses solder. We know that when solder gets to somewhere in the 20-to-30-ish micron range, it is going to slowly but surely run out of steam. But hybrid bonded interconnects, where copper-to-copper joints are made, also exist and can deliver very high IO density. And as I said, if you combine the 2D and the 3D, you get to a configuration which gives you tremendous flexibility in what you can bring together. You take a wafer. This wafer has through-silicon vias at the bottom, not exposed yet. You attach the top die, fill the gaps between the bumps to protect the joints, mold it to build structural rigidity, grind it to expose the backs of the die so that you have thermal access for cooling these parts. Flip it over, expose the through-silicon vias, create pads, plate solder, reflow the solder to make bumps: magic. Different kinds of die integrated in stacks, multiple stacks connected to companion die through EMIB. The logo is outdated; the jingle is not.
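The escape and bump-density metrics described above reduce to simple geometry. As a toy illustration (the dimensions are generic examples, not tied to any specific Intel technology): linear escape density is routable wires per millimeter of die edge per routing layer, and areal density is bumps per square millimeter, which grows quadratically as pitch shrinks.

```python
def escape_density(line_um: float, space_um: float) -> float:
    """Routable wires per mm of die edge, per routing layer."""
    return 1000.0 / (line_um + space_um)

def bump_density(pitch_um: float) -> float:
    """Bumps per mm^2 for a square-grid area array."""
    return (1000.0 / pitch_um) ** 2

print(escape_density(2, 2))  # 2 um line / 2 um space -> 250 wires/mm/layer
print(escape_density(1, 1))  # halve line and space    -> 500 wires/mm/layer
print(bump_density(100))     # 100 um pitch -> 100 bumps/mm^2
print(bump_density(36))      # 36 um pitch  -> ~772 bumps/mm^2
```

Halving line and space doubles escape density, while halving bump pitch quadruples areal density, which is why pitch scaling is the headline lever in both the 2D and 3D roadmaps.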
So, you know, when I presented stuff like this in 2019 to a bunch of industry analysts, everybody said you guys are good at drawing cartoons and making animations, which is probably true. However, this is a real part, out in the market today. It's called Ponte Vecchio, and it uses EMIB. Two tiles. This is a petaflop in your hand. You have compute tiles, a memory tile, an interconnect tile, a base tile, a bunch of memory, a connectivity die, a multi-tile package with EMIB: greater than 100 billion transistors, 47 active die from five different process nodes. This is built in manufacturing. The people who build it say it is very hard to build, but it is a fact that we can build it; engineers are pretty ingenious. This is what it looks like. Two tiles. The die-to-die stacking is at 36-micron pitch. The number of active die per stack is 16. The maximum active top die size is 41 square millimeters. The base is 450 square millimeters, close to a reticle. The package is less than 100 millimeters on a side, 77 by 62, with 11 EMIBs embedded in there. This, in my mind, is a demonstration of the capability of the technology, and this is today.

The promise of the future is illustrated in this slide. It shows pitch shrink on one axis and connectivity on another axis, and embedded in these are the power efficiency numbers. Today, the EMIB interconnect hits 0.5 picojoules per bit. We know what it takes to hit less than 0.1 picojoules per bit: one-fifth the energy per bit, five times better power efficiency. And the real question is: why bother, and is this good enough? So I did a very simple plot. At the top left is the rate at which off-package IO speeds have been increasing. A rough rule of thumb says they will double every 2.1 years. And I think they put in the 2.1 years so that you would be convinced they were not kidding; if they said 2, you would have been suspicious, but 2.1 makes it sound like somebody really knows what they're doing. In reality, my suspicion is it's anywhere between 2 and 3 years, maybe 4, but the point is that there is a doubling. The second is the demand for bandwidth. These are typical high-end, they call them roofline, I think, applications. The bandwidth demand is on an accelerating cadence. But the most interesting graph, in my mind, is the bottom one: bytes per flop. Compute moves faster than bandwidth, hence bytes per flop actually goes down, until you come to somewhere around the 2018-ish point where we introduced HBM on the package and this curve started to slowly taper up. The real challenge here is to get this curve back to where it started and then above that, which is why you should scale interconnects in packages.

How do you physically scale interconnects? We have EMIB in high-volume production today, capable of 55 microns or less. We have Foveros, capable of 50 to 36 microns, which can go down as far as solder will let us. We have two technologies deep into development or advanced pathfinding where we can not just stack one die on top of the other, but overhang them, and by overhanging them get better connectivity, and then transition from solder to copper. So the point is, even though we have fairly good connectivity today, we have plans for much better connectivity.
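The power-efficiency figures quoted above translate directly into link power. Here is a minimal back-of-the-envelope sketch, assuming a hypothetical 1 TB/s die-to-die link; the bandwidth is an assumed example, not a product specification, while the 0.5 and 0.1 pJ/bit values and the 2.1-year doubling are the ones from the talk.

```python
bandwidth_bits_per_s = 1e12 * 8  # 1 TB/s expressed as bits per second

for pj_per_bit in (0.5, 0.1):
    watts = bandwidth_bits_per_s * pj_per_bit * 1e-12
    print(f"{pj_per_bit} pJ/bit at 1 TB/s -> {watts:.1f} W of IO power")
# 0.5 pJ/bit -> 4.0 W; 0.1 pJ/bit -> 0.8 W, the 5x improvement cited.

years = 10
print(f"2.1-year doubling over {years} years -> {2 ** (years / 2.1):.0f}x")
# roughly 27x per decade; demand curves steeper than this must be met
# by scaling interconnect density as well, not per-lane speed alone.
```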
There are other aspects of heterogeneous integration which are just as interesting. I wish I had more time to talk about them, but let me quickly fly through. This is a table that shows how well electrical signals perform when you send them. For instance, if I send an electrical signal through copper using PCIe Gen 6, I can go about 12 inches before I need to boost it, and I have a power efficiency number of a certain value. If I transition from this to optical, I get significantly higher bandwidth density; I can send more signals per unit area or per unit length. I can send them over very long reaches, greater than 100 meters if designed correctly. Latency can be managed to be roughly equal. And if these are designed intelligently and the entire industry comes up to support this infrastructure, you can get similar power efficiency numbers. What this says is that off-package optics is slowly but surely coming to a place where it makes a lot of sense. We took the same idea and a couple of years ago announced the first co-packaged optics Ethernet switch. It is not such a complex picture: essentially, what you have in the center is a switch connected to socketable optical modules that send signals out optically through fibers. The other thing we have done is the example I showed you before, where there was a single optical chip connected to the FPGA; we have a program, funded by the federal government, to go to five of them, and that is conceptually quite possible. So, co-packaged optics. The logical thought in the industry, in my mind at least, always was that optics will replace copper. I don't believe that makes a whole lot of sense. I believe optics will complement copper, and together we will deliver higher bandwidth; I don't envision a state where we are going to replace copper wholesale. I think the initial thought processes were a little bit overhyped or overambitious, but conceptually you can do better with optics if you complement copper. You can improve off-package bandwidth. You can improve reach. You can improve power efficiency. You can do new applications like resource disaggregation, where memory and compute are separate or networking is separate. And there are ways of doing this which haven't been envisioned yet.

The question everybody asks is: so what about thermals? You're bringing all this together; what do we do with thermals? I think this is one place where we can add a lot of value together. In thermals, it's a matter of thermal co-design and coming up with new technologies to be able to cool. If we do this right, I have a feeling that we will solve the thermal problem. I don't see major showstoppers here, but a lot of good engineering will have to happen in design, materials, and technologies that cool.

And then this is another very quick fly-by. What we are doing is taking composites of different materials, put together and assembled at high yield. In doing this, all kinds of materials are introduced, and they will lead to all kinds of stresses. Being able to design this right, finding the right materials, and being able to yield them in high-volume ways will be the differentiator. So: equipment, materials, process, and design co-optimization, and there's a lot of opportunity in this. To be able to do this, we have to have materials characterization. We must have tools. We must have predictability. We must have metrologies to make this happen. And there are opportunities in each of these spaces. So for people starting careers in this, or looking at developing careers, there are many opportunities in the future.
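As one more worked footnote on the electrical-versus-optical comparison earlier in this section: using the roughly 12-inch electrical reach quoted for PCIe Gen 6 over copper, a rack-scale run needs repeated re-timing, while a correctly designed optical link spans the same distance, or 100-plus meters, without it. The 10-meter link length below is an assumed example, not a figure from the talk.

```python
import math

link_m = 10.0                # assumed rack-scale link length, meters
electrical_reach_m = 0.3048  # ~12 inches per unboosted copper segment

segments = math.ceil(link_m / electrical_reach_m)
retimers = segments - 1      # one boost between each pair of segments
print(f"{link_m} m of copper -> about {retimers} retimers/redrivers")
print("same 10 m optically  -> 0, with reach beyond 100 m")
```

Each retimer adds power, latency, and cost, which is the quantitative core of the argument that optics will complement copper off package rather than replace it wholesale.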
So let me end my talk with two comments. Andy Bryant, former Intel chairman, said that at Intel, as a company, we start with the ingredient, we start with sand, and everything else is value added by engineers; it's our people. There's a guy, some of you may know, Rajiv Mongia, who is an ex-Purdue guy, and he started out working with me. Those of you who know Rajiv know that his bit rate, his rate of speech, is faster than most data transmission. So I asked Rajiv, can you please give me something I can tell a whole host of Purdue engineers. This is what he said. He said we often celebrate that more Boilermakers have landed on the moon than graduates of any other university, which is probably true. I saw the statue of Neil Armstrong, and that's kind of tough to beat, I suspect. But, he said, even harder to beat are the Boilermakers at Intel who are helping turn sand into gold.

I don't have much time to thank you, but this is one of my two closing slides. My association with Purdue goes back more than 25 years, and I always come away from conversations with Purdue engineers and Purdue faculty enriched, because I learn a little bit more. Let me use my second-to-last slide to point out that this has been a long and fruitful association. Professor Subbarayan, who is here, is somebody I met when he was at IBM, and he was a good soccer player, by the way. I have worked with him for nearly 30 years. One of the things that I found out: I was at a review at SRC, and at the end of it they said, you know, other people may mess up, but Ganesh never gives a bad talk. I have held that in my mind, and it has always proven to be true, and until the day Ganesh proves me wrong, I'm going to continue to believe it. You have a Cooling Technologies Research Center, which Professor Weibel runs right now, that offers a lot of really good, intelligent engineering advice to a lot of people like us, who learn from it. To me, this is the beginning. I believe the packaging research collaborations with Purdue that have fueled many of the quiet revolutions are now going to fuel some of the not-so-quiet revolutions. I believe that the many, many students from Purdue who have come to our shores are people who have made a difference and will continue to make a difference. I think we have to establish the right kind of infrastructure. We have to learn to collaborate in ways that we have not before, so that we can make these associations a lot more effective. The sort of intellectual depth that exists in this university is tremendous; we just have to figure out ways to leverage it. This is my second-to-last point, but I'm pretty much saying the same thing: HI is important. We know how to do HI today. We need your help to move HI forward. There are many challenges, and I cannot imagine better people to collaborate with on this than Purdue. Thank you very much.

Thank you so much, Ravi. This is wonderful. Thank you so much, everyone, and thank you also to our online audience. We have over 175 people attending the talk online from many places, including Intel and many universities. So, time for questions for our distinguished guest. We have the mic. So, I have a question for you: how long did it take for your invention, the EMIB, to actually be commercialized? Can you give some insights into that?

Yeah, I can. So EMIB was invented, conceptualized, in 2006 or 2007. The value was not... the need wasn't there. The value was recognized, but the need wasn't there.
There were many building blocks that did not exist, but because we had a vision of an integrated component, we knew what we needed to develop. 2011 is when we took the original idea of the bridge, made it an embedded idea, and started working on it. It took us about two, two and a half years of real hard engineering work to prove that the first part could be built, so we announced it in 2014. So I would say about four years of good, solid engineering led us to this, and then a lot of process development between then and now, and a number of investments along the way to keep it going. But once you convince engineering management of the value of something, they will back you. That's a good part of working at Intel: the senior management is technically extremely savvy. They can recognize the value of an investment of this kind. So yeah, that's the broad answer. And now it's in production.

Can you give us an idea of what are the biggest metrology challenges facing heterogeneous integration?

Yeah, let me break it up first. Metrology can be broken up in a number of ways. There are metrologies that you put into your process development, in which case your challenges are getting the right level of resolution, the speed at which you get data, and then how you analyze the data. So that's one class, and I'll get to the specifics also. The second type of metrology that we worry about is characterization metrologies. We talk about material properties kind of lightly, but understanding the behavior of a material as a function of time, temperature, and humidity is a pretty complex task. Getting the right class of metrology with the right level of resolution is the second class of problems I would worry about. The third is characterizing reliability and managing to understand it. So those are the three types. Now, typically people think of metrologies in terms of: how long is something, how soon will something break, what is the smallest defect I can see? The tendency of thought in metrology often tends to be very mechanical, or visual, or a matter of data gathering. But metrology can be a lot more than that. It could be surface characterization; it could be chemical characterization. And there are quite a few areas that we have not fully understood; the typical ones we kind of understand. So broadly, in my mind, quantity of data, quality of data, and speed of resolution are the big ones. And encompassing the gamut: in an ideal world, if you built a part, imagine that you had a rider on that part through the entire process and you knew exactly what was happening to it. You would be so much better off. Today, metrologies of this nature do not exist, but there's no reason why we cannot build metrologies of this nature.

I've always been fortunate to have Ravi here when I give good talks, not when I give bad talks. And I've never heard Ravi give a bad talk. So with that, let me start. We've had MCMs for a long time, old MCM technology on glass ceramic substrates, and we've had multi-chip modules on organic substrates for a long time. What convergence of either technology or cost drove it, around say 2010 onwards, towards heterogeneous integration and tight pitches?

Three things, in my mind. One is the fact that IP diversity is much greater now. In the earlier days, we used to think mainly about memory, oftentimes SRAM, because it was high-bandwidth, accessible memory, and we were looking at controller chips, which essentially controlled the compute.
But I don't believe we ever went past a point where we thought about more than that. Today you can take a transceiver and optimize it on a process. Today you can take an optical chip and optimize it on a completely different process. The fact that you can now use a platform that has high-bandwidth, highly power-efficient, low-power connections is what has slowly but surely caused a change in thought, in addition to the fact that you want to introduce parts faster, with higher yield resiliency. So, you know, I oftentimes see this both as an acceleration of past technology trends and as an increase in diversity. That's what's happening today. And then, of course, there's the economics of the whole business. Yeah. OK.

Yeah, I want to ask a question. I saw some papers and testing on using microfluidic channels to cool down power electronics devices. Is it possible to use microfluidic channels to cool down a CPU?

Yeah. In fact, if you Google it, you will find that Dojo, the chip Tesla announced a few months ago, uses microfluidics to cool, and they're not the only one, by the way. We use microfluidics in a lot of internal applications as well. Microfluidics is both an important area and an interesting area of work, and I am pretty sure it is one of those technologies that, when its time comes, will proliferate.

Hi, Ravi. That was a great talk; thank you for your time. You talked a lot about the importance of interconnects and high-bandwidth memory and high-bandwidth interconnection. But, at least to my mind, I come from a thermal background, so electronics is not my strength: what prevents the complete integration of the components into one stack, where you don't have a bottleneck for interconnects, with a monolithic 3D technology? Why can't we completely integrate the logic and the memory into one complete volumetric component?

If you look at it strictly from the perspective of somebody who's trying to optimize: if you can integrate in a single stack, and you can test it, and it yields very well, you can't beat it, because you reduce the traversal distance of an interconnect, which makes it intrinsically superior. It's only when you find out that the economics doesn't allow you to integrate on-die that you start integrating off-die. As I said, you can't beat Moore's Law. There's nothing like it in terms of doubling entities, compute entities, switching entities, whatever. You just can't beat it. So if you can do it, you should do it. If you can't do it, then think about other ways. But the way to really think about this is to do both: figure out how to integrate on silicon, and if you absolutely cannot, find the next best ways to integrate. If that makes sense.

Thank you for the t-shirt as well, Matt. I had a question as well. I was wondering if you could give us your insight, maybe advice, for undergraduate and graduate students currently studying in the broad field of semiconductors and microelectronics: what kind of skills, experience, or focus would you recommend they pay attention to if they want to lead this next revolution in this space? Thank you.

So if you were to look at areas, there are quite a few areas which are very, very promising.
For instance, just starting out randomly: if you were to look at IO design and how to do intelligent, high-bandwidth IO design, the electrical engineers have a place to play. If you were to look at thermals, thermal management is going to be important. People who are able to understand how to design power delivery networks are also significantly important. As I said, photonics is coming into play, so people who understand photonics. Underlying methods, for instance, understanding and being able to make materials operate in integrated situations: I can point out, by the way, that materials science is one of those fields which has an opportunity to dominate, if we find the right classes of materials integrated at the right level. That should give you an idea that people who study materials science can do it. The other thing I'd point out: Professor Gorey and I had lunch today, and he was telling me one of his students worked on something completely different, IR imaging for biological applications. Today he is one of Intel's most successful thermal engineers, because he was able to take the fundamentals and carry them forward. I think there's an underlying message there: if we know the basics clearly, we can apply them to different things. You know, I got a degree in fracture mechanics, and I could solve integral equations, though very poorly and slowly compared to all my peers. But I did have a vision of solving integral equations for the rest of my life. I don't think that was a good judgment call at the time, and I'm pretty lucky that I decided not to do that.

I had the mic when Professor Peroulis asked the question, and I was actually going to ask you about Ashish Gupta, what background he had and how he succeeded at Intel. But seeing that you quoted Rajiv Mongia, who is an alumnus of Purdue as well as UC Berkeley, and Rajiv's field happens to be combustion. Yes. And from what I hear, he has been successful at Intel in Seattle, and Rajiv and I communicate routinely about his very beautiful photographs of the birds that he watches in the upper Seattle area and so forth. So, connected to that: just two examples of Boilermakers working at Intel with very different dissertation backgrounds and also very different personalities. Could you comment on the human characteristics and personalities that succeed in the very, very tough business that Intel is in?

Let me answer it in two ways. First of all, their core technical skills are tremendous. Both of them are very, very good engineers, and their ability to analyze, synthesize, and define a path to a solution is what has made them successful, even though their backgrounds were not directly in these fields. The other thing I believe they have is that they are very good communicators. I have learned over the years that if you present to any kind of audience, especially managers who listen to all kinds of stuff during the day, and you don't have a tellable story, if you are not able to say why you are there, you cannot extract value from a senior manager's attention, and it's a complete waste of time. So their ability to communicate, in addition to their technical skills, has been the differentiator, along with the fact that they bring a certain amount of positivity to all this. And I should point out one thing: don't ask Rajiv Mongia to send you his pictures, because if he does, he'll crash your PC. It has crashed my PC twice.

Right, and with this question I want to thank Ravi once again. Thank you. We're going to take a short 15-minute break.
What follows after that is going to be a really interesting panel looking into the future from multiple perspectives, and Ravi will also join us along with many other distinguished guests from Purdue. So we're going to reconvene in 15 minutes, at 3:45. Until then, I want to really thank you for attending, on behalf of the Elmore Family School of Electrical and Computer Engineering and the School of Mechanical Engineering, which are co-hosting this, and the College of Engineering, for the series. So thank you all. We will see you soon.