Good afternoon, everybody. I think we'll get started. My name is Klod Kokini. I'm the associate dean for academic affairs in the college and a professor of mechanical engineering, so this is my home. This is our continuing series of celebrations of faculty careers, which we started about three years ago as a result of strategic planning from the faculty. The idea is to give senior faculty, specifically faculty who have been full professors for seven years or more, a chance to share with other faculty, staff, and students their experiences and the journey that brought them to where they are. After giving a colloquium like this, by the way, the speaker also gets a chance to meet with the department head and the dean to talk about their plans for the next seven years. So today we have the distinct pleasure of hearing from Professor Ganesh Subbarayan. He received his PhD in 1991 from Cornell and started his professional career at IBM Corporation. Afterwards he was a faculty member at the University of Colorado, where he was an assistant and then associate professor, and he came to Purdue in 2002. We've had the great pleasure of having him as part of our faculty. His research interests are in computational solid mechanics, computational geometry, and microelectronics reliability. He was a pioneer in using geometric models directly for analysis, an area now called isogeometric analysis. So we are very pleased to have him tell us about his own area. Thanks, Ganesh.

Thank you so much, Klod. I really appreciate the opportunity to talk today. A little bit of background on how this happened: sometime in the fall, I got an email from Anil, and there were three phrases in that email that I couldn't ignore. One of them was, "I strongly request your participation." Another: "I look forward to your response today."
And the third: "NO is not acceptable," with NO capitalized. So here I am giving the talk today, and it is really my pleasure. Thank you all for taking the time to come; I know many of my faculty colleagues are here, and I know your days are very busy, so I really appreciate it. I'll also introduce a couple of distinguished friends here a little bit later.

A little bit about myself and my early education. I was born in Madras, which the state government later renamed Chennai because they wanted to erase any colonial vestige; they didn't want any British-given names there, so it became Chennai. At age 12 I moved to Bangalore, about 320 kilometers away, for my eighth grade. After the 12th grade I attended the Indian Institute of Technology in Madras, and during the five years I was there I was going back and forth, as that arrow shows, quite often. Some of you are from Madras and know what it is like, and some of you may have visited, especially if you're not from India. Madras is like any other city: crowded, polluted. But IIT literally looks like this even now. You can see deer crossing; it's really like a rainforest, a very nice place, and I was very fortunate to have the chance to go there. After my five years at IIT, I had the chance to come to grad school at Cornell. I landed at JFK, and someone picked me up from the airport, drove me through New York City, and dropped me back at JFK for a short hop on a little plane. This was all a new experience for me. Everything seemed to go so much faster, everything seemed so much more organized, less noisy, and it was a very exciting experience, I must say.
So I took the short flight to Ithaca. This is the picture that typically appears in the catalog, and it looked exactly like that: spectacular, very beautiful. You can see Cayuga Lake in the background, and the clock tower. Within walking distance, less than half a mile, almost inside the campus, is a 60-foot waterfall, Ithaca Falls. And if you decide to bike seven miles up, you come to Taughannock Falls, which at 215 feet is taller than Niagara; not as wide, but taller. It's a spectacular place. This is the engineering quadrangle. There was a sundial there, and when I landed in Ithaca I was told that the sundial was designed and built by Professor Richard Phelan, whom I knew from the book he had written on machine design. So here I was in the place where Professor Phelan worked, looking at the sundial he had built. It was a great experience. And this is Upson Hall, where mechanical engineering is housed. This ugly building didn't exist when I was a graduate student; it has been built since then. It's the nano center, and these centers always look new because they were always built around the mid-2000s; this one was no exception. So I was in Upson Hall, and Professor Don Bartel was my PhD advisor. I was admitted to the MS/PhD program; Cornell admitted all of its students that way, and it was one's choice whether to pursue the PhD directly or go through an MS. On Professor Bartel's advice, I chose to pursue a direct PhD. Cornell also required us to do minors. I did two: one in computer science and another in solid mechanics. My major was in systems and design, a somewhat nebulous area at Cornell that lumped together a lot of different fields. That little circle there was my office, a corner office, and that was where I spent five years. The field you see is a baseball field. Professor Fisher is not here; Tim Fisher is not here.
But that was probably where he was, as a catcher, on that baseball field. At Cornell, your advisor paid your full tuition for three years. After that, if you passed what's called the A exam, the equivalent of our prelim exam, you went on reduced tuition. So the advisor is all the more anxious to get you past the prelim exam, or out the door, one way or the other, because that puts you on reduced tuition. At the end of three years I took the A exam and passed it, so at that point I was officially on the PhD track. But about six months before that, my advisor said, let's look at bone remodeling. I have to give you a little background about myself. In India, if you want to do engineering, the marks that count toward your admission are your scores in physics, chemistry, and math. Biology does not count, and I barely passed biology. And here was my advisor saying I should do something on bone remodeling. There's a little history to that. My advisor had, I think, a third- or half-time appointment at the Hospital for Special Surgery, affiliated with Cornell, where he designed custom prostheses, and he would commute to New York City every week. So bone was something he was very interested in. He wanted to know how a custom prosthesis affected the bone around it, which indicates the success or failure of a prosthetic surgery. So he wanted me to look at bone remodeling, and that's how I got started. Up until then I had worked on solid mechanics, gaining a background in solid mechanics and optimization, and now I started to read a little about evolutionary biology.
One of the things about bones is that if you look at load-bearing bones, you find this trabecular arrangement, this nice order. In the late 1800s a person by the name of Wolff said there must be a law governing that trabecular arrangement, and it came to be known as Wolff's Law. People have been trying to figure out the mathematical form of Wolff's Law ever since, and I don't think that effort has ended. I was one of those looking for a mathematical form to explain the structure. So I formulated the problem as a constrained variational problem, a Pareto optimal problem. It is basically a trade-off between mass and the amount of energy stored: the optimal structure must be a trade-off between the two. If the mass is very large, the body is spending a lot of metabolic cost to build bone that serves no purpose beyond preventing the risk of failure; and if the bone is too thin, with a high risk of failure, evolutionary biology would say we wouldn't exist: you would have been eaten long ago, because your bones would have been broken. That was the idea. For the material behavior I used a relationship between modulus and apparent density developed by Carter and Hayes in 1977, and that's how I posed the problem. The formulation was inspired heavily by two people. One was McNeill Alexander, a biologist, who went on to become well known among robotics researchers as well, because his models of locomotion were heavily used in robotics. He wrote this beautiful, very thin, very succinct book called Optima for Animals, in which he modeled behavior, structure, energy consumption, and so on as optimization problems. It was a delight for me, someone who hated biology, to read that book.
The other person who influenced me was Wolfram Stadler, who worked on multi-criteria optimization. I didn't cite him here, but he was another influence on my way of modeling. I then tried to discretize this problem using finite elements and solve it, and I found that a single iterate of my optimization took 16 hours on the IBM mainframe I had access to at the time. That was not going to work; if a single iteration takes 16 hours, I was not going to graduate anytime soon. So I had to find another way. I started to look a little deeper and found variational sensitivity analysis: one could take complex functionals and determine their sensitivity on a fixed domain. So instead of solving the problem as one where the boundaries could also vary, which in reality they could, I kept a fixed domain and varied the density inside. That reduced the complexity of the problem, and that's what I chose to do. That was the sensitivity expression, and the quantities epsilon-A and epsilon-A0 are adjoint quantities obtained by solving an adjoint boundary value problem. Epsilon-A0 is related to your objective; it really is a coupling between the original objective and the principle of virtual work. That's where it comes from. And then I found that I could implement this in a problem-independent manner within the finite element code if the user provided three functions: how to evaluate psi, how to evaluate the derivatives of psi with respect to the behavioral quantities, and how to evaluate the derivatives with respect to the design quantities, in this case density. With those three, for any new problem, whatever the objective, I could do the sensitivity calculation automatically. So I did that.
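The three-callback idea can be sketched on a toy problem. This is a minimal illustration, not the actual finite element implementation: the "structure" is a set of uncoupled springs K(rho) u = f, and all names here (psi, k0, f) are invented for the example. The user supplies psi, its derivative with respect to the behavior u, and its explicit derivative with respect to the design rho; the adjoint solve then gives the total design sensitivity without re-solving the system per design variable.

```python
import numpy as np

# Toy uncoupled-spring model: K(rho) u = f with K = diag(rho * k0).
# k0, f, and the objective psi are illustrative choices, not from the talk.
k0 = np.array([2.0, 3.0, 5.0])          # baseline stiffnesses
f = np.array([1.0, 2.0, 1.5])           # applied loads

def solve(rho):
    """Behavior: u solves K(rho) u = f (K is diagonal here)."""
    return f / (rho * k0)

# --- the three user-supplied functions ---
def psi(u, rho):                         # objective: stored energy + mass
    return 0.5 * np.sum(rho * k0 * u**2) + np.sum(rho)

def dpsi_du(u, rho):                     # derivative w.r.t. behavior u
    return rho * k0 * u

def dpsi_drho(u, rho):                   # explicit derivative w.r.t. design rho
    return 0.5 * k0 * u**2 + 1.0

def adjoint_gradient(rho):
    """Total dPsi/drho via the adjoint equation K^T lam = dpsi/du."""
    u = solve(rho)
    lam = dpsi_du(u, rho) / (rho * k0)   # K diagonal -> trivial adjoint solve
    dK_drho_u = k0 * u                   # (dK/drho_j) u, one entry per j
    return dpsi_drho(u, rho) - lam * dK_drho_u

rho = np.array([1.0, 0.5, 2.0])
g = adjoint_gradient(rho)

# sanity check against finite differences
h = 1e-6
for j in range(3):
    rp = rho.copy(); rp[j] += h
    fd = (psi(solve(rp), rp) - psi(solve(rho), rho)) / h
    assert abs(fd - g[j]) < 1e-4
```

The point of the interface is that `solve` and `adjoint_gradient` never need to know what the objective is; swapping in a new psi and its two derivatives is all a new problem requires.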
When I implemented that, a single iteration took three and a half minutes. Now I had the possibility of graduating. What I didn't know at the time, having solved this by 1988 or 1989, was that this sort of problem would come to be called topology optimization, a term coined later. And the power-law relationship between modulus and density is now called SIMP, solid isotropic material with penalization, a term coined in 1993 and used throughout the topology optimization literature. I had used the relationship for bone developed by Carter and Hayes in 1977. The person who coined SIMP wrote a book on it and gets the credit for the work; he has about 5,000 citations, and I have zero. So that was my experience. When I graduated, I had two choices. One was to do a postdoc with a computer science professor who was on my committee, implementing parallel nonlinear programming algorithms. The other was to go work at IBM. IBM paid a little more and seemed more like engineering, so I decided to go to IBM, at Endicott, New York, where IBM actually started in the early part of the twentieth century, building small business machines that punched holes in paper tape, things like that. The person who interviewed me and hired me into IBM was Bill Chen. Bill Chen is a Cornell alum, so he was charged with the responsibility of interviewing Cornell students. Professor Sammakia, who interviewed me later and accepted me into his department, was the manager I worked for. Professor Sammakia, Bill Chen, and I have continued to have a wonderful mentor-mentee relationship over all these years; Professor Sammakia just became the president of SUNY Polytechnic. These are all colleagues with whom I had a great deal of interaction, and they helped me a great deal in adapting to industry.
Dave Lyke, in particular, was the first person my manager assigned me to work with on a project. Dave has a background in marine biology, and he was leading a project building a supercomputer circuit board. They had some problem with lamination: things wouldn't line up, and they were trying to hold very, very tight tolerances. So I went to the first meeting and said, I'll do the modeling. At the next meeting Dave said, do you have the model done, and can you tell me what's going on? That was my shock, my first introduction to industry, and where I came to understand how things work there: it's not the model that's important, it's what you deliver. Dave and I went on to work on a lot of projects together. George Thiel was a kindred spirit, one who relied on Mathematica more than finite element analysis; he was a TAM student from Urbana-Champaign. So it was a great environment. What I want to illustrate by this is that in an industrial environment you need colleagues who keep you honest, who help you solve problems, and who keep you intellectually stimulated. That's what I got from them. One other colleague in the same department, with whom I used to have a lot of mechanics conversations, was Dr. Tien Wu, a fracture mechanics guy. Years later he went on to become the chief operating officer of ASE Group, which is a $9 billion company. It's not often that one has former colleagues who now head a $9 billion company. The lab where I worked no longer has an IBM logo; it was sold. IBM has moved out of the hardware business and is focusing more on software and services, so it's a sad story. My first academic job took me from Endicott, New York to Boulder, and again, another spectacular place. Notice all the pictures are in fall; the fall colors are spectacular.
It's the same story everywhere: 180 days a year it's cloudy and gray, but the pictures are always taken in fall. Boulder, though, really does remain sunny and bright 300 days a year, and it is very beautiful, I can tell you. Except for the engineering building. I don't know why, but people always think engineers don't like aesthetics, that they like drab, functional-looking buildings; that's why the engineering building looks like that. And this is the view of the Continental Divide that I took when I climbed up the Front Range one day. Spectacular; those are all 11,000- and 12,000-foot peaks. It was a great place, and I was very fortunate to begin my academic career there. I have been really privileged to work with exceptional students. The way it has worked in my academic career is that usually my students teach me. I have curiosity, I learn from my students, they teach me, and then I claim I know that stuff. That's how it usually works. The first seven students listed graduated from the University of Colorado; Xu Feng was my first student at Purdue. I was very pleasantly surprised when Xu Feng said he would attend the talk. He's at GE in Cincinnati, and he's here, so I'm really delighted he could make it all the way from Cincinnati. Thank you for coming. From Xu Feng on, they all graduated from Purdue, and there are many of them here as well; I want to thank them all for coming. They had a choice not to come. Most of my students have gone on to work at these places, and many of them were supported by the Semiconductor Research Corporation, which is a consortium of these companies. Usually, by their third or fourth year, they have an offer in hand. They don't want to think about any other opportunities; they want to go to industry. They don't want to fight for tenure.
They don't want to go into academia. That's how it works for my students. I've had several MS advisees as well, and I'm very happy that some of them are here too. And I didn't do it all by myself: several colleagues have helped me over the years, listed here in chronological order along with the students they co-advised. So let me real quickly, since as you'll notice I tell long stories, go over some of the research I've had a chance to do over my career. I'll start with this problem because it's a very interesting one. When I moved to Boulder, the IBM printing division moved to Boulder as well, and Dr. Jack Zabel was suddenly given the charge of finding a way to match color for a room-sized printer that IBM had bought from a company in Belgium called Xeikon, to rebrand and market as an IBM printer. And here was a Purdue grad, a Herrick Labs grad, used to printing technology based on the dot-matrix type of printing, where vibration is the concern, and now all of a sudden he was in electrophotographic color printing, where everything is new. What color printing is all about is this: I have an image on my screen, I want to print it, and I want to make sure the image on the screen is reproduced as accurately as possible on my printer. The difficulty is that the image on the screen lives in what's called a device-independent color space; x and y here are just generic coordinates. What you're seeing is the spectral locus of the colors the human eye can perceive, starting with deep blue here and going all the way around to red here, with greenish yellow here, the brightest color the human eye can perceive.
Monitors produce color additively, which is why their gamuts look triangular, with blue, green, and red at the corners, whereas printers produce color subtractively: they filter. Cyan is a red filter, for instance, yellow is a blue filter, and so on. Therefore the printer gamut, the range of colors a printer can produce, is very different from the range of colors the monitor can produce. So if I'm given a color point here, what is the nearest color point in the printer's space? That is the gamut mapping problem. There's a connection to geometry in almost everything I'm going to talk about today. What we did, and you'll also notice that I like Pareto optimal formulations, was to pose this as a Pareto optimal problem in which we can optimize either the color accuracy or the amount of ink used. The way you characterize a printer is to print a whole bunch of color patches like these and use a spectrophotometer to determine the device-independent coordinates corresponding to each patch. These are points in a color space. Now you have a bunch of data points; what do you do with them? We used the Qhull algorithm from the Geometry Center in Minnesota to fit a tetrahedral mesh to them, so we could interpolate, and we also took the same data points and trained an artificial neural network on them. So we had two ways to map CMYK to the Lab values that come from the image; Lab is the device-independent space. We did this and published it in ACM Transactions on Graphics, a 43-page paper. As I said, I tell long stories. Here are some examples. This is the color-error minimization when you limit the total ink fraction to four, which effectively means you can put down any amount of ink you like.
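The tetrahedral-mesh interpolation step can be sketched in a few lines. This is a hedged illustration, not the published method: the "printer" below is a made-up linear device-to-Lab map so the result is checkable, and `LinearNDInterpolator` (which builds its Delaunay tetrahedralization with Qhull, the same library mentioned in the talk) stands in for the characterization mesh.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Measured "patches": ink fractions and the colors a spectrophotometer
# would report. The linear map A is purely synthetic, for illustration.
rng = np.random.default_rng(0)
cmy = rng.uniform(0, 1, size=(300, 3))           # ink fractions per patch
A = np.array([[100.0, 0.0, 0.0],
              [0.0, 80.0, 20.0],
              [10.0, 0.0, 90.0]])
lab = cmy @ A.T                                  # pretend device -> Lab map

# Tetrahedralize the measured Lab points (Qhull under the hood) and
# interpolate ink amounts at a target in-gamut color.
ink_of_lab = LinearNDInterpolator(lab, cmy)
target_lab = np.array([0.4, 0.5, 0.3]) @ A.T
ink = ink_of_lab(target_lab[None, :])

# Because the synthetic map is linear, barycentric interpolation over the
# tetrahedra recovers the exact ink amounts.
assert np.allclose(ink[0], [0.4, 0.5, 0.3], atol=1e-8)
```

A real characterization would replace the linear map with measured Lab values, and out-of-gamut targets (where the interpolator returns NaN) are exactly the points the gamut mapping step must project onto the printer gamut.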
This is when you limit the total ink fraction to one and a half. Cyan is one, magenta is one, yellow is one, black is one, so a limit of one and a half means you allow a total of only one and a half. And you see the result is reasonable for a lot less ink. And here, these are the problems where you minimize ink subject to an allowable error in color. So it's very different physics, but the same computational techniques apply. It also turns out there are challenges in choosing the number of patches you print. The normal tendency would be to print seven by seven by seven patches, which is an awful lot of patches to characterize with a spectrophotometer, especially if it's a grad student placing the instrument on each patch and reading off the Lab value. We found we could do it with 149 patches; that's all that was required. Professor Hirleman was the one who hired me at Purdue. I had seen him at a couple of conferences before I came, and got to know him briefly because he had also worked on SRC-related projects, and I was delighted to have the opportunity to come to Purdue. So we left with our kids, exchanging the opportunity to hike in Rocky Mountain National Park and camp at Mesa Verde for the opportunity to visit world-class museums in Chicago and for the excellent schooling our kids have made the best use of, plus an occasional trip back to Colorado. Around that same time I was also given a new professional service role, serving as editor-in-chief of the IEEE Transactions on Advanced Packaging. I was entrusted with that responsibility by Paul Wesling, who was a vice president of the IEEE CPMT Society, and I'm grateful to him for trusting me with it.
The journal ranked second by impact factor in 2003 among all manufacturing-related journals, and third in 2005. Okay, so let me go on to talk about my research. I'm going to divide it into parts, starting with CAD-CAE integration. Most of this work happened at Purdue, although some of it started a little earlier. When I worked in industry, my management didn't care what method I used to solve a problem as long as the problem was solved. Then I came to academia. I hadn't published many papers out of my PhD work; in fact, I had zero journal papers from it, because my advisor didn't think it was important, and I didn't think it was important either, since I was going to industry. But in academia there was suddenly this thing called tenure, so I needed to write papers. In industry, I didn't care what method I used as long as I solved the problem; in academia, I found that what problem you solve is less important than the method. So here I was, chasing a problem. I had seen this constrained droplet shape problem between circular pads solved by a colleague at IBM for a real application. I said, let me make the problem more interesting: square pads, twisted and offset, and I want to find the droplet shape. That's a fully three-dimensional shape and a nice shape optimal design problem, so it met a lot of criteria. I formulated the problem in a different coordinate system, not a cylindrical one, because with the pads offset and twisted I couldn't use cylindrical coordinates. Instead I used an off-centered coordinate system with a centroidal locus that I track, describing positions on the surface relative to it, and I came up with this formulation.
I discretized it and optimized the location of each of the nodes using an optimization algorithm. That gave me the shape, and then I automated the transfer: because I had the centroidal locus, I could construct a three-dimensional mesh automatically and transfer it to Abaqus for elastic-plastic creep analysis, to try to predict fatigue life. So that's what I had done, and I wrote the paper alone; in those days I had to write papers by myself. A little story about that: my doctoral advisor came to visit me when I joined Colorado, and I was whining about how I was not able to find students. He said, well, it's always a good idea to do it on your own initially. That got me started doing this work on my own, and it was great advice. Then, you know, there were hundreds of nodes I was optimizing, which seemed like a very wasteful optimization problem. Why do I have to optimize hundreds of unknowns for a shape that I know is smooth? Why can't I exploit the smoothness and reduce the number of optimization unknowns? I knew from the optimal design literature that if you use nodal positions as unknowns, you can end up with non-smooth boundaries and poor matrix conditioning. And there was work from 1984, by Braibant and Fleury, that used what they called design elements: B-spline patches whose parameters were optimized, with the patches remeshed automatically at each iteration. I said, why not do the same thing, but instead of using finite element nodal positions, use the NURBS themselves directly and optimize the unknowns on the NURBS? There are far fewer unknowns needed to describe a smooth shape than there are nodes in a mesh.
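The "far fewer unknowns" point is easy to see concretely. Here is a small sketch, with invented knot and control values, of a smooth profile carried by just six B-spline coefficients; in a design-element or NURBS-based formulation, only those six numbers would be optimization unknowns, instead of hundreds of nodal positions sampled along the same curve.

```python
import numpy as np
from scipy.interpolate import BSpline

# A clamped cubic B-spline profile with 6 control values (the design
# unknowns). Knots and control values are illustrative, not from the talk.
k = 3                                             # cubic
ctrl = np.array([0.0, 0.4, 1.0, 1.0, 0.4, 0.0])   # 6 design unknowns
n = len(ctrl)
t = np.concatenate(([0.0] * (k + 1),
                    np.linspace(0, 1, n - k + 1)[1:-1],
                    [1.0] * (k + 1)))             # clamped knot vector
curve = BSpline(t, ctrl, k)

# Sample the smooth curve densely: 200 points, still only 6 unknowns.
s = np.linspace(0.0, 1.0 - 1e-9, 200)
y = curve(s)

# Convex-hull property: the curve never leaves the control-value range,
# which is one reason spline parameterizations behave well in optimization.
assert y.min() >= ctrl.min() - 1e-12
assert y.max() <= ctrl.max() + 1e-12
```

Perturbing any one of the six control values deforms the whole neighborhood of the curve smoothly, which is exactly the behavior one wants from shape design variables.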
We published that NURBS-based solution approach in Computer Methods in Applied Mechanics and Engineering, probably one of the earliest works using NURBS directly for analysis. Now, NURBS are spline geometries. I'm going to skip the details, but as interpolants they have many properties that are as good as, if not better than, finite element shape functions. In 2005 that area became a field of its own, called isogeometric analysis: the use of NURBS, of geometric models directly, for analysis. There are now whole conferences dedicated to the topic each year, thanks to this person. Now, there are two ways you can represent geometry in CAD. One is called CSG, constructive solid geometry, and the other is the B-rep, or boundary representation, model. In CSG you start with predefined primitives for which it is very easy to tell whether a point is inside or outside, and you compose them with Boolean operations. You define a procedure that defines your final geometry; you don't actually construct the final geometry unless absolutely necessary. It's like a recipe: this is how you cook your final dish, but you don't actually cook it until necessary. That's the CSG approach. Unfortunately, if you have complex shapes like this, CSG needs primitives that look like those shapes too, so it is not very useful for sculpted, complex surfaces. In B-rep you have an explicit boundary: you trim the boundaries and stitch them together to construct the final shape. So in B-rep you have a well-defined boundary but no notion of the points inside, while in CSG you can tell whether any point is inside or outside but you don't have an explicit boundary. That's the trade-off. CSG, by the way, was actually created by one of my professors.
That was prior to his coming to Cornell. Herb Voelcker; I had him in a class, so I knew him as the founder of solid modeling. He had built a pure CSG CAD system called PADL. Nowadays most CAD systems are neither pure CSG nor pure B-rep; they are hybrids in between. So we said: since CSG relies on point sets, why can't I describe, within each of these primitives, a functional approximation, and then compose those approximations in function space instead of composing point sets? For these function-space approximations to converge, they must obey a property called partition of unity, and we wrote a paper on how to construct such partitions of unity. Some of this work actually started with Natekar, and Shufang was instrumental in much of it. And this is an application. So that was the basic idea. What CSG now requires is a trivariate functional representation, one that relies on three parameters. Most CAD systems don't give you a functional representation based on three parameters; they only give you a functional surface representation, the skin, not the solid itself. So you have a two-parameter surface, not a three-parameter solid. One thing we could do was construct a trivariate solid modeler. Since CSG is a procedural description, we could construct the CAD system symbolically, as computer algebra, as opposed to one you interact with using a mouse on a screen. You can describe it symbolically, because it's all purely procedural; it follows set-theoretic logic. So I can write a symbolic CAD system, and that's what Ole Morgan's thesis was. We wrote a symbolic CAD system. Unfortunately it never became commercial; it was developed in the university, but the idea exists.
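The CSG point-membership idea above can be sketched compactly. This is a stand-in, not the partition-of-unity machinery from the actual work: signed distance functions (negative inside) with min/max composition are a common, simple way to realize "primitives with easy inside/outside tests, composed by Booleans," and every name below is invented for the example.

```python
import numpy as np

# Primitives as signed-distance-style functions: value < 0 means inside.
def circle(cx, cy, r):
    return lambda x, y: np.hypot(x - cx, y - cy) - r

def rect(x0, y0, x1, y1):
    def d(x, y):
        dx = np.maximum(x0 - x, x - x1)
        dy = np.maximum(y0 - y, y - y1)
        return np.maximum(dx, dy)        # negative inside the box
    return d

# Boolean composition in function space: union = min, difference = max(a, -b).
union      = lambda a, b: (lambda x, y: np.minimum(a(x, y), b(x, y)))
difference = lambda a, b: (lambda x, y: np.maximum(a(x, y), -b(x, y)))

# "Plate with a hole", built procedurally: a rectangle minus a circle.
# The geometry is never meshed; membership is evaluated on demand.
plate = difference(rect(0, 0, 4, 2), circle(2, 1, 0.5))

assert plate(0.2, 0.2) < 0     # inside the plate material
assert plate(2.0, 1.0) > 0     # inside the hole, so outside the solid
assert plate(5.0, 1.0) > 0     # outside the plate entirely
```

Moving the hole is just rebuilding `plate` with a new circle center, which mirrors the talk's point that reorienting the hole never requires reconstructing a mesh.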
So now we can use this sort of CSG modeler to do analysis directly. Suppose I have a plate with a hole and I want to find the optimal orientation of the hole; this might look familiar to Shufan. Each time the hole's orientation changes, I don't have to reconstruct a mesh. I can think of the hole as a composition on the underlying domain of the plate: I have an approximation corresponding to the plate, an approximation corresponding to the hole, and I can do a Boolean composition of the two functional approximations to describe the approximation for the plate with a hole. Then I can move the hole and optimize its location, and during the entire process I never have to remesh; I'm only studying the interaction. We can take this further and ask: what if, in addition to the plate and the hole, I also have a material distribution that I need to vary? It turns out this problem was motivated by a biological problem I had looked at long ago. In 1972 a person by the name of Burstein in Cleveland had done an experiment on rabbit femurs. That was a time when fracture fixation was done by drilling holes in bones, and the fear, for engineers, is that any time you see a hole you worry about stress concentration. By putting a hole in a bone for fracture fixation, do you weaken the bone? So he did a beautiful experiment on rabbits; unfortunately that part was not beautiful, but everything else was. In one set of rabbits he drilled holes and allowed the bone to fill in; in another set he put in a soft rubber plug; and in a third set he put in screws. After eight weeks he twisted the femurs and found that they all took the same energy to failure as the control femurs.
So in other words, the bone adapted, no matter whether you allowed the hole to fill or to remain. And the way it adapted was: if you allowed the hole to fill, it filled the hole; if you put a soft plug in, it made the bone around the hole denser. That's really the problem we modeled here. This sort of computational modeling was not easy to do at the time, and I still believe it's a challenging problem, because now, as you optimize the hole orientation, you also have to simultaneously change the density distribution around it. Okay, here's another application of this sort of computational approach, from the microelectronics industry. In chips, there is something called a thermal interface material, which sits between the heat spreader and the die. The thermal interface material looks like this. It's usually a particle-filled composite. The matrix is usually either epoxy or silicone, and it has very low conductivity, on the order of 0.2 to 0.3 watts per meter-kelvin. The particles are alumina, boron nitride, sometimes even metals such as silver, and they have very high conductivity, anywhere between 25 and 200 to 400. This was a joint project with GE; GE had a large grant from the Department of Commerce, and we were doing the modeling work. What GE was trying to do was determine the particle size distribution. They wanted to know, given a particle size distribution, what is the effective thermal conductivity of the composite? You do experiments at various volume fractions of this particulate system, volume fraction being a surrogate for the particle size distribution. What you find is that the effective conductivity you get up to about 30% matches the classical Maxwell model, the Rayleigh expression, or the Hashin-Shtrikman bounds.
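The Maxwell estimate mentioned here is easy to state; a minimal sketch (spherical-particle form, with illustrative conductivity values chosen from the ranges quoted in the talk):

```python
# Maxwell's classical effective-medium estimate for spherical particles in a
# matrix -- the model the talk says matches experiments only up to ~30 vol%.
def maxwell_keff(k_m, k_p, phi):
    """Effective conductivity at particle volume fraction phi (same units as k_m)."""
    num = k_p + 2.0 * k_m + 2.0 * phi * (k_p - k_m)
    den = k_p + 2.0 * k_m - phi * (k_p - k_m)
    return k_m * num / den

k_m, k_p = 0.2, 30.0    # W/m-K, illustrative values from the talk's ranges
for phi in (0.1, 0.2, 0.3, 0.4):
    print(f"phi={phi:.1f}  k_eff={maxwell_keff(k_m, k_p, phi):.3f}")
```

At zero volume fraction the estimate reduces to the matrix conductivity, and it rises monotonically with loading; the point of the talk is that above roughly 30% the real composite rises much faster than this dilute-style estimate predicts.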
All of those match okay until about 30%, but beyond 30% they don't match very well. So the natural question is whether it's because of a special arrangement of particles or imperfect interfaces, and most often people would say, well, the experiments don't match the model, the model must not be correct, therefore the interface is probably imperfect. I felt that maybe we should explore the particle arrangement and the physics of energy transport between particles a little more. So what we did (this is Shufeng's work again) was take a particle arrangement at the same volume fraction, though with a much smaller number of particles, and simulate the effective conductivity. And what we found is that in all cases, we matched right on top. And this is not one simple simulation; there were 30 such simulations, and what's plotted is really the statistical average of those 30. Each experimental point is itself the average of 15 experiments. And what you find is a match. What you see is that the physics is basically near-percolation energy transport in these particle-filled systems, which the simulation was able to capture quite well. So that's an application. The reason we needed this computational procedure is that when I'm rearranging this particle arrangement 30 times, I don't want to remesh every time; we were doing it more or less automatically. That was also the time, around the early 2000s, when everybody had jumped into nano. I jumped into nano as well. So we did size-dependent conductivity estimation for silica particles and published it in Physical Review. We moved on to nanoparticles, nanowires, nanofilms. We estimated the piezoresistance; that was actually a student's PhD thesis. And we moved on. Okay, so now I'm going to talk a little bit about the work that we've done in the last few years. Now, one of my goals is, let's say I have a B-rep geometry.
The disadvantage of a B-rep geometry, a boundary representation, an explicit geometry representation for an interface or a boundary, is that any topological change is very hard to model. On the other hand, applying boundary conditions is a lot easier if I have an explicit boundary representation. And as you will see, these are all examples of problems with evolving boundaries: cracks, solidification, intermetallic growth, shape optimal design. So normally, when you use B-rep models, what you would have to do is start with the skin, mesh the interior, and do the analysis. Now, meshing is not always easy if your parts are very intricate and have thin sections; it's very hard to mesh thin parts. So what I could do instead is immerse these surfaces inside a domain and then somehow analyze it. Now, how do we do this is the question. I want to preserve the geometric exactness of my B-rep model; I want to immerse it in a domain and do the analysis without at any point sacrificing geometric exactness. There are two choices when you do that. One is to keep the boundary implicit, meaning that I take away the boundary and capture its influence on the background; that's the implicitized boundary. Or I could keep the boundary explicit. The advantage of an explicit boundary is that at any point I know curvatures, normals, et cetera, which I need for any complex physics. With an implicitized boundary, my governing equation is more complex, I don't have curvatures and normals until I define them (only in the limit of refinement do I know them), boundary conditions are difficult to impose, and on top of that you have a large number of degrees of freedom to track, which only tell you whether a point is on one side of the boundary or on the other.
Given a point, I can only tell whether it is on one side of the boundary or the other; I don't quite know where the boundary is, because I don't have an explicit description of it. Whereas with an explicit boundary, I have a simpler governing equation. This is a solidification problem, the Stefan problem. Geometric quantities are directly computed: curvatures and normals are explicitly known. I can impose boundary conditions directly, and I can also achieve higher continuity of the interface. So what we have attempted to do is construct an explicit immersed boundary and solve a variety of problems; that's what we've done over the last few years. There are two issues with an explicit immersed boundary. The first is that you need to know the distance of influence. The interface you put in typically has physics associated with it, and that physics dies off as a function of distance, so you need a measure of distance from that surface. Those are all CAD issues that come about. You also need to know, given a point, the nearest point on the interface that influences it; that's called point projection. So you need to know something about distance, and you need to be able to project a point onto the surface. Both of these you need to be able to solve. Now, if I want distance, the natural temptation would be to go with an iterative scheme like Newton-Raphson. And if you do that, there are problems. If I have sharp curvature, then the parameter value you map to from any point, say you are going along this line and asking what is the closest point on the interface, could sometimes be here or here. So Newton-Raphson is non-robust; it's not very good. And it could also be discontinuous.
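A minimal sketch of the non-robustness being described, using a unit circle as the curve (an assumed example, not the talk's): the Newton-Raphson projection equation is satisfied by the farthest point as well as the nearest, so the converged answer depends entirely on the starting guess.

```python
import numpy as np

# Newton-Raphson closest-point projection of p onto the unit circle
# C(t) = (cos t, sin t): solve g(t) = (C(t) - p) . C'(t) = 0.
# g also vanishes at the FARTHEST point, so the iteration can converge
# to the wrong stationary point depending on the initial guess.
def project(p, t0, iters=50):
    t = t0
    for _ in range(iters):
        c   = np.array([np.cos(t), np.sin(t)])
        cp  = np.array([-np.sin(t), np.cos(t)])
        cpp = -c
        g  = (c - p) @ cp
        dg = cp @ cp + (c - p) @ cpp
        t -= g / dg
    return t

p = np.array([0.1, 0.0])      # true closest point is at t = 0
t_good = project(p, 0.3)      # converges to t = 0 (correct)
t_bad  = project(p, 2.8)      # converges to t = pi (the farthest point!)
```

Both answers satisfy the stationarity equation exactly, which is why an iterative scheme alone cannot distinguish the correct projection; the algebraic level-set construction in the talk avoids iteration altogether.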
For instance, if I'm anywhere here, I cannot reach any point on the surface. So Newton-Raphson has problems of both non-existence and incorrectness; it can also be non-smooth. You could use linearized geometry to calculate distance more simply, but then you're not preserving the geometry, and there are problems with that too. So we developed an algebraic procedure, based on algebraic geometry principles, to construct a measure of distance. These work like level sets, except they're algebraic level sets: they work from the exact boundary, meaning exact-to-NURBS. And we published the work. You can then get distance measures for complex surfaces, and they're robust compared to the Newton-Raphson method. The same idea can be used for point projection as well. Newton-Raphson would never be able to reach these points; it jumps between this point and this point, whereas the algebraic procedure does well. And here are some oscillations that you would have with Newton-Raphson, no problem with the algebraic level sets that we developed. Okay, so here's an example: the Stefan problem, a classical problem. Now we have enrichments corresponding to the interface, that is, the interface between the two phases, solid and liquid. These enrichments are on both the temperature field and the gradient of the temperature field, because the gradient jump is what drives the interface to move. And now we can enforce explicitly the Gibbs-Thomson condition: a temperature condition based on curvature and a temperature condition based on velocity. And here is a solution for that. And this is dendritic solidification. You start with an initially circular solid, placed initially at this location here, and it's initially supercooled.
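The interface conditions just mentioned can be written, in generic notation (standard symbols, not necessarily the talk's slides), as:

```latex
% Sharp-interface Stefan problem with Gibbs-Thomson condition (generic form):
% conduction in each phase, front driven by the heat-flux jump, interface
% temperature lowered by curvature and kinetic (velocity) undercooling.
\begin{align*}
  \rho c\,\partial_t T &= \nabla\cdot\left(k\,\nabla T\right)
      && \text{in each phase},\\
  \rho L\, V_n &= \left[\!\left[\, k\,\partial_n T \,\right]\!\right]_\Gamma
      && \text{(Stefan condition: gradient jump drives the front)},\\
  T\big|_\Gamma &= T_m - \epsilon_c\,\kappa - \epsilon_v\,V_n
      && \text{(Gibbs--Thomson: curvature and velocity terms)}.
\end{align*}
```

Here $\kappa$ and $V_n$ are the interface curvature and normal velocity, which is why the explicit boundary representation, where both are directly available, is advantageous for this problem.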
You heat it, and immediately from the corner you start to create these solidification fronts, which we can model explicitly. All of these boundaries are modeled explicitly. Now, at every iteration you need to determine whether your boundary needs to be coarsened or refined; you need to do both, and all of that is included here. That's all in the publication that I listed; all the details are there. Okay, so now I'm going to switch to another application. I've talked about solidification. Again, the basic idea is the same: I want to keep track of an explicit boundary, I have some measure of distance from the boundary (an algebraic level set that I compute without any Newton-Raphson iteration), I preserve the exactness of the boundary, and I'm able to project from any point onto the boundary. So I keep all those algebraic geometry ideas, and now I go on to a different class of problem: crack problems. These kinds of problems are very common in electronics. Here I have a sharp corner, and here I have a crack. Now, there is special behavior associated with sharp corners and cracks. Elasticity theory tells us a sharp corner is going to be a point of singular stress. With a crack, I have a discontinuity, a jump in displacement across the crack, and singular stresses at the crack tip as well. They are all of the form one over r to the power lambda; that's the nature of the singularity at the sharp corner as well as the crack tip. Lambda is one half in the case of a crack tip, and in the case of a sharp corner it could be any value in that range. What we would like to do is analyze these kinds of problems taking a geometric view. I think of this curve, the crack, as a lower-dimensional, one-parameter entity in 2D, embedded in an underlying domain. I can now compute, from this crack, a measure of distance.
I can say how far the crack's influence will go. I can measure the influence from the crack tip, the singularity, and I can enrich my underlying field with known behaviors at the crack or at the crack tip. Here is the formulation of the problem, and here is the enrichment of the displacement field: a continuous displacement, a displacement jump across the crack face, and the tip-asymptotic displacement. And here is the simulation. There's a fair amount of implementation that I'm not going through here; there's a fairly large Fortran code underlying all of this. So here's an example of a crack. This is the von Mises stress, all done strictly as a composition of a field associated with the crack with the underlying field. And you can apply that to practical problems of this nature. We've actually applied the same idea to ask: if I have a crack that is obliquely incident on an interface, does it turn into the interface or does it cross it? The answer depends on the strength of the interface relative to the toughness of the homogeneous material, and depending on that toughness ratio, we've been able to automatically propagate the cracks. These kinds of simulations are hard to do with commercial codes; I don't know of anyone who has done them with commercial codes. We can do it in two ways. Here, the crack represents a displacement jump. On the right-hand side, the crack is modeled more like damage: it represents a region of zero stiffness, material with zero modulus. So you can model it both ways. And these are practical problems; these kinds of cracks come about quite a bit in electronics, as I'll show you next. So here I'm going to focus again on electronics.
This is what's called the back end of a die, the back end of a chip. It's about three to four microns, and underneath it is about 500 microns of silicon. So this is the top few microns where all the action is, where all the circuitry is. What the electronics industry is trying to do is make these dielectrics more and more porous, so the dielectric constant is closer to that of air and you can send your signals faster. But when you make the dielectric porous, you make it more susceptible to fracture during processing and during use. One of those fractures occurs at the first layer between multiple materials, and I'll talk about that in a second. There are other problems as well: these corners are positions of singularities, and often there is not just a single dominant singularity but multiple singularities in these corners. To analyze them, you need to solve what is really an eigenvalue problem, whose solution gives you the strengths of the singularities. So given a material set, given the back end of a die with a particular signal line architecture, you can typically find, in a typical semiconductor technology of today, several locations which are potentially susceptible to high stress concentrations, singular stresses. There are seven or eight of them here. And the question is, which of these is most susceptible to crack initiation? When you do the asymptotic analysis, you find that when you have a less stiff material surrounded over 270 degrees by a stiff material, those corners are a lot more susceptible, and the strength of the singularity can be as high as 0.45 for the current generation of technology.
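The corner asymptotics being described take the standard Williams form (generic notation, not the talk's slides):

```latex
% Williams-type corner asymptotics (generic form): the singular stress
% exponents come from an eigenvalue problem set by the corner angle and
% the material moduli; several singular terms can coexist at one corner.
\begin{align*}
  \sigma_{ij}(r,\theta) &\sim \sum_m K_m\, r^{-\lambda_m}\, f^{(m)}_{ij}(\theta),
      \qquad 0 < \lambda_m < 1,\\
  \text{crack in a homogeneous solid:}&\quad \lambda_1 = \tfrac{1}{2},\\
  \text{multi-material } 90^\circ \text{ corner:}&\quad \lambda_1 \approx 0.45
      \ \text{(the talk's value; depends on the material set)}.
\end{align*}
```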
Now, as you make it more and more porous, when the strength of the singularity becomes 0.5, essentially what you have, even though it is a 90-degree corner, is a crack: a strength of singularity of one half has the same behavior as a crack. So we analyzed it, and we found location seven to be where a crack could potentially start. And there was a situation we were trying to analyze: given three material sets, they were at this material set and wanted to move to this one, and the question is, where would the crack initiate, and how would it propagate? Here is a simulation of damage. What you see is that those corners at the interface between the oxide and the ultra-low-k dielectric are what the simulation says would be the potential locations where cracks initiate. Now, in order to do these simulations, every interface is modeled as an enrichment: every interface has a geometry associated with it and an enriched field associated with it. You initiate a crack based on a damage law, and that crack propagates automatically; all of that is modeled here. You also typically want to know at what process step these cracks originate. Remember, these features are four microns; you have no way to see them by observation. You can only do it post-mortem: you can only cross-section after assembly. So we can track the process steps and predict at what process step a crack would initiate. You can see the damage is maximum after what's called the reflow step, which is the cool-down after the solder joints between the chip and the board are bonded. Okay, so one more problem. Another question that we asked ourselves a few years ago is: supposing I have a structure and I want to insert a stiffener, or I want to put a hole. Where should I put the hole?
Or where should I put the stiffener? Can I find the sensitivity to moving this hole or stiffener? It turns out that is not a trivial question; it's a very difficult question to answer. If I put a stiffening inclusion here, what should the location and orientation of that inclusion be, and how should I change its shape? So we posed what's called a configuration optimization problem. It's a trade-off optimization problem where, given an arbitrary objective and the mass, you form a weighted combination, which is basically a trade-off between the two. The arbitrary objective is relative to a homogeneous material: F zero corresponds to the homogeneous material, M zero to its mass. Now I put in either a soft or a stiff inclusion: a soft inclusion with density lower than the surrounding, or a stiff inclusion with density greater than the surrounding. I pose an optimization problem where I want to minimize some arbitrary objective, subject to the principle of virtual work, which I need to satisfy. And I ask the question: what is the change in the objective due to a change in the boundary of either the inclusion or the outside boundary? That you can derive. It's a very long expression, and it's not important for us to look at it in full, but this V here is the velocity with which the domain is changing, the velocity at every point inside the domain. I then simplify for an arbitrary design velocity, and this quantity here comes up quite a bit; any time you have an inclusion, that's the configurational tensor. Then you can ask: what if my velocity were a translation, a rotation, or a scaling? That reduces the problem quite a bit, and you get these forms of sensitivity of any arbitrary objective to arbitrary translations, rotations, or scalings.
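In standard notation (not necessarily the talk's), sensitivities of this kind are built on the energy-momentum tensor and take conservation-integral forms; a generic 2D sketch:

```latex
% Configurational quantities in standard notation (W = strain energy density,
% sigma = stress, u = displacement, n = outward normal on a contour Gamma):
\begin{align*}
  \Sigma_{kj} &= W\,\delta_{kj} - \sigma_{ij}\,u_{i,k}
      && \text{(energy-momentum / configurational tensor)}\\
  \text{translation:}\quad J_k &= \int_\Gamma \Sigma_{kj}\, n_j\,\mathrm{d}\Gamma\\
  \text{rotation:}\quad L &= \varepsilon_{3kl}\int_\Gamma
      \left( x_k\,\Sigma_{lj} + u_k\,\sigma_{lj} \right) n_j\,\mathrm{d}\Gamma\\
  \text{scaling:}\quad M &= \int_\Gamma x_k\,\Sigma_{kj}\, n_j\,\mathrm{d}\Gamma
      && \text{(2D form; 3D adds a term in } u_i\sigma_{ij}\text{)}
\end{align*}
```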
And if your inclusion were to be a crack, these three quantities turn out to be the J, L, and M integrals. It turns out this is a generalization of the idea of the J, L, and M integrals. With that, I can put an arbitrary heterogeneity into my domain and use these sensitivities to move it. What you're seeing here is the optimal location of a hole in a structure under this parabolic loading; in each iteration I compute the sensitivity and move the hole. And again, this is a moving boundary problem like the ones I mentioned. I can do that in the presence of a crack as well: this is the motion of a hole, or an inclusion, in the presence of a crack. Now, we can also do the reverse. We can keep the inclusion fixed, move the crack, and ask where the crack is likely to cause the most damage or the least damage. We could answer that question as well with these sensitivities. And this is the more complicated three-dimensional implementation. Okay, I have a few more mechanics-oriented, non-geometry-oriented problems, and I'm almost out of time, so I'm going to spend maybe five minutes talking very briefly about the kinds of problems we've had a chance to work on. This was the PhD work of Sanjay Gowal. We were looking at metal film buckling. For instance, if we fabricate an aluminum line on a very weakly bonded substrate, an SU-8 substrate, and heat it up, it turns out the aluminum film will buckle. And if you heat it further, the buckle will start to propagate. Now, you can use this phenomenon to extract the fracture toughness of the interface between the aluminum film and the substrate, and that's what we did here. We had to develop the theory for how buckling would induce debonding, and a model for it, and we actually used that to extract the fracture toughness.
But it turns out that when you buckle these metal films, the metal actually undergoes plastic deformation. If you don't account for plasticity, you can't extract the fracture toughness accurately; you would overestimate it, it turns out. By accounting for the plasticity, you estimate the fracture toughness correctly. That was the thesis work of Sanjay Gowal. And this is the work of Anirudh, who was here. More recently, we looked at the dynamics and stability of reactive interfaces. Interfaces, again, are very common in electronics. What you're seeing here is an intermetallic, the copper-tin intermetallic Cu6Sn5. It usually initially forms as a scallop, and after solidification, after a while, it flattens out. The questions are: what causes it to flatten out, and what causes the scallop shape in the first place? Those are the questions we were trying to address, because any intermetallic in electronics is not very good: it's brittle, it fractures, and it makes your electronic part more susceptible to failure. So we developed a model for the interfacial velocity. At the bottom here is the copper pad. Copper diffuses through the intermetallic, comes to the interface, and reacts with the tin, which is in the molten state, forming Cu6Sn5. And then that Cu6Sn5 diffuses along the interface. Now, depending on the rate at which the reaction occurs relative to the rate at which the diffusion occurs, you could either have a flat surface or a scalloped surface. This is what we modeled. This was a phase field model, by the way, not the computational approach that we developed before, because I was not quite ready for this problem; we're working on that right now. These are the governing equations, which I'm going to skip through, and this is the interfacial simulation.
What you're seeing here, color-coded, is the concentration of copper. Okay. And I'm going to skip through this. Now, we can also study the stability of the interface. There is a stability parameter that determines whether this interface will scallop or flatten. This interfacial parameter, lambda, relates the reaction rate to the diffusion along the interface, and depending on its value, the interface will scallop or flatten. So if I have a rough surface, some wavelengths will flatten and some wavelengths will scallop, and eventually you get a stable scalloped interface, which explains what we see experimentally when we look at cross sections of these solder joints. Okay, last topic. Maybe I'll take two minutes to explain this, and then I'm done. We've also looked at fatigue over the last several years, specifically in solder. In general, the approaches to modeling fatigue are mostly empirical, like the Coffin-Manson rule: based on an intact material, you predict how long it will last, but you don't actually follow the process of failure. So here, what we tried to do is ask the question: what if I had a simple failure description, a damage description, at every material point, and tracked the locus of points which have the critical damage value? And then we did experiments as well, and we tracked crack fronts over a period of time. What you're seeing here is a simple Weibull description of material failure, the accumulated damage, and the crack fronts that are predicted. Now, interestingly, it turns out the model predicts that after the crack has gone through a certain distance, the damage actually undergoes exponential growth and the crack zips through. And you see that in the morphology of the fractured surface as well: when you start bending things back and forth, eventually it becomes very easy.
And in the same way, when these joints are fractured in shear, eventually you get shear overload and the crack just zips through, and you see that in the model as well. So the question is, which comes first: the microstructure, or the geometry and stress? In this particular case, geometry and stress dictated more than the microstructure; the microstructure follows the geometry and stress. So then we said, well, the Weibull description is okay, and it's interesting that it provides results that match experiments very well, but it's not physically intuitive, so maybe we should develop some theory. So we went through the process of developing a theory. We said, let's use the maximum entropy principle: we can model damage as the cumulative failure distribution corresponding to a maximum entropy distribution. And then we did experiments. In my group, most of my students build their own micro-scale experiments, and we built a micro-scale mechanical tester. We take these solder joint samples, like the one you can see here. There's a capacitance sensor, which can measure displacement with sub-micron precision, LabVIEW controllers, precision stages, and manual stages to reduce misalignment. We do the cycling test, measure the load drop, calculate the inelastic dissipation per cycle, and fit to the maximum entropy model. The model does fit the data quite well, so that tells us it's okay; we can extract the single parameter that this damage model has. There is only a single parameter in this damage model. And then we take the damage model parameter extracted at 25 C and use it to predict the data at 125 C. 125 C is pretty close to the melting temperature for solder, about 0.9 homologous temperature, and we still fit quite well. And then we take a real package and thermally cycle it, not mechanically cycle it.
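A minimal sketch of a one-parameter damage law of this flavor (the exponential form below is an assumption for illustration; the actual maximum-entropy model is in the papers), with the single parameter recovered from synthetic dissipation data:

```python
import numpy as np

# Assumed illustrative form: damage as a cumulative distribution in the
# accumulated inelastic dissipation W, with one material parameter W0.
# (The exponential is the max-entropy distribution for a fixed mean.)
#     D(W) = 1 - exp(-W / W0)
def damage(W, W0):
    return 1.0 - np.exp(-W / W0)

# Synthetic "measured" damage at a known W0, then recover W0 by a
# one-parameter least-squares fit via the linearization -log(1-D) = W/W0.
rng = np.random.default_rng(0)
W0_true = 4.0
W = np.linspace(0.1, 12.0, 40)
D_meas = damage(W, W0_true) + rng.normal(0.0, 0.002, W.size)

y = -np.log(np.clip(1.0 - D_meas, 1e-9, None))
W0_fit = (W @ W) / (W @ y)          # least-squares slope, inverted
```

Because there is only one parameter, a fit at one temperature fixes the model entirely, which is the spirit of the talk's extract-at-25 C, predict-at-125 C exercise.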
And we track the crack fronts and compare the experimentally observed crack path to the predicted crack path, and we actually do reasonably well. So, in summary, what I've tried to do was go from a purely geometric view toward, at the end, more and more physics problems, where complex physics is very important as well. In many of these problems, the point that I want to make is that if you keep the geometric issues in mind, they may sometimes be more important than the physics itself. In fracture, for instance, the physics is very well known, but handling the computational geometry issues can be quite challenging. So having a geometry-centric viewpoint can be advantageous: you can preserve the exact geometry (I've tried to show you a technique by which you can preserve the geometry exact to CAD, that is, exact to NURBS), reduce the number of unknowns, and improve robustness. It's not often possible to improve all three of them at the same time. Okay. And, you know, any career is not the effort of one person. Many people have contributed, students have contributed, and my move to Purdue required sacrifice on the part of others in my family. My wife is sitting in the back; she had to give up her career in order for us to move to Purdue. So I want to acknowledge her role in whatever I presented today as well. Thank you. Could you explain the NURBS construction? Because it seemed that it was, as we would say in fluid dynamics, TVD or monotonicity-preserving; it was a very early slide where you had a 1D field with a discontinuity through it. So NURBS is a parametric, geometric model, and it can handle solutions with discontinuity. You can create discontinuities. It's basically a rational polynomial representation.
The way you create discontinuities is by what are called repeated knots. A knot is the point where two splines join. Now, if I insert multiple knots at the same point, it turns out I reduce the continuity of my curve there, so I cause a discontinuity. It's handled naturally, and that way I can model corners. Question: the Boolean operation that allows you, for example, to optimize the shape of a hole by superposition of two problems without having to remesh the background problem, is that allowed by the linearity of the problem, or is that how you use it? So, let me see if I can go to that slide. This is not in any way imposing on the underlying constitutive behavior of the material. All I'm doing is constructing an approximation. It's basically saying that at any point, the field value is a composition of two fields: the composed quantity and the underlying entity. That approximation at any point can then be passed through any constitutive behavior that I care for. So there is no limitation in terms of material nonlinearity or geometric nonlinearity. Let's thank Professor Subbarayan, and please come up and get some food; we don't want to take it back.
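To illustrate the repeated-knot answer: a minimal de Boor evaluation (a generic sketch, not the speaker's code) of a cubic B-spline whose interior knot at t = 0.5 has multiplicity equal to the degree, producing a C0 corner that interpolates the control point there.

```python
import numpy as np

# Repeating an interior knot of a cubic B-spline until its multiplicity
# equals the degree drops the continuity there to C0 -- a corner.
def bspline_point(t, knots, ctrl, k=3):
    """Evaluate a degree-k B-spline curve at parameter t (de Boor's algorithm)."""
    # knot span index i such that knots[i] <= t < knots[i+1], clamped to valid spans
    i = np.searchsorted(knots, t, side='right') - 1
    i = min(max(i, k), len(knots) - k - 2)
    d = [np.array(ctrl[j], float) for j in range(i - k, i + 1)]
    for r in range(1, k + 1):
        for j in range(k, r - 1, -1):
            a_den = knots[i + 1 + j - r] - knots[i - k + j]
            a = (t - knots[i - k + j]) / a_den if a_den else 0.0
            d[j] = (1.0 - a) * d[j - 1] + a * d[j]
    return d[k]

# Interior knot 0.5 repeated three times (= degree) in a clamped cubic.
knots = np.array([0, 0, 0, 0, .5, .5, .5, 1, 1, 1, 1], float)
ctrl  = np.array([[0, 0], [1, 2], [2, 2], [3, 0], [4, 2], [5, 2], [6, 0]], float)
corner = bspline_point(0.5, knots, ctrl)   # curve passes through ctrl[3] = (3, 0)
```

The one-sided tangents at t = 0.5 point along (1, -2) and (1, 2): the slope flips sign across the knot, a genuine corner, which is exactly the mechanism described in the answer.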