I'm Eckhard Groll. I am the head of the School of Mechanical Engineering, and I'm just here for a brief second. My job is to introduce the leader of the college, the Edwardson Dean of the College of Engineering. He came in about two and a half years ago. He changed a lot of things that are happening here. He's our fearless leader, and as he says himself, he is the best cheerleader for the college. This is Mung Chiang, the Dean of the College.

Well, thank you, Eckhard. You know that when introducing somebody, it's very important that somebody introduces the introducer. And I will not try to do cheerleading in front of you right now, as we have a just outstanding, outstanding distinguished lecture coming to Purdue Engineering today, and visitors from UT Austin with Professor Hughes. Before I briefly introduce his incredible credentials, I want to highlight why we're doing this: the Purdue Engineering Distinguished Lecture Series hosts eight to nine outstanding leaders from academia and industry around the world, in different disciplines of engineering, in order to hear from the best and the brightest, and also to showcase our wonderful faculty and students here. As the largest top-10 engineering college in the United States, we're proud of our faculty, students, and staff. And as we aspire to the pinnacle of excellence at scale, we also know that we need strategies in becoming the best in the world between the physical and the virtual sides of engineering: from what we touch to what we code, if you will, from the atoms to the bytes. That is a critical pillar of our strategy. And in that context, we warmly welcome today's distinguished lecturer, who has symbolized for decades that excellence in computational science and engineering.

Dr. Hughes is now at the Institute for Computational Engineering and Sciences at UT Austin, where he has been for 17 years, and before that he was across industry and universities such as Berkeley, Caltech, and, for many years, Stanford. Dr. Hughes has been a leader in the entire field of applied computational mathematics and in bringing computational understanding to engineering and the sciences. He is the recipient of many of the highest levels of recognition, medals from multiple societies: ACM, SIAM, the American Society of Mechanical Engineers, the American Society of Civil Engineers. I promise to save some time for his actual lecture, so I'm abbreviating a lot of the honors and awards. He has also received multiple honorary doctorates. And perhaps most telling, and I want to make sure I read the correct wording here: it's not what awards you receive, but what awards are named after you that other people receive. In 2012, the Computational Fluid Mechanics Award of the US Association for Computational Mechanics was renamed the Thomas J.R. Hughes Medal. And this is a telling sign that clearly the community regards you as a legend. Dr. Hughes is a member of the National Academy of Engineering, the National Academy of Sciences, and the American Academy of Arts and Sciences. He has demonstrated what excellence and interdisciplinary research can do together. We are truly honored and delighted to welcome Dr. Hughes here today to talk about the latest and greatest of his research. Thank you very much.

Thank you very much for that very kind introduction. It's a great pleasure to be here. It's an honor to be here. I'm going to tell you about something that has a little bit of historical significance here at Purdue.
And it's a topic that I've been working on for about 15 years, a little more than 15 years in fact, called Isogeometric Analysis. 15 years ago I came here and I presented a talk in the Department of Computer Science. And that talk was entitled, as you see there, Isogeometric Analysis, et cetera. We had named this idea, Isogeometric Analysis, only a couple of months before. We didn't even have a paper on this topic until at least a year later. I'll come back to that point later on. But this was really the first time these words were uttered in this area.

So I'll tell you about that, of course, but I want to begin by setting a context for that discussion. And that will be finite element analysis, which I assume just about everybody has some awareness of. Then I'll talk about what motivated Isogeometric Analysis, and still motivates it, and present some of the basic tools that are used, tools that are familiar in certain disciplines like computer-aided design, but not so much in analysis. I'll talk about one of the, I think, really outstanding aspects of Isogeometric Analysis, a concept called k-refinement. That really is a unique tool, and it fills in a part of the space of possible approximations that we can make to solve various equations. I'll also talk about spectral analysis, preceded by a little bit about functional analysis of these methods, and then applications. I'll present applications to show you how it's being used in the design-analysis space for wind turbines, a flex cable, and boiling simulations. And part of this subject concerns itself with building models. So we'll talk about how you build computational models, which turns out to be an extremely big problem, perhaps a bigger problem than the problem of solving equations on those models. And that will cover trimming and immersion and also some work on heart valves.

So let me begin. Finite element analysis. Now, we just had a panel discussion and I mentioned that when I started my career, finite element analysis was a brand new thing. I started doing research on it and it was resisted by most everybody. But over the years, it has become a standard tool in many, many areas of engineering and science, and it's acknowledged to be a reliable technology and an outstanding success, in fact. Car crash, for example: that was once done almost completely experimentally. Now cars are designed for crash on computers. So this is a picture, as you can see, of a Mercedes-Benz being crashed in various ways. And if you look inside, you can see there are passengers here and airbags have been deployed. So the fidelity of modeling now is quite impressive. They have anthropomorphic dummies. They have women dummies, men dummies, big dummies, little dummies, children dummies, so that you can model all aspects of crash dynamics. And crash is not just automobiles. Everything that's designed and built has to withstand loads and accidents. So you design almost everything for crash in some sense. Some of the biggest models in the world are iPhones. The number of elements to model iPhones: 100, 200 million elements.

Finite element analysis has been successful in many areas. One of those areas is the cardiovascular space. Computational medicine is now a field, and more and more we're doing engineering-type analysis, predictive analysis, patient-specific analysis, using these tools. So this is a full-body model by a student who was at Stanford many years ago in my classes.
And a technique that we developed there was a simplified fluid-structure interaction analysis. When you look at that, it almost looks like a one-dimensional model, but it's a three-dimensional model. The arteries themselves are all modeled in detail in 3D, and I'll show you that in a moment. But these are about 100 vessels and 15-million-element models. It's full fluid-structure interaction. You have varying tissue properties throughout the model. This gives you a sense of the scale of the resolution. Very, very fine, as you can see. That's the surface mesh. That's basically the artery wall. But that is resolved internally to model boundary layer phenomena, and it's a very accurate flow simulation model, a fluid-structure interaction model. Finite elements are particularly useful in this area of fluid-structure interaction because you can do structures and solids and fluids in exactly the same way and integrate those equations in a multiphysics context. You can do very interesting things with a full-body model like this. You can study inclination of the body. You can study what happens when you stand up quickly or slowly, how your baroreceptors respond. And as you can see here, this is quite a detailed model.

Here's another example that I like to show. I like to show these examples because I had something to do with all three of them. HeartFlow is a very interesting technology company that is providing to the clinic a coronary artery diagnostic capability that's done completely with imaging and computers on a patient-specific basis. You model the so-called fractional flow reserve, which is the pressure drop, which is a very meaningful metric to clinicians. And you can do this completely non-invasively, when in the past it was done with catheterization and invasive angiography.

So, all these wonderful successes, it's everywhere. What's the problem with finite element analysis? Well, one of the things that you might not have realized is that all those calculations used absolutely the lowest order elements, the simplest elements. And if you study the research literature of finite elements, you find that there is a tremendous promise, let us say, of higher order elements, but it's never really been realized. And part of the theme through my talk, I'll try to explain why that is the case. Widely used commercial and industrial finite element codes rely heavily on the lowest degree elements: quadrilaterals and hexahedra in structural analysis, and typically triangles and tetrahedra in fluids.

Another problem, and an enormous problem in fact, is model creation, creating the finite element analysis models. Now the normal engineering process is that you create a geometric model, a geometric design, and that is done in CAD software. From that, you have to create finite element models. You have to translate the CAD description, which is one representation, to finite elements. It turns out that process has intermediate steps: geometry cleanup, feature removal, et cetera, things to be done to make it suitable for generating meshes. And roughly speaking, you find throughout industry that that takes up about 80% or more of overall analysis time. It's a major bottleneck in the product development cycle. In some industries it's even more, and it's not getting better. I mean, the technologies are improving, but the reason it's not getting better is because the models are getting bigger and more complicated.
Also, all of the simplifications that are made in the process of developing the finite element model are geometric approximations. And what we see more and more these days is that geometric approximations are sometimes the largest contributor to the errors in the analysis. You remove fillets, you eliminate holes; things of this nature have a big effect on outputs. So there's a study that was done at Sandia National Laboratories that tried to create the anatomy of the overall process, and it was over many different types of analysis with many, many different cases. A wonderful study in data analysis. And that was where this quote of 80% of overall analysis time came from, but everybody will tell you similar things like this.

So what's the problem? Why is it that creating finite element meshes from CAD is so complicated and time consuming? Well, let me give you an example. This is a Honda automobile, and that's a B-pillar, and the B-pillar is one piece of metal that is cut, stamped, shaped, and holes are drilled in it. It's very simple, you would think, just a part. You should be able to build an analysis model for that in a very simplified way. What you're seeing there is graphics. Boolean operations have created that picture from many, many patches. We'll talk about patches in a little while. Images of rectangular domains mapped into physical space. That model has 1,280 trimmed surface patches. So you trim off the part that's not part of the model and you leave the rest behind. All those trim interfaces between these surfaces, they're not watertight; there are gaps and overlaps. All that has to be fixed before you have a coherent geometry to build a finite element model. So this is just part of the problem, on a simple example.

So, isogeometric analysis. The ideas were to change the way we do finite element analysis and to solve the problem of model creation to some extent by getting rid of model creation and reconstituting finite element analysis within CAD geometry. That was the basic objective. Obviously one could simplify finite element model development thereby, and the potential there was to integrate design and analysis, perhaps in a closed loop where you work right within the design. So that was the idea.

So I showed you that title of a talk I gave 15 years ago. We published our first paper on this in 2005, about a year after that talk. And if you go to Web of Science and you try to find citations to the topic of isogeometric analysis, you'll find one in 2005. That was our first paper. If you go there this week, you'll find a little bit more than what I found on Tuesday: there were 11,573 citations this year and 48,588 total so far. And you can see it's growing at least quadratically here. And it continues to grow. So this concept has had some impact. That's where we think it'll go by the end of the year. We still have about a month or so to go.

So what is isogeometric analysis? It's based on technologies that come from computational geometry, the types of technologies that are used in design: NURBS, T-splines, subdivision surfaces. They're also used in animation, graphic art, visualization, many things that you see and interact with every day. You might think that, compared with sophisticated engineering technologies, these are not as sophisticated, but in fact just the opposite is true. They're much more sophisticated. They include standard finite element analysis as a special case, but they offer many possibilities that are utilizable in analysis and design.
First of all, precise and efficient, in some sense exact, geometric modeling. Simplified mesh refinement strategies and new mesh refinement strategies. They also provide smooth basis functions with compact support. And if you're interested in solving partial differential equations, higher order partial differential equations, you need smooth basis functions. So things that were always a bit complicated and led to awkward formulations in traditional finite element analysis can be handled much more easily in isogeometric analysis. Now, what we didn't anticipate in the beginning, in fact we just hoped it was as accurate as traditional finite element analysis, was that isogeometric analysis with these types of representations would be much more accurate. In fact, they have superior approximation properties across the board. And I'll spend a lot of time on that. And of course, the possibility to integrate design and analysis.

Now, if you think of isogeometric analysis as an analysis technology exclusively, you can think of it as subsuming classical finite element analysis. The approaches for refining and improving accuracy in traditional finite element analysis are mesh refinement, called h-refinement, where h is the mesh parameter, and p-refinement, where you maybe keep the mesh the same but you increase the polynomial order on each element, while keeping the continuity in the typical finite element, low-continuity sense: continuous or even discontinuous, but not smooth. Now, k-refinement is a p-like technology where you go to higher and higher order polynomials, but you get smoother with each order elevation. So it's smooth refinement.

So I'll show a few examples of some of the basic technologies, because they support even the more elaborate and more modern ones that have now been developed within isogeometric analysis. So, B-splines and NURBS; we'll start with B-splines. They are polynomials. They can be defined recursively, one dimension at a time. So you start in dimension one. You partition an interval into elements, you can think of it that way. And you can recursively define the functions by starting with piecewise constants, one on one element and zero on the rest. And then you can order elevate. And the way you order elevate is you put this into the right side here, and then you derive the linears. You take the linears to the right side, you derive the quadratics. So this is the famous Cox-de Boor recursion relationship. It's one of the many ways to define B-spline polynomials. So if you provide that partition in a uniform way, it's called a uniform knot vector in this area, what you start off with is the piecewise constants, as you see here. Then, as you order elevate, you go to piecewise linears, and you notice: where I was discontinuous here, now I'm continuous. And when you order elevate again, you go to piecewise quadratics. And now not only are you continuous, but you have a continuous derivative. And if you keep doing this process, each time you will be supported by one additional knot span and you will get smoother. So if you go to cubics, we would be supported by four knot spans and we would have two continuous derivatives. This is different from finite element analysis. These functions here are more typical of finite element analysis: even when you go higher order, you're typically just continuous, not smoother.
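For readers of the written version, here is a minimal sketch of the Cox-de Boor recursion just described. The knot vector, the function index, and the sample points are illustrative choices, not the ones on the slides.

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """Value at x of the i-th B-spline basis function of degree p
    on the knot vector `knots` (Cox-de Boor recursion)."""
    if p == 0:
        # Piecewise constant: one on the i-th knot span, zero elsewhere.
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (x - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, x)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# A uniform knot vector: each order elevation (p = 0, 1, 2, ...) widens the
# support by one knot span and adds one order of continuity.
knots = np.arange(0.0, 9.0)            # uniform knots 0, 1, ..., 8
for p in range(4):                      # constants, linears, quadratics, cubics
    vals = [bspline_basis(2, p, knots, x) for x in np.linspace(0.0, 8.0, 17)]
    print(p, np.round(vals, 3))
```

Printing the sampled values shows exactly the progression described in the talk: discontinuous constants, then continuous linears, then C1 quadratics, and so on.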
So how do you utilize these functions to build geometric objects? You create, first of all, a patch. Here the patch extends from zero to five, and we've created a quadratic basis on it. And it is generated by what are called open, non-uniform knot vectors. They're open because at each end the knot multiplicity is p plus one: three zeros here, three fives here. And non-uniform because we've repeated a knot here. Now you can see this is the basic quadratic B-spline. But when we repeat a knot, we reduce the continuity to C0. If I add another knot here, I split the patch in two. Okay, so I can control both order and continuity with these technologies.

So you can build geometries. And if I take that basis and I multiply each basis function by coordinates, so-called control points, I get a geometric object like this. And you can see that, in general, it's non-interpolatory. It's only interpolatory at these points of C0 continuity and at the ends. And I can manipulate the shape by moving those control points. And the shape will move in a monotonic fashion. And that's because the functions are monotonic and they're positive, unlike finite element bases, which are oscillatory. You move a point with a higher order finite element basis, you get oscillations. So this is completely different. Obviously that would be useless for design. So these functions support design.

But there's an equivalent description. This is exactly the same object on the right. But this is, in some sense, a deconstruction of this geometric object into elements. You can actually represent polynomials on each one of these element domains and exactly generate this object. It's called Bezier extraction. It turns out to be an isomorphism between what you might consider the finite element point of view of a CAD file and the traditional CAD file.

So what's h-refinement? H-refinement here is just knot insertion. Put more knots in there, and you have exactly the same geometric object. You have a knot insertion algorithm that preserves the geometry. So when you refine here, you don't change the geometry. In finite elements, you change the geometry when you refine. In fact, you don't even know where the geometry is sometimes. You have to go back to CAD to find it, because it's not embedded in the mesh. Here, the mesh is in the geometric object. And so you have more elements for a more accurate analysis. And you can do this to your heart's content. And you can see the control points kind of converge on the object as you h-refine. You can start all over with the same object. You can p-refine. You keep the continuity the same. C1, we started with C1 continuity, but you raise now to cubics. So these are C1 cubics. The mesh is the same; the control points adjust so that the geometry is preserved exactly. You do it again, and so on and so forth.

Now, NURBS. What are NURBS? They are rational B-splines: a B-spline in the numerator, a B-spline in the denominator. And they are the most commonly used computer-aided geometric design technology in engineering. So these are all NURBS objects here. And one of the interesting things about NURBS: you see these are precise circles, not approximate circles. And they can be exactly represented through rational polynomials. And the way you do that is through a projection operator. You take polynomials in a higher dimensional space and you project them into the space of interest. So you could take, for example, a particular construction of a circle: take four quadratic curves and then do the projection that you see in the picture, and you'll get an exact circle.
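As a concrete illustration of that projective construction, here is a small sketch that evaluates the textbook nine-control-point rational quadratic circle and checks that every sampled point lies on the circle up to round-off. The particular knot vector, control points, and weights are one common convention, not necessarily the exact construction shown on the slide.

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion (same routine as the earlier sketch)."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (x - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, x)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# Rational quadratic circle: four 90-degree arcs, nine control points,
# weights alternating between 1 and sqrt(2)/2.
knots = np.array([0, 0, 0, .25, .25, .5, .5, .75, .75, 1, 1, 1], dtype=float)
P = np.array([[1, 0], [1, 1], [0, 1], [-1, 1], [-1, 0],
              [-1, -1], [0, -1], [1, -1], [1, 0]], dtype=float)
w = np.array([1, np.sqrt(2) / 2] * 4 + [1])

def circle_point(t):
    """NURBS curve point: weighted B-spline sum divided by the weight function,
    i.e. the projection from homogeneous to physical coordinates."""
    N = np.array([bspline_basis(i, 2, knots, t) for i in range(9)])
    return (N * w) @ P / (N @ w)

ts = np.linspace(0.0, 1.0, 64, endpoint=False)
print(max(abs(np.linalg.norm(circle_point(t)) - 1.0) for t in ts))  # ~1e-16: exact circle
```

The weights in the denominator are what make the representation rational; with all weights equal to one, the same code reduces to an ordinary polynomial B-spline curve.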
And from that, you can start to build objects like a toroidal surface by taking the product of circles. That's the control net, the control points that are like a scaffold that controls the shape of things. So here's the mesh. The mesh is exactly a toroidal surface. And you can h-refine. Again, without changing the geometry at all, the refinement occurs within the geometry. Or you can do p-refinement: keep the mesh the same, go to higher order, keep the continuity exactly the same. And you can go to three dimensions and build three-dimensional objects. So this tells you a little bit about the basic NURBS and B-spline technologies. There are unstructured versions of this, T-splines, U-splines, et cetera, that are now part of the tool set of isogeometric analysis. I won't talk about them because it gets a bit technical.

But if you compare, from an analytical point of view, NURBS-based isogeometric analysis with standard finite element analysis: the functions have compact support, they're supported on finite intervals. They satisfy a partition of unity, they sum to one. They possess affine covariance. That means that every affine map is exactly represented in physical space: when you move the control points according to an affine map, every point in the object moves with that affine map. If you combine affine covariance with the isoparametric concept, where you use one set of, say, NURBS basis functions for the geometry and, in addition to the geometry, for the kinematic field itself, like the displacements in mechanics, then all patch tests are a priori satisfied. Exactly, even though you're working with rational functions. And you get all the standard error estimates in Sobolev norms. And that surprises some people. Of course, some people think that the success of the finite element method is because of its polynomial basis, but you get exactly the same error estimates in terms of the NURBS basis, the rational function basis. And estimates like that take this form. I won't go into this. I know Professor Kei knows exactly what this is, so at least somebody does. But these are estimates of the approximation power in Sobolev norms. And in the form of these types of estimates, you have something that you might call the proverbial constant. It's not a constant at all. It depends on many things, but in particular, it depends on the smoothness of the functions. And we'll see later that there's a dramatic effect there. Also, when you refine in an h sense, as I showed you before, the geometric map doesn't change. It stays the same. So this is fixed, unlike in classical finite element analysis. But the powers of h are exactly the same as for the same orders of finite elements.

So here is what you would see if you did a very simple test case, like an elastic plate, an infinite plate with a circular hole; you pull it in the x direction at infinity. There's an exact solution for this problem. And you do h-refinements. You see a sequence of meshes like this. This is an exact circle. So these are really NURBS. These are not polynomials now, these are NURBS. But what you see is that the geometry is not only exact in the initial mesh; the geometry map is fixed forever, it never changes. And the results: even on two elements you don't get a bad result here, but you converge to the classical stress concentration factor of three. We're pulling it with 10, we get 30 there.
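For the written record, the Sobolev-norm approximation estimates referred to a moment ago have, schematically, the following generic form; this is the standard statement for sufficiently smooth solutions, written out here as an editorial reconstruction rather than the precise expression on the slide:

$$
\| u - \Pi_h u \|_{H^m(\Omega)} \;\le\; C\, h^{\,p+1-m}\, \| u \|_{H^{p+1}(\Omega)}, \qquad 0 \le m \le p+1,
$$

where $\Pi_h$ is a suitable projection onto the NURBS space, $h$ is the mesh size, $p$ is the degree, and the "constant" $C$ depends on the fixed geometric map, the weights, and, importantly, the smoothness of the basis, but not on $h$. This is the constant referred to above as the proverbial constant.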
And if you look at the refinement in a norm like the L2 norm of stress, equivalent to the H1 norm, you see convergence rates like this. For quadratic bases, you get quadratic convergence; cubic, cubic; quartic, and so on. And h-refinement means that h is getting smaller and the number of degrees of freedom is increasing.

So let me say a little bit about k-refinement. What is it? Well, to understand k-refinement, you have to look at p-refinement. P-refinement is order elevation keeping the continuity of the polynomials the same. So you start in, say, classical finite element analysis. Let's say we start with C0 quadratics. We go to C0 cubics, C0 quartics, C0 quintics. And you can see you get a lot of functions, but the continuity is just continuous, not smooth. In k-refinement, you have the same mesh, the three-element mesh. You order elevate: cubic, quartic, quintic, but each time you get smoother. You go, in this case here, from C1 to C2 to C3 to C4. And notice another thing: many fewer basis functions. Same mesh, same h, same order of approximation, many fewer functions. This type of refinement scheme really occupies the space between classical low order finite element methods, which are C0 continuous, and spectral methods. If you go forever with this process, you essentially generate spectral approximations.

Now, when you do these refinements, as I showed you, there were more degrees of freedom, there were more basis functions when you did p-refinement. And really, you begin to understand why p-refinement and higher order finite element techniques are not used. A first glimpse of that comes from just counting degrees of freedom. If you start with n basis functions of order p, then after r refinement levels the number of basis functions, each now of order p plus r, is, roughly speaking, of order n times r for p-refinement and of order n plus r for k-refinement. When you cube that, if you go to a 3D block that has, say, 100 degrees of freedom in each coordinate direction, so you start off with, say, a million equations, and you go through 10 refinement levels, you wind up with about 1.3 billion equations. You're getting a glimpse of why p-refinement is kind of dead in the water. If you do the same thing with k-refinement, same mesh, same order of approximation: you started with a million, you still have on the order of a million. Basically, you're just adding some functions on the boundary.

It's a magic technology. What you're doing is you're working on the same mesh, you never change the mesh, the number of degrees of freedom is not constant but almost constant, and you're jumping from curve to curve just by order elevation. You're picking up orders of magnitude in accuracy. It's quite an amazing technology, and actually, if you do a complete solve, the whole shooting match, say, for a problem in 3D, and you go from linears up to 10th order with k-refinement, you're maybe doubling to tripling the cost of analysis and picking up maybe 13 orders of magnitude in accuracy. So just doing a few of these, you get enormous improvements for extremely small amounts of additional computational effort. So it's really a very powerful technology, and it has been perhaps the technology that has supported most of the claims that you'll see in isogeometric analysis, where people seem to be able to get incredibly accurate results compared with traditional finite element analysis.
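To make that bookkeeping concrete, here is a short sketch of the degree-of-freedom counts for the 3D block example, under assumptions I am adding for illustration: a structured patch, roughly 100 univariate functions per direction to start, linear basis to begin with, and ten successive order elevations.

```python
# Degree-of-freedom bookkeeping for the 3D block example discussed above.
# Assumptions (illustrative, not from the slide): a structured patch with
# n_el elements per direction, starting from linears (p = 1), r elevations.
n_el = 99                      # ~100 univariate basis functions to start with
p0, r = 1, 10

# C0 p-refinement: the univariate count grows like n_el * p + 1.
dof_p = (n_el * (p0 + r) + 1) ** 3
# k-refinement (maximal smoothness): the univariate count grows like n_el + p.
dof_k = (n_el + p0 + r) ** 3

print(f"p-refinement: {dof_p:,} equations")   # about 1.3 billion
print(f"k-refinement: {dof_k:,} equations")   # about 1.3 million
```

With these assumed numbers the arithmetic reproduces the figures quoted in the talk: roughly 1.3 billion equations for p-refinement versus still on the order of a million for k-refinement.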
Now there are other properties of NURBS, for example, that are to be contrasted with classical finite element analysis. Classical finite element analysis tends to be based on Lagrange polynomials, interpolatory polynomials. So if you have a data set and you try to approximate it by interpolation, you're going to get oscillations, and there's no way of really getting rid of that. If you have more points, make the data richer, the oscillation amplitudes don't change; it's just the wavelength that changes. But with NURBS you get a variation diminishing property: you don't interpolate, but you just get smoother as you go to higher order. So quite a contrast. If you have more points, you can make that as steep as you like if you want to model a discontinuity.

This is just a standard benchmark problem for automobile crashworthiness. You take a box beam and basically drive it into a wall. You perturb it so that it buckles in an accordion mode. It's fully nonlinear analysis here, elastic-plastic. This was in the early days of implementing isogeometric analysis in the commercial code LS-DYNA. In those days LS-DYNA was strictly a low order code. Those were the only elements in finite element analysis that were robust enough to handle these crash simulations, et cetera. And this was the first time higher order elements were ever used to solve problems like this. So these are C3 quartics. One basis function has support over about five knot spans here. So they're smooth basis functions. They do a beautiful job using the standard contact algorithms in LS-DYNA. The interesting thing you find here is that as you go to higher order, you get more robustness. And the reasons why can be seen with spectral analysis.

Spectral analysis is sort of complementary to functional analysis, which is the standard mathematical tool in finite element analysis. But spectral analysis can reveal many things that functional analysis doesn't. And you'll see that in this example of just doing a very simple eigenvalue problem. The reason you're interested in an eigenvalue problem is that it really tells you about the approximability of the space you're working with, on the particular mesh you're working with. The eigenvalues and the eigenvectors can be used to express the error in an elliptic problem, a parabolic problem, a hyperbolic problem. So this is an interesting way of looking at the approximability of the mesh. When you do functional analysis, you get results that are asymptotic. They're telling you what happens as you refine the mesh. They're not telling you what happens on the mesh. They're not giving you that global picture. If you look at the standard finite element eigenproblem error estimates, well, here they are. I won't go into all the parameters, but they're asymptotic. In some sense the mesh has to be refined compared with the particular mode to get any accuracy. So they're telling you what's happening in the well-resolved modes, not what's happening in the rest of the spectrum that you carry in an analysis. There is a result that I call the Pythagorean eigenvalue error theorem that relates all of the relevant quantities: the error in the eigenvalue, the error in the eigenfunction, and the energy error in the eigenfunction. And you have this relationship. It holds for every mode in the system.
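For the written record, the relationship can be stated as follows; this is a reconstruction under the usual convention that the exact eigenfunction $u$ and the discrete eigenfunction $u_h$ of a given mode are both normalized to unit $L^2$ norm:

$$
\frac{\lambda_h - \lambda}{\lambda} \;+\; \| u_h - u \|_{L^2(\Omega)}^2 \;=\; \frac{\| u_h - u \|_{E}^2}{\lambda},
$$

where $\lambda, u$ are an exact eigenpair, $\lambda_h, u_h$ the corresponding discrete pair, and $\| \cdot \|_E$ is the energy norm. In words: for each mode, the relative eigenvalue error plus the squared $L^2$ eigenfunction error equals the squared energy-norm eigenfunction error scaled by the eigenvalue. That is the "budget" referred to next.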
So you can use that to create a budget for approximation. Now if you look at C1-continuous quadratic B-splines and you color code the budget here, you see the three curves: this is the eigenvalue error, the L2 eigenfunction error, and then the energy error, the sum of those two. That's about what you'd expect. In the low modes, and this is normalized by the total number of modes, so if you had a thousand modes, for example, the 800th mode would be at this point here; it's an invariant spectrum. So you get really good accuracy in the low modes, and then it starts to deteriorate in the high modes. No surprise.

What about finite elements? Quadratic finite elements, same order, same number of degrees of freedom: you get this. This picture for the eigenvalues goes back to my finite element book of thirty-something years ago. But in fact, the eigenfunctions exhibit a pathology that is really strange. You get these spikes around certain places in the spectrum. That means if you arouse that mode in an analysis, you're going to have an enormous error. Contrast that with the right: you don't see that at all. You basically see almost the best approximation on the right in the eigenfunctions. This was a real surprise when this came out. If you go to higher order, higher smoothness, you get what you would expect: on the right, you get more accurate results. On the left, actually, the last part of the spectrum is growing; you get more of these spikes. Do it again, go up to quartics: now we're essentially off the frame, off the page, with the errors. If you look just at the eigenvalues, these are actually the square roots of the eigenvalues, the frequencies, you can see what happens. This last part of the spectrum, called the optical branch, diverges with p. You have no approximation whatsoever. You have a lack of robustness. You're carrying garbage in your calculations and you have poor conditioning. This is linear finite elements. They don't have an optical branch, but they're not very accurate, as you can see. But as you go through the refinement process with B-splines and NURBS, essentially almost the entire spectrum converges. So you have much better global properties on the particular mesh you're working with. And if you see this, you understand why higher order finite elements are fragile in many applications. Because in crash dynamics, you will arouse the highest modes in the system, and those are the modes out there, and they're completely spurious. So that explains, another way of looking at it, why higher order finite elements have not achieved their promise. But what you see here is that as you go to higher order, you actually get more robust. You become more robust, in contrast with finite elements.

So let me show a few applications. Wind turbines. This illustrates the complete design-through-analysis paradigm of isogeometric analysis. This is a design-through-analysis parametric design program that was written in the visual programming language Grasshopper by Ming-Chen Hsu at Iowa State and his students. And what we're doing is turbine blade design, as you can see here. It's parametrically designed. You move the parameters, you can change that. And you've closed the loop between design and analysis. The analysis is in there as well. This is a real blade model. It has many, many different parts, all composites, all stacks, layups, and it's quite a complicated little object. These are results from Sandia National Laboratories using a standard finite element code, ANSYS, nothing wrong with it, using standard finite element shell technology, and the convergence of the critical buckling mode. You can see the first mesh here, with, what, I don't know, 20,000 elements, was about 35, 36% off, something like that. And this is the convergence profile. You get the right answer when you get out to about 400,000 elements.
And these elements are six-degree-of-freedom elements. Here, for isogeometric analysis with Kirchhoff-Love shells, which have no rotations, just displacements, only three degrees of freedom, basically the design mesh is almost the exact solution. And we see this type of thing typically in analyses.

Another example: a flex cable, from a company, Coreform. And you see this is large deformation analysis. You're trying to see when a flex cable starts to buckle. It's a complicated three-dimensional object, as you can see here. This is the type of finite element analysis they were doing on these things. It was prohibitively expensive, about 10 million volume elements. This is a 10,900-volume-element U-spline model, an unstructured spline model. And you can see it's sort of a composite, with copper and adhesive, et cetera. And that's, you can see it, the simulation of the flex cable. And you're interested in when it goes into its buckling mode, as you sort of see right at the end there. The interesting comparison is the number of degrees of freedom, first of all, but ultimately the times: the IGA analysis is more accurate and it's about a thousand times faster. And it's an exact geometry, no geometrical approximation.

This is a boiling example, and it is for the Navier-Stokes-Korteweg equations. These are equations that govern liquid-vapor two-phase flows, in essence a phase field model. You have Korteweg stresses in the equations, and they involve third derivatives. If you were to do this in a classical finite element way, you would have to use a mixed formulation to treat the third order derivatives, the Korteweg stresses. And you would have many more degrees of freedom, and you have all sorts of stability issues. You don't have that at all with isogeometric analysis, because you can just use a basic Galerkin formulation with smooth basis functions. This is a picture of a boiling example where you heat a liquid pool. This is the free surface in blue, and then you generate some vaporization on the bottom. The vapor plumes rise up, they pierce the surface, they form droplets, they condense on a cool upper plate, and then they drop down in the flow. That was the density; this is the temperature profile. You can see the richness of the resolution down here. And this is an animation of that. So you can see the vapor plumes rising, they pierce the free surface, they recondense on the top, form droplets, and then they drop down. And this is extremely easy and completely stable, just using these smooth basis functions of isogeometric analysis.

Now, building isogeometric models. This is really the interesting area, because this is the potential for doing things faster. You can do analysis faster in almost every case, but building the models is the crucial step. And you have to deal with trimmed files, trimmed CAD files. Well, trimming has an analysis counterpart called immersion, and there's been a lot of work in recent years that really has brought tremendous attention to this subject, because it's been shown now that you can actually get higher order accurate results, the full accuracy of the methods, even when you're cutting elements right through the middle and all over the place. So I won't go into some of the details here, but I'll just show you a couple of examples. So this is an example of trimming. You take a design. So you have this can-like design and it's a CAD model; that's a trimmed NURBS file. So all the trimming has been done. Those are the trimmed knot lines.
That's the untrimmed file. That's the part that's trimmed away. So that's in your database. What you do, though, is you essentially work with that trimmed file. You take that, and it is viewed as not fine enough for analysis. So all you have to do is knot insert in each of those patches, which is extremely easy, and get it down to a resolution that you want to use for analysis. All the refinement was done in the CAD file. You've never left the CAD file. The CAD file is your analysis file, and you glue these surfaces together with interface technology that emanates from discontinuous Galerkin methods, the so-called Nitsche method, and penalty methods. So how does that go? This is a comparison with a finite element code, Nastran, a classic code. And if you look, you can see the geometry in isogeometric analysis is smooth. Even around these trimmed areas, things are looking nice and continuous. This is with Nastran. The Nastran model is much bigger: 855,000 degrees of freedom versus 160,000. We don't know the exact solution to this problem, but it looks nice, of course, as you can see, comparatively. And the important thing here is you've done all of your analysis within the CAD file. So you've closed the loop. You can do shape optimization. You can change the shape, whatever you want to do. And you've stayed in the CAD file, and your design reflects whatever changes you've made.

Now, immersion is sort of the three-dimensional version of that. Here we have a solid wheel. What you're seeing is a surface mesh. This is a T-spline surface. So it's an unstructured spline model of the surface. It's watertight, and you'd like to do an analysis of that, but in traditional volume-based analysis, you have to generate a solid mesh. And that's a challenge. But what you can do is just drop it into a box of B-splines. These are all the elements that intersect the design. Do adaptive refinement, whatever you like, to get sufficient accuracy around the bottom here, around the rim. This is a particular method called the finite cell method. It uses a specific quadrature scheme. This is the quadrature mesh; I won't go into that. But you can solve that problem with that simple, virtually automatic procedure. And a lot of very fast techniques are starting to come on the scene, and sometimes they're not telling you how they do problems so quickly, modeling, design through analysis. They're all trimming techniques, immersion techniques, just like this. You can get good accuracy with them. So this is really powerful for developing models.

So let me show you some examples of heart valves. There's a lot of technology here. There's fluid mechanics technology on an ALE (arbitrary Lagrangian-Eulerian) mesh. The artery is a patient-specific segment of an aorta. The fluid domain is moving with the wall. The wall is modeled as a hyperelastic solid, a nonlinear solid. And here we have a valve in there, an aortic valve. And the valve is actually moving through the moving background mesh. So it is immersed in the background mesh. It is moving dynamically through it and making contact through the fluid. So there are many, many technologies here. So what's the design problem here? Well, valve replacement surgery. This is a very, very common procedure, but one that is rather traumatic. The best bioprosthetic valves are made from porcine or bovine tissues. And they typically have a lifespan of maybe 10, 15 years at most. Very often you'll have an individual, I have a colleague who has now had multiple valve replacement surgeries, and they are quite traumatic, as I said.
So what you can do is design better ones that have a greater life. And the way you do that is by doing analysis and testing different designs. So this was a jig that actually conformed to experiments that were done at the University of Texas. And you can see valves can be dropped into this fully NURBS model. And here are different valves. Some of them are NURBS, some of them are T-splines. And you just drop them in, take them out, and you can do optimization and find the best designs. So again, you have all of these technologies taking place, but they work together very nicely. So that's the last application I will show you. I think it's a nice one of immersion coupled with moving meshes and many different technologies in the cardiovascular modeling area.

So let me conclude. Isogeometric analysis has become one of the most active areas of finite element analysis. It is a generalization of finite element analysis, and also even of computer-aided geometric design research. The original goal, still an overarching goal, is to improve engineering product design, focusing on the design-through-analysis process. Although I might say a lot is now starting to happen in manufacturing here, especially with additive manufacturing; these tools of immersion work very well for that. If you want to introduce new technologies and get people to use them, you had better do one, two, or ideally all three of: better, faster, and cheaper. But I think here, with this technology compared with classical finite element analysis, we can improve the quality of analysis, we can do faster model development, faster solution, and decrease cost. And lately, it's taken a while, but this is now starting to gain quite a bit of traction in industry. Several of the modeling companies that develop finite element meshes and analysis models now have isogeometric options. So they can create an input file from design to go right into a code like LS-DYNA, for example.

Now, we were talking in the panel discussion before about new ideas, and I mentioned that I've been in the finite element business now for over 50 years. And when you introduce new things, you're going to be met with resistance. People don't like change, because you're pulling a rug out from under them sometimes. And my experience in the early days of finite element analysis is that the resistance was absolutely ferocious. It was really a blood battle. I like these words of Arthur C. Clarke about new ideas. I have to say, though, guess who was initially resistant to isogeometric analysis: the finite element people. Because they're the establishment now. Of course, that's me. So I'm on both sides of the coin here. But, so, Arthur C. Clarke said that new ideas pass through three periods. The first is: it can't be done. The second is: it probably can be done, but it's not worth doing. And the third phase is: I knew it was a good idea all along. Thank you very much for your attention.

[Experiencing technical difficulties.]

Thank you very much, Dr. Hughes, for the presentation. It was wonderful to hear. Do we have any questions from the audience? Who wants to go first? I look at my esteemed colleagues in the first row. Usually they're in the last row. Go ahead.

Thank you very much for that. I have two questions. One is a technical question. When you do the spectral analysis and compare the finite elements to the isogeometric analysis, I'm wondering whether the behavior of the polynomial in the finite element depends on how you choose the interpolation points.
For example, if I choose Gauss or Lobatto or Chebyshev points, whether the behavior will be better. So that's the first question. The second question is: I am wondering, do you think there is an ultimate numerical method to deal with all the questions in engineering, or most of the questions? And if your answer is isogeometric analysis, what would be the remaining thing to make it the ultimate method? So yeah, those are the two questions.

Did you use the word ultimate? So it's the final, or it's the very best of all? I'll start with the last question and work backwards. It's a tool. There are many tools. Every tool that exists probably can do one problem really well. So it's not like this changes the landscape completely. I mean, you could say, did finite elements change the landscape completely? No, it didn't. Finite differences never went away, finite volumes never went away, spectral methods never went away. It became dominant in many areas, but it wasn't the end of the story. This is probably not the end of the story either, but it's a new story that maybe expands our ability in finite element analysis and better integrates it with design.

Now, you asked me about moving the points. For those spectral analyses, you're basically doing a discrete Fourier transform to get those results. You really need uniformity. So that's a shortcoming of spectral analysis, but within the framework of spectral analysis, this is what you see. Now, it's very interesting, though, with k-refinement: one of the questions was, why can't we see that in functional analysis? We could never get a handle on this constant, but lately there are works coming out in that area. There was just a paper published in Numerische Mathematik that actually has analytical results showing that the isogeometric basis is better than the C0 finite element basis for basically all orders, and also better than discontinuous bases. And it also shows some very interesting things. There's a big jump between the last levels of continuity: going to the maximum continuity, you get a big jump compared with the previous one, which is the kind of thing we see in practice. There's another paper now in review in Numerische Mathematik on that. I'm not the author of these papers, but they're very interesting papers, and I think they will change some of the mathematical perception of approximation. And those functional analysis results are not so dependent on a uniform mesh, and they're multi-dimensional too. Now, so I guess I've answered both your questions. I think.

Any other questions? So you mentioned additive manufacturing. In that process, both in the process and in the final product, the microstructures are kind of messy. What issues are traditional finite elements going to hit in modeling that material, and how can isogeometric analysis come to help?

There's some incredible CAD work being done now on volumetric CAD. I mean, in these representations, traditional CAD doesn't say what's going on inside an object. If you have a closed set of surfaces, that's called a solid model, but there's no information about the interior. It's just the surfaces that are represented. Now, in volumetric CAD, you're actually modeling things as three-dimensional primitives, three-dimensional NURBS patches and things like that. And there's a lot of work being done in that area on microstructure, because now you're trying to deal with three-dimensional structures. There's something non-uniform within the design.
So there's work being done in this V-rep (volumetric representation) paradigm. That seems to be very, very utilizable in additive manufacturing; that's what some of the work is being motivated by. The microstructure is absolutely incredible: scaling down to very, very small scales with very intricate structures, and having that present in the geometric model is utilizing these techniques. It's actually amazing to see these things. Now, on the other hand, with additive manufacturing, if you're just talking about overall shape, you can't model that very well because it's being built. It's being layered and put together. So what you do is you adopt a more, so-called, Eulerian approach, like in fluid mechanics. You model the space in which the object is going to appear, and then you use sort of the inverse of immersion techniques. You start to build up the model and the object within the volumetric space, and then you have all these cut cells around there, and in some of them nothing is happening, but in others you're actually building an object, and you can be analyzing and simulating the process of 3D printing with these techniques. So that's what people are doing with isogeometric analysis in that area. Okay. Who else? Yes, David.

Appreciate the talk. Quick question: you've got this slide up here that almost seems like you would like to see a question on it. You talk about 50 years ago and the resistance to FEA. So you've got these three periods, I see here. Where do you see IGA in these three periods right now, based on your past experience?

2.5. Some people have become convinced, and there are people that really resisted and had all sorts of nasty things to say who are now using it happily. Okay. Anybody else?

All right, I get the sign for time. So with that, we're closing our seminar presentation. I would like to thank you one more time very much for coming to Purdue and giving the presentation. Thank you very much.