Okay, so I'm going to talk a little bit about the role of simulation in hands-on science. It's not really a research talk, though I'll touch on some of the things I've worked on. Mostly I want to make some broad comments about current trends in simulation and computation — things you're probably aware of, and then a few things that are maybe a little more subtle and, I think, interesting. So hopefully there will be something for everybody. First, let me tell you a little about where I'm from, since everybody here is from all over the world. I live in Boston, right there at that arrow. When I was looking at this map, I noticed that Boston is surprisingly at about the same latitude as Trieste — actually a little bit south, because I've noticed the sun stays out a bit longer here. The problem with Boston is that it gets a tad colder, and we get a lot more snow. This is my daughter, and this was only halfway through this past winter; that's the roof of the garage next to us. So we get a bit more snow there than in Trieste, but at least at this time of year Boston is quite lovely — maybe not quite as nice as here. I work at a place called Olin College, which is maybe a little unusual. It's a very small school — only about 350 students total — and we only offer degrees in engineering. That's my background; I'm actually an engineer. It's an undergraduate college as well, so there are no master's or PhD students; everything I do there is with undergraduates. We're out in the suburbs of Boston, maybe 10 or 15 kilometers outside the city. Another interesting thing about Olin is that we have a very strong focus on hands-on learning. All the education is very much in the style of this school, which I think is why I keep coming back — this place is exciting for me. We do a lot of projects, we do a lot of lab courses, and we get engineering students involved in design, working with companies, all kinds of things like that. So this program resonates with me and with the things I like to teach and do there. My background in engineering is in fluid dynamics. I've worked on a number of problems, but that's the theme that keeps coming back, so my comments today about simulation are probably going to be a little biased toward fluid-type simulations. Everything I say might not be quite as relevant for other types of simulation, but fluid dynamics is the bias I bring. In the first talk, on the first day, Professor Swinney talked about big science and hands-on science, and I think it's pretty clear that the same distinction shows up in simulation: big science would be supercomputing, and hands-on science would be using something like a laptop. And by big science in supercomputing, I mean really big. The current winner of the big-science award in supercomputing is this machine in China. It takes 18 megawatts just to run it, it's about a half-billion-dollar computer, and it's capable of 33 petaflops. A flop is a floating point operation — like multiplying some numbers together — so flops are operations per second, and peta is 10^15. So that's a lot of flops.
And by hands-on science, I mean using computers you can basically buy at the store or on Amazon — things that don't require a lot of money, and don't require a lot of expertise to set up. Something you can buy and just turn on: a good hands-on computer. I was worried about some software, whether I'd be able to install it here, so I brought my Linux workstation with me just in case I needed it. I bought this thing for about $100, it's fairly capable, and it's this big. That's what I mean by hands-on simulation and computing: ordinary, low-cost, easy-to-use computers. An important thing about this distinction between big science and hands-on science — which is of course more of a continuum than a sharp line — is that problems continually move from one realm to the other. What was a big-science problem one day can become a hands-on science project the next; problems are always moving in that direction, from big science to hands-on science. A great example, at least for this crowd, is the original paper by Lorenz. It's a 50-year-old paper — 1963 — the famous paper in nonlinear dynamics about sensitivity to initial conditions, the little spirals we've all become familiar with. It's a very beautiful paper, and it still holds up today: you read it now and the big ideas are really all there. It's still a lovely paper even 50 years later, and in this crowd it's the first example at the beginning of essentially every book on nonlinear dynamics. But it's important to remember — though nobody talks about this — that the problem involved supercomputing. This is the computer he used, a machine that cost about half a million in today's dollars. Lorenz was at MIT, so he had access to this kind of thing, and to the expertise of people who could program it; it was not trivial to program these machines. And his simulations took a long time. The classic story — I don't know if it's true — is that he would start a simulation and go off, then restart it days later by typing the numbers back in, and that's how he discovered that if you didn't type in the exact number, you got different results. Whether that's true or not, I actually don't know. But it did take a long time to run these calculations, and his calculations were just his model: three coupled ordinary differential equations. By today's standards, not a very intense calculation. But at the time it required supercomputing, and he wouldn't have been able to make his discoveries without access to that kind of big computation.
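Just to make that concrete, here is a minimal sketch — my own, in Python with SciPy, not anything from Lorenz — of those three coupled ODEs, using the classic parameter values from the 1963 paper. The truncated restart at the end mimics the famous story; on a modern laptop the whole thing runs in well under a second:

```python
# Lorenz's 1963 system: what once needed a Royal McBee now runs in
# milliseconds. Parameters sigma=10, rho=28, beta=8/3 are from the paper.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0],
                dense_output=True, rtol=1e-9, atol=1e-9)

# Mimic the (possibly apocryphal) restart: round the state at t=10 to three
# digits, as if retyped from a printout, and integrate the rest again.
restart = np.round(sol.sol(10.0), 3)
sol2 = solve_ivp(lorenz, (10.0, 40.0), restart, rtol=1e-9, atol=1e-9)

print(sol.sol(40.0))   # original trajectory at t=40
print(sol2.y[:, -1])   # restarted trajectory: completely different by now
```

The point isn't the code, of course; it's that the computation that anchored a whole field is now a throwaway script.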
And this suggests that the scaling laws are a little bit different for simulation than for physical big science. In the first talk, Professor Swinney talked about the James Webb telescope — how big it is, how awesome it is, how much money it's going to cost. There's no future I can envision in which that becomes a hands-on science problem. Some problems are physical things, shooting them into space; that just takes energy, things cost money, sensors are big. There's no way that will ever be a hands-on project — Eric and I can't get together and say, you know, let's buy a telescope on Amazon and shoot it up there and it'll work. But in simulation, there's no problem I can think of that anybody is working on, or even conceiving of, that wouldn't eventually be accessible with a laptop or any computer. So a different scaling law is going on than for physical experiments. The law you've probably all heard of is Moore's Law — often maybe misused. Moore's Law is the idea that the number of transistors on a circuit doubles every some number of years. It was coined by Gordon Moore, one of the co-founders of Intel — one of those billionaires who arguably did enough to be worth it, a very brilliant engineer. Early in his career he made an offhand comment in a trade publication that the number of transistors was doubling every year; he later revised that to doubling every two years. And over 40 years, that law has held up pretty well. The curve there is doubling every two years, and the points are different chips over time. You can see it going from about 2,000 transistors in the 70s — from about when I was born; just gave my age away, damn it — up to about 2 billion, 40 years later. And any measure of computational speed follows similar laws. If you plot computing performance — floating point operations per second — over 40 years, lots of different processors fall along this line. The fastest processors today are approaching about a teraflop: 10^12 operations per second. If you fit the curve, it's roughly 100 times the performance every decade. Other measures, such as calculations per kilowatt-hour — how much energy it costs to do the calculations — follow the same trend, about 100 times better every decade, meaning that with the same amount of energy I can do 100 times more calculations. That matters because if you're doing a simulation of an airplane, you don't want to use as much energy as an airplane. If we look at supercomputers rather than single processors, we find a slightly different scaling law: roughly 1,000 times per decade, because it's not just the single processors getting better — there are also advances in software and in running things in parallel that let you get even more out of the machines.
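You can sanity-check those quoted rates in a couple of lines. The starting values here are my own rough assumptions (roughly a first-generation microprocessor in the early 70s, and a ~1 teraflop CPU today), not numbers from the slides:

```python
# Sanity check of the growth rates quoted above; starting values are my
# rough assumptions, not measured data from the talk.
def grow(start, factor_per_decade, decades):
    return start * factor_per_decade ** decades

# Transistors: doubling every 2 years is 32x per decade, so ~2,000
# transistors in the early 70s becomes ~2 billion forty years later.
print(f"transistors after 40 yrs: {grow(2e3, 32, 4):.1e}")

# Flops: ~100x per decade for single CPUs, ~1000x for supercomputers.
print(f"1 Tflop CPU in 10 yrs:    {grow(1e12, 100, 1):.1e}")
print(f"33 Pflop machine in 10:   {grow(33e15, 1000, 1):.1e}")
```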
This curve is a little bit funny, because I took it from this website: the top line is the fastest computer in the world as a function of year, this one is the 500th fastest — they rank the top 500 computers in the world — and this is the sum of all of them. I don't know why they put that on there, because you'd never be able to add them all up. But what's interesting is that if you remember I said the best single-processor CPU right now is around a teraflop — that's about what the best supercomputer was around 2000, about 15 years ago. And if you factor in that even with these really, really big machines you never have access to the whole thing — they're always shared resources, you're always in the queue, nobody actually gets to use the full power for a whole year — then even for people with good access, in practice you're probably closer to this curve than that one. So it's probably more like a ten-year time lag between what a supercomputer can do and what a fairly low-cost computer you just buy on Amazon can do. Roughly. [Audience question] Yes — because they build a big computer and it holds on for a while, then someone builds a bigger one and that holds on for a while. And because the sum is more of an average, it's smoother. You can see that every three to five years someone builds a bigger computer. The current winner, the one in China, looks like it's held on for four years, but it won't stay there much longer; someone will beat it soon. There's a question about whether these kinds of trends can continue. There are some interesting things you can find online written by higher-ups at Intel. The current Intel process has a feature size of 14 nanometers — about as small as they can go right now — and depending on who you ask, what you read, and who you believe, I can't find anyone who believes in anything less than five to seven nanometers. Is that about what you would say? Yeah — so nobody at Intel really believes it will be efficient or possible to go below five to seven nanometers, and that's the feature size of the little transistors. That date is projected to be roughly five to ten years from now, maybe 2020, maybe 2025. And right now there's nothing on the horizon that looks like it's going to take over — some other technology. Eventually, of course, something must take over, if we manage to keep this planet alive that long, but it's not clear these trends will actually continue much beyond five to ten years, at least in feature size. Even if feature sizes don't change, though, I'm not pessimistic about the number of calculations per second: there are other advances, in software and in multiprocessing, that might keep computing power growing even if the individual CPUs have to slow down or stall for a little while. An interesting exercise is to take your field and extrapolate these trends to see what would be possible in the future. I did this for fluid dynamics, because it's the field I know. In fluid dynamics we characterize everything by the Reynolds number, a magic dimensionless number: the ratio of inertia to viscosity. When the Reynolds number is big, the flow is complicated — lots of spatial modes, lots of time modes — so you need lots and lots of points in both space and time to do a simulation, if you want to solve the actual fundamental conservation equations for mass, momentum, and energy without making approximations. So the Reynolds number captures everything about how big the simulation needs to be, and it turns out the cost scales as the Reynolds number cubed: if I want a factor of 10 increase in Reynolds number, I need a factor of 1,000 increase in computing power.
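Where does that cube come from? The talk doesn't derive it, but the standard argument — Kolmogorov scaling for direct numerical simulation of turbulence — goes like this: the smallest eddies (the Kolmogorov scale η) shrink relative to the domain size L as Re^(3/4), so resolving them in three dimensions costs Re^(9/4) grid points, and the time step must shrink along with them, contributing roughly another factor of Re^(3/4):

```latex
\frac{L}{\eta} \sim Re^{3/4}
\;\Rightarrow\;
N_{\mathrm{grid}} \sim \left(Re^{3/4}\right)^{3} = Re^{9/4},
\qquad
N_{\mathrm{steps}} \sim Re^{3/4},
\qquad
\mathrm{work} \sim N_{\mathrm{grid}}\, N_{\mathrm{steps}} \sim Re^{3}
```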
So then you say: okay, my laptop right now, if I just pulled it out and used some commercial software or whatever, could do a simulation at a Reynolds number of about 1,000 with no problem, give or take. I'll call that 10^3 at time zero. If I assume a 1,000x increase in computing power every 10 years, then since the cost goes as the cube, every 10 years I can go an order of magnitude higher in Reynolds number. If it's 100x performance per decade, the slope is lower. Then if I ask what the Reynolds number is for some random things we might want to calculate, it gives me an estimate of how many years in the future I could just get out my laptop, put the thing in, click a button, and get the result — with basically no problem. For an animal swimming — I don't know, I picked a dolphin — I estimated the Reynolds number at about here, so we're still a fair ways out: maybe 40 or 50 years, depending on what computing power does. For an industry problem like an aircraft — and in terms of Reynolds number, not much is bigger than an aircraft in flight; that's about as big as you'd ever want to go — maybe we're 50 to 175 years out. For the atmosphere — a full simulation of the atmosphere — we're up at a Reynolds number of about 10^15, so we've got some years to go. This is either optimistic or pessimistic, I guess, depending on your point of view. A hundred years is a long time; I most likely won't be around to see it. But it's pretty amazing that at some point we could just do a calculation of the biggest things we can think of with no problem. [Audience question about energy] Yeah, so it depends on what that does — that would be another interesting measure to look at: with these trends, when we get to that point, would the simulation take the same energy as building the wind tunnel and testing it? So one of your homework assignments for today is to think about what this means for your field. It's an interesting exercise to take the problems you're interested in and ask what would be possible 5, 10, 15, 20, 50 years from now — what's going to be a routine calculation, the kind of thing we'll look back on the way we look at Lorenz's setup as quaint: the Royal McBee, oh, 4,096 words of memory, right? What's it going to look like in 50 years for the kinds of problems you're interested in?
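That whole back-of-the-envelope projection fits in a few lines. The target Reynolds numbers below are my own rough guesses for illustration, not values from the talk:

```python
# Years until a laptop can brute-force a given Reynolds number, assuming
# cost ~ Re^3, Re ~ 1e3 feasible today, and computing power growing by a
# factor of 100-1000 per decade. Target Re values are my rough guesses.
import math

def years_until(re_target, re_now=1e3, factor_per_decade=1000):
    decades = 3 * math.log10(re_target / re_now) / math.log10(factor_per_decade)
    return 10 * decades

for name, re_val in [("dolphin", 1e7), ("aircraft", 1e9), ("atmosphere", 1e15)]:
    fast = years_until(re_val, factor_per_decade=1000)
    slow = years_until(re_val, factor_per_decade=100)
    print(f"{name:10s} Re={re_val:.0e}: ~{fast:.0f}-{slow:.0f} years out")
```

With those guesses you get roughly 40 to 60 years for the dolphin and over a century for the atmosphere — the same ballpark as the slide.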
Now, for hands-on science, there's something that excites me even more than the increases in computing power, which is the improvement in software — being able to do complex simulations without a lot of background, formal training, or getting into the details behind the numerical codes. I'm going to play a video in the background while I talk: it's a real-time recording, and I'm making up a problem as I go, solving a problem in fluid dynamics using a commercial finite element — what they call multiphysics — package called ComSol. These kinds of software have been in industry for a long time; people have been trying to do finite elements for solid mechanics, structures, and fluid mechanics since the 60s. And it used to be that every market, industry, or field had its own set of packages: if you were in the auto industry, there were things you used; if you were doing structures in the auto industry, there were packages for that. Everything was very fragmented. Over the past several years — I'd say really the past five — things have gotten dramatically better, and the packages have really started to consolidate. Now you don't need a specialty package to do fluid mechanics, solid mechanics, heat transfer, thermodynamics, materials science, or electromagnetics; they're becoming integrated, and they're becoming good enough that you don't actually have to be an expert in simulation methods to use them. You can just draw the thing you want. You still have to have a good physical understanding, as a scientist or engineer, to set the problem up and to interpret the results — but you don't have to be an expert in numerical analysis to use these tools. I use them in my undergraduate fluid mechanics course all the time. It takes me half a class to teach the students how to use them, and then we can slide things around and check qualitative fluid behavior on the fly. So here I think I was setting up a problem — I'm talking faster than the movie's going — just a flow problem; I'll skip it ahead. I just click on some stuff, I make a mesh — is it still running? Yes, still running — and there I click the button to calculate. I just put some cylinders in a flow; I'm making something up. But you see the whole process, from setting up the problem to getting the result, took maybe two or three minutes at most. When I was in graduate school, I couldn't have done this; it would have taken me months and months to develop or figure out this code from scratch. And now I can just set it up and do it. This, I think, is going to be even more revolutionary for hands-on science: our ability to simulate complex things and complex physics in different fields without having to be experts in numerical analysis and computer science. It reminds me of a little essay I found some years back that I really like, probably about 10 years old now, written by Lloyd Trefethen, a very well-known numerical analyst at Oxford. The essay was about what he called ten-digit algorithms, and the mantra was: ten digits, five seconds, and just one page. The point of the ten digits is that nothing in physics is known to better than ten digits, so if you have an algorithm that computes things accurate to ten digits, you've nailed it — there's no need to go any further. The five seconds I think is really interesting, so I'll just read what he said: "In computing with humans, response time is everything. If a program runs in less than five seconds, its author will effortlessly adjust, improve, and experiment. The process of exploration becomes interactive and pleasurable."
"If a program runs for a minute or an hour, on the other hand, it's a very different situation from the human point of view." And the one-page argument is that one page, or one screen, is something you can print out, share with somebody, talk about, and look at; anything longer is too complicated. It's all about, as he says, linking to people. Now, these multiphysics finite element packages may not look like programming languages in the traditional sense, but they still produce a summary that encapsulates the problem you've set up and all its parameters; you can print it out or put it on the screen, and yes, it's about one page. And this idea of being able to interact with simulation and computation in this human way — iterating and experimenting on the fly — I think is going to be really important for the future, and it's kind of already here. So I'm going to do a quick survey. I'll give you a minute to think about it, then discuss with your neighbor and converge on an answer — you and your neighbor can end up with different answers. I just want to know your estimate, based on what you know, of the time lag between a routine computation and one that requires an expert. Another way to say it: think of something in your field that somebody is working on right now that's really, really hard — it requires an expert to program it, someone who knows a lot about numerical analysis and computing. The results might be about more than just the methods, but it really requires an expert to implement it; it's a publishable thing. How long would it take for that to become something just about anybody could set up, and in a day's work basically regenerate those results? I know it's less than 50 years, because Lorenz's paper, which I'd categorize as one that required an expert, is a problem I can set up and regenerate in about 15 minutes. That doesn't mean I'm as smart as Lorenz; it doesn't mean I have the insight he had; it in no way diminishes the work he was actually interested in. It just means the computational part of it is nothing now. So take a few minutes, and we'll do it in increments of five years — there's no point going narrower than that. As a time lag: five years, 10, 15, 20, 25, 30 — it's less than 50, I know that. Has to be. Take one minute, and then we'll take a quick poll and see what people think. There's no right answer, and it might vary too; I'm just curious what the mean in the room is. All right, since this is an informal poll, let's go ahead and do it. Unlike Professor Schatz, we're not going to be secret; we'll just make our votes known, in increments of five years. This is totally informal and I don't even claim to know — there isn't really an answer, just a ballpark with margins of error. Did anyone think it's five years? So we have a couple of votes for five years. How about 10? A fair number of votes for 10 years. How about 15?
We have a few votes for 15. 20? One for 20? Anyone greater than 20? A couple of votes for greater than 20. [Audience clarification] No — just to be able to develop the code to do it; so if you had the idea, you could implement it and be done with the implementation in no time. That aspect. So it looks like the mean was about 10 years: a few at five, a tail going off at 15 and 20, but most people said 10. Ten was actually my vote as well, and I based it on one data point — myself. I sat there looking at things I've worked on, and I came to something from about 10 years ago: a coupled microfluidic problem involving flow and electrodynamics and ion transport and all that kind of stuff. It was a bit non-standard at the time, so it required writing our own simulations more or less from scratch. It required getting a bunch of people together — experimentalists and theorists; there were several authors on the paper — and I probably worked a couple of months, not full time, but in the back of faculty meetings when I wasn't paying attention, to get the code to actually work. Not because it was anything that crazy, but because it was non-standard, so it took time. So I pulled out the software, and I was able to regenerate all the results. It took me about 30 minutes — a roughly 10-year-old paper, about 30 minutes now. And I asked myself: what would be different if this work were being done now rather than 10 years ago? I think the work still holds up, because it wasn't really about the methods or the simulation; the simulation was just part of the story, to explain what the mechanisms were. It was required to make the paper good, but the paper was about the physics, so none of that would change. But the team would be smaller. There would be no need for that collaboration: the graduate student who did the experiments and made the first observation in the lab would probably just have had his advisor say, why don't you go simulate that thing, and he would have done it — maybe it would have taken him more than 30 minutes, but it wouldn't have taken months — and they probably would have done it, just him and the advisor. It would have been a much more self-contained project, with the individuals who did the experiments also doing the simulations, everything together. The other interesting thing, which I haven't had a chance to play around with, is that now I can basically get the result instantly. This goes back to the ten-digit algorithm: even though these weren't super-intense calculations that took months — I did them on my laptop; they took hours, or ran overnight — now it just runs, and I can sit there and play with it: oh, what happens if we change this? It's a much different experience, because you can explore things in simulation in a way we weren't really able to before. Back in 2004 we really used simulation as a bridge from the mechanism to the experiments, to prove that we understood what was going on.
Whereas now we could think about using the simulations more interactively with the experiments — maybe designing new experiments, or coming up with new ideas in simulation that we then believe we can test in the lab. So I think it opens up opportunities to think a little differently about how we've traditionally used simulation. So let's make a few random predictions — none of them that grand. Do you know who this is? Ray Kurzweil — ever heard of him? He's a genuinely accomplished computer scientist who is maybe a little bit nuts. He's a director of engineering at Google, so I can't make fun of him too much — he's obviously a really bright guy. He wrote a book a few years back about the singularity: how AI and biotechnology and people are going to converge and all sorts of crazy things are going to happen. You can look it up on Wikipedia. Some of the predictions are interesting and some are pretty out there — but maybe he'll prove to be right. I'm not going to be as grand as him. And if you want to debate any of these, or add something, we can discuss; we're doing pretty well on time. My prediction is that Moore's Law for transistors is going to hit a wall in five to ten years, and either we'll be done with it or there will be a pause until something new happens. But I'm still optimistic that computing power and performance per watt are going to continue to get better and better. Even if we can't do anything immediately about the processors, we're going to keep growing in computing power. Performance per watt is an interesting one, because the brain sets a pretty high bar for us: our brains use about 25 watts. So nature has proven that it is possible to do a lot of computation without a lot of energy — though we've got a long way to go to figure out how the brain works. So I think computing power is going to continue to grow, and assuming 100x performance per decade is not an insane thing to do. As I said, these kinds of complex simulations are going to continue to become routine — not only quick to do, but not requiring an expert to do them — and that trend is just going to keep continuing. But as I showed with the scaling in fluid dynamics, this kind of big-science work is going to remain for at least my lifetime. We're not at the end yet; there are still lots of big, important problems that people are going to work on, so it's not going away as a field. But I think it is going to become much more specialized and much more concentrated in industry and in big groups. It's going to be much harder for individuals to make contributions, because industry simply has too much money and too many resources; the work will be concentrated among specialists, and the average person is going to know less and less about these big simulations. I think the market is going in that direction. CAE stands for computer-aided engineering — that's the trade term for the big packages that run these multiphysics finite element simulations. The industry needs have gotten very strong, and I think they're going to take over this market.
These tools that were fragmented among academia and different industries are all coming together — they already are — and it's going to get even better. And the industry push is that people really want to integrate analysis and design. They don't want one group designing something and then shipping it over to some other house that runs a bunch of codes and comes back iteratively. They want the designers right in the loop, instantly: draw the parts, click buttons, simulate. But they want correct results — industry does not want bad results; you can't spend a lot of money designing an airplane and have it not work. So the demand is for simulations that are accurate and give good results, but that can be run by people who are not specialists — specialists in engineering and science, yes, but not in numerical methods. I think this industry trend is going to keep pushing, and at least for things like fluid mechanics, solid mechanics, materials science, and electromagnetics, industry is going to have all the new innovations. There are too many hard problems left for individuals to have an easy time making headway; it's a very mature field now. For hands-on science, I'm very excited. I think these new tools are going to change the way we do, and think about using, simulation with experiments. I think we're evolving away from the tradition of always using simulation as a bridge between theory and experiment. I'm not quite sure how this is going to play out, but I think there's a lot of opportunity there, and the best people and the best papers in the hands-on field will integrate experiments and simulation. When I was a graduate student, we used to put people in bins in fluid mechanics. We'd say: what do you do? Oh, I'm an experimental fluid mechanics person. What do you do? I'm a numerical fluid mechanics person, or a theoretical fluid mechanics person. I don't think those bins exist anymore — maybe I still have some bias because I'm older, but the younger people do all of it together. I think Eva Conso and I went to graduate school at the same place around the same era, and we would probably both admit a bias toward simulation and theory, but now we both have experimental labs; and I know lots of people who have evolved that way, because it's not enough to just do the simulations. On the other hand, I know people who were very good experimentalists, doing very complicated things I could never replicate — one of whom, I now know, runs a company doing simulation. So at least in some of these fields, it's not possible to just say, I'm going to do experiment, or I'm going to do theory: you've got to do all of it together. I think it's a great opportunity and a great way to work, but it's going to be harder to brand yourself as doing just one thing. And I think the corollary is that as more people use simulation, fewer and fewer of us will actually understand what's inside these codes. I don't know if that's a bad thing. It's certainly true of computers: we've all got one of these, and how many of us could explain what the heck is inside it?
I don't know — you push the button and you watch YouTube videos; that's how I use it, right? At some point, when the tools get good enough, you stop worrying and you trust what's inside them. I think this is going to happen with a lot of these codes: we'll maybe know, in an educational sense, roughly what's being done, but there will be very few people who actually understand the details, and the people who do will be concentrated in industry — just as with computer chips. There aren't individuals who could really design an Intel chip; it takes a big army and a lot of experience. Artificial intelligence: one cool thing that's going to happen — maybe this won't come true, but I think it will — is that all of this will eventually move to the cloud, because everything moves to the cloud. Someone is going to figure out how to host all of it so you don't even need to install software; you just run it through the web. It's already happening in computer-aided drafting: there are already companies, actually quite big and making money, with new business models for drawing packages and manufacturing. Eventually someone is going to add the ability to simulate, and there will be a button on a website: what's the physics? You click, and it just does it for you. And I think the way it will do it is that, in the cloud, the solvers will learn by experience. This is not a crazy idea — this is how video games work now. I don't know if you know this, but when you go online and play a great, very intellectual game like Call of Duty, where you sit there and shoot at people, you fight against all these computer-generated soldiers, and those soldiers learn in the cloud: they learn from watching you play in the US, and from people playing in China and India, and from all these people around the world they learn how these little computer-generated actors should behave. This is what Google thinks is going to happen in robotics — it's why Google is buying all these robotics companies: the learning and the software will live in the cloud. Now, in these solvers, the hard part is all the hidden parameters that experts currently have to figure out. And I think the future is that when I solve a problem and figure out how to get it to work, and somebody else in Japan later sets up the same problem, the computer will just recognize it and put the pieces together — since I already figured it out. That might be my most crazy prediction. Some fields, I think, will remain exceptions — biology is one. We don't even know what the equations are, so there's a lot of fun left; there are fields where the challenges are so fundamental that everything I just said is probably irrelevant. And that's it. So I think there are a lot of opportunities in hands-on science. I'm really excited about the more interactive nature of these complex simulations; I think there are opportunities to use simulation more integrated with experiments, in new ways, and it's an interesting thing to think about. And I think this idea of not requiring specialization and training in numerical analysis to do good simulations opens up a lot of opportunities.
It opens up multidisciplinary opportunities, because it lets me work in fields where I maybe don't have all the background that would have been required some years ago, and so it lets me collaborate with other people. I think 10 years is the magic number — my magic number — both for computing power and for training and skill: anything being done now will be trivial in 10 years. And as computing becomes easier and easier, just as with everything in hands-on science, what matters is picking the problems. It's not the methods; it's what you're computing, not how. That's what's going to become important. So anyway — I did pretty well on time, I didn't go over — I'd be happy to take any questions. Anyone want to discuss? We've got a few minutes, and then we'll end early and have time to hike up the hill. Yeah. [Question about the cost of these packages] I haven't looked at what the price in today's dollars is as a function of time. They are getting less expensive, but they do cost money, so there is a calculation you have to do. As a scientist, I would say your most valuable resource is your time — what are you going to spend it on, which problems, which things — so you have to decide whether it's worth purchasing. Their pricing model is mostly to get money from industry rather than academia, so for academics it's often more affordable. And in all of these things, there's nothing you pay for that doesn't also exist in the world of open source; it's the interfaces and the ability to do it without the training that you're paying for. These things are complicated, and Intel isn't going to give you a processor because they like you, and the company isn't going to give you software because they like you — they're there to make money, so they'll price it at what they think they can get. [Question] With this software it's easy to always get an answer, but often it's easy to get a wrong answer — if you just misuse it slightly and you're not savvy. Do you think that's less true now? It's less true than it used to be, but it will always be true, as with anything. It is always possible to get results that are incorrect, and this is where you have to have some background in physics, some background in engineering, some physical intuition for the problem, and an experiment to validate against — but you don't need to be an expert in the details of finite elements, knowing that you need to set this parameter to that and use this solver; you just don't worry about that stuff. And this is true even with simple things: a simple differential equation is no problem to solve, but you still have to validate that the equation is right and matches the experiment. That aspect of validation will always be there. [Comment: even in areas like fluid mechanics — for example, objects coalescing on a surface — for us it was about what we could do that was not in the packages we were looking at, for the problems we were working on.] Right — it exists now, but it didn't exist then. This is changing very rapidly.
So some of this is me thinking about where things are now and what they'll be like in five years. I think continuum physics — fluid mechanics, solid mechanics, heat transfer, all of that — is well on its way to being standard, and the packages will just work. When I first started trying some of these packages 10 years ago, it was the same problem: nothing worked. And every year I try them, I can do more and more complicated things with very, very little effort. But it's definitely true that there are many systems where we don't know what the equations are, we don't know what the physics is, and there's lots and lots of work left to understand that. Yeah — that's what I think is going to happen. But again, it all comes back to the forces driving the change. In classical physics — fluids, electromagnetism, heat transfer, materials, solid mechanics — there's a very, very strong industry need, and industry is very good, once things get mature, at pushing them forward if there's a market. For quantum chemistry, I don't know quite what the market is. It's a very different situation when there's a strong industry need versus a purely academic and research need; the tools will develop very differently, on very different time scales. [Audience] So for example, take somebody like, let's say, G.I. Taylor — it was a wonderful experiment, and he had one person who helped him build it. Is it true that a graduate today would do the same experiment in roughly two weeks? That's a great question, and I think it really depends on the problem. A lot of these scaling laws in computing are the same as in consumer electronics — this is what Professor Swinney talked about a lot in his first talk — so things involving imaging and cameras have transformed. I've heard Professor Shattuck say that back in the day, when he first started studying granular materials, they would take pictures, paste them on the board, and do the particle tracking by hand. Is that right? — Absolutely. — So those types of experiments, yes, can be done faster. How many particles can you track now? A million, or a hundred thousand? — As many as I want. — Right, as many as you want. And I'm sure in your world of optics and electronics there are lots of things possible now that just weren't before. But something like Taylor's experiment, where you build a thing and spin it — that stuff still costs money. Materials cost money; good materials take time to machine. His brilliance was in coming up with the idea, but I'm not sure that experiment could be done much differently now than a hundred years ago. And again, really big things like telescopes that go into space are never going to be free; I'm just talking about tabletop science. It depends a lot on the field and the tools. In biology, it used to be hard to read a single letter, and now you can get the whole genome for almost nothing — it's getting much easier. So some of those scaling laws apply to parts of biology too, but not all of it; there are still things that take time. It's very problem-dependent: it comes down to whether the tools you need are the ones following these doubling-type trends.
[Several hands go up — comments more than questions] Yes, I think that's what I'm trying to say. [Audience comment] And more generally, all of these kinds of controllers — there's a whole family of more and more powerful ones, like the Raspberry Pi. These are exactly the kinds of things we're introducing in our hands-on school. — And 3D printing. — 3D printing, yes. Last year somebody raised the hands-on question: it's a tabletop experiment, you could do it. — You can't 3D print glass, though; that's what I'd need. I had 3D printing in mind when I asked that question. — Yeah. And microfluidics is another big driver; there are so many groups investing in so many ways of making microfluidics. The interesting thing with microfluidics is that I always hear the electronics analogy. The problem in mechanical systems, though, is that there's no op-amp. In electronics, the op-amp is the device we use to buffer — to impedance-match — so that what you put downstream doesn't affect what's upstream. In mechanical systems there are no op-amps: sometimes you just can't concatenate these functions and chain them together one after another. So there are things in the mechanical and physical world that I think will always be challenging. If you look at a field like robotics, the mechanical side has gotten quite modular and standard; the hard problems left are software, and actuators, because actuators still take a lot of energy. Someone else had their hand up — but we should end and get to our session. So why don't we end there.